Simulation Testing allows you to test your AI agents using fully synthetic, AI-generated phone conversations. Instead of relying solely on real inbound calls, you can run hundreds of controlled simulations to see how your agent behaves under different scenarios, before any customer ever interacts with it. This guide explains what Simulation Testing is, how to create and run simulation test cases, and how to review results to improve your agent’s performance.

What Is Simulation Testing?

Simulation Testing uses AI to generate realistic phone conversations between a virtual “customer” and your Phonely agent. These simulations behave like real calls, including intent changes, interruptions, topic variations, and unexpected questions. Simulation Testing helps you:
  • Validate workflows before going live.
  • Detect misconfigurations, routing errors, and dead-ends.
  • Test knowledge base accuracy.
  • Stress-test prompts and guardrails.
  • Preview how your agent handles different tones and behaviors.
  • Ensure your agent behaves consistently after updates.
Simulation tests do not use real customers and do not affect live call routing.

Where to Access Simulation Testing

To open the simulation workspace:
  1. Go to your Phonely Dashboard.
  2. Click the Simulation Testing tab at the top (beside A/B Testing).
  3. You will see two sub-tabs:
    • Test Cases – Create and configure simulations.
    • Results – Review past simulation outputs.
The main screen shows:
  • Test Cases at the top.
  • Data Sources (optional call recordings) at the bottom.

Test Cases Overview

In the Test Cases area, you can manage every simulation you create. Each test case card includes two key actions: Config and Run. Clicking Config opens the editor, where you can update the case name, description, number of test instances, and the randomness (variance) of the generated conversations, letting you fine-tune how the simulated caller behaves before running the test. The Run button immediately triggers the simulation using the settings you’ve configured. If you want to remove a test case entirely, click the ellipsis (⋮) in the top-right corner of the card; this reveals a Delete option that lets you safely remove test cases you no longer need. This layout makes it easy to adjust, execute, and clean up simulation tests as you iterate on your agent’s behavior.

How to Create a Simulation Test Case

Follow these steps to build your first simulation:
  1. Go to Simulation Testing > Test Cases.
  2. Click the + card.
  3. A modal window titled Add Test Case will appear.
Fill in the following details:

Case Name

A short, descriptive label such as:
  • “Support Call – Device Setup”
  • “Customer Asking About Plans”
  • “Lead Qualification Script”
Description

Explain the goal of the simulation. Example:
“Simulates a conversation where a customer asks about pricing and plan options.”

Configure Test Parameters

Two sliders allow you to define the nature and quantity of simulations:

Test Instance

Defines how many conversations to generate.
  • Range: 1–25 per run
  • Up to 1000 total runs (billed at $0.10 per run)
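The pricing above makes batch costs easy to estimate. The sketch below assumes each generated conversation counts as one billed run (the $0.10 rate, 1–25 instance range, and 1,000-run cap come from this guide; the function name itself is just illustrative):

```python
# Estimate the cost of a batch of simulation runs.
# Pricing from this guide: $0.10 per run, up to 1,000 total runs,
# 1-25 test instances per run. Assumes each generated conversation
# is billed as one run.
PRICE_PER_RUN = 0.10
MAX_TOTAL_RUNS = 1000
MAX_INSTANCES_PER_RUN = 25

def estimate_cost(instances_per_run: int, num_runs: int) -> float:
    """Return the estimated cost in dollars for a simulation batch."""
    if not 1 <= instances_per_run <= MAX_INSTANCES_PER_RUN:
        raise ValueError("Test Instance must be between 1 and 25")
    total = instances_per_run * num_runs
    if total > MAX_TOTAL_RUNS:
        raise ValueError("Total runs cannot exceed 1,000")
    return total * PRICE_PER_RUN

# 25 conversations per run across 4 runs = 100 billed runs
print(estimate_cost(25, 4))  # 10.0
```

In other words, a full 1,000-run test cycle would cost at most $100 under this pricing.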

Variance

Controls the randomness of the simulated customer behavior.
  • 1 = Low Variance: Conversations stay similar across runs.
  • 10 = High Variance: Conversations differ dramatically (ideal for stress testing).
Higher variance helps reveal weaknesses in:
  • Prompts
  • Routing
  • Tone handling
  • Unexpected customer questions

Working With Data Sources

Data sources determine where the “customer side” of the simulated conversation comes from.
You can attach any of the following:
  • Call Recordings
  • Phonely’s Calls (past real calls)
  • AI Evaluators
Each of these appears as a tab at the top-right of the Data Sources panel. You can attach one data source or mix multiple types depending on how complex your test case is.

Call Recordings

Call Recordings allow you to upload audio files that represent real customer speech.
This is useful when you want the AI to react to real-world phrasing, accents, pacing, or emotional tone.
When you choose Call Recordings, Phonely displays an empty table until you upload something. To upload:
  1. Click Upload Files
  2. Enter a recording name (e.g., “Customer refund call”)
  3. Choose Inbound or Outbound
  4. Drag or browse to add audio files
  5. Press Upload
Uploaded recordings appear in a list. Select the checkboxes to attach them to a test case. Phonely automatically transcribes and uses the audio during the simulation.

Phonely’s Calls

This tab displays recordings from actual calls your agent has already handled inside Phonely. It is extremely helpful for:
  • regression testing
  • verifying bug fixes
  • comparing old vs new agent behavior
You simply select any past call and link it to a test case. Phonely replays the transcript during the simulated test.

AI Evaluators

AI Evaluators are the most flexible data source. Instead of using audio recordings, an evaluator acts as the customer during simulation. To create one:
  1. Click Add Evaluator and give it a name.
  2. Choose a conversation style such as Casual, Angry, Frustrated, Happy, Distracted, Sad, or Demanding.
  3. Write detailed instructions; these define how the evaluator behaves.
    Example: “Act like a customer trying to book a flight but constantly changing their mind.”
  4. Define the success criteria
    Example: “Success means the agent confirms date, destination, and seat type.”
  5. Click Add
Once created, the evaluator appears in the list and you can link it to any test case. Evaluators produce extremely realistic interactions because they generate natural, dynamic dialogue instead of repeating fixed audio.
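Conceptually, the Add Evaluator form collects four fields. Phonely configures evaluators through the dashboard, not through code; the class and field names below are a hypothetical sketch of that form, populated with the examples from the steps above:

```python
# Hypothetical representation of the Add Evaluator form fields.
# Phonely defines evaluators in its dashboard UI; this class and its
# field names are illustrative only, not a real Phonely API.
from dataclasses import dataclass

@dataclass
class Evaluator:
    name: str              # label shown in the evaluator list
    style: str             # e.g. Casual, Angry, Frustrated, Happy, Distracted, Sad, Demanding
    instructions: str      # how the simulated customer behaves
    success_criteria: str  # what counts as a passed run

flight_booking = Evaluator(
    name="Indecisive flight booker",
    style="Distracted",
    instructions=("Act like a customer trying to book a flight "
                  "but constantly changing their mind."),
    success_criteria=("Success means the agent confirms date, "
                      "destination, and seat type."),
)
print(flight_booking.style)  # Distracted
```

Writing the instructions and success criteria as concretely as this example makes pass/fail judgments far less ambiguous when you review results later.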

Run the Simulation

After saving the test case:
  1. Find the test card
  2. Click Run
  3. Phonely will begin generating simulated calls
  4. Progress indicators will appear
You can continue working while simulations run in the background.

Reviewing Simulation Results

After simulations complete, open the Results tab. For each test case, you will see the number of runs, the status of each run, when it was created, and whether it succeeded, failed, or is still in progress.