Testing Your Agent

Before you run your agent on real data, test it. The Test tab lets you create sandboxed runs with sample inputs, watch each step execute in real time, and inspect the results. This guide walks you through the full testing workflow.

Getting Started · 5 min read
01

Open the Test Tab

In the Agent Studio sidebar, click the Test tab (flask icon). You'll see the Test Runs screen — a list of your previous test runs with status badges (Completed, Failed, Running) and a + New Test button in the top right. If you haven't run any tests yet, you'll see an empty state prompting you to create your first run. The Test tab is disabled until all required connections are set up on the Build tab.

Why test?
Test runs are sandboxed — emails go to you, not real recipients. Catch issues before they affect real data
What you see
A list of previous test runs with status icons, timestamps, and a badge showing Completed, Failed, or Running
02

Create a Test Run

Click + New Test to open the input form. Fill in the fields your agent needs — these match the required inputs you defined in the workflow (e.g. a Google Sheet URL, a date range, a search query). Required fields are marked with a red asterisk. If you've run tests before, click Fill from last run to pre-populate the form with your previous inputs. Click Run Test when ready.

Input fields
Each field matches a workflow input your agent expects — fill them with real or sample data
Fill from last run
Pre-populate the form with inputs from your most recent test — saves time when re-testing

Use real data when possible — a real Google Sheet URL, a real Trello board. This gives you the most accurate test results. Test mode ensures emails are sent to you, not the actual recipients.

03

Watch the Execution

Once you click Run Test, the agent starts executing. A progress bar at the top tracks overall completion (e.g. "3/8 steps"). Each step shows its status in real time — a blue spinner while running, a green checkmark when complete, or a red X if something went wrong. Steps that can run in parallel execute simultaneously. You can click Stop at any time to cancel the run.

Running
Blue spinner icon and blue border — the step is currently executing
Complete
Green checkmark and green border — the step finished successfully
Failed
Red X icon and red border — something went wrong, click to see the error
Skipped
Dashed border — the step was skipped because its condition wasn't met
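The four badges above follow a simple rule: a step that never meets its condition is skipped, an unfinished step is running, and a finished step is complete or failed depending on whether it raised an error. A minimal sketch of that rule in Python (the `StepStatus` names and `resolve_status` helper are illustrative, not Kindgi's actual API):

```python
from enum import Enum

class StepStatus(Enum):
    RUNNING = "running"    # blue spinner, blue border
    COMPLETE = "complete"  # green checkmark, green border
    FAILED = "failed"      # red X, red border
    SKIPPED = "skipped"    # dashed border

def resolve_status(condition_met, finished=True, error=None):
    """Map a step's outcome onto the badge shown in the Test tab."""
    if not condition_met:
        return StepStatus.SKIPPED   # condition wasn't met, step never ran
    if not finished:
        return StepStatus.RUNNING   # still executing
    return StepStatus.FAILED if error else StepStatus.COMPLETE
```

Note that "skipped" wins over everything else: a step whose condition isn't met never starts, so it can neither complete nor fail.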
04

Inspect Results

Click any completed step to expand it. You'll see an AI-generated summary of what the step did, followed by detailed sections: what it does (intent), parameters used, input sources, and the full result output. For steps that produce documents or links, those appear as clickable artifacts below the result. Each step also shows its duration and when it ran.

Click to expand
Click any step to see its full output — AI summary, parameters, inputs, and raw result
Verify the data
Check that the right rows were fetched, the right fields were extracted, and the right actions were taken
Duration
Each step shows how long it took (e.g. "2.3s") and when it ran
Re-run a step
Click "Re-run" on any completed step to execute just that one step again

Pay attention to the data flowing between steps. If Step 2 depends on Step 1's output, check that Step 1 produced the right data first.
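One systematic way to apply this advice is to walk the steps in execution order and find the earliest upstream step that produced no data. A rough sketch, assuming a hypothetical shape for the results (Kindgi's internal format will differ):

```python
def find_first_bad_step(results, deps):
    """Return the earliest step whose output is empty but is needed downstream.

    results maps step name -> output, in execution order (dicts preserve
    insertion order); deps maps step name -> list of upstream step names.
    """
    for step in results:
        for upstream in deps.get(step, []):
            if not results.get(upstream):   # upstream produced nothing
                return upstream
    return None

# Example: "summarize" depends on "fetch_rows", which fetched zero rows,
# so "fetch_rows" is the step to inspect first.
results = {"fetch_rows": [], "summarize": None}
deps = {"summarize": ["fetch_rows"]}
print(find_first_bad_step(results, deps))  # -> fetch_rows
```

The point mirrors the tip above: when a downstream step looks wrong, check its upstream output before editing the downstream step itself.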

05

Debug Failures

If a step fails, click it to expand the card; the error message appears in red at the bottom. Each failed step has a Retry button so you can re-run just that step after fixing the issue. If a step fails unexpectedly, click Report to send a bug report to the Kindgi team with the error details and any additional context you provide.

Missing connection
The step needs an app that isn't connected yet — go to Build and click the orange badge
Wrong field names
The step expected a column or field that doesn't exist — check your sheet headers or app structure
Retry
Click "Retry" on a failed step to re-run just that step without restarting the entire test
Report
Click "Report" to send the error details to Kindgi — add context about what you expected to happen

Most failures come from one of three things: a missing connection, a typo in a field name, or a filter that's too strict. Check those first.
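Those three causes can often be recognized straight from the error text. A sketch of that triage, using hypothetical error strings (Kindgi's actual messages will differ):

```python
def triage(error):
    """Suggest a first fix for a failed step based on its error message."""
    msg = error.lower()
    if "not connected" in msg or "authorization" in msg:
        return "Missing connection: go to Build and click the orange badge."
    if "no such column" in msg or "field not found" in msg:
        return "Wrong field name: check your sheet headers or app structure."
    if "0 rows" in msg or "no results" in msg:
        return "Filter too strict: loosen the filter and retry the step."
    return "Read the full error on the expanded card, then Retry."

print(triage("Trello: board not connected"))
```

Whatever the cause, the order of operations stays the same: read the error, fix the likely cause, then Retry just that step.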

06

Iterate and Improve

Testing is iterative. After inspecting your results, switch back to the Build tab to make changes, then return to Test and run another test. Your test runs list keeps a full history — each run shows its status, a description from the inputs you provided, and how long ago it ran. Click any past run to review its results. Vary your inputs across test runs to cover different scenarios.

Testing checklist
Run at least one test with realistic data before going live
Check each step's output — not just the final result
Vary your inputs to cover edge cases (empty rows, missing fields, different statuses)
If a step fails, read the error message before changing anything
After fixing an issue in Build, always re-test to confirm the fix works
Build
Make changes to your workflow
Test
Run a test to verify the changes
Repeat
Keep iterating until every step passes

You're all set

Your Testing Your Agent guide is complete.
