How to Customize and Test AI Agent Capabilities in BoldDesk
Testing AI agents in BoldDesk is crucial to ensuring they provide accurate and consistent support, and it lets you refine the AI's training before it engages with real users. This guide explains how to simulate customer queries, validate intent recognition, and verify AI Actions. It also covers multi-agent testing, which confirms that each AI agent responds appropriately within its designated role and channel so that support stays seamless across specialized areas such as billing, onboarding, or technical assistance.
Testing before deployment confirms that your AI agent performs reliably, delivers a high-quality customer experience, and behaves as expected in the channel where it is deployed.
Steps to Test an AI Agent in BoldDesk
- Access the AI Testing Interface:
  - Navigate to the AI module from the left sidebar.
  - Click AI Agent and select the agent you want to test.
  - Click the more options icon on the Agent tab.
  - Select Test Agent to open the testing interface, where you can run sample interactions.
- Use Sample Queries:
  - Enter realistic customer queries to simulate live interactions.
  - Test across different topics and intents relevant to your business.
  - Observe how the agent responds and whether it uses the correct content sources.
Check out the visual below.
- Validate Intent Recognition:
  - Check if the AI correctly identifies the user's intent.
  - Ensure it routes queries to the appropriate department or provides accurate answers.
  - For multi-agent setups, confirm that the right agent is triggered based on the query context.
- Test AI Actions:
  - If you've configured AI Actions, test them by entering their trigger phrases.
  - For example, simulate a request to update a billing address or check an order status.
  - Confirm that the agent performs the action or calls the external API as expected (see the stub sketch after this list).
- Repeat and Improve:
  - Based on test results, refine:
    - Training content (KB articles, Q&A, files).
    - Communication style (tone and length).
    - AI Actions and workflows.
  - Re-test until the agent meets your performance standards.
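If an AI Action calls an external API, it can help to point the action at a simple stub service while testing so you can confirm the call is made and the response is relayed back to the customer correctly. The following is a minimal sketch of such a stub in TypeScript with Express; the route, port, and response fields are illustrative assumptions, not part of BoldDesk, so adapt them to whatever API your action is actually configured to call.

```ts
// Hypothetical stub endpoint for testing an AI Action that checks order status.
// The route, port, and response shape are assumptions for illustration only.
import express from "express";

const app = express();
app.use(express.json());

// Return a canned order status so you can verify that the agent calls out
// to the external API and relays the result correctly during testing.
app.get("/orders/:orderId/status", (req, res) => {
  res.json({
    orderId: req.params.orderId,
    status: "shipped",               // fixed test value
    estimatedDelivery: "2024-06-01", // fixed test value
  });
});

app.listen(3000, () => {
  console.log("AI Action test stub listening on http://localhost:3000");
});
```

With a stub like this running, enter the action's trigger phrase in Test Agent (for example, a question about an order's status) and confirm that the stubbed values appear in the agent's reply.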
Testing Multi-Agent Deployments
If you’re using multiple AI agents:
- Test each agent individually for its assigned role (for example, billing, onboarding).
- Validate that agents respond appropriately when mapped to different channels.
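When several agents share a deployment, it can help to keep a small test matrix of sample queries, the agent you expect to handle each one, and the channel that agent is mapped to, and then work through it manually in Test Agent. The sketch below is a hypothetical TypeScript checklist; the roles, channels, and queries are assumptions to replace with the agents configured in your own BoldDesk instance.

```ts
// Hypothetical multi-agent test matrix. Roles, channels, and queries are
// illustrative assumptions; substitute your own agents and channels.
interface AgentTestCase {
  query: string;          // sample customer message to enter in Test Agent
  expectedAgent: string;  // agent that should handle it
  channel: string;        // channel the agent is mapped to
}

const testMatrix: AgentTestCase[] = [
  { query: "I was charged twice this month", expectedAgent: "Billing",    channel: "Email" },
  { query: "How do I invite my team?",       expectedAgent: "Onboarding", channel: "Live Chat" },
  { query: "The app crashes on login",       expectedAgent: "Technical",  channel: "Portal" },
];

// Walk the matrix manually: run each query through Test Agent and record
// whether the expected agent responded in the expected channel.
for (const tc of testMatrix) {
  console.log(`[${tc.channel}] "${tc.query}" -> expect ${tc.expectedAgent}`);
}
```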
To test and manage AI agents, users must have the Manage AI Agents permission. To grant it, go to Admin > Roles and Permissions > Manage AI Agents.