How to Measure AI Copilot Impact in BoldDesk (AI Copilot Report)
AI Copilot helps support agents respond faster by generating context-aware answers. The AI Copilot Report, found in the Reports module (Analytics), provides Copilot usage metrics. The Asked vs. Answered Questions metric quantifies how often agents request Copilot assistance and how often Copilot successfully returns an AI-generated response.
This guide explains how to track AI Copilot’s performance using the Asked vs. Answered Questions metric.
What “Asked vs. Answered Questions” measures
Measuring the effectiveness of AI Copilot gives you clear insights into:
Agent Productivity Signals
- How frequently agents submit questions to AI Copilot (Asked Questions)
- How often AI Copilot returns an AI-generated response (Answered Questions)
Operational Efficiency
When Copilot reliably returns answers, agents typically spend less time drafting responses manually, which can support improvements in response and resolution time KPIs (for example, first response and overall resolution time).
Support Quality
Consistent answer returns, plus high adoption (see “Adoption” below), suggest AI-generated responses are more usable for real tickets. By monitoring AI‑generated responses, you can clearly demonstrate how AI Copilot reduces handling time and improves support outcomes.
Access prerequisites (roles/permissions)
To view the AI Copilot Report, a user must have Reports access (typically Agent or Admin roles with reporting permissions).
If a user cannot access the AI Copilot Report, verify the user’s role or reporting permissions in BoldDesk administration.
How to Track AI Copilot’s Impact
1. Open the AI Copilot Report
- Navigate to Reports.
- Open AI Usage Dashboard.
- Select the Conversation Report tab (AI Copilot Report).
This dashboard displays detailed insights on Copilot usage across your support team.
2. Review “Asked Questions vs. Answered Questions”
This is the most critical metric for measuring Copilot effectiveness.
| Metric | Description | Formula |
|---|---|---|
| Asked Questions (Copilot Questions) | The number of queries agents submitted to AI Copilot. | — |
| Answered Questions | The number of AI-generated responses that were successfully returned by AI Copilot. | — |
| Unanswered Questions | The number of agent queries that did not receive an AI-generated response. | Asked Questions − Answered Questions |
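The relationship between these metrics can be sketched as a small calculation. The figures below are illustrative, not taken from a real report, and the answer rate is a derived ratio for convenience, not a named BoldDesk metric:

```python
def copilot_metrics(asked: int, answered: int) -> dict:
    """Derive computed fields from the two raw report counts.

    'Unanswered Questions' follows the formula in the table above:
    Asked Questions - Answered Questions. 'answer_rate' is an
    illustrative derived ratio, not a named BoldDesk metric.
    """
    if answered > asked:
        raise ValueError("Answered Questions cannot exceed Asked Questions")
    unanswered = asked - answered
    answer_rate = answered / asked if asked else 0.0
    return {
        "asked": asked,
        "answered": answered,
        "unanswered": unanswered,
        "answer_rate": answer_rate,
    }

# Example: 250 agent queries, of which 230 received an AI-generated response
metrics = copilot_metrics(asked=250, answered=230)
print(metrics["unanswered"])   # 20
print(metrics["answer_rate"])  # 0.92
```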
3. How to interpret “Asked vs. Answered Questions”
Healthy pattern: Answered Questions ≈ Asked Questions
When Answered Questions is close to (or matches) Asked Questions, it usually indicates:
- AI Copilot is responding successfully to agent requests
- Agents can draft replies faster because Copilot returns usable starting points more consistently
- Support workflows are more likely to move faster because less time is spent waiting for content or rewriting from scratch
Concerning pattern: a widening gap between Asked and Answered
When Asked Questions is meaningfully higher than Answered Questions, it may indicate:
- Copilot is not returning answers consistently (potential configuration or availability issues)
- Agents are asking questions that Copilot cannot answer reliably (knowledge/training coverage gaps)
- A need for agent guidance on how to phrase questions for best Copilot results
Focus on Unanswered Questions and monitor for improvement trends after changes.
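The interpretation above can be sketched as a simple monitoring check. The 0.85 answer-rate threshold is an assumption chosen for illustration, not a BoldDesk default; tune it to your team's baseline:

```python
def flag_copilot_gap(asked: int, answered: int, min_rate: float = 0.85) -> str:
    """Classify the Asked-vs-Answered pattern.

    min_rate (0.85 here) is an illustrative assumption, not a
    BoldDesk default; adjust it to your own baseline.
    """
    if asked == 0:
        return "no usage"
    rate = answered / asked
    if rate >= min_rate:
        return "healthy"
    return "concerning: investigate unanswered questions"

print(flag_copilot_gap(250, 230))  # healthy (answer rate 0.92)
print(flag_copilot_gap(250, 150))  # concerning: investigate unanswered questions
```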
Frequently Asked Questions (FAQs)
- Where can I find AI Copilot usage metrics?
Go to Reports → AI Usage Dashboard → Conversation Report tab (AI Copilot Report).
- What does “Asked Questions” mean?
“Asked Questions” is the count of queries agents submitted to AI Copilot.
- What does “Answered Questions” mean?
“Answered Questions” is the count of AI-generated responses successfully returned by AI Copilot.
- How does “Asked vs. Answered Questions” relate to response and resolution time?
When Answered Questions is close to Asked Questions, AI Copilot is returning usable answers more consistently, which typically reduces manual drafting time and can improve response and resolution time KPIs. A larger gap often implies more manual work, rewrites, or escalation, which can slow handling.
- What does “adoption” mean, and why track it?
“Adoption” is how often agents use Copilot’s suggestion (as-is or edited) versus discarding it. Higher adoption generally indicates AI Copilot output is trusted and time-saving; lower adoption often signals knowledge gaps or mismatched query types.
- Why can’t agents see the AI Copilot Report?
Ensure the agent has Reports access (role/permission). Verify navigation: Reports → AI Usage Dashboard → Conversation Report tab.
- Why is “Asked Questions” high but “Answered Questions” low?
This indicates a need for configuration or knowledge base improvements. Check if queries are within the scope of available knowledge sources.
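The adoption metric described in the FAQs above can be sketched as a ratio of used suggestions to total suggestions. The three input counts are hypothetical categories for illustration; BoldDesk's report may break suggestion outcomes down differently:

```python
def adoption_rate(used_as_is: int, used_edited: int, discarded: int) -> float:
    """Share of Copilot suggestions agents actually used (as-is or edited).

    The three counts are hypothetical inputs; BoldDesk's report may
    categorize suggestion outcomes differently.
    """
    total = used_as_is + used_edited + discarded
    if total == 0:
        return 0.0
    return (used_as_is + used_edited) / total

# Example: 120 suggestions used unchanged, 60 used after edits, 20 discarded
print(adoption_rate(120, 60, 20))  # 0.9
```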