You can evaluate the performance of AI agents using Zendesk QA by analyzing their interactions with customers. This involves both manual and automatic evaluation methods.
For manual evaluation, you set up a scorecard to assess various performance categories for your bots, much as you would for human agents. You can then filter for specific bot conversations and rate them against the scorecard. For automatic evaluation, you can enable AutoScoring, which assesses bots on categories such as Greeting, Empathy, and Comprehension. The results can be viewed in the conversation panel or in the Reviews section.
Zendesk QA can evaluate various types of AI agents, including conversation bots, Ultimate messaging bots, and bots created with Sunshine Conversations. To see which bots have been detected, navigate to Zendesk QA, click your profile icon, select…
To manually evaluate AI agent conversations in Zendesk QA, you need to set up a scorecard for the categories you want to assess. Start by clicking the Conversations icon, then select or create a filter to identify the bot conversations you wish to…
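Filters for bot conversations are configured in the Zendesk QA UI. As a loose illustration only, a similar selection could be scripted against the Zendesk Support Search API; the tag name used here (`ai_agent`) is an assumption about how your instance marks bot-handled tickets, not something Zendesk QA requires:

```python
from urllib.parse import urlencode

def build_bot_conversation_search(subdomain: str, bot_tag: str = "ai_agent") -> str:
    """Build a Zendesk Support Search API URL that finds bot-handled tickets.

    The "ai_agent" tag is a placeholder -- substitute whatever tag or
    custom field your instance actually applies to bot conversations.
    """
    query = f"type:ticket tags:{bot_tag}"
    return (f"https://{subdomain}.zendesk.com/api/v2/search.json?"
            + urlencode({"query": query}))

# Example: the "example" subdomain is hypothetical.
url = build_bot_conversation_search("example")
print(url)
```

The returned URL would then be requested with your API credentials; pagination and authentication are omitted here for brevity.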
AutoScoring in Zendesk QA is a feature that automatically evaluates bot conversations based on predefined categories. Once AutoScoring is set up, it assesses bots on categories like Greeting, Empathy, and Tone. The results are visible in the Review…
Zendesk QA is primarily designed for evaluating AI agents within the Zendesk ecosystem, such as conversation bots and Sunshine Conversations bots. While you can connect Intercom to Zendesk QA, the AI agent monitoring feature is specifically built…