Evaluations are a key component of your Quality Assurance program. They provide a consistent and structured way to measure performance, identify strengths and opportunities, and ensure that every customer interaction reflects your organization’s standards.
Before completing evaluations, make sure your scorecards and scoring models are already set up. These define what will be measured and how each interaction is scored.
To learn how to build a scorecard, see Building your Scorecard Structure.
To learn more about scoring methods, see Understanding Scoring Models.
Once your scorecards are complete and published, you can begin completing evaluations—either manually by a quality analyst or automatically through AI—to capture insights, measure consistency, and drive meaningful coaching discussions.
To begin an evaluation, you must first select the agent you will be evaluating and the scorecard you want to use.
There are several ways to start a new evaluation and select both the agent and scorecard:
To start an evaluation from the Users menu, go to the Organization section and select Users. Locate the user you want to evaluate, then click the Action icon at the end of their row and select Evaluate.
A session setup window will appear with the selected user pre-populated. From there, choose the scorecard you want to use for the evaluation.
To start an evaluation from the Teams menu, go to the Organization section and select Teams. Locate the team you want to view, then click the Action icon at the end of the row and select View. This will display all users assigned to that team.
Find the agent you want to evaluate, click the Action icon at the end of their row, and select Evaluate. A session setup window will appear with the agent pre-populated. Choose the scorecard you want to use, just as you would when starting an evaluation directly from the Users menu.
To start an evaluation from a specific scorecard, go to the Quality menu and select Scorecards. This will display all available scorecards. Locate the scorecard you want to use, then click the Action icon at the end of the row and select Evaluate.
An evaluation setup window will appear with the selected scorecard pre-populated. You will then be prompted to choose the agent for the evaluation; the selection window allows you to select only one agent per evaluation.
The final two ways to start an evaluation are from the Evaluations page.
Evaluations can be completed in two ways: Manual or AI.
Each scorecard has two main sections—Meta Data and Scoring (or Audit)—and the process for completing these sections differs depending on the evaluation type.
In a manual evaluation, the quality analyst completes all fields directly.
Meta Data fields are free-form inputs used to capture information that is not tied to a specific Yes/No or Selective response. These may include details such as call reason, unique ID, or other contextual information entered by the analyst.
The Scoring/Audit section contains the measurable criteria defined by the scorecard’s scoring model. Depending on the model, responses follow the format that model defines, such as Yes/No answers or selective (multiple-choice) options.
Each scoring or audit section also includes a + Notes option. Analysts can enter notes to provide context or specific feedback for each section. These notes appear on the final evaluation for both the manager and agent to review.
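For teams that export or report on evaluation data, the structure described above can be pictured roughly as follows. This is a minimal illustrative sketch, assuming one record per evaluation; all class and field names are hypothetical and do not reflect the product’s actual data model.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the evaluation structure described above.
# All names are illustrative; this is not the product's actual schema.

@dataclass
class ScoringSection:
    criterion: str      # measurable criterion defined by the scorecard
    response: str       # e.g. "Yes"/"No" or a selective option, per the scoring model
    notes: str = ""     # optional "+ Notes" entry, shown to both manager and agent

@dataclass
class Evaluation:
    agent: str
    scorecard: str
    meta_data: dict = field(default_factory=dict)  # free-form context: call reason, unique ID, etc.
    sections: list = field(default_factory=list)   # ScoringSection entries

# Example: a manual evaluation completed by a quality analyst
evaluation = Evaluation(
    agent="Jane Doe",
    scorecard="Inbound Support QA",
    meta_data={"call_reason": "billing question", "unique_id": "CALL-1042"},
    sections=[ScoringSection("Used the proper greeting", "Yes", "Warm, on-script opening")],
)
```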
In an AI evaluation, both the Meta Data and Scoring/Audit sections are completed automatically by the AI.
After selecting the agent and scorecard, the evaluator will be taken to the Upload Calls page, where they upload the call recording or recordings to be evaluated.
The AI will transcribe each call into a readable transcript, then compare that transcript against the scorecard criteria using the AI scoring instructions. The evaluation will appear as Pending until the process is complete.
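Conceptually, that transcribe-then-score flow looks like the sketch below. The functions transcribe and score_criterion are placeholders for the platform’s internal steps, not a real exposed API, and the scorecard shape is assumed purely for illustration.

```python
# Conceptual sketch of the AI evaluation flow described above.
# transcribe() and score_criterion() stand in for the platform's internal
# transcription and AI scoring steps; they are not a real, exposed API.

def transcribe(call_audio):
    # Placeholder for the platform's speech-to-text step.
    return "...readable transcript of the call..."

def score_criterion(transcript, ai_instructions):
    # Placeholder: the AI compares the transcript against one criterion
    # using that criterion's AI scoring instructions.
    return "Yes"

def run_ai_evaluation(call_audio, scorecard):
    evaluation = {"status": "Pending", "sections": []}  # Pending until scoring completes
    transcript = transcribe(call_audio)
    for criterion in scorecard["criteria"]:
        response = score_criterion(transcript, criterion["ai_instructions"])
        evaluation["sections"].append({"criterion": criterion["name"], "response": response})
    evaluation["status"] = "Complete"
    return evaluation

# Example usage with a minimal, made-up scorecard definition
scorecard = {"criteria": [{"name": "Proper greeting",
                           "ai_instructions": "Did the agent greet the caller?"}]}
result = run_ai_evaluation(call_audio=b"...", scorecard=scorecard)
```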
Once either the manual or AI evaluation is completed, it is automatically sent to the evaluator or assigned manager for coaching.
The manager can view the entire evaluation, complete the coaching session, and record commitments or feedback. The completed evaluation and coaching session will then appear on the agent’s dashboard for review.
Note: Evaluations using the Audit scoring model are for data collection only. They do not generate a coaching session or appear in the manager’s queue but remain accessible for reporting and trend analysis.
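The routing difference between scoring models can be summarized in a short sketch. The Audit check below is illustrative only, assuming the scoring model name is available on the completed evaluation; it is not actual product code.

```python
# Illustrative routing for a completed evaluation, matching the note above.
# Hypothetical names; not actual product code.

def route_completed_evaluation(scoring_model: str) -> str:
    if scoring_model == "Audit":
        # Audit evaluations are data collection only: no coaching session,
        # no manager queue, but still available for reporting and trend analysis.
        return "reporting_only"
    # All other evaluations go to the evaluator or assigned manager for coaching.
    return "manager_coaching_queue"
```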