Automated quality management and conversation analysis.

Overview

Quality AI enables you to:
  • Evaluate 100% of customer interactions.
  • Identify coaching opportunities automatically.
  • Monitor compliance and adherence.
  • Drive continuous improvement with data.

How It Works

┌───────────────────────────────────────────────────────┐
│                 Customer Interactions                 │
│            (Voice, Chat, Email, Social)               │
└───────────────────────────┬───────────────────────────┘
                            │
                            ▼
┌───────────────────────────────────────────────────────┐
│                  Quality AI Engine                    │
│                                                       │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐    │
│  │  Speech/    │  │ Evaluation  │  │  Insight    │    │
│  │  Text       │  │  Scoring    │  │ Generation  │    │
│  │  Analysis   │  │             │  │             │    │
│  └─────────────┘  └─────────────┘  └─────────────┘    │
└───────────────────────────┬───────────────────────────┘
                            │
         ┌──────────────────┼──────────────────┐
         ▼                  ▼                  ▼
┌────────────────┐  ┌───────────────┐  ┌────────────────┐
│  Auto-Scores   │  │   Coaching    │  │  Compliance    │
│ & Evaluations  │  │  Assignments  │  │  Monitoring    │
└────────────────┘  └───────────────┘  └────────────────┘

Evaluation Criteria

Standard Criteria

Quality AI includes these evaluation criteria out of the box:
Criteria               Description
Greeting               Proper introduction and identification
Empathy                Acknowledging customer emotions
Issue understanding    Correctly identifying the problem
Resolution             Providing an accurate solution
Closing                Proper wrap-up and next steps
Compliance             Following required disclosures

Custom Criteria

Create custom criteria to match your business needs:
Criteria: Product Knowledge
Description: Agent demonstrates accurate product knowledge
Weight: 15%
Scoring:
  - 5: Excellent - Comprehensive, accurate information
  - 4: Good - Mostly accurate with minor gaps
  - 3: Acceptable - Basic knowledge demonstrated
  - 2: Below expectations - Significant gaps
  - 1: Unacceptable - Incorrect information provided
Auto-evaluate: true
Keywords:
  positive: ["correct", "accurate", "helpful"]
  negative: ["wrong", "incorrect", "misinformation"]
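The keyword lists in a custom criterion feed the auto-evaluation step. As a rough illustration of how keyword evidence could map to the 1-5 scale above, here is a minimal sketch; the scoring heuristic and function name are illustrative assumptions, not the product's algorithm:

```python
# Sketch of keyword-based auto-evaluation for a custom criterion.
# Keyword lists mirror the "Product Knowledge" example above; the
# thresholds are illustrative assumptions.

POSITIVE = {"correct", "accurate", "helpful"}
NEGATIVE = {"wrong", "incorrect", "misinformation"}

def keyword_score(transcript: str) -> int:
    """Map keyword evidence in a transcript to a 1-5 score."""
    words = [w.strip(".,!?") for w in transcript.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if neg > 0:
        return 1 if neg > pos else 2   # negative evidence dominates
    if pos >= 3:
        return 5
    if pos == 2:
        return 4
    return 3                           # no strong evidence either way

print(keyword_score("That answer was correct, accurate, and helpful."))  # → 5
```

In practice, production auto-scoring uses speech/text analysis rather than literal word matching, but the mapping from evidence to a scale works the same way.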

Evaluation Forms

Group criteria into evaluation forms and apply them to queues:
Form: Customer Service Standard
Criteria:
  - greeting (10%)
  - empathy (15%)
  - issue_understanding (20%)
  - resolution (30%)
  - product_knowledge (15%)
  - closing (10%)
Pass threshold: 80%
Apply to:
  - queue: support
  - channel: all
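The weights in a form determine how per-criterion scores roll up into an overall result. A minimal sketch of that roll-up, assuming criterion scores on the 1-5 scale normalized to a percentage (the variable names are illustrative):

```python
# Sketch: weighted overall score for the "Customer Service Standard"
# form above, with each criterion scored 1-5 and normalized.

WEIGHTS = {
    "greeting": 0.10, "empathy": 0.15, "issue_understanding": 0.20,
    "resolution": 0.30, "product_knowledge": 0.15, "closing": 0.10,
}
PASS_THRESHOLD = 0.80

def form_score(scores: dict) -> float:
    """Weighted average of per-criterion scores, as a fraction of the max (5)."""
    return sum(WEIGHTS[c] * scores[c] / 5 for c in WEIGHTS)

scores = {"greeting": 5, "empathy": 4, "issue_understanding": 4,
          "resolution": 5, "product_knowledge": 3, "closing": 5}
total = form_score(scores)
print(f"{total:.0%} -> {'pass' if total >= PASS_THRESHOLD else 'fail'}")  # 87% -> pass
```

Note that high-weight criteria such as resolution (30%) dominate the outcome, which is why weight assignment deserves calibration.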

Auto-Scoring

How It Works

AI evaluates interactions in four steps:
  1. Speech/text analysis — Transcribe and analyze the conversation.
  2. Criteria matching — Map conversation content to evaluation criteria.
  3. Scoring — Assign scores based on evidence found.
  4. Confidence flagging — Flag low-confidence scores for human review.

Configuration

Auto-Scoring:
  enabled: true
  evaluation_rate: 100%  # Evaluate all interactions
  confidence_threshold: 0.8
  human_review:
    - low_confidence_scores
    - failed_evaluations
    - random_sample: 5%
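The review-routing policy in this config can be sketched as a single predicate: an interaction goes to human review if its auto-score confidence is low, the evaluation failed, or it falls into the 5% random sample. Function and parameter names here are illustrative, not the product API:

```python
import random

# Sketch of the human-review routing policy configured above.

CONFIDENCE_THRESHOLD = 0.8
RANDOM_SAMPLE_RATE = 0.05

def needs_human_review(confidence: float, passed: bool,
                       rng: random.Random = random) -> bool:
    if confidence < CONFIDENCE_THRESHOLD:
        return True                       # low-confidence score
    if not passed:
        return True                       # failed evaluation
    return rng.random() < RANDOM_SAMPLE_RATE  # random 5% sample

print(needs_human_review(0.65, passed=True))  # → True (low confidence)
```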

Calibration

Keep scores consistent across evaluators:
  1. Select a calibration sample.
  2. Have multiple evaluators score the same interactions.
  3. Compare scores and discuss differences.
  4. Refine criteria definitions.
  5. Re-train the auto-scoring model.
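Step 3, comparing scores across evaluators, can be automated by flagging criteria where scores spread too far apart. A minimal sketch, assuming a 1-point maximum acceptable spread on the 1-5 scale (an assumed working value, not a product default):

```python
from statistics import mean

# Sketch: flag criteria where evaluators disagree by more than
# max_spread points on the same calibration interactions.

def calibration_gaps(scores_by_evaluator: dict, max_spread: float = 1.0) -> dict:
    """scores_by_evaluator: {evaluator: {criterion: score}}.
    Returns {criterion: (min, mean, max)} for criteria spread wider
    than max_spread."""
    criteria = next(iter(scores_by_evaluator.values())).keys()
    gaps = {}
    for c in criteria:
        vals = [s[c] for s in scores_by_evaluator.values()]
        if max(vals) - min(vals) > max_spread:
            gaps[c] = (min(vals), mean(vals), max(vals))
    return gaps

scores = {"alice": {"empathy": 4, "resolution": 2},
          "bob":   {"empathy": 4, "resolution": 5},
          "carol": {"empathy": 3, "resolution": 4}}
print(calibration_gaps(scores))  # resolution spreads 3 points -> flagged
```

Flagged criteria are the ones whose definitions most need refinement in step 4.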

Conversation Mining

Topic Analysis

Automatically identify and track conversation topics:
Analysis              Description
Topic clustering      Group conversations by theme
Trend detection       Identify emerging topics
Sentiment by topic    Track sentiment for each topic
Volume tracking       Monitor topic frequency
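"Sentiment by topic" and "volume tracking" are both aggregations over labeled conversations. A minimal sketch, assuming each conversation already carries a topic label from clustering and a sentiment score in the range -1 to 1 (the record shape is illustrative):

```python
from collections import defaultdict

# Sketch: aggregate topic volume and average sentiment per topic.

records = [
    {"topic": "billing", "sentiment": -0.6},
    {"topic": "billing", "sentiment": -0.2},
    {"topic": "shipping", "sentiment": 0.4},
]

volume = defaultdict(int)
sentiment_sum = defaultdict(float)
for r in records:
    volume[r["topic"]] += 1
    sentiment_sum[r["topic"]] += r["sentiment"]

for topic in volume:
    avg = sentiment_sum[topic] / volume[topic]
    print(f"{topic}: {volume[topic]} conversations, avg sentiment {avg:+.2f}")
```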

Root Cause Analysis

Identify what is driving quality issues:
Quality Issue: Low resolution scores in billing queue

Root Causes Identified:
├── 45% - Complex billing system navigation
├── 30% - Outdated knowledge articles
├── 15% - Missing escalation paths
└── 10% - Training gaps on new features
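A breakdown like the one above comes from tallying root-cause tags over the low-scoring interactions. A minimal sketch of that tally (the tag names and counts are illustrative):

```python
from collections import Counter

# Sketch: convert root-cause tag counts into a percentage breakdown.

causes = Counter({"complex_navigation": 45, "outdated_articles": 30,
                  "missing_escalation": 15, "training_gaps": 10})
total = sum(causes.values())
for cause, n in causes.most_common():
    print(f"{cause}: {n / total:.0%}")
```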

Insights Dashboard

Automatic insights surface:
  • Top coaching opportunities
  • Emerging quality trends
  • Best performing agents and teams
  • Areas needing attention

Coaching

Auto-Assignment

Trigger coaching automatically based on evaluation scores:
Coaching Rule: Resolution Improvement
Trigger:
  criteria: resolution
  score: < 3
  count: 3 consecutive
Action:
  assign_coaching: resolution_training
  notify: supervisor
  priority: high
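The trigger above fires only on a streak: three consecutive evaluations below 3 on the same criterion. A minimal sketch of that streak check (function and parameter names are illustrative, not the product API):

```python
# Sketch of the "3 consecutive scores below 3" coaching trigger above.

def should_trigger(scores: list, threshold: float = 3, count: int = 3) -> bool:
    """scores: one agent's evaluations for one criterion, oldest first."""
    if len(scores) < count:
        return False
    return all(s < threshold for s in scores[-count:])

print(should_trigger([4, 2, 2, 2]))  # three consecutive lows → True
print(should_trigger([2, 2, 4, 2]))  # streak broken → False
```

Requiring a streak rather than a single low score keeps one-off bad interactions from generating coaching noise.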

Coaching Workflow

┌─────────────────────────────────────┐
│    [Quality Issue Detected]         │
│                 │                   │
│                 ▼                   │
│ [Coaching Assigned to Supervisor]   │
│                 │                   │
│                 ▼                   │
│  [Supervisor Reviews Evidence]      │
│                 │                   │
│                 ▼                   │
│  [Coaching Session Scheduled]       │
│                 │                   │
│                 ▼                   │
│      [Session Completed]            │
│                 │                   │
│                 ▼                   │
│     [Follow-up Evaluation]          │
└─────────────────────────────────────┘

Evidence Attachment

Each coaching assignment includes:
  • Conversation transcript
  • Audio recording
  • Evaluation scorecard
  • Specific timestamps and sections
  • Comparison to best practices

Compliance Monitoring

Compliance Rules

Define rules to enforce regulatory requirements:
Compliance: PCI-DSS Card Handling
Rules:
  - must_not_say: ["full card number", "CVV"]
  - must_say: ["secure", "encrypted"]
  - action_required: mask_card_data
Alert:
  severity: critical
  notify: compliance_team
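The `must_not_say` / `must_say` rules above amount to a two-sided transcript scan. A minimal sketch, using plain substring matching; real systems match richer patterns (for example, digit sequences for card numbers), so treat this as illustrative only:

```python
# Sketch of the PCI-DSS rule above: flag forbidden phrases that appear
# and required phrases that are missing from a transcript.

MUST_NOT_SAY = ["full card number", "cvv"]
MUST_SAY = ["secure", "encrypted"]

def compliance_violations(transcript: str) -> list:
    text = transcript.lower()
    violations = [f"forbidden phrase: {p}" for p in MUST_NOT_SAY if p in text]
    violations += [f"missing disclosure: {p}" for p in MUST_SAY if p not in text]
    return violations

print(compliance_violations(
    "Your payment is processed over a secure, encrypted channel."))  # → []
print(compliance_violations("Please read me your CVV."))
```

Any non-empty result would raise a critical alert to the compliance team, per the config above.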

Required Disclosures

Track mandatory script elements by interaction type:
Disclosure              Required For
Recording notice        All calls
Rate disclosure         Financial products
Terms and conditions    New accounts
Privacy policy          Data collection

Compliance Dashboard

Monitor compliance health at a glance:
  • Compliance rate by disclosure type
  • Violations by agent and team
  • Trend analysis
  • Alert history

Taxonomy Builder

Create Taxonomies

Organize quality categories into a structured hierarchy:
Taxonomy: Quality Categories
├── Communication
│   ├── Clarity
│   ├── Tone
│   └── Active listening
├── Knowledge
│   ├── Product
│   ├── Process
│   └── Policy
├── Problem Solving
│   ├── Issue identification
│   ├── Solution accuracy
│   └── Efficiency
└── Compliance
    ├── Disclosures
    ├── Data handling
    └── Regulatory
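Because a taxonomy is a simple hierarchy, it maps naturally to nested data and "parent/child" paths for reporting. A minimal sketch of the tree above (the path format is an illustrative choice):

```python
# Sketch: the Quality Categories taxonomy above as a nested dict,
# flattened to "parent/child" paths for use as reporting dimensions.

TAXONOMY = {
    "Communication": ["Clarity", "Tone", "Active listening"],
    "Knowledge": ["Product", "Process", "Policy"],
    "Problem Solving": ["Issue identification", "Solution accuracy", "Efficiency"],
    "Compliance": ["Disclosures", "Data handling", "Regulatory"],
}

def flatten(taxonomy: dict) -> list:
    return [f"{parent}/{child}"
            for parent, children in taxonomy.items()
            for child in children]

print(flatten(TAXONOMY)[:3])  # ['Communication/Clarity', 'Communication/Tone', ...]
```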

Apply Taxonomies

Use taxonomies to structure:
  • Evaluation forms
  • Analytics categorization
  • Coaching focus areas
  • Reporting dimensions

Analytics

Quality Dashboards

Dashboard            Metrics
Overview             Quality score trends, pass rates
Agent performance    Individual scores, improvement
Team comparison      Team-level benchmarking
Criteria analysis    Performance by criteria
Compliance           Compliance rates, violations

Reports

Automated reports delivered on schedule:
  • Daily quality summary
  • Weekly team performance
  • Monthly trend analysis
  • Compliance audit reports

Set Up Quality AI

Complete these steps, in order, to get Quality AI running.

1. Configure Permissions

  • Go to User Management > Role Management > New Role > Other Modules.
  • Assign the Supervisor role or create custom roles with QM permissions.

2. Set Up Contact Center

  • Assign Supervisors and Auditors to the relevant queues so they can access the right interactions.

3. Enable Features

  • Enable Conversation Intelligence, Auto QA, and Bookmarks in Quality AI Settings.
  • Enable Answer and Utterance suggestions in GenAI Settings.

4. Create Evaluation Metrics

  • Choose a measurement type: By Question, Question Answer Pair, or Adherence (Static or Dynamic).
  • Create evaluation metrics.
  • Set the count type: Entire Conversation or Time Bound.

5. Create Evaluation Forms

  • Assign a name, description, channel, and pass score.
  • Select metrics, assign weights, and link the form to queues.

6. Analyze Interactions in Conversation Mining

  • Use filters to review scored interactions.
  • Save filters to reuse them in audit assignments.

7. Create Audit Allocations

  • Assign interactions to auditors for manual evaluation.

8. Run AI-Assisted Manual Audits

  • Use AI-assisted audits for faster, more consistent scoring.
  • Navigate interactions using adherence moments and violations.

9. Monitor Performance

  • Use the Dashboard to track individual QA progress and queue statistics.
  • Use the Conversation Intelligence Dashboard for contact center-wide performance trends.