AI-Assisted Manual Audit lets supervisors and QA teams evaluate voice and chat interactions using AI insights combined with manual review. Use it to assess performance, enforce compliance, and deliver targeted coaching.
Key capabilities:
| Capability | Description |
|---|---|
| Conversation Insights | AI-generated summaries of key moments and outcomes. |
| Multi-language Support | Audit interactions across all supported languages. |
| Topics & Intents | Identify customer purpose and discussion themes. |
| Emotion Analysis | Track sentiment and emotional shifts throughout an interaction. |
| Automated QA | Score interactions against configured metrics. |
| Audit Logs | Review detailed evaluation history. |
Prerequisites
Before using AI-Assisted Manual Audit, confirm the following:
- AutoQA Permission – Access to manage metric types in Quality AI General Settings.
- QA Access – Permission to perform self-assignment and auditing.
- Role-Based Access – Appropriate permissions assigned based on your organizational role.
Access AI-Assisted Manual Audit
Navigate to Quality AI > ANALYZE > Conversation Mining > Interactions > AI-Assisted Manual Audit.
You can open interactions for audit from two places:
- Conversation Mining – View all conversations within your assigned queues.
- Allocations – View all interactions assigned to you for evaluation.
Audit Screen Overview
The audit screen has three tabs:
| Tab | Description |
|---|---|
| Audit | Main workspace for evaluating transcripts, metrics, and AI insights. |
| Conversation Details | Interaction metadata including start/end time, agent, queue, and audit scores. |
| Audit Logs | Full audit trail of system and user actions, including GenAI execution records. |
Audit Tab
The Audit tab is divided into three sections:
| Section | Description |
|---|---|
| Transcript | Full conversation dialogue for review and verification. |
| Audit Evaluation | Configured metrics and a high-level summary of AI analysis. |
| AI Overview | AI-powered widgets for assessing performance, compliance, and sentiment. |
AI Overview
Displays conversation insights through AI-powered widgets so supervisors can evaluate key metrics without reading full transcripts.
Topics
Lists all subjects and themes discussed in the conversation (for example, Customer Support Process).
- Multi-language topic identification.
- Topic extraction through AI natural language processing.
- Category-based performance analysis.
- Training-need identification based on topic patterns.
Intents
Captures the customer’s purpose and desired outcome.
- Analyzes conversation context, phrasing, and patterns.
- Each intent appears as an individual chip.
- Supports measuring intent resolution success rates.
Configured Topics and Resolution
Provides visibility into detected topics, sentiment, and resolution status for more effective QA and coaching.
| Element | Description |
|---|---|
| Configured Intents | Intents detected based on your taxonomy. Click to jump to the detection point in the transcript. |
| Generated Intents | Intents detected by AI. Each includes a color indicator for sentiment (positive, negative, or neutral). |
| Overall Resolution | The resolution status of the conversation. |
| Topic Sentiment | Sentiment detected for each topic. Click an L3 topic to see how its resolution was addressed in the transcript. |
Generated Topics
Uses taxonomy-based topic discovery to expand analytics on the Audit screen.
- Supports topic discovery and topic-level sentiment detection.
- Displays positive, negative, or neutral sentiment for each discovered topic.
Transcript
The Transcript presents a unified timeline of the full interaction, showing both agent and customer behavior, events, and emotions. It supports real-time audio navigation with transcript details.
Sentiment Analysis
Shows the overall sentiment of the customer and agent across three phases of the call.
| Phase | Description |
|---|---|
| Call Opening | From agent transfer to issue identification. |
| Development | From issue identification to resolution discussion. |
| Call Closing | From resolution discussion to call termination. |
Sentiment Ratio
Shows how sentiment was distributed across the interaction as a percentage breakdown.
| Sentiment | Meaning |
|---|---|
| Positive | Customer satisfaction, successful resolution. |
| Neutral | Standard interaction without strong emotion. |
| Negative | Dissatisfaction or unresolved issues. |
Sentiment Patterns
| Pattern | Transition | Meaning |
|---|---|---|
| A | Negative → Positive | Customer satisfaction recovery. |
| B | Positive → Positive | Consistent positive experience. |
| C | Neutral → Positive | Effective positive experience creation. |
| D | Positive → Negative | Service degradation requiring attention. |
| E | Neutral → Negative | Missed opportunities or failures. |
| F | Negative → Negative | Persistent dissatisfaction requiring escalation. |
| G | Positive → Neutral | Adequate service delivery. |
| H | Negative → Neutral | Partial improvement opportunity. |
| I | Neutral → Neutral | Steady interaction without emotional impact. |
Resolution-Aware Scoring uses a weighted algorithm that prioritizes final customer sentiment, applying exponential weighting to recent messages. Scores use a 1–10 scale (5 = Neutral, 7 = Positive) and produce a final classification of Positive, Neutral, or Negative.
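The exact weighting parameters are internal to the product; the following minimal sketch illustrates the idea, assuming per-message sentiment scores on the documented 1–10 scale, a hypothetical decay factor, and assumed classification cutoffs:

```python
# Sketch of resolution-aware sentiment scoring (illustrative only).
# Assumptions: per-message scores on the documented 1-10 scale
# (5 = Neutral, 7 = Positive); the decay factor and the cutoffs for
# scores between those anchors are hypothetical.

# Opening/closing sentiment pairs mapped to the documented patterns A-I.
SENTIMENT_PATTERNS = {
    ("Negative", "Positive"): "A", ("Positive", "Positive"): "B",
    ("Neutral", "Positive"): "C", ("Positive", "Negative"): "D",
    ("Neutral", "Negative"): "E", ("Negative", "Negative"): "F",
    ("Positive", "Neutral"): "G", ("Negative", "Neutral"): "H",
    ("Neutral", "Neutral"): "I",
}

def resolution_aware_score(message_scores: list[float], decay: float = 0.7) -> tuple[float, str]:
    """Weight recent messages exponentially and classify the final result."""
    if not message_scores:
        return 5.0, "Neutral"
    # The newest message gets weight 1; each older message decays by `decay`.
    weights = [decay ** i for i in range(len(message_scores))][::-1]
    score = sum(w * s for w, s in zip(weights, message_scores)) / sum(weights)
    if score >= 7:
        label = "Positive"
    elif score >= 5:
        label = "Neutral"
    else:
        label = "Negative"
    return round(score, 1), label

print(resolution_aware_score([3, 4, 6, 8, 9]))       # (7.1, 'Positive')
print(SENTIMENT_PATTERNS[("Negative", "Positive")])  # 'A'
```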
Emotions
Ranks agent and customer emotions across the interaction timeline, with emotional states such as anger, frustration, and satisfaction detected throughout.
Agent emotions tracked:
| Emotion | Description |
|---|---|
| Patience | Handling difficult situations calmly. |
| Happy | Positive attitude and engagement. |
| Empathy | Understanding and compassionate responses. |
| Confusion | Uncertainty about processes or information. |
| Fear | Anxiety or hesitation in responses. |
| Anger | Frustration or irritation (coaching opportunity). |
Customer emotions tracked:
| Emotion | Description |
|---|---|
| Happy | Satisfaction and positive experience. |
| Anger | Frustration requiring attention. |
| Confusion | Need for clarification. |
| Sadness | Disappointment requiring empathy. |
| Fear | Anxiety about products or services. |
| Escalation | Rising frustration levels. |
| Churn Risk | Departure probability indicators. |
Emotions are ranked from highest to lowest by duration percentage. The top three emotions are shown for each party, with timeline visualization and emoticon indicators.
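A minimal sketch of the ranking rule, assuming each detected emotion carries a duration percentage (field names are illustrative):

```python
# Rank detected emotions by duration percentage and keep the top three,
# mirroring the per-party display described above.
def top_emotions(emotions: dict[str, float], limit: int = 3) -> list[tuple[str, float]]:
    """`emotions` maps an emotion label to its share of the interaction (%)."""
    return sorted(emotions.items(), key=lambda item: item[1], reverse=True)[:limit]

customer = {"Anger": 38.0, "Confusion": 22.5, "Happy": 18.0, "Fear": 12.0, "Sadness": 9.5}
print(top_emotions(customer))
# [('Anger', 38.0), ('Confusion', 22.5), ('Happy', 18.0)]
```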
Audit Evaluation
By Question
Evaluates agent performance on specific inquiry types using configurable evaluation forms. Each criterion is scored individually, supported by AI-powered quality assurance for consistency and precision.
The Audit Progress Bar at the top right of the panel shows the completion percentage based on answered audit questions.
Evaluation Marking (Yes / No / N/A)
| Mark | When to use |
|---|---|
| Yes | The agent clearly performed the required action with evidence in the conversation log. |
| No | The agent failed to perform the required action or the action was incomplete. |
| N/A | The situation did not arise or the requirement was not applicable in this interaction. |
Keyword-Based Conversation Analysis
Keyword filters applied on the Conversation Mining page carry over to the Audit screen. The transcript view shows the full conversation with keyword highlighting.
| Feature | Description |
|---|---|
| Timeline Integration | Visual markers show exact keyword positions on the timeline. Click a marker to jump to that point in the transcript. |
| Keyword Highlighting | Matched keywords are highlighted inline in the transcript, color-coded with up to 8 distinct colors. Excluded keywords are not highlighted. |
| QA Question Mapping | Keyword matches are linked to relevant QA questions and scoring impact in the AI Overview panel, with speaker attribution and count. |
| Context Display | Selecting a keyword expands the surrounding transcript and shows speaker labels, sentiment, and QA impact. |
| Expand/Collapse View | The Keywords Found panel expands when keyword filters are active and collapses when none are applied. |
| Speaker Filtering | Navigate by keyword hits filtered to Agent only or Customer only. |
| Session Preservation | Keyword filters are saved in the user session until manually cleared. |
| Clear Filter Keywords | Removes all keyword filters (include and exclude) from the transcript. Other filters (date, sentiment, QA score) remain active. |
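The highlighting rule can be approximated with a color-cycling pass over the transcript; a simplified sketch, assuming plain regex matching and an illustrative palette:

```python
import re

# Assign one of up to 8 distinct colors to each included keyword and skip
# excluded keywords. The palette and the bracket markup are simplified
# stand-ins for the product's inline highlighting.
PALETTE = ["yellow", "green", "blue", "orange", "pink", "purple", "teal", "red"]

def highlight(transcript: str, include: list[str], exclude: list[str]) -> str:
    colors = {kw: PALETTE[i % len(PALETTE)] for i, kw in enumerate(include)}
    for kw in include:
        if kw in exclude:
            continue  # excluded keywords are never highlighted
        pattern = re.compile(rf"\b{re.escape(kw)}\b", re.IGNORECASE)
        transcript = pattern.sub(lambda m, c=colors[kw]: f"[{c}]{m.group(0)}[/{c}]", transcript)
    return transcript

print(highlight("I want a refund for this invoice.", ["refund", "invoice"], []))
# I want a [yellow]refund[/yellow] for this [green]invoice[/green].
```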
Omissions
Highlights instances where the agent failed to follow configured compliance elements, such as playbook steps or dialog tasks.
- Omitted playbook steps (for playbook metrics).
- Omitted dialog tasks (for dialog metrics).
- Only shown when relevant metrics are configured for the interaction.
Violations
Highlights speech metric violations that occurred during the call (for example, Cross Talk, Dead Air, or Speaking Rate Violation). Each violation includes a timestamp so you can navigate directly to that point in the recording.
By Playbook
Enables evaluators to assess adherence to configured playbook metrics.
Displays for each playbook metric:
- Configured minimum adherence.
- Observed adherence within the interaction.
- Missing steps not completed during the interaction.
- Expected vs. observed steps in a dropdown format.
To audit Speech and Playbook metrics, enable Audit Speech Metrics and Audit Playbook Metrics under Settings. If not enabled, these metrics appear in view-only mode.
Adherence Scoring Logic
| Result | Condition |
|---|---|
| Adhered | Similarity score ≥ configured threshold (for example, ≥ 60%). |
| Not Adhered | Similarity score < configured threshold. |
| N/A | Trigger not detected in the interaction. |
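This logic reduces to a small classification function; a sketch using the documented 60% example threshold (actual thresholds are configurable per metric):

```python
# Classify adherence from a similarity score, per the table above.
# `similarity` is None when the trigger was not detected in the interaction.
def adherence_result(similarity: float | None, threshold: float = 0.60) -> str:
    if similarity is None:
        return "N/A"
    return "Adhered" if similarity >= threshold else "Not Adhered"

print(adherence_result(0.72))   # Adhered
print(adherence_result(0.41))   # Not Adhered
print(adherence_result(None))   # N/A
```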
By Value
Tracks value-related metrics during interaction evaluations. Leverages GenAI to analyze agent behavior beyond predefined scripts.
Agent Adherence fields:
- Source System Value – Value obtained from the source system.
- Agent Mentioned Value – Value mentioned by the agent during the conversation.
- AI Justification – Explanation of the AI’s adherence decision.
Adherence types:
- GenAI-based adherence – Combines business rule validation with tolerance range analysis.
- Custom script adherence – Includes the agent-mentioned value and business rule justification.
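Tolerance-range analysis can be pictured as a numeric comparison; a sketch assuming an illustrative ±5% tolerance (actual rules are configured per metric):

```python
# Check whether the agent-mentioned value falls within a tolerance range
# around the source system value. The +/-5% default is an assumption.
def value_adheres(source_value: float, agent_value: float, tolerance_pct: float = 5.0) -> bool:
    allowed = abs(source_value) * tolerance_pct / 100
    return abs(agent_value - source_value) <= allowed

print(value_adheres(199.99, 200.0))  # True: within the tolerance range
print(value_adheres(199.99, 250.0))  # False: outside the tolerance range
```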
By AI Agent
Delivers advanced sentiment analysis through GenAI, enabling Post-Interaction Sentiment Analytics and Key Emotion Moments. Integrates with GenAI Copilot to leverage LLMs for detailed post-interaction insights.
Key capabilities:
- Real-time AI-driven analysis.
- Sentiment and emotion detection.
- Topic modeling and intent recognition.
- Predictive analytics.
AI Justification fields (for GenAI-based evaluations):
- Clear reasoning for the AI’s Yes/No outcome (adhered/not adhered/not applicable) with observation time.
- Evidence of trigger presence or absence for dynamic adherence types.
- Specific agent behaviors that influenced the metric outcome.
- Timestamps for all relevant conversation segments.
Adherence Filter Status
Filter and sort compliance questions by adherence status:
| Status | Description |
|---|---|
| Adhered | The response fully meets the compliance requirement. |
| Not Adhered | The response does not meet the compliance requirement. |
| Not Applicable | The question is not relevant to this specific context. |
Conversation Insights
Provides AI-generated overviews of customer interactions without requiring a full transcript review.
| Metric | Description |
|---|---|
| Customer Talk Ratio | Percentage of total call duration the customer is speaking. |
| Agent Talk Ratio | Percentage of total call duration the agent is speaking. |
| Silence Percentage | Call time in which neither party speaks (excludes hold time). |
| Speaking Rate | Agent speech speed in Words Per Minute (WPM). |
Conversation Insights are available for voice interactions only.
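These metrics can be derived from diarized speech segments; a minimal sketch under that assumption (the segment layout and word count are illustrative, not the product's schema):

```python
# Derive the four insight metrics from diarized voice segments.
# Segments are (speaker, start_sec, end_sec) tuples; hold time, if any,
# would be subtracted from `call_seconds` before the silence calculation.
def conversation_insights(segments, agent_words: int, call_seconds: float) -> dict:
    agent_talk = sum(end - start for spk, start, end in segments if spk == "agent")
    customer_talk = sum(end - start for spk, start, end in segments if spk == "customer")
    silence = max(call_seconds - agent_talk - customer_talk, 0.0)
    return {
        "customer_talk_ratio": round(100 * customer_talk / call_seconds, 1),
        "agent_talk_ratio": round(100 * agent_talk / call_seconds, 1),
        "silence_percentage": round(100 * silence / call_seconds, 1),
        "speaking_rate_wpm": round(agent_words / (agent_talk / 60), 1),
    }

segments = [("agent", 0, 30), ("customer", 32, 50), ("agent", 52, 80)]
print(conversation_insights(segments, agent_words=140, call_seconds=90))
# {'customer_talk_ratio': 20.0, 'agent_talk_ratio': 64.4,
#  'silence_percentage': 15.6, 'speaking_rate_wpm': 144.8}
```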
Agent Speech Insights
Displays agent-specific performance metrics.
| Metric | Description |
|---|---|
| Speaking Rate | Words Per Minute value. |
| Crutch Words | Count of filler words (for example, “um,” “uh,” “like”). |
| Empathy Score | Measurement of empathy in agent utterances. |
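Crutch-word detection can be approximated with simple token matching; a sketch using an illustrative filler list (the product's actual detection list is not published):

```python
import re
from collections import Counter

# Count filler ("crutch") words in agent utterances. The filler list below
# is a small illustrative sample.
CRUTCH_WORDS = {"um", "uh", "like", "basically", "actually"}

def crutch_word_counts(agent_text: str) -> Counter:
    tokens = re.findall(r"[a-z']+", agent_text.lower())
    counts = Counter(t for t in tokens if t in CRUTCH_WORDS)
    # Multi-word fillers such as "you know" need a separate phrase pass.
    counts["you know"] = len(re.findall(r"\byou know\b", agent_text.lower()))
    return counts

print(crutch_word_counts("Um, so basically you know it's like, uh, done."))
# Counter({'um': 1, 'basically': 1, 'like': 1, 'uh': 1, 'you know': 1})
```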
Comments
Displays all feedback submitted by auditors during the evaluation process. Comments appear both inline in the Transcript and in the Comments tab. Commenter details are shown based on privacy settings (for example, Hide Auditor Details).
To add a comment:
1. Click Assign to Me to begin auditing the conversation.
2. Hover over any message in the Transcript section; a Comment icon appears.
3. Click the Comment icon and enter a comment Name and Comment text (both required).
4. Edit or delete your comment as needed before sending.
5. Click Send to publish the comment.
Once submitted, comments appear:
- Inline in the Transcript, linked to the corresponding message.
- In the Comments tab with the comment title, text, and commenter details (based on privacy settings).
Auditors and supervisors can add comments in By Question, By Value, and By AI Agent metrics when the audit is self-assigned. Two comment types are supported:
| Type | Description |
|---|---|
| Metric Comments | Added to specific evaluation criteria (By Question, By Value, or By AI Agent). Click + Add Comment, enter your comment, then click Save. |
| Message Comments | Contextual comments added at the message level in the Transcript. Support click-through navigation for quick review. |
Click-Through Navigation
All users — including agents without QA permissions — can click a comment to navigate to the related message. The system centers the commented message in the Transcript window, enabling agents to review feedback from supervisors and QA auditors.
Near-Miss Scenarios
Near-miss evaluations flag responses that closely resemble, but do not fully meet, adherence standards. Applicable only in Deterministic Adherence mode.
How it works:
- The system compares agent responses against predefined similarity thresholds.
- Near-miss cases are flagged for auditor review.
- When you click the View button, the evaluation is marked Yes (highlighted in green) and the relevant customer response is highlighted.
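A sketch of the flagging idea, assuming the documented 60% threshold and a hypothetical 10-point near-miss band (the product's actual band is not published):

```python
# Flag responses that fall just below the adherence threshold for auditor
# review. The 10-point band is an illustrative assumption.
def review_flag(similarity: float, threshold: float = 0.60, band: float = 0.10) -> str:
    if similarity >= threshold:
        return "Adhered"
    if similarity >= threshold - band:
        return "Near-miss (flagged for auditor review)"
    return "Not Adhered"

print(review_flag(0.55))  # Near-miss (flagged for auditor review)
```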
Metric type selection:
- By Question metrics are selected by default and cannot be deselected.
- Auditors can only audit the metric types they have selected.
- Supervisor score calculation includes all enabled metric types.
Self-Assignment for Audit
QA users (auditors or supervisors) can self-assign unclaimed interactions for auditing. Interactions that are already audited, completed, or assigned to another user are not eligible.
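The eligibility rule reduces to a simple check; a sketch with illustrative status values and field names:

```python
# Self-assignment eligibility, per the rule above. Status values and field
# names are illustrative, not the product's schema.
def can_self_assign(status: str, assigned_to: str | None, user_id: str) -> bool:
    if status in {"audited", "completed"}:
        return False  # already evaluated interactions are not eligible
    if assigned_to is not None and assigned_to != user_id:
        return False  # already claimed by another user
    return True

print(can_self_assign("open", None, "u-123"))        # True
print(can_self_assign("completed", None, "u-123"))   # False
print(can_self_assign("open", "u-999", "u-123"))     # False
```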
To self-assign an interaction:
1. Navigate to the Conversation Mining page.
2. Select an interaction that is not yet audited or assigned.
3. Click Assign to Me. A success message confirms the assignment.
4. The interaction is marked as Self-Assigned on the Audit Allocations page and becomes unavailable for reassignment.
Only users with QA permission can add feedback comments at any point in the conversation, regardless of the evaluation metrics.
Audit Submission
The Submit button is enabled only when the interaction is assigned to you through Audit Allocations.
Before submitting:
1. If By Question, By Value, or By AI Agent metrics are present, select appropriate responses for all required audit questions.
2. Ensure the adherence percentage totals 100%.
3. Click Submit.
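These pre-submission checks can be expressed as a small validation function; a sketch with illustrative data structures:

```python
# Pre-submission validation, per the steps above. `responses` maps each
# required question to its Yes/No/N/A answer (None = unanswered), and the
# adherence percentages must total 100% before Submit is enabled.
def can_submit(responses: dict[str, str | None], adherence_pcts: list[float]) -> tuple[bool, str]:
    unanswered = [q for q, answer in responses.items() if answer is None]
    if unanswered:
        return False, f"Unanswered questions: {', '.join(unanswered)}"
    if round(sum(adherence_pcts), 2) != 100.0:
        return False, f"Adherence totals {sum(adherence_pcts)}%, expected 100%"
    return True, "Ready to submit"

print(can_submit({"Q1": "Yes", "Q2": None}, [60.0, 40.0]))   # (False, 'Unanswered questions: Q2')
print(can_submit({"Q1": "Yes", "Q2": "N/A"}, [60.0, 40.0]))  # (True, 'Ready to submit')
```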
After submission:
- The interaction remains marked as Self-Assigned on the Audit Allocations page.
- The audited interaction is unavailable for reassignment.
- A completed and submitted interaction cannot be re-audited.
Agent access to scored interactions is controlled by the Agent Access to Scored Interactions setting:
| Setting | What agents see |
|---|---|
| Only manually audited interactions | Interactions with a Supervisor Audit Score, shown with Date & Time and Queue details. |
| Manually audited and Auto QA scored | Interactions with a Kore Evaluation Score (Auto QA) or a Supervisor Audit Score. |
Hide Auditor Details for Agent:
- On – Auditor details are anonymized in the audit screen.
- Off – Auditor details are visible.
Only supervisors can view auditor details; agents cannot.
Search
Provides keyword search across the entire transcript to locate specific topics, compliance issues, customer concerns, or training opportunities.
Conversation Details Tab
Provides contextual information about the interaction for review before or after evaluation.
Conversation Details:
- Start Time, Termination Time, End Time, Agent Name, Queue, Customer Phone, CSAT, Disposition, Evaluation Form, and Language.
Audit Details:
- Auditor Name, Audited Date, Audit Score, and Kore Evaluation Score.
Identifiers (each includes a copy icon):
| Identifier | Example Value |
|---|---|
| Call ID | NA |
| Session ID | 699d3d5ef39661f7c0aa4b95 |
| Channel User ID | NA |
| Call Conversation ID | NA |
| Agent Conversation ID | c-358c3b1-d472-4c2a-89bd-eebcca3dxxxx |
| User ID | u-e481d17b-aba0-5110-9377-05bc36f0xxxx |
You can also use Assign to Me on this tab to assign the interaction to yourself for audit.
Audit Logs Tab
Provides a complete audit trail of the evaluation process, recording system and user actions, GenAI metric executions, and status changes.
Log entries capture:
| Detail | Description |
|---|---|
| Log creation and updates | Records audit creation and updates with user ID, display name, and timestamps. |
| Supervisor and reviewer changes | Tracks who made each change and what was modified. |
| AI model execution data | Logs model version, execution duration, request/response token counts, and enabled GenAI features. |
Each execution log entry includes:
- Date and Time of execution.
- GenAI Feature Name (for example, By Hold Adherence).
- Language.
- Model Name (for example, GPT-4o).
- Integration Type (System or Custom).
- Prompt Name and Type (Default or Custom).
- Request Token Count, Response Token Count, and Response Duration.
- Execution Status (Success or Failure).
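For reference, one execution log entry can be modeled as a simple record; a sketch with illustrative class and field names (not the product's schema):

```python
from dataclasses import dataclass

# One GenAI execution log entry, mirroring the fields listed above.
@dataclass
class GenAIExecutionLog:
    executed_at: str        # date and time of execution
    feature_name: str       # e.g., "By Hold Adherence"
    language: str
    model_name: str         # e.g., "GPT-4o"
    integration_type: str   # "System" or "Custom"
    prompt_name: str
    prompt_type: str        # "Default" or "Custom"
    request_tokens: int
    response_tokens: int
    response_duration_ms: int
    status: str             # "Success" or "Failure"

entry = GenAIExecutionLog("2025-01-15 10:32", "By Hold Adherence", "English",
                          "GPT-4o", "System", "hold_adherence", "Default",
                          812, 164, 2300, "Success")
print(entry.status)  # Success
```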
Payload Visibility: View Request and Response payloads with options to expand/collapse, format or compact, copy to clipboard, or open in full-screen mode for debugging.
Selecting Assign to Me from this tab assigns the interaction to yourself for audit. The system records who assigned it and when, and displays the assigned user in the header and audit history.