Overview
Search AI is the next-generation evolution of SearchAssist. It moves beyond traditional keyword-based search by using advanced techniques and the latest LLMs for indexing, search, and answer generation—resulting in more accurate, relevant, and natural answers. A key difference is platform integration. SearchAssist is a standalone application; Search AI is part of the AI for Service platform. This integration lets users build applications and access multiple AI-powered products within a unified ecosystem. Because of these advanced techniques, many configurations that were required in SearchAssist are now handled automatically in Search AI, making setup simpler.
Architecture at a Glance
Search AI introduces a modernized architecture with greater flexibility, transparency, and control over how search and answers are generated. The new design replaces rigid, predefined workflows with a modular and configurable framework. Key changes:
- Indexes: Search AI uses a single index for both search results and answers. SearchAssist maintained separate indexes for each.
- Chunk Workbench: Search AI provides a Chunk Workbench to review and refine how ingested content is split before indexing, improving downstream results.
- Extraction Techniques and Advanced Vector Models: Search AI introduces an Extraction Module supporting multiple strategies and advanced vector generation techniques for more accurate, context-aware results.
- Agentic RAG: Search AI uses Agentic Retrieval-Augmented Generation to enhance user queries with contextual information, improving precision and naturalness of answers.
Migration Steps
Creating an App
Search AI is a product within the AI for Service platform.
- Go to the platform homepage. After signing in, you are directed to the landing page showing your existing apps.
- Click Create New under Search AI.
- Provide the app details and click Create App.
- The app is created and you are taken to the search configurations.
Migrating Sources
Websites
- Go to the Websites page in your Search AI app.
- Set up a web crawl for each web source. Provide the URL or sitemap file, or use the Upload URL option for a CSV file with multiple sitemaps.
- Configure the crawl settings for each source. The following SearchAssist crawling options are available under Advanced Crawl Configurations:
- Crawl Options
- Use Cookies
- JavaScript Rendered
- Crawl beyond sitemap
- Respect robots.txt
File Upload
- Go to the Documents page.
- Upload your files and directories.
- Organize the content into directories.
FAQs
- Manage FAQs using the Knowledge Module under Automation AI.
- Export existing FAQs from SearchAssist and import them into Automation AI as part of its Knowledge Graph.
- Alternatively, manually add FAQs or extract them from web pages or files.
- You can also use the JSON connector to add FAQs by saving the question as `chunkTitle` and the answer as `chunkText`.
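The FAQ-to-JSON mapping can be sketched as follows. The `chunkTitle` and `chunkText` field names come from this guide; the surrounding record structure and serialization are assumptions, so check the JSON connector's sample file for the exact expected format.

```python
import json

def faq_to_chunk(question: str, answer: str) -> dict:
    # Field names per this guide: the question is stored as chunkTitle,
    # the answer as chunkText. Any additional fields are assumptions.
    return {"chunkTitle": question, "chunkText": answer}

faqs = [
    ("How do I reset my password?",
     "Use the Forgot Password link on the sign-in page."),
    ("Where can I view my invoices?",
     "Open Billing > Invoices in your account settings."),
]

# Serialize the records for upload through the JSON connector.
payload = json.dumps([faq_to_chunk(q, a) for q, a in faqs], indent=2)
print(payload)
```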
Connectors
- Import data from third-party applications using their respective connectors. Search AI supports 60+ connectors.
- For each connector, refer to the relevant documentation for setup information.
- The SearchAssist Sync Specific Content option is available in Search AI as Ingest Filtered Content. Select Ingest Filtered Content, go to Advanced Filters, and define your filtering conditions.
- Search AI supports Access Control (RACLs) for connectors. Refer to the RACL documentation for details.
Structured Data
- Import structured data using the JSON connector.
- Prepare the structured data in the expected JSON format using the sample file, then upload it.
- Alternatively, import structured data via APIs. Refer to this guide for details.
Migrating Extraction Configuration
In SearchAssist, there is no dedicated Extraction module. Extraction happens automatically:
- When Extractive Answers are enabled, a snippet extraction module using rule-based methods is added.
- If not enabled, simple text extraction is applied by default.
Text Extraction
By default, text extraction is applied to all content. No changes are needed if you rely on this method.
Rule-Based Extraction
Rule-based snippet extraction (via Extractive Answers in SearchAssist) is deprecated in Search AI. The recommended alternative is Layout-Aware Extraction, which provides higher accuracy without manual rules. For HTML-based content, use HTML-Aware Extraction. Search AI also offers other advanced extraction techniques to suit your content type. Learn more. To change the default, go to the Extract page and configure your preferred extraction strategy.
Migrating Workbench Stages
In SearchAssist, the Document Workbench processes and enriches documents from different sources. In Search AI, this capability is extended with the addition of a Chunk Workbench, allowing enrichment at both the document and chunk level.
- Use the Document Workbench to apply transformations during extraction.
- Use the Chunk Workbench to refine or enrich content after it is split into chunks.
Set Up Document Workbench
- Go to the Extract page.
- Open the Content Strategy configured for your content type.
- Add and configure the stage as required.
Set Up Chunk Workbench
The Chunk Workbench is new in Search AI and enables processing at the chunk level. Learn more.
- Go to the Enrich page.
- Click +New Stage and add a new Chunk Workbench stage.
- Refer to this guide for supported stage types and configuration details.
Points to Note
- The document and chunk workbench in Search AI support these stages:
- Search AI only supports script conditions for the Custom Script Stage. If you wrote conditions as scripts for any other stage, convert the entire stage to a Custom Script stage or rewrite the conditions in the “basic” format.
- Search AI does not support the Rename action through the workbench. Use Manage Schema to edit field names or descriptions instead.
- The following SearchAssist stages are deprecated in Search AI because advanced semantic embeddings make them unnecessary:
- Entity Extraction
- Traits Extraction
- Keyword Extraction
- Semantic Meaning
- The SearchAssist Snippet Extraction stage (automatically added when Extractive Answers is enabled) is deprecated. Instead, configure an appropriate extraction strategy on the Extract page.
- The Custom LLM Prompt stage is available as Transform Documents with LLM in the Document Workbench and Enrich Chunks with LLM in the Chunk Workbench.
- Use the API Stage for any other custom chunk processing.
- Use the simulator to test the application’s behavior after processing.
- Field names have been updated to the unified schema. Ensure you use the correct field names during migration.
Migrating Answer Snippets
As in SearchAssist, answers can be generated using Extractive Answers or Generative Answers. Search AI presents Generative Answers by default. To verify, go to the Answer Configuration page.
Generative Answers
Generative Answers are selected with default configurations. Update the configurations as required. For Generative Answers to work, configure and enable the LLM for answer generation:
- Go to Generative AI Tools > Models Library.
- Configure the required model.
- Go to Gen AI features.
- Select the configured model for Answer Generation. A default prompt is used; create a custom prompt if needed.
Extractive Answers
In SearchAssist, extractive answers use rule-based chunking. In Search AI, configure extraction strategies and enable Extractive Answers to achieve the same outcome.
- Go to the Answer Configuration page, select Extractive Answers, and configure the options.
- Go to the Extraction page in Search AI.
- Configure Layout-Aware Extraction using the General template for PDFs, DOCX, and documents with tables or images.
- Configure Advanced HTML Extraction using the General template for HTML content.
- Alternatively, choose any strategy best suited for your content type.
Migrating Search Configuration
The following settings are deprecated because the Search AI processing pipeline uses semantic similarity and enhanced retrieval methods, removing the need for traditional keyword-based search configurations:
- Weights
- Presentable
- Prefix Search
- Search Relevance
- Small Talk can be managed through conversational flows in Automation AI. Go to the Small Talk page and provide the details. Learn more.
- Synonym and Stop Word support is not currently available in Search AI. It will be added in upcoming releases.
- Spell correction is not required since Search AI does not rely on keyword matching.
Migrating Business Rules
In Search AI, semantic search replaces traditional keyword-based search, so NLP-based rules from SearchAssist are deprecated. Configure Contextual Business Rules manually instead. The process for setting contextual rules in Search AI is the same as in SearchAssist. However, field names have been updated to the unified schema, so some names may have changed. Review and select the correct field names when migrating. The `context` object in the condition block now refers to the context object and session variables provided by the AI for Service platform. Learn more.
Migrating Facets
Facets are supported in Search AI as filters. Currently, facets in search results are available only through the Advanced Search API. To include facets in search results, set the `isFacetsEnable` field in the API request.
To create facets or filters:
- Go to the Search Results page.
- Manually create the required filters. A default filter that organizes content into tabs by `sourceType` is already available.
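A minimal sketch of building an Advanced Search API request body with facets enabled. Only the `isFacetsEnable` flag comes from this guide; the `query` field and the overall body shape are hypothetical, so consult the Advanced Search API reference for the actual contract.

```python
import json

def build_search_request(query: str, include_facets: bool = True) -> dict:
    # Hypothetical request body: only isFacetsEnable is documented in
    # this guide; the rest of the structure is an assumption.
    return {
        "query": query,
        "isFacetsEnable": include_facets,
    }

body = build_search_request("warranty policy")
print(json.dumps(body))
```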
Other Updates
Language Support
Search AI supports 100+ languages—all languages supported by the underlying LLM and embedding models. Refer to this guide for language-specific configuration and recommendations.
LLM Configuration
Search AI uses the AI for Service platform’s model library as a common place to configure LLMs and prompts for all Gen AI capabilities. The Gen AI features page lets developers choose models and prompts for different Search AI features. All configurations are available under Generative AI Tools. Learn more.
Custom Configurations
The following table describes updates to custom configurations from SearchAssist.
| Configuration | Migration Notes |
|---|---|
| Chunk Extraction Method | Multiple extraction methods are available, including the text and layout-aware methods from SearchAssist. Configure on the Extract page. Learn more. |
| Chunk Token Size | Configure when defining the Extraction Strategy for content. Learn more. |
| Chunk Vector Fields | Available in Vector configuration. |
| Number Of Chunks | Replaced by the Token Budget field, which dynamically calculates the number of chunks based on the LLM and extraction strategies. Available in Answer Configuration. |
| Rewrite Query | Enhanced with two options: (1) Query Rephrase for Advanced Search API via Agentic RAG; (2) Rephrase User Query to reconstruct incomplete or ambiguous inputs using conversation context. |
| Chunk Retrieval Strategy | Some legacy retrieval methods are deprecated. Configure Vector Retrieval and Hybrid Retrieval via the Retrieval page. Learn more. |
| Enable Vector Search | Enable using configurations on the Retrieval page. |
| Chunk Deviation Percent | Configure in Retrieval under Proximity Threshold. |
| Rerank Chunks | Available as an advanced configuration. |
| Rerank Chunk Fields | Available as an advanced configuration. |
| Maximum re-rank chunks | Available as an advanced configuration. |
| Chunk Order | Available as a config field on the Answer Configuration page. |
| Snippet Selection | Deprecated. No longer supported. |
| Snippet Selection LLM | Deprecated. No longer supported. |
| Max Token Size | Implemented using the Token Budget field in Answer Generation. In SearchAssist, Max Token Size defined the total tokens sent to the LLM (prompt + chunks + output). Token Budget specifies tokens allocated only for retrieved chunks, excluding system prompt and LLM output tokens. Learn more. |
| top_p | In SearchAssist, this was specific to OpenAI. In Search AI, configure it for any LLM that supports it via Generative AI Tools > GenAI Features > Advanced Settings for the Answer Generation feature. Learn more. |
| Response Size | Use the Response Length parameter in Answer Configuration (Generative Answers). |
| Answer Response Length | Use the Response Length parameter in Answer Configuration (Extractive Answers). |
| Enable Page Body Cleaning | Enable via the Automatic Cleaning option in web crawl configuration. Navigate to Advanced Crawl Configurations for a web crawl and select Automatic Cleaning under Processing Options. |
| Custom Vector Model | Set the Embedding model on the Vector Configuration page. |
| Hypothetical Embeddings | Deprecated. No longer supported. |
| Crawl Delay | Configure in Advanced Crawl Configurations for a web crawl. This field applies only when the JavaScript Rendered option is enabled. |
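To illustrate the Token Budget row above, here is a minimal sketch of how a chunk-only budget differs from SearchAssist's Max Token Size. The budget constraining only retrieved chunks is from this guide; the greedy cut-off in retrieval order is an assumption for illustration, not the platform's documented selection logic.

```python
def chunks_within_budget(chunk_token_counts, token_budget):
    # Token Budget covers retrieved chunks only; the system prompt and the
    # LLM's output tokens (which SearchAssist's Max Token Size also counted)
    # are excluded. Greedy selection in retrieval order is an assumption.
    kept, used = [], 0
    for index, tokens in enumerate(chunk_token_counts):
        if used + tokens > token_budget:
            break
        kept.append(index)
        used += tokens
    return kept

# Three retrieved chunks of 300, 450, and 500 tokens against a 1000-token
# budget: the first two fit (750 tokens), the third would overshoot.
print(chunks_within_budget([300, 450, 500], 1000))
```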
Unified Schema
Search AI introduces a Unified Schema that standardizes how content from diverse sources is ingested and managed. The schema supports a broad range of connectors and content types, ensuring consistency across ingestion and retrieval. Developers can also define up to 50 custom fields to capture domain-specific metadata. Wherever schema fields are referenced in Search AI (e.g., workbench, business rules, filters), use the correct field names to avoid mapping or retrieval errors.
Training
Search AI enhances the training process with greater automation and efficiency. During training, design-time configurations are applied and ingested content is split into chunks. Unlike SearchAssist, Search AI introduces Automatic Training, which runs during both initial ingestion and incremental updates, reducing the need for manual intervention. Manual training may still be required in certain scenarios. Learn more about app training.
New Crawler
The new web crawler is faster, more reliable, and LLM-ready. It processes more pages in less time, reduces errors and interruptions, and automatically converts content into Markdown for seamless use with AI/LLM applications. It also handles JavaScript-heavy pages more efficiently for smoother, more consistent crawling.
Application Administration
Workspace Management
Search AI is part of the AI for Service platform. The platform organizes users and resources through Workspaces. Each workspace functions as a container that manages users, applications, and access controls. Workspaces can be shared between users to collaborate on all apps within the workspace. Learn how to create and manage workspaces.
App Sharing
Search AI supports App Sharing, allowing applications to be shared with other users for collaboration. Shared users can access and work on the same application based on their assigned permissions. To share your app:
- Go to the User Management page.
- Click Invite User and provide the user details and role.
- Send the invite.
Change Logs
Change logs are maintained at the platform level and are common to all application components, including Search AI. To view change logs:
- Navigate to App Settings.
- Open the Change Logs page. By default, logs for all modules are shown.
- To filter for Search AI logs, click More Filters, select Search AI from the Modules dropdown, then click Add.
App Analytics
Search AI provides analytics at the platform level, consolidating insights across all integrated applications. To view answer insights:
- Navigate to the Analytics page in the Search AI module.
- Under the Search AI section, open Answer Insights to view search-related metrics.
App Testing
Search AI supports testing at multiple levels:
- Document and Chunk Workbench: Provides a simulator to test the functionality and behavior of processing stages during development.
- Answer Generation: Use the Test Answers tool on the Answer Generation page to validate end-to-end search behavior after all configurations are made.
- Complete Application Testing: The Test option (top-right of each page) lets you test the complete agent, including all integrated modules, using both voice and chat widgets.