Active agent assist mode
In active agent assist mode, the AI Agent actively involves the LLM during the call to provide real-time call insights, typically to the live agent. The insights are delivered via a side channel (for example, a webhook); the AI Agent never speaks.
This mode can be used for the following use cases:
- Real-time knowledge retrieval – surfacing relevant documentation, FAQs, or policy excerpts based on the current topic of the conversation.
- Intent and sentiment detection – identifying the customer’s goal, emotional state, or level of frustration.
- Compliance guidance – alerting the agent when regulatory or company policy requirements apply (e.g., mandatory disclosures).
- Coaching and training – providing feedback to agents on tone, clarity, and effectiveness in real time.
Configuring active agent assist mode
To configure active agent assist mode:
- Start with configuring passive agent assist mode, as described in the previous section. Skip the Post call analysis configuration if it’s not relevant for your use case.
- Navigate to the 'AI Agents > Agents' screen, locate the “agent-assist” agent that you created, and click Edit.
- Switch to the Advanced tab and add the following to the advanced configuration parameters:
  { "agent_assist": { "mode": "webhook" } }
- Click Update.
Active agent assist mode operation
Setting a non-empty value for agent_assist.mode configures the AI Agent to operate in active agent assist mode. In this mode, for every user utterance that it hears during the call, the AI Agent sends the utterance to the LLM, which generates a response based on the configured prompt. As usual, the AI Agent may use tools and documents to provide the LLM with grounding and additional context.
The response generated by the LLM can be relayed to an external system via a webhook, as described in Webhooks configuration. You need to configure a webhook for the llm event. For example:
{
  "webhooks": [
    {
      "events": [
        "llm"
      ],
      "url": "https://webhook.site/123456"
    }
  ]
}
Alternatively, you may relay the LLM response via the Voice AI Connect “metadata” event, as described in Voice AI Connect > Bot integration > Controlling the call > Sending metadata. To do this, set the agent_assist.mode advanced configuration parameter to metadata.
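To experiment with the webhook delivery path, you need an HTTP endpoint that accepts the llm-event POSTs. Below is a minimal sketch of such a receiver using Python’s standard library; the payload field names in the comments are illustrative assumptions, not the product’s documented schema.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

received = []  # collected llm-event payloads, for later inspection

class LlmEventHandler(BaseHTTPRequestHandler):
    """Accepts POSTed llm-event payloads and stores them in `received`."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        # The exact payload structure is product-defined; here we just
        # parse whatever JSON body arrives and keep it.
        payload = json.loads(self.rfile.read(length) or b"{}")
        received.append(payload)
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # silence the default per-request console logging

def serve(port=0):
    """Start the receiver on a background thread; returns (server, port)."""
    server = HTTPServer(("127.0.0.1", port), LlmEventHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

Point the webhook url at this endpoint (e.g., `http://your-host:port/`) to observe the relayed LLM responses.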
Reducing active agent assist mode “chattiness”
An AI Agent in active agent assist mode calls the LLM and relays its response for every user utterance it hears. In many scenarios this may be “too chatty”, as the LLM typically has nothing to say for most utterances and generates meaningful insights only a handful of times during the call.
To reduce the “chattiness” of active agent assist mode:
- Instruct the LLM in your prompt to produce a “fixed” response when it has nothing to say. For example:
  If user asks about the weather, provide weather forecast. Otherwise respond with “NOTHING-TO-SAY”.
- Configure the “no response” phrase in the agent_assist.no_response_phrases parameter. For example:
  { "agent_assist": { "mode": "webhook", "no_response_phrases": [ "NOTHING-TO-SAY" ] } }
- Phrases specified in agent_assist.no_response_phrases are matched against the LLM response case-insensitively. If a match is found, the LLM response is silently discarded and not relayed via the webhook / metadata event.
- You can configure multiple “no response” phrases if needed, for example:
  ... "no_response_phrases": [ "Nothing to say", "I don't know" ]
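The filtering behavior described above can be sketched as follows. This assumes an exact, case-insensitive match after trimming whitespace; the product’s actual matching rule may differ.

```python
def should_relay(llm_response: str, no_response_phrases: list[str]) -> bool:
    """Return False when the LLM response matches a configured
    "no response" phrase case-insensitively, meaning the response
    is silently discarded instead of being relayed.

    Assumption: matching is an exact comparison after trimming
    whitespace and lowercasing both sides.
    """
    normalized = llm_response.strip().lower()
    return all(normalized != phrase.strip().lower()
               for phrase in no_response_phrases)
```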
Configuring triggers for active agent assist mode
An alternative approach to reducing the “chattiness” of active agent assist mode is to configure keywords that trigger the LLM response. This option enables you to create “Alexa-like” experiences, where a participant must say a “trigger word” that causes the AI Agent to produce insights. The trigger word may be matched in customer or live-agent utterances, depending on the configuration.
To configure trigger words for active agent assist mode:
- Configure the agent_assist.trigger_words parameter. You may specify multiple triggers. For each one, you must configure a text parameter.
- You may also optionally set the participant parameter to agent or customer, to match utterances from the corresponding participant only. Leave the participant parameter undefined if you want to match trigger words in utterances from both participants.
- Trigger words are matched case-insensitively, and the system checks whether the utterance contains one of the specified trigger words, rather than matching the whole utterance.
Example
{
"agent_assist": {
"mode": "webhook",
"trigger_words": [
{
"participant": "customer",
"text": "weather"
}
]
}
}
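The trigger-matching semantics described above (case-insensitive substring containment, with an optional participant filter) can be sketched as follows; the dict shape mirrors the agent_assist.trigger_words configuration, and the function itself is an illustration rather than the product’s implementation.

```python
def is_triggered(utterance: str, participant: str, trigger_words: list[dict]) -> bool:
    """Return True when the utterance contains any configured trigger
    word (case-insensitive substring match), optionally restricted to
    a specific participant ("agent" or "customer").

    `trigger_words` is a list of {"text": ..., "participant": ...} dicts,
    where "participant" may be omitted to match both participants.
    """
    lowered = utterance.lower()
    for trigger in trigger_words:
        wanted = trigger.get("participant")
        if wanted is not None and wanted != participant:
            continue  # trigger is scoped to the other participant
        if trigger["text"].lower() in lowered:
            return True
    return False
```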