Active agent assist mode

In active agent assist mode, the AI Agent may actively involve the LLM during the call to provide real-time call insights, typically to the live agent. Because the AI Agent never speaks, the insights are delivered via a side channel, for example, a webhook.

This mode can be used for the following use-cases:

Configuring active agent assist mode

To configure active agent assist mode:

  1. Start by configuring passive agent assist mode, as described in the previous section. Skip the Post call analysis configuration if it is not relevant for your use case.

  2. Navigate to the 'AI Agents > Agents' screen, locate the “agent-assist” agent that you created, and click Edit.

    1. Switch to the Advanced tab and add the following to the advanced configuration parameters:

      {
          "agent_assist": {
              "mode": "webhook"
          }
      }
      
    2. Click Update.

Active agent assist mode operation

Setting a non-empty value for agent_assist.mode configures the AI Agent to operate in active agent assist mode. In this mode, for every user utterance that it hears during the call, the AI Agent sends the utterance to the LLM, which generates a response based on the configured prompt. The AI Agent may use tools and documents, as usual, to provide the LLM with grounding and additional context.

The response generated by the LLM can be relayed to an external system via a webhook, as described in Webhooks configuration. You need to configure a webhook for the llm event. For example:

{
  "webhooks": [
    {
      "events": [
        "llm"
      ],
      "url": "https://webhook.site/123456"
    }
  ]
}
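A minimal sketch of handling such a webhook delivery on the receiving side. Note that the field names used here (event, text) are illustrative assumptions, not the documented payload schema; consult the Webhooks configuration documentation for the actual event format:

```python
import json
from typing import Optional


def handle_llm_event(body: bytes) -> Optional[str]:
    """Extract the LLM insight from an 'llm' webhook event body.

    ASSUMPTION: the payload is JSON with 'event' and 'text' fields;
    the real schema is defined in the Webhooks documentation.
    """
    payload = json.loads(body)
    if payload.get("event") != "llm":
        # Ignore other event types that may be delivered to the same URL.
        return None
    return payload.get("text")


# Example: a hypothetical 'llm' event carrying an insight for the live agent.
insight = handle_llm_event(b'{"event": "llm", "text": "Forecast: sunny"}')
```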

You may also relay the LLM response via the Voice AI Connect “metadata” event, as described in Voice AI Connect > Bot integration > Controlling the call > Sending metadata. To do this, set the agent_assist.mode advanced configuration parameter to metadata.
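For example, the same agent configured to deliver insights via metadata events instead of a webhook:

    {
        "agent_assist": {
            "mode": "metadata"
        }
    }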

Reducing active agent assist mode “chattiness”

In active agent assist mode, the AI Agent calls the LLM and relays its response for every user utterance that it hears. In many scenarios this may be “too chatty”: the LLM will typically have nothing to say for most utterances and will generate meaningful insights only a handful of times during the call.

To reduce the “chattiness” of active agent assist mode, do the following:

  1. In your prompt, instruct the LLM to produce a “fixed” response when it has nothing to say. For example:

    If user asks about the weather, provide weather forecast.

    Otherwise respond with “NOTHING-TO-SAY”.

  2. Configure the “no response” phrase in the agent_assist.no_response_phrases parameter. For example:

    {
        "agent_assist": {
            "mode": "webhook",
            "no_response_phrases": [
                "NOTHING-TO-SAY"
            ]
        }
    }
    
Phrases specified in agent_assist.no_response_phrases are matched against the LLM response case-insensitively. If a match is found, the LLM response is silently discarded and not relayed via the webhook / metadata event.
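The filtering behavior can be sketched as follows. This is an illustration of the documented behavior, not the actual implementation; in particular, whether matching is exact or substring-based is an assumption here (exact match after trimming whitespace):

```python
def should_relay(response: str, no_response_phrases: list) -> bool:
    """Return True if the LLM response should be relayed onward.

    Phrases are compared case-insensitively, per the docs; a matching
    response is silently discarded. Exact-match semantics are assumed.
    """
    normalized = response.strip().lower()
    return all(normalized != phrase.lower() for phrase in no_response_phrases)


phrases = ["NOTHING-TO-SAY"]
```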

You can configure multiple “no response” phrases if needed, for example:

            ...
            "no_response_phrases": [
                "Nothing to say",
                "I don't know"
            ]
    

Configuring triggers for active agent assist mode

An alternative approach to reducing the “chattiness” of active agent assist mode is to configure keywords that trigger the LLM response. This option enables you to create “Alexa-like” experiences, where a participant must say a “trigger word” that causes the AI Agent to produce the insights. The trigger word may be matched in either customer or live agent utterances, depending on the configuration.

To configure trigger words for active agent assist mode:

  1. Configure the agent_assist.trigger_words parameter. You may specify multiple triggers. For each one, you must configure a text parameter.

  2. You may also optionally configure the participant parameter, set to agent or customer, to match utterances from the corresponding participant. Leave the participant parameter undefined if you want to match trigger words in utterances from both participants.

  3. Trigger words are matched case-insensitively, and the system checks whether the user utterance contains one of the specified trigger words, rather than matching the whole utterance.

Example
{
    "agent_assist": {
        "mode": "webhook",
        "trigger_words": [
            {
                "participant": "customer",
                "text": "weather"
            }
        ]
    }
}
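The trigger matching described above can be sketched as follows. This is an illustrative simplification of the documented behavior (case-insensitive substring matching, optionally scoped to one participant), not the actual implementation:

```python
def utterance_triggers(utterance: str, participant: str, triggers: list) -> bool:
    """Return True if the utterance should trigger an LLM response.

    Each trigger is a dict with a mandatory 'text' and an optional
    'participant' ('agent' or 'customer'). A trigger with no
    'participant' matches utterances from either participant.
    """
    text = utterance.lower()
    for trigger in triggers:
        who = trigger.get("participant")
        if who is not None and who != participant:
            continue  # trigger is scoped to the other participant
        if trigger["text"].lower() in text:
            return True  # substring match, case-insensitive
    return False


# Using the example configuration above:
triggers = [{"participant": "customer", "text": "weather"}]
```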