Advanced configuration
Use the Advanced tab in the Agent's configuration screen to configure the Agent's advanced configuration parameters. The configuration must be provided in JSON format and is validated at the time of entry. Use the editor's auto-complete feature and tooltips to simplify the configuration process.
The following table lists supported advanced configuration parameters with references to the relevant documentation sections.
Parameter name | Documentation section
---|---
agent_flavor | Mock-LLM agent flavors
call_recording | Enhanced call recording control
call_transfer_conditions | Triggering call transfers based on patterns in LLM response
customize_tools | Customizing tools behavior
establish_llm_connection | Pre-establishing the LLM connection
ignore_first_call_transfer | Ignoring first call transfer request
increment_counter_call_transfer | Triggering call transfers by increment_counter tool
increment_counter_conditions | Increment dynamic counters on specific messages
inherit_config | Advanced configuration parameters in multi-agent topologies
init_conditions | Init conditions
init_logs | Init conditions
language_detected_pass_question | Language-based question routing
llm_stream_first_sentence | Immediate playback of the first sentence
numbers_sequence | STT error correction configuration
progress_message_conditions | Conditional progress messages when calling tools
remove_symbols | STT error correction configuration
replace_words | STT error correction configuration
session_params | Call settings parameters configuration
activity_params | Call settings parameters configuration
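Parameters from the sections below can be combined in the same JSON object on the Advanced tab. For example, a configuration that corrects a commonly misheard word and pre-establishes the LLM connection (the values shown are illustrative) could look like this:
{
  "replace_words": {"halo": "hello"},
  "establish_llm_connection": true
}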
Advanced configuration parameters in multi-agent topologies
In multi-agent topologies, the following configuration parameters are inherited by sub-agents by default (i.e., parameters specified at the top-level agent are also applied to its sub-agents):
- explicit_tool_errors
- ignore_first_call_transfer
- max_turns_message
- webhooks
All other advanced configuration parameters apply only to the agent where they are defined.
You may change this default behavior via the inherit_config advanced configuration parameter:
Parameter | Type | Description
---|---|---
inherit_config | list[str] | List of advanced configuration parameters that are inherited by sub-agents.
Specify the list of advanced configuration parameters that you want sub-agents to inherit from the agent. Note that the new list overrides the default configuration, so be sure to include ALL parameters that you want to be inherited.
Example
{
"inherit_config": ["rag_chunks", "webhooks"]
}
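For example, to add customize_tools to the inherited parameters while keeping the defaults, repeat the default parameters explicitly, since the list overrides the default configuration:
{
  "inherit_config": ["explicit_tool_errors", "ignore_first_call_transfer", "max_turns_message", "webhooks", "customize_tools"]
}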
STT error correction configuration
The replace_words, remove_symbols, and numbers_sequence advanced configuration parameters provide automatic correction capabilities for Speech-to-Text (STT) errors in user input, helping to improve the accuracy of voice interactions.
Word replacement
Use the replace_words variable to automatically correct commonly misidentified words in user speech.
Parameter | Type | Description
---|---|---
replace_words | dict[str, str] | JSON dictionary mapping incorrect words to their correct replacements.
The specified words are matched case-sensitively and at word boundaries.
Example
{
"replace_words": {"halo": "hello", "hye": "hi"}
}
Symbol removal
Use the remove_symbols variable to automatically strip unwanted punctuation and symbols from user input.
Parameter | Type | Description
---|---|---
remove_symbols | list[str] | List of symbols to be removed from the user utterance.
Example
{
"remove_symbols": [".", ",", "!"]
}
Numbers sequence correction
Some STT engines may hallucinate on large numbers. For example, they may produce "Line 500 29" when the user says "Line 529".
There are also scenarios where users spell large numbers digit by digit – so instead of "Line 529" they say "Line 5 2 9".
The numbers_sequence variable allows you to resolve both of these problems.
Parameter | Type | Description
---|---|---
numbers_sequence | NumbersSequence | Correct number sequences in user utterances.
NumbersSequence
Parameter | Type | Description
---|---|---
mode | str | Operation mode.
prefixes | List[str] | List of prefixes. If specified, only number sequences that follow one of the specified prefixes are corrected. A special prefix value is also supported.
Example
{
"numbers_sequence": {
"mode": "auto",
"prefixes": ["line"]
}
}
Conditional progress messages when calling tools
The progress_message_conditions advanced configuration parameter enables the generation of progress messages when calling specific tools. This makes the conversation more fluent and mitigates slow responses from certain 3rd-party APIs.
Parameter | Type | Description
---|---|---
progress_message_conditions | list[ProgressCondition] | Play a progress message when specific tools are called.
ProgressCondition
Parameter | Type | Description
---|---|---
condition | str | Name of the tool, for example, "pass_question".
messages | list[str] | Progress messages. When more than one message is provided, the system randomly chooses which message to play.
In orchestration workflows, when an agent calls pass_question or send_message, the receiving agent's "enter" condition executes first, followed by the calling agent's tool-specific conditions.
Note that only one progress message per user utterance is supported, either time-based or conditional.
Example
"progress_message_conditions": [
{
"condition": "pass_question",
"messages": ["one moment", "just a second"]
}
]
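The receiving agent in an orchestration workflow can define its own "enter" condition to play a message when it takes over the question; a minimal sketch (the message text is illustrative):
{
  "progress_message_conditions": [
    {
      "condition": "enter",
      "messages": ["Let me check that for you."]
    }
  ]
}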
Customizing tools behavior
The customize_tools advanced configuration parameter allows you to customize tool behavior for a specific agent.
Parameter | Type | Description
---|---|---
customize_tools | dict[str, ToolCustomize] | Customize tool behavior for a specific agent.
ToolCustomize
Parameter | Type | Description
---|---|---
response_len | int | Maximum length of the tool response to be returned to the LLM.
 | bool | Redact the tool response from the message history.
 | str | Redaction message to be used in the message history if the tool response is redacted. Default: "<redacted>".
Example
{
"customize_tools ": {
"get_weather": {"response_len": 20000}
}
}
Triggering call transfers based on patterns in LLM response
The call_transfer_conditions advanced configuration parameter enables automatic call transfers based on patterns detected in the LLM's responses. This feature helps identify when the agent is struggling to provide assistance and automatically escalates to human support.
Parameter | Type | Description
---|---|---
call_transfer_conditions | list | List of call transfer conditions.
CallTransferCondition
Parameter | Type | Description
---|---|---
patterns | list[str] | List of phrases or patterns to monitor in LLM responses.
threshold | int | Number of LLM responses matching the patterns required to trigger a call transfer. Default: 1.
message | str | Message to be played before the call transfer.
phone | str | Phone number to transfer the call to (supports variable substitution).
The system continuously monitors each response generated by the LLM, scanning for any of the specified patterns. When a pattern is detected in the agent's response, the system increments an internal condition counter that is maintained throughout the entire conversation. When the condition counter reaches the specified threshold value, the system automatically triggers the call_transfer tool.
Example
{
"call_transfer_conditions": [
{
"patterns": ["cannot answer", "can't answer", "don't know"],
"threshold": 2,
"phone": "18005550123",
"message": "Let me connect you with a human agent who can better assist you."
}
]
}
Triggering call transfers by increment_counter tool
The increment_counter_call_transfer advanced configuration parameter enables automatic call transfers based on variable thresholds reached through the increment_counter tool. This capability is particularly valuable for scenarios where repeated failures or issues need to escalate to human intervention, such as when multiple user requests cannot be properly categorized or answered.
Parameter | Type | Description
---|---|---
increment_counter_call_transfer | list | List of increment counter call transfer configurations.
IncrementCounterCallTransfer
Parameter | Type | Description
---|---|---
name | str | Name of the counter to monitor (must match the counter name used in the increment_counter tool).
threshold | int | Threshold value that triggers the call transfer. Default: 1.
message | str | Message to be played before the call transfer.
phone | str | Phone number to transfer the call to (supports variable substitution).
Example
{
"increment_counter_call_transfer": [
{
"name": ["error_count "],
"threshold": 2,
"phone": "18005550123",
"message": " Please wait while I transfer you to human attendant."
}
]
}
Increment dynamic counters on specific messages
The increment_counter_conditions advanced configuration parameter increments dynamic counters, as used by the increment_counter pre-defined tool, on specific user utterances or LLM responses.
Parameter | Type | Description
---|---|---
increment_counter_conditions | List[IncrementCounterConditions] | Increment dynamic counters on specific user utterances or LLM responses.
IncrementCounterConditions
Parameter | Type | Description
---|---|---
sender | str | Message sender.
patterns | List[str] | List of patterns to match. If the message contains one of the specified patterns, the dynamic counter is incremented.
counter | str | Name of the dynamic counter to be incremented.
Example
{
"increment_counter_conditions": [
{
"sender": "user",
"patterns": ["angry", "mad", "frustrated"],
"counter": "user_dissatisfied"
}
]
}
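Because the counter name in increment_counter_call_transfer must match the counter being incremented, the two parameters can be combined so that repeated user dissatisfaction escalates to a human. A sketch, assuming counters incremented by increment_counter_conditions count toward the transfer threshold (the counter name, phone number, and texts are illustrative):
{
  "increment_counter_conditions": [
    {
      "sender": "user",
      "patterns": ["angry", "mad", "frustrated"],
      "counter": "user_dissatisfied"
    }
  ],
  "increment_counter_call_transfer": [
    {
      "name": "user_dissatisfied",
      "threshold": 2,
      "phone": "18005550123",
      "message": "Let me transfer you to a human agent."
    }
  ]
}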
Ignoring first call transfer request
The ignore_first_call_transfer advanced configuration parameter allows you to prevent the agent from transferring calls on the first attempt, encouraging users to interact with the agent before escalating to human assistance.
When configured, this variable intercepts the first call transfer request made by the agent and returns the specified text response instead of executing the call_transfer tool call. This is particularly useful for scenarios where you want to:
- Encourage users to engage with the agent before requesting human help
- Provide a friendly message explaining the agent's capabilities
- Reduce unnecessary call transfers on initial interactions
Parameter | Type | Description
---|---|---
ignore_first_call_transfer | str | Message you want the agent to respond with when a call transfer is first attempted.
Example
{
"ignore_first_call_transfer": "Let me try to help you first."
}
Inheritance
The ignore_first_call_transfer value is inherited by sub-agents by default. This ensures consistent behavior across the entire agent hierarchy.
Utterance Counting
Each agent and sub-agent maintains its own independent count of utterances. This means:
- The master agent tracks its own interactions separately from sub-agents
- When a sub-agent is invoked, it starts with a fresh utterance count
- A sub-agent's first call transfer attempt will be intercepted, regardless of how many interactions the master agent has already handled
Consider this interaction flow:
- Master agent handles multiple user utterances.
- Master agent delegates a question to a sub-agent.
- Sub-agent immediately attempts to call call_transfer.
In this case, the sub-agent's transfer attempt is still considered the "first utterance" from the sub-agent's perspective and will be intercepted by the ignore_first_call_transfer configuration.
Language-based question routing
The language_detected_pass_question advanced configuration parameter enables automatic passing of the user question to another agent based on the language detected by the Speech-to-Text (STT) system. This may be used as a faster and more reliable alternative to LLM-based language detection.
Parameter | Type | Description
---|---|---
language_detected_pass_question | dict[str, str] | JSON dictionary mapping language codes to their corresponding agent names.
 | list[str] | List of phrases to be answered by the LLM.
Refer to Multi-language setup for detailed instructions on how to configure multi-language detection in the Bot connection.
Configure the language_detected_pass_question parameter in the top-level agent that starts the conversation. You may use the "other" language name as a fallback for any language name not explicitly specified.
Example
{
"language_detected_pass_question": {
"he-IL": "main-agent-he",
"en-US": "main-agent-en"
}
}
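Assuming "other" is the literal fallback key described above, a configuration with a fallback agent (agent names are illustrative) might look like this:
{
  "language_detected_pass_question": {
    "he-IL": "main-agent-he",
    "en-US": "main-agent-en",
    "other": "main-agent-en"
  }
}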
Call settings parameters configuration
The session_params and activity_params advanced configuration parameters enable dynamic control over call settings during agent interactions. These parameters modify call behavior such as barge-in settings, audio configurations, and other Voice.AI Connect features.
For complete details on available settings, see Changing call settings in the Voice AI Connect guide.
All variables use JSON dictionary format, for example:
{ "activity_params": { "bargeIn": true } }
Parameter | Type | Description
---|---|---
 | dict[str, Any] | Included in every agent response.
 | dict[str, Any] | Included in the first agent's response after a "context switch".
 | dict[str, Any] | Included in the welcome message (instead of activity_params). Should be configured in the top-level agent that generates the welcome message.
Init conditions
The init_conditions advanced configuration parameter provides conditional agent initialization based on local configuration, offering similar functionality to the "init" webhook (see Webhooks configuration for details) but using predefined rules instead of external API calls. This enables dynamic agent behavior based on conversation data such as caller information, called number, or other variables.
If your init_conditions configuration doesn't behave as expected, use the init_logs parameter to enable logs during agent initialization.
Parameter | Type | Description
---|---|---
init_conditions | list[InitCondition] | Conditional agent initialization.
init_logs | bool | Enable logs for agent initialization.
InitCondition
Parameter | Type | Description
---|---|---
match | dict[str, str] | Match conditions. Multiple match elements use AND logic, meaning all conditions must be satisfied for the rule to apply.
variables | dict[str, str] | Dictionary of variables that will be added / merged into the current agent's variables.
 | dict[str, str] | Dictionary of advanced configuration parameters that will be added / merged into the current agent.
 | str | Name of the agent that will start the conversation.
 | list[str] | Names of documents that the agent has access to. May be used to limit access to specific documents based on, for example, the callee number.
Example
{
  "init_conditions": [
    {
      "match": {"callee": "12024567041"},
      "variables": {"destination": "White House"}
    }
  ]
}
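A sketch of a configuration with two rules and initialization logging enabled (the caller match key and the variable values are assumptions for illustration):
{
  "init_conditions": [
    {
      "match": {"callee": "12024567041"},
      "variables": {"destination": "White House"}
    },
    {
      "match": {"caller": "18005550123"},
      "variables": {"vip_caller": "true"}
    }
  ],
  "init_logs": true
}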
Mock-LLM agent flavors
The agent_flavor advanced configuration parameter enables the creation of specialized AI agents that use a "mock LLM" instead of real large language models. These "mock LLM" configurations are designed for specific use cases such as testing, monitoring, and data collection.
Parameter | Type | Description
---|---|---
agent_flavor | str | Mock-LLM agent flavor.
Available flavors
- echo – The agent repeats back exactly what the user says, without any LLM processing or interpretation.
- listen – The agent silently monitors the conversation and collects the transcript without generating any responses. You may use the webhooks configuration of Post call analysis to process the collected transcript. This mode is automatically activated when the AI Agent is used in "agent assist" mode.
- say – The prompt is split into blocks separated by an empty line, and the agent "says" one block after another sequentially, regardless of user utterances. When there are no more blocks left, the end_call tool is called. You may configure "no user response" in the bot configuration to make the agent say blocks without any user intervention.
When a mock-LLM flavor is configured, the LLM parameters in the agent's configuration screen (e.g., model name and temperature) are ignored.
Example
{
"agent_flavor": "listen"
}
Pre-establishing the LLM connection
The establish_llm_connection advanced configuration parameter minimizes response delays during conversations by initiating the LLM connection while the welcome message plays to users. This optimization applies exclusively to predefined welcome messages, not dynamically generated ones from the LLM.
Parameter | Type | Description
---|---|---
establish_llm_connection | bool | Establish the LLM connection during agent initialization.
Example
{
"establish_llm_connection": true
}
Enhanced call recording control
The call_recording advanced configuration parameter provides granular control of the recording behavior. To use it, set the Call recording feature to "controlled by bot" in the Features tab of the Bot connection attached to your AI Agent.
Parameter | Type | Description
---|---|---
call_recording | str | Call recording configuration.
Example
{
"call_recording": "start_on_transfer"
}
Immediate playback of the first sentence
The llm_stream_first_sentence advanced configuration parameter enables immediate playback of the initial short sentence from LLM responses. When enabled, the system sends the first complete sentence to the Text-to-Speech (TTS) engine as soon as it's generated, followed by the remaining content.
For example, if the LLM generates "Hi there! I'm Jonathan, your sales assistant, how can I help you?", the system will immediately process "Hi there!" through TTS while continuing to generate the rest of the response.
This feature may result in slightly reduced TTS output quality and therefore is disabled by default. However, it can significantly improve conversation fluency when using slower LLMs or generating longer responses.
Parameter | Type | Description
---|---|---
llm_stream_first_sentence | bool | Immediately play the first sentence of the LLM response.
Example
{
"llm_stream_first_sentence": true
}