Reasoning effort

Certain Large Language Models (LLMs), such as OpenAI's gpt-5, feature advanced reasoning or "thinking" capabilities. By default, Live Hub configures these models to use the minimal supported thinking level to minimize latency, keeping voice interactions fluid, responsive, and natural.

The following two advanced configuration parameters let you control the reasoning level for such models and the visibility of the reasoning process.

| Parameter | Type | Description |
| --- | --- | --- |
| `reasoning_effort` | enum | Configures how much time and computational resources the model should dedicate to its internal reasoning process.<br>Supported values: `none`, `minimal`, `low`, `medium`, `high`<br>Default: `none` |
| `reasoning_logs` | bool | Adds "reasoning" entries to AI Agent logs for models that support such functionality (e.g. `gemini-2.5-flash-native-audio`).<br>Default: `false` |

Example

```json
{
    "reasoning_effort": "low",
    "reasoning_logs": true
}
```
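If you build this configuration programmatically, it can be worth checking the values against the supported set before sending them. The sketch below is a minimal, hypothetical helper (not part of Live Hub) that validates a reasoning configuration dict and applies the defaults documented above; the function name and structure are assumptions for illustration.

```python
# Hypothetical validation helper; not part of Live Hub.
# Allowed values and defaults are taken from the parameter table above.
ALLOWED_EFFORTS = {"none", "minimal", "low", "medium", "high"}


def validate_reasoning_config(config: dict) -> dict:
    """Return a validated copy of the config with documented defaults applied."""
    effort = config.get("reasoning_effort", "none")  # documented default: none
    if effort not in ALLOWED_EFFORTS:
        raise ValueError(f"unsupported reasoning_effort: {effort!r}")
    logs = config.get("reasoning_logs", False)  # documented default: false
    if not isinstance(logs, bool):
        raise TypeError("reasoning_logs must be a bool")
    return {"reasoning_effort": effort, "reasoning_logs": logs}


# Mirrors the JSON example above.
validated = validate_reasoning_config(
    {"reasoning_effort": "low", "reasoning_logs": True}
)
print(validated)  # → {'reasoning_effort': 'low', 'reasoning_logs': True}
```

An empty dict yields the documented defaults (`"none"` and `false`), so the helper can also be used to normalize a partially specified configuration.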