Large language models

Live Hub provides the following pre-deployed large language models that can be used in AI Agents:

| Model | Pros & Cons |
| --- | --- |
| gpt-4o-mini | Fast and cost-effective model from OpenAI. Adequate performance for typical AI Agent scenarios. May struggle with complex multi-step instructions. |
| gpt-4o | Powerful model from OpenAI for complex tasks. Follows complex instructions effectively. |
| gpt-4.1-nano | Most cost-effective model from OpenAI. Recommended only for simple tasks. |
| gpt-4.1-mini | Latest generation of OpenAI's fast and affordable models. Good balance between cost and intelligence for typical AI Agent scenarios. |
| gpt-4.1 | Most powerful model of OpenAI's latest generation. Well suited for complex AI Agent scenarios. |
| gemini-2.0-flash | Powerful model from Google. Strong performance across a wide range of tasks. Significant cost savings compared to gpt-4o / gpt-4.1. |
| gemini-2.0-flash-lite | Cost-efficient model from Google. May struggle with complex multi-step instructions. |
| gemini-2.5-flash | Latest generation of Google's powerful models. Optimal price-performance balance and well-rounded capabilities. |
| gemini-2.5-flash-lite | Latest-generation model from Google optimized for cost efficiency and low latency. Less suitable for complex use cases. |
| gpt-4o-realtime-mini | Cost-effective real-time model from OpenAI. Supports voice and text modalities. Extremely low latency and a natural conversation experience. |
| gpt-4o-realtime | Flagship real-time model from OpenAI. Supports voice and text modalities. |

In addition to the models above, you can bring your own large language models from other providers by configuring the corresponding API Keys in the Models screen. For details, see Models.
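If you bring your own model, it can be worth confirming that the provider API key actually works before entering it in the Models screen. The sketch below illustrates only that provider-side check and is not part of Live Hub itself: it assumes an OpenAI key exported as the environment variable `OPENAI_API_KEY`, the `requests` package, and OpenAI's publicly documented Chat Completions endpoint; the model name and prompt are placeholders.

```python
# Minimal sketch: verify a provider API key before configuring it in the Models screen.
# Assumptions: the `requests` package is installed and the key is exported as OPENAI_API_KEY.
import os

import requests

api_key = os.environ["OPENAI_API_KEY"]  # never hard-code provider keys

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-4o-mini",  # any model your key has access to
        "messages": [{"role": "user", "content": "Reply with OK."}],
        "max_tokens": 5,
    },
    timeout=30,
)

# A 200 response with a non-empty choice means the key is valid and usable.
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

The same idea applies to keys from other providers, using that provider's own endpoint; once the key responds successfully, it is ready to be added in the Models screen.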