Speech-to-text service information

To connect VoiceAI Connect to a speech-to-text service provider, certain information is required from the provider, which is then used in the VoiceAI Connect configuration for the bot.

Microsoft Azure Speech Services

Connectivity

To connect to Azure's Speech Service, you need to provide AudioCodes with your subscription key for the service. To obtain the key, see Azure's documentation at https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/get-started.

The key is configured on VoiceAI Connect using the credentials key parameter in the providers section.

Note: The key is only valid for a specific region. The region is configured using the region parameter.
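For example, the key and region might appear in the configuration as follows (a minimal sketch with placeholder values; the provider name and type shown here are illustrative assumptions):

{
  "providers": [
    {
      "name": "my_azure",
      "type": "azure",
      "region": "westus",
      "credentials": {
        "key": "<your Azure subscription key>"
      }
    }
  ]
}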

Language Definition

To define the language, you need to provide AudioCodes with the locale value from Azure's speech-to-text language support table.

This value is configured on VoiceAI Connect using the language parameter. For example, for Italian, the parameter should be configured to it-IT.
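For instance, a bot's language setting might look like this (a minimal sketch; the bot name is a placeholder):

{
  "bots": [
    {
      "name": "my_bot",
      "language": "it-IT"
    }
  ]
}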

VoiceAI Connect can also use Azure's Custom Speech service. For more information, see Azure's documentation at https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/how-to-custom-speech-deploy-model. If you use this service, you need to provide AudioCodes with the custom endpoint details.

Google Cloud Speech-to-Text

Connectivity

To connect to the Google Cloud Speech-to-Text service, you need to provide AudioCodes with your service account credentials: the private key and the service account (client) email.

Configuration

To create the service account key, refer to Google's documentation. From the JSON object representing the key, extract the private key (including the "-----BEGIN PRIVATE KEY-----" prefix) and the service account email. These two values are configured on VoiceAI Connect using the privateKey and clientEmail parameters in the providers > credentials section.
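As an illustration, the extracted values might be placed in the configuration as follows (a minimal sketch with truncated placeholder values; the provider name and type are illustrative assumptions):

{
  "providers": [
    {
      "name": "my_google",
      "type": "google",
      "credentials": {
        "privateKey": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
        "clientEmail": "my-service-account@my-project.iam.gserviceaccount.com"
      }
    }
  ]
}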

API Version

To choose the API version for the Speech-to-Text service, configure the googleSttVersion parameter. This parameter is configured per bot by the Administrator, or dynamically by the bot during conversation:

Parameter: googleSttVersion

Type: String

Description: Enables the bot to use a specific Google Speech-to-Text service version:

  • v1

  • v1p1beta1 (default)

  • v2

This feature is applicable only to VoiceAI Connect Enterprise (Version 3.20-2 and later).
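For example, to pin the service to the v2 API, the parameter might be configured per bot as follows (a sketch, assuming the parameter sits at the bot level; the bot name is a placeholder):

{
  "bots": [
    {
      "name": "my_bot",
      "googleSttVersion": "v2"
    }
  ]
}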

Language Definition

To define the language, you need to provide AudioCodes with the language code from Google's Cloud Speech-to-Text supported languages table.

This value is configured on VoiceAI Connect using the language parameter. For example, for English (South Africa), the parameter should be configured to en-ZA.

Nuance

Connectivity

To connect VoiceAI Connect to the Nuance Krypton speech service, you can use either the WebSocket API or the open-source gRPC (Remote Procedure Calls) API. To connect to Nuance Mix, you must use the gRPC API.

VoiceAI Connect is configured to connect to the required Nuance API type by setting the type parameter in the providers section to nuance (WebSocket) or nuance-grpc (gRPC).

You need to provide AudioCodes with the URL of your Nuance speech-to-text endpoint instance. This URL (with port number) is configured on VoiceAI Connect using the sttHost parameter.

Note: Nuance offers a cloud service (Nuance Mix) as well as an option to install an on-premises server. The on-premises server does not use authentication, while the cloud service uses OAuth 2.0 authentication (see below).

VoiceAI Connect supports the Nuance Mix Conversational AI services (gRPC) API interface. VoiceAI Connect authenticates itself with Nuance Mix (which is located in the public cloud) using OAuth 2.0. To configure OAuth 2.0, use the following providers parameters: oauthTokenUrl, credentials > oauthClientId, and credentials > oauthClientSecret.

Nuance Mix is supported only by VoiceAI Connect Enterprise Version 2.6 and later.
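A minimal sketch of a Nuance Mix provider entry, using placeholder values throughout:

{
  "providers": [
    {
      "name": "my_nuance_mix",
      "type": "nuance-grpc",
      "sttHost": "<address>:<port>",
      "oauthTokenUrl": "<OAuth token URL supplied by Nuance>",
      "credentials": {
        "oauthClientId": "<your client ID>",
        "oauthClientSecret": "<your client secret>"
      }
    }
  ]
}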

Language Definition

To define the language, you need to provide AudioCodes with the language code (BCP-47 format). This value is configured on VoiceAI Connect using the language parameter. For example, for English (USA), the parameter should be configured to en-US.

Amazon Transcribe Speech-to-Text Service

To connect to the Amazon Transcribe speech-to-text service, you need to provide AudioCodes with your AWS connection details.

VoiceAI Connect is configured to connect to Amazon Transcribe by setting the type parameter to aws under the providers section.

For the languages supported by Amazon Transcribe, refer to Amazon's documentation. The language value is configured on VoiceAI Connect using the language parameter under the bots section.

Amazon Transcribe is supported only by VoiceAI Connect Enterprise Version 3.4 and later.
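For example (a minimal sketch with placeholder names; AWS credential fields are omitted because they depend on your deployment):

{
  "providers": [
    {
      "name": "my_aws",
      "type": "aws"
    }
  ],
  "bots": [
    {
      "name": "my_bot",
      "language": "en-US"
    }
  ]
}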

AmiVoice

To define the AmiVoice speech-to-text engine on VoiceAI Connect, you need to provide AudioCodes with the connection details obtained from AmiVoice's customer service.

AmiVoice is supported only by VoiceAI Connect Enterprise Version 2.8 and later.

AudioCodes LVCSR

Connectivity

To connect VoiceAI Connect to the AudioCodes LVCSR service, set the type parameter in the providers section to audiocodes-lvcsr.

You need to provide AudioCodes with the URL of your AudioCodes LVCSR speech-to-text endpoint instance. This URL (<address>:<port>) is configured on VoiceAI Connect using the sttHost parameter.

The optional key is used as a query parameter in the WebSocket URL (if not provided, the key “1” is used).

The LVCSR speech-to-text provider is currently applicable only to agent-assist calls, where the bot receives the speech-to-text results through the speechHypothesis event.

AudioCodes LVCSR speech-to-text provider is supported only by VoiceAI Connect Enterprise Version 3.0 and later.
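A minimal sketch of an LVCSR provider entry (placeholder values throughout; the placement of the optional key under credentials is an assumption):

{
  "providers": [
    {
      "name": "my_lvcsr",
      "type": "audiocodes-lvcsr",
      "sttHost": "<address>:<port>",
      "credentials": {
        "key": "<optional key>"
      }
    }
  ]
}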

The following parameter is configured per bot by the Administrator, or dynamically by the bot during conversation:

Parameter: stopRecognitionMaintainSession

Type: Boolean

Description: Enables the bot to stop or start speech recognition:

  • true: Stop speech recognition.

  • false: (Default) Resume speech recognition.

This feature is applicable only to VoiceAI Connect Enterprise (Version 3.20-2 and later).
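For example, a bot might set this parameter dynamically during a conversation (a sketch, assuming the parameter is attached to an activity via activityParams; the activity type and name shown here are illustrative):

{
  "type": "event",
  "name": "config",
  "activityParams": {
    "stopRecognitionMaintainSession": true
  }
}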

Yandex

To connect to Yandex, please contact AudioCodes for information.

Deepgram

Connectivity

VoiceAI Connect uses WebSocket to communicate with Deepgram’s speech service. To connect your VoiceAI Connect instance to Deepgram, you need to provide AudioCodes with the API key and the speech service URL supplied by Deepgram.

Note: When configuring the providers, make sure that you define the API key as a "key" and not as a "token".
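For instance, a Deepgram provider entry might look like the following sketch (all values are placeholders; the type value is an assumption based on the Generic provider option described below):

{
  "providers": [
    {
      "name": "my_deepgram",
      "type": "generic",
      "sttHost": "<URL supplied by Deepgram>",
      "credentials": {
        "key": "<API key supplied by Deepgram>"
      }
    }
  ]
}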

Language Definition

To define the language, provide the appropriate BCP-47 language code from Deepgram’s documentation. This is configured on VoiceAI Connect using the language parameter.

Advanced Features

Deepgram supports a number of advanced features such as different models, keyword boosting, profanity filter, and more. Review Deepgram’s features documentation for a full list.

These additional features are configured on VoiceAI Connect using the sttGenericData field, whose parameters are passed as key-value pairs in a JSON object. For Deepgram parameters that can contain multiple items (such as keywords), replace the string value with an array of string values, as in the following example:

{
  "sttGenericData": {
    "model": "custom_model_name",
    "tier": "enhanced",
    "keywords": [
      "operator:20",
      "account:2"
    ]
  }
}

Connecting to Deepgram using AudioCodes Live Hub

If you want to connect to Deepgram's speech services using AudioCodes Live Hub:

  1. Sign in to the Live Hub portal.

  2. From the Navigation Menu pane, click Speech Services.

  3. Click the + (plus) icon, and then do the following:

    1. In the 'Speech service name' field, type a name for the speech service.

    2. Select only the Speech To Text check box.

    3. Select the Generic provider option.

    4. Click Next.

  4. In the 'Authentication Key' field, enter the API key supplied by Deepgram.

  5. In the 'Speech To Text (STT) URL' field, enter the URL supplied by Deepgram.

  6. Click Create.

Uniphore

To connect VoiceAI Connect to the Uniphore speech-to-text service, set the type parameter in the providers section to uniphore.

You need to provide AudioCodes with the URL of your Uniphore speech-to-text endpoint instance. This URL (<address>:<port>) is configured on VoiceAI Connect using the sttHost parameter.
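A minimal sketch of a Uniphore provider entry, with placeholder values:

{
  "providers": [
    {
      "name": "my_uniphore",
      "type": "uniphore",
      "sttHost": "<address>:<port>"
    }
  ]
}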

Uniphore speech-to-text provider is applicable only to VoiceAI Connect Enterprise Version 3.6 and later.