diff --git a/fern/customization/transcriber-fallback-plan.mdx b/fern/customization/transcriber-fallback-plan.mdx
new file mode 100644
index 000000000..63d966997
--- /dev/null
+++ b/fern/customization/transcriber-fallback-plan.mdx
@@ -0,0 +1,174 @@
+---
+title: Transcriber fallback configuration
+subtitle: Configure fallback transcribers that activate automatically if your primary transcriber fails.
+slug: customization/transcriber-fallback-plan
+---
+
+## Overview
+
+Transcriber fallback configuration ensures your calls continue even if your primary speech-to-text provider experiences issues. Your assistant falls back sequentially to the transcribers you configure, in the exact order you specify.
+
+**Key benefits:**
+- **Call continuity** during provider outages
+- **Automatic failover** with no user intervention required
+- **Provider diversity** to protect against single points of failure
+
+
+ Without a fallback plan configured, your call will end with an error if your chosen transcription provider fails.
+
+
+## How it works
+
+When a transcriber failure occurs, Vapi will:
+1. Detect the failure of the primary transcriber
+2. Switch to the first fallback transcriber in your plan
+3. Continue through your specified list if subsequent failures occur
+4. Terminate only if all transcribers in your plan have failed
+
+## Configure via Dashboard
+
+
+
+ Navigate to your assistant and select the **Transcriber** tab.
+
+
+ Scroll down to find the **Fallback Transcribers** collapsible section. A warning indicator appears if no fallback transcribers are configured.
+
+
+ Click **Add Fallback Transcriber** to configure your first fallback:
+ - Select a **provider** from the dropdown
+ - Choose a **model** (if the provider offers multiple models)
+ - Select a **language** for transcription
+
+
+ Expand **Additional Configuration** to access provider-specific settings like numerals formatting, VAD settings, and confidence thresholds.
+
+
+ Repeat to add additional fallback transcribers. Order matters—the first fallback in your list is tried first.
+
+
+
+
+ If HIPAA or PCI compliance is enabled on your account or assistant, only **Deepgram** and **Azure** transcribers will be available as fallback options.
+
+
+## Configure via API
+
+Add the `fallbackPlan` property to your assistant's transcriber configuration, and specify the fallback transcribers within the `transcribers` property.
+
+```json
+{
+ "transcriber": {
+ "provider": "deepgram",
+ "model": "nova-3",
+ "language": "en",
+ "fallbackPlan": {
+ "transcribers": [
+ {
+ "provider": "assembly-ai",
+ "speechModel": "universal-streaming-multilingual",
+ "language": "en"
+ },
+ {
+ "provider": "azure",
+ "language": "en-US"
+ }
+ ]
+ }
+ }
+}
+```
+
+## Provider-specific settings
+
+Each transcriber provider supports different configuration options. Expand the accordion below to see available settings for each provider.
+
+
+
+ - **model**: Model selection (`nova-3`, `nova-3-general`, `nova-3-medical`, `nova-2`, `flux-general-en`, etc.).
+ - **language**: Language code for transcription.
+ - **keywords**: Keywords with optional boost values for improved recognition (e.g., `["companyname", "productname:2"]`).
+ - **keyterm**: Keyterm prompting, improving keyword recall by up to 90%.
+ - **smartFormat** (boolean): Enable smart formatting for numbers and dates.
+ - **eotThreshold** (0.5-0.9): End-of-turn confidence threshold. Only available with Flux models.
+ - **eotTimeoutMs** (500-10000): Maximum time to wait after speech before finalizing turn. Only available with Flux models. Default is 5000ms.
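+
+ For example, a fallback entry using a Flux model with the end-of-turn settings above might look like this (the threshold and timeout values are illustrative):
+
+ ```json
+ {
+   "provider": "deepgram",
+   "model": "flux-general-en",
+   "language": "en",
+   "eotThreshold": 0.7,
+   "eotTimeoutMs": 3000
+ }
+ ```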
+
+
+ - **language**: Language code (`multi` for multilingual, `en` for English).
+ - **speechModel**: Streaming speech model (`universal-streaming-english` or `universal-streaming-multilingual`).
+ - **wordBoost**: Custom vocabulary array (up to 2500 characters total).
+ - **keytermsPrompt**: Array of keyterms for improved recognition (up to 100 terms, 50 characters each). Costs additional $0.04/hour.
+ - **endUtteranceSilenceThreshold**: Duration of silence in milliseconds to detect end of utterance.
+ - **disablePartialTranscripts** (boolean): Set to `true` to disable partial transcripts.
+ - **confidenceThreshold** (0-1): Minimum confidence threshold for accepting transcriptions. Default is 0.4.
+ - **vadAssistedEndpointingEnabled** (boolean): Enable VAD-based endpoint detection.
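+
+ As an illustration, an AssemblyAI fallback entry combining several of these settings might look like this (values are illustrative):
+
+ ```json
+ {
+   "provider": "assembly-ai",
+   "speechModel": "universal-streaming-english",
+   "language": "en",
+   "confidenceThreshold": 0.4,
+   "vadAssistedEndpointingEnabled": true
+ }
+ ```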
+
+
+ - **language**: Language code in BCP-47 format (e.g., `en-US`, `es-MX`, `fr-FR`).
+ - **segmentationSilenceTimeoutMs** (100-5000): Duration of silence after which a phrase is finalized. Configure to adjust sensitivity to pauses.
+ - **segmentationMaximumTimeMs** (20000-70000): Maximum duration a segment can reach before being cut off.
+ - **segmentationStrategy**: Controls phrase boundary detection. Options: `Default`, `Time`, or `Semantic`.
+
+
+ - **model**: Model selection (`fast`, `accurate`, or `solaria-1`).
+ - **language**: Language code.
+ - **confidenceThreshold** (0-1): Minimum confidence for transcription acceptance. Default is 0.4.
+ - **endpointing** (0.01-10): Time in seconds to wait before considering speech ended.
+ - **speechThreshold** (0-1): Speech detection sensitivity.
+ - **prosody** (boolean): Enable prosody detection (laugh, giggle, music, etc.).
+ - **audioEnhancer** (boolean): Pre-process audio for improved accuracy (increases latency).
+ - **transcriptionHint**: Hint text to guide transcription.
+ - **customVocabularyEnabled** (boolean): Enable custom vocabulary.
+ - **customVocabularyConfig**: Custom vocabulary configuration with vocabulary array and default intensity.
+ - **region**: Processing region (`us-west` or `eu-west`).
+ - **receivePartialTranscripts** (boolean): Enable partial transcript delivery.
+
+
+ - **model**: Model selection (currently only `default`).
+ - **language**: Language code.
+ - **operatingPoint**: Accuracy level. `standard` for faster turnaround, `enhanced` for highest accuracy. Default is `enhanced`.
+ - **region**: Processing region (`eu` for Europe, `us` for United States). Default is `eu`.
+ - **enableDiarization** (boolean): Enable speaker identification for multi-speaker conversations.
+ - **maxDelayMs**: Maximum delay in milliseconds for partial transcripts. Balances latency and accuracy.
+
+
+ - **model**: Gemini model selection.
+ - **language**: Language selection (e.g., `Multilingual`, `English`, `Spanish`, `French`).
+
+
+ - **model**: OpenAI Realtime STT model selection (required).
+ - **language**: Language code for transcription.
+
+
+ - **model**: Model selection (currently only `scribe_v1`).
+ - **language**: ISO 639-1 language code.
+
+
+ - **model**: Model selection (currently only `ink-whisper`).
+ - **language**: ISO 639-1 language code.
+
+
+
+## Best practices
+
+- Use **different providers** for fallbacks to protect against provider-wide outages.
+- Consider **language compatibility** when selecting fallbacks—ensure all fallback transcribers support your required languages.
+- Test your fallback configuration to ensure smooth transitions between transcribers.
+- For **HIPAA/PCI compliance**, ensure all fallbacks are compliant providers (Deepgram or Azure).
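+
+For example, a HIPAA-friendly setup might pair a Deepgram primary with an Azure fallback (the model choice below is illustrative):
+
+```json
+{
+  "transcriber": {
+    "provider": "deepgram",
+    "model": "nova-3-medical",
+    "language": "en",
+    "fallbackPlan": {
+      "transcribers": [
+        {
+          "provider": "azure",
+          "language": "en-US"
+        }
+      ]
+    }
+  }
+}
+```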
+
+## FAQ
+
+
+
+ All major transcriber providers are supported: Deepgram, AssemblyAI, Azure, Gladia, Google, Speechmatics, Cartesia, ElevenLabs, and OpenAI.
+
+
+ There are no additional fees for using fallback transcribers. You are billed only for the transcriber that processes the audio.
+
+
+ Failover typically occurs within milliseconds of detecting a failure, ensuring minimal disruption to the call.
+
+
+ Yes, each fallback transcriber can have its own language configuration. However, for the best user experience, we recommend using the same or similar languages across all fallbacks.
+
+
diff --git a/fern/docs.yml b/fern/docs.yml
index f2f0131cd..504cff71d 100644
--- a/fern/docs.yml
+++ b/fern/docs.yml
@@ -160,8 +160,10 @@ navigation:
path: customization/speech-configuration.mdx
- page: Voice pipeline configuration
path: customization/voice-pipeline-configuration.mdx
- - page: Voice fallback plan
+ - page: Voice fallback configuration
path: voice-fallback-plan.mdx
+ - page: Transcriber fallback configuration
+ path: customization/transcriber-fallback-plan.mdx
- page: OpenAI realtime speech-to-speech
path: openai-realtime.mdx
- page: Provider keys
diff --git a/fern/voice-fallback-plan.mdx b/fern/voice-fallback-plan.mdx
index 7b8836e46..c735359d4 100644
--- a/fern/voice-fallback-plan.mdx
+++ b/fern/voice-fallback-plan.mdx
@@ -1,35 +1,56 @@
---
-title: Voice Fallback Plan
+title: Voice fallback configuration
subtitle: Configure fallback voices that activate automatically if your primary voice fails.
slug: voice-fallback-plan
---
-
- Voice fallback plans can currently only be configured through the API. We are working on making this available through our dashboard.
-
-
-## Introduction
+## Overview
-Voice fallback plans give you the ability to continue your call in the event that your primary voice fails. Your assistant will sequentially fallback to only the voices you configure within your plan, in the exact order you specify.
+Voice fallback configuration lets your call continue in the event that your primary voice fails. Your assistant falls back sequentially to only the voices you configure within your plan, in the exact order you specify.
Without a fallback plan configured, your call will end with an error in the event that your chosen voice provider fails.
-## How It Works
+## How it works
When a voice failure occurs, Vapi will:
1. Detect the failure of the primary voice
2. If a custom fallback plan exists:
- - Switch to the first fallback voice in your plan
- - Continue through your specified list if subsequent failures occur
- - Terminate only if all voices in your plan have failed
+ - Switch to the first fallback voice in your plan
+ - Continue through your specified list if subsequent failures occur
+ - Terminate only if all voices in your plan have failed
-## Configuration
+## Configure via Dashboard
+
+
+
+ Navigate to your assistant and select the **Voice** tab.
+
+
+ Scroll down to find the **Fallback Voices** collapsible section. A warning indicator appears if no fallback voices are configured.
+
+
+ Click **Add Fallback Voice** to configure your first fallback:
+ - Select a **provider** from the dropdown (supports 20+ voice providers)
+ - Choose a **voice** from the searchable popover (shows gender, language, and deprecated status)
+ - The **model** is automatically selected based on your voice choice
+
+
+ Expand **Additional Configuration** to access provider-specific settings like stability, speed, and emotion controls.
+
+
+ Repeat to add additional fallback voices. Order matters—the first fallback in your list is tried first.
+
+
+
+## Configure via API
Add the `fallbackPlan` property to your assistant's voice configuration, and specify the fallback voices within the `voices` property.
-- Please note that fallback voices must be valid JSON configurations, and not strings.
-- The order matters. Vapi will choose fallback voices starting from the beginning of the list.
+
+
+ Fallback voices must be valid JSON configurations, not strings. The order matters—Vapi will choose fallback voices starting from the beginning of the list.
+
```json
{
@@ -37,26 +58,125 @@ Add the `fallbackPlan` property to your assistant's voice configuration, and spe
"provider": "openai",
"voiceId": "shimmer",
"fallbackPlan": {
- "voices": [
- {
- "provider": "cartesia",
- "voiceId": "248be419-c632-4f23-adf1-5324ed7dbf1d"
- },
- {
- "provider": "11labs",
- "voiceId": "cgSgspJ2msm6clMCkdW9"
- }
- ]
+ "voices": [
+ {
+ "provider": "cartesia",
+ "voiceId": "248be419-c632-4f23-adf1-5324ed7dbf1d"
+ },
+ {
+ "provider": "11labs",
+ "voiceId": "cgSgspJ2msm6clMCkdW9",
+ "stability": 0.5,
+ "similarityBoost": 0.75
+ }
+ ]
}
}
}
```
+## Provider-specific settings
+
+Each voice provider supports different configuration options. Expand the accordion below to see available settings for each provider.
+
+
+
+ - **stability** (0-1): Controls voice consistency. Lower values allow more emotional range; higher values produce more stable output.
+ - **similarityBoost** (0-1): Enhances similarity to the original voice. Higher values make the voice more similar to the reference.
+ - **style** (0-1): Voice style intensity. Higher values amplify the speaker's style.
+ - **useSpeakerBoost** (boolean): Enable to boost similarity to the original speaker.
+ - **speed** (0.7-1.2): Speech speed multiplier. Default is 1.0.
+ - **optimizeStreamingLatency** (0-4): Controls streaming latency optimization. Default is 3.
+ - **enableSsmlParsing** (boolean): Enable SSML pronunciation support.
+ - **model**: Select from `eleven_multilingual_v2`, `eleven_turbo_v2`, `eleven_turbo_v2_5`, `eleven_flash_v2`, `eleven_flash_v2_5`, or `eleven_monolingual_v1`.
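+
+ As an illustration, an ElevenLabs fallback voice entry using these settings might look like this (the `voiceId` is a placeholder, values are illustrative):
+
+ ```json
+ {
+   "provider": "11labs",
+   "voiceId": "your-11labs-voice-id",
+   "model": "eleven_turbo_v2_5",
+   "stability": 0.5,
+   "similarityBoost": 0.75,
+   "speed": 1.0
+ }
+ ```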
+
+
+ - **model**: Model selection (`sonic-english`, `sonic-3`, etc.).
+ - **language**: Language code for the voice.
+ - **experimentalControls.speed**: Speech speed adjustment (-1 to 1). Negative values slow down; positive values speed up.
+ - **experimentalControls.emotion**: Array of emotion configurations (e.g., `["happiness:high", "curiosity:medium"]`).
+ - **generationConfig** (sonic-3 only):
+ - **speed** (0.6-1.5): Fine-grained speed control.
+ - **volume** (0.5-2.0): Volume adjustment.
+ - **experimental.accentLocalization** (0 or 1): Toggle accent localization.
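+
+ For example, a Cartesia fallback voice using the `sonic-3` generation controls above might look like this (the `voiceId` is a placeholder, values are illustrative):
+
+ ```json
+ {
+   "provider": "cartesia",
+   "voiceId": "your-cartesia-voice-id",
+   "model": "sonic-3",
+   "generationConfig": {
+     "speed": 1.1,
+     "volume": 1.0
+   }
+ }
+ ```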
+
+
+ - **speed** (0.5-2): Speech rate multiplier. Default is 1.0.
+
+
+ - **speed** (0.25-4): Speech speed multiplier. Default is 1.0.
+ - **model**: Select from `tts-1`, `tts-1-hd`, or realtime models.
+ - **instructions**: Voice prompt to control the generated audio style. Does not work with `tts-1` or `tts-1-hd` models.
+
+
+ - **speed** (0.25-2): Speech rate multiplier. Default is 1.0.
+ - **language**: Two-letter ISO 639-1 language code, or `auto` for auto-detection.
+
+
+ - **model**: Select from `arcana`, `mistv2`, or `mist`. Defaults to `arcana`.
+ - **speed** (0.1+): Speech speed multiplier.
+ - **pauseBetweenBrackets** (boolean): Enable pause control using angle brackets (e.g., `<200>` for 200ms pause).
+ - **phonemizeBetweenBrackets** (boolean): Enable phonemization using curly brackets (e.g., `{h'El.o}`).
+ - **reduceLatency** (boolean): Optimize for reduced streaming latency.
+ - **inlineSpeedAlpha**: Inline speed control using alpha notation.
+
+
+ - **speed** (0.1-5): Speech rate multiplier.
+ - **temperature** (0.1-2): Controls voice variance. Lower values are more predictable; higher values allow more variation.
+ - **emotion**: Emotion preset (e.g., `female_happy`, `male_sad`, `female_angry`, `male_surprised`).
+ - **voiceGuidance** (1-6): Controls voice uniqueness. Lower values reduce uniqueness.
+ - **styleGuidance** (1-30): Controls emotion intensity. Higher values create more emotional performance.
+ - **textGuidance** (1-2): Controls text adherence. Higher values are more accurate to input text.
+ - **model**: Select from `PlayHT2.0`, `PlayHT2.0-turbo`, `Play3.0-mini`, or `PlayDialog`.
+
+
+ - **model**: Select from `aura` or `aura-2`. Defaults to `aura-2`.
+ - **mipOptOut** (boolean): Opt out of the Deepgram Model Improvement Partnership program.
+
+
+ - **model**: Model selection (e.g., `octave2`).
+ - **description**: Natural language instructions describing how the speech should sound (tone, intonation, pacing, accent).
+ - **isCustomHumeVoice** (boolean): Indicates whether using a custom Hume voice.
+
+
+ - **model**: Select from `speech-02-hd` (high-fidelity) or `speech-02-turbo` (low latency). Defaults to `speech-02-turbo`.
+ - **emotion**: Emotion preset (`happy`, `sad`, `angry`, `fearful`, `surprised`, `disgusted`, `neutral`).
+ - **pitch** (-12 to 12): Voice pitch adjustment in semitones.
+ - **speed** (0.5-2): Speech speed adjustment.
+ - **volume** (0.5-2): Volume adjustment.
+
+
+ - **model**: Model selection.
+ - **enableSsml** (boolean): Enable limited SSML translation for input text.
+ - **libraryIds**: Array of library IDs to use for voice synthesis.
+
+
+ - **model**: Model selection (e.g., `neu_fast`).
+ - **language**: Language code (required).
+ - **speed** (0.25-2): Speech speed multiplier.
+
+
+ - **model**: Model selection (e.g., `lightning`).
+ - **speed**: Speech speed multiplier.
+
+
+
## Best practices
-- Use different providers for your fallback voices to protect against provider-wide outages.
+- Use **different providers** for your fallback voices to protect against provider-wide outages.
- Select voices with **similar characteristics** (tone, accent, gender) to maintain consistency in the user experience.
+- Test your fallback configuration to ensure smooth transitions between voices.
-## How will pricing work?
+## FAQ
-There is no change to the pricing of the voices. Your call will not incur any extra fees while using fallback voices, and you will be able to see the cost for each voice in your end-of-call report.
+
+
+ There is no change to the pricing of the voices. Your call will not incur any extra fees while using fallback voices, and you will be able to see the cost for each voice in your end-of-call report.
+
+
+ You can configure as many fallback voices as you need. However, we recommend 2-3 fallbacks from different providers for optimal reliability.
+
+
+ Users may notice a brief pause and a change in voice characteristics when switching to a fallback voice. Selecting voices with similar properties helps minimize this disruption.
+
+