VoiceRouter

Voice Router SDK - AssemblyAI Provider

adapters/assemblyai-adapter

Classes

AssemblyAIAdapter

AssemblyAI transcription provider adapter

Implements transcription for the AssemblyAI API with support for:

  • Synchronous and asynchronous transcription
  • Speaker diarization (speaker labels)
  • Multi-language detection and transcription
  • Summarization and sentiment analysis
  • Entity detection and content moderation
  • Custom vocabulary and spelling
  • Word-level timestamps
  • PII redaction

See

https://www.assemblyai.com/docs AssemblyAI API Documentation

Examples

// Basic transcription with speaker diarization
import { AssemblyAIAdapter } from '@meeting-baas/sdk';

const adapter = new AssemblyAIAdapter();
adapter.initialize({
  apiKey: process.env.ASSEMBLYAI_API_KEY
});

const result = await adapter.transcribe({
  type: 'url',
  url: 'https://example.com/audio.mp3'
}, {
  language: 'en',
  diarization: true
});

console.log(result.data.text);
console.log(result.data.speakers);

// With audio-intelligence features enabled
const result = await adapter.transcribe(audio, {
  language: 'en_us',
  diarization: true,
  summarization: true,
  sentimentAnalysis: true,
  entityDetection: true,
  piiRedaction: true
});

console.log('Summary:', result.data.summary);
console.log('Entities:', result.data.metadata?.entities);

Extends

BaseAdapter

Methods

buildStreamingUrl()

private buildStreamingUrl(options?): string

Build WebSocket URL with all streaming parameters

Parameters
Parameter | Type
options? | StreamingOptions
Returns

string
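
The resulting URL is the wsBaseUrl plus query parameters derived from the streaming options. A hypothetical sketch of that mapping (the option and query-parameter names here are illustrative, not the adapter's exact mapping):

```typescript
// Illustrative subset of streaming options; not the SDK's StreamingOptions type.
interface StreamingOptionsSketch {
  sampleRate?: number;
  encoding?: string;
  formatTurns?: boolean;
}

// Build a WebSocket URL by appending only the options that were provided.
function buildStreamingUrlSketch(
  base: string,
  options?: StreamingOptionsSketch
): string {
  const params = new URLSearchParams();
  if (options?.sampleRate !== undefined) {
    params.set('sample_rate', String(options.sampleRate));
  }
  if (options?.encoding !== undefined) {
    params.set('encoding', options.encoding);
  }
  if (options?.formatTurns !== undefined) {
    params.set('format_turns', String(options.formatTurns));
  }
  const query = params.toString();
  return query ? `${base}?${query}` : base;
}
```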

buildTranscriptionRequest()

private buildTranscriptionRequest(audio, options?): TranscriptParams

Build AssemblyAI transcription request from unified options

Parameters
Parameter | Type
audio | AudioInput
options? | TranscribeOptions
Returns

TranscriptParams

createErrorResponse()

protected createErrorResponse(error, statusCode?, code?): UnifiedTranscriptResponse

Helper method to create error responses with stack traces

Parameters
Parameter | Type | Description
error | unknown | Error object or unknown error
statusCode? | number | Optional HTTP status code
code? | ErrorCode | Optional error code (defaults to extracted or UNKNOWN_ERROR)
Returns

UnifiedTranscriptResponse

Inherited from

BaseAdapter.createErrorResponse

deleteTranscript()

deleteTranscript(transcriptId): Promise<{ success: boolean; }>

Delete a transcription and its associated data

Removes the transcription data from AssemblyAI's servers. This action is irreversible. The transcript will be marked as deleted and its content will no longer be accessible.

Parameters
Parameter | Type | Description
transcriptId | string | The ID of the transcript to delete
Returns

Promise<{ success: boolean; }>

Promise with success status

Example
const result = await adapter.deleteTranscript('abc123');
if (result.success) {
  console.log('Transcript deleted successfully');
}

See

https://www.assemblyai.com/docs/api-reference/transcripts/delete

extractSpeakers()

private extractSpeakers(transcript): object[] | undefined

Extract speaker information from AssemblyAI response

Parameters
Parameter | Type
transcript | Transcript
Returns

object[] | undefined

extractUtterances()

private extractUtterances(transcript): object[] | undefined

Extract utterances from AssemblyAI response

Parameters
Parameter | Type
transcript | Transcript
Returns

object[] | undefined

extractWords()

private extractWords(transcript): object[] | undefined

Extract word timestamps from AssemblyAI response

Parameters
Parameter | Type
transcript | Transcript
Returns

object[] | undefined

getAxiosConfig()

protected getAxiosConfig(): object

Get axios config for generated API client functions. Configures headers and base URL using the authorization header.

Returns

object

baseURL

baseURL: string

headers

headers: Record<string, string>

timeout

timeout: number

Overrides

BaseAdapter.getAxiosConfig

getTranscript()

getTranscript(transcriptId): Promise<UnifiedTranscriptResponse<TranscriptionProvider>>

Get transcription result by ID

Parameters
Parameter | Type
transcriptId | string
Returns

Promise<UnifiedTranscriptResponse<TranscriptionProvider>>

Overrides

BaseAdapter.getTranscript
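
getTranscript returns the current state of an async job; the documented statuses are queued, processing, completed, and error. A small illustrative helper (not part of the SDK) for deciding when a poll loop can stop:

```typescript
// The four documented transcript statuses.
type TranscriptStatusValue = 'queued' | 'processing' | 'completed' | 'error';

// Only completed and error are terminal; queued/processing mean "keep polling".
function isTerminalStatus(status: TranscriptStatusValue): boolean {
  return status === 'completed' || status === 'error';
}
```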

handleWebSocketMessage()

private handleWebSocketMessage(message, callbacks?): void

Handle all WebSocket message types from AssemblyAI streaming

Parameters
Parameter | Type
message | StreamingEventMessage
callbacks? | StreamingCallbacks
Returns

void

initialize()

initialize(config): void

Initialize the adapter with configuration

Parameters
Parameter | Type
config | ProviderConfig
Returns

void

Inherited from

BaseAdapter.initialize

listTranscripts()

listTranscripts(options?): Promise<{ transcripts: UnifiedTranscriptResponse<TranscriptionProvider>[]; hasMore?: boolean; total?: number; }>

List recent transcriptions with filtering

Retrieves a list of transcripts with optional filtering by status and date. Transcripts are sorted from newest to oldest and can be retrieved for the last 90 days of usage.

Parameters
Parameter | Type | Description
options? | ListTranscriptsOptions | Filtering and pagination options
Returns

Promise<{ transcripts: UnifiedTranscriptResponse<TranscriptionProvider>[]; hasMore?: boolean; total?: number; }>

List of transcripts with pagination info

Examples
// List completed transcripts
const { transcripts, hasMore } = await adapter.listTranscripts({
  limit: 50,
  status: 'completed'
});

// Filter by creation date
const { transcripts } = await adapter.listTranscripts({
  date: '2026-01-07',
  limit: 100
});

// Provider-specific pagination
const { transcripts } = await adapter.listTranscripts({
  assemblyai: {
    after_id: 'abc123',  // Get transcripts after this ID
    limit: 50
  }
});

See

https://www.assemblyai.com/docs/api-reference/transcripts/list

normalizeListItem()

private normalizeListItem(item): UnifiedTranscriptResponse

Normalize a transcript list item to unified format

Parameters
Parameter | Type
item | TranscriptListItem
Returns

UnifiedTranscriptResponse

normalizeResponse()

private normalizeResponse(response): UnifiedTranscriptResponse<"assemblyai">

Normalize AssemblyAI response to unified format

Parameters
Parameter | Type
response | Transcript
Returns

UnifiedTranscriptResponse<"assemblyai">

pollForCompletion()

protected pollForCompletion(transcriptId, options?): Promise<UnifiedTranscriptResponse<TranscriptionProvider>>

Generic polling helper for async transcription jobs

Polls getTranscript() until job completes or times out.

Parameters
Parameter | Type | Description
transcriptId | string | Job/transcript ID to poll
options? | { intervalMs?: number; maxAttempts?: number; } | Polling configuration
options.intervalMs? | number | -
options.maxAttempts? | number | -
Returns

Promise<UnifiedTranscriptResponse<TranscriptionProvider>>

Final transcription result

Inherited from

BaseAdapter.pollForCompletion
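
The polling loop described above can be sketched as follows; pollUntilComplete and its fetchStatus parameter are illustrative stand-ins, not the SDK's actual signatures:

```typescript
// Illustrative polling options mirroring the documented intervalMs/maxAttempts shape.
interface PollOptions {
  intervalMs?: number;
  maxAttempts?: number;
}

// Repeatedly fetch a job's status until it reaches a terminal state or times out.
async function pollUntilComplete<T extends { status: string }>(
  fetchStatus: () => Promise<T>,
  { intervalMs = 1000, maxAttempts = 60 }: PollOptions = {}
): Promise<T> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const result = await fetchStatus();
    // Stop on a terminal status; otherwise wait and retry.
    if (result.status === 'completed' || result.status === 'error') {
      return result;
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Polling timed out after ${maxAttempts} attempts`);
}
```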

transcribe()

transcribe(audio, options?): Promise<UnifiedTranscriptResponse<TranscriptionProvider>>

Submit audio for transcription

Sends audio to AssemblyAI API for transcription. If a webhook URL is provided, returns immediately with the job ID. Otherwise, polls until completion.

Parameters
Parameter | Type | Description
audio | AudioInput | Audio input (currently only URL type supported)
options? | TranscribeOptions | Transcription options
Returns

Promise<UnifiedTranscriptResponse<TranscriptionProvider>>

Normalized transcription response

Throws

If audio type is not 'url' (file/stream not yet supported)

Examples
// Minimal transcription
const result = await adapter.transcribe({
  type: 'url',
  url: 'https://example.com/meeting.mp3'
});

// With advanced features enabled
const result = await adapter.transcribe({
  type: 'url',
  url: 'https://example.com/meeting.mp3'
}, {
  language: 'en_us',
  diarization: true,
  speakersExpected: 3,
  summarization: true,
  sentimentAnalysis: true,
  entityDetection: true,
  customVocabulary: ['API', 'TypeScript', 'JavaScript']
});

// Submit transcription with webhook
const result = await adapter.transcribe({
  type: 'url',
  url: 'https://example.com/meeting.mp3'
}, {
  webhookUrl: 'https://myapp.com/webhook/transcription',
  language: 'en_us'
});

// Get transcript ID for polling
const transcriptId = result.data?.id;
console.log('Transcript ID:', transcriptId); // Use this to poll for status

// Later: Poll for completion (if webhook fails or you want to check)
const status = await adapter.getTranscript(transcriptId);
if (status.data?.status === 'completed') {
  console.log('Transcript:', status.data.text);
}

Overrides

BaseAdapter.transcribe

transcribeStream()

transcribeStream(options?, callbacks?): Promise<StreamingSession & object>

Stream audio for real-time transcription

Creates a WebSocket connection to AssemblyAI for streaming transcription. Uses the v3 Universal Streaming API with full support for all parameters.

Supports all AssemblyAI streaming features:

  • Real-time transcription with interim/final results (Turn events)
  • End-of-turn detection tuning (confidence threshold, silence duration)
  • Voice Activity Detection (VAD) threshold tuning
  • Real-time text formatting
  • Profanity filtering
  • Custom vocabulary (keyterms)
  • Language detection
  • Model selection (English or Multilingual)
  • Dynamic configuration updates mid-stream
  • Force endpoint command
Parameters
Parameter | Type | Description
options? | StreamingOptions | Streaming configuration options
callbacks? | StreamingCallbacks | Event callbacks for transcription results
Returns

Promise<StreamingSession & object>

Promise that resolves with an extended StreamingSession

Examples
const session = await adapter.transcribeStream({
  sampleRate: 16000,
  encoding: 'linear16' // Unified format - mapped to AssemblyAI's pcm_s16le
}, {
  onOpen: () => console.log('Connected'),
  onTranscript: (event) => {
    if (event.isFinal) {
      console.log('Final:', event.text);
    } else {
      console.log('Interim:', event.text);
    }
  },
  onError: (error) => console.error('Error:', error),
  onClose: () => console.log('Disconnected')
});

// Send audio chunks
const audioChunk = getAudioChunk();
await session.sendAudio({ data: audioChunk });

// Close when done
await session.close();

// Advanced configuration with AssemblyAI-specific streaming options
const session = await adapter.transcribeStream({
  sampleRate: 16000,
  assemblyaiStreaming: {
    speechModel: 'universal-streaming-multilingual',
    languageDetection: true,
    endOfTurnConfidenceThreshold: 0.7,
    minEndOfTurnSilenceWhenConfident: 500,
    maxTurnSilence: 15000,
    vadThreshold: 0.3,
    formatTurns: true,
    filterProfanity: true,
    keyterms: ['TypeScript', 'JavaScript', 'API'],
    inactivityTimeout: 60000
  }
}, {
  onTranscript: (e) => console.log('Transcript:', e.text),
  onMetadata: (m) => console.log('Metadata:', m)
});

// Update configuration mid-stream
session.updateConfiguration?.({
  end_of_turn_confidence_threshold: 0.5,
  vad_threshold: 0.2
});

// Force endpoint detection
session.forceEndpoint?.();

validateConfig()

protected validateConfig(): void

Helper method to validate configuration

Returns

void

Inherited from

BaseAdapter.validateConfig

Constructors

Constructor

new AssemblyAIAdapter(): AssemblyAIAdapter

Returns

AssemblyAIAdapter

Inherited from

BaseAdapter.constructor

Properties

baseUrl

protected baseUrl: string = "https://api.assemblyai.com"

Base URL for provider API (must be defined by subclass)

Overrides

BaseAdapter.baseUrl

capabilities

readonly capabilities: ProviderCapabilities

Provider capabilities

Overrides

BaseAdapter.capabilities

name

readonly name: "assemblyai"

Provider name

Overrides

BaseAdapter.name

wsBaseUrl

private wsBaseUrl: string = "wss://streaming.assemblyai.com/v3/ws"

config?

protected optional config: ProviderConfig

Inherited from

BaseAdapter.config

Functions

createAssemblyAIAdapter()

createAssemblyAIAdapter(config): AssemblyAIAdapter

Factory function to create an AssemblyAI adapter

Parameters

Parameter | Type
config | ProviderConfig

Returns

AssemblyAIAdapter

createTemporaryToken()

createTemporaryToken<TData>(createRealtimeTemporaryTokenParams, options?): Promise<TData>

<Warning>Streaming Speech-to-Text is currently not available on the EU endpoint.</Warning> <Note>Any usage associated with a temporary token will be attributed to the API key that generated it.</Note> Create a temporary authentication token for Streaming Speech-to-Text

Type Parameters

Type Parameter | Default type
TData | AxiosResponse<RealtimeTemporaryTokenResponse, any>

Parameters

Parameter | Type
createRealtimeTemporaryTokenParams | CreateRealtimeTemporaryTokenParams
options? | AxiosRequestConfig<any>

Returns

Promise<TData>

createTranscript()

createTranscript<TData>(transcriptParams, options?): Promise<TData>

<Note>To use our EU server for transcription, replace api.assemblyai.com with api.eu.assemblyai.com.</Note> Create a transcript from a media file that is accessible via a URL.

Type Parameters

Type Parameter | Default type
TData | AxiosResponse<Transcript, any>

Parameters

Parameter | Type
transcriptParams | TranscriptParams
options? | AxiosRequestConfig<any>

Returns

Promise<TData>

deleteTranscriptAPI()

deleteTranscriptAPI<TData>(transcriptId, options?): Promise<TData>

<Note>To delete your transcriptions on our EU server, replace api.assemblyai.com with api.eu.assemblyai.com.</Note> Remove the data from the transcript and mark it as deleted. <Warning>Files uploaded via the /upload endpoint are immediately deleted alongside the transcript when you make a DELETE request, ensuring your data is removed from our systems right away.</Warning>

Type Parameters

Type Parameter | Default type
TData | AxiosResponse<Transcript, any>

Parameters

Parameter | Type
transcriptId | string
options? | AxiosRequestConfig<any>

Returns

Promise<TData>

getTranscriptAPI()

getTranscriptAPI<TData>(transcriptId, options?): Promise<TData>

<Note>To retrieve your transcriptions on our EU server, replace api.assemblyai.com with api.eu.assemblyai.com.</Note> Get the transcript resource. The transcript is ready when the "status" is "completed".

Type Parameters

Type Parameter | Default type
TData | AxiosResponse<Transcript, any>

Parameters

Parameter | Type
transcriptId | string
options? | AxiosRequestConfig<any>

Returns

Promise<TData>

listTranscriptsAPI()

listTranscriptsAPI<TData>(params?, options?): Promise<TData>

<Note>To retrieve your transcriptions on our EU server, replace api.assemblyai.com with api.eu.assemblyai.com.</Note> Retrieve a list of transcripts you created. Transcripts are sorted from newest to oldest and can be retrieved for the last 90 days of usage. The previous URL always points to a page with older transcripts.

If you need to retrieve transcripts from more than 90 days ago please reach out to our Support team at support@assemblyai.com.

Type Parameters

Type Parameter | Default type
TData | AxiosResponse<TranscriptList, any>

Parameters

Parameter | Type
params? | ListTranscriptsParams
options? | AxiosRequestConfig<any>

Returns

Promise<TData>

Interfaces

Transcript

A transcript object

Properties

acoustic_model

acoustic_model: string

The acoustic model that was used for the transcript

Deprecated

audio_url

audio_url: string

The URL of the media that was transcribed

auto_highlights

auto_highlights: boolean

Whether Key Phrases is enabled, either true or false

id

id: string

The unique identifier of your transcript

language_confidence

language_confidence: TranscriptLanguageConfidence

The confidence score for the detected language, between 0.0 (low confidence) and 1.0 (high confidence)

Minimum

0

Maximum

1

language_confidence_threshold

language_confidence_threshold: TranscriptLanguageConfidenceThreshold

The confidence threshold for the automatically detected language. An error will be returned if the language confidence is below this threshold.

Minimum

0

Maximum

1

language_model

language_model: string

The language model that was used for the transcript

Deprecated

redact_pii

redact_pii: boolean

Whether PII Redaction is enabled, either true or false

speech_model

speech_model: TranscriptSpeechModel

The speech model used for the transcription. When null, the default model is used.

status

status: TranscriptStatus

The status of your transcript. Possible values are queued, processing, completed, or error.

summarization

summarization: boolean

Whether Summarization is enabled, either true or false

webhook_auth

webhook_auth: boolean

Whether webhook authentication details were provided

audio_channels?

optional audio_channels: number

The number of audio channels in the audio file. This is only present when multichannel is enabled.

audio_duration?

optional audio_duration: TranscriptAudioDuration

The duration of this transcript object's media file, in seconds

audio_end_at?

optional audio_end_at: TranscriptAudioEndAt

The point in time, in milliseconds, in the file at which the transcription was terminated

audio_start_from?

optional audio_start_from: TranscriptAudioStartFrom

The point in time, in milliseconds, in the file at which the transcription was started

auto_chapters?

optional auto_chapters: TranscriptAutoChapters

Whether Auto Chapters is enabled, can be true or false

auto_highlights_result?

optional auto_highlights_result: TranscriptAutoHighlightsResult

An array of results for the Key Phrases model, if it is enabled. See Key Phrases for more information.

boost_param?

optional boost_param: TranscriptBoostParamProperty

The word boost parameter value

chapters?

optional chapters: TranscriptChapters

An array of temporally sequential chapters for the audio file

confidence?

optional confidence: TranscriptConfidence

The confidence score for the transcript, between 0.0 (low confidence) and 1.0 (high confidence)

Minimum

0

Maximum

1

content_safety?

optional content_safety: TranscriptContentSafety

Whether Content Moderation is enabled, can be true or false

content_safety_labels?

optional content_safety_labels: TranscriptContentSafetyLabels

An array of results for the Content Moderation model, if it is enabled. See Content moderation for more information.

custom_spelling?

optional custom_spelling: TranscriptCustomSpellingProperty

Customize how words are spelled and formatted using from and to values

custom_topics?

optional custom_topics: TranscriptCustomTopics

Whether custom topics is enabled, either true or false

Deprecated

disfluencies?

optional disfluencies: TranscriptDisfluencies

Transcribe Filler Words, like "umm", in your media file; can be true or false

entities?

optional entities: TranscriptEntities

An array of results for the Entity Detection model, if it is enabled. See Entity detection for more information.

entity_detection?

optional entity_detection: TranscriptEntityDetection

Whether Entity Detection is enabled, can be true or false

error?

optional error: string

Error message of why the transcript failed

filter_profanity?

optional filter_profanity: TranscriptFilterProfanity

Whether Profanity Filtering is enabled, either true or false

format_text?

optional format_text: TranscriptFormatText

Whether Text Formatting is enabled, either true or false

iab_categories?

optional iab_categories: TranscriptIabCategories

Whether Topic Detection is enabled, can be true or false

iab_categories_result?

optional iab_categories_result: TranscriptIabCategoriesResult

The result of the Topic Detection model, if it is enabled. See Topic Detection for more information.

keyterms_prompt?

optional keyterms_prompt: string[]

Improve accuracy with up to 1000 domain-specific words or phrases (maximum 6 words per phrase).

language_code?

optional language_code: string

The language of your audio file. Possible values are found in Supported Languages. The default value is 'en_us'.

language_detection?

optional language_detection: TranscriptLanguageDetection

Whether Automatic language detection is enabled, either true or false

multichannel?

optional multichannel: TranscriptMultichannel

Whether Multichannel transcription was enabled in the transcription request, either true or false

prompt?

optional prompt: string

This parameter does not currently have any functionality attached to it.

Deprecated

punctuate?

optional punctuate: TranscriptPunctuate

Whether Automatic Punctuation is enabled, either true or false

redact_pii_audio?

optional redact_pii_audio: TranscriptRedactPiiAudio

Whether a redacted version of the audio file was generated, either true or false. See PII redaction for more information.

redact_pii_audio_quality?

optional redact_pii_audio_quality: TranscriptRedactPiiAudioQuality

The audio quality of the PII-redacted audio file, if redact_pii_audio is enabled. See PII redaction for more information.

redact_pii_policies?

optional redact_pii_policies: TranscriptRedactPiiPolicies

The list of PII Redaction policies that were enabled, if PII Redaction is enabled. See PII redaction for more information.

redact_pii_sub?

optional redact_pii_sub: SubstitutionPolicy

The replacement logic for detected PII, can be "entity_type" or "hash". See PII redaction for more details.

sentiment_analysis?

optional sentiment_analysis: TranscriptSentimentAnalysis

Whether Sentiment Analysis is enabled, can be true or false

sentiment_analysis_results?

optional sentiment_analysis_results: TranscriptSentimentAnalysisResults

An array of results for the Sentiment Analysis model, if it is enabled. See Sentiment Analysis for more information.

speaker_labels?

optional speaker_labels: TranscriptSpeakerLabels

Whether Speaker diarization is enabled, can be true or false

speakers_expected?

optional speakers_expected: TranscriptSpeakersExpected

Tell the speaker label model how many speakers it should attempt to identify. See Speaker diarization for more details.

speech_threshold?

optional speech_threshold: TranscriptSpeechThreshold

Defaults to null. Reject audio files that contain less than this fraction of speech. Valid values are in the range [0, 1] inclusive.

Minimum

0

Maximum

1

speed_boost?

optional speed_boost: TranscriptSpeedBoost

Whether speed boost is enabled

Deprecated

summary?

optional summary: TranscriptSummary

The generated summary of the media file, if Summarization is enabled

summary_model?

optional summary_model: TranscriptSummaryModel

The Summarization model used to generate the summary, if Summarization is enabled

summary_type?

optional summary_type: TranscriptSummaryType

The type of summary generated, if Summarization is enabled

text?

optional text: TranscriptText

The textual transcript of your media file

throttled?

optional throttled: TranscriptThrottled

True while a request is throttled and false when a request is no longer throttled

topics?

optional topics: string[]

The list of custom topics provided if custom topics is enabled

utterances?

optional utterances: TranscriptUtterances

When multichannel or speaker_labels is enabled, a list of turn-by-turn utterance objects. See Speaker diarization and Multichannel transcription for more information.

webhook_auth_header_name?

optional webhook_auth_header_name: TranscriptWebhookAuthHeaderName

The header name to be sent with the transcript completed or failed webhook requests

webhook_status_code?

optional webhook_status_code: TranscriptWebhookStatusCode

The status code we received from your server when delivering the transcript completed or failed webhook request, if a webhook URL was provided

webhook_url?

optional webhook_url: TranscriptWebhookUrl

The URL to which we send webhook requests. We send two types of webhook requests: one when a transcript is completed or failed, and one when the redacted audio is ready, if redact_pii_audio is enabled.

word_boost?

optional word_boost: string[]

The list of custom vocabulary to boost transcription probability for

Deprecated

words?

optional words: TranscriptWords

An array of temporally-sequential word objects, one for each word in the transcript. See Speech recognition for more information.

TranscriptListItem

Properties

audio_url

audio_url: string

The URL to the audio file

completed

completed: TranscriptListItemCompleted

The date and time the transcript was completed

Pattern

^(?:(\d{4}-\d{2}-\d{2})T(\d{2}:\d{2}:\d{2}(?:\.\d+)?))$

created

created: string

The date and time the transcript was created

Pattern

^(?:(\d{4}-\d{2}-\d{2})T(\d{2}:\d{2}:\d{2}(?:\.\d+)?))$

error

error: TranscriptListItemError

Error message of why the transcript failed

id

id: string

The unique identifier for the transcript

resource_url

resource_url: string

The URL to retrieve the transcript

status

status: TranscriptStatus

The status of the transcript
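
The created and completed fields follow the timestamp pattern documented above. As a sanity check, that pattern can be exercised as a JavaScript RegExp (the sample timestamp is hypothetical):

```typescript
// The documented pattern for created/completed: date, "T" separator, time with
// optional fractional seconds, anchored to the whole string.
const timestampPattern =
  /^(?:(\d{4}-\d{2}-\d{2})T(\d{2}:\d{2}:\d{2}(?:\.\d+)?))$/;

// A hypothetical created value in that shape.
const created = '2024-03-15T10:30:00.123';
const matches = timestampPattern.test(created);
```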

TranscriptUtterance

Properties

confidence

confidence: number

The confidence score for the transcript of this utterance

Minimum

0

Maximum

1

end

end: number

The ending time, in milliseconds, of the utterance in the audio file

speaker

speaker: string

The speaker of this utterance, where each speaker is assigned a sequential capital letter - e.g. "A" for Speaker A, "B" for Speaker B, etc.

start

start: number

The starting time, in milliseconds, of the utterance in the audio file

text

text: string

The text for this utterance

words

words: TranscriptWord[]

The words in the utterance.

channel?

optional channel: TranscriptUtteranceChannel

The channel of this utterance. The left and right channels are channels 1 and 2. Additional channels increment the channel number sequentially.
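
When diarization is enabled, utterances can be rendered into a readable speaker-labeled transcript. A minimal sketch using only the documented fields (UtteranceSketch and formatUtterances are illustrative names, not SDK exports):

```typescript
// Trimmed to the documented TranscriptUtterance fields used for display.
interface UtteranceSketch {
  speaker: string;   // "A", "B", ... per speaker diarization
  start: number;     // start time in milliseconds
  end: number;       // end time in milliseconds
  text: string;
}

// Render each utterance as "[start] Speaker X: text", one per line.
function formatUtterances(utterances: UtteranceSketch[]): string {
  return utterances
    .map((u) => {
      const seconds = (u.start / 1000).toFixed(1);
      return `[${seconds}s] Speaker ${u.speaker}: ${u.text}`;
    })
    .join('\n');
}
```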

TranscriptWord

Properties

confidence

confidence: number

The confidence score for the transcript of this word

Minimum

0

Maximum

1

end

end: number

The ending time, in milliseconds, for the word

speaker

speaker: TranscriptWordSpeaker

The speaker of the word if Speaker Diarization is enabled, else null

start

start: number

The starting time, in milliseconds, for the word

text

text: string

The text of the word

channel?

optional channel: TranscriptWordChannel

The channel of the word. The left and right channels are channels 1 and 2. Additional channels increment the channel number sequentially.

Type Aliases

BeginEvent

BeginEvent = object

Properties

expires_at

expires_at: number

id

id: string

type

type: "Begin"

ErrorEvent

ErrorEvent = object

Properties

error

error: string

ListTranscriptsParams

ListTranscriptsParams = object

Properties

after_id?

optional after_id: AfterId

Get transcripts that were created after this transcript ID

before_id?

optional before_id: BeforeId

Get transcripts that were created before this transcript ID

created_on?

optional created_on: CreatedOn

Only get transcripts created on this date

limit?

optional limit: Limit

Maximum amount of transcripts to retrieve

status?

optional status: TranscriptStatus

Filter by transcript status

throttled_only?

optional throttled_only: ThrottledOnly

Only get throttled transcripts, overrides the status filter

StreamingEventMessage

StreamingEventMessage = BeginEvent | TurnEvent | TerminationEvent | ErrorEvent
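
Because BeginEvent, TurnEvent, and TerminationEvent share a type discriminant while ErrorEvent carries only an error field, handlers can narrow the union in two steps. A trimmed-down sketch (shapes reduced to the documented fields; describeEvent is illustrative):

```typescript
// Local copies of the documented event shapes, trimmed to the fields used here.
type BeginEvent = { type: 'Begin'; id: string; expires_at: number };
type TurnEvent = { type: 'Turn'; transcript: string; end_of_turn: boolean; turn_order: number };
type TerminationEvent = { type: 'Termination'; audio_duration_seconds: number; session_duration_seconds: number };
type ErrorEvent = { error: string };
type StreamingEventMessage = BeginEvent | TurnEvent | TerminationEvent | ErrorEvent;

function describeEvent(message: StreamingEventMessage): string {
  // ErrorEvent has no type tag, so detect it by its error field first.
  if ('error' in message) {
    return `error: ${message.error}`;
  }
  // The remaining events narrow cleanly on the type discriminant.
  switch (message.type) {
    case 'Begin':
      return `session ${message.id} started`;
    case 'Turn':
      return message.end_of_turn
        ? `final: ${message.transcript}`
        : `interim: ${message.transcript}`;
    case 'Termination':
      return `closed after ${message.audio_duration_seconds}s of audio`;
  }
}
```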

StreamingForceEndpoint

StreamingForceEndpoint = object

Properties

type

type: "ForceEndpoint"

StreamingUpdateConfiguration

StreamingUpdateConfiguration = object

Properties

type

type: "UpdateConfiguration"

end_of_turn_confidence_threshold?

optional end_of_turn_confidence_threshold: number

format_turns?

optional format_turns: boolean

max_turn_silence?

optional max_turn_silence: number

min_end_of_turn_silence_when_confident?

optional min_end_of_turn_silence_when_confident: number

vad_threshold?

optional vad_threshold: number

StreamingWord

StreamingWord = object

Properties

confidence

confidence: number

end

end: number

start

start: number

text

text: string

word_is_final

word_is_final: boolean

TerminationEvent

TerminationEvent = object

Properties

audio_duration_seconds

audio_duration_seconds: number

session_duration_seconds

session_duration_seconds: number

type

type: "Termination"

TranscriptOptionalParamsSpeechModel

TranscriptOptionalParamsSpeechModel = SpeechModel | null

The speech model to use for the transcription. When null, the "best" model is used.

TranscriptParams

TranscriptParams = TranscriptParamsAllOf & AssemblyAIOptions

The parameters for creating a transcript

TranscriptStatus

TranscriptStatus = object

The status of your transcript. Possible values are queued, processing, completed, or error.

Properties

completed

readonly completed: "completed" = 'completed'

error

readonly error: "error" = 'error'

processing

readonly processing: "processing" = 'processing'

queued

readonly queued: "queued" = 'queued'

TranscriptStatus

TranscriptStatus = typeof TranscriptStatus[keyof typeof TranscriptStatus]

The status of your transcript. Possible values are queued, processing, completed, or error.
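
The two TranscriptStatus declarations above are the standard const-object pattern: a runtime value and a derived string-literal union sharing one name. A minimal reproduction of the pattern:

```typescript
// Runtime value: a frozen map of the four documented status strings.
const TranscriptStatus = {
  queued: 'queued',
  processing: 'processing',
  completed: 'completed',
  error: 'error'
} as const;

// Type: the union of those strings, i.e. 'queued' | 'processing' | 'completed' | 'error'.
type TranscriptStatus = typeof TranscriptStatus[keyof typeof TranscriptStatus];

// Both sides usable under one name: the value for lookups, the type for annotations.
const current: TranscriptStatus = TranscriptStatus.completed;
```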

TurnEvent

TurnEvent = object

Properties

end_of_turn

end_of_turn: boolean

end_of_turn_confidence

end_of_turn_confidence: number

transcript

transcript: string

turn_is_formatted

turn_is_formatted: boolean

turn_order

turn_order: number

type

type: "Turn"

words

words: StreamingWord[]

language_code?

optional language_code: string

language_confidence?

optional language_confidence: number
