adapters/assemblyai-adapter
Classes
AssemblyAIAdapter
AssemblyAI transcription provider adapter
Implements transcription for the AssemblyAI API with support for:
- Synchronous and asynchronous transcription
- Speaker diarization (speaker labels)
- Multi-language detection and transcription
- Summarization and sentiment analysis
- Entity detection and content moderation
- Custom vocabulary and spelling
- Word-level timestamps
- PII redaction
See
AssemblyAI API Documentation: https://www.assemblyai.com/docs
Examples
import { AssemblyAIAdapter } from '@meeting-baas/sdk';

const adapter = new AssemblyAIAdapter();
adapter.initialize({
  apiKey: process.env.ASSEMBLYAI_API_KEY
});

const result = await adapter.transcribe({
  type: 'url',
  url: 'https://example.com/audio.mp3'
}, {
  language: 'en',
  diarization: true
});

console.log(result.data.text);
console.log(result.data.speakers);

const result = await adapter.transcribe(audio, {
  language: 'en_us',
  diarization: true,
  summarization: true,
  sentimentAnalysis: true,
  entityDetection: true,
  piiRedaction: true
});

console.log('Summary:', result.data.summary);
console.log('Entities:', result.data.metadata?.entities);

Extends
BaseAdapter
Methods
buildStreamingUrl()
private buildStreamingUrl(options?): string
Build WebSocket URL with all streaming parameters
Parameters
| Parameter | Type |
|---|---|
options? | StreamingOptions |
Returns
string
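For intuition, here is a minimal sketch of this kind of query-string construction. The helper name and the exact option-to-parameter mapping are illustrative assumptions, not the adapter's actual private implementation:

// Hypothetical sketch: serialize streaming options onto the v3 WebSocket URL.
// sample_rate and format_turns are documented AssemblyAI v3 query parameters;
// the adapter's real mapping covers more options than shown here.
function sketchBuildStreamingUrl(
  base: string,
  options?: { sampleRate?: number; formatTurns?: boolean }
): string {
  const params = new URLSearchParams();
  if (options?.sampleRate !== undefined) params.set('sample_rate', String(options.sampleRate));
  if (options?.formatTurns !== undefined) params.set('format_turns', String(options.formatTurns));
  const query = params.toString();
  return query ? `${base}?${query}` : base;
}

// sketchBuildStreamingUrl('wss://streaming.assemblyai.com/v3/ws', { sampleRate: 16000 })
// → 'wss://streaming.assemblyai.com/v3/ws?sample_rate=16000'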
buildTranscriptionRequest()
private buildTranscriptionRequest(audio, options?): TranscriptParams
Build AssemblyAI transcription request from unified options
Parameters
| Parameter | Type |
|---|---|
audio | AudioInput |
options? | TranscribeOptions |
Returns
TranscriptParams
createErrorResponse()
protected createErrorResponse(error, statusCode?, code?): UnifiedTranscriptResponse
Helper method to create error responses with stack traces
Parameters
| Parameter | Type | Description |
|---|---|---|
error | unknown | Error object or unknown error |
statusCode? | number | Optional HTTP status code |
code? | ErrorCode | Optional error code (defaults to extracted or UNKNOWN_ERROR) |
Returns
UnifiedTranscriptResponse
Inherited from
BaseAdapter.createErrorResponse
deleteTranscript()
deleteTranscript(transcriptId): Promise<{ success: boolean }>
Delete a transcription and its associated data
Removes the transcription data from AssemblyAI's servers. This action is irreversible. The transcript will be marked as deleted and its content will no longer be accessible.
Parameters
| Parameter | Type | Description |
|---|---|---|
transcriptId | string | The ID of the transcript to delete |
Returns
Promise<{ success: boolean; }>
Promise with success status
Example
const result = await adapter.deleteTranscript('abc123');
if (result.success) {
  console.log('Transcript deleted successfully');
}

See
https://www.assemblyai.com/docs/api-reference/transcripts/delete
extractSpeakers()
private extractSpeakers(transcript): object[] | undefined
Extract speaker information from AssemblyAI response
Parameters
| Parameter | Type |
|---|---|
transcript | Transcript |
Returns
object[] | undefined
extractUtterances()
private extractUtterances(transcript): object[] | undefined
Extract utterances from AssemblyAI response
Parameters
| Parameter | Type |
|---|---|
transcript | Transcript |
Returns
object[] | undefined
extractWords()
private extractWords(transcript): object[] | undefined
Extract word timestamps from AssemblyAI response
Parameters
| Parameter | Type |
|---|---|
transcript | Transcript |
Returns
object[] | undefined
getAxiosConfig()
protected getAxiosConfig(): object
Get the axios config for generated API client functions. Configures headers and the base URL using the authorization header.
Returns
object
baseURL
baseURL: string
headers
headers: Record<string, string>
timeout
timeout: number
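The returned object is shaped to drop straight into the AxiosRequestConfig accepted by the generated functions listed under Functions below. A sketch of the intended wiring (illustrative; the real call sites are internal to the adapter):

// Inside an adapter method: forward auth, base URL, and timeout to a generated call.
const axiosConfig = this.getAxiosConfig();
const response = await getTranscriptAPI('abc123', { ...axiosConfig });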
Overrides
BaseAdapter.getAxiosConfig
getTranscript()
getTranscript(transcriptId): Promise<UnifiedTranscriptResponse<TranscriptionProvider>>
Get transcription result by ID
Parameters
| Parameter | Type |
|---|---|
transcriptId | string |
Returns
Promise<UnifiedTranscriptResponse<TranscriptionProvider>>
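Example (a typical status check; the status values are the queued/processing/completed/error set documented on TranscriptStatus below):

const result = await adapter.getTranscript('abc123');
if (result.data?.status === 'completed') {
  console.log('Transcript:', result.data.text);
} else if (result.data?.status === 'error') {
  console.error('Transcription failed');
}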
Overrides
BaseAdapter.getTranscript
handleWebSocketMessage()
private handleWebSocketMessage(message, callbacks?): void
Handle all WebSocket message types from AssemblyAI streaming
Parameters
| Parameter | Type |
|---|---|
message | StreamingEventMessage |
callbacks? | StreamingCallbacks |
Returns
void
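How such a handler discriminates the StreamingEventMessage union (see the BeginEvent, TurnEvent, TerminationEvent, and ErrorEvent type aliases below) can be sketched as follows; the function name is hypothetical and the private implementation may differ:

// ErrorEvent is the only union member without a type discriminant, so check it first.
function sketchHandleMessage(message: StreamingEventMessage): void {
  if ('error' in message) {
    console.error('Streaming error:', message.error);
    return;
  }
  switch (message.type) {
    case 'Begin':
      console.log('Session', message.id, 'expires at', message.expires_at);
      break;
    case 'Turn':
      console.log(message.end_of_turn ? 'Final:' : 'Interim:', message.transcript);
      break;
    case 'Termination':
      console.log('Processed', message.audio_duration_seconds, 's of audio');
      break;
  }
}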
initialize()
initialize(config): void
Initialize the adapter with configuration
Parameters
| Parameter | Type |
|---|---|
config | ProviderConfig |
Returns
void
Inherited from
BaseAdapter.initialize
listTranscripts()
listTranscripts(options?): Promise<{ transcripts: UnifiedTranscriptResponse<TranscriptionProvider>[]; hasMore?: boolean; total?: number }>
List recent transcriptions with filtering
Retrieves a list of transcripts with optional filtering by status and date. Transcripts are sorted from newest to oldest and can be retrieved for the last 90 days of usage.
Parameters
| Parameter | Type | Description |
|---|---|---|
options? | ListTranscriptsOptions | Filtering and pagination options |
Returns
Promise<{ transcripts: UnifiedTranscriptResponse<TranscriptionProvider>[]; hasMore?: boolean; total?: number; }>
List of transcripts with pagination info
Examples
const { transcripts, hasMore } = await adapter.listTranscripts({
  limit: 50,
  status: 'completed'
});

const { transcripts } = await adapter.listTranscripts({
  date: '2026-01-07',
  limit: 100
});

const { transcripts } = await adapter.listTranscripts({
  assemblyai: {
    after_id: 'abc123', // Get transcripts after this ID
    limit: 50
  }
});

See
https://www.assemblyai.com/docs/api-reference/transcripts/list
normalizeListItem()
private normalizeListItem(item): UnifiedTranscriptResponse
Normalize a transcript list item to unified format
Parameters
| Parameter | Type |
|---|---|
item | TranscriptListItem |
Returns
UnifiedTranscriptResponse
normalizeResponse()
private normalizeResponse(response): UnifiedTranscriptResponse<"assemblyai">
Normalize AssemblyAI response to unified format
Parameters
| Parameter | Type |
|---|---|
response | Transcript |
Returns
UnifiedTranscriptResponse<"assemblyai">
pollForCompletion()
protected pollForCompletion(transcriptId, options?): Promise<UnifiedTranscriptResponse<TranscriptionProvider>>
Generic polling helper for async transcription jobs
Polls getTranscript() until job completes or times out.
Parameters
| Parameter | Type | Description |
|---|---|---|
transcriptId | string | Job/transcript ID to poll |
options? | { intervalMs?: number; maxAttempts?: number; } | Polling configuration |
options.intervalMs? | number | - |
options.maxAttempts? | number | - |
Returns
Promise<UnifiedTranscriptResponse<TranscriptionProvider>>
Final transcription result
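Conceptually, the loop looks like the sketch below; the interval and attempt defaults shown are illustrative, with the real defaults owned by BaseAdapter:

// Poll getTranscript() until the job leaves the queued/processing states
// or the attempt budget is exhausted.
async function sketchPollForCompletion(
  adapter: AssemblyAIAdapter,
  transcriptId: string,
  intervalMs = 3000,
  maxAttempts = 100
) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const result = await adapter.getTranscript(transcriptId);
    const status = result.data?.status;
    if (status === 'completed' || status === 'error') return result;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Polling timed out after ${maxAttempts} attempts`);
}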
Inherited from
BaseAdapter.pollForCompletion
transcribe()
transcribe(audio, options?): Promise<UnifiedTranscriptResponse<TranscriptionProvider>>
Submit audio for transcription
Sends audio to AssemblyAI API for transcription. If a webhook URL is provided, returns immediately with the job ID. Otherwise, polls until completion.
Parameters
| Parameter | Type | Description |
|---|---|---|
audio | AudioInput | Audio input (currently only URL type supported) |
options? | TranscribeOptions | Transcription options |
Returns
Promise<UnifiedTranscriptResponse<TranscriptionProvider>>
Normalized transcription response
Throws
If audio type is not 'url' (file/stream not yet supported)
Examples
const result = await adapter.transcribe({
  type: 'url',
  url: 'https://example.com/meeting.mp3'
});

const result = await adapter.transcribe({
  type: 'url',
  url: 'https://example.com/meeting.mp3'
}, {
  language: 'en_us',
  diarization: true,
  speakersExpected: 3,
  summarization: true,
  sentimentAnalysis: true,
  entityDetection: true,
  customVocabulary: ['API', 'TypeScript', 'JavaScript']
});

// Submit transcription with webhook
const result = await adapter.transcribe({
  type: 'url',
  url: 'https://example.com/meeting.mp3'
}, {
  webhookUrl: 'https://myapp.com/webhook/transcription',
  language: 'en_us'
});

// Get transcript ID for polling
const transcriptId = result.data?.id;
console.log('Transcript ID:', transcriptId); // Use this to poll for status

// Later: Poll for completion (if webhook fails or you want to check)
const status = await adapter.getTranscript(transcriptId);
if (status.data?.status === 'completed') {
  console.log('Transcript:', status.data.text);
}

Overrides
BaseAdapter.transcribe
transcribeStream()
transcribeStream(options?, callbacks?): Promise<StreamingSession & object>
Stream audio for real-time transcription
Creates a WebSocket connection to AssemblyAI for streaming transcription. Uses the v3 Universal Streaming API with full support for all parameters.
Supports all AssemblyAI streaming features:
- Real-time transcription with interim/final results (Turn events)
- End-of-turn detection tuning (confidence threshold, silence duration)
- Voice Activity Detection (VAD) threshold tuning
- Real-time text formatting
- Profanity filtering
- Custom vocabulary (keyterms)
- Language detection
- Model selection (English or Multilingual)
- Dynamic configuration updates mid-stream
- Force endpoint command
Parameters
| Parameter | Type | Description |
|---|---|---|
options? | StreamingOptions | Streaming configuration options |
callbacks? | StreamingCallbacks | Event callbacks for transcription results |
Returns
Promise<StreamingSession & object>
Promise that resolves with an extended StreamingSession
Examples
const session = await adapter.transcribeStream({
  sampleRate: 16000,
  encoding: 'linear16' // Unified format - mapped to AssemblyAI's pcm_s16le
}, {
  onOpen: () => console.log('Connected'),
  onTranscript: (event) => {
    if (event.isFinal) {
      console.log('Final:', event.text);
    } else {
      console.log('Interim:', event.text);
    }
  },
  onError: (error) => console.error('Error:', error),
  onClose: () => console.log('Disconnected')
});

// Send audio chunks
const audioChunk = getAudioChunk();
await session.sendAudio({ data: audioChunk });

// Close when done
await session.close();

const session = await adapter.transcribeStream({
  sampleRate: 16000,
  assemblyaiStreaming: {
    speechModel: 'universal-streaming-multilingual',
    languageDetection: true,
    endOfTurnConfidenceThreshold: 0.7,
    minEndOfTurnSilenceWhenConfident: 500,
    maxTurnSilence: 15000,
    vadThreshold: 0.3,
    formatTurns: true,
    filterProfanity: true,
    keyterms: ['TypeScript', 'JavaScript', 'API'],
    inactivityTimeout: 60000
  }
}, {
  onTranscript: (e) => console.log('Transcript:', e.text),
  onMetadata: (m) => console.log('Metadata:', m)
});

// Update configuration mid-stream
session.updateConfiguration?.({
  end_of_turn_confidence_threshold: 0.5,
  vad_threshold: 0.2
});

// Force endpoint detection
session.forceEndpoint?.();

validateConfig()
protected validateConfig(): void
Helper method to validate configuration
Returns
void
Inherited from
BaseAdapter.validateConfig
Constructors
Constructor
new AssemblyAIAdapter(): AssemblyAIAdapter
Returns
AssemblyAIAdapter
Inherited from
BaseAdapter.constructor
Properties
baseUrl
protected baseUrl: string = "https://api.assemblyai.com"
Base URL for provider API (must be defined by subclass)
Overrides
BaseAdapter.baseUrl
capabilities
readonly capabilities: ProviderCapabilities
Provider capabilities
Overrides
BaseAdapter.capabilities
name
readonly name: "assemblyai"
Provider name
Overrides
BaseAdapter.name
wsBaseUrl
private wsBaseUrl: string = "wss://streaming.assemblyai.com/v3/ws"
config?
protected optional config: ProviderConfig
Inherited from
BaseAdapter.config
Functions
createAssemblyAIAdapter()
createAssemblyAIAdapter(config): AssemblyAIAdapter
Factory function to create an AssemblyAI adapter
Parameters
| Parameter | Type |
|---|---|
config | ProviderConfig |
Returns
AssemblyAIAdapter
createTemporaryToken()
createTemporaryToken<TData>(createRealtimeTemporaryTokenParams, options?): Promise<TData>
<Warning>Streaming Speech-to-Text is currently not available on the EU endpoint.</Warning>
<Note>Any usage associated with a temporary token will be attributed to the API key that generated it.</Note>
Create a temporary authentication token for Streaming Speech-to-Text
Type Parameters
| Type Parameter | Default type |
|---|---|
TData | AxiosResponse<RealtimeTemporaryTokenResponse, any> |
Parameters
| Parameter | Type |
|---|---|
createRealtimeTemporaryTokenParams | CreateRealtimeTemporaryTokenParams |
options? | AxiosRequestConfig<any> |
Returns
Promise<TData>
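A usage sketch. The expires_in field and token response property follow AssemblyAI's realtime token endpoint but are assumptions here; consult CreateRealtimeTemporaryTokenParams and RealtimeTemporaryTokenResponse for the authoritative shapes:

// Mint a short-lived token server-side and hand it to a browser client,
// keeping your long-lived API key off the client entirely.
const { data } = await createTemporaryToken({ expires_in: 300 });
console.log('Temporary streaming token:', data.token);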
createTranscript()
createTranscript<TData>(transcriptParams, options?): Promise<TData>
<Note>To use our EU server for transcription, replace api.assemblyai.com with api.eu.assemblyai.com.</Note>
Create a transcript from a media file that is accessible via a URL.
Type Parameters
| Type Parameter | Default type |
|---|---|
TData | AxiosResponse<Transcript, any> |
Parameters
| Parameter | Type |
|---|---|
transcriptParams | TranscriptParams |
options? | AxiosRequestConfig<any> |
Returns
Promise<TData>
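A sketch that pairs createTranscript with getTranscriptAPI (below) to poll the raw API; the property names come from the Transcript interface later on this page:

// Submit a transcription job, then poll until it settles.
const { data: created } = await createTranscript({
  audio_url: 'https://example.com/audio.mp3',
  speaker_labels: true
});

let transcript = created;
while (transcript.status === 'queued' || transcript.status === 'processing') {
  await new Promise((resolve) => setTimeout(resolve, 3000));
  transcript = (await getTranscriptAPI(transcript.id)).data;
}
console.log(transcript.status === 'completed' ? transcript.text : transcript.error);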
deleteTranscriptAPI()
deleteTranscriptAPI<TData>(transcriptId, options?): Promise<TData>
<Note>To delete your transcriptions on our EU server, replace api.assemblyai.com with api.eu.assemblyai.com.</Note>
Remove the data from the transcript and mark it as deleted.
<Warning>Files uploaded via the /upload endpoint are immediately deleted alongside the transcript when you make a DELETE request, ensuring your data is removed from our systems right away.</Warning>
Type Parameters
| Type Parameter | Default type |
|---|---|
TData | AxiosResponse<Transcript, any> |
Parameters
| Parameter | Type |
|---|---|
transcriptId | string |
options? | AxiosRequestConfig<any> |
Returns
Promise<TData>
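A brief sketch, assuming you delete once the transcript text has been persisted elsewhere:

// The response is the Transcript resource, now marked as deleted.
const { data } = await deleteTranscriptAPI('abc123');
console.log('Deleted transcript', data.id, '- status:', data.status);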
getTranscriptAPI()
getTranscriptAPI<TData>(transcriptId, options?): Promise<TData>
<Note>To retrieve your transcriptions on our EU server, replace api.assemblyai.com with api.eu.assemblyai.com.</Note>
Get the transcript resource. The transcript is ready when the "status" is "completed".
Type Parameters
| Type Parameter | Default type |
|---|---|
TData | AxiosResponse<Transcript, any> |
Parameters
| Parameter | Type |
|---|---|
transcriptId | string |
options? | AxiosRequestConfig<any> |
Returns
Promise<TData>
listTranscriptsAPI()
listTranscriptsAPI<TData>(params?, options?): Promise<TData>
<Note>To retrieve your transcriptions on our EU server, replace api.assemblyai.com with api.eu.assemblyai.com.</Note>
Retrieve a list of transcripts you created.
Transcripts are sorted from newest to oldest and can be retrieved for the last 90 days of usage. The previous URL always points to a page with older transcripts.
If you need to retrieve transcripts from more than 90 days ago please reach out to our Support team at support@assemblyai.com.
Type Parameters
| Type Parameter | Default type |
|---|---|
TData | AxiosResponse<TranscriptList, any> |
Parameters
| Parameter | Type |
|---|---|
params? | ListTranscriptsParams |
options? | AxiosRequestConfig<any> |
Returns
Promise<TData>
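A pagination sketch using the before_id cursor from ListTranscriptsParams (below); it assumes TranscriptList exposes a transcripts array of TranscriptListItem, per AssemblyAI's list response:

// Page from newest to oldest until a page comes back empty.
let params: ListTranscriptsParams = { limit: 100, status: TranscriptStatus.completed };
for (;;) {
  const { data } = await listTranscriptsAPI(params);
  if (data.transcripts.length === 0) break;
  for (const item of data.transcripts) console.log(item.id, item.created);
  params = { ...params, before_id: data.transcripts[data.transcripts.length - 1].id };
}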
Interfaces
Transcript
A transcript object
Properties
acoustic_model
acoustic_model: string
The acoustic model that was used for the transcript
Deprecated
audio_url
audio_url: string
The URL of the media that was transcribed
auto_highlights
auto_highlights: boolean
Whether Key Phrases is enabled, either true or false
id
id: string
The unique identifier of your transcript
language_confidence
language_confidence: TranscriptLanguageConfidence
The confidence score for the detected language, between 0.0 (low confidence) and 1.0 (high confidence)
Minimum
0
Maximum
1
language_confidence_threshold
language_confidence_threshold: TranscriptLanguageConfidenceThreshold
The confidence threshold for the automatically detected language. An error will be returned if the language confidence is below this threshold.
Minimum
0
Maximum
1
language_model
language_model: string
The language model that was used for the transcript
Deprecated
redact_pii
redact_pii: boolean
Whether PII Redaction is enabled, either true or false
speech_model
speech_model: TranscriptSpeechModel
The speech model used for the transcription. When null, the default model is used.
status
status: TranscriptStatus
The status of your transcript. Possible values are queued, processing, completed, or error.
summarization
summarization: boolean
Whether Summarization is enabled, either true or false
webhook_auth
webhook_auth: boolean
Whether webhook authentication details were provided
audio_channels?
optional audio_channels: number
The number of audio channels in the audio file. This is only present when multichannel is enabled.
audio_duration?
optional audio_duration: TranscriptAudioDuration
The duration of this transcript object's media file, in seconds
audio_end_at?
optional audio_end_at: TranscriptAudioEndAt
The point in time, in milliseconds, in the file at which the transcription was terminated
audio_start_from?
optional audio_start_from: TranscriptAudioStartFrom
The point in time, in milliseconds, in the file at which the transcription was started
auto_chapters?
optional auto_chapters: TranscriptAutoChapters
Whether Auto Chapters is enabled, can be true or false
auto_highlights_result?
optional auto_highlights_result: TranscriptAutoHighlightsResult
An array of results for the Key Phrases model, if it is enabled. See Key Phrases for more information.
boost_param?
optional boost_param: TranscriptBoostParamProperty
The word boost parameter value
chapters?
optional chapters: TranscriptChapters
An array of temporally sequential chapters for the audio file
confidence?
optional confidence: TranscriptConfidence
The confidence score for the transcript, between 0.0 (low confidence) and 1.0 (high confidence)
Minimum
0
Maximum
1
content_safety?
optional content_safety: TranscriptContentSafety
Whether Content Moderation is enabled, can be true or false
content_safety_labels?
optional content_safety_labels: TranscriptContentSafetyLabels
An array of results for the Content Moderation model, if it is enabled. See Content moderation for more information.
custom_spelling?
optional custom_spelling: TranscriptCustomSpellingProperty
Customize how words are spelled and formatted using to and from values
custom_topics?
optional custom_topics: TranscriptCustomTopics
Whether custom topics is enabled, either true or false
Deprecated
disfluencies?
optional disfluencies: TranscriptDisfluencies
Transcribe Filler Words, like "umm", in your media file; can be true or false
entities?
optional entities: TranscriptEntities
An array of results for the Entity Detection model, if it is enabled. See Entity detection for more information.
entity_detection?
optional entity_detection: TranscriptEntityDetection
Whether Entity Detection is enabled, can be true or false
error?
optional error: string
Error message of why the transcript failed
filter_profanity?
optional filter_profanity: TranscriptFilterProfanity
Whether Profanity Filtering is enabled, either true or false
format_text?
optional format_text: TranscriptFormatText
Whether Text Formatting is enabled, either true or false
iab_categories?
optional iab_categories: TranscriptIabCategories
Whether Topic Detection is enabled, can be true or false
iab_categories_result?
optional iab_categories_result: TranscriptIabCategoriesResult
The result of the Topic Detection model, if it is enabled. See Topic Detection for more information.
keyterms_prompt?
optional keyterms_prompt: string[]
Improve accuracy with up to 1000 domain-specific words or phrases (maximum 6 words per phrase).
language_code?
optional language_code: string
The language of your audio file. Possible values are found in Supported Languages. The default value is 'en_us'.
language_detection?
optional language_detection: TranscriptLanguageDetection
Whether Automatic language detection is enabled, either true or false
multichannel?
optional multichannel: TranscriptMultichannel
Whether Multichannel transcription was enabled in the transcription request, either true or false
prompt?
optional prompt: string
This parameter does not currently have any functionality attached to it.
Deprecated
punctuate?
optional punctuate: TranscriptPunctuate
Whether Automatic Punctuation is enabled, either true or false
redact_pii_audio?
optional redact_pii_audio: TranscriptRedactPiiAudio
Whether a redacted version of the audio file was generated, either true or false. See PII redaction for more information.
redact_pii_audio_quality?
optional redact_pii_audio_quality: TranscriptRedactPiiAudioQuality
The audio quality of the PII-redacted audio file, if redact_pii_audio is enabled. See PII redaction for more information.
redact_pii_policies?
optional redact_pii_policies: TranscriptRedactPiiPolicies
The list of PII Redaction policies that were enabled, if PII Redaction is enabled. See PII redaction for more information.
redact_pii_sub?
optional redact_pii_sub: SubstitutionPolicy
The replacement logic for detected PII, can be "entity_type" or "hash". See PII redaction for more details.
sentiment_analysis?
optional sentiment_analysis: TranscriptSentimentAnalysis
Whether Sentiment Analysis is enabled, can be true or false
sentiment_analysis_results?
optional sentiment_analysis_results: TranscriptSentimentAnalysisResults
An array of results for the Sentiment Analysis model, if it is enabled. See Sentiment Analysis for more information.
speaker_labels?
optional speaker_labels: TranscriptSpeakerLabels
Whether Speaker diarization is enabled, can be true or false
speakers_expected?
optional speakers_expected: TranscriptSpeakersExpected
Tell the speaker label model how many speakers it should attempt to identify. See Speaker diarization for more details.
speech_threshold?
optional speech_threshold: TranscriptSpeechThreshold
Defaults to null. Reject audio files that contain less than this fraction of speech. Valid values are in the range [0, 1] inclusive.
Minimum
0
Maximum
1
speed_boost?
optional speed_boost: TranscriptSpeedBoost
Whether speed boost is enabled
Deprecated
summary?
optional summary: TranscriptSummary
The generated summary of the media file, if Summarization is enabled
summary_model?
optional summary_model: TranscriptSummaryModel
The Summarization model used to generate the summary, if Summarization is enabled
summary_type?
optional summary_type: TranscriptSummaryType
The type of summary generated, if Summarization is enabled
text?
optional text: TranscriptText
The textual transcript of your media file
throttled?
optional throttled: TranscriptThrottled
True while a request is throttled and false when a request is no longer throttled
topics?
optional topics: string[]
The list of custom topics provided if custom topics is enabled
utterances?
optional utterances: TranscriptUtterances
When multichannel or speaker_labels is enabled, a list of turn-by-turn utterance objects. See Speaker diarization and Multichannel transcription for more information.
webhook_auth_header_name?
optional webhook_auth_header_name: TranscriptWebhookAuthHeaderName
The header name to be sent with the transcript completed or failed webhook requests
webhook_status_code?
optional webhook_status_code: TranscriptWebhookStatusCode
The status code we received from your server when delivering the transcript completed or failed webhook request, if a webhook URL was provided
webhook_url?
optional webhook_url: TranscriptWebhookUrl
The URL to which we send webhook requests. We send two different types of webhook requests: one when a transcript is completed or failed, and one when the redacted audio is ready if redact_pii_audio is enabled.
word_boost?
optional word_boost: string[]
The list of custom vocabulary to boost transcription probability for
Deprecated
words?
optional words: TranscriptWords
An array of temporally-sequential word objects, one for each word in the transcript. See Speech recognition for more information.
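To make the shape concrete, a short sketch that walks a completed transcript's diarized output; it assumes TranscriptUtterances and TranscriptWords unwrap to arrays of the TranscriptUtterance and TranscriptWord interfaces below:

// Print per-speaker utterances, then individual word timings.
function printDiarization(transcript: Transcript): void {
  for (const utterance of transcript.utterances ?? []) {
    console.log(`Speaker ${utterance.speaker} [${utterance.start}-${utterance.end}ms]: ${utterance.text}`);
  }
  for (const word of transcript.words ?? []) {
    console.log(`${word.text} @ ${word.start}ms (confidence ${word.confidence})`);
  }
}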
TranscriptListItem
Properties
audio_url
audio_url: string
The URL to the audio file
completed
completed: TranscriptListItemCompleted
The date and time the transcript was completed
Pattern
^(?:(\d{4}-\d{2}-\d{2})T(\d{2}:\d{2}:\d{2}(?:\.\d+)?))$
created
created: string
The date and time the transcript was created
Pattern
^(?:(\d{4}-\d{2}-\d{2})T(\d{2}:\d{2}:\d{2}(?:\.\d+)?))$
error
error: TranscriptListItemError
Error message of why the transcript failed
id
id: string
The unique identifier for the transcript
resource_url
resource_url: string
The URL to retrieve the transcript
status
status: TranscriptStatus
The status of the transcript
TranscriptUtterance
Properties
confidence
confidence: number
The confidence score for the transcript of this utterance
Minimum
0
Maximum
1
end
end: number
The ending time, in milliseconds, of the utterance in the audio file
speaker
speaker: string
The speaker of this utterance, where each speaker is assigned a sequential capital letter - e.g. "A" for Speaker A, "B" for Speaker B, etc.
start
start: number
The starting time, in milliseconds, of the utterance in the audio file
text
text: string
The text for this utterance
words
words: TranscriptWord[]
The words in the utterance.
channel?
optional channel: TranscriptUtteranceChannel
The channel of this utterance. The left and right channels are channels 1 and 2. Additional channels increment the channel number sequentially.
TranscriptWord
Properties
confidence
confidence: number
The confidence score for the transcript of this word
Minimum
0
Maximum
1
end
end: number
The ending time, in milliseconds, for the word
speaker
speaker: TranscriptWordSpeaker
The speaker of the word if Speaker Diarization is enabled, else null
start
start: number
The starting time, in milliseconds, for the word
text
text: string
The text of the word
channel?
optional channel: TranscriptWordChannel
The channel of the word. The left and right channels are channels 1 and 2. Additional channels increment the channel number sequentially.
Type Aliases
BeginEvent
BeginEvent = object
Properties
expires_at
expires_at: number
id
id: string
type
type: "Begin"
ErrorEvent
ErrorEvent = object
Properties
error
error: string
ListTranscriptsParams
ListTranscriptsParams = object
Properties
after_id?
optional after_id: AfterId
Get transcripts that were created after this transcript ID
before_id?
optional before_id: BeforeId
Get transcripts that were created before this transcript ID
created_on?
optional created_on: CreatedOn
Only get transcripts created on this date
limit?
optional limit: Limit
Maximum amount of transcripts to retrieve
status?
optional status: TranscriptStatus
Filter by transcript status
throttled_only?
optional throttled_only: ThrottledOnly
Only get throttled transcripts, overrides the status filter
StreamingEventMessage
StreamingEventMessage = BeginEvent | TurnEvent | TerminationEvent | ErrorEvent
StreamingForceEndpoint
StreamingForceEndpoint = object
Properties
type
type: "ForceEndpoint"
StreamingUpdateConfiguration
StreamingUpdateConfiguration = object
Properties
type
type: "UpdateConfiguration"
end_of_turn_confidence_threshold?
optional end_of_turn_confidence_threshold: number
format_turns?
optional format_turns: boolean
max_turn_silence?
optional max_turn_silence: number
min_end_of_turn_silence_when_confident?
optional min_end_of_turn_silence_when_confident: number
vad_threshold?
optional vad_threshold: number
StreamingWord
StreamingWord = object
Properties
confidence
confidence: number
end
end: number
start
start: number
text
text: string
word_is_final
word_is_final: boolean
TerminationEvent
TerminationEvent = object
Properties
audio_duration_seconds
audio_duration_seconds: number
session_duration_seconds
session_duration_seconds: number
type
type: "Termination"
TranscriptOptionalParamsSpeechModel
TranscriptOptionalParamsSpeechModel = SpeechModel | null
The speech model to use for the transcription. When null, the "best" model is used.
TranscriptParams
TranscriptParams = TranscriptParamsAllOf & AssemblyAIOptions
The parameters for creating a transcript
TranscriptStatus
TranscriptStatus = object
The status of your transcript. Possible values are queued, processing, completed, or error.
Properties
completed
readonly completed: "completed" = 'completed'
error
readonly error: "error" = 'error'
processing
readonly processing: "processing" = 'processing'
queued
readonly queued: "queued" = 'queued'
TranscriptStatus
TranscriptStatus = typeof TranscriptStatus[keyof typeof TranscriptStatus]
The status of your transcript. Possible values are queued, processing, completed, or error.
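The two declarations form the usual const-object-as-enum pattern: the first is a runtime value, the second a string-literal union derived from it. For example:

// Value side: runtime constants for comparisons.
if (transcript.status === TranscriptStatus.completed) {
  console.log('done');
}

// Type side: the union 'queued' | 'processing' | 'completed' | 'error'.
const current: TranscriptStatus = 'processing';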
TurnEvent
TurnEvent = object
Properties
end_of_turn
end_of_turn: boolean
end_of_turn_confidence
end_of_turn_confidence: number
transcript
transcript: string
turn_is_formatted
turn_is_formatted: boolean
turn_order
turn_order: number
type
type: "Turn"
words
words: StreamingWord[]
language_code?
optional language_code: string
language_confidence?
optional language_confidence: number
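A brief sketch of how these fields combine when rendering results (a consumer-side illustration, not SDK code):

// turn_order groups events into a turn; end_of_turn marks the turn's final
// transcript; turn_is_formatted reports whether formatting has been applied.
function renderTurn(event: TurnEvent): void {
  const label = event.end_of_turn
    ? (event.turn_is_formatted ? 'final (formatted)' : 'final')
    : 'interim';
  console.log(`turn ${event.turn_order} [${label}]:`, event.transcript);
  for (const word of event.words) {
    if (word.word_is_final) console.log(`  ${word.text} @ ${word.start}ms`);
  }
}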