Colibrio Reader Framework API - Cloud license

    A TtsSynthesizer implementation using the Web Speech API for synthesizing text.

    interface IWebSpeechTtsSynthesizer {
        addUtterance(speechData: ITtsUtteranceData): void;
        clearAndPause(): void;
        destroy(): void;
        init(context: ITtsSynthesizerContext): void;
        onUtteranceBoundary(event: SpeechSynthesisEvent): void;
        onUtteranceEnd(event: SpeechSynthesisEvent): void;
        onUtteranceError(event: SpeechSynthesisErrorEvent): void;
        pause(): void;
        play(): void;
        setMuted(muted: boolean): void;
        setPlaybackRate(playbackRate: number): void;
        setVolume(volume: number): void;
    }
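The queueing behavior implied by this interface can be sketched as follows. This is illustrative only: UtteranceData, SynthesizerContext, and SpeechEngine below are simplified stand-ins for the real ITtsUtteranceData, ITtsSynthesizerContext, and the browser's speechSynthesis engine, not the actual Colibrio types.

```typescript
// Sketch: a queue-based synthesizer. The speech engine is injected so the
// queue logic can be shown (and exercised) without a browser.
interface UtteranceData { text: string; lang?: string; }
interface SynthesizerContext {
  onUtteranceEnd(): void;
  onBoundary(charOffset: number): void;
}
interface SpeechEngine {
  speak(text: string, onEnd: () => void): void;
  cancel(): void;
}

class QueueingTtsSynthesizer {
  private queue: UtteranceData[] = [];
  private paused = true;
  private context: SynthesizerContext | null = null;

  constructor(private engine: SpeechEngine) {}

  init(context: SynthesizerContext): void { this.context = context; }

  addUtterance(data: UtteranceData): void { this.queue.push(data); }

  play(): void {
    this.paused = false;
    this.speakNext();
  }

  pause(): void { this.paused = true; }

  // Stop the current utterance, enter the paused state, then empty the queue.
  clearAndPause(): void {
    this.engine.cancel();
    this.paused = true;
    this.queue.length = 0;
  }

  destroy(): void {
    this.clearAndPause();
    this.context = null; // release the context passed with init()
  }

  private speakNext(): void {
    if (this.paused) return;
    const next = this.queue.shift();
    if (!next) return;
    this.engine.speak(next.text, () => {
      // Each finished utterance must be reported to the context.
      this.context?.onUtteranceEnd();
      this.speakNext();
    });
  }
}
```

A real implementation would create a SpeechSynthesisUtterance per queued item and hand it to window.speechSynthesis.speak(); the injected engine here stands in for that call.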

    Methods

    • clearAndPause — The implementation must stop any currently playing utterance and set itself in a paused state. It should then clear the queue of utterances to speak.

      Returns void

    • destroy — Destroys this instance. The synthesizer should clear the utterance queue and set itself in a paused state. The reference to any context passed with init() is released.

      Returns void

    • Protected onUtteranceBoundary

      Parameters

      • event: SpeechSynthesisEvent

        The SpeechSynthesisEvent emitted by the underlying TTS engine when a boundary is reached in the currently spoken utterance.

      Returns void

    • Protected onUtteranceEnd

      Parameters

      • event: SpeechSynthesisEvent

        The SpeechSynthesisEvent emitted by the underlying TTS engine when the currently spoken utterance has finished.

      Returns void

    • Protected onUtteranceError

      Parameters

      • event: SpeechSynthesisErrorEvent

        The SpeechSynthesisErrorEvent emitted by the underlying TTS engine when an error occurs in the currently spoken utterance.

      Returns void

    • pause — Will be called by the SyncMediaPlayer when it should pause playback.

      Returns void

    • play — Resumes playback of the currently active utterance if one exists; otherwise plays the next utterance from the utterance queue. The implementation must call context.onUtteranceEnd() when each utterance has finished speaking.

      The implementation should also call context.onBoundary(charOffset) to indicate which part of the utterance is currently being spoken. This allows the media player to derive a more accurate timeline position for features such as automatic page turning, resuming playback, and highlighting.

      Returns void
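The boundary and end reporting described above might be wired up roughly like this. UtteranceLike and ContextLike are minimal stand-ins for the DOM SpeechSynthesisUtterance and the real ITtsSynthesizerContext, declared locally so the wiring can be shown outside a browser.

```typescript
// Sketch: forward the Web Speech API's boundary and end events to the
// synthesizer context, as the play() contract above requires.
interface UtteranceLike {
  onboundary: ((ev: { charIndex: number }) => void) | null;
  onend: ((ev: unknown) => void) | null;
}
interface ContextLike {
  onBoundary(charOffset: number): void;
  onUtteranceEnd(): void;
}

function wireUtterance(utterance: UtteranceLike, context: ContextLike): void {
  // The engine fires a boundary event at word/sentence boundaries;
  // charIndex is the character offset into the utterance text, which the
  // media player can map to a timeline position for page turns and
  // highlighting.
  utterance.onboundary = (ev) => context.onBoundary(ev.charIndex);
  // Every finished utterance must be reported so playback can advance.
  utterance.onend = () => context.onUtteranceEnd();
}
```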

    • setMuted — Sets whether playback should be muted. When called with false, the synthesizer should unmute and use the playback volume last set by setVolume.

      Parameters

      • muted: boolean

      Returns void
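One way to honor these semantics is to track the mute flag separately from the stored volume, so that unmuting restores the value last set via setVolume. VolumeState below is a hypothetical helper, not part of the framework API.

```typescript
// Sketch: mute state kept apart from the volume, so unmuting restores
// the last value passed to setVolume.
class VolumeState {
  private muted = false;
  private volume = 1.0;

  setMuted(muted: boolean): void { this.muted = muted; }

  setVolume(volume: number): void { this.volume = volume; }

  // The volume to apply to the next utterance.
  effectiveVolume(): number {
    return this.muted ? 0 : this.volume;
  }
}
```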

    • setPlaybackRate — Will be called by the SyncMediaPlayer when its playbackRate has been changed.

      Parameters

      • playbackRate: number

        A value between 0.25 and 5.0

      Returns void
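An implementation might clamp the incoming value defensively into the documented range before applying it to new utterances (the Web Speech API's own SpeechSynthesisUtterance.rate accepts roughly 0.1 to 10, so the 0.25-5.0 range fits inside it). clampPlaybackRate is a hypothetical helper:

```typescript
// Sketch: keep the rate within the 0.25-5.0 range documented above
// before assigning it to SpeechSynthesisUtterance.rate.
function clampPlaybackRate(playbackRate: number): number {
  return Math.min(5.0, Math.max(0.25, playbackRate));
}
```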

    • setVolume — Will be called by the SyncMediaPlayer when its volume has been changed.

      Parameters

      • volume: number

        A value between 0.0 and 1.0

      Returns void