Note

Please see Azure Cognitive Services for Speech documentation for the latest supported speech solutions.

Audio Input for Recognition (Microsoft.Speech)

Your application can configure and monitor the audio input to a speech recognition engine using the members and types described below.

Configure the Audio Input

Applications can configure a SpeechRecognitionEngine instance to receive input using any of the SetInputToAudioStream(Stream, SpeechAudioFormatInfo), SetInputToWaveFile(String), SetInputToWaveStream(Stream), or SetInputToDefaultAudioDevice() methods. In addition, you can set the input to null with the SetInputToNull() method, which is useful for emulating recognition or for canceling a previously set input.
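The following C# sketch shows how these methods might be used to attach an input source before starting recognition. The culture, grammar words, and file path are placeholders for illustration; substitute values appropriate to your application and installed recognizers.

```csharp
using System;
using System.Globalization;
using Microsoft.Speech.Recognition;

class InputConfigurationExample
{
    static void Main()
    {
        // "en-US" is an assumption; use a culture with an installed recognizer.
        using (var recognizer =
            new SpeechRecognitionEngine(new CultureInfo("en-US")))
        {
            // Load a simple grammar before recognizing.
            var colors = new Choices(new string[] { "red", "green", "blue" });
            recognizer.LoadGrammar(new Grammar(new GrammarBuilder(colors)));

            // Option 1: read input from a pre-recorded wave file
            // (the path below is hypothetical).
            recognizer.SetInputToWaveFile(@"C:\audio\sample.wav");

            // Option 2: use the default audio device instead.
            // recognizer.SetInputToDefaultAudioDevice();

            // Option 3: detach the input, for example before emulating
            // recognition with text instead of audio.
            // recognizer.SetInputToNull();
            // RecognitionResult emulated = recognizer.EmulateRecognize("red");

            RecognitionResult result = recognizer.Recognize();
            Console.WriteLine(result == null ? "(no recognition)" : result.Text);
        }
    }
}
```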

Monitor the Audio Input

To get information about the level, format, and state of audio being received for recognition, an application can register handlers for events or query the properties of the SpeechRecognitionEngine class.

The AudioLevelUpdatedEventArgs class provides information about changes in the audio level when an AudioLevelUpdated event is raised. The AudioStateChangedEventArgs class provides information about changes in the state of the incoming audio when an AudioStateChanged event is raised, using a member of the AudioState enumeration.

In addition, the SpeechRecognitionEngine class has AudioLevel and AudioState properties that provide current information to the application. Detailed information about the format of the audio coming in to the speech recognition engine is available from the SpeechRecognitionEngine.AudioFormat property.
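As a sketch of the monitoring members described above, an application might register for the audio events and read the current properties like this (the handler bodies are illustrative only):

```csharp
// Report changes in the incoming audio level (0 to 100).
recognizer.AudioLevelUpdated += (sender, e) =>
    Console.WriteLine("Audio level: {0}", e.AudioLevel);

// Report changes in audio state: Stopped, Silence, or Speech.
recognizer.AudioStateChanged += (sender, e) =>
    Console.WriteLine("Audio state: {0}", e.AudioState);

// Query current values directly from the engine.
Console.WriteLine("Current level: {0}", recognizer.AudioLevel);
Console.WriteLine("Current state: {0}", recognizer.AudioState);

// AudioFormat is a SpeechAudioFormatInfo describing the incoming audio.
Console.WriteLine("Format: {0} Hz, {1}-bit, {2} channel(s)",
    recognizer.AudioFormat.SamplesPerSecond,
    recognizer.AudioFormat.BitsPerSample,
    recognizer.AudioFormat.ChannelCount);
```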

When a SpeechRecognitionEngine.AudioSignalProblemOccurred event is raised, applications can query the AudioSignalProblem property of the AudioSignalProblemOccurredEventArgs class to get a description of the problem.
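A handler for this event might simply log the reported problem, as in the sketch below; the event argument's AudioSignalProblem property carries a member of the AudioSignalProblem enumeration, such as TooNoisy, TooLoud, TooSoft, or NoSignal.

```csharp
// Log audio signal problems reported by the recognition engine.
recognizer.AudioSignalProblemOccurred += (sender, e) =>
{
    Console.WriteLine("Audio signal problem: {0}", e.AudioSignalProblem);
};
```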