Polyfill Web Speech API with Cognitive Services Bing Speech for both speech-to-text and text-to-speech services.

This scaffold is provided by react-component-template.
Try out our demo at https://compulim.github.io/web-speech-cognitive-services?s=your-subscription-key.
We use react-dictate-button and react-say to quickly set up the playground.
Web Speech API is not widely adopted across popular browsers and platforms. Polyfilling the API with cloud services is a great way to enable wider adoption. Notably, Web Speech API in Google Chrome is also backed by a cloud service.

Microsoft Azure Cognitive Services Bing Speech provides speech recognition with great accuracy. Unfortunately, its APIs are not based on Web Speech API.

This package polyfills Web Speech API by turning the Cognitive Services Bing Speech API into Web Speech API. We test this package against popular combinations of platforms and browsers.
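As a sketch of what "polyfilling" means here (the fallback wiring below is our illustration, not code from this package), you would use the browser's native implementation when one exists and fall back to the cloud-backed class otherwise:

```js
// Sketch: prefer the browser's native SpeechRecognition and fall back to the
// cloud-backed polyfill. PolyfillSpeechRecognition stands in for the
// SpeechRecognition export of web-speech-cognitive-services.
class PolyfillSpeechRecognition {}

const NativeSpeechRecognition =
  typeof window !== 'undefined'
    ? window.SpeechRecognition || window.webkitSpeechRecognition
    : undefined;

// Pick whichever implementation is available in this environment
const SpeechRecognitionImpl = NativeSpeechRecognition || PolyfillSpeechRecognition;
```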
First, run `npm install web-speech-cognitive-services` for the latest production build, or `npm install web-speech-cognitive-services@master` for the latest development build. Then, install the peer dependency by running `npm install microsoft-speech-browser-sdk`.
```js
import { createFetchTokenUsingSubscriptionKey, SpeechRecognition } from 'web-speech-cognitive-services';

const recognition = new SpeechRecognition();

recognition.lang = 'en-US';
recognition.fetchToken = createFetchTokenUsingSubscriptionKey('your subscription key');

recognition.onresult = ({ results }) => {
  console.log(results);
};

recognition.start();
```
Note: most browsers require HTTPS or `localhost` for WebRTC.
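For illustration, the rule most browsers apply before granting microphone access can be sketched as a small helper (the function name is ours, not part of this package); in a real page you could simply check `window.isSecureContext` instead:

```js
// Hypothetical helper mirroring the browser rule: microphone access (WebRTC)
// is generally granted only on HTTPS pages or on localhost.
function isMicrophonePermitted(protocol, hostname) {
  return protocol === 'https:' || hostname === 'localhost' || hostname === '127.0.0.1';
}
```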
You can use react-dictate-button to integrate speech recognition functionality into your React app.
```jsx
import { createFetchTokenUsingSubscriptionKey, SpeechGrammarList, SpeechRecognition } from 'web-speech-cognitive-services';
import DictateButton from 'react-dictate-button';

const extra = { fetchToken: createFetchTokenUsingSubscriptionKey('your subscription key') };

export default props =>
  <DictateButton
    extra={ extra }
    onDictate={ ({ result }) => alert(result.transcript) }
    speechGrammarList={ SpeechGrammarList }
    speechRecognition={ SpeechRecognition }
  >
    Start dictation
  </DictateButton>
```
You can also look at our playground page to see how it works.
You can prime the speech recognition by giving it a list of words. Since Cognitive Services does not work with weighted grammars, we built another `SpeechGrammarList` to better fit the scenario.
```js
import { createFetchTokenUsingSubscriptionKey, SpeechGrammarList, SpeechRecognition } from 'web-speech-cognitive-services';

const recognition = new SpeechRecognition();

recognition.grammars = new SpeechGrammarList();
recognition.grammars.words = ['Tuen Mun', 'Yuen Long'];
recognition.fetchToken = createFetchTokenUsingSubscriptionKey('your subscription key');

recognition.onresult = ({ results }) => {
  console.log(results);
};

recognition.start();
```
Note: you can also pass `grammars` to react-dictate-button via the `extra` prop.
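For instance, the `extra` object could carry the grammar list like this (a sketch; `SpeechGrammarList` below is a minimal stand-in for the class exported by this package, so the snippet is self-contained):

```js
// Minimal stand-in for the SpeechGrammarList exported by
// web-speech-cognitive-services, used here so the sketch is self-contained.
class SpeechGrammarList {
  constructor() { this.words = []; }
}

const grammars = new SpeechGrammarList();

grammars.words = ['Tuen Mun', 'Yuen Long'];

// Every key of `extra` reaches the SpeechRecognition instance that
// react-dictate-button creates, as the note above describes.
const extra = {
  // fetchToken: createFetchTokenUsingSubscriptionKey('your subscription key'),
  grammars
};
```

You would then render `<DictateButton extra={ extra } ...>` exactly as in the earlier example.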
```js
import { createFetchTokenUsingSubscriptionKey, speechSynthesis, SpeechSynthesisUtterance } from 'web-speech-cognitive-services';

const fetchToken = createFetchTokenUsingSubscriptionKey('your subscription key');
const utterance = new SpeechSynthesisUtterance('Hello, World!');

speechSynthesis.fetchToken = fetchToken;

// Need to wait until the token exchange is complete before calling speak()
await fetchToken();
await speechSynthesis.speak(utterance);
```
Note: `speechSynthesis` is camel-cased because it is an instance. `pitch`, `rate`, `voice`, and `volume` are supported. Only `onstart`, `onerror`, and `onend` events are supported.
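As a sketch of setting those supported properties before speaking (the helper name is ours; the value ranges follow the standard Web Speech API):

```js
// Hypothetical helper: apply the supported Web Speech API properties to an
// utterance object. Standard ranges: pitch 0–2, rate 0.1–10, volume 0–1.
function configureUtterance(utterance, { pitch = 1, rate = 1, volume = 1, voice } = {}) {
  utterance.pitch = pitch;
  utterance.rate = rate;
  utterance.volume = volume;

  if (voice) {
    utterance.voice = voice;
  }

  return utterance;
}
```

For example, `configureUtterance(new SpeechSynthesisUtterance('Hello'), { rate: 1.2 })` before handing the utterance to `speechSynthesis.speak()`.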
You can use react-say to integrate speech synthesis functionality into your React app.
```jsx
import { createFetchTokenUsingSubscriptionKey, speechSynthesis, SpeechSynthesisUtterance } from 'web-speech-cognitive-services';
import React from 'react';
import Say from 'react-say';

export default class extends React.Component {
  constructor(props) {
    super(props);

    speechSynthesis.fetchToken = createFetchTokenUsingSubscriptionKey('your subscription key');

    // We call it here to preload the token; the token is cached
    speechSynthesis.fetchToken();

    this.state = { ready: false };
  }

  async componentDidMount() {
    await speechSynthesis.fetchToken();

    this.setState(() => ({ ready: true }));
  }

  render() {
    return (
      this.state.ready &&
        <Say
          speechSynthesis={ speechSynthesis }
          speechSynthesisUtterance={ SpeechSynthesisUtterance }
          text="Hello, World!"
        />
    );
  }
}
```
For the detailed test matrix, please refer to SPEC-RECOGNITION.md or SPEC-SYNTHESIS.md.
- Speech recognition
  - Interim results do not return confidence; final results do have confidence
    - We always return `0.5` for interim results
  - Cognitive Services supports grammar lists, but not in JSGF format; more work to be done in this area
    - Although Google Chrome supports grammar lists, it seems the grammar list is not used at all
  - Continuous mode does not work
- Speech synthesis
  - `onboundary`, `onmark`, `onpause`, and `onresume` are not supported/fired
- Add `babel-runtime`, `microsoft-speech-browser-sdk`, and `simple-update-in`
- General
  - Unified token exchange mechanism
- Speech recognition
  - Add grammar list
  - Add tests for lifecycle events
  - Support `stop()` function
    - Currently, only `abort()` is supported
  - Investigate continuous mode
  - Enable Opus (OGG) encoding
    - Currently, there is a problem with [email protected], tracking on this issue
  - Support custom speech
  - Support new Speech-to-Text service
    - Point to new URIs
- Speech synthesis
  - Event: add `pause`/`resume` support
  - Properties: add `paused`/`pending`/`speaking` support
  - Support new Text-to-Speech service
  - Custom voice fonts
Like us? Star us.
Want to make it better? File us an issue.
Don't like something you see? Submit a pull request.