How to programmatically use iOS voice synthesizers? (text to speech)

Starting from iOS 7, Apple provides this API.

Objective-C:

```objc
#import <AVFoundation/AVFoundation.h>
// …
AVSpeechUtterance *utterance = [AVSpeechUtterance speechUtteranceWithString:@"Hello World!"];
AVSpeechSynthesizer *synth = [[AVSpeechSynthesizer alloc] init];
[synth speakUtterance:utterance];
```

Swift:

```swift
import AVFoundation
// …
let utterance = AVSpeechUtterance(string: "Hello World!")
let synth = AVSpeechSynthesizer()
synth.speakUtterance(utterance)
```

What are language codes in Chrome’s implementation of the HTML5 speech recognition API?

OK, if it is not published, we can at least try to figure this out. Let me put up this table as a starting point, and we can refine it if someone has more information. I'm assuming that the supported languages are similar to those supported by voice search, and that Google uses standard language codes … Read more
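In practice, whichever table of codes turns out to be right, you pass a standard BCP-47 tag to the recognition object's `lang` property. Here is a minimal sketch; the `SPEECH_LANGS` map and the `configureRecognition` helper are illustrative names of my own (not part of Chrome's API), and the tags listed are just a few common examples assumed to be supported:

```javascript
// A few widely used BCP-47 language tags (an illustrative sample,
// not an official list of what Chrome supports).
const SPEECH_LANGS = {
  'en-US': 'English (United States)',
  'en-GB': 'English (United Kingdom)',
  'fr-FR': 'French (France)',
  'de-DE': 'German (Germany)',
  'es-ES': 'Spanish (Spain)',
  'zh-CN': 'Chinese (Mandarin, Simplified)',
};

// Hypothetical helper: checks the tag against our sample map, sets the
// standard `lang` property, and returns the recognition object for chaining.
function configureRecognition(recognition, code) {
  if (!(code in SPEECH_LANGS)) {
    throw new Error('Unrecognized language tag: ' + code);
  }
  recognition.lang = code;
  return recognition;
}
```

In Chrome you would use it with the prefixed constructor, e.g. `configureRecognition(new webkitSpeechRecognition(), 'fr-FR').start();`.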

Voice Recognition Software For Developers [closed]

It's out there, and it works… There are quite a few speech recognition programs available, of which Dragon NaturallySpeaking is, I think, one of the most widely used. I've used it myself and have been impressed with its quality. That was a couple of years ago, so I imagine things have improved even further … Read more

Getting the list of voices in speechSynthesis (Web Speech API)

According to the Web Speech API Errata (E11 2013-10-17), the voice list is loaded asynchronously after the page loads. A voiceschanged event is fired when they are loaded. voiceschanged: Fired when the contents of the SpeechSynthesisVoiceList that the getVoices method will return have changed. Examples include: server-side synthesis, where the list is determined asynchronously, or when client-side … Read more
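In code, that errata usually translates into waiting for the event before reading the list. A minimal sketch, assuming a synthesis object exposing the standard `getVoices` method and `onvoiceschanged` handler (the `loadVoices` name is my own):

```javascript
// Resolve with the voice list, whether it is available synchronously
// (as in some browsers) or only after the voiceschanged event fires
// (as in Chrome).
function loadVoices(synth) {
  return new Promise((resolve) => {
    const voices = synth.getVoices();
    if (voices.length > 0) {
      resolve(voices); // list was already populated
      return;
    }
    // Otherwise wait for the asynchronous load signalled by voiceschanged.
    synth.onvoiceschanged = () => resolve(synth.getVoices());
  });
}
```

In a browser: `loadVoices(window.speechSynthesis).then(voices => console.log(voices.map(v => v.name)));`.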