OpenEars is an open-source speech recognition + TTS library that several iPhone apps already use. It was recently updated to make the code more efficient and to support Xcode 4. The Politepix site has an OpenEars tutorial; it is reposted below, to be translated when I find the time.
-------------------------------------------------------------------------------------------------------------------------------------------------
OpenEars is an open-source iOS library for implementing round-trip English language speech recognition and text-to-speech on the iPhone and iPad, which uses the CMU Pocketsphinx, CMU Flite, and MITLM libraries.
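To give a sense of what the Pocketsphinx engine underneath OpenEars does, here is a minimal sketch of decoding a raw 16 kHz, 16-bit mono audio file with the plain Pocketsphinx C API. OpenEars wraps this kind of decoding loop (and the corresponding Flite synthesis calls) in Objective-C, so an app normally never touches the C API directly; the model paths and file name below are placeholders, and the exact signatures of ps_start_utt and ps_get_hyp vary slightly between Pocketsphinx releases.

#include <stdio.h>
#include <pocketsphinx.h>

int main(void) {
    /* Placeholder paths: point these at a real acoustic model,
       language model, and pronunciation dictionary. */
    cmd_ln_t *config = cmd_ln_init(NULL, ps_args(), TRUE,
                                   "-hmm",  "/path/to/acoustic-model",
                                   "-lm",   "/path/to/language-model.lm",
                                   "-dict", "/path/to/dictionary.dic",
                                   NULL);
    ps_decoder_t *ps = ps_init(config);
    if (ps == NULL) return 1;

    /* Raw 16 kHz, 16-bit, mono PCM audio; placeholder file name. */
    FILE *fh = fopen("utterance.raw", "rb");
    if (fh == NULL) return 1;

    int16 buf[512];
    size_t nread;
    ps_start_utt(ps, NULL);                         /* older releases take an utterance id here */
    while ((nread = fread(buf, sizeof(int16), 512, fh)) > 0)
        ps_process_raw(ps, buf, nread, FALSE, FALSE);
    ps_end_utt(ps);

    int32 score;
    const char *hyp = ps_get_hyp(ps, &score, NULL); /* newer releases drop the third argument */
    printf("Hypothesis: %s\n", hyp ? hyp : "(none)");

    fclose(fh);
    ps_free(ps);
    return 0;
}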
This version has a number of changes under the hood and two API changes for existing API calls, so if you want to stick with the previous version 0.9.02 for now, you can still download it here; it contains all of the old support documents as PDFs as well. I’ll support 0.9.02 until it’s clear that 0.91 is as stable as 0.9.02; please just identify which version you are using when seeking support.
OpenEars 0.91 can:
In addition to its various new features and faster recognition/text-to-speech responsiveness, OpenEars now has improved recognition accuracy.
Before using OpenEars, please note that its new low-latency Audio Unit driver is not compatible with the Simulator, so a fallback Audio Queue driver is provided for the Simulator as a convenience so you can debug recognition logic. This means that recognition is better on the device, and I’d appreciate it if bug reports are limited to issues which affect the device.
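For instance, if you want your own debugging or test code to behave differently when the app is built for the Simulator versus a device, a compile-time check along these lines works. This is a minimal sketch using Apple's TargetConditionals.h; OpenEars selects its audio driver internally, so this only illustrates the general mechanism.

#include <TargetConditionals.h>  /* Apple SDK header that defines TARGET_IPHONE_SIMULATOR */
#include <stdio.h>

/* Report at compile time which build target we are on.
   Useful for limiting bug reports and profiling to device builds. */
void report_build_target(void) {
#if TARGET_IPHONE_SIMULATOR
    /* Simulator build: expect the fallback driver; use this path for debugging recognition logic only. */
    printf("Running in the Simulator: recognition quality and latency are not representative.\n");
#else
    /* Device build: the low-latency audio path is available, so test recognition behavior here. */
    printf("Running on a device: this is the configuration worth reporting bugs against.\n");
#endif
}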
1. Begin with “Getting Started With OpenEars”, which will explain how to set up the libraries your app will make use of.
2. Then read “Configuring your app for OpenEars”, which will explain how to make the OpenEars libraries available to your app projects, and lastly,
3. You’ll be ready for “Using OpenEars In Your App”, which will explain the objects and methods that will be available to your app and how to use them.
If those steps give you trouble, you can check out the Support and FAQ page.