
Welcome to the OpenEars iPhone voice recognition API!



OpenEars is a shared-source iOS framework for iPhone voice recognition and TTS. It lets you implement round-trip English-language speech recognition and text-to-speech on the iPhone and iPad, and uses the open source CMU Pocketsphinx, CMU Flite, and CMUCLMTK libraries. Highly accurate large-vocabulary recognition (that is, trying to recognize any word the user speaks out of many thousands of known words) is not yet a reality for local in-app processing on the iPhone given the hardware limitations of the platform; even Siri does its large-vocabulary recognition on the server side. However, Pocketsphinx (the open source voice recognition engine that OpenEars uses) is capable of local recognition on the iPhone of vocabularies with hundreds of words depending on the environment and other factors, and performs very well with command-and-control language models. The best part is that it uses no network connectivity: all processing occurs locally on the device.

The current version of the OpenEars iPhone speech recognition API is 1.1.

OpenEars can:

  • Listen continuously for speech on a background thread, while suspending or resuming speech processing on demand, all while using less than 8% CPU on average on a first-generation iPhone (decoding speech, text-to-speech, updating the UI and other intermittent functions use more CPU),
  • Use any of 9 voices for speech, including male and female voices with a range of speed/quality levels, and switch between them on the fly,
  • Change the pitch, speed and variance of any text-to-speech voice,
  • Know whether headphones are plugged in and continue voice recognition during text-to-speech only when they are plugged in,
  • Support bluetooth audio devices (experimental),
  • Dispatch information to any part of your app about the results of speech recognition and speech, or changes in the state of the audio session (such as an incoming phone call or headphones being plugged in),
  • Deliver level metering for both speech input and speech output so you can design visual feedback for both states,
  • Support JSGF grammars,
  • Dynamically generate new ARPA language models in-app based on input from an NSArray of NSStrings (see the first sketch after this list),
  • Switch between ARPA language models or JSGF grammars on the fly,
  • Get n-best lists with scoring (see the second sketch after this list),
  • Test existing recordings,
  • Be easily interacted with via standard and simple Objective-C methods,
  • Perform all audio functions for text-to-speech and speech recognition in memory instead of writing audio files to disk and then reading them,
  • Drive speech recognition with a low-latency Audio Unit driver for highest responsiveness,
  • Be installed in a Cocoa-standard fashion using an easy-peasy already-compiled framework.
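
To make the dynamic language model feature concrete before moving on, here is a minimal sketch of generating an ARPA model from an NSArray of NSStrings. It follows the LanguageModelGenerator usage from the OpenEars 1.x tutorial, in which the paths to the generated files come back through the NSError's userInfo dictionary; the word list and file name are placeholders, and the exact userInfo keys are my recollection of that API rather than something this page guarantees:

    #import <OpenEars/LanguageModelGenerator.h>

    // Generate an ARPA language model and a matching phonetic dictionary
    // from an array of the words and phrases you want to recognize.
    LanguageModelGenerator *languageModelGenerator = [[LanguageModelGenerator alloc] init];
    NSArray *words = [NSArray arrayWithObjects:@"GO", @"STOP", @"TURN LEFT", @"TURN RIGHT", nil];
    NSError *error = [languageModelGenerator generateLanguageModelFromArray:words
                                                             withFilesNamed:@"MyLanguageModelFiles"];

    NSString *lmPath = nil;
    NSString *dictionaryPath = nil;
    if ([error code] == noErr) {
        // In the 1.x API, the generated file paths are carried in userInfo.
        NSDictionary *results = [error userInfo];
        lmPath = [results objectForKey:@"LMPath"];
        dictionaryPath = [results objectForKey:@"DictionaryPath"];
    } else {
        NSLog(@"Language model generation failed: %@", [error localizedDescription]);
    }

Those two paths are exactly what you hand to PocketsphinxController when you start listening, as the sketch at the end of this page shows.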
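And for the n-best feature, a short sketch of the PocketsphinxController properties and delegate callback involved, again assuming the OpenEars 1.x API (the property names and the Hypothesis/Score dictionary keys are recollections, so check them against your version's headers):

    // Before starting to listen, ask for multiple scored hypotheses per utterance:
    self.pocketsphinxController.returnNbest = YES;
    self.pocketsphinxController.nBestNumber = 5; // up to five candidates

    // OpenEarsEventsObserverDelegate callback delivering the scored n-best list:
    - (void)pocketsphinxDidReceiveNBestHypothesisArray:(NSArray *)hypothesisArray {
        for (NSDictionary *hypothesisDictionary in hypothesisArray) {
            NSLog(@"Hypothesis: %@ at score: %@",
                  [hypothesisDictionary objectForKey:@"Hypothesis"],
                  [hypothesisDictionary objectForKey:@"Score"]);
        }
    }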

In addition to its various new features and faster recognition/text-to-speech responsiveness, OpenEars now has improved recognition accuracy.

Before using OpenEars, please note that its low-latency Audio Unit driver is not compatible with the Simulator, so a fallback Audio Queue driver is provided for the Simulator as a convenience so you can debug your recognition logic. This means that recognition is better on the device, and I'd appreciate it if bug reports were limited to issues that affect the device.

To use OpenEars:

1. Download the distribution and unpack it.

2. Create your own app, and add the iOS frameworks AudioToolbox and AVFoundation to it.

3. Inside your downloaded distribution there is a folder called “frameworks” that is inside the folder called “OpenEars”. Drag the “frameworks” folder into your app project in Xcode.

OK, now that you’ve finished laying the groundwork, you have to…wait, that’s everything. You’re ready to start using OpenEars.
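
As a preview of what comes next, here is a hedged sketch of the smallest possible round trip: start listening with PocketsphinxController, receive results through OpenEarsEventsObserver, and answer with FliteController. The class, method, and delegate names follow the OpenEars 1.x tutorial; MyViewController, the placeholder file paths, and the cmu_us_slt8k voice name are illustrative assumptions rather than anything this page specifies:

    #import <UIKit/UIKit.h>
    #import <OpenEars/PocketsphinxController.h>
    #import <OpenEars/FliteController.h>
    #import <OpenEars/OpenEarsEventsObserver.h>

    @interface MyViewController : UIViewController <OpenEarsEventsObserverDelegate>
    @property (strong, nonatomic) PocketsphinxController *pocketsphinxController;
    @property (strong, nonatomic) FliteController *fliteController;
    @property (strong, nonatomic) OpenEarsEventsObserver *openEarsEventsObserver;
    @end

    @implementation MyViewController

    - (void)viewDidLoad {
        [super viewDidLoad];
        self.fliteController = [[FliteController alloc] init];
        self.pocketsphinxController = [[PocketsphinxController alloc] init];
        self.openEarsEventsObserver = [[OpenEarsEventsObserver alloc] init];
        self.openEarsEventsObserver.delegate = self; // route speech events to this object

        // Optional: FliteController exposes duration_stretch, target_mean and
        // target_stddev (speed, pitch and variance) in the 1.x API.
        self.fliteController.duration_stretch = 1.2f; // speak a little more slowly

        // Placeholder paths; in practice these come from LanguageModelGenerator
        // (see the earlier sketch). Pass languageModelIsJSGF:YES for a JSGF grammar.
        NSString *lmPath = @"/path/to/MyLanguageModelFiles.languagemodel";
        NSString *dictionaryPath = @"/path/to/MyLanguageModelFiles.dic";
        [self.pocketsphinxController startListeningWithLanguageModelAtPath:lmPath
                                                          dictionaryAtPath:dictionaryPath
                                                       languageModelIsJSGF:NO];
    }

    // Called by OpenEarsEventsObserver whenever Pocketsphinx decodes an utterance.
    - (void)pocketsphinxDidReceiveHypothesis:(NSString *)hypothesis
                            recognitionScore:(NSString *)recognitionScore
                                 utteranceID:(NSString *)utteranceID {
        NSLog(@"Heard \"%@\" (score %@)", hypothesis, recognitionScore);
        // The voice name is an assumption; substitute any of the 9 bundled voices.
        [self.fliteController say:[NSString stringWithFormat:@"You said %@", hypothesis]
                        withVoice:@"cmu_us_slt8k"];
    }

    @end

Run it on a device rather than the Simulator for representative results; speaking one of the phrases in your language model should get echoed back to you.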

Before shipping your app, you will want to remove unused voices from it so that the app size won’t be too big, as explained here.

If the steps on this page didn’t work for you, you can get free support at the forums, read the FAQ, or open a private email support incident at the Politepix shop. Otherwise, carry on to the next part: using OpenEars in your app.

OpenEars uses the open source speech recognition engine Pocketsphinx from Carnegie Mellon University.