
Open-Source Speech Recognition + TTS for iPhone (Part 1): Welcome To OpenEars

锺星洲
2023-12-01

OpenEars is an open-source speech recognition + TTS library that a number of iPhone apps already use. It recently received an upgrade that improved the efficiency of the code and moved to Xcode 4. The politepix website has an OpenEars tutorial, reproduced below; I’ll find time to translate it later.

-------------------------------------------------------------------------------------------------------------------------------------------------

Welcome to OpenEars!

OpenEars is an open-source iOS library for implementing round-trip English language speech recognition and text-to-speech on the iPhone and iPad, which uses the CMU Pocketsphinx, CMU Flite, and MITLM libraries.

The current version of OpenEars is 0.91.

This version has a number of changes under the hood and two API changes for existing API calls, so if you want to stick with the previous version 0.9.02 for now, you can still download it here, and it contains all of the old support documents as PDFs as well. I’ll support 0.9.02 until it’s clear that 0.91 is as stable as 0.9.02; please just identify which version you are using when seeking support.

OpenEars 0.91 can:

  • Use any of 8 voices for speech and switch between them on the fly,
  • Know whether headphones are plugged in and continue voice recognition during text-to-speech only when they are plugged in,
  • Support Bluetooth audio devices (very experimental in this version),
  • Dispatch information to any part of your app about the results of speech recognition and speech, or changes in the state of the audio session (such as an incoming phone call or headphones being plugged in),
  • Deliver level metering for both speech input and speech output so you can design visual feedback for both states,
  • Support JSGF grammars,
  • Dynamically generate new ARPA language models in-app based on input from an NSArray of NSStrings (see the sketch after this list),
  • Switch between ARPA language models on the fly,
  • Be easily interacted with via standard and simple Objective-C methods,
  • Control all audio functions with text-to-speech and speech recognition in memory instead of writing audio files to disk and then reading them,
  • Drive speech recognition with a low-latency Audio Unit driver for highest responsiveness,
  • Be installed in a Cocoa-standard fashion using static library projects that, after initial configuration, allow you to target or re-target any SDKs or architectures that are supported by the libraries (verified as going back to SDK 3.1.2 at least) by making changes to your main project only.
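To give a concrete flavor of the round trip, here is a minimal sketch of generating a language model from an NSArray of NSStrings, starting recognition against it, and speaking with a Flite voice. The class names (LanguageModelGenerator, PocketsphinxController, FliteController) come from the OpenEars documentation, but the exact selectors and the voice-name string are modeled on other OpenEars releases and should be treated as assumptions; check the 0.91 headers for the real signatures. The pocketsphinxController and fliteController properties are assumed to be owned by the hosting class.

    #import <OpenEars/LanguageModelGenerator.h>
    #import <OpenEars/PocketsphinxController.h>
    #import <OpenEars/FliteController.h>

    // Sketch only: selectors are modeled on other OpenEars releases and may
    // differ in 0.91 -- consult the shipped headers for the real signatures.

    // 1. Dynamically generate an ARPA language model and phonetic dictionary
    //    from an NSArray of NSStrings.
    LanguageModelGenerator *generator = [[[LanguageModelGenerator alloc] init] autorelease];
    NSArray *words = [NSArray arrayWithObjects:@"FORWARD", @"BACKWARD", @"LEFT", @"RIGHT", nil];
    NSError *error = [generator generateLanguageModelFromArray:words withFilesNamed:@"MyModel"];

    if ([error code] == noErr) {
        NSDictionary *results = [error userInfo]; // paths to the generated files
        NSString *lmPath  = [results objectForKey:@"LMPath"];
        NSString *dicPath = [results objectForKey:@"DictionaryPath"];

        // 2. Start in-memory speech recognition against the new model.
        [self.pocketsphinxController startListeningWithLanguageModelAtPath:lmPath
                                                          dictionaryAtPath:dicPath
                                                       languageModelIsJSGF:NO];

        // 3. Speak a prompt with one of the eight Flite voices (the voice
        //    name shown here is illustrative).
        [self.fliteController say:@"Say a direction." withVoice:@"cmu_us_slt"];
    }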

In addition to its various new features and faster recognition/text-to-speech responsiveness, OpenEars now has improved recognition accuracy.

Before using OpenEars, please note that its new low-latency Audio Unit driver is not compatible with the Simulator, so it has a fallback Audio Queue driver for the Simulator, provided as a convenience so you can debug recognition logic. This means that recognition is better on the device, and I’d appreciate it if bug reports are limited to issues that affect the device.

To use OpenEars:

1. Begin with “Getting Started With OpenEars”, which will explain how to set up the libraries your app will make use of.

2. Then read “Configuring your app for OpenEars”, which will explain how to make the OpenEars libraries available to your app projects, and lastly,

3. You’ll be ready for “Using OpenEars In Your App”, which will explain the objects and methods that will be available to your app and how to use them; a brief sketch of those objects follows below.
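As a preview of step 3, the following is a hedged sketch of how an app might receive recognition results and audio session events through the OpenEarsEventsObserver delegate mechanism mentioned above. The delegate method and protocol names here are taken from other OpenEars documentation and may differ slightly in 0.91; verify them against the headers you install.

    #import <UIKit/UIKit.h>
    #import <OpenEars/OpenEarsEventsObserver.h>

    // Sketch: delegate method names follow other OpenEars docs; verify them
    // against the 0.91 headers.
    @interface MyViewController : UIViewController <OpenEarsEventsObserverDelegate>
    @property (nonatomic, retain) OpenEarsEventsObserver *openEarsEventsObserver;
    @end

    @implementation MyViewController

    @synthesize openEarsEventsObserver;

    - (void)viewDidLoad {
        [super viewDidLoad];
        self.openEarsEventsObserver = [[[OpenEarsEventsObserver alloc] init] autorelease];
        [self.openEarsEventsObserver setDelegate:self];
    }

    // Delivered with the text of each completed recognition.
    - (void)pocketsphinxDidReceiveHypothesis:(NSString *)hypothesis
                            recognitionScore:(NSString *)recognitionScore
                                 utteranceID:(NSString *)utteranceID {
        NSLog(@"Heard: %@ (score %@)", hypothesis, recognitionScore);
    }

    // Audio session state changes, such as an incoming phone call.
    - (void)audioSessionInterruptionDidBegin {
        NSLog(@"Audio session was interrupted.");
    }

    - (void)dealloc {
        [openEarsEventsObserver release];
        [super dealloc];
    }

    @end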

If those steps give you trouble, you can check out the Support and FAQ page.
