I have been experimenting with Google's offline speech recognition. It works fine on a Google Nexus 5 (OS 4.4), but when I run the same build on a Samsung Galaxy S5 (OS 5.0) it fails to recognize speech and shows the following error:
8 - ERROR_RECOGNIZER_BUSY
Below is my code. I made my changes using this link as a reference: http://www.truiton.com/2014/06/android-speech-recognition-without-dialog-custom-activity/
Speech must be recognized without an internet connection. I had been working with PocketSphinx, but it needed too much narration, so the client rejected it.
public class VoiceRecognitionActivity extends Activity implements RecognitionListener {

    private TextView returnedText;
    private static ProgressBar progressBar;
    private static SpeechRecognizer speech = null;
    private static Intent recognizerIntent;
    private String LOG_TAG = "VoiceRecognitionActivity";
    private Button button1;
    Activity activity = VoiceRecognitionActivity.this;
    private TextView textView2;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        returnedText = (TextView) findViewById(R.id.textView1);
        textView2 = (TextView) findViewById(R.id.textView2);
        progressBar = (ProgressBar) findViewById(R.id.progressBar1);
        button1 = (Button) findViewById(R.id.button1);
        getWindow().addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);
        // toggleButton = (ToggleButton) findViewById(R.id.toggleButton1);
        PackageManager pm = getPackageManager();
        List<ResolveInfo> activities = pm.queryIntentActivities(
                new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH), 0);
        if (activities.size() != 0) {
            createSpeechAgain(VoiceRecognitionActivity.this);
        } else {
            textView2.setText("Recognizer_not_present");
        }
        button1.setOnClickListener(new OnClickListener() {
            @Override
            public void onClick(View arg0) {
                speech.stopListening();
                speech.destroy();
                createSpeechAgain(VoiceRecognitionActivity.this);
            }
        });
    }

    private void createSpeechAgain(VoiceRecognitionActivity voiceRecognitionActivity) {
        progressBar.setVisibility(View.INVISIBLE);
        speech = SpeechRecognizer.createSpeechRecognizer(voiceRecognitionActivity);
        speech.setRecognitionListener(voiceRecognitionActivity);
        recognizerIntent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        recognizerIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_PREFERENCE, "en-US");
        recognizerIntent.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE, voiceRecognitionActivity.getPackageName());
        recognizerIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_WEB_SEARCH);
        recognizerIntent.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS, 3);
        //recognizerIntent.putExtra(RecognizerIntent.EXTRA_PREFER_OFFLINE, Boolean.FALSE);
        recognizerIntent.putExtra(RecognizerIntent.EXTRA_SPEECH_INPUT_POSSIBLY_COMPLETE_SILENCE_LENGTH_MILLIS, 20000);
        recognizerIntent.putExtra(RecognizerIntent.EXTRA_SPEECH_INPUT_COMPLETE_SILENCE_LENGTH_MILLIS, 20000);
        // EXTRA_PREFER_OFFLINE
        progressBar.setVisibility(View.VISIBLE);
        progressBar.setIndeterminate(true);
        speech.startListening(recognizerIntent);
    }

    @Override
    public void onResume() {
        super.onResume();
    }

    @Override
    protected void onPause() {
        super.onPause();
        /*if (speech != null) {
            speech.destroy();
            Log.i(LOG_TAG, "destroy");
        }*/
    }

    @Override
    public void onBeginningOfSpeech() {
        Log.i(LOG_TAG, "onBeginningOfSpeech");
        progressBar.setIndeterminate(false);
        progressBar.setMax(10);
    }

    @Override
    public void onBufferReceived(byte[] buffer) {
        Log.i(LOG_TAG, "onBufferReceived: " + buffer);
    }

    @Override
    public void onEndOfSpeech() {
        Log.i(LOG_TAG, "onEndOfSpeech");
        progressBar.setIndeterminate(false);
        progressBar.setVisibility(View.INVISIBLE);
        speech.stopListening();
    }

    @Override
    public void onError(int errorCode) {
        String errorMessage = getErrorText(errorCode);
        Log.d(LOG_TAG, "FAILED " + errorMessage);
        textView2.setText(errorMessage);
    }

    @Override
    public void onEvent(int arg0, Bundle arg1) {
        Log.i(LOG_TAG, "onEvent");
    }

    @Override
    public void onPartialResults(Bundle arg0) {
        Log.i(LOG_TAG, "onPartialResults");
    }

    @Override
    public void onReadyForSpeech(Bundle arg0) {
        Log.i(LOG_TAG, "onReadyForSpeech");
    }

    @Override
    public void onResults(Bundle results) {
        Log.i(LOG_TAG, "onResults");
        ArrayList<String> matches = results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
        String text = "";
        for (String result : matches)
            text += result + "\n";
        returnedText.setText(text);
        Log.v(LOG_TAG, "onResults---> " + text);
        progressBar.setVisibility(View.VISIBLE);
        progressBar.setIndeterminate(true);
        speech.startListening(recognizerIntent);
    }

    @Override
    public void onRmsChanged(float rmsdB) {
        //Log.i(LOG_TAG, "onRmsChanged: " + rmsdB);
        progressBar.setProgress((int) rmsdB);
    }

    public String getErrorText(int errorCode) {
        String message;
        switch (errorCode) {
            case SpeechRecognizer.ERROR_AUDIO:
                message = "Audio recording error";
                Log.v(LOG_TAG, message);
                progressBar.setVisibility(View.VISIBLE);
                progressBar.setIndeterminate(true);
                speech.startListening(recognizerIntent);
                break;
            case SpeechRecognizer.ERROR_CLIENT:
                message = "Client side error";
                Log.v(LOG_TAG, message);
                progressBar.setVisibility(View.VISIBLE);
                progressBar.setIndeterminate(true);
                speech.startListening(recognizerIntent);
                break;
            case SpeechRecognizer.ERROR_INSUFFICIENT_PERMISSIONS:
                message = "Insufficient permissions";
                Log.v(LOG_TAG, message);
                progressBar.setVisibility(View.VISIBLE);
                progressBar.setIndeterminate(true);
                speech.startListening(recognizerIntent);
                break;
            case SpeechRecognizer.ERROR_NETWORK:
                message = "Network error";
                Log.v(LOG_TAG, message);
                break;
            case SpeechRecognizer.ERROR_NETWORK_TIMEOUT:
                message = "Network timeout";
                Log.v(LOG_TAG, message);
                break;
            case SpeechRecognizer.ERROR_NO_MATCH:
                message = "No match";
                Log.v(LOG_TAG, message);
                progressBar.setVisibility(View.VISIBLE);
                progressBar.setIndeterminate(true);
                speech.startListening(recognizerIntent);
                break;
            case SpeechRecognizer.ERROR_RECOGNIZER_BUSY:
                message = "RecognitionService busy";
                Log.v(LOG_TAG, message);
                speech.stopListening();
                speech.destroy();
                createSpeechAgain(VoiceRecognitionActivity.this);
                break;
            case SpeechRecognizer.ERROR_SERVER:
                message = "error from server";
                Log.v(LOG_TAG, message);
                break;
            case SpeechRecognizer.ERROR_SPEECH_TIMEOUT:
                message = "No speech input";
                Log.v(LOG_TAG, message);
                progressBar.setVisibility(View.VISIBLE);
                progressBar.setIndeterminate(true);
                speech.stopListening();
                speech.destroy();
                createSpeechAgain(VoiceRecognitionActivity.this);
                break;
            default:
                message = "Didn't understand, please try again.";
                break;
        }
        return message;
    }
}
Xml:-
<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical" >

    <ImageView
        android:id="@+id/imageView1"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_alignParentBottom="true"
        android:layout_centerHorizontal="true"
        android:src="@drawable/ic_launcher" />

    <ProgressBar
        android:id="@+id/progressBar1"
        style="?android:attr/progressBarStyleHorizontal"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_alignParentLeft="true"
        android:layout_below="@+id/toggleButton1"
        android:layout_marginTop="28dp"
        android:paddingLeft="10dp"
        android:paddingRight="10dp" />

    <TextView
        android:id="@+id/textView1"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_below="@+id/progressBar1"
        android:layout_centerHorizontal="true"
        android:layout_marginTop="47dp" />

    <Button
        android:id="@+id/button1"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_above="@+id/imageView1"
        android:layout_alignLeft="@+id/imageView1"
        android:text="Restart" />

    <TextView
        android:id="@+id/textView2"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_above="@+id/button1"
        android:layout_centerHorizontal="true"
        android:layout_marginBottom="19dp"
        android:text="" />
</RelativeLayout>
AndroidManifest.xml:-
<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.offlinegooglespeechtotext"
    android:versionCode="1"
    android:versionName="1.0" >

    <uses-sdk
        android:minSdkVersion="19"
        android:targetSdkVersion="19" />

    <uses-permission android:name="android.permission.RECORD_AUDIO" />

    <application
        android:allowBackup="true"
        android:icon="@drawable/ic_launcher"
        android:label="@string/app_name"
        android:theme="@style/AppTheme" >
        <activity
            android:name=".VoiceRecognitionActivity"
            android:label="@string/app_name" >
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>
</manifest>
Logcat:-
09-30 18:05:54.732: D/ResourcesManager(3941): creating new AssetManager and set to /data/app/com.example.offlinegooglespeechtotext-2/base.apk
09-30 18:05:54.772: V/BitmapFactory(3941): DecodeImagePath(decodeResourceStream3) : res/drawable-xxhdpi-v4/sym_def_app_icon.png
09-30 18:05:54.772: V/BitmapFactory(3941): DecodeImagePath(decodeResourceStream3) : res/drawable-xxhdpi/ic_launcher.png
09-30 18:05:54.787: V/BitmapFactory(3941): DecodeImagePath(decodeResourceStream3) : res/drawable-xxhdpi-v4/ic_ab_back_holo_dark_am.png
09-30 18:05:54.797: V/BitmapFactory(3941): DecodeImagePath(decodeResourceStream3) : res/drawable-xxhdpi-v4/sym_def_app_icon.png
09-30 18:05:54.817: D/Activity(3941): performCreate Call secproduct feature valuefalse
09-30 18:05:54.817: D/Activity(3941): performCreate Call debug elastic valuetrue
09-30 18:05:54.827: D/OpenGLRenderer(3941): Render dirty regions requested: true
09-30 18:05:54.867: I/(3941): PLATFORM VERSION : JB-MR-2
09-30 18:05:54.867: I/OpenGLRenderer(3941): Initialized EGL, version 1.4
09-30 18:05:54.877: I/OpenGLRenderer(3941): HWUI protection enabled for context , &this =0xb39090d8 ,&mEglDisplay = 1 , &mEglConfig = -1282088012
09-30 18:05:54.887: D/OpenGLRenderer(3941): Enabling debug mode 0
09-30 18:05:54.957: V/LOG_TAG(3941): No match
09-30 18:05:54.957: D/VoiceRecognitionActivity(3941): FAILED No match
09-30 18:05:54.982: I/Timeline(3941): Timeline: Activity_idle id: android.os.BinderProxy@24862afe time:5837375
09-30 18:05:55.607: I/VoiceRecognitionActivity(3941): onReadyForSpeech
09-30 18:05:55.947: I/VoiceRecognitionActivity(3941): onBeginningOfSpeech
09-30 18:05:57.252: I/VoiceRecognitionActivity(3941): onEndOfSpeech
09-30 18:05:57.322: V/LOG_TAG(3941): No match
09-30 18:05:57.322: D/VoiceRecognitionActivity(3941): FAILED No match
09-30 18:05:57.332: V/LOG_TAG(3941): No match
09-30 18:05:57.332: D/VoiceRecognitionActivity(3941): FAILED No match
09-30 18:05:57.347: V/LOG_TAG(3941): No match
09-30 18:05:57.347: D/VoiceRecognitionActivity(3941): FAILED No match
09-30 18:05:57.367: V/LOG_TAG(3941): RecognitionService busy
09-30 18:05:57.392: D/VoiceRecognitionActivity(3941): FAILED RecognitionService busy
09-30 18:05:57.392: E/SpeechRecognizer(3941): not connected to the recognition service
09-30 18:05:58.232: I/VoiceRecognitionActivity(3941): onReadyForSpeech
09-30 18:06:03.287: V/LOG_TAG(3941): No speech input
09-30 18:06:03.302: D/VoiceRecognitionActivity(3941): FAILED No speech input
09-30 18:06:03.302: E/SpeechRecognizer(3941): not connected to the recognition service
ERROR_RECOGNIZER_BUSY is thrown when the speech recognizer is started again while it is already running, as in your code:
case SpeechRecognizer.ERROR_RECOGNIZER_BUSY:
    message = "RecognitionService busy";
    Log.v("LOG_TAG", message);
    speech.stopListening();
    speech.destroy();
    createSpeechAgain(VoiceRecognitionActivity.this);
    break;
Just use:
case SpeechRecognizer.ERROR_RECOGNIZER_BUSY:
    break;
There is no need to start it again, since it is already running. If you start recognition again, it will keep throwing the same error over and over in a loop.
ERROR_RECOGNIZER_BUSY is usually thrown when you have not shut down the speech recognizer in time. You probably already have one instance of the speech recognizer in use.
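If you still want to recover from the other errors without falling into the busy/restart loop shown in the logcat, the restart policy can be rate-limited so that a new attempt is only made after a minimum interval. A minimal plain-Java sketch of such a guard (the `RestartGuard` class, its `tryRestart` method, and the cooldown value are my own illustration, not part of the Android API):

```java
// Decides whether a recognizer restart should be attempted right now.
// Kept as plain Java (no Android classes) so it can be unit-tested;
// the caller passes in a monotonic timestamp such as
// SystemClock.elapsedRealtime().
class RestartGuard {
    private final long cooldownMillis;
    private boolean hasRestarted = false;
    private long lastRestartAt = 0L;

    RestartGuard(long cooldownMillis) {
        this.cooldownMillis = cooldownMillis;
    }

    // Returns true and records the attempt if enough time has passed
    // since the last restart; returns false while still cooling down.
    synchronized boolean tryRestart(long nowMillis) {
        if (hasRestarted && nowMillis - lastRestartAt < cooldownMillis) {
            return false; // too soon: the service is likely still busy
        }
        hasRestarted = true;
        lastRestartAt = nowMillis;
        return true;
    }
}
```

With a guard like this, `ERROR_RECOGNIZER_BUSY` would simply `break` out of the switch, and the recoverable cases would only call `speech.destroy()` and `createSpeechAgain(...)` when `tryRestart(...)` returns true.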
See http://developer.android.com/reference/android/speech/SpeechRecognizer.html
Can anyone help me? I am developing an application that performs speech recognition through RecognizerIntent. Which Android version officially introduced API support for offline recognition in apps? Is there any statement about it? As far as I know, the developer cannot choose whether recognition is done through the online service or the offline dictionary. Am I right, or is there any documented API for setting offline mode? Thanks.