I am building an Android app for my college project, "AI Pneumonia Detector". It takes an input image chosen by the user and predicts how likely it is that the person has pneumonia.
In the app I import the model, get a bitmap of the image from the gallery, resize it to (224, 224), which is the model's input size, convert it to a TensorImage, get a ByteBuffer from the TensorImage, and then feed that ByteBuffer to the model as input (a minimal standalone sketch of this pipeline is included after the code below).
But after all that, the model outputs roughly 0.0039 for every image (note that 1/255 ≈ 0.0039), even for positive images where the result should be greater than 0.5. The same model works fine in Python.
My application code is:
package com.shekhardwivedi.aipneumoniadetector;
import androidx.appcompat.app.AppCompatActivity;
import android.app.Activity;
import android.content.Intent;
import android.graphics.Bitmap;
import android.graphics.Matrix;
import android.net.Uri;
import android.os.Bundle;
import android.os.Debug;
import android.provider.MediaStore;
import android.view.View;
import android.widget.Button;
import android.widget.ImageView;
import android.widget.TextView;
import com.shekhardwivedi.aipneumoniadetector.ml.AiModel;
import com.shekhardwivedi.aipneumoniadetector.ml.Stacked;
import com.squareup.picasso.Picasso;
import org.tensorflow.lite.DataType;
import org.tensorflow.lite.support.common.ops.DequantizeOp;
import org.tensorflow.lite.support.common.ops.QuantizeOp;
import org.tensorflow.lite.support.image.ImageProcessor;
import org.tensorflow.lite.support.image.TensorImage;
import org.tensorflow.lite.support.image.ops.ResizeOp;
import org.tensorflow.lite.support.tensorbuffer.TensorBuffer;
import java.io.ByteArrayOutputStream;
import java.io.Console;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.ByteBuffer;
public class predictionscreen extends AppCompatActivity {

    Button upload;
    TextView prediction;
    ImageView imageView;
    boolean debuFlag = false;
    String debuGlobal = "";

    public static final int GET_FROM_GALLERY = 3;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_predictionscreen);

        upload = findViewById(R.id.button);
        prediction = findViewById(R.id.textView);
        imageView = findViewById(R.id.imageView);

        prediction.setText("Result: Unavailable\nUpload Image");
        Picasso.get().load(R.drawable.tittle).into(imageView);

        upload.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                startActivityForResult(new Intent(Intent.ACTION_PICK, android.provider.MediaStore.Images.Media.INTERNAL_CONTENT_URI), GET_FROM_GALLERY);
                prediction.setText("Upload Image");
            }
        });
    }
    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        // Detects request codes
        if (requestCode == GET_FROM_GALLERY && resultCode == Activity.RESULT_OK) {
            Uri selectedImage = data.getData();
            Bitmap bitmap = null;
            try {
                bitmap = MediaStore.Images.Media.getBitmap(this.getContentResolver(), selectedImage);
                //Picasso.get().load(data).centerCrop().into(imageView);
                imageView.setImageBitmap(bitmap);
                int width = bitmap.getWidth();
                int height = bitmap.getHeight();
                int size = bitmap.getRowBytes() * bitmap.getHeight();

                ByteBuffer imageByteBuffer = preProcessImage(bitmap);
                prediction.setText("Processing");
                String pred1 = stackedModel(imageByteBuffer);
                //String pred2 = aiModel(bitmap);
                prediction.setText(String.format("%s", pred1));
            } catch (FileNotFoundException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            } catch (IOException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        }
    }
    public ByteBuffer preProcessImage(Bitmap imgBitmap) {
        int width = imgBitmap.getWidth();
        int height = imgBitmap.getHeight();
        int newHeight = 224;
        int newWidth = 224;
        float scaleWidth = ((float) newWidth) / width;
        float scaleHeight = ((float) newHeight) / height;

        // Create a matrix for the manipulation
        Matrix matrix = new Matrix();
        // Resize the bitmap
        matrix.postScale(scaleWidth, scaleHeight);
        // Recreate the new Bitmap
        Bitmap resizedBitmap = Bitmap.createBitmap(imgBitmap, 0, 0, width, height, matrix, false);
        imageView.setImageBitmap(resizedBitmap);

        // Initialization code
        // Create an ImageProcessor with all ops required. For more ops, please
        // refer to the ImageProcessor Architecture section in this README.
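        // Note: DequantizeOp(zeroPoint, scale) maps each value to (value - zeroPoint) * scale,
        // so DequantizeOp(0, 1/255f) rescales the 0..255 float pixel values into the 0..1 range.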
        ImageProcessor imageProcessor = new ImageProcessor.Builder()
                .add(new ResizeOp(224, 224, ResizeOp.ResizeMethod.BILINEAR))
                .add(new DequantizeOp(0, 1/255.0f))
                .build();

        // Create a TensorImage object. This creates the tensor of the corresponding
        // tensor type (float32 in this case) that the TensorFlow Lite interpreter needs.
        TensorImage tensorImage = new TensorImage(DataType.FLOAT32);

        // Preprocess the image
        tensorImage.load(resizedBitmap);
        tensorImage = imageProcessor.process(tensorImage);
        ByteBuffer imageBuffer = tensorImage.getBuffer();
        return imageBuffer;
    }
    public String stackedModel(ByteBuffer byteBuffer) {
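        // Note: ByteBuffer.getFloat(index) reads 4 bytes starting at byte offset 'index',
        // so these are spot checks of the preprocessed float pixel values at fixed byte offsets.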
        String debu1 = Float.toString(byteBuffer.getFloat(100));
        String debu2 = Float.toString(byteBuffer.getFloat(200));
        String debu3 = Float.toString(byteBuffer.getFloat(300));
        String debu4 = Float.toString(byteBuffer.getFloat(400));
        String debu = "(" + debu1 + "," + debu2 + "," + debu3 + "," + debu4 + ")";
        debuGlobal += debu;

        try {
            Stacked model = Stacked.newInstance(this);

            // Creates inputs for reference.
            TensorBuffer inputFeature0 = TensorBuffer.createFixedSize(new int[]{1, 224, 224, 3}, DataType.FLOAT32);
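            // The buffer loaded below must hold exactly 1 * 224 * 224 * 3 = 150528 float values (602112 bytes).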
            inputFeature0.loadBuffer(byteBuffer);

            // Runs model inference and gets result.
            Stacked.Outputs outputs = model.process(inputFeature0);
            TensorBuffer outputFeature0 = outputs.getOutputFeature0AsTensorBuffer();
            float[] out = outputFeature0.getFloatArray();
            int leng = out.length;
            String pred = String.format("Probability: %s", Double.toString(out[0] / 255.0));

            // Releases model resources if no longer used.
            model.close();
            return pred + "::" + debuGlobal;
        } catch (IOException e) {
            // TODO Handle the exception
        }
        return "errored";
    }
    public String aiModel(Bitmap imgBitmap) {
        int width = imgBitmap.getWidth();
        int height = imgBitmap.getHeight();
        int newHeight = 150;
        int newWidth = 150;
        float scaleWidth = ((float) newWidth) / width;
        float scaleHeight = ((float) newHeight) / height;

        // Create a matrix for the manipulation
        Matrix matrix = new Matrix();
        // Resize the bitmap
        matrix.postScale(scaleWidth, scaleHeight);
        // Recreate the new Bitmap
        Bitmap resizedBitmap = Bitmap.createBitmap(imgBitmap, 0, 0, width, height, matrix, false);
        imageView.setImageBitmap(resizedBitmap);

        // Initialization code
        // Create an ImageProcessor with all ops required. For more ops, please
        // refer to the ImageProcessor Architecture section in this README.
        ImageProcessor imageProcessor = new ImageProcessor.Builder()
                .add(new ResizeOp(150, 150, ResizeOp.ResizeMethod.BILINEAR))
                .build();
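        // Note: unlike stackedModel's pipeline, no scaling op is added here,
        // so the float pixel values stay in the 0..255 range.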
        // Create a TensorImage object. This creates the tensor of the corresponding
        // tensor type (float32 in this case) that the TensorFlow Lite interpreter needs.
        TensorImage tensorImage = new TensorImage(DataType.FLOAT32);

        // Preprocess the image
        tensorImage.load(resizedBitmap);
        tensorImage = imageProcessor.process(tensorImage);
        ByteBuffer imageBuffer = tensorImage.getBuffer();

        // String debu1 = Float.toString(imageBuffer.getFloat(100));
        // String debu2 = Float.toString(imageBuffer.getFloat(200));
        // String debu3 = Float.toString(imageBuffer.getFloat(300));
        // String debu4 = Float.toString(imageBuffer.getFloat(400));
        //
        // String debu = debu1 + debu2 + debu3 + debu4;

        try {
            AiModel model = AiModel.newInstance(this);

            // Creates inputs for reference.
            TensorBuffer inputFeature0 = TensorBuffer.createFixedSize(new int[]{1, 150, 150, 3}, DataType.FLOAT32);
            inputFeature0.loadBuffer(imageBuffer);

            // Runs model inference and gets result.
            AiModel.Outputs outputs = model.process(inputFeature0);
            TensorBuffer outputFeature0 = outputs.getOutputFeature0AsTensorBuffer();
            float[] out = outputFeature0.getFloatArray();
            int leng = out.length;
            String pred = String.format("Probability: %s", Double.toString(out[0] / 255.0));

            // Releases model resources if no longer used.
            model.close();
            return pred;
        } catch (IOException e) {
            // TODO Handle the exception
        }
        return "none returned";
    }
}
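For reference, here is a minimal standalone sketch of the preprocessing pipeline I described above, written against the same TFLite support library but using NormalizeOp instead of DequantizeOp. It assumes the Python training pipeline rescales pixel values by 1/255; the class and method names (InputHelper, buildModelInput) are only illustrative and are not part of my app.

import android.graphics.Bitmap;

import org.tensorflow.lite.DataType;
import org.tensorflow.lite.support.common.ops.NormalizeOp;
import org.tensorflow.lite.support.image.ImageProcessor;
import org.tensorflow.lite.support.image.TensorImage;
import org.tensorflow.lite.support.image.ops.ResizeOp;
import org.tensorflow.lite.support.tensorbuffer.TensorBuffer;

// Illustrative helper only: builds a [1, 224, 224, 3] float input from a Bitmap,
// assuming the training pipeline rescaled pixel values with x / 255.0.
public final class InputHelper {

    public static TensorBuffer buildModelInput(Bitmap bitmap) {
        // Resize to the model's input size, then map 0..255 pixel values to 0..1
        // (NormalizeOp(mean, stddev) computes (value - mean) / stddev).
        ImageProcessor processor = new ImageProcessor.Builder()
                .add(new ResizeOp(224, 224, ResizeOp.ResizeMethod.BILINEAR))
                .add(new NormalizeOp(0f, 255f))
                .build();

        TensorImage image = new TensorImage(DataType.FLOAT32);
        image.load(bitmap);
        image = processor.process(image);

        // Copy the processed pixels into a fixed-size float tensor buffer
        // that can be passed to the generated model wrapper.
        TensorBuffer input = TensorBuffer.createFixedSize(new int[]{1, 224, 224, 3}, DataType.FLOAT32);
        input.loadBuffer(image.getBuffer());
        return input;
    }
}

This is only a sketch of the preprocessing step; my actual activity code runs exactly as posted above.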