Question:

Android CameraX ImageAnalysis image plane buffer size (limit) does not match image size

汤乐家
2023-03-14

This is a general question about the CameraX ImageAnalysis use case, but I'll use a slightly modified version of this codelab as an example to illustrate the issue I'm seeing. I'm finding a mismatch between the image size (image.height * image.width) and the size of the associated ByteBuffer (measured by its limit and/or capacity). I would expect them to be equal, with one pixel of the image mapping to a single value in the ByteBuffer. That does not appear to be the case. Hopefully someone can clarify whether this is a bug and, if not, how to interpret the mismatch.

In step 6 of the codelab (Image Analysis), they provide a LuminosityAnalyzer subclass:

package jp.oist.cameraxcodelab

import androidx.appcompat.app.AppCompatActivity
import android.os.Bundle
import android.Manifest
import android.content.pm.PackageManager
import android.net.Uri
import android.util.Log
import android.util.Size
import android.widget.Toast
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat
import java.util.concurrent.Executors
import androidx.camera.core.*
import androidx.camera.lifecycle.ProcessCameraProvider
import kotlinx.android.synthetic.main.activity_main.*
import java.io.File
import java.nio.ByteBuffer
import java.text.SimpleDateFormat
import java.util.*
import java.util.concurrent.ExecutorService
typealias LumaListener = (luma: Double) -> Unit

class MainActivity : AppCompatActivity() {
    private var imageCapture: ImageCapture? = null

    private lateinit var outputDirectory: File
    private lateinit var cameraExecutor: ExecutorService

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        // Request camera permissions
        if (allPermissionsGranted()) {
            startCamera()
        } else {
            ActivityCompat.requestPermissions(
                    this, REQUIRED_PERMISSIONS, REQUEST_CODE_PERMISSIONS)
        }

        // Set up the listener for take photo button
        camera_capture_button.setOnClickListener { takePhoto() }

        outputDirectory = getOutputDirectory()

        cameraExecutor = Executors.newSingleThreadExecutor()
    }

    private fun takePhoto() {
        // Get a stable reference of the modifiable image capture use case
        val imageCapture = imageCapture ?: return

        // Create time-stamped output file to hold the image
        val photoFile = File(
                outputDirectory,
                SimpleDateFormat(FILENAME_FORMAT, Locale.US
                ).format(System.currentTimeMillis()) + ".jpg")

        // Create output options object which contains file + metadata
        val outputOptions = ImageCapture.OutputFileOptions.Builder(photoFile).build()

        // Set up image capture listener, which is triggered after photo has
        // been taken
        imageCapture.takePicture(
                outputOptions, ContextCompat.getMainExecutor(this), object : ImageCapture.OnImageSavedCallback {
            override fun onError(exc: ImageCaptureException) {
                Log.e(TAG, "Photo capture failed: ${exc.message}", exc)
            }

            override fun onImageSaved(output: ImageCapture.OutputFileResults) {
                val savedUri = Uri.fromFile(photoFile)
                val msg = "Photo capture succeeded: $savedUri"
                Toast.makeText(baseContext, msg, Toast.LENGTH_SHORT).show()
                Log.d(TAG, msg)
            }
        })
    }

    private fun startCamera() {
        val cameraProviderFuture = ProcessCameraProvider.getInstance(this)

        cameraProviderFuture.addListener(Runnable {
            // Used to bind the lifecycle of cameras to the lifecycle owner
            val cameraProvider: ProcessCameraProvider = cameraProviderFuture.get()

            // Preview
            val preview = Preview.Builder()
                    .build()
                    .also {
                        it.setSurfaceProvider(viewFinder.createSurfaceProvider())
                    }

            imageCapture = ImageCapture.Builder()
                    .build()

            val imageAnalyzer = ImageAnalysis.Builder()
                    .setTargetResolution(Size(480, 640)) // I added this line
                    .build()
                    .also {
                        it.setAnalyzer(cameraExecutor, LuminosityAnalyzer { luma ->
//                            Log.d(TAG, "Average luminosity: $luma")
                        })
                    }

            // Select back camera as a default
            val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA

            try {
                // Unbind use cases before rebinding
                cameraProvider.unbindAll()

                // Bind use cases to camera
                cameraProvider.bindToLifecycle(
                        this, cameraSelector, preview, imageCapture, imageAnalyzer)

            } catch(exc: Exception) {
                Log.e(TAG, "Use case binding failed", exc)
            }

        }, ContextCompat.getMainExecutor(this))
    }
    
    private fun allPermissionsGranted() = REQUIRED_PERMISSIONS.all {
        ContextCompat.checkSelfPermission(
                baseContext, it) == PackageManager.PERMISSION_GRANTED
    }

    private fun getOutputDirectory(): File {
        val mediaDir = externalMediaDirs.firstOrNull()?.let {
            File(it, resources.getString(R.string.app_name)).apply { mkdirs() } }
        return if (mediaDir != null && mediaDir.exists())
            mediaDir else filesDir
    }

    override fun onDestroy() {
        super.onDestroy()
        cameraExecutor.shutdown()
    }

    companion object {
        private const val TAG = "CameraXBasic"
        private const val FILENAME_FORMAT = "yyyy-MM-dd-HH-mm-ss-SSS"
        private const val REQUEST_CODE_PERMISSIONS = 10
        private val REQUIRED_PERMISSIONS = arrayOf(Manifest.permission.CAMERA)
    }

    override fun onRequestPermissionsResult(
            requestCode: Int, permissions: Array<String>, grantResults:
            IntArray) {
        if (requestCode == REQUEST_CODE_PERMISSIONS) {
            if (allPermissionsGranted()) {
                startCamera()
            } else {
                Toast.makeText(this,
                        "Permissions not granted by the user.",
                        Toast.LENGTH_SHORT).show()
                finish()
            }
        }
    }

    private class LuminosityAnalyzer(private val listener: LumaListener) : ImageAnalysis.Analyzer {

        private fun ByteBuffer.toByteArray(): ByteArray {
            rewind()    // Rewind the buffer to zero
            val data = ByteArray(remaining())
            get(data)   // Copy the buffer into a byte array
            return data // Return the byte array
        }

        override fun analyze(image: ImageProxy) {

            val buffer = image.planes[0].buffer
            for ((index,plane) in image.planes.withIndex()){
                Log.i("analyzer", "Plane: $index" + " H: " + image.height + " W: " + 
                        image.width + " HxW: " +  image.height * image.width + " buffer.limit: " + 
                        buffer.limit() + " buffer.cap: " + buffer.capacity() + buffer.get())
            }
            val data = buffer.toByteArray()
            val pixels = data.map { it.toInt() and 0xFF }
            val luma = pixels.average()

            listener(luma)

            image.close()
        }
    }

}

I'm trying to pass the array data of the planes out of the analyzer, so I'm interested in the `data` value defined inside LuminosityAnalyzer.analyze().

I wanted to check the dimensions of the data array, so I added the log statement you see after the initialization of `val buffer`. For the default resolution I get the following in the log:

Plane: 0 H: 480 W: 640 HxW: 307200 buffer.limit: 307200 buffer.cap: 3072005

Plane: 1 H: 480 W: 640 HxW: 307200 buffer.limit: 307200 buffer.cap: 3072003

Plane: 2 H: 480 W: 640 HxW: 307200 buffer.limit: 307200 buffer.cap: 3072003

This is what I would expect: H*W = buffer.limit, with the buffer representing the pixels of the image in that particular plane. (The extra digit tacked onto the end of each buffer.cap value is the byte returned by buffer.get(), which the log statement concatenates without a space.)

If I change the resolution by configuring the imageAnalyzer with the setTargetResolution() method, I get strange results. For example, if I set it to setTargetResolution(144, 176), I get the following logs:

Plane: 0 H: 144 W: 176 HxW: 25344 buffer.limit: 27632 buffer.cap: 276324

Plane: 1 H: 144 W: 176 HxW: 25344 buffer.limit: 27632 buffer.cap: 276324

Plane: 2 H: 144 W: 176 HxW: 25344 buffer.limit: 27632 buffer.cap: 276322

Note that the image size is not the same as the buffer limit and capacity.

A few other examples for plane 0 only (for brevity):

Plane: 0 H: 288 W: 352 HxW: 101376 buffer.limit: 110560 buffer.cap: 1105604

Plane: 0 H: 600 W: 800 HxW: 480000 buffer.limit: 499168 buffer.cap: 4991685

Plane: 0 H: 960 W: 1280 HxW: 1228800 buffer.limit: 1228800 buffer.cap: 12288004

Does this have something to do with the sensor size not matching standard image sizes? Should I expect the remaining entries in the buffer to be zero, or are they meaningless?

I originally wasn't running this in Kotlin but in Java, and there I got even stranger results. If you log the image size and buffer limit for each of the three planes, you get limits that are both larger and smaller than the image size, differing between planes:

Plane: 0 width: 176 height: 144 WxH: 25344 buffer.limit: 27632

Plane: 1 width: 176 height: 144 WxH: 25344 buffer.limit: 13807

Plane: 2 width: 176 height: 144 WxH: 25344 buffer.limit: 13807

For whatever reason, in Kotlin the limit stays the same across the planes.

How should I interpret this? Is the image padded in plane 0 and cropped in planes 1 and 2? Or is this a bug?

For reference, the manifest, layout file, and build.gradle file are copied below (they should be the same as in the codelab), and at the very end I've also included the Java version of MainActivity, which produces the mismatched buffer limits between planes. Bonus points if you can tell me why ByteBuffer.array() hangs and why ByteBuffer.get() has to be used instead:

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="jp.oist.cameraxcodelab">

    <uses-feature android:name="android.hardware.camera.any" />
    <uses-permission android:name="android.permission.CAMERA" />

    <application
        android:allowBackup="true"
        android:icon="@mipmap/ic_launcher"
        android:label="@string/app_name"
        android:roundIcon="@mipmap/ic_launcher_round"
        android:supportsRtl="true"
        android:theme="@style/Theme.CameraXCodeLab">
        <activity android:name=".MainActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />

                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>

</manifest>
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <Button
        android:id="@+id/camera_capture_button"
        android:layout_width="100dp"
        android:layout_height="100dp"
        android:layout_marginBottom="50dp"
        android:scaleType="fitCenter"
        android:text="Take Photo"
        app:layout_constraintLeft_toLeftOf="parent"
        app:layout_constraintRight_toRightOf="parent"
        app:layout_constraintBottom_toBottomOf="parent"
        android:elevation="2dp" />

    <androidx.camera.view.PreviewView
        android:id="@+id/viewFinder"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

</androidx.constraintlayout.widget.ConstraintLayout>

plugins {
    id 'com.android.application'
    id 'kotlin-android'
    id 'kotlin-android-extensions'
}

android {
    compileSdkVersion 30
    buildToolsVersion "30.0.2"

    defaultConfig {
        applicationId "jp.oist.cameraxcodelab"
        minSdkVersion 21
        targetSdkVersion 30
        versionCode 1
        versionName "1.0"

        testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
    }

    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
        }
    }
    compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }
    kotlinOptions {
        jvmTarget = '1.8'
    }
}

dependencies {

    implementation "org.jetbrains.kotlin:kotlin-stdlib:$kotlin_version"
    implementation 'androidx.core:core-ktx:1.2.0'
    implementation 'androidx.appcompat:appcompat:1.2.0'
    implementation 'com.google.android.material:material:1.2.1'
    implementation 'androidx.constraintlayout:constraintlayout:2.0.4'
    testImplementation 'junit:junit:4.+'
    androidTestImplementation 'androidx.test.ext:junit:1.1.2'
    androidTestImplementation 'androidx.test.espresso:espresso-core:3.3.0'
    def camerax_version = "1.0.0-beta07"
// CameraX core library using camera2 implementation
    implementation "androidx.camera:camera-camera2:$camerax_version"
// CameraX Lifecycle Library
    implementation "androidx.camera:camera-lifecycle:$camerax_version"
// CameraX View class
    implementation "androidx.camera:camera-view:1.0.0-alpha14"

}
package jp.oist.abcvlib.camera;

import android.Manifest;
import android.content.pm.PackageManager;
import android.media.Image;
import android.os.Bundle;
import android.util.Log;
import android.util.Size;
import android.widget.Toast;

import androidx.annotation.NonNull;
import androidx.appcompat.app.AppCompatActivity;
import androidx.camera.core.Camera;
import androidx.camera.core.CameraSelector;
import androidx.camera.core.ImageAnalysis;
import androidx.camera.core.ImageProxy;
import androidx.camera.core.Preview;
import androidx.camera.lifecycle.ProcessCameraProvider;
import androidx.camera.view.PreviewView;
import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;
import androidx.lifecycle.LifecycleOwner;

import com.google.common.util.concurrent.ListenableFuture;

import java.nio.ByteBuffer;
import java.nio.DoubleBuffer;
import java.util.Arrays;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ScheduledThreadPoolExecutor;

public class MainActivity extends AppCompatActivity implements LifecycleOwner {

    private static final int REQUEST_CODE_PERMISSIONS = 10;
    private static final String[] REQUIRED_PERMISSIONS = { Manifest.permission.CAMERA };

    private ListenableFuture<ProcessCameraProvider> mCameraProviderFuture;
    private PreviewView mPreviewView;

    private ExecutorService analysisExecutor;


    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        mPreviewView = findViewById(R.id.preview_view);

        // Request camera permissions
        if (allPermissionsGranted()) {
            startCamera();
        } else {
            ActivityCompat.requestPermissions(
                    this, REQUIRED_PERMISSIONS, REQUEST_CODE_PERMISSIONS);
        }

        int threadPoolSize = 8;
        analysisExecutor = new ScheduledThreadPoolExecutor(threadPoolSize);
    }

    private void bindAll(@NonNull ProcessCameraProvider cameraProvider) {
        Preview preview = new Preview.Builder().build();
        CameraSelector cameraSelector = new CameraSelector.Builder()
                .requireLensFacing(CameraSelector.LENS_FACING_FRONT)
                .build();

        ImageAnalysis imageAnalysis =
                new ImageAnalysis.Builder()
                        .setTargetResolution(new Size(10, 10))
                        .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
                        .build();

        imageAnalysis.setAnalyzer(analysisExecutor, new ImageAnalysis.Analyzer() {
            @Override
            @androidx.camera.core.ExperimentalGetImage
            public void analyze(@NonNull ImageProxy imageProxy) {
                Image image = imageProxy.getImage();
                if (image != null) {
                    int width = image.getWidth();
                    int height = image.getHeight();
                    byte[] frame = new byte[width * height];
                    Image.Plane[] planes = image.getPlanes();
                    int idx = 0;
                    for (Image.Plane plane : planes){
                        ByteBuffer frameBuffer = plane.getBuffer();
                        int n = frameBuffer.capacity();
                        Log.i("analyzer", "Plane: " + idx + " width: " + width + " height: " + height + " WxH: " + width*height + " buffer.limit: " + n);
                        frameBuffer.rewind();
                        frame = new byte[n];
                        frameBuffer.get(frame);
                        idx++;
                    }
                }
                imageProxy.close();
            }
        });

        Camera camera = cameraProvider.bindToLifecycle(this, cameraSelector, preview, imageAnalysis);
        preview.setSurfaceProvider(mPreviewView.getSurfaceProvider());
    }

    private void startCamera() {
        mPreviewView.post(() -> {
            mCameraProviderFuture = ProcessCameraProvider.getInstance(this);
            mCameraProviderFuture.addListener(() -> {
                try {
                    ProcessCameraProvider cameraProvider = mCameraProviderFuture.get();
                    bindAll(cameraProvider);
                } catch (ExecutionException | InterruptedException e) {
                    // No errors need to be handled for this Future.
                    // This should never be reached.
                }
            }, ContextCompat.getMainExecutor(this));
        });
    }

    /**
     * Process result from permission request dialog box, has the request
     * been granted? If yes, start Camera. Otherwise display a toast
     */
    @Override
    public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
        // super.onRequestPermissionsResult(requestCode, permissions, grantResults);
        if (requestCode == REQUEST_CODE_PERMISSIONS) {
            if (allPermissionsGranted()) {
                startCamera();
            } else {
                Toast.makeText(this,
                        "Permissions not granted by the user.",
                        Toast.LENGTH_SHORT).show();
                finish();
            }
        }
    }

    /**
     * Check if all permission specified in the manifest have been granted
     */
    private boolean allPermissionsGranted() {
        for (String permission : REQUIRED_PERMISSIONS) {
            if (ContextCompat.checkSelfPermission(getBaseContext(), permission) != PackageManager.PERMISSION_GRANTED) {
                return false;
            }
        }
        return true;
    }
}

1 Answer

仲孙奇
2023-03-14

Take a look at the details of the ImageProxy.PlaneProxy class; the planes are not just packed image data. They can have both a row stride and a pixel stride.

The row stride is the padding between two adjacent rows of image data. The pixel stride is the padding between two adjacent pixels.

Also, planes 1 and 2 of a YUV_420_888 image have half the pixel count of plane 0; the reason you're getting the same size is likely that the pixel stride is 2.
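
To make that concrete, here is a rough consistency check against the numbers in the question, written as Kotlin comments. The 64-byte row alignment assumed below is an inference from those numbers, not something CameraX or the device documents:

// The last valid byte of a plane sits at
//   rowStride * (rows - 1) + pixelStride * (cols - 1) + 1
// because the final row is not padded out to a full row stride.
fun lastByteOffset(rowStride: Int, pixelStride: Int, rows: Int, cols: Int): Int =
    rowStride * (rows - 1) + pixelStride * (cols - 1) + 1

// Assuming this device pads each row of plane 0 up to a multiple of 64 bytes:
//   width  176 -> rowStride 192:  192 * 143 + 175 + 1 = 27632    (logged limit: 27632)
//   width  352 -> rowStride 384:  384 * 287 + 351 + 1 = 110560   (logged limit: 110560)
//   width  800 -> rowStride 832:  832 * 599 + 799 + 1 = 499168   (logged limit: 499168)
//   width 1280 -> rowStride 1280: no padding, so limit = 1280 * 960 = 1228800
// And for an 88 x 72 chroma plane with pixelStride 2 and the same rowStride of 192:
//   192 * 71 + 2 * 87 + 1 = 13807, which matches the Java logs for planes 1 and 2.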

For some resolutions the stride may equal the width (processing hardware generally has constraints, such as the row stride having to be a multiple of 16 or 32 bytes), but that may not be the case for all of them.
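
And here is a minimal sketch (the helper name and structure are illustrative, not from the codelab or the CameraX API beyond ImageProxy.PlaneProxy itself) of how a plane can be unpacked into a tightly packed array once the strides are taken into account:

import androidx.camera.core.ImageProxy

// Illustrative helper: copy one ImageProxy.PlaneProxy into a tightly packed array of
// planeWidth * planeHeight bytes, skipping the row padding and the pixel stride.
// For a YUV_420_888 image, planes 1 and 2 should be called with
// planeWidth = width / 2 and planeHeight = height / 2.
fun unpackPlane(plane: ImageProxy.PlaneProxy, planeWidth: Int, planeHeight: Int): ByteArray {
    val buffer = plane.buffer
    val rowStride = plane.rowStride
    val pixelStride = plane.pixelStride
    val out = ByteArray(planeWidth * planeHeight)
    val row = ByteArray(rowStride)
    for (y in 0 until planeHeight) {
        buffer.position(y * rowStride)
        // The last row is not padded to a full rowStride, so only read what is left.
        val length = minOf(rowStride, buffer.remaining())
        buffer.get(row, 0, length)
        for (x in 0 until planeWidth) {
            out[y * planeWidth + x] = row[x * pixelStride]
        }
    }
    return out
}

After unpacking like this, the array length equals planeWidth * planeHeight, so the averaging in LuminosityAnalyzer would again see exactly one value per pixel even at resolutions where buffer.limit() is larger than width * height.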
