Question:

Recording video with ARCore

邹英光
2023-03-14

I am working from this sample (https://github.com/google-ar/arcore-android-sdk/tree/master/samples/hello_ar_java), and I would like to add the ability to record a video of the scene with the placed AR objects.

I have tried a number of approaches without success. Is there a recommended way to do this?

1 answer

印成天
2023-03-14

Creating a video from an OpenGL surface is a bit involved, but doable. The easiest way to understand it, I think, is to use two EGL surfaces: one for the UI and one for the media encoder. The Grafika project on GitHub has a good example of the EGL-level calls needed. I used it as a starting point to figure out the modifications needed to ARCore's HelloAR sample. Since there are quite a few changes, I'll break them down into steps.

Make changes to support writing to external storage

To save the video, you need to write the video file to an accessible location, so you need to obtain this permission.

Declare the permission in the AndroidManifest.xml file:

   <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>

Then change CameraPermissionHelper.java to request the external storage permission in addition to the camera permission. To do this, make an array of the permissions, use it when requesting permissions, and iterate over it when checking permission state:

private static final String[] REQUIRED_PERMISSIONS = {
    Manifest.permission.CAMERA,
    Manifest.permission.WRITE_EXTERNAL_STORAGE
};

public static void requestCameraPermission(Activity activity) {
    ActivityCompat.requestPermissions(activity, REQUIRED_PERMISSIONS,
             CAMERA_PERMISSION_CODE);
}

public static boolean hasCameraPermission(Activity activity) {
  for(String p : REQUIRED_PERMISSIONS) {
    if (ContextCompat.checkSelfPermission(activity, p) !=
     PackageManager.PERMISSION_GRANTED) {
      return false;
    }
  }
  return true;
}

public static boolean shouldShowRequestPermissionRationale(Activity activity) {
  for(String p : REQUIRED_PERMISSIONS) {
    if (ActivityCompat.shouldShowRequestPermissionRationale(activity, p)) {
      return true;
    }
  }
  return false;
}
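The all-permissions check above is just a fold over the array. Decoupled from the Android classes, the same logic can be expressed and verified as a generic helper (a sketch for illustration, not part of the sample):

```java
import java.util.function.Predicate;

public class PermissionCheck {
    // Returns true only if every permission in the array satisfies the
    // isGranted predicate (in the sample, a ContextCompat.checkSelfPermission
    // call plays the role of the predicate).
    public static boolean allGranted(String[] permissions, Predicate<String> isGranted) {
        for (String p : permissions) {
            if (!isGranted.test(p)) {
                return false;
            }
        }
        return true;
    }
}
```

The short-circuit on the first missing permission mirrors the early `return false` in hasCameraPermission() above.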

Add recording to the Activity

Add a simple Button and TextView to the UI at the bottom of activity_main.xml:

<Button
   android:id="@+id/fboRecord_button"
   android:layout_width="wrap_content"
   android:layout_height="wrap_content"
   android:layout_alignStart="@+id/surfaceview"
   android:layout_alignTop="@+id/surfaceview"
   android:onClick="clickToggleRecording"
   android:text="@string/toggleRecordingOn"
   tools:ignore="OnClick"/>

<TextView
   android:id="@+id/nowRecording_text"
   android:layout_width="wrap_content"
   android:layout_height="wrap_content"
   android:layout_alignBaseline="@+id/fboRecord_button"
   android:layout_alignBottom="@+id/fboRecord_button"
   android:layout_toEndOf="@+id/fboRecord_button"
   android:text="" />

Add member variables to HelloArActivity for recording:

private VideoRecorder mRecorder;
private android.opengl.EGLConfig mAndroidEGLConfig;

Initialize mAndroidEGLConfig in onSurfaceCreated(). We'll use this config object to create the encoder surface.

EGL10 egl10 =  (EGL10)EGLContext.getEGL();
javax.microedition.khronos.egl.EGLDisplay display = egl10.eglGetCurrentDisplay();
int v[] = new int[2];
egl10.eglGetConfigAttrib(display,config, EGL10.EGL_CONFIG_ID, v);

EGLDisplay androidDisplay = EGL14.eglGetCurrentDisplay();
int attribs[] = {EGL14.EGL_CONFIG_ID, v[0], EGL14.EGL_NONE};
android.opengl.EGLConfig myConfig[] = new android.opengl.EGLConfig[1];
EGL14.eglChooseConfig(androidDisplay, attribs, 0, myConfig, 0, 1, v, 1);
this.mAndroidEGLConfig = myConfig[0];

Refactor the onDrawFrame() method so that all the non-drawing code executes first, and the actual drawing is done in a method named draw(). This way, during recording we can update the ARCore frame, handle input, draw to the UI, and then draw again to the encoder.

@Override
public void onDrawFrame(GL10 gl) {

 if (mSession == null) {
   return;
 }
 // Notify ARCore session that the view size changed so that
 // the perspective matrix and
 // the video background can be properly adjusted.
 mDisplayRotationHelper.updateSessionIfNeeded(mSession);

 try {
   // Obtain the current frame from ARSession. When the 
   //configuration is set to
   // UpdateMode.BLOCKING (it is by default), this will
   // throttle the rendering to the camera framerate.
   Frame frame = mSession.update();
   Camera camera = frame.getCamera();

   // Handle taps. Handling only one tap per frame, as taps are
   // usually low frequency compared to frame rate.
   MotionEvent tap = mQueuedSingleTaps.poll();
   if (tap != null && camera.getTrackingState() == TrackingState.TRACKING) {
     for (HitResult hit : frame.hitTest(tap)) {
       // Check if any plane was hit, and if it was hit inside the plane polygon
       Trackable trackable = hit.getTrackable();
       if (trackable instanceof Plane
               && ((Plane) trackable).isPoseInPolygon(hit.getHitPose())) {
         // Cap the number of objects created. This avoids overloading both the
         // rendering system and ARCore.
         if (mAnchors.size() >= 20) {
           mAnchors.get(0).detach();
           mAnchors.remove(0);
         }
         // Adding an Anchor tells ARCore that it should track this position in
         // space. This anchor is created on the Plane to place the 3d model
         // in the correct position relative both to the world and to the plane.
         mAnchors.add(hit.createAnchor());

         // Hits are sorted by depth. Consider only closest hit on a plane.
         break;
       }
     }
   }


   // Get projection matrix.
   float[] projmtx = new float[16];
   camera.getProjectionMatrix(projmtx, 0, 0.1f, 100.0f);

   // Get camera matrix and draw.
   float[] viewmtx = new float[16];
   camera.getViewMatrix(viewmtx, 0);

   // Compute lighting from average intensity of the image.
   final float lightIntensity = frame.getLightEstimate().getPixelIntensity();

   // Visualize tracked points.
   PointCloud pointCloud = frame.acquirePointCloud();
   mPointCloud.update(pointCloud);


   draw(frame,camera.getTrackingState() == TrackingState.PAUSED,
           viewmtx, projmtx, camera.getDisplayOrientedPose(),lightIntensity);

   if (mRecorder != null && mRecorder.isRecording()) {
     VideoRecorder.CaptureContext ctx = mRecorder.startCapture();
     if (ctx != null) {
       // draw again
       draw(frame, camera.getTrackingState() == TrackingState.PAUSED,
            viewmtx, projmtx, camera.getDisplayOrientedPose(), lightIntensity);

       // restore the context
       mRecorder.stopCapture(ctx, frame.getTimestamp());
     }
   }



   // Application is responsible for releasing the point cloud resources after
   // using it.
   pointCloud.release();

   // Check if we detected at least one plane. If so, hide the loading message.
   if (mMessageSnackbar != null) {
     for (Plane plane : mSession.getAllTrackables(Plane.class)) {
       if (plane.getType() == 
              com.google.ar.core.Plane.Type.HORIZONTAL_UPWARD_FACING
               && plane.getTrackingState() == TrackingState.TRACKING) {
         hideLoadingMessage();
         break;
       }
     }
   }
 } catch (Throwable t) {
   // Avoid crashing the application due to unhandled exceptions.
   Log.e(TAG, "Exception on the OpenGL thread", t);
 }
}


private void draw(Frame frame, boolean paused,
                 float[] viewMatrix, float[] projectionMatrix,
                 Pose displayOrientedPose, float lightIntensity) {

 // Clear screen to notify driver it should not load
 // any pixels from previous frame.
 GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);

 // Draw background.
 mBackgroundRenderer.draw(frame);

 // If not tracking, don't draw 3d objects.
 if (paused) {
   return;
 }

 mPointCloud.draw(viewMatrix, projectionMatrix);

 // Visualize planes.
 mPlaneRenderer.drawPlanes(
         mSession.getAllTrackables(Plane.class),
         displayOrientedPose, projectionMatrix);

 // Visualize anchors created by touch.
 float scaleFactor = 1.0f;
 for (Anchor anchor : mAnchors) {
   if (anchor.getTrackingState() != TrackingState.TRACKING) {
     continue;
   }
   // Get the current pose of an Anchor in world space.
   // The Anchor pose is
   // updated during calls to session.update() as ARCore refines
   // its estimate of the world.
   anchor.getPose().toMatrix(mAnchorMatrix, 0);

   // Update and draw the model and its shadow.
   mVirtualObject.updateModelMatrix(mAnchorMatrix, scaleFactor);
   mVirtualObjectShadow.updateModelMatrix(mAnchorMatrix, scaleFactor);
   mVirtualObject.draw(viewMatrix, projectionMatrix, lightIntensity);
   mVirtualObjectShadow.draw(viewMatrix, projectionMatrix, lightIntensity);
 }
}

Handle toggling the recording:

public void clickToggleRecording(View view) {
 Log.d(TAG, "clickToggleRecording");
 if (mRecorder == null) {
   File outputFile = new File(Environment.getExternalStoragePublicDirectory(
        Environment.DIRECTORY_PICTURES) + "/HelloAR",
           "fbo-gl-" + Long.toHexString(System.currentTimeMillis()) + ".mp4");
   File dir = outputFile.getParentFile();
   if (!dir.exists()) {
     dir.mkdirs();
   }

    try {
      mRecorder = new VideoRecorder(mSurfaceView.getWidth(),
              mSurfaceView.getHeight(),
              VideoRecorder.DEFAULT_BITRATE, outputFile, this);
      mRecorder.setEglConfig(mAndroidEGLConfig);
    } catch (IOException e) {
      Log.e(TAG, "Exception starting recording", e);
      // Bail out if the recorder could not be created, to avoid a
      // NullPointerException on the toggleRecording() call below.
      return;
    }
  }
  mRecorder.toggleRecording();
  updateControls();
}
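The output path built in clickToggleRecording() can be factored into a small pure helper, which makes the naming scheme easy to check in isolation (a hypothetical refactoring, not in the sample):

```java
import java.io.File;

public class OutputFileNamer {
    // Mirrors the naming scheme used in clickToggleRecording():
    // <baseDir>/HelloAR/fbo-gl-<hex millis>.mp4
    public static File buildOutputFile(File baseDir, long timeMillis) {
        String name = "fbo-gl-" + Long.toHexString(timeMillis) + ".mp4";
        return new File(new File(baseDir, "HelloAR"), name);
    }
}
```

In the Activity you would pass Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_PICTURES) and System.currentTimeMillis() as the arguments.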

private void updateControls() {
 Button toggleRelease = findViewById(R.id.fboRecord_button);
 int id = (mRecorder != null && mRecorder.isRecording()) ?
         R.string.toggleRecordingOff : R.string.toggleRecordingOn;
 toggleRelease.setText(id);

 TextView tv =  findViewById(R.id.nowRecording_text);
 if (id == R.string.toggleRecordingOff) {
   tv.setText(getString(R.string.nowRecording));
 } else {
   tv.setText("");
 }
}

Add the listener interface to receive video recording state changes:

@Override
public void onVideoRecorderEvent(VideoRecorder.VideoEvent videoEvent) {
 Log.d(TAG, "VideoEvent: " + videoEvent);
 updateControls();

 if (videoEvent == VideoRecorder.VideoEvent.RecordingStopped) {
   mRecorder = null;
 }
}

Implement the VideoRecorder class to feed images to the encoder

The VideoRecorder class feeds images to the media encoder. It creates an off-screen EGLSurface using the media encoder's input surface. The general approach is that, while recording, you draw once for the UI display and then make the same exact draw calls against the media encoder surface.

The constructor takes the recording parameters and a listener to push events to during the recording process.

public VideoRecorder(int width, int height, int bitrate, File outputFile,
                    VideoRecorderListener listener) throws IOException {
 this.listener = listener;
 mEncoderCore = new VideoEncoderCore(width, height, bitrate, outputFile);
 mVideoRect = new Rect(0,0,width,height);
}

When recording starts, we need to create a new EGL surface for the encoder. Then notify the encoder that a new frame is coming, make the encoder surface the current EGL surface, and return so the caller can make the draw calls.

public CaptureContext startCapture() {

 if (mVideoEncoder == null) {
   return null;
 }

 if (mEncoderContext == null) {
   mEncoderContext = new CaptureContext();
   mEncoderContext.windowDisplay = EGL14.eglGetCurrentDisplay();

   // Create a window surface, and attach it to the Surface we received.
   int[] surfaceAttribs = {
           EGL14.EGL_NONE
   };

   mEncoderContext.windowDrawSurface = EGL14.eglCreateWindowSurface(
           mEncoderContext.windowDisplay,
         mEGLConfig,mEncoderCore.getInputSurface(),
         surfaceAttribs, 0);
   mEncoderContext.windowReadSurface = mEncoderContext.windowDrawSurface;
 }

 CaptureContext displayContext = new CaptureContext();
 displayContext.initialize();

 // Draw for recording, swap.
 mVideoEncoder.frameAvailableSoon();


 // Make the input surface current
 // mInputWindowSurface.makeCurrent();
 EGL14.eglMakeCurrent(mEncoderContext.windowDisplay,
         mEncoderContext.windowDrawSurface, mEncoderContext.windowReadSurface,
         EGL14.eglGetCurrentContext());

 // If we don't set the scissor rect, the glClear() we use to draw the
 // light-grey background will draw outside the viewport and muck up our
 // letterboxing.  Might be better if we disabled the test immediately after
 // the glClear().  Of course, if we were clearing the frame background to
 // black it wouldn't matter.
 //
 // We do still need to clear the pixels outside the scissor rect, of course,
 // or we'll get garbage at the edges of the recording.  We can either clear
 // the whole thing and accept that there will be a lot of overdraw, or we
 // can issue multiple scissor/clear calls.  Some GPUs may have a special
 // optimization for zeroing out the color buffer.
 //
 // For now, be lazy and zero the whole thing.  At some point we need to
 // examine the performance here.
 GLES20.glClearColor(0f, 0f, 0f, 1f);
 GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);

 GLES20.glViewport(mVideoRect.left, mVideoRect.top,
         mVideoRect.width(), mVideoRect.height());
 GLES20.glEnable(GLES20.GL_SCISSOR_TEST);
 GLES20.glScissor(mVideoRect.left, mVideoRect.top,
         mVideoRect.width(), mVideoRect.height());

 return displayContext;
}

Once drawing is complete, the EGLContext needs to be restored back to the UI surface:

public void stopCapture(CaptureContext oldContext, long timeStampNanos) {

 if (oldContext == null) {
   return;
 }
 GLES20.glDisable(GLES20.GL_SCISSOR_TEST);
 EGLExt.eglPresentationTimeANDROID(mEncoderContext.windowDisplay,
      mEncoderContext.windowDrawSurface, timeStampNanos);

 EGL14.eglSwapBuffers(mEncoderContext.windowDisplay,
      mEncoderContext.windowDrawSurface);


 // Restore.
 GLES20.glViewport(0, 0, oldContext.getWidth(), oldContext.getHeight());
 EGL14.eglMakeCurrent(oldContext.windowDisplay,
         oldContext.windowDrawSurface, oldContext.windowReadSurface,
         EGL14.eglGetCurrentContext());
}

Add a few bookkeeping methods:

public boolean isRecording() {
 return mRecording;
}

public void toggleRecording() {
 if (isRecording()) {
   stopRecording();
 } else {
   startRecording();
 }
}

protected void startRecording() {
 mRecording = true;
 if (mVideoEncoder == null) {
    mVideoEncoder = new TextureMovieEncoder2(mEncoderCore);
 }
 if (listener != null) {
   listener.onVideoRecorderEvent(VideoEvent.RecordingStarted);
 }
}

protected void stopRecording() {
 mRecording = false;
 if (mVideoEncoder != null) {
   mVideoEncoder.stopRecording();
 }
 if (listener != null) {
   listener.onVideoRecorderEvent(VideoEvent.RecordingStopped);
 }
}

public void setEglConfig(EGLConfig eglConfig) {
 this.mEGLConfig = eglConfig;
}

public enum VideoEvent {
 RecordingStarted,
 RecordingStopped
}

public interface VideoRecorderListener {

 void onVideoRecorderEvent(VideoEvent videoEvent);
}

The CaptureContext inner class keeps track of the display and surfaces, making it easy to handle the multiple surfaces used with an EGL context:

public static class CaptureContext {
 EGLDisplay windowDisplay;
 EGLSurface windowReadSurface;
 EGLSurface windowDrawSurface;
 private int mWidth;
 private int mHeight;

  public void initialize() {
    windowDisplay = EGL14.eglGetCurrentDisplay();
    // Save the current read and draw surfaces so they can be restored later.
    windowReadSurface = EGL14.eglGetCurrentSurface(EGL14.EGL_READ);
    windowDrawSurface = EGL14.eglGetCurrentSurface(EGL14.EGL_DRAW);
   int v[] = new int[1];
   EGL14.eglQuerySurface(windowDisplay, windowDrawSurface, EGL14.EGL_WIDTH,
       v, 0);
   mWidth = v[0];
   v[0] = -1;
   EGL14.eglQuerySurface(windowDisplay, windowDrawSurface, EGL14.EGL_HEIGHT,
       v, 0);
   mHeight = v[0];
 }

 /**
  * Returns the surface's width, in pixels.
  * <p>
  * If this is called on a window surface, and the underlying
  * surface is in the process
  * of changing size, we may not see the new size right away
  * (e.g. in the "surfaceChanged"
  * callback).  The size should match after the next buffer swap.
  */
  public int getWidth() {
    if (mWidth < 0) {
      int v[] = new int[1];
      EGL14.eglQuerySurface(windowDisplay,
          windowDrawSurface, EGL14.EGL_WIDTH, v, 0);
      mWidth = v[0];
    }
    return mWidth;
  }

 /**
  * Returns the surface's height, in pixels.
  */
 public int getHeight() {
   if (mHeight < 0) {
     int v[] = new int[1];
     EGL14.eglQuerySurface(windowDisplay, windowDrawSurface,
         EGL14.EGL_HEIGHT, v, 0);
     mHeight = v[0];
   }
   return mHeight;
 }

}

Add the VideoEncoder classes

The VideoEncoderCore class is copied from Grafika, along with the TextureMovieEncoder2 class.
