Question:

The glkView.display() method sometimes crashes with EXC_BAD_ACCESS

曾阳飙
2023-03-14

I'm trying to implement a real-time camera app using AVFoundation, GLKit, and Core Image (without using GPUImage).

I found this tutorial:
http://altitudelabs.com/blog/real-time-filter/
It's written in Objective-C, so I rewrote that code in Swift 4.0 with Xcode 9.

It seems to work fine, but sometimes (rarely) it crashes with the following error when the GLKView's display() method is called:

EXC_BAD_ACCESS (code=1, address=0x************)

At the time of the crash, the GLKView exists (it is not nil), the EAGLContext exists, and the CIContext exists. My code follows:


import UIKit
import AVFoundation
import GLKit
import OpenGLES

class ViewController: UIViewController {

    var videoDevice : AVCaptureDevice!
    var captureSession : AVCaptureSession!
    var captureSessionQueue : DispatchQueue!
    var videoPreviewView: GLKView!
    var ciContext: CIContext!
    var eaglContext: EAGLContext!
    var videoPreviewViewBounds: CGRect = CGRect.zero

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
        // remove the view's background color; this allows us not to use the opaque property (self.view.opaque = NO) since we remove the background color drawing altogether
        self.view.backgroundColor = UIColor.clear

        // setup the GLKView for video/image preview
        let window : UIView = UIApplication.shared.delegate!.window!!
        eaglContext = EAGLContext(api: .openGLES2)
        videoPreviewView = GLKView(frame: videoPreviewViewBounds, context: eaglContext)
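        // with enableSetNeedsDisplay disabled (next line), the view is redrawn only
        // when we call display() ourselves, which we do from the capture queue per frame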
        videoPreviewView.enableSetNeedsDisplay = false

        // because the native video image from the back camera is in UIDeviceOrientationLandscapeLeft (i.e. the home button is on the right), we need to apply a clockwise 90 degree transform so that we can draw the video preview as if we were in a landscape-oriented view; if you're using the front camera and you want to have a mirrored preview (so that the user is seeing themselves in the mirror), you need to apply an additional horizontal flip (by concatenating CGAffineTransformMakeScale(-1.0, 1.0) to the rotation transform)
        videoPreviewView.transform = CGAffineTransform(rotationAngle: CGFloat.pi/2.0)
        videoPreviewView.frame = window.bounds

        // we make our video preview view a subview of the window, and send it to the back; this makes ViewController's view (and its UI elements) on top of the video preview, and also makes video preview unaffected by device rotation
        window.addSubview(videoPreviewView)
        window.sendSubview(toBack: videoPreviewView)

        // bind the frame buffer to get the frame buffer width and height;
        // the bounds used by CIContext when drawing to a GLKView are in pixels (not points),
        // hence the need to read from the frame buffer's width and height;
        // in addition, since we will be accessing the bounds in another queue (_captureSessionQueue),
        // we want to obtain this piece of information so that we won't be
        // accessing _videoPreviewView's properties from another thread/queue
        videoPreviewView.bindDrawable()
        videoPreviewViewBounds = CGRect.zero
        videoPreviewViewBounds.size.width = CGFloat(videoPreviewView.drawableWidth)
        videoPreviewViewBounds.size.height = CGFloat(videoPreviewView.drawableHeight)

        // create the CIContext instance, note that this must be done after _videoPreviewView is properly set up
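        // (passing NSNull for kCIContextWorkingColorSpace disables Core Image's
        // color management, a common performance tradeoff for live video)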
        ciContext = CIContext(eaglContext: eaglContext, options: [kCIContextWorkingColorSpace: NSNull()])

        if AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInDualCamera, .builtInTelephotoCamera, .builtInWideAngleCamera], mediaType: AVMediaType.video, position: .back).devices.count > 0 {
            start()
        } else {
            print("No device with AVMediaTypeVideo")
        }
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }

    func start() {
        let videoDevices = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: AVMediaType.video, position: .back).devices

        videoDevice = videoDevices.first

        var videoDeviceInput : AVCaptureInput!
        do {
            videoDeviceInput =  try AVCaptureDeviceInput(device: videoDevice)
        } catch let error {
            print("Unable to obtain video device input, error: \(error)")
            return
        }

        let preset = AVCaptureSession.Preset.high
        captureSession = AVCaptureSession()
        captureSession.sessionPreset = preset

        // Core Image wants the BGRA pixel format
        let outputSetting = [String(kCVPixelBufferPixelFormatTypeKey): kCVPixelFormatType_32BGRA]
        // create and configure the video data output
        let videoDataOutput = AVCaptureVideoDataOutput()
        videoDataOutput.videoSettings = outputSetting

        // create the dispatch queue for handling capture session delegate method calls
        captureSessionQueue = DispatchQueue(label: "capture_session_queue")
        videoDataOutput.setSampleBufferDelegate(self, queue: captureSessionQueue)
        videoDataOutput.alwaysDiscardsLateVideoFrames = true

        captureSession.beginConfiguration()
        if !captureSession.canAddOutput(videoDataOutput) {
            print("Cannot add video data output")
            captureSession = nil
            return
        }

        captureSession.addInput(videoDeviceInput)
        captureSession.addOutput(videoDataOutput)

        captureSession.commitConfiguration()

        captureSession.startRunning()
    }

}

extension ViewController : AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
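        // NOTE: this callback arrives on captureSessionQueue, not the main thread
        // (see the backtrace below: thread #5, AVFoundation's video-data-output
        // buffer queue), so all of the OpenGL work here runs off the main thread.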
        let imageBuffer : CVImageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
        let sourceImage = CIImage(cvImageBuffer: imageBuffer, options: nil)
        let sourceExtent = sourceImage.extent

        let vignetteFilter = CIFilter(name: "CIVignetteEffect", withInputParameters: nil)
        vignetteFilter?.setValue(sourceImage, forKey: kCIInputImageKey)
        vignetteFilter?.setValue(CIVector(x: sourceExtent.size.width/2.0, y: sourceExtent.size.height/2.0), forKey: kCIInputCenterKey)
        vignetteFilter?.setValue(sourceExtent.width/2.0, forKey: kCIInputRadiusKey)
        let filteredImage = vignetteFilter?.outputImage

        let sourceAspect = sourceExtent.width/sourceExtent.height
        let previewAspect = videoPreviewViewBounds.width/videoPreviewViewBounds.height

        // we want to maintain the aspect ratio of the screen size, so we clip the video image
        var drawRect = sourceExtent
        if sourceAspect > previewAspect {
            // use full height of the video image, and center crop the width
            drawRect.origin.x += (drawRect.size.width - drawRect.size.height * previewAspect) / 2.0
            drawRect.size.width = drawRect.size.height * previewAspect
        } else {
            // use full width of the video image, and center crop the height
            drawRect.origin.y += (drawRect.size.height - drawRect.size.width / previewAspect) / 2.0
            drawRect.size.height = drawRect.size.width / previewAspect
        }

        videoPreviewView.bindDrawable()

        if eaglContext != EAGLContext.current() {
            EAGLContext.setCurrent(eaglContext)
        }
        print("current thread \(Thread.current)")
        // clear eagl view to grey
        glClearColor(0.5, 0.5, 0.5, 1.0)
        glClear(GLbitfield(GL_COLOR_BUFFER_BIT))

        // set the blend mode to "source over" so that CI will use that
        glEnable(GLenum(GL_BLEND))
        glBlendFunc(GLenum(GL_ONE), GLenum(GL_ONE_MINUS_SRC_ALPHA))

        if let filteredImage = filteredImage {
            ciContext.draw(filteredImage, in: videoPreviewViewBounds, from: drawRect)
        }

        videoPreviewView.display()
    }
}


* thread #5, queue = 'com.apple.avfoundation.videodataoutput.bufferqueue', stop reason = EXC_BAD_ACCESS (code=1, address=0x8000000000000000)
frame #0: 0x00000001a496f098 AGXGLDriver`___lldb_unnamed_symbol149$$AGXGLDriver + 332
frame #1: 0x00000001923c029c OpenGLES`-[EAGLContext getParameter:to:] + 80
frame #2: 0x000000010038bca4 libglInterpose.dylib`__clang_call_terminate + 1976832
frame #3: 0x00000001001ab75c libglInterpose.dylib`__clang_call_terminate + 9400
frame #4: 0x000000010038b8b4 libglInterpose.dylib`__clang_call_terminate + 1975824
frame #5: 0x00000001001af098 libglInterpose.dylib`__clang_call_terminate + 24052
frame #6: 0x00000001001abe5c libglInterpose.dylib`__clang_call_terminate + 11192
frame #7: 0x000000010038f9dc libglInterpose.dylib`__clang_call_terminate + 1992504
frame #8: 0x000000010038d5b8 libglInterpose.dylib`__clang_call_terminate + 1983252
frame #9: 0x000000019a1e2a20 GLKit`-[GLKView _display:] + 308
* frame #10: 0x0000000100065e78 RealTimeCameraPractice`ViewController.captureOutput(output=0x0000000174034820, sampleBuffer=0x0000000119e25e70, connection=0x0000000174008850, self=0x0000000119d032d0) at ViewController.swift:160
frame #11: 0x00000001000662dc RealTimeCameraPractice`@objc ViewController.captureOutput(_:didOutput:from:) at ViewController.swift:0
frame #12: 0x00000001977ec310 AVFoundation`-[AVCaptureVideoDataOutput _handleRemoteQueueOperation:] + 308
frame #13: 0x00000001977ec14c AVFoundation`__47-[AVCaptureVideoDataOutput _updateRemoteQueue:]_block_invoke + 100
frame #14: 0x00000001926bdf38 CoreMedia`__FigRemoteOperationReceiverCreateMessageReceiver_block_invoke + 260
frame #15: 0x00000001926dce9c CoreMedia`__FigRemoteQueueReceiverSetHandler_block_invoke.2 + 224
frame #16: 0x000000010111da10 libdispatch.dylib`_dispatch_client_callout + 16
frame #17: 0x0000000101129a84 libdispatch.dylib`_dispatch_continuation_pop + 552
frame #18: 0x00000001011381f8 libdispatch.dylib`_dispatch_source_latch_and_call + 204
frame #19: 0x000000010111fa60 libdispatch.dylib`_dispatch_source_invoke + 828
frame #20: 0x000000010112b128 libdispatch.dylib`_dispatch_queue_serial_drain + 692
frame #21: 0x0000000101121634 libdispatch.dylib`_dispatch_queue_invoke + 852
frame #22: 0x000000010112b128 libdispatch.dylib`_dispatch_queue_serial_drain + 692
frame #23: 0x0000000101121634 libdispatch.dylib`_dispatch_queue_invoke + 852
frame #24: 0x000000010112c358 libdispatch.dylib`_dispatch_root_queue_drain_deferred_item + 276
frame #25: 0x000000010113457c libdispatch.dylib`_dispatch_kevent_worker_thread + 764
frame #26: 0x000000018ee56fbc libsystem_pthread.dylib`_pthread_wqthread + 772
frame #27: 0x000000018ee56cac libsystem_pthread.dylib`start_wqthread + 4

My project is on GitHub:
https://github.com/hegrecom/ios-realtimeCamerapractice

1 Answer

鲜于裕
2023-03-14

The solution is here: iOS 11 beta 4 presentRenderbuffer crash

Go to Manage Schemes -> Options -> GPU Frame Capture -> Disabled

This fits your backtrace: the libglInterpose.dylib frames appear to come from Xcode's OpenGL ES frame-capture instrumentation, which is why disabling GPU Frame Capture stops the crash.
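Independent of that scheme setting, one more defensive guard is worth having, since issuing GL commands while the app is inactive is the other classic source of EXC_BAD_ACCESS in this kind of code. Below is a minimal sketch, assuming the question's ViewController; the observer wiring and the names observeAppState/pauseCapture/resumeCapture are my own additions, not part of the accepted answer, and the Swift 4 notification names match the question's toolchain.

import UIKit
import AVFoundation

extension ViewController {
    // Call once from viewDidLoad() after the capture session is configured.
    func observeAppState() {
        NotificationCenter.default.addObserver(self,
                                               selector: #selector(pauseCapture),
                                               name: .UIApplicationWillResignActive,
                                               object: nil)
        NotificationCenter.default.addObserver(self,
                                               selector: #selector(resumeCapture),
                                               name: .UIApplicationDidBecomeActive,
                                               object: nil)
    }

    @objc func pauseCapture() {
        // Hop onto the session queue so stopping serializes with the
        // captureOutput(_:didOutput:from:) callbacks that run there;
        // once the session stops, no further callbacks (and no GL work) arrive.
        captureSessionQueue?.async { self.captureSession?.stopRunning() }
    }

    @objc func resumeCapture() {
        captureSessionQueue?.async { self.captureSession?.startRunning() }
    }
}

If the crash still reproduces with GPU Frame Capture disabled, this at least removes background GL work from the list of suspects.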
