
A Detailed Walkthrough of the kxmovie Source Code

司寇祺
2023-12-01

Most people are probably familiar with kxmovie, an excellent third-party open-source streaming media player. You could argue that ijkplayer is better, but to get a clearer picture of how FFmpeg decoding and playback work, this article dissects kxmovie.

Download address: http://download.csdn.net/detail/itpeng523/8915993

How playback works:

1. Opening the streaming media file

+ (id) movieViewControllerWithContentPath: (NSString *) path
                               parameters: (NSDictionary *) parameters
{
    // set up and activate the audio session
    id<KxAudioManager> audioManager = [KxAudioManager audioManager];
    [audioManager activateAudioSession];
    return [[KxMovieViewController alloc] initWithContentPath: path parameters: parameters];
}

- (id) initWithContentPath: (NSString *) path
                parameters: (NSDictionary *) parameters
{
    NSAssert(path.length > 0, @"empty path");
    
    self = [super initWithNibName:nil bundle:nil];
    if (self) {
        
        _moviePosition = 0;
//        self.wantsFullScreenLayout = YES;

        _parameters = parameters;
        
        __weak KxMovieViewController *weakSelf = self;
        
        KxMovieDecoder *decoder = [[KxMovieDecoder alloc] init];
        // set the decoder's interrupt callback
        decoder.interruptCallback = ^BOOL(){
            
            __strong KxMovieViewController *strongSelf = weakSelf;
            return strongSelf ? [strongSelf interruptDecoder] : YES;
        };
        
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
    
            NSError *error = nil;
            [decoder openFile:path error:&error];
                        
            __strong KxMovieViewController *strongSelf = weakSelf;
            if (strongSelf) {
                
                dispatch_sync(dispatch_get_main_queue(), ^{
                    
                    [strongSelf setMovieDecoder:decoder withError:error];                    
                });
            }
        });
    }
    return self;
}

These two methods initialize the audio player (this article does not cover the audio playback side; kxmovie plays audio through AudioUnit, which will be covered separately later), create the KxMovieDecoder and set its interrupt callback interruptCallback, and, most importantly, dispatch openFile onto a background queue. Let's look at openFile in detail:

- (BOOL) openFile: (NSString *) path
            error: (NSError **) perror
{
    NSAssert(path, @"nil path");
    NSAssert(!_formatCtx, @"already open");
    
    _isNetwork = isNetworkPath(path); // first check whether this is a network stream
    
    static BOOL needNetworkInit = YES;
    if (needNetworkInit && _isNetwork) {
        
        needNetworkInit = NO;
        avformat_network_init();    // network streams need this one-time initialization
    }
    
    _path = path;
    // open the input
    kxMovieError errCode = [self openInput: path];
    
    if (errCode == kxMovieErrorNone) {
        
        kxMovieError videoErr = [self openVideoStream]; // open the video stream
        kxMovieError audioErr = [self openAudioStream]; // open the audio stream
        
        _subtitleStream = -1;
        
        if (videoErr != kxMovieErrorNone &&
            audioErr != kxMovieErrorNone) {
         
            errCode = videoErr; // both fails
            
        } else {
            
            _subtitleStreams = collectStreams(_formatCtx, AVMEDIA_TYPE_SUBTITLE);
        }
    }
    
    if (errCode != kxMovieErrorNone) {
        
        [self closeFile];
        NSString *errMsg = errorMessage(errCode);
        LoggerStream(0, @"%@, %@", errMsg, path.lastPathComponent);
        if (perror)
            *perror = kxmovieError(errCode, errMsg);
        return NO;
    }
        
    return YES;
}
This method does essentially all of the preparation needed before FFmpeg can start decoding. Let's go through it piece by piece.

- (kxMovieError) openInput: (NSString *) path

- (kxMovieError) openInput: (NSString *) path
{
    AVFormatContext *formatCtx = NULL;
    AVDictionary* options = NULL;
    
    av_dict_set(&options, "rtsp_transport", "tcp", 0);      // force the RTSP stream to be carried over TCP
    // probing limits
    av_dict_set(&options, "analyzeduration", "2000000", 0); // maximum probing duration in microseconds: 2000000 / 1000000 = 2 s
    av_dict_set(&options, "probesize", "122880", 0);        // probing size limit: 122880 / 1024 = 120 KB; tune as needed, but too small a value leaves the stream info incomplete
    if (_interruptCallback) {
        
        formatCtx = avformat_alloc_context(); // allocate the core AVFormatContext (via av_malloc); it models the container format (FLV/MKV/RMVB, etc.)
        if (!formatCtx)
            return kxMovieErrorOpenFile;
        // interrupt handling: the first member is a function pointer to the callback
        AVIOInterruptCB cb = {interrupt_callback, (__bridge void *)(self)};
        formatCtx->interrupt_callback = cb;
    }
    // open the input (url_open / url_read internally)
    if (avformat_open_input(&formatCtx, [path cStringUsingEncoding: NSUTF8StringEncoding], NULL, &options) < 0) {
        
        if (formatCtx)
            avformat_free_context(formatCtx);
        return kxMovieErrorOpenFile;
    }
    // probe the audio/video stream info; internally this initializes parsers, finds decoders (find_decoder), opens them (avcodec_open2), reads frames and decodes them
    if (avformat_find_stream_info(formatCtx, NULL) < 0) {
        
        avformat_close_input(&formatCtx);
        return kxMovieErrorStreamInfoNotFound;
    }

    av_dump_format(formatCtx, 0, [path.lastPathComponent cStringUsingEncoding: NSUTF8StringEncoding], false);
    
    _formatCtx = formatCtx;
    return kxMovieErrorNone;
}
rtsp_transport forces the stream to be carried over TCP, probesize caps how much data is probed, and analyzeduration caps how long probing may take. Any option the demuxer does not consume is left behind in the dictionary, which is a handy way to check that the keys were spelled correctly; a small sketch of that follows.
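As an aside, here is a minimal sketch, not taken from kxmovie and with a placeholder RTSP URL, of opening an input directly with the FFmpeg C API using the same three options, then listing whichever options were not consumed:

    AVFormatContext *fmt = avformat_alloc_context();
    AVDictionary *opts = NULL;
    av_dict_set(&opts, "rtsp_transport", "tcp", 0);       // force RTSP over TCP
    av_dict_set(&opts, "analyzeduration", "2000000", 0);  // probe for at most 2 s (value is in microseconds)
    av_dict_set(&opts, "probesize", "122880", 0);         // probe at most 120 KB

    if (avformat_open_input(&fmt, "rtsp://example.com/live", NULL, &opts) < 0) {
        // handle the error; avformat_open_input frees the context on failure
    }
    // entries the demuxer/protocol did not consume stay in the dictionary
    AVDictionaryEntry *e = NULL;
    while ((e = av_dict_get(opts, "", e, AV_DICT_IGNORE_SUFFIX)))
        NSLog(@"unused option: %s", e->key);
    av_dict_free(&opts);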

Let's check this against the FFmpeg source, starting with the initialization of the AVFormatContext struct:

AVFormatContext *avformat_alloc_context(void)
{
    AVFormatContext *ic;
    ic = av_malloc(sizeof(AVFormatContext));
    if (!ic) return ic;
    avformat_get_context_defaults(ic);
    ic->internal = av_mallocz(sizeof(*ic->internal));
    if (!ic->internal) {
        avformat_free_context(ic);
        return NULL;
    }
    return ic;
}

This is a block of memory allocated with av_malloc, and it is the most fundamental struct. It has far too many members to list them all; the ones this code relies on include: AVStream **streams (the array of streams), unsigned int packet_size (the size of AVPacket data), unsigned int probesize (the probing size limit), AVIOInterruptCB interrupt_callback (the interrupt callback), AVCodec *video_codec and AVCodec *audio_codec.

Next, the source of avformat_open_input, the function that opens the media:

int avformat_open_input(AVFormatContext **ps, const char *filename,
                        AVInputFormat *fmt, AVDictionary **options)
{
    AVFormatContext *s = *ps;
    int ret = 0;
    AVDictionary *tmp = NULL;
    ID3v2ExtraMeta *id3v2_extra_meta = NULL;

    if (!s && !(s = avformat_alloc_context()))
        return AVERROR(ENOMEM);
    if (!s->av_class) {
        av_log(NULL, AV_LOG_ERROR, "Input context has not been properly allocated by avformat_alloc_context() and is not NULL either\n");
        return AVERROR(EINVAL);
    }
    if (fmt)
        s->iformat = fmt;

    if (options)
        av_dict_copy(&tmp, *options, 0);

    if ((ret = av_opt_set_dict(s, &tmp)) < 0)
        goto fail;

    if ((ret = init_input(s, filename, &tmp)) < 0)
        goto fail;
    s->probe_score = ret;

    if (s->format_whitelist && av_match_list(s->iformat->name, s->format_whitelist, ',') <= 0) {
        av_log(s, AV_LOG_ERROR, "Format not on whitelist\n");
        ret = AVERROR(EINVAL);
        goto fail;
    }

    avio_skip(s->pb, s->skip_initial_bytes);

    /* Check filename in case an image number is expected. */
    if (s->iformat->flags & AVFMT_NEEDNUMBER) {
        if (!av_filename_number_test(filename)) {
            ret = AVERROR(EINVAL);
            goto fail;
        }
    }

    s->duration = s->start_time = AV_NOPTS_VALUE;
    av_strlcpy(s->filename, filename ? filename : "", sizeof(s->filename));

    /* Allocate private data. */
    if (s->iformat->priv_data_size > 0) {
        if (!(s->priv_data = av_mallocz(s->iformat->priv_data_size))) {
            ret = AVERROR(ENOMEM);
            goto fail;
        }
        if (s->iformat->priv_class) {
            *(const AVClass **) s->priv_data = s->iformat->priv_class;
            av_opt_set_defaults(s->priv_data);
            if ((ret = av_opt_set_dict(s->priv_data, &tmp)) < 0)
                goto fail;
        }
    }

    /* e.g. AVFMT_NOFILE formats will not have a AVIOContext */
    if (s->pb)
        ff_id3v2_read(s, ID3v2_DEFAULT_MAGIC, &id3v2_extra_meta, 0);

    if (!(s->flags&AVFMT_FLAG_PRIV_OPT) && s->iformat->read_header)
        if ((ret = s->iformat->read_header(s)) < 0)
            goto fail;

    if (id3v2_extra_meta) {
        if (!strcmp(s->iformat->name, "mp3") || !strcmp(s->iformat->name, "aac") ||
            !strcmp(s->iformat->name, "tta")) {
            if ((ret = ff_id3v2_parse_apic(s, &id3v2_extra_meta)) < 0)
                goto fail;
        } else
            av_log(s, AV_LOG_DEBUG, "demuxer does not support additional id3 data, skipping\n");
    }
    ff_id3v2_free_extra_meta(&id3v2_extra_meta);

    if ((ret = avformat_queue_attached_pictures(s)) < 0)
        goto fail;

    if (!(s->flags&AVFMT_FLAG_PRIV_OPT) && s->pb && !s->data_offset)
        s->data_offset = avio_tell(s->pb);

    s->raw_packet_buffer_remaining_size = RAW_PACKET_BUFFER_SIZE;

    if (options) {
        av_dict_free(options);
        *options = tmp;
    }
    *ps = s;
    return 0;

fail:
    ff_id3v2_free_extra_meta(&id3v2_extra_meta);
    av_dict_free(&tmp);
    if (s->pb && !(s->flags & AVFMT_FLAG_CUSTOM_IO))
        avio_close(s->pb);
    avformat_free_context(s);
    *ps = NULL;
    return ret;
}
It is fairly long, so a few key points are enough to deepen our understanding: most of the setup happens in init_input, and read_header() reads the container header, with the resulting information stored in the AVStream structures. If you are interested you can dig further into the FFmpeg source; from here on I won't paste it verbatim.

With the AVFormatContext initialized and the media file opened, the streams now have to be decoded, which involves finding the decoders, opening them, reading audio/video frames and decoding them. avformat_find_stream_info does exactly this work, so it is very important and also fairly time-consuming; this is where the analyzeduration and probesize values set earlier come into play. Here is a short excerpt from inside avformat_find_stream_info():

int i, count, ret = 0, j;
int64_t read_size;
AVStream *st;
AVPacket pkt1, *pkt;
int64_t old_offset  = avio_tell(ic->pb);
// new streams might appear, no options for those
int orig_nb_streams = ic->nb_streams;
int flush_codecs;
int64_t max_analyze_duration = ic->max_analyze_duration2;
int64_t probesize = ic->probesize2;

if (!max_analyze_duration)
    max_analyze_duration = ic->max_analyze_duration;
if (ic->probesize)
    probesize = ic->probesize;
flush_codecs = probesize > 0;

av_opt_set(ic, "skip_clear", "1", AV_OPT_SEARCH_CHILDREN);

if (!max_analyze_duration) {
    if (!strcmp(ic->iformat->name, "flv") && !(ic->ctx_flags & AVFMTCTX_NOHEADER)) {
        max_analyze_duration = 10*AV_TIME_BASE;
    } else
        max_analyze_duration = 5*AV_TIME_BASE;
}

From this we can see that avformat_find_stream_info() declares an AVStream (the audio/video stream struct st), an AVPacket (the audio/video packet struct pkt, covered in detail later), max_analyze_duration (the maximum probing duration, where the value set earlier takes effect) and probesize (the probing size limit, likewise set earlier).

Each stream's parser is initialized via st->parser = av_parser_init(st->codec->codec_id);

codec = find_decoder(ic, st, st->codec->codec_id) looks up the decoder (codec is of type AVCodec), and avcodec_open2() opens it.

read_frame_internal() reads one complete frame of compressed data; av_read_frame() is essentially a wrapper around it.

try_decode_frame() is the function that actually decodes the compressed data.

In short, avformat_find_stream_info() already exercises essentially the whole decoding pipeline, which shows how important it is.
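Because this call can dominate startup time, it is worth measuring. A minimal sketch, not part of kxmovie, that times the probing step and reports how many streams were found:

    NSTimeInterval t0 = [NSDate timeIntervalSinceReferenceDate];
    int ret = avformat_find_stream_info(formatCtx, NULL);
    NSTimeInterval elapsed = [NSDate timeIntervalSinceReferenceDate] - t0;
    NSLog(@"avformat_find_stream_info: ret=%d, took %.2f s, found %u streams",
          ret, elapsed, formatCtx->nb_streams);

Lowering analyzeduration and probesize shortens this step, at the risk of leaving the stream information incomplete.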

Now that the file is open, we move on to the functions that open the audio and video streams.

Opening the video stream:

- (kxMovieError) openVideoStream
{
    kxMovieError errCode = kxMovieErrorStreamNotFound;
    _videoStream = -1;
    _artworkStream = -1;
    // collect the video streams
    _videoStreams = collectStreams(_formatCtx, AVMEDIA_TYPE_VIDEO);
    for (NSNumber *n in _videoStreams) {
        
        const NSUInteger iStream = n.integerValue;

        if (0 == (_formatCtx->streams[iStream]->disposition & AV_DISPOSITION_ATTACHED_PIC)) {
        
            errCode = [self openVideoStream: iStream];
            if (errCode == kxMovieErrorNone)
                break;
            
        } else {
            
            _artworkStream = iStream;
        }
    }
    
    return errCode;
}

- (kxMovieError) openVideoStream: (NSInteger) videoStream
{    
    // get a pointer to the codec context for the video stream
    AVCodecContext *codecCtx = _formatCtx->streams[videoStream]->codec;
    
    // find the decoder for the video stream (H.264 in my case)
    AVCodec *codec = avcodec_find_decoder(codecCtx->codec_id);
    if (!codec)
        return kxMovieErrorCodecNotFound;
    
    // inform the codec that we can handle truncated bitstreams -- i.e.,
    // bitstreams where frame boundaries can fall in the middle of packets
    //if(codec->capabilities & CODEC_CAP_TRUNCATED)
    //    _codecCtx->flags |= CODEC_FLAG_TRUNCATED;
    
    // open the codec
    if (avcodec_open2(codecCtx, codec, NULL) < 0)
        return kxMovieErrorOpenCodec;
        
    _videoFrame = av_frame_alloc(); // allocate the video frame once; it holds the raw decoded data (YUV or RGB for video)

    if (!_videoFrame) {
        avcodec_close(codecCtx);
        return kxMovieErrorAllocateFrame;
    }
    
    _videoStream = videoStream;
    _videoCodecCtx = codecCtx;
    
    // determine fps
    // AVStream: the struct that describes each video/audio stream
    AVStream *st = _formatCtx->streams[_videoStream];
    // PTS * time_base = the actual time in seconds
    avStreamFPSTimeBase(st, 0.04, &_fps, &_videoTimeBase);
    
    LoggerVideo(1, @"video codec size: %lu:%lu fps: %.3f tb: %f",
                (unsigned long)self.frameWidth,
                (unsigned long)self.frameHeight,
                _fps,
                _videoTimeBase);
    
    LoggerVideo(1, @"video start time %f", st->start_time * _videoTimeBase);
    LoggerVideo(1, @"video disposition %d", st->disposition);
    
    return kxMovieErrorNone;
}
static NSArray *collectStreams(AVFormatContext *formatCtx, enum AVMediaType codecType)
{
    NSMutableArray *ma = [NSMutableArray array];
   
    for (NSInteger i = 0; i < formatCtx->nb_streams; ++i)
        if (codecType == formatCtx->streams[i]->codec->codec_type) // match the stream type
            [ma addObject: [NSNumber numberWithInteger: i]];
    return [ma copy];
}
To open the video stream we first have to find it. In AVFormatContext, nb_streams holds the total number of streams; a typical media file has just one video stream and one audio stream. The collectStreams function saves the indexes of the matching streams within streams. Next, the process of opening the video stream:
- (kxMovieError) openVideoStream: (NSInteger) videoStream
The code is commented throughout; the main things it produces are:

_videoStream = videoStream;      // index of the video stream within streams

_videoCodecCtx = codecCtx;       // the video codec context

_videoFrame = av_frame_alloc(); // the video frame

_videoTimeBase                           // the time base

Getting hold of these is essential for the decoding that follows, and you may notice this is much the same routine as inside avformat_find_stream_info(): find the decoder, then open it. As the comment above notes, PTS * time_base gives the real presentation time; a small sketch of that conversion follows.
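A minimal sketch of that conversion (assuming the frame carries a valid pts, i.e. not AV_NOPTS_VALUE; kxmovie itself prefers av_frame_get_best_effort_timestamp, as we will see later):

    AVStream *st = _formatCtx->streams[_videoStream];
    double timeBase = av_q2d(st->time_base);        // e.g. a time_base of 1/1000 becomes 0.001
    double seconds  = _videoFrame->pts * timeBase;  // a pts of 4000 with a 1/1000 time base maps to 4.0 s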
Opening the audio stream follows much the same flow, so I won't paste the code here; it will come up on its own when we discuss audio playback. With both streams open we have reached the final stage: decoding and display. Let's see how kxmovie decodes and displays at the same time.

Back to the earlier code:

     dispatch_sync(dispatch_get_main_queue(), ^{
                    
           [strongSelf setMovieDecoder:decoder withError:error];                    
     });
The most important thing setMovieDecoder does is set _minBufferedDuration (the minimum buffered duration) and _maxBufferedDuration (the maximum buffered duration). These two parameters matter a great deal: with live streaming as popular as it is, keeping playback smooth without adding latency comes down to how this buffered data is managed. kxmovie's approach is to keep playback smooth by holding a minimum buffer; in this code _minBufferedDuration = 2 and _maxBufferedDuration = 4. A sketch of how a caller could supply these values follows; after that we skip the UI code and jump straight to the play method.
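A minimal sketch of passing the buffering thresholds in through the parameters dictionary. Here path is whatever file path or URL you intend to play, and the KxMovieParameterMinBufferedDuration / KxMovieParameterMaxBufferedDuration key names are assumed to be the ones declared in KxMovieViewController.h; check your copy of kxmovie before relying on them:

    // assumed keys from KxMovieViewController.h; values are NSNumber seconds
    NSDictionary *parameters = @{ KxMovieParameterMinBufferedDuration : @(2.0),
                                  KxMovieParameterMaxBufferedDuration : @(4.0) };
    KxMovieViewController *vc =
        [KxMovieViewController movieViewControllerWithContentPath:path
                                                        parameters:parameters];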

-(void) play
{
    if (self.playing)
        return;
    
    if (!_decoder.validVideo &&
        !_decoder.validAudio) {
        
        return;
    }
    
    if (_interrupted)
        return;

    self.playing = YES;
    _interrupted = NO;
    _disableUpdateHUD = NO;
    _tickCorrectionTime = 0;
    _tickCounter = 0;

#ifdef DEBUG
    _debugStartTime = -1;
#endif
    // decode frames
    [self asyncDecodeFrames];
    [self updatePlayButton];

    dispatch_time_t popTime = dispatch_time(DISPATCH_TIME_NOW, 0.1 * NSEC_PER_SEC);
    dispatch_after(popTime, dispatch_get_main_queue(), ^(void){
        [self tick];
    });

    if (_decoder.validAudio)
        [self enableAudio:YES];

    LoggerStream(1, @"play movie");
}
From here things split in two: asyncDecodeFrames kicks off an asynchronous thread to run the decoding, while playback happens on the main thread. First, asyncDecodeFrames:

- (void) asyncDecodeFrames
{
    if (self.decoding)
        return;
    
    __weak KxMovieViewController *weakSelf = self;
    __weak KxMovieDecoder *weakDecoder = _decoder;
    
    const CGFloat duration = _decoder.isNetwork ? .0f : 0.1f;
    
    self.decoding = YES;
    dispatch_async(_dispatchQueue, ^{
        
        {
            __strong KxMovieViewController *strongSelf = weakSelf;
            if (!strongSelf.playing)
                return;
        }
        
        BOOL good = YES;
        while (good) {
            
            good = NO;
            
            @autoreleasepool {
                
                __strong KxMovieDecoder *decoder = weakDecoder;
                
                if (decoder && (decoder.validVideo || decoder.validAudio)) {
                    
                    NSArray *frames = [decoder decodeFrames:duration];
                    if (frames.count) {
                        
                        __strong KxMovieViewController *strongSelf = weakSelf;
                        if (strongSelf)
                        {
                            good = [strongSelf addFrames:frames];
                        }
                    }
                }
            }
        }
                
        {
            __strong KxMovieViewController *strongSelf = weakSelf;
            if (strongSelf) strongSelf.decoding = NO;
        }
    });
}
The code is straightforward: the block loops, staying alive and decoding continuously for as long as addFrames reports there is room, and hands each batch of decoded frames to addFrames for processing.
// decode frames
- (NSArray *) decodeFrames: (CGFloat) minDuration
{
    if (_videoStream == -1 &&
        _audioStream == -1)
        return nil;

    NSMutableArray *result = [NSMutableArray array];
    
    AVPacket packet;
    
    CGFloat decodedDuration = 0;
    
    BOOL finished = NO;
    
    while (!finished) {
        // read one video frame (or several audio frames) of compressed data from the stream
        if (av_read_frame(_formatCtx, &packet) < 0) {
            _isEOF = YES;
            break;
        }
        if (packet.stream_index ==_videoStream) {
           
            int pktSize = packet.size;
            
            while (pktSize > 0) {
                            
                int gotframe = 0;
                // decode one video frame; gotframe == 0 means no frame was produced, a negative return value means an error
                int len = avcodec_decode_video2(_videoCodecCtx,
                                                _videoFrame,
                                                &gotframe,
                                                &packet);
                /**
                 * Internally this calls the key decode function that fills in the picture:
                 * avctx->codec->decode(avctx, picture, got_picture_ptr, &tmp);
                 */
                if (len < 0) {
                    LoggerVideo(0, @"decode video error, skip packet");
                    break;
                }
                
                if (gotframe) {
                    
                    if (!_disableDeinterlacing &&
                        _videoFrame->interlaced_frame) {

                        avpicture_deinterlace((AVPicture*)_videoFrame,
                                              (AVPicture*)_videoFrame,
                                              _videoCodecCtx->pix_fmt,
                                              _videoCodecCtx->width,
                                              _videoCodecCtx->height);
                    }
                    
                    KxVideoFrame *frame = [self handleVideoFrame];
                    if (frame) {
                        
                        [result addObject:frame];
                        
                        _position = frame.position;
                        decodedDuration += frame.duration;
                        if (decodedDuration > minDuration)
                            finished = YES;
                    }
                }
                                
                if (0 == len)
                    break;
                
                pktSize -= len;
            }
            
        } else if (packet.stream_index == _audioStream) {
                        
            int pktSize = packet.size;
            
            while (pktSize > 0) {
                
                int gotframe = 0;
                int len = avcodec_decode_audio4(_audioCodecCtx,
                                                _audioFrame,                                                
                                                &gotframe,
                                                &packet);
                
                if (len < 0) {
                    LoggerAudio(0, @"decode audio error, skip packet");
                    break;
                }
                
                if (gotframe) {
                    
                    KxAudioFrame * frame = [self handleAudioFrame];
                    if (frame) {
                        
                        [result addObject:frame];
                                                
                        if (_videoStream == -1) {
                            
                            _position = frame.position;
                            decodedDuration += frame.duration;
                            if (decodedDuration > minDuration)
                                finished = YES;
                        }
                    }
                }
                
                if (0 == len)
                    break;
                
                pktSize -= len;
            }
            
        } else if (packet.stream_index == _artworkStream) {
            
            if (packet.size) {

                KxArtworkFrame *frame = [[KxArtworkFrame alloc] init];
                frame.picture = [NSData dataWithBytes:packet.data length:packet.size];
                [result addObject:frame];
            }
            
        } else if (packet.stream_index == _subtitleStream) {
            
            int pktSize = packet.size;
            
            while (pktSize > 0) {
                
                AVSubtitle subtitle;
                int gotsubtitle = 0;
                int len = avcodec_decode_subtitle2(_subtitleCodecCtx,
                                                  &subtitle,
                                                  &gotsubtitle,
                                                  &packet);
                
                if (len < 0) {
                    LoggerStream(0, @"decode subtitle error, skip packet");
                    break;
                }
                
                if (gotsubtitle) {
                    
                    KxSubtitleFrame *frame = [self handleSubtitle: &subtitle];
                    if (frame) {
                        [result addObject:frame];
                    }
                    avsubtitle_free(&subtitle);
                }
                
                if (0 == len)
                    break;
                
                pktSize -= len;
            }
        }

        av_free_packet(&packet);
	}

    return result;
}
The frame-decoding method looks long, but we really only need the video and audio branches, so let's take it step by step. av_read_frame puts the data it reads into an AVPacket. If the video codec is H.264, the AVPacket holds H.264 data, yet printing packet.data shows it is not in the standard NALU (Annex-B) layout, and there is no SPS/PPS to be seen either. If you need that information, you can do the following:

Getting the SPS and PPS:

        /**
         *  Extracting the H.264 SPS/PPS from the codec context:
         *
         *  unsigned char *dummy = NULL;   // output buffer
         *  int dummy_len;
         *  AVBitStreamFilterContext *bsfc = av_bitstream_filter_init("h264_mp4toannexb");
         *  av_bitstream_filter_filter(bsfc, _videoCodecCtx, NULL, &dummy, &dummy_len, NULL, 0, 0);
         *  av_bitstream_filter_close(bsfc);
         *  free(dummy);
         *  NSLog(@"_formatCtx extradata = %@, packet ===== %@", [NSData dataWithBytes:_videoCodecCtx->extradata length:_videoCodecCtx->extradata_size], [NSData dataWithBytes:packet.data length:packet.size]);
         *
         */
Getting standard NALU-formatted data:

The data in an AVPacket does not start with the Annex-B separator (0x00000001), nor with bytes such as 0x65, 0x67, 0x68 or 0x41, so it is clearly not a standard NALU stream. In fact, the first 4 bytes of each NALU in the packet hold that NALU's length, and the NALU data itself starts at the 5th byte; replacing each 4-byte length field with 0x00000001 yields standard NALU data, as sketched below.
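A minimal sketch of that conversion, not part of kxmovie, assuming every NALU in the packet is prefixed with a 4-byte big-endian length as described above:

    uint8_t *p   = packet.data;
    uint8_t *end = packet.data + packet.size;
    while (p + 4 <= end) {
        // read the 4-byte big-endian NALU length
        uint32_t naluSize = ((uint32_t)p[0] << 24) | (p[1] << 16) | (p[2] << 8) | p[3];
        // overwrite the length field with the Annex-B start code, in place
        p[0] = 0x00; p[1] = 0x00; p[2] = 0x00; p[3] = 0x01;
        p += 4 + naluSize;   // jump to the next NALU
    }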

That's enough about AVPacket. Next comes the decode call, avcodec_decode_video2, which puts the decoded data into an AVFrame; in my case the output is YUV data. If you want to confirm the exact pixel format, a quick check is sketched below.
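A quick sketch of that check (av_get_pix_fmt_name comes from libavutil's pixdesc header):

    // needs: #include "libavutil/pixdesc.h" for av_get_pix_fmt_name()
    const char *name = av_get_pix_fmt_name(_videoCodecCtx->pix_fmt);
    NSLog(@"decoder pix_fmt = %s", name ? name : "unknown");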

Now let's look at handleVideoFrame:

- (KxVideoFrame *) handleVideoFrame
{
    if (!_videoFrame->data[0])
        return nil;
    
    KxVideoFrame *frame;
    
    if (_videoFrameFormat == KxVideoFrameFormatYUV) {
            
        KxVideoFrameYUV * yuvFrame = [[KxVideoFrameYUV alloc] init];
        // split the YUV data (w*h*3/2 bytes per frame) into separate planes
        // Y (luma): w*h bytes; copy one frame's worth of data
        yuvFrame.luma = copyFrameData(_videoFrame->data[0],
                                      _videoFrame->linesize[0],
                                      _videoCodecCtx->width,
                                      _videoCodecCtx->height);
        
        // U (chroma): w*h/4 bytes
        yuvFrame.chromaB = copyFrameData(_videoFrame->data[1],
                                         _videoFrame->linesize[1],
                                         _videoCodecCtx->width / 2,
                                         _videoCodecCtx->height / 2);
        
        // V (chroma): w*h/4 bytes
        yuvFrame.chromaR = copyFrameData(_videoFrame->data[2],
                                         _videoFrame->linesize[2],
                                         _videoCodecCtx->width / 2,
                                         _videoCodecCtx->height / 2);
        
        frame = yuvFrame;
    
    } else {
    
        if (!_swsContext &&
            ![self setupScaler]) {
            
            LoggerVideo(0, @"fail setup video scaler");
            return nil;
        }
        
        sws_scale(_swsContext,
                  (const uint8_t **)_videoFrame->data,
                  _videoFrame->linesize,
                  0,
                  _videoCodecCtx->height,
                  _picture.data,
                  _picture.linesize);
        
        
        KxVideoFrameRGB *rgbFrame = [[KxVideoFrameRGB alloc] init];
        
        rgbFrame.linesize = _picture.linesize[0];
        rgbFrame.rgb = [NSData dataWithBytes:_picture.data[0]
                                    length:rgbFrame.linesize * _videoCodecCtx->height];
        frame = rgbFrame;
    }    
    
    frame.width = _videoCodecCtx->width;
    frame.height = _videoCodecCtx->height;
    // with _videoTimeBase = 0.001, the current time = pts * _videoTimeBase
    frame.position = av_frame_get_best_effort_timestamp(_videoFrame) * _videoTimeBase;

    const int64_t frameDuration = av_frame_get_pkt_duration(_videoFrame);
    if (frameDuration) {
        
        frame.duration = frameDuration * _videoTimeBase;
        frame.duration += _videoFrame->repeat_pict * _videoTimeBase * 0.5;
        
    } else {
        
        // sometimes, ffmpeg unable to determine a frame duration
        // as example yuvj420p stream from web camera
        frame.duration = 1.0 / _fps;
    }
#if 0
    LoggerVideo(2, @"VFD: %.4f %.4f | %lld ",
                frame.position,
                frame.duration,
                av_frame_get_pkt_pos(_videoFrame));
#endif
    
    return frame;
}
This method first splits the YUV data into separate planes, stored in luma, chromaB and chromaR.
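The copying itself goes through a small helper. Here is a sketch of what copyFrameData amounts to: FFmpeg pads each plane's rows out to linesize bytes, so the helper copies the plane row by row into a tightly packed buffer.

static NSData * copyFrameData(UInt8 *src, int linesize, int width, int height)
{
    width = MIN(linesize, width);                       // never read past the padded row
    NSMutableData *md = [NSMutableData dataWithLength: width * height];
    Byte *dst = md.mutableBytes;
    for (NSUInteger i = 0; i < height; ++i) {
        memcpy(dst, src, width);                        // copy one row of pixels
        dst += width;                                   // packed destination stride
        src += linesize;                                // padded source stride
    }
    return md;
}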

frame.position = av_frame_get_best_effort_timestamp(_videoFrame) * _videoTimeBase; this value is very important: it is the current presentation time, and the player uses it to drive the time display.

frame.duration = 1.0 / _fps; this gives how long the current frame should stay on screen. For example, if the streaming side pushes at 25 fps, each frame should be shown for 0.04 s. This parameter is also important.

Once decoding finishes, the frames are returned:

- (BOOL) addFrames: (NSArray *)frames
{
    if (_decoder.validVideo) {
        
        @synchronized(_videoFrames) {
            
            for (KxMovieFrame *frame in frames)
                if (frame.type == KxMovieFrameTypeVideo) {
                    [_videoFrames addObject:frame];
                    _bufferedDuration += frame.duration;
                }
        }
    }
    
    if (_decoder.validAudio) {
        
        @synchronized(_audioFrames) {
            
            for (KxMovieFrame *frame in frames)
                if (frame.type == KxMovieFrameTypeAudio) {
                    [_audioFrames addObject:frame];
                    if (!_decoder.validVideo)
                        _bufferedDuration += frame.duration;
                }
        }
        
        if (!_decoder.validVideo) {
            
            for (KxMovieFrame *frame in frames)
                if (frame.type == KxMovieFrameTypeArtwork)
                    self.artworkFrame = (KxArtworkFrame *)frame;
        }
    }
    
    if (_decoder.validSubtitles) {
        
        @synchronized(_subtitles) {
            
            for (KxMovieFrame *frame in frames)
                if (frame.type == KxMovieFrameTypeSubtitle) {
                    [_subtitles addObject:frame];
                }
        }
    }
      // keep decoding only while below the maximum buffer
    return self.playing && _bufferedDuration < _maxBufferedDuration;
}
This method mainly accumulates _bufferedDuration (the buffered duration) and stores the frames into their respective arrays, and at the end checks whether the current buffer exceeds the maximum. That completes the decoding and collection of a video frame; now back to the display side on the main thread.

The first scheduled tick:

    dispatch_time_t popTime = dispatch_time(DISPATCH_TIME_NOW, 0.1 * NSEC_PER_SEC);
    dispatch_after(popTime, dispatch_get_main_queue(), ^(void){
        [self tick];
    });
This is simply a delayed dispatch. Why? It builds up an initial buffer: tick only runs 0.1 s later, and by then a few dozen frames have already been decoded. Now the tick function:

- (void) tick
{
    // check the buffered duration
    if (_buffered && ((_bufferedDuration > _minBufferedDuration) || _decoder.isEOF)) {
        
        _tickCorrectionTime = 0;
        _buffered = NO;
        [_activityIndicatorView stopAnimating];        
    }
    
    CGFloat interval = 0;
    if (!_buffered)
        interval = [self presentFrame];  // present one frame
    
    if (self.playing) {
        
        // audio/video frames still left to display
        const NSUInteger leftFrames =
        (_decoder.validVideo ? _videoFrames.count : 0) +
        (_decoder.validAudio ? _audioFrames.count : 0);
        
        if (0 == leftFrames)  // nothing left to display
        {
            if (_decoder.isEOF) {
                
                [self pause];
                [self updateHUD];
                return;
            }
            
            if (_minBufferedDuration > 0 && !_buffered) // nothing left to show, so go back to buffering
            {
                                
                _buffered = YES;
                [_activityIndicatorView startAnimating];  // start the spinner
            }
        }
        
        if (!leftFrames ||
            !(_bufferedDuration > _minBufferedDuration))
        {
            
            [self asyncDecodeFrames];
        }
        
        const NSTimeInterval correction = [self tickCorrection];
        const NSTimeInterval time = MAX(interval + correction, 0.01);
        dispatch_time_t popTime = dispatch_time(DISPATCH_TIME_NOW, time * NSEC_PER_SEC);
        dispatch_after(popTime, dispatch_get_main_queue(), ^(void){
            [self tick];
        });
    }
    
    if ((_tickCounter++ % 3) == 0) {
        [self updateHUD];
    }
}

- (CGFloat) tickCorrection
{
    if (_buffered)
        return 0;
    
    const NSTimeInterval now = [NSDate timeIntervalSinceReferenceDate];
    
    if (!_tickCorrectionTime) {
        
        _tickCorrectionTime = now;
        _tickCorrectionPosition = _moviePosition; // the current playback position (the time being played right now)
        return 0;
    }
    
    NSTimeInterval dPosition = _moviePosition - _tickCorrectionPosition;
    NSTimeInterval dTime = now - _tickCorrectionTime;
    NSTimeInterval correction = dPosition - dTime;
    if (correction > 1.f || correction < -1.f) {
        
        LoggerStream(1, @"tick correction reset %.2f", correction);
        correction = 0;
        _tickCorrectionTime = 0;
    }
    
    return correction;
}

- (CGFloat) presentFrame
{
    CGFloat interval = 0;
    
    if (_decoder.validVideo) {
        
        KxVideoFrame *frame;
        
        @synchronized(_videoFrames) {
            
            if (_videoFrames.count > 0) {
                
                frame = _videoFrames[0];
                [_videoFrames removeObjectAtIndex:0];
                _bufferedDuration -= frame.duration;
            }
        }
        
        if (frame)
            interval = [self presentVideoFrame:frame];
        
    } else if (_decoder.validAudio) {

        //interval = _bufferedDuration * 0.5;
                
        if (self.artworkFrame) {
            
            _imageView.image = [self.artworkFrame asImage];
            self.artworkFrame = nil;
        }
    }

    if (_decoder.validSubtitles)
        [self presentSubtitles];
    
#ifdef DEBUG
    if (self.playing && _debugStartTime < 0)
        _debugStartTime = [NSDate timeIntervalSinceReferenceDate] - _moviePosition;
#endif

    return interval;
}
The tick function effectively acts as a repeating timer: it reschedules itself every so many seconds, and each call presents one frame. Let's look at the specifics:

First,

if (_buffered && ((_bufferedDuration > _minBufferedDuration) || _decoder.isEOF))

This condition works as follows: _buffered indicates whether we are in buffering mode; it is NO when the arrays already hold data and YES otherwise. _bufferedDuration > _minBufferedDuration checks whether we hold more than the minimum buffer, 2 s here. Working it through: tick() first runs 0.1 s after decoding starts, and _bufferedDuration is the sum of the frames' durations; at 0.04 s per frame, exceeding a 2 s buffer requires at least 50 decoded frames. However, _buffered is initialized to NO, so on the first pass the buffer only holds whatever was decoded during that 0.1 s delay.

    if (!_buffered)
        interval = [self presentFrame];  // present one frame

Next, what happens when the network is poor:

        if (0 == leftFrames)  // nothing left to display
        {
            if (_decoder.isEOF) {
                
                [self pause];
                [self updateHUD];
                return;
            }
            
            if (_minBufferedDuration > 0 && !_buffered) // nothing left to show, so go back to buffering
            {
                                
                _buffered = YES;
                [_activityIndicatorView startAnimating];  // start the spinner
            }
        }
This is also easy to follow: once the array of frames to display runs dry, we have to wait and buffer again. By that point _bufferedDuration has fallen to 0, because every presented frame subtracts its own duration, so once everything has been shown it naturally reaches 0 and _buffered is set to YES. While buffering, presentFrame is not called; display only resumes once _bufferedDuration > _minBufferedDuration again. I won't go into the OpenGL ES rendering here; with that, kxmovie's decode-and-display flow has basically been covered.