
Reading YYKit: How YYImage Implements GIF Display

辛可人
2023-12-01

Approach

How YYKit processes GIF images along its animated-image path.

First of all, a GIF file is loaded as YYKit's YYImage type. YYImage uses YYImageDecoder to pull the frame information out of the file, and hands it to YYAnimatedImageView through the YYAnimatedImage protocol that YYImage conforms to. This is unlike YYFrameImage, which is itself assembled from an array of separate images.

Reading the YYAnimatedImageView source

    #define LOCK(...) dispatch_semaphore_wait(self->_lock, DISPATCH_TIME_FOREVER); \
    __VA_ARGS__; \
    dispatch_semaphore_signal(self->_lock);

    #define LOCK_VIEW(...) dispatch_semaphore_wait(view->_lock, DISPATCH_TIME_FOREVER); \
    __VA_ARGS__; \
    dispatch_semaphore_signal(view->_lock);

When I hit these two macro definitions and their call sites while reading the source, I couldn't make sense of them. Some searching cleared things up: the trailing "\" splices the next line onto the current one, exactly what multi-line macro definitions require, and __VA_ARGS__ stands for the macro's variadic arguments, the usual device in function-like macros. So the macros above take arbitrary statements and run them under semaphore control.
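
To see the trick in isolation, here is a minimal, self-contained sketch (not YYKit's code, just the same pattern): whatever statements you pass in run between the wait and the signal.

    #import <Foundation/Foundation.h>

    // Same pattern as YYKit's LOCK macro: the trailing backslashes splice the
    // macro into one logical line, and __VA_ARGS__ pastes the caller's
    // statements between the wait/signal pair.
    #define LOCK(...) dispatch_semaphore_wait(lock, DISPATCH_TIME_FOREVER); \
    __VA_ARGS__; \
    dispatch_semaphore_signal(lock);

    int main(void) {
        @autoreleasepool {
            dispatch_semaphore_t lock = dispatch_semaphore_create(1);
            NSInteger counter = 0;
            LOCK(
                counter += 1;
                NSLog(@"counter is now %ld", (long)counter)
            );
        }
        return 0;
    }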

YYImage source

The class method + (NSArray *)preferredScales picks candidate image scales for the various screen resolutions; it lives in the NSBundle+YYAdd file, a category on NSBundle. There is a code-organization question lurking here: the method is driven by [UIScreen mainScreen], yet it is housed in a category on NSBundle.
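
Roughly, the method orders the candidate scales so the current screen's scale is tried first. A sketch from memory, not verbatim YYKit code:

    + (NSArray *)preferredScales {
        CGFloat screenScale = [UIScreen mainScreen].scale;
        if (screenScale <= 1) return @[@1, @2, @3];
        if (screenScale <= 2) return @[@2, @3, @1];
        return @[@3, @2, @1];
    }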

The scale in [UIScreen mainScreen].scale converts the image's logical coordinate system, measured in points, into the device coordinate system, measured in pixels. Typically, on a Retina screen scale is 2.0 or 3.0, meaning one point is rendered by four or nine pixels; a 100×100-point image at scale 2, for example, covers 200×200 pixels. On a standard-resolution display the factor is 1 and one point equals one pixel.

YYImage overrides imageNamed: and deliberately discards its caching behavior; instead it resolves the image file's path itself and reads the contents with dataWithContentsOfFile:.
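
A simplified sketch of the idea (the real override also walks +[NSBundle preferredScales] and a list of file extensions when the name omits them):

    + (YYImage *)imageNamed:(NSString *)name {
        if (name.length == 0) return nil;
        // Resolve the path ourselves instead of going through UIImage's cache.
        NSString *path = [[NSBundle mainBundle] pathForResource:name ofType:nil];
        if (!path) return nil;
        NSData *data = [NSData dataWithContentsOfFile:path];
        if (data.length == 0) return nil;
        return [[self alloc] initWithData:data scale:[UIScreen mainScreen].scale];
    }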

YYImageCoder source

bit-depth is an image's color depth: how many bits are used to define a pixel. The greater the bit depth, the more colors can be represented. Typically each channel's pixel values range over 0–255, i.e. a bit depth of 8; an RGB image therefore has a bit depth of 24, with 8 bits each for R, G, and B.
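
To make that concrete, here is the arithmetic for one decoded frame in BGRA8888, the 32-bit format YYKit decodes into (the dimensions are made up):

    // 8 bits x 4 channels (B, G, R, A) = 4 bytes per pixel.
    size_t width = 100, height = 60;
    size_t bytesPerRow = width * 4;            // 400 bytes
    size_t frameBytes  = bytesPerRow * height; // 24,000 bytes per decoded frame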

Inside YYImageDecoder

+ (instancetype)decoderWithData:(NSData *)data scale:(CGFloat)scale {
    if (!data) return nil;
    YYImageDecoder *decoder = [[YYImageDecoder alloc] initWithScale:scale];
    [decoder updateData:data final:YES];
    if (decoder.frameCount == 0) return nil;
    return decoder;
}

- (void)_updateSource {
    switch (_type) {
        case YYImageTypeWebP: {
            [self _updateSourceWebP];
        } break;

        case YYImageTypePNG: {
            [self _updateSourceAPNG];
        } break;

        default: {
            [self _updateSourceImageIO];
        } break;
    }
}

Since this article is chiefly about the GIF format, the branch taken here is [self _updateSourceImageIO];. Comrades, we have finally arrived. (A storyteller's phrase, borrowed for the occasion.)

- (void)_updateSourceImageIO {
    _width = 0;
    _height = 0;
    _orientation = UIImageOrientationUp;
    _loopCount = 0;
    dispatch_semaphore_wait(_framesLock, DISPATCH_TIME_FOREVER);
    _frames = nil;
    dispatch_semaphore_signal(_framesLock);

    if (!_source) {
        if (_finalized) {
            _source = CGImageSourceCreateWithData((__bridge CFDataRef)_data, NULL);
        } else {
            _source = CGImageSourceCreateIncremental(NULL);
            if (_source) CGImageSourceUpdateData(_source, (__bridge CFDataRef)_data, false);
        }
    } else {
        CGImageSourceUpdateData(_source, (__bridge CFDataRef)_data, _finalized);
    }
    if (!_source) return;

    _frameCount = CGImageSourceGetCount(_source);
    if (_frameCount == 0) return;

    if (!_finalized) { // ignore multi-frame before finalized
        _frameCount = 1;
    } else {
        if (_type == YYImageTypePNG) { // use custom apng decoder and ignore multi-frame
            _frameCount = 1;
        }
        if (_type == YYImageTypeGIF) { // get gif loop count
            CFDictionaryRef properties = CGImageSourceCopyProperties(_source, NULL);
            if (properties) {
                CFDictionaryRef gif = CFDictionaryGetValue(properties, kCGImagePropertyGIFDictionary);
                if (gif) {
                    CFTypeRef loop = CFDictionaryGetValue(gif, kCGImagePropertyGIFLoopCount);
                    if (loop) CFNumberGetValue(loop, kCFNumberNSIntegerType, &_loopCount);
                }
                CFRelease(properties);
            }
        }
    }

    /*
     ICO, GIF, APNG may contains multi-frame.
     */
    NSMutableArray *frames = [NSMutableArray new];
    for (NSUInteger i = 0; i < _frameCount; i++) {
        _YYImageDecoderFrame *frame = [_YYImageDecoderFrame new];
        frame.index = i;
        frame.blendFromIndex = i;
        frame.hasAlpha = YES;
        frame.isFullSize = YES;
        [frames addObject:frame];

        CFDictionaryRef properties = CGImageSourceCopyPropertiesAtIndex(_source, i, NULL);
        if (properties) {
            NSTimeInterval duration = 0;
            NSInteger orientationValue = 0, width = 0, height = 0;
            CFTypeRef value = NULL;

            value = CFDictionaryGetValue(properties, kCGImagePropertyPixelWidth);
            if (value) CFNumberGetValue(value, kCFNumberNSIntegerType, &width);
            value = CFDictionaryGetValue(properties, kCGImagePropertyPixelHeight);
            if (value) CFNumberGetValue(value, kCFNumberNSIntegerType, &height);
            if (_type == YYImageTypeGIF) {
                CFDictionaryRef gif = CFDictionaryGetValue(properties, kCGImagePropertyGIFDictionary);
                if (gif) {
                    // Use the unclamped frame delay if it exists.
                    value = CFDictionaryGetValue(gif, kCGImagePropertyGIFUnclampedDelayTime);
                    if (!value) {
                        // Fall back to the clamped frame delay if the unclamped frame delay does not exist.
                        value = CFDictionaryGetValue(gif, kCGImagePropertyGIFDelayTime);
                    }
                    if (value) CFNumberGetValue(value, kCFNumberDoubleType, &duration);
                }
            }

            frame.width = width;
            frame.height = height;
            frame.duration = duration;

            if (i == 0 && _width + _height == 0) { // init first frame
                _width = width;
                _height = height;
                value = CFDictionaryGetValue(properties, kCGImagePropertyOrientation);
                if (value) {
                    CFNumberGetValue(value, kCFNumberNSIntegerType, &orientationValue);
                    _orientation = YYUIImageOrientationFromEXIFValue(orientationValue);
                }
            }
            CFRelease(properties);
        }
    }
    dispatch_semaphore_wait(_framesLock, DISPATCH_TIME_FOREVER);
    _frames = frames;
    dispatch_semaphore_signal(_framesLock);
}

The method above has two main parts. Before the for loop runs, it first creates a CGImageSourceRef object from the data:

if (!_source) {
    if (_finalized) {
        _source = CGImageSourceCreateWithData((__bridge CFDataRef)_data, NULL);
    } else {
        _source = CGImageSourceCreateIncremental(NULL);
        if (_source) CGImageSourceUpdateData(_source, (__bridge CFDataRef)_data, false);
    }
} else {
    CGImageSourceUpdateData(_source, (__bridge CFDataRef)_data, _finalized);
}

Depending on the _finalized flag, it either creates the CGImageSourceRef object or feeds fresh data into the existing one.

        IMAGEIO_EXTERN CGImageSourceRef __nullable CGImageSourceCreateWithData(CFDataRef __nonnull data, CFDictionaryRef __nullable options) IMAGEIO_AVAILABLE_STARTING(__MAC_10_4, __IPHONE_4_0);

        IMAGEIO_EXTERN CGImageSourceRef __nonnull CGImageSourceCreateIncremental(CFDictionaryRef __nullable options)  IMAGEIO_AVAILABLE_STARTING(__MAC_10_4, __IPHONE_4_0);

        IMAGEIO_EXTERN void CGImageSourceUpdateData(CGImageSourceRef __nonnull isrc, CFDataRef __nonnull data, bool final)  IMAGEIO_AVAILABLE_STARTING(__MAC_10_4, __IPHONE_4_0);

The functions above all belong to the ImageIO framework.

*Note that creating an incremental _source differs from the non-incremental path: the incremental source is created empty first and only then fed data via CGImageSourceUpdateData.
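
A sketch of the incremental path from the caller's side, assuming `chunks` is an array of NSData pieces arriving from the network:

    YYImageDecoder *decoder = [[YYImageDecoder alloc] initWithScale:2.0];
    NSMutableData *received = [NSMutableData new];
    for (NSUInteger i = 0; i < chunks.count; i++) {
        [received appendData:chunks[i]];
        BOOL final = (i == chunks.count - 1);
        // Each update re-runs _updateSource; frameCount stays clamped to 1
        // until final == YES (see the check below).
        [decoder updateData:received final:final];
    }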

_frameCount = CGImageSourceGetCount(_source);
if (_frameCount == 0) return;

if (!_finalized) { // ignore multi-frame before finalized
    _frameCount = 1;
} else {
    if (_type == YYImageTypePNG) { // use custom apng decoder and ignore multi-frame
        _frameCount = 1;
    }
    if (_type == YYImageTypeGIF) { // get gif loop count
        CFDictionaryRef properties = CGImageSourceCopyProperties(_source, NULL);
        if (properties) {
            CFDictionaryRef gif = CFDictionaryGetValue(properties, kCGImagePropertyGIFDictionary);
            if (gif) {
                CFTypeRef loop = CFDictionaryGetValue(gif, kCGImagePropertyGIFLoopCount);
                if (loop) CFNumberGetValue(loop, kCFNumberNSIntegerType, &_loopCount);
            }
            CFRelease(properties);
        }
    }
}

Next, the method reads the number of images in _source.

  IMAGEIO_EXTERN size_t CGImageSourceGetCount(CGImageSourceRef __nonnull isrc)  IMAGEIO_AVAILABLE_STARTING(__MAC_10_4, __IPHONE_4_0);

This ImageIO function returns the number of images in the source, excluding thumbnails; note that it does not extract image layers from PSD files.
The GIF loop count is then read with CFTypeRef loop = CFDictionaryGetValue(gif, kCGImagePropertyGIFLoopCount),

and assigned with if (loop) CFNumberGetValue(loop, kCFNumberNSIntegerType, &_loopCount). For GIF, a loop count of 0 means the animation repeats forever.
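
The same two reads can be tried directly against ImageIO; a self-contained sketch (the path is made up):

    #import <ImageIO/ImageIO.h>

    NSData *gifData = [NSData dataWithContentsOfFile:@"/tmp/example.gif"];
    CGImageSourceRef src = CGImageSourceCreateWithData((__bridge CFDataRef)gifData, NULL);
    if (src) {
        size_t count = CGImageSourceGetCount(src); // thumbnails excluded
        NSInteger loops = 0;
        CFDictionaryRef props = CGImageSourceCopyProperties(src, NULL);
        if (props) {
            CFDictionaryRef gif = CFDictionaryGetValue(props, kCGImagePropertyGIFDictionary);
            if (gif) {
                CFTypeRef loop = CFDictionaryGetValue(gif, kCGImagePropertyGIFLoopCount);
                if (loop) CFNumberGetValue(loop, kCFNumberNSIntegerType, &loops);
            }
            CFRelease(props);
        }
        NSLog(@"%zu frames, loop count %ld", count, (long)loops);
        CFRelease(src);
    }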

Finally, the for loop collects each frame's properties (width, height, delay) and initializes the decoder-level data from the first frame. ImageIO may clamp kCGImagePropertyGIFDelayTime to a minimum value, which is why the unclamped key is preferred when it exists. The key lines:

value = CFDictionaryGetValue(properties, kCGImagePropertyPixelWidth);

value = CFDictionaryGetValue(properties, kCGImagePropertyPixelHeight);

value = CFDictionaryGetValue(gif, kCGImagePropertyGIFUnclampedDelayTime);
if (!value) {
    // Fall back to the clamped frame delay if the unclamped frame delay does not exist.
    value = CFDictionaryGetValue(gif, kCGImagePropertyGIFDelayTime);
}

Then comes the method that produces the bitmap for a given frame; it handles the ImageIO, APNG, and WebP sources in turn:

- (CGImageRef)_newUnblendedImageAtIndex:(NSUInteger)index
                         extendToCanvas:(BOOL)extendToCanvas
                                decoded:(BOOL *)decoded CF_RETURNS_RETAINED {

if (!_finalized && index > 0) return NULL;
if (_frames.count <= index) return NULL;
_YYImageDecoderFrame *frame = _frames[index];

if (_source) {
    CGImageRef imageRef = CGImageSourceCreateImageAtIndex(_source, index, (CFDictionaryRef)@{(id)kCGImageSourceShouldCache:@(YES)});
    if (imageRef && extendToCanvas) {
        size_t width = CGImageGetWidth(imageRef);
        size_t height = CGImageGetHeight(imageRef);
        if (width == _width && height == _height) {
            CGImageRef imageRefExtended = YYCGImageCreateDecodedCopy(imageRef, YES);
            if (imageRefExtended) {
                CFRelease(imageRef);
                imageRef = imageRefExtended;
                if (decoded) *decoded = YES;
            }
        } else {
            CGContextRef context = CGBitmapContextCreate(NULL, _width, _height, 8, 0, YYCGColorSpaceGetDeviceRGB(), kCGBitmapByteOrder32Host | kCGImageAlphaPremultipliedFirst);
            if (context) {
                CGContextDrawImage(context, CGRectMake(0, _height - height, width, height), imageRef);
                CGImageRef imageRefExtended = CGBitmapContextCreateImage(context);
                CFRelease(context);
                if (imageRefExtended) {
                    CFRelease(imageRef);
                    imageRef = imageRefExtended;
                    if (decoded) *decoded = YES;
                }
            }
        }
    }
    return imageRef;
}

if (_apngSource) {
    uint32_t size = 0;
    uint8_t *bytes = yy_png_copy_frame_data_at_index(_data.bytes, _apngSource, (uint32_t)index, &size);
    if (!bytes) return NULL;
    CGDataProviderRef provider = CGDataProviderCreateWithData(bytes, bytes, size, YYCGDataProviderReleaseDataCallback);
    if (!provider) {
        free(bytes);
        return NULL;
    }
    bytes = NULL; // hold by provider

    CGImageSourceRef source = CGImageSourceCreateWithDataProvider(provider, NULL);
    if (!source) {
        CFRelease(provider);
        return NULL;
    }
    CFRelease(provider);

    if(CGImageSourceGetCount(source) < 1) {
        CFRelease(source);
        return NULL;
    }

    CGImageRef imageRef = CGImageSourceCreateImageAtIndex(source, 0, (CFDictionaryRef)@{(id)kCGImageSourceShouldCache:@(YES)});
    CFRelease(source);
    if (!imageRef) return NULL;
    if (extendToCanvas) {
        CGContextRef context = CGBitmapContextCreate(NULL, _width, _height, 8, 0, YYCGColorSpaceGetDeviceRGB(), kCGBitmapByteOrder32Host | kCGImageAlphaPremultipliedFirst); //bgrA
        if (context) {
            CGContextDrawImage(context, CGRectMake(frame.offsetX, frame.offsetY, frame.width, frame.height), imageRef);
            CFRelease(imageRef);
            imageRef = CGBitmapContextCreateImage(context);
            CFRelease(context);
            if (decoded) *decoded = YES;
        }
    }
    return imageRef;
}

#if YYIMAGE_WEBP_ENABLED
if (_webpSource) {
    WebPIterator iter;
    if (!WebPDemuxGetFrame(_webpSource, (int)(index + 1), &iter)) return NULL; // demux webp frame data
    // frame numbers are one-based in webp -----------^

    int frameWidth = iter.width;
    int frameHeight = iter.height;
    if (frameWidth < 1 || frameHeight < 1) return NULL;

    int width = extendToCanvas ? (int)_width : frameWidth;
    int height = extendToCanvas ? (int)_height : frameHeight;
    if (width > _width || height > _height) return NULL;

    const uint8_t *payload = iter.fragment.bytes;
    size_t payloadSize = iter.fragment.size;

    WebPDecoderConfig config;
    if (!WebPInitDecoderConfig(&config)) {
        WebPDemuxReleaseIterator(&iter);
        return NULL;
    }
    if (WebPGetFeatures(payload , payloadSize, &config.input) != VP8_STATUS_OK) {
        WebPDemuxReleaseIterator(&iter);
        return NULL;
    }

    size_t bitsPerComponent = 8;
    size_t bitsPerPixel = 32;
    size_t bytesPerRow = YYImageByteAlign(bitsPerPixel / 8 * width, 32);
    size_t length = bytesPerRow * height;
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrder32Host | kCGImageAlphaPremultipliedFirst; //bgrA

    void *pixels = calloc(1, length);
    if (!pixels) {
        WebPDemuxReleaseIterator(&iter);
        return NULL;
    }

    config.output.colorspace = MODE_bgrA;
    config.output.is_external_memory = 1;
    config.output.u.RGBA.rgba = pixels;
    config.output.u.RGBA.stride = (int)bytesPerRow;
    config.output.u.RGBA.size = length;
    VP8StatusCode result = WebPDecode(payload, payloadSize, &config); // decode
    if ((result != VP8_STATUS_OK) && (result != VP8_STATUS_NOT_ENOUGH_DATA)) {
        WebPDemuxReleaseIterator(&iter);
        free(pixels);
        return NULL;
    }
    WebPDemuxReleaseIterator(&iter);

    if (extendToCanvas && (iter.x_offset != 0 || iter.y_offset != 0)) {
        void *tmp = calloc(1, length);
        if (tmp) {
            vImage_Buffer src = {pixels, height, width, bytesPerRow};
            vImage_Buffer dest = {tmp, height, width, bytesPerRow};
            vImage_CGAffineTransform transform = {1, 0, 0, 1, iter.x_offset, -iter.y_offset};
            uint8_t backColor[4] = {0};
            vImage_Error error = vImageAffineWarpCG_ARGB8888(&src, &dest, NULL, &transform, backColor, kvImageBackgroundColorFill);
            if (error == kvImageNoError) {
                memcpy(pixels, tmp, length);
            }
            free(tmp);
        }
    }

    CGDataProviderRef provider = CGDataProviderCreateWithData(pixels, pixels, length, YYCGDataProviderReleaseDataCallback);
    if (!provider) {
        free(pixels);
        return NULL;
    }
    pixels = NULL; // hold by provider

    CGImageRef image = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, YYCGColorSpaceGetDeviceRGB(), bitmapInfo, provider, NULL, false, kCGRenderingIntentDefault);
    CFRelease(provider);
    if (decoded) *decoded = YES;
    return image;
}
#endif

return NULL;
}

Studying the code above, we see that a frame can be fetched from the image source, with specific options, via:

IMAGEIO_EXTERN CGImageRef __nullable CGImageSourceCreateImageAtIndex(CGImageSourceRef __nonnull isrc, size_t index, CFDictionaryRef __nullable options)  IMAGEIO_AVAILABLE_STARTING(__MAC_10_4, __IPHONE_4_0);

In the source, the options dictionary asks ImageIO to cache the decoded image data: CGImageRef imageRef = CGImageSourceCreateImageAtIndex(_source, index, (CFDictionaryRef)@{(id)kCGImageSourceShouldCache:@(YES)});

Then the frame's width and height are read:

        size_t width = CGImageGetWidth(imageRef);
        size_t height = CGImageGetHeight(imageRef);
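
Putting the two calls together, a hedged sketch of pulling one frame out yourself, reusing the src from the earlier sketch (index 0 is arbitrary):

    NSDictionary *opts = @{(id)kCGImageSourceShouldCache: @(YES)};
    CGImageRef img = CGImageSourceCreateImageAtIndex(src, 0, (__bridge CFDictionaryRef)opts);
    if (img) {
        NSLog(@"frame 0: %zu x %zu px", CGImageGetWidth(img), CGImageGetHeight(img));
        CGImageRelease(img);
    }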

Creating the decoded copy of an image:

CGImageRef YYCGImageCreateDecodedCopy(CGImageRef imageRef, BOOL decodeForDisplay) {
if (!imageRef) return NULL;
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
if (width == 0 || height == 0) return NULL;

if (decodeForDisplay) { //decode with redraw (may lose some precision)
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef) & kCGBitmapAlphaInfoMask;
    BOOL hasAlpha = NO;
    if (alphaInfo == kCGImageAlphaPremultipliedLast ||
        alphaInfo == kCGImageAlphaPremultipliedFirst ||
        alphaInfo == kCGImageAlphaLast ||
        alphaInfo == kCGImageAlphaFirst) {
        hasAlpha = YES;
    }
    // BGRA8888 (premultiplied) or BGRX8888
    // same as UIGraphicsBeginImageContext() and -[UIView drawRect:]
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrder32Host;
    bitmapInfo |= hasAlpha ? kCGImageAlphaPremultipliedFirst : kCGImageAlphaNoneSkipFirst;
    CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 0, YYCGColorSpaceGetDeviceRGB(), bitmapInfo);
    if (!context) return NULL;
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef); // decode
    CGImageRef newImage = CGBitmapContextCreateImage(context);
    CFRelease(context);
    return newImage;

} else {
    CGColorSpaceRef space = CGImageGetColorSpace(imageRef);
    size_t bitsPerComponent = CGImageGetBitsPerComponent(imageRef);
    size_t bitsPerPixel = CGImageGetBitsPerPixel(imageRef);
    size_t bytesPerRow = CGImageGetBytesPerRow(imageRef);
    CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
    if (bytesPerRow == 0 || width == 0 || height == 0) return NULL;

    CGDataProviderRef dataProvider = CGImageGetDataProvider(imageRef);
    if (!dataProvider) return NULL;
    CFDataRef data = CGDataProviderCopyData(dataProvider); // decode
    if (!data) return NULL;

    CGDataProviderRef newProvider = CGDataProviderCreateWithCFData(data);
    CFRelease(data);
    if (!newProvider) return NULL;

    CGImageRef newImage = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, space, bitmapInfo, newProvider, NULL, false, kCGRenderingIntentDefault);
    CFRelease(newProvider);
    return newImage;
}
}

In _frameAtIndex:decodeForDisplay:, the CGImage obtained above is used to create a new UIImage:

 CGImageRef imageRef = [self _newUnblendedImageAtIndex:index extendToCanvas:extendToCanvas decoded:&decoded];
    if (!imageRef) return nil;
    if (decodeForDisplay && !decoded) {
        CGImageRef imageRefDecoded = YYCGImageCreateDecodedCopy(imageRef, YES);
        if (imageRefDecoded) {
            CFRelease(imageRef);
            imageRef = imageRefDecoded;
            decoded = YES;
        }
    }
    UIImage *image = [UIImage imageWithCGImage:imageRef scale:_scale orientation:_orientation];

Once the decoder has this information, YYImage can compute the memory cost per frame and, from that, of the whole animated image (if it is one):

if (decoder.frameCount > 1) {
    _decoder = decoder;
    _bytesPerFrame = CGImageGetBytesPerRow(image.CGImage) * CGImageGetHeight(image.CGImage);
    _animatedImageMemorySize = _bytesPerFrame * decoder.frameCount;
}
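
For a sense of scale: a 256×256 frame decoded as BGRA8888 uses 256 × 4 = 1,024 bytes per row, i.e. 1,024 × 256 = 262,144 bytes (256 KB) per frame, so a 30-frame GIF costs roughly 7.5 MB once decoded. This is the figure YYAnimatedImageView later consults when sizing its frame buffer.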

And through the YYAnimatedImage protocol, YYImage hands this information to whoever asks for it, YYAnimatedImageView in particular:

#pragma mark - protocol YYAnimatedImage

- (NSUInteger)animatedImageFrameCount {
    return _decoder.frameCount;
}

- (NSUInteger)animatedImageLoopCount {
    return _decoder.loopCount;
}

- (NSUInteger)animatedImageBytesPerFrame {
    return _bytesPerFrame;
}

- (UIImage *)animatedImageFrameAtIndex:(NSUInteger)index {
    if (index >= _decoder.frameCount) return nil;
    dispatch_semaphore_wait(_preloadedLock, DISPATCH_TIME_FOREVER);
    UIImage *image = _preloadedFrames[index];
    dispatch_semaphore_signal(_preloadedLock);
    if (image) return image == (id)[NSNull null] ? nil : image;
    return [_decoder frameAtIndex:index decodeForDisplay:YES].image;
}

- (NSTimeInterval)animatedImageDurationAtIndex:(NSUInteger)index {
    NSTimeInterval duration = [_decoder frameDurationAtIndex:index];

    /*
     http://opensource.apple.com/source/WebCore/WebCore-7600.1.25/platform/graphics/cg/ImageSourceCG.cpp
     Many annoying ads specify a 0 duration to make an image flash as quickly as
     possible. We follow Safari and Firefox's behavior and use a duration of 100 ms
     for any frames that specify a duration of <= 10 ms.
     See <rdar://problem/7689300> and <http://webkit.org/b/36082> for more information.

     See also: http://nullsleep.tumblr.com/post/16524517190/animated-gif-minimum-frame-delay-browser.
     */
    if (duration < 0.011f) return 0.100f;
    return duration;
}

Shows how little I'd seen: so protocols can be used like this. All that's required is that curAnimatedImage conform to the protocol, and the view can query it uniformly.

_totalLoop = _curAnimatedImage.animatedImageLoopCount;
_totalFrameCount = _curAnimatedImage.animatedImageFrameCount;
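
Which means any UIImage subclass can drive YYAnimatedImageView; a minimal hypothetical conformance (all returned values are placeholders):

    @interface MyAnimatedImage : UIImage <YYAnimatedImage>
    @end

    @implementation MyAnimatedImage
    - (NSUInteger)animatedImageFrameCount { return 10; }
    - (NSUInteger)animatedImageLoopCount { return 0; }               // 0 = loop forever
    - (NSUInteger)animatedImageBytesPerFrame { return 64 * 64 * 4; } // BGRA8888
    - (UIImage *)animatedImageFrameAtIndex:(NSUInteger)index {
        return nil; // produce/decode the frame for `index` here
    }
    - (NSTimeInterval)animatedImageDurationAtIndex:(NSUInteger)index {
        return 0.1; // seconds
    }
    @end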

PS: a thousand readers make a thousand Hamlets. Excellent source code can sharpen you in every dimension. Over the course of studying how YYAnimatedImageView displays GIF files, I piled up more than a dozen summary notes, large and small; for fear of inviting ridicule I won't paste them all here. The deeper I read, the more I admire ibireme, for his coding style and for his fluency in low-level C alike.
