I'm trying to compress a video taken with the user's camera from UIImagePickerController (not an existing video, but one captured on the fly) to upload to my server, and to spend as little time as possible doing so, so a smaller size is ideal instead of the 30-45 MB you get on newer, higher-quality cameras.
Here is the code I'm using to do the compression on iOS 8, in Swift, and it compresses wonderfully: I easily go from 35 MB down to 2.1 MB.
func convertVideo(inputUrl: NSURL, outputURL: NSURL)
{
    //setup video writer
    var videoAsset = AVURLAsset(URL: inputUrl, options: nil) as AVAsset
    var videoTrack = videoAsset.tracksWithMediaType(AVMediaTypeVideo)[0] as AVAssetTrack
    var videoSize = videoTrack.naturalSize
    var videoWriterCompressionSettings = Dictionary(dictionaryLiteral:(AVVideoAverageBitRateKey,NSNumber(integer:960000)))
    var videoWriterSettings = Dictionary(dictionaryLiteral:(AVVideoCodecKey,AVVideoCodecH264),
        (AVVideoCompressionPropertiesKey,videoWriterCompressionSettings),
        (AVVideoWidthKey,videoSize.width),
        (AVVideoHeightKey,videoSize.height))
    var videoWriterInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoWriterSettings)
    videoWriterInput.expectsMediaDataInRealTime = true
    videoWriterInput.transform = videoTrack.preferredTransform
    var videoWriter = AVAssetWriter(URL: outputURL, fileType: AVFileTypeQuickTimeMovie, error: nil)
    videoWriter.addInput(videoWriterInput)
    var videoReaderSettings: [String:AnyObject] = [kCVPixelBufferPixelFormatTypeKey:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange]
    var videoReaderOutput = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: videoReaderSettings)
    var videoReader = AVAssetReader(asset: videoAsset, error: nil)
    videoReader.addOutput(videoReaderOutput)

    //setup audio writer
    var audioWriterInput = AVAssetWriterInput(mediaType: AVMediaTypeAudio, outputSettings: nil)
    audioWriterInput.expectsMediaDataInRealTime = false
    videoWriter.addInput(audioWriterInput)

    //setup audio reader
    var audioTrack = videoAsset.tracksWithMediaType(AVMediaTypeAudio)[0] as AVAssetTrack
    var audioReaderOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: nil) as AVAssetReaderOutput
    var audioReader = AVAssetReader(asset: videoAsset, error: nil)
    audioReader.addOutput(audioReaderOutput)

    videoWriter.startWriting()

    //start writing from video reader
    videoReader.startReading()
    videoWriter.startSessionAtSourceTime(kCMTimeZero)
    //dispatch_queue_t processingQueue = dispatch_queue_create("processingQueue", nil)
    var queue = dispatch_queue_create("processingQueue", nil)

    videoWriterInput.requestMediaDataWhenReadyOnQueue(queue, usingBlock: { () -> Void in
        println("Export starting")
        while videoWriterInput.readyForMoreMediaData
        {
            var sampleBuffer:CMSampleBufferRef!
            sampleBuffer = videoReaderOutput.copyNextSampleBuffer()
            if (videoReader.status == AVAssetReaderStatus.Reading && sampleBuffer != nil)
            {
                videoWriterInput.appendSampleBuffer(sampleBuffer)
            }
            else
            {
                videoWriterInput.markAsFinished()
                if videoReader.status == AVAssetReaderStatus.Completed
                {
                    if audioReader.status == AVAssetReaderStatus.Reading || audioReader.status == AVAssetReaderStatus.Completed
                    {
                    }
                    else {
                        audioReader.startReading()
                        videoWriter.startSessionAtSourceTime(kCMTimeZero)
                        var queue2 = dispatch_queue_create("processingQueue2", nil)

                        audioWriterInput.requestMediaDataWhenReadyOnQueue(queue2, usingBlock: { () -> Void in
                            while audioWriterInput.readyForMoreMediaData
                            {
                                var sampleBuffer:CMSampleBufferRef!
                                sampleBuffer = audioReaderOutput.copyNextSampleBuffer()
                                println(sampleBuffer == nil)
                                if (audioReader.status == AVAssetReaderStatus.Reading && sampleBuffer != nil)
                                {
                                    audioWriterInput.appendSampleBuffer(sampleBuffer)
                                }
                                else
                                {
                                    audioWriterInput.markAsFinished()
                                    if (audioReader.status == AVAssetReaderStatus.Completed)
                                    {
                                        videoWriter.finishWritingWithCompletionHandler({ () -> Void in
                                            println("Finished writing video asset.")
                                            self.videoUrl = outputURL
                                            var data = NSData(contentsOfURL: outputURL)!
                                            println("Byte Size After Compression: \(data.length / 1048576) mb")
                                            println(videoAsset.playable)
                                            //Networking().uploadVideo(data, fileName: "Test2")
                                            self.dismissViewControllerAnimated(true, completion: nil)
                                        })
                                        break
                                    }
                                }
                            }
                        })
                        break
                    }
                }
            }// Second if
        }//first while
    })// first block
    // return
}
Here is the code for my UIImagePickerController that calls the compression method:
func imagePickerController(picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [NSObject : AnyObject])
{
    // Extract the media type from selection
    let type = info[UIImagePickerControllerMediaType] as String
    if (type == kUTTypeMovie)
    {
        self.videoUrl = info[UIImagePickerControllerMediaURL] as? NSURL
        var uploadUrl = NSURL.fileURLWithPath(NSTemporaryDirectory().stringByAppendingPathComponent("captured").stringByAppendingString(".mov"))
        var data = NSData(contentsOfURL: self.videoUrl!)!
        println("Size Before Compression: \(data.length / 1048576) mb")
        self.convertVideo(self.videoUrl!, outputURL: uploadUrl!)
        // Get the video from the info and set it appropriately.
        /*self.dismissViewControllerAnimated(true, completion: { () -> Void in
            //self.next.enabled = true
        })*/
    }
}
As I mentioned above, this works for reducing the file size, but when I get the file back (it's still a .mov) QuickTime cannot play it. QuickTime does try to convert it initially, but it fails partway through (1-2 seconds after opening the file). I've also tested the video file in AVPlayerViewController and it gives no information about the movie at all: just a play button, no loading, no length, only "-" where the duration usually appears in the player. In other words, a corrupt file that won't play.
I'm not sure whether this is related to the settings I'm writing the asset with, and whether it's the video or the audio. It could even be the reading of the asset that's causing the corruption. I've tried changing the variables around and setting different keys for reading and writing, but I haven't found the right combination, and it's frustrating that I can compress the video but end up with a corrupt file. I'm not sure at all, and any help would be appreciated. Pleeeeeeeeease.
This answer has been completely rewritten and annotated to support Swift 4.0. Keep in mind that changing the AVFileType and presetName values lets you tune the size and quality of the final output.
import AVFoundation

extension ViewController: AVCaptureFileOutputRecordingDelegate {
    // Delegate function has been updated
    func fileOutput(_ output: AVCaptureFileOutput, didFinishRecordingTo outputFileURL: URL, from connections: [AVCaptureConnection], error: Error?) {
        // This code just exists for getting the before size. You can remove it from production code
        do {
            let data = try Data(contentsOf: outputFileURL)
            print("File size before compression: \(Double(data.count / 1048576)) mb")
        } catch {
            print("Error: \(error)")
        }

        // This line creates a generic filename based on UUID, but you may want to use your own
        // The extension must match with the AVFileType enum
        let path = NSTemporaryDirectory() + UUID().uuidString + ".m4v"
        let outputURL = URL.init(fileURLWithPath: path)

        // The asset is built from the file that was just recorded (outputFileURL),
        // not from the destination URL
        let urlAsset = AVURLAsset(url: outputFileURL)

        // You can change the presetName value to obtain different results
        if let exportSession = AVAssetExportSession(asset: urlAsset,
                                                    presetName: AVAssetExportPresetMediumQuality) {
            exportSession.outputURL = outputURL
            // Changing the AVFileType enum gives you different options with
            // varying size and quality. Just ensure that the file extension
            // aligns with your choice
            exportSession.outputFileType = AVFileType.mov
            exportSession.exportAsynchronously {
                switch exportSession.status {
                case .unknown: break
                case .waiting: break
                case .exporting: break
                case .completed:
                    // This code only exists to provide the file size after compression. Should remove this from production code
                    do {
                        let data = try Data(contentsOf: outputURL)
                        print("File size after compression: \(Double(data.count / 1048576)) mb")
                    } catch {
                        print("Error: \(error)")
                    }
                case .failed: break
                case .cancelled: break
                }
            }
        }
    }
}
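If you are unsure which presetName and AVFileType combinations an asset will accept, you can ask AVAssetExportSession directly. The following is a minimal sketch that is not part of the original answer; it assumes a hypothetical sourceURL pointing at the recorded movie and simply logs the options you could plug into the code above.

import AVFoundation

func logExportOptions(for sourceURL: URL) {
    let asset = AVURLAsset(url: sourceURL)

    // Presets that AVAssetExportSession considers valid for this particular asset
    let presets = AVAssetExportSession.exportPresets(compatibleWith: asset)
    print("Compatible presets: \(presets)")

    // File types the chosen preset can actually write for this asset
    if let session = AVAssetExportSession(asset: asset,
                                          presetName: AVAssetExportPresetMediumQuality) {
        session.determineCompatibleFileTypes { fileTypes in
            print("Compatible file types: \(fileTypes.map { $0.rawValue })")
        }
    }
}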
Below is the original answer, written for Swift 3.0:
extension ViewController: AVCaptureFileOutputRecordingDelegate {
    func capture(_ captureOutput: AVCaptureFileOutput!, didFinishRecordingToOutputFileAt outputFileURL: URL!, fromConnections connections: [Any]!, error: Error!) {
        guard let data = NSData(contentsOf: outputFileURL as URL) else {
            return
        }
        print("File size before compression: \(Double(data.length / 1048576)) mb")

        let compressedURL = NSURL.fileURL(withPath: NSTemporaryDirectory() + NSUUID().uuidString + ".m4v")
        compressVideo(inputURL: outputFileURL as URL, outputURL: compressedURL) { (exportSession) in
            guard let session = exportSession else {
                return
            }
            switch session.status {
            case .unknown:
                break
            case .waiting:
                break
            case .exporting:
                break
            case .completed:
                guard let compressedData = NSData(contentsOf: compressedURL) else {
                    return
                }
                print("File size after compression: \(Double(compressedData.length / 1048576)) mb")
            case .failed:
                break
            case .cancelled:
                break
            }
        }
    }

    func compressVideo(inputURL: URL, outputURL: URL, handler: @escaping (_ exportSession: AVAssetExportSession?) -> Void) {
        let urlAsset = AVURLAsset(url: inputURL, options: nil)
        guard let exportSession = AVAssetExportSession(asset: urlAsset, presetName: AVAssetExportPresetMediumQuality) else {
            handler(nil)
            return
        }
        exportSession.outputURL = outputURL
        exportSession.outputFileType = AVFileTypeQuickTimeMovie
        exportSession.shouldOptimizeForNetworkUse = true
        exportSession.exportAsynchronously { () -> Void in
            handler(exportSession)
        }
    }
}
This extension focuses on exporting the video to a lower quality setting (Medium in this case), and writes it to a container other than the one iOS prefers. That may cost some quality, but you can experiment with higher output settings and different formats as you fine-tune the output.
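Since the original question records through UIImagePickerController rather than AVCaptureSession, here is a rough sketch, not part of the original answer, of calling the compressVideo helper above from the Swift 3 picker delegate. It assumes the helper lives in the same view controller and that the picker was configured to capture movies.

func imagePickerController(_ picker: UIImagePickerController,
                           didFinishPickingMediaWithInfo info: [String : Any]) {
    // The picker hands back a temporary file URL for the freshly recorded movie
    guard let videoURL = info[UIImagePickerControllerMediaURL] as? URL else { return }

    let compressedURL = URL(fileURLWithPath: NSTemporaryDirectory() + UUID().uuidString + ".m4v")
    compressVideo(inputURL: videoURL, outputURL: compressedURL) { exportSession in
        guard exportSession?.status == .completed else { return }
        // Upload the file at compressedURL here, then return to the main queue for UI work
        DispatchQueue.main.async {
            picker.dismiss(animated: true, completion: nil)
        }
    }
}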