I'm trying to use the MediaCodec API to decode a live screen-capture stream sent from a PC by FFmpeg.
For the sender (PC, ffmpeg), the output looks like this:
Output #0, rtp, to 'rtp://192.168.1.6:1234':
Metadata:
encoder : Lavf56.15.104
Stream #0:0: Video: h264 (libx264), yuv420p, 1280x720, q=-1--1, 29.97 fps, 90k tbn, 29.97 tbc
Metadata:
encoder : Lavc56.14.100 libx264
Stream mapping:
Stream #0:0 -> #0:0 (bmp (native) -> h264 (libx264))
SDP:
v=0
o=- 0 0 IN IP4 127.0.0.1
s=No Name
c=IN IP4 192.168.1.6
t=0 0
a=tool:libavformat 56.15.104
m=video 1234 RTP/AVP 96
a=rtpmap:96 H264/90000
a=fmtp:96 packetization-mode=1; sprop-parameter-sets=Z0LAH9kAUAW6EAAAPpAADqYI8YMkgA==,aMuDyyA=; profile-level-id=42C01F
Press [q] to stop, [?] for help
frame= 19 fps=0.0 q=17.0 size= 141kB time=00:00:00.63 bitrate=1826.0kbits/
frame= 34 fps= 32 q=17.0 size= 164kB time=00:00:01.13 bitrate=1181.5kbits/
frame= 50 fps= 32 q=18.0 size= 173kB time=00:00:01.66 bitrate= 850.9kbits/
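The capture command itself isn't shown above; the bmp (native) -> h264 (libx264) stream mapping is what ffmpeg's gdigrab screen grabber produces on Windows, so the sender was presumably invoked along these lines (a hypothetical reconstruction, not the exact command):

ffmpeg -f gdigrab -i desktop -s 1280x720 -c:v libx264 -preset ultrafast -tune zerolatency -f rtp rtp://192.168.1.6:1234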
For the receiver (Android MediaCodec), I created an Activity with a SurfaceView and implemented SurfaceHolder.Callback:
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
    Log.i("sss", "surfaceChanged");
    if (playerThread == null) {
        playerThread = new PlayerThread(holder.getSurface());
        playerThread.start();
    }
}
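The Activity setup isn't shown; this is a minimal sketch of the wiring it implies, assuming a full-screen SurfaceView created in code (the class name is illustrative):

import android.app.Activity;
import android.os.Bundle;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

public class MainActivity extends Activity implements SurfaceHolder.Callback {
    private PlayerThread playerThread;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        SurfaceView surfaceView = new SurfaceView(this);
        surfaceView.getHolder().addCallback(this); // delivers the surfaceChanged callback above
        setContentView(surfaceView);
    }

    @Override
    public void surfaceChanged(SurfaceHolder holder, int format, int width, int height) {
        // body as shown above
    }

    @Override public void surfaceCreated(SurfaceHolder holder) { }
    @Override public void surfaceDestroyed(SurfaceHolder holder) { }
}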
And the PlayerThread:
class PlayerThread extends Thread {
    MediaCodec decoder;
    Surface surface;

    public PlayerThread(Surface surface) {
        this.surface = surface;
    }

    @Override
    public void run() {
        running = true;
        try {
            MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720);

            // SPS/PPS from the sprop-parameter-sets in the SDP above,
            // each prefixed with an Annex-B start code
            byte[] header = new byte[] {0, 0, 0, 1};
            byte[] sps = Base64.decode("Z0LAH9kAUAW6EAAAPpAADqYI8YMkgA==", Base64.DEFAULT);
            byte[] pps = Base64.decode("aMuDyyA=", Base64.DEFAULT);
            byte[] header_sps = new byte[sps.length + header.length];
            System.arraycopy(header, 0, header_sps, 0, header.length);
            System.arraycopy(sps, 0, header_sps, header.length, sps.length);
            byte[] header_pps = new byte[pps.length + header.length];
            System.arraycopy(header, 0, header_pps, 0, header.length);
            System.arraycopy(pps, 0, header_pps, header.length, pps.length);

            format.setByteBuffer("csd-0", ByteBuffer.wrap(header_sps));
            format.setByteBuffer("csd-1", ByteBuffer.wrap(header_pps));
            format.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 1280 * 720);
            // format.setInteger("durationUs", 63446722);
            // format.setByteBuffer("csd-2", ByteBuffer.wrap((hexStringToByteArray("42C01E"))));
            // format.setInteger(MediaFormat.KEY_COLOR_FORMAT, MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Planar);
            Log.i("sss", "Format = " + format);

            try {
                decoder = MediaCodec.createDecoderByType("video/avc");
                decoder.configure(format, surface, null, 0);
                decoder.start();
            } catch (IOException ioEx) {
                ioEx.printStackTrace();
            }

            DatagramSocket socket = new DatagramSocket(1234);
            byte[] bytes = new byte[4096];
            DatagramPacket packet = new DatagramPacket(bytes, bytes.length);
            byte[] data;
            ByteBuffer[] inputBuffers;
            ByteBuffer[] outputBuffers;
            ByteBuffer inputBuffer;
            ByteBuffer outputBuffer;
            MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
            int inputBufferIndex;
            int outputBufferIndex;
            byte[] outData;
            inputBuffers = decoder.getInputBuffers();
            outputBuffers = decoder.getOutputBuffers();
            int minusCount = 0;
            byte[] prevData = new byte[65535];
            List<byte[]> playLoads = new ArrayList<>();
            int playloadSize = 0;

            while (true) {
                try {
                    socket.receive(packet);
                    data = new byte[packet.getLength()];
                    System.arraycopy(packet.getData(), packet.getOffset(), data, 0, packet.getLength());

                    // feed the raw UDP packet straight into the decoder
                    inputBufferIndex = decoder.dequeueInputBuffer(-1);
                    Log.i("sss", "inputBufferIndex = " + inputBufferIndex);
                    if (inputBufferIndex >= 0) {
                        inputBuffer = inputBuffers[inputBufferIndex];
                        inputBuffer.clear();
                        inputBuffer.put(data);
                        decoder.queueInputBuffer(inputBufferIndex, 0, data.length, 0, 0);
                        // decoder.flush();
                    }

                    outputBufferIndex = decoder.dequeueOutputBuffer(bufferInfo, 10000);
                    Log.i("sss", "outputBufferIndex = " + outputBufferIndex);
                    while (outputBufferIndex >= 0) {
                        outputBuffer = outputBuffers[outputBufferIndex];
                        outputBuffer.position(bufferInfo.offset);
                        outputBuffer.limit(bufferInfo.offset + bufferInfo.size);
                        outData = new byte[bufferInfo.size];
                        outputBuffer.get(outData);
                        decoder.releaseOutputBuffer(outputBufferIndex, false);
                        outputBufferIndex = decoder.dequeueOutputBuffer(bufferInfo, 0);
                    }
                } catch (SocketTimeoutException e) {
                    Log.d("thread", "timeout");
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
With this, dequeueOutputBuffer always returned -1. So I stripped the first 12 bytes (the RTP header) and combined the packets that share the same timestamp, then put the result into the input buffer. This runs inside the while(true), after a packet is received:
Log.i("sss", "Received = " + data.length + " bytes");
Log.i("sss","prev " + prevData.length + " bytes = " + getBytesStr(prevData));
Log.i("sss","data " + data.length + " bytes = " + getBytesStr(data));
if(data[4] == prevData[4] && data[5] == prevData[5] && data[6] == prevData[6] && data[7] == prevData[7]){
byte[] playload = new byte[prevData.length -12];
System.arraycopy(prevData,12,playload, 0, prevData.length-12);
playLoads.add(playload);
playloadSize += playload.length;
Log.i("sss", "Same timeStamp playload " + playload.length + " bytes = " + getBytesStr(playload));
} else {
if(playLoads.size() > 0){
byte[] playload = new byte[prevData.length -12];
System.arraycopy(prevData,12,playload, 0, prevData.length-12);
playLoads.add(playload);
playloadSize += playload.length;
Log.i("sss", "last playload " + playload.length + " bytes = " + getBytesStr(playload));
inputBufferIndex = decoder.dequeueInputBuffer(-1);
if (inputBufferIndex >= 0){
inputBuffer = inputBuffers[inputBufferIndex];
inputBuffer.clear();
byte[] allPlayload = new byte[playloadSize];
int curLength = 0;
for(byte[] playLoad:playLoads){
System.arraycopy(playLoad,0,allPlayload, curLength, playLoad.length);
curLength += playLoad.length;
}
Log.i("sss", "diff timeStamp AlllayLoad " + allPlayload.length + "bytes = " + getBytesStr(allPlayload));
inputBuffer.put(allPlayload);
decoder.queueInputBuffer(inputBufferIndex, 0, data.length, 0, 0);
decoder.flush();
}
bufferInfo = new MediaCodec.BufferInfo();
outputBufferIndex = decoder.dequeueOutputBuffer(bufferInfo, 10000);
if(outputBufferIndex!= -1)
Log.i("sss", "outputBufferIndex = " + outputBufferIndex);
playLoads = new ArrayList<>();
prevData = new byte[65535];
playloadSize = 0;
}
}
prevData = data.clone();
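For reference, the 12 bytes stripped above are the RTP fixed header (RFC 3550). A minimal sketch of reading its fields from a received packet, assuming no CSRC entries and no header extension:

int version        = (data[0] >> 6) & 0x03;                 // always 2
boolean marker     = (data[1] & 0x80) != 0;                  // set on the last packet of a frame
int payloadType    = data[1] & 0x7f;                         // 96 here, per "a=rtpmap:96 H264/90000"
int sequenceNumber = ((data[2] & 0xff) << 8) | (data[3] & 0xff);
long timestamp     = ((data[4] & 0xffL) << 24) | ((data[5] & 0xffL) << 16)
                   | ((data[6] & 0xffL) << 8)  |  (data[7] & 0xffL); // the bytes compared above
long ssrc          = ((data[8] & 0xffL) << 24) | ((data[9] & 0xffL) << 16)
                   | ((data[10] & 0xffL) << 8) |  (data[11] & 0xffL);

Checking the marker bit, which the sender sets on the last packet of each access unit, would be a simpler frame-boundary test than comparing the raw timestamp bytes.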
I've been searching for a week now, but still no luck.
Why does dequeueOutputBuffer always return -1? What's wrong with my code?
Thanks to @mstorsjo for pointing me at the packetization. I found useful information in How to process raw UDP packets so that they can be decoded by a decoder filter in a directshow source filter, and then edited my code as below:
if ((data[12] & 0x1f) == 28) {                // FU-A fragmentation unit
    if ((data[13] & 0x80) == 0x80) {          // found start bit
        inputBufferIndex = decoder.dequeueInputBuffer(-1);
        if (inputBufferIndex >= 0) {
            inputBuffer = inputBuffers[inputBufferIndex];
            inputBuffer.clear();
            // rebuild the NAL header from the FU indicator's F/NRI bits
            // and the FU header's type bits
            byte result = (byte) ((data[12] & 0xe0) + (data[13] & 0x1f));
            inputBuffer.put(new byte[] {0, 0, 1});
            inputBuffer.put(result);
            inputBuffer.put(data, 14, data.length - 14);
        }
    } else if ((data[13] & 0x40) == 0x40) {   // found stop bit
        inputBuffer.put(data, 14, data.length - 14);
        decoder.queueInputBuffer(inputBufferIndex, 0, inputBuffer.position(), 0, 0);
        bufferInfo = new MediaCodec.BufferInfo();
        outputBufferIndex = decoder.dequeueOutputBuffer(bufferInfo, 10000);
        switch (outputBufferIndex) {
            case MediaCodec.INFO_OUTPUT_BUFFERS_CHANGED:
                outputBuffers = decoder.getOutputBuffers();
                Log.w("sss", "Output Buffers Changed");
                break;
            case MediaCodec.INFO_OUTPUT_FORMAT_CHANGED:
                Log.w("sss", "Output Format Changed");
                MediaFormat newFormat = decoder.getOutputFormat();
                Log.i("sss", "New format : " + newFormat);
                break;
            case MediaCodec.INFO_TRY_AGAIN_LATER:
                Log.w("sss", "Try Again Later");
                break;
            default:
                outputBuffer = outputBuffers[outputBufferIndex];
                outputBuffer.position(bufferInfo.offset);
                outputBuffer.limit(bufferInfo.offset + bufferInfo.size);
                decoder.releaseOutputBuffer(outputBufferIndex, true);
        }
    } else {                                  // middle fragment
        inputBuffer.put(data, 14, data.length - 14);
    }
}
Thank you.
You can't just discard the RTP header and pretend the rest of the packet is a normal H264 frame; it isn't. See RFC 6184 for a description of the format used when packing H264 into RTP. You need to undo this packetization to get the data back into a form that a plain decoder can handle. Have a look at libavformat/rtpdec_h264.c in libav/ffmpeg for an example of how to do this.
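To sketch the idea for the two payload types this particular stream carries (single NAL unit packets, and the FU-A fragments handled in the edit above): assume payload is an RTP packet with the 12-byte header already removed, and out is a buffer accumulating an Annex-B stream for the decoder; STAP-A and the other aggregation types are left out.

import java.io.ByteArrayOutputStream;

class H264Depacketizer {
    private static final byte[] START_CODE = {0, 0, 0, 1};

    // Appends the NAL bytes carried by one RTP payload to out.
    void depacketize(byte[] payload, ByteArrayOutputStream out) {
        int type = payload[0] & 0x1f;
        if (type >= 1 && type <= 23) {
            // Single NAL unit packet: prepend a start code, copy as-is.
            out.write(START_CODE, 0, START_CODE.length);
            out.write(payload, 0, payload.length);
        } else if (type == 28) {
            // FU-A: byte 0 is the FU indicator, byte 1 the FU header.
            if ((payload[1] & 0x80) != 0) { // start bit
                // Rebuild the original NAL header from the indicator's
                // F/NRI bits and the FU header's type bits.
                out.write(START_CODE, 0, START_CODE.length);
                out.write((payload[0] & 0xe0) | (payload[1] & 0x1f));
            }
            out.write(payload, 2, payload.length - 2);
            // (payload[1] & 0x40) != 0 is the stop bit: the accumulated
            // bytes now form a complete NAL unit for the decoder.
        }
    }
}

Each complete NAL unit then goes into one MediaCodec input buffer, and queueInputBuffer must be given the exact number of bytes written, not the length of the last packet.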