Recently, out of boredom, I picked these pieces up again and found that VGA frames seem to be just too big: software JPEG encoding took about 500 ms per frame, which is really unbearable. Switching to hardware JPEG encoding turned out to be about the same, which was rather awkward... I then dropped to QVGA (320x240) and used the hardware MFC to encode H.264, which takes roughly 20~30 ms per frame, still slow... I don't know whether my Tiny6410 is just too slow, or whether I'm handling something incorrectly.
#Main Content
mjpg-streamer is, just as its name suggests, a tool for streaming JPEG data; it covers the whole pipeline from camera capture to network transmission.
Its implementation is also fairly straightforward:
##The main Function
The main function starts, of course, by parsing the command-line input and setting up handling for a few signals, mainly SIGPIPE and SIGINT:
/* ignore SIGPIPE (send by OS if transmitting to closed TCP sockets) */
signal(SIGPIPE, SIG_IGN);
/* register signal handler for <CTRL>+C in order to clean up */
if (signal(SIGINT, signal_handler) == SIG_ERR) {
LOG("could not register signal handler\n");
closelog();
exit(EXIT_FAILURE);
}
SIGPIPE is ignored, as is common practice in Linux network programming; otherwise the process would be terminated by default whenever a broken pipe occurs (e.g., when writing to a TCP socket the peer has already closed).
Then a handler is registered for SIGINT so the program can clean up on <CTRL>+C.
Next comes loading the plugins as dynamic libraries: the mjpg-streamer command line requires you to name the input and output methods to use. I never quite figured out why they have to be loaded at runtime; anyway, the loading itself is not the point, what matters is what gets loaded.
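As a minimal sketch of what that runtime loading looks like (illustration only, using the POSIX dlopen/dlsym API; the symbol names follow the input-plugin convention, but the real entry points take parameter structs rather than the simplified signatures assumed here):

```c
#include <dlfcn.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* load the input plugin at runtime (link with -ldl) */
    void *handle = dlopen("./input_uvc.so", RTLD_LAZY);
    if (handle == NULL) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return EXIT_FAILURE;
    }

    /* resolve the plugin's entry points by symbol name;
     * the (void) signatures are a simplification for this sketch */
    int (*plugin_init)(void) = (int (*)(void))dlsym(handle, "input_init");
    int (*plugin_run)(void)  = (int (*)(void))dlsym(handle, "input_run");
    if (plugin_init == NULL || plugin_run == NULL) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
        return EXIT_FAILURE;
    }

    plugin_init();   /* configure the device */
    plugin_run();    /* start capturing */

    dlclose(handle);
    return 0;
}
```

The upside of this design is that input and output are pluggable: the same binary can drive a different camera or data source just by naming a different .so on the command line.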
##Data Capture
Once the shared library is loaded, the functions for operating the plugin can be resolved; there are four of them: init, stop, run and cmd. Since the goal is camera streaming, I'll ignore everything else. First the input side, implemented in input_uvc.c:
The initialization function init(), naturally, sets up and initializes the camera parameters:
/* open video device and prepare data structure */
if (init_videoIn(videoIn, dev, width, height, fps, format, 1) < 0) {
IPRINT("init_VideoIn failed\n");
closelog();
exit(EXIT_FAILURE);
}
The parameters being set are the camera device, the capture resolution, the frame rate, the pixel format, and the JPEG compression quality:
/* display the parsed values */
IPRINT("Using V4L2 device.: %s\n", dev);
IPRINT("Desired Resolution: %i x %i\n", width, height);
IPRINT("Frames Per Second.: %i\n", fps);
IPRINT("Format............: %s\n", (format == V4L2_PIX_FMT_YUYV) ? "YUV" : "MJPEG");
if ( format == V4L2_PIX_FMT_YUYV )
IPRINT("JPEG Quality......: %d\n", gquality);
Camera initialization in init_videoIn() is the standard Linux V4L2 camera setup:
if (init_v4l2 (vd) < 0) {
fprintf (stderr, " Init v4L2 failed !! exit fatal \n");
goto error;
}
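For reference, the core of a V4L2 bring-up looks roughly like the sketch below. This is a trimmed-down illustration rather than mjpg-streamer's actual init_v4l2(), which adds capability queries and far more error handling; NB_BUFFER mirrors the handful of mmap'ed buffers the real code requests:

```c
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/videodev2.h>

#define NB_BUFFER 4

/* negotiate the pixel format, request mmap'ed kernel buffers,
 * map them into userspace, queue them and start streaming */
static int v4l2_init_sketch(const char *dev, int width, int height,
                            void *mem[NB_BUFFER])
{
    int fd = open(dev, O_RDWR);
    if (fd < 0)
        return -1;

    struct v4l2_format fmt;
    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width = width;
    fmt.fmt.pix.height = height;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
    fmt.fmt.pix.field = V4L2_FIELD_ANY;
    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)
        return -1;

    struct v4l2_requestbuffers req;
    memset(&req, 0, sizeof(req));
    req.count = NB_BUFFER;
    req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;
    if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0)
        return -1;

    /* map each driver buffer and hand it to the driver for capture */
    for (unsigned int i = 0; i < req.count; i++) {
        struct v4l2_buffer buf;
        memset(&buf, 0, sizeof(buf));
        buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.index = i;
        if (ioctl(fd, VIDIOC_QUERYBUF, &buf) < 0)
            return -1;
        mem[i] = mmap(NULL, buf.length, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, buf.m.offset);
        ioctl(fd, VIDIOC_QBUF, &buf);
    }

    enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    if (ioctl(fd, VIDIOC_STREAMON, &type) < 0)
        return -1;
    return fd;
}
```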
In the capture routine, the frame is copied into a different buffer depending on the pixel format:
switch (vd->formatIn) {
case V4L2_PIX_FMT_MJPEG:
if (vd->buf.bytesused <= HEADERFRAME1) { /* prevent crash on empty image */
fprintf(stderr, "Ignoring empty buffer ...\n");
return 0;
}
memcpy(vd->tmpbuffer, vd->mem[vd->buf.index], vd->buf.bytesused);
if (debug)
fprintf(stderr, "bytes in used %d \n", vd->buf.bytesused);
break;
case V4L2_PIX_FMT_YUYV:
if (vd->buf.bytesused > vd->framesizeIn)
memcpy (vd->framebuffer, vd->mem[vd->buf.index], (size_t) vd->framesizeIn);
else
memcpy (vd->framebuffer, vd->mem[vd->buf.index], (size_t) vd->buf.bytesused);
break;
default:
goto err;
break;
}
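That switch sits between a dequeue and a re-queue of a driver buffer. Below is a simplified sketch of the surrounding grab cycle; the switch shown above takes the place of the plain memcpy() here:

```c
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* one iteration of the typical V4L2 grab cycle: dequeue a filled
 * buffer, copy its payload out, then hand the buffer back */
static int grab_frame(int fd, void *mem[], unsigned char *dst, size_t *out_len)
{
    struct v4l2_buffer buf;
    memset(&buf, 0, sizeof(buf));
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;

    if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0)   /* blocks until a frame is ready */
        return -1;

    /* the switch (vd->formatIn) copy shown above happens here */
    memcpy(dst, mem[buf.index], buf.bytesused);
    *out_len = buf.bytesused;

    if (ioctl(fd, VIDIOC_QBUF, &buf) < 0)    /* recycle the buffer */
        return -1;
    return 0;
}
```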
Based on the capture format, it is then decided whether the frame still needs to be compressed to JPEG:
/*
* If capturing in YUV mode convert to JPEG now.
* This compression requires many CPU cycles, so try to avoid YUV format.
* Getting JPEGs straight from the webcam, is one of the major advantages of
* Linux-UVC compatible devices.
*/
if (videoIn->formatIn == V4L2_PIX_FMT_YUYV) {
DBG("compressing frame\n");
pglobal->size = compress_yuyv_to_jpeg(videoIn, pglobal->buf, videoIn->framesizeIn, gquality);
} else {
DBG("copying frame\n");
pglobal->size = memcpy_picture(pglobal->buf, videoIn->tmpbuffer, videoIn->buf.bytesused);
}
Here pglobal->buf is the output buffer, i.e., the data to be transmitted later.
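compress_yuyv_to_jpeg() is built on libjpeg: conceptually, it unpacks the packed YUYV 4:2:2 stream one scanline at a time into full YCbCr pixels and hands those to the compressor. Below is a rough sketch of that idea, assuming a libjpeg that provides jpeg_mem_dest() (libjpeg-turbo or libjpeg 8+); the function name and the in-memory destination are assumptions of this sketch, as the real code uses its own destination manager to write into a caller-supplied buffer:

```c
#include <stdio.h>
#include <stdlib.h>
#include <jpeglib.h>

/* compress one packed-YUYV (4:2:2) frame to JPEG in memory;
 * width is assumed even, since YUYV pairs pixels */
static unsigned long yuyv_to_jpeg(const unsigned char *yuyv,
                                  int width, int height, int quality,
                                  unsigned char **jpeg_out)
{
    struct jpeg_compress_struct cinfo;
    struct jpeg_error_mgr jerr;
    unsigned long jpeg_size = 0;

    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_compress(&cinfo);
    /* pass *jpeg_out = NULL and libjpeg allocates the output buffer */
    jpeg_mem_dest(&cinfo, jpeg_out, &jpeg_size);

    cinfo.image_width = width;
    cinfo.image_height = height;
    cinfo.input_components = 3;
    cinfo.in_color_space = JCS_YCbCr;   /* feed YCbCr directly, no RGB detour */
    jpeg_set_defaults(&cinfo);
    jpeg_set_quality(&cinfo, quality, TRUE);
    jpeg_start_compress(&cinfo, TRUE);

    unsigned char *line = malloc(width * 3);
    while (cinfo.next_scanline < cinfo.image_height) {
        const unsigned char *src = yuyv + cinfo.next_scanline * width * 2;
        /* unpack Y0 U Y1 V into two full YCbCr pixels */
        for (int x = 0; x < width; x += 2) {
            line[x * 3 + 0] = src[x * 2 + 0];   /* Y0 */
            line[x * 3 + 1] = src[x * 2 + 1];   /* U  */
            line[x * 3 + 2] = src[x * 2 + 3];   /* V  */
            line[x * 3 + 3] = src[x * 2 + 2];   /* Y1 */
            line[x * 3 + 4] = src[x * 2 + 1];   /* U  */
            line[x * 3 + 5] = src[x * 2 + 3];   /* V  */
        }
        JSAMPROW row = line;
        jpeg_write_scanlines(&cinfo, &row, 1);
    }
    free(line);

    jpeg_finish_compress(&cinfo);
    jpeg_destroy_compress(&cinfo);
    return jpeg_size;
}
```

Feeding JCS_YCbCr directly keeps the per-line conversion to a cheap byte shuffle; even so, this compression step is exactly the CPU burden the source comment above warns about, which is why grabbing MJPEG straight from the camera is preferable.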
##Data Transmission
Since the whole point is remote video transmission, the output to pick here is output_http.c, i.e., delivery over HTTP. The output plugin has the same shape as the input one, consisting mainly of the four functions init, stop, run and cmd. I'll focus on the transmission process here, mainly the run function, which creates the server thread:
/* create thread and pass context to thread function */
pthread_create(&(servers[id].threadID), NULL, server_thread, &(servers[id]));
pthread_detach(servers[id].threadID);
The thread routine server_thread() lives in httpd.c. It performs a standard TCP server initialization and calls listen() with a backlog of 10, which bounds the queue of pending, not-yet-accepted connections rather than the number of concurrent clients:
/* start listening on socket */
if ( listen(pcontext->sd, 10) != 0 ) {
fprintf(stderr, "listen failed\n");
exit(EXIT_FAILURE);
}
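The initialization leading up to that listen() call is the textbook socket/bind/listen sequence; a compact sketch, with SO_REUSEADDR set so the port can be rebound quickly after a restart (port 8080 is just an example default):

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

/* the usual TCP server bring-up that precedes the listen() above:
 * socket -> SO_REUSEADDR -> bind -> listen */
static int open_server_socket(int port)
{
    int sd = socket(PF_INET, SOCK_STREAM, 0);
    if (sd < 0)
        return -1;

    int on = 1;
    setsockopt(sd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);

    if (bind(sd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
        return -1;

    if (listen(sd, 10) != 0)   /* backlog of 10 pending connections */
        return -1;
    return sd;
}
```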
For every client that accept() returns, a dedicated thread is created to handle the request and send data:
/* create a child for every client that connects */
while ( !pglobal->stop ) {
//int *pfd = (int *)malloc(sizeof(int));
cfd *pcfd = malloc(sizeof(cfd));
if (pcfd == NULL) {
fprintf(stderr, "failed to allocate (a very small amount of) memory\n");
exit(EXIT_FAILURE);
}
DBG("waiting for clients to connect\n");
pcfd->fd = accept(pcontext->sd, (struct sockaddr *)&client_addr, &addr_len);
pcfd->pc = pcontext;
/* start new thread that will handle this TCP connected client */
DBG("create thread to handle client that just established a connection\n");
syslog(LOG_INFO, "serving client: %s:%d\n", inet_ntoa(client_addr.sin_addr), ntohs(client_addr.sin_port));
if( pthread_create(&client, NULL, &client_thread, pcfd) != 0 ) {
DBG("could not launch another client thread\n");
close(pcfd->fd);
free(pcfd);
continue;
}
pthread_detach(client);
In the client thread, the subsequent action is chosen based on the request the client sent:
/* What does the client want to receive? Read the request. */
memset(buffer, 0, sizeof(buffer));
if ( (cnt = _readline(lcfd.fd, &iobuf, buffer, sizeof(buffer)-1, 5)) == -1 ) {
close(lcfd.fd);
return NULL;
}
/* determine what to deliver */
if ( strstr(buffer, "GET /?action=snapshot") != NULL ) {
req.type = A_SNAPSHOT;
} else if ( strstr(buffer, "GET /?action=stream") != NULL ) {
req.type = A_STREAM;
}
...
/* now it's time to answer */
switch ( req.type ) {
case A_SNAPSHOT:
DBG("Request for snapshot\n");
send_snapshot(lcfd.fd);
break;
case A_STREAM:
DBG("Request for stream\n");
send_stream(lcfd.fd);
break;
case A_COMMAND:
if ( lcfd.pc->conf.nocommands ) {
send_error(lcfd.fd, 501, "this server is configured to not accept commands");
break;
}
command(lcfd.pc->id, lcfd.fd, req.parameter);
break;
case A_FILE:
if ( lcfd.pc->conf.www_folder == NULL )
send_error(lcfd.fd, 501, "no www-folder configured");
else
send_file(lcfd.pc->id, lcfd.fd, req.parameter);
break;
default:
DBG("unknown request\n");
}
In send_stream(), the JPEG stream we prepared earlier is sent out:
memcpy(frame, pglobal->buf, frame_size);
DBG("got frame (size: %d kB)\n", frame_size / 1024);
DBG("sending frame\n");
if( write(fd, frame, frame_size) < 0 ) break;
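What makes a browser render this as live video is the framing around that write(): send_stream() first sends a multipart/x-mixed-replace HTTP header, and then every JPEG frame is preceded by a boundary line plus its own Content-Type and Content-Length. A sketch of that framing (the boundary string "frame" is an arbitrary choice here; mjpg-streamer uses its own fixed boundary constant):

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* the essence of an MJPEG-over-HTTP stream: one HTTP response whose
 * body is an endless multipart sequence, each part being one JPEG */
static int send_mjpeg_headers(int fd)
{
    const char *resp =
        "HTTP/1.0 200 OK\r\n"
        "Content-Type: multipart/x-mixed-replace; boundary=frame\r\n"
        "\r\n";
    return write(fd, resp, strlen(resp)) < 0 ? -1 : 0;
}

static int send_one_frame(int fd, const unsigned char *jpeg, size_t len)
{
    char hdr[128];
    int n = snprintf(hdr, sizeof(hdr),
                     "--frame\r\n"
                     "Content-Type: image/jpeg\r\n"
                     "Content-Length: %zu\r\n\r\n", len);
    if (write(fd, hdr, n) < 0) return -1;
    if (write(fd, jpeg, len) < 0) return -1;   /* the write() shown above */
    return write(fd, "\r\n", 2) < 0 ? -1 : 0;
}
```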
I won't go into the other data paths in detail; anyone who has built embedded web interfaces knows the drill: the HTML is simply written out as-is, so the corresponding pages show up in the browser.