Processing YUV frames from a running GStreamer pipeline


Post by christian. »

Hello there,

First of all, this post is not a request for help; it is meant to help others who have the same task at hand. I just have a short question about memory handling at the end.

I have a C++ project in which I need to grab YUV 4:2:2 video frames from a running GStreamer pipeline and process them with ImageMagick. As usual, there is more than one way to tackle the issue.

At first I set up my GStreamer pipeline to produce a JPEG snapshot on the file system at a fixed interval (every nth of a second), using the "jpegenc" and "multifilesink" GStreamer elements. Whenever a frame was needed, the snapshot file was moved to another location on the file system so that it would not be overwritten by the next frame. This works fine, but I wanted finer control over the process and less overhead: with this setup, every snapshot interval produced a JPEG file, even though I only needed a handful of frames out of about a thousand.
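
For reference, the snapshot pipeline looked roughly like the following. Take it as a sketch from memory: the "videorate" element and the exact framerate caps are assumptions, and "multifilesink" gets a fixed filename here so each new frame simply replaces the previous file.

Code: Select all

autovideosrc ! ffmpegcolorspace ! videorate ! video/x-raw-yuv,framerate=10/1 ! jpegenc ! multifilesink location=/tmp/snapshot.jpg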

So I started to use the "appsink" element from GStreamer instead, to handle the frames directly in application memory without using the file system as intermediate storage.

Depending on the colorspace used by your GStreamer pipeline, you might have to convert it to YUV with the help of the "ffmpegcolorspace" GStreamer element, like so:

Code: Select all

autovideosrc ! ffmpegcolorspace ! video/x-raw-yuv,format=(fourcc)UYVY ! appsink

ImageMagick uses UYVY by default when loading/storing data in the YUV format; your video source might use another encoding. I am using Qt-GStreamer with Magick++, by the way:

Code: Select all

    // "buffer" is an instance of QGst::BufferPtr and taken from appsink

    // import image data into ImageMagick facilities
    Magick::Blob imageData(buffer->data(), buffer->size());

    // create image from raw YUV 4:2:2 data
    Magick::Image *image = new Magick::Image();
    image->size(Magick::Geometry(800, 600));
    image->colorSpace(Magick::YUVColorspace);
    image->magick("YUV");
    image->depth(8);
    // set the chroma subsampling; allocate the string with ImageMagick's
    // own AcquireString() so the library can release it with its matching
    // free routine, instead of mixing in a C++ new[] allocation
    image->imageInfo()->sampling_factor = MagickCore::AcquireString("4:2:2");

    image->read(imageData);
    // image now contains the frame from the GStreamer pipeline as an ImageMagick object

That's it. Maybe this helps someone else in a similar situation.
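
For completeness, here is roughly how "buffer" can be pulled from the pipeline with Qt-GStreamer. Take it as an untested sketch: the element name "sink" and the QGst::Utils::ApplicationSink helper reflect my setup and may need adjusting for yours.

Code: Select all

#include <QGst/Init>
#include <QGst/Parse>
#include <QGst/Pipeline>
#include <QGst/Utils/ApplicationSink>

int main(int argc, char *argv[])
{
    QGst::init(&argc, &argv);

    // build the pipeline; the appsink is named so it can be looked up below
    QGst::PipelinePtr pipeline = QGst::Parse::launch(
        "autovideosrc ! ffmpegcolorspace ! "
        "video/x-raw-yuv,format=(fourcc)UYVY,width=800,height=600 ! "
        "appsink name=sink").dynamicCast<QGst::Pipeline>();

    // attach the convenience wrapper to the appsink element
    QGst::Utils::ApplicationSink appsink;
    appsink.setElement(pipeline->getElementByName("sink"));

    pipeline->setState(QGst::StatePlaying);

    // block until a frame arrives, then process it as shown above
    QGst::BufferPtr buffer = appsink.pullBuffer();
    // ... create the Magick::Image from buffer->data() / buffer->size() ...

    pipeline->setState(QGst::StateNull);
    return 0;
}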

Of course, you can also make GStreamer deliver the video frame in RGB, though I did not test it:

Code: Select all

autovideosrc ! ffmpegcolorspace ! video/x-raw-rgb,depth=24,bpp=24 ! appsink

Then import it into ImageMagick the same way, using image->magick("RGB"). Suit yourself.
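
The Magick++ side then becomes a bit simpler, since no colorspace or sampling factor has to be set. Again an untested sketch, assuming the caps above deliver packed 24-bit RGB at the same 800x600 frame size:

Code: Select all

    // "buffer" again comes from appsink, this time carrying raw RGB data
    Magick::Blob imageData(buffer->data(), buffer->size());

    Magick::Image image;
    image.size(Magick::Geometry(800, 600));
    image.magick("RGB");   // packed 8-bit R, G, B samples
    image.depth(8);
    image.read(imageData);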

My only question is: Do I have to free the memory of the old sampling factor? I assume it is just some internal standard value reused over and over again.

PS: This post from francholi helped me along the way.

PPS: Post was edited after I realized how handy "ffmpegcolorspace" is.