gjerdav
Posts: 6
Joined: Mon Oct 07, 2019 11:42 pm

Picamera MMAL combine two images

Tue Aug 16, 2022 3:46 pm

I'm working on a project that displays one camera's feed on the screen twice, side by side. I have working Python code that does this as a preview on the HDMI output (using the PiCamera library via an MMAL splitter and two renderers), but now I'm trying to figure out whether it is possible to send this same "combined" image to an encoder for streaming. Is there a way to pass the rendered buffer back to an encoder?

The code I started from came from here:
https://raspberrypi.stackexchange.com/q ... 0126#60126

Code:

from picamera import mmalobj as mo, mmal
from signal import pause
from time import sleep

# setup MMAL objects
camera = mo.MMALCamera()
splitter_a = mo.MMALSplitter()
render_l = mo.MMALRenderer()
render_r = mo.MMALRenderer()

# configure camera outputs
camera.outputs[0].framesize = (960, 1080)
camera.outputs[0].framerate = 30
camera.outputs[0].commit()

# configure preview
p = render_l.inputs[0].params[mmal.MMAL_PARAMETER_DISPLAYREGION]
p.set = mmal.MMAL_DISPLAY_SET_FULLSCREEN | mmal.MMAL_DISPLAY_SET_DEST_RECT
p.fullscreen = False

p.dest_rect = mmal.MMAL_RECT_T(0, 0, 960, 1080)
render_l.inputs[0].params[mmal.MMAL_PARAMETER_DISPLAYREGION] = p

p.dest_rect = mmal.MMAL_RECT_T(960, 0, 960, 1080)
render_r.inputs[0].params[mmal.MMAL_PARAMETER_DISPLAYREGION] = p

# connect objects for preview
splitter_a.connect(camera.outputs[0])
render_l.connect(splitter_a.outputs[0])
render_r.connect(splitter_a.outputs[1])
splitter_a.enable()
render_l.enable()
render_r.enable()

# keep the script running so the preview stays up
pause()

I think I am close to getting the additional streaming code working (not shown above) -- I just don't know how to do the image combination.
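
A rough, untested sketch of what that encoder wiring could look like with mmalobj is below; note it only encodes the plain camera feed from a spare splitter output, so it does not solve the combination problem by itself. The bitrate and output filename are placeholders.

Code:

from picamera import mmalobj as mo, mmal

# assumes camera and splitter_a from the code above
encoder = mo.MMALVideoEncoder()
encoder.outputs[0].format = mmal.MMAL_ENCODING_H264
encoder.outputs[0].framesize = (960, 1080)
encoder.outputs[0].framerate = 30
encoder.outputs[0].bitrate = 4000000      # placeholder; tune for the network
encoder.outputs[0].commit()

stream = open('output.h264', 'wb')        # placeholder output file

def write_frame(port, buf):
    # write out each encoded buffer; returning True would tell mmalobj to stop
    stream.write(buf.data)
    return False

# the splitter has four outputs; 0 and 1 already feed the two renderers
encoder.connect(splitter_a.outputs[2])
encoder.enable()
encoder.outputs[0].enable(write_frame)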

Disclaimer: I'm still using the "legacy" camera stack via Raspbian Buster Lite (sorry for being outdated, but my reasons mostly come down to the fact that I still use omxplayer a LOT).

I'm currently experimenting with this project on a Pi 2.

Is what I'm trying to do even possible entirely on the GPU with an MMAL pipeline? I suspect I'm looking for something similar to the process Picamera goes through to create a stereoscopic side-by-side image from two cameras, but that's just a hunch.

6by9
Raspberry Pi Engineer & Forum Moderator
Posts: 13261
Joined: Wed Dec 04, 2013 11:27 am
Location: ZZ9 Plural Z Alpha, aka just outside Cambridge.

Re: Picamera MMAL combine two images

Tue Aug 16, 2022 6:13 pm

Stereoscopic camera support is done within the legacy camera stack, not externally with MMAL.

There is an MMAL vc.ril.hvs component which allows you to use the "Hardware Video Scaler" (the block that also does the composition for the displays). It supports multiple (IIRC 5) input ports, and you can configure where in the output image each input is composed. It was initially used for adding subtitles to videos in VLC, before presenting the composed frame to X's framebuffer.

In theory Picamera's mmalobj should be able to use it, but it's not a component that has been tested or is part of the normal supported paths. It should be possible to send its output to the H.264 video encoder.
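
As a purely speculative sketch (untested), wrapping it in mmalobj might follow the same pattern mmalobj uses for its built-in component classes. The input port count, the OPAQUE sub-formats, and the assumption that each HVS input accepts the same MMAL_PARAMETER_DISPLAYREGION dest_rect placement as a renderer input are all guesses:

Code:

from picamera import mmalobj as mo, mmal

class MMALHVS(mo.MMALComponent):
    """Speculative wrapper for the vc.ril.hvs composition component."""
    component_type = b'vc.ril.hvs'
    # guesses: match the tuple lengths to the port counts the component
    # actually reports, and the sub-formats to whatever it accepts
    opaque_input_subformats = ('OPQV-single',) * 5
    opaque_output_subformats = ('OPQV-single',)

camera = mo.MMALCamera()
splitter = mo.MMALSplitter()
hvs = MMALHVS()

camera.outputs[0].framesize = (960, 1080)
camera.outputs[0].framerate = 30
camera.outputs[0].commit()

# composed frame is twice the input width
hvs.outputs[0].framesize = (1920, 1080)
hvs.outputs[0].framerate = 30
hvs.outputs[0].commit()

# assumption: each HVS input takes a DISPLAYREGION dest_rect saying where
# it lands in the composed frame, like a renderer input does
for i, x in enumerate((0, 960)):
    p = hvs.inputs[i].params[mmal.MMAL_PARAMETER_DISPLAYREGION]
    p.set = mmal.MMAL_DISPLAY_SET_FULLSCREEN | mmal.MMAL_DISPLAY_SET_DEST_RECT
    p.fullscreen = False
    p.dest_rect = mmal.MMAL_RECT_T(x, 0, 960, 1080)
    hvs.inputs[i].params[mmal.MMAL_PARAMETER_DISPLAYREGION] = p

# camera -> splitter -> two HVS inputs; the composed output could then be
# connected to an MMALVideoEncoder input (or a renderer, for checking)
splitter.connect(camera.outputs[0])
hvs.inputs[0].connect(splitter.outputs[0])
hvs.inputs[1].connect(splitter.outputs[1])
splitter.enable()
hvs.enable()

The camera's single output is still fanned out with a splitter exactly as in the preview code; only the renderers are swapped for the HVS inputs.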
Software Engineer at Raspberry Pi Ltd. Views expressed are still personal views.
I'm not interested in doing contracts for bespoke functionality - please don't ask.

gjerdav
Posts: 6
Joined: Mon Oct 07, 2019 11:42 pm

Re: Picamera MMAL combine two images

Mon Aug 29, 2022 1:48 am

Thanks 6by9, that helps a lot. I didn't even think about there being another layer besides MMAL. I have a hunch this might be reaching beyond my programming skill, but I'll definitely keep it in mind.

For the time being, I've actually found that I can run two instances of omxplayer (the intended viewer, running on another Pi) and place them side by side on the screen. I realize this leaves the potential for sync issues, but over a wired local network it works surprisingly well.
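
For anyone copying this workaround, omxplayer's --win option sets the window rectangle as "x1 y1 x2 y2" in screen pixels. A minimal sketch launching both players from Python on a 1920x1080 screen (the stream URL is just a placeholder, and --live is optional):

Code:

import subprocess

SRC = 'tcp://192.168.1.10:5000'   # placeholder for the actual stream source

left = subprocess.Popen(
    ['omxplayer', '--live', '--win', '0 0 960 1080', SRC])
right = subprocess.Popen(
    ['omxplayer', '--live', '--win', '960 0 1920 1080', SRC])

left.wait()
right.wait()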

Thanks again!
