The code I started from came from here:
https://raspberrypi.stackexchange.com/q ... 0126#60126
I think I'm close to getting the additional streaming code (not shown here) working -- I just don't know how to do the image combination.
Code:

from picamera import mmalobj as mo, mmal
from signal import pause

# set up MMAL components
camera = mo.MMALCamera()
splitter_a = mo.MMALSplitter()
render_l = mo.MMALRenderer()
render_r = mo.MMALRenderer()

# configure camera output (ports are indexed; port 0 is the preview port)
camera.outputs[0].framesize = (960, 1080)
camera.outputs[0].framerate = 30
camera.outputs[0].commit()

# configure preview: left renderer fills the left half of the screen,
# right renderer the right half
p = render_l.inputs[0].params[mmal.MMAL_PARAMETER_DISPLAYREGION]
p.set = mmal.MMAL_DISPLAY_SET_FULLSCREEN | mmal.MMAL_DISPLAY_SET_DEST_RECT
p.fullscreen = False
p.dest_rect = mmal.MMAL_RECT_T(0, 0, 960, 1080)
render_l.inputs[0].params[mmal.MMAL_PARAMETER_DISPLAYREGION] = p
p.dest_rect = mmal.MMAL_RECT_T(960, 0, 960, 1080)
render_r.inputs[0].params[mmal.MMAL_PARAMETER_DISPLAYREGION] = p

# connect objects for preview
splitter_a.connect(camera.outputs[0])
render_l.connect(splitter_a.outputs[0])
render_r.connect(splitter_a.outputs[1])
splitter_a.enable()
render_l.enable()
render_r.enable()

pause()  # keep the pipeline running until interrupted
Disclaimer: I'm still using the "legacy" camera stack via Raspbian Buster Lite (sorry for being outdated, but my reasons revolve around the fact that I still use omxplayer a LOT).
I'm currently experimenting with this project on a Pi 2.
Is what I'm trying to do even possible to do entirely within the hardware GPU with an MMAL pipeline? I suspect I'm looking for something similar to the process Picamera goes through to create a stereoscopic side-by-side image from two cameras, but that's just a hunch.
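For clarity, the "combination" I'm after is just placing two 960x1080 frames next to each other to form one 1920x1080 frame. A CPU-side numpy sketch of that (NOT the GPU/MMAL path I'm hoping exists -- just an illustration of the desired result, with blank stand-in frames) would be:

```python
import numpy as np

# Stand-in frames: in reality these would come from the two camera streams.
left = np.zeros((1080, 960, 3), dtype=np.uint8)       # left-eye frame (black)
right = np.full((1080, 960, 3), 255, dtype=np.uint8)  # right-eye frame (white)

# Side-by-side combination: stack horizontally into one 1080x1920 frame.
combined = np.hstack((left, right))
print(combined.shape)  # (1080, 1920, 3)
```

Doing this per frame on the CPU would be far too slow on a Pi 2 at 30 fps, which is why I'm asking about a hardware pipeline.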