
On GitHub, you'll find some new options and capabilities in the mmal-test branch here: https://github.com/dozencrows/motion/tree/mmal-test. I've only distributed these as source for now, pending further testing - so if you try this out, please post here to say how you get on.
So here's the detail on what's new...
Framerate throttle disabled
There was some code in motion that deliberately throttled the frame rate to about 3 fps. In a fit of optimism, I've disabled this code. On its own, this doesn't really help...
Secondary Image Buffer
You can now specify a secondary image buffer with a scaled-up resolution from the primary image. This means you can perform motion detection on a low-res primary image (thus reducing the CPU load) but have a higher-resolution, better quality output from the secondary image.
For example, to set this up for the MJPEG stream, do the following in the config file (a consolidated snippet follows the list):
1. Set width and height to low values - say 256 and 144 respectively.
2. Set threshold to a low value - say 350.
3. Set mmalcam_secondary_buffer_upscale to 4.
4. Set stream_port to a non-zero port number to turn on MJPEG streaming.
5. Set stream_secondary to "on".
6. Set framerate to 15.
7. Set stream_maxrate to 15.
8. Ensure other output options are all "off" - output_pictures, ffmpeg_output_movies, use_extpipe.
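Putting those steps together, the relevant part of the config file would look something like this (the stream_port value here is just an example - use whatever port suits you):

    width 256
    height 144
    threshold 350
    mmalcam_secondary_buffer_upscale 4
    stream_port 8081
    stream_secondary on
    framerate 15
    stream_maxrate 15
    output_pictures off
    ffmpeg_output_movies off
    use_extpipe off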
Launch motion with this configuration, then point a web browser at your Pi on the port you specified (not Chrome, as it doesn't handle MJPEG by default). You should see a stream of images from your camera via motion that are actually 1024 x 576 resolution! You might also see a small frame rate improvement...
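If you're running motion from a terminal rather than as a daemon, that might look something like this (the config path and port are just examples):

    motion -n -c /home/pi/motion-mmalcam.conf

then browse to http://<your-pi-address>:8081/ to view the stream.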
Internally, the MMAL code will be capturing images at 1024x576. Via the MMAL API, it will generate a second copy of the image at 1/4 scale (256 x 144) and supply that as the main image to motion's image processing. If motion is detected, the higher resolution image will be used for encoding into the MJPEG stream (and also for adding any text or motion locator markers).
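To make the numbers concrete, with width 256, height 144 and mmalcam_secondary_buffer_upscale 4:

    secondary width  = 256 * 4 = 1024
    secondary height = 144 * 4 = 576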
The benefit of this is to reduce the CPU load of the motion detection code, which has been a bottleneck at decent resolutions on the Pi. It doesn't help with the CPU load for JPEG or AVI encoding, both of which are still significant bottlenecks.
If you want to use the secondary image for picture or movie output, all you need to do is turn on the setting for the appropriate output and also turn on the corresponding "secondary" setting (see the example after this list):
- output_secondary_pictures for pictures.
- ffmpeg_output_secondary_movies for movies.
- extpipe_secondary for external pipe.
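For example, to save the higher-resolution secondary image as pictures when motion is detected, you'd set something like:

    output_pictures on
    output_secondary_pictures on

and likewise pair ffmpeg_output_movies with ffmpeg_output_secondary_movies, or use_extpipe with extpipe_secondary.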
MMAL JPEG Encoding
To try to reduce the output encoding bottleneck, there is a further new option - mmalcam_secondary_buffer_jpeg. If this is set to a value between 1 and 100, the MMAL API is used to encode the secondary buffer into JPEG format, with the given value as the quality (low to high). This "pre-encoded" secondary buffer is then used as the output for writing pictures, MJPEG stream output and external pipe.
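For example, to have MMAL encode the secondary buffer as a fairly high-quality JPEG (85 here is just an illustrative value):

    mmalcam_secondary_buffer_jpeg 85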
When combined with the other two modifications, this can significantly increase the achievable framerate. I have been able to comfortably hit 15 fps with 1024 x 576 output on MJPEG streaming, and achieve pretty close to that when also saving to JPEG files.
This has the benefit of encoding to JPEG much faster than motion's internal encoding. It also encodes only once; previously, picture output and MJPEG stream output each encoded the image separately.
There are some limitations:
- Text and motion locator markers can't be overlaid.
- Movie output can't use this data.
- Currently there is no EXIF information.
- Writing to file can cause bottlenecks (depending on storage device and file size).
- JPEG files will be larger for a given quality number than the original software encoding.
The file configs/motion-mmalcam.conf on GitHub is set up with resolution 256x144, secondary upscale 4x and secondary format as JPEG, with output to the MJPEG stream only.
Watch out for...
I've not had time to test MMAL stills capture mode. I suspect that the camera's timing for managing exposure will still be the bottleneck here.
I've not tested masking or area detection. Note that as these are part of the motion detection processing, they should use the resolution given in the width and height settings - any secondary buffer settings should not affect them.
Enjoy, and (belated) Happy Easter!