Just want to share my experiment with the Pi camera module.
As you know, the camera module does not come with a V4L driver (and even if it did, I wouldn't expect all camera features to be usable through it).
It took me a week to understand how the camera module works, and I want to share what I learned.
- We can use the MMAL API to capture video into a memory buffer, convert it to an IplImage and process it with OpenCV. If you want to show live video on screen, you can tunnel the camera directly to the video renderer and use OpenVG, dispmanx, EGL or OpenGL to overlay the information you need over the screen. This way you get a high video frame rate and resolution on screen (keeping OpenCV behind). The YouTube video below shows the Pi camera module at 640x360 full screen at 30fps (while OpenCV face detection runs at 4fps). If you resize the image before sending it to OpenCV (some algorithms don't need hi-res, e.g. face detection), it might run at 1280x720 30fps with 7fps OpenCV face detection.
http://www.youtube.com/watch?v=b2kGPWxJybo
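The resize-before-detection idea above can be sketched in a few lines of numpy (a conceptual illustration only; the actual demo does this in C, and the strided 2x downsample here is an assumption, not the repo's method):

```python
import numpy as np

# A placeholder 1280x720 grayscale frame; halving each dimension cuts the
# pixel count (and roughly the detection cost) by a factor of 4.
frame = np.zeros((720, 1280), np.uint8)
small = frame[::2, ::2]  # crude 2x downsample by striding

print(frame.size // small.size)  # 4
```

Feeding `small` instead of `frame` to the detector is what turns 4fps detection into roughly 7fps in the numbers above.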
- The ARM core is too slow to do complex image processing at high resolution and high frame rate, but partial or simple processing is fine.
The picture below shows that you can do a simple software image filter at 1280x720 23fps (use MMAL to capture video into a memory buffer, then copy it to the video render buffer with simple image processing).
https://twitter.com/Tasanakorn/status/3 ... 85/photo/1
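As a conceptual example of such a "simple software filter", here is a numpy sketch of a negative-image filter on the luma plane (the demo does this per-pixel in C on the MMAL buffer; this is just the same idea in Python):

```python
import numpy as np

# Invert brightness of a (tiny stand-in for a) luma plane; one subtraction
# per pixel is cheap enough to keep up with 23fps at 1280x720.
y_plane = np.arange(0, 256, dtype=np.uint8).reshape(16, 16)
negative = 255 - y_plane

print(int(negative[0, 0]))  # 255 (black pixel became white)
```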
- The YouTube video below shows that you can embed a small image/text into the video before encoding and recording to disk (use MMAL to capture to a memory buffer, copy it to the encoder buffer, mix in the text image before encoding, and save). In my experiments I can record 1280x720 30fps mixed with a 600x100 pixel overlay; not too big, but enough to put in some information, e.g. date/time, GPS position or sensor readings. (This video uses the output file uploaded to YouTube without reprocessing.)
http://www.youtube.com/watch?v=ZKEtAmnw0ko
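Mixing a 600x100 overlay into the frame before encoding amounts to writing the overlay's pixels into the buffer. A minimal numpy sketch of the idea (the real demo works on the I420 encoder buffer in C; the plain overwrite here stands in for whatever blending it actually does):

```python
import numpy as np

# Luma plane of a 1280x720 frame and a 600x100 overlay strip (e.g. rendered
# date/time text); paste the overlay into the top-left corner before encoding.
y_plane = np.zeros((720, 1280), np.uint8)
overlay = np.full((100, 600), 255, np.uint8)
y_plane[0:100, 0:600] = overlay

print(int(y_plane[50, 300]), int(y_plane[200, 700]))  # 255 0
```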
My current (ugly) code is hosted here (make sure you're on the develop branch):
https://github.com/tasanakorn/rpi-mmal- ... ee/develop
Sorry for my English
Thank you.
Re: Experiment with Camera Module : MMAL, OpenCV, Overlay
I cannot believe no one has replied to this yet. THANK YOU SO MUCH! This is going to save me SO MUCH TIME. I had just started to dig into the MMAL API and it looks like such a monster. I am using this to build a driver for the Robot Operating System (ROS) and this will definitely help me a huge amount. THANK YOU AGAIN!
Re: Experiment with Camera Module : MMAL, OpenCV, Overlay
Nice job - not had time to look at the code, but I presume you are doing your own buffer handling between the camera and the encode components and processing the image there? I did wonder what the max frame size would be to get decent performance. I'm surprised you get up to 720p23 - that's not too bad at all.
Principal Software Engineer at Raspberry Pi Ltd.
Working in the Applications Team.
Re: Experiment with Camera Module : MMAL, OpenCV, Overlay
Yes, I process the encoder input buffer in I420 format (12 bits per pixel, 1,382,400 bytes per frame at 1280x720).
It would be better if we could process on the VideoCore.
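The 1,382,400-byte figure follows directly from the 4:2:0 layout; a quick sanity check:

```python
width, height = 1280, 720
y_bytes = width * height                     # full-resolution luma plane
uv_bytes = 2 * (width // 2) * (height // 2)  # two quarter-size chroma planes
frame_bytes = y_bytes + uv_bytes

print(frame_bytes)                         # 1382400
print(frame_bytes * 8 / (width * height))  # 12.0 bits per pixel
```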
Re: Experiment with Camera Module : MMAL, OpenCV, Overlay
tasanakorn wrote: Yes, I process the encoder input buffer in I420 format (12 bits per pixel, 1,382,400 bytes per frame). It would be better if we could process on the VideoCore.
Not possible to do anything on the VC itself without access to source, compilers and knowledge.
I have on my list a job to write a SW stage to run on VC to add text strings to the camera output which will be useful. Going to be a while though.
Re: Experiment with Camera Module : MMAL, OpenCV, Overlay
With OMX video (as seen in the hello_videocube demo) there is a way to pass a texture (OMX_UseEGLImage) for the render component to output to.
I am wondering if MMAL has an implementation of this or if everything has to be re-written using OMX. I think it would bypass the memcpy that you are doing.
Re: Experiment with Camera Module : MMAL, OpenCV, Overlay
Well, if you use Herman Hermitage's compiler, then it is possible to do processing on the VPU.
Gordon Hollingworth PhD
Raspberry Pi - Chief Technology Officer - Software
Re: Experiment with Camera Module : MMAL, OpenCV, Overlay
THX for the code, works great for me.
Re: Experiment with Camera Module : MMAL, OpenCV, Overlay
Great Work!
I'm hacking away at your code at the moment.
Thanks.
Re: Experiment with Camera Module : MMAL, OpenCV, Overlay
Does anyone know how to get a colored OpenCV image in tasanakorn's code? I tried to add a cvShowImage("out", userdata.image) but it is just gray.
Thanks!
Re: Experiment with Camera Module : MMAL, OpenCV, Overlay
I can provide some hints on how to get a colored image from tasanakorn's code. I haven't done this in his code, but rather by saving the buffers to disk and post-processing them with OpenCV in Python; the principle should be the same though:
The buffer uses the YUV color scheme with 4:2:0 chroma subsampling. The first width*height bytes of the buffer you get in the callback contain the brightness information (Y, luma), which is what you see as the gray image. The rest of the buffer contains the two color channels (U, V); however, color information is only saved for every 4th pixel, so we need to interpolate for the rest. The resulting 3-channel image can then be converted to RGB using cvtColor. Someone has done this in C before; you need to add the color conversion though:
http://tech.dir.groups.yahoo.com/group/ ... sage/59027
And below is my Python code.
Hope this helps,
Stefan
import numpy as np
import cv2

def _createImage(self, imgBuf, width, height):
    planeSize = width * height
    img = np.zeros((height, width, 3), np.uint8)
    # Luma (Y): the first width*height bytes, full resolution
    y = np.frombuffer(imgBuf[:planeSize], dtype=np.uint8)
    img[:, :, 0] = y.reshape(height, width)
    # Chroma (U, V) is subsampled 4:2:0, one sample per 2x2 block, so interpolate up
    u = np.frombuffer(imgBuf[planeSize:planeSize + planeSize // 4], dtype=np.uint8)
    img[:, :, 1] = cv2.resize(u.reshape(height // 2, width // 2), (width, height),
                              interpolation=cv2.INTER_LINEAR)
    v = np.frombuffer(imgBuf[planeSize + planeSize // 4:planeSize + planeSize // 2], dtype=np.uint8)
    img[:, :, 2] = cv2.resize(v.reshape(height // 2, width // 2), (width, height),
                              interpolation=cv2.INTER_LINEAR)
    # The channels are Y, U, V but we convert as YCrCb (see the NB about the
    # camera docs); the result still looks correct
    return cv2.cvtColor(img, cv2.COLOR_YCrCb2RGB)
Re: Experiment with Camera Module : MMAL, OpenCV, Overlay
I can provide some hints on how to get a colored image from tasanakorn's code. Since I save the frames to disk and do the OpenCV work in Python, I cannot provide C code, but the principle should be the same:
The buffer you get in the video_buffer_callback function uses the YUV color scheme with 4:2:0 chroma subsampling. The first width*height bytes of the buffer contain the brightness information (Y, luma), which is the gray image you already see. The remaining part of the buffer contains two subsequent blocks of color channels (U and V, chroma), but since they are subsampled, color info is only available for every 4th pixel. The color info for the other pixels has to be obtained by interpolation. After this we can simply convert to RGB color space. I have included my Python code down below. This is inspired by the following post I found, which contains some C code but lacks the color conversion:
http://tech.dir.groups.yahoo.com/group/ ... sage/59027
NB: According to the documentation (http://home.nouwen.name/RaspberryPi/doc ... amera.html) the camera uses YUV color space, whereas I use YCrCb in OpenCV; this is somewhat inconsistent, but the result looks OK.
Hope this gets you started
import numpy as np
import cv2

def _createImage(self, imgBuf, width, height):
    planeSize = width * height
    img = np.zeros((height, width, 3), np.uint8)
    # Luma (Y): the first width*height bytes, full resolution
    y = np.frombuffer(imgBuf[:planeSize], dtype=np.uint8)
    img[:, :, 0] = y.reshape(height, width)
    # Chroma (U, V) is subsampled 4:2:0, one sample per 2x2 block, so interpolate up
    u = np.frombuffer(imgBuf[planeSize:planeSize + planeSize // 4], dtype=np.uint8)
    img[:, :, 1] = cv2.resize(u.reshape(height // 2, width // 2), (width, height),
                              interpolation=cv2.INTER_LINEAR)
    v = np.frombuffer(imgBuf[planeSize + planeSize // 4:planeSize + planeSize // 2], dtype=np.uint8)
    img[:, :, 2] = cv2.resize(v.reshape(height // 2, width // 2), (width, height),
                              interpolation=cv2.INTER_LINEAR)
    # The channels are Y, U, V but we convert as YCrCb (see the NB above);
    # the result still looks correct
    return cv2.cvtColor(img, cv2.COLOR_YCrCb2RGB)
Re: Experiment with Camera Module : MMAL, OpenCV, Overlay
So I have been doing some augmented reality on the Pi with OpenCV and ArUco.
Playing with the resolution gets me about 20fps marker detection.
Will post some code really soon.
Re: Experiment with Camera Module : MMAL, OpenCV, Overlay
Hello tasanakorn,
Just a question: how do you loop your image acquisition? Do you run your software in a loop, or do you have a loop inside your code?
Re: Experiment with Camera Module : MMAL, OpenCV, Overlay
Hi,
nicolas_darkn wrote: Just a question: how do you loop your image acquisition? Do you run your software in a loop, or do you have a loop inside your code?
I'm not quite sure what you mean.
For the opencv demo, video_buffer_callback is called by MMAL every time a frame arrives. This function copies the video buffer and notifies the main loop via a semaphore if it isn't busy, then lets the main loop do the OpenCV work.
To avoid frame drops anywhere in the MMAL pipeline, make sure video_buffer_callback finishes before the next frame arrives.
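The callback/semaphore handshake described above can be sketched in Python (a stand-in for the C code: the names and the bytes "frame" are placeholders, and the real callback also returns the MMAL buffer to the port):

```python
import threading

frame_ready = threading.Semaphore(0)
shared = {"frame": None}

def video_buffer_callback(buf):
    # Must finish before the next frame arrives: just copy and signal.
    shared["frame"] = bytes(buf)
    frame_ready.release()

def main_loop_step():
    frame_ready.acquire()        # block until the callback posts a frame
    return len(shared["frame"])  # placeholder for the OpenCV work

video_buffer_callback(b"\x00" * 16)
print(main_loop_step())  # 16
```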
Re: Experiment with Camera Module : MMAL, OpenCV, Overlay
Is there an MMAL way to send only the 'Y' data to the buffer?
Assuming 30 MBit/s as the max data rate, this might provide a 50% boost...
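The 50% figure is consistent with the I420 layout (quick arithmetic, not a measurement):

```python
width, height = 1280, 720
i420_bytes = width * height * 3 // 2  # Y plane plus subsampled U and V
y_only_bytes = width * height         # luma alone

print(i420_bytes / y_only_bytes)  # 1.5 -> the same bandwidth carries 50% more Y-only frames
```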
Re: Experiment with Camera Module : MMAL, OpenCV, Overlay
It's already done on the GPU before the callback, and it achieves 30fps.
But we have to send the buffer back and exit the function as fast as possible to avoid frame drops.
Because I didn't know how to use the VC to process data, all image processing runs on the ARM side.
We can speed up the data moving by using DMA, but the bottleneck is the image processing, which depends on ARM performance.
IMHO, the VideoCore is able to implement face detection, face recognition, video split and video mix (some are declared in its header files); I hope to see them on the Pi.
Re: Experiment with Camera Module : MMAL, OpenCV, Overlay
This is a really exciting development and I really do hope tasanakorn can help resolve this issue with video_record on github:
https://github.com/tasanakorn/rpi-mmal-demo/issues/3
The excellent example on YouTube has no motion, which may in part explain the 23fps; my experience with motion is ~0.5fps. However, as I say, perhaps I've not set things up quite right yet....
Did anyone else get video_record working to their satisfaction? Can they provide any advice?
Best,
Jonathan
Re: Experiment with Camera Module : MMAL, OpenCV, Overlay
Did you redirect standard output to a file? My code just prints the data to standard output. You need to redirect the output to a file or pipe it to another program; if not, it will print a lot of data to the console and slow down the CPU.
BTW, I will recheck my code, add more documentation and record a demonstration video soon.
Thank you for your interest, and sorry for the inconvenience.
Re: Experiment with Camera Module : MMAL, OpenCV, Overlay
Thanks for your prompt response.
AFAICT it 'works for me' using:
mmal_video_record > test2.h264
Looking forward to your further comments and patches. How do I set the length of time to record?
Anyway, now I have to go away and understand this a bit better....
Thanks!
Re: Experiment with Camera Module : MMAL, OpenCV, Overlay
Hi All
When using the original video_record.c from tasanakorn, the CPU load went up to 85%.
To reduce CPU load while recording video I slowed down the overlay update rate in main() with usleep(1000000). The problem then was that if the overlay was redrawn while it was being copied into the video stream, the text flickered. Sorry about my coding, but C is not my first language.
So I tried the following:
1. Added a 'double buffer' for the overlay drawing; not knowing much about cairo, I added a second cairo context.
2. camera_video_buffer_callback: the last updated overlay_buffer is copied into the video stream.
3. main(): the now-inactive buffer is redrawn, then marked active.
Since I get GPS data every 1 second, I'm happy updating the overlay_buffer only when new GPS data is available.
CPU load is less than 5% with video size 640x480:
3020 pi 20 0 67816 5076 2140 S 4.2 1.3 0:01.16 mmal_video_reco
3028 pi 20 0 4680 1456 1028 R 1.3 0.4 0:00.21 top
I'm new to git; not knowing how to publish the changes to the original repository, I forked tasanakorn's repository to https://github.com/george-ch/rpi-mmal-demo
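geo's double-buffer scheme can be sketched like this (a pure-Python illustration of the swap; the real code uses two cairo contexts in C, and these names are placeholders):

```python
import numpy as np

# Two overlay buffers: the video callback always copies from the one marked
# active, while the redraw happens off-screen in the inactive one.
buffers = [np.zeros((100, 600), np.uint8) for _ in range(2)]
active = 0

def redraw_overlay(value):
    global active
    inactive = 1 - active
    buffers[inactive][:] = value  # draw while no reader can see it
    active = inactive             # publish the finished buffer

def copy_to_videostream():
    return buffers[active].copy()  # always a fully drawn overlay: no flicker

redraw_overlay(128)
print(int(copy_to_videostream()[0, 0]))  # 128
```

Because the swap is a single index assignment, the callback never observes a half-drawn overlay, which is exactly what removes the flicker.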
Re: Experiment with Camera Module : MMAL, OpenCV, Overlay
Since it was a straight fork, asking him to include your changes is easy! Just go to "Pull Requests" on the right and click "Create New Pull Request." Compare the master branch of his repo with the develop branch of yours and create the request with your justification. He'll get a notification and choose whether or not to include it based on your justification.
Re: Experiment with Camera Module : MMAL, OpenCV, Overlay
Good job, geo.
I'm busy for a while; I will be back and implement your technique soon.
geo wrote: Hi All
When using the original video_record.c from tasanakorn the CPU load went up to 85%. [...]
I'm new to git; not knowing how to publish the changes to the original repository, I forked tasanakorn's repository to https://github.com/george-ch/rpi-mmal-demo
- LetHopeItsSnowing
Re: Experiment with Camera Module : MMAL, OpenCV, Overlay
Compiling!
Guys,
I'm having some problems getting the rpi-mmal-demo program to compile. So, cards on the table: I'm rubbish with C, cmake, make and opencv, but I'm a keen learner.
So this is what I have done so far:
- forked the program
- cloned the develop branch
- cloned raspberrypi/userland to /home/pi/src/raspberrypi/userland
- installed cmake
- mkdir build
- cd build
- cmake ..
- make
And I get this error:
[ 25%] Built target mmal_buffer_demo
Linking C executable mmal_opencv_demo
/usr/bin/ld: cannot find -lopencv_gpu
/usr/bin/ld: cannot find -lopencv_contrib
/usr/bin/ld: cannot find -lopencv_legacy
/usr/bin/ld: cannot find -lopencv_objdetect
/usr/bin/ld: cannot find -lopencv_calib3d
/usr/bin/ld: cannot find -lopencv_features2d
/usr/bin/ld: cannot find -lopencv_video
/usr/bin/ld: cannot find -lopencv_highgui
/usr/bin/ld: cannot find -lopencv_ml
/usr/bin/ld: cannot find -lopencv_imgproc
/usr/bin/ld: cannot find -lopencv_flann
/usr/bin/ld: cannot find -lopencv_core
collect2: ld returned 1 exit status
make[2]: *** [mmal_opencv_demo] Error 1
make[1]: *** [CMakeFiles/mmal_opencv_demo.dir/all] Error 2
make: *** [all] Error 2
I also downloaded and compiled OpenCV using the following instructions: http://mitchtech.net/raspberry-pi-opencv/ - I don't know if this was needed or not.
So before I start hacking and slashing without the foggiest idea of what I'm doing, has anyone got any advice?
Mart
"am I getting slower, or is stuff more complicated; either way I now have to write it down - stuffaboutcode.com"
- LetHopeItsSnowing
Re: Experiment with Camera Module : MMAL, OpenCV, Overlay
Managed to get it compiled... the OpenCV compile had crashed out halfway through.