HermannSW

raspiraw raw bayer data: how to use in callbacks for feature extraction and robot control

Sat Jul 29, 2017 8:50 pm

In thread RAW SENSOR ACCESS / CSI-2 RECEIVER PERIPHERAL, 6by9 provides the beautiful "raspiraw" command, which dumps raw Bayer data received from a CSI-2 connected camera to SD card. The raw data has to be postprocessed, e.g. with 6by9's hacked version of "dcraw":
https://github.com/6by9/RPiTest/tree/master/dcraw

I played with raspiraw and dcraw and wanted to get a better understanding of the raw Bayer data. The Bayer pattern of the OV5647 chip is described here; it is BG/GR:
http://cdn.sparkfun.com/datasheets/Dev/ ... df#page=25

I took a 640x480 frame dumped by raspiraw @60fps and compared four conversions of it:
Image

Here I scaled each pixel by a factor of two in x and y direction.
For the top left I used actual blue/green/red pixels where the OV5647 B/G/R pixels are.
Top right is the result of postprocessing with the hacked dcraw.
Bottom left is generated with the program "rawvga" below, which discards every 5th byte (containing the two least significant bits of the 10-bit representation) and uses some luminosity mapping for blue, green and red.
Bottom right is generated similarly, just ignoring the color and using the high 8 bits as grey values:

Code: Select all

$ gcc rawvga.c -Wall -pedantic -o rawvga
$ ./rawvga -c o7b.raw > outc.ppm
$ ./rawvga -g o7b.raw > outg.ppm
$ ./rawvga -l o7b.raw > outl.ppm
$ 
This is the whole 640x480 top left image (-c); the pixel intensity from the Bayer data is used for blue/green/red pixels. Click to see it in original resolution:
Image

This is the top right full image, dcraw converted:
Image

This is the bottom left full image (-l):
Image

And this is the bottom right full image (-g):
Image

The best image generated with rawvga (which can only handle 640x480 frames) is the bottom right one (-g). The reason for writing rawvga was to gain understanding, not to replace dcraw.
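For clarity, this is the raw10 packing that rawvga sidesteps: every 4 pixels occupy 5 bytes, 4 high bytes followed by 1 byte carrying the four 2-bit remainders. A minimal unpacking sketch (my code; the LSB ordering matches what rawvga.3 later in this thread uses):

Code: Select all

/* Unpack one raw10 group of 4 pixels (5 bytes) into 10-bit values.
   p[0..3] are the high 8 bits; p[4] packs the 2 LSBs of each pixel
   (pixel 0 in bits 0-1, pixel 1 in bits 2-3, ...). */
static void unpack_raw10(const unsigned char *p, unsigned out[4])
{
  int k;
  for (k = 0; k < 4; ++k)
    out[k] = (((unsigned)p[k]) << 2) | ((p[4] >> (2 * k)) & 0x03);
}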

On one of my robots I have added a Raspberry camera on a tilt servo, mainly to help with line following:
Image

This is a sample 240x320 image taken with a cheap USB camera much earlier, but that is what I want to process: do feature extraction (where does the line go?) and feed that into a PID controller making the robot follow the line (the bottom of the image is the "present", the middle the "near future", and the top the "far future"):
Image
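The PID part itself can stay tiny; here is a minimal sketch (purely illustrative -- names, structure and gains are mine, not code from the robot):

Code: Select all

/* One PID step: err is the line's lateral offset extracted from the
   frame, dt the frame period (11.1ms at 90fps). Kp/Ki/Kd are made-up
   placeholders that would need tuning on the real robot. */
typedef struct { float Kp, Ki, Kd, integral, prev_err; } pidctl;

float pid_step(pidctl *c, float err, float dt)
{
  float deriv = (err - c->prev_err) / dt;
  c->integral += err * dt;
  c->prev_err  = err;
  return c->Kp * err + c->Ki * c->integral + c->Kd * deriv; /* steering value */
}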

Until yesterday my plan was to use a raspivid/gstreamer pipeline and write a gstreamer plugin in order to process the frames. Everything has become much simpler: I can now use raspiraw, eliminate writing to SD card, and do frame processing in a callback routine called every 11.1ms in case of 90fps (which is every 5.5cm at 5m/s robot speed). This is where to find the (-g) grey value of pixel (x,y) in a frame:

Code: Select all

[0x8000 + ] y*(640*5/4) + 5*(x>>2) + (x%4)
The 0x8000 offset applies only when processing raw frames generated with raspiraw's "-hd" option. In the callback routine there is no offset.
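As a minimal sketch, the same addressing as a C helper (my naming, not raspiraw's); it returns the high 8 bits of the 10-bit value, i.e. what the -g mode uses:

Code: Select all

#include <stdint.h>

/* High 8 bits of raw10 pixel (x,y) in a 640x480 frame.
   hd: 1 for files written with "-hd" (0x8000 byte header), 0 in callbacks.
   Each line is 640*5/4 = 800 bytes: groups of 4 high bytes plus 1 byte
   of packed LSBs, which this helper skips over. */
static inline uint8_t grey8_at(const uint8_t *frame, int x, int y, int hd)
{
  return frame[(hd ? 0x8000 : 0) + y*(640*5/4) + 5*(x>>2) + (x%4)];
}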

My son told me that edge detection in general is not an easy thing (I hope it will be with the line following frames I get), and that I definitely should have a look at OpenCV (which seems to be capable of dealing with 640x480 @90fps from the Raspberry camera).
I will do that, but try my own feature extraction as well. Yes, a VGA frame has 300KB of data, but on the other hand there are 11 million clock cycles before the next frame callback gets triggered ...

Hermann.

"rawvga" can handle raw data, with or without header:

Code: Select all

/* rawvga converts raspiraw VGA resolution .raw to .ppm/.pgm without dcraw

A) https://www.johndcook.com/blog/2009/08/24/algorithms-convert-color-grayscale
*/
#include <stdio.h>
#include <string.h>
#include <assert.h>

int  hdr=0x8000, siz=640*480*5/4, shl=4, col=0;  /* raw10: 5 bytes per 4 pixels */
char M[2][2]={{'B','G'},{'G','R'}};              /* OV5647 BG/GR Bayer pattern  */
int  L[17] = { 1/*B*/, 0,0,0,0, 10/*G*/, 0,0,0,0,0,0,0,0,0,0, 3/*R*/}; /* A) luminosity weights, indexed by C-'B' */

void out(char C, unsigned char u) {  /* emit one pixel for Bayer position C */
  unsigned U = u * (col ? 1 : L[C-'B']);  U <<= shl;  if (U>255) { U=255; } 

  if (!col) {
    putchar(U);        /* P5: one grey byte */
  } else {             /* P6: RGB triple, only the sampled channel set */
    switch (C) {
      case 'B': putchar(0); putchar(0); putchar(U); break;
      case 'G': putchar(0); putchar(U); putchar(0); break;
      case 'R': putchar(U); putchar(0); putchar(0); break;
    }
  }
}

int main(int argc, char *argv[]) {
  FILE *src; int i,j;

  if (argc!=3) { fprintf(stderr,"%s -c|-g|-l file.raw\n",argv[0]); return(1); }

       if (strcmp(argv[1], "-c")==0) { printf("P6"); col=1; }
  else if (strcmp(argv[1], "-g")==0) { printf("P5"); L['G'-'B']=L['R'-'B']=1; } /* plain grey    */
  else if (strcmp(argv[1], "-l")==0) { printf("P5"); shl=1; }                  /* luminosity A) */
  else { assert( !"-c|-g|-l needed"); }

  assert( src = fopen(argv[2],"rb") );
  fseek(src,0,SEEK_END); if (ftell(src) != hdr+siz) { hdr=0; } /* accept frames with or without "-hd" header */
  assert(ftell(src) == hdr+siz); fseek(src, hdr, SEEK_SET);

  printf("\n640 480\n255\n");

  for(i=0; i<480; ++i) {
    for(j=0; j<640; ++j) {
      out(M[i%2][j%2], fgetc(src));  if (j%4 == 3) { fgetc(src); } /* skip packed-LSBs byte */
    }
  }

  return fclose(src);
}

HermannSW

Re: raspiraw raw bayer data: how to use in callbacks for feature extraction and robot control

Sun Jul 30, 2017 8:40 am

In another thread 6by9 pointed to this documentation, which is very useful:
http://picamera.readthedocs.io/en/latest/fov.html

As far as raspiraw is concerned, only these sections are of interest (the remainder of chapter 6 covers the GPU and other things irrelevant for raspiraw):
6.1. Theory of Operation
6.2 Sensor Modes

In this posting I showed that raspiraw log callback timestamp differences correspond to 42fps and 60fps for 640x480 modes 6 and 7. Both match the minimum fps according to this V1 camera sensor mode table from 6.2:
Image

According to 6.1.3.2 ("Maximum framerate is determined by the minimum exposure time"), higher fps can be achieved by influencing the exposure time.
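In other words (my paraphrase, not a quote from the docs): the frame period cannot be shorter than the exposure time, so fps_max <= 1 / t_exposure_min; requesting a shorter exposure therefore allows a higher framerate, until sensor readout becomes the limiting factor.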

Hermann.

HermannSW

Re: raspiraw raw bayer data: how to use in callbacks for feature extraction and robot control

Wed Aug 09, 2017 1:21 pm

I tried this easy 640x480 raw Bayer data to 320x240 RGB transformation, and the result looks quite good. Simply replace each 2x2 pixel block of the 640x480 frame by a single RGB pixel of the 320x240 image, as shown here:

Code: Select all

bg
Gr  --> (b | (g+G)/2 | r)
This is the result for the sample input used before:
Image

To avoid code duplication, the new rawvga.2.c can be downloaded here; below is the diff:
https://stamm-wilbrandt.de/en/forum/rawvga.2.c

Code: Select all

$ gcc -Wall -pedantic rawvga.2.c -o rawvga.2
$ ./rawvga.2 -C o7b.raw > outC.ppm
$ 
$ diff rawvga.c rawvga.2.c 
16c16
<   if (!col) {
---
>   if (col<1) {
30c30
<   if (argc!=3) { fprintf(stderr,"%s -c|-g|-l file.raw\n",argv[0]); return(1); }
---
>   if (argc!=3) { fprintf(stderr,"%s -C|-c|-g|-l file.raw\n",argv[0]); return(1); }
32c32
<        if (strcmp(argv[1], "-c")==0) { printf("P6"); col=1; }
---
>        if (strcasecmp(argv[1], "-c")==0) { printf("P6"); col=argv[1][1]-'b'; }
35c35
<   else { assert( !"-c|-g|-l needed"); }
---
>   else { assert( !"-C|-c|-g|-l needed"); }
40a41
> if (col>=0) {
47a49,67
> } else {
>   unsigned char line[640/4*5], b, g, G, r, m;
> 
>   printf("\n320 240\n255\n");
> 
>   for(i=0; i<480; i+=2) {
>     int ofs=0; 
> 
>     assert(640/4*5 == fread(line, 1, 640/4*5, src));
> 
>     for(j=0; j<640; j+=2) {
>       b=line[j+ofs]; g=line[j+ofs+1];
>       G=fgetc(src);  r=fgetc(src);
>       m=(((unsigned)g)+G)>>1;
>       out('?', r); out('?', m); out('?', b);
>       if (j%4 == 2) { fgetc(src); ++ofs; }
>     }
>   }
> }
$
My question now is: might this simple, local Bayer to RGB transformation be good enough to identify a (small or big) yellow airplane in frame data?
Image

This would be useful for Pi Zero "follow me" control of a 2nd airplane that would fly following a first, manually controlled airplane (viewtopic.php?f=43&t=190407&p=1196430#p1196354).
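If the colors come out right, a first cut at "find yellow" could be as simple as thresholding channel relations (R and G high and similar, B clearly lower); a sketch with made-up thresholds that would need tuning once white balance is sorted out:

Code: Select all

/* Crude per-pixel yellow test on "rawvga.2 -C" style RGB output.
   All thresholds are guesses, purely illustrative. */
static int is_yellowish(unsigned char r, unsigned char g, unsigned char b)
{
  int d = (r > g) ? r - g : g - r;   /* |R-G| */
  return r > 120 && g > 120 && d < 60 && b < (r + g) / 4;
}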

HermannSW

Re: raspiraw raw bayer data: how to use in callbacks for feature extraction and robot control

Thu Aug 10, 2017 6:19 pm

... to identify a (small or big) yellow airplane in frame data?
This seems not to be that easy. I created a test setup at home with a "flying" (hanging) airplane, and because we have rainy weather outside, I added more light with a 1000lm lamp. Here is a photo of the setup, taken with an Android phone camera, roughly from the same perspective as the (normal) v1 camera on the robot (I disconnected the normally installed NoIR v1 camera):
Image

Next I turned the airplane and did a default 5 second raspiraw run saving every 20th raw Bayer frame to SD card (raspiraw -md 7 -hd -o out7.%03d.raw). Then I converted frame 181 with 6by9's hacked dcraw and got this:
Image

Finally I used the new "rawvga.2 -C" mode and got this QVGA image:
Image

While the first two images show the airplane as "yellow", the simple "rawvga.2 -C" output does not look "yellow" to me. Looking at the sky, "-C" mode definitely makes the image too bright. Simply bit-shifting Bayer data is no replacement for white balance; I need to learn how to do that first.

Hermann.

P.S:
If you want to play with rawvga.2.c, here is the raw Bayer frame data:
https://stamm-wilbrandt.de/en/forum/out7.181.raw

HermannSW

Re: raspiraw raw bayer data: how to use in callbacks for feature extraction and robot control

Thu Aug 10, 2017 11:45 pm

For the QVGA conversion of a VGA raw Bayer frame, taking all 10 bits into account seems to produce a much better result:

Code: Select all

$ gcc -Wall -pedantic rawvga.3.c -o rawvga.3
$ ./rawvga.3 out7.181.raw > out7.181.raw3.ppm
$
Image

Still no yellow, but most of the fine details match the dcraw converted image. And no guesswork bit shifting, just taking all 10 raw10 bits.

For comparison, here are the rawvga.3 image pamscaled by a factor of 2 and the dcraw converted picture (again):
Image

Image

Code: Select all

$ cat rawvga.3.c
/* rawvga.3 converts raspiraw VGA resolution .raw to QVGA .ppm without dcraw
*/
#include <stdio.h>
#include <string.h>
#include <assert.h>

#define out(U) putchar(((U)>255)?255:(U)) /* clip 10-bit value to 8 bits */

int main(int argc, char *argv[]) {
  int  hdr=0x8000, siz=640*480*5/4;
  FILE *src; int i,j;
  unsigned char line[2][640/4*5];
  unsigned b, g, G, r, m;

  if (argc!=2) { fprintf(stderr,"%s file.raw\n",argv[0]); return(1); }

  assert( src = fopen(argv[1],"rb") );
  fseek(src,0,SEEK_END); if (ftell(src) != hdr+siz) { hdr=0; } /* header optional */
  assert(ftell(src) == hdr+siz); fseek(src, hdr, SEEK_SET);

  printf("P6\n320 240\n255\n");

  for(i=0; i<480; i+=2) {
    int ofs=0; 

    assert(640/4*5 == fread(line[0], 1, 640/4*5, src));
    assert(640/4*5 == fread(line[1], 1, 640/4*5, src));

    for(j=0; j<640; j+=4) {                               /* LSB first */
      b=(((unsigned)line[0][j+ofs+0])<<2) + ((line[0][j+ofs+4]>>0)&0x03); 
      g=(((unsigned)line[0][j+ofs+1])<<2) + ((line[0][j+ofs+4]>>2)&0x03); 
      G=(((unsigned)line[1][j+ofs+0])<<2) + ((line[1][j+ofs+4]>>0)&0x03); 
      r=(((unsigned)line[1][j+ofs+1])<<2) + ((line[1][j+ofs+4]>>2)&0x03); 
      m=(g+G)>>1;
      out(r); out(m); out(b);

      b=(((unsigned)line[0][j+ofs+2])<<2) + ((line[0][j+ofs+4]>>4)&0x03); 
      g=(((unsigned)line[0][j+ofs+3])<<2) + ((line[0][j+ofs+4]>>6)&0x03); 
      G=(((unsigned)line[1][j+ofs+2])<<2) + ((line[1][j+ofs+4]>>4)&0x03); 
      r=(((unsigned)line[1][j+ofs+3])<<2) + ((line[1][j+ofs+4]>>6)&0x03); 
      m=(g+G)>>1;
      out(r); out(m); out(b);

      ++ofs;
    }
  }

  return fclose(src);
}
$ 
P.S:
For this kind of QVGA conversion it is definitely better to ask the camera for raw8 instead of raw10 data. 6by9 showed here how to modify raspiraw in order to achieve that:
viewtopic.php?f=43&t=109137&start=200#p1173461

HermannSW

Re: raspiraw raw bayer data: how to use in callbacks for feature extraction and robot control

Tue Oct 03, 2017 3:53 pm

I built a debugging device for the Pi camera; it works quite well with gstreamer:
viewtopic.php?f=43&t=193722&p=1217930#p1217930
Image

I need to find out how to draw the received frames, after processing, on /dev/fb1 and /dev/fb2 from inside a modified raspiraw.

And I need to do edge detection on raw Bayer data, as that turned out to be what is needed for the line following robot (both on artificial as well as "built in" lines):
Image

P.S:
Before installing raspiraw on the new camera debug device Pi Zero, I took a raspiraw capture on the system I had and converted a 640x480 raw Bayer frame to 320x240 with the rawvga.3 program from the previous posting. Looks good so far. It is a Pi NoIR camera, and the bright area at the bottom is lit up by a 3W infrared LED:
Image

HermannSW

Re: raspiraw raw bayer data: how to use in callbacks for feature extraction and robot control

Thu Oct 05, 2017 1:00 am

These are the complete steps needed to build and run "raspiraw" from scratch.

compile

Code: Select all

wget https://github.com/6by9/userland/archive/rawcam.zip
wget https://github.com/6by9/raspiraw/archive/master.zip
unzip -x rawcam.zip
cd userland-rawcam
sudo apt-get install build-essential cmake
time ( ./buildme 2>err | tee out )
sudo ln -s /home/pi/userland-rawcam/build/bin/raspiraw /usr/bin
prepare

Code: Select all

sudo apt-get install wiringpi
append (using an editor) dtparam=i2c_vc=on  to /boot/config.txt
add  (using an editor) i2c-dev  to  /etc/modules-load.d/modules.conf
reboot
use

Code: Select all

cd userland-rawcam
sudo ./camera_i2c
raspiraw -hd -md 7 -o out.%03d.raw

> I need to find out how to draw the received frames, after processing, on /dev/fb1 and /dev/fb2 from inside a modified raspiraw.
>
It was simpler than I thought: just hijack the code block that writes to SD card and write the frames to /dev/fb1 and /dev/fb2 instead. Nothing really useful yet; the left 320x240 display shows the blue pixel of each 2x2 BG/GR Bayer block, the right 320x240 display the top green pixel of each 2x2 BG/GR Bayer block:
Image

The best workdir is ~/userland-rawcam/build/raspberry/release.

Code: Select all

pi@raspberrypi:~/userland-rawcam/build/raspberry/release $ vi ../../../host_applications/linux/apps/raspicam/raspiraw.c
Compile with just "make raspiraw", then test with:

Code: Select all

pi@raspberrypi:~/userland-rawcam/build/raspberry/release $ raspiraw -hd -md 7 -o outc.%03d.raw -t 15000
With the default settings every 20th frame gets drawn on both displays, and since mode 7 runs 640x480 at 60fps, the displays get updated every ⅓ second.
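For reference, assuming /dev/fb1 and /dev/fb2 are the usual 16bpp RGB565 framebuffers (little endian, low byte first), a proper packing of an RGB pixel into the two bytes written per pixel would look like this (the quick hack in the diff below only approximates it):

Code: Select all

#include <stdio.h>

/* Pack 8-bit r,g,b into RGB565 and write low byte first. */
static void put_rgb565(FILE *fb, unsigned char r, unsigned char g, unsigned char b)
{
  unsigned v = ((r & 0xF8) << 8) | ((g & 0xFC) << 3) | (b >> 3);
  fputc(v & 0xFF, fb);        /* low byte:  gggbbbbb */
  fputc((v >> 8) & 0xFF, fb); /* high byte: rrrrrggg */
}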

Hermann.

P.S:
Here is the small diff:

Code: Select all

pi@raspberrypi:~/userland-rawcam/build/raspberry/release $ diff ../../../host_applications/linux/apps/raspicam/raspiraw.c.orig ../../../host_applications/linux/apps/raspicam/raspiraw.c 
56a57,58
> FILE *fb1=NULL, *fb2=NULL;
> 
364a367
> #if 0
378a382,416
> #else
>                         int i,j;
>                         unsigned char *p;
> 
>                         rewind(fb1);
>                         p = buffer->data+1;
> 
>                         for(i=0; i<480; i+=2)
>                         {
>                           for(j=0; j<640; j+=2)
>                           {
>                             fputc((*p<<7) & 0xE0, fb1); 
>                             fputc((*p<<1) & 0x07, fb1); 
> //fputc(0x1F,fb2); fputc(0x00,fb2);
>                             
>                             p += (j%4==2) ? 3 : 2;
>                           }
>                           p+=640*5/4;
>                         }
> 
>                         rewind(fb2);
>                         p = buffer->data;
> 
>                         for(i=0; i<480; i+=2)
>                         {
>                           for(j=0; j<640; j+=2)
>                           {
>                             fputc((*p<<2) & 0x1F, fb2); 
>                             fputc(0x00, fb2); 
>                             
>                             p += (j%4==2) ? 3 : 2;
>                           }
>                           p+=640*5/4;
>                         }
> #endif
635a674,676
> 
>         fb1 = fopen("/dev/fb1", "wb");
>         fb2 = fopen("/dev/fb2", "wb");
pi@raspberrypi:~/userland-rawcam/build/raspberry/release $ 

HermannSW

Re: raspiraw raw bayer data: how to use in callbacks for feature extraction and robot control

Sun Nov 05, 2017 10:57 pm

This whole thread is about raspiraw and robot control. Today I will describe the first successful combination of both: automatic caterpillar robot camera tilt calibration, using modifications to raspiraw.c only.

Results first: this is an image taken with "raspistill -w 640 -h 480 ..." after calibration ended:
Image

The calibration code stores exactly one frame, not many as raspiraw normally does. It also stores a .pgm file instead of raw Bayer data. For calibration the raw Bayer frame first gets converted to a grey image (exactly the format of a P5 portable grey map). This is done on the G pixels of the bg/Gr 2x2 Bayer tiles only, which reduces the 640x480 Bayer image to a 320x240 grey image (the G pixels are the brightest among the 4 pixels of a 2x2 tile). Then a black/white filter is applied for the calibration code. The final black/white view after calibration finished gets stored, mainly for debugging purposes, but also for demonstration here.

Here is the final bright frame with lights on:
Image

Here is the final frame in darkness, lit only by the 3W infrared LED mounted on the Raspberry camera:
Image

In order to make the calibration task easy for the code, I fixed a big black Lego piece with black tape to the front of the robot:
Image

The result is many very long black horizontal lines the calibration code can search for (the image is a raw Bayer 640x480 frame, with each bg/Gr 2x2 tile pixel just drawn in blue/green/green/red).

As written in the last posting, I built (modified) raspiraw from the ~/userland-rawcam/build/raspberry/release directory. I learned that it is best to use "time make raspiraw/fast", because that takes only 9s to build raspiraw.

I chose the wiringPi library to control the servo motor responsible for the camera tilt. Because of that I had to append " -lwiringPi" to host_applications/linux/apps/raspicam/CMakeFiles/raspiraw.dir/link.txt under the release directory.

Besides adding the code for calibration I changed some defaults for raspiraw parameters, so that just running "sudo raspiraw" runs the calibration (after "./camera_i2c" was executed).

Here is the (only 114 line) diff changing raspiraw into camera tilt calibration code, split into parts.

This adds wiringPi library support and allows for µs precision time (delay) determination. The enum is for the simple state machine I used to get the calibration stable:

Code: Select all

56a57,61
> #include <sys/time.h>
> #include <wiringPi.h>
> int ipwm = 150;
> enum { init, low, high, done } ipwmstate = init;
> 
This just excludes working on the 1st frame:

Code: Select all

361c366
<                    (((count++)%cfg->saverate)==0))
---
>                    (((count++)%cfg->saverate)==0) && (count > 1))
As said before, the raw10 Bayer frame gets converted to 320x240 portable grey map format, in place. Only the G pixels of the bg/Gr 2x2 tiles get used. A black/white filter with threshold 50 is applied to get a 2-color image (1 byte per pixel):

Code: Select all

>                   int i,j,k,t;
>                   unsigned char *p, *q;
>                   struct timeval start, end;
> 
>                   gettimeofday(&start, NULL);
> 
>                   p = q = buffer->data;
> 
>                   for(i=1; i<480; i+=2) // calibration needs last three lines
>                   {
>                     p += 800;
>  
>                     for(j=0; j<640; j+=4, p+=5)        // convert raw10 to .pgm
>                     {
>                       k = (((int) p[0])<<2) + ((p[4]>>0)&0x03);
>                       *q++ = (k>=50) ? 255 : 0;                    // b/w filter
> 
>                       k = (((int) p[2])<<2) + ((p[4]>>4)&0x03);
>                       *q++ = (k>=50) ? 255 : 0;
>                     }
>                   }
> 
The first state is when the camera looks down on the robot. The camera gets moved up until the first "long" black line is found at the bottom (line 239) of the 320x240 image:

Code: Select all

>                   switch (ipwmstate)
>                   {
>                     case init:
>                       p = q = buffer->data + 239*320 + 160;
>                       p[-160] = p[159] = 255;
>                       while (!*p)  --p;
>                       while (!*q)  ++q;
>                       if (q-p < 240)
>                       {
>                         pwmWrite (18, --ipwm);
>                       }
>                       else
>                       {
>                         ipwmstate = low;
>                       }
>                       break;
"low" state then moves camera up further until only few "long" lines remain in image (check is done in line 231, but camera overshoots a bit):

Code: Select all

>                     case low:
>                       p = q = buffer->data + 231*320 + 160;
>                       p[-160] = p[159] = 255;
>                       while (!*p)  --p;
>                       while (!*q)  ++q;
>                       if (q-p > 239)
>                       {
>                         pwmWrite (18, --ipwm);
>                       }
>                       else
>                       {
>                         ipwmstate = high;
>                       }
>                       break;
"high" state determines time taken for frame conversion and adds that as comment to P5 .pgm file written to SD card. Also running=0 is used to end the program. "done" state does nothing, just waits for program to complete:

Code: Select all

>                     case high:
>                       {
>                         gettimeofday(&end, NULL);
> 
>                         t = (end.tv_sec * 1000000 + end.tv_usec)
>                           - (start.tv_sec * 1000000 + start.tv_usec);
> 
374c438,439
< 					fwrite(buffer->data, buffer->length, 1, file);
---
>                                         fprintf(file, "P5 # %dus\n320 240\n255\n", t);
> 					fwrite(buffer->data, 240*320 /*buffer->length*/, 1, file);
378a444,450
>                       }
>                       running = 0;
>                       ipwmstate = done;
>                       break;
>                     case done: 
>                       break;
>                   }
The whole raw Bayer to .pgm frame conversion takes only 4-5ms:

Code: Select all

$ head -3 done.pgm 
P5 # 4439us
320 240
255
$ 
Changed defaults: use mode 7 for 640x480 frames, capture into "done.pgm", use a saverate of 8. The saverate determines the frequency at which the callback processing gets triggered; lower values overshoot the calibration more.

Code: Select all

619c691
< 		.mode = 0,
---
> 		.mode = 7,
624,625c696,697
< 		.output = NULL,
< 		.capture = 0,
---
> 		.output = "done.pgm",
> 		.capture = 1,
628c700
< 		.saverate = 20,
---
> 		.saverate = 8,
Allow "raspiraw" to be executed without command line parameters, without displaying help:

Code: Select all

647c719
< 	if (argc == 1)
---
> 	if (argc == -1)
Initialize the wiringPi library, do an initial camera move to the high position, and then to the low position looking directly onto the caterpillar robot:

Code: Select all

> wiringPiSetupGpio();                 // with wiringPi lib with GPIO numbering
> pinMode (18, PWM_OUTPUT);            // PWM on GPIO18
> pwmSetMode(PWM_MODE_MS);             // mark space PWM mode 
> pwmSetClock(192); pwmSetRange(2000); // 50Hz
> 
> pwmWrite (18, ipwm/2); delay(1000);
> pwmWrite (18, ipwm  ); delay(1000);
> 
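For reference, the arithmetic behind these values (assuming wiringPi's usual 19.2MHz PWM base clock): 19.2MHz / 192 / 2000 = 50Hz, i.e. a 20ms period, and ipwm=150 corresponds to 150/2000 of 20ms = 1.5ms pulse width, a typical servo centre position.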
Instead of running until timeout is reached, run only as long as running==1:

Code: Select all

984c1064
< 	vcos_sleep(cfg.timeout);
---
>         while (running) { usleep(4000); }
Hermann.

https://www.youtube.com/watch?v=jL1S-fi ... e=youtu.be
Image

HermannSW

Re: raspiraw raw bayer data: how to use in callbacks for feature extraction and robot control

Mon Nov 06, 2017 9:30 pm

I used the calibration to get black and white frames of different "line follow" scenarios (the frame that gets stored after calibration is done).

Once the motors and the Arduino Due are cabled again, making the robot move, the first thing will be to follow a straight line. Here are different situations:
Image

Later curves will have to be followed as well:
Image

I am quite impressed: the simple black&white filtering I implemented for camera tilt calibration seems to be good enough for line following later as well.

HermannSW

Re: raspiraw raw bayer data: how to use in callbacks for feature extraction and robot control

Wed Nov 08, 2017 3:29 am

In this thread I learned how to build gstreamer plugins, and how to run a gstreamer pipeline from an application via appsrc:
viewtopic.php?f=43&t=197124

This will hopefully allow me to modify raspiraw so that it can push the captured raw Bayer frames into a gstreamer pipeline instead of storing them on SD card. If raspiraw can be modified to capture 640x480 at >90fps, then this would be a good method for high framerate gstreamer video processing.

Until I have found out how to make raspiraw use appsrc, gstreamer can already be used to process the sample images I posted before. The trick is the imagefreeze plugin, which takes a single image as input and creates a still video from it for further processing.

The left window just displays the image as still video, produced by this command:

Code: Select all

gst-launch-1.0 -v filesrc location=done.6.pgm ! decodebin ! imagefreeze ! videoconvert ! autovideosink 2>err 1>out &
The right window uses the edgetv plugin in addition (and another videoconvert after it):

Code: Select all

gst-launch-1.0 -v filesrc location=done.6.pgm ! decodebin ! imagefreeze ! videoconvert ! edgetv ! videoconvert ! autovideosink

Image

P.S:
This thread is on (high framerate) camera robot control, without the need to store captured frames.
Now there is a sibling thread on high framerate video capturing:
"Howto capture 360fps (640x240) videos with Raspberry v1 camera"
viewtopic.php?f=43&t=199204

HermannSW

Re: raspiraw raw bayer data: how to use in callbacks for feature extraction and robot control

Tue Dec 19, 2017 9:59 pm

Quite some time has passed since the last posting in this thread.
I was busy exploring the high framerate options for Raspberry cameras.
And I was (really) successful, reliably capturing 640x128 stretched frames at 665fps(!).
The mode with the biggest FoV is 640x416_s, which still captures at 210fps!
Here you can find the high framerate table (up to 750fps):
viewtopic.php?f=43&t=199204&p=1248266#p1247830

Today the time has come for a reality check: seeing whether the high framerates can be used for my target application of line following, whose camera tilt calibration part was described in this thread before.

First I just wanted to redo what I did before, but running the modified raspiraw executable did not do anything -- then I remembered that I had to run "sudo raspiraw" because of the wiringPi library compiled in. After that, calibration started immediately.

But it did not end where it should, and I had to stop the program. Retrying led to the same result -- then I realized that the lens cap was on the NoIR camera lens. After removing that, calibration immediately worked; I tested that several times.

This is the 320x240 b/w filtered done.pgm image after calibration:
Image

Further below you can see the 640x416_s stretched image processed with dcraw, converting from the .raw10 camera format to .ppm. In fact the captured frame really is 640x208, since only every other of the 416 lines is taken (the reason why stretching is needed).

Some small code converts the 640x208 frame in raw10 Bayer format to a 320x208 b/w image by taking all 208 lines, but only every other pixel in each line, as described before. Here you can see that 210fps image taking does work for robot control (just the bottom 32 lines from the previous done.pgm are missing):
Image

Just for completeness (not needed for robot control), the dcraw processed and stretched 640x416 frame:
Image

P.S:
This was the setup, with the robot observed by tomcat Mexxi ;-)
Image

HermannSW

Re: raspiraw raw bayer data: how to use in callbacks for feature extraction and robot control

Wed Dec 20, 2017 5:37 pm

I told a friend about the problem I have after calibration:
finding the right threshold value for black/white conversion of captured frames.

He told me about Otsu's method
https://en.wikipedia.org/wiki/Otsu%27s_method

which sounded like a perfect fit:
... In computer vision and image processing, Otsu's method, named after Nobuyuki Otsu (大津展之 Ōtsu Nobuyuki), is used to automatically perform clustering-based image thresholding,[1] or, the reduction of a graylevel image to a binary image. ...

Before implementing Otsu's method myself I searched a little further and found this github repo:
https://github.com/hipersayanX/MultiOtsuThreshold

That linked to his blog:
http://hipersayanx.blogspot.de/2016/08/ ... shold.html

And at the bottom of that page there is an online version of Otsu's method!
You just need to upload an image, and you will see the result (threshold) of Otsu's method as well as the b/w converted image, side by side with the image you uploaded. Unfortunately my application seems not to be appropriate for Otsu's method [the Wikipedia page lists some prerequisites for good results; my application's problem seems to be "small object size" (the ratio of the object area to the entire image area, together with the mean difference of the average intensities of object and background), and the black line I am interested in definitely has a "small object area"]. I created a tool that converts captured raw Bayer 640x208 frame data to a 320x208 portable grey map (.pgm). Here is a screenshot of the online algorithm applied to such a .pgm frame:
Image
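For reference, the global variant of Otsu's method is only a few lines over a 256-bin histogram; a minimal sketch (my code, not from the repo above):

Code: Select all

#include <stddef.h>

/* Otsu: choose the threshold maximizing between-class variance
   over the grey histogram of an n-pixel 8-bit image. */
int otsu_threshold(const unsigned char *img, size_t n)
{
  unsigned long hist[256] = {0};
  double sum = 0.0, sumB = 0.0, wB = 0.0, best = 0.0;
  int t, thresh = 0;
  size_t i;

  for (i = 0; i < n; ++i) hist[img[i]]++;
  for (t = 0; t < 256; ++t) sum += t * (double)hist[t];

  for (t = 0; t < 256; ++t) {
    wB += hist[t];                   /* background class weight */
    if (wB == 0.0) continue;
    double wF = (double)n - wB;      /* foreground class weight */
    if (wF == 0.0) break;
    sumB += t * (double)hist[t];
    double mB = sumB / wB, mF = (sum - sumB) / wF;
    double between = wB * wF * (mB - mF) * (mB - mF);
    if (between > best) { best = between; thresh = t; }
  }
  return thresh;
}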

HermannSW

Re: raspiraw raw bayer data: how to use in callbacks for feature extraction and robot control

Wed Dec 20, 2017 7:49 pm

There is a good threshold for that frame somewhere; here is an animated .gif looping from 10 (all white) to 50 (far too dark). It is not clear what the best threshold is (by human inspection), nor how to determine it algorithmically ...
Image

HermannSW

Re: raspiraw raw bayer data: how to use in callbacks for feature extraction and robot control

Wed Dec 20, 2017 10:26 pm

Hmmm, there is an operator named convolution, and it can be used for edge detection as well as for other things:
https://en.wikipedia.org/wiki/Kernel_(i ... onvolution

I went from a 3x3 to a 5x5 and then a 7x7 kernel.
The symmetric kernel I chose was this (it sums to v-48, so v=48 gives a zero-sum pure high-pass kernel, while larger v mixes some of the original image back in):

Code: Select all

-1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1
-1 -1 -1  v -1 -1 -1
-1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1

This is the frame processed with the v=48 kernel:
Image

This is the frame processed with the v=72 kernel:
Image

This is the right direction for "where does the line go" analysis, but I have to learn more ...
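The convolution itself is easy to sketch in plain C; a direct (unoptimized) implementation of the 7x7 kernel above, with results clamped to 0..255 and borders left black (my code):

Code: Select all

#include <string.h>

/* src/dst: w x h grey images, one byte per pixel. The kernel is -1
   everywhere except the centre, which is v; the centre is handled by
   adding (v+1)*pixel after subtracting the whole 7x7 neighbourhood. */
void conv7x7(const unsigned char *src, unsigned char *dst, int w, int h, int v)
{
  int x, y, dx, dy;
  memset(dst, 0, (size_t)w * h);
  for (y = 3; y < h - 3; ++y)
    for (x = 3; x < w - 3; ++x) {
      int sum = 0;
      for (dy = -3; dy <= 3; ++dy)
        for (dx = -3; dx <= 3; ++dx)
          sum -= src[(y + dy) * w + (x + dx)];
      sum += (v + 1) * src[y * w + x];
      dst[y * w + x] = (unsigned char)(sum < 0 ? 0 : (sum > 255 ? 255 : sum));
    }
}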

P.S:
Just saw a message from my friend; he played with many different algorithms.
The best result for me was his local Otsu with a 50x50 window:
Image

HermannSW

Re: raspiraw raw bayer data: how to use in callbacks for feature extraction and robot control

Mon Dec 25, 2017 10:04 pm

My son (studying CS at KIT/Germany) told me:
  • use Canny edge detection for my problem
  • use OpenCV
First I tried an online Canny edge detector:
http://bigwww.epfl.ch/demo/ip/demos/01-edgeDetector/

The results were really good with default settings, so I installed "libopencv-dev" on the Pi.
Then I downloaded the sample from the OpenCV documentation:
https://docs.opencv.org/2.4/doc/tutoria ... ector.html

I had to make slight changes

Code: Select all

$ diff CannyDetector_Demo.cpp.orig CannyDetector_Demo.cpp
7,8c7,8
< #include "opencv2/imgproc.hpp"
< #include "opencv2/highgui.hpp"
---
> #include "opencv2/imgproc/imgproc.hpp"
> #include "opencv2/highgui/highgui.hpp"
63c63
<   src = imread( parser.get<String>( "@input" ), IMREAD_COLOR ); // Load an image
---
>   src = imread( argv[1], IMREAD_COLOR ); // Load an image
$ 

to get it to compile and run (I used "ssh -X pi@...") with

Code: Select all

$ g++ CannyDetector_Demo.cpp -o CannyDetector_Demo $(pkg-config --cflags --libs opencv)
$ ./CannyDetector_Demo out.0010.pgm.png

A min threshold below 50 showed artefacts, and above 75 the right edge of the line to follow got lost.
50 results in exactly what is needed, cool! (right is the input image)
Image Image


P.S:
"CommandLineParser" is a OpenCV 3 feature, not available on OpenCV 2 on Pi:
https://docs.opencv.org/3.3.1/d0/d2e/cl ... arser.html

HermannSW

Re: raspiraw raw bayer data: how to use in callbacks for feature extraction and robot control

Tue Dec 26, 2017 10:02 am

Bad news: while OpenCV Canny edge detection is functionally really cool, it is just too slow for high framerate processing, even on small 320x208 frames :-(

raw10_2_pgm.c in the attachment shows how I do the 640x208 raw10 Bayer to 320x208 .pgm conversion.

I learned that I can create an OpenCV Mat from a memory buffer, returning "image" from a function:

Code: Select all

Mat image(Size(width, height), CV_8UC1, dataBuffer, Mat::AUTO_STEP);
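(Note that this constructor wraps dataBuffer without copying it; the buffer has to stay alive as long as the Mat is in use.)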

So I created pgm_canny.cpp (in the attachment) from the previous code, eliminating the color conversion and, most importantly, adding microsecond timing of the relevant OpenCV functions:

Code: Select all

$ diff CannyDetector_Demo.cpp pgm_canny.cpp 
9a10
> #include <sys/time.h>
14c15
< Mat src, src_gray;
---
> Mat src_gray;
30a32,35
>     struct timeval tv0, tv1, tv2;
> 
>     gettimeofday(&tv0, NULL);
> 
35a41,42
>     gettimeofday(&tv1, NULL);
> 
40a48,49
>     gettimeofday(&tv2, NULL);
> 
47c56
<     src.copyTo( dst, detected_edges);
---
>     src_gray.copyTo( dst, detected_edges);
52a62,65
> 
>     printf(" blur %uus\n", (tv1.tv_sec-tv0.tv_sec)*1000000 + tv1.tv_usec - tv0.tv_usec);
>     printf("Canny %uus\n", (tv2.tv_sec-tv1.tv_sec)*1000000 + tv2.tv_usec - tv1.tv_usec);
>     printf("  sum %uus\n", (tv2.tv_sec-tv0.tv_sec)*1000000 + tv2.tv_usec - tv0.tv_usec);
63c76
<   src = imread( argv[1], IMREAD_COLOR ); // Load an image
---
>   src_gray = imread( argv[1], IMREAD_GRAYSCALE ); // Load an image
65c78
<   if( src.empty() )
---
>   if( src_gray.empty() )
75c88
<   dst.create( src.size(), src.type() );
---
>   dst.create( src_gray.size(), src_gray.type() );
79d91
<   cvtColor( src, src_gray, COLOR_BGR2GRAY );
$ 

I really hoped for processing times acceptable for high framerate processing, but that is not what I saw, even when compiled with -O6:

Code: Select all

$ g++ -O6 pgm_canny.cpp $(pkg-config --cflags --libs opencv)
$ 

The code outputs microsecond resolution durations for blurring the image, for Canny, and for both together:

Code: Select all

...
Canny 12049us
  sum 21637us
 blur 9330us
Canny 11000us
  sum 20330us
 blur 9184us
Canny 10883us
  sum 20067us

These are the last lines when reaching the planned threshold of 50.
20ms per frame means OpenCV's Canny edge detection cannot process 320x208 frames at more than 50fps on the Pi Zero ... and I need 180fps or higher, i.e. a budget of less than 5.6ms per frame.
Attachments
opencv_pgm_canny.zip
(1.87 KiB) Downloaded 247 times
