Graycode Pattern: Irregular and off-center??

I have been struggling to get the structured light examples working with OpenCV (Windows 10, C++, Visual Studio). Something I have noticed when trying multiple computers, projectors, and even the Java version of OpenCV: the Gray code patterns it generates seem to be messed up (or there is something in the Gray code theory I am missing).

My understanding is that the Gray code patterns should be something like this (a small sketch follows the list):

0) horizontal: left 50% of the pixels black, right 50% white
0b) horizontal: inverse of the previous pattern
1) horizontal: left 25% black, next 25% white, next 25% black, last 25% white
1b) horizontal: inverse of the previous pattern
... and so on until the stripes are a single pixel wide, then the same again for the vertical direction:

0) vertical: top 50% of the pixels black, bottom 50% white
0b) vertical: inverse of the previous pattern
1) vertical: top 25% black, next 25% white, next 25% black, last 25% white
1b) vertical: inverse of the previous pattern
... and so on down to single-pixel stripes, followed by a final all-black image and an all-white image.
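
For reference, here is a minimal sketch of what I mean by a "textbook" Gray code pattern (my own illustration, not the OpenCV implementation): pattern k shows bit k of the binary-reflected Gray code of the column index, counting from the most significant bit. Note that if the width is not a power of two, even this ideal pattern is not split exactly 50/50, because the most significant bit only flips at column 2^(n-1):

```cpp
#include <cmath>
#include <cstdio>

// Illustration only: expected value (0 = black, 1 = white) of pixel column x
// in the k-th "vertical stripe" Gray code pattern, counting k = 0 as the
// most significant bit. The width and sample columns below are example values.
int grayCodeBit(int x, int k, int width)
{
    int numBits = static_cast<int>(std::ceil(std::log2(width)));
    int gray = x ^ (x >> 1);                 // binary-reflected Gray code of the column index
    return (gray >> (numBits - 1 - k)) & 1;  // pick bit k, MSB first
}

int main()
{
    const int width = 1920;  // not a power of two: ceil(log2(1920)) = 11 bits
    // The MSB of an 11-bit Gray code flips at column 1024, i.e. at roughly 53%
    // of a 1920-pixel-wide image, so even the "ideal" first pattern is not
    // split exactly down the middle at this resolution.
    const int samples[] = {0, 1023, 1024, 1919};
    for (int x : samples)
        std::printf("column %4d -> pattern 0 value = %d\n", x, grayCodeBit(x, 0, width));
    return 0;
}
```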

BUT in each version of the Gray code generation I try, the pixels are never divided the way I would expect (I am using the examples from the contrib tutorial: https://docs.opencv.org/master/d3/d81/tutorial_structured_light.html ). The first image created is almost always about 70% black on the left and 30% white, and then it iterates in a similarly offset way from there.
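
For context, here is a minimal sketch of the generation step along the lines of that tutorial (the resolution and output filenames are just placeholders):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/structured_light.hpp>

#include <string>
#include <vector>

int main()
{
    // Placeholder resolution -- the offset shows up at non-power-of-two sizes.
    cv::structured_light::GrayCodePattern::Params params;
    params.width = 1920;
    params.height = 1080;

    cv::Ptr<cv::structured_light::GrayCodePattern> graycode =
        cv::structured_light::GrayCodePattern::create(params);

    // Generate the stripe patterns, then the all-black / all-white images
    // used for the shadow masks.
    std::vector<cv::Mat> pattern;
    graycode->generate(pattern);

    cv::Mat black, white;
    graycode->getImagesForShadowMasks(black, white);
    pattern.push_back(white);
    pattern.push_back(black);

    // Dump the patterns so the stripe boundaries can be inspected.
    for (size_t i = 0; i < pattern.size(); ++i)
        cv::imwrite("pattern_" + std::to_string(i) + ".png", pattern[i]);

    return 0;
}
```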

Then, when the stripes get down to single-pixel width, there tends to be a double- to quadruple-width stripe present.

Attached are some images showing what happens.

I can run the full process, by the way, and decode these pictures, but the results don't come out correct at all; usually I just get a blank screen instead of a depth map.

EDIT: I couldn't get pictures to upload to this post, so here is a link to example pictures of the Gray code bugs, from image 0 and image 22: https://photos.app.goo.gl/3PEwaLshXtZHQqi68

Update:

I took a look at the Gray code generation code and noticed the default resolution is 1024x768 (I did get one picture to upload somehow; is there a 1 MB limit on photos?). So I tried that value, and surprisingly the first set of Gray code patterns looks correct now (even though my monitor is 1920x1080).

BUT the second set of Gray code patterns (horizontal stripes) is still way offset.

[two images attached showing the generated patterns]

If I set the resolution for the Gray code generation to 1024x1024, I get what looks like proper Gray code in both the vertical and horizontal directions!

But if I use 1920x1920, it is all skewed again.

And if I use 2048x2048, everything is even.
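
To put numbers on "skewed" versus "even", here is a small diagnostic sketch (my own check, not from the tutorial) that measures the fraction of white pixels in the very first generated pattern at each resolution; 0.5 would mean a perfectly centered split:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/structured_light.hpp>

#include <cstdio>
#include <vector>

// Fraction of white pixels in the first generated pattern image.
// Assumes the patterns come out as single-channel 8-bit images (0 / 255).
static double whiteFraction(int width, int height)
{
    cv::structured_light::GrayCodePattern::Params params;
    params.width = width;
    params.height = height;

    std::vector<cv::Mat> pattern;
    cv::structured_light::GrayCodePattern::create(params)->generate(pattern);

    return static_cast<double>(cv::countNonZero(pattern[0])) /
           (static_cast<double>(width) * height);
}

int main()
{
    const int sizes[][2] = { {1024, 768}, {1920, 1080}, {1024, 1024},
                             {1920, 1920}, {2048, 2048} };
    for (const auto& s : sizes)
        std::printf("%4dx%-4d -> white fraction in pattern 0 = %.3f\n",
                    s[0], s[1], whiteFraction(s[0], s[1]));
    return 0;
}
```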

Is something messed up with the math used to divide up the pixels when generating the Gray code pattern? https://github.com/opencv/opencv_contrib/blob/master/modules/structured_light/src/graycodepattern.cpp

Or is it supposed to be skewed at resolutions that aren't powers of 2?