drivenj17's profile - activity

2017-03-11 05:30:12 -0600 received badge  Famous Question (source)
2016-08-03 07:18:24 -0600 received badge  Notable Question (source)
2016-03-08 02:50:48 -0600 received badge  Popular Question (source)
2015-06-30 14:34:47 -0600 received badge  Enthusiast
2015-06-04 13:04:11 -0600 commented answer How to convert from CV_16S to CV_8U || CV_16U || CV_32F? cvtColor assertion failed

Thanks for this, it works :)

2015-06-04 13:03:46 -0600 received badge  Scholar (source)
2015-06-03 18:47:30 -0600 received badge  Editor (source)
2015-06-03 18:46:55 -0600 asked a question How to convert from CV_16S to CV_8U || CV_16U || CV_32F? cvtColor assertion failed

Hi,

I am having trouble converting a Mat to grayscale using the following (more details below):

cv::cvtColor(result, gray, COLOR_BGR2GRAY);

result is a Mat. It is the output of image stitching (from the sample: https://github.com/Itseez/opencv/blob...):

Mat result, result_mask;
blender->blend(result, result_mask);

result Mat properties:

  • result.rows 2310
  • result.cols 5545
  • result.type 19
  • result.channels 3
  • result.depth 3

I am trying to convert this Mat result to grayscale as a first step in another cropping function:

cv::Mat gray;
cv::cvtColor(result, gray, COLOR_BGR2GRAY);

but it fails with an assertion at the cvtColor call:

OpenCV Error: Assertion failed (depth == CV_8U || depth == CV_16U || depth == CV_32F) in cv::cvtColor, file C:\builds\master_PackSlave-win32-vc12-shared\opencv\modules\imgproc\src\color.cpp, line 7343

If I imwrite the result Mat above, and imread it back in like so:

imwrite(result_name, result);
Mat toCrop = imread(result_name, CV_LOAD_IMAGE_COLOR);
cv::Mat gray;
cv::cvtColor(toCrop, gray, COLOR_BGR2GRAY);

then there are no errors.

toCrop Mat properties after reading back in are:

  • toCrop.rows 2310
  • toCrop.cols 5545
  • toCrop.type 16
  • toCrop.channels 3
  • toCrop.depth 0

Any ideas on how to get this working without writing and reading, i.e. converting the original result Mat to grayscale directly? I believe the depth is the issue; I've been searching for how to convert the depth but haven't made progress.

Thanks for your support

2015-06-02 09:23:39 -0600 asked a question Stitching images at different time intervals - How to stabilize subsequent frames?

Hi there,

I'm still learning about image stitching at this point, and have been using the opencv library and their sample code to stitch together several images.

We have a camera taking images at multiple positions (9 images), and we take these images and stitch them into a single image (approx.) 3x3 grid.

Every few hours, these images are captured using the same camera at 9 different camera presets (pan/tilt/zoom settings) and stitched together.

The code I am using is from the sample with no modifications: https://github.com/Itseez/opencv/blob...

I am using ORB feature finder.

We were hoping to play the stitched outputs in a slideshow fashion. The issue we are encountering is that the stitched images are "unstable". What I mean is that the dimensions of the stitched outputs generated from images captured at different times differ, so when we play a slideshow of these outputs, the stitched images shift in different directions.

It is an issue similar to this one here: http://stackoverflow.com/questions/16...

Do you have any recommendations on how to make the stitched outputs more stable (features placed in the same position, pixel-wise) in terms of the stitching_detailed.cpp code above? Could we re-use a particular calculation from one of the stitching pipeline steps, such as the homography matrix, when stitching subsequent frames (pretending to know what I am talking about ;) )?

Is there some way to make use of the fact that the subsequent frames (images captured at different time intervals) are taken by the same camera at the same camera presets?

Please let me know if this needs further clarification and thanks for your help.

Best Regards