pmh4514's profile - activity

2020-10-13 15:17:05 -0600 received badge  Popular Question (source)
2020-03-26 04:59:55 -0600 received badge  Popular Question (source)
2019-12-11 00:15:02 -0600 received badge  Popular Question (source)
2019-11-19 13:17:46 -0600 received badge  Popular Question (source)
2019-07-03 16:24:32 -0600 received badge  Famous Question (source)
2018-05-14 17:33:37 -0600 commented question merge more than 3 grayscale images into color image

I see your point (and you are not wrong). I can visualize a 'brute force' way to do it by looping pixel for pixel but w

2018-05-14 17:30:35 -0600 commented question merge more than 3 grayscale images into color image

I see your point, you are not wrong. I can visualize a 'brute force' way to do it by looping pixel for pixel but wasn't

2018-05-14 16:02:54 -0600 edited question merge more than 3 grayscale images into color image

merge more than 3 grayscale images into color image Hello I'm using OpenCV (2.4.10) with C++ (ie. cv::Mat vs. IplImage)

2018-05-14 15:52:59 -0600 edited question merge more than 3 grayscale images into color image

merge more than 3 grayscale images into color image Hello I'm using OpenCV (2.4.10) with C++ (ie. cv::Mat vs. IplImage)

2018-05-14 15:52:30 -0600 edited question merge more than 3 grayscale images into color image

merge more than 3 grayscale images into color image Hello I'm using OpenCV (2.4.10) with C++ (ie. cv::Mat vs. IplImage)

2018-05-14 15:06:57 -0600 edited question merge more than 3 grayscale images into color image

merge more than 3 grayscale images into color image Hello I'm using OpenCV (2.4.10) with C++ (ie. cv::Mat vs. IplImage)

2018-05-14 15:06:14 -0600 edited question merge more than 3 grayscale images into color image

merge more than 3 grayscale images into color image Hello I'm using OpenCV (2.4.10) with C++ (ie. cv::Mat vs. IplImage)

2018-05-14 15:05:39 -0600 asked a question merge more than 3 grayscale images into color image

merge more than 3 grayscale images into color image Hello I'm using OpenCV (2.4.10) with C++ (ie. cv::Mat vs. IplImage)

2017-03-03 13:16:01 -0600 commented question transform 16bit grayscale to RGB with constants

Thank you both for your responses!

2017-03-03 09:05:16 -0600 commented answer transform 16bit grayscale to RGB with constants

Thanks! Yes, you understand correctly. See my comments to the other responder, where I showed my "brute force" way of looping through the pixels.

2017-03-03 09:02:12 -0600 commented question transform 16bit grayscale to RGB with constants

The 16bit data is a purely grayscale image. I'm really just doing a "false coloring" by generating RGB values.

I've written this (simplified for space) which I think should get me there.

Mat mat_raw_16;                                // pre-populated source grayscale (CV_16UC1)
Mat mat_color_8 = Mat(rows, cols, CV_8UC3);    // container for false-color version

for (int i = 0; i < mat_raw_16.rows; i++){
    for (int j = 0; j < mat_raw_16.cols; j++){
        ushort val = mat_raw_16.at<ushort>(i,j);
        // scale the 16-bit value down to 0-255 (65535 / 257 = 255), then weight by each constant;
        // note that cv::Mat stores colour channels in B,G,R order
        mat_color_8.at<cv::Vec3b>(i,j)[0] = (uchar)((val / 257.0) * CONST_B);
        mat_color_8.at<cv::Vec3b>(i,j)[1] = (uchar)((val / 257.0) * CONST_G);
        mat_color_8.at<cv::Vec3b>(i,j)[2] = (uchar)((val / 257.0) * CONST_R);
    }
}

Each of those CONST values is a scaling factor (between 0 and 1).

2017-03-03 08:40:56 -0600 commented question transform 16bit grayscale to RGB with constants

so "brute force" (ie. looping through each pixel) is the way? (there's no "matrix" style multiplication I can do directly?)

to your question, the 16 single channel image does not contain color information, only grayscale pixel intensities. I am working on a "false coloring" so to speak. I have 3 constant values (all between 0-1) one for each of R, G and B, which when multiplied by the original grayscale value and scaled to 0-255, becomes the R, G and B values.
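(For reference, a minimal loop-free sketch of the same transform, assuming mat_raw_16 is the CV_16UC1 source and CONST_R/G/B are the three 0-1 factors from the snippet above; convertTo does the 16-bit to 8-bit scaling and the constants are applied per channel before merging:)

    cv::Mat gray8;
    mat_raw_16.convertTo(gray8, CV_8U, 1.0 / 257.0);   // 16-bit -> 8-bit (65535 / 257 = 255)

    std::vector<cv::Mat> bgr(3);
    bgr[0] = gray8 * CONST_B;                          // per-channel scaling, B,G,R order
    bgr[1] = gray8 * CONST_G;
    bgr[2] = gray8 * CONST_R;

    cv::Mat mat_color_8;
    cv::merge(bgr, mat_color_8);                       // combine the three weighted channels

The Mat-times-scalar expressions saturate automatically, so no explicit per-pixel loop is needed.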

2017-03-03 07:53:55 -0600 asked a question transform 16bit grayscale to RGB with constants

Hi,

I'm trying to figure out the most appropriate way to do this in OpenCV but I don't have much experience with it and feel like there should be a better way than just brute force.

I'm starting with a 16-bit grayscale image (CV_16UC1) and I want to produce an RGB image (CV_8UC3).

But I don't want to use a simple conversion like o1.convertTo(o2, CV_8UC3, 1/255.0);

Rather, I have 3 constant scaling factors (each between 0 and 1, one for each of R, G and B). Each resulting RGB pixel value should be the original 16-bit grayscale pixel value multiplied by one of the three constants and then scaled down to a value between 0-255.

Thoughts? Thanks!

2017-01-20 01:15:48 -0600 received badge  Notable Question (source)
2016-11-03 09:43:12 -0600 asked a question copy from LibTIFF into OpenCV matrix

I understand that OpenCV uses libtiff "under the hood" but unfortunately it lacks the ability to handle reading/writing tags into the TIFF header.

I need to read a large TIFF file into memory, read its tags, but then manipulate it using OpenCV.

My images are all 16bit grayscale (CV_16UC1)

What is the best way to transfer the pixels from the libTIFF instance directly into the OpenCV matrix?

Thanks!
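For reference, a minimal sketch of one way to do the transfer, assuming a 16-bit single-channel TIFF read scanline by scanline ("image.tif" is a placeholder name):

    #include <tiffio.h>

    TIFF* tif = TIFFOpen("image.tif", "r");
    uint32 width = 0, height = 0;
    TIFFGetField(tif, TIFFTAG_IMAGEWIDTH, &width);
    TIFFGetField(tif, TIFFTAG_IMAGELENGTH, &height);

    cv::Mat mat((int)height, (int)width, CV_16UC1);
    for (uint32 row = 0; row < height; row++)
        TIFFReadScanline(tif, mat.ptr<ushort>(row), row);   // copy one scanline straight into the Mat row
    TIFFClose(tif);

Each cv::Mat row is contiguous in memory, so mat.ptr<ushort>(row) can be handed to libtiff as the scanline buffer.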

2016-09-14 10:34:32 -0600 commented answer saving TIFF tags in header?

Thank you.

2016-09-13 11:13:02 -0600 asked a question saving TIFF tags in header?

When saving an OpenCV mat, is it possible to add TIFF tags to the header? I need to store pixel dimensions in the image header.
Thanks!
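For what it's worth, imwrite (at least in 2.4) does not appear to expose arbitrary TIFF tags, so here is a hedged sketch of writing the Mat with libtiff directly, using the standard resolution tags to carry the pixel dimensions (the tag choice, "out.tif" and pixel_size_um are assumptions for illustration):

    #include <tiffio.h>

    cv::Mat mat;                                         // the CV_16UC1 image to save
    double pixel_size_um = 5.0;                          // physical size of one pixel (placeholder)
    float pixels_per_cm = (float)(10000.0 / pixel_size_um);

    TIFF* tif = TIFFOpen("out.tif", "w");
    TIFFSetField(tif, TIFFTAG_IMAGEWIDTH, mat.cols);
    TIFFSetField(tif, TIFFTAG_IMAGELENGTH, mat.rows);
    TIFFSetField(tif, TIFFTAG_BITSPERSAMPLE, 16);
    TIFFSetField(tif, TIFFTAG_SAMPLESPERPIXEL, 1);
    TIFFSetField(tif, TIFFTAG_PHOTOMETRIC, PHOTOMETRIC_MINISBLACK);
    TIFFSetField(tif, TIFFTAG_PLANARCONFIG, PLANARCONFIG_CONTIG);
    TIFFSetField(tif, TIFFTAG_RESOLUTIONUNIT, RESUNIT_CENTIMETER);
    TIFFSetField(tif, TIFFTAG_XRESOLUTION, pixels_per_cm);
    TIFFSetField(tif, TIFFTAG_YRESOLUTION, pixels_per_cm);
    for (int row = 0; row < mat.rows; row++)
        TIFFWriteScanline(tif, mat.ptr<ushort>(row), row, 0);
    TIFFClose(tif);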

2016-06-09 07:45:28 -0600 commented question [SOLVED] putText - result is always black text

Thank you. Using cv::Scalar(0xffff) instead of cv::Scalar(255) solved the problem.
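For reference, the working call on the CV_16UC1 image from the question below looks like this (white in a 16-bit single-channel image is 65535, not 255):

    cv::putText(target_mat, strStatus, Point2f(50, 100), FONT_HERSHEY_PLAIN, 3,
                cv::Scalar(0xffff), 2);   // 0xffff = 65535, i.e. full white for 16-bit data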

2016-06-07 14:52:03 -0600 commented question [SOLVED] putText - result is always black text

Thank you for your kind reply. cv::Scalar(255) and cv::Scalar(0) still both produce only black text.

2016-06-07 13:12:52 -0600 asked a question [SOLVED] putText - result is always black text

Hello,

I have a Matrix of type CV_16UC1 which holds 16bit grayscale pixel data from an external camera.

I want to use putText to draw WHITE text over the image.

But regardless of which of the following two lines I try, the text is displayed, but it is always black.

What am I doing wrong?

cv::putText(target_mat, strStatus, Point2f(50,100), FONT_HERSHEY_PLAIN, 3, cv::Scalar(255,255,255), 2);
cv::putText(target_mat, strStatus, Point2f(50,100), FONT_HERSHEY_PLAIN, 3, cv::Scalar(0,0,0), 2);

Thanks!

2016-03-15 13:52:53 -0600 received badge  Popular Question (source)
2015-01-23 08:19:57 -0600 asked a question imwrite tiff saving wrong dimension when opened in ImageJ

Strange question here. I'm using imwrite to save a 16-bit TIF file to disk (dark images with a lot of surrounding black). The resulting file is supposed to be 1000x1000, and Windows file properties report it as 1000x1000, as do Photoshop and MSPaint. When I try to open it in ImageJ, however, it is reported as 820x820, and I am at a loss to explain why.

Does anybody have any insights into this problem?

2014-12-03 13:03:38 -0600 commented answer blending two color images

Interesting. Perhaps my first post was misleading, in that these aren't necessarily "pure red" blending with "pure green". Take two RGB color photos and merge them, blending the colors while not losing intensity.

When I do a simple addition (1.0, 1.0), the result becomes very washed out, mostly turning to white. The result is identical whether I use addWeighted with 1.0, 1.0 or simply do c = a + b.
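A small sketch of why both forms wash out: OpenCV's 8-bit arithmetic saturates, so any channel where a + b exceeds 255 clips to 255 (a and b here are the two CV_8UC3 images from the question below):

    cv::Mat c1, c2;
    cv::addWeighted(a, 1.0, b, 1.0, 0.0, c1);   // c1 = saturate_cast<uchar>(a + b), per channel
    c2 = a + b;                                 // same saturating addition, so c2 equals c1
    // e.g. (200,180,40) + (150,160,20) clips to (255,255,60) instead of preserving relative intensities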

2014-12-01 07:53:09 -0600 asked a question blending two color images

Hello,

I have two color images (each in a cv::Mat of the same dimensions and type) and I wish to blend them into a third.

I am more-or-less successful using addWeighted but it seems like each of my source images gets "darker" during the process.

Below is an example of three images (these are not my images or even context, but the blended version of "C" from "A" & "B" represents exactly what I am going after)

(image: example tiles A and B and the blended result C)

I assume I have to specify an "alpha" value of <1 to determine the blending level, but if I do something like this (where a, b and c refer to the sample image tiles):

    addWeighted(a, 0.5, b, 0.5, 0.0, c);

I end up with "C" that blends "A" and "B" but it's as if the overall intensity of both A and B were reduced in the process. This example I've posted here seems to retain full intensity.

So what would be the proper way, using OpenCV, to take A and B and make C?

2014-11-14 15:01:45 -0600 commented answer false coloring of grayscale image

I'm having a heck of a time doing so. I try, but then get an internal server error. Is this site fully recovered from the recent hack?

Anyway, it is useful info. What's more interesting is that a colleague and I went through the exercise of understanding and implementing the math to convert from gray through the YCbCr colorspace:

int r = (int)(Y + 1.40200 * Cr);
int g = (int)(Y - 0.34414 * Cb - 0.71414 * Cr);
int b = (int)(Y + 1.77200 * Cb);

and visually the results were exactly the same as my approach described above which I stumbled upon in my experimenting and believed to be a complete hack. Now I'm even more curious!

2014-11-14 07:13:45 -0600 commented answer false coloring of grayscale image

Thanks, I think I understand. I tried another approach which seemed to do what I want, though I'm not sure if it's "correct".

If I have 8-bit gray and create a new empty 24-bit RGB, I can copy the entire 8-bit gray into one of the BGR channels (say, R), leaving the others black, and that effectively colorizes the pixels in a range of red. Similarly, if the user wants to make it, say, RGB(80,100,120), then I can set each of the RGB channels to the source grayscale intensity multiplied by (R/255), (G/255) or (B/255) respectively. This seems to work visually. It does need to be a per-pixel operation, though, because the color applies only to a user-defined range of grayscale intensities.
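A minimal sketch of that approach using a 256-entry lookup table rather than per-pixel arithmetic (the target colour RGB(80,100,120), the 90-150 range and the name gray8 are placeholders; outside the range the gray value is simply replicated into all three channels):

    int lo = 90, hi = 150;                       // user-defined grayscale range (placeholder)
    cv::Vec3b target(120, 100, 80);              // B,G,R for RGB(80,100,120)

    cv::Mat lut(1, 256, CV_8UC3);
    for (int v = 0; v < 256; v++) {
        cv::Vec3b& e = lut.at<cv::Vec3b>(0, v);
        if (v >= lo && v <= hi)
            e = cv::Vec3b((uchar)(v * target[0] / 255),
                          (uchar)(v * target[1] / 255),
                          (uchar)(v * target[2] / 255));
        else
            e = cv::Vec3b((uchar)v, (uchar)v, (uchar)v);   // keep the original gray outside the range
    }

    cv::Mat gray_bgr, colored;
    cv::cvtColor(gray8, gray_bgr, CV_GRAY2BGR);  // gray8 = the 8-bit grayscale source
    cv::LUT(gray_bgr, lut, colored);             // per-channel lookup applies the table entry per pixel

Because the three channels of gray_bgr are identical, cv::LUT picks the same table row per pixel and writes the B, G and R components of that entry, which keeps the intensity variation while tinting the chosen range.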

2014-11-13 12:47:31 -0600 commented answer false coloring of grayscale image

That is essentially what I am doing (right?). I guess my question goes more to how to calculate the colormap. I know there is cv::applyColorMap and there are some predefined ones. As far as I can tell, I cannot define my own, and would still then have the question of how to calculate it.

In essence, my users may ask that all grayscale values between 90 and 150 should turn "blue" but how can I calculate the proper "shade of blue" per incoming grayscale pixel intensity? To simply make them all RGB(0,0,255) would lose the information about the grayscale intensity differences in the raw data.

2014-11-13 08:21:00 -0600 asked a question ** deleted ***

This is a duplicate of http://answers.opencv.org/question/50781/false-coloring-of-grayscale-image/ I was having website issues when posting and somehow created it twice. sorry

2014-11-13 07:41:53 -0600 asked a question false coloring of grayscale image

Hello,

I have a grayscale image to which I want to apply false coloring.

Below is a chunk of my code where I loop through all rows and cols of my matrix (matProcessed). For each pixel value, I lookup the desired RGB values from the user's pre-defined lookup table, and then overwrite the pixel in matProcessed accordingly. This all works just fine, except that all underlying intensity information in the original grayscale image is lost when a range of intensity values is simply replaced with a color. I think what I need is more like what the hue/saturation/colorize function in Photoshop would do, where I can colorize a range of grayscale pixels while retaining the underlying variation in intensity of the grayscale values. So how do I do that?

    // I first convert to BGR to colorize:
    cvtColor(matProcessed, matProcessed, CV_GRAY2BGR);

    int iPxVal = 0;
    int index;
    int R = 0;
    int G = 0;
    int B = 0;
    for (int i = 0; i < matProcessed.rows; i++){
        for (int j = 0; j < matProcessed.cols; j++){
            index = matProcessed.channels() * (matProcessed.cols * i + j);

            // grab the grayscale pixel value (all three channels are equal after cvtColor):
            iPxVal = matProcessed.data[index + 0];

            // lookup user-defined RGB value:
            GetColorFromLUT(iPxVal, &R, &G, &B);

            // R, G and B now hold the RGB values the user desires;
            // write them back in OpenCV's B,G,R channel order:
            matProcessed.data[index + 0] = B;
            matProcessed.data[index + 1] = G;
            matProcessed.data[index + 2] = R;
        }
    }

Thanks!

2014-09-26 09:24:40 -0600 marked best answer [SOLVED] how to average N matrices into 1?

Hello, this is probably pretty straightforward, but I am very new to OpenCV.

I have a camera frame-grabber application where the user is allowed to "average" N frames into a single frame (as a way to improve the signal-to-noise ratio).

So I have 3 (or N) Mats, all the same size (16-bit grayscale), and I want to combine them into one such that the result is the "average" of the 3. So this isn't really a blending of unique images; each frame is going to be nearly identical, and the goal is to average them to reduce the noise.

What's the best way to do this?

Edit: I tried cvAccumulate, but it fails; I'm thinking because the documentation says it works only for 8-bit or 32-bit images.

I did make this work by brute-force (non-OpenCV) looping, summing, averaging and then setting those values into my final/averaged matrix, but is there a better way with OpenCV?

Thanks
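For reference, a minimal OpenCV-only sketch of the averaging, assuming the frames are collected in a std::vector<cv::Mat> called frames (the name is a placeholder): sum into a 32-bit float accumulator so nothing overflows, divide by N, and convert back to 16-bit.

    std::vector<cv::Mat> frames;                         // N identically sized CV_16UC1 frames (placeholder)
    cv::Mat acc = cv::Mat::zeros(frames[0].size(), CV_32FC1);

    for (size_t k = 0; k < frames.size(); k++) {
        cv::Mat f32;
        frames[k].convertTo(f32, CV_32FC1);              // 16-bit -> float so the running sum cannot overflow
        acc += f32;
    }
    acc /= (double)frames.size();                        // mean of the N frames

    cv::Mat averaged;
    acc.convertTo(averaged, CV_16UC1);                   // back to the original 16-bit depth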

2014-09-26 09:24:27 -0600 received badge  Scholar (source)
2014-09-26 09:23:46 -0600 received badge  Supporter (source)
2014-09-25 23:09:37 -0600 received badge  Student (source)
2014-09-25 14:36:03 -0600 commented answer [SOLVED] how to average N matrices into 1?

Ahh, I see. Thanks!