theodore's profile - activity

2021-06-15 22:22:18 -0600 received badge  Popular Question (source)
2020-12-09 08:46:43 -0600 received badge  Nice Answer (source)
2020-12-06 05:06:14 -0600 received badge  Popular Question (source)
2020-10-30 08:55:44 -0600 received badge  Popular Question (source)
2020-10-28 04:26:36 -0600 marked best answer some brainstorming help to detect speckles

I am trying to make the life of one of my friends easier regarding a project that he is working on. He has a bunch of images like this one:

image description

and the purpose is to extract the white dots. I tried different kinds of filters, and the best result I managed to get was by using Scharr:

image description

and the inverted (bitwise_not) version of it:

image description

Now the problem is that some dots are not that sharp, so if I apply a threshold I either erase quite a few of them or I am not able to extract all of them. I tried segmenting the image into blocks, applying a different threshold to each block, and then recreating the image, but the result is not good either (actually worse). As far as I can see, the main problem is the noise in between. If you can think of something that might help eliminate this noise, I would be grateful. It seems so easy, but for some reason I am stuck.
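
For reference, this is the direction I have been experimenting in, written out as a rough sketch (the 15x15 kernel and the 100-pixel area limit are arbitrary guesses, not tuned values): flatten the background with a morphological top-hat, threshold with Otsu, and keep only small connected components.

    #include <opencv2/opencv.hpp>

    // Rough sketch: background suppression + global threshold + size filter.
    // Kernel size and maximum blob area are guesses, not tuned values.
    cv::Mat extract_dots(const cv::Mat& gray) // gray: 8-bit, single channel
    {
        // top-hat = original - opening: keeps only small bright structures
        cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(15, 15));
        cv::Mat tophat;
        cv::morphologyEx(gray, tophat, cv::MORPH_TOPHAT, kernel);

        // Otsu picks a global threshold on the background-flattened image
        cv::Mat bw;
        cv::threshold(tophat, bw, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

        // drop connected components that are too large to be dots
        cv::Mat labels, stats, centroids;
        int n = cv::connectedComponentsWithStats(bw, labels, stats, centroids);
        cv::Mat dots = cv::Mat::zeros(bw.size(), CV_8U);
        for (int i = 1; i < n; ++i)
            if (stats.at<int>(i, cv::CC_STAT_AREA) < 100)
                dots.setTo(255, labels == i);
        return dots;
    }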

2020-10-28 04:26:31 -0600 received badge  Nice Question (source)
2020-09-28 16:49:01 -0600 received badge  Popular Question (source)
2020-09-22 05:22:30 -0600 received badge  Famous Question (source)
2020-05-06 16:06:54 -0600 received badge  Notable Question (source)
2020-02-25 09:17:50 -0600 received badge  Popular Question (source)
2020-01-17 03:39:47 -0600 received badge  Great Question (source)
2019-11-15 01:41:18 -0600 received badge  Favorite Question (source)
2019-10-18 23:12:56 -0600 marked best answer isotropic linear diffusion smoothing

I want to apply the denoising filter I named in the title, which is based on the following equations:

image description

where d is a constant scalar diffusivity parameter, I(x, y) is the initial noisy image, and u(x, y, t) is the image obtained after a diffusion time t, let's say 5, 10, and 30. However, I am quite confused about which OpenCV function to use, and how, in order to achieve this. I have the feeling that it is quite simple, but for some reason I am confused. Does anyone have an idea?
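
To make it concrete, this is the naive explicit scheme I have in mind (a minimal sketch, assuming a forward-Euler discretization of the equation above; I do not know whether OpenCV offers something more direct):

    #include <opencv2/opencv.hpp>

    // Minimal sketch: iterate u(t+dt) = u(t) + d * dt * Laplacian(u(t)).
    // The step size dt must stay small (dt <= 0.25 for d = 1 on a unit grid)
    // or the explicit scheme becomes unstable.
    cv::Mat linear_diffusion(const cv::Mat& noisy, double d, double t, double dt = 0.2)
    {
        cv::Mat u;
        noisy.convertTo(u, CV_32F);

        int steps = cvRound(t / dt);
        for (int i = 0; i < steps; ++i)
        {
            cv::Mat lap;
            cv::Laplacian(u, lap, CV_32F); // default 3x3, 4-neighbour stencil
            u += d * dt * lap;
        }
        return u;
    }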

Here is a sample image:

image description


UPDATE

these are the resulting images after following @LBerger's code, at times t = 0 and t = 4 with d = 1:

image description image description

Is it expected to look like that?

I think that something is wrong, because I am also trying to compare it with Gaussian smoothing. According to the following formula:

image description

where G√(2t)(x, y) is the Gaussian kernel with standard deviation √(2t). This proves that performing isotropic linear diffusion for a time t with d = 1 is exactly equivalent to performing Gaussian smoothing with σ = √(2t); for example, t = 5 gives σ = √10 ≈ 3.16.

I applied the Gaussian filter with the following code:

void gaussian_2D_convolution(const cv::Mat& src, cv::Mat& dst, const float sigma, const int ksize_x = 0, const int ksize_y = 0)
{
    int ksize_x_ = ksize_x, ksize_y_ = ksize_y;

    // Compute an appropriate kernel size according to the specified sigma
    if (sigma > ksize_x || sigma > ksize_y || ksize_x == 0 || ksize_y == 0)
    {
        ksize_x_ = (int)ceil(2.0f*(1.0f + (sigma - 0.8f) / (0.3f)));
        ksize_y_ = ksize_x_;
    }

    // The kernel size must be an odd number
    if ((ksize_x_ % 2) == 0)
    {
        ksize_x_ += 1;
    }

    if ((ksize_y_ % 2) == 0)
    {
        ksize_y_ += 1;
    }

    // Perform the Gaussian Smoothing
    GaussianBlur(src, dst, Size(ksize_x_, ksize_y_), sigma, sigma, BORDER_DEFAULT);

    // show result
    std::ostringstream out;
    out << std::setprecision(1) << std::fixed << sigma;
    String title = "sigma: " + out.str();
    imshow(title, dst);
    imwrite("gaussian/" + title + ".png", dst);

    waitKey(260);
}

Calling it with gaussian_2D_convolution(img, smoothed, sqrt(2 * 5)), the two results of Gaussian smoothing and isotropic linear diffusion at time t = 5 are, respectively:

image description image description

which are clearly not similar :-(.

2019-08-12 06:23:28 -0600 received badge  Notable Question (source)
2019-05-13 12:29:55 -0600 received badge  Notable Question (source)
2019-02-25 09:31:35 -0600 received badge  Popular Question (source)
2018-12-29 17:03:46 -0600 received badge  Notable Question (source)
2018-12-11 04:09:37 -0600 received badge  Guru (source)
2018-12-11 04:09:33 -0600 received badge  Great Answer (source)
2018-11-26 15:06:24 -0600 received badge  Notable Question (source)
2018-10-03 05:59:36 -0600 received badge  Popular Question (source)
2018-09-12 02:19:45 -0600 received badge  Notable Question (source)
2018-08-06 00:21:39 -0600 received badge  Popular Question (source)
2018-07-10 05:47:46 -0600 received badge  Popular Question (source)
2018-06-20 08:14:07 -0600 received badge  Popular Question (source)
2018-06-08 02:33:01 -0600 received badge  Famous Question (source)
2018-05-31 11:05:39 -0600 received badge  Famous Question (source)
2018-04-07 16:32:51 -0600 commented answer how do you plot graphs in opencv projects?

really impressive work!!! nice :-)

2018-02-22 09:01:21 -0600 marked best answer Calculate surface normals from depth image using neighboring pixels cross product

As the title says, I want to calculate the surface normals of a given depth image by using the cross product of neighboring pixels. However, I do not really understand the procedure. Does anyone have any experience?

Let's say that we have the following image:

image description

what are the steps to follow?


Update:

I am trying to translate the following pseudocode from this answer to OpenCV.

dzdx=(z(x+1,y)-z(x-1,y))/2.0;
dzdy=(z(x,y+1)-z(x,y-1))/2.0;
direction=(-dzdx,-dzdy,1.0)
magnitude=sqrt(direction.x**2 + direction.y**2 + direction.z**2)
normal=direction/magnitude

where z(x,y) is my depth image. However, the output of the following does not seem correct to me:

for(int x = 0; x < depth.rows; ++x)
{
    for(int y = 0; y < depth.cols; ++y)
    {
        double dzdx = (depth.at<double>(x+1, y) - depth.at<double>(x-1, y)) / 2.0;
        double dzdy = (depth.at<double>(x, y+1) - depth.at<double>(x, y-1)) / 2.0;
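        // note: the parentheses on the next line are the C++ comma operator,
        // not a Vec3d constructor, so only 1.0 ends up in d; n is also never
        // stored anywhere, and x-1/x+1 read out of bounds at the image border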
        Vec3d d = (dzdx, dzdy, 1.0);
        Vec3d n = normalize(d);
    }
}

Update2:

Ok I think I am close:

Mat3d normals(depth.size()); // Mat3d is CV_64FC3, matching the at<Vec3d> access below

for(int x = 0; x < depth.rows; ++x)
{
    for(int y = 0; y < depth.cols; ++y)
    {
        double dzdx = (depth.at<double>(x+1, y) - depth.at<double>(x-1, y)) / 2.0;
        double dzdy = (depth.at<double>(x, y+1) - depth.at<double>(x, y-1)) / 2.0;

        Vec3d d;
        d[0] = -dzdx;
        d[1] = -dzdy;
        d[2] = 1.0;

        Vec3d n = normalize(d);
        normals.at<Vec3d>(x, y)[0] = n[0];
        normals.at<Vec3d>(x, y)[1] = n[1];
        normals.at<Vec3d>(x, y)[2] = n[2];
    }
}

which gives me the following image:

image description


Update 3:

following @berak's approach:

depth.convertTo(depth, CV_64FC1); // I do not know why it needs to be converted to a 64-bit image; my input is 32-bit

Mat nor(depth.size(), CV_64FC3);

for(int x = 1; x < depth.cols - 1; ++x)
{
    for(int y = 1; y < depth.rows - 1; ++y)
    {
        /*double dzdx = (depth(y, x+1) - depth(y, x-1)) / 2.0;
        double dzdy = (depth(y+1, x) - depth(y-1, x)) / 2.0;
        Vec3d d = (-dzdx, -dzdy, 1.0);*/
        Vec3d t(x,y-1,depth.at<double>(y-1, x)/*depth(y-1,x)*/);
        Vec3d l(x-1,y,depth.at<double>(y, x-1)/*depth(y,x-1)*/);
        Vec3d c(x,y,depth.at<double>(y, x)/*depth(y,x)*/);
        Vec3d d = (l-c).cross(t-c);
        Vec3d n = normalize(d);
        nor.at<Vec3d>(y,x) = n;
    }
}

imshow("normals", nor);

I get this one:

image description

which seems quite OK. However, if I use a 32-bit image instead of a 64-bit one the image is corrupted (my guess: in a release build, depth.at<double> on CV_32F data just reinterprets the float bytes):

image description
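
To illustrate my guess (hypothetical snippet, assuming the depth map is CV_32FC1):

    cv::Mat depth32(480, 640, CV_32FC1, cv::Scalar(1.0f));

    float  ok  = depth32.at<float>(0, 0);  // matches the element type: reads 1.0f
    double bad = depth32.at<double>(0, 0); // release build: reinterprets 8 bytes
                                           // (two floats) as one double

    cv::Mat depth64;
    depth32.convertTo(depth64, CV_64FC1);  // after converting, at<double> is valid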

2017-12-14 08:53:54 -0600 received badge  Famous Question (source)
2017-11-22 08:40:47 -0600 received badge  Popular Question (source)
2017-11-13 16:51:42 -0600 marked best answer how do you plot graphs in opencv projects?

Hey guys, I was just wondering: there are cases where you need to plot a graph in real time within a project, in order to keep track of or evaluate data, or for any other reason.

At the moment OpenCV does not support such functionality, right? Please correct me if I am wrong here; maybe the 3.x version introduced something that I am not aware of. From a thorough search I only found these two libraries, [1] and [2], that try to simulate such plotting functionality on top of OpenCV. Though I appreciate the effort of their authors, both are quite old, meaning they use the old C API, and they are nowhere close to something functional like MATLAB, gnuplot, etc. So here comes my question: how do you guys do it?

I was also looking at QWT and some examples, but it does not seem that convenient, considering that you need to include the Qt libraries and structure the project as an application, since you need to start a QApplication object.

My thought was that OpenCV, as a mature and well-known library in the computer vision and image processing fields, should have such functionality, or embed it from an external library built specifically for that purpose, since I think it is quite often necessary.

Moreover, I was also thinking of starting something from scratch based on the two libraries I pointed to here, but I am not confident that I could bring it to a level good enough to contribute to OpenCV's main source code. So if someone is interested, we could arrange a group that works on such a task in our spare time and contributes it to OpenCV once it reaches a good level.

So I would like to hear your opinions and, why not, have you enlighten and advise me regarding what you are using or how you do it in general, or whether you would be interested in creating something together if there is a need for it. Be aware, though, that I am not talking about offline plotting, since that is quite easy to manage and there are plenty of tools out there that do the job more than well ;-).
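
To make concrete what I mean by real-time plotting, here is the kind of minimal hand-rolled sketch I keep ending up with (purely illustrative, nowhere near library quality):

    #include <opencv2/opencv.hpp>
    #include <algorithm>
    #include <vector>

    // Draw a 1-D signal as a polyline on a plain canvas and show it.
    // No axes, ticks or legends -- just the raw curve.
    void draw_plot(const std::vector<float>& values, const std::string& win)
    {
        if (values.size() < 2) return;

        const int W = 640, H = 240;
        cv::Mat canvas(H, W, CV_8UC3, cv::Scalar::all(255));

        const float vmin = *std::min_element(values.begin(), values.end());
        const float vmax = *std::max_element(values.begin(), values.end());
        const float range = std::max(vmax - vmin, 1e-6f); // avoid divide by zero

        std::vector<cv::Point> pts;
        for (size_t i = 0; i < values.size(); ++i)
        {
            int x = static_cast<int>(i * (W - 1) / (values.size() - 1));
            int y = H - 1 - static_cast<int>((values[i] - vmin) / range * (H - 1));
            pts.emplace_back(x, y);
        }
        cv::polylines(canvas, pts, false, cv::Scalar(255, 0, 0), 1, cv::LINE_AA);
        cv::imshow(win, canvas);
    }

Calling draw_plot(samples, "signal") once per processed frame, followed by waitKey(1), gives a crude live plot; everything beyond that (axes, ticks, legends) is exactly the part I would like a proper library for.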

2017-09-23 01:13:32 -0600 received badge  Notable Question (source)
2017-09-14 13:38:59 -0600 marked best answer isotropic non-linear diffusion smoothing (Perona-Malik)

Now, in addition to my previous thread regarding isotropic linear diffusion smoothing, I want to solve the non-linear version of it based on the Perona-Malik approach. Again we have the following formula:

image description(1)

where D is not a constant but varies across the image domain. A popular choice is the Perona-Malik diffusivity, which can be given as:

image description(2)

where λ is the contrast parameter and ∇u(x, y) is the gradient of the image at pixel (x, y). It seems that the KAZE features implementation embeds this functionality, so I had a look at the source code. Specifically, formula (2) is implemented with the following function:

/* ************************************************************************* */
/**
 * @brief This function computes the Perona and Malik conductivity coefficient g2
 * g2 = 1 / (1 + dL^2 / k^2)
 * @Param Lx First order image derivative in X-direction (horizontal)
 * @Param Ly First order image derivative in Y-direction (vertical)
 * @Param dst Output image
 * @Param k Contrast factor parameter
 */
void pm_g2(const cv::Mat &Lx, const cv::Mat& Ly, cv::Mat& dst, float k) {

    Size sz = Lx.size();
    dst.create(sz, Lx.type());
    float k2inv = 1.0f / (k * k);

    for(int y = 0; y < sz.height; y++) {
        const float *Lx_row = Lx.ptr<float>(y);
        const float *Ly_row = Ly.ptr<float>(y);
        float* dst_row = dst.ptr<float>(y);
        for(int x = 0; x < sz.width; x++) {
            dst_row[x] = 1.0f / (1.0f + ((Lx_row[x] * Lx_row[x] + Ly_row[x] * Ly_row[x]) * k2inv));
        }
    }
}

However, when I try to use it together with what @LBerger came up with in the other thread, I cannot get the correct output. What am I missing this time?

I see that the author applies some scalar non-linear diffusion step functionality whose purpose I do not really understand; here is the function. I tried what @Guanta suggested in the other thread:

I think in the evolution_ - Vector (KAZEFeatures.h) are the evolutions over time, so if you'd take the last element and then from that element's struct (TEvolution.h) the Lsmooth Mat should be the image which has been smoothed. To create the evolution_-vector you need to call Create_Nonlinear_Scale_Space() .

but it did not give a good output :-(.
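
For reference, this is the kind of explicit update step I imagine the diffusivity feeding into (my own sketch of one forward-Euler iteration of formula (1), not KAZE's actual nld_step_scalar function):

    #include <opencv2/opencv.hpp>

    // One explicit Perona-Malik step: u += dt * div(g * grad(u)), with g
    // computed as in pm_g2. dt must stay small (<= 0.25) to remain stable.
    void pm_step(cv::Mat& u /* CV_32F */, float k, float dt)
    {
        cv::Mat Lx, Ly;
        cv::Sobel(u, Lx, CV_32F, 1, 0, 1, 0.5); // plain central difference in x
        cv::Sobel(u, Ly, CV_32F, 0, 1, 1, 0.5); // plain central difference in y

        // conductivity g2 = 1 / (1 + (Lx^2 + Ly^2) / k^2), as in pm_g2
        cv::Mat grad2 = Lx.mul(Lx) + Ly.mul(Ly);
        cv::Mat denom = 1.0f + grad2 / (k * k);
        cv::Mat g;
        cv::divide(1.0, denom, g);

        // divergence of the flux (g*Lx, g*Ly)
        cv::Mat jx = g.mul(Lx), jy = g.mul(Ly);
        cv::Mat djx, djy;
        cv::Sobel(jx, djx, CV_32F, 1, 0, 1, 0.5);
        cv::Sobel(jy, djy, CV_32F, 0, 1, 1, 0.5);

        u += dt * (djx + djy);
    }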

2017-09-06 09:57:10 -0600 received badge  Popular Question (source)
2017-09-04 13:17:10 -0600 marked best answer grab and process multiple frames in different threads - opencv semi-related

Well, as the title says, I would like to grab and process multiple frames in different threads by using one or more circular buffers, and I hope that you can point me to what is better. For grabbing frames I am not using the VideoCapture() class from OpenCV but the libfreenect2 library and its corresponding listener, since I am working with the Kinect One sensor, which is not compatible with the VideoCapture() + OpenNI2 functionality yet. My intention is to have one thread grabbing frames continuously, in order not to hurt the framerate that I can get from the Kinect sensor. Here I might also add a viewer in order to have a live monitor of what is happening (I do not know, though, how feasible this is and how it would affect the framerate), and then have another thread where I would do all the processing. From the libfreenect2 listener I can obtain multiple views from the sensor, so at the same time I can have a frame from the rgb camera, one from the ir camera, and one from the depth sensor, and with some processing I can also obtain an rgbd frame.

My question now is how to make these frames shareable between the two threads. Having a look at the questions Time delay in VideoCapture opencv due to capture buffer and waitKey(1) timing issues causing frame rate slow down - fix?, I think that a good approach would be to go with two threads and a circular buffer. However, what is more logical: to have a separate circular buffer for each view, or one circular buffer that holds a container (e.g. an STL vector<>) with the frames from all views? A sketch of the second option is given below the image.

image description
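
To make the second option concrete, here is a rough sketch of the kind of shared structure I mean (the names FrameSet and BoundedBuffer are mine, purely illustrative):

    #include <opencv2/opencv.hpp>
    #include <condition_variable>
    #include <mutex>
    #include <queue>
    #include <vector>

    // One entry holds all views grabbed at the same instant (rgb, ir, depth, ...).
    using FrameSet = std::vector<cv::Mat>;

    // Bounded, mutex-protected FIFO: the grabber pushes (dropping the oldest
    // entry when full, so it never blocks), the processor pops (blocking).
    class BoundedBuffer
    {
    public:
        explicit BoundedBuffer(size_t cap) : cap_(cap) {}

        void push(FrameSet frames)
        {
            std::lock_guard<std::mutex> lk(mtx_);
            if (q_.size() == cap_) q_.pop(); // keep the grabber real-time
            q_.push(std::move(frames));
            cv_.notify_one();
        }

        FrameSet pop()
        {
            std::unique_lock<std::mutex> lk(mtx_);
            cv_.wait(lk, [this] { return !q_.empty(); });
            FrameSet frames = std::move(q_.front());
            q_.pop();
            return frames;
        }

    private:
        size_t cap_;
        std::queue<FrameSet> q_;
        std::mutex mtx_;
        std::condition_variable cv_;
    };

The grabber thread would push every new frame set and the processing thread would pop at its own pace, so a slow processing step costs only dropped frames, never grabber stalls.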

At the moment I am using the second approach, with the vector, adapting @pklab's solution from the first link I posted above. The code is below:

//! [headers]
#include <iostream>
#include <stdio.h>
#include <iomanip>
#include <tchar.h>
#include <signal.h>
#include <opencv2/opencv.hpp>

#include <thread>
#include <mutex>
#include <queue>
#include <atomic>

#include <libfreenect2/libfreenect2.hpp>
#include <libfreenect2/frame_listener_impl.h>
#include <libfreenect2/registration.h>
#include <libfreenect2/packet_pipeline.h>
#include <libfreenect2/logger.h>
//! [headers]

using namespace std;
using namespace cv;

enum Process { cl, gl, cpu };

std::queue<vector<cv::Mat> > buffer;
std::mutex mtxCam;
std::atomic<bool> grabOn; // this is lock free

void grabber()
{
    //! [context]
    libfreenect2::Freenect2 freenect2;
    libfreenect2::Freenect2Device *dev = nullptr;
    libfreenect2::PacketPipeline *pipeline = nullptr;
    //! [context]

    //! [discovery]
    if(freenect2.enumerateDevices() == 0)
    {
        std::cout << "no device connected!" << endl;
        exit(EXIT_FAILURE);
//        return -1;
    }

    string serial = freenect2.getDefaultDeviceSerialNumber();
//    string serial = "014947350647";

    std::cout << "SERIAL: " << serial << endl;

    //! [discovery]

    int depthProcessor = Process::cl;

    if(depthProcessor == Process::cpu)
    {
        if(!pipeline)
            //! [pipeline]
            pipeline = new libfreenect2::CpuPacketPipeline();
            //! [pipeline]
    } else if (depthProcessor == Process::gl) {
#ifdef LIBFREENECT2_WITH_OPENGL_SUPPORT
        if(!pipeline)
            pipeline = new libfreenect2::OpenGLPacketPipeline();
#else
        std::cout << "OpenGL pipeline is not supported!" << std::endl;
#endif
    } else if (depthProcessor == Process::cl) {
#ifdef LIBFREENECT2_WITH_OPENCL_SUPPORT
        if(!pipeline)
            pipeline = new libfreenect2::OpenCLPacketPipeline();
#else
        std::cout << "OpenCL pipeline is not supported!" << std::endl;
#endif
    }

    if(pipeline)
    {
        //! [open]
        dev = freenect2.openDevice(serial, pipeline);
        //! [open]
    } else {
        dev = freenect2 ...
(more)
2017-08-24 02:33:31 -0600 received badge  Nice Answer (source)