
VanGog's profile - activity

2021-05-13 07:02:17 -0600 received badge  Famous Question (source)
2020-09-14 03:12:26 -0600 received badge  Popular Question (source)
2020-05-04 20:11:47 -0600 received badge  Popular Question (source)
2020-02-03 10:43:21 -0600 received badge  Famous Question (source)
2019-09-11 22:05:12 -0600 received badge  Nice Question (source)
2019-04-15 12:05:42 -0600 received badge  Famous Question (source)
2018-11-26 01:41:39 -0600 received badge  Notable Question (source)
2018-10-28 15:42:43 -0600 received badge  Notable Question (source)
2018-08-20 11:01:42 -0600 received badge  Notable Question (source)
2018-06-12 13:39:20 -0600 received badge  Popular Question (source)
2018-01-12 20:56:10 -0600 received badge  Popular Question (source)
2018-01-03 15:14:54 -0600 received badge  Popular Question (source)
2016-07-01 04:21:42 -0600 asked a question difference between Mat and OutputArray

I am trying this

void copy()
{
    Rect roi;
    auto primarySegment = main_layers.begin();
    for (int c = 0; c < primaryKernelsLoad; c++)
    {
        if (heightPriority)
        {
            roi = cv::Rect(0, c, size.width, segment1Size);
            source(roi).copyTo(&(*primarySegment));
            auto nx = std::next(primarySegment, 2);
        }
    }
}

error: error C2664: 'void cv::Mat::copyTo(cv::OutputArray) const' : cannot convert parameter 1 from 'cv::Mat *' to 'cv::OutputArray'

I cannot find out what the difference is between cv::OutputArray and cv::Mat. How do I get a cv::OutputArray from a cv::Mat?
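
(For reference, a minimal sketch, not part of the original question: cv::OutputArray is essentially a reference to the proxy class cv::_OutputArray, which is constructed implicitly from a cv::Mat, so copyTo() takes the Mat itself rather than a pointer to it. The std::list<cv::Mat> below is a hypothetical stand-in for main_layers.)

#include <opencv2/core/core.hpp>
#include <list>

int main()
{
    cv::Mat source = cv::Mat::ones(8, 8, CV_8UC1);
    std::list<cv::Mat> main_layers(3);            // hypothetical container of segment images
    auto primarySegment = main_layers.begin();

    cv::Rect roi(0, 0, source.cols, 2);
    source(roi).copyTo(*primarySegment);          // Mat& converts implicitly to OutputArray
    // source(roi).copyTo(&(*primarySegment));    // C2664: Mat* does not convert to OutputArray
    return 0;
}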

2016-06-30 10:31:52 -0600 commented question Should we use filters on colored images? Convolving filter performance.

But you could merge the channels and then you would get an RGB image which is blurred, wouldn't you? Would the results be different?

2016-06-28 05:45:46 -0600 commented question Opencv and TBB in image processing

There is an interesting article, "The Foundations for Scalable Multi-core Software in Intel® Threading Building Blocks" http://citeseerx.ist.psu.edu/viewdoc/..., with a Fibonacci-number example similar to the one in the example code shipped with the TBB package; see page 54. It's a long article, but it contains valuable examples, explanations, diagrams and benchmarks. Link to LBerger's note: http://answers.opencv.org/question/15...
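
For reference, a minimal sketch of the same recursive Fibonacci idea expressed with tbb::parallel_invoke (my own illustration, not the article's code; a real version would switch to a serial computation below some cutoff instead of spawning tasks all the way down):

#include <tbb/parallel_invoke.h>
#include <iostream>

long fib(int n)
{
    if (n < 2)
        return n;                       // trivially small: no point in spawning more tasks
    long a = 0, b = 0;
    tbb::parallel_invoke([&] { a = fib(n - 1); },
                         [&] { b = fib(n - 2); });
    return a + b;
}

int main()
{
    std::cout << fib(20) << std::endl;  // prints 6765
    return 0;
}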

2016-06-27 14:43:40 -0600 commented answer Opencv and TBB in image processing

What I am looking for is a loop with a listener function which gets a signal when some thread function has finished. I cannot imagine how it works. There should be some feeding function running in a loop in memory, which creates a queue when it is needed, or passes images from the queue to the thread function. I just cannot imagine how to manage the feeding process. I will create a question for this tomorrow, because I have just compiled OpenCV with TBB and I am completely new to this. So I first need to try your code, and then I will ask.
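
For reference, a minimal sketch of the kind of feeding loop meant here, using tbb::parallel_pipeline (an assumption on my part, with classic TBB naming and a dummy image source standing in for real input): the library hands the next image to whichever worker is free, so no explicit listener is needed.

#include <tbb/pipeline.h>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>

int main()
{
    int produced = 0;
    const int total = 10;

    tbb::parallel_pipeline(
        4,                                              // at most 4 images in flight
        tbb::make_filter<void, cv::Mat>(
            tbb::filter::serial_in_order,               // the "feeding" stage
            [&](tbb::flow_control& fc) -> cv::Mat
            {
                if (produced == total) { fc.stop(); return cv::Mat(); }
                return cv::Mat(512, 512, CV_8UC3, cv::Scalar(produced++, 0, 0));
            })
      & tbb::make_filter<cv::Mat, void>(
            tbb::filter::parallel,                      // runs on whichever worker is free
            [](cv::Mat img)
            {
                cv::GaussianBlur(img, img, cv::Size(3, 3), 0);
            }));

    std::cout << "processed " << total << " images" << std::endl;
    return 0;
}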

2016-06-27 01:38:15 -0600 commented answer Opencv and TBB in image processing

Thanks for your answer, it's great to see an example. But how can you detect that some thread has finished its job and should process the next image?

2016-06-26 08:12:30 -0600 commented question HSV colorspace clarification

@LorenaGdL: It should be possible to force a range of 256 based on this expression: int hrange = depth == CV_32F ? 360 : code == CV_BGR2HSV || code == CV_RGB2HSV || code == CV_BGR2HLS || code == CV_RGB2HLS ? 180 : 256; (code is the 3rd parameter of the cvtColor function).

2016-06-26 05:40:54 -0600 asked a question Compiling OpenCV with Visual Studio 2010

I am using Visual Studio 2010.

When I want to compile OpenCV with TBB, should I add the TBB libraries to the project, or should the settings already be prepared in the project files that CMake generates? When I try to compile OpenCV I get lots of messages complaining about missing libraries, but I cannot change the projects. I tried to right-click and add references, but that option is not enabled; I can only view references, not add them. I also cannot find the options to set the compiler. I can open the Property Manager and right-click on the properties of the Microsoft.Cpp.Win32.user sheet, but there are no compiler settings there.

When I try to compile the core project I get this message: LINK : fatal error LNK1104: cannot open file 'tbb_debug.lib'

Update: I opened the BUILD_ALL project and checked the project settings of the core project; there it is possible to add the libraries to the linker. I will try it, but I hope I will not need to do this for every project.

2016-06-26 05:03:49 -0600 commented answer Problem to set Visual Studio for OpenCV

I solved it. The path needs to use forward slashes (/), not backslashes (\).

2016-06-26 04:15:46 -0600 marked best answer Problem to set Visual Studio for OpenCV

I have followed the OpenCV guides for installing on Windows and setting up Visual Studio, but I am not sure I have understood all the steps correctly. My folder structure is different.

My system is Windows XP, the IDE is Visual Studio 2010, and I downloaded and installed OpenCV 3.1.

@echo %OPENCV_DIR%
prints
P:\PROGRAMY\programming\OpenCV

Note: I have added it both as a system environment variable (and rebooted the PC) and as a user environment variable (and re-logged the user, no reboot).

here is my directory structure:

>p:
>cd \PROGRAMY\programming\OpenCV
>dir /b /o:d
build 3.1
sources

>cd \PROGRAMY\programming\OpenCV\build 3.1
>dir /b /o:d
bin
include
etc
lib

I moved these folders from the build folder of the installation. I use 32-bit Windows, and I skipped Java, Perl and the 64-bit things. Note that there are bin/opencv_ffmpeg310.dll and lib/opencv_ffmpeg310.dll (files of the same size); these are the only DLL/LIB files I have. I expect this to be the main OpenCV library, right?

P:\PROGRAMY\programming\OpenCV\build 3.1\include\ has two folders, opencv and opencv2. It is not clear which folder should be included, because only one folder is shown in the documentation.

>cd \PROGRAMY\programming\OpenCV\sources
>dir /b /o:d
samples
platforms
modules
include
doc
data
cmake
apps
3rdparty

Here are all the sources.

In my project I have set up Release and Debug profiles:

Linker -> General -> Additional Library Directories: $(OPENCV_DIR)\build 3.1\lib

Linker -> Input: no change here. These are the standard Windows libraries: kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies)

I tried to add one library, but I found that the .lib files are not part of the distribution (sad). I have read that some libraries need to be added, but I found no *.lib files in the folders.

Now I have created the file main.cpp which includes these files:

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>

Then I build, and I get this error:

1>------ Build started: Project: OPEN CL, Configuration: Debug Win32 ------
1>LINK : error LNK2001: unresolved external symbol _mainCRTStartup
1>U:\C++\openCV\test 00\Debug\OPEN CL.exe : fatal error LNK1120: 1 unresolved externals
========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========

Note that if I include both directories, include/opencv and include/opencv2, it still generates the same error.

What am I doing wrong during installation that I am not able to compile the program from the guide? Do I need to link some .lib files?

Links: http://docs.opencv.org/2.4/doc/tutori... http://docs.opencv.org/2.4/doc/tutori...

Edit: After creating a new, clean project and setting the include directories I can compile a simple program. But when I try to include the OpenCV core it prints an error. Do I need to link any library? Which file? I cannot link opencv_ffmpeg310.dll; it prints an error that the file is corrupt (there are two files like this, both error ... (more)
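
(For reference, a minimal sketch, not part of the original question: a complete test program with a main(), reading a hypothetical image file. Besides the include directories, the project must link against the OpenCV import libraries under Linker -> Input -> Additional Dependencies, e.g. opencv_core310d.lib, opencv_imgcodecs310d.lib and opencv_highgui310d.lib for a Debug build of 3.1, or a single opencv_world310d.lib if the "world" module was built; the exact .lib names and folder depend on how OpenCV was built.)

#include <opencv2/core/core.hpp>
#include <opencv2/imgcodecs.hpp>          // cv::imread lives here in OpenCV 3.x
#include <opencv2/highgui/highgui.hpp>
#include <iostream>

int main()
{
    cv::Mat img = cv::imread("lena.jpg"); // hypothetical test image
    if (img.empty())
    {
        std::cout << "could not load image" << std::endl;
        return 1;
    }
    cv::imshow("test", img);
    cv::waitKey(0);
    return 0;
}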

2016-06-26 04:15:46 -0600 commented answer Problem to set Visual Studio for OpenCV

@pklab: Now I want to recompile OpenCV with TBB and I am stuck on this problem. What is opencv_config_file_include_dir? Should it be the destination directory or the source directory? When I start the cmake GUI, the destination directory is already set (the last settings used), and the path is U:/OpenCV Build/3.1 win32 + contrib ... The error I get is: CMake Error at cmake/OpenCVModule.cmake:260 (foreach) - syntax error during parsing string: U:/opencv-master/modules;EXTRA;U:\opencv_contrib-master\modules ... so I don't know where it takes the EXTRA word from.

2016-06-24 07:05:03 -0600 asked a question Should we use filters on colored images? Convolving filter performance.

I noticed there is a great difference in performance between using filters on grayscale images and on colored images. This is an example code:

#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
#include <stdlib.h>
#include <stdio.h>

using namespace cv;

int main( int argc, char** argv )
{
    Mat src, gray, dst, abs_dst;
    src = imread( "../../data/lena.jpg" );
    if ( src.empty() )
        return -1;
    /// Remove noise by blurring with a Gaussian filter

    double t = (double) cv::getTickCount();
    GaussianBlur( src, dst, Size(3,3), 0, 0, BORDER_DEFAULT );
    cvtColor( dst, gray, CV_RGB2GRAY );
    t = (double) 1000 * (cv::getTickCount() - t) / cv::getTickFrequency();
    std::cout << "convertion + blur time: " << t << "ms" << std::endl;
    imshow("blured->gray", gray);
    waitKey(0);

    t = (double) cv::getTickCount();
    cvtColor( dst, gray, CV_RGB2GRAY );
    GaussianBlur( gray, gray, Size(3,3), 0, 0, BORDER_DEFAULT );
    t = (double) 1000 * (cv::getTickCount() - t) / cv::getTickFrequency();
    std::cout << "convertion + blur time: " << t << "ms" << std::endl;
    imshow("gray->blured", gray);
    waitKey(0);
    destroyWindow("blured->gray");
    destroyWindow("gray->blured");

    /// Apply Laplace function
    t = (double) cv::getTickCount();
    Laplacian( src, dst, CV_16S, 3, 1, 0, BORDER_DEFAULT );
    convertScaleAbs( dst, abs_dst );
    t = (double) 1000 * (cv::getTickCount() - t) / cv::getTickFrequency();
    std::cout << "Laplacian time: " << t << "ms" << std::endl;
    imshow( "Laplacian", abs_dst );

    waitKey(0);
    return 0;
}

So I wonder: should we apply filters to RGB images at all? I have seen various effects demonstrated on grayscale images, and the color image is always converted to grayscale first. But if it is faster to apply the filter to grayscale, could we just apply the filter to the grayscale image and then use some trick/effect that makes a similar change (blur/smoothing/emboss/edge effects) to the colored image? I imagine it should be many times faster than applying kernels to color images; something like using the grayscale image as a kind of "mask" that changes the look of the color image.
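
(For reference, a minimal sketch, not part of the original question: one common compromise is to convert to a luma/chroma space, run the kernel on the luminance plane only, and merge back, so the filter touches one channel instead of three.)

#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <vector>

int main()
{
    cv::Mat src = cv::imread("../../data/lena.jpg");
    if (src.empty())
        return -1;

    cv::Mat ycrcb;
    cv::cvtColor(src, ycrcb, CV_BGR2YCrCb);                      // imread() delivers BGR
    std::vector<cv::Mat> planes;
    cv::split(ycrcb, planes);
    cv::GaussianBlur(planes[0], planes[0], cv::Size(3, 3), 0);   // blur the luminance plane only
    cv::merge(planes, ycrcb);

    cv::Mat dst;
    cv::cvtColor(ycrcb, dst, CV_YCrCb2BGR);
    cv::imshow("luma-only blur", dst);
    cv::waitKey(0);
    return 0;
}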

2016-06-24 06:56:01 -0600 commented question HSV colorspace clarification

Thanks for the explanation! So what RGB image type exactly should I use so as not to lose information? CV_8UC1 and CV_8UC3 will cause data loss. Do I need to use floats? So basically I can use HSV, but I need a wider data type. Which one?

2016-06-24 03:50:49 -0600 asked a question HSV colorspace clarification

I read that OpenCV uses 0-179° for Hue. In this link the best-voted answer says the docs state that OpenCV uses 0-360°, but I did not find the paragraph stating that. What is the truth? I guess the first. But the main reason I created this question: is it safe to use this colorspace, or can I lose information? I often use Photoshop, where the range is 360°, on RGB or CMYK images. So if I read an image saved using an iRGB ICC profile, open it in OpenCV, convert it to HSL, work with it, convert it back to RGB and save it, have I lost information? And if I measure color ranges in Photoshop and then use those color ranges on an HSV image (converted from RGB), do I lose information? And if I want to work with 360 degrees, is there any way to convert to some different data type or colorspace which will keep all the data and not lose anything? I am used to working with the HSV colorspace because it is the most "user friendly" colorspace; it is quite simple to change lightness, hue or saturation in it, which is why I am worried about inaccuracies.
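
(For reference, a minimal sketch, not part of the original question: with 8-bit images cvtColor scales hue to 0-179 so it fits in a byte, while with CV_32F input in the 0-1 range hue keeps the full 0-360 range, so converting to float first avoids that particular loss.)

#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    cv::Mat bgr = cv::imread("../../data/lena.jpg");
    if (bgr.empty())
        return -1;

    cv::Mat hsv8u;
    cv::cvtColor(bgr, hsv8u, CV_BGR2HSV);        // 8-bit: H in 0..179, S and V in 0..255

    cv::Mat bgr32f, hsv32f;
    bgr.convertTo(bgr32f, CV_32F, 1.0 / 255.0);  // float image with values in 0..1
    cv::cvtColor(bgr32f, hsv32f, CV_BGR2HSV);    // H in 0..360, S and V in 0..1
    return 0;
}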

2016-06-23 21:25:57 -0600 commented answer Can anyone confirm the range of various color spaces such as LAB, LUV and YUV in OpenCV ?

great answer, thanks

2016-06-23 11:32:36 -0600 commented answer How to draw gradient?

I will use float; this was just for testing purposes. I will find another way to do it more exactly, using 360 as the maximum value.

2016-06-23 10:23:59 -0600 commented answer How to draw gradient?

Yeah, but I need mathematical accuracy. If you use an image editor you can see the image is blurred around 358-359°, and that is a problem. I can correct it in image editing software, but that would be useless if I want to use it as a built-in solution. I have done the recalculation for the smallest image (my question has been updated). It is just not perfect, because I used floats and divisions of floats, so it was a bit complicated.

2016-06-23 07:50:50 -0600 marked best answer How to draw gradient?

How can I draw a gradient like this?

(image: example of the gradient I want to draw)

I want to draw a gradient of a circle.

First idea: I could create a white line and an empty image I, then loop with an angle from 0 to 89, and in every iteration do I++; and angle++; and rotate the line around a fixed origin at x=radius, y=0. This would create one image of the gradient. Then I would need to copy that image 3 times in a loop, rotating it by +90° in every iteration.

Now I ask which functions to use to copy the pixels from the original line into the image with rotation, and how to do the same with an image. It is basically the same; the difference is only in the dimensions: the line is radius x 1, while the image dimensions are radius*2 x radius*2.

What I tried after Lorena pointed me to try linearPolar:

int radius = 4;
int side = 2*radius;
int size = 1 + 2*side;
Mat I = Mat::zeros(size, size, CV_8UC1);
Point center; 
center.x = 2*radius+1 -1; center.y = 2*radius+1 -1;
int sigma = radius%2==0 ? radius+1 : radius;
cv::Size kernelSize(sigma, sigma);

Mat L = Mat::zeros(1, center.x, CV_8UC1);
I = Mat::zeros(size, size, CV_8UC1);
Point a,b;

L = Mat::zeros(center.x, 1, CV_8UC1);
a.x=0; a.y = 0; b.x=0; b.y = radius-1;
line( L, a, b, cv::Scalar( 255, 255, 255 ), 1, 8 );
cv::GaussianBlur( L, L, kernelSize, sigma );

imshow("blured line",L);
waitKey(0);
cv::Size dsize= cv::Size(radius*10, radius*10);

resize(L, I, dsize );
Point2f  fcenter; 
fcenter.x = (float) radius/16 ;
fcenter.y = (float) radius/16 ;
cv::linearPolar(I, I, fcenter, radius, INTER_LINEAR );
imshow("linearPolar",I);
waitKey(0);

The resulting image contains a gradient which is horizontal instead of circular.
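
(For reference, a minimal sketch, not part of the original question: a direct way to get a circular angular gradient is to compute each pixel's angle with atan2, instead of unrolling a blurred line through linearPolar.)

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <cmath>

int main()
{
    const int radius = 100;
    const int side = 2 * radius + 1;
    cv::Mat I = cv::Mat::zeros(side, side, CV_8UC1);

    for (int y = 0; y < side; ++y)
        for (int x = 0; x < side; ++x)
        {
            const double dx = x - radius;
            const double dy = y - radius;
            if (dx * dx + dy * dy > radius * radius)
                continue;                                // keep the gradient inside the circle
            const double angle = std::atan2(dy, dx);     // -pi .. +pi
            I.at<uchar>(y, x) =
                cv::saturate_cast<uchar>(255.0 * (angle + CV_PI) / (2.0 * CV_PI));
        }

    cv::imshow("angular gradient", I);
    cv::waitKey(0);
    return 0;
}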

Edit: New code (radius 25) - same problem:

// Mat::zeros( 5, 5, CV_8UC3 );
int radius = 25;
int side = 2*radius;
int size = 1 + 2*side;
Mat I = Mat::zeros(size, size, CV_8UC1);
Point center; 
center.x = 2*radius+1 -1; center.y = 2*radius+1 -1;
/*
circle( I,
    center,
    radius,
    cv::Scalar( 255, 255, 255 ),
    -1, // when thickness is negative, filled circle is drawn
    8 );
imshow("circle",I);
waitKey(0);*/
int sigma = radius%2==0 ? radius+1 : radius;
cv::Size kernelSize(sigma, sigma);
/*
cv::GaussianBlur( I, I, kernelSize, sigma );
imshow("blured circle",I);
waitKey(0);*/

/** ANGLE GRADIENT **/
Mat L = Mat::zeros(1, center.x, CV_8UC1);
I = Mat::zeros(size, size, CV_8UC1);

Point a,b; 
/*
a.x=0; a.y = 0; b.x=radius; b.y = 0;
int c = 255;
line( I, a, b, cv::Scalar( c ), 1, 8 );
*/

/** ANGLE GRADIENT **/
L = Mat::zeros(center.x, 1, CV_8UC1);
a.x=0; a.y = 0; b.x=0; b.y = radius-1;
line( L, a, b, cv::Scalar( 255, 255, 255 ), 1, 8 );
cv::GaussianBlur( L, L, kernelSize, sigma );

imshow("blured line",L);
waitKey(0);
cv::Size dsize= cv::Size(size, size);

resize(L, I, dsize );
Point2f  fcenter; 
fcenter.x = (float) radius ;
fcenter.y = (float) radius ...
(more)