# kovand11's profile - activity

2016-04-15 06:56:29 -0500 asked a question cv::Mat Encoding (RGB vs BGR)

How is the encoding information stored in a cv::Mat? For example, how can it be known whether it is an RGB or a BGR image? (there are other possibilities as well) As far as I know, only the depth and channel count are stored (like 32FC1).
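As a side note on what the type field actually holds: it packs only the bit depth and the channel count, never the color order, which is purely a convention of whoever filled the matrix (OpenCV's own imread and VideoCapture produce BGR). A small illustrative sketch of the packing, modeled on the CV_MAKETYPE definition in core (the `k...` names are stand-ins, not real OpenCV identifiers):

```cpp
// Illustrative copies of the core constants: CV_CN_SHIFT and the depth mask.
constexpr int kCnShift = 3;
constexpr int kDepthMask = (1 << kCnShift) - 1;

// Packs bit depth and channel count into one int, as CV_MAKETYPE does.
// Note that color order (RGB vs BGR) has no representation here at all.
constexpr int make_type(int depth, int cn) {
    return (depth & kDepthMask) + ((cn - 1) << kCnShift);
}

// Depth codes as defined in core: CV_8U = 0 ... CV_32F = 5 (illustrative copies).
constexpr int kDepth8U = 0;
constexpr int kDepth32F = 5;
```

So CV_8UC3 and a BGR image of the same size are indistinguishable at the type level; the channel meaning has to be tracked by the application.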

2015-07-24 04:16:29 -0500 asked a question Get [R|t] of two Kinect cameras

My aim is to determine the correct transformation between the coordinate systems of two Kinect cameras, based on chessboard patterns. (the base unit would be the meter)

I am basically using the stereo_calib.cpp sample. (with the chessboard unit set correctly)

Using 5 pairs, the reprojection error is 3.32.

R: !!opencv-matrix
rows: 3
cols: 3
dt: d
data: [ -1.2316533437904904e-002, 5.2568656248225909e-001,
-8.5058917288527658e-001, -5.7551867406562152e-001,
6.9190195114274877e-001, 4.3594718235883412e-001,
8.1769588405826155e-001, 4.9489931100219109e-001,
2.9402059989690299e-001 ]
T: !!opencv-matrix
rows: 3
cols: 1
dt: d
data: [ 1.2315362401819163e+000, -9.6293867563953039e-001,
1.5791089847312716e+000 ]


Having a valid point pair, P_k1 = {0.01334677, -0.3134326, 2.604} and P_k2 = {-0.9516979, -0.3950531, 2.483}, the given R and T do not seem to be right.

I am assuming the P_k2 = R*P_k1 + T formula.

Any idea where the error comes from, or how to improve my results?
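To spot-check a point pair against the calibration output, the assumed formula can be evaluated directly; a plain-C++ stand-in for the cv::Mat arithmetic:

```cpp
#include <array>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<Vec3, 3>;

// Evaluates p2 = R * p1 + T, the model assumed above; useful for checking
// whether a measured point pair is consistent with the stereo calibration.
Vec3 transform(const Mat3& R, const Vec3& T, const Vec3& p) {
    Vec3 out = T;  // start from the translation
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            out[i] += R[i][j] * p[j];
    return out;
}
```

If the transformed P_k1 lands far from P_k2, the usual suspects are a wrong chessboard square size, mixed-up camera order (R/T map the other direction), or simply too few image pairs (5 pairs with a 3.32 reprojection error suggests the calibration itself is weak).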

2015-07-13 06:03:12 -0500 asked a question HoughCircles: Getting the biggest one

Using the built-in circle detection algorithm, I have trouble finding the biggest circle. For example:

HoughCircles(src_gray, circles, CV_HOUGH_GRADIENT, 1, src_gray.rows / 8, 200, 20, 10, 50);


I only draw the best detection, but it seems to be the average of the actually detected circles.

How can I detect the biggest one?
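One approach: HoughCircles fills `circles` with (x, y, radius) triples that are, as far as I know, ordered by accumulator vote rather than by size, so `circles[0]` is the strongest detection, not the biggest; the "averaging" effect is typically the minDist parameter merging nearby centers. Selecting by radius is then an explicit max over the third component; a plain-C++ sketch with float triples standing in for cv::Vec3f:

```cpp
#include <algorithm>
#include <array>
#include <vector>

// (x, y, radius) triples, mirroring the cv::Vec3f layout HoughCircles fills in.
using Circle = std::array<float, 3>;

// Return the circle with the largest radius. Assumes a non-empty input;
// circles[0] from HoughCircles is the strongest vote, not the biggest circle.
Circle biggest(const std::vector<Circle>& circles) {
    return *std::max_element(circles.begin(), circles.end(),
        [](const Circle& a, const Circle& b) { return a[2] < b[2]; });
}
```

With OpenCV this is the same `std::max_element` call over the `std::vector<cv::Vec3f>` output, comparing the `[2]` component.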

2015-01-16 03:58:16 -0500 commented question Normalized standard deviation

In the meantime, I looked up the exact definition, which I hadn't known before; that's why I asked the question. In fact, it is only a division by the squared mean.

2015-01-16 02:39:31 -0500 asked a question Normalized standard deviation

What is the easiest way to calculate normalized standard deviation for a certain region of an image?
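For reference, the usual definition is the coefficient of variation, stddev / mean; with OpenCV both values come from a single cv::meanStdDev call on the region of interest (a cv::Mat ROI). The computation itself, on a plain buffer:

```cpp
#include <cmath>
#include <vector>

// Coefficient of variation of a sample: population stddev divided by mean.
// With OpenCV: cv::meanStdDev(roi, mean, stddev); then stddev[0] / mean[0].
double normalized_stddev(const std::vector<double>& v) {
    double mean = 0.0;
    for (double x : v) mean += x;
    mean /= v.size();
    double var = 0.0;
    for (double x : v) var += (x - mean) * (x - mean);
    var /= v.size();
    return std::sqrt(var) / mean;
}
```

For the variance-based variant mentioned in the comment above, the division is by the squared mean instead (var / (mean * mean)).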

2015-01-08 09:51:14 -0500 asked a question HDR: Precalibrate

Is it possible to generate the camera response matrix in advance? (for a given sensor) For example, can I use different exposure times when generating the response matrix and when merging the HDR image?

2014-11-02 13:10:06 -0500 asked a question Pixelwise subtract, with negative numbers

I would like to implement the following pixelwise operation between two images:

• Subtract the pixels (RGB) (important: if the result is negative, keep it)
• Convert to absolute value (RGB)
• Somehow merge the three channels (for starters, I use cv::COLOR_RGB2GRAY, which is a weighted add)

But the problem is that cv::subtract() and operator- on cv::Mat do not keep negative values; they saturate to 0 instead.

How can I easily implement the behavior I need?
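The saturation comes from the 8-bit destination type, not from the subtraction itself. Two OpenCV-level options are cv::absdiff (subtract and take the absolute value in one call) or cv::subtract with a signed output depth such as CV_16S, which keeps the sign for later processing. The per-pixel idea in plain C++:

```cpp
#include <cstdint>

// Compute a - b in a wider signed type so the negative difference survives,
// then take the absolute value. cv::absdiff does exactly this in one call;
// cv::subtract(a, b, dst, cv::noArray(), CV_16S) keeps the sign instead.
uint8_t abs_diff(uint8_t a, uint8_t b) {
    int d = static_cast<int>(a) - static_cast<int>(b);  // may be negative
    return static_cast<uint8_t>(d < 0 ? -d : d);
}
```

After the absolute value, the channel merge can stay as the cvtColor weighted add described above.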

2014-10-20 19:29:35 -0500 asked a question cv::xfeatures2d::SURF abstract?

With an all-default OpenCV + opencv_contrib build (VS2013), it seems that the mentioned class is abstract.

Test code:

#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/xfeatures2d/nonfree.hpp>

int main()
{
    cv::Mat m;
    cv::xfeatures2d::SURF surf;
    return 0;
}


Error:

Error   1   error C2259: 'cv::xfeatures2d::SURF' : cannot instantiate abstract class    c:\documents\visual studio 2013\projects\xfeattest\xfeattest\main.cpp   8   1   xfeatTest

2   IntelliSense: object of abstract class type "cv::xfeatures2d::SURF" is not allowed:
function "cv::xfeatures2d::SURF::setHessianThreshold" is a pure virtual function
function "cv::xfeatures2d::SURF::getHessianThreshold" is a pure virtual function
function "cv::xfeatures2d::SURF::setNOctaves" is a pure virtual function


The nonfree.hpp:

#ifndef __OPENCV_XFEATURES2D_FEATURES_2D_HPP__
#define __OPENCV_XFEATURES2D_FEATURES_2D_HPP__

#include "opencv2/features2d.hpp"

namespace cv
{
namespace xfeatures2d
{

/*!
SIFT implementation.

The class implements SIFT algorithm by D. Lowe.
*/
class CV_EXPORTS_W SIFT : public Feature2D
{
public:
    CV_WRAP static Ptr<SIFT> create( int nfeatures = 0, int nOctaveLayers = 3,
        double contrastThreshold = 0.04, double edgeThreshold = 10,
        double sigma = 1.6);
};

typedef SIFT SiftFeatureDetector;
typedef SIFT SiftDescriptorExtractor;

/*!
SURF implementation.

The class implements SURF algorithm by H. Bay et al.
*/
class CV_EXPORTS_W SURF : public Feature2D
{
public:
    CV_WRAP static Ptr<SURF> create(double hessianThreshold = 100,
        int nOctaves = 4, int nOctaveLayers = 3,
        bool extended = false, bool upright = false);

    CV_WRAP virtual void setHessianThreshold(double hessianThreshold) = 0;
    CV_WRAP virtual double getHessianThreshold() const = 0;

    CV_WRAP virtual void setNOctaves(int nOctaves) = 0;
    CV_WRAP virtual int getNOctaves() const = 0;

    CV_WRAP virtual void setNOctaveLayers(int nOctaveLayers) = 0;
    CV_WRAP virtual int getNOctaveLayers() const = 0;

    CV_WRAP virtual void setExtended(bool extended) = 0;
    CV_WRAP virtual bool getExtended() const = 0;

    CV_WRAP virtual void setUpright(bool upright) = 0;
    CV_WRAP virtual bool getUpright() const = 0;
};

typedef SURF SurfFeatureDetector;
typedef SURF SurfDescriptorExtractor;

}
} /* namespace cv */

#endif


Any idea what the problem could be?
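The header quoted above does declare SURF as abstract (pure virtuals) with a static create() factory, so direct construction is meant to fail; client code holds the object through the returned Ptr, e.g. `cv::Ptr<cv::xfeatures2d::SURF> surf = cv::xfeatures2d::SURF::create();`. The pattern, mirrored in plain C++ with an illustrative hidden implementation class:

```cpp
#include <memory>

// Mirrors the pattern in nonfree.hpp: an abstract interface with a static
// create() factory. Direct construction fails because of the pure virtuals,
// so clients go through the smart pointer returned by create().
class Surf {
public:
    virtual ~Surf() = default;
    virtual double getHessianThreshold() const = 0;
    static std::unique_ptr<Surf> create(double hessianThreshold = 100);
};

namespace {
// The concrete implementation lives in the library's .cpp, invisible to users.
class SurfImpl : public Surf {
public:
    explicit SurfImpl(double h) : hessian_(h) {}
    double getHessianThreshold() const override { return hessian_; }
private:
    double hessian_;
};
}

std::unique_ptr<Surf> Surf::create(double hessianThreshold) {
    return std::make_unique<SurfImpl>(hessianThreshold);
}
```

This keeps the ABI stable across builds: only the interface and the factory are exported, never the implementation layout.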

2014-10-10 18:00:40 -0500 asked a question Build error under Windows

With the following cmake settings:

• mingw
• All defaults
• OPENCV_EXTRA_MODULES_PATH given

Compilation fails:

Any idea how to fix it?

EDIT:

CMake output: http://pastebin.com/3HVJAqwX

MinGW output: http://pastebin.com/X3fZFPdk (4.8.1-4 version)

opencv: 55f490485bd58dc972de9e0333cdff005fce1251 (master latest)

2014-10-02 17:28:21 -0500 asked a question Speeding up pixelwise operations

I would like to compute an average LoG (Laplacian of Gaussian) score over a given AOI. The problem with the naive approach is that it takes a really long time, and I have a strong feeling that it can be done faster.

cv::Mat src = cv::Mat( imgSize.height(), imgSize.width(), CV_8UC4, (void*)imgPtr, (size_t)pitch );

cv::Mat roi;
if( subROI.isNull() ) {
    roi = src;
} else {
    cv::Rect rect = cv::Rect( subROI.x(), subROI.y(), subROI.width(), subROI.height() );
    roi = src(rect);
}
cv::Mat grey;
cv::cvtColor(roi, grey, cv::COLOR_RGBA2GRAY);  // src is CV_8UC4, so the RGBA variant is needed
cv::GaussianBlur(grey, grey, cv::Size(3, 3), 0, 0, cv::BORDER_DEFAULT);
cv::Mat laplaced;
cv::Laplacian(grey, laplaced, CV_16S, 3, 1, 0, cv::BORDER_DEFAULT);
cv::convertScaleAbs(laplaced, laplaced);

float avg = 0.0f;
for (int i = 0; i < laplaced.rows; i++)
{
    for (int j = 0; j < laplaced.cols; j++)
    {
        avg += (float)laplaced.at<unsigned char>(i, j);
    }
}
avg /= laplaced.rows * laplaced.cols;

return avg;


Any tips on making the code run faster? (it is going to run on a mobile Ivy Bridge CPU)
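For the averaging part, the per-pixel .at<>() double loop is the slow way; in OpenCV the whole loop collapses to a single vectorized call, `double avg = cv::mean(laplaced)[0];`. For illustration, a row-pointer traversal that computes the same thing on a raw 8-bit single-channel buffer:

```cpp
#include <cstddef>
#include <cstdint>

// Average of an 8-bit single-channel buffer via row pointers: the same
// result as the .at<>() double loop, without per-access index arithmetic.
// In OpenCV the entire loop is just: double avg = cv::mean(laplaced)[0];
double average(const uint8_t* data, int rows, int cols, std::size_t stride) {
    double sum = 0.0;
    for (int i = 0; i < rows; ++i) {
        const uint8_t* row = data + static_cast<std::size_t>(i) * stride;
        for (int j = 0; j < cols; ++j)
            sum += row[j];
    }
    return sum / (static_cast<double>(rows) * cols);
}
```

The stride parameter matters because a ROI view of a larger image has row padding; walking `rows * cols` contiguous bytes would read the wrong pixels.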

2014-09-03 14:45:43 -0500 asked a question VideoWriter problem with Ubuntu (3.0 alpha)

With opencv 3.0 (master branch)

int codec = cv::VideoWriter::fourcc('M','J','P','G');
cv::VideoWriter writer(ui->lineEdit->text().toStdString(),codec,23,cv::Size(640,480));
qDebug() << "videoWriter.isOpened() = " << writer.isOpened();
writer.release();

//Debug output
//videoWriter.isOpened() =  false


With libopencv-dev package installed from Ubuntu repo:

int codec = CV_FOURCC('M','J','P','G');
cv::VideoWriter writer(ui->lineEdit->text().toStdString(),codec,23,cv::Size(640,480));
qDebug() << "videoWriter.isOpened() = " << writer.isOpened();
writer.release();

//Debug output
//videoWriter.isOpened() =  true


Any idea how to solve this (there is no console error)?

• A hidden dependency that is installed with the libopencv-dev package?
• Bug in the 3.0 version?

UPDATE:

I tried the stable 2.4 branch, but it is broken as well. So there must be some package that comes with libopencv-dev that is needed.

2014-09-01 07:45:33 -0500 asked a question CV_FOURCC missing?

In the 3.0 version (after the videoio split), which header contains the CV_FOURCC macro? Or how can I avoid using it?
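In 3.0 the macro was replaced by the static method cv::VideoWriter::fourcc, declared alongside VideoWriter in opencv2/videoio.hpp. The packing itself is just four characters in a little-endian 32-bit int, reimplemented here for illustration:

```cpp
#include <cstdint>

// Packs four character codes into one little-endian 32-bit integer; this is
// the arithmetic behind both the old CV_FOURCC macro and the new
// cv::VideoWriter::fourcc static method.
constexpr int32_t fourcc(char c1, char c2, char c3, char c4) {
    return (static_cast<int32_t>(static_cast<uint8_t>(c1)))
         | (static_cast<int32_t>(static_cast<uint8_t>(c2)) << 8)
         | (static_cast<int32_t>(static_cast<uint8_t>(c3)) << 16)
         | (static_cast<int32_t>(static_cast<uint8_t>(c4)) << 24);
}
```

So the 3.0-style call is `int codec = cv::VideoWriter::fourcc('M','J','P','G');`, which avoids the macro entirely.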

2014-08-29 12:52:23 -0500 commented question How can i get the old nonfree functionality back?

I use the SURF feature points without CUDA. Is this bug expected to be fixed soon?

2014-08-29 12:20:06 -0500 asked a question How can i get the old nonfree functionality back?

I know it has moved to opencv_contrib, but if I build from the git repo with OPENCV_EXTRA_MODULES_PATH set correctly, I have no xfeatures2d at all. (but I do have xphoto, ximgproc, xobjdetect)

Any idea?

2014-08-25 04:47:49 -0500 asked a question Recording long videos: Memory management

Hi,

I get the frames as raw RGB888 data (I can convert them to cv::Mat), and I want to store them in a compressed video file without keeping all the frames in memory.

Is it possible to use the hard drive to store the temporary data, or is there any easy way to achieve my goal?

2014-07-22 06:36:08 -0500 asked a question module.hpp vs module/module.hpp

I'm working with the latest OpenCV from the master branch.

What is the difference between (for example):

#include <opencv2/imgproc.hpp>

//and

#include <opencv2/imgproc/imgproc.hpp>

Is it safe to use both a stable OpenCV from the Ubuntu repo and a self-built one?

2014-07-22 06:25:14 -0500 commented question Apply infinite homography to image

Look at the documentation of gemm(): dst = alpha*src1.t()*src2 + beta*src3.t(). In C++ I would use operator*; there must be a Java equivalent of that.

2014-07-20 11:48:27 -0500 commented answer Apply rotation matrix to image

Just multiply the matrices, and pass the result to the warpPerspective() function.

2014-07-19 20:25:32 -0500 commented question Build from git repo fails under Linux, but ok with Windows

I found it, as a bug report: http://code.opencv.org/issues/3821

2014-07-19 20:19:47 -0500 answered a question Apply rotation matrix to image

The R matrix transforms from the Cam1 system to the Cam2 system. It's a 3D -> 3D transformation. warpPerspective() expects a 2D image -> 2D image transformation (in normalized space).

Luckily there is a function for that: http://docs.opencv.org/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html?highlight=findhomography#findhomography
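If the camera matrices are known, the image-to-image map induced by a pure rotation can also be written in closed form: the infinite homography H = K2 * R * K1⁻¹, which can then be handed to warpPerspective. A plain 3×3 sketch (K1_inv is assumed to be the precomputed inverse of the first camera matrix):

```cpp
#include <array>

using Mat3 = std::array<std::array<double, 3>, 3>;

// Plain 3x3 matrix product, standing in for cv::Mat multiplication.
Mat3 matmul(const Mat3& A, const Mat3& B) {
    Mat3 C{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                C[i][j] += A[i][k] * B[k][j];
    return C;
}

// Infinite homography for a pure rotation between two views:
// H = K2 * R * K1_inv. The result maps pixels of view 1 to pixels of view 2.
Mat3 homography_from_rotation(const Mat3& K2, const Mat3& R, const Mat3& K1_inv) {
    return matmul(matmul(K2, R), K1_inv);
}
```

findHomography from point matches and this closed form should agree up to scale when the motion really is a pure rotation.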

2014-07-19 19:46:14 -0500 asked a question Build from git repo fails under Linux, but ok with Windows

Any idea what is the source of this error?

/home/kovand/opencv/modules/videoio/src/cap_v4l.cpp: In function ‘bool mjpeg_to_rgb24(int, int, unsigned char*, int, unsigned char*)’:
/home/kovand/opencv/modules/videoio/src/cap_v4l.cpp:1740:16: error: ‘imdecode’ is not a member of ‘cv’
cv::Mat temp=cv::imdecode(cv::Mat(std::vector<uchar>(src, src + length)), 1);
^
make[2]: *** [modules/videoio/CMakeFiles/opencv_videoio.dir/src/cap_v4l.cpp.o] Error 1
make[1]: *** [modules/videoio/CMakeFiles/opencv_videoio.dir/all] Error 2
make: *** [all] Error 2


Distro: Linux Mint 17 (all dependencies up to date). I used this tutorial: http://docs.opencv.org/doc/tutorials/introduction/linux_install/linux_install.html

2014-06-23 19:24:28 -0500 asked a question Blending images with different focal length

I'm trying to implement an automatic focus stacking algorithm. The scene and the camera position are static, but the view angle changes with the focal length. I can take several images, and I have direct API control over the focus lenses. Is there any robust automatic method to align the images? Or any method to calibrate the setup? (measure the angles)

2014-06-07 14:20:39 -0500 commented question Except for OpticalFlow,Is there other way to calculate the new position of the corners points?

You need to specify, what to track, because the majority of the image (the background) is still, and a hand is not easy to track.

2014-06-07 14:04:06 -0500 answered a question Get smoothing point using B-spline curve C++

The p1 = 11; determines the number of evaluated points. But if only a fixed number of points is needed, it is wasteful to use a generic B-spline: the exact behaviour can be achieved by weighting the four points with precalculated B-spline basis function values. Also, k = 2; means that the segments are simple lines that just connect the control points; it should be 3 to be continuous in tangent, and 4 to be continuous in curvature.

So the quick fix is p1 = 4; and k = 3;

But this is not the best way to filter out points.

An easy one: a moving average. A harder one: a Kalman filter.
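A minimal sketch of the moving-average option (centered window, clamped at the ends; the window size is a free parameter; applied per coordinate of the point track):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Centered moving average over one coordinate of a point sequence.
// `window` is the number of samples taken on each side, clamped at the
// ends of the sequence so the output has the same length as the input.
std::vector<double> moving_average(const std::vector<double>& v, int window) {
    std::vector<double> out(v.size());
    for (std::size_t i = 0; i < v.size(); ++i) {
        std::size_t lo = i >= static_cast<std::size_t>(window) ? i - window : 0;
        std::size_t hi = std::min(v.size() - 1, i + window);
        double sum = 0.0;
        for (std::size_t k = lo; k <= hi; ++k) sum += v[k];
        out[i] = sum / (hi - lo + 1);
    }
    return out;
}
```

Unlike the B-spline evaluation, this filters noise instead of interpolating it, which is usually what a jittery point track needs.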

2013-08-29 04:09:53 -0500 asked a question Superresolution using feature points instead of optical flow

I'd like to use feature points to detect a rigid transform instead of optical flow, which is much slower, but I can't find any documentation about what interface I must implement (to make my algorithm act like an optical flow). Does it even work, or is there a better solution?

2013-08-09 09:55:10 -0500 asked a question SuperResolution nextFrame bug

In the superresolution sample (built with the vc11 compiler) the following line:

//Ptr<SuperResolution> superRes;

superRes->nextFrame(result);

results in the following error (tried with multiple test videos):

http://i.imgbox.com/abwNaL3z.jpg

and if I change the optical flow method to "simple", it takes forever to run (30 minutes with an i7 2600K)

Any idea?

Update:

The program used 3.5 GB of memory before it stopped. It's simply unreasonable; there must be a memory leak.

2013-03-12 15:46:46 -0500 marked best answer Detector for FREAK

Which feature detector works best with the FREAK extractor? (with good performance) And how should I set the parameters? (for example: the threshold)