2019-01-28 10:57:02 -0600
| received badge | ● Popular Question
|
2017-04-06 01:07:05 -0600
| received badge | ● Famous Question
|
2015-10-31 11:25:42 -0600
| received badge | ● Notable Question
|
2015-10-31 08:31:46 -0600
| received badge | ● Student
|
2015-09-30 07:33:15 -0600
| received badge | ● Popular Question
|
2015-04-17 05:53:00 -0600
| commented question | How to rebuild openCV 3.0 for Visual Studio 2015 Thanks, this helps, but now I'm having trouble rebuilding with CMake because of ICV: Local copy of ICV package has invalid MD5 hash. It fails to download the ippicv package automatically, and I don't know where to place the required version after downloading it manually. |
2015-04-16 07:55:57 -0600
| asked a question | How to rebuild openCV 3.0 for Visual Studio 2015 I'm using Visual Studio 2015 Ultimate Preview and OpenCV 3.0 Alpha on Windows 7 64-bit. The project in VS is a 32-bit console application. I'm getting a linking error when I try to build a simple example that just reads an image into a Mat.
The errors look like:
1>opencv_core300d.lib(alloc.obj) : error LNK2038: mismatch detected for '_MSC_VER': value '1800' doesn't match value '1900' in solver.obj
1>opencv_core300d.lib(alloc.obj) : error LNK2038: mismatch detected for 'RuntimeLibrary': value 'MTd_StaticDebug' doesn't match value 'MDd_DynamicDebug' in solver.obj
After Googling the issue, I learned that I have to recompile the OpenCV library in order to link it with my project. How can I do that? I can't find a makefile in OpenCV to import into VS and recompile from there. What is the solution? |
2015-01-30 07:46:21 -0600
| commented question | Build a project using Automake (Make error: undefined reference to openCV functions (error: ld returned 1 exit status)) I have a similar issue: linking fails with "undefined reference", even though I have a CMake setup from the official webpage. With the above command it compiles. How should that be expressed in a CMake file? |
2015-01-17 06:21:26 -0600
| received badge | ● Teacher
|
2014-10-07 07:18:10 -0600
| commented question | warpPerspective height adjustment |
2014-10-07 07:15:29 -0600
| asked a question | Bird eye view homography
I have OpenCV 3.0 and I'm trying the bird's-eye view example (modified) from the O'Reilly Learning OpenCV book, example 12.1. Consider that I did the calibration separately and saved the results in the .xml files. I would like to have a bird's-eye view of a road, but the camera is mounted too high and the chessboard can't be seen well in the image. So I would like to use the painted white rectangles on the ground instead (I know their dimensions and their positions relative to each other). The not-yet-calibrated image:
How should I set up the objPts and imgPts coordinates in order to achieve a bird's-eye view? The modified code:
#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <opencv2/calib3d/calib3d_c.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <iostream>
#include <fstream>
int main(int argc, char* argv[]) {
IplImage* image = 0;
image = cvLoadImage("img/img1.jpg");
cvShowImage( "Original", image );
CvMat *intrinsic = (CvMat*)cvLoad("Intrinsics.xml");
CvMat *distortion = (CvMat*)cvLoad("Distortion.xml");
// Build the undistort map that we will use for all
// subsequent frames.
//
IplImage* mapx = cvCreateImage( cvGetSize(image), IPL_DEPTH_32F, 1 );
IplImage* mapy = cvCreateImage( cvGetSize(image), IPL_DEPTH_32F, 1 );
cvInitUndistortMap(
intrinsic,
distortion,
mapx,
mapy
);
IplImage *t = cvCloneImage(image);
cvRemap( t, image, mapx, mapy ); // Undistort image
cvReleaseImage(&t);
cvShowImage( "Calibration", image ); // Show undistorted image
// FIND THE HOMOGRAPHY
//
CvMat *H = cvCreateMat( 3, 3, CV_32F);
// these parameters are the real size of a white rectangle at the bottom of the image
CvPoint2D32f objPts[4], imgPts[4];
objPts[0].x = 0; objPts[0].y = 0;
objPts[1].x = 0.5; objPts[1].y = 0;
objPts[2].x = 0; objPts[2].y = 2.8;
objPts[3].x = 0.5; objPts[3].y = 2.8;
// these are the pixel coordinates of one of the rectangles
imgPts[0].x = 0; imgPts[0].y = 0;
imgPts[1].x = 61; imgPts[1].y = 0;
imgPts[2].x = 23; imgPts[2].y = 80;
imgPts[3].x = 112; imgPts[3].y = 79;
cvGetPerspectiveTransform(objPts, imgPts, H);
// LET THE USER ADJUST THE Z HEIGHT OF THE VIEW
//
float Z = 25;
int key = 0;
IplImage *birds_image = cvCloneImage(image);
cvNamedWindow("Birds_Eye");
// LOOP TO ALLOW USER TO PLAY WITH HEIGHT:
//
// escape key stops
//
while(key != 27) {
// Set the height
//
CV_MAT_ELEM(*H,float,2,2) = Z;
// COMPUTE THE FRONTAL PARALLEL OR BIRD’S-EYE VIEW:
// USING HOMOGRAPHY TO REMAP THE VIEW
//
cvWarpPerspective(
image,
birds_image,
H,
CV_INTER_LINEAR | CV_WARP_INVERSE_MAP |
CV_WARP_FILL_OUTLIERS
);
cvShowImage( "Birds_Eye", birds_image );
key = cvWaitKey();
if(key == 'u') Z += 0.5;
if(key == 'd') Z -= 0.5;
}
cvSave("H.xml",H); //We can reuse H for the same camera mounting
return 0;
}
The resulting image is:
Moreover, if I try to modify the height H(2,2) = Z, the result is only a scaling of the image, nothing else. |
2014-10-07 06:54:39 -0600
| commented answer | lane tracking - how to group hough lines Don't you think that a bird's-eye view would help you more in detecting and tracking the lane? |
2014-10-06 23:46:26 -0600
| received badge | ● Necromancer
|
2014-10-06 03:42:20 -0600
| answered a question | lane tracking - how to group hough lines I'm currently working on the same subject, and this publication might help you with all the steps: http://hompi.sogang.ac.kr/fxlab/paper/45.pdf Regarding step 5:
- First, filter out all the almost-horizontal and almost-vertical lines.
- Differentiate the left and right sides of the image, so on one side you filter out the lines with one orientation and on the other side those with the other. If y = ax + b, then on the left side a > 0 and on the right a < 0...
- I would suggest unifying lines in case you detect more than one line close to each other.
|
2013-03-07 07:37:34 -0600
| commented answer | Segmentation fault with IplImage pointer, ROS node. Thanks, I used IplImage *src_g = cvCloneImage(&src); and now it's working. |
2013-03-07 05:20:47 -0600
| received badge | ● Supporter
|
2013-03-07 05:20:46 -0600
| received badge | ● Scholar
|
2013-03-06 16:31:49 -0600
| asked a question | Segmentation fault with IplImage pointer, ROS node. I'm trying to detect a blob in a video feed with the cvBlob library in my ROS node. I think I made a mistake with the pointers, but I can't figure out where. Moreover, do I have to free some of these variables? Mat& corridorProces(Mat& resultImg)
{
Mat srcMat=resultImg.clone();
cvtColor( resultImg, resultImg, CV_RGB2GRAY );
IplImage src= resultImg.clone();
IplImage *src_g= new IplImage(src);
IplImage *src_g_inv=new IplImage(src);
cvThreshold(src_g, src_g_inv,35,255, CV_THRESH_BINARY_INV);
cvThreshold(src_g, src_g,40,255, CV_THRESH_BINARY);
IplImage *labelImg=cvCreateImage(cvGetSize(src_g), IPL_DEPTH_LABEL, 1);
cvb::CvBlobs blobs;
unsigned int result=cvb::cvLabel(src_g, labelImg, blobs);
...
}
|