2018-11-20 15:18:36 -0600 | received badge | ● Nice Answer (source) |
2015-12-17 10:42:13 -0600 | commented question | Where to find configuration parameters of official builds? I don't assume you are a core developer of OpenCV. |
2015-12-17 10:18:58 -0600 | commented question | Where to find configuration parameters of official builds? You want to ship a VC11 build of your library. Don't worry, thanks for your attention. |
2015-12-17 10:04:03 -0600 | commented question | Where to find configuration parameters of official builds? As I said before, the path to the libs is the same in both builds. It works for my build; it doesn't work when the official build is on the same path. When building OpenCV from source, CMake generates a log that shows the various flags used in the compilation. What I am asking for is the log that was generated when building the official binaries. Additional details: I am building using Visual Studio 2015 but with the VC11 compiler, in case this could be the reason. |
2015-12-17 04:43:41 -0600 | commented question | Where to find configuration parameters of official builds? My project does not use property sheets. The paths to the library are written directly in the project file. In both cases they link to the <opencv>build/x64/VC11/lib directory. After I compiled my library, I simply moved the official build to ../VC11.old/ and replaced it with my build. |
2015-12-17 03:29:48 -0600 | asked a question | Where to find configuration parameters of official builds? When using the official build of OpenCV3 for Visual Studio 2012 (the VC11 compiler), I got a bunch of linking errors. To resolve them, I compiled OpenCV3 from source using the VC11 compiler. I didn't change any default parameters apart from turning off CUDA. My own build has no linking issues and my application works fine. This makes me wonder what the difference in the official build might be and why it wasn't linking correctly. Building OpenCV from source was a workaround for my current problem, but I would much rather change the configuration of my application and use the official build; it is simpler to distribute such a program to clients. This leads to the question in the title. |
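Two ways to inspect a build's configuration without the original CMake log: at runtime, `cv::getBuildInformation()` (or `cv2.getBuildInformation()` in Python) returns the CMake summary that OpenCV embeds into every binary, official builds included; for a local build you can also pull entries out of its `CMakeCache.txt` and diff them against your own. A minimal sketch of the latter (the key names shown are just examples of entries worth comparing):

```python
import re

def extract_flags(cmake_cache_text,
                  keys=("CMAKE_CXX_FLAGS", "BUILD_SHARED_LIBS", "WITH_CUDA")):
    """Pull selected configuration entries out of a CMakeCache.txt dump.

    CMakeCache.txt lines have the form NAME:TYPE=VALUE; we keep only
    the names listed in `keys`.
    """
    flags = {}
    for line in cmake_cache_text.splitlines():
        m = re.match(r"([A-Za-z0-9_]+):[A-Z]+=(.*)", line)
        if m and m.group(1) in keys:
            flags[m.group(1)] = m.group(2)
    return flags

# demo on a tiny cache fragment
flags = extract_flags("WITH_CUDA:BOOL=OFF\nBUILD_SHARED_LIBS:BOOL=ON\n")
```

Running `extract_flags` on both cache files and diffing the two dicts quickly shows which options (CUDA, static vs. shared, compiler flags) differ between the builds.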
2015-04-09 15:20:38 -0600 | received badge | ● Enthusiast |
2015-04-06 11:59:51 -0600 | asked a question | Warping patch on sphere Consider the following image with the denoted region of interest (ROI). Assuming the helmet in the image is a perfect sphere, how can the ROI be warped so that it simulates a rotation of the sphere? The goal is to generate training patterns that are likely to occur when the head rotates. |
2015-02-16 02:52:30 -0600 | commented answer | Hey, I am planning to develop an application. The problem mentioned is complex. Your answer, "bag-of-words", is not enough and shows a superficial understanding of the problem. |
2015-02-16 02:12:38 -0600 | commented question | Hey, I am planning to develop an application. all == 2 ? |
2015-02-16 02:03:53 -0600 | commented answer | Hey, I am planning to develop an application. I can see that you are doing a PhD now. Believe me, there is a long way from doing research to building an app like this that actually works :) |
2015-02-16 01:51:45 -0600 | answered a question | Hey, I am planning to develop an application. It depends. If you are an expert with a lot of knowledge in CV, then yes, OpenCV can help you. If you are just starting with OpenCV and wish to build such an app, forget about it. |
2015-02-16 01:45:20 -0600 | answered a question | openCV missing bioinspired300d.lib etc Make sure to use the latest OpenCV 3.0.0 beta and have a look at this simple tutorial: https://www.youtube.com/watch?v=tHX3M... There are in fact only 2 libraries that you have to include now. |
2015-02-14 23:57:46 -0600 | commented question | OpenCV 3.0.0 with Python 2.7 I have regenerated VC12 project in cmake-gui. This time, I noticed that "BUILD_opencv_python2" item was present. I suspect this item wasn't there in my previous attempts. Compilation using VC12 generated cv2.pyd as expected. I still don't know what was going wrong before. But it seems that 'turn-off-on' strategy + regeneration of the project solved that issue for me. |
2015-02-14 14:49:50 -0600 | asked a question | OpenCV 3.0.0 with Python 2.7 I am trying to compile OpenCV 3.0.0 from source with Python. I followed this tutorial. Python 2.7 is installed on my machine along with NumPy. I am compiling with VC12 on the Windows 8 platform. When I compile from VC12, OpenCV is built, but not the Python package cv2.pyd. All default settings are left in CMake apart from the paths to Python, which I manually set to my Python installation. I have tested compiling with both x86 and x64, debug/release. I didn't find any useful information about the issue on the internet. Any suggestions? |
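For the record, when the Python bindings silently fail to configure, the usual culprit is that CMake's `PYTHON2_*` entries (executable, include dir, library, NumPy include dir; exact names vary between OpenCV versions) point at the wrong interpreter. A quick sketch for printing candidate values from the interpreter you actually want to bind against:

```python
import sys
import sysconfig

# Run this with the Python you want OpenCV to build bindings for;
# paste the printed values into the corresponding CMake entries.
executable = sys.executable
include_dir = sysconfig.get_paths()["include"]

print("PYTHON_EXECUTABLE  =", executable)
print("PYTHON_INCLUDE_DIR =", include_dir)
```

(The NumPy include directory, if needed, comes from `numpy.get_include()` in the same interpreter.)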
2013-11-18 19:45:11 -0600 | asked a question | gpu::BroxOpticalFlow I have tested Brox optical flow on my GPU, where it is the second fastest after Farneback. I would like to know how it compares on the CPU. Is there a CPU implementation of the Brox algorithm, as there is for Farneback? |
2013-04-20 14:22:31 -0600 | answered a question | warpPerspective gives unexpected result It is not clear how Quad Warping relates to a projective transformation. The "bug" in warpPerspective is not visible in the images you provided. You should give more details about what exactly you were doing. |
2013-04-20 13:56:17 -0600 | commented answer | SolvePnp: similar input returns very different output I had a similar problem when testing detection/pose recognition of AR markers from ARToolKit. In that case, when the marker was viewed from the top (without any perspective distortion), the solution to the pose estimation was unstable. In contrast, when the marker was viewed from an angle, the estimation was accurate. This is caused by the fact that the points lie on a plane while the estimation is done in 3D. When the plane is facing you, it has nearly the same projection onto the 2D image plane as when it is tilted the other way round. Note, the estimation was done from 4 points only. Hope this helps. |
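This near-frontal planar ambiguity is easy to reproduce without OpenCV: project a square on the z=0 plane tilted by +θ and by −θ and compare the two images. Up close the projections differ, but as the camera moves away they become nearly identical, so a 4-point pose estimate can flip between the two tilts. A self-contained sketch with a toy pinhole model (not OpenCV's API):

```python
import math

def project(points, theta, d, f=1.0):
    """Rotate 3-D points by `theta` about the y-axis, then perspective-
    project them with focal length f; the camera sits at distance d."""
    out = []
    for x, y, z in points:
        xr = x * math.cos(theta) + z * math.sin(theta)
        zr = -x * math.sin(theta) + z * math.cos(theta)
        out.append((f * xr / (d + zr), f * y / (d + zr)))
    return out

square = [(-1, -1, 0), (1, -1, 0), (1, 1, 0), (-1, 1, 0)]

def max_diff(d, theta=math.radians(10)):
    """Largest image-coordinate difference between the +theta and
    -theta tilts of the square, seen from distance d."""
    a = project(square, +theta, d)
    b = project(square, -theta, d)
    return max(max(abs(ua - ub), abs(va - vb))
               for (ua, va), (ub, vb) in zip(a, b))

near = max_diff(3.0)    # strong perspective: the two tilts are distinguishable
far = max_diff(100.0)   # distant / near-frontal: the two tilts almost coincide
```

With only 4 points and a sub-pixel difference between the two tilts, noise alone decides which solution solvePnP returns, which is exactly the instability described above.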
2013-04-20 13:45:07 -0600 | commented question | stutters screen Your question is too vague. You have to give an exact definition of the problem. |
2013-04-20 13:40:53 -0600 | answered a question | Need help about extracting a circular portion from image I was wondering what sort of sensor gives you this output and what you are looking for (if it's not a secret). As mentioned in the comment by @Guanta, the standard way to search for circles is to use the Hough transform. If you have access to gray-scale pixel values, you may think about some image filtering and apply the Hough transform on the filtered image. |
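The Hough idea fits in a few lines: with a known radius, every edge point votes for all centers lying that distance away, and the true center is where the votes pile up. A toy sketch on synthetic edge points (OpenCV's `cv::HoughCircles` is the real, multi-radius implementation):

```python
import math
from collections import Counter

def hough_circle_center(edge_points, radius):
    """Classic Hough voting for a circle center at a known radius:
    each edge point votes for every center `radius` away from it."""
    votes = Counter()
    for x, y in edge_points:
        for deg in range(360):
            a = math.radians(deg)
            cx = round(x - radius * math.cos(a))
            cy = round(y - radius * math.sin(a))
            votes[(cx, cy)] += 1
    return votes.most_common(1)[0][0]

# synthetic edge points on a circle of radius 20 centred at (50, 40)
pts = [(50 + 20 * math.cos(math.radians(t)), 40 + 20 * math.sin(math.radians(t)))
       for t in range(0, 360, 4)]
center = hough_circle_center(pts, radius=20)
```

The same voting scheme tolerates gaps and noise in the edge map, which is why it is the standard choice for this kind of extraction.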
2013-04-18 15:38:08 -0600 | answered a question | SolvePnp: similar input returns very different output Is the jump really related to the y-location? I believe it is caused by the pose of the object, which also changes as you move the object up. Can you fix the object location and change only its pose? The solution of solvePnP is not sufficiently constrained in that particular pose, which may cause instability. You can consider some temporal smoothing to reduce this effect. |
2013-04-01 06:39:05 -0600 | received badge | ● Teacher (source) |
2013-03-29 13:38:20 -0600 | answered a question | Best way to save cv::Mat and load it in Matlab? I am using three ways to interface Matlab and OpenCV:
I am not satisfied with these approaches myself, but hopefully this gives some ideas. What I need all the time is access to local variables while debugging an OpenCV application. Visual Studio allows using the 'Immediate Window' to inspect variables. Ideally, I would like to access the local variables in Matlab and work with them. Assuming I know the pointer to the beginning of the image data while debugging OpenCV, it would be great to read the data from this location into Matlab. Is that possible? |
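For the original question (saving a cv::Mat so Matlab can load it), one low-tech route is plain CSV: `cv::FileStorage` writes YAML/XML, but CSV loads directly with Matlab's `readmatrix`/`csvread`. A sketch in Python, where a cv::Mat appears as a NumPy array (a plain list of rows stands in for it here so the example stays dependency-free):

```python
import csv
import os
import tempfile

def save_matrix_csv(mat, path):
    """Write a 2-D matrix (any iterable of rows) as CSV.
    Matlab loads the result with readmatrix(path) or csvread(path)."""
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(mat)

# demo on a stand-in for a small matrix
path = os.path.join(tempfile.mkdtemp(), "mat.csv")
save_matrix_csv([[1, 2], [3, 4]], path)
with open(path) as f:
    loaded = f.read().splitlines()
```

CSV is lossy for exotic element types and multi-channel images, so it suits quick inspection rather than round-tripping; for that, a raw binary dump plus `fread` with the right dimensions in Matlab works better.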
2012-11-20 15:28:56 -0600 | asked a question | blobtrack_sample.cpp Does anybody have experience with this sample code? I believe there is a rough description in 'doc/vidsurv/*', but there is no information on how to run and test the code. Based on my tests using a webcam, I was not able to get any useful results (no targets were detected). |
2012-11-13 03:34:13 -0600 | answered a question | FernDescriptorMatcher Make sure you have a reasonable number of detected points (~100). In case you have too many, it may take a while, depending on your machine. But it is more likely that something is wrong, so you may want to debug the matcher and see what is going on. |
2012-11-13 03:05:52 -0600 | answered a question | USB3 Vision Can you provide more information about what you mean by 'USB3 Vision protocol'? |
2012-11-12 02:51:34 -0600 | answered a question | about traincascade negative set creation When you train a detector, getting a large number of negative examples is usually not a problem. Even in the case when the camera is fixed, I would highly recommend using a large number of negative examples. It is true that some part of the background will always be the same (or similar, depending on illumination), but some of the background will vary, and you don't know how in advance. For example, if you were to detect heads from a top view, heads would be positive examples, but shoulders and legs would be negative examples; the detector shouldn't fire on those either. A large negative set leads to a lower false-positive rate. |
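Collecting negatives is mostly bookkeeping: `opencv_traincascade` takes a background file (the `-bg` argument) that simply lists one negative image path per line. A sketch for generating it (the directory layout is hypothetical):

```python
import os
import tempfile

def write_background_list(neg_dir, out_path):
    """Write one image path per line -- the background (-bg) file
    format used by opencv_traincascade. Returns the number of images."""
    exts = (".jpg", ".jpeg", ".png", ".bmp")
    count = 0
    with open(out_path, "w") as f:
        for root, _dirs, files in os.walk(neg_dir):
            for name in sorted(files):
                if name.lower().endswith(exts):
                    f.write(os.path.join(root, name) + "\n")
                    count += 1
    return count

# demo on a throwaway directory with two fake negative images
neg_dir = tempfile.mkdtemp()
for name in ("a.jpg", "b.png", "notes.txt"):
    open(os.path.join(neg_dir, name), "w").close()
bg_path = os.path.join(neg_dir, "bg.txt")
count = write_background_list(neg_dir, bg_path)
```

Pointing the script at a folder of random scene crops is an easy way to grow the negative set well beyond the fixed background alone.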
2012-11-12 02:42:29 -0600 | answered a question | Trouble with example - runtime error It seems like the camera is not correctly initialized. Try to debug the code and investigate the 'cap' variable. By the way, are you able to run the precompiled samples (like lkdemo) without any problems? |
2012-11-09 15:28:42 -0600 | received badge | ● Editor (source) |
2012-11-09 15:26:08 -0600 | asked a question | OpenCV 2.4.3 + iOS 6: image conversion While following the ECCV2012 tutorial (http://opencv.org/eccv2012.html) about OpenCV + iOS, I encountered a problem with converting an image from UIImage format to cv::Mat. I am using the functions provided in the tutorial, i.e. MatToUIImage and UIImageToMat, but when I use them, the image is not displayed. If I comment out the conversion, the image is shown without any problem. UIImage * image = [UIImage imageWithContentsOfFile:filename]; // load image in iOS cv::Mat m; UIImageToMat(image, m); // iOS -> OpenCV UIImage * image2 = MatToUIImage(m); // OpenCV -> iOS self.imageView.image = image2; // display in iOS I am testing the tutorial with OpenCV 2.4.3 on iOS 6. Any ideas? |
2012-11-09 14:08:41 -0600 | received badge | ● Supporter (source) |