2020-05-08 05:23:15 -0500 received badge ● Popular Question (source)

2019-03-08 02:20:27 -0500 received badge ● Great Answer (source)

2018-09-22 01:46:08 -0500 received badge ● Popular Question (source)

2016-02-15 14:02:01 -0500 commented question Matrix subtraction normalization
ty @berak. Updated.

2016-02-15 09:16:10 -0500 asked a question Matrix subtraction normalization
Hello everyone. Recently I had to implement an algorithm that subtracts a blurred copy of an image from the image itself. I first tried:

    Mat img = imread("some_image");
    Mat blurred;
    GaussianBlur(img, blurred, Size(7, 7), 0, 0);
    Mat result = img - blurred;

But my output (result) was displayed as a black image. So I found this normalization step to solve the problem, applied to each pixel of the image:

    result_pixel = (pixel_image - pixel_blurred_image) / 2 + 127;

    void sub(Mat &src, Mat &src2, Mat &result) {
        for (int i = 0; i < src.cols; i++) {
            for (int j = 0; j < src.rows; j++) {
                int px1 = int(src.at<uchar>(j, i));
                int px2 = int(src2.at<uchar>(j, i));
                int px = ((px1 - px2) / 2) + 127;
                result.at<uchar>(j, i) = px;
            }
        }
    }

This kind of normalization seems trivial to me. So I was wondering: doesn't OpenCV already provide an option to apply this normalization automatically?

2016-02-11 11:42:08 -0500 asked a question How to find the Gaussian weighted average and std deviation of a structural element
Hello everyone. I'm trying to implement an intensity normalization algorithm described by this formula:

    x' = (x - gaussian_weighted_average) / std_deviation

The paper I'm following says I have to compute the Gaussian weighted average and the standard deviation of each pixel x's neighborhood using a 7x7 kernel. (x' is the new pixel value.) So, my question is: how can I compute a Gaussian weighted average and standard deviation for each pixel in an image using a 7x7 kernel? Does OpenCV provide any method for this?
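For what it's worth, here is a minimal NumPy sketch of the formula in the question above (not the paper's exact method, and not OpenCV API; sigma = 1.5 is an assumed kernel width and all function names are made up). The Gaussian-weighted mean comes from filtering the image with a normalized 7x7 Gaussian kernel, and the weighted variance from Var = E[x²] - E[x]², i.e. filtering the squared image with the same kernel:

```python
import numpy as np

def gaussian_kernel(size=7, sigma=1.5):
    # Separable 1D Gaussian, turned into a normalized 2D kernel.
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()                      # weights sum to 1

def filter2d(img, kernel):
    # Plain correlation with reflect padding (what cv2.GaussianBlur
    # would do much faster for the mean).
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img.astype(np.float64), ((ph, ph), (pw, pw)), mode="reflect")
    out = np.zeros(img.shape, dtype=np.float64)
    for di in range(kh):
        for dj in range(kw):
            out += kernel[di, dj] * padded[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out

def intensity_normalize(img, eps=1e-6):
    k = gaussian_kernel()
    mu = filter2d(img, k)                               # weighted average
    var = filter2d(img.astype(np.float64) ** 2, k) - mu ** 2
    std = np.sqrt(np.clip(var, 0.0, None))              # weighted std dev
    return (img - mu) / (std + eps)                     # x' = (x - mu) / sigma
```

A quick sanity check: on a constant image the weighted mean equals the constant and the std deviation is ~0, so the normalized output is ~0 everywhere.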
2016-01-06 11:31:46 -0500 commented question How to capture video from web camera and write video to file in C++ with OpenCV in Microsoft Visual Studio 2013

2016-01-04 12:07:56 -0500 asked a question OpenCV 3 Brew installation error (OSX)
Hello everyone. I recently removed OpenCV from my machine and I'm trying to reinstall it using:

    brew install opencv3 --with-ffmpeg --with-tbb --with-qt5
    xcode-select --install

Trying this, I'm getting this error:

    ==> cmake .. -DCMAKE_C_FLAGS_RELEASE=-DNDEBUG -DCMAKE_CXX_FLAGS_RELEASE=-DNDEBUG -DCMAKE_INSTALL_PREFIX=/usr/local/Cellar/opencv3/3.1.0 -DCMAKE_BUILD_TYPE=Release -DCMAKE_FIND_FRAMEWORK=LAST -DCMAKE_VERBO
    ==> make
    Last 15 lines from /Users/charlesprado/Library/Logs/Homebrew/opencv3/02.make:
    ^
    /System/Library/Frameworks/Foundation.framework/Headers/NSArray.h:26:1: error: duplicate interface definition for class 'NSArray'
    @interface NSArray (NSExtendedArray)
    ^
    /System/Library/Frameworks/Foundation.framework/Headers/NSArray.h:16:12: note: previous definition is here
    @interface NSArray<__covariant ObjectType> : NSObject
    ^
    /System/Library/Frameworks/Foundation.framework/Headers/NSArray.h:26:32: error: method type specifier must start with '-' or '+'
    @interface NSArray (NSExtendedArray)
    ^
    fatal error: too many errors emitted, stopping now [-ferror-limit=]
    20 errors generated.
    make[2]: *** [modules/videoio/CMakeFiles/opencv_videoio.dir/src/cap_qtkit.mm.o] Error 1
    make[1]: *** [modules/videoio/CMakeFiles/opencv_videoio.dir/all] Error 2
    make: *** [all] Error 2
    READ THIS: https://git.io/brew-troubleshooting
    If reporting this issue please do so at (not Homebrew/homebrew):
    https://github.com/Homebrew/homebrew-science/issues

Does anyone have any idea how I can solve this?

2015-12-23 11:00:22 -0500 commented question Unable to install OpenCV on Elementary OS Freya
Done! As requested...

2015-12-23 10:49:20 -0500 asked a question Unable to install OpenCV on Elementary OS Freya
Hi.
I'm trying to install OpenCV 3.0.0 on Elementary OS (Freya), but the installation is not generating the Python module (the "cv2.so" file). I have installed OpenCV 3 before on Windows, OSX and Ubuntu 14.04 with no errors, so I think the problem may be related to Elementary OS... I'm following this tutorial. I downloaded opencv and opencv_contrib and I'm generating the build with this command:

    cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D INSTALL_C_EXAMPLES=ON -D INSTALL_PYTHON_EXAMPLES=ON -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules -D BUILD_EXAMPLES=ON -D PYTHON_EXECUTABLE=/usr/bin/python2.7/ -D PYTHON_INCLUDE=/usr/include/python2.7/ -D PYTHON_LIBRARY=/usr/lib/libpython2.7.a -D PYTHON_PACKAGES_PATH=/usr/local/lib/python2.7/site-packages/ -D PYTHON_NUMPY_INCLUDE_DIR=/usr/local/lib/python2.7/dist-packages/numpy/core/include ..

(I made some additions trying to fix the problem; several of these flags are probably unnecessary...) The problem is that I'm getting this output:

    -- Python 2:
    --   Interpreter: /usr/bin/python2.7 (ver 2.7.6)
    --   Libraries: /usr/lib/libpython2.7.a (ver 2.7.6)
    --   numpy: /usr/local/lib/python2.7/dist-packages/numpy/core/include (ver 1.10.2)
    --   packages path: lib/python2.7/dist-packages
    --
    -- Python 3:
    --   Interpreter: /usr/bin/python3.4 (ver 3.4.3)

The package path is being set to dist-packages instead of site-packages (I know that's probably not the real problem, but since I manually tried to set this path to site-packages it looks odd that it ends up in dist-packages). Also, no cv2.so is being generated by the make && make install commands (I tried "find / -name cv2.so" with no results). So... any suggestions on how I can fix this problem and get the OpenCV Python module working on my Elementary OS?

2015-11-09 09:31:12 -0500 answered a question how to calculate the horizontal width of chest in a binary body picture??
Just subtract the left point's coordinates from the right point's coordinates...
If the right point is at (200, 100) and the left point is at (100, 100), the distance will be 100 in x and 0 in y... If you want a single value (and not x and y components), use the Euclidean distance:

    distance = sqrt(deltaX^2 + deltaY^2)

Using (200, 100) and (100, 100) again, the distance will be:

    distance = sqrt((200-100)^2 + (100-100)^2)
    distance = sqrt(100^2)
    distance = 100

2015-11-09 08:50:04 -0500 received badge ● Nice Question (source)

2015-11-09 06:26:58 -0500 received badge ● Student (source)

2015-11-09 05:15:15 -0500 asked a question C++ vs Python, the cost of the abstraction
Hello guys. I've been working on a computer vision application for a while, and I've always used C++ to code the system. C++ was chosen for performance. The software works with video input, has to normalize various aspects of every frame, and classifies each frame (~30 classifications per second). I was using random forests to classify these frames, but now I'm migrating to convolutional neural networks. I've always loved Python, and whenever I develop an external/personal project using OpenCV + machine learning I use Python, not C++. So I was thinking of starting this module (the CNN module) in Python instead of C++ (the rest of the system will still be written in C++). My question is: a while ago I read that Python's abstraction costs are much bigger than C++'s... How much bigger? If I write a CNN that has to make at least 30 classifications per second in Python, will it work? Will it be slow? Is there any text that lists the performance differences in a good quantitative way? Thanks.

2015-10-26 08:59:53 -0500 asked a question VideoWriter writing invalid videos
I'm using the code described below to record my webcam on a MacBook Pro.
The file being generated is not a valid video, and I can't open it in a video player. When I use CV_FOURCC('X','V','I','D'), the output is an 'output.mov' file of 414 KB (every time I run the program the size is the same). I tried changing the FOURCC to CV_FOURCC('m', 'p', '4', 'v'); in that case the file gets bigger for every second I keep the webcam on, but I still can't open the video file. How can I record this video?

    import cv2

    cap = cv2.VideoCapture(0)
    w = int(cap.get(cv2.cv.CV_CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT))
    fps = cap.get(cv2.cv.CV_CAP_PROP_FPS)
    fourcc = cv2.cv.CV_FOURCC('X', 'V', 'I', 'D')
    vout = cv2.VideoWriter()
    capSize = (w, h)
    success = vout.open('output.mov', fourcc, fps, capSize, True)
    while cap.isOpened():  # was vid.isOpened(); vid is never defined
        ret, frame = cap.read()
        if ret == True:
            vout.write(frame)
            cv2.imshow("frame", frame)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
        else:
            break
    vout.release()
    cap.release()

PS: When I try the mp4v format, it generates this error:

    Application Specific Information:
    * Terminating app due to uncaught exception 'NSRangeException', reason: '* -[__NSArrayI objectAtIndex:]: index 0 beyond bounds for empty array'
    terminating with uncaught exception of type NSException
    abort() called

2015-09-07 13:45:41 -0500 received badge ● Good Answer (source)

2015-09-03 07:06:50 -0500 commented answer Informative websites related to OpenCV
Awesome! :)

2015-09-03 07:03:33 -0500 commented question How to detect a book!
@shamshun, did you check this post on SO: http://stackoverflow.com/questions/26... ? Does your code look like that?

2015-09-02 07:12:03 -0500 asked a question Should I prefer functions that transform data instead of creating data?
I noticed in many functions from the OpenCV library (C++ API) that there's a pattern in the parameters passed to a function. Almost always there's a &src Mat and a &dst Mat holding the result (the modified data), with the return type of the function being void.
When I came from Java to C++ I was writing all of my functions with a return type instead of the &dst Mat idea. Something like this:

    cv::Mat doStuff(cv::Mat &src) {
        cv::Mat modifiedStuff;
        ... // do stuff
        return modifiedStuff;
    }

I think that way sometimes makes things clearer. I understand that I'm creating an extra matrix inside the method every time, and that can be bad; however, I'm curious how bad it can be. Sometimes, for example, I use the return style instead of the &dst style because I need the matrix to have a specific shape, so I have to create it inside the method to be sure the method always works with that shape:

    cv::Mat doStuffWith1x132Matrix(cv::Mat &src) {
        cv::Mat modifiedStuff(1, 132, CV_64F);
        ... // do stuff
        return modifiedStuff;
    }

Is there a consensus in the community on when I should use the &dst matrix idea instead of returning a new matrix (created inside the method), or should I take it as a matter of preference?

2015-09-01 11:25:13 -0500 received badge ● Nice Answer (source)

2015-09-01 06:45:45 -0500 commented question How to detect a book!
I agree with @thdrksdfthmn, you should go for the homography idea. Could you try to write the code using it? That way, if some error occurs, you can post it here and we can try to help you.

2015-08-31 20:46:20 -0500 commented question How to detect a book!
Oh, I see. In that case the edge detector will really not help, but maybe this could help. What you have to do in that case is train your own classifier to recognize which bunch of pixels in an image could be the "Harry Potter Book - Volume 3" spine. To train your own classifier you can use the Haar cascade technique described here and used in the coding-robin.de tutorial (first link). You only need to substitute the banana with the "Harry Potter Book - Volume 3" spine.
:)

2015-08-31 15:27:44 -0500 commented question Informative websites related to OpenCV
http://www.pyimagesearch.com/ : cool stuff about OpenCV in Python. http://www.codergears.com/Blog/?p=535 : some good patterns to follow in the C++ API. https://www.youtube.com/channel/UClOg... : a complete course on computer vision.

2015-08-31 12:26:42 -0500 commented question How to detect a book!
Something like this? http://www.pyimagesearch.com/2014/09/... PS: If you want a C++ version, there's one in the book "OpenCV Essentials" at the end of chapter 3.

2015-08-13 12:50:21 -0500 commented question namespace cv::ml not found - xcode - C++
Hm. You're right. I was trying to use cv::ml::RTrees. The correct name for version 2.4 is CvRTrees (I found that weird, because to me everything that begins with Cv... was part of the old API, the C API, but that seems not to be true). Thank you very much berak!

2015-08-13 12:03:15 -0500 asked a question namespace cv::ml not found - xcode - C++
Hi guys. I'm using the XCode IDE in my project and I'm trying to create a machine learning program using the ml lib. However, I'm getting an error when I try to use the namespace cv::ml. My includes are:

    #include "opencv2/core/core.hpp"
    #include "opencv2/ml/ml.hpp"
    #include
    #include
    #include

    using namespace std;
    using namespace cv;
    using namespace cv::ml;  // --> On this line I get the error: "expected namespace name"

I guess this error is related to some configuration that I haven't applied in XCode. I'm using libc++ as the standard library and I already tried changing to libstdc++ (the problem persists). I also tried to use the objects in the ml namespace via fully qualified names, i.e. cv::ml::SomeMethod, but then I get the error: "no member named ml in the namespace cv". The include of "opencv2/ml/ml.hpp" isn't generating errors. The question is: how can I add this namespace (ml) to my project in XCode?
2015-08-05 09:12:53 -0500 answered a question unable to determine angle because of points from line
You could create a Point2f (2D float) at the center of each of the smaller sides of your rectangle:

    o o x o o   ---> x = point_a
    o o o o o
    o o o o o
    o o x o o   ---> x = point_b

(Above, "x" represents the points.) This way, you get a line between those points:

    o o x o o
    o o | o o
    o o | o o
    o o x o o

To calculate that line's angle in radians, use the arctangent of the coordinate differences:

    double rad_angle = atan2(point_b.y - point_a.y, point_b.x - point_a.x);

If you want the angle in degrees:

    double degrees_angle = rad_angle * 180 / CV_PI;

Cheers!

2015-07-28 02:26:13 -0500 received badge ● Self-Learner (source)

2015-07-27 11:37:15 -0500 answered a question Rotate Landmarks
Hi guys. As I promised in my last comment, here is the solution I found (as I said, I don't think it's an elegant solution, but it works):

    Mat rotate_landmarks(Mat &src) {
        Mat dst = src.clone();
        Point2f l = get_left_eye_centroid(src);
        Point2f r = get_right_eye_centroid(src);
        Point2f center((r.x + l.x) / 2, (r.y + l.y) / 2);
        Mat rot = getRotationMatrix2D(center, get_alignment_angle(src), 1);
        double angle_cos = rot.at<double>(0, 0);
        double angle_sin = rot.at<double>(1, 0);
        for (int i = 0; i <= 66; i++) {
            double x = src.row(0).at<double>(i);
            double y = src.row(1).at<double>(i);
            dst.row(0).at<double>(i) = (x * angle_cos) - (y * angle_sin);
            dst.row(1).at<double>(i) = (y * angle_cos) + (x * angle_sin);
        }
        return dst;
    }

I'm applying the rotation to each point, using:

    dst.row(0).at<double>(i) = (x * angle_cos) - (y * angle_sin);
    dst.row(1).at<double>(i) = (y * angle_cos) + (x * angle_sin);

I actually don't like this solution very much because I'm generating the rotationMatrix2D but not using it directly; that's because I'm not working with a conventional image matrix. If someone has a solution that uses the rotation matrix directly, please add it as an answer to this topic and I will probably mark it as the best answer. Thanks to everyone who helped.

2015-07-25 07:12:31 -0500 received badge ● Enthusiast
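A small Python sketch of the angle computation in the "unable to determine angle" answer above, using atan2 of the coordinate differences between the two midpoints (plain (x, y) tuples stand in for Point2f, and `line_angle_degrees` is a made-up name):

```python
import math

def line_angle_degrees(point_a, point_b):
    # atan2 handles all quadrants and a vertical line (dx == 0),
    # which plain atan of a ratio would not.
    rad = math.atan2(point_b[1] - point_a[1], point_b[0] - point_a[0])
    return rad * 180.0 / math.pi
```

For example, a horizontal segment gives 0 degrees, a vertical one 90, and a 45-degree diagonal 45.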
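Since the "Rotate Landmarks" answer above asks for a variant that uses the rotation matrix directly, here is a NumPy sketch (not the original C++ answer; function names are made up) that builds the same 2x3 matrix that getRotationMatrix2D produces and applies it to all landmarks in one multiply, translation included, which is essentially what cv2.transform does:

```python
import math
import numpy as np

def rotation_matrix_2d(center, angle_deg, scale=1.0):
    # Same layout as OpenCV's getRotationMatrix2D (y-axis pointing down):
    # [[ alpha, beta, (1-alpha)*cx - beta*cy ],
    #  [ -beta, alpha, beta*cx + (1-alpha)*cy ]]
    a = math.radians(angle_deg)
    alpha, beta = scale * math.cos(a), scale * math.sin(a)
    cx, cy = center
    return np.array([[alpha, beta, (1 - alpha) * cx - beta * cy],
                     [-beta, alpha, beta * cx + (1 - alpha) * cy]])

def rotate_landmarks(points, center, angle_deg):
    # points is 2xN (row 0 = x, row 1 = y), like the landmark matrix in
    # the answer; append a row of ones and apply the affine map at once.
    ones = np.ones((1, points.shape[1]))
    return rotation_matrix_2d(center, angle_deg) @ np.vstack([points, ones])
```

An easy sanity check: a landmark sitting exactly at `center` is left unchanged by the rotation.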