2018-04-25 06:08:01 -0600 | received badge | ● Notable Question (source) |
2017-09-15 20:10:06 -0600 | received badge | ● Popular Question (source) |
2016-06-17 10:10:10 -0600 | commented answer | Can trackbar values be sent from another device? Call each value by a separate name and hence call the respective trackbar.. sweet! But how do I see the script's output on my screen? |
2016-06-17 09:34:51 -0600 | asked a question | Can trackbar values be sent from another device? I have made a traffic light detection algorithm with OpenCV and Python. My script has some trackbars which I need to adjust, watching the output, to set a desired stage. My script runs on my Raspberry Pi; is there any way I can send the trackbar values to the RPi from another device on the same network? For example, maybe make a phone app (I use Windows Phone, BTW) which sends the trackbar data to the script and shows the output on my phone screen. |
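A note on this question: one way to get trackbar-style values across the network is a tiny UDP listener on the Pi. This is only a sketch under assumptions not in the question (the plain-text "name value" message format and port 5005 are made up here):

```python
import socket
import threading

# Shared parameter store the OpenCV loop reads instead of trackbars.
params = {"h_low": 0, "h_high": 179}

def parse_message(data):
    # Assumed wire format: b"h_low 42" -> ("h_low", 42)
    name, value = data.decode().split()
    return name, int(value)

def listen(port=5005):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    while True:
        name, value = parse_message(sock.recvfrom(64)[0])
        if name in params:
            params[name] = value

# threading.Thread(target=listen, daemon=True).start()
# The main loop then uses params["h_low"] where cv2.getTrackbarPos was used.
```

A phone app (or even a laptop) would then send e.g. `h_low 42` to the Pi's IP on that port; showing the script's output on the phone is a separate problem (streaming the frames back, for instance as an MJPEG HTTP stream, is one common route).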
2016-06-15 15:40:15 -0600 | commented question | Extracting circle centre coordinates using cv2.HoughCircles @jmbapps Didn't get a word of that... can you please elaborate? I think that C |
2016-06-09 02:50:02 -0600 | asked a question | Extracting circle centre coordinates using cv2.HoughCircles Hi there! I want to extract the centre coordinates of detected circles using cv2.HoughCircles; here's the code- When I run this code, in X I get the centre coordinates as well as the radius of the circle. How do I store the coordinates and radius in separate variables? Also, if my code detects more than one circle, I get an error - Can anybody help me? |
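For the record, the centres and radii can be pulled apart by iterating over the first axis of the result. A minimal sketch with a hand-made stand-in for the cv2.HoughCircles output (the real call returns None when no circle is found, so check that before indexing):

```python
import numpy as np

# Stand-in for cv2.HoughCircles output: shape (1, N, 3), rows of (x, y, r)
circles = np.array([[[100.4, 120.2, 15.1], [40.0, 60.0, 9.0]]],
                   dtype=np.float32)

rounded = np.uint16(np.around(circles))
for x, y, r in rounded[0]:          # works for one circle or many
    print("centre:", (x, y), "radius:", r)

x0, y0, r0 = rounded[0][0]          # first circle in separate variables
```

Unpacking `circles[0]` directly into three variables fails as soon as a second circle is detected, which is the likely source of the error mentioned above.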
2016-06-09 01:54:26 -0600 | commented question | Detecting a traffic light for a robot project Actually it's quite simple: just threshold the image and use cv2.HoughCircles to find a circle, then add a conditional statement so that as soon as the pixel intensity at the detected circle goes below a minimum threshold, the robot is activated. |
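The intensity check described in that comment could look something like this (a sketch only; the helper name `light_is_on` and the threshold of 60 are illustrative, not from the original):

```python
import numpy as np

def light_is_on(gray, circle, min_intensity=60):
    # Mean intensity inside the bounding box of the detected circle (x, y, r).
    x, y, r = circle
    roi = gray[max(0, y - r):y + r, max(0, x - r):x + r]
    return roi.size > 0 and roi.mean() >= min_intensity

bright = np.full((100, 100), 200, dtype=np.uint8)   # lamp lit
dark = np.zeros((100, 100), dtype=np.uint8)         # lamp off
print(light_is_on(bright, (50, 50, 10)))
print(light_is_on(dark, (50, 50, 10)))
```

In the robot loop, the transition from lit to dark is the moment to signal the motors.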
2016-06-07 02:44:57 -0600 | commented question | How to use np.count_nonzero() within a range of pixels? @berak Ohh, the minimum and maximum ranges of the y and x axes respectively, right? |
2016-06-07 02:26:32 -0600 | commented question | Line detection and tracking algorithm for autonomous robot. @Tetragramm I actually came up with another method to solve the problem. I used np.count_nonzero, and restricted the ROI to one half of the image, and thanks to @berak , it worked! |
2016-06-07 02:20:15 -0600 | commented question | How to use np.count_nonzero() within a range of pixels? @berak it works! Thanks a ton! But I don't understand in what order you've written the coordinates |
2016-06-06 18:29:19 -0600 | commented question | Line detection and tracking algorithm for autonomous robot. @Tetragramm, I've been through the documentation for whole nights, and it helped me a lot, but it fails to answer several specific questions... |
2016-06-06 15:03:24 -0600 | asked a question | How to use np.count_nonzero() within a range of pixels? Hi there! I am working on a script that reads the non-zero pixels in a binary image. I am using np.count_nonzero() to count non-zero pixels, which works fine until I specify a range of coordinates. The plain call works fine, but this line gives an error - To me it sounds like the image layers (RGB or similar) might be causing this problem. Can anyone tell me how to fix it? |
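The usual culprits in a question like this are the slice order or a leftover colour channel; NumPy indexes rows (y) first, then columns (x). A small self-contained check:

```python
import numpy as np

img = np.zeros((240, 320), dtype=np.uint8)   # stand-in for the binary image
img[50:60, 100:110] = 255                    # a 10x10 white block

# NumPy slices are img[y0:y1, x0:x1] -- rows (y) first, then columns (x).
count = np.count_nonzero(img[40:80, 90:130])
print(count)   # 100 non-zero pixels in that window

# A 3-channel (BGR) image would count each white pixel once per channel,
# so convert/threshold down to a single channel first.
```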
2016-06-06 14:52:22 -0600 | commented question | Line detection and tracking algorithm for autonomous robot. @Tetragramm Actually, I don't know how to split my binary image into 2 parts so that I can use np.count_nonzero to compare the non-zero pixels in each part |
2016-06-02 23:55:15 -0600 | received badge | ● Editor (source) |
2016-06-02 23:54:16 -0600 | commented question | Line detection and tracking algorithm for autonomous robot. @berak I was in the process of doing that :P My code is just the regular thresholding one: create trackbars, convert the image to HSV, and then threshold. |
2016-06-02 23:43:27 -0600 | asked a question | Line detection and tracking algorithm for autonomous robot. Hi there! I am working on an autonomous robot which has to run in a lane 2m wide, with a yellow line marking the outer boundary and a white line marking the inner boundary. What I have thought of is to threshold the image to obtain a binary image containing the lines, then divide that binary image into 2 parts vertically and find the non-zero pixels in each part; the part with fewer pixels indicates that the lane is turning, and hence the robot turns to the side with fewer pixels. I'm trying to do something like this- Problem is, with all my OpenCV+Python knowledge I am stuck at the stage where I threshold the image. I don't know how to proceed after that. I tried searching the documentation but I didn't find anything. So how can I achieve this? Thanks. |
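The left/right comparison described in that question can be sketched in a few lines (the function name `decide_turn` and the return strings are made up for illustration; `binary` stands in for the thresholded frame):

```python
import numpy as np

def decide_turn(binary):
    # Split the binary image vertically and count line pixels in each half;
    # the robot turns toward the side with fewer line pixels.
    half = binary.shape[1] // 2
    left = np.count_nonzero(binary[:, :half])
    right = np.count_nonzero(binary[:, half:])
    if left < right:
        return "turn left"
    if right < left:
        return "turn right"
    return "straight"

binary = np.zeros((100, 200), dtype=np.uint8)
binary[:, 150:160] = 255          # line pixels only in the right half
print(decide_turn(binary))
```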
2016-03-31 13:26:56 -0600 | commented question | Unsupported or Unrecognized array type error in OpenCV 3.1 in Python It was nothing more than an indentation error in the for loop. :P |
2016-03-30 07:01:37 -0600 | commented question | How to create Trackbars which do not call any function? OpenCV 3.1 with Python 2.7 I tried that. It didn't work. But I have solved my problem: I made an empty function which returns nothing, and then I called that. @Tetragramm |
2016-03-29 11:06:33 -0600 | asked a question | How to create Trackbars which do not call any function? OpenCV 3.1 with Python 2.7 Hi there! cv2.createTrackbar only works with 5 arguments, and I only gave 4. Next I tried a callback, and nothing. I got this error:- Okay, after a while I tried using None. Got this error:- TypeError: on_change must be callable Thanks! |
2016-03-27 08:38:04 -0600 | asked a question | Unsupported or Unrecognized array type error in OpenCV 3.1 in Python Hi there! I'm getting this unexpected error in this code. I found some solutions after googling a while; I've tried all of them but none work. Here is the code- import cv2 circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, 1, 20, param1=50, param2=30, minRadius=0, maxRadius=0) circles = np.uint16(np.around(circles)) The code is borrowed from a tutorial about HoughCircles from here- http://opencv-python-tutroals.readthe... Whenever I try to run the code I get an error saying Unsupported or Unrecognized array type (you must have seen the entire error). I tried copying DLLs, putting the image in the working directory, and giving the path to the image, but no good. I can run other programs, like object detection, fine in Python and C/C++. Any help would be greatly appreciated. |
2016-03-27 02:39:43 -0600 | commented question | How to make trackbars in Python to threshold an image @berak, I did; that was the first thing I did. But that's only for RGB, and I couldn't possibly understand how to implement it when I have 6 parameters (high H, S, V and low H, S, V). I can maybe type the cv2.inRange call correctly, but I can't really understand how to use my trackbars to change the required values. |
2016-03-27 01:30:07 -0600 | asked a question | How to make trackbars in Python to threshold an image Hi there! So my obvious question is- |
2016-03-14 13:57:18 -0600 | received badge | ● Enthusiast |
2016-03-12 09:29:36 -0600 | asked a question | Detecting a traffic light for a robot project (Too many questions in very little time) Hi there, I'm new to OpenCV (downloaded it 24 hours ago; I've learnt a lot, but still). I am working on a project that requires image processing on a Raspberry Pi. The Pi has to detect a traffic light (when the robot is standing on the start line), and as soon as the light goes green, signal the Arduino to run the motors. Now, to detect the traffic light, based on my research, I have thought of 3 different approaches... Approach 1- With the help of a great tutorial (Link- http://opencv-srf.blogspot.in/2010/09...) which told me how to detect a red-coloured object, I've written code which detects red lights (most of it is the same as the tutorial); it shows the object as white and the rest as black. Now what I think of doing is: adjust the HSV values for the threshold image so it sees my traffic light (which appears as a white spot on the threshold image), set them there, and then monitor the ROI until it goes black (meaning that the red light has gone off), and send a signal to the Arduino. In this approach, I need to know- Approach 2- Convert the RGB frames received from the camera feed into greyscale. Once they have been converted, find the brightest spot (which should be the traffic light); once the spot is found, mark it as an ROI, monitor it, and as soon as it goes below the threshold value (say 200, meaning the light has gone off), send appropriate commands to the Arduino. In this method, what I need to know is the same as in the first approach. Approach 3- Take a sample image of a traffic light (when it is red) and then compare it with the frames from my camera feed. As soon as there is a mismatch (meaning the light has gone off), send the appropriate command to the Arduino. I'm thinking of ... (more) |
2016-03-11 13:46:17 -0600 | commented answer | Comparing a captured image with a given image. Thanks there! The Cascade Classifier method looks nice; I read about the template matching function earlier this morning and I'm thinking it's worth a try... I'm gonna try both methods and see which works best, first on my PC and then on the RPi. The homography estimation tutorial is just a hell of a lot of code and no explanation, so that's gonna take me some time to interpret (I'm new to C/C++; I've been coding in Java and C# till now). One more question that I came across was that I am using OpenCV in a Windows environment in VS, thereby coding in C/C++, and I found that OpenCV code on the RPi is written in Python (at least in all the tutorials I came across). So do I need to convert my code from C to Python or can I use it as I'm using it ... (more) |
2016-03-11 05:13:52 -0600 | asked a question | Comparing a captured image with a given image. Hi there! I am working on a project which requires the detection of road signs. I was wondering if I could take sample images of the road signs, save them onto my PC (final version to run on a Raspberry Pi), and then, as the camera runs, have it keep checking for the presence of the sign stored in memory and take appropriate action when it is found (for example, if I implement it in a robot, whenever the robot sees a stop sign it should stop for, say, 10 seconds and then start moving again). Can anyone help me out on this? |
2016-03-10 12:34:07 -0600 | received badge | ● Scholar (source) |
2016-03-10 12:22:06 -0600 | asked a question | Problems running OpenCV in VS 2012 express for desktop. Hi there. I am a complete beginner to OpenCV. I downloaded it today and tried installing it on my pc using the documentation on this site. After a while I figured out that the documentation was for version 2.14 and was very, very complex. They were using VS 2010 and I have 2012. So I headed to YouTube and found these tutorials-
However, I had already completed the first part of the documentation (which does something with a command prompt and sets environment variables), but I ignored that and followed both the tutorials, and when I tried to run the sample code I got errors on two lines. Here's the code- #include <iostream> void main() { std::cout << "OpenCV Version: " << CV_VERSION << std::endl; // VS says identifier CV_VERSION is undefined } This program is supposed to show the OpenCV version I have on my PC, but I cannot get it to run. Is my OpenCV installation valid? If not, then how can I repair it? I have already checked the paths a lot of times. I just can't seem to figure out what to do. Any and all help will be greatly appreciated. Thank you -YaddyVirus |
2016-03-10 10:44:19 -0600 | commented question | OpenCV on Visual Studio 2012 Express Thanks a lot @berak |
2016-03-10 01:28:03 -0600 | asked a question | OpenCV on Visual Studio 2012 Express Hey there! I am new to OpenCV, and while I originally intend to use it on a Raspberry Pi, I couldn't wait for my board to ship and thought I'd start coding around on my PC itself. I just wanted to know: to run OpenCV, is the Visual Studio 2012 Express edition enough, or will I have to download the full edition? Moreover, the setup tutorial is very complex (I'm a geek and code in Java and C#, but still it went over my head). Can anyone tell me a simplified procedure? |