
b2meer's profile - activity

2020-08-12 01:54:00 -0500 received badge  Notable Question (source)
2020-04-15 11:15:54 -0500 received badge  Popular Question (source)
2020-03-26 07:14:40 -0500 received badge  Notable Question (source)
2019-02-14 10:58:03 -0500 received badge  Popular Question (source)
2018-03-05 15:15:51 -0500 received badge  Popular Question (source)
2017-03-30 05:07:04 -0500 commented answer Calculate odometry from camera poses

Great! I did it and it worked. I am now getting world camera coordinates. Thanks for your help.

2017-03-29 04:15:22 -0500 commented answer Calculate odometry from camera poses

Thanks for the detailed answer. I am using ArUco's own tracker, whose readme says that it tracks and provides camera poses. I am using ArUco from the following link. I have already created a marker map of my environment and I am running the aruco_test_markermap program that comes with the package. You can also find its cpp file by downloading the package from the above link; it is at aruco-2.0.19/utils_markermap/aruco_test_markermap.cpp. It apparently returns camera poses, but I can't simply use tvec_x and tvec_z as my camera's x and y locations. Can you guide me on how to interpret these pose values and get my camera's X and Y from them? Thanks
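For reference, the usual way to recover the camera position from an rvec/tvec pair is to invert the transform: if rvec/tvec map world points into the camera frame (x_cam = R·x_world + t), the camera center in world coordinates is C = −Rᵀ·t. A minimal numpy sketch (the pose values are made up for illustration; with OpenCV installed, `cv2.Rodrigues` replaces the hand-rolled conversion):

```python
import numpy as np

def rodrigues(rvec):
    """Convert a rotation vector to a 3x3 rotation matrix (Rodrigues' formula)."""
    rvec = np.asarray(rvec, dtype=float).reshape(3)
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta                      # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])    # cross-product (skew) matrix
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def camera_position_world(rvec, tvec):
    """If x_cam = R @ x_world + t, the camera center in world coords is -R.T @ t."""
    R = rodrigues(rvec)
    t = np.asarray(tvec, dtype=float).reshape(3)
    return -R.T @ t

# hypothetical pose from one frame of the tracker
pos = camera_position_world(rvec=[0.0, 0.0, 0.0], tvec=[1.0, 2.0, 3.0])
```

With zero rotation the camera center is simply −tvec; with a nonzero rvec the rotation must be applied first, which is why tvec_x and tvec_z cannot be read directly as camera X and Y.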

2017-03-24 08:28:27 -0500 asked a question Calculate odometry from camera poses

I am doing pose estimation using ArUco markermaps. I followed the following procedure:

  1. Created a marker map by printing and pasting markers in a room, recording a video of them, and using it to build the marker map (map.yml, map.log, and the point cloud file)
  2. Used the generated marker map to run the ArUco tracking program, which outputs camera poses in real time as three tvec and three rvec values.

Now, I want to keep track of how much the camera has moved in the x and y directions (2D only) using the camera poses, so that I can generate odometry from them. I am not sure how to interpret or use the tvec and rvec values for this purpose. Any ideas on this? Thanks.

Note: I am a beginner in pose estimation, so if I seem to be going wrong in my approach, please point it out.
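One way to get planar odometry from the poses, assuming the camera centers have already been transformed into world coordinates and that two of the world axes span the ground plane (x and z here, which is a convention assumption to adjust to your marker map), is to difference successive camera positions and accumulate the 2D displacement. A sketch with hypothetical positions from three frames:

```python
import numpy as np

def planar_odometry(cam_positions, axes=(0, 2)):
    """Accumulate 2D camera motion from a sequence of world-frame camera centers.

    `axes` picks which two world axes span the ground plane (x and z assumed
    here; change to match your marker-map's coordinate convention).
    Returns the 2D path and the total distance travelled along it.
    """
    pts = np.asarray(cam_positions, dtype=float)[:, list(axes)]
    steps = np.diff(pts, axis=0)                   # per-frame displacement
    travelled = np.linalg.norm(steps, axis=1).sum()  # total path length
    return pts, travelled

# hypothetical camera centers (world coords) from three tracked frames
positions = [[0.0, 1.5, 0.0],
             [1.0, 1.5, 0.0],
             [1.0, 1.5, 2.0]]
path, travelled = planar_odometry(positions)
```

Summing per-frame steps rather than taking the straight-line distance between the first and last frame gives path length, which is what wheel-style odometry normally reports.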

2017-03-16 07:59:18 -0500 commented question Need explanation about rvecs returned from SolvePnP

Yeah, I'm a bit confused by these values, as this is the first time I am doing pose estimation. My understanding so far is that I need the rotation angle about the y-axis to find the world coordinates of my camera. I am trying to find the rotation about the y-axis, but it is the rotation angle about the z-axis that comes out correctly. Please correct me if my concept or my approach for calculating the camera's world coordinates is wrong. Also, I have gone through the articles you suggested and I am already using ArUco's drawAxis function. Thanks

2017-03-16 05:55:30 -0500 asked a question Camera selection for running ArUco

I want to implement pose estimation using ArUco markers. For this, I want to know the ideal specs for a camera to run ArUco with. I have an outdoor area of 100x300 meters in which I want to run ArUco marker detection. Any suggestions for which camera would work best for this application? Thanks

2017-03-15 06:08:49 -0500 commented question Need explanation about rvecs returned from SolvePnP

I tried the matrix-to-Euler conversion implemented in the link shared by @Eduardo. The attitude angle comes out fine, which in my case is the rotation about the z-axis (the line between camera and marker). The other two angles' values do not make sense. I actually want the rotation angle about the y-axis (the axis pointing up/down). Any ideas where I am going wrong?

2017-03-14 08:48:04 -0500 asked a question Need explanation about rvecs returned from SolvePnP

I am using ArUco for pose estimation, and I want to get the global world coordinates of my camera using, say, a single detected ArUco marker. For this, I need to know the rotation of the camera with respect to the marker about the y-axis (the upward/downward axis). The output of ArUco/solvePnP gives me rvecs, which contain a rotation vector. I don't really understand how this rotation vector represents the angle of rotation. I can convert it to a rotation matrix using Rodrigues, but I still don't get the actual roll, pitch, and yaw angles (rotation about the x, y, and z axes), which is what I really need.

So, can anyone please explain how to interpret the rotation vector in rvecs, and how to get simple rotation angles about the three axes from it? Thanks
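For what it's worth, the standard recipe for this question is rvec → rotation matrix (Rodrigues) → Euler angles, where the Euler extraction assumes a particular factorization order (R = Rz·Ry·Rx below; a different order gives different angles). A numpy-only sketch with a hand-rolled Rodrigues so it runs without OpenCV (`cv2.Rodrigues` does the same conversion):

```python
import numpy as np

def rodrigues(rvec):
    """Rotation vector -> 3x3 rotation matrix (Rodrigues' formula)."""
    rvec = np.asarray(rvec, dtype=float).reshape(3)
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def euler_xyz(R):
    """Extract (roll about x, pitch about y, yaw about z) assuming R = Rz @ Ry @ Rx."""
    sy = np.sqrt(R[0, 0] ** 2 + R[1, 0] ** 2)
    if sy > 1e-6:
        roll = np.arctan2(R[2, 1], R[2, 2])
        pitch = np.arctan2(-R[2, 0], sy)
        yaw = np.arctan2(R[1, 0], R[0, 0])
    else:  # gimbal lock: pitch near +/-90 degrees, yaw is unrecoverable
        roll = np.arctan2(-R[1, 2], R[1, 1])
        pitch = np.arctan2(-R[2, 0], sy)
        yaw = 0.0
    return roll, pitch, yaw

# a pure rotation about y (the up/down axis asked about) should show up in pitch
roll, pitch, yaw = euler_xyz(rodrigues([0.0, 0.3, 0.0]))
```

The rotation vector itself is the rotation axis scaled by the rotation angle in radians, so its norm is the total angle; the per-axis Euler angles only fall out after the matrix decomposition above.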

2016-10-21 16:09:52 -0500 received badge  Enthusiast
2016-10-17 15:16:35 -0500 commented answer Best approach for Grass Detection ?

Yeah, that seems a good option. But do you think SVM would be better to use in this case, or would an ANN be better?

2016-10-17 03:38:53 -0500 commented answer Best approach for Grass Detection ?

Thanks for the detailed response. Actually, I do not want to detect grass based on its green color, since there may be a case where a green object is placed on the grass and I would want it to be detected as an obstacle. Can you explain the machine learning method you mentioned earlier in more detail, i.e. which approach would be best for implementing this? Thanks

2016-10-17 03:33:06 -0500 commented answer Best approach for Grass Detection ?

This approach looks good. I will give it a try. Thanks

2016-10-17 03:32:32 -0500 commented question Best approach for Grass Detection ?

@StevenPuttemans Actually, I do not want to detect grass based on its green color, since I could also encounter a green obstacle placed on the grass, and I want it to be classified as an obstacle.

2016-10-17 03:31:25 -0500 commented question Best approach for Grass Detection ?

@Balaji R I have attached a sample image via an update to my original post

2016-10-17 03:30:42 -0500 received badge  Editor (source)
2016-10-14 06:21:09 -0500 asked a question How to remove glare from image

I am trying to detect a circular object in an image. I am not using Hough circles, as it gives many false detections. I convert the image to binary after applying a median blur. For the binary conversion, I use simple thresholding, which works fine for me because the circular object is usually white or some other color and is placed on a black background. The only problem I face is that some liquid on the black background reflects light, which disturbs the thresholded image and hence interferes with detecting the circular object. Is there any way I can remove the glare from my images? I have attached a sample image for reference. Thanks

(sample image attached)
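A common approach for glare like this is to threshold the near-saturated pixels into a mask and repair them, typically with `cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)`. A numpy-only sketch of the idea on a synthetic image (the threshold value is a guess to tune per camera; the median fill is a crude stand-in for real inpainting):

```python
import numpy as np

def glare_mask(gray, thresh=240):
    """Mark near-saturated pixels as glare. `thresh` is illustrative; tune it."""
    return np.asarray(gray) >= thresh

def fill_glare_with_median(gray, mask):
    """Naive repair: replace glare pixels with the median of the rest of the
    image. On real photos, cv2.inpaint with the same mask (converted to uint8)
    reconstructs the region from its surroundings and looks far better."""
    out = np.asarray(gray, dtype=float).copy()
    out[mask] = np.median(out[~mask])
    return out

# synthetic example: dark background with a small bright glare spot
img = np.full((10, 10), 20.0)
img[4:6, 4:6] = 255.0
mask = glare_mask(img)
clean = fill_glare_with_median(img, mask)
```

The key point is that glare removal happens before thresholding, so the bright reflection never reaches the binary image that feeds the circle detection.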

2016-10-03 03:40:59 -0500 asked a question Best approach for Grass Detection ?

I want to develop a program for detecting grass. It should be able to identify the input image as 'grass' or as 'obstacle'.

It should output 'grass' if there is only grass in the input image, and 'obstacle' if any object is placed on the grass. Similarly, if there is no grass at all (e.g. concrete) or there is a boundary between grass and concrete, it should output 'obstacle'.

Please guide me on what approach to use for this kind of grass detection. Do I need a learning algorithm for this purpose, or would a non-learning approach (e.g. feature detection) serve the purpose? In either case, what would be the appropriate OpenCV functions to implement it?

Update: I have attached a sample image which contains grass as well as a concrete surface (obstacle). (sample image attached)
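Since color is ruled out, the machine-learning route suggested in the comments usually means training a classifier (e.g. `cv2.ml.SVM_create()` or an ANN) on texture features, because grass is distinguished by high-frequency texture rather than hue. A numpy-only sketch of the underlying idea, with a fixed threshold standing in for the trained classifier and synthetic patches standing in for real training data (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def texture_energy(patch):
    """Mean absolute horizontal + vertical gradient: a crude texture feature.
    Grass patches score high; smooth surfaces (concrete, painted objects,
    even green ones) score low regardless of color."""
    p = np.asarray(patch, dtype=float)
    gx = np.abs(np.diff(p, axis=1)).mean()
    gy = np.abs(np.diff(p, axis=0)).mean()
    return gx + gy

def classify(patch, threshold=10.0):
    """Stand-in for a trained SVM/ANN decision on the texture feature."""
    return "grass" if texture_energy(patch) > threshold else "obstacle"

# synthetic stand-ins: grass = high-frequency noise, obstacle = smooth gradient
grass_patch = rng.integers(0, 255, size=(32, 32)).astype(float)
obstacle_patch = np.tile(np.linspace(100.0, 120.0, 32), (32, 1))
```

In a real pipeline one would extract such features (or richer ones like Gabor or local binary patterns) per image tile, label tiles as grass/obstacle, and let the SVM learn the decision boundary instead of hand-picking a threshold.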