yorkhuang's profile - activity

2017-03-21 23:25:35 -0600 commented answer Is it possible to conduct pose estimation by matching features between picture taken from camera and pre-taken environmental photos?

Thank you, Tetragramm. Is it possible to achieve pose estimation with solvePnP() without SLAM? The scenario is similar to what I explained in my previous post: I set a physical spot as the origin of the world coordinate system and build the virtual room, along with its photo-textured walls, accordingly. So, by feature-matching the camera-captured image against the photo-textured walls, can I perform pose estimation with solvePnP()? Thanks,

2017-03-21 21:08:58 -0600 commented answer Is it possible to conduct pose estimation by matching features between picture taken from camera and pre-taken environmental photos?

Hi, Tetragramm, thank you for confirming. As your explanation suggests, my current problem is how to reconstruct a good 3D environment from the environmental photos. Any recommendation? Someone told me that I have to consider the focal length and FOV of the camera when taking the environmental photos. Any comment? Thanks,

2017-03-21 10:44:57 -0600 commented answer The limitation of solvePnP() for pose estimation

Hi, Tetragramm, can you answer this question? Thanks,

2017-03-21 10:38:02 -0600 commented answer The limitation of solvePnP() for pose estimation

Hi, Tetragramm, I have already posted another question here. Please give me your advice. Thanks,

2017-03-21 10:36:01 -0600 asked a question Is it possible to conduct pose estimation by matching features between picture taken from camera and pre-taken environmental photos?

Can we conduct pose estimation through pre-taken environmental photos? We want to run a mixed-reality experiment by taking four photos of a room and texturing them onto the walls of a virtual room to create a virtual environment. While a user wearing a VR headset with a customized front-facing camera navigates this virtual room, the camera takes snapshots of the physical room, which are used for pose estimation against the photo-textured walls of the virtual room. So, the question: is it possible to conduct pose estimation by matching features between the picture taken from the camera and the pre-taken environmental photos? Thanks,

2017-03-19 23:41:00 -0600 commented answer The limitation of solvePnP() for pose estimation

Hi, Tetragramm, is it possible to discuss this with you in detail through email? My email is [removed because spammers]. Thanks,

2017-03-19 09:21:05 -0600 commented answer The limitation of solvePnP() for pose estimation

Got it! Thank you for your prompt reply. What if I do not worry about the correct physical location and consider only the location relative to the world origin; what deviation would you expect then? My current status is that the pose computed by solvePnP is jumpy when I move toward the poster. I suspect the distance between the world origin and the location of the poster dominates the accuracy of solvePnP. Am I right?

2017-03-19 08:39:09 -0600 commented answer The limitation of solvePnP() for pose estimation

Hi, Tetragramm, thank you for confirming this approach. I am not sure I fully understand what you meant by "vary in your estimation by at least 10cm relative to the poster". Does it mean that I should tolerate 10cm of deviation? How did you arrive at this 10cm figure? I am also not sure about "If your poster->origin is off by some amount, add that uncertainty to the 10cm to get your uncertainty with respect to the origin." Can you elaborate? Thanks again for your kind help!

2017-03-18 23:43:40 -0600 commented answer The limitation of solvePnP() for pose estimation

Hi, Tetragramm, thank you for your patience. The final goal of my experiment is this: given a known physical location, I will place a poster several meters, say 5 meters, away from that location, and I set that location as the origin of the world coordinate system. Knowing the distance and orientation of the poster w.r.t. that world coordinate system, I detect the feature points on the poster and derive their 3D coordinates w.r.t. that world coordinate system. I then want to compute the pose of the smartphone (relative to that physical location) when it faces the poster. My plan is to compute the user's physical location through this approach. Is it possible to do that?
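For readers following this thread: a minimal sketch of the step described above, i.e. deriving world-frame 3D coordinates for the poster's feature points when the poster's orientation and position relative to the chosen world origin are known from measurement. The names and values here are illustrative, not from the thread.

    // Sketch: map feature points measured in the poster's local frame into the
    // world frame, given the poster's measured orientation and position w.r.t.
    // the world origin (e.g. t_poster = (5, 0, 0) for a poster 5 m away).
    #include <opencv2/core/core.hpp>
    #include <vector>

    std::vector<cv::Point3f> posterToWorld(const std::vector<cv::Point3f>& posterPts,
                                           const cv::Mat& R_poster,  // 3x3 CV_64F rotation
                                           const cv::Mat& t_poster)  // 3x1 CV_64F translation
    {
        std::vector<cv::Point3f> worldPts;
        for (size_t i = 0; i < posterPts.size(); ++i) {
            cv::Mat p = (cv::Mat_<double>(3, 1) << posterPts[i].x,
                                                   posterPts[i].y,
                                                   posterPts[i].z);
            cv::Mat w = R_poster * p + t_poster;   // rotate into world axes, then shift
            worldPts.push_back(cv::Point3f((float)w.at<double>(0),
                                           (float)w.at<double>(1),
                                           (float)w.at<double>(2)));
        }
        return worldPts;   // pass these as objectPoints to solvePnP
    }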

2017-03-16 21:08:09 -0600 received badge  Student (source)
2017-03-16 21:01:38 -0600 commented answer The limitation of solvePnP() for pose estimation

Hi, Tetragramm, thank you for your reply. Yes, I did notice the case of multiple points projecting to the same pixel, and I filter them out before passing the input to solvePnP. Still, I cannot get a stable pose estimation while moving toward the target image. So, the revised question is: if the cases you mentioned are eliminated, will the distribution of the 3D coordinates affect the accuracy of solvePnP? Thanks,

2017-03-16 08:43:05 -0600 commented answer The limitation of solvePnP() for pose estimation

Totally understood. Thank you for your answer. I have raised another question about the input 3D coordinates. Since I am not qualified to add links, please refer to the question "Will the distribution of 3D coordinates affect the accuracy of solvePnP?" Thanks,

2017-03-16 08:42:36 -0600 asked a question Will the distribution of 3D coordinates affect the accuracy of solvePnP?

I am still trying to figure out the proper way to use solvePnP(). Can someone tell me: if the input 3D coordinates for solvePnP() are not regularly distributed, will the returned pose estimate still be accurate? In my experiment I assign 3D coordinates to randomly distributed feature points on a poster and take a sequence of pictures while moving straight toward the poster. The resulting camera XYZ coordinates are a bit jumpy. For the record, I did conduct the following operations to derive the camera pose:

    Mat R;
    Rodrigues(rvec, R);
    R = R.t();
    tvec = -R*tvec;

By the way, the 3D coordinate values that I assigned are a bit large: some range over a couple of thousand while others are a couple of hundred.

2017-03-15 21:22:09 -0600 commented answer The limitation of solvePnP() for pose estimation

Hi, Tetragramm, my experiment increases the Z value step by step and leaves the X, Y values fixed. After inverting the return values to get the world pose, I get slightly different XY values. To ensure the experiment runs are exactly the same, I took one sequence of query images and reuse that same sequence for each run. About the measurement of point coordinates w.r.t. the world system: should it be in meters or centimeters? And if such measurement is impossible and only estimated object 3D coordinates are available, will that affect the solvePnP() result significantly?

2017-03-15 20:52:00 -0600 commented question The limitation of solvePnP() for pose estimation

Hi, Swing, I did read the link and tried solvePnP() over and over again. I always fail to get a stable pose estimation, no matter whether the training data are planar or non-planar. I am not sure what I did wrong, but I am skeptical about the result of solvePnP().

2017-03-15 20:47:03 -0600 commented answer The limitation of solvePnP() for pose estimation

Hi, Tetragramm, thank you for your help. Can you be more specific about "set your world points appropriately"? The reason I ask is that I tried a poster on the wall and noticed that the pose computed by solvePnP() deviates slightly for different Z values. I set the top-left corner of the poster as the origin of the world coordinate system, and I am using OpenCV 2.4.13.2. Did I miss anything? Please advise. Thanks,

2017-03-15 01:53:01 -0600 asked a question The limitation of solvePnP() for pose estimation

Please correct me if I am wrong. From my understanding and from posts on the Internet, all the successful pose estimation results with solvePnP() set the origin of the world coordinate system on the planar image or marker. Am I right? What if the origin of the world coordinate system is NOT on the planar image or marker; will solvePnP() still give a correct pose estimation w.r.t. the image or marker? Can anyone answer me? Thanks in advance.
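A short sketch of the point in question: solvePnP places no constraint on where the world origin sits; it only requires that objectPoints be expressed consistently in the chosen frame. Assuming for simplicity that the poster's axes are aligned with the world axes, a pure offset suffices (the offset value below is hypothetical; for a rotated poster, apply the rotation first as in the earlier sketch).

    // Express poster-local feature coordinates in a world frame whose origin
    // is NOT on the poster, by shifting them by the poster's known position.
    std::vector<cv::Point3f> objectPoints;        // feature points, poster-local
    cv::Point3f posterOrigin(5.0f, 0.0f, 0.0f);   // hypothetical: poster 5 m from origin
    for (size_t i = 0; i < objectPoints.size(); ++i)
        objectPoints[i] += posterOrigin;          // now expressed in the world frame
    // solvePnP(objectPoints, imagePoints, ...) then returns rvec/tvec w.r.t.
    // that world frame rather than w.r.t. the poster itself.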

2017-03-15 01:00:02 -0600 received badge  Enthusiast
2017-03-14 23:46:19 -0600 commented answer Does anyone actually use the return values of solvePnP() to compute the coordinates of the camera w.r.t. the object in the world space?

Hi, Tetragramm, thank you for your enthusiastic help. I just checked the official site: ArUco is available for OpenCV 3.2-dev only, and I am using OpenCV 2.4.13.2. Is there any difference in the pose result from solvePnP() between these two versions? ArUco is a marker-based solution, while my study is based on natural feature tracking. So I have two questions and hope you can help me. First, does the number of feature points detected affect the result of solvePnP()? In other words, what is the minimum number of detected feature points required for accurate pose estimation? Secondly, does ArUco use meters or centimeters as the unit for 3D coordinates? Thanks again for your help!

2017-03-13 23:50:10 -0600 commented answer Does anyone actually use the return values of solvePnP() to compute the coordinates of the camera w.r.t. the object in the world space?

Thank you, Tetragramm. Are you referring to this web page? https://www.uco.es/investiga/grupos/ava/node/26 Thank you for the information. I will check the website.

2017-03-13 21:28:48 -0600 commented answer Does anyone actually use the return values of solvePnP() to compute the coordinates of the camera w.r.t. the object in the world space?

Hi, Tetragramm, thank you for your answer. The answer you gave matches what I found in various posts. I just wonder whether anyone has actually used this method to position himself within a space? Thanks,

2017-03-13 08:04:22 -0600 asked a question Does anyone actually use the return values of solvePnP() to compute the coordinates of the camera w.r.t. the object in the world space?

All the demo videos of pose estimation using solvePnP() in various posts show only a wireframe coordinate system or a wireframe object on top of the target image. Does anyone actually use the return values of solvePnP() to compute the coordinates of the camera w.r.t. the object in world space? My main confusion is that the return values from solvePnP() are the rotation and translation of the object in the camera coordinate system. Can we actually use those return values to compute the camera pose w.r.t. the object in world space? I have been searching for this answer for over two months. Can anyone help me? Thanks,

2017-03-13 07:50:45 -0600 commented answer How to derive camera position from solvePnP? Not a repeated question.

I have read the course material provided by Eduardo (http://www-lar.deis.unibo.it/people/cmelchiorri/Files_Robotica/FIR_03_Rbody.pdf) and it still does not resolve my doubt about how to derive the camera position from solvePnP(). My main uncertainty is that the rotation vector and translation vector from solvePnP() describe the object in the camera coordinate system. Can we really use these two vectors to compute the camera pose from the equations above?

2017-03-13 05:40:16 -0600 commented answer How to derive camera position from solvePnP? Not a repeated question.

Thank you, Eduardo, for your answer. However, your equation is exactly the same as the one in the post "camera-position-in-world-coordinate-from-cvsolvepnp" on Stack Overflow. I tried that already and had no luck at all. Am I misunderstanding your answer? Thanks again for your kind help!

2017-03-13 05:14:41 -0600 commented question How to derive camera position from solvePnP? Not a repeated question.

After further searching the Internet, from the post "How to find the position of camera given the known world coordinate of objects?" on http://cs.stackexchange.com/, it seems that we need to further consider the camera's intrinsic parameters when we want to derive the camera's world position from the return values of solvePnP. Am I correct? Please help. Thank you,

2017-03-13 04:40:52 -0600 received badge  Editor (source)
2017-03-13 04:40:19 -0600 asked a question How to derive camera position from solvePnP? Not a repeated question.

Hi, can someone help me? The post "camera-position-in-world-coordinate-from-cvsolvepnp" on Stack Overflow gives an answer for how to derive the camera position in world coordinates from cv::solvePnP. However, I still cannot get a correct solution from that page. My experiment is facing a wall with a poster and moving straight toward the poster step by step. Unfortunately, the trace of the resulting camera coordinates is not a straight line. From my understanding, solvePnP() returns the rotation and translation of the object in the camera coordinate system. Notice that these are the object's rotation and translation in the camera coordinate system. So the whole question becomes: given a poster's rotation and translation in the camera coordinate system, how can we derive the camera's position in the poster's original world space? From my understanding of 3D computer graphics, I am skeptical about the solution from the above post. Can someone help me solve the puzzle? Thanks,
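For readers: a minimal sketch of the inversion that the Stack Overflow post describes, assuming the 2D-3D correspondences and camera intrinsics are already prepared (the helper name is mine). Since solvePnP returns the pose of the world/poster frame expressed in the camera frame, the camera's position in the world frame is C = -R^T * t; the intrinsics are consumed inside solvePnP itself.

    #include <opencv2/calib3d/calib3d.hpp>
    #include <vector>

    // Returns the 3x1 camera position in world coordinates.
    cv::Mat cameraPositionInWorld(const std::vector<cv::Point3f>& objectPoints,
                                  const std::vector<cv::Point2f>& imagePoints,
                                  const cv::Mat& cameraMatrix,
                                  const cv::Mat& distCoeffs)
    {
        cv::Mat rvec, tvec;
        cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);

        cv::Mat R;
        cv::Rodrigues(rvec, R);       // 3x3 rotation taking world coords to camera coords
        cv::Mat C = -R.t() * tvec;    // invert: camera position in the world frame
        return C;
    }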

2017-02-25 21:47:05 -0600 asked a question Can value of objectPoints for solvePnP ranging from 1.0 to -1.0?

Does anyone know the constraints on the values in the objectPoints list for solvePnP? I normalize each axis of the 3D objectPoints values to the range -1.0 to +1.0 before passing them to the solvePnP function. Is it OK to have this kind of 3D value setting? Will it cause any problem?
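For context, a hedged sketch: solvePnP works in whatever units objectPoints use, so scaling them down is geometrically fine, but the recovered translation then comes back in those scaled units and must be rescaled afterwards; the rotation is untouched by uniform scaling. Note that the scale must be the same for all three axes; normalizing each axis independently distorts the geometry and will break the pose. The factor below is illustrative.

    // Scale object points into roughly [-1, 1] by a single known factor s,
    // run solvePnP, then rescale the translation back to the original units.
    double s = 1000.0;                            // hypothetical original coordinate extent
    std::vector<cv::Point3f> scaledPts;
    for (size_t i = 0; i < objectPoints.size(); ++i)
        scaledPts.push_back(objectPoints[i] * (float)(1.0 / s));

    cv::Mat rvec, tvec;
    cv::solvePnP(scaledPts, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);
    tvec = tvec * s;                              // rotation (rvec) is unaffected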

2017-02-25 19:51:14 -0600 commented answer Question about "imagePoints" parameter for solvePnP() function

Hi, Tetragramm: Thank you for your reply. I do not understand your comments. I do assign a 3D value to each feature point on the reference panorama image (train image). Is this what you meant? I am trying to find out which portion of the panorama the current live picture matches. So, you can imagine that the panorama is much bigger than a live picture, and it is very difficult to align point 0 of the assigned 3D coordinates with point 0 in the live image. Please advise. Thanks,

2017-02-24 00:55:27 -0600 asked a question Should 3D and 2D point lists for solvePnP be declared in double or float?

Can anyone confirm whether the 3D and 2D point lists for solvePnP should be declared as double or float? Someone on the Web said that they should be double, otherwise the answer will not be correct. There is no specific instruction in the OpenCV documentation. Thanks,
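As far as I can tell (worth double-checking against the docs for your version), solvePnP accepts both precisions for the point lists, and the common idiom is float point vectors with a double camera matrix. A sketch with hypothetical intrinsics:

    // Either precision works for the point lists; vector<Point3f>/<Point2f>
    // is the usual idiom, and Point3d/Point2d variants are accepted as well.
    std::vector<cv::Point3f> objectPoints;   // fill with >= 4 non-degenerate 3D points
    std::vector<cv::Point2f> imagePoints;    // matching 2D pixel coordinates

    double fx = 800, fy = 800, cx = 320, cy = 240;      // hypothetical intrinsics
    cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) << fx, 0, cx,
                                                       0, fy, cy,
                                                       0,  0,  1);
    cv::Mat distCoeffs = cv::Mat::zeros(4, 1, CV_64F);  // assuming no lens distortion

    cv::Mat rvec, tvec;
    cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);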

2017-02-24 00:37:46 -0600 commented answer Question about "imagePoints" parameter for solvePnP() function

Hi, Tetragramm: Thank you for your prompt and helpful reply. I do not quite follow you on "point 0 in your 3d points should be point 0 in your image points and so forth". What I am doing is: I take a panorama first and try to match a live photo against that panorama. Of course, I assign some pseudo 3D coordinates to points in the panorama. Since the live picture will be only a small portion of this panorama, how can I align point 0 of the assigned 3D coordinates with point 0 in the live image? Can you be more specific? Thanks in advance for your reply.
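A sketch of what the index alignment means in this panorama scenario: "point 0 should be point 0" only says that objectPoints[i] and imagePoints[i] must belong to the same physical feature, and the DMatch indices from the matcher give exactly that pairing. This assumes the live image was the query and the panorama the train set, and that panorama3D[k] holds the 3D coordinate assigned to panorama keypoint k; both names are illustrative.

    #include <opencv2/features2d/features2d.hpp>
    #include <vector>

    // Build index-aligned 3D/2D lists for solvePnP from descriptor matches.
    void buildCorrespondences(const std::vector<cv::DMatch>& matches,
                              const std::vector<cv::Point3f>& panorama3D, // 3D per panorama keypoint
                              const std::vector<cv::KeyPoint>& liveKps,   // keypoints in live image
                              std::vector<cv::Point3f>& objectPoints,
                              std::vector<cv::Point2f>& imagePoints)
    {
        for (size_t i = 0; i < matches.size(); ++i) {
            objectPoints.push_back(panorama3D[matches[i].trainIdx]); // 3D from panorama
            imagePoints.push_back(liveKps[matches[i].queryIdx].pt);  // 2D from live image
        }
        // objectPoints[i] and imagePoints[i] now describe the same feature.
    }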

2017-02-23 10:40:22 -0600 asked a question Question about "imagePoints" parameter for solvePnP() function

I am new to OpenCV and am trying to use solvePnP to compute the camera pose from known 3D points. I am quite confused about the second parameter of this function, imagePoints. Should it be integer-valued? According to the definition, imagePoints are the pixel coordinates in the query image. However, when I use the ORB method to detect feature points, I notice that two-thirds of the generated feature points have floating-point 2D coordinates. I am not sure whether this is the reason I cannot get a correct camera pose. Furthermore, should the imagePoints values be normalized by the image size before being passed to solvePnP? Can someone please help me solve the puzzle. Thanks,
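For reference, a minimal sketch using OpenCV 2.4-style APIs: sub-pixel float keypoint coordinates are exactly what solvePnP expects, and they should stay in raw pixel units, since the cameraMatrix is what maps pixels onto the normalized image plane; no manual normalization is needed. The image filename is hypothetical.

    #include <opencv2/features2d/features2d.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <vector>

    cv::Mat image = cv::imread("live.jpg", 0);       // hypothetical query image, grayscale

    std::vector<cv::KeyPoint> keypoints;
    cv::ORB orb;                                     // OpenCV 2.4-style ORB detector
    orb.detect(image, keypoints);                    // sub-pixel float locations by design

    std::vector<cv::Point2f> imagePoints;
    cv::KeyPoint::convert(keypoints, imagePoints);   // raw pixel coords, ready for solvePnP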