2020-09-15 12:48:36 -0600 | received badge | ● Notable Question (source) |
2018-06-18 11:27:05 -0600 | received badge | ● Popular Question (source) |
2016-01-22 07:36:41 -0600 | received badge | ● Student (source) |
2015-10-20 14:20:23 -0600 | received badge | ● Scholar (source) |
2015-09-30 13:37:12 -0600 | commented answer | solvePnP returning incorrect values Updated the question to include the diagram and added some descriptors (also credited you at the bottom :) ). The update contains some information that may complicate the process, so if you're willing to give it a go, read the latest edit! I tried out your method as you described it and checked the math in Matlab after I obtained all the transforms, but it didn't work. This is likely due to the new issue I've discovered, though.
2015-09-30 11:40:46 -0600 | received badge | ● Supporter (source) |
2015-09-30 11:40:25 -0600 | commented answer | solvePnP returning incorrect values The second example you gave is the most accurate representation of my system, although the second "camera" is actually a stereo-optical tracking system, so I can't acquire any pixel values from it (which is fine, because it returns transforms to the markers anyway!). While I understand the equation you provided, I'm unclear on how to obtain c1Mch, as Camera 1's pose is entirely unknown to the tracking system. Is that referring to the transform returned by … Your diagram is really spot on, by the way. Can I add it to my question to help others visualize?
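The transform chain being discussed (obtaining the board's pose in the camera frame by composing the tracker's marker poses) can be sketched with plain 4x4 homogeneous matrices. This is a minimal NumPy sketch; the names `trk_M_cam` and `trk_M_board` and their values are illustrative stand-ins for whatever the tracking system actually returns, not identifiers from the thread:

```python
import numpy as np

def to_homogeneous(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = t
    return M

# Hypothetical tracker outputs (illustrative values only):
# trk_M_cam   : pose of the marker rigidly attached to the camera, in the tracker frame
# trk_M_board : pose of the marker attached to the board, in the tracker frame
trk_M_cam = to_homogeneous(np.eye(3), np.array([0.1, 0.0, 0.5]))
trk_M_board = to_homogeneous(np.eye(3), np.array([0.3, 0.0, 0.5]))

# Chain the transforms: pose of the board expressed in the camera-marker frame.
# (Inverting one side of the chain is what removes the unknown tracker frame.)
cam_M_board = np.linalg.inv(trk_M_cam) @ trk_M_board
```

Note that this gives the board relative to the camera's *marker*; getting to the optical center still requires the marker-to-camera offset, which is exactly the extrinsic calibration the question is about.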
2015-09-29 14:33:55 -0600 | received badge | ● Editor (source) |
2015-09-29 13:53:14 -0600 | answered a question | Real time pose - tutorial Did you calibrate your camera using the intrinsic matrix? If not, you may need to apply it to the frame using …
2015-09-29 13:53:13 -0600 | asked a question | solvePnP returning incorrect values I'm currently trying to implement an alternate method to webcam-based AR using an external tracking system. I have everything in my environment configured save for the extrinsic calibration. I decided to use … As it stands, I pass in my image pixel coordinates acquired with … Initially I thought the issue was that I had some ambiguities in how my board pose was determined, but now I'm fairly certain that's not the case. The math seems pretty straightforward, and after all my work on setting the system up, getting caught up on what is essentially a one-liner is a huge frustration. I'm honestly running out of options, so if anyone can help I would be hugely in your debt. My test code is posted below and is the same as my implementation minus some rendering calls. The ground-truth extrinsic I have that works with my program is as follows (basically a pure rotation around one axis and a small translation): Thanks!
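A quick way to sanity-check a candidate extrinsic (the kind of check the asker later ran in Matlab) is to reproject the board's object points through K[R|t] and compare the result against the measured pixel coordinates. The intrinsics, pose, and points below are illustrative, not the asker's actual calibration:

```python
import numpy as np

# Hypothetical intrinsic matrix.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Candidate extrinsic: here an identity rotation and a small translation
# along the optical axis, echoing the "pure rotation plus small translation"
# shape of the ground truth described in the question.
R = np.eye(3)
t = np.array([0.0, 0.0, 1.0])

# Planar board corners in the object frame (Z = 0), metres.
obj_pts = np.array([[0.0, 0.0, 0.0],
                    [0.1, 0.0, 0.0],
                    [0.0, 0.1, 0.0]])

# Reproject: x = K (R X + t), then divide by depth to get pixels.
cam_pts = (R @ obj_pts.T).T + t
proj = (K @ cam_pts.T).T
pixels = proj[:, :2] / proj[:, 2:3]
```

If the pixels reprojected from the pose `cv2.solvePnP` returns do not land near the detected corners, the usual suspects are mismatched point ordering, a row/column-major mix-up in the intrinsic matrix, or inconsistent coordinate-frame conventions between the tracker and OpenCV.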