2019-02-19 06:34:09 -0600 | received badge | ● Notable Question (source) |
2018-01-22 09:40:07 -0600 | received badge | ● Popular Question (source) |
2015-07-07 01:18:06 -0600 | received badge | ● Student (source) |
2015-05-21 09:19:51 -0600 | commented answer | 2 Video streams from 2 different cameras->Transformation between views? Ok, thank you very much!
2015-05-21 08:30:36 -0600 | commented answer | 2 Video streams from 2 different cameras->Transformation between views? Ok, so here's the deal: let's say I have a marker in my scene and I get two images of it (one per camera). After I find the translation and rotation matrices, can I apply them to a virtual object drawn in OpenGL through the first camera and have it correctly rendered in the second camera? Also, if both cameras move together all the time (e.g. they are rigidly connected), do I have to recalculate the translation and rotation matrices continuously, or do I only need to compute them once and then always apply the same transformation to the view?
2015-05-21 06:11:17 -0600 | asked a question | 2 Video streams from 2 different cameras->Transformation between views? Hi guys! Let's say I have two cameras. Both are looking at a scene containing a marker, but their images are not the same (there is an offset between the two cameras, e.g. one is mounted on top of the other). Now suppose I render a virtual cube on top of the marker, which is tracked by one of the cameras. Next I want to draw the same cube in the second camera's view. For this to work, I need to apply some transformation to the original cube. Can you tell me how to find this transformation with OpenCV? What steps should I follow? Update 1: Both cameras are mounted on an Oculus and move together.
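For the question above, the camera-1 to camera-2 transform can be recovered from the marker pose seen by each camera: compose the marker-to-camera-2 pose with the inverse of the marker-to-camera-1 pose. The sketch below is illustrative (not from the thread); the pose values `R1, t1, R2, t2` are made up and would in practice come from something like `cv2.solvePnP` plus `cv2.Rodrigues` per camera. For rigidly mounted cameras the resulting transform is constant, so it only needs to be computed once.

```python
import numpy as np

def pose_to_matrix(R, t):
    """Build a 4x4 homogeneous transform (marker -> camera) from R (3x3) and t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical marker poses as seen by each camera (stand-ins for solvePnP output).
R1, t1 = np.eye(3), np.array([0.0, 0.0, 1.0])    # marker in camera-1 frame
R2, t2 = np.eye(3), np.array([0.0, -0.1, 1.0])   # marker in camera-2 frame

T1 = pose_to_matrix(R1, t1)   # marker -> camera 1
T2 = pose_to_matrix(R2, t2)   # marker -> camera 2

# Camera-1 -> camera-2: go back from camera 1 to the marker, then to camera 2.
# For rigidly connected cameras this matrix never changes.
T_21 = T2 @ np.linalg.inv(T1)

# A point at the marker origin, expressed in camera-1 coordinates,
# mapped into camera-2 coordinates:
p_cam1 = np.array([0.0, 0.0, 1.0, 1.0])
p_cam2 = T_21 @ p_cam1
print(p_cam2[:3])   # camera-2 coordinates of the marker origin
```

This also answers the comment above it: since `T_21` depends only on the rigid mounting, moving both cameras together does not change it.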
2015-05-13 08:30:09 -0600 | commented answer | Translate Mat by x and y offset Perfect, that was correct!
2015-05-13 05:06:19 -0600 | commented answer | Translate Mat by x and y offset Never mind I will find it |
2015-05-13 04:59:29 -0600 | commented answer | Translate Mat by x and y offset Ok, I have a 640x480 image with a white circle in the bottom right. Now I want a new Mat that has this circle in the top-left corner, with every other value 0 (black), and with the same final size. How can I do it?
2015-05-13 04:53:52 -0600 | commented answer | Translate Mat by x and y offset Ok, first of all, let's say I have a hand blob, i.e. a Mat with only 0 or 1 values and dimensions 640x480. I want to translate the whole image by x,y and then create a new Mat, also 640x480, in which the blob appears translated since the image has moved.
2015-05-13 04:44:52 -0600 | commented answer | Translate Mat by x and y offset Hmm, yes; however, when I wanted the image to go y-up and put a minus sign in front of the dy value, I didn't get the result I wanted.
2015-05-12 15:36:44 -0600 | asked a question | Depth data from sensor to z-buffer of OpenGL Hi guys! I was wondering if you have any ideas on a problem called occlusion handling. Let's say I use OpenCV to render a virtual object on top of a marker (augmented reality) tracked by an RGB-D sensor. Since the sensor gives me a depth map of the scene, I could use it to render only those polygons of the virtual object that are not occluded (hidden) by my hand. I read that I could mask the z-buffer of OpenGL with the values from the depth Mat, so that in the end only the closer surfaces remain. What I don't really get, though, is what happens to the values of pixels that belong to the user's hand. Does anyone have an idea for a detailed implementation, or know of an open-source project that does this?
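The depth-masking idea in the question above can be sketched per pixel: a virtual fragment is drawn only where its depth is smaller than the sensor's depth at that pixel, so hand pixels that lie closer than the virtual object automatically occlude it. Here is a minimal NumPy illustration with made-up toy values, not a real OpenGL z-buffer; in OpenGL the same effect comes from writing the sensor depth into the depth buffer before rendering and letting the depth test reject occluded fragments.

```python
import numpy as np

# Toy 4x4 scene: depth map from the sensor (metres); smaller = closer.
sensor_depth = np.full((4, 4), 2.0)
sensor_depth[1:3, 1:3] = 0.5          # the user's hand, close to the camera

# Depth of the virtual object's fragments at each pixel (inf = no fragment).
virtual_depth = np.full((4, 4), np.inf)
virtual_depth[0:3, 0:3] = 1.0         # a virtual cube 1 m away

# Occlusion test: draw a virtual fragment only where it is closer than the
# real scene. Pixels covered by the hand (0.5 m) occlude the cube (1.0 m),
# so those pixels keep showing the camera image of the hand.
visible = virtual_depth < sensor_depth
print(visible.astype(int))
```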
2015-05-12 10:04:09 -0600 | commented question | Translate Mat by x and y offset Yes, exactly, I don't need them.
2015-05-12 09:57:46 -0600 | received badge | ● Editor (source) |
2015-05-12 09:53:33 -0600 | asked a question | Translate Mat by x and y offset Hi! I want to translate a Mat by an x and y offset to form a new Mat of the same size. I also want the pixels uncovered by the offset to become 0. How can I do it? I tried doing it element by element, but that failed.
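A translation like the one asked for above can be done in OpenCV with `cv::warpAffine` and the 2x3 matrix `[[1, 0, dx], [0, 1, dy]]`, whose border pixels default to 0. For illustration, here is a minimal NumPy sketch of the same operation (the helper `translate` and its values are made up for this example), assuming a single-channel image and shifts smaller than the image size:

```python
import numpy as np

def translate(img, dx, dy):
    """Shift img by (dx, dy); pixels shifted in from outside become 0."""
    out = np.zeros_like(img)
    h, w = img.shape[:2]
    if abs(dx) >= w or abs(dy) >= h:
        return out                     # shifted completely out of view
    # Source and destination ranges, clipped to the image bounds.
    src_x = slice(max(0, -dx), min(w, w - dx))
    src_y = slice(max(0, -dy), min(h, h - dy))
    dst_x = slice(max(0, dx), min(w, w + dx))
    dst_y = slice(max(0, dy), min(h, h + dy))
    out[dst_y, dst_x] = img[src_y, src_x]
    return out

img = np.zeros((4, 4), dtype=np.uint8)
img[3, 3] = 1                       # "blob" in the bottom-right corner
moved = translate(img, -3, -3)      # move it to the top-left corner
print(moved[0, 0], moved[3, 3])     # -> 1 0
```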
2015-04-27 09:53:16 -0600 | commented answer | Get the 3D Point in another coordinate system Rvec, Tvec give the marker position w.r.t. the camera, correct?
2015-04-27 06:33:59 -0600 | answered a question | Get the 3D Point in another coordinate system Here is my code. I do get good x,y values for the finger w.r.t. the chessboard; however, the Z value is negative and becomes even more negative (down to -50) as I get closer to the chessboard origin.
2015-04-23 11:12:00 -0600 | commented answer | Get the 3D Point in another coordinate system sgarrido why is the website of aruco down? |
2015-04-21 04:27:45 -0600 | commented answer | Get the 3D Point in another coordinate system Yes, but since they are really close to each other, all I need is an x-offset, right?
2015-04-20 08:32:44 -0600 | commented answer | Get the 3D Point in another coordinate system Oh, by the way: I have the 3D point of the finger with respect to the depth camera, while the color camera is the one used for marker tracking. What change should I make to correct for this offset between the depth and color streams?
2015-04-20 08:22:58 -0600 | marked best answer | Get the 3D Point in another coordinate system Hi there! I have a system that uses an RGB-D camera and a marker. Using an augmented-reality library (aruco), I can successfully get the origin of the marker's coordinate system (the center of the marker). Also, using the same camera, I managed to get the 3D position of my finger with respect to the camera's world coordinate system (x',y',z'). Now I want to apply a transformation to the finger's 3D position (x',y',z') so that I get a new (x,y,z) with respect to the marker's coordinate system. It is also worth mentioning that the camera's coordinate system is left-handed, while the marker's coordinate system is right-handed. Here is a picture: Can you tell me what I have to do? Are there any OpenCV functions or calculations I could use to get the required result in C++?
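The transform asked about above is the inverse of the marker pose: if solvePnP-style `R, t` map marker coordinates into camera coordinates (p_cam = R p_marker + t), then p_marker = R^T (p_cam - t). A minimal NumPy sketch with made-up pose and finger values (a real handedness mismatch would additionally need one axis flipped, e.g. negating z):

```python
import numpy as np

# Hypothetical marker pose in the camera frame (stand-in for aruco / cv2.solvePnP
# output, with rvec already converted to a matrix via cv2.Rodrigues):
# R, t map marker coordinates into camera coordinates: p_cam = R @ p_marker + t.
R = np.eye(3)
t = np.array([0.1, 0.0, 1.0])

# Finger position measured in camera coordinates.
p_cam = np.array([0.1, 0.0, 0.9])

# Invert the rigid transform to express the point in the marker frame.
p_marker = R.T @ (p_cam - t)
print(p_marker)   # finger position in the marker's coordinate system
```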
2015-04-20 07:02:51 -0600 | received badge | ● Autobiographer |
2015-04-20 06:18:20 -0600 | received badge | ● Scholar (source) |
2015-04-20 06:18:16 -0600 | commented answer | Get the 3D Point in another coordinate system Great! Thank you very much! |