Guidance for mapping a 2D image onto 3D geometry

asked 2020-11-27 03:00:45 -0500

Phil2

updated 2020-11-30 16:58:15 -0500


I am trying to perform one of the following tasks:

a) use an image as a texture on a 3D surface, or b) warp an image in such a way that its shape represents a 3D shape

Available input:

  • 3D geometry --> as point cloud or triangulated mesh
  • 3D object coordinates of a set of reference points
  • 2D image coordinates of the same reference points

Boundary conditions:

  • image from only one single camera (no stereo)
  • uncalibrated camera
  • camera has fixed focal length
  • reference points are distributed onto the 3D surface

Current Approach:

  • perform camera calibration --> get intrinsic parameters and distortion coefficients
  • solvePnP --> use results from camera calibration

What's next?

Programming language: python

I think this has been done many times before, but maybe I just don't have the right search keywords.

For camera calibration I followed this tutorial to get started: https://opencv-python-tutroals.readth... I am looking for a "cooking recipe" that would guide me through the necessary steps. Any help is much appreciated.

Thank you,


Edit: Attachments:

[Image: generic example of camera view]

[Image: generic example of the image mapped onto the 3D surface as a texture (the mapping does not fit very well)]

2D-3D reference coordinates & the 3D point cloud of the geometry are available as CSV files. How do I best upload them here?



a point cloud isn't a surface. you'll have to process your point cloud into a mesh surface or other surface description first. that is, if you want to put texture on triangles. if you only want to give your cloud's points some color, that's simpler: project them onto the image and sample the image.

crackwitz ( 2020-11-27 12:55:49 -0500 )

also, DO NOT use "tutroals". it's five years out of date, and all of that content is in the official documentation anyway.

crackwitz ( 2020-11-27 12:56:28 -0500 )

Question revised.

Phil2 ( 2020-11-30 06:47:14 -0500 )

@crackwitz: Yes, the point cloud is used to build a mesh. Optionally, I can use an already generated mesh as input. The task is not to create a coloured point cloud, but to accurately map the image onto the 3D surface using the known 2D-3D coordinates of the reference points.

Phil2 ( 2020-11-30 06:58:26 -0500 )

ok, so you can use solvePnP() to get a transformation, but from there on it is probably "texture mapping", for which you should probably use a 3D editor like Blender rather than try to write OpenCV code

berak ( 2020-11-30 07:42:51 -0500 )

@berak: The image is from an experimental measurement. The 3D surface is from CAD. Numerical data (X-Y-Z coordinates) shall be overlaid with the image in 3D space. My hope is to get all this together in Python using OpenCV.

Phil2 ( 2020-11-30 10:23:00 -0500 )

@berak: It is possible to do UV texture mapping in the CAD software (CATIA), but it's very unhandy and approximate, since the image (= texture) is shifted/manipulated by mouse and parameter settings. That's OK for one image, but there are a lot of them.

Phil2 ( 2020-11-30 10:27:10 -0500 )

it would probably help if you showed your image and a rendered mesh/cloud

berak ( 2020-11-30 10:38:51 -0500 )

Question is updated. The 2D/3D coordinates are available as *.csv files. How do I best upload them here? The uploaded result does not map the image correctly to the 3D surface; the black dots in the image should match the blue dots of the 3D geometry. But hopefully the idea is clearer now. Thank you.

Phil2 ( 2020-11-30 17:03:46 -0500 )