
Inverse Perspective Mapping -> When to undistort?


I have a camera mounted on a car facing forward and I want to find the road markings. Hence I'm trying to transform the image into a bird's-eye view image, as viewed from a virtual camera placed 15m in front of the camera and 20m above the ground. I implemented a prototype that uses OpenCV's warpPerspective function. The perspective transformation matrix is obtained by defining a region of interest on the road and by calculating where the 4 corners of the ROI project into both the front camera and the bird's-eye view camera. I then pass these two sets of 4 points to the getPerspectiveTransform function to compute the matrix. This successfully transforms the image into a top view.
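To make the setup concrete, here is a minimal numpy sketch of the homography step: it solves the same 8x8 linear system that cv2.getPerspectiveTransform solves from 4 point correspondences. The ROI corner coordinates are made-up numbers for illustration, not values from my actual setup.

```python
import numpy as np

def get_perspective_transform(src, dst):
    """Solve for the 3x3 homography H mapping src -> dst.
    This is the linear system cv2.getPerspectiveTransform solves:
    for each pair (x, y) -> (u, v), two rows constrain the 8
    unknown entries of H (H[2][2] is fixed to 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

# Hypothetical ROI corners in the front image and their
# target positions in the bird's-eye view image.
src = [(300, 400), (500, 400), (620, 600), (180, 600)]
dst = [(200, 100), (440, 100), (440, 500), (200, 500)]
H = get_perspective_transform(src, dst)
```

With a real image, the same `H` would then be fed to `cv2.warpPerspective(img, H, out_size)` to produce the top view.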


When should I undistort the front-facing camera image? Should I first undistort and then apply this transform, or should I first transform and then undistort?

If you are suggesting the first case, then what camera matrix should I use to project the points onto the bird's-eye view camera? Currently I use the same raw camera matrix for both projections.
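For reference, this is how I currently compute the two projections with one shared camera matrix: a plain pinhole projection of a ground-plane point into each camera. All intrinsics and poses here are made-up illustrative numbers, and the world/camera axis conventions are an assumption, not something fixed by my setup.

```python
import numpy as np

def project(K, R, t, point_world):
    """Pinhole projection p ~ K (R X + t), returning pixel coords."""
    p = K @ (R @ point_world + t)
    return p[:2] / p[2]

# Hypothetical intrinsics, reused for BOTH cameras (the current approach).
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])

# Bird's-eye camera 15 m ahead and 20 m above the ground, looking
# straight down. World frame: x right, y forward, z up;
# camera frame: x right, y down, z forward.
R_top = np.array([[1.0,  0.0,  0.0],
                  [0.0, -1.0,  0.0],
                  [0.0,  0.0, -1.0]])
C_top = np.array([0.0, 15.0, 20.0])   # camera center in world coords
t_top = -R_top @ C_top

ground_pt = np.array([1.0, 15.0, 0.0])  # a point on the road plane
uv = project(K, R_top, t_top, ground_pt)
```

The four ROI corners projected this way (into the front camera and into this virtual top camera) are the two point sets I feed to getPerspectiveTransform.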

Please ask for more details if my description is confusing!