Hi, I've already posted this question on StackOverflow, but didn't get any answers there. Hope someone here takes notice. In my application:
1. I track an object.
2. Get where its corners are in this frame.
3. Find the homography between its corners from the last frame and the current frame.
4. Use that homography to do a perspectiveTransform on the corners found in the current frame, to get transformed_corners.
5. Use the transformed_corners to find the homography between them and the overlay_image.
6. Apply the above homography M to overlay_image using warpPerspective, to get what would be called the warped_image. This is the slow part.
7. Then, using masking operations, I print the warped_image onto the current frame where the object was found.
Now I know after reading this blog article here why warpPerspective is slow.
And I'm getting ~300 ms per frame in step 6 alone, all because of warpPerspective. It's significantly affecting the FPS of my application: the frame rate drops from 12 FPS without warping to 2 FPS when warping on every frame.
Is there any faster alternative to this? It's all done on Android, using NDK r9. What are some fast alternatives or optimizations to reduce the warp time from ~300 ms to under 50 ms?
My code is exactly what is being done here:
http://ramsrigoutham.com/2014/06/14/perspective-projection-with-homography-opencv/
The only difference is that it's executed on every frame in my tracking application, and the homography is always new, because the detected corners of the tracked object are different in each frame.