Hi,
I'm running into a bit of trouble with depth map computation. I'm taking a live feed from a pair of stereo cameras and computing disparity maps in real time, but the computed value for each pixel (or block of pixels, for that matter) keeps shifting between frames, which results in very inconsistent depth estimates across frames.
For my process, I first performed stereo calibration on the cameras (with a reprojection error of about 0.57), then used the calibration result to rectify the stereo images successfully. The rectified images are fed through a StereoBM matcher for disparity map generation, and the result is then smoothed with a weighted least squares (WLS) filter.
I have attached a gif demonstrating the issue: https://i.imgur.com/yi87y2G.mp4
I am still very new to this field and would appreciate any pointers. Also, if I have used any incorrect terms or failed to explain something adequately, please feel free to correct me.