I am using the SGBM algorithm to compute a disparity map. However, I would like to measure its true performance without any SIMD support (no SSE, AVX, etc.). Is that possible? I tried setUseOptimized(false), but I do not see any difference in runtime between setting it to true and false. Is that expected? Is there a better way to disable SIMD support and, more generally, other optimizations? What other optimizations are performed automatically?
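For reference, this is roughly how I am comparing the two settings (a minimal sketch; the image file names and SGBM parameters are placeholders, not my actual configuration):

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>

// Time one StereoSGBM::compute() call on a stereo pair.
static double timeSGBM(const cv::Mat& left, const cv::Mat& right)
{
    // Placeholder parameters: minDisparity=0, numDisparities=64 (multiple of 16), blockSize=5.
    cv::Ptr<cv::StereoSGBM> sgbm = cv::StereoSGBM::create(0, 64, 5);
    cv::Mat disparity;

    int64 t0 = cv::getTickCount();
    sgbm->compute(left, right, disparity);
    int64 t1 = cv::getTickCount();
    return (t1 - t0) / cv::getTickFrequency();
}

int main()
{
    // Placeholder file names for the rectified stereo pair.
    cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);

    cv::setUseOptimized(true);
    std::cout << "useOptimized=true  : " << timeSGBM(left, right) << " s\n";

    cv::setUseOptimized(false);
    std::cout << "useOptimized=false : " << timeSGBM(left, right) << " s\n";

    return 0;
}
```

Both runs report essentially the same time on my machine.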