Different results between imread as grayscale and cvtColor
Has anyone noticed that if you load a color image directly as grayscale, the resulting image is slightly different from loading it as a color image and then converting to grayscale with cvtColor? Does anyone know why?
import cv2
import numpy as np

img1 = cv2.imread("imge.jpg", 0)               # load directly as grayscale
img2 = cv2.imread("imge.jpg", 1)               # load as BGR color
img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)  # then convert to grayscale

print(np.max(img1 != img2))
# True
print(np.max(img1.astype('float32') - img2.astype('float32')))
# 2.0
BTW, this happens under both Linux and Windows, in both C++ and Python.
JPEGs are lossy.
Thanks for the comment. I understand that, but I thought cv2.imread(..., 0) loads the image as color and then converts it to gray. Yet the results are different, so this doesn't seem to be related to JPEG compression...
The reason is that there are multiple implementations of the grayscale conversion in play. cvtColor() is THE OpenCV implementation and will be consistent across platforms. When you use imread() to convert to grayscale, you are at the mercy of the platform-specific implementation of imread(). I wouldn't be surprised if imread() returned slightly different grayscale values on each platform. I can't think of any reason to ever convert to grayscale in imread().
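To make that concrete, here is a small pure-NumPy sketch (random made-up pixel data, no OpenCV required) of how two implementations of the same BT.601 grayscale weights can disagree. One uses floating-point weights rounded at the end; the other uses fixed-point integer arithmetic, which is the style OpenCV uses internally. The specific rounding details below are illustrative assumptions, not the actual code path of any particular imread() backend:

```python
import numpy as np

# Made-up image data standing in for a decoded BGR frame.
rng = np.random.default_rng(0)
bgr = rng.integers(0, 256, size=(64, 64, 3), dtype=np.int64)
b, g, r = bgr[..., 0], bgr[..., 1], bgr[..., 2]

# Implementation A: floating-point BT.601 weights, rounded once at the end.
gray_float = np.round(0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)

# Implementation B: fixed-point arithmetic, weights scaled by 2**14
# (4899 + 9617 + 1868 == 16384), with a half added before the shift to round.
gray_fixed = ((4899 * r + 9617 * g + 1868 * b + (1 << 13)) >> 14).astype(np.uint8)

diff = np.abs(gray_float.astype(int) - gray_fixed.astype(int))
print(diff.max())  # the two "correct" conversions can still differ
```

Both versions are reasonable, yet they can disagree on pixels whose weighted sum falls near a rounding boundary. A codec-internal grayscale path (e.g. libjpeg decoding straight to luma) is a third such implementation, which is why imread(..., 0) and cvtColor() need not agree bit-for-bit.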