Python Canny output image is smaller than input image

asked 2014-04-27 15:23:51 -0500 by mikey

updated 2014-04-27 15:39:54 -0500


While running the Canny function, I've noticed that its output is smaller than the original input image.

import cv2
import numpy

# Convert the source image to a NumPy array, then swap RGB -> BGR for OpenCV
imgnp = numpy.array(original_image)
imgcv = imgnp[:, :, ::-1].copy()
# Grayscale, blur, then detect edges
gray = cv2.cvtColor(imgcv, cv2.COLOR_BGR2GRAY)
edges = cv2.GaussianBlur(gray, (3, 3), 0)
edges = cv2.Canny(edges, 75, 200, apertureSize=3, L2gradient=True)
print('imgcv size: %d\n' % imgcv.size)
print('edges size: %d\n' % edges.size)

The print out is:

imgcv size: 6220800

edges size: 2073600

Why is the output from the Canny function smaller than the input? I'd like to overlay the detected edges on the original image for viewing, and it seems like that would be easier if both were the same size.
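Since the two arrays already have the same height and width, the overlay can be done directly with a boolean mask; the single-channel edge map selects pixels in the 3-channel image without any resizing. A minimal sketch using small stand-in arrays (the shapes and the green overlay color are assumptions for illustration, not from the question):

```python
import numpy as np

# Hypothetical stand-ins for the images in the question: a 3-channel
# BGR frame and a single-channel edge map with the same height/width.
h, w = 4, 5
imgcv = np.zeros((h, w, 3), dtype=np.uint8)
edges = np.zeros((h, w), dtype=np.uint8)
edges[1, 2] = 255  # pretend Canny marked this pixel as an edge

# Overlay: paint edge pixels green on a copy of the original.
# Boolean indexing with the 2-D mask selects whole pixels of the 3-D
# image, so the arrays do not need the same number of dimensions.
overlay = imgcv.copy()
overlay[edges > 0] = (0, 255, 0)
```

The same two lines work unchanged on the real 1080x1920 images.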


edit 1: I think I've figured out why. Checking the images' shape attribute: imgcv.shape = (1080, 1920, 3) and edges.shape = (1080, 1920).

The image the edges were found from has a third dimension, which I'm thinking holds the RGB (or BGR) color channels, while the edge map is single-channel.
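That also explains the printout exactly: NumPy's .size counts every element including the channel axis, so the color image is three times the size of the edge map. A quick check with arrays of the same shapes:

```python
import numpy as np

# .size counts every element, including the channel axis, which is why
# the two printouts in the question differ by exactly a factor of 3.
color = np.zeros((1080, 1920, 3), dtype=np.uint8)
gray = np.zeros((1080, 1920), dtype=np.uint8)

print(color.size)  # 6220800 = 1080 * 1920 * 3
print(gray.size)   # 2073600 = 1080 * 1920
```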
