Segfault in SimpleBlobDetector::findBlobs

After processing about 600 images at 600x600 resolution with the code below, the keypoints = detector.detect(255 - fg_mask) line causes the program to crash (release build) or abort on a failed assertion (debug build). With 5000x5000 images, the program crashes or aborts after successfully processing only 2 images. I've inspected (255 - fg_mask) and it appears to be a valid matrix with values from 0 to 255, so what has me stumped is how passing a valid image in can make OpenCV crash. Any help or insight into the problem is greatly appreciated.

I haven't been able to get gdb to produce a stack trace after the assertion fails, since the program exits immediately and gdb reports "no stack" when I run the bt command, but I do have the stack trace from the release-mode crash below. I'm using OpenCV 2.4.11 with Python 2.7.10 bindings on CentOS 6. When OpenCV is compiled in release mode, the program segfaults in cv::SimpleBlobDetector::findBlobs. The stack trace is:


0 cv::SimpleBlobDetector::findBlobs
1 cv::FeatureDetector::detectImpl
2 cv::FeatureDetector::detect
3 pyopencv_FeatureDetector_detect
...14 more frames


When I run the same program with OpenCV compiled in debug mode, it instead stops with: "OpenCV Error: Assertion failed (dims <= 2 && data && (unsigned)i0 < (unsigned)size.p[0] && (unsigned)(i1*DataType<_Tp>::channels) < (unsigned)(size.p[1]*channels()) && ((((sizeof(size_t)<<28)|0x8442211) >> ((DataType<_Tp>::depth) & ((1 << 3) - 1))*4) & 15) == elemSize1()) in at, file /path/to/opencv/modules/include/opencv2/core/mat.hpp, line 546".

The assertion error also happens on the keypoints = detector.detect(255 - fg_mask) line in the code below.
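For reference, "looks valid" can be checked more strictly than by eye. Here is a sketch of the checks the detect() input should pass, with a random uint8 array standing in for the real 255 - fg_mask:

```python
import numpy as np

# Synthetic stand-in for (255 - fg_mask); the real one comes from
# BackgroundSubtractorMOG2 followed by inversion.
img = np.random.randint(0, 256, size=(600, 600), dtype=np.uint8)

# detector.detect() expects a single-channel, 8-bit, contiguous image.
assert img.ndim == 2
assert img.dtype == np.uint8
assert img.flags['C_CONTIGUOUS']
assert 0 <= img.min() and img.max() <= 255
```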

The Code


def stabalize(next_img, prev_img, features):
    new_pts, status, err = cv2.calcOpticalFlowPyrLK(prev_img,
                                                    next_img,
                                                    features,
                                                    None,
                                                    **lk_params)
    # compute the homography matrix that is the transformation from one feature
    # set to the next
    H, status = cv2.findHomography(features,
                                   new_pts,
                                   cv2.LMEDS,
                                   10.)

    # nab the height and width of an image (equivalent to the matrix shape)
    h, w = prev_img.shape[:2]

    # out_image is the result of next_img being transformed by the homography
    out_image = cv2.warpPerspective(next_img, H, (w, h))

    # Output the registered images and the points that were used in registration
    return out_image, new_pts
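For context on the homography step: the 3x3 matrix H that findHomography returns maps points in homogeneous coordinates, and warpPerspective applies that same map to every pixel. A minimal numpy sketch of the point mapping (the translation H below is made up purely for illustration):

```python
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 points through a 3x3 homography in homogeneous coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # -> Nx3
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]              # divide out w

# A pure translation by (5, -3) as an example homography.
H = np.array([[1., 0., 5.],
              [0., 1., -3.],
              [0., 0., 1.]])
pts = np.array([[0., 0.], [10., 10.]])
# apply_homography(H, pts) -> [[5., -3.], [15., 7.]]
```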

def main():
    fgbg = cv2.BackgroundSubtractorMOG2() # Background subtractor object
    detector = cv2.SimpleBlobDetector(blob_params)

    for photo in filelist: # Loop through each file in the list
        # Load the file image and return B&W image
        frame = get_frame(photo)
        numPhoto += 1
        if (numPhoto < 2):
            print("\nInitial image size = %d x %d" % (frame.shape[:2]))
        frame = frame[300:900, 200:800]
        if (numPhoto < 2):
            print("Cropped image size = %d x %d" % (frame.shape[:2]))

        print("\nImage number: %d" % (numPhoto))

        # Stabalize the images
        if base_img is None: # Base image needs to be defined
            base_img = frame # Get a base frame
            # Identify the good features to track in the base_img
            base_feat = cv2.goodFeaturesToTrack(base_img, *feature_params)

        # Register current frame to the base frame
        stab_img, features = stabalize(frame, base_img, base_feat)

        # Look for the background and foreground
        # Blur the image a bit to wash out tiny changes
        blur_img = cv2.GaussianBlur(stab_img, (5, 5), 0)

        # Use the foreground-background to determine moving objects as blobs
        fg_mask = fgbg.apply(blur_img, learningRate=0.01)

        # Use the blob detector to determine where the movement is located
        keypoints = detector.detect(255 - fg_mask)

        base_img = frame # Get a base frame
        # Identify the good features to track in the base_img
        base_feat = cv2.goodFeaturesToTrack(base_img, *feature_params)
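The inversion step can also be checked in isolation: on a uint8 mask, 255 - mask stays uint8 and turns the white foreground black, which matters because SimpleBlobDetector's default parameters look for dark blobs (blobColor = 0). A sketch with a synthetic mask standing in for the MOG2 output:

```python
import numpy as np

# Synthetic stand-in for fg_mask from BackgroundSubtractorMOG2:
# white (255) foreground blob on a black background.
fg_mask = np.zeros((8, 8), dtype=np.uint8)
fg_mask[2:5, 2:5] = 255

inverted = 255 - fg_mask
# uint8 arithmetic keeps the dtype, so detect() still sees an 8-bit image
assert inverted.dtype == np.uint8
# the foreground is now the dark region the detector looks for by default
assert inverted[3, 3] == 0 and inverted[0, 0] == 255
```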

Finally, here are the parameters for the blob detector and feature tracker:
feature_params = dict( maxCorners = 1000,
                       qualityLevel = 0.01,
                       minDistance = 8,
                       blockSize = 19 )

temp_blob_params = SimpleBlobDetector_Params()

temp_blob_params.minThreshold = 0 # Completely black
temp_blob_params.maxThreshold = 255 # Completely white

temp_blob_params.filterByArea = True # Required to detect blobs of different size
temp_blob_params.minArea = 0 # Detect even a single pixel blob

temp_blob_params.filterByCircularity = False # Filter by shape of blob, Off by default
temp_blob_params.minCircularity = 0.1 # How circular it needs to be

temp_blob_params.filterByConvexity = False # Filter by shape of blob, Off by default
temp_blob_params.minConvexity = 0.87 # How convex the shape needs to be

temp_blob_params.filterByInertia = False # I don't know what this is, Off by default
temp_blob_params.minInertiaRatio = 0.01 # no clue what this means

blob_params = temp_blob_params


Update with full code example showing the problem:

I made a simple version of the program which exhibits exactly the same problem, and I've uploaded the simple code and an image which causes it to crash. detector.detect(cv2.imread('mask237.tiff', 0)) crashes in cv::SimpleBlobDetector::findBlobs, with the same stack trace as in the original post above.

Attached image: mask237.tiff


import cv2
from cv2 import SimpleBlobDetector_Params

# Blob detection params
# Setup SimpleBlobDetector parameters.
temp_blob_params = SimpleBlobDetector_Params()

# Change thresholds
temp_blob_params.minThreshold = 0 # Completely black
temp_blob_params.maxThreshold = 255 # Completely white

# Filter by Area.
temp_blob_params.filterByArea = True # Required to detect blobs of different size
temp_blob_params.minArea = 0 # Detect even a single pixel blob

# Filter by Circularity
temp_blob_params.filterByCircularity = False # Filter by shape of blob, Off by default
temp_blob_params.minCircularity = 0.1 # How circular it needs to be

# Filter by Convexity
temp_blob_params.filterByConvexity = False # Filter by shape of blob, Off by default
temp_blob_params.minConvexity = 0.87 # How convex the shape needs to be

# Filter by Inertia
temp_blob_params.filterByInertia = False # I don't know what this is, Off by default
temp_blob_params.minInertiaRatio = 0.01 # no clue what this means

blob_params = temp_blob_params
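Regarding the inertia comments above: as far as I can tell, the inertia ratio measures how elongated a blob is, roughly the ratio of the smaller to the larger eigenvalue of the blob's second-moment (covariance) matrix, so it is ~1 for a round blob and approaches 0 for a line. A numpy approximation of the idea (OpenCV computes it from image moments, which amounts to the same covariance):

```python
import numpy as np

def inertia_ratio(mask):
    """Ratio of the smaller to the larger eigenvalue of the covariance
    matrix of the blob's pixel coordinates: ~1 round, ~0 elongated."""
    ys, xs = np.nonzero(mask)
    cov = np.cov(np.stack([xs, ys]).astype(float))
    evals = np.linalg.eigvalsh(cov)   # eigenvalues in ascending order
    return evals[0] / evals[1]

square = np.zeros((21, 21), np.uint8)
square[5:16, 5:16] = 1                # round-ish blob -> ratio ~1
line = np.zeros((21, 21), np.uint8)
line[10, 1:20] = 1                    # 1-pixel-thick line -> ratio ~0
```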

def main():
    detector = cv2.SimpleBlobDetector(blob_params)
    detector.detect(cv2.imread('mask237.tiff', 0))

if __name__ == '__main__':
    main()
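Since gdb reports "no stack" after the abort, one way to at least isolate the image that triggers the crash is to run each detect() call in a child process, so a segfault only kills the child. A sketch (shown with Python 3's subprocess.run; on Python 2.7, subprocess.call reports the same returncode — and detect_one.py is a hypothetical per-image script):

```python
import subprocess
import sys

def exits_abnormally(argv):
    """Run a command; a segfault shows up as a negative returncode on
    POSIX, an assertion abort as a non-zero one."""
    result = subprocess.run(argv, stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    return result.returncode != 0

# Looping something like
#     exits_abnormally([sys.executable, 'detect_one.py', photo])
# over the file list pinpoints the first image that kills the detector.
assert exits_abnormally([sys.executable, '-c', 'pass']) is False
assert exits_abnormally([sys.executable, '-c', 'import sys; sys.exit(1)']) is True
```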
