Segfault on feature detection

Hello, I'm trying to use MSER in some Kinect code I have, but it segfaults inside the OpenCV call, and I'm not sure why.

This is the code, and it contains all the OpenCV calls I have. My first idea was to detect blobs in 1D, but there doesn't seem to be a way of doing that. I tried running MSER on a 1x320 Mat and that failed; I figured a single-row Mat might simply not work for the algorithm, so I widened it to 7 rows, but it still fails. (A rough sketch of that first single-row attempt follows the code below.)

...
cv::Mat depthLine(7, frameWidth, CV_16UC1);   // 7 rows of 16-bit depth values
USHORT *dlRun = (USHORT *)depthLine.data;

// copy the Kinect depth buffer into the Mat
for( ; pBufferRun < pBufferEnd ; pBufferRun++, rgbrun++, dlRun++) {
    *dlRun = NuiDepthPixelToDepth(*pBufferRun);
}

//CvMSERParams params = cvMSERParams;
cv::MserFeatureDetector mser;
vector<cv::KeyPoint> keypoints;
mser.detect(depthLine, keypoints);   // segfault on bad memory access here
.....
}
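
For reference, the earlier 1x320 attempt I mentioned was essentially the same thing with a single-row Mat; roughly this (a sketch from memory, not the exact code):

// Roughly what the earlier single-row attempt looked like
cv::Mat depthRow(1, frameWidth, CV_16UC1);   // one row of depth values
// ... same depth-buffer copy as above, writing into depthRow.data ...
cv::MserFeatureDetector mser;
vector<cv::KeyPoint> keypoints;
mser.detect(depthRow, keypoints);            // this also crashed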

Maybe this isn't the smartest way to do the buffer conversion, but that's not the point right now: why is that detection failing? Am I missing some kind of initialization?
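
In case the answer is that I'm missing some setup: I haven't tried constructing the detector with explicit parameters yet. Would something like the sketch below (assuming the MSER-style constructor with what I believe are its documented defaults; I may have the signature wrong) be the kind of initialization that's needed, or should the default constructor be enough?

// Hypothetical: explicit construction instead of the default constructor
// (values are what I believe to be the documented MSER defaults)
cv::MserFeatureDetector mser(5,       // delta
                             60,      // min area
                             14400,   // max area
                             0.25,    // max variation
                             0.2);    // min diversity
mser.detect(depthLine, keypoints);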

And while we're at it, is there a way I can have OpenCV track the features (link one frame's features to the next frame's) after detecting them?
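
To clarify what I mean by "track": something like the sketch below, where this frame's detected keypoints are followed into the next frame. This uses pyramidal Lucas-Kanade optical flow purely as a hypothetical example; prevFrame8u and nextFrame8u would be 8-bit single-channel frames I'd have to prepare.

// Hypothetical sketch of "tracking": follow the detected keypoints into the
// next frame with pyramidal Lucas-Kanade optical flow.
std::vector<cv::Point2f> prevPts, nextPts;
cv::KeyPoint::convert(keypoints, prevPts);           // keypoints from mser.detect()

std::vector<uchar> status;
std::vector<float> err;
cv::calcOpticalFlowPyrLK(prevFrame8u, nextFrame8u,   // CV_8UC1 frames (assumption)
                         prevPts, nextPts, status, err);

// status[i] != 0 means prevPts[i] was found again at nextPts[i]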

If it helps, apparently a malloc within the library is failing: it's trying to allocate 7399104 bytes, and malloc is returning 0x0070e6c0 (clearly not a good pointer).