
orpheelepoete's profile - activity

2016-08-24 07:23:39 -0500 received badge  Enthusiast
2016-08-18 03:47:15 -0500 commented question access violation while filling a Mat with custom uchar* data

I am not allowed to use OpenCV4Android, and this tutorial uses a special type of Camera included in OpenCV4Android. I am forced to use my co-worker's code without modifying it (except for the function that gets the byte[] camera frame).

2016-08-18 03:05:24 -0500 commented question access violation while filling a Mat with custom uchar* data

I want to process Android camera frames using native C++ OpenCV code. I have to get the Android camera preview frame's data, pass that data as a parameter to my C++ function, store it inside an OpenCV Mat, process the Mat, and return the Mat's data back to Android.

Note: the Android code has been developed by someone else and I cannot modify it. The camera class used is obsolete (import android.hardware.Camera) and I must use the information it provides.

2016-08-18 02:48:23 -0500 commented question access violation while filling a Mat with custom uchar* data

Thanks. I tried that because base64 is used for web service transmissions. I first tried to transfer byte arrays using the jbyteArray JNI type and store them inside an unsigned char*, but I didn't succeed in making it work. My problem is that I cannot debug what happens in the native code, so I don't know where the bug is. That's why I tried something simple to see whether the transmission works. It's the very first time I have tried to write native code. I have experience with Java, C++ and OpenCV, but I have never been trained on the NDK/JNI.

Do you know the safest way to transmit a camera preview frame to an OpenCV Mat? The little information I found on the web didn't work for me.

2016-08-17 09:02:02 -0500 received badge  Editor (source)
2016-08-17 09:00:40 -0500 asked a question access violation while filling a Mat with custom uchar* data

I want to test base64 encoding/decoding in order to make an Android application communicate with native OpenCV C++ code.

I try to show the result image in order to check whether the resulting string is correct. The code I used looks like this:

cv::Mat img = imread(imgPath);
string imgData = reinterpret_cast<char*>(;
string str = base64_encode(reinterpret_cast<const unsigned char*>(imgData.c_str()), imgData.length()).c_str();
string strOutput = base64_decode(str);
uchar* data = (uchar*)reinterpret_cast<const unsigned char*>(strOutput.c_str());

I have compared the result data and the input image's data using:

int cpr = strcmp((char*), (char*)data);

and the result (0) seems to confirm that the "data" uchar* is the same as the input image's data.

When I try to copy the input image's data into the output image, everything works:

memcpy(,, img.rows * img.cols * img.channels() * sizeof(uchar)); // "output" is the destination Mat (illustrative name)

But when I try to copy the result data into the output image, I get an access violation error:

memcpy(, data, img.rows * img.cols * img.channels() * sizeof(uchar)); // "output" is the destination Mat (illustrative name)

I wonder why I cannot fill the output image with the "data" uchar* when it appears to be strictly identical to the "" uchar*.

Thanks in advance.

2016-07-26 04:25:18 -0500 asked a question unable to decode my buffers

I have converted an Android image into an unsigned char* buffer in order to use imdecode to store it in an OpenCV Mat. But it is impossible to show the image correctly; I always get Access Violation errors.

I tried to store the buffer data in a file and load it afterwards; I also tried to use char* instead of unsigned char*, and to read a JPEG file with OpenCV and encode it using:

Mat jpegMat = imread(jpegPath, 1);   // imread, since cvLoadImage returns an IplImage*, not a cv::Mat
std::vector<uchar> buf;
cv::imencode(".jpg", jpegMat, buf);  // encode to in-memory JPEG bytes
uchar* res = &buf[0];

but imdecode refuses to work. I tried every alternative I found on the internet, but nothing worked for me. Here is the code I currently use:

cv::Mat imgbuf = cv::Mat(height, width, CV_8UC1, inputByteArray);
cv::Mat imgMat =  cv::imdecode(imgbuf, CV_LOAD_IMAGE_COLOR);

Neither the "imgbuf" Mat nor the "imgMat" Mat can be displayed; both return the exact same error message.

Can anyone tell me what is happening? I have been trying everything for days without getting any result.

Thanks in advance

2015-06-03 09:25:02 -0500 asked a question opencv 3, blobdetection, The function/feature is not implemented () in detectAndCompute

I have a problem with OpenCV 3: I want to use a feature detector, SimpleBlobDetector, in order to filter by convexity and circularity. But when I execute the code, the following error is reported:

The function/feature is not implemented () in detectAndCompute

Then the application crashes.

I searched for information on the internet without finding any relevant answer. I think the 3rd version of OpenCV could be responsible for this bug, because I know I use the detector the right way (I did exactly as in the official OpenCV tutorial), and I noticed that SimpleBlobDetector has been modified for the third version.

Using breakpoints, I know that the following line crashes:

detector.detect(gray, keypoints);

The SimpleBlobDetector was created (using the create function) and configured, the gray image isn't empty, and the keypoints vector does not need to be filled before detection.

I use OpenCV 3.0.0, compiled with MinGW under QtCreator. The OpenCV processing is not launched from the main thread.

Did anybody else have the same problem? I would be grateful for a patch or another solution using another class. I really need convexity to filter my blobs, and the other detectors I found (FeatureDetector or BRISK) are not configurable and only return keypoints, which lack the area or fullness parameters needed to compute convexity.

Thanks in advance