What is the best practice for passing cv::Mats around?
Hello all,
I've been working on a project for a while without realizing I was creating a lot of memory leaks, because until recently I had never run the program for long enough to notice them. It's a fairly large program, so I'll provide a single example that should describe the overall problem.
Firstly, my image data is stored in a class (percepUnit) which has a number of cv::Mat members:
class percepUnit {
public:
    cv::Mat image;      // percept itself
    cv::Mat mask;       // alpha channel
    cv::Mat alphaImage; // mask + image

    // Create RGBA image from RGB + mask
    cv::Mat applyAlpha(cv::Mat image, cv::Mat mask);
};
I want to apply the mask to the image to create the alpha image. This is what the class method looks like:
// Apply the mask as an alpha channel
cv::Mat percepUnit::applyAlpha(cv::Mat image, cv::Mat mask) {
    std::vector<cv::Mat> channels;
    cv::Mat alphaImage;
    if (image.rows == mask.rows && image.cols == mask.cols) {
        cv::split(image, channels);      // break image into its channels
        channels.push_back(mask);        // append the mask as the alpha channel
        cv::merge(channels, alphaImage); // combine into a 4-channel image
    }
    return alphaImage;
}
Which is used in the percepUnit constructor:
percepUnit::percepUnit(cv::Mat ROI, cv::Mat alpha, int ix, int iy, int iw, int ih, int area) {
    // Deep copies.
    image = ROI.clone();
    mask = alpha.clone();

    // Make alpha image
    // There may be a more efficient way of doing this. (drawMat() for RGBA?)
    this->alphaImage = applyAlpha(image, mask);
    // ... (rest of the constructor omitted)
}
After some searching, it seems like returning cv::Mats by value may not be a good idea.
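That confuses me a little, because as I understand it (and I may be wrong), returning a cv::Mat by value should only copy the Mat header, since the pixel data is reference counted and shared. A minimal sketch of what I mean, not code from my project:

#include <opencv2/core/core.hpp>

// Hypothetical helper, just to illustrate return-by-value semantics.
cv::Mat makeBlank() {
    cv::Mat local(480, 640, CV_8UC3, cv::Scalar::all(0)); // pixel data allocated once here
    return local; // only the header is copied; the data buffer is shared and refcounted
}

int main() {
    cv::Mat m = makeBlank(); // m holds the last reference; the buffer is freed when m is destroyed
    return 0;
}

So I'm unsure whether the return itself can actually leak, or whether something else is keeping references to the data alive.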
This is the valgrind output:
==25374== 13,422,700 bytes in 21 blocks are possibly lost in loss record 17,975 of 17,982
==25374== at 0x4C2B6CD: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==25374== by 0x33C0BA90: cv::fastMalloc(unsigned long) (in /usr/local/lib/libopencv_core.so.2.4.5)
==25374== by 0x33C51BF1: cv::Mat::create(int, int const*, int) (in /usr/local/lib/libopencv_core.so.2.4.5)
==25374== by 0x33C7313B: cv::_OutputArray::create(int, int const*, int, int, bool, int) const (in /usr/local/lib/libopencv_core.so.2.4.5)
==25374== by 0x33D98DA7: cv::merge(cv::Mat const*, unsigned long, cv::_OutputArray const&) (in /usr/local/lib/libopencv_core.so.2.4.5)
==25374== by 0x33D999D0: cv::merge(cv::_InputArray const&, cv::_OutputArray const&) (in /usr/local/lib/libopencv_core.so.2.4.5)
==25374== by 0x33D99ABD: cv::merge(std::vector<cv::Mat, std::allocator<cv::Mat> > const&, cv::_OutputArray const&) (in /usr/local/lib/libopencv_core.so.2.4.5)
==25374== by 0x415A95: percepUnit::applyAlpha(cv::Mat, cv::Mat) (percepUnit.cpp:19)
==25374== by 0x415C73: percepUnit::percepUnit(cv::Mat, cv::Mat, int, int, int, int, int) (percepUnit.cpp:212)
Should I rewrite the applyAlpha method to take a cv::Mat &alphaImage output argument and forgo the return value?
EDIT: Here is my new applyAlpha function:
// Apply the mask as an alpha channel
void percepUnit::applyAlpha(const cv::Mat &image, const cv::Mat &mask, cv::Mat &alphaImage) {
    // Avoid merge, helps with memory leak?
    cv::Mat src[] = {image, mask ...
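(The paste above got cut off. Roughly, the rest continues with cv::mixChannels along these lines; the create() call and the exact fromTo channel mapping here are approximate, assuming a 3-channel BGR image and a single-channel mask:)

// Apply the mask as an alpha channel, avoiding cv::merge
void percepUnit::applyAlpha(const cv::Mat &image, const cv::Mat &mask, cv::Mat &alphaImage) {
    cv::Mat src[] = {image, mask};
    alphaImage.create(image.rows, image.cols, CV_8UC4); // allocate (or reuse) the destination
    // Source channels are numbered across the array: 0,1,2 = B,G,R of image, 3 = mask.
    int fromTo[] = {0, 0, 1, 1, 2, 2, 3, 3};
    cv::mixChannels(src, 2, &alphaImage, 1, fromTo, 4);
}

The idea is that the caller-provided alphaImage is filled in place, so nothing is returned by value.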