You wanted an example, so here you go. This assumes the first two dimensions of the convolution have been done.

This does a BGR-to-gray conversion as a convolution across the channels. The center channel comes out properly gray, but the edge channels use reflected borders, so the math is done with GBG or GRG instead of BGR, which of course doesn't come out correctly. Hopefully you can see how this would extend to images with a much higher number of channels.

```
cv::Mat image, gray, output, trueGray;
image.create(100, 100, CV_8UC3);
image.setTo(cv::Scalar(50, 100, 200));
cv::cvtColor(image, gray, cv::COLOR_BGR2GRAY);
std::cout << image(cv::Rect(0, 0, 1, 1)) << "\n\n";
std::cout << gray(cv::Rect(0, 0, 1, 1)) << "\n\n";

cv::Mat thirdD, kernel;
kernel.create(1, 3, CV_32F);  // 3 is the size of the kernel in the third (channel) dimension.
kernel.at<float>(0) = 0.114f; // Blue
kernel.at<float>(1) = 0.587f; // Green
kernel.at<float>(2) = 0.299f; // Red

// Flatten to a single-channel (rows*cols) x 3 matrix, one channel per column.
thirdD = image.reshape(1, image.rows * image.cols);
// ddepth = -1 keeps the source pixel format (CV_8U). BORDER_REFLECT_101 is the
// default border type; you can choose others.
cv::filter2D(thirdD, output, -1, kernel, cv::Point(-1, -1), 0.0, cv::BORDER_REFLECT_101); // g|bgr|g
// 3 is the number of channels you started with, the depth of the image in the third dimension.
trueGray = output.reshape(3, image.rows);
std::cout << trueGray(cv::Rect(0, 0, 1, 1)) << "\n\n";
```

Hmm, if I understand it correctly, applying a convolution in 3D is just two 2D convolutions? So wouldn't it just be combining 2D convolutions? Imagine a 3D object in an x-y-z coordinate space. First convolve each XY layer with your 2D filter, then combine the results back into your 3D container and apply a 2D convolution filter to each XZ plane. This should work and give the same result.

@StevenPuttemans I will try your idea and see if that is the case. Thanks.

I thought that I got it, but apparently not. Let's say that I have a depth image `Z` with the pixel values describing the distance. In Matlab, what I saw people doing is `n1 = conv3(Z, kernel);`, where `kernel` is obviously the kernel used for the convolution. How can this be done with `filter2D()`?

I am not quite sure that it is possible. If you just have 3 dimensions, with the third being in the channels, you can do the 2D convolution on each channel, then call `reshape(1, rows*cols)`. The new matrix will be (channels) columns by (rows*cols) rows. Then you call `filter2D` with the third dimension of your kernel as the X kernel size, and 1 as the Y kernel size.

Then reshape back to normal.

@Tetragramm would it be easy to provide an example?

Can you explain how you manage a 3D Mat in OpenCV? I have tried this with a 3D histogram. Maybe you can take a look at this post.

Actually I had something else in mind, but never mind, let's keep the thread with the answer of @Tetragramm and your links @LBerger for future readers. Thanks for your response though ;-).

Maybe a small tip for @LBerger: I create my 3D Mats using `vector<vector<Mat>>`, where the inner vector is always an XY plane while the outer vector represents the Z direction. Works perfectly fine.

@StevenPuttemans About convolution in 3D using a separable filter: I think you use `filter2D` for each plane (x, y directions) and a 1D convolution for the z direction. For this 1D convolution, which OpenCV function do you use with `vector<vector<Mat>>`?

No, in my mind it is again a 2D convolution, this time of the XZ plane. Then you get a 3D convolution, in theory!