OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
Copyright OpenCV foundation, 2012-2018. Tue, 16 Apr 2019 12:45:19 -0500

Inverse Flow - Forward Warping or Bilinear "Splatting"
http://answers.opencv.org/question/211672/inverse-flow-forward-warping-or-bilinear-splatting/
I am interested in computing the inverse (backward) flow given the forward flow. The following code does this by forward warping, or bilinear "splatting", but it is annoyingly slow (~8 ms at VGA resolution on my i7-7820HK). It seems likely to me that this could/should be closer to 1-2 ms. Any insights into speeding this up?
inline bool isOnImage(const cv::Point& pt, const cv::Size& size)
{
    return pt.x >= 0 && pt.x < size.width && pt.y >= 0 && pt.y < size.height;
}

cv::Mat img_proc::inverseFlow(const cv::Mat& flow)
{
    cv::Mat inverse_flow = cv::Mat::zeros(flow.size(), CV_32FC2);
    cv::Mat weights = cv::Mat::zeros(flow.size(), CV_32FC2);
    const int rows = flow.rows;
    const int cols = flow.cols;
    for (int i = 0; i < rows; ++i)
    {
        const auto* flow_ptr = flow.ptr<cv::Vec2f>(i);
        for (int j = 0; j < cols; ++j)
        {
            const float du = flow_ptr[j][0];
            const float dv = flow_ptr[j][1];
            const int u = j + (int)std::round(du);
            const int v = i + (int)std::round(dv);
            if (!isOnImage({u, v}, flow.size()))
            {
                continue;
            }
            // Use floor and floor+1 so the four weights always sum to 1;
            // with floor/ceil they all collapse to zero when du or dv is integral.
            const int du_floor = (int)std::floor(du);
            const int dv_floor = (int)std::floor(dv);
            const int u_min = std::min(cols - 1, std::max(0, j + du_floor));
            const int u_max = std::min(cols - 1, std::max(0, j + du_floor + 1));
            const int v_min = std::min(rows - 1, std::max(0, i + dv_floor));
            const int v_max = std::min(rows - 1, std::max(0, i + dv_floor + 1));
            const float uf = j + du;
            const float vf = i + dv;
            const float w0 = (u_max - uf) * (v_max - vf); // top-left
            const float w1 = (uf - u_min) * (v_max - vf); // top-right
            const float w2 = (uf - u_min) * (vf - v_min); // bottom-right
            const float w3 = (u_max - uf) * (vf - v_min); // bottom-left
            weights.at<cv::Vec2f>(v_min, u_min) += cv::Vec2f{w0, w0};
            weights.at<cv::Vec2f>(v_min, u_max) += cv::Vec2f{w1, w1};
            weights.at<cv::Vec2f>(v_max, u_min) += cv::Vec2f{w3, w3};
            weights.at<cv::Vec2f>(v_max, u_max) += cv::Vec2f{w2, w2};
            inverse_flow.at<cv::Vec2f>(v_min, u_min) += w0 * cv::Vec2f{-du, -dv};
            inverse_flow.at<cv::Vec2f>(v_min, u_max) += w1 * cv::Vec2f{-du, -dv};
            inverse_flow.at<cv::Vec2f>(v_max, u_min) += w3 * cv::Vec2f{-du, -dv};
            inverse_flow.at<cv::Vec2f>(v_max, u_max) += w2 * cv::Vec2f{-du, -dv};
        }
    }
    cv::divide(inverse_flow, weights, inverse_flow);
    return inverse_flow;
}

Der Luftmensch, Tue, 16 Apr 2019 12:45:19 -0500
http://answers.opencv.org/question/211672/

Documentation of Remap Implementation
http://answers.opencv.org/question/178351/documentation-of-remap-implementation/
I was going through the OpenCV remap() implementation. I am particularly interested in bilinear interpolation, but I am a bit overwhelmed by the complexity of the implementation, and I couldn't find any documentation that explains it.
Where can I find documentation on the OpenCV remap() implementation? Thanks in advance.
Gilson Varghese, Wed, 15 Nov 2017 04:02:38 -0600
http://answers.opencv.org/question/178351/

How to implement a bilinear/bicubic interpolation algorithm on OpenCV using CUDA?
http://answers.opencv.org/question/136477/how-to-implement-a-bilinearbicubic-interpolation-algorithm-on-opencv-using-cuda/
Good evening. I just started using OpenCV and I would like to use a bilinear/bicubic interpolation algorithm to enhance an image's resolution. I have read some articles about it, but I still don't understand how to implement it using OpenCV and C++.
Could someone lend me a hand please?
The version of OpenCV is 3.1.0 and I'm using Eclipse Mars as my IDE.
Jamesac, Tue, 28 Mar 2017 13:14:23 -0500
http://answers.opencv.org/question/136477/

Inverse bilinear interpolation (pupil tracker)
http://answers.opencv.org/question/15210/inverse-bilinear-interpolation-pupil-tracker/
I have built an eye-tracking application using OpenCV and I wish to control the location of the mouse pointer using the location of the left eye's pupil.
What I have is four points of the pupil that correspond to the four screen corners. Now I would like to map the current coordinate of the pupil, given the four corner positions, to a screen coordinate.
Are there any built-in functions in OpenCV that would let me do this? I have already done some research and found that inverse bilinear interpolation would allow me to do it. However, I can't seem to find this functionality in OpenCV for Point2f types.
napoleon, Sat, 15 Jun 2013 06:08:02 -0500
http://answers.opencv.org/question/15210/

Replicate OpenCV resize with bilinear interpolation in C (shrink only)
http://answers.opencv.org/question/20356/replicate-opencv-resize-with-bilinar-interpolation-in-c-shrink-only/
Hello, I'm trying to rewrite OpenCV's resizing algorithm with bilinear interpolation in C. What I want to achieve is a resulting image that is exactly the same, pixel for pixel, as the one produced by OpenCV. I am particularly interested in shrinking rather than magnification, and in single-channel grayscale images. On the net I read that the bilinear interpolation algorithm differs between shrinking and enlarging, but I did not find formulas or implementations, so it is likely that the code I wrote is totally wrong. What I wrote comes from the knowledge of interpolation I acquired in a university course on Computer Graphics and OpenGL.
The results of the algorithm I wrote are images visually identical to those produced by OpenCV, but whose pixel values are not perfectly identical.
Mat rescale(Mat src, float ratio)
{
    float width = src.cols * ratio;                 // resized width
    int i_width = cvRound(width);
    float step = (float)src.cols / (float)i_width;  // size of a new pixel mapped onto the old image
    float center = step / 2;                        // V1 - centre position of a new pixel
    //float center = step / src.cols;               // V2 - other possible centre position of a new pixel
    //float center = 0.099f;                        // V3 - lowest difference to OpenCV on Lena 512x512
    Mat dst(src.rows, i_width, CV_8UC1);
    // cycle through all rows
    for (int j = 0; j < src.rows; j++) {
        // in each row compute the new pixels
        for (int i = 0; i < i_width; i++) {
            float pos = (i * step) + center;        // position of (the centre of) the new pixel in old image coordinates
            int pred = (int)floor(pos);             // predecessor pixel in the original image
            int succ = std::min((int)ceil(pos), src.cols - 1); // successor pixel, clamped to the image
            float d_pred = pos - pred;              // distances of pred and succ from the centre of the new pixel
            float d_succ = succ - pos;
            int val_pred = src.at<uchar>(j, pred);  // pred and succ values
            int val_succ = src.at<uchar>(j, succ);
            // weights are deliberately swapped: d_succ = 1 - d_pred when pred != succ
            float val = (val_pred * d_succ) + (val_succ * d_pred);
            int i_val = cvRound(val);
            if (pred == succ)                       // pos is an exact integer: both weights are 0
                i_val = val_pred;
            dst.at<uchar>(j, i) = i_val;
            //printf("-- Pos & val %d %d \n", i, i_val);
        }
    }
    return dst;
}
ShadowTS, Mon, 09 Sep 2013 04:30:20 -0500
http://answers.opencv.org/question/20356/

Bilinear sampling from a GpuMat
http://answers.opencv.org/question/646/bilinear-sampling-from-a-gpumat/
Hi everyone,
I'm writing a GPU-based shape/appearance model, for which I have to crop patches centered on given key points. The patches are square but not necessarily aligned with the image axes, so I cannot just use a rowRange/colRange. My plan is to create a fixed matrix of coordinate offsets, O:
O = [x1, x2, ..., xn;
     y1, y2, ..., yn;
      1,  1, ...,  1]
in homogeneous coordinates. I will store this matrix on the GPU. When I want to sample a patch around X = [x, y, 1]^T, I simply transform the coordinates by a similarity transformation matrix M (which performs translation, rotation, and scaling):
P = M * O
So P will again have the same layout as O, but with transformed coordinates.
**Now for the question**:
Given a matrix P of coordinates, how can I sample an image f(x,y) at the coordinates in P in an efficient manner? The output should be a vector or matrix with the pixel values at the coordinates in P. I want to use bilinear sampling, which is a built-in operation on the GPU (so it should be efficient). I suppose I could write a custom kernel for this, but I would think this is already in OpenCV somewhere. I searched the documentation but didn't find anything.
Alternatively, I could rotate/scale the whole image and then crop an axis-aligned patch, but this seems less efficient.
Thanks in advance.
tsc, Mon, 23 Jul 2012 03:57:42 -0500
http://answers.opencv.org/question/646/