OpenCV Q&A Forum - RSS feed
http://answers.opencv.org/questions/
OpenCV answers (en). Copyright <a href="http://www.opencv.org">OpenCV foundation</a>, 2012-2018. Fri, 06 Dec 2019 07:29:27 -0600

OpenCV getPerspectiveTransform and warpPerspective
http://answers.opencv.org/question/222856/opencv-getperspectivetransform-and-warpperspective/
What is the difference between getPerspectiveTransform and warpPerspective?
Please explain.
Akhil Patel, Fri, 06 Dec 2019 07:29:27 -0600
http://answers.opencv.org/question/222856/

How to detect and crop rectangle and apply transformation from an image?
http://answers.opencv.org/question/191198/how-to-detect-and-crop-rectangle-and-apply-transformation-from-an-image/
Hello all,
I am developing an application that detects a driving license, captures its image using a SurfaceView, and crops it at its four corners using OpenCV.
Right now I am using Canny edge detection and can find the edges, but I am not able to crop the image, because Canny returns a black-and-white image and I need to crop my original license image along those edges.
Please suggest the best solution.
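A note on the approach (not the asker's code): the Canny output is only needed to locate the four corners; the perspective crop should then be applied to the original colour bitmap. Given four corner points, the destination size can be estimated from the quad's side lengths. Below is a pure-Python sketch, under the hypothetical assumption that the corners arrive ordered top-left, top-right, bottom-right, bottom-left:

```python
import math

def target_size(corners):
    # corners assumed ordered tl, tr, br, bl (a hypothetical convention).
    (tlx, tly), (trx, try_), (brx, bry), (blx, bly) = corners
    top = math.hypot(trx - tlx, try_ - tly)       # top edge length
    bottom = math.hypot(brx - blx, bry - bly)     # bottom edge length
    left = math.hypot(blx - tlx, bly - tly)       # left edge length
    right = math.hypot(brx - trx, bry - try_)     # right edge length
    # Use the longer of each pair so no content is squeezed.
    return (int(max(top, bottom)), int(max(left, right)))

# A slightly skewed quad, roughly 400 x 290:
print(target_size([(0, 0), (400, 10), (410, 300), (5, 290)]))  # -> (405, 290)
```

The resulting (width, height) pair would then define the destination corners passed to getPerspectiveTransform, and warpPerspective would be run on the original colour image rather than the edge map.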
public Bitmap findEdges(Bitmap img)
{
Mat rgba = new Mat();
Utils.bitmapToMat(img, rgba);
Mat edges = new Mat(rgba.size(), CvType.CV_8UC1);
Imgproc.cvtColor(rgba, edges, Imgproc.COLOR_RGB2GRAY, 4);
Imgproc.Canny(edges, edges, 40, 40);
Bitmap resultBitmap = Bitmap.createBitmap(edges.cols(), edges.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(edges, resultBitmap);
Mat rgbMat = new Mat();
Mat grayMat = new Mat();
Mat cannyMat;
Mat linesMat = new Mat();
BitmapFactory.Options o = new BitmapFactory.Options();
// define the destination image size: A4 - 200 PPI
int w_a4 = 1654, h_a4 = 2339;
// TODO: 29/08/2016 May need to check sample size https://developer.android.com/training/displaying-bitmaps/load-bitmap.html
o.inSampleSize = 4;
o.inDither = false;
ByteArrayOutputStream baos = new ByteArrayOutputStream();
resultBitmap.compress(Bitmap.CompressFormat.PNG, 75, baos);
byte[] b = baos.toByteArray();
resultBitmap = BitmapFactory.decodeByteArray(b, 0, b.length);
int w = resultBitmap.getWidth();
int h = resultBitmap.getHeight();
int min_w = 800;
double scale = Math.min(10.0, w * 1.0 / min_w);
int w_proc = (int) (w * 1.0 / scale);
int h_proc = (int) (h * 1.0 / scale);
Bitmap srcBitmap = Bitmap.createScaledBitmap(resultBitmap, w_proc, h_proc, false);
Bitmap grayBitmap = Bitmap.createBitmap(w_proc, h_proc, Bitmap.Config.RGB_565);
Bitmap cannyBitmap = Bitmap.createBitmap(w_proc, h_proc, Bitmap.Config.RGB_565);
Bitmap linesBitmap = Bitmap.createBitmap(w_proc, h_proc, Bitmap.Config.RGB_565);
Utils.bitmapToMat(srcBitmap, rgbMat);//convert original bitmap to Mat, R G B.
Imgproc.cvtColor(rgbMat, grayMat, Imgproc.COLOR_RGB2GRAY);//rgbMat to gray grayMat
cannyMat = getCanny(grayMat);
Imgproc.HoughLinesP(cannyMat, linesMat, 1, Math.PI / 180, w_proc / 12, w_proc / 12, 20);
// Calculate horizontal lines and vertical lines
Log.e("opencv", "lines.cols " + linesMat.cols() + " w_proc/3: " + w_proc / 3);
Log.e("opencv", "lines.rows" + linesMat.rows() + " w_proc/3: " + w_proc / 3);
List<EdgesLine> horizontals = new ArrayList<>();
List<EdgesLine> verticals = new ArrayList<>();
for (int x = 0; x < linesMat.rows(); x++) {
double[] vec = linesMat.get(x, 0);
double x1 = vec[0],
y1 = vec[1],
x2 = vec[2],
y2 = vec[3];
Point start = new Point(x1, y1);
Point end = new Point(x2, y2);
EdgesLine line = new EdgesLine(start, end);
if (Math.abs(x1 - x2) > Math.abs(y1 - y2)) {
horizontals.add(line);
} else {
verticals.add(line);
}
if (BuildConfig.DEBUG) {
}
}
Log.e("HoughLines", "completed HoughLines");
Log.e("HoughLines", "linesMat size: " + linesMat.size());
Log.e("HoughLines", "linesBitmap size: " + Integer.toString(linesBitmap.getHeight()) + " x " + Integer.toString(linesBitmap.getWidth()));
Log.e("Lines Detected", Integer.toString(linesMat.rows()));
if (linesMat.rows() > 400) {
Context context = getApplicationContext();
int duration = Toast.LENGTH_SHORT;
Toast toast = Toast.makeText(context, "Please use a cleaner background", duration);
toast.show();
}
if (horizontals.size() < 2) {
if (horizontals.size() == 0 || horizontals.get(0)._center.y > h_proc / 2) {
horizontals.add(new EdgesLine(new Point(0, 0), new Point(w_proc - 1, 0)));
}
if (horizontals.size() == 0 || horizontals.get(0)._center.y <= h_proc / 2) {
horizontals.add(new EdgesLine(new Point(0, h_proc - 1), new Point(w_proc - 1, h_proc - 1)));
}
}
if (verticals.size() < 2) {
if (verticals.size() == 0 || verticals.get(0)._center.x > w_proc / 2) {
verticals.add(new EdgesLine(new Point(0, 0), new Point(0, h_proc - 1))); // left border (vertical line)
}
if (verticals.size() == 0 || verticals.get(0)._center.x <= w_proc / 2) {
verticals.add(new EdgesLine(new Point(w_proc - 1, 0), new Point(w_proc - 1, h_proc - 1)));
}
}
Collections.sort(horizontals, new Comparator<EdgesLine>() {
@Override
public int compare(EdgesLine lhs, EdgesLine rhs) {
return (int) (lhs._center.y - rhs._center.y);
}
});
Collections.sort(verticals, new Comparator<EdgesLine>() {
@Override
public int compare(EdgesLine lhs, EdgesLine rhs) {
return (int) (lhs._center.x - rhs._center.x);
}
});
if (BuildConfig.DEBUG) {
}
// compute intersections
List<Point> intersections = new ArrayList<>();
intersections.add(computeIntersection(horizontals.get(0), verticals.get(0)));
intersections.add(computeIntersection(horizontals.get(0), verticals.get(verticals.size() - 1)));
intersections.add(computeIntersection(horizontals.get(horizontals.size() - 1), verticals.get(0)));
intersections.add(computeIntersection(horizontals.get(horizontals.size() - 1), verticals.get(verticals.size() - 1)));
Log.e("Intersections", Double.toString(intersections.get(0).x));
for (Point point : intersections) {
if (BuildConfig.DEBUG) {
}
}
Log.e("Intersections", Double.toString(intersections.get(0).x));
double w1 = Math.sqrt(Math.pow(intersections.get(3).x - intersections.get(2).x, 2) + Math.pow(intersections.get(3).y - intersections.get(2).y, 2));
double w2 = Math.sqrt(Math.pow(intersections.get(1).x - intersections.get(0).x, 2) + Math.pow(intersections.get(1).y - intersections.get(0).y, 2));
double h1 = Math.sqrt(Math.pow(intersections.get(1).x - intersections.get(3).x, 2) + Math.pow(intersections.get(1).y - intersections.get(3).y, 2));
double h2 = Math.sqrt(Math.pow(intersections.get(0).x - intersections.get(2).x, 2) + Math.pow(intersections.get(0).y - intersections.get(2).y, 2));
double maxWidth = Math.max(w1, w2);
double maxHeight = Math.max(h1, h2);
Mat srcMat = new Mat(4, 1, CvType.CV_32FC2);
srcMat.put(0, 0, intersections.get(0).x, intersections.get(0).y, intersections.get(1).x, intersections.get(1).y, intersections.get(2).x, intersections.get(2).y, intersections.get(3).x, intersections.get(3).y);
Mat dstMat = new Mat(4, 1, CvType.CV_32FC2);
dstMat.put(0, 0, 0.0, 0.0, maxWidth - 1, 0.0, 0.0, maxHeight - 1, maxWidth - 1, maxHeight - 1);
Log.e("FinalDisplay", "srcMat: " + srcMat.size());
Log.e("FinalDisplay", "dstMat: " + dstMat.size());
Mat transformMatrix = Imgproc.getPerspectiveTransform(srcMat, dstMat);
finalMat = Mat.zeros((int) maxHeight, (int) maxWidth, CvType.CV_32FC2);
Imgproc.warpPerspective(rgbMat, finalMat, transformMatrix, finalMat.size());
Log.e("FinalDisplay", "finalMat: " + finalMat.size());
// display final results
Bitmap dstBitmap = Bitmap.createBitmap(finalMat.width(), finalMat.height(), Bitmap.Config.RGB_565);
Log.e("FinalDisplay", "dstBitmap: " + img.getWidth() + " x " + img.getHeight());
Utils.matToBitmap(finalMat, dstBitmap); //convert mat to bitmap
try {
Bitmap crop = Bitmap.createBitmap(rotateandscalebitmap, 0, 0, dstBitmap.getWidth(), dstBitmap.getHeight());
croppedbitmap = crop;
doRecognize(croppedbitmap);
} catch (Exception e) {
e.printStackTrace();
}
return croppedbitmap;
}
protected Mat getCanny(Mat gray) {
Mat threshold = new Mat();
Mat canny = new Mat();
// THRESH_OTSU: Otsu's algorithm picks the threshold automatically
double high_threshold = Imgproc.threshold(gray, threshold, 0, 255, Imgproc.THRESH_OTSU);
double low_threshold = high_threshold * 0.5;
Imgproc.Canny(gray, canny, low_threshold, high_threshold);
return canny;
}
protected Point computeIntersection(EdgesLine l1, EdgesLine l2) {
double x1 = l1._p1.x, x2 = l1._p2.x, y1 = l1._p1.y, y2 = l1._p2.y;
double x3 = l2._p1.x, x4 = l2._p2.x, y3 = l2._p1.y, y4 = l2._p2.y;
double d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4);
Point pt = new Point();
pt.x = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / d;
pt.y = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / d;
return pt;
}
class EdgesLine {
Point _p1;
Point _p2;
Point _center;
EdgesLine(Point p1, Point p2) {
_p1 = p1;
_p2 = p2;
_center = new Point((p1.x + p2.x) / 2, (p1.y + p2.y) / 2);
}
}
Akhil Patel, Thu, 10 May 2018 01:26:03 -0500
http://answers.opencv.org/question/191198/

Do Fourier-transform images have 3 channels like RGB/HSV?
http://answers.opencv.org/question/214009/does-the-fourier-transform-images-have-3-channels-like-rgbhsv/
In RGB we have 3 channels: the RED channel, the GREEN channel and the BLUE channel. Does this apply to Fourier-transform images? The Fourier transform has three significant values that capture all the information of the sinusoidal image:
1) Spatial frequency
2) Magnitude
3) Phase
Are these 3 like the channels in RGB/HSV?
Tasneem Khan, Thu, 06 Jun 2019 23:54:07 -0500
http://answers.opencv.org/question/214009/

OpenCV's HoughCircles() is ~1000 times faster than my version
http://answers.opencv.org/question/31061/open-cvs-houghcircles-is-1000-times-faster-than-my-version/
I tried to write my own version of the Hough transform for circles.
Even after a few cycles of optimization I could not get close to the performance of OpenCV.
The code can be found here:
https://bitbucket.org/YontanSimson/opencv-projects/src/e504c2a4bd4043003456c63a44e440b96ae86050/HoughCircle/?at=master
My CPU is an Intel 8-core i7-3770 @ 3.4 GHz.
My display adapter is an Intel HD Graphics 4000.
How do they manage to get good results so fast?
Y Simson, Tue, 01 Apr 2014 19:12:10 -0500
http://answers.opencv.org/question/31061/

Problem with estimateRigidTransform: mat dst is empty
http://answers.opencv.org/question/156963/problem-with-estimaterigidtransform-mat-dst-is-empty/
Hello everyone, I'm new to OpenCV, so it could be that this is just a misunderstanding of the [estimateRigidTransform function](http://docs.opencv.org/2.4/modules/video/doc/motion_analysis_and_object_tracking.html#estimaterigidtransform):
In the following code I find the contours of two rigidly translated objects in img1 and img2, but estimateRigidTransform does not seem to work the way I thought it would. It would be nice if someone had an idea why the Mat dst stays empty.
Thank you!
#include <iostream>
#include <string>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/video/tracking.hpp>
//Function from https://github.com/opencv/opencv/blob/master/samples/cpp/shape_example.cpp to extract Contours
static std::vector<cv::Point> sampleContour( const cv::Mat& image, int n=300 )
{
std::vector<std::vector<cv::Point>> contours;
std::vector<cv::Point> all_points;
cv::findContours(image, contours, cv::RETR_LIST, cv::CHAIN_APPROX_NONE);
for (size_t i=0; i <contours.size(); i++)
{
for (size_t j=0; j<contours[i].size(); j++)
{
all_points.push_back(contours[i][j]);
}
}
// In case actual number of points is less than n
int dummy=0;
for (int add=(int)all_points.size(); add<n; add++)
{
all_points.push_back(all_points[dummy++]);
}
// Uniformly sampling
std::random_shuffle(all_points.begin(), all_points.end());
std::vector<cv::Point> sampled;
for (int i=0; i<n; i++)
{
sampled.push_back(all_points[i]);
}
return sampled;
}
int main(){
// image reading
cv::Mat templateImage = cv::imread("1.jpg", cv::IMREAD_GRAYSCALE);
cv::Mat queryImage = cv::imread("2.jpg", cv::IMREAD_GRAYSCALE);
// contour extraction
std::vector<cv::Point> queryPoints, templatePoints;
queryPoints = sampleContour(queryImage);
templatePoints = sampleContour(templateImage);
// cast to vector<point2f> https://stackoverflow.com/questions/7386210/convert-opencv-2-vectorpoint2i-to-vectorpoint2f
std::vector<cv::Point2f> queryPoints2f, templatePoints2f;
cv::Mat(queryPoints).convertTo(queryPoints2f, cv::Mat(queryPoints2f).type());
cv::Mat(templatePoints).convertTo(templatePoints2f, cv::Mat(templatePoints2f).type());
cv::Mat R = cv::estimateRigidTransform(templatePoints2f,queryPoints2f,false);
std::cout <<"R:" << R << std::endl; // R -> empty
/*
* Solution from https://stackoverflow.com/questions/23373077/using-estimaterigidtransform-instead-of-findhomography
* let the program crash
*
cv::Mat H = cv::Mat(3,3,R.type());
H.at<double>(0,0) = R.at<double>(0,0);
H.at<double>(0,1) = R.at<double>(0,1);
H.at<double>(0,2) = R.at<double>(0,2);
H.at<double>(1,0) = R.at<double>(1,0);
H.at<double>(1,1) = R.at<double>(1,1);
H.at<double>(1,2) = R.at<double>(1,2);
H.at<double>(2,0) = 0.0;
H.at<double>(2,1) = 0.0;
H.at<double>(2,2) = 1.0;
std::vector<cv::Point2f> result;
cv::perspectiveTransform(templatePoints2f,result,H);
for(unsigned int i=0; i<result.size(); ++i)
std::cout << result[i] << std::endl;
*/
return 0;
}
JoeBroesel, Tue, 06 Jun 2017 03:26:44 -0500
http://answers.opencv.org/question/156963/

How to increase warpPerspective or warpAffine precision?
http://answers.opencv.org/question/123197/how-to-increase-warpperspective-or-warpaffine-precision/
I would like to transform my images by rotations of 0.001 degrees and translations of 0.0001 pixels per row or column. The problem is that the library cannot handle such tiny transformation steps, with the result that the source and transformed images come out identical!
Is there any way to increase the precision so that I get correct, different images when rotating by 0.001 degrees and translating by 0.0001 pixels?
One approach is changing the interpolation mask size, which is defined by "INTER_BITS" inside imgproc.hpp, and compiling the OpenCV library from scratch, but that increases the precision only a little.
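For context on where the limit comes from: the warp functions use fixed-point sub-pixel coordinates with INTER_BITS (5) fractional bits, i.e. a 1/32-pixel grid, so shifts far below that step vanish entirely. A small illustrative sketch of that quantization (pure Python, not OpenCV code):

```python
# warpAffine/warpPerspective quantize sub-pixel coordinates to
# 1/2**INTER_BITS of a pixel (INTER_BITS is 5 in imgproc.hpp),
# so any shift much smaller than that step is rounded away.

INTER_BITS = 5
STEP = 1.0 / (1 << INTER_BITS)  # 1/32 px = 0.03125

def quantize(shift):
    # Snap a sub-pixel shift to the fixed-point coordinate grid.
    return round(shift / STEP) * STEP

print(quantize(0.0001))  # -> 0.0  (the requested shift is lost entirely)
print(quantize(0.02))    # -> 0.03125 (rounds to the nearest 1/32 px)
```

This is why raising INTER_BITS and rebuilding helps only a little: the grid gets finer, but a 0.0001-pixel shift would need roughly 14 fractional bits before it stops rounding to zero.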
AmirKK, Thu, 19 Jan 2017 08:59:24 -0600
http://answers.opencv.org/question/123197/

Rigid body motion or 3D Transformation
http://answers.opencv.org/question/73498/rigid-body-motion-or-3d-transformation/
I have a set of 3D points in camera space and I wish to transform them to world space. I have R and t for my camera. I can build my own transformation matrix [R|t] and gemm it with a matrix of my 3D points converted to homogeneous coordinates, but it seems to me that this process may already be contained in an OpenCV function, as it is a common procedure. Is there such a function?
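The [R|t] application itself is tiny; here is a hedged pure-Python sketch of applying a rotation matrix and translation vector to one 3D point (the R and t values are made up for illustration, and depending on whether your R, t map world-to-camera or camera-to-world you may need the inverse, X_w = R^T (X_c - t)):

```python
def transform_point(R, t, p):
    # X' = R @ p + t, written out for a 3x3 R and a 3-vector t.
    return tuple(sum(R[i][j] * p[j] for j in range(3)) + t[i]
                 for i in range(3))

# Illustrative values: identity rotation plus a translation of (1, 2, 3).
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [1, 2, 3]
print(transform_point(R, t, (10, 10, 10)))  # -> (11, 12, 13)
```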
Forgive me if this has been answered elsewhere; I have not found it.
Throwaway99, Sat, 17 Oct 2015 08:24:17 -0500
http://answers.opencv.org/question/73498/

Please help regarding cropping a complex matrix
http://answers.opencv.org/question/64362/please-help-regarding-cropping-a-complex-matrix/
Hi, I don't know why I can't access the other account I opened in February this year, so thanks to thudor, who replied to my question then.
Now the problem I am facing is (I am still a novice in OpenCV, so please bear with me):
1. I take the DFT of an image.
2. From the complex output I crop a circular portion surrounding a set of coordinates.
3. I take the inverse Fourier transform of this cropped image (the cropped image has the same size as the complex DFT output).
4. When I take the magnitude of the IDFT it is fine, but the phase pattern is not what I get in MATLAB.
So I tried to look at the possible causes. When cropping the image, I create a mask:
temp = new_mask(complexI, real_center);
Mat new_mask(Mat q, max_intensity m)
{
Mat masks = Mat::zeros(q.size(), CV_8U);
circle(masks, Point(m.col_ref, m.row_ref), int(filter_rad), Scalar(255, 255, 255), -1, 8, 0); //-1 means filled
return masks;
}
And then using this mask I am cropping the complex matrix.
Mat cropped(complexI.size(), complexI.type(), Scalar::all(0));
complexI.copyTo(cropped, temp);
Do you think that using CV_8U as the type of the mask matrix causes trouble when I take the inverse Fourier transform? Please advise.
I tried using other data types like CV_64FC2, but then the statement complexI.copyTo(cropped, temp) doesn't work. Even if I use the type complexI.type() for the masks matrix, this statement doesn't work.
I am giving an example comparing the phase output of the IDFT in MATLAB and OpenCV. PLEASE HELP! ![image description](/upfiles/14345961906952511.jpg)
The blue line is the MATLAB output and the red one is OpenCV's.
THANKS A LOT IN ADVANCE to whoever helps :)
tkamal, Wed, 17 Jun 2015 21:59:11 -0500
http://answers.opencv.org/question/64362/

Perspective Transform using Chessboard
http://answers.opencv.org/question/62956/perspective-transform-using-chessboard/
Hey,
I need some help with this problem:
I have a camera that photographs something on a horizontal plane at a specific angle.
That creates a perspective distortion of this "something", and I would like to get the picture as if I were looking straight down at it.
What I have done already, and the one thing I don't know how to do:
1. I placed a chessboard there.
2. I find the corners of the chessboard.
3. ???
4. cvGetPerspectiveTransform
5. cvWarpPerspective
My problem is point 3.
I have to find the source and destination points, which depend on the corners of the chessboard and the width of the picture, because they define the transformation.
The source is easy: (0,0), (Width,0), (0,Height) and (Width,Height), because I want the whole picture to be transformed.
The destination, however, is difficult for me; I don't know how to find those points.
I want the whole picture (not just the part with the chessboard) to be transformed in a single step.
Like in the picture below.
I would appreciate any help.
Greetings and my thanks in advance,
Phanta
![image description](/upfiles/14331120304698347.png)
Phanta, Sun, 31 May 2015 07:20:48 -0500
http://answers.opencv.org/question/62956/

hough circle detection problem
http://answers.opencv.org/question/58960/hough-circle-detection-problem/
My code is below.
I want to detect the outer circle. Is it possible?
The video is [here](http://www.filedropper.com/1_1).
// SYDNIA DYNAMICS 2015
#include <iostream>
#include <stdio.h>
#include <vector>
#include <thread>
#include <opencv2/opencv.hpp>
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/nonfree/nonfree.hpp"
using namespace cv;
Mat src, src_gray;
Mat dst, detected_edges;
int threshold_value = 11;
int threshold_type = 0;
int max_value = 255;
int max_type = 4;
const char * window_name = "CCC";
string trackbar_type = "Tbin";
string trackbar_value = "Value";
int main(int argc, char *argv[])
{
VideoCapture cap;
cap = VideoCapture("D:/SYDNIA/1.AVI");
if (!cap.isOpened()) // if not success, exit program
{
std::cout << " !! --->> camera problem " << std::endl;
return -1;
}
namedWindow(window_name);
cvMoveWindow(window_name, 5, 5);
int MAX = 130;
createTrackbar("MAX", window_name, &MAX, 300);
int MIN = 100;
createTrackbar("MIN", window_name, &MIN, 300);
int BLACKLEVEL = 47;
for (;;) {
if (!cap.read(src))
{
std::cout << "GRAB FAILURE" << std::endl;
exit(EXIT_FAILURE);
}
cvtColor(src, src_gray, CV_RGB2GRAY);
blur(src_gray, src_gray, Size(15, 15));
threshold(src_gray, dst, 11, 255, 0); // threshold
vector<Vec3f> circles;
HoughCircles(dst, circles, CV_HOUGH_GRADIENT, 1, dst.rows, 20, 7, MIN, MAX);
string status = "";
for (size_t i = 0; i < circles.size(); i++)
{
Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
bool ok = false;
int r = src.at<Vec3b>(center.y, center.x)[0];
int g = src.at<Vec3b>(center.y, center.x)[1];
int b = src.at<Vec3b>(center.y, center.x)[2];
if ((r<BLACKLEVEL) && (g<BLACKLEVEL) && (b<BLACKLEVEL))ok = true;
if (ok)
{
int radius = cvRound(circles[i][2]);
circle(src, center, 2, Scalar(30, 255, 140), -1, 3, 0);
circle(src, center, radius, Scalar(30, 255, 0), 3, 8, 0);
status = "2";
break;
}
else
{
status = "0";
}
}
imshow(window_name, src);
imshow("HSV", dst);
if (waitKey(1) == 27)break;
}
return 0;
}
source picture :
![image description](/upfiles/14279214619538235.png)
code output center:
![image description](/upfiles/1427921615909328.png)
is it possible to make it :
![image description](/upfiles/14279216578026321.png)
Volkan, Wed, 01 Apr 2015 15:56:56 -0500
http://answers.opencv.org/question/58960/

Arnold Transform
http://answers.opencv.org/question/54297/anold-transform/
Hello everyone,
I have written code for image scrambling and inverse scrambling using the Arnold transform.
Inputs:
1. The number of iterations for scrambling, enciter: any arbitrary number between 0 and the period of the Arnold transform for the given size.
2. The number of iterations for inverse scrambling: deciter = period - enciter.
For a 128x128 image the period of the Arnold transform is 96.
Refer to a relevant paper on the Arnold transform for the periods of other sizes.
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/core/core.hpp>
#include <opencv2/core/mat.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <iostream>
#include <math.h>
#include <conio.h>
using namespace std;
using namespace cv;
class image
{
public:
Mat im,im1,im2,im3,cs,im_enc,frame;
int getim();
};
int image::getim()
{
im=imread("Caption.png",0);
if (im.empty())
{
cout << "Error : Image cannot be loaded..!!" << endl;
return -1;
}
resize(im,im,Size(128,128),0.0,0.0,1);
imshow("Input Image",im);
Mat temp=Mat::zeros(im.size(),im.type());
double m=im.rows,x,x1,y,y1;
int enciter=50;
int deciter=96-enciter;
for(int iter=0;iter<enciter;iter++)
{
for(double i=0;i<m;i++)
{
for(double j=0;j<m;j++)
{
x=fmod((i+j),m);
y=fmod((i+2*j),m);
temp.at<uchar>(x,y)=im.at<uchar>(i,j);
}
}
temp.copyTo(im);
temp=Mat::zeros(im.size(),im.type());
}
imshow("Scrambled Image",im);
for(int iter=0;iter<deciter;iter++)
{
for(double i=0;i<m;i++)
{
for(double j=0;j<m;j++)
{
x=fmod((i+j),m);
y=fmod((i+2*j),m);
temp.at<uchar>(x,y)=im.at<uchar>(i,j);
}
}
temp.copyTo(im);
temp=Mat::zeros(im.size(),im.type());
}
imshow("Inverse Scrambled Image",im);
waitKey(0);
return 0;
}
int main()
{
image my;
my.getim();
return 0;
}
![image description](/upfiles/1423304278947551.png)
Mahavir, Mon, 02 Feb 2015 04:11:36 -0600
http://answers.opencv.org/question/54297/

Accounting for over-saturation using filter2D?
http://answers.opencv.org/question/34965/accounting-for-over-saturation-using-filter2d/
I am using filter2D to convolve an image. As I understand convolution, to make sure the pixels do not over-saturate (turn white) I would normalize each pixel after it has been convolved, dividing it by the sum of the values in the kernel.
But since I am using a very large 30x30 kernel, filter2D switches to cross-correlation via discrete Fourier transforms.
How would I prevent over-saturation when using discrete Fourier transforms?
I would appreciate anyone sharing ideas on how to deal with this.
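One answer-style observation, sketched rather than authoritative: the normalization can be folded into the kernel itself. Dividing the kernel by its sum once, before calling filter2D, is mathematically identical to dividing every output pixel afterwards, regardless of whether the convolution is computed directly or via DFT. A 1D pure-Python illustration:

```python
# Dividing the kernel by its sum before convolving is equivalent to
# dividing each convolved pixel afterwards, so it also holds when the
# convolution is carried out with DFTs internally (as filter2D does
# for large kernels).

def convolve(signal, kernel):
    # Plain "valid" 1D convolution/correlation with a symmetric kernel.
    k = len(kernel)
    out = []
    for i in range(len(signal) - k + 1):
        out.append(sum(signal[i + j] * kernel[j] for j in range(k)))
    return out

signal = [100] * 10                               # flat row at intensity 100
kernel = [1] * 4                                  # 4-tap box kernel, sum = 4
norm_kernel = [v / sum(kernel) for v in kernel]   # normalize once, up front

print(convolve(signal, kernel))       # saturates: every sample is 400
print(convolve(signal, norm_kernel))  # normalized: stays at 100.0
```

So with filter2D the usual approach would be to scale the 30x30 kernel by 1/sum(kernel) before the call; no per-pixel post-division is needed.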
Anton Bursch, Thu, 12 Jun 2014 18:34:49 -0500
http://answers.opencv.org/question/34965/

CPU<->GPU equivalent function
http://answers.opencv.org/question/29272/cpu-gpu-equivalent-function/
Hi,
I'm working on a GPU version of my code, but I can't find any equivalent of the transform() function on the GPU.
I use this function to apply a 4x4 Mat to an OpenCV Mat to make it b&w, sepia, etc.
Is there a way to do this on the GPU?
Thanks
Tarifut, Fri, 28 Feb 2014 08:06:16 -0600
http://answers.opencv.org/question/29272/

Feature extraction for OCR: I am going to use the Fourier transform; how do I implement it using VC++ and OpenCV?
http://answers.opencv.org/question/28476/feature-extraction-for-ocr-i-am-going-to-use-fourier-transformhow-to-implement-it-using-vc-opencv/
Please provide examples if possible.
Thank you.
sujay, Mon, 17 Feb 2014 01:06:54 -0600
http://answers.opencv.org/question/28476/

Which is the best algorithm, other than connected components, for character segmentation from a text image in OpenCV?
http://answers.opencv.org/question/28477/which-is-the-best-algorithm-other-then-connected-component-for-character-segmentation-from-text-image-in-open-cv/
I am going to try the Hough transform. Is it possible to implement it?
sujay, Mon, 17 Feb 2014 01:09:56 -0600
http://answers.opencv.org/question/28477/

How can I find rotation angles (pitch, yaw, roll) from perspective transformation coefficients?
http://answers.opencv.org/question/22100/how-can-i-find-rotation-angles-pitch-yaw-roll-from-perspective-transofmation-coefficients/
I have two 2D quads (each represented by 4 xy pairs), one of which is a perspective transformation of the other. How can I use these quads to deduce the rotations (pitch, yaw, roll) that caused the perspective distortion?
Notice that I used cvGetPerspectiveTransform(), which returns the perspective transformation coefficients as a 3x3 matrix. I am able to use these coefficients to map a point from one space to another; however, it is the rotation angles I am concerned with knowing.
Any ideas?
Thanks, Hasan.
has981, Tue, 08 Oct 2013 05:32:31 -0500
http://answers.opencv.org/question/22100/

Homography matrix transform
http://answers.opencv.org/question/21154/homography-matrix-transfor/
Hi All!
I am a complete beginner. I would like to use a full program based on SURF_homography.cpp from the tutorial, but in this way:
I want to pick some (3 or 4) points on an image, and the program should find the homography plane between the following images based on the picked points. It would be good if I could set the values of the transformation matrix, since I have to analyse the following images by manipulating those values.
Can anybody tell me where to find source code like this?
DaneeeG, Sun, 22 Sep 2013 18:49:28 -0500
http://answers.opencv.org/question/21154/

Affine transform coordinate
http://answers.opencv.org/question/18651/affine-transform-coordinate/
Hi,
how can I transform a coordinate (X/Y) with a 2x3 affine transform matrix?
For example, I have an image (img1) of 2048x2048 px, transform it (apply some rotation and translation) and then get img2. Now I want to know where the pixel that was at the point P(100/150) in img1 ends up in img2.
It does not have to be totally accurate; a few pixels off is no problem.
Is there any method to achieve this in OpenCV?
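Applying an affine matrix to a single coordinate is just [x', y'] = M · [x, y, 1]^T (OpenCV's cv::transform can do this for a whole vector of points). A pure-Python sketch with a made-up rotation-plus-translation matrix:

```python
import math

def warp_point(m, x, y):
    # m is a 2x3 affine matrix as used by warpAffine:
    # x' = m00*x + m01*y + m02,  y' = m10*x + m11*y + m12
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

# Illustrative matrix: 90-degree rotation about the origin, then
# a translation of (10, 0).
a = math.radians(90)
m = [[math.cos(a), -math.sin(a), 10],
     [math.sin(a),  math.cos(a),  0]]
x, y = warp_point(m, 100, 150)
print(round(x, 6), round(y, 6))  # -> -140.0 100.0
```

Note that this maps forward (img1 -> img2) only if m is the same matrix that was passed to warpAffine; warpAffine with default flags uses the matrix as a forward transform of coordinates even though it samples the source inversely.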
pkohout, Mon, 12 Aug 2013 08:37:08 -0500
http://answers.opencv.org/question/18651/

Eliminate scaling from perspective transform
http://answers.opencv.org/question/15936/eliminate-scaling-from-perspective-transform/
I would like to project an image onto a textured wall. Using various techniques I was able to detect the wall plane, and now I just want to draw the image on that plane.
The plane itself is a quadrangle, and I was able to get a perspective transform matrix between the image and the wall plane, and use that transform to perform the projection. My only problem is that the image is scaled up to occupy the entire quadrangle of the wall plane. I would only like it to follow its perspective, without the changes in scale. Can I somehow eliminate the scaling from the transformation matrix, or is there any other method that would help?
Thank you!
Rares Musina, Sun, 30 Jun 2013 09:40:03 -0500
http://answers.opencv.org/question/15936/

What data types can I use for a complex OutputArray?
http://answers.opencv.org/question/5733/what-data-types-can-i-use-for-a-complex-outputarray/
I am using `cv::dft(InputArray src, OutputArray dst, cv::DFT_COMPLEX_OUTPUT)`. What data types are supported for a complex OutputArray? I was hoping to use `std::vector<std::complex<double>>`.
UltraBird, Sat, 05 Jan 2013 14:33:08 -0600
http://answers.opencv.org/question/5733/

Transform which selects similarly colored regions?
http://answers.opencv.org/question/14819/transform-which-selects-similarly-colored-regions/
Is there any transform which "looks" at a region of a picture, compares it with a neighboring region, and, if the two regions are similarly colored, unifies them into one and colors it with their average color?
Finally, this transform would distinguish "paper" from the "writing" on it.
It would color with a constant value any gradient fields or dimmed pictures, while leaving uncolored any area with frequently changing colors, like writing.
The key idea is that this transform should allow any large color change as long as it is gradual.
Example:
![image description](/upfiles/13705949961925747.png)
So, this filter should be something like the opposite of finding edges.
Dims, Fri, 07 Jun 2013 03:31:57 -0500
http://answers.opencv.org/question/14819/

Transformation from log-polar to Cartesian
http://answers.opencv.org/question/11276/transformation-from-log-polar-to-cartesian/
I have one simple (and perhaps stupid) question. I am using OpenCV 2.1.
- I transform the original image (320 x 240) to a log-polar image of resolution (200 x 200) by using CvLogPolar function with the magnitude parameter 40, and center of the image set as the center of transformation.
- I perform some processing, and obtain a single point (centroid of the blob) in the log polar coordinates.
- My question: How do I transform the coordinates of the point from the log-polar frame back to Cartesian, i.e. to the coordinates of the original image? (I just need to transform the computed values, not the whole image.)
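A hedged sketch of the inverse mapping (assuming the common cvLogPolar convention: rho = M*log(r) along the columns, with the angle spread linearly over the rows; verify against your OpenCV 2.1 build):

```python
import math

def logpolar_to_cartesian(rho, phi_row, m, center, rows):
    # Inverse of the log-polar mapping rho = m*log(r), phi = atan2(y, x).
    # phi_row is the row index; one full turn is assumed to span `rows` rows.
    r = math.exp(rho / m)
    angle = 2 * math.pi * phi_row / rows
    return (center[0] + r * math.cos(angle),
            center[1] + r * math.sin(angle))

# Round trip: take a Cartesian offset, map it forward, then back.
# m=40 and a 200-row output match the numbers in the question.
m, center, rows = 40.0, (160.0, 120.0), 200
x, y = 200.0, 120.0                      # 40 px right of center
r = math.hypot(x - center[0], y - center[1])
rho = m * math.log(r)
phi_row = rows * math.atan2(y - center[1], x - center[0]) / (2 * math.pi)
print(logpolar_to_cartesian(rho, phi_row, m, center, rows))  # ~ (200.0, 120.0)
```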
Thanks.
luksdoc, Wed, 10 Apr 2013 15:07:42 -0500
http://answers.opencv.org/question/11276/

transform phase map to 3D
http://answers.opencv.org/question/10250/transform-phase-map-to-3d/
Dear fellow OpenCV users and developers,
I have been trying for quite some time to figure out how to transform my phase map to 3D.
A little explanation:
Opencv has a nice stereo vision workflow, stereo calibration, acquire images, rectify images, calculate disparity via some correspondence technique, use reprojectImageTo3D to get the 3D point cloud.
I'm replacing one of the cameras with a projector for dense correspondence calculation. I'm able to stereo-calibrate the camera/projector pair; then I project vertical line patterns while the camera takes pictures of them, and calculate the projector-column-to-camera-pixel mapping. So here I only have a projector column mapping, not a column-and-row mapping. The mapping ends up looking like a disparity image, but it's not a disparity. The problem is that during regular camera/camera disparity calculation both camera images are rectified, so all distortion is taken out (among other things, the epipoles are lined up). Then the correspondence algorithm searches both rectified images row by row (as the epipolar lines are aligned) and calculates the disparity.
In my situation I can't rectify the images, because there is only one image from the camera. But I can find the mapping of the projector columns to the camera pixels. So I can undistort this phase map image which will account for the camera distortion, but don't know how to undistort the projector side of the equation.
My question. Does opencv have some function or is there some group of functions that I can use to transform my projector pixel colums mapped to my camera pixels to 3D? I want to account for all distortion. Again here I project only vertical lines which gives me a 1D projector column (y) pixels to camera (x,y) pixels.
Thanks for any help or advice.
nmm02003, Wed, 27 Mar 2013 21:04:15 -0500
http://answers.opencv.org/question/10250/

Get plane image from sphere photo
http://answers.opencv.org/question/9162/get-plane-image-from-sphere-photo/
I have a photo of a ball. ![example from google](http://cdn.cloudfiles.mosso.com/c128031/sc-image/c/9/a/d/c9ad4a7dffab53cd5faa6a81d003b9a8.jpg)
I want to get the plane image without distortions. For example, the text "APPROVED" must transform into straight text with uniformly sized letters, and all hexagons must be the same.
hb, Wed, 13 Mar 2013 12:59:49 -0500
http://answers.opencv.org/question/9162/

Translation transform with depth image
http://answers.opencv.org/question/7572/translation-transform-with-depth-image/
Hi,
----------
**Summary**:
1. Translate a depth map by <X,Y,Z>, which are known
2. The new depth map has the same size as the input
3. A smoother estimate than subtraction by Z plus translation by warpPerspective would provide is thought to be needed
----------
**Explanation**:
I'm trying to perform a translation transformation on a depth map (only depth no intensities), so that I'm able to zoom in on a particular part of the image, while keeping the size of the matrix the same.
i.e., if my input matrix is mat_inp with size (rows, cols) and type float,
then I'd like my output to be mat_out with size (rows, cols) and type float, with the origin translated by (X,Y,Z). I know what the translation (X,Y,Z) is.
So, I'd like to move my perspective to the point (X,Y,Z), coordinates are in the frame of the initial frame(perspective).
Does anyone know if an existing function exists that lets me do that?
I thought of a way to do it, but am not sure if it's correct:
1. Subtract all pixels by Z
2. Replace negatives by zero
3. Use warpPerspective to translate X,Y,Z
The only problem is that, in case of occlusions due to something closer than Z, I don't think I'd get a smooth new depth map. If there were a small object close by, it might make the depth map closer than what a smooth estimate would give.
It seems to be that these warp methods are optimized for intensity values and that makes me wonder if a function exists that could do the translation shift for a depth map.
Sorry about the long post, any help is appreciated.
shray, Mon, 18 Feb 2013 03:58:03 -0600
http://answers.opencv.org/question/7572/

Image transformation
http://answers.opencv.org/question/4629/image-transformation/
Hello,
I've got the following problem: I would like to transform an image (with scaling, rotation and shearing) and save both the position of a pixel in the reference image and the transformed position of that pixel in the transformed image. How can I do this when the transformation matrix is known and I work with Mat objects?
Bomber19, Tue, 27 Nov 2012 08:15:14 -0600
http://answers.opencv.org/question/4629/

How does rotation in OpenCV work
http://answers.opencv.org/question/3533/how-does-rotation-in-opencv-work/
How does rotation in OpenCV work? I have an ROI in the image that I would like to rotate by a certain angle, and I move it to the top-left corner of the image as the center of rotation. Then I move the ROI back to its old position to see its new position, but the result I get is the ROI at some strange angle. Can someone explain to me how the rotation works?
dzvrt, Sun, 28 Oct 2012 17:57:37 -0500
http://answers.opencv.org/question/3533/

RANSAC and 2D point clouds
http://answers.opencv.org/question/1834/ransac-and-2d-point-clouds/
I have 2 point clouds in 2D and I want to use RANSAC to determine the transformation matrix between them.
I have no pairs of points, just 2 sets of points.
How can I do this in OpenCV?
----------
I tried to write ransac-like scheme for my purposes.
1. Get 4 random points from 1st set and from 2nd set.
2. Compute transform matrix H using getPerspectiveTransform.
3. Warp the 1st set of points using H and test how well they align to the 2nd set using some metric. (I don't know what metric I should use; I tried the sum of minimum distances over all points in the 1st set, but it doesn't seem to work well if points bunch together after the transform.)
4. Repeat 1-3 N times and choose the best transform according to the metric.
Maybe I should use a different metric?
Also I have the idea that I could match points using the shape context algorithm and only then use RANSAC (or some other algorithm) to determine the transformation.
mrgloom, Tue, 28 Aug 2012 02:45:39 -0500
http://answers.opencv.org/question/1834/
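A concrete sketch of the 1-4 loop above, simplified to a translation-only model so it stays self-contained (a full version would sample 4 points per set and use getPerspectiveTransform); the metric here is the inlier count rather than a summed distance, which is less sensitive to points bunching together after the transform:

```python
import math
import random

def ransac_translation(src, dst, iters=200, thresh=1.0, seed=0):
    # Simplified RANSAC: the model is a pure 2D translation estimated
    # from one random point pair; the score is the number of src points
    # that land within `thresh` of some dst point (inlier count).
    rng = random.Random(seed)
    best, best_inliers = (0, 0), -1
    for _ in range(iters):
        sx, sy = rng.choice(src)          # step 1: random sample from set 1
        dx, dy = rng.choice(dst)          #         and from set 2
        t = (dx - sx, dy - sy)            # step 2: candidate model
        inliers = sum(                    # step 3: score the model
            1 for (x, y) in src
            if any(math.hypot(x + t[0] - u, y + t[1] - v) < thresh
                   for (u, v) in dst))
        if inliers > best_inliers:        # step 4: keep the best model
            best, best_inliers = t, inliers
    return best, best_inliers

# Synthetic data: dst is src shifted by (5, 7), plus two outliers.
src = [(0, 0), (1, 0), (0, 1), (2, 3), (4, 4)]
dst = [(x + 5, y + 7) for (x, y) in src] + [(50, 50), (-20, 3)]
t, n = ransac_translation(src, dst)
print(t, n)  # expected translation (5, 7) with all 5 src points as inliers
```

The inlier-count metric is the standard RANSAC choice precisely because a summed-distance metric can be dragged down by outliers or by clustered points, which matches the problem described in step 3 of the question.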