# Affine transform image outside of view

I have an image to which I apply several affine transforms in a row. The problem is that after executing them, parts of the transformed image sometimes end up outside the view window and are no longer visible. To make things worse, I then apply further affine transforms to the transformed image, but since the invisible part is lost, those transforms are not applied to it. Any suggestions on how to recover the part of the image lost due to the affine transformation? Thanks



What do you mean by view?

If you have set a ROI (region of interest) on the image, you need to remove it with cvResetImageROI().

But if the affine transform merely transforms some points out of the defined boundaries of the image, then the only solution is to use a larger image for transforming.

You could:

1. first create a new image with double the dimensions,
2. then call cvSetImageROI() on that image to select a region in the middle with the size of your old image,
3. then cvCopy() the old image into the new one,
4. then call cvResetImageROI() to remove the ROI (important!),
5. then do your transforms on the new image.
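The embed-and-copy sequence above can be sketched without any OpenCV calls, using a plain 2-D array in place of an IplImage (the function name and the centred placement are just for illustration):

```cpp
#include <cassert>
#include <vector>

// Copy a small "image" into the centre of a canvas twice its size,
// mimicking the cvSetImageROI + cvCopy + cvResetImageROI sequence.
std::vector<std::vector<int>> embedInLargerCanvas(
    const std::vector<std::vector<int>>& img)
{
    int h = (int)img.size(), w = (int)img[0].size();
    std::vector<std::vector<int>> canvas(2 * h, std::vector<int>(2 * w, 0));
    int top = h / 2, left = w / 2;  // "ROI": centred region of the old size
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            canvas[top + y][left + x] = img[y][x];
    return canvas;
}
```

The transforms then run on the doubled canvas, so content can move half an image width in any direction before it is clipped.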

EDIT:

If the affine transformations are such that even a much larger image will not contain the result, I can think of two options:

1) instead of transforming a 2d matrix, transform a list of points:
Make a matrix that contains all the (foreground) pixels of your image, with their x coordinate in channel 0 and their y coordinate in channel 1; the dimensions of the matrix do not matter. Then call cvTransform() instead of cvWarpAffine() with this matrix, and finally paint the resulting transformed points into a new image.
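A minimal sketch of option 1, assuming a 2x3 affine matrix in row-major `[a b tx; c d ty]` layout (the `Pt` struct and function name are made up for illustration; in OpenCV you would apply the same mapping with cvTransform on a 2-channel matrix):

```cpp
#include <cassert>
#include <vector>

struct Pt { float x, y; };

// Apply the 2x3 affine matrix m = [a b tx; c d ty] to each point:
// x' = a*x + b*y + tx,  y' = c*x + d*y + ty
std::vector<Pt> transformPoints(const std::vector<Pt>& pts, const float m[6])
{
    std::vector<Pt> out;
    out.reserve(pts.size());
    for (const Pt& p : pts)
        out.push_back({ m[0] * p.x + m[1] * p.y + m[2],
                        m[3] * p.x + m[4] * p.y + m[5] });
    return out;
}
```

Because the points are just coordinates, nothing is clipped when they land outside the original image bounds; you only decide the canvas size when you paint them.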

2) change your affine transforms so that they do not shift the image quite as much:
If you use getAffineTransform, simply shift your 3 dst reference points so that they have the same centroid as your 3 src points before calling getAffineTransform.
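Option 2 can be sketched like this (plain C++; the struct and function names are hypothetical, and the shifted points would then be passed to getAffineTransform):

```cpp
#include <array>
#include <cassert>

struct P2 { float x, y; };

// Shift the three dst points so their centroid coincides with the
// centroid of the three src points; the affine transform computed from
// the shifted triple then produces no net translation of the centroid.
std::array<P2, 3> recenterDst(const std::array<P2, 3>& src, std::array<P2, 3> dst)
{
    P2 cs = { (src[0].x + src[1].x + src[2].x) / 3,
              (src[0].y + src[1].y + src[2].y) / 3 };
    P2 cd = { (dst[0].x + dst[1].x + dst[2].x) / 3,
              (dst[0].y + dst[1].y + dst[2].y) / 3 };
    for (P2& p : dst) { p.x += cs.x - cd.x; p.y += cs.y - cd.y; }
    return dst;
}
```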


I applied all your suggestions and after I reset the ROI, I apply the transformation on the big image. Then, I display the new image which now has the same size as the big image, but I still can't see the results of my transformation (the image is black).

( 2012-10-24 10:09:57 -0500 )

If the image is black then something went wrong along the way. Have you tried displaying the image before the transformations, and with less severe transformations? It is possible that the larger image is still too small. Also, make sure to use yet another image as destination for cvWarpAffine, since it cannot operate in-place.

( 2012-10-24 12:19:11 -0500 )

I think my problem is that once I start working with a big image, I apply the affine transform to the entire image, and as a result I obtain another big image. If I then keep applying transformations to the big image, I am working in the big image's coordinate system, and that is where the problem comes from. Is there a way to translate the coordinate system to the image (or the image to the coordinate system) so that the image remains visible? The problem is that after the affine transform, I don't have any control over the position of the translated image.

( 2012-10-25 16:51:24 -0500 )


Thanks for the ideas, I'll think about them.

( 2012-10-28 20:01:30 -0500 )

First you have to rotate the 4 corners of the image, then compute the bounding rectangle of the rotated points. In the following code, call bhRotateImage to rotate the image.

```cpp
#define BH_DEG_TO_RAD   (CV_PI / 180)
#define BH_RAD_TO_DEG   (180 / CV_PI)   // was missing; used by bhGetLineAngle
typedef unsigned int UINT;
#define BH_MAXUINT      ((UINT)~((UINT)0))
#define BH_MAXINT       ((int)(BH_MAXUINT >> 1))
typedef vector<CvPoint2D32f> BhPoints32f;

// Edge-based rectangle; the original post used this type without defining it
typedef struct { int left, top, right, bottom; } BH_RECT;

CvSize bhGetRectSize(const CvRect srcRect)
{
    return cvSize(srcRect.width, srcRect.height);
}

BH_RECT bhBlankRECT()
{
    BH_RECT res;
    res.left = BH_MAXINT;
    res.top = BH_MAXINT;
    res.bottom = 0;
    res.right = 0;
    return res;
}

// Convert the edge-based rectangle back to OpenCV's x/y/width/height form
// (was missing from the original post)
CvRect bhRect2CvRect(BH_RECT r)
{
    return cvRect(r.left, r.top, r.right - r.left, r.bottom - r.top);
}

int bhLineLength(CvPoint p1, CvPoint p2)
{
    int res;
    if ((p2.x == p1.x) && (p2.y == p1.y))
        res = 0;
    else if (p2.x == p1.x)
        res = abs(p2.y - p1.y);
    else if (p2.y == p1.y)
        res = abs(p2.x - p1.x);
    else
        res = cvRound(sqrt(pow((float)p2.y - p1.y, 2) + pow((float)p2.x - p1.x, 2)));
    return res;
}

float bhGetLineAngle(CvPoint p1, CvPoint p2)
{
    float deg = float(atan2((float)(p2.y - p1.y), (float)(p2.x - p1.x)) * BH_RAD_TO_DEG);
    if (deg < 0)
        deg = 360 + deg;
    return deg;
}

CvRect bhGetImageCvRect(const IplImage* srcImage)
{
    return cvRect(0, 0, srcImage->width, srcImage->height);
}

BhPoints32f bhGetRectPoints(CvRect srcRect)
{
    BhPoints32f result;
    result.push_back(cvPoint2D32f(srcRect.x, srcRect.y));
    result.push_back(cvPoint2D32f(srcRect.x + srcRect.width, srcRect.y));
    result.push_back(cvPoint2D32f(srcRect.x + srcRect.width, srcRect.y + srcRect.height));
    result.push_back(cvPoint2D32f(srcRect.x, srcRect.y + srcRect.height));
    return result;
}

// Rotate srcPoint around center by `angle` degrees. The original version
// returned an uninitialized result; the polar-coordinate step was missing.
CvPoint2D32f bhRotatePoint(CvPoint2D32f srcPoint, CvPoint2D32f center, float angle)
{
    float teta = bhGetLineAngle(cvPointFrom32f(center), cvPointFrom32f(srcPoint));
    float radius = (float)bhLineLength(cvPointFrom32f(center), cvPointFrom32f(srcPoint));
    float newTeta = float((angle + teta) * BH_DEG_TO_RAD);

    CvPoint2D32f result;
    result.x = center.x + radius * cos(newTeta);
    result.y = center.y + radius * sin(newTeta);
    return result;
}

void bhRotatePoints(BhPoints32f& srcPoints, CvPoint2D32f center, float angle)
{
    for (unsigned int i = 0; i < srcPoints.size(); i++)
        srcPoints[i] = bhRotatePoint(srcPoints[i], center, angle);
}

// Axis-aligned bounding box of a set of points
CvRect bhGetPoints32fCVRect(BhPoints32f points)
{
    BH_RECT r = bhBlankRECT();
    for (unsigned int i = 0; i < points.size(); i++)
    {
        if (points[i].x < r.left)
            r.left = (int)points[i].x;
        if (points[i].x > r.right)
            r.right = (int)points[i].x;
        if (points[i].y < r.top)
            r.top = (int)points[i].y;
        if (points[i].y > r.bottom)
            r.bottom = (int)points[i].y;
    }
    return bhRect2CvRect(r);
}

CvPoint bhGetImageCenter(const IplImage* srcImage)
{
    return cvPoint(srcImage->width / 2, srcImage->height / 2);
}

IplImage* bhRotateImage(const IplImage* srcImage, CvPoint center, double angle,
                        CvScalar fillval, CvPoint& offsetPoint, int flags)
{
    CvMat* rot_mat = cvCreateMat(2, 3, CV_32FC1);
    cv2DRotationMatrix(cvPointTo32f(center), angle, 1, rot_mat);

    // Rotate the image corners to find the bounding box of the result
    CvRect imageRect = bhGetImageCvRect(srcImage);
    BhPoints32f rectPoints = bhGetRectPoints(imageRect);
    bhRotatePoints(rectPoints, cvPointTo32f(center), float(-angle));
    CvRect rotateRect = bhGetPoints32fCVRect(rectPoints);

    // Shift the transform so the bounding box starts at the origin
    CV_MAT_ELEM(*rot_mat, float, 0, 2) = CV_MAT_ELEM(*rot_mat, float, 0, 2) - rotateRect.x;
    CV_MAT_ELEM(*rot_mat, float, 1, 2) = CV_MAT_ELEM(*rot_mat, float, 1, 2) - rotateRect.y;

    IplImage* resultImage = cvCreateImage(bhGetRectSize(rotateRect),
                                          srcImage->depth, srcImage->nChannels);

    cvSetImageROI(resultImage, bhGetImageCvRect(resultImage));
    cvWarpAffine(srcImage, resultImage, rot_mat, flags, fillval);
    cvResetImageROI(resultImage);

    // Report how much the image centre moved
    CvPoint preCenter = bhGetImageCenter(srcImage);
    CvPoint curCenter = bhGetImageCenter(resultImage);
    offsetPoint = cvPoint(curCenter.x - preCenter.x, curCenter.y - preCenter.y);

    cvReleaseMat(&rot_mat);  // the original snippet was cut off here;
    return resultImage;      // the release and return are the obvious completion
}
```
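The corner-rotation trick the code relies on — rotate the four corners, take their bounding box, and size the destination image from it — can be checked in isolation with a small OpenCV-free sketch (the struct and function names here are made up):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct BBox { float left, top, right, bottom; };

// Rotate the four corners of a w x h rectangle about its centre by
// `deg` degrees and return the axis-aligned box containing them; this
// is the size the destination image needs so that nothing is clipped.
BBox rotatedBounds(float w, float h, float deg)
{
    const float rad = deg * 3.14159265358979f / 180.0f;
    const float c = std::cos(rad), s = std::sin(rad);
    const float cx = w / 2, cy = h / 2;
    const float xs[4] = {0, w, w, 0}, ys[4] = {0, 0, h, h};
    BBox b = {1e30f, 1e30f, -1e30f, -1e30f};
    for (int i = 0; i < 4; ++i) {
        float x = cx + (xs[i] - cx) * c - (ys[i] - cy) * s;
        float y = cy + (xs[i] - cx) * s + (ys[i] - cy) * c;
        b.left = std::min(b.left, x);  b.right  = std::max(b.right, x);
        b.top  = std::min(b.top, y);   b.bottom = std::max(b.bottom, y);
    }
    return b;
}
```

For a 90-degree rotation of a 100x50 image, the bounds come out 50 wide and 100 tall, which is exactly why a fixed-size destination clips the result.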

Thanks for the code. I'll analyze it to understand how it works.

( 2012-10-28 19:59:58 -0500 )


## Stats

Seen: 710 times

Last updated: Oct 27 '12