cv2.split(img)[3] fails on RGB[A?]
S'up OpenCVers.
Although the title of the question (my first!) is quite sparse, the question itself requires some reasonably detailed background... so here goes.
I'm working with reasonably large ECW aerial images (max size 131GB; usual size ~11GB). The images have spatial metadata that includes the coordinate system, the coordinates of the top-left and bottom-right corners and the centre, plus details of the colour bands. There is always an alpha band (which is useful later).
The process uses gdal.Warp() to cut each big image into hundreds of thousands of small images; each cutline is the bounding box of an individual property's boundary. The boundaries themselves are stored in a PostgreSQL spatial database with a unique identifier (so I don't need to retain the geographical metadata in my output images: I can append it later).
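The cutting call looks something like this (a minimal sketch, not my actual script; the filenames, PostGIS connection string, and table/column names are all placeholders):

```python
from osgeo import gdal

# Minimal sketch of one cut. All names here (paths, connection string,
# table and column) are placeholders, not the real ones.
SRC = "big_area.ecw"
PG = "PG:host=localhost dbname=cadastre"

gdal.Warp(
    "cut/123456.tif",      # one small image per property
    SRC,
    format="GTiff",
    cutlineDSName=PG,      # cutlines come from the PostgreSQL database
    cutlineSQL="SELECT geom FROM parcels WHERE id = 123456",
    cropToCutline=True,    # clip the output extent to the cutline's bbox
    dstAlpha=True,         # keep an alpha band marking NODATA
)
```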
As you might imagine, not all property boundaries are rectangular - and of those that are, a lot are not oriented 'north-south'. All bounding boxes are rectangular and oriented N-S.
What this means is that the amount of NODATA in each cut image varies with how far the property boundary is tilted away from 'vertical'. Generalising to non-rectangular boundaries, it varies with the difference between the geometry's minimum bounding rectangle and its bounding box (e.g. a square tilted 45° fills only half of its N-S bounding box, so half of that cut image is NODATA).
The aim is to get a set of images with minimal NODATA, in order to run them against an image classifier.
So the next stage is to take each of the output 'cut' images, and orient them 'as vertically as possible'.
The second stage uses:
- cv2.split() to extract the alpha layer;
- cv2.findContours() to find the outline of the alpha layer;
- cv2.minAreaRect() to get the minimum bounding rectangle (MBR) of that outline.
cv2.minAreaRect()'s output includes the orientation of the MBR (sweet!), so all that's left to do is snip the original image to the MBR, rotate the clipped output by MBR[2], and save the image.
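Roughly, the rotation step looks like this (a sketch under my assumptions, not the exact script; picking the largest contour as the outline is illustrative, and the final crop to the MBR is omitted):

```python
import cv2
import numpy as np

def reorient(img: np.ndarray) -> np.ndarray:
    """Rotate a 4-band (BGRA) cut image so its MBR sits N-S.

    Taking the largest contour as the property outline is an
    assumption for this sketch.
    """
    alpha = cv2.split(img)[3]                       # the alpha band
    contours, _ = cv2.findContours(alpha, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    mbr = cv2.minAreaRect(max(contours, key=cv2.contourArea))
    (cx, cy), _, angle = mbr                        # MBR[2] is the tilt angle
    M = cv2.getRotationMatrix2D((cx, cy), angle, 1.0)
    rows, cols = img.shape[:2]
    return cv2.warpAffine(img, M, (cols, rows))
```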
The process is two scripts:
- cuts up the large image and stores the cut images;
- re-orients the images so that they are facing due[ish] North.
It's two scripts because the job is separable: cut images are produced faster than they can be rotated, so it makes sense to have one script pumping out 'cut' images and a second script following behind, rotating the results (as opposed to cutting and rotating each image in turn).
I've done this same task a dozen times (I wrote it to begin with, before there was a Python API for GDAL - which meant 3 million os or subprocess calls to gdalwarp and gdal_translate) and it's always gone without a hitch.
Today the 'cut' process worked perfectly, but the 'rotate' process failed at the cv2.split() step, saying that index 3 was out of range (think of it as a direct reference to cv2.split(img)[3]: the alpha layer in an RGBA image).
When I look at the images in QGIS ...
Just checking: do you use cv2.imread() to bring the images in? If so, are you using the cv2.IMREAD_UNCHANGED flag? That is needed to preserve the alpha channel.
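For example, a quick way to see the difference (the filename is hypothetical):

```python
import cv2

# Default flag: imread converts to 3-channel BGR, silently dropping alpha.
img = cv2.imread("cut/123456.tif")
print(img.shape)   # (rows, cols, 3) - so cv2.split(img)[3] raises IndexError

# IMREAD_UNCHANGED: the file's bands are kept as-is.
img = cv2.imread("cut/123456.tif", cv2.IMREAD_UNCHANGED)
print(img.shape)   # (rows, cols, 4) - alpha is available as cv2.split(img)[3]
```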