Probably the right way to do this is to somehow access the cameras' internal parameters. Failing that, if you can make all three cameras point at the same scene, you could take several sets of three nominally identical images and use them to estimate the contrast/brightness adjustment for each camera, which you then apply whenever you take new images. This assumes the three cameras always have the same relative brightness offsets from one another.
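As a minimal sketch of that calibration idea (using OpenCV/NumPy; the simple linear gain/offset model and the file names are just illustrative assumptions, not a ready-made API):

    import cv2
    import numpy as np

    def fit_gain_offset(img, ref_img):
        """Least-squares fit of gain*x + offset mapping img's gray levels
        onto ref_img's. Assumes both images show the same scene and are
        roughly aligned."""
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).ravel().astype(np.float64)
        ref = cv2.cvtColor(ref_img, cv2.COLOR_BGR2GRAY).ravel().astype(np.float64)
        gain, offset = np.polyfit(gray, ref, 1)  # first-degree polynomial fit
        return gain, offset

    def apply_correction(img, gain, offset):
        """Apply the learned brightness/contrast correction to a new image."""
        return cv2.convertScaleAbs(img, alpha=gain, beta=offset)

    # Hypothetical usage: calibrate each camera once against a reference
    # camera, then correct every new frame from that camera.
    # gain, offset = fit_gain_offset(cv2.imread("cam2_calib.png"),
    #                                cv2.imread("cam1_calib.png"))
    # corrected = apply_correction(cv2.imread("cam2_new.png"), gain, offset)

Averaging the fit over several image sets should make the estimate more robust to noise in any single pair.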
On the other hand, decomposing the images into HSV and scaling each image's Value channel to a common mean could be an approach, although it certainly has drawbacks (it can clip bright regions and it ignores per-channel colour differences, for example).
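Something along these lines, where the target mean and the plain multiplicative scaling are assumptions made for illustration:

    import cv2
    import numpy as np

    def equalize_mean_value(img, target_mean=128.0):
        """Scale the V (brightness) channel so its mean matches target_mean."""
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)   # uint8 HSV, V in [0, 255]
        h, s, v = cv2.split(hsv)
        v = v.astype(np.float32)
        scale = target_mean / max(v.mean(), 1e-6)    # avoid division by zero
        v = np.clip(v * scale, 0, 255).astype(np.uint8)  # clipping is one drawback
        return cv2.cvtColor(cv2.merge([h, s, v]), cv2.COLOR_HSV2BGR)

    # Apply the same target to frames from all three cameras so their overall
    # brightness roughly matches (hypothetical file names):
    # out = [equalize_mean_value(cv2.imread(p))
    #        for p in ("cam1.png", "cam2.png", "cam3.png")]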
Looking at research into image stitching might also help (exposure compensation between overlapping images is a standard step in those pipelines); I'm sure you aren't the only one with this problem.