OpenCV on iOS producing milky / off-colored HDR images?
I'm trying to get OpenCV to produce HDR images on iOS from 9 pictures that I've supplied it (-4.0 EV to +4.0 EV in increments of 1), but the results are coming out looking off. Everything has a weird milky, washed-out look to it, and the area outside the window looks blue and pixelated. Any insight as to why this is happening? Below is my code, as well as an image showing the issue I'm having.
cv::Mat mergeToHDR (vector<Mat>& images, vector<float>& times)
{
    // Align the exposures (4 => up to 16-pixel shift)
    vector<Mat> images_(images);
    Ptr<AlignMTB> align = createAlignMTB(4);
    align->process(images_, images);

    // Recover the camera response curve
    Mat response;
    Ptr<CalibrateDebevec> calibrate = createCalibrateDebevec();
    calibrate->process(images, response, times);

    // Merge the exposures into a 32-bit float HDR image
    Mat hdr;
    Ptr<MergeDebevec> merge = createMergeDebevec();
    merge->process(images, hdr, times, response);

    // Tonemap back down to a displayable 8-bit image;
    // Reinhard output is in [0, 1], so scale by 255
    Mat tm;
    Ptr<TonemapReinhard> tonemap = createTonemapReinhard(2.2f);
    tonemap->process(hdr, tm);
    tm.convertTo(tm, CV_8U, 255);
    return tm;
}
I am actually working on something quite similar. A few things to try out:
1. Display the nine images taken at the different exposures. I recall having a weird issue whenever I'd manually set my exposures, so instead I set them relative to the current exposure calculated by iOS.
2. Look at the merged HDR image before tonemapping. It might already be over-exposed, and applying the tonemap at a gamma of 2.2f just worsens it. A lower gamma might help in that scenario.
3. Try using a different tonemap operator.
4. Do you really need 9 images to create the HDR? In my application, 3 sufficed, and I calculated the exposure times based off of the iOS-recommended value.
What do the requirements say about the exposures? You can try what I did and automate their values.
Honestly, the requirements aren't super detailed; they just specify that I have 9 exposures. Any tips on automating their values?
For my application, I used three images so it was a minor logic implementation. I grabbed the current exposure duration, then halved or doubled it. Then I set my camera to take pics with these three exposure rates. Thereafter I converted the exposure durations to seconds in order to pass them to the OpenCV API.
For your case, consider doing the same thing. Try first with my proposed three images to gauge the results, then scale up to nine. You can use this and this for doubling and halving the exposure durations respectively. The main advantage of this method is that once you pull up the camera, iOS automatically calculates the best exposure for the current environment, so we just leverage that to take the extra photos.
One more thing: make sure the newly calculated exposure durations are within the device's min and max bounds. Keep me posted on the results you attain. Cheerio :)
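The halving/doubling scheme above can be sketched in plain C++ (the function name `bracketTimes` and the 1/60 s center value are illustrative assumptions; on the device you would read the camera's current exposure duration instead):

```cpp
#include <cmath>
#include <vector>

// Build a bracket of exposure times around the camera-chosen center value,
// stepping by powers of two (one EV per step). stops=1 gives {t/2, t, 2t};
// stops=4 gives the nine exposures from -4 EV to +4 EV.
std::vector<float> bracketTimes(float centerSeconds, int stops)
{
    std::vector<float> times;
    for (int ev = -stops; ev <= stops; ++ev)
        times.push_back(centerSeconds * std::pow(2.0f, static_cast<float>(ev)));
    return times;
}
```

The resulting vector is what you would pass as `times` to mergeToHDR, after clamping each value to the camera's supported minimum and maximum exposure durations.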