[New project] Speed & Precision
Hi everyone,
I've had a small project in mind for some years now, and OpenCV might help me solve it (hopefully). I've read quite a bit (though surely not enough for some) on camera vision lately, and I'd like to know if this would be doable. From my point of view the answer is yes, but I need some guidance to get started.
I'm trying to achieve sub-mm precision on objects that move in 3D at speeds from a few mm/s up to 100 mm/s. The working area, to begin with, will be 1500 mm × 1000 mm.
I've seen projects that base their precision on intensity weighting and sub-pixel centroid computation and achieve pretty good results (on the order of 0.1 mm), but they don't mention any environmental conditions that could distort the results over time.
In order to approach this project with enough detail, I'd like some information:
- Are 2 cameras enough? I've read that a 3rd could help remove blur and increase precision, but is that enough, or should I order 100 cameras then :) ? If so, can OpenCV handle more than 2 cameras without the help of GSvideo?
- Is a 4K camera needed? I'd say no, given the specs of the IMX219PQ (but there is only one camera port on a Raspberry Pi...).
- To track with more accuracy and fewer filters, are IR trackers a good solution?
Thanks in advance for your kind remarks and explanation. See you soon Philippe
This is very complex, I think. There is too little information for a good estimate. But to start with, here are some of the problems you have to solve:
I think you will have to solve a lot more problems than these, but this should give you a first impression of the topic.
Thank you matman for your quick answer.
I may have set my expectations too high. In my defense, since I stumbled on this 2012 paper, http://www.series.upatras.gr/userfiles/6_Peloso.pdf, I thought that 2016 techniques could achieve it with off-the-shelf products.
Anyway, let's correct my enthusiasm down to achievable results (with your advice):
Frame: 500 × 300 mm
Precision: 0.5 mm
From some camera specs, this seems more realistic.
I don't get your point 2: can't variation be limited with redundancy? Point 3: I understood that lens distortion affects all kinds of cameras; it just needs to be calibrated out for each camera individually beforehand, doesn't it? Points 4 and 5: Is it more computationally expensive (linearly so?) to process 4K@30fps or 720p@200fps? At 200 fps, do you still get blurred images?
(Sorry, the layout always comes out wrong...)
Higher frame rates reduce blur.
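To put rough numbers on that (my own back-of-the-envelope figures, using the 1500 mm frame from the first post and an IMX219-class 3280 px sensor width): the blur streak length in pixels is roughly speed × exposure time × pixels per mm.

```python
# Back-of-the-envelope motion-blur estimate (assumed numbers, not from the thread).
# Blur in pixels ≈ speed (mm/s) × exposure (s) × resolution (px/mm).

def blur_px(speed_mm_s, exposure_s, px_per_mm):
    """Approximate motion-blur streak length in pixels."""
    return speed_mm_s * exposure_s * px_per_mm

# 1500 mm field of view across a 3280 px sensor -> ~2.19 px/mm
px_per_mm = 3280 / 1500.0

# Object at 100 mm/s, worst case of exposure = full frame time:
print(blur_px(100, 1 / 30, px_per_mm))   # ~7.3 px of blur at 30 fps
print(blur_px(100, 1 / 200, px_per_mm))  # ~1.1 px of blur at 200 fps
```

In practice you can also shorten the exposure below the frame time (at the cost of light), which is why bright lighting or IR illumination often goes hand in hand with fast tracking.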
With multiple cameras, range can be sorted out, but your precision is now limited by how closely you can align and calibrate your cameras (which takes care of the distortion too). Timing matters as well: if the capture times differ between the cameras, your position estimates will be off too.
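To make the alignment point concrete, here is a minimal numpy sketch (my own illustration with made-up camera parameters, not anything from this thread): linear (DLT) triangulation from two calibrated cameras, showing that a one-pixel slip in a single observation already moves the 3D estimate by several mm at a 2 m range.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen by two cameras.
    P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel observations."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Project a 3D point (in mm) to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two toy pinhole cameras: 1000 px focal length, 500 mm baseline along X.
K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-500.0], [0.0], [0.0]])])

X_true = np.array([100.0, 50.0, 2000.0])  # a point 2 m away
x1, x2 = project(P1, X_true), project(P2, X_true)
print(np.round(triangulate(P1, P2, x1, x2), 3))  # recovers X_true exactly

# Perturb one observation by a single pixel:
x2_noisy = x2 + np.array([1.0, 0.0])
err = np.linalg.norm(triangulate(P1, P2, x1, x2_noisy) - X_true)
print(round(err, 1))  # ~8 mm of 3D error from 1 px of 2D error
```

The same arithmetic applies to calibration error and to timing error (a moving object observed at slightly different instants looks like a pixel offset), which is why matman flags both as precision limits.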
720p@200fps and 4K@30fps are roughly comparable amounts of data, so it depends on what precisely you are doing. Either way, it's not easy processing that much data.
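For reference, the raw pixel rates (my arithmetic, assuming 1280×720 and 3840×2160 UHD frames) come out in the same ballpark, with 4K@30 actually about a third higher:

```python
# Raw pixel throughput of the two capture modes discussed (assumed resolutions).
rate_720p = 1280 * 720 * 200   # 720p at 200 fps
rate_4k   = 3840 * 2160 * 30   # UHD 4K at 30 fps

print(rate_720p)  # 184320000 px/s
print(rate_4k)    # 248832000 px/s
```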
But do you need to track these objects in real time?
Not necessarily real time, but as close as possible.
If I have to dedicate an XU4 board per camera to filter the frames and use parallel computing afterwards, in order to keep the post-processing fast, then why not.
Aren't timing, alignment, and calibration already standard stereo-vision problems? I've read a multi-camera auto-calibration paper using a laser that works pretty well (as the point is fixed, timing isn't addressed :)).
Because my hands aren't dirty -yet-, I can't figure out why this is such a big issue. A lot of projects involve large views, speed detection, and mm precision (biomechanics, seismic monitoring, traffic...).
I'm not forcing you to say "this is great, go on!"... I just want to understand what's feasible for an amateur :) Readings on the difficulties are more than welcome.
Thanks again for your valuable comments
It's not that it can't be done, it's just not easy. If you want an idea of how things will go, set up two cheap cameras in your room and try to track a small ball. Then try throwing it across the area and see how well you can track it. That will exercise almost all of the code, and you'll get an idea of how well you can actually pinpoint the ball.
You will also be able to work out what does and doesn't work as far as calibration and alignment.
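A sketch of the detection half of such a test (numpy only, on a synthetic frame; a real version would grab frames with cv2.VideoCapture and build the mask with cv2.inRange on an HSV image): threshold on brightness or color, then take the centroid of the mask. This weighted-mean step is also the sub-pixel trick mentioned at the start of the thread.

```python
import numpy as np

def centroid(mask):
    """Sub-pixel centroid of a binary mask: the mean of the
    pixel coordinates inside it. Returns None if the mask is empty."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return float(xs.mean()), float(ys.mean())

# Synthetic 640x480 frame: a bright 'ball' of radius 15 px at (300, 200).
frame = np.zeros((480, 640))
yy, xx = np.mgrid[0:480, 0:640]
frame[(xx - 300.0) ** 2 + (yy - 200.0) ** 2 < 15 ** 2] = 255

mask = frame > 128     # in practice: cv2.inRange on an HSV image
print(centroid(mask))  # ~(300.0, 200.0)
```

Run this per camera per frame, then feed the matched 2D positions into triangulation; comparing the recovered 3D track to where you know the ball went gives you an honest estimate of your precision.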
For me it is just weird that one would use computer vision to solve sub-mm precision problems. Is this normal?
Not really. It's just a matter of scale and how well you can align the cameras. You wouldn't want to try and get sub-mm precision from a typical webcam a few meters away, but get really close, and it's not impossible.
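The scale point can be put in numbers (using the 1500 mm frame from the first post, the revised 500 mm frame, and the IMX219's 3280 px sensor width): even before noise, one pixel covers a sizeable fraction of a millimetre, so sub-mm results require sub-pixel localization.

```python
# mm-per-pixel footprint for the fields of view discussed in the thread.
def mm_per_px(field_mm, sensor_px):
    """Size of one pixel's footprint on the working plane."""
    return field_mm / sensor_px

print(round(mm_per_px(1500, 3280), 3))  # ~0.457 mm/px for the original 1500 mm frame
print(round(mm_per_px(500, 3280), 3))   # ~0.152 mm/px for the revised 500 mm frame
```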
@Pedro Batista Yes it is weird but times are changing