I'm new to OpenCV but am a seasoned iOS programmer looking to create an iOS app that can detect when a specific type of object is present in a video feed and then highlight that object - or at least give me its bounds.
I've found various fragments of information on Cascade Classifier Training, Haar training, and OpenCV for iOS, but nothing that really covers the entire process. Could anyone point me in the right direction of a comprehensive tutorial covering both the procedure for making a cascade XML on a Mac and then using that file for object detection in iOS?

I am familiar with extracting frames from a video feed in iOS and analysing images at the bit level, and I have a good grasp of the use of Haar wavelets for face recognition. My main downfall is that I have yet to find a clear, maintained example of OpenCV for iOS (that doesn't involve me attempting to compile OpenCV for iOS myself), and I'm confused about what my 1000-odd positive photos should actually contain (all cropped to the same size? different lighting, angles, scale, other objects, backgrounds, etc.?).
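For context, here is roughly what I expect the detection side to look like once I have a trained cascade, pieced together from the desktop OpenCV C++ API. The file names, the training parameters in the comment, and the detectMultiScale settings are just placeholders/guesses on my part, not something I've verified on iOS:

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    // cascade.xml is the classifier I'd train on the Mac beforehand, e.g. with
    // opencv_createsamples / opencv_traincascade (these parameters are guesses):
    //   opencv_createsamples -info positives.txt -vec samples.vec -num 1000 -w 24 -h 24
    //   opencv_traincascade -data output -vec samples.vec -bg negatives.txt
    //                       -numPos 900 -numNeg 2000 -numStages 20 -w 24 -h 24
    cv::CascadeClassifier cascade;
    if (!cascade.load("cascade.xml"))
    {
        std::cerr << "Could not load cascade.xml" << std::endl;
        return 1;
    }

    // In the app this frame would come from the camera; a still image stands in here.
    cv::Mat frame = cv::imread("frame.jpg");
    if (frame.empty())
    {
        std::cerr << "Could not load frame.jpg" << std::endl;
        return 1;
    }

    // Detection runs on a grayscale, histogram-equalised copy of the frame.
    cv::Mat gray;
    cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
    cv::equalizeHist(gray, gray);

    // detectMultiScale returns the bounding rectangles of every match.
    std::vector<cv::Rect> objects;
    cascade.detectMultiScale(gray, objects, 1.1, 3, 0, cv::Size(30, 30));

    // Highlight each detection, or just use the cv::Rect bounds directly.
    for (size_t i = 0; i < objects.size(); ++i)
        cv::rectangle(frame, objects[i], cv::Scalar(0, 255, 0), 2);

    std::cout << "Found " << objects.size() << " object(s)" << std::endl;
    cv::imwrite("result.jpg", frame);
    return 0;
}
```

If that's broadly the right shape, the pieces I'm missing are the actual training procedure on the Mac (and what to feed it) and a sane way to use OpenCV from an iOS project and get camera frames into a cv::Mat.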
Many thanks for your time.