In my lab we developed an algorithm that provides the real-world positions of robots by tracking color-coded markers with a camera.
This is achieved using the cvBlobs library available here.
The software is really heavy though: it can't do more than 10 fps on a dual-core Core 2 with 2 GB of RAM. The camera resolution is 1280x768, the average number of markers is 8, and the maximum marker speed is about 10 cm/s.
These don't seem like taxing parameters, yet the software runs at 100% CPU. Since there is not much for it to waste resources on, I am wondering whether there are less computationally intensive ways of tracking the markers with OpenCV that I could exploit.
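To give an idea of the kind of approach I am considering, here is a minimal sketch using only plain OpenCV calls (cv::inRange + cv::findContours), which searches a small ROI around each marker's last known position instead of processing the whole 1280x768 frame every time. The HSV range, ROI size, area threshold, and camera index are all placeholder values I made up, not something from our actual system:

    #include <opencv2/opencv.hpp>
    #include <vector>

    int main() {
        cv::VideoCapture cap(0);                 // camera index is an assumption
        if (!cap.isOpened()) return 1;

        const cv::Scalar lower(35, 80, 80);      // hypothetical HSV range for one marker color
        const cv::Scalar upper(85, 255, 255);
        const int roiSize = 64;                  // markers move ~10 cm/s, so a small search window should suffice
        std::vector<cv::Point2f> lastPositions;  // last known marker centroids

        cv::Mat frame, hsv, mask;
        while (cap.read(frame)) {
            cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);

            if (lastPositions.empty()) {
                // No history yet: do one full-frame detection.
                cv::inRange(hsv, lower, upper, mask);
                std::vector<std::vector<cv::Point>> contours;
                cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
                for (const auto& c : contours) {
                    cv::Moments m = cv::moments(c);
                    if (m.m00 > 20.0)            // ignore tiny specks
                        lastPositions.emplace_back(m.m10 / m.m00, m.m01 / m.m00);
                }
            } else {
                // Later frames: only threshold a small ROI around each previous centroid.
                for (auto& p : lastPositions) {
                    cv::Rect roi(cv::Point(p) - cv::Point(roiSize / 2, roiSize / 2),
                                 cv::Size(roiSize, roiSize));
                    roi &= cv::Rect(0, 0, hsv.cols, hsv.rows);   // clip to the image
                    if (roi.area() == 0) continue;
                    cv::inRange(hsv(roi), lower, upper, mask);
                    cv::Moments m = cv::moments(mask, true);
                    if (m.m00 > 20.0)
                        p = cv::Point2f(roi.x + m.m10 / m.m00, roi.y + m.m01 / m.m00);
                    // else: marker lost, a full-frame re-detection could be triggered here
                }
            }
        }
        return 0;
    }

Is something along these lines a reasonable direction, or is there a more standard OpenCV way to do this kind of lightweight marker tracking?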
Also, why is cvBlobs a separate library, and why is there no similar functionality in the main OpenCV library?