Well, in theory you could use standard fiducial markers and detect both their position and orientation in consecutive frames with just a few lines of code using a library such as ArUco (available in OpenCV). If the image is not too blurred (which I am afraid could often be the case), you will be able to distinguish the separate markers and compare their orientation from one frame to the next, which, after some calculation, gives you the rotation you want.
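A minimal sketch of that idea, assuming OpenCV's `cv2.aruco` module (exact function names vary a bit between OpenCV versions) and placeholder values for the dictionary, marker size and camera calibration:

    import cv2
    import numpy as np

    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    parameters = cv2.aruco.DetectorParameters_create()

    # Camera intrinsics and marker side length (metres) -- placeholders.
    camera_matrix = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    dist_coeffs = np.zeros(5)
    marker_length = 0.03

    def marker_rotations(frame):
        """Return {marker_id: rotation_vector} for all markers found in a frame."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary, parameters=parameters)
        rotations = {}
        if ids is not None:
            rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
                corners, marker_length, camera_matrix, dist_coeffs)
            for marker_id, rvec in zip(ids.flatten(), rvecs):
                rotations[int(marker_id)] = rvec.reshape(3)
        return rotations

    def relative_rotation(rvec_prev, rvec_curr):
        """Rotation (as a 3x3 matrix) of the same marker between two frames."""
        R_prev, _ = cv2.Rodrigues(rvec_prev)
        R_curr, _ = cv2.Rodrigues(rvec_curr)
        return R_curr @ R_prev.T

    # Compare each marker's orientation in two consecutive frames.
    prev = marker_rotations(cv2.imread("frame_000.png"))
    curr = marker_rotations(cv2.imread("frame_001.png"))
    for marker_id in prev.keys() & curr.keys():
        R = relative_rotation(prev[marker_id], curr[marker_id])
        angle = np.degrees(np.arccos(np.clip((np.trace(R) - 1) / 2, -1, 1)))
        print(f"marker {marker_id}: rotated {angle:.1f} degrees between frames")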
Alternatively, you can work with a standard football, by which I mean the ordinary ball with regularly spaced black patches and no special markers added. Detect the black patches, take a model of your ball, and generate several hypotheses about its orientation and the resulting locations of the patches (using just the patch centres should be enough, but if you can also detect corners, the precision will be much higher). Compare the observed positions with the modelled ones and use an optimization algorithm to minimize the difference. In this case blur should have a weaker influence on the orientation estimate, I suppose, but the optimization is less straightforward than simply detecting a set of markers.
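A rough sketch of that model-fitting idea, assuming the patch centres have already been detected in the image and that you have a 3D model of where those centres sit on the ball; all the numbers below (ball radius, patch layout, camera intrinsics, observed centres) are illustrative placeholders:

    import cv2
    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical model: patch centres on a ball of radius 0.11 m, in a ball-centred frame.
    model_points = np.array([
        [0.11, 0.0, 0.0], [-0.11, 0.0, 0.0],
        [0.0, 0.11, 0.0], [0.0, -0.11, 0.0],
        [0.0, 0.0, 0.11], [0.0, 0.0, -0.11],
    ], dtype=np.float64)

    camera_matrix = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    dist_coeffs = np.zeros(5)
    ball_position = np.array([[0.0], [0.0], [2.0]])   # known/estimated ball centre
    observed_centres = np.array([[330.0, 245.0], [310.0, 250.0], [325.0, 230.0]])

    def cost(rvec):
        """Sum of squared distances between projected model patches and the
        nearest observed patch centres, for a candidate ball orientation."""
        rvec = np.asarray(rvec, dtype=np.float64).reshape(3, 1)
        projected, _ = cv2.projectPoints(model_points, rvec, ball_position,
                                         camera_matrix, dist_coeffs)
        projected = projected.reshape(-1, 2)
        # Nearest-neighbour association: each observed patch is matched to the
        # closest projected patch (adequate for a sketch, not robust).
        d = np.linalg.norm(observed_centres[:, None, :] - projected[None, :, :], axis=2)
        return float(np.sum(d.min(axis=1) ** 2))

    # Try several initial orientations and keep the best local optimum.
    best = None
    for rvec0 in np.random.default_rng(0).uniform(-np.pi, np.pi, size=(20, 3)):
        res = minimize(cost, rvec0, method="Nelder-Mead")
        if best is None or res.fun < best.fun:
            best = res
    print("estimated rotation vector:", best.x, "residual:", best.fun)

Once you have the absolute orientation per frame this way, the frame-to-frame rotation follows from the same relative-rotation calculation as in the marker sketch above.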