Annotating pedestrians in a video

asked 2015-03-26 01:25:06 -0500

Sowmya

Hi,

I am looking at an object detection task, and I want to create the ground truth by annotating pedestrians in a video. I read about the Caltech pedestrian benchmark datasets and Piotr's Matlab Toolbox for annotation, but Piotr's toolbox is geared toward behaviour annotation.

I also looked at ViPER-GT: http://viper-toolkit.sourceforge.net/...

After downloading the ViPER source code, I could not work out how to start annotating.

Can anyone suggest how to annotate in a video?


Comments

If you cut your video into frames, you can simply use the new built-in functionality of OpenCV 3.0 and OpenCV 2.4. It is only available in the active branches, since it was added only a month ago. When building OpenCV, an extra tool called opencv_annotation will be available. Feedback is always welcome!

StevenPuttemans ( 2015-03-26 04:33:42 -0500 )

How do I use opencv_annotation? After I selected the ROI and pressed "c", nothing happened. And the "Usage:" message did not appear when I entered only opencv_annotation.

Sheng Liu ( 2015-05-11 08:37:26 -0500 )

Weird, it should work on both Windows and Linux, as it does here. A command like opencv_annotation /data/testimages/ result.txt should do the trick just fine. It will read all images inside that folder, open them one by one, and save the annotations to the result.txt file.

StevenPuttemans ( 2015-05-11 09:00:54 -0500 )

Thank you.

Sheng Liu ( 2015-05-11 10:56:50 -0500 )

Oh, the result is "OpenCV Error: Assertion failed (size.width>0 && size.height>0) in cv::imshow, file D:\2411\opencv\sources\modules\highgui\src\window.cpp, line 261".

Sheng Liu ( 2015-05-11 11:11:45 -0500 )

This means that it cannot read your images. Basically, one of the images does not exist ... you should check which image is yielding this result ... are you using absolute or relative paths to the data?

StevenPuttemans ( 2015-05-12 02:28:39 -0500 )
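To track down which image triggers the assertion, a small sketch like this could check every path in an annotation file before feeding it to the tools (it assumes the opencv_annotation output format of one image per line, and that paths contain no spaces):

```python
import os


def find_missing(annotation_file):
    """Return the image paths in an annotation file that do not exist on disk.

    Assumed line format: 'path count x y w h [x y w h ...]'.
    """
    missing = []
    with open(annotation_file) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            path = line.split()[0]  # first token is the image path
            if not os.path.isfile(path):
                missing.append(path)
    return missing
```

Running it from the same working directory you use for opencv_annotation also reveals whether relative paths are the problem.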

I see. My result is:

F:\testimage\1.jpg 1 9 54 388 180
F:\testimage\2.jpg 1 48 43 89 63
F:\testimage\3.jpg 1 102 46 79 90

But I think opencv_createsamples can't take this result as input. The expected input format seems to be like this:

F:\image\1.png 1 0 0 100 36
F:\image\10.png 1 0 0 100 36
F:\image\11.png 1 0 0 100 36
F:\image\12.png 1 0 0 100 36
F:\image\2.png 1 0 0 100 36
F:\image\3.png 1 0 0 100 36
F:\image\4.png 1 0 0 100 36

where the parameters w and h are the same for all images. So how do I use opencv_annotation to get a suitable result? Please give me some advice. Thanks a lot!

Sheng Liu ( 2015-05-27 09:43:04 -0500 )

opencv_createsamples can take this input just fine; I do it all the time. What it basically does when you define the -w and -h parameters is use those values to resize each original annotation to that training window size before adding it to the data vector. The input format that you are supplying is ONLY the case when you have a predefined set of cut-out windows!

StevenPuttemans ( 2015-05-28 02:32:22 -0500 )
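To make the format discussion above concrete, here is a sketch that parses such annotation lines into per-image bounding boxes. As the comment explains, the boxes may have different widths and heights, since opencv_createsamples resizes each one to the -w x -h training window itself (this assumes paths without spaces):

```python
def parse_annotations(text):
    """Parse annotation lines of the form
    'path count x1 y1 w1 h1 [x2 y2 w2 h2 ...]' into (path, [boxes]) pairs."""
    entries = []
    for line in text.strip().splitlines():
        parts = line.split()
        path, count = parts[0], int(parts[1])
        nums = list(map(int, parts[2:2 + 4 * count]))  # 4 values per box
        boxes = [tuple(nums[i:i + 4]) for i in range(0, len(nums), 4)]
        entries.append((path, boxes))
    return entries
```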

Thanks! I see.

Sheng Liu ( 2015-05-28 07:46:33 -0500 )