
Dataset used for training DNN Face Detector

asked 2018-10-15 09:42:24 -0600 by vikasguptaiisc, updated 2020-09-18 13:58:29 -0600

I would like to know the source of the dataset used for training the DNN-based face detector corresponding to the model res10_300x300_ssd_iter_140000_fp16.caffemodel.


Comments

Yes, I would like to know that dataset too and use it :) Alternatively, you can use the VOC dataset and filter by the person class, then label the faces yourself.

I can also recommend the Labeled Faces in the Wild dataset - as its name suggests, it is labeled.

holger ( 2018-10-15 14:15:58 -0600 )

Hello @vikasguptaiisc, @Eduardo and @berak, I checked the wiki page on OpenCV but still cannot figure out which dataset was used for training the current Caffe model "res10_300x300_ssd_iter_140000_fp16.caffemodel". Any information on that would be helpful.

trohit ( 2019-04-22 21:57:17 -0600 )

2 answers


answered 2018-10-16 04:41:26 -0600 by Eduardo

For what it's worth, you can find more information here:

Quote:

a) Find some datasets with face bounding-box annotations. For some reasons I can't provide links here, but you can easily find them on your own.


Complete description:

This is a brief description of the training process used to obtain res10_300x300_ssd_iter_140000.caffemodel. The model was created with the SSD framework using a ResNet-10-like architecture as the backbone. The channel count in the ResNet-10 convolution layers was significantly reduced (2x to 4x fewer channels). The model was trained in the Caffe framework on a large dataset available online.
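As an aside (not part of the quoted description), here is a minimal sketch of loading the resulting model with OpenCV's DNN module and running it on one image. The deploy.prototxt and test.jpg file names and the 0.5 confidence threshold are assumptions, not values from this thread:

import cv2
import numpy as np

# Load the face detector (file names assumed; adjust paths as needed).
net = cv2.dnn.readNetFromCaffe("deploy.prototxt",
                               "res10_300x300_ssd_iter_140000_fp16.caffemodel")

image = cv2.imread("test.jpg")
h, w = image.shape[:2]

# The model expects a 300x300 BGR input with these mean values subtracted.
blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)), 1.0,
                             (300, 300), (104.0, 177.0, 123.0))
net.setInput(blob)
detections = net.forward()

# Each row: [image_id, label, confidence, xmin, ymin, xmax, ymax] (normalized).
for i in range(detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    if confidence > 0.5:  # threshold is a free parameter
        box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
        x1, y1, x2, y2 = box.astype(int)
        cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)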

  1. Prepare the training tools. You need the "ssd" branch from this repository: https://github.com/weiliu89/caffe/tre... . Check out this branch and build it (see the instructions in the repo's README).

  2. Prepare the training data. The data preparation pipeline can be represented as:

(a) Download the original face detection dataset -> (b) Convert the annotations to the PASCAL VOC format -> (c) Create an LMDB database with images + annotations for training

a) Find some datasets with face bounding-box annotations. For some reasons I can't provide links here, but you can easily find them on your own. Also study the data: it may contain small or low-quality faces which can spoil the training process. Often there are special flags about object quality in the annotations. Remove such faces from the annotations (smaller than 16 px along at least one side, or blurred, or highly occluded, or something else), as in the sketch below.
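Here is a hedged sketch of such a filtering step; the annotation record structure is hypothetical, so adapt the parsing to whatever format your dataset actually uses:

MIN_SIDE = 16  # drop faces smaller than 16 px along either side

def keep_face(box, blurred=False, occluded=False):
    # box = (xmin, ymin, xmax, ymax) in pixels
    w = box[2] - box[0]
    h = box[3] - box[1]
    return w >= MIN_SIDE and h >= MIN_SIDE and not blurred and not occluded

# Hypothetical annotation records carrying quality flags:
annotations = [
    {"box": (100, 100, 200, 200), "blurred": False, "occluded": False},
    {"box": (10, 10, 22, 22), "blurred": False, "occluded": False},  # too small
]
usable = [a for a in annotations
          if keep_face(a["box"], a["blurred"], a["occluded"])]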

b) The downloaded dataset will have some format of annotation. It may be a single file for all images, a separate file for each image, or something else. But to train SSD in Caffe you need to convert the annotations to the PASCAL VOC format. A PASCAL VOC annotation consists of an .xml file for each image. In this .xml file all face bounding boxes should be listed as:

<annotation>
  <size>
    <width>300</width>
    <height>300</height>
  </size>
  <object>
    <name>face</name>
    <difficult>0</difficult>
    <bndbox>
      <xmin>100</xmin>
      <ymin>100</ymin>
      <xmax>200</xmax>
      <ymax>200</ymax>
    </bndbox>
  </object>
  <object>
    <name>face</name>
    <difficult>0</difficult>
    <bndbox>
      <xmin>0</xmin>
      <ymin>0</ymin>
      <xmax>100</xmax>
      <ymax>100</ymax>
    </bndbox>
  </object>
</annotation>
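If you script the conversion in Python, a minimal sketch using the standard xml.etree.ElementTree module could look like this (the function name and output path are illustrative, not from the original answer):

import xml.etree.ElementTree as ET

def write_voc_xml(path, width, height, boxes):
    # boxes = list of (xmin, ymin, xmax, ymax) face rectangles
    ann = ET.Element("annotation")
    size = ET.SubElement(ann, "size")
    ET.SubElement(size, "width").text = str(width)
    ET.SubElement(size, "height").text = str(height)
    for xmin, ymin, xmax, ymax in boxes:
        obj = ET.SubElement(ann, "object")
        ET.SubElement(obj, "name").text = "face"
        ET.SubElement(obj, "difficult").text = "0"
        bb = ET.SubElement(obj, "bndbox")
        ET.SubElement(bb, "xmin").text = str(xmin)
        ET.SubElement(bb, "ymin").text = str(ymin)
        ET.SubElement(bb, "xmax").text = str(xmax)
        ET.SubElement(bb, "ymax").text = str(ymax)
    ET.ElementTree(ann).write(path)

# Produces the annotation shown above (target directory must already exist).
write_voc_xml("annotations_val/0.jpg.xml", 300, 300,
              [(100, 100, 200, 200), (0, 0, 100, 100)])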

So, convert your dataset's annotations to the format above. Also, you should create a labelmap.prototxt file with the following content:

item {
  name: "none_of_the_above"
  label: 0
  display_name: "background"
}
item {
  name: "face"
  label: 1
  display_name: "face"
}

You need this file to establish the correspondence between the name of each class and its numeric label.

For the next step we also need a file where all our image/annotation file-name pairs are listed. This file should contain lines like:

images_val/0.jpg annotations_val/0.jpg.xml
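Generating that list can be scripted; here is a small sketch assuming the directory layout from the example above (the output file name is an assumption):

import os

# Pair each validation image with its annotation file, one pair per line.
with open("val_list.txt", "w") as f:
    for name in sorted(os.listdir("images_val")):
        if name.endswith(".jpg"):
            f.write("images_val/%s annotations_val/%s.xml\n" % (name, name))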

c) To create the LMDB you need to use the create_data.sh tool from the caffe/data/VOC0712 directory of Caffe's source code. This script calls create_annoset.py internally, so check what you need to pass as the script's arguments.

You need to prepare two LMDB databases: one for the training images and one for the validation images.

  3. Train your detector. For training you need three files: train.prototxt, test.prototxt and solver.prototxt. You can ...

answered 2020-09-18 10:48:26 -0600 by maikel

In this post someone asked this very same question, and the post author suggested getting in touch with the creator of the model, Aleksandr Rybnikov, who confirmed that the WIDER dataset was used.

