unnecessary feature learned in traincascade?

Hey,

I have a weird result using traincascade which I can't explain. I created a small set of dummy data just to understand what traincascade does.

I get the following results:

===== TRAINING 0-stage =====

<BEGIN
POS count : consumed   17 : 17
NEG count : acceptanceRatio    35 : 1
Precalculation time: 1
+----+---------+---------+
|  N |    HR   |    FA   |
+----+---------+---------+
|   1|        1| 0.428571|
+----+---------+---------+
|   2|        1| 0.428571|
+----+---------+---------+
|   3|        1| 0.142857|
+----+---------+---------+
|   4|        1|        0|
+----+---------+---------+
END>

and the resulting XML:

<stages>
    <!-- stage 0 -->
    <_>
      <maxWeakCount>4</maxWeakCount>
      <stageThreshold>2.4513483047485352e+00</stageThreshold>
      <weakClassifiers>
        <_>
          <internalNodes>
            0 -1 0 744.</internalNodes>
          <leafValues>
            9.0286773443222046e-01 -9.0286773443222046e-01</leafValues></_>
        <_>
          <internalNodes>
            0 -1 1 -1709.</internalNodes>
          <leafValues>
            -1.2098379135131836e+00 -1.2098379135131836e+00</leafValues></_>
        <_>
          <internalNodes>
            0 -1 1 -1709.</internalNodes>
          <leafValues>
            -1.4120784997940063e+00 1.4120784997940063e+00</leafValues></_>
        <_>
          <internalNodes>
            0 -1 2 3.5550000000000000e+02</internalNodes>
          <leafValues>
            -1.3462400436401367e+00 1.3462400436401367e+00</leafValues></_></weakClassifiers></_>
</stages>

From the first output, I'd say that weak classifier #2 does not improve the results: the hit rate (HR) and false alarm rate (FA) are identical for N=1 and N=2. If you look at the learned decision stump in the XML output, you see the following leafValues:

-1.2098379135131836e+00 -1.2098379135131836e+00

which are exactly the same. So how does this weak classifier help in the classification task? I cannot explain what happens during learning here.
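To illustrate why the equal leaf values puzzle me: as far as I understand, each stump compares one feature value against the threshold in its internalNodes and emits one of its two leafValues; the stage sums these and compares against stageThreshold. A stump whose two leaves are equal then only adds a constant, which could just as well be folded into the threshold. A minimal sketch (not OpenCV code; the stump evaluation rule "feature < threshold picks leafValues[0]" is my assumption, and the feature values are made up):

```python
# Stage 0 from the XML above: (feature_index, threshold, leaf_below, leaf_above)
stumps = [
    (0,   744.0,  0.9028677, -0.9028677),
    (1, -1709.0, -1.2098379, -1.2098379),  # equal leaves -> constant output
    (1, -1709.0, -1.4120785,  1.4120785),
    (2,   355.5, -1.3462400,  1.3462400),
]
stage_threshold = 2.4513483

def stage_sum(features, stumps):
    # Sum each stump's selected leaf value (assumed rule: feature < thr -> low leaf).
    return sum(lo if features[idx] < thr else hi
               for idx, thr, lo, hi in stumps)

# Made-up feature values for one hypothetical sample:
features = {0: 800.0, 1: -2000.0, 2: 400.0}

full = stage_sum(features, stumps)
# Drop stump #2 and absorb its constant contribution into the threshold:
reduced = stage_sum(features, stumps[:1] + stumps[2:])
adjusted_threshold = stage_threshold - (-1.2098379)

# The stage decision is identical with or without the degenerate stump:
print(full >= stage_threshold, reduced >= adjusted_threshold)
```

Since the stump's output is the same constant for every sample, removing it and shifting the stage threshold by that constant cannot change any decision, which is why I don't see what the booster gained by adding it.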

Testing the classifier by running detection on a random image gives exactly the same results whether or not I include this weak classifier.

Can somebody explain this behavior?