
classification_guy's profile - activity

2016-11-10 11:40:14 -0500 received badge Student
2013-09-26 06:29:29 -0500 commented answer What evaluation classifiers? Precision & recall?

Hmm, sorry, but I still don't get it. For example: if I have 31 data sets, of which 21 are labeled negative and 10 positive, and my algorithm labels 2 negatives as positives and 5 positives as negatives, then tp = 5, fp = 2, tn = 19, fn = 5, right? My instinct says it's more appropriate to report these results as an overall prediction rate of 77.4% rather than a precision of 71% and a recall of 50%. Or is my instinct wrong? Thanks!
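(The arithmetic above checks out. A minimal sketch of the three metrics, in plain Python, assuming nothing beyond the counts stated in the comment:)

    # Counts from the example: 31 data sets, 21 negative, 10 positive,
    # 2 negatives mislabeled positive, 5 positives mislabeled negative.
    tp, fp, tn, fn = 5, 2, 19, 5

    accuracy  = (tp + tn) / float(tp + tn + fp + fn)  # 24/31 ~ 0.774
    precision = tp / float(tp + fp)                   # 5/7   ~ 0.714
    recall    = tp / float(tp + fn)                   # 5/10  = 0.500

    print(accuracy, precision, recall)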

2013-09-26 05:15:25 -0500 commented question What evaluation classifiers? Precision & recall?

Example: the data looks like this: {[some text, pos, pos]; [other txt, neg, pos]; [whatever, neg, neg]; [littlepny, pos, neg]} ... so it's the data, then the manual annotation, then the program's output.
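(To make that format concrete, here is one way the four counts could be tallied from such (annotation, output) pairs; the pair list is just the toy example above, not real data:)

    # Tally a confusion matrix from (gold, predicted) label pairs.
    pairs = [("pos", "pos"), ("neg", "pos"), ("neg", "neg"), ("pos", "neg")]

    tp = sum(1 for g, p in pairs if g == "pos" and p == "pos")
    fp = sum(1 for g, p in pairs if g == "neg" and p == "pos")
    tn = sum(1 for g, p in pairs if g == "neg" and p == "neg")
    fn = sum(1 for g, p in pairs if g == "pos" and p == "neg")

    print(tp, fp, tn, fn)  # 1 1 1 1 for this toy list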

2013-09-26 05:14:25 -0500 commented answer What evaluation classifiers? Precision & recall?

Thanks for your answer! I'm just a bit confused about the whole precision/recall/F1 thing, because the true negatives don't count: the results don't change whether one dataset is correctly labeled as negative or 1000 are; only the false negatives are counted. Or am I wrong? Would it still be an appropriate evaluation for this task?
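(That reading is correct: tn appears in neither formula, so adding correctly rejected negatives leaves both scores unchanged. A tiny sketch, reusing the counts from the example above:)

    # Precision and recall ignore true negatives entirely.
    def precision(tp, fp):
        return tp / float(tp + fp)

    def recall(tp, fn):
        return tp / float(tp + fn)

    # Same tp/fp/fn, wildly different tn -- identical scores:
    for tn in (1, 1000):
        print(tn, precision(tp=5, fp=2), recall(tp=5, fn=5))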

2013-09-26 05:10:07 -0500 received badge Supporter
2013-09-25 16:09:05 -0500 asked a question What evaluation classifiers? Precision & recall?

Hi,

I have some labeled data which classifies datasets as positive or negative. Now I have an algorithm that does the same automatically, and I want to compare the results.

I was told to use precision and recall, but I'm not sure whether those are appropriate, because the true negatives don't even appear in the formulas. I would rather use a general "prediction rate" covering both positives and negatives.

What would be a good way to evaluate the algorithm? Thanks!!
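(One way to report both views side by side: accuracy as the overall "prediction rate", plus precision and recall for the positive class. A sketch with scikit-learn, assuming it is installed; the label lists here are illustrative, with y_true the manual annotation and y_pred the algorithm's output:)

    from sklearn.metrics import accuracy_score, precision_score, recall_score

    y_true = ["pos", "neg", "neg", "pos"]  # manual annotation (illustrative)
    y_pred = ["pos", "pos", "neg", "neg"]  # algorithm's output (illustrative)

    print(accuracy_score(y_true, y_pred))                     # overall prediction rate
    print(precision_score(y_true, y_pred, pos_label="pos"))   # positives only
    print(recall_score(y_true, y_pred, pos_label="pos"))      # positives only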