2019-04-17 07:57:54 -0600 | received badge | ● Popular Question (source) |
2014-10-23 05:31:22 -0600 | received badge | ● Editor (source) |
2014-10-23 05:29:02 -0600 | asked a question | CvSVM::EPS_SVR train_auto assertion sv_count != 0 failed I have the following code: Here _data and _resp are Mats holding the feature vectors and responses, _var_idx contains the active features and _train_idx the active samples. The default values are used for the parameter grids. Unfortunately, the code produces the following error:
When I run a regression on the same data with parameters selected by hand, it works fine. And when I switch to a classification problem (and change the corresponding parameters and SVM type), it also works, in that case for single training as well as for auto training. Also, when switching to CvSVM::NU_SVR, the auto training works fine. Another thing that bothers me is that I also have to provide the parameter p (EPS_SVR) or nu (NU_SVR) when I want to do auto training. The documentation says that those are also estimated using their corresponding default grids. Why is that so? Thanks in advance. EDIT: I've made a small example that suffers from this problem, just in case anyone wants to try it out and reproduce it: |
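One plausible cause of the sv_count != 0 assertion (my interpretation, not confirmed against the OpenCV sources): in epsilon-SVR, only samples whose residual exceeds the tube width p become support vectors, so if the grid tries a p that is large relative to the spread of the responses, every sample falls inside the tube and no support vectors remain. A minimal pure-Python sketch of that effect, with made-up toy data and a trivial constant predictor standing in for the fitted function:

```python
# Sketch of why a large epsilon (p) can leave EPS_SVR with zero
# support vectors: samples inside the epsilon tube around the fitted
# function contribute nothing, so sv_count == 0 and the assertion fires.
# The toy responses and the constant predictor are illustrative
# assumptions, not taken from the question's actual data.

responses = [0.1, 0.2, 0.15, 0.12, 0.18]      # made-up regression targets
prediction = sum(responses) / len(responses)  # trivial constant fit (mean)

def support_vector_count(p):
    """Count samples whose residual lies outside the epsilon tube of width p."""
    return sum(1 for y in responses if abs(y - prediction) > p)

print(support_vector_count(0.01))  # small tube -> some support vectors (4)
print(support_vector_count(0.5))   # tube wider than the data spread -> 0
```

If this is the mechanism, fixing a sensible p by hand (as the question does for single training) avoids the degenerate grid points.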
2014-09-15 06:51:13 -0600 | asked a question | MLP sigmoid output +/-epsilon This may seem like a duplicate of this question, but the difference is that there I was asking whether the output range is [-1,1] or [0,1]. I have accepted that the range is [0,1] if the activation function is the sigmoid with alpha != 0 and beta != 0 (as stated in the documentation). Anyway, it seems to me that the output range is more like [0-eps, 1+eps]. My question is: why is there a small epsilon, and how can I turn it off? One thing I could think of is that the output neurons aren't sigmoid units but linear units. Although it is explicitly stated that all neurons have the same activation function, that could explain this behavior. Here is a small example that shows what I mean: For me this produces the following output: As you can see, there are values just below 0 and above 1. |
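As far as I can tell there is no documented switch in the old CvANN_MLP API to disable this overshoot; a common workaround (an assumption on my part, not an official option) is simply to clamp the predictions back into the nominal range after predict returns. A plain-Python sketch, with made-up sample values mirroring the +/-epsilon effect:

```python
# Hedged workaround sketch (OpenCV-independent): clamp MLP predictions
# back into the nominal sigmoid range [0, 1]. The raw values below are
# illustrative, chosen to mimic the slight under-/overshoot described.

def clamp01(value):
    """Clip a single prediction into the closed interval [0, 1]."""
    return min(max(value, 0.0), 1.0)

raw_predictions = [-0.004, 0.5, 1.03]            # illustrative raw outputs
clamped = [clamp01(v) for v in raw_predictions]
print(clamped)  # [0.0, 0.5, 1.0]
```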
2014-09-10 12:03:54 -0600 | received badge | ● Student (source) |
2014-09-10 09:31:33 -0600 | asked a question | OpenCV MLP with Sigmoid Neurons, Output range I have searched here on SO and Google for answers to the following question, but haven't found anything, so here is my situation: I want to build an MLP that learns some similarity function. I have training and test samples and the MLP set up and running. My problem is how to provide the teacher outputs to the net (from which value range). Here is the relevant part of my code: The number of input and hidden neurons is set somewhere else, and the net has 1 output neuron. X, Y and X_test are Mats containing the training and test samples; no problem here. The problem is which value range my Y's have to come from and which value range the predictions will come from. In the documentation I have found the following statements: For training:
Since I'm NOT using the default sigmoid function (the one with alpha=0 and beta=0), I'm providing my Y's from [0,1]. Is this right, or do they mean something else by 'default sigmoid function'? I'm asking because for prediction they explicitly mention alpha and beta:
Again, since I'm not using the default sigmoid function, I assume I'll get predictions from [0,1]. Am I right so far? What confuses me here is that I've found another question regarding the output range of OpenCV's sigmoid function which says the range has to be [-1,1]. And now comes the real confusion: when I train the net and let it make some predictions, I get values slightly larger than 1 (around 1.03), regardless of whether my Y's come from [0,1] or [-1,1]. And this shouldn't happen in either case. Could somebody please enlighten me? Am I missing something here? Thanks in advance. EDIT: To make things very clear, I came up with a small example that shows the problem: |
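For what it's worth, the activation documented for CvANN_MLP::SIGMOID_SYM is f(x) = beta * (1 - exp(-alpha*x)) / (1 + exp(-alpha*x)), which is antisymmetric around 0 and bounded by (-beta, beta); with beta = 1 its mathematical range is (-1, 1), not (0, 1), which would explain the [-1,1] claim in the other question. It also means the function itself can never produce 1.03, so the overshoot presumably comes from OpenCV's internal input/output scaling rather than from the activation. A small self-contained sketch (alpha and beta chosen arbitrarily for illustration, no trained net involved):

```python
import math

# OpenCV documents the symmetric sigmoid used by CvANN_MLP as
#   f(x) = beta * (1 - exp(-alpha * x)) / (1 + exp(-alpha * x))
# This sketch checks that the function itself stays strictly inside
# (-beta, beta), so predictions beyond that range must come from
# somewhere else (e.g. the library's internal output scaling).

def sigmoid_sym(x, alpha=1.0, beta=1.0):
    """Symmetric sigmoid as documented for CvANN_MLP."""
    return beta * (1.0 - math.exp(-alpha * x)) / (1.0 + math.exp(-alpha * x))

samples = [sigmoid_sym(x * 0.1) for x in range(-100, 101)]
print(min(samples), max(samples))  # both strictly inside (-1, 1)
assert all(-1.0 < v < 1.0 for v in samples)
```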