Binary versus one-hot encoding, re: ANN MLP

asked 2018-05-07 15:02:41 -0500

sjhalayka

updated 2018-05-07 19:00:29 -0500

Which do you prefer: binary or one-hot encoding?

Here is the OpenCV XOR problem in Python, using the binary encoding method:

import cv2
import numpy as np

ann = cv2.ml.ANN_MLP_create()
ann.setLayerSizes(np.array([2, 5, 1], dtype=np.uint8))
ann.setTrainMethod(cv2.ml.ANN_MLP_BACKPROP, 0.1)
ann.setActivationFunction(cv2.ml.ANN_MLP_SIGMOID_SYM)
ann.setTermCriteria((cv2.TERM_CRITERIA_COUNT | cv2.TERM_CRITERIA_EPS, 1, 0.000001))  # one iteration per train() call; the loop below supplies the epochs

# XOR truth table, binary (single-output) encoding
input_array0 = np.array([[0.0, 0.0]], dtype=np.float32)
output_array0 = np.array([[0.0]], dtype=np.float32)
input_array1 = np.array([[1.0, 0.0]], dtype=np.float32)
output_array1 = np.array([[1.0]], dtype=np.float32)
input_array2 = np.array([[0.0, 1.0]], dtype=np.float32)
output_array2 = np.array([[1.0]], dtype=np.float32)
input_array3 = np.array([[1.0, 1.0]], dtype=np.float32)
output_array3 = np.array([[0.0]], dtype=np.float32)

td0 = cv2.ml.TrainData_create(input_array0, cv2.ml.ROW_SAMPLE, output_array0)
td1 = cv2.ml.TrainData_create(input_array1, cv2.ml.ROW_SAMPLE, output_array1)
td2 = cv2.ml.TrainData_create(input_array2, cv2.ml.ROW_SAMPLE, output_array2)
td3 = cv2.ml.TrainData_create(input_array3, cv2.ml.ROW_SAMPLE, output_array3)

# the first train() call (without UPDATE_WEIGHTS) initializes the network's weights
ann.train(td0, cv2.ml.ANN_MLP_NO_INPUT_SCALE | cv2.ml.ANN_MLP_NO_OUTPUT_SCALE)

for i in range(0, 10000):
    ann.train(td0, cv2.ml.ANN_MLP_UPDATE_WEIGHTS | cv2.ml.ANN_MLP_NO_INPUT_SCALE | cv2.ml.ANN_MLP_NO_OUTPUT_SCALE)
    ann.train(td1, cv2.ml.ANN_MLP_UPDATE_WEIGHTS | cv2.ml.ANN_MLP_NO_INPUT_SCALE | cv2.ml.ANN_MLP_NO_OUTPUT_SCALE)
    ann.train(td2, cv2.ml.ANN_MLP_UPDATE_WEIGHTS | cv2.ml.ANN_MLP_NO_INPUT_SCALE | cv2.ml.ANN_MLP_NO_OUTPUT_SCALE)
    ann.train(td3, cv2.ml.ANN_MLP_UPDATE_WEIGHTS | cv2.ml.ANN_MLP_NO_INPUT_SCALE | cv2.ml.ANN_MLP_NO_OUTPUT_SCALE)

print(ann.predict(input_array0))
print(ann.predict(input_array1))
print(ann.predict(input_array2))
print(ann.predict(input_array3))
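With the binary (single-output) encoding, predict() gives back a raw float rather than a class label, so the caller has to decode it. A minimal decoding sketch follows; the raw values below are made-up placeholders standing in for actual network output, and the 0.5 threshold is simply the midpoint of the 0.0/1.0 training targets:

```python
import numpy as np

# Hypothetical raw outputs from ann.predict() for the four XOR inputs;
# illustrative numbers only, not real network output. With SIGMOID_SYM
# the raw outputs can fall outside [0, 1] (roughly [-1.72, 1.72]).
raw = np.array([[0.03], [0.91], [1.12], [-0.08]], dtype=np.float32)

# Threshold at 0.5, the midpoint of the 0.0/1.0 targets, to get a hard label.
labels = (raw.ravel() > 0.5).astype(np.int32)
print(labels)  # [0 1 1 0]
```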

Here is the one-hot encoding method:

import cv2
import numpy as np

ann = cv2.ml.ANN_MLP_create()
ann.setLayerSizes(np.array([2, 5, 2], dtype=np.uint8))
ann.setTrainMethod(cv2.ml.ANN_MLP_BACKPROP, 0.1)
ann.setActivationFunction(cv2.ml.ANN_MLP_SIGMOID_SYM)
ann.setTermCriteria((cv2.TERM_CRITERIA_COUNT | cv2.TERM_CRITERIA_EPS, 1, 0.000001))  # one iteration per train() call; the loop below supplies the epochs

# XOR truth table, one-hot encoded: [1, 0] = class "1", [0, 1] = class "0"
input_array0 = np.array([[0.0, 0.0]], dtype=np.float32)
output_array0 = np.array([[0.0, 1.0]], dtype=np.float32)

input_array1 = np.array([[1.0, 0.0]], dtype=np.float32)
output_array1 = np.array([[1.0, 0.0]], dtype=np.float32)

input_array2 = np.array([[0.0, 1.0]], dtype=np.float32)
output_array2 = np.array([[1.0, 0.0]], dtype=np.float32)

input_array3 = np.array([[1.0, 1.0]], dtype=np.float32)
output_array3 = np.array([[0.0, 1.0]], dtype=np.float32)

td0 = cv2.ml.TrainData_create(input_array0, cv2.ml.ROW_SAMPLE, output_array0)
td1 = cv2.ml.TrainData_create(input_array1, cv2.ml.ROW_SAMPLE, output_array1)
td2 = cv2.ml.TrainData_create(input_array2, cv2.ml.ROW_SAMPLE, output_array2)
td3 = cv2.ml.TrainData_create(input_array3, cv2.ml.ROW_SAMPLE, output_array3)

# the first train() call (without UPDATE_WEIGHTS) initializes the network's weights
ann.train(td0, cv2.ml.ANN_MLP_NO_INPUT_SCALE | cv2.ml.ANN_MLP_NO_OUTPUT_SCALE)

for i in range(0, 10000):
    ann.train(td0, cv2.ml.ANN_MLP_UPDATE_WEIGHTS | cv2.ml.ANN_MLP_NO_INPUT_SCALE | cv2.ml.ANN_MLP_NO_OUTPUT_SCALE)
    ann.train(td1, cv2.ml.ANN_MLP_UPDATE_WEIGHTS | cv2.ml.ANN_MLP_NO_INPUT_SCALE | cv2.ml.ANN_MLP_NO_OUTPUT_SCALE)
    ann.train(td2, cv2.ml.ANN_MLP_UPDATE_WEIGHTS | cv2.ml.ANN_MLP_NO_INPUT_SCALE | cv2.ml.ANN_MLP_NO_OUTPUT_SCALE)
    ann.train(td3, cv2.ml.ANN_MLP_UPDATE_WEIGHTS | cv2.ml.ANN_MLP_NO_INPUT_SCALE | cv2.ml.ANN_MLP_NO_OUTPUT_SCALE)

print(ann.predict(input_array0))
print(ann.predict(input_array1))
print(ann.predict(input_array2))
print(ann.predict(input_array3))
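With one-hot encoding, predict() returns two scores per sample, and the winning class is whichever output is largest. A decoding sketch, using made-up scores (column 0 standing for class "1" and column 1 for class "0"):

```python
import numpy as np

# Hypothetical two-column scores from ann.predict(); illustrative only.
# Column 0 stands for class "1", column 1 for class "0".
scores = np.array([[0.1, 0.9],
                   [0.8, 0.2],
                   [0.7, 0.3],
                   [0.05, 0.95]], dtype=np.float32)

# np.argmax picks the winning column per row.
winners = np.argmax(scores, axis=1)

# Map the column index back to the XOR value (column 0 -> 1, column 1 -> 0).
xor_values = 1 - winners
print(xor_values)  # [0 1 1 0]
```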

I found that sometimes the predict function returns a ...


Comments

again, forget all of this instantly; your use case is flawed

(there are only finitely many possible inputs for 2 bits, so this is not a case for a neural network at all)

try to solve Caltech or a similar data set instead.

to get back to your question: anything other than binary classification will require one-hot encoding here.

berak ( 2018-05-07 15:11:02 -0500 )

The first example is a regression MLP; the second is a classifier.

LBerger ( 2018-05-07 15:22:38 -0500 )

I didn't know about the Caltech images. Thank you so much for that!

sjhalayka ( 2018-05-07 15:36:03 -0500 )

@LBerger -- thanks for your insight.

sjhalayka ( 2018-05-07 15:36:48 -0500 )

I am going to try the Caltech 256 data set.

sjhalayka ( 2018-05-07 15:58:13 -0500 )

@berak -- does OpenCV have a convolutional network? Or do I have to use the Caffe library?

sjhalayka ( 2018-05-07 19:46:34 -0500 )

OpenCV currently only has feed-forward CNNs (in the dnn module; you can't train them there).

if Caffe is too scary, you could get your feet wet with tiny-dnn or even Colab.

berak ( 2018-05-07 20:33:20 -0500 )