## How to Convert Keras Prediction Output to Desired Binary Values

I have a network with 32 input nodes, 20 hidden nodes and 65 output nodes. The network input is actually a hash code of length 32 and the output is a word.

The input is the ASCII value of each character in the hash, divided by 256.
The output of the network is a binary representation I have made: for example, 'a' is 00000, 'b' is 00001, and so on. It only includes the alphabet and the space, which is why it's only 5 bits per character. I have a maximum limit of 13 characters in my training input, so my output layer has 13 * 5 = 65 nodes, and I'm expecting a binary output like `10101010101010101010101010101010101010101010101010101010101001011`.

The bit sequence can therefore represent a word of at most 13 characters, given a hash code of length 32 as input.
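
For reference, a minimal sketch of how such an encoding could be built, assuming 'a'–'z' map to 0–25, the space maps to 26, and shorter words are padded with trailing spaces (the question does not spell out those details):

```python
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz "  # 26 letters + space = 27 symbols, 5 bits each (assumption)
MAX_LEN = 13                              # 13 characters * 5 bits = 65 output bits

def encode_word(word):
    """Encode a word of up to 13 characters as a 65-element 0/1 vector."""
    word = word.ljust(MAX_LEN)            # pad with trailing spaces (assumption)
    bits = []
    for ch in word:
        idx = ALPHABET.index(ch)          # 'a' -> 0, 'b' -> 1, ..., ' ' -> 26
        bits.extend(int(b) for b in format(idx, "05b"))
    return np.array(bits, dtype=np.float32)

print(encode_word("ab"))                  # starts with 00000 00001, then space padding
```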

```python
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

model = Sequential([
    Dense(32, input_shape=(32,), activation='relu'),
    Dense(20, activation='relu'),
    Dense(65, activation='softmax')
])
model.summary()
model.compile(Adam(lr=.001), loss='binary_crossentropy', metrics=['accuracy'])
model.fit(train_samples, train_labels, batch_size=1000, epochs=10000, shuffle=True, verbose=2)
```

When I tried predicting using the code below:

```python
model.predict(X)
```

It always outputs small decimal values, all less than 0.5.

```
[[8.95109400e-03 1.11340620e-02 1.27389077e-02 1.90807953e-02 1.56925414e-02 7.47500360e-03 1.30378362e-02 1.67052317e-02 1.07944654e-02 9.68935993e-03 9.82633699e-03 1.29385451e-02 1.56633276e-02 1.38113154e-02 1.50949452e-02 8.81231762e-03 1.26177669e-02 1.46279763e-02 1.42763760e-02 1.31389238e-02 8.32264405e-03 1.52036361e-02 1.52883027e-02 1.47563582e-02 1.19247697e-02 1.16073946e-02 1.72672570e-02 1.35995271e-02 1.77132934e-02 1.33292647e-02 1.41840307e-02 1.78522542e-02 9.77656059e-03 1.82192177e-02 9.86329466e-03 1.62205566e-02 1.95278302e-02 9.18696448e-03 2.06225738e-02 1.01496875e-02 2.08229423e-02 2.36334335e-02 6.02523983e-03 2.36746706e-02 6.56269025e-03 2.44314633e-02 2.70614270e-02 4.14136378e-03 2.72923186e-02 3.86772421e-03 2.90246904e-02 2.92722285e-02 3.06371972e-03 2.97660977e-02 1.89558265e-03 3.17853205e-02 3.13901827e-02 1.13886443e-03 3.24600078e-02 1.15508994e-03 3.36604454e-02 3.36041413e-02 4.59054590e-08 3.35478485e-02 4.63940282e-08]]
```

I'm expecting a binary output. How do I get the desired binary values? I have tried rounding each value to 0 when it is near 0 and to 1 when it is near 1. Is that right? If so, my output is always all zeros, because every value is close to 0, which I don't think is right. Please help.

There is probably a misconception in your activation function.

`softmax` is designed for "one correct class", not for 65 independently correct output bits: the softmax outputs always sum to 1, so you will hardly have any values above 0.5.

Use a `sigmoid` activation instead.
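
A minimal sketch of that change, keeping the layer sizes from the question (`train_samples` and `train_labels` are the arrays from the original code):

```python
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

# With sigmoid, each of the 65 output bits gets its own independent probability in [0, 1].
model = Sequential([
    Dense(32, input_shape=(32,), activation='relu'),
    Dense(20, activation='relu'),
    Dense(65, activation='sigmoid')  # sigmoid instead of softmax
])
model.compile(Adam(lr=.001), loss='binary_crossentropy', metrics=['accuracy'])
model.fit(train_samples, train_labels, batch_size=1000, epochs=10000, shuffle=True, verbose=2)
```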

Your activation of the last layer is causing the problem. With softmax the model's outputs are forced to sum to one, which is not the behaviour you want. You have two options for a binary-style activation. The first is `sigmoid`, which outputs values between 0 and 1. The second is `tanh`, which outputs values between -1 and 1. To convert the outputs to binary values, use a `>= 0.5` threshold for sigmoid and a `>= 0` threshold for tanh.
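
A minimal sketch of that thresholding, assuming `model` ends in a 65-unit sigmoid layer and `X` holds the encoded hashes:

```python
import numpy as np

probs = model.predict(X)            # shape (num_samples, 65), values in [0, 1]
bits = (probs >= 0.5).astype(int)   # sigmoid output: threshold at 0.5

# For a tanh output layer, threshold at 0 instead:
# bits = (model.predict(X) >= 0).astype(int)
```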

The way you encode the characters is not an efficient representation for a neural network. Use an embedding vector or one-hot encoding for your inputs, and also consider using one-hot encoding for your output nodes.

My suggestion:

##### With Embedding

```python
from keras.models import Sequential
from keras.layers import Embedding, Dense

# input_dim, output_dim, input_length and num_of_classes are placeholders to fill in.
model = Sequential([
    Embedding(input_dim, output_dim, input_length=input_length),
    Dense(32, activation='relu'),
    Dense(20, activation='relu'),
    Dense(num_of_classes, activation='softmax')
])
```
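
With an `Embedding` layer the input would be integer character indices rather than ASCII values divided by 256. A rough sketch of such an input, using a hypothetical 32-character hash:

```python
import numpy as np

hash_code = "5f4dcc3b5aa765d61d8327deb882cf99"   # hypothetical 32-character hash
x = np.array([[ord(c) for c in hash_code]])      # shape (1, 32): integer indices for the Embedding layer
# Here input_length would be 32 and input_dim at least 256 (one index per possible byte value).
```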

##### With one-hot encoding

```python
from keras.models import Sequential
from keras.layers import Dense

# num_of_classes = size of the character vocabulary (placeholder).
model = Sequential([
    Dense(32, input_shape=(32, num_of_classes), activation='relu'),
    Dense(20, activation='relu'),
    Dense(num_of_classes, activation='softmax')
])
```
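
A rough sketch of how the 32 hash characters could be one-hot encoded to match this input shape, assuming each character is first mapped to an integer index (`to_categorical` is from `keras.utils`):

```python
import numpy as np
from keras.utils import to_categorical

num_of_classes = 256                                                # e.g. one class per byte value (assumption)
hash_indices = np.random.randint(0, num_of_classes, size=(1, 32))   # stand-in for real hash character indices
x = to_categorical(hash_indices, num_classes=num_of_classes)        # shape (1, 32, 256)
```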
