## Hot questions for using neural networks in PyCharm

Question:

Removed text as I have not found a solution and I realized I don't want others ripping off the first bit which works.

Answer:

Your input to `confusion_matrix` must be an array of ints, not one-hot encodings.

```python
# Predicting the Test set results
y_pred = model.predict(X_test)
y_pred = (y_pred > 0.5)
matrix = metrics.confusion_matrix(y_test.argmax(axis=1), y_pred.argmax(axis=1))
```

The raw output comes as probabilities, as shown below, so applying a probability threshold of 0.5 transforms them into binary labels.

Output (`y_pred`):

```
[0.87812372 0.77490434 0.30319547 0.84999743]
```
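
For instance, here is a minimal sketch (with made-up ground truth, not from the question) of how thresholding turns those probabilities into the integer labels that `confusion_matrix` expects:

```python
import numpy as np
from sklearn import metrics

# Hypothetical probability outputs, as returned by model.predict()
y_pred_proba = np.array([0.87812372, 0.77490434, 0.30319547, 0.84999743])

# Threshold at 0.5 to obtain integer class labels (0 or 1)
y_pred_labels = (y_pred_proba > 0.5).astype(int)
print(y_pred_labels)  # [1 1 0 1]

# With hard labels on both sides, confusion_matrix works as expected
y_true = np.array([1, 0, 0, 1])  # made-up ground truth for illustration
print(metrics.confusion_matrix(y_true, y_pred_labels))
```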

The `sklearn.metrics.accuracy_score(y_true, y_pred)` method defines `y_pred` as:

> y_pred : 1d array-like, or label indicator array / sparse matrix. Predicted labels, as returned by a classifier.

This means `y_pred` has to be an array of 1's and 0's (predicted labels). They should not be probabilities.

The root cause of your error is a theoretical rather than computational issue: you are trying to use a classification metric (accuracy) with a regression (i.e., numeric prediction) model, which is meaningless.

Like the majority of performance metrics, accuracy compares apples to apples (i.e., true labels of 0/1 with predictions that are again 0/1); so, when you ask the function to compare binary true labels (apples) with continuous predictions (oranges), you get an expected error, whose message tells you exactly what the problem is from a computational point of view:

```
Classification metrics can't handle a mix of binary and continuous targets
```
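
Here is a minimal reproduction of that error with made-up data (not from the question), along with the thresholding fix:

```python
import numpy as np
from sklearn.metrics import accuracy_score

y_true = np.array([0, 1, 1, 0])          # binary true labels
y_prob = np.array([0.2, 0.9, 0.6, 0.4])  # continuous predictions (probabilities)

# accuracy_score(y_true, y_prob) raises:
#   ValueError: Classification metrics can't handle a mix of binary and continuous targets

# Thresholding first makes the comparison apples-to-apples:
print(accuracy_score(y_true, (y_prob > 0.5).astype(int)))  # 1.0
```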

Although the message doesn't tell you directly that you are trying to compute a metric that is invalid for your problem (and we shouldn't actually expect it to go that far), it is certainly a good thing that scikit-learn at least gives you a direct and explicit warning that you are attempting something wrong; this is not necessarily the case with other frameworks. See, for example, the behavior of Keras in a very similar situation, where you get no warning at all, and users just end up complaining about low "accuracy" in a regression setting.
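
As a sketch of that Keras behavior (hypothetical regression data, not the question's code): the model below happily reports an "accuracy" per epoch with no warning, even though the target is continuous and the number is meaningless:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Hypothetical regression data
X = np.random.rand(100, 3)
y = np.random.rand(100)  # continuous target

model = Sequential()
model.add(Dense(8, activation='relu', input_dim=3))
model.add(Dense(1))  # linear output, i.e. a regression model

# Requesting 'accuracy' here is meaningless, but Keras emits no warning
model.compile(optimizer='adam', loss='mse', metrics=['accuracy'])
model.fit(X, y, epochs=2, verbose=1)
```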

For reference, here is the full pipeline, cleaned up so that it runs:

```python
import numpy as np                  # linear algebra
import pandas as pd                 # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
import seaborn as sn
from keras.models import Sequential
from keras.layers import Dense, Dropout
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report

# Read the csv file and convert it into arrays for the machine to process
df = pd.read_csv('dataset_ori.csv')
dataset = df.values

# Split the dataset into input features and the feature to predict
X = dataset[:, 0:7]
Y = dataset[:, 7]

# Splitting into Train and Test Set
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=0)

# Scale the features before training
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Initialising the ANN
classifier = Sequential()

# Adding the input layer and the first hidden layer
classifier.add(Dense(units=10, kernel_initializer='uniform', activation='relu', input_dim=7))
classifier.add(Dropout(0.5))

# Adding the second hidden layer
classifier.add(Dense(units=10, kernel_initializer='uniform', activation='relu'))
classifier.add(Dropout(0.5))

# Adding the output layer: one sigmoid unit for binary classification
classifier.add(Dense(units=1, kernel_initializer='uniform', activation='sigmoid'))

# Compiling the ANN: binary_crossentropy matches the single sigmoid output
classifier.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Fitting the ANN to the Training set
classifier.fit(X_train, y_train, batch_size=10, epochs=20)

# Summary of the neural network
classifier.summary()

# Predicting the Test set results (predict_classes already applies the 0.5 threshold)
y_pred = classifier.predict_classes(X_test)

# EXTRA: Confusion Matrix Visualize
cm = confusion_matrix(y_test, y_pred)  # rows = truth, cols = prediction
df_cm = pd.DataFrame(cm, index=(0, 1), columns=(0, 1))
plt.figure(figsize=(10, 7))
sn.set(font_scale=1.4)
sn.heatmap(df_cm, annot=True, fmt='g')
print("Test Data Accuracy: %0.4f" % accuracy_score(y_test, y_pred))

# Let's see how our model performed
print(classification_report(y_test, y_pred))
```

Question:

I searched Stack Overflow and visited other websites for help, but I can't find a solution to my problem. I will leave the whole code here to make it understandable for you. It's about 110 lines, written with PyTorch.

Each time I compile and calculate a prediction, this error shows up:

```
Traceback (most recent call last):
  File "/Users/MacBookPro/Dropbox/01 GST h_da Privat/BA/06_KNN/PyTorchV1/BesucherV5.py", line 108, in <module>
    result = Network(test_exp).data[0][0].item()
TypeError: __init__() takes 1 positional argument but 2 were given
```

I know other users have had this too, but none of their solutions helped me out. I guess the mistake is either in my class `Network` or in the variable `result`. I hope that someone of you has had this problem and knows how to fix it, or can help me in a different way.

Short information about the Dataset:

My dataset has 10 columns and gets split into two sets, X and Y. X has 9 columns, Y just one. These are then used to train the network.

Thank you in advance!

Kind regards Christian Richter

My Code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
import torch.optim as optim
import pandas as pd

### Dataset ###
dataset = pd.read_csv('./data/train_data_csv.csv')

x_temp = dataset.iloc[:, :-1].values
print(x_temp)
print()
print(x_temp.size)
print()

y_temp = dataset.iloc[:, 9:].values
print(y_temp)
print()
print(y_temp.size)
print()

x_train_tensor = torch.FloatTensor(x_temp)
y_train_tensor = torch.FloatTensor(y_temp)

### Network Architecture ###
class Network(nn.Module):
    def __init__(self):
        super(Network, self).__init__()
        self.linear1 = nn.Linear(9, 9)  # 9 input neurons, 9 output neurons, linear layer
        self.linear2 = nn.Linear(9, 1)

    def forward(self, x):
        pax_predict = F.relu(self.linear1(x))
        pax_predict = self.linear2(x)
        return pax_predict

    def num_flat_features(self, pax_predict):
        size = pax_predict.size()[1:]
        num = 1
        for i in size:
            num *= i
        return num

network = Network()
print(network)

criterion = nn.MSELoss()
target = Variable(y_train_tensor)
optimizer = torch.optim.SGD(network.parameters(), lr=0.0001)

### Training ###
for epoch in range(200):
    input = Variable(x_train_tensor)
    y_pred = network(input)
    loss = criterion(y_pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

test_exp = torch.Tensor([[40116]])
result = Network(test_exp).data[0][0].item()
print('Result is: ', result)
```

Answer:

The problem is quite simple and is in this line, I suppose:

```python
result = Network(test_exp).data[0][0].item()
```

Here you should use `network` (the object) instead of `Network` (the class). As you defined it, `Network.__init__` takes only 1 argument (i.e., `self`), but you are passing 2: `self` and `test_exp`.
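
That is, the prediction line should presumably read:

```python
# Call the trained instance, not the class
result = network(test_exp).data[0][0].item()
```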

Perhaps if you had chosen another name for your object (e.g., `net`), you'd have spotted this error more easily. Take that into consideration :)

And please, always post the full traceback.

Question:

I've been trying to build a neural network model that can classify images, but one consistent error keeps popping up. Can anyone please help me with this? Here are the code and the error below:

```python
x = tf.placeholder(tf.float32, shape=[None, img_size])
y = tf.placeholder(tf.float32, shape=[None, no_classes])
#keep_probable = tf.argmax(y, dimension=1)
keep_probable = tf.placeholder(tf.float32)

def conv2d(x, w, b, strides=1):
    x = tf.nn.conv2d(x, w, strides=[1, strides, strides, 1], padding='SAME')
    x = tf.nn.bias_add(x, b)
    return tf.nn.relu(x)

def maxpool2d(x, k=2):
    return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1], padding='SAME')

def conv_net(x, weights, biases, drop_out):
    x = tf.reshape(x, shape=[-1, 50, 50, 1])
    conv1 = conv2d(x, weights['wc1'], biases['bc1'])
    conv1 = maxpool2d(conv1, k=2)
    conv2 = conv2d(conv1, weights['wc2'], biases['bc2'])
    conv2 = maxpool2d(conv2, k=2)
    fcl = tf.reshape(conv2, [-1, weights['wd1'].get_shape().as_list()[0]])
    fcl = tf.add(tf.matmul(fcl, weights['wd1'], biases['bd1']))
    fcl = tf.nn.relu(fcl)
    # application of dropout
    fcl.tf.nn.dropout(fcl, dropout)
    # output of the class prediction
    out = tf.add(tf.matmul(fcl, weights['out']), biases['out'])
    return out

weights = {
    'wc1': tf.Variable(tf.random_normal([5, 5, 1, 32])),
    'wc2': tf.Variable(tf.random_normal([5, 5, 32, 64])),
    'wd1': tf.Variable(tf.random_normal([7 * 7 * 64, 1024])),
    'out': tf.Variable(tf.random_normal([1024, no_classes]))
}
biases = {
    'bc1': tf.Variable(tf.random_normal([32])),
    'bc2': tf.Variable(tf.random_normal([64])),
    'bd1': tf.Variable(tf.random_normal([1024])),
    'out': tf.Variable(tf.random_normal([no_classes]))
}

# construction of a model
pred = conv_net(x, weights, biases, keep_probable)

# definition of the loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
optimiser = tf.train.AdadeltaOptimizer(learning_rate=LR).minimize(cost)

# Evaluating the model
correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
```

Following is the error I get:

```
TypeError: Expected bool for argument 'transpose_a' not <tf.Variable 'Variable_6:0' shape=(1024,) dtype=float32_ref>.
```

Answer:

The problem is in this line: `fcl=tf.add(tf.matmul(fcl,weights['wd1'],biases['bd1']))`. You are missing a bracket: `biases['bd1']` ends up as the third positional argument of `tf.matmul` (which is `transpose_a` and expects a bool), hence the TypeError. Do this instead:

```python
fcl = tf.add(tf.matmul(fcl, weights['wd1']), biases['bd1'])
```
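
A minimal sketch of why the message mentions `transpose_a` (TF 1.x, hypothetical tensors): the third positional parameter of `tf.matmul` is `transpose_a`, a bool, so the misplaced bias Variable lands in that slot:

```python
import tensorflow as tf

a = tf.ones([2, 3])
b = tf.ones([3, 4])
c = tf.Variable(tf.zeros([4]))

# tf.matmul(a, b, c) would reproduce the question's error:
#   TypeError: Expected bool for argument 'transpose_a' not <tf.Variable ...>

# Correct: matmul first, then add the bias
good = tf.add(tf.matmul(a, b), c)
```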

Question:

I'm currently learning the theory behind Neural Networks, and I want to learn how to code such models. Therefore I've started to look at TensorFlow.

I've found a really interesting application I want to program, but I currently can't make it work, and I don't really know why!

The example comes from Deep Learning (Goodfellow et al., 2016), pages 171-177.

```python
import tensorflow as tf

T = 1.
F = 0.
train_in = [
    [T, T],
    [T, F],
    [F, T],
    [F, F],
]
train_out = [
    [F],
    [T],
    [T],
    [F],
]

w1 = tf.Variable(tf.random_normal([2, 2]))
b1 = tf.Variable(tf.zeros([2]))
w2 = tf.Variable(tf.random_normal([2, 1]))
b2 = tf.Variable(tf.zeros([1]))

out1 = tf.nn.relu(tf.matmul(train_in, w1) + b1)
out2 = tf.nn.relu(tf.matmul(out1, w2) + b2)

error = tf.subtract(train_out, out2)
mse = tf.reduce_mean(tf.square(error))

train = tf.train.GradientDescentOptimizer(0.01).minimize(mse)

sess = tf.Session()
tf.global_variables_initializer()

err = 1.0
target = 0.01
epoch = 0
max_epochs = 1000

while err > target and epoch < max_epochs:
    epoch += 1
    err, _ = sess.run([mse, train])
    print("epoch:", epoch, "mse:", err)

print("result: ", out2)
```

I get the following error message in PyCharm when running the code (posted as a screenshot of the traceback):

Answer:

`tf.global_variables_initializer()` only *creates* the initialization op; it does not execute it. In order to actually *run* it, you should write:

```python
sess.run(tf.global_variables_initializer())
```

Instead of:

```python
tf.global_variables_initializer()
```

Here is a working version:

```python
import tensorflow as tf

T = 1.
F = 0.
train_in = [
    [T, T],
    [T, F],
    [F, T],
    [F, F],
]
train_out = [
    [F],
    [T],
    [T],
    [F],
]

w1 = tf.Variable(tf.random_normal([2, 2]))
b1 = tf.Variable(tf.zeros([2]))
w2 = tf.Variable(tf.random_normal([2, 1]))
b2 = tf.Variable(tf.zeros([1]))

out1 = tf.nn.relu(tf.matmul(train_in, w1) + b1)
out2 = tf.nn.relu(tf.matmul(out1, w2) + b2)

error = tf.subtract(train_out, out2)
mse = tf.reduce_mean(tf.square(error))

train = tf.train.GradientDescentOptimizer(0.01).minimize(mse)

sess = tf.Session()
sess.run(tf.global_variables_initializer())

err = 1.0
target = 0.01
epoch = 0
max_epochs = 1000

while err > target and epoch < max_epochs:
    epoch += 1
    err, _ = sess.run([mse, train])
    print("epoch:", epoch, "mse:", err)

print("result: ", out2)
```