## Hot questions on using neural networks with quantization

Question:

Recently I've started creating neural networks with TensorFlow + Keras, and I would like to try the quantization feature available in TensorFlow. So far, experimenting with examples from the TF tutorials has worked just fine, and I have this basic working example (from https://www.tensorflow.org/tutorials/keras/basic_classification):

```python
import tensorflow as tf
from tensorflow import keras

fashion_mnist = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

# fashion mnist data labels (indexes related to their respective labelling in the data set)
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

# preprocess the train and test images
train_images = train_images / 255.0
test_images = test_images / 255.0

# settings variables
input_shape = (train_images.shape[1], train_images.shape[2])

# create the model layers
model = keras.Sequential([
    keras.layers.Flatten(input_shape=input_shape),
    keras.layers.Dense(128, activation=tf.nn.relu),
    keras.layers.Dense(10, activation=tf.nn.softmax)
])

# compile the model with added settings
model.compile(optimizer=tf.train.AdamOptimizer(),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# train the model
epochs = 3
model.fit(train_images, train_labels, epochs=epochs)

# evaluate the accuracy of model on test data
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
```

Now, I would like to employ quantization in the learning and classification process. The quantization documentation (https://www.tensorflow.org/performance/quantization) (the page is no longer available since circa September 15, 2018) suggests using this piece of code:

```python
loss = tf.losses.get_total_loss()
tf.contrib.quantize.create_training_graph(quant_delay=2000000)
optimizer = tf.train.GradientDescentOptimizer(0.00001)
optimizer.minimize(loss)
```

However, it does not contain any information about where this code should be used or how it should be connected to TF code (let alone to a high-level model created with Keras). I have no idea how this quantization part relates to the previously created neural network model. Simply inserting it after the neural network code produces the following error:

```
Traceback (most recent call last):
  File "so.py", line 41, in <module>
    loss = tf.losses.get_total_loss()
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/losses/util.py", line 112, in get_total_loss
    return math_ops.add_n(losses, name=name)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/math_ops.py", line 2119, in add_n
    raise ValueError("inputs must be a list of at least one Tensor with the "
ValueError: inputs must be a list of at least one Tensor with the same dtype and shape
```

Is it possible to quantize a Keras NN model this way, or am I missing something basic? A possible solution that crossed my mind is using the low-level TF API instead of Keras (which would require quite a bit of work to construct the model), or maybe trying to extract some of the lower-level methods from the Keras models.

Answer:

As mentioned in other answers, TensorFlow Lite can help you with network quantization.

TensorFlow Lite provides several levels of support for quantization.

TensorFlow Lite's post-training quantization quantizes weights and activations after training and is easy to apply. Quantization-aware training allows training networks that can be quantized with minimal accuracy drop, but it is only available for a subset of convolutional neural network architectures.
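For intuition about what either level of support ultimately does to your tensors, here is a minimal pure-NumPy sketch of the affine (asymmetric) 8-bit quantization scheme such tools are based on. This is an illustration of the idea only, not TensorFlow Lite's actual implementation, and the function names are invented for the example:

```python
import numpy as np

def quantize_affine(w, num_bits=8):
    """Map a float array onto num_bits unsigned integers via scale + zero point."""
    qmin, qmax = 0, 2 ** num_bits - 1
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / (qmax - qmin) or 1.0  # avoid 0 for constant tensors
    zero_point = int(round(qmin - w_min / scale))
    q = np.clip(np.round(w / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from the quantized representation."""
    return (q.astype(np.float32) - zero_point) * scale

weights = np.array([-1.0, -0.5, 0.0, 0.5, 1.0], dtype=np.float32)
q, scale, zp = quantize_affine(weights)
restored = dequantize(q, scale, zp)
# the reconstruction error stays below one quantization step (scale)
```

Post-training quantization applies a mapping like this to an already-trained model, while quantization-aware training simulates the round-trip during training so the weights adapt to it.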

So first, you need to decide whether you need post-training quantization or quantization-aware training. For example, if you have already saved the model as *.h5 files, you would probably want to follow @Mitiku's instructions and do post-training quantization.

If you prefer to achieve higher performance by simulating the effect of quantization during training (using the method you quoted in the question), and your model **is** in the subset of CNN architectures supported by quantization-aware training, this example may help you with the interaction between Keras and TensorFlow. Basically, you only need to add this code between the model definition and its fitting:

```python
sess = tf.keras.backend.get_session()
tf.contrib.quantize.create_training_graph(sess.graph)
sess.run(tf.global_variables_initializer())
```

Question:

Self-organizing maps are better suited to clustering (dimensionality reduction) than to classification, but SOMs are used in learning vector quantization (LVQ) for fine-tuning. LVQ, however, is a supervised learning method, so to use a SOM with LVQ, LVQ must be provided with a labelled training data set. Since SOMs only do clustering, not classification, and therefore cannot produce labelled data, how can a SOM be used as input for LVQ?

Does LVQ fine-tune the **clusters** found by the SOM?
Should the SOM output first be put through another classification algorithm, so that the resulting labelled inputs can be used in LVQ?

Answer:

It must be clear that supervised learning differs from unsupervised learning in that, in the former, the target values are known.
Therefore, the output of a supervised model is a prediction.
The output of an unsupervised model, instead, is a label whose meaning we don't know yet. For this reason, after clustering it is necessary to do the *profiling* of each of those new labels.

Having said that, you could label the dataset using an unsupervised learning technique such as SOM. Then you should profile each class in order to be sure you understand the meaning of each one.
At this point, you can pursue two different paths, **depending on your final objective**:
1. use this new variable for dimensionality reduction
2. use this new dataset, featured with the additional variable representing the class, as labelled data that you will try to predict using LVQ
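As a minimal illustration of the second path, here is a plain-Python sketch of the LVQ1 update rule, where prototypes move toward a sample with a matching label and away otherwise. The data, labels, and function names are invented for the example:

```python
import math

def nearest(prototypes, x):
    """Index of the prototype closest to sample x (Euclidean distance)."""
    return min(range(len(prototypes)),
               key=lambda i: math.dist(prototypes[i][0], x))

def lvq1_step(prototypes, x, label, lr=0.1):
    """One LVQ1 update: attract the winner if labels match, repel otherwise."""
    i = nearest(prototypes, x)
    w, w_label = prototypes[i]
    sign = 1.0 if w_label == label else -1.0
    prototypes[i] = ([wj + sign * lr * (xj - wj) for wj, xj in zip(w, x)],
                     w_label)
    return i

# two labelled prototypes, e.g. SOM cluster centres after profiling
prototypes = [([0.0, 0.0], 'A'), ([1.0, 1.0], 'B')]
lvq1_step(prototypes, [0.2, 0.1], 'A')  # winner 0 moves toward the sample
```

The labels attached to the prototypes are exactly what the profiling step provides, which is why LVQ cannot start before the clusters have been given a meaning.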

Hope this can be useful!

Question:

I hope someone here can help me: I'm trying to implement a neural network to find clusters in data presented as 2D points. I tried to follow the standard algorithm as described on Wikipedia: I look for the smallest distance for each data point and update the weights of the winning neuron towards that data point. I stop when the total distance is small enough.

My result finds most of the clusters but gets a few wrong, and although the total distance settles at a constant value, it does not converge any further. Where is my error?

```c
typedef struct{
    double x;
    double y;
}Data;

typedef struct{
    double x;
    double y;
}Neuron;

typedef struct{
    size_t numNeurons;
    Neuron* neurons;
}Network;

int main(void){
    srand(time(NULL));
    Data trainingData[1000];
    size_t sizeTrainingData = 0;
    size_t sizeClasses = 0;
    Network network;

    getData(trainingData, &sizeTrainingData, &sizeClasses);
    initializeNetwork(&network, sizeClasses);
    normalizeData(trainingData, sizeTrainingData);
    train(&network, trainingData, sizeTrainingData);
    return 0;
}

void train(Network* network, Data trainingData[], size_t sizeTrainingData){
    for(int epoch=0; epoch<TRAINING_EPOCHS; ++epoch){
        double learningRate = getLearningRate(epoch);
        double totalDistance = 0;
        for(int i=0; i<sizeTrainingData; ++i){
            Data currentData = trainingData[i];
            int winningNeuron = 0;
            totalDistance += findWinningNeuron(network, currentData, &winningNeuron);
            //update weight
            network->neurons[i].x += learningRate * (currentData.x - network->neurons[i].x);
            network->neurons[i].y += learningRate * (currentData.y - network->neurons[i].y);
        }
        if(totalDistance<MIN_TOTAL_DISTANCE) break;
    }
}

double getLearningRate(int epoch){
    return LEARNING_RATE * exp(-log(LEARNING_RATE/LEARNING_RATE_MIN_VALUE)*((double)epoch/TRAINING_EPOCHS));
}

double findWinningNeuron(Network* network, Data data, int* winningNeuron){
    double smallestDistance = 9999;
    for(unsigned int currentNeuronIndex=0; currentNeuronIndex<network->numNeurons; ++currentNeuronIndex){
        Neuron neuron = network->neurons[currentNeuronIndex];
        double distance = sqrt(pow(data.x-neuron.x,2)+pow(data.y-neuron.y,2));
        if(distance<smallestDistance){
            smallestDistance = distance;
            *winningNeuron = currentNeuronIndex;
        }
    }
    return smallestDistance;
}
```

`initializeNetwork(...)` initializes all neurons with random weights in the range of -1 to 1.

`normalizeData(...)` normalizes the data so that the greatest value is 1.

**An example:** If I feed the network about 50 (normalized) data points separated into 3 clusters, the remaining `totalDistance` stays at about *7.3*. When I check the positions of the neurons that should represent the cluster centers, two are perfect and one sits at the border of a cluster. Shouldn't the algorithm move it further towards the center? I repeated the algorithm several times; the output is always similar (stuck at exactly the same *wrong* points).

Answer:

Your code does not look like LVQ; in particular, you never use the winning neuron, although you should move only **that one**:

```c
void train(Network* network, Data trainingData[], size_t sizeTrainingData){
    for(int epoch=0; epoch<TRAINING_EPOCHS; ++epoch){
        double learningRate = getLearningRate(epoch);
        double totalDistance = 0;
        for(int i=0; i<sizeTrainingData; ++i){
            Data currentData = trainingData[i];
            int winningNeuron = 0;
            totalDistance += findWinningNeuron(network, currentData, &winningNeuron);
            //update weight
            network->neurons[i].x += learningRate * (currentData.x - network->neurons[i].x);
            network->neurons[i].y += learningRate * (currentData.y - network->neurons[i].y);
        }
        if(totalDistance<MIN_TOTAL_DISTANCE) break;
    }
}
```

The neuron to move is indexed by `winningNeuron`, yet you update the `i`-th neuron, where `i` actually iterates over **training samples**. I am surprised you do not run off the end of your memory (`network->neurons` is presumably smaller than `sizeTrainingData`). I guess you meant something like:

```c
void train(Network* network, Data trainingData[], size_t sizeTrainingData){
    for(int epoch=0; epoch<TRAINING_EPOCHS; ++epoch){
        double learningRate = getLearningRate(epoch);
        double totalDistance = 0;
        for(int i=0; i<sizeTrainingData; ++i){
            Data currentData = trainingData[i];
            int winningNeuron = 0;
            totalDistance += findWinningNeuron(network, currentData, &winningNeuron);
            //update weight of the winner only
            network->neurons[winningNeuron].x += learningRate * (currentData.x - network->neurons[winningNeuron].x);
            network->neurons[winningNeuron].y += learningRate * (currentData.y - network->neurons[winningNeuron].y);
        }
        if(totalDistance<MIN_TOTAL_DISTANCE) break;
    }
}
```

Question:

I'm new to learning LVQ, and I want to implement it with my MFCC (Mel-frequency cepstral coefficients) results. As far as I have learned, every example I studied has training and input data arrays of uniform size, like:

x1[3][4] = {{0,1,1,1},{1,1,1,1},{1,1,0,1}}

x2[3][4] = {{0,1,1,0},{1,1,0,1},{1,0,0,1}}

x3[3][4] = {{1,0,1,0},{1,1,1,0},{0,0,0,1}}

But my MFCC results have unbalanced data sizes, like:

x1[2][4] = {{0,1,1,1},{1,1,1,1}}

x2[3][4] = {{0,0,1,0},{1,1,0,1},{1,0,0,1}}

x3[5][4] = {{0,0,1,0},{1,1,0,1},{1,0,0,1},{0,1,1,1},{1,0,1,0}}

So how can I deal with these unbalanced data sizes for LVQ training and input?

Answer:

The most basic thing to try for this problem is to fill the missing values with zero components. Since Mel-frequency cepstral coefficients are essentially a Fourier transform followed by a second Fourier-like transform, this zero padding should have little effect.

Find the maximum size of your input vectors, then fill the missing dimensions of the smaller input vectors with zeros, like below:

x1[5][4] = {{0,1,1,1},{1,1,1,1},{0,0,0,0},{0,0,0,0},{0,0,0,0}}

x2[5][4] = {{0,0,1,0},{1,1,0,1},{1,0,0,1},{0,0,0,0},{0,0,0,0}}

x3[5][4] = {{0,0,1,0},{1,1,0,1},{1,0,0,1},{0,1,1,1},{1,0,1,0}}
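That padding step can be sketched in a few lines of Python; the helper name `pad_frames` is made up for illustration:

```python
def pad_frames(frames_list, frame_dim=4):
    """Pad each variable-length list of MFCC frames with zero frames
    so every input reaches the length of the longest one."""
    max_len = max(len(frames) for frames in frames_list)
    return [frames + [[0] * frame_dim for _ in range(max_len - len(frames))]
            for frames in frames_list]

# the three unbalanced inputs from the question
x1 = [[0, 1, 1, 1], [1, 1, 1, 1]]
x2 = [[0, 0, 1, 0], [1, 1, 0, 1], [1, 0, 0, 1]]
x3 = [[0, 0, 1, 0], [1, 1, 0, 1], [1, 0, 0, 1], [0, 1, 1, 1], [1, 0, 1, 0]]
padded = pad_frames([x1, x2, x3])  # every sequence now has 5 frames of width 4
```

After padding, all inputs share the same shape and can be fed to a standard LVQ implementation.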