## Hot questions for Using Neural networks in pattern recognition

Question:

I am trying to extract common patterns that always appear whenever a certain event occurs.

For example, patients A, B, and C all had a heart attack. Using the readings from their pulse, I want to find the common patterns that appeared before the heart attack struck.

In the next stage I want to do this in multiple dimensions. For example, using readings of the patients' pulse, temperature, and blood pressure, what are the common patterns across the three dimensions, taking into consideration the time and order between each dimension?

What is the best way to solve this problem using Neural Networks and which type of network is best? (Just need some pointing in the right direction)

Thank you all for reading.

Answer:

The described problem looks like a time series prediction problem, that is, a basic prediction problem for a continuous or discrete **phenomenon** generated by some existing **process**. As **raw data** for this problem we have a sequence of **samples** x(t), x(t+1), x(t+2), ..., where x() denotes the output of the considered process and t is an arbitrary timepoint.

For an artificial neural network solution we will treat this as time series prediction and **organize** the raw data into new sequences. As you should know, X is the matrix of input vectors used in ANN learning. For time series prediction we construct this collection according to the following schema.

In the most basic form, your input vector x will be a sequence of samples (x(t-k), x(t-k+1), ..., x(t-1), x(t)): the sample at some arbitrary timepoint t together with its k predecessor samples from timepoints t-k, t-k+1, ..., t-1. You should generate such an example for every possible timepoint t.
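This windowing scheme can be sketched in Python/NumPy; the function and variable names below are my own, and the pulse values are made up for illustration:

```python
import numpy as np

def make_windows(series, k):
    """Build input/target pairs: each input is k consecutive samples,
    the target is the sample that follows the window."""
    X, y = [], []
    for t in range(len(series) - k):
        X.append(series[t:t + k])   # (x(t), ..., x(t+k-1))
        y.append(series[t + k])     # the next sample, to be predicted
    return np.array(X), np.array(y)

# toy pulse readings
pulse = np.array([72, 74, 73, 75, 80, 85, 90, 88])
X, y = make_windows(pulse, k=3)
# X[0] = [72, 74, 73], y[0] = 75
```

Every possible timepoint with k predecessors yields one training example, so 8 samples and k=3 give 5 examples.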

But the key is to **preprocess** data so that we get the best prediction results.

Assuming your data (the phenomenon) is continuous, you should consider applying some **sampling** technique. You could start with an experiment for some naive sampling period Δt, but there are stronger methods. See for example the Nyquist–Shannon sampling theorem, whose key idea is that the continuous x(t) can be recovered from the discrete samples x(Δt). This is reasonable here, since we probably expect our ANN to do just that.

Assuming your data is discrete... you should still try sampling, as this will speed up your computations and may provide better generalization. But the key advice is: **do experiments!** The best architecture depends on the data, and also on preprocessing them correctly.
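A naive sampling step can be as simple as keeping every d-th sample of a densely recorded signal; a Python sketch, where the signal and the period d are placeholders:

```python
import numpy as np

# stand-in for a densely recorded signal (e.g. a pulse trace)
signal = np.arange(100, dtype=float)

d = 5                    # naive sampling period, in original ticks
sampled = signal[::d]    # keep every d-th sample

# 100 samples -> 20, reducing the computation the network has to do
```

Whether d is small enough to preserve the patterns you care about is exactly what the sampling-theorem discussion above is about, so treat d as something to experiment with.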

The next thing is the network's output layer. From your question, it appears this will be a binary class prediction. But maybe a wider prediction vector is worth considering? How about **predicting the future** of the considered samples, that is x(t+1), x(t+2), and experimenting with different **horizons** (lengths of the future)?

Further reading:

- Somebody mentioned Python here. Here is some good tutorial on timeseries prediction with Keras: Victor Schmidt, Keras recurrent tutorial, Deep Learning Tutorials
- This paper is good if you need some real example: Fessant, Francoise, Samy Bengio, and Daniel Collobert. "On the prediction of solar activity using different neural network models." Annales Geophysicae. Vol. 14. No. 1. 1996.

Question:

I am using the Encog library to solve a pattern recognition problem by following the basic example provided by Mr. Jeff Heaton. I have the pattern

1 3 5 4 3 5 4 3 1

which is my ideal pattern, with output 1 (which would mean it is 100% the same). Now I want to input another pattern and see how similar it is to the ideal pattern.

This code is used for creating the network

```csharp
BasicNetwork network = new BasicNetwork();
network.AddLayer(new BasicLayer(new ActivationSigmoid(), true, NumberOfInputNeurons));
network.AddLayer(new BasicLayer(new ActivationSigmoid(), true, 20));
network.AddLayer(new BasicLayer(new ActivationSigmoid(), true, 15));
network.AddLayer(new BasicLayer(new ActivationSigmoid(), true, 1));
network.Structure.FinalizeStructure();
network.Reset();

INeuralDataSet trainingSet = new BasicNeuralDataSet(XOR_INPUT, XOR_IDEAL);
```

Then, I train the network

```csharp
do
{
    train.Iteration();
    Console.WriteLine("Epoch #" + epoch + " Error:" + train.Error);
    epoch++;
} while ((epoch <= 20000) && (train.Error > 0.001));
```

And finally, I print the results:

```csharp
foreach (INeuralDataPair pair in trainingSet)
{
    INeuralData output = network.Compute(pair.Input);
    Console.WriteLine(pair.Input[0] + "-" + pair.Input[1] + "-" + pair.Input[2] + .... + ": actual = " + output[0] + " ideal=" + pair.Ideal[0]);
}
```

Back to my question again:

**How do I enter another pattern and see if it looks like mine?**

Any ideas that may lead me to a solution are welcomed. Thanks

Answer:

I am not sure I completely follow this. Do you have more patterns than that, or is your training set exactly one pattern and you just want to see how similar other patterns are to it? A neural network does not really compare the degree of similarity between patterns; it is trained to output some vector based on a training set of inputs and ideal vectors.

If you really do want to just compare "1 3 5 4 3 5 4 3" to another similar vector, I would suggest just using a Euclidean distance, or similar measurement.

If, on the other hand, you DO want to train the neural network to recognize how similar something is to that pattern, you will need to generate some more training data. I would generate 1000 or so cases, compute the Euclidean distance between each random case and your vector above, and scale that to a percentage. You will also need to normalize your input vectors to the range 0 to 1 for the neural network, for best performance.
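One way to generate such a training set, sketched in Python/NumPy; the case count, value range, and distance-to-percent scaling are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(0)
ideal = np.array([1, 3, 5, 4, 3, 5, 4, 3, 1], dtype=float)

# generate random candidate patterns in the same value range as the ideal
candidates = rng.uniform(0, 5, size=(1000, ideal.size))

# Euclidean distance of each candidate to the ideal pattern
dists = np.linalg.norm(candidates - ideal, axis=1)

# scale distance to a similarity score: 1.0 = identical,
# 0.0 = the farthest case generated
similarity = 1.0 - dists / dists.max()

# normalize the inputs to [0, 1] for the network
inputs = candidates / 5.0
```

The `inputs` rows paired with the `similarity` values would then form the training set: the network learns to map a pattern to its similarity score.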

**Edit:**

Here is how I would represent this. You will have a number of input neurons equal to the maximum number of x-axis points you can have. However, you do need to normalize these values, so I would suggest establishing what the max Y would be and normalizing to between 0 and that value. Then, for your outputs, you will have one output neuron for every possible letter you can have. Perhaps the first output neuron is A, the second B. Then use one-of-n encoding and set only ONE of the output neurons to 1 and the rest to zero:

```
[input pattern for A]         -> [1 0 0]
[input pattern for B]         -> [0 1 0]
[input pattern for C]         -> [0 0 1]
[another input pattern for C] -> [0 0 1]
```

Use the above as your training set. Of course, if you have all 26 letters, then you have 26 outputs.
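The one-of-n encoding above can be sketched in Python; the class list and function name are illustrative:

```python
import numpy as np

letters = ['A', 'B', 'C']

def one_of_n(letter, classes):
    """One-of-n (one-hot) target vector: 1 at the class index, 0 elsewhere."""
    vec = np.zeros(len(classes))
    vec[classes.index(letter)] = 1.0
    return vec

print(one_of_n('B', letters))  # [0. 1. 0.]
```

With all 26 letters, `classes` simply grows to length 26 and each target vector has 26 entries.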

Question:

On Wikipedia I found that machine learning is a subfield of neural network science. So does that mean that working with machine learning itself implies working with neural networks, or not? Which is better to use for pattern recognition tasks in terms of efficiency and complexity?

Answer:

Machine learning is a part of neural networks? I'd be surprised, because machine learning includes dozens of techniques that have nothing to do with neural networks. It's most likely the other way around.

The exact pattern recognition algorithm depends on your requirements and data set. There are *many* such algorithms, for example SVMs, linear models for classification, HMMs, PCA, etc. Note that the phrase "pattern recognition" is a very general term; there is no algorithm that always works. It all depends on what patterns you are looking for and what kind of assumptions you can make.

I recommend Dr. Bishop's book "Pattern Recognition and Machine Learning"; you'll learn a lot from it.

Question:

I am trying to use the Neural Net Pattern Recognition toolbox in MATLAB to recognize different classes in my dataset. I have a 21392 x 4 table: columns 1-3 are the predictors I would like to use, and the 4th column holds the labels with 14 different categories (strings like Angry, Sad, Happy, Neutral, etc.). It seems that the Neural Net Pattern Recognition toolbox, unlike the MATLAB Classification Learner toolbox, doesn't allow me to import the table and automatically extract the predictors and responses from it. Moreover, I am unable to specify the inputs and targets to the neural network manually, as that option isn't showing up.

I looked into examples like the Iris, Wine, and Cancer datasets, but all of them have only 2-3 output classes (encoded in binary like 000, 010, 011, etc.), and their labels are not strings like mine (Angry, Sad, Happy, Neutral, etc.; 14 classes in total). I would like to know how I can use my table as input to the Neural Net Pattern Recognition toolbox, or otherwise any way I can extract the data from my table and use it in the toolbox. I am new to using the toolbox, so any help in this regard would be highly appreciated. Thanks!

Answer:

The first step in using the Neural Net Pattern Recognition Toolbox is to convert the table to a numeric array, as neural networks work only with numeric arrays, not other datatypes directly. Considering the table as `my_table`, it can be converted to a numeric array using

```matlab
my_table_array = table2array(my_table);
```

From `my_table_array`, the inputs (predictors) and outputs/targets can be extracted. It is imperative to mention that the inputs and outputs need to be transposed (the toolbox needs the data in column format: each column is one data point and each row is a feature), which can easily be accomplished using:

```matlab
inputs = inputs'; % now of dimensions 3x21392
labels = labels'; % now of dimensions 1x21392
```

The string (categorical) labels can be converted to numeric one-hot vectors using `categorical`, followed by `ind2vec`:

```matlab
my_table_vector = ind2vec(double(categorical(labels)));
```

Now `my_table_vector` (the final targets) and `inputs` (the final input predictors) can easily be fed to the neural network and used for classification/prediction of the target labels.
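For readers outside MATLAB, the same pipeline (numeric array, transpose to column-per-sample, one-hot targets) can be sketched in Python/NumPy; all names and sizes here are illustrative stand-ins for the 21392 x 4 table:

```python
import numpy as np

# toy stand-in for the table: 3 feature columns + 1 string label column
features = np.array([[0.1, 0.2, 0.3],
                     [0.4, 0.5, 0.6]])   # rows = samples
labels = np.array(['Angry', 'Sad'])

# transpose so each COLUMN is one sample (the toolbox convention)
inputs = features.T                      # shape (3, n_samples)

# map string labels to class indices, then one-hot encode (like ind2vec)
classes = sorted(set(labels))
idx = np.array([classes.index(l) for l in labels])
targets = np.eye(len(classes))[idx].T    # shape (n_classes, n_samples)
```

With the full data set, `classes` would have 14 entries and `targets` would be a 14 x 21392 matrix of one-hot columns.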

Question:

I just trained a neural network, and I would like to test it with a new data set that was not included in the training, so as to check its performance on new data. This is my code:

```matlab
net = patternnet(30);
net = train(net,x,t);
save (net);
y = net(x);
perf = perform(net,t,y)
classes = vec2ind(y);
```

where x and t are my input and target, respectively. I understand that `save net` and `load net;` can be used, but my questions are as follows:

- At what point in my code should I use `save net`?
- Using `save net;`, in which location on the system is the trained network saved?
- When I exit and open MATLAB again, how can I load the trained network and supply new data that I want to test it with?

Please note: I have discovered that each time I run my code, it gives a different output, which I do not want once I have an acceptable result. I want to be able to save the trained neural network such that when I run the code over and over again with the training data set, it gives the same output.

Answer:

If you just call `save net`, all your current variables from the workspace will be saved to `net.mat`. You want to save only your trained network, so you need to use `save('path_to_file', 'variable')`. For example:

```matlab
save('C:\Temp\trained_net.mat','net');
```

In this case the network will be saved under the given file name.

The next time you want to use the saved pre-trained network, you just need to call `load('path_to_file')`. If you don't reinitialize or train this network again, the performance will be the same as before, because all weight and bias values will be the same.

You can inspect the weight and bias values by checking variables like `net.IW{i,j}` (input weights), `net.LW{i,j}` (layer weights) and `net.b{i}` (biases). As long as they stay the same, the network's performance stays the same.

**Train and save**

```matlab
[x,t] = iris_dataset;
net = patternnet;
net = configure(net,x,t);
net = train(net,x,t);
save('C:\Temp\trained_net.mat','net');
y = net(x);
perf = perform(net,t,y);
display(['performance: ', num2str(perf)]);
```

It returns `performance: 0.11748` in my case. The values will be different after each new training.

**Load and use**

```matlab
clear;
[x,t] = iris_dataset;
load('C:\Temp\trained_net.mat');
y = net(x);
perf = perform(net,t,y);
display(['performance: ', num2str(perf)]);
```

It returns `performance: 0.11748`. The values will be the same when using the network on the same data set; here we used the training set again.

If you get an absolutely new data set, the performance will be different, but it will always be the same for this particular data set.

```matlab
clear;
[x,t] = iris_dataset;
% simulate a new data set of size 50
data_set = [x; t];
data_set = data_set(:,randperm(size(data_set,2)));
x = data_set(1:4, 1:50);
t = data_set(5:7, 1:50);
load('C:\Temp\trained_net.mat');
y = net(x);
perf = perform(net,t,y);
display(['performance: ', num2str(perf)]);
```

It returns `performance: 0.12666` in my case.

Question:

I've recently completed Professor Ng's Machine Learning course on Coursera, but I have some problems understanding the backpropagation algorithm, so I tried to read Bishop's code for backpropagation using the sigmoid function. I searched and found clean code which tries to explain what backpropagation does, but I still have problems understanding it.

Can anyone explain to me what backpropagation really does, and also explain the code for me?

Here is the code that I found on GitHub, which I mentioned before.

Answer:

You have an error of the network. The first step of backpropagation is to compute each neuron's portion of the blame for that error. Your goal is to express the error as a function of the weights (the parameters you can change), so the backprop equations are the partial derivatives of the error with respect to the weights.

First step: error signal = (desired result - output of output neuron) × activation'(x), where x is the input of the output neuron and activation' is the derivative of the activation function. That is the portion of blame for the output neuron.

The next step is to compute the portion of blame for the hidden units. The first part of this step is the sum of the error signals of the next layer, each multiplied by the weight connecting the hidden unit to that next-layer unit. The rest is again the derivative of the activation function: error signal = sum(next_layer_error × weight) × activation'(x).

The final step is the adaptation of the weights:

Δw_ij = error_signal_i × learning_rate × output_of_neuron_j
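The three steps above can be sketched in Python/NumPy for a tiny 2-3-1 sigmoid network; all sizes, initial values, and the learning rate are illustrative choices:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 2))   # input -> hidden weights
W2 = rng.normal(size=(1, 3))   # hidden -> output weights
lr = 0.5

x = np.array([1.0, 0.0])       # one training input
desired = np.array([1.0])      # its desired output

# forward pass
h = sigmoid(W1 @ x)            # hidden activations
y = sigmoid(W2 @ h)            # network output

# step 1: error signal of the output neuron
# (sigmoid'(z) = y * (1 - y))
delta_out = (desired - y) * y * (1 - y)

# step 2: error signals of the hidden units (blame propagated back
# through the connecting weights, times the activation derivative)
delta_hid = (W2.T @ delta_out) * h * (1 - h)

# step 3: weight adaptation  delta_w_ij = error_i * lr * output_j
W2 += lr * np.outer(delta_out, h)
W1 += lr * np.outer(delta_hid, x)
```

Running the forward pass again after the update should give an output closer to the desired value, which is exactly what one backpropagation step is meant to achieve.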

My implementation of BP in Matlab NN

Question:

I'm trying to do character recognition using a linear network, but I'm getting an error when running my code. Can anyone help me with a basic explanation of the problem or how I can go about it? Below is my code.

```matlab
A1 = [ 0 0 1 1 0 0 0; 0 0 0 1 0 0 0; 0 0 0 1 0 0 0; 0 0 1 0 1 0 0; 0 0 1 0 1 0 0; 0 1 1 1 1 1 0; 0 1 0 0 0 1 0; 0 1 0 0 0 1 0; 1 1 1 0 1 1 1];
B1 = [ 1 1 1 1 1 1 1; 1 0 0 0 0 0 1; 1 0 0 0 0 0 1; 1 0 0 0 0 0 1; 1 1 1 1 1 1 1; 1 0 0 0 0 0 1; 1 0 0 0 0 0 1; 1 0 0 0 0 0 1; 1 1 1 1 1 1 1];
C1 = [ 0 0 1 1 1 1 1; 0 1 0 0 0 0 1; 1 0 0 0 0 0 0; 1 0 0 0 0 0 0; 1 0 0 0 0 0 0; 1 0 0 0 0 0 0; 1 0 0 0 0 0 0; 0 1 0 0 0 0 1; 0 0 1 1 1 1 0];
A2 = [ 0 0 0 1 0 0 0; 0 0 0 1 0 0 0; 0 0 0 1 0 0 0; 0 0 1 0 1 0 0; 0 0 1 0 1 0 0; 0 1 0 0 0 1 0; 0 1 1 1 1 1 0; 0 1 0 0 0 1 0; 0 1 0 0 0 1 0];
B2 = [ 1 1 1 1 1 1 0; 1 0 0 0 0 0 1; 1 0 0 0 0 0 1; 1 0 0 0 0 0 1; 1 1 1 1 1 1 0; 1 0 0 0 0 0 1; 1 0 0 0 0 0 1; 1 0 0 0 0 0 1; 1 1 1 1 1 1 0];
C2 = [ 0 0 1 1 1 0 0; 0 1 0 0 0 1 0; 1 0 0 0 0 0 1; 1 0 0 0 0 0 0; 1 0 0 0 0 0 0; 1 0 0 0 0 0 0; 1 0 0 0 0 0 1; 0 1 0 0 0 1 0; 0 0 1 1 1 0 0];

p = [A1(1:end); B1(1:end); C1(1:end)]';
t = [A2(1:end); B2(1:end); C2(1:end)]';

net = newlin(minmax(p),1);
net.trainParam.goal = 10e-5;
net.trainParam.epochs = 500;
net = train(net, p, t);
```

My error is on line 62, and the code on line 62 is

```matlab
net = train(net, p, t);
```

Can anyone provide a good example or show how I can make this code run? Thanks in advance; I'm trying to learn and I'm new to MATLAB.

Answer:

I ran the code and the error states: "Output data size does not match net.outputs{1}.size. Check the format for matrix sizes." That is the issue: `newlin(minmax(p),1)` creates a network with a single output neuron, but `t` is built from 9x7 character matrices, so each target column has 63 rows. The number of rows in `t` must match the size of the output layer.

Question:

I have a neural network to be trained and tested on images. I know that when I want to train a NN on images I should extract features. Given this, can I consider the Iris dataset supplied at https://archive.ics.uci.edu/ml/datasets/iris as an image dataset with extracted features, and train a NN on it? In other words, could anybody say to me, "no, you didn't train your NN on images"?

Thanks for the help.

Answer:

When you want to train your neural network, you have to:

- Preprocess your data images, in order to "clean" your dataset
- Decide which features you want to extract (this is a very difficult task, because you have to decide which features are relevant for recognition and which are not; this step is already done in the Iris dataset)
- Proceed to training and testing your neural network

So, if you look at step two, you can see that the Iris dataset is not an image dataset. However, it has a number of features that you can use to train your neural network.

So the question is: Which features do you want to extract from your images in order to train your neural network?

If the answer is: the features already extracted in the Iris dataset (sepal length, sepal width, petal length, petal width), then you can proceed to train your neural network.

If the answer is: I want to extract features directly from the images, then you have to study this step.