Hot questions on using neural networks for sequence-to-sequence learning


Question:

I have coded a sequence-to-sequence learning LSTM in Keras myself, using knowledge gained from web tutorials and my own intuition. I converted my sample text to sequences and then padded them using the pad_sequences function in Keras.

from keras.preprocessing.text import Tokenizer,base_filter
from keras.preprocessing.sequence import pad_sequences

def shift(seq, n):
    n = n % len(seq)
    return seq[n:] + seq[:n]

txt="abcdefghijklmn"*100

tk = Tokenizer(nb_words=2000, filters=base_filter(), lower=True, split=" ")
tk.fit_on_texts(txt)
x = tk.texts_to_sequences(txt)
#shifting to left
y = shift(x,1)

#padding sequence
max_len = 100
max_features=len(tk.word_counts)
X = pad_sequences(x, maxlen=max_len)
Y = pad_sequences(y, maxlen=max_len)

After careful inspection, I found that my padded sequences look like this:

>>> X[0:6]
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 3],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 4],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 7]], dtype=int32)
>>> X
array([[ 0,  0,  0, ...,  0,  0,  1],
       [ 0,  0,  0, ...,  0,  0,  3],
       [ 0,  0,  0, ...,  0,  0,  2],
       ..., 
       [ 0,  0,  0, ...,  0,  0, 13],
       [ 0,  0,  0, ...,  0,  0, 12],
       [ 0,  0,  0, ...,  0,  0, 14]], dtype=int32)

Is the padded sequence supposed to look like this? Except for the last column, the array is all zeros. I think I made a mistake in converting the text to padded sequences; if so, can you tell me where the error is?


Answer:

If you want to tokenize by character, you can do it manually; it's not too complex.

First build a vocabulary for your characters:

txt="abcdefghijklmn"*100
vocab_char = {k: (v+1) for k, v in zip(set(txt), range(len(set(txt))))}
vocab_char['<PAD>'] = 0

This associates a distinct number with every character in your txt. Index 0 should be reserved for padding.

Having the reverse vocabulary will be useful for decoding the output.

rvocab = {v: k for k, v in vocab_char.items()}

Once you have this, you can split your text into sequences; say you want sequences of length seq_len = 13:

seq_len = 13
sequences = [[vocab_char[char] for char in txt[i:(i + seq_len)]]
             for i in range(0, len(txt), seq_len)]

Your sequences will look like:

[[9, 12, 6, 10, 8, 7, 2, 1, 5, 13, 11, 4, 3], 
 [14, 9, 12, 6, 10, 8, 7, 2, 1, 5, 13, 11, 4],
 ...,
 [2, 1, 5, 13, 11, 4, 3, 14, 9, 12, 6, 10, 8], 
 [7, 2, 1, 5, 13, 11, 4, 3, 14]]

Note that the last sequence doesn't have the same length; you can either discard it or pad your sequences to max_len = 13, which will add 0's to it.

You can build your targets Y the same way, by shifting everything by 1. :-)
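
Putting this together, here is a small sketch that reuses txt, vocab_char, seq_len and sequences from above; pad_sequences and this particular way of shifting are just one option, not the only way to do it:

from keras.preprocessing.sequence import pad_sequences

# pad_sequences fills shorter rows with 0's (the <PAD> index), by default at the front
X = pad_sequences(sequences, maxlen=seq_len)

# Targets: the same text shifted left by one character, so each target element is the
# character that follows the corresponding input element in txt
shifted = [[vocab_char[char] for char in txt[i + 1:(i + 1 + seq_len)]]
           for i in range(0, len(txt), seq_len)]
Y = pad_sequences(shifted, maxlen=seq_len)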

I hope this helps.

Question:

I am trying to write a sequence-to-sequence RNN in Keras. I coded this program using what I understood from the web. I first tokenized the text, then converted it into sequences and padded them to form the feature variable X. The target variable Y was obtained by first shifting x to the left and then padding it. Lastly, I fed my feature and target variables to my LSTM model.

This is the code I wrote in Keras for that purpose.

from keras.preprocessing.text import Tokenizer,base_filter
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense, Activation,Dropout,Embedding
from keras.layers import LSTM


def shift(seq, n):
    n = n % len(seq)
    return seq[n:] + seq[:n]

txt="abcdefghijklmn"*100

tk = Tokenizer(nb_words=2000, filters=base_filter(), lower=True, split=" ")
tk.fit_on_texts(txt)
x = tk.texts_to_sequences(txt)
#shifting to left
y = shift(x,1)

#padding sequence
max_len = 100
max_features=len(tk.word_counts)
X = pad_sequences(x, maxlen=max_len)
Y = pad_sequences(y, maxlen=max_len)

#lstm model
model = Sequential()
model.add(Embedding(max_features, 128, input_length=max_len, dropout=0.2))
model.add(LSTM(128, dropout_W=0.2, dropout_U=0.2))
model.add(Dense(max_len))
model.add(Activation('softmax'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop')

model.fit(X, Y, batch_size=200, nb_epoch=10)

The problem is that it shows the following error:

Epoch 1/10
IndexError: index 14 is out of bounds for size 14
Apply node that caused the error: AdvancedSubtensor1(if{inplace}.0, Reshape{1}.0)
Toposort index: 80

Answer:

The problem lies in:

model.add(Embedding(max_features, 128, input_length=max_len, dropout=0.2))

In the Embedding documentation you can see that its first argument should be set to the vocabulary size + 1. This is because there must always be a place for the null (padding) word, whose index is 0. Because of that, you need to change this line to:

model.add(Embedding(max_features + 1, 128, input_length=max_len, dropout=0.2))
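
As a quick sanity check (a small sketch reusing tk and X from the question), you can compare the largest token index with the Embedding's input dimension:

max_features = len(tk.word_counts)   # 14 distinct tokens, indexed 1..14
print(X.max())           # 14 -- the largest index fed into the Embedding layer
print(max_features)      # 14 rows would only cover indices 0..13, hence the IndexError
print(max_features + 1)  # 15 rows cover indices 0..14, with 0 reserved for padding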

Question:

I am trying to make a chatbot that uses a sequence-to-sequence model to respond to the user's input. The problem is that the input sequence given to the model will almost never be the same. The input sequence is a list of words. I have created a vocabulary that maps each word in the sequence to its own unique id; however, the input is still variable and not fixed, so I can't just use a sequence-to-sequence model. I understand that it is possible to use an encoder to map the sequence of words to a fixed vector representation and then have a decoder map that vector back to a sequence.

My question is: how would I go about encoding the sequence of words into a fixed-length vector? Is there any technique that could be used for this purpose?


Answer:

Mapping a sequence of words to a vector representation can be accomplished with a Recurrent Neural Network (RNN). You can take a look at this introduction: http://colah.github.io/posts/2015-08-Understanding-LSTMs/
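
For a concrete (if simplified) picture, here is a minimal Keras sketch of such an encoder; the vocabulary size and layer widths below are placeholder values, not recommendations:

from keras.models import Sequential
from keras.layers import Embedding, LSTM

vocab_size = 10000   # placeholder: number of word ids in your vocabulary
embed_dim = 128      # placeholder embedding size
encoding_dim = 256   # size of the fixed vector you want

encoder = Sequential()
# mask_zero=True makes the LSTM skip 0-padded timesteps, so padded inputs of
# different original lengths are handled uniformly
encoder.add(Embedding(vocab_size, embed_dim, mask_zero=True))
# By default the LSTM returns only its final hidden state: one vector of length
# encoding_dim per input sequence, regardless of the sequence length
encoder.add(LSTM(encoding_dim))

A decoder RNN, initialised from (or conditioned on) this fixed vector, then generates the response one word at a time; the tutorials below walk through complete examples.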

There is a tutorial in the TensorFlow toolkit that addresses this sequence-to-sequence mapping architecture with example code: https://www.tensorflow.org/versions/r0.11/tutorials/index.html

Before working with RNNs, however, I would recommend going through the basics of neural networks: http://deeplearning.net/software/theano/tutorial/#basics

Bengio's deep learning book, http://www.deeplearningbook.org/, covers a lot of material about RNNs; however, it involves quite a bit of math.

Question:

I was trying to implement a sequence-to-sequence learning task in Keras. I want the model to learn the transformation F(X) -> Y, where X is

  [[0., 0., 1., 0., 1., 1., 0., 0., 0., 0.],
   [1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
   [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
   [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
   [0., 1., 0., 1., 0., 0., 0., 0., 0., 0.],
   [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
   [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
   [1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
   [0., 0., 0., 0., 0., 1., 0., 0., 0., 0.],
   [0., 0., 0., 0., 1., 0., 0., 0., 0., 0.]]

and Y is given by

  [[1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
   [0., 0., 0., 0., 0., 1., 0., 0., 0., 0.],
   [0., 0., 0., 0., 1., 0., 0., 0., 0., 0.],
   [0., 0., 1., 0., 0., 0., 0., 0., 0., 0.],
   [0., 1., 0., 1., 0., 0., 0., 0., 0., 0.]]

Each column in the X and Y arrays represents a word in a sentence. However, each word has ten features in the input sequence (i.e. X) and just five features in Y. I tried implementing a model with the following code.

from keras.models import Sequential
from keras.layers import LSTM, Dense, Dropout, Activation, Bidirectional, TimeDistributed

LAYER_NUM = 3
HIDDEN_DIM = 900

model = Sequential()
model.add(Bidirectional(LSTM(HIDDEN_DIM, return_sequences=True), input_shape=(None, 10)))
for i in range(LAYER_NUM - 1):
    model.add(Bidirectional(LSTM(HIDDEN_DIM, return_sequences=True)))
    model.add(Dropout(0.2))
model.add(TimeDistributed(Dense(5)))
model.add(Activation('softmax'))
model.compile(loss="categorical_crossentropy", optimizer="rmsprop", metrics=["accuracy"])

However, this raised the following error.

ValueError: Error when checking target: expected activation_3 to have shape (10, 5) but got array with shape (5, 10)

Can someone guide me on this?


Answer:

The error gives it away. You said that columns represent the words; in the Keras model you have, the words would be rows (timesteps). So you have to transpose your data, X.T and Y.T, to fix it.
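
For example, a small sketch assuming X and Y hold the NumPy arrays shown in the question (one sentence of ten words), with the batch dimension Keras expects added via np.newaxis:

import numpy as np

X_t = X.T                       # (10 words, 10 features): rows are now the timesteps
Y_t = Y.T                       # (10 words, 5 features): matches TimeDistributed(Dense(5))

# Keras expects a leading batch dimension: (samples, timesteps, features)
X_batch = X_t[np.newaxis, ...]  # shape (1, 10, 10)
Y_batch = Y_t[np.newaxis, ...]  # shape (1, 10, 5)

model.fit(X_batch, Y_batch)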

Question:

A chatbot can be created with Sequence to Sequence Learning with Neural Networks. I have training chat data, but how do I use it?


Answer:

Sequence to Sequence Learning with Neural Networks is a way to use neural networks to translate sequences. The general setup is that you have a source sequence (say, a sentence in English) and a target sequence (its translation in French), and the task is to generate the target sequence by looking at the source sequence.

The challenge for traditional feed-forward neural networks is that the source and target lengths vary. In this paper, a Recurrent Neural Network (RNN) is used to encode the source sequence, i.e., the RNN reads the individual elements of the source sequence one by one. Once it finishes, the encoder has a fair summary of what the source sequence is.

You take the last state of the encoder and provide it as additional information to the decoder (which is basically a language model) to generate the target sequence element by element.
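
As a rough sketch of this encoder-decoder idea in Keras (the layer sizes are arbitrary and the tokens are one-hot encoded for simplicity; this is an illustration, not the exact setup from the paper):

from keras.models import Model
from keras.layers import Input, LSTM, Dense

num_tokens = 1000   # placeholder vocabulary size
latent_dim = 256    # placeholder size of the encoder state

# Encoder: read the source sequence and keep only its final states
encoder_inputs = Input(shape=(None, num_tokens))
_, state_h, state_c = LSTM(latent_dim, return_state=True)(encoder_inputs)

# Decoder: a conditioned language model, initialised with the encoder's final states,
# that predicts the target sequence element by element
decoder_inputs = Input(shape=(None, num_tokens))
decoder_seq = LSTM(latent_dim, return_sequences=True)(
    decoder_inputs, initial_state=[state_h, state_c])
decoder_outputs = Dense(num_tokens, activation='softmax')(decoder_seq)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')

During training the decoder input is the target sequence shifted by one step (teacher forcing); at inference time you feed the decoder its own previous prediction at each step.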

In your case, you can use it to generate responses. Say you have chat messages between two users. The chat message of user 1 would be the source sequence, and the corresponding reply from user 2 would be the target sequence. Training is done as described in the paper. After training, the model will try to mimic user 2's responses.