Hot questions for Using Neural networks in google colaboratory


Question:

I've recently started to use Google Colab, and wanted to train my first Convolutional NN. I imported the images from my Google Drive thanks to the answer I got here.

Then I pasted my code to create the CNN into Colab and started the process. Here is the complete code:

Part 1: Setting up Colab to import picture from my Drive

(Part 1 is copied from here, as it worked as expected for me.)

Step 1:

!apt-get install -y -qq software-properties-common python-software-properties module-init-tools
!add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null
!apt-get update -qq 2>&1 > /dev/null
!apt-get -y install -qq google-drive-ocamlfuse fuse

Step 2:

from google.colab import auth
auth.authenticate_user()

Step 3:

from oauth2client.client import GoogleCredentials
creds = GoogleCredentials.get_application_default()
import getpass
!google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL
vcode = getpass.getpass()
!echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret}

Step 4:

!mkdir -p drive
!google-drive-ocamlfuse drive

Step 5:

print('Files in Drive:')
!ls drive/

Part 2: Copy-pasting my CNN

I created this CNN with tutorials from a Udemy course. It uses Keras with TensorFlow as the backend. For the sake of simplicity I uploaded a really simple version, which is more than enough to demonstrate my problem.

from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten 
from keras.layers import Dense 
from keras.layers import Dropout
from keras.optimizers import Adam 
from keras.preprocessing.image import ImageDataGenerator 

parameters

imageSize=32

batchSize=64

epochAmount=50

CNN

classifier=Sequential() 

classifier.add(Conv2D(32, (3, 3), input_shape = (imageSize, imageSize, 3), activation = 'relu')) #convolutional layer

classifier.add(MaxPooling2D(pool_size = (2, 2))) #pooling layer

classifier.add(Flatten())

ANN

classifier.add(Dense(units=64, activation='relu')) #hidden layer

classifier.add(Dense(units=1, activation='sigmoid')) #output layer

classifier.compile(optimizer = "adam", loss = 'binary_crossentropy', metrics = ['accuracy']) #training method

image preprocessing

train_datagen = ImageDataGenerator(rescale = 1./255,
                               shear_range = 0.2,
                               zoom_range = 0.2,
                               horizontal_flip = True)

test_datagen = ImageDataGenerator(rescale = 1./255) 

training_set = train_datagen.flow_from_directory('drive/School/sem-2-2018/BSP2/UdemyCourse/CNN/dataset/training_set',
                                                 target_size = (imageSize, imageSize),
                                                 batch_size = batchSize,
                                                 class_mode = 'binary')

test_set = test_datagen.flow_from_directory('drive/School/sem-2-2018/BSP2/UdemyCourse/CNN/dataset/test_set',
                                            target_size = (imageSize, imageSize),
                                            batch_size = batchSize,
                                            class_mode = 'binary')

classifier.fit_generator(training_set,
                         steps_per_epoch = (8000//batchSize),
                         epochs = epochAmount,
                         validation_data = test_set,
                         validation_steps = (2000//batchSize))

Now comes my problem

First off, the dataset I used is a database of 10000 dog and cat pictures of various resolutions (8000 in the training_set, 2000 in the test_set).

I ran this CNN on Google Colab (with GPU support enabled) and on my PC (tensorflow-gpu on a GTX 1060).

This is an intermediate result from my PC:

Epoch 2/50
63/125 [==============>...............] - ETA: 2s - loss: 0.6382 - acc: 0.6520

And this from Colab:

Epoch 1/50
13/125 [==>...........................] - ETA: 1:00:51 - loss: 0.7265 - acc: 0.4916

Why is Google Colab so slow in my case?

Personally, I suspect the bottleneck is pulling and then reading the images from my Drive, but I don't know how to solve this other than choosing a different method to import the database.
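For reference, this is the kind of workaround I have in mind: copying the dataset once from the Drive mount to the VM's local disk and then training from the local copy (paths as in my code above; /content/dataset is just a name I picked):

# One-time copy of the dataset from the Drive mount to the VM's local disk,
# so training reads local files instead of going through Drive for every image.
!cp -r "drive/School/sem-2-2018/BSP2/UdemyCourse/CNN/dataset" /content/dataset

training_set = train_datagen.flow_from_directory('/content/dataset/training_set',
                                                 target_size = (imageSize, imageSize),
                                                 batch_size = batchSize,
                                                 class_mode = 'binary')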


Answer:

As @Feng has already noted, reading files from Drive is very slow. This tutorial suggests using a memory-mapped file format like HDF5 or LMDB to overcome this issue. That way the I/O operations are much faster (for a complete explanation of the speed gain of the HDF5 format, see this).
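For illustration, here is a minimal sketch (not the tutorial's exact code) of packing one image folder into a single HDF5 file with h5py and reading it back; the folder path, file name, and image size are placeholder assumptions:

import os
import h5py
from keras.preprocessing.image import load_img, img_to_array

# Pack one class folder into a single HDF5 file (a one-time cost).
# 'drive/dataset/training_set/cats' and 'train.h5' are placeholder names.
src = 'drive/dataset/training_set/cats'
paths = [os.path.join(src, f) for f in os.listdir(src)]
with h5py.File('train.h5', 'w') as f:
    images = f.create_dataset('images', (len(paths), 32, 32, 3), dtype='float32')
    for i, p in enumerate(paths):
        images[i] = img_to_array(load_img(p, target_size=(32, 32)))

# Reading slices from the local HDF5 file is one big sequential read
# instead of thousands of small requests to Drive.
with h5py.File('train.h5', 'r') as f:
    batch = f['images'][:64]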

Question:

I have a dataset of images on my Google Drive. I have this dataset both in a compressed .zip version and an uncompressed folder.

I want to train a CNN using Google Colab. How can I tell Colab where the images in my Google Drive are?

  1. The official tutorial does not help me, as it only shows how to upload single files, not a folder with 10000 images as in my case.

  2. Then I found this answer, but the solution is not complete, or at least I did not understand how to proceed after unzipping. Unfortunately, I am unable to comment on that answer, as I don't have enough Stack Overflow reputation.

  3. I also found this thread, but here all the answers use other tools, such as GitHub or Dropbox.

I hope someone can explain what I need to do, or tell me where to find help.

Edit 1:

I have found yet another thread asking the same question as mine. Sadly, of its three answers, two refer to Kaggle, which I don't know and don't use. The third answer provides two links: the first refers to the third thread I linked above, and the second only explains how to upload single files manually.


Answer:

To update the answer: you can now do this directly from Google Colab.

# Load the Drive helper and mount
from google.colab import drive

# This will prompt for authorization.
drive.mount('/content/drive')

!ls "/content/drive/My Drive"

Google Documentation
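Since the question also mentions a compressed .zip copy of the dataset, a common follow-up is to unzip the archive from the mounted Drive onto the VM's local disk, where reads are much faster; 'dataset.zip' below is a placeholder name for your archive:

# Extract the Drive-hosted archive to the VM's local disk; reading the
# extracted files locally avoids going through the Drive mount per image.
!unzip -q "/content/drive/My Drive/dataset.zip" -d /content/dataset
!ls /content/dataset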

Question:

I am training a neural network for Neural Machine Translation on Google Colaboratory. I know that the limit before disconnection is 12 hours, but I am frequently disconnected earlier (after 4 or 6 hours). The amount of time required for the training is more than 12 hours, so I save the model every 5000 epochs.

What I don't understand is: when I am disconnected from the runtime (while the GPU is in use), is the code still executed by Google on the VM? I ask because then I could easily save the intermediate models to Drive and continue the training even when I am disconnected.

Does anyone know?


Answer:

Yes, for ~1.5 hours after you close the browser window. To keep things running longer, you'll need an active tab.
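To make the periodic saving from the question concrete, here is a minimal sketch using Keras's ModelCheckpoint callback to write a checkpoint to the mounted Drive after every epoch; the model, data, and folder path are stand-ins, not the asker's actual NMT setup:

import os
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import ModelCheckpoint

# Placeholder folder on the mounted Drive; it must exist before saving.
ckpt_dir = '/content/drive/My Drive/nmt-checkpoints'
os.makedirs(ckpt_dir, exist_ok=True)

# Toy stand-in model and data; substitute the real NMT model and dataset.
model = Sequential([Dense(1, input_shape=(10,), activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy')

# Save the full model to Drive at the end of every epoch; Keras fills {epoch}.
checkpoint = ModelCheckpoint(ckpt_dir + '/model-{epoch:04d}.h5')

model.fit(np.random.rand(32, 10), np.random.randint(2, size=(32, 1)),
          epochs=5, callbacks=[checkpoint])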

Question:

I am tuning hyperparameters for a neural network via grid search on Google Colab. I got a "Transport endpoint is not connected" error after my code had been executing for 3-4 hours. I found out that this is apparently because Google Colab doesn't want people to use the platform for long periods (though I'm not quite sure).

However, oddly, when I reopened the browser after the exception was thrown, the cell was still running. I am not sure what happens to the process once this exception is thrown.

Thank you


Answer:

In Google Colab, you can use the GPU service for up to 12 hours; after that, it will halt your execution. If you run it for 3-4 hours and leave the window idle, it will just stop continuously displaying output in your browser, and refreshing the window will restore the connection.

If you run it for 34 hours (hyphens matter), it will definitely be terminated; this is apparently done to discourage people from mining cryptocurrency on the platform. If you have to run your training for more than 12 hours, all you need to do is save checkpoints to your Google Drive; then you can restart the training once a session is terminated. If you are comfortable enough with the requests library in Python, you can even automate this.
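As a minimal sketch of that restart step, assuming the checkpoints were saved as full Keras models on the mounted Drive (the path, epoch number, and data below are stand-ins):

import numpy as np
from keras.models import load_model

# After reconnecting and remounting Drive, reload the latest checkpoint and
# continue training where it stopped; path and epoch number are placeholders.
model = load_model('/content/drive/My Drive/nmt-checkpoints/model-0012.h5')

x_train = np.random.rand(32, 10)               # stand-in data; use the real set
y_train = np.random.randint(2, size=(32, 1))
model.fit(x_train, y_train,
          initial_epoch=12,                    # resume the epoch counter
          epochs=50)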

Question:

Whenever I specify a path like "C:\Users\Admin\Desktop\tumor", I get a "file not found" error from cv2.imread(). Can anyone explain the correct way to read the images?


Answer:

You'll need to transfer files to the backend VM. Recipes are in the I/O example notebook: https://colab.research.google.com/notebooks/io.ipynb

Or, you can use a local runtime as described here: http://research.google.com/colaboratory/local-runtimes.html
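For the simplest case, here is a sketch of the upload-then-read recipe from that notebook: files.upload() copies files from your local machine to the VM, after which cv2.imread() can read them by name, with no Windows paths involved:

from google.colab import files
import cv2

# Opens a file-picker in the browser and copies the chosen files into the
# VM's current working directory; returns a dict of {filename: bytes}.
uploaded = files.upload()

# Read each uploaded image by its filename on the VM.
for name in uploaded:
    img = cv2.imread(name)
    print(name, None if img is None else img.shape)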