Hot questions on using neural networks in Google Cloud Platform


Question:

I have a requirement where I need to deploy a convolutional neural network model on an offline device. I know we can use Google Cloud ML to train a model, tune the hyperparameters and deploy it for prediction.

But my question is whether we can download the trained TensorFlow model and deploy it on a custom device for prediction.

Note: the custom device will have plenty of processing power but no internet connectivity.


Answer:

Yes. The training service and the prediction services are completely separate. To train a model for a custom device, you create a TensorFlow script to train the model. The script will typically describe one TensorFlow graph for training and a second for prediction (the prediction graph will be constructed in such a way that it can load the parameters learned during training). The prediction graph will be tailored for your custom hardware.

Be sure to include commands to export the prediction graph to GCS. For an example of how to export a model, see this post.
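A minimal sketch of such an export, assuming a TF 1.x-style graph (the tensor names, shapes, and export path below are placeholders, and the tiny linear model stands in for your CNN; a real script would restore the trained parameters with a Saver before exporting):

```python
import os
import tempfile

import tensorflow as tf

# Sketch only: a stand-in prediction graph. In a real script the graph
# would mirror your trained CNN, and the variables would be restored
# from the training checkpoint instead of being freshly initialized.
tf1 = tf.compat.v1
tf1.disable_eager_execution()  # no-op on TF 1.x, needed on TF 2.x

export_dir = os.path.join(tempfile.mkdtemp(), "1")

graph = tf1.Graph()
with graph.as_default():
    # Prediction-only inputs and outputs (names are illustrative).
    images = tf1.placeholder(tf.float32, shape=[None, 28, 28, 1], name="images")
    flat = tf1.reshape(images, [-1, 784])
    weights = tf1.get_variable("weights", shape=[784, 10])
    logits = tf1.matmul(flat, weights, name="logits")

with tf1.Session(graph=graph) as sess:
    sess.run(tf1.global_variables_initializer())
    # With a real checkpoint: tf1.train.Saver().restore(sess, ckpt_path)
    tf1.saved_model.simple_save(
        sess, export_dir,
        inputs={"images": images},
        outputs={"logits": logits})
```

The resulting SavedModel directory can be copied onto the offline device and loaded there (for example with the TF 1.x saved_model loader), with no connectivity required.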

Question:

I have implemented a neural network model using Python and Tensorflow, which normally runs on my own computer. Now I would like to train it on new datasets on the Google Cloud Platform. Do you think it is possible? Do I need to change my code?

Thank you very much for your help!


Answer:

Google Cloud offers the Cloud ML Engine service, which allows you to train your models and perform predictions without having to run and maintain an instance with the required software.

To run the TensorFlow NN models you already have, you will not need to change your code; you will only have to package the trainer appropriately, as described in the documentation, and run an ML Engine job that performs the training itself. Once you have your model, you can also deploy it in the same service and later get predictions, with different options depending on your requirements (urgency in getting the predictions, data set sources, etc.).
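For illustration, submitting a packaged trainer might look like this (the job name, bucket, region, versions, and package layout are assumptions; the trainer code would live in a Python package such as trainer/ with an __init__.py and a task.py entry point):

```shell
# Sketch of a Cloud ML Engine training job submission.
# Everything after the bare "--" is passed through to your trainer.
gcloud ml-engine jobs submit training my_training_job \
  --module-name=trainer.task \
  --package-path=trainer/ \
  --staging-bucket=gs://your-bucket \
  --region=us-central1 \
  --runtime-version=1.12 \
  --python-version=3.5 \
  -- \
  --train-data=gs://your-bucket/data/train.csv
```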

Alternatively, as suggested in the comments, you can always launch a Compute Engine instance and run your TensorFlow model there as if you were doing it locally on your computer. However, I would strongly recommend the approach I proposed earlier: you will save some money, because you will only be charged for your usage (training jobs and/or predictions), and you will not need to configure an instance from scratch.
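A sketch of that alternative, with placeholder instance name, zone, machine type, and image:

```shell
# Create a VM, then SSH in, install TensorFlow, and run the script
# exactly as you would locally.
gcloud compute instances create tf-training-vm \
  --zone=us-central1-a \
  --machine-type=n1-standard-8 \
  --image-family=debian-9 \
  --image-project=debian-cloud
gcloud compute ssh tf-training-vm --zone=us-central1-a
```

Remember to stop or delete the instance when training finishes, since Compute Engine bills for the whole time the VM is running.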

Question:

I created a Windows VM where I have the BERT master branch, SQuAD, and the BERT-large model. I tried to run the SQuAD fine-tuning using this:

python run_squad.py \
  --vocab_file=$BERT_LARGE_DIR/vocab.txt \
  --bert_config_file=$BERT_LARGE_DIR/bert_config.json \
  --init_checkpoint=$BERT_LARGE_DIR/bert_model.ckpt \
  --do_train=True \
  --train_file=$SQUAD_DIR/train-v2.0.json \
  --do_predict=True \
  --predict_file=$SQUAD_DIR/dev-v2.0.json \
  --train_batch_size=24 \
  --learning_rate=3e-5 \
  --num_train_epochs=2.0 \
  --max_seq_length=384 \
  --doc_stride=128 \
  --output_dir=gs://some_bucket/squad_large/ \
  --use_tpu=True \
  --tpu_name=$TPU_NAME \
  --version_2_with_negative=True

It threw an error: googleapiclient.errors.HttpError: <HttpError 403 when requesting https://tpu.googleapis.com/v1alpha1/projects/projectname/locations/us-central1-a/nodes/testnode?alt=json returned "Request had insufficient authentication scopes.">

Is there a way to change the scope of existing VM to cloud-platform after VM is created?


Answer:

Is there a way to change the scope of existing VM to cloud-platform after VM is created?

Yes, you can. Go to the Google Cloud Console, select your instance and stop it. Then edit the instance and change its access scopes (for a TPU job like the one above, grant the cloud-platform scope, i.e. "Allow full access to all Cloud APIs"). Then restart your instance.
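The same change can be made from the command line with gcloud (instance name and zone below are placeholders). Access scopes can only be changed while the instance is stopped:

```shell
# Stop the VM, grant the broad cloud-platform scope, then restart it.
gcloud compute instances stop my-vm --zone=us-central1-a
gcloud compute instances set-service-account my-vm \
  --zone=us-central1-a \
  --scopes=https://www.googleapis.com/auth/cloud-platform
gcloud compute instances start my-vm --zone=us-central1-a
```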