Hot questions: using neural networks on a Raspberry Pi

Question:

I'm trying to make a simple gesture recognition system to use with my Raspberry Pi equipped with a camera. I would like to train a neural network with TensorFlow on my more powerful laptop and then transfer it to the RPi for prediction (as part of a Magic Mirror). Is there a way to export the trained network and weights and use a lightweight version of TensorFlow for the linear algebra and prediction, without the overhead of all the symbolic graph machinery that is necessary for training? I have seen the tutorials on TensorFlow Serving, but I'd rather not set up a server and just have it run the prediction on the RPi.


Answer:

Yes, this is possible, and the pieces are available in the source repository. This lets you deploy and run a model trained on your laptop. Note that it is the same model as the one you trained, so it can be big.
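As a rough sketch of the export step (assuming a TF 1.x graph-based workflow, with a hypothetical checkpoint prefix and output-node name -- substitute your own), you can freeze the trained variables into a single GraphDef on the laptop and copy that one file to the RPi:

import tensorflow as tf

# Hypothetical checkpoint prefix 'gesture_model' and output node 'output';
# replace both with the names used in your own training script.
with tf.Session() as sess:
    saver = tf.train.import_meta_graph('gesture_model.meta')
    saver.restore(sess, 'gesture_model')
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ['output'])
    tf.train.write_graph(frozen, '.', 'gesture_model_frozen.pb', as_text=False)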

To deal with size and efficiency, TF is currently moving toward a quantization approach. After your model is trained, a few extra steps let you "translate" it into a lighter model with similar accuracy. The current implementation is quite slow, though. There is a recent post that shows the whole process for iOS, which is pretty similar to the Raspberry Pi overall.

The Makefile contribution is also quite relevant for tuning and extra configuration.

Beware that this code moves often and breaks. It is sometimes useful to check out an old "release" tag to get something that works end to end.
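On the Pi itself, a minimal inference sketch (again assuming TF 1.x and the hypothetical tensor names input:0 / output:0 from the freezing step) could then look like this:

import numpy as np
import tensorflow as tf

# Load the frozen GraphDef produced on the laptop.
with tf.gfile.GFile('gesture_model_frozen.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')

with tf.Session(graph=graph) as sess:
    # Dummy 64x64 RGB frame standing in for a camera image;
    # use whatever input shape your network was trained on.
    frame = np.zeros((1, 64, 64, 3), dtype=np.float32)
    preds = sess.run('output:0', feed_dict={'input:0': frame})
    print(preds)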

Question:

I'm doing a project where the inputs are taken from a wheel encoder as float values. How can I collect the input every 60 seconds and feed it to the neural network model? In the code below, km_per_hour has to be recorded every 60 seconds and fed to the neural network.

if __name__ == '__main__':
    from time import sleep

    init_GPIO()
    init_interrupt()
    while True:
        calculate_speed(40)
        # km_per_hour is assumed to be a global updated by calculate_speed()
        print('kmh:{0:.0f}'.format(km_per_hour))
        sleep(0.1)

Answer:

Capture the time at the start of each round and check if 60s have passed.

if __name__ == '__main__':
    import time
    init_GPIO()
    init_interrupt()
    while True:
        start_time = time.time()  # seconds since the Unix epoch
        end_time = start_time + 60
        list_speeds = list()
        # collect speed readings for 60 seconds
        while time.time() < end_time:
            km_per_hour = calculate_speed(40)
            list_speeds.append(km_per_hour)
            print('kmh:{0:.0f}'.format(km_per_hour))
            time.sleep(0.1)
        else:  # while-else: runs once the 60-second window is over
            feed_your_dino(list_speeds)

If you don't want the data collection to pause while the model is fed, you can run the function in a new thread instead, so it doesn't block your main loop:

if __name__ == '__main__':
    import time, threading
    init_GPIO()
    init_interrupt()
    while True:
        start_time = time.time()  # seconds since the Unix epoch
        end_time = start_time + 60
        list_speeds = list()
        while time.time() < end_time:
            km_per_hour = calculate_speed(40)
            list_speeds.append(km_per_hour)
            print('kmh:{0:.0f}'.format(km_per_hour))
            time.sleep(0.1)
        else:  # while-else: runs once the 60-second window is over
            # hand the collected speeds to the model without blocking the loop
            th_my_ai = threading.Thread(target=feed_your_dino, args=(list_speeds,))
            th_my_ai.start()

Question:

I have retrained the TensorFlow Inception image classification model on my own collected dataset and it is working fine. Now I want to run a continuous image classifier on a live camera feed. I have a Raspberry Pi camera for input.

Here's the Google I/O 2017 link (https://www.youtube.com/watch?v=ZvccLwsMIWg&index=18&list=PLOU2XLYxmsIJqntMn36kS3y_lxKmQiaAS); I want to do the same as shown in the video at 3:20/8:49.

Is there any tutorial to achieve this?


Answer:

Step one

Put your TensorFlow model aside for this first step. Follow one of the tutorials available online, like this one, which shows how to get an image from your Raspberry Pi.

You should be able to prove to yourself that your code works by displaying the images on a screen or FTPing them to another computer that has one.

You should also be able to benchmark the rate at which you can capture images; it should be about 5 per second or faster.
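For example, a quick capture-rate check with the picamera library (an assumption -- any capture method works, as long as you measure it) might look like this:

import io
import time
from picamera import PiCamera

camera = PiCamera()
camera.resolution = (640, 480)
time.sleep(2)  # give the sensor a moment to warm up

stream = io.BytesIO()
start = time.time()
count = 0
# capture_continuous writes one JPEG per iteration into the stream
for _ in camera.capture_continuous(stream, format='jpeg', use_video_port=True):
    count += 1
    stream.seek(0)
    stream.truncate()
    if count >= 50:
        break
print('frames per second: {:.1f}'.format(count / (time.time() - start)))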

Step two

Look up and integrate image resizing as needed. Google and Stack Overflow are great places to search for how to do that. Again, verify that you can resize the captured image to exactly the input size your TensorFlow model expects.
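A minimal resizing sketch with Pillow, assuming your retrained Inception model expects 299x299 RGB inputs scaled to [0, 1] (check your own model's preprocessing):

from PIL import Image
import numpy as np

def load_and_resize(path, size=(299, 299)):
    # Open the captured frame, force RGB, and resize to the network's input size.
    img = Image.open(path).convert('RGB').resize(size)
    # Add a batch dimension and scale pixel values to [0, 1].
    return np.expand_dims(np.asarray(img, dtype=np.float32) / 255.0, axis=0)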

Step three

Copy some of the captured images over to your dev environment and verify that the model handles them as is.

Step four

FTP your trained TensorFlow model to the Pi and install the supporting libraries. Integrate the pieces into one codebase and turn it on.
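As a hedged sketch of that final integration -- assuming TF 1.x, a frozen model file named retrained_graph.pb, the tensor names input:0 and final_result:0 (check your own graph for the real names), and picamera for capture -- the main loop might look like:

import io
import numpy as np
import tensorflow as tf
from PIL import Image
from picamera import PiCamera

# Load the retrained, frozen model once at startup.
with tf.gfile.GFile('retrained_graph.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
tf.import_graph_def(graph_def, name='')

camera = PiCamera()
camera.resolution = (640, 480)
stream = io.BytesIO()

with tf.Session() as sess:
    for _ in camera.capture_continuous(stream, format='jpeg', use_video_port=True):
        stream.seek(0)  # rewind to read the JPEG that was just written
        img = Image.open(stream).convert('RGB').resize((299, 299))
        batch = np.expand_dims(np.asarray(img, dtype=np.float32) / 255.0, axis=0)
        preds = sess.run('final_result:0', feed_dict={'input:0': batch})
        print(np.argmax(preds))
        stream.seek(0)
        stream.truncate()  # reset the stream for the next frame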