Hot questions: using Gaussian noise in neural networks


I'm trying to add Gaussian noise to a layer of my network in the following way.

def Gaussian_noise_layer(input_layer, std):
    noise = tf.random_normal(shape = input_layer.get_shape(), mean = 0.0, stddev = std, dtype = tf.float32) 
    return input_layer + noise

I'm getting the error:

ValueError: Cannot convert a partially known TensorShape to a Tensor: (?, 2600, 2000, 1)

My minibatches sometimes need to be of different sizes, so the size of the input_layer tensor will not be known until execution time.

If I understand correctly, an answer to Cannot convert a partially converted tensor in TensorFlow suggested setting the shape to tf.shape(input_layer). However, when I then try to apply a convolutional layer to that noisy layer, I get another error:

ValueError: dims of shape must be known but is None

What is the correct way to add Gaussian noise to an input layer whose shape is unknown until execution time?


To dynamically get the shape of a tensor with unknown dimensions, use tf.shape().

For instance:

import tensorflow as tf
import numpy as np

def gaussian_noise_layer(input_layer, std):
    noise = tf.random_normal(shape=tf.shape(input_layer), mean=0.0, stddev=std, dtype=tf.float32) 
    return input_layer + noise

inp = tf.placeholder(tf.float32, shape=[None, 8], name='input')
noise = gaussian_noise_layer(inp, .2)
noise.eval(session=tf.Session(), feed_dict={inp: np.zeros((4, 8))})


What is the fastest way to change the standard deviation of a Gaussian noise layer in Keras during training?

Currently I am reloading the whole network with the adapted standard deviation every iteration, which is really slow.

Thank you in advance!


Can you try using Keras backend variables?

from keras import backend as K

self.std_var = K.variable(value=std)

When building the network:

# pass the backend variable as the layer's stddev
g = GaussianNoise(self.std_var)

During training (possibly inside a loop):

K.set_value(self.std_var, <new_std_val>)

Try this snippet and see whether it works.


I was wondering if there is a way to remove a Gaussian noise layer after saving the model (e.g. to "network.h5"), so that when using my neural net in applications it will not be affected by such layers.


tf.keras.layers.GaussianNoise() is a regularization layer. You don't need to worry about it during prediction. It is active only during training time.
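The training-only behavior can be sketched in plain NumPy (a hypothetical helper for illustration, not the Keras implementation): noise is injected only when a training flag is set, and the layer acts as the identity at inference time.

```python
import numpy as np

def gaussian_noise(x, std, training):
    # Mimics GaussianNoise semantics: additive zero-mean noise
    # during training, identity (no-op) at prediction time.
    if not training:
        return x
    return x + np.random.normal(0.0, std, size=x.shape)

x = np.ones((2, 3), dtype=np.float32)
out_train = gaussian_noise(x, 0.1, training=True)
out_infer = gaussian_noise(x, 0.1, training=False)
```

With `training=False` the input passes through unchanged, which is why a saved model with this layer is safe to use for prediction.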


I converted a pretrained keras model to use it with Tensorflow.js following the steps in this guide

Now, when I try to import it to javascript using

const model = tf.loadModel("{% static "keras/model.json" %}");

The following error shows up:

Uncaught (in promise) Error: Unknown layer: GaussianNoise. This may be due to one of the following reasons:
1. The layer is defined in Python, in which case it needs to be ported to TensorFlow.js or your JavaScript code.
2. The custom layer is defined in JavaScript, but is not registered properly with 
    at new t (errors.ts:48)
    at deserializeKerasObject (generic_utils.ts:239)
    at deserialize (serialization.ts:31)
    at t.fromConfig (models.ts:940)
    at deserializeKerasObject (generic_utils.ts:274)
    at deserialize (serialization.ts:31)
    at models.ts:302
    at common.ts:14
    at (common.ts:14)
    at i (common.ts:14)

I'm using 0.15.3 version of Tensorflow.js, imported this way:

<script src=""></script>

I trained my neural network with Tensorflow 1.12.0 and Keras 2.2.4


You are using the layer tf.layers.gaussianNoise, which is not yet supported by tfjs.

Consider replacing this layer with one that is supported.


I'd like to calculate the receptive fields (e.g. Gaussian) for spiking neural networks in python. Let's say that I want to encode the iris data set and transform it into spike trains. I work with Brian framework, and I'm looking for a way to encode my data sets.

Is there any way to do it automatically? Or any site explaining the transformation process? I've read several papers, but this process is only partially explained.

Thanks in advance


For overlapping Gaussian RF, you need to know the minimum (I_min) and maximum (I_max) for each variable. Then, (again for each variable) you create an array of N input neurons located at the peaks of N overlapping Gaussians. Use the following formulas to space out the neurons evenly over the variable range (this is of course pseudocode):

range = I_max - I_min
for (i = 1..N)
    gaussian_i_mean = I_min + range * (2*i - 3) / (2 * (N - 2))
    gaussian_i_sd = range / (beta * (N - 2))
end for

beta controls the width of the Gaussian. See this paper for more details.
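As a sketch, the pseudocode above can be written in NumPy (the function name and defaults are illustrative; the center and width formulas follow the answer):

```python
import numpy as np

def gaussian_rf(values, i_min, i_max, n=10, beta=1.5):
    """Encode each input value as the responses of n overlapping Gaussian
    receptive fields spaced evenly over [i_min, i_max]."""
    rng = i_max - i_min
    i = np.arange(1, n + 1)
    # Centers and shared width, per the formulas above
    means = i_min + rng * (2 * i - 3) / (2 * (n - 2))
    sd = rng / (beta * (n - 2))
    vals = np.asarray(values, dtype=float).reshape(-1, 1)
    # Rows: input values, columns: neurons; responses lie in (0, 1]
    return np.exp(-((vals - means) ** 2) / (2 * sd ** 2))

# Example: encode one value from the unit range with 10 neurons
resp = gaussian_rf([0.5], 0.0, 1.0)
```

The resulting responses can then be mapped to spike times, e.g. by letting neurons with stronger responses fire earlier.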