Is there a tensorflow way to extract/save mean and std used for normalization?


I am normalizing my input training data using data_norm = tf.nn.l2_normalize(data, 0).

The data is of shape [None, 4]. Each column is a feature. It might look like this:

data = [[-3., 0.2, 1.6, 0.5], 
        [3.6, 1.5, -1.9, 0.71], 
       ...]

I understand that if the training set is normalized, the test set should be normalized too, but using the mean and std from the training set. (I assume this also applies during actual use of the NN, i.e. the input should be normalized with the training-set mean and std before being fed into the network.)

Is there a way to extract/save the mean and std used for normalization by this function, so I can normalize my test set with the same mean and std used for the training data? I know how to save the weights etc. with saver.save(sess, "checkpoints/" + save_id). Is there a way to save/load the mean and std like this?

tf.nn.l2_normalize uses the real_time mean of the input data; you can't use this function to apply the training data's mean or std. See l2_normalize_docs:

output_l2_normalize = input / sqrt(max(sum(input**2), epsilon))

Note: Since you are trying to normalize the input data, you may precompute the global (training-set) mean and std and write your own function to normalize.
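
For example, a minimal sketch of such a function using NumPy (X_train and X_test are placeholder arrays; only the training split is used to fit the statistics):

import numpy as np

def standardize(X, mean, std, eps=1e-12):
    # shift/scale with statistics computed on the training set only
    return (X - mean) / (std + eps)

train_mean = np.mean(X_train, axis=0)   # per-feature mean, shape (4,)
train_std = np.std(X_train, axis=0)     # per-feature std, shape (4,)

X_train_norm = standardize(X_train, train_mean, train_std)
X_test_norm = standardize(X_test, train_mean, train_std)

# the statistics can be persisted next to the model checkpoint, e.g.:
np.savez("checkpoints/norm_stats.npz", mean=train_mean, std=train_std)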


From the documentation:

For a 1-D tensor with dim = 0, computes

output = x / sqrt(max(sum(x**2), epsilon))

epsilon defaults to 1e-12 (i.e. 10^-12).

So you could just apply this same function to the test data.
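
For instance, a minimal sketch (train_data and test_data are placeholder tensors), applying the same op with the same axis to each split:

train_norm = tf.nn.l2_normalize(train_data, 0)
test_norm = tf.nn.l2_normalize(test_data, 0)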

HTH!

Cheers,

-maashu


I am not an expert in TensorFlow, but I am happy to share what worked for me. I made two additional variables before starting the training session:

import numpy as np
import tensorflow as tf

# store the training-set statistics as graph variables so they end up in the checkpoint
train_mean = tf.Variable(np.mean(X_train), name='train_mean', dtype=tf.float64)
train_std = tf.Variable(np.std(X_train), name='train_std', dtype=tf.float64)

# define the rest of the model variables here, then build the init op
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)

    # normalize data
    train_mean_py = sess.run(train_mean)
    train_std_py = sess.run(train_std)
    X_train = (X_train - train_mean_py) / train_std_py
    X_test = (X_test - train_mean_py) / train_std_py

    # do training

    # save model
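    # because train_mean and train_std are tf.Variables, a plain tf.train.Saver
    # stores them in the checkpoint together with the model weights
    # (model_name is a placeholder matching the restore snippet below)
    saver = tf.train.Saver()
    saver.save(sess, "./trained_models/{0}.ckpt".format(model_name))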

When I recover the model later in a different script, I do the following:

# define the variables that will be recovered from the checkpoint
train_mean = tf.Variable(0., name='train_mean', dtype=tf.float64)
train_std = tf.Variable(0., name='train_std', dtype=tf.float64)

# (re)build the rest of the graph here, then:
init = tf.global_variables_initializer()
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(init)

    saver.restore(sess, "./trained_models/{0}.ckpt".format(model_name))

    # normalize data
    train_mean_py = sess.run(train_mean)
    train_std_py = sess.run(train_std)
    X_test = (X_test - train_mean_py) / train_std_py
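
The same statistics apply at inference time (a sketch, still inside the restored session; X_new, input_ph, and output are placeholder names): any new input is normalized with the restored training mean and std before being fed to the network.

    X_new_norm = (X_new - train_mean_py) / train_std_py
    prediction = sess.run(output, feed_dict={input_ph: X_new_norm})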


Comments
  • What is a "real_time" mean as opposed to a regular mean?
  • It is computed at run time from the function's input data, not restored from any saved constant; you can think of it as a regular mean.