Difference between Variable and get_variable in TensorFlow

As far as I know, Variable is the default operation for making a variable, and get_variable is mainly used for weight sharing.

On the one hand, some people suggest using get_variable instead of the primitive Variable operation whenever you need a variable. On the other hand, I rarely see get_variable used in TensorFlow's official documentation and demos.

Thus I want to know some rules of thumb on how to correctly use these two mechanisms. Are there any "standard" principles?

I'd recommend always using tf.get_variable(...) -- it will make it much easier to refactor your code if you ever need to share variables, e.g. in a multi-GPU setting (see the multi-GPU CIFAR example). There is no downside to it.

Pure tf.Variable is lower-level; at one point tf.get_variable() did not exist, so some code still uses the low-level way.
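To make the sharing point concrete, here is a sketch of what that refactoring buys you (the dense_layer helper and the "tower" scope name are my own illustration, not from the answer; written against the TF 1.x graph API via tf.compat.v1 so it also runs under TF 2):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

def dense_layer(x, units, scope):
    # get_variable looks the weights up in the current variable scope;
    # with AUTO_REUSE it creates them on the first call and reuses them
    # on every later call with the same scope/name.
    with tf.variable_scope(scope, reuse=tf.AUTO_REUSE):
        w = tf.get_variable("w", shape=[int(x.shape[1]), units],
                            initializer=tf.glorot_uniform_initializer())
        b = tf.get_variable("b", shape=[units],
                            initializer=tf.zeros_initializer())
    return tf.matmul(x, w) + b

x1 = tf.placeholder(tf.float32, [None, 4])
x2 = tf.placeholder(tf.float32, [None, 4])
y1 = dense_layer(x1, 3, "tower")  # creates tower/w and tower/b
y2 = dense_layer(x2, 3, "tower")  # reuses the very same weights
```

Had dense_layer used tf.Variable internally, the second call would have silently created a second, independent set of weights.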

tf.Variable is a class, and there are several ways to create a tf.Variable, including tf.Variable.__init__ and tf.get_variable.

tf.Variable.__init__: Creates a new variable with initial_value.

W = tf.Variable(<initial-value>, name=<optional-name>)

tf.get_variable: Gets an existing variable with these parameters or creates a new one. You can also pass an initializer.

W = tf.get_variable(name, shape=None, dtype=tf.float32, initializer=None,
       regularizer=None, trainable=True, collections=None)

It's very useful to use initializers such as xavier_initializer:

W = tf.get_variable("W", shape=[784, 256],
       initializer=tf.contrib.layers.xavier_initializer())

More information here.

I can find two main differences between one and the other:

  1. First is that tf.Variable will always create a new variable, whereas tf.get_variable gets an existing variable with specified parameters from the graph, and if it doesn't exist, creates a new one.

  2. tf.Variable requires that an initial value be specified.
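A minimal sketch of the first difference (TF 1.x graph semantics; imported via tf.compat.v1 so it also runs under TF 2):

```python
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# tf.Variable never reuses: a name collision is resolved by renaming.
a = tf.Variable(0.0, name="x")
b = tf.Variable(0.0, name="x")   # no error, silently becomes "x_1:0"

# tf.get_variable refuses to silently duplicate a name.
c = tf.get_variable("y", shape=[], initializer=tf.zeros_initializer())
try:
    tf.get_variable("y", shape=[])
    raised = False
except ValueError:               # "Variable y already exists ..."
    raised = True
```

So with tf.get_variable, an accidental double-creation fails loudly instead of quietly duplicating parameters.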

It is important to clarify that the function tf.get_variable prefixes the name with the current variable scope to perform reuse checks. For example:

with tf.variable_scope("one"):
    a = tf.get_variable("v", [1]) #a.name == "one/v:0"
with tf.variable_scope("one"):
    b = tf.get_variable("v", [1]) #ValueError: Variable one/v already exists
with tf.variable_scope("one", reuse = True):
    c = tf.get_variable("v", [1]) #c.name == "one/v:0"

with tf.variable_scope("two"):
    d = tf.get_variable("v", [1]) #d.name == "two/v:0"
    e = tf.Variable(1, name = "v", expected_shape = [1]) #e.name == "two/v_1:0"

assert(a is c)  #Assertion is true, they refer to the same object.
assert(a is d)  #AssertionError: they are different objects
assert(d is e)  #AssertionError: they are different objects

The last assertion error is interesting: two variables with the same name under the same scope are supposed to be the same variable. But if you check the names of variables d and e, you will see that TensorFlow renamed variable e:

d.name   #d.name == "two/v:0"
e.name   #e.name == "two/v_1:0"

Another difference is that variables created with tf.get_variable are tracked in the ('__variable_store',) collection, while those created with the tf.Variable constructor are not.

Please see the source code:

def _get_default_variable_store():
  store = ops.get_collection(_VARSTORE_KEY)
  if store:
    return store[0]
  store = _VariableStore()
  ops.add_to_collection(_VARSTORE_KEY, store)
  return store

Let me illustrate that:

import tensorflow as tf
from tensorflow.python.framework import ops

embedding_1 = tf.Variable(tf.constant(1.0, shape=[30522, 1024]), name="word_embeddings_1", dtype=tf.float32) 
embedding_2 = tf.get_variable("word_embeddings_2", shape=[30522, 1024])

graph = tf.get_default_graph()
collections = graph.collections

for c in collections:
    stores = ops.get_collection(c)
    print('collection %s: ' % str(c))
    for k, store in enumerate(stores):
        try:
            print('\t%d: %s' % (k, str(store._vars)))
        except AttributeError:  # plain variables have no _vars dict
            print('\t%d: %s' % (k, str(store)))
    print('')

The output:

collection ('__variable_store',):
	0: {'word_embeddings_2': <tf.Variable 'word_embeddings_2:0' shape=(30522, 1024) dtype=float32_ref>}

Comments
  • get_variable is new way, Variable is old way (which might be supported forever) as Lukasz says (PS: he wrote much of the variable name scoping in TF)
  • Thank you so much for your answer. But I still have one question about how to replace tf.Variable with tf.get_variable everywhere. That is when I want to initialize a variable with a numpy array, I cannot find a clean and efficient way of doing it as I do with tf.Variable. How do you solve it? Thanks.
  • Yes, by Variable actually I mean using its __init__. Since get_variable is so convenient, I wonder why most TensorFlow code I saw use Variable instead of get_variable. Are there any conventions or factors to consider when choosing between them. Thank you!
  • If you want to have a certain value, using Variable is simple: x = tf.Variable(3).
  • @SungKim normally when we use tf.Variable() we can initialize it as a random value from a truncated normal distribution. Here is my example w1 = tf.Variable(tf.truncated_normal([5, 50], stddev = 0.01), name = 'w1'). What would the equivalent of this be? how do I tell it I want a truncated normal? Should I just do w1 = tf.get_variable(name = 'w1', shape = [5,50], initializer = tf.truncated_normal, regularizer = tf.nn.l2_loss) ?
  • @Euler_Salter: You can use tf.truncated_normal_initializer() to get the desired result.
  • Great example! Regarding d.name and e.name, I've just come across this TensorFlow doc on graph operation naming that explains it: if the default graph already contains an operation named "answer", TensorFlow appends "_1", "_2", and so on to the name in order to make it unique.
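On the comment above about initializing from a numpy array: one option (a sketch, TF 1.x API via tf.compat.v1; the variable name is my own) is to wrap the array in tf.constant_initializer and give get_variable the matching shape:

```python
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

pretrained = np.arange(6, dtype=np.float32).reshape(2, 3)

# Rough equivalent of tf.Variable(pretrained): the constant initializer
# replays the array's values when the variable is initialized.
w = tf.get_variable("w_from_numpy",
                    shape=pretrained.shape,
                    initializer=tf.constant_initializer(pretrained))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    assert np.allclose(sess.run(w), pretrained)
```

This keeps the variable inside the variable store, so it remains shareable through variable scopes like any other get_variable result.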