## Adjust Single Value within Tensor -- TensorFlow

I feel embarrassed asking this, but how do you adjust a single value within a tensor? Suppose you want to add '1' to only one value within your tensor?

Doing it by indexing doesn't work:

```
TypeError: 'Tensor' object does not support item assignment
```

One approach would be to build an identically shaped tensor of zeros, set a 1 at the position you want, and then add the two tensors together. But setting that 1 runs into the same item-assignment problem as before.

I've read through the API docs several times and can't seem to figure out how to do this. Thanks in advance!

**UPDATE:** TensorFlow 1.0 includes a `tf.scatter_nd()` operator, which can be used to create `delta` below without creating a `tf.SparseTensor`.
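As a sketch of that update (assuming TensorFlow 1.0 or later; under TF 1.x you would evaluate `result` in a session, while in TF 2.x eager mode the value is available directly):

```python
import tensorflow as tf

# Build the same `delta` as the SparseTensor approach below,
# but with tf.scatter_nd (available since TensorFlow 1.0).
c = tf.constant([[0.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0]])

indices = tf.constant([[1, 1]])  # coordinates to update
updates = tf.constant([1.0])     # value to place at each coordinate
delta = tf.scatter_nd(indices, updates, shape=[3, 3])

result = c + delta  # only position [1, 1] changes
```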

This is actually surprisingly tricky with the existing ops! Perhaps somebody can suggest a nicer way to wrap up the following, but here's one way to do it.

Let's say you have a `tf.constant()` tensor:

```python
c = tf.constant([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
```

...and you want to add `1.0` at location `[1, 1]`. One way you could do this is to define a `tf.SparseTensor`, `delta`, representing the change:

```python
indices = [[1, 1]]  # A list of coordinates to update.
values = [1.0]      # A list of values corresponding to the respective
                    # coordinate in indices.
shape = [3, 3]      # The shape of the corresponding dense tensor, same as `c`.
delta = tf.SparseTensor(indices, values, shape)
```

Then you can use the `tf.sparse_tensor_to_dense()` op to make a dense tensor from `delta` and add it to `c`:

```python
result = c + tf.sparse_tensor_to_dense(delta)
sess = tf.Session()
sess.run(result)
# ==> array([[ 0.,  0.,  0.],
#            [ 0.,  1.,  0.],
#            [ 0.,  0.,  0.]], dtype=float32)
```

How about `tf.scatter_update(ref, indices, updates)` or `tf.scatter_add(ref, indices, updates)`?

```python
ref[indices[...], :] = updates
ref[indices[...], :] += updates
```

See this.
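A minimal sketch of the scatter approach, assuming the target is a `tf.Variable` (the scatter ops do not work on plain tensors). Here the Variable method `scatter_nd_add` is used, which accepts arbitrary coordinates and also works in TF 2.x eager mode:

```python
import tensorflow as tf

# The scatter ops require a tf.Variable, not a plain tensor.
v = tf.Variable([[0.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0]])

# Add 1.0 at coordinate [1, 1]; this mutates the Variable in place.
v.scatter_nd_add(indices=[[1, 1]], updates=[1.0])
```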

`tf.scatter_update` has no gradient registered and will raise an error during training with at least `tf.train.GradientDescentOptimizer`. You have to implement the update yourself with lower-level functions.

I feel like the fact that the `tf.assign`, `tf.scatter_nd`, and `tf.scatter_update` functions only work on `tf.Variable` objects is not stressed enough. So there it is.

And in later versions of TensorFlow (tested with 1.14), you can use indexing on a `tf.Variable` to assign values to specific indices (again, this only works on `tf.Variable` objects).

```python
v = tf.Variable(tf.constant([[1, 1], [2, 3]]))
change_v = v[0, 0].assign(4)
with tf.Session() as sess:
    tf.global_variables_initializer().run()
    print(sess.run(change_v))
```

If you want to replace certain indices, I would create a boolean tensor mask and a broadcasted tensor with the new values at the correct positions. Then use

```python
new_tensor = tf.where(boolean_tensor_mask, new_values_tensor, old_values_tensor)
```
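For instance, a sketch (the tensor names here are illustrative) that replaces the value at `[1, 1]` this way, with no `tf.Variable` required:

```python
import tensorflow as tf

# tf.where picks elements from the second argument where the mask is True,
# and from the third argument elsewhere; all operands are plain tensors.
old_values_tensor = tf.zeros([3, 3])
boolean_tensor_mask = tf.constant([[False, False, False],
                                   [False, True,  False],
                                   [False, False, False]])
new_values_tensor = tf.fill([3, 3], 7.0)  # the replacement values

new_tensor = tf.where(boolean_tensor_mask, new_values_tensor, old_values_tensor)
```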

##### Comments

- Thank you immensely. I agree with you that a function that can do this internally with more efficiency would be useful!
- do you know how that handles values with the same index?
- nvm it does not handle that well at all... Do you know how to do this in the case of multiple indexes of the same value?
- @dtracers: I believe those two other questions are relevant: stackoverflow.com/questions/39157723/… and stackoverflow.com/questions/39045797/…
- Could you rewrite this using the `scatter_nd` operation? Would it be more efficient to use a variable for `c` in the first place?
- It's only valid if the `ref` is a Variable.
- That restriction is actually more fundamental than it looks, if you see TF as a restricted (in terms of available recursion schemes), scalable runtime of a mostly-pure lazy functional language. Then you can see that the difficulty of efficiently updating a (pure) tensor is essentially the same as the one you encounter in updating a purely functional data structure. Without that level of purity things wouldn't scale easily.