What's the difference between reshape and view in PyTorch?

In NumPy, we use ndarray.reshape() to reshape an array.

I noticed that in PyTorch, people use torch.view(...) for the same purpose, but at the same time there is also a torch.reshape(...).

So I am wondering: what are the differences between them, and when should I use each?

torch.view has existed for a long time. It returns a tensor with the new shape, and the returned tensor shares the underlying data with the original tensor. See the documentation here.
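
For example (a minimal sketch; the view and the original see the same memory):
>>> import torch
>>> a = torch.arange(4)
>>> b = a.view(2, 2)   # b shares the underlying data with a
>>> b[0, 0] = 100      # writing through the view...
>>> a                  # ...is visible in the original tensor
tensor([100,   1,   2,   3])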

On the other hand, torch.reshape was introduced more recently, in version 0.4. According to the documentation, this method:

Returns a tensor with the same data and number of elements as input, but with the specified shape. When possible, the returned tensor will be a view of input. Otherwise, it will be a copy. Contiguous inputs and inputs with compatible strides can be reshaped without copying, but you should not depend on the copying vs. viewing behavior.

This means that torch.reshape may return either a copy or a view of the original tensor, and you cannot count on getting one or the other. According to the developers:

if you need a copy use clone(); if you need the same storage use view(). The semantics of reshape() are that it may or may not share the storage and you don't know beforehand.
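
A minimal sketch of those semantics, assuming a contiguous input so that clone() and view() show the two extremes:
>>> z = torch.zeros(4)
>>> c = z.clone()        # clone(): always an independent copy
>>> v = z.view(2, 2)     # view(): always shares storage with z
>>> z.fill_(7)
tensor([7., 7., 7., 7.])
>>> c                    # the clone did not change
tensor([0., 0., 0., 0.])
>>> v                    # the view reflects the change
tensor([[7., 7.],
        [7., 7.]])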

Another difference is that reshape() can operate on both contiguous and non-contiguous tensors, while view() can only operate on contiguous tensors. Also see here for the meaning of contiguous.
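
For instance (a small sketch; calling t.view(6) here would raise the contiguity RuntimeError shown in the counter-example below, while reshape falls back to copying):
>>> t = torch.zeros(3, 2).t()  # a transpose is non-contiguous
>>> t.is_contiguous()
False
>>> t.reshape(6)               # works, by making a copy
tensor([0., 0., 0., 0., 0., 0.])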

Although both torch.view and torch.reshape are used to reshape tensors, here are the differences between them.

  1. As the name suggests, torch.view merely creates a view of the original tensor. The new tensor will always share its data with the original tensor. This means that if you change the original tensor, the reshaped tensor will change and vice versa.
>>> z = torch.zeros(3, 2)
>>> x = z.view(2, 3)
>>> z.fill_(1)
tensor([[1., 1.],
        [1., 1.],
        [1., 1.]])
>>> x
tensor([[1., 1., 1.],
        [1., 1., 1.]])
  2. To ensure that the new tensor always shares its data with the original, torch.view imposes some contiguity constraints on the shapes of the two tensors [docs]. More often than not this is not a concern, but sometimes torch.view throws an error even if the shapes of the two tensors are compatible. Here's a famous counter-example.
>>> z = torch.zeros(3, 2)
>>> y = z.t()
>>> y.size()
torch.Size([2, 3])
>>> y.view(6)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: invalid argument 2: view size is not compatible with input tensor's
size and stride (at least one dimension spans across two contiguous subspaces).
Call .contiguous() before .view().
  3. torch.reshape doesn't impose any contiguity constraints, but it also doesn't guarantee data sharing. The new tensor may be a view of the original tensor, or it may be a new tensor altogether (a way to check which one you got is shown after this list).
>>> z = torch.zeros(3, 2)
>>> y = z.reshape(6)
>>> x = z.t().reshape(6)
>>> z.fill_(1)
tensor([[1., 1.],
        [1., 1.],
        [1., 1.]])
>>> y
tensor([1., 1., 1., 1., 1., 1.])
>>> x
tensor([0., 0., 0., 0., 0., 0.])
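
If you need to know which behavior you got in a particular case, one quick check (a small sketch, not a guaranteed API contract) is to compare data pointers with Tensor.data_ptr(); equal pointers mean reshape returned a view:
>>> z = torch.zeros(3, 2)
>>> z.reshape(6).data_ptr() == z.data_ptr()      # contiguous input: a view
True
>>> z.t().reshape(6).data_ptr() == z.data_ptr()  # non-contiguous input: a copy
False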

TL;DR: If you just want to reshape tensors, use torch.reshape. If you're also concerned about memory usage and want to ensure that the two tensors share the same data, use torch.view.

Tensor.reshape() is more robust. It will work on any tensor, while Tensor.view() works only on a tensor t where t.is_contiguous() == True.

Explaining contiguous vs. non-contiguous tensors is a story for another time, but you can always make a tensor t contiguous by calling t.contiguous(), after which you can call view() without the error.
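
Concretely (a small sketch reusing the transpose example from above):
>>> z = torch.zeros(3, 2)
>>> z.t().contiguous().view(6)  # contiguous() makes a compact copy, so view() succeeds
tensor([0., 0., 0., 0., 0., 0.])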

Comments
  • Maybe emphasizing that torch.view can only operate on contiguous tensors, while torch.reshape can operate on both might be helpful too.
  • @pierrom Is contiguous here referring to tensors that are stored in contiguous memory, or something else?
  • @gokul_uf Yes, you can take a look at the answer written here: stackoverflow.com/questions/48915810/pytorch-contiguous
  • Maybe it's just me, but I was confused into thinking that contiguity is the deciding factor in when reshape does and does not share data. From my own experiments, it seems that this is not the case. (Your x and y above are both contiguous.) Perhaps a comment on when reshape does and does not copy would be helpful?