Chunked transfer encoding - browser behavior
I'm trying to send data in chunked mode. All headers are set properly and the data is encoded accordingly. Browsers recognize my response as chunked: they accept the headers and start receiving data.
I was expecting the browser to update the page as each chunk arrived; instead it waits until all chunks are received and then displays them all at once. Is this the expected behavior?
I was expecting to see each chunk displayed right after it was received. With curl, each chunk is shown as soon as it arrives. Why does the same not happen with GUI browsers? Are they using some sort of buffering/cache?
I set the Cache-Control header to no-cache, so I'm not sure it's a caching issue.
AFAIK browsers need some initial payload before they start rendering chunks as they arrive; curl is of course an exception.
Try sending about 1 KB of arbitrary data before your first chunk.
If you are doing everything correctly, browsers should then render chunks as they are received.
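A minimal sketch of this advice using Python's standard http.server (the ~1 KiB HTML-comment padding is an illustration of the suggestion above, not a spec requirement, and the port/handler names are my own):

```python
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

def encode_chunk(data: bytes) -> bytes:
    # HTTP/1.1 chunk framing: hexadecimal length, CRLF, payload, CRLF.
    return b"%x\r\n%s\r\n" % (len(data), data)

class StreamHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # chunked encoding requires HTTP/1.1

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Cache-Control", "no-cache")
        self.send_header("Transfer-Encoding", "chunked")
        self.end_headers()
        # ~1 KiB of padding (an HTML comment) before the first real chunk,
        # so the browser stops sniffing and starts rendering immediately.
        self.wfile.write(encode_chunk(b"<!--" + b" " * 1024 + b"-->"))
        for i in range(5):
            self.wfile.write(encode_chunk(b"<p>chunk %d</p>" % i))
            self.wfile.flush()
            time.sleep(1)
        self.wfile.write(b"0\r\n\r\n")  # zero-length chunk ends the body

# To try it: HTTPServer(("127.0.0.1", 8000), StreamHandler).serve_forever()
```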
1) As of 2019, if you use Content-type: text/html, no buffering occurs in Chrome.
2) If you just want to stream text, similar to text/plain, then using Content-type: text/event-stream will also disable buffering.
3) If you use Content-type: text/plain, then Chrome will still buffer 1 KiB, unless you additionally specify X-Content-Type-Options: nosniff.
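For option 2, the text/event-stream format is line-oriented; a small framing helper might look like this (a sketch — the helper name is mine, not from any library; each framed message would be written as one chunk of the chunked response):

```python
def sse_event(data: str, event: str = "") -> bytes:
    """Frame one text/event-stream message: an optional "event:" line,
    one "data:" line per line of payload, terminated by a blank line."""
    lines = []
    if event:
        lines.append("event: " + event)
    for line in data.splitlines() or [""]:
        lines.append("data: " + line)
    return ("\n".join(lines) + "\n\n").encode()
```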
RFC 2045 specifies that if no Content-Type is specified, Content-type: text/plain; charset=us-ascii should be assumed:
5.2. Content-Type Defaults
Default RFC 822 messages without a MIME Content-Type header are taken by this protocol to be plain text in the US-ASCII character set, which can be explicitly specified as: Content-type: text/plain; charset=us-ascii
This default is assumed if no Content-Type header field is specified. It is also recommend that this default be assumed when a syntactically invalid Content-Type header field is encountered. In the presence of a MIME-Version header field and the absence of any Content-Type header field, a receiving User Agent can also assume that plain US-ASCII text was the sender's intent. Plain US-ASCII text may still be assumed in the absence of a MIME-Version or the presence of a syntactically invalid Content-Type header field, but the sender's intent might have been otherwise.
Browsers will buffer a certain amount of text/plain content in order to detect whether it really is plain text or some other media type, such as an image, since an omitted Content-Type is treated the same as text/plain. This is called MIME type sniffing.
MIME type sniffing is defined by Mozilla as:
In the absence of a MIME type, or in certain cases where browsers believe they are incorrect, browsers may perform MIME sniffing — guessing the correct MIME type by looking at the bytes of the resource.
Each browser performs MIME sniffing differently and under different circumstances. (For example, Safari will look at the file extension in the URL if the sent MIME type is unsuitable.) There are security concerns as some MIME types represent executable content. Servers can prevent MIME sniffing by sending the X-Content-Type-Options header.
According to Mozilla's documentation:
The X-Content-Type-Options response HTTP header is a marker used by the server to indicate that the MIME types advertised in the Content-Type headers should not be changed and be followed. This allows to opt-out of MIME type sniffing, or, in other words, it is a way to say that the webmasters knew what they were doing.
X-Content-Type-Options: nosniff makes it work.
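Putting the pieces together, a header set for unbuffered streaming might look like this (a sketch; the helper and its rules simply encode the observations above and are not from any library):

```python
def streaming_headers(content_type: str) -> list:
    """Response headers that avoid client-side buffering in Chrome,
    per the observations above."""
    headers = [
        ("Content-Type", content_type),
        ("Cache-Control", "no-cache"),
        ("Transfer-Encoding", "chunked"),
    ]
    if content_type.split(";")[0].strip() == "text/plain":
        # text/plain alone still triggers ~1 KiB of sniffing buffer;
        # nosniff tells the browser to trust the declared type.
        headers.append(("X-Content-Type-Options", "nosniff"))
    return headers
```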
Chunked transfer encoding is a streaming data transfer mechanism available in version 1.1 of the Hypertext Transfer Protocol (HTTP). In chunked transfer encoding, the data stream is divided into a series of non-overlapping "chunks". The chunks are sent out and received independently of one another.
The browser can process and render the data as it comes in whether data is sent chunked or not. Whether a browser renders the response data is going to be a function of the data structure and what kind of buffering it employs. e.g. Before the browser can render an image, it needs to have the document (or enough of the document), the style sheet, etc.
Chunking is mostly useful when the length of a resource is unknown at the time the resource response is generated (a "Content-Length" can't be included in the response headers) and the server doesn't want to close the connection after the resource is transferred.
To get around this problem, HTTP 1.1 added a special header, Transfer-Encoding, that allows the response to be chunked. Each write to the connection is pre-counted, and a final zero-length chunk written at the end of the response signifies the end of the transaction. In some cases a server or client may want the older HTTP 1.0 behavior.
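The wire format just described — a hexadecimal size line, the payload, and a terminating zero-length chunk — can be illustrated with a toy decoder (a sketch that ignores chunk extensions and trailers):

```python
def decode_chunked(body: bytes) -> bytes:
    """Reassemble an HTTP/1.1 chunked body (no extensions/trailers)."""
    out, pos = b"", 0
    while True:
        crlf = body.index(b"\r\n", pos)
        size = int(body[pos:crlf], 16)  # chunk size is hexadecimal
        if size == 0:                   # zero-length chunk: end of body
            return out
        start = crlf + 2
        out += body[start:start + size]
        pos = start + size + 2          # skip the chunk's trailing CRLF

# Two pre-counted writes ("Hello" and ", world") plus the terminator:
wire = b"5\r\nHello\r\n7\r\n, world\r\n0\r\n\r\n"
```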
I observed this in ASP.NET. If you look at the code in Reflector, turning off BufferOutput is effectively equivalent to calling Flush after each write; and each flush that is not the final one checks for and enables chunked transfer encoding, provided the headers haven't been written yet or suppressed, the client isn't disconnected, and the response's content length isn't set manually.
- Which browsers are you looking at? Generally browsers will do incremental rendering, but they can internally buffer things for a bit because relayouts are expensive...
- What type of data are you sending in the chunks? Is it just HTML or are you sending script data?
- I'm sending text/html. Tried in Firefox and Chrome; both wait for all chunks to be received.
- See also (the newer) stackoverflow.com/q/16909227/179081
- Yay, that was it! Works perfectly in Firefox, Chrome, Safari, even Opera. Thank you a lot.
- 1KiB is indeed a good general value, for more details look here: stackoverflow.com/q/16909227/1534459
- AFAIK browsers only gather the mentioned 1 KB of data if they didn't receive a Content-Type header. They need the data to make an educated guess about what they are about to receive. Besides, anti-virus software may also cause this problem, as I described here: stackoverflow.com/a/41760573/1004651
- For me, the charset=xxxx was the key. With just Content-type: text/plain (in Firefox 60.0.9esr) the output was buffered and only displayed all at once at the end of receiving the data. When changed to Content-type: text/plain; charset=us-ascii (or Content-type: text/html; charset=utf8), the chunked progressive rendering suddenly worked as expected.
- @MatijaNalis, that should be Content-type: text/html; charset=utf-8 (or UTF-8 if case matters)