The following table compares features across the different HTTP versions.
Feature | HTTP/1.1 | HTTP/1.2 | HTTP/2 |
---|---|---|---|
Keep-Alive Connections | ✅ | ✅ (permanent, see 1.1) | ✅ |
Parametrized Keep-Alive (Header) | ✅ | ❌ (see 1.2) | ✅ |
Transfer Encoding (Compressed) | ✅ | ❌ (see 1.3) | ✅ |
Byte Range Requests | ✅ | ❌ (see 1.4) | ✅ |
Pipelined Requests | ✅ (broken) | ✅ (working, see 1.5) | ❌ (Layer7 multiplexed) |
Chunked Encoding | ✅ | ❌ (see 1.6) | ✅ |
Caching Mechanism / Header | ✅ | ✅ (Static Content) | ✅ |
Caching 304 not modified | ✅ | ✅ (Static Content) | ✅ |
Cookies | ✅ | ✅ | ✅ |
Text Based Protocol | ✅ | ✅ | ❌ |
Binary Based Protocol | ❌ | ❌ (see 1.7) | ✅ |
See Exemplary HTTP Network Processing for a detailed analysis of why HTTP/2 and HTTP/3 are not suitable for future-proof web applications.
HTTP/1.2 uses permanent Keep-Alive. This means a single client always connects through a single socket to a server domain / virtual host.
Tip
TCP/IP allows multiple "multiplexed" data channels by design, without blocking anything or causing retransmission problems.
Caution
HTTP/1.2 corrects the broken HTTP/1.1 pipelined-connection implementation. It thereby reuses the existing, rock-solid TCP/IP data-channel "multiplexing" capability (which HTTP/2 unnecessarily re-implemented on top at Layer 7).
Also read Exemplary HTTP Network Processing.
Due to permanent Keep-Alive, we also no longer need parametrized Keep-Alive settings, which drastically reduces protocol logic.
It is also, from a security point of view, a bad idea to allow a non-authenticated client to modify server parameters at runtime.
Our opinion: runtime compression pollutes our environment by burning CPU power unnecessarily. In times of Intel Xeon 6 and 800-Gigabit Ethernet, runtime compression should be considered old-fashioned.
Think of a "Web-Pack-Format" which sends only a single metadata+media package at the initial client request and whenever app updates exist (pre-compressed at build time, not runtime-compressed).
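Such a pack could be assembled once at deploy time. The sketch below is purely illustrative, assuming a tar+gzip container and an invented `META.json` index; the `build_web_pack` helper and the pack layout are our assumptions, since no such format is specified anywhere yet:

```python
import gzip
import io
import json
import tarfile

# Hypothetical "Web-Pack" builder: bundle metadata + media into ONE archive
# and compress it ONCE at build/deploy time -- never per request.
def build_web_pack(files: dict, out_path: str) -> None:
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        # Invented index entry so the client knows what the pack contains.
        meta = json.dumps({"version": 1, "entries": sorted(files)}).encode()
        for name, data in [("META.json", meta)] + sorted(files.items()):
            info = tarfile.TarInfo(name=name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
    with open(out_path, "wb") as f:
        f.write(gzip.compress(buf.getvalue()))  # static, pre-compressed artifact
```

The server would then hand out the resulting file as an opaque, already-compressed blob, spending no CPU on compression at request time.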
Caution
Worldwide socket count reduced again by a factor of 1,000,000.
Our primary goal is to drastically speed up modern web applications, not to be a streaming client. This feature is therefore omitted in HTTP/1.2.
Tip
Think about using existing streaming protocols designed primarily for streaming purposes!
Pipelined requests! This key feature caused confusion inside the HTTP developer community, and its failure has never been correctly understood.
Instead of correcting the very small design flaw (wrong result ordering), HTTP/2 unnecessarily copied already existing (and working) TCP Layer 4 features into Layer 7.
Modern, generic OOP design teaches: NEVER COPY IF YOU CAN AVOID IT, DON'T BE LAZY!
HTTP/1.2 corrects this design flaw with a single new HTTP header: "Request-UUID". Every HTTP request carries a unique identifier hash in its headers, which the server sends back in the corresponding response. The client is then able to match each response to the correct request, even if the network delivers responses out of order.
HTTP/1.2 uses the stable, existing TCP (Layer 4) transport to "multiplex" requests without adding any Layer 7 logic.
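The client-side bookkeeping this requires is tiny. A minimal sketch, assuming the header is literally named `Request-UUID`; the `PipelineClient` class and its methods are our illustration, not part of any published specification:

```python
import uuid

class PipelineClient:
    """Matches out-of-order responses to requests via the Request-UUID header."""

    def __init__(self):
        self.pending = {}  # Request-UUID -> (method, path)

    def send(self, method: str, path: str) -> str:
        rid = str(uuid.uuid4())
        # A real client would now write the request, including a
        # "Request-UUID: <rid>" header, onto the single keep-alive socket.
        self.pending[rid] = (method, path)
        return rid

    def on_response(self, headers: dict, body: bytes):
        # Responses may arrive in any order; the echoed UUID restores pairing.
        rid = headers["Request-UUID"]
        method, path = self.pending.pop(rid)
        return (method, path, body)
```

Because pairing is restored by the UUID rather than by arrival order, the ordering flaw of HTTP/1.1 pipelining disappears without any framing layer.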
Chunked encoding is also practically useless: it is seldom used for sending large data to the client.
Tip
Think of using an already existing file-transfer protocol instead.
In times of Intel Xeon 6 processors, a binary protocol is not guaranteed to be faster than a text-based one, and it makes the following tasks even more complex and error-prone:
- Debugging
- Generic Type Handling
- Parsing
XML, by contrast, offers:
- DTDs / clear type definitions
- Robust, less error-prone parsing
- Updateable protocol features / protocol versions
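For illustration only, a request envelope in such an XML-based protocol might be parsed like this. The element and attribute names below are invented, since the actual schema would be defined in WAP-AS-XML-SPECS.md:

```python
import xml.etree.ElementTree as ET

# Hypothetical WAP request-envelope parser (invented element names).
def parse_wap_request(doc: str) -> dict:
    root = ET.fromstring(doc)
    return {
        "version": root.get("version"),     # protocol version attribute
        "method": root.findtext("method"),  # requested operation
        "resource": root.findtext("resource"),
    }
```

A malformed envelope raises `ET.ParseError` at a single, well-defined point, which is the kind of "non error-prone parsing" argued for above.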
For the Next-Gen WAP (Web-Application-Protocol) XML specs RFP, see WAP-AS-XML-SPECS.md.
Feature | HTTP/1.1 | HTTP/1.2 | HTTP/2 |
---|---|---|---|
CORS | ✅ | ❌ (see 2.1) | ✅ |
HTTP Method OPTIONS | ✅ | ❌ (see 2.2) | ✅ |
HTTP Method HEAD | ✅ | ❌ (see 2.2) | ✅ |
HTTP Method TRACE | ✅ | ❌ (see 2.2) | ✅ |
HTTP Method PUT | ✅ | ❌ (see 2.2) | ✅ |
HTTP Method DELETE | ✅ | ❌ (see 2.2) | ✅ |
Streaming Characteristics | ✅ | ❌ (see 2.3) | ✅ |
WebSockets | ✅ | ✅ (see 2.4) | ✅ |
Global Up/Downloads | ✅ | ✅ (see 2.5, decapsulated) | ✅ |
A clean application environment / setup (including Kubernetes) makes CORS / cross-site requests obsolete. Our web-application design forbids cross-site usage.
These are ancient features that no one needs anymore. Handle them via a web service; documents reside on scalable storage backends these days.
HTML, in the times of AOL and 38,400-baud modem lines, was built to display content inline while the HTML page was still loading.
In times of the Intel Xeon 6 6980P, PCI Express 5.0, kernel DMA, and 800-Gbit Ethernet, computers are able to render more than 1,000 pages with a loading time under 1 second.
So drop this feature in HTTP/1.2 or WAP, or whatever we end up calling the new protocol suite.
Long-polling was used before the WebSockets protocol was invented.
Long-polling also works better and is more lightweight when using one single keep-alive TCP/IP socket (using Request-UUID).
Important
But: think of implementing this on a separate (firewallable) TCP/IP port, maybe WACP (Web-Application-Control-Protocol). An idea?
So HTTP/1.2 is able to use long-polling if really needed.
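A minimal long-polling loop over one reused connection could look like this. The `/events` endpoint and the `poll_events` helper are assumptions for illustration, and the `Request-UUID` header is the one proposed earlier in this document:

```python
import http.client
import json
import uuid

def poll_events(host: str, handler, max_polls: int = 10) -> None:
    """Long-poll /events over a single keep-alive connection (sketch)."""
    conn = http.client.HTTPConnection(host)  # one socket, reused throughout
    try:
        for _ in range(max_polls):
            rid = str(uuid.uuid4())
            conn.request("GET", "/events", headers={"Request-UUID": rid})
            resp = conn.getresponse()
            body = resp.read()  # the server held this open until data existed
            if resp.status == 200 and body:
                handler(json.loads(body))
    finally:
        conn.close()
```

Every poll reuses the same TCP socket, so no per-event connection setup or teardown occurs; the server simply delays each response until an event is available.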
Phew, almost forgotten: up- and downloads. It seems likely that the protocol architects did not engage with the x86_64 CPU architecture enough. Both HTTP/1.1 and HTTP/2 (not to mention HTTP/3) take a catastrophic approach in design and implementation.
What exactly happens when you send data over a TCP/IP socket? Currently, every stream packet is processed by the kernel (ring 0) and then passed to user space (ring 3), costing a CPU context switch per packet. Must all packets, thanks to a slightly old-fashioned socket API / kernel interface, additionally be handled in user space by the application? Yes.
Detailed CPU tasks for a standard application upload:
- Kernel receives the IP packet
- Kernel passes the packet to user space (1 CPU context switch)
- The application (e.g. a Python interpreter) processes the buffer, burning CPU cycles
- The application passes the buffer back to kernel space ("write" to a file descriptor, 1 CPU context switch)
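For the download direction, this round trip can already be avoided today: `sendfile(2)` lets the kernel move file bytes to a socket entirely in ring 0, and Python exposes it via `socket.socket.sendfile`. A minimal sketch (the helper name is ours; platform support varies):

```python
import socket

def send_file_zero_copy(sock: socket.socket, path: str) -> int:
    """Send a file over a socket without copying it through user space."""
    with open(path, "rb") as f:
        # Uses os.sendfile (an in-kernel copy) where the OS supports it and
        # transparently falls back to a plain send() loop otherwise.
        return sock.sendfile(f)
```

The application stays out of the per-packet loop entirely; an equivalent kernel-side path for uploads is what this section argues the new protocol should enable.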
HTTP/2 is even worse: it implements flow-control features that already exist in the underlying TCP/IP stack, so CPU load rises further while throughput drops.
Tip
This could be done easily and completely in ring 0, as existing file protocols already do (by passing a file-descriptor reference).