The IETF has formed a working group to develop and maintain the new HTTP/2 standard, using Google's SPDY protocol as a starting point. I have of course already taken a look at SPDY, and I'm not quite happy with it. I'll explain why. Note that I'm looking at SPDY from the webserver's point of view, and am therefore not looking at what it means for the browser.
HTML is something that compresses very well. The creators of PHP know this; that's why they gave PHP compression support (via the zlib.output_compression setting). I really believe other languages should offer the same.
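To illustrate how well HTML compresses, here is a small Python sketch; the sample markup is made up, but its repetitive tag structure is typical of real pages:

```python
import gzip

# A small, repetitive HTML fragment (hypothetical sample markup).
html = ("<ul>" + "".join(
    f"<li><a href='/page/{i}'>Item {i}</a></li>" for i in range(100)
) + "</ul>").encode("utf-8")

compressed = gzip.compress(html)

# The repeated tags compress to a small fraction of the original size.
print(f"original:   {len(html)} bytes")
print(f"compressed: {len(compressed)} bytes")
print(f"ratio:      {len(compressed) / len(html):.0%}")
```

On markup like this, gzip typically shrinks the payload to well under half its original size, which is exactly the saving an output-compression setting buys for free.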
Encryption: According to the SPDY specs, all content should also be encrypted. Encrypting everything is a bad idea. Most content on the internet isn't confidential, so why encrypt it? It's a waste of CPU power. And since most people who use the internet don't know what encryption means or what to do in case of a certificate error, encrypting everything doesn't make the internet any safer.
It also makes hosting a webserver more expensive, since it requires a certificate. Especially for people, like me, who run a lot of hobby websites: if all those websites required a certificate, it would cost a few hundred euros per year. Quite expensive for some hobby websites that work just fine with HTTP/1.1.
Server push: A SPDY-enabled server can push items to a client when it knows the client needs them. My question is: how does a server know what a client needs? Due to caching, it is quite possible the client already has the resource the server is about to push. And for the few rare cases in which a server can be 100% sure the client doesn't have the resource, is it really worth the trouble of implementing this feature?
Multiplexing: a web performance researcher has tested SPDY with real-world sites and found that SPDY doesn't offer the same performance boost there as it does with sites in test environments. The reason is that most websites have resources spread across multiple servers, while multiplexing only helps when all resources come from the same server. In my opinion, thinking that this will change because of SPDY is naive. The multiplexing part of SPDY is the main reason why I think SPDY is a protocol by Google, for Google.
Instead of the SPDY features, I think the following things are far easier to implement and also offer improvement:
Send certain headers only once: In HTTP/2, at least the following headers should be sent only in the first request: Host, User-Agent, Accept-Encoding, Accept-Language, Accept-Charset. Those headers would then also apply to the following requests on the same connection. Not sending them over and over again saves a lot of bytes.
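A rough back-of-the-envelope Python sketch of what this saves; the header values below are made up, but representative of a typical browser request:

```python
# Hypothetical but typical request headers that rarely change
# between requests on the same connection.
repeated_headers = (
    "Host: www.example.org\r\n"
    "User-Agent: Mozilla/5.0 (X11; Linux x86_64) Gecko/20100101 Firefox/10.0\r\n"
    "Accept-Encoding: gzip, deflate\r\n"
    "Accept-Language: en-US,en;q=0.5\r\n"
    "Accept-Charset: utf-8\r\n"
)

per_request = len(repeated_headers.encode("ascii"))
requests = 50  # e.g. a page with ~50 sub-resources on one connection

# Sending these headers only on the first request saves their full
# size on every subsequent request.
saved = per_request * (requests - 1)
print(f"{per_request} bytes per request, {saved} bytes saved over {requests} requests")
```

Even with this modest header set, the savings run to several kilobytes of upstream traffic per page view, which matters most on slow or asymmetric uplinks.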
Connection header deprecated: Keep-alive is the default in HTTP/2. A client or server can disconnect at any time; announcing it via the Connection header is unnecessary.
Related-Resource: A server application can optionally list all related resources via a Related-Resource HTTP header. Via this header, a browser knows which other resources to request before the entire body has been received. With request pipelining, those resources can then be requested very quickly when needed. This is the same idea as the Server Hint in SPDY.
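Since Related-Resource is only a proposal, its exact syntax is an assumption here; a minimal Python sketch of how a browser might parse it, assuming a comma-separated list of URLs:

```python
def parse_related_resources(header_value):
    """Split a comma-separated Related-Resource header value into URLs.

    Related-Resource is the header proposed above; the comma-separated
    syntax is an assumption, modeled on existing list-valued headers.
    """
    return [part.strip() for part in header_value.split(",") if part.strip()]

# Hypothetical response header value as a server application might emit it:
header = "/css/style.css, /js/app.js, /img/logo.png"
for url in parse_related_resources(header):
    print(url)  # each URL could be pipelined before the body finishes
```

The point of the design is that the header arrives with the response status line, so the browser can start pipelining these requests long before its HTML parser would have discovered them in the body.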
These are my first thoughts about HTTP/2 and SPDY. I'd really like to hear yours.