
Chatty AJAX

I recently attended a technical conference about AJAX and .NET. We all know that AJAX has been around in one shape or another for more than six years. The nature of AJAX is no different from that of an applet calling back to its home: instead of a Java applet, you're just writing a JavaScript "applet" and hitting the dynamic rendering engine of the web server. What we've all forgotten is why we stopped creating fat clients and put the CPU burden back on the server.

First of all, back in the 90s, nobody had gigahertz computers, so we all lamented the slow speed of a fat client. Then there were compatibility issues with the browsers and their implementations of Java. Then there was the problem of a chatty network, where your applet produced lots of little requests that bogged down your thin pipe. The Java applet was nice because you could combine your requests into a larger payload and make more efficient use of your bandwidth. AJAX doesn’t do that.

Why is a chatty network so bad? Network communication goes through software that implements a nice, clean protocol which guarantees that you will be able to put your data on the wire, and possibly receive a response. If you are using TCP, you are at least guaranteed to get something back (an acknowledgement or a reset), though that something may not be the response you expected. All of this communication carries a protocol burden of at least 20 bytes for the TCP header, plus 20 bytes for the IP header, and some additional bytes for the MAC header.

http://www.erg.abdn.ac.uk/users/gorry/eg3561/lan-pages/mac.html

http://www.networksorcery.com/enp/protocol/ip.htm

http://www.networksorcery.com/enp/protocol/tcp.htm

So that's at least 54 bytes just to send a single packet to a destination. If you are using HTTP or an XML web service, you further add space for the request headers, which could be an additional 40, 50, 250, or even thousands of bytes. As you can plainly see, the protocol expense for TCP with HTTP or XML web services can be pretty high, and it is not to be taken lightly. Our networks haven't expanded capacity beyond 10/100 megabits to the client, and that's bits, remember, not bytes. A 10 megabit connection really only allows for about 1.25 megabytes per second, or roughly 23,000 empty TCP packets. A 100 megabit connection only allows for 12.5 megabytes per second, or roughly 230,000 empty TCP packets.
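To put rough numbers on it, here is a back-of-the-envelope calculation in JavaScript. The header sizes are the minimums cited above; real packets usually carry TCP options and fatter HTTP headers, so treat this as a lower bound rather than a measurement.

// Minimum per-packet overhead for an empty TCP segment over Ethernet.
var TCP_HEADER = 20;  // bytes, no options
var IP_HEADER  = 20;  // bytes, IPv4, no options
var MAC_HEADER = 14;  // bytes, Ethernet frame header
var overhead = TCP_HEADER + IP_HEADER + MAC_HEADER;  // 54 bytes

// How many empty packets fit through the pipe each second?
function emptyPacketsPerSecond(megabits) {
    var bytesPerSecond = megabits * 1000 * 1000 / 8;
    return Math.floor(bytesPerSecond / overhead);
}

emptyPacketsPerSecond(10);   // about 23,000 (1.25 MB/s)
emptyPacketsPerSecond(100);  // about 230,000 (12.5 MB/s)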

Let's consider a real-world example. You own www.myniftywebservice.com and you have an AJAX application that computes mortgage amortization using the current interest rate from the US Federal Reserve Bank. There is an API available that lets you get this information over the web. So let's build a simple HTTP service using JSP, ASP, Perl, Ruby, or whatever poison you like.
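The server side is left to whatever language you like; purely to make the shape of the service concrete, here is a minimal sketch in JavaScript using Node.js. The URL and the hard-coded 5-byte rate are placeholders, not a real Federal Reserve feed.

// Hypothetical rate endpoint. A real service would fetch and cache the
// published rate instead of hard-coding it.
var http = require('http');

http.createServer(function (req, res) {
    if (req.url.indexOf('/getcurrentfedrate.xx') === 0) {
        res.writeHead(200, { 'Content-Type': 'text/plain' });
        res.end('05.25');  // the 5-byte result discussed below
    } else {
        res.writeHead(404);
        res.end();
    }
}).listen(8080);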

The first thing you do is deliver your AJAX application to the client, which could be anywhere between 30K and 400K of JavaScript; the ATLAS library is supposedly about 400K worth of JS code. Your landing page is probably a simple web page with a form on it and a button to update the mortgage schedule. That page will likely be about 6K, maybe 10K, without pictures and fancy branding. So the round trip on your page would really be something like 10K download plus 2K upload, or 12K total, sans the images.

Being an innovator, you add AJAX to the mix, and now your page grows to 40K on the download. Your AJAX is pretty simple to implement because you're just hitting a URL on your server to get back the current interest rate: http://www.myniftywebservice.com/getcurrentfedrate.xx. That URL is 53 bytes long. The HTTP request header for your rate update is:

GET /getcurrentfedrate.xx HTTP/1.1CRLF
Host:www.myniftywebservice.comCRLF

That's a minimum of 68 bytes for the header. Add another CRLF to mark the end of the headers and you're up to 70 bytes, and you haven't even requested any data yet. What is your data? Just the form data that you would otherwise send in an HTTP POST: say, 8 digits for the loan amount, 2 digits for the term, and 6 digits for the down payment. That's 16 bytes of values, plus the field identifiers, which we can encode as simple one-digit numbers, and the separators between them; call it 28 bytes for the data. The result is another 5 bytes (2 digits before the decimal point, 2 digits after it, and 1 byte for the separator).
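For reference, the browser side of that exchange is only a few lines of hand-rolled XMLHttpRequest. The one-digit field identifiers 1, 2, and 3 follow the estimate above; the exact encoding is an assumption, not something spelled out here.

// Send the form values as a tiny query string and read back the 5-byte rate.
function fetchCurrentRate(amount, term, down, callback) {
    var xhr = new XMLHttpRequest();
    var url = '/getcurrentfedrate.xx?1=' + amount + '&2=' + term + '&3=' + down;
    xhr.open('GET', url, true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            callback(parseFloat(xhr.responseText));  // e.g. "05.25" -> 5.25
        }
    };
    xhr.send(null);
}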

Let's compute the total cost of this 28-byte request.

1. TCP incurs 40 bytes (a 20-byte header each for the request and the response)
2. IP incurs 40 bytes (again, 20 bytes in each direction)
3. MAC header incurs maybe 14 bytes
4. HTTP request header takes 70 bytes

The grand total just to get in the door is 40 + 40 + 14 + 70, or 164 bytes of overhead for your 28-byte request and 5-byte result. That means your request data makes up only about 14% of the total transmission on the wire, leaving 86% of the transmission as "noise." Oh wait, that's "chatter," right? We like that term better because it makes the marketing guys happier.
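Spelled out as arithmetic, the useful-data ratio looks like this (the 40-byte TCP and IP figures assume 20-byte headers in each direction, per the list above):

var tcp        = 40;   // 20-byte TCP header on the request and on the response
var ip         = 40;   // 20-byte IP header in each direction
var mac        = 14;   // one Ethernet frame header
var httpHeader = 70;   // request line, Host header, trailing CRLF

var protocolCost = tcp + ip + mac + httpHeader;           // 164 bytes
var requestData  = 28;
var useful = requestData / (protocolCost + requestData);  // ~0.146, about 14%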

Don't think this makes any real-world sense? This type of AJAX application is exactly what you are going to see on many composite "mash-up" sites: lots of nifty little applications (they used to be Java applets and Flashlets) that generate a lot of noisy chatter on the network while delivering very little information.

The downside of this "new" AJAX craze is that you can't coordinate requests effectively. With a Java applet, you could funnel your requests through a single choke point and make better use of your connections. With AJAX, you are scattering your requests across the execution space of your page, which sounds like a great idea to young programmers fresh out of school. For the seasoned software engineer, though, this creates more problems than solutions. Not only have you increased the amount of noise on the network, but you've also reduced the value of a click on the page.
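One way to claw some of that coordination back, in the spirit of what the applet could do, is to coalesce the little requests yourself: queue them in script and flush the queue as a single payload. Here is a rough sketch of the idea; the /batch endpoint and the JSON encoding are assumptions, not part of any standard AJAX library.

// Collect individual requests and send them as one combined payload, paying
// the TCP/IP/HTTP header tax once instead of once per request.
var pending = [];
var flushTimer = null;

function queueRequest(name, params) {
    pending.push({ name: name, params: params });
    if (!flushTimer) {
        flushTimer = setTimeout(flushQueue, 200);  // small batching window
    }
}

function flushQueue() {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/batch', true);              // hypothetical combined endpoint
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.send(JSON.stringify(pending));
    pending = [];
    flushTimer = null;
}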

Users moderate their behavior on a web page because they know there is a time cost to every click. With AJAX, you reduce that time cost to almost nothing. Why would a user not click on your nifty little gadget if there is no time cost for it to update? The end result is to empower the noisy "chatter" user to play with ever more Web 2.0 gadgets. The downside is that you enable more users to produce more chatter on a network that is still not equipped to handle an enormous number of simultaneous users.

The right solution for the future of Web 2.0 is finer-grained control over the caching of page parts. If I could mark DIV sections in an HTML page as 'cached', then I could better control what data gets updated and what doesn't. That would make the web browser far more complicated, but it would also let it use network bandwidth more efficiently. Another solution is the Windows Presentation Foundation (WPF). This is yet another "Java"-style solution where plugins provide the capability to render custom content in a more "rich-client" experience.
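Until browsers offer that kind of declarative, per-DIV caching, the closest approximation is to cache the fetched data in script and only go to the wire when the cached copy has gone stale. A sketch of that idea; the five-minute lifetime is arbitrary.

// Poor man's page-part cache: remember the last response and its age, and
// only issue a new request when the cached value has expired.
var rateCache = { value: null, fetchedAt: 0 };
var CACHE_LIFETIME_MS = 5 * 60 * 1000;

function getCachedRate(callback) {
    var now = new Date().getTime();
    if (rateCache.value !== null && now - rateCache.fetchedAt < CACHE_LIFETIME_MS) {
        callback(rateCache.value);  // served locally, nothing on the wire
        return;
    }
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/getcurrentfedrate.xx', true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            rateCache.value = parseFloat(xhr.responseText);
            rateCache.fetchedAt = now;
            callback(rateCache.value);
        }
    };
    xhr.send(null);
}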

http://en.wikipedia.org/wiki/Windows_Presentation_Foundation

http://en.wikipedia.org/wiki/Microsoft_gadgets#Microsoft_Gadgets

http://microsoftgadgets.com/
