22 February 2013

Retransmission Dampening

A certain network protocol that I work with can spew a lot of traffic.  Staggering amounts, actually.

I only have limited control over this traffic.  It just shows up (sometimes as if fired by a Gatling gun), and code that I help maintain on a certain server has to deal with it.   When the traffic isn't handled in a timely manner, people get upset and the phone starts ringing.

Recently, after watching our server start to have difficulty handling a certain burst in traffic, I started asking myself "how are we going to improve?".  I came up with a variety of ideas.

One of my ideas went as follows:


It is computationally non-trivial to unpack everything in every PDU that arrives at the server.  Under normal operations, everything is fine, and there are plenty of CPU resources for this sort of thing.  However, during heavy traffic loads, my data suggests that unpacking all of these PDUs is something that I need to pay attention to.

And....one of the problems is that the {things} that are sending our server all of these PDUs almost seem to be impatient.  Some of these {things} seem to operate in the following way: 

  1. Send a PDU with a request in it
  2. Wait an infinitesimal amount of time
  3. Have you received a response yet?  If "yes", we are done!  However, if "no" then goto step 1.

The problem, when I analyze the traffic, is that I see a fair number of re-sent request PDUs.  Our server is handling the requests, but perhaps not at the rate that these {things} on the network would like.  During busy periods, all that these re-sent request PDUs do is add more load onto our server, which is unpacking the received PDUs as quickly as it can and then doing work.

But...when I look at the guts of this protocol, I believe that I see a non-trivial (but not especially difficult) way of limiting these re-sent packets.  The code that I have in mind won't have to entirely unpack each PDU in order to figure out which packets are re-sends and which packets are valid new requests.

Here is the thing that I have in mind:
  1. Under normal operations, everything works as it usually does.
  2. A "retransmission dampening filter" (RDF) will always be running.
  3. The RDF will let new PDUs into the system with no restrictions.
  4. In a CPU-efficient way, the RDF will be able to detect PDUs that are retransmissions.  If a PDU has been re-transmitted by the network {thing} without having waited {N} milliseconds, then the packet will be dropped.  This will spare the rest of the system from having to unpack the entire PDU, which, again, takes some CPU time.
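The steps above can be sketched in code.  Here's a minimal Python sketch; note that the idea of fingerprinting just the first 32 bytes of a PDU is purely an assumption for illustration (the real protocol would dictate which header fields identify a request), but it shows how retransmissions can be caught without a full unpack:

```python
import hashlib
import time


class RetransmissionDampeningFilter:
    """Drop PDUs that look like retransmissions arriving within
    min_interval seconds of an identical earlier request."""

    def __init__(self, min_interval=1.0):
        self.min_interval = min_interval
        self.last_seen = {}  # fingerprint -> last arrival time

    def fingerprint(self, pdu_bytes):
        # Assumption: the fields that identify a request sit in the
        # first 32 bytes, so hashing just that prefix is enough to
        # spot a re-send -- no full unpack required.
        return hashlib.md5(pdu_bytes[:32]).digest()

    def should_process(self, pdu_bytes, now=None):
        now = time.monotonic() if now is None else now
        key = self.fingerprint(pdu_bytes)
        last = self.last_seen.get(key)
        # Refresh the timestamp even for dropped PDUs, so a {thing}
        # that keeps hammering stays dampened until it backs off for
        # a full interval.
        self.last_seen[key] = now
        if last is not None and (now - last) < self.min_interval:
            return False  # retransmitted too soon: drop it
        return True
```

One thing the sketch ignores: the `last_seen` table grows without bound, so real code would want to expire old entries now and then.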

I think that the nice part about this scheme is that it pretty much maintains itself.

However, before I go off and write some code to implement this, I decided that I wanted to run a simulation using some Real World data to see how much of an effect this scheme would have on overall traffic if the system were running with a {1, 2, 3} second minimum.
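The core of such a simulation is small: replay a log of arrival times through the dampening rule and count what would have been dropped.  Here's a sketch of the idea; the trace below is made up for illustration (my actual run used captured traffic), and `request_id` stands in for whatever cheap fingerprint identifies a re-send:

```python
def simulate_dampening(arrivals, min_interval):
    """Replay (timestamp, request_id) arrivals, sorted by timestamp,
    through the dampening rule and return how many PDUs would have
    been dropped."""
    last_seen = {}  # request_id -> last arrival time
    dropped = 0
    for ts, req_id in arrivals:
        last = last_seen.get(req_id)
        last_seen[ts and req_id or req_id] = ts  # refresh on every arrival
        if last is not None and (ts - last) < min_interval:
            dropped += 1
    return dropped


# Hypothetical trace: request "A" is retransmitted aggressively,
# request "B" is sent once.
trace = [(0.0, "A"), (0.2, "A"), (0.5, "A"), (0.9, "B"), (2.0, "A")]
for n in (1, 2, 3):
    print(n, simulate_dampening(trace, n))
```

Running the same trace at each candidate minimum makes it easy to compare how aggressive the dampening would be.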

So, I threw together a simulator, and here are my results:

So, um, yeah....obviously I'm going to write some code soon to implement this.  Of course, I'll make my code tunable, but I'm pretty sure that the default will be to enforce a 1-second retransmission minimum.  This should be both a fun project and something that will really help the server that I help maintain!

21 February 2013

Nam-shub of Enki

It is the Nam-shub of Enki

(or, at least it is for Mac OS X systems...)

18 February 2013

08 February 2013

Pondering the different reactions from the two /^J.*Z.*?$/s

http://xkcd.com/1171/


Why do I get the feeling that jwz is laughing right now?

Also, why do I get the feeling that sometime today, somebody might try to explain to Jay-Z a certain technical joke -- maybe he'll smile at the joke -- and then he'll get back to making another million dollars...

03 February 2013

Praise for Tomas Holmstrom

Like every other hockey fan, I'm pretty irked with the NHL right now (thanks for the lockout, idiots....). But, this doesn't prevent me from admiring the truly remarkable career of Tomas Holmstrom, who retired recently.

Take a look at some highlights:

I think that every goalie in the NHL was glad to see Holmstrom go -- he drove them CRAZY. He scored most of his goals 3-4 feet from the net -- most of these were redirections and "garbage goals". He made a fine art out of these goals, but of course he had to work really hard and take a TON of abuse for each one.

Can you imagine having a job in which it is your task to stand in front of the goalie, trying to obscure his vision while a giant defenseman tries to move you out of the way?  Oh, and then, what happens next is that some guy shoots a small hard rubber puck in your general direction at around 100mph....and while the defenseman is seriously abusing you, it is now your job to "tip" the puck with your stick, hoping to direct the puck into the net.   This is what Holmstrom made into art.

I'm a hockey fan, and I cheer for the players more than I do for the teams. I'll miss seeing Homer out on the ice.

01 February 2013

Lazy Dung Beetles

The other night as I was driving home, a story about the remarkable navigation skills of dung beetles was playing on the radio:

WARRANT: They have to get away from the pile of dung as fast as they can and as efficiently as they can because the dung pile is a very, very competitive place with lots and lots of beetles all competing for the same dung. And there's very many lazy beetles that are just waiting around to steal the balls of other industrious beetles and often there are big fights in the dung piles.

I really don't know why, but when Professor Warrant, in his very Australian accent, got to the part about "lazy dung beetles", I sort of lost it.  I laughed a lot as I listened to this story.

We live in a great world in which folks like Professor Warrant can study the lowly dung beetle, and can make interesting discoveries about their remarkable skills.