20 December 2009

Lentil Soup

I learn something new every day.

So, the other day I was out and about snowshoeing with some friends. It was a cold and windy day, and we were traveling at a good clip.

Eventually, it was time for lunch, and I was cold and hungry and thirsty.

I had my usual lunch for such trips: a PB&J sandwich. It was a challenge to wolf down my sandwich in these conditions: my hands were cold, I needed to drink water, it was windy, I didn't have a lot of time, etc. The conditions were so challenging that I had to focus all of my efforts on just eating my sandwich and drinking water.

While I was struggling with my sandwich, I watched one of my fellow travelers pull a thermos out of his backpack. He unscrewed the top, and steam came pouring out. Then he put the thermos to his mouth and drank. After a big gulp he told the rest of the people in the group that he was eating wonderful lentil soup.

I have to tell you, at this point in time, I was pretty envious. My sandwich was nice and all, but given the conditions, hot soup would have been much better. This gentleman had come up with a perfect meal for the conditions: his meal was hot, it came in a package that he could open even with heavy mittens on, he didn't have to pause to drink water along with his meal, etc. He didn't need a spoon either.

I learned something on this trip. The next time I go on a hike, especially a cold winter hike, I am going to seriously think about bringing along a thermos of soup. I like PB&J sandwiches and all, but clearly on some days hot soup would be better.

13 November 2009

Dogs Are Great

I'm a big fan of dogs. I don't own any myself, but I will gladly play with nearly any dog that is handy.

I'm also somebody who is very appreciative of the veterans in our society.

Given these two facts, I thought that this web-page was the best thing that I've seen all week: Dogs Welcoming Home Soldiers

Simply wonderful....

06 November 2009

Skill in Debate

I am somebody who admires people who can debate effectively. I wish my high school had had a debate club -- I think that this could have been a fun activity. I mean, sure, in the Real World compromise is frequently a better alternative to never-ending debate, but sometimes you have to stick to your principles and defend your position.

A debate, done well, can be a very enjoyable thing....even if one is merely observing from the sidelines. It is enjoyable to see somebody who has all of the facts at their command, can come up with logical arguments, and can communicate effectively as well.

So, anyways, the other day, as I was searching for something else, I came across Kurt Denke's response to Monster Cable's cease and desist letter. Oh, wow, this is awesome! Now here is a gentleman who knows how to debate.

I think that my two favorite quotes from the letter are these:

Further, on that point: one of the design patents you attached is closely related to a utility patent applicable to the same design, and you failed to point that fact out. I need to be able to rely upon the completeness and accuracy of the information you send to me and I find this sort of omission deeply disturbing because it is clear that the effect of this nondisclosure is to obscure the real significance of the patent features. Similarly, as I note further below, you omit reference to another patent Monster has held which appears, frankly, to be fatal to your position. If you expect to persuade me, you had better start making full, open and honest disclosures; I will find out the facts sooner or later in any event, but the impact upon your credibility will not be repaired. It looks like when you sent this letter, you were operating on the premise that I am not smart enough to see through your deceptions or sophisticated enough to intelligently evaluate your claims; shame on you. You are required, as a matter of legal ethics, to display good faith and professional candor in your dealings with adverse parties, and you have fallen miserably short of your ethical responsibilities.


...and...

After graduating from the University of Pennsylvania Law School in 1985, I spent nineteen years in litigation practice, with a focus upon federal litigation involving large damages and complex issues. My first seven years were spent primarily on the defense side, where I developed an intense frustration with insurance carriers who would settle meritless claims for nuisance value when the better long-term view would have been to fight against vexatious litigation as a matter of principle. In plaintiffs' practice, likewise, I was always a strong advocate of standing upon principle and taking cases all the way to judgment, even when substantial offers of settlement were on the table. I am "uncompromising" in the most literal sense of the word. If Monster Cable proceeds with litigation against me I will pursue the same merits-driven approach; I do not compromise with bullies and I would rather spend fifty thousand dollars on defense than give you a dollar of unmerited settlement funds. As for signing a licensing agreement for intellectual property which I have not infringed: that will not happen, under any circumstances, whether it makes economic sense or not.

It is very clear to me that Mr. Denke would be a very formidable person to engage in a debate. His letter to Monster Cable could be summarized thusly:

  1. You just messed with the wrong guy.
  2. If you ever lob a hand-grenade into my camp again, I will respond by dropping an atomic bomb on top of your shack.

Mr. Denke's letter is an excellent display of skill, and I enjoyed it very much.

09 October 2009

Gear Review: BR Lights C2.1-H

Yet another gear review...

Some of the best bike rides I've ever been on have occurred in total darkness. I can recall one ride in particular in which a woman who was leading the ride declared that we simply hadn't ridden far enough that evening and "would anybody mind if we rode down to the beach?". The vote was unanimous -- so off to the beach we went. The ride had already been a ton of fun up until this point but this addition put the ride over the top. We made our way to the coastline and then started cruising down the beach at around 22mph. The moon was obscured by many clouds that evening and was definitely not full. I could dimly see the waves off to my left, and the air smelled of the sea. Because we were riding far after tourist season had ended, we were only interrupted by two cars for our entire journey down the shoreline. Except for the sounds of the waves crashing on the beach, the wind in my ears, and the occasional voices of my fellow riders, it was serenely quiet. This was a glorious ride.

Anyways, this is a long introduction to review the bike light that I purchased a few years ago: the BR Lights C2.1-H. This light seems to be the marriage of two recent developments in the technology world: decent white LED bulbs and lithium-ion batteries.

Before I tell you what I think of this light, I need to tell you something: the gentleman who owns and operates BR Lights is a very good friend of my brother-in-law. I paid full price for my light, and I guess one of the little things that went through my head before I bought this light was that if I ever ran into any problems, I would probably have a good customer experience because of my brother-in-law. I'd heard some complaints against other more-famous manufacturers of bike lights, and I wanted to avoid this. Thus, I just wanted a reliable light that was bright and kept me safe at night. More about this at the end of the review...

So, anyways, I bought a C2.1-H and immediately put it to use for some rides with friends. This light easily attaches to a handlebar and the attachment is secure. The body of this thing is pretty much indestructible (see the video on the website of a car running over one of these lights). On top of the unit itself is a very simple user-interface -- an indicator light that shows how much charge is left and a button for controlling the light. The button on top of the light is easy to use, even when wearing (I am not kidding...) three pairs of gloves. I found it to be very easy when riding in cold weather to put the light into "low output mode" when I was riding in the back of the pack and "high output mode" when I got near the front of the pack.

This unit comes with two bulbs -- one produces light in a wide, diffuse pattern and the other produces a more narrow beam. For the type of road biking that I do, this combination is perfect. I can see how somebody who mountain bikes at night would maybe want to augment all of this with something that is helmet-mounted.

As for light output, this unit is really really bright.

Basically, I have ridden with this light on many occasions....in rain, in cold weather, etc. The light has never failed me in these conditions, and I have had a lot of fun in the process.

Highly recommended.

One last thing in my review: this light turned out to be extremely handy during the ice storm last year. It was nice to be able to leave my wife at home with a very powerful, very long-running flashlight. The one problem that I experienced with this light was that the unit itself got damaged after five days of being charged from my generator's (probably very noisy) power output. So, I sent the unit in for repair, and the gentleman who owns BR Lights (my brother-in-law's friend) fixed the light for a small fee.

If you are in the market for a really bright bike light, one that is bombproof and well-designed, I think that you should check out the lights from BR Lights.

29 August 2009

Stale Beer

True story: Last Friday at noon I went on my usual bike ride in the hills that surround the office park that I work at. As my co-worker and I were descending a small hill, the driver of a (blue?) car traveling in the opposite direction threw a half-empty can in our direction and managed to hit me square in the chest. I had less than a half second to react to this. Thankfully, I was completely fine from the whole incident. The car disappeared, and there was nothing I could do about the whole incident -- I had no description of the car, no license plate, and I wasn't carrying a cell phone either. The only thing that I could hope for from the incident was that the people driving behind the (blue?) car would have seen the incident and called the police.

After the coward(s) in the car disappeared, my co-worker asked what had happened. Since the can that hit me was blue, I assumed that the can was a can of Pepsi. So I told my co-worker that some idiot threw a can of Pepsi at me.

There was nothing we could do at this point except to continue the ride. So, that's what we did.

When I got back to the office, I sat down at my desk to cool off. There in my cube, in the absence of a 20mph breeze from riding my bike, I took a few breaths and tried to figure out what the strange smell was. Then it hit me: stale beer. That coward in the car had thrown a half-empty can of beer at me!

Oh, wonderful. Not only was this driver cowardly enough to throw something at a bicyclist, but he/she also had an open container of beer in the car, close enough at hand to throw at me.

I wish this driver had had enough courage to stop...I'm sure I could have convinced him/her that what they did was wrong.

19 August 2009

Big Ohh.

Did you know that if you replace some code that (1) is frequently executed, (2) runs over a large data-set, and (3) implements an O(n^2) algorithm with code that implements an O(n) algorithm, things will go quite a bit faster!? Yessirree, it is true!
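
To make this concrete, here is a toy sketch (the names and data are mine, not from any real codebase) of the kind of swap I'm talking about: checking a list for duplicates by comparing every pair, versus remembering what you've already seen in a hash set.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class BigOhh {
    // O(n^2): compare every pair of elements.
    static boolean hasDuplicateQuadratic(List<Integer> xs) {
        for (int i = 0; i < xs.size(); i++) {
            for (int j = i + 1; j < xs.size(); j++) {
                if (xs.get(i).equals(xs.get(j))) {
                    return true;
                }
            }
        }
        return false;
    }

    // O(n): a HashSet membership test is (amortized) constant time.
    static boolean hasDuplicateLinear(List<Integer> xs) {
        Set<Integer> seen = new HashSet<>();
        for (Integer x : xs) {
            if (!seen.add(x)) {   // add() returns false if x was already present
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Worst case for both: no duplicates at all.
        List<Integer> xs = new ArrayList<>();
        for (int i = 0; i < 20000; i++) {
            xs.add(i);
        }
        long t0 = System.nanoTime();
        boolean a = hasDuplicateQuadratic(xs);
        long t1 = System.nanoTime();
        boolean b = hasDuplicateLinear(xs);
        long t2 = System.nanoTime();
        System.out.printf("quadratic: %b in %d ms%n", a, (t1 - t0) / 1_000_000);
        System.out.printf("linear:    %b in %d ms%n", b, (t2 - t1) / 1_000_000);
    }
}
```

Run it with a few tens of thousands of elements and the difference is hard to miss.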

18 August 2009

Too many of them

So, the other day I'm standing in the deli in my local grocery store. It is Sunday afternoon. The store is busy. I'm trying to pick up some pastrami that is on sale.

The clerks behind the deli counter serve customers according to the numbers on the little tickets that everybody picks up when they arrive at the deli. This is the same system that is used at 10,000 other delis.

I'm holding number "9". The 50-ish year-old woman standing next to me is holding the number "8". The clerks at the deli are now helping the customer with the number "7".

Eventually, customer #7 gets what they want. So, now it's time for the next number. One of the clerks says in a loud voice "who's next?". There is an awkward pause and then I hear one of the clerks ask in a loud voice "who is next -- what is the number after '7'?" The 50-ish year-old woman and I look at each other quizzically. After a second she suggests to the clerk that eight is the number after seven, at which point in time the clerk says "Yeah, that's it! Eight is the next number. Who has number eight?". Customer #8 and I exchange a knowing look.

Eventually, customer #8 gets her Polish ham (sliced medium), and I get my pastrami (sliced thin). When I get home, my wife notices that I didn't actually get the pastrami that I asked for ("thin and trim") but instead got some other kind. Customer #8 seemed to be a much more savvy shopper than I was -- I'll bet she managed to get what she asked for.

I think that Beavis and Butthead summed up this situation nicely:

[a teacher asks Butt-head if he is angry for some reason]
Butt-head: Uhhhh... I'm, like, angry at numbers.
Beavis: Yeah, there's like, too many of them and stuff.


13 August 2009

Java Memory Usage

Dear Lazyweb,

Suppose I am writing a large server, and my implementation language is Java. The memory requirements for the server are larger than what the Sun JVM provides by default. So, I need to configure the JVM to use more memory.

This isn't rocket science. Everybody knows about Sun's "-Xmx" flag.

Here is Sun's documentation for the -Xmx flag:

-Xmxn
Specify the maximum size, in bytes, of the memory allocation pool. This value must be a multiple of 1024 greater than 2MB. Append the letter k or K to indicate kilobytes, or m or M to indicate megabytes. The default value is 64MB. The upper limit .... [elided since the limits are a lot higher nowadays....] Examples:

-Xmx83886080
-Xmx81920k
-Xmx80m

There are a couple of things that confuse me about this flag. First of all, take a look at what runs just fine on my server box:

$ java -Xmx1000g HelloWorld
Hello, world!

I can assure you that my server box does not have 1000GB of any sort of memory!

More confusingly, here is another test I can run:

$ java -Xmx1000g SomeLongRunningProgram &
$ java -Xmx1000g SomeLongRunningProgram &

So, now I've got two programs running on my server box that seem to expect that they will, at some point in the program's runtime, be able to allocate 1000GB each for the memory pool. In my mind, this means that not only do I not have enough memory here, but the memory that I do have on this machine is heavily oversubscribed.

This whole situation confuses the heck out of me. My background includes quite a bit of embedded systems programming, and in the embedded world systems typically allocate their memory up front and then treat this memory as if it was a precious resource. If you can't allocate the memory that you need up front, you know something is wrong right away and it needs to be fixed. You don't get this behavior with the Sun JVM "-Xmx" flag.

OK, so, given the work that I have at hand, the "-Xmx" flag does not give me the behavior that I wanted. So, I looked into this a little bit more and thought about this problem. Soon, I was focusing my attention on the "-Xms" flag. Here's the documentation for this:

-Xmsn Specifies the initial size of the memory allocation
pool. This value must be a multiple of 1024
greater than 1 MB. Append the letter k or K to
indicate kilobytes or the letter m or M to indicate
megabytes. The default value is 2MB. Examples:

-Xms6291456
-Xms6144k
-Xms6m

So, after thinking about this a little bit, I eventually arrived at the following pattern for specifying memory allocation for server applications:

java -Xms1024m -Xmx1024m MyServer

The point is, I am specifying the same values for "-Xms" and "-Xmx". This pattern ensures that the JVM tries to allocate the memory that it needs up front. It isn't hard for me to experiment with my server machine and to learn how much memory I can allocate. If I allocate too much, I know about the problem right away. My server's memory never gets "oversubscribed" either.
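
A quick way to sanity-check what the flags actually did is to ask the standard java.lang.Runtime class (this is a minimal sketch; the exact figures will vary a bit by JVM, since some of the heap is reserved for the JVM's own use):

```java
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory() reflects -Xmx; totalMemory() reflects how much the
        // JVM has actually committed so far (driven up front by -Xms).
        System.out.printf("max:   %d MB%n", rt.maxMemory() / (1024 * 1024));
        System.out.printf("total: %d MB%n", rt.totalMemory() / (1024 * 1024));
        System.out.printf("free:  %d MB%n", rt.freeMemory() / (1024 * 1024));
    }
}
```

Running this with "java -Xms1024m -Xmx1024m HeapCheck" should show "max" and "total" reporting (roughly) the same figure, which is exactly the no-surprises behavior I want.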

So, I think that this pattern of specifying the same values for both "-Xms" and "-Xmx" is a good pattern for Java-based server applications. In fact, after a few minutes of searching, I came across this page, which seems to offer the same advice:

Setting -Xms and -Xmx to the same value increases predictability by removing the most important sizing decision from the JVM.

So, my question is this: under what circumstances would it be advantageous to specify different values for "-Xms" and "-Xmx"? I don't see a lot of upside for doing this, especially since I know that writing solid code that handles out-of-memory errors is a pretty difficult thing to do.

27 July 2009

Bike snobs and genius bears, oh my!

A couple of musings, based on articles I read today.

First, when do you think that Stephen Colbert is going to take notice of Yellow-Yellow the "genius bear" and declare her to be a threat to national security?:

"She's quite talented," said Jamie Hogan, owner of BearVault, based in San Diego. "I'm an engineer, and if one genius bear can do it, sooner or later there might be two genius bears. We're trying to work on a new design that we can hopefully test on her."

I'll bet Yellow-Yellow will be breaking into nuclear storage depots next.

.....

Next, I read this article today and it piqued my interest with one question that has been bugging me for, like, forever: why do professional/fashionable cyclists position their sunglasses OVER their helmet straps? It makes no sense to me. I can think of several times in my life when I've been traveling down the road at 20+mph, ran into a bee/wasp, and "successfully" ripped my helmet off to let my angry tenant escape. So, my question is this: is it fashionable to rip off your helmet and destroy your sunglasses, all to save a bumblebee? Terminally un-cool cyclists like myself want to know the answers to important questions like these.

24 July 2009

Electronic Trading Marketplace Is In Need of a Fairness Algorithm

I found this article to be fascinating:

The slower traders began issuing buy orders. But rather than being shown to all potential sellers at the same time, some of those orders were most likely routed to a collection of high-frequency traders for just 30 milliseconds — 0.03 seconds — in what are known as flash orders. While markets are supposed to ensure transparency by showing orders to everyone simultaneously, a loophole in regulations allows marketplaces like Nasdaq to show traders some orders ahead of everyone else in exchange for a fee.


I guess I find this article to be interesting for three reasons. First, I predicted that this would start to occur nearly a decade ago (which raises the question: why am I still working at $DAYJOB? Ugh.) Second, I find this to be academically interesting, because whenever I see a large-scale system with this many transactions, I wonder to myself how well the system is designed and tested (in my day-to-day work, I continue to see....interesting...methods of ensuring synchronization and transaction-consistency). And third, I think that the people who run these electronic markets really need to put some thought into implementing some fairness algorithms....otherwise I can't see how this situation won't devolve into investment firms turning themselves into high-speed loops that run algorithms that MAKE MONEY FAST....much to the detriment of the actual electronic marketplace.

20 July 2009

The Problem with Groups on LinkedIn

Like a lot of other people, I have signed up with LinkedIn in order to manage my professional contacts. Overall, I think that LinkedIn is a good service (as long as one carefully manages one's privacy, that is...). I would say that the vast majority of the email that I have received as a result of my LinkedIn account has been worthwhile for me to read.

However, with the introduction of a new feature, LinkedIn has put my overall goodwill towards this service in a precarious place. This new feature is the LinkedIn "Groups" feature.

At first, LinkedIn Groups seemed like a good idea: give people who are part of the same logical group the ability to network. So, for example, alumni from a certain university can join a LinkedIn Group. Most of the people who are part of this "group" have never interacted professionally, but because they're part of a group now they can more easily connect. Okay so far....

The problem is that, as the size of the group grows, the probability that the group will grow to include people you never want to hear from also grows. Guess what? This is already starting to happen in LinkedIn Groups.

So, for example, if you join a LinkedIn Group devoted to the university that you happened to graduate from, at some point in the future some loser/spammer can also join that group. At this point, because you are part of the same LinkedIn Group, this loser/spammer can send email to the ENTIRE group. There is apparently very little in LinkedIn to guard against this sort of abuse.

Before the introduction of the LinkedIn Groups feature, this sort of behavior wasn't really possible. I mean, you could get spammed by some co-worker who you knew at some previous job, but at this point you could just disconnect from this in-duh-vidual and be done with the problem. This isn't really possible with LinkedIn Groups -- either you're all in or you're not.

I predict that LinkedIn is going to have to deal with a LOT of this sort of abuse in the future. I do hope that LinkedIn introduces some sort of moderation system, controlled by the users themselves. It'd be nice to hear that some loser got his LinkedIn account yanked because some threshold of regular users deemed his/her mail to be spam.

In the meanwhile, the one thing that I've done to fix this problem for myself is to configure my LinkedIn preferences to send me "Group" updates no more often than once a week. This makes the spammers much easier to deal with.

26 June 2009

Netflix Prize Contest Winner?!

Wow! A team finally managed to attain a 10% improvement in the Netflix Prize Contest. This is a very impressive achievement.

21 June 2009

Beauty in Engineering

In terms of beauty in engineering, do you know what I find to be beautiful?

Well, here's one thing: it's coming up with a plan to secure a particular network protocol, documenting the plan, and then implementing the plan in code that executes in several radically different environments.

It is (perversely?) pleasurable to see seemingly random bytes be transmitted onto a wire from one peer on the network destined to another very different peer on the network and to have all of these bytes be decoded and authenticated properly. The mathematics behind this stuff is extremely pretty, and making it all work in the Real World is fascinating.

The icing on the cake is knowing that all of the code is efficient and scalable, and that the code has a reasonable test harness so that there are no hidden surprises.

As a character on a TV show once said, I love it when a plan comes together.

12 June 2009

Penguins Win The Stanley Cup!

Wow. What a game! The Penguins have just won the Stanley Cup.

This was a hugely improbable event. The Penguins beat the Red Wings in Detroit, and the Wings are an incredible team oozing with all-stars. The Penguins had some ups and downs during the series, but, in general, they won the series by keeping their foot on the gas when it mattered. At the end of the series, the Wings looked beat-up, shell-shocked, and out of energy.

I can't believe that former Bruin Hal Gill is going to get his name on the cup. Denis Leary must be ranting in some bar right now, screaming "unbelievable!".

01 June 2009

Rant: Microsoft decreases Firefox's security with a forced plugin

Aaarrggh. I am frustrated.

I am somebody who tries to be serious about computer security. Also, I frequently find myself answering questions from friends and family like ``my computer is slow and it seems to be doing a lot of strange things -- can you tell me what is wrong with it?'' This line of questioning nearly always causes my head to throb. The whole state of security on Windoze machines is something that depresses me.

After performing various magic incantations to try to ``fix'' my friends and family's Windoze machines, I always tell my friends and family to stop using Microsoft's Internet Explorer. I look them right in the eye and tell them that I am deadly serious about this -- in my opinion this browser cannot be made to be secure. In fact, I tell them that in my opinion Internet Explorer would be a much better application if, when it first started running, it displayed a splash screen that stated:

Friendly reminder: by running this browser, you are authorizing one or (probably many) more people who most likely reside in eastern Europe or the Far East to be able to run arbitrary code on your machine. These people will be in complete control of your computer. They will be able to steal anything from your computer, and be able to use your computer to mount attacks against other computers. Have fun on the Internet, and thank you for choosing a Microsoft product!

| OK! |

Instead of running Internet Explorer, I tell my friends and family to run Firefox. This is a no-brainer. I mean, let's be clear: Firefox isn't totally secure either but it is The Right Choice for my friends and neighbors. One gentleman that I know thanked me a few months ago for recommending Firefox to him several years ago. He is a total computer neophyte, and from what I have been told, his computer has been acting fine for years, with no viruses or malware installed on it.

Anyways, getting back to my frustration: Microsoft, in their infinite wisdom, has decided to include a Firefox plugin in their Microsoft .NET Framework 3.5 Service Pack 1 update, pushed to end-users' computers via Windows Update.

Some representative from Microsoft has explained the ``rationale'' for this decision:

A couple of years ago we heard clear feedback from folks that they wanted to enable a very clean experience with launching a ClickOnce app from FireFox.

Microsoft's actions here are totally crazy.

First of all, I seriously doubt that Microsoft got clear feedback like this.

Second of all, if Microsoft had wanted to publish an addon for Firefox, The Right Place for Microsoft to publish this addon would have been http://addons.mozilla.org/. Instead of publishing this addon in the proper manner, letting end-users decide for themselves whether they wanted to install this addon or not, Microsoft has shoved this addon down the throats of end-users.

Third, it seems to me that this addon is yet more poorly designed insecure Microsoft crap. In fact, many people (such as myself) who run Firefox run this browser entirely for the reason that totally crazy insecure plugin crap like this hasn't been available for Firefox -- until now.

Fourth, continuing on with the grand nightmare that Microsoft has created here, Microsoft has installed this plugin at the machine level, and has provided no easy way for end-users to uninstall this enormous security problem from their machines. If an end-user wants to uninstall this plugin, they must resort to using the registry editor, which is far from easy.

Basically, here is what I imagine happened recently in Redmond: somebody at Microsoft observed that Firefox is steadily increasing its market share -- it might have even captured 10% of the market by now. This must have worried this in-duh-vidual from Microsoft, which is arguably one of the world's best monopolies. So, what to do about this? Easy! Produce a plugin that gives people all of the same sucky ``user-experience'' and ``security'' as Internet Explorer, and shove it down the throats of all of the people who use Firefox.

Way to go, Microsoft! I, for one, seriously doubt that this company has my best interests in mind with actions like this one.

15 May 2009

I am not a fan of C++ iostreams

I'm one of those people who thinks that C++ can be an acceptable choice for a programming project. Lots of people think that C++ is dead, but I'm not one of them. I'm not a big fan of this language anymore, but it can occasionally be OK. I'm mostly somebody who programs at the level of bits/bytes/protocols/hardware/etc. so a language that lets me program at this level and also gives me some OO features is alright in my book.

There are some parts of C++ that I have never liked though. The other day I was reminded of how much I have always disliked C++ iostreams.

...

I've spent the last {long period of time} wrestling with several large libraries, trying to get them all to work together. It has been like wrestling with three grizzly bears, but I don't mind: this is what I do for a living.

Anyways, around two weeks ago, before I got interrupted with something else, I noticed that my current work was dying strangely. I've been working with somebody else's code, and this code uses C++ iostreams, so, in an effort to fit in, I used C++ iostreams too. I think that the code looks somewhat ugly, but that's the way it is.

So, my code was dying strangely, and then I got interrupted for a few weeks. When I finally got back to take a look at my problem, I came to the dim realization that at some point in the program's execution my debugging output just....stopped. However, it was also obvious to me that certain parts of the program were just churning away, working just fine.

Weird.

Before I seriously got started on this project, I modified the entire project's code to compile without any warnings (as usual, this was a hugely efficient use of time...). Also, there were no memory-related problems in the codebase: nothing in the runtime was smashing one of the iostreams-related data-structures, which could have explained the root cause of the program's output just stopping.

Eventually, after a bit of searching around, I figured out what was going on: a part of the program's code (part that I had never really looked at in detail) was printing out some C++ classes with the iostreams library. The problem was that, as part of my modifications, I had some NULL pointers to various things -- and parts of the code were attempting to print these too.

To cut to the chase, can you guess what the following code produces with my platform's C++ implementation? (generic Linux x86_64 box, g++ 4.x.y, program compiles cleanly with "-Wall -pedantic")


#include <iostream>

int main(int argc, char *argv[])
{
     char *p = NULL;

     std::cout << p << std::endl;
}



Answer: my platform's C++ implementation prints ABSOLUTELY NOTHING when it runs this code. This pretty much mimics the problem I was having with my much larger program.

Since we're on the subject, can you guess what this program outputs on my platform? (gcc 4.x.y, program compiles cleanly with "-Wall -pedantic")

#include <stdio.h>

int main(int argc, char *argv[])
{
    char *p = NULL;

    printf("%s\n", p);

    printf("Hello World\n");

    return 0;
}


Answer: this program prints out something completely reasonable.

The thing that bugs me about the behavior of the C++ program is that not only does the iostreams implementation not tell me about the NULL pointer, but this action also seems to corrupt the internal state of the iostreams library itself, thus causing ABSOLUTELY NO SUBSEQUENT OUTPUT. I mean, surely this must be a common enough occurrence in the Real World, no? The designers of iostreams must be happy with their type-safe interface to the rest of the thorny things that might go on in any C++ program, but this particular implementation failed in a manner that cost me a couple of hours of debugging time.

(As a side-note: yes, I know enough about the internal plumbing of C++ to understand why an implementation might not print out the code's NULL pointer here -- yeah, yeah, I know, "undefined behavior". Yawn. Just because I understand what might be going wrong here doesn't mean that I am a fan of this behavior)

I'm just not a big fan of C++ iostreams. I much prefer strategic use of the battle-tested and known-to-be-reliable stdio library -- even in C++ code. Maybe iostreams would be a lot more attractive to me if I commonly programmed at higher layers, but at that point I'd almost certainly favor Java over C++.

06 April 2009

What is so Hard About Parallel Programming? Lots!

Over at JavaLobby, Clay Breshears asks "What is so Hard About Parallel Programming?".

Hmmm. Interesting. I've spent quite a bit of time thinking about this question too. I happened to write my master's thesis on the subject of parallel computation.

I think that Mr. Breshears is obviously a very smart individual, but I disagree with his assertion that:
[...] learning parallel programming is no harder than learning a second programming language.
Let me be clear: I am not violently disagreeing with Mr. Breshears, but I would put it this way: for a reasonably intelligent computer science student, learning a second programming language rates around a "3" on a 1-10 difficulty scale. For that same student to learn parallel programming, I would estimate the difficulty at around a "7".

So, I disagree with Mr. Breshears.

What is my rationale for my disagreement? In my experience, it is often hard to write a serial program to solve a particular problem. I mean, this should be uncontroversial. However, I think that moving from a serial program that solves a given problem to a correct parallel program that solves the same problem requires a paradigm shift. The central part of this transformation does not involve more rote learning of some new programming language's syntax and APIs -- this process requires a completely different way of thinking. It is one thing to hazily understand that an execution unit on a given machine is going to be executing a given routine; it is another thing entirely to be able to grok that multiple execution units on a given machine might be executing the same routine all at the same time.

There is one other thing that this transformation requires as well: a fantastic attention to detail. Coming up with a correct/efficient parallel program that solves a problem means that the programmer has to completely understand the problem at hand (especially the data dependencies). The programmer also has to know the semantics of the parallel programming system that they are using like the back of their hand. This last bit cannot be stressed enough: the number of details involved here can be mind-boggling.

How difficult can it be to come up with a parallel implementation of a serial program? Well, let me answer this question somewhat indirectly. One of the most seductively popular ways to write a parallel program is to use threads and synchronization mechanisms. In fact, threads and synchronization mechanisms aren't just used for parallel programming -- they're also used quite a bit in concurrent systems programming. This is where I spend quite a bit of my time as a software engineer, actually. Over the years, I have seen quite a few problems in real products that have as their root-cause something to do with threads and synchronization primitives. These problems have all been inadvertently created by engineers with at least a few years of industry experience (sometimes much more...). If engineers with this level of experience can get confused and make mistakes in such an environment, what chance does the average college sophomore (as Mr. Breshears suggests) have to use similar technology and create parallel programs? I would humbly suggest that the answer to this question is "not much".

10 March 2009

In Praise of the Old Guys

Gary Roberts, one of my favorite gritty, character players in the NHL, has retired. I'm sorry to see him go. I have long been a fan of Roberts' leadership qualities and everything he brought to the game. You always knew when Roberts was playing that he was giving one hundred percent of his effort towards making sure his team won.

Let's see, with the retirement of Roberts, who is my current favorite old guy in the NHL? This is an easy question for me to answer: Brendan Shanahan. The Devils recently hired Shanahan and he scored a goal in his first game. I swear, the look on his face after he scored the goal was something like "see, kids, scoring goals is easy -- just like riding a bike!". There was clearly no rust on this old guy.

I think that one of my favorite incidents involving Shanahan occurred a few years ago, when Shanahan got sick of Donald Brashear's antics and decided to do something about it. The thing that you need to understand here is that Brashear is one of the most fearsome enforcers in the NHL. He is VERY good at his job (he is also a very interesting person off-ice). So, when Shanahan dropped gloves with Brashear, this was immediately a very interesting event.



The best part of the whole incident occurred right before the fight, when Shanahan told Brashear to remove his helmet, as if he was saying:

Good Sir! I will be striking your head with my fists soon, and I would be grateful to you if you were to allow me to do so without marring your helmet.
Old guys are great.

24 February 2009

Low-Powered But Decent-Performance Linux Boxes

Today I read some cool news: Marvell is offering a $100 wall-wart computer. This seems to be a pretty neat box, with a reasonable amount of RAM, USB 2.0, GbE, and an ARM-flavored CPU. This would definitely make for a workable home NAS or print server box. Very cool stuff.

I am a big fan of low-powered (electrically speaking) computers. A few years ago I began experimenting with a Soekris net4801 box. I wanted a low-powered (~5 watts at 120 volts) machine to use inside my house. My net4801 uses a consumer-grade 2GB CF card for its hard drive, and it runs a customized Linux distro for its OS.

The thing that drew me towards the Soekris box was the fact that it is x86 compatible, and at the time I purchased the box I was in the middle of a stressful project at $DAYJOB in which I was doing a lot of cross-compilation. I didn't really want to come home from a long day at work to do even more cross-compilation... On my net4801 I have installed a package manager, so adding new software to the box usually just means installing a prebuilt package.

For my purposes, having a low-powered but decent-performance box available for use at home is VERY handy -- much more handy than simply leaving a noisy, power-sucking tower workstation on all the time at home.

One thing is certain: the whole area of low-power/decent-performance computers is a very interesting place right now. I can't wait to see what the future brings.

09 February 2009

Fun With Logging

What is it with logging subsystems? Writing one always seems to bring up the most interesting problems....

One day at $DAYJOB I was called upon to write a logging subsystem. So, I dived in.

In my judgment, the logging subsystem I had to design+implement had to satisfy a number of basic constraints:
  • output redirectable to a file or terminal or syslog
  • configurable so that engineers working on one subsystem could see their own output and ignore the output generated by other subsystems
  • configurable so that several levels of output could be generated: informational, warning, debug, etc.
  • ability to generate timestamps with output
  • bombproof and reliable
  • portable to a number of systems and compilers
In addition, due to the nature of the project we were working on, I also added a few more features:
  • very high performance
  • ability to log to a circular memory buffer
  • ability to change logfile parameters at runtime
We were implementing a very big system, and we needed all of these features. So, this is what I designed+implemented.

With all of these constraints, the logging subsystem was not everything to everybody. For example, I was under CONSIDERABLE pressure to make this subsystem as efficient as possible, so when it came time to choose between making the system easy-to-use and making it ultra-efficient, I had to choose the latter. Still, I was pleased with my efforts. I unit-tested my code, finished my docs, checked it in, alerted my co-workers, and started to work on other things.

My co-workers thought my work was OK and they started using my logging subsystem.

This is when the interesting problems began.

The problem that hit me right away was that now that our SQA staff had logfiles that they could read, when something went wrong in the product (still in its infancy), most of the time the new bug in the bug-tracking system got assigned to me. What was the rationale for this? Answer (paraphrasing) "The product bombed and I saw an error message in the logfile. Kevin owns the logfile subsystem, so I assigned the problem to Kevin". With this logic, I was getting assigned and bothered by dozens and dozens of bugs. I mean, a logfile message could have said "Fatal error in Ralph's code; please contact Ralph if you see this error" and the bug still likely would have gotten assigned to me. It took me several iterations with SQA to address this problem.

At this point SQA understood that if they wanted to test my code, they would have to test the logging system ITSELF and not simply look at the messages that happened to be traversing through it. Well, the next test that came up went like this: pump as much {stuff} through the logging system as possible to see if it could handle it. I was a bit dubious of this test, but I had little time to complain. After a week of SQA staying out of my hair, I again started getting many bugs assigned to me. This time the logic went like this: after turning on full logging in every subsystem, heavily loading the system, and then letting it run for a day, the system bombed. Oh, and the system was completely out of disk space too, due to the immense size of the resulting logfile.

I had to roll up my sleeves and investigate this situation. It turns out that my code was not the culprit here either: my code deliberately ignored the return value from the call to write() that sends the data to disk, so when the disk filled up, log messages were simply dropped and the logging subsystem itself carried on just fine. However, I was now getting assigned lots of bugs with the following pattern: (1) turn on full logging, (2) hammer on the system for a long time, (3) observe that it bombs. I did not enjoy getting assigned these sorts of bugs, so I protested to SQA that they were assuming a causal relationship between two things: the system's disks filling up and the system dying. I pointed out that these two things might be completely unrelated. Eventually, I had to make this point crystal clear to SQA by writing a tool for them that filled the local disk to 99.999% full before they tested the system. At this point, with a bit of effort, I was able to convince them that if the system didn't die right away when they started their testing, the problem must lie somewhere else. Eventually, I sat down in the lab for a few hours to find the real root cause, and sure enough, there were several real problems elsewhere.

The last problem I had with the logging subsystem was the weirdest one. My boss's boss had heard of all of the "problems" that were cropping up "in the logging subsystem" and he decided to see what was going on. I explained the situation with the previous two "problems" to him and he understood right away -- he was, after all, a very good engineer. But then over the next few weeks he asked me several times if I was sure that the logging subsystem was efficient enough. In response to his first few queries I gave some short, succinct answers: "yeah, it is written very efficiently" and "I used some preprocessor hacks to precompute things", etc. But he kept on asking me about the logging subsystem, and I began to detect a very worried vibe from this gentleman.

The final straw came one day when, for the Nth time, I had to explain that if somebody were to turn on full logging on the system (and you might as well include a few calls to fflush() and fsync() in here, because I made these configurable too), then the system would run slower AND THIS WAS OK. I kept fielding questions and concerns from various people in the company about this issue, and it was starting to drive me a little bit batty. I'd explain it this way: "look, if you ask a computer to do more work, would you also reasonably expect that it would take more time to do this work?". After I managed to extract an answer of "yes", I'd ask "so why are you concerned about our system slowing down if you turn on full debug logging?". I'd also point out that you would NEVER want to do this on a production system.

Anyways, my boss's boss stopped by one more time to ask me about all of this and so I had it out with him. We went back to his office and I explained exactly how the logging subsystem worked and answered all of his questions. Then I got to ask my question: "why are you so concerned about the system running slower when somebody turns on debug logging?". At this point he told me one of HIS war stories: one day when he was working in the telecom world he was involved in a sales demo of a serious, big-iron telephone switch. Not having one of these just laying around for a demo, they brought the potential customer to the central office, where a big-iron switch was operating in a production environment. Well, the demo started, and at one point the potential customer asked "What if something goes wrong on the switch? Are there any debugging capabilities?". Well, the sales engineer, wanting to make a sale said "No problem! I know how to turn on debug output!" and before anybody could stop him he did exactly that. Well, at this point, operations on the switch screeched to a crawl and THIRTY-FIVE THOUSAND PHONE CALLS WERE DROPPED INSTANTLY. He didn't tell me if they got the sale or not.... (I can only assume that this event was the inspiration behind the "can you hear me now!?" slogan)

Anyways, after hearing this story, I understood where my boss's boss was coming from. I never got to make any modifications to the logging subsystem based on his input (like, maybe some level of debug logging should not be available in a production build at all), because the issue never became truly pressing and I transitioned to another project a short time later.

Ahh...logging subsystems....never a dull moment.

02 February 2009

Patch for Unix Network Programming Interface Code

I am a big fan of the book _UNIX Network Programming_ by Rich Stevens. Maybe if you are reading this blog you are too.

Anyways, I have discovered that some of the code in this book is suffering from some bit-rot. Specifically, in section 17.6 of the third edition some code is presented that allows the programmer to traverse the list of network interfaces on the local machine. This code uses ioctl(...., SIOCGIFCONF, ...) to get the information.

The source of the bit-rot is that Stevens' code made some assumptions about the size of (struct ifreq)... and these assumptions are no longer valid, at least not on my relatively modern Linux 2.6 box.

So, I've come up with a patch. I'm pretty pleased with my patch, because:

  1. it fixes the code.
  2. it simplifies the code.
  3. it makes the code more portable, so it will still work even if this union is changed in the future.
I thought that other people would find this to be useful as well, so here it is.

21 January 2009

Mad Marble Madness Skillz

Holy crap. Watch this person play Marble Madness:



Marble Madness was one of the few games I ever played with any regularity when I was a kid. I played the Amiga version a fair bit and I thought I was pretty good at it. I played quite a few "perfect" games at the hardest level and I was pretty quick too.

If I thought I was good at Marble Madness, all I needed to do was look at this video to understand very clearly that there are people out there who were/are a lot better at it than I ever was.

17 January 2009

Gear Review: Toro 1800 Electric Snow Thrower

Hey, did you read the recent news that Belkin's development rep is hiring people to write fake positive Amazon reviews? I guess this gentleman (and possibly company...) has decided that it would be a better investment to essentially commit fraud rather than actually make a product that works well. Way to go!

(I'm not shocked that there are fake reviews on Amazon -- I mean, this is obvious -- but I do find the brazen manner in which this is being arranged to be surprising)

Anyways, I have another (real) gear review: The Toro 1800 Electric Snow Thrower.

My gear review will be short and sweet: this thing works well. I've used it for two winters now and it still runs like new. For snow depths of less than (say) eight inches it works like a champ. For depths greater than that, things get a bit harder. I used it two weeks ago after a storm dumped 16 inches of somewhat dry/powdery snow on my driveway -- it handled everything except for the snow at the end of my driveway, and with the full 16 inches the going was slow but steady.

I like this thing because it (1) works well, (2) is maintenance-free, (3) doesn't require me to babysit yet another balky gas engine, and (4) has to produce less overall pollution than the balky 2-stroke gas snowblower it replaced.

If you have a really large driveway or if you absolutely need to clear your driveway really quickly, then maybe this isn't the tool for you. For my purposes, this thing works really well.

04 January 2009

(Don't) Throw It Over The Wall

I used to work at an interesting shop in which the general culture of the place regarded the SQA staff as being one step above moldy bread or something you find growing between your toes. This situation was succinctly expressed by a single catch-phrase; this phrase was uttered whenever a new software release was given to SQA:

THROW IT OVER THE WALL

In the culture of this place, the ultra-smart software engineers sat on one side of an imaginary wall, and the not-very-bright SQA staff sat on the other side of the wall. "Throw it over the wall" was the derisive phrase that the software organization used when it wanted to get the SQA organization to test some shiny and new software trinket that they produced. In general, the relationship between the software engineers and the SQA staff was not good.

I really wasn't wild about this dynamic. I mean, let's be honest: there were people of all abilities in both groups....just like at every other company.

The thing that bothered me about the "throw it over the wall" dynamic was that all it did was demoralize the SQA staff. Furthermore, "throw it over the wall" frequently meant "give the SQA staff the software with very little documentation -- very little in the way of requirements, functional specifications, etc.". Sometimes I had no idea how the SQA team managed to test the final product at all. I could tell that this situation bugged the more talented members of the SQA staff a lot.

There was very little I could do to change the dynamic at this place. For the period of time that I worked there, I at least tried to treat the SQA staff I worked with as partners in creating the product that we were all supposed to be creating. The results were generally positive -- when I worked with the more talented members of the SQA staff, they were definitely able to more thoroughly test the code that I produced. There were even one or two occasions on which a question posed to me by the SQA staff caused me to radically change the product I was working on, because it turned out that my original design was flawed.

A few jobs later I again found myself working in a "throw it over the wall" shop. This time I was working in an incredibly intense environment, complete with aggressive schedules and incomplete requirements. Still, I took the time to establish a good relationship with the SQA staff, and I even wrote test tools for the SQA staff so that they could better test out my code. When the deadline came and the product shipped, I was pretty happy that my part of the product was well tested and performed well in the field. As for the "throw it over the wall" crowd, well, let's just say that there was a maintenance release and a hairy upgrade in the field...