Today I read some cool news: Marvell is offering a $100 wall-wart computer. This seems to be a pretty neat box, with a reasonable amount of RAM, USB 2.0, GbE, and an ARM-flavored CPU. This would definitely make for a workable home NAS or print server box. Very cool stuff.
I am a big fan of low-powered (electrically speaking) computers. A few years ago I began experimenting with a Soekris net4801 box. I wanted a low-powered (~5 watts at 120 volts) machine to use inside my house. My net4801 uses a consumer-grade 2GB CF card for its hard drive, and it runs a customized Linux distro for its OS.
The thing that drew me towards the Soekris box was the fact that it is x86 compatible, and at the time I purchased the box I was in the middle of a stressful project at $DAYJOB in which I was doing a lot of cross-compilation. I didn't really want to come home from a long day at work to do even more cross-compilation... On my net4801 I have installed a package manager, so adding new software to the box usually just means installing a prebuilt package instead of cross-compiling anything.
For my purposes, having a low-powered but decent-performance box available for use at home is VERY handy -- much more handy than simply leaving a noisy, power-sucking tower workstation on all the time at home.
One thing is certain: the whole area of low-power/decent-performance computers is a very interesting place right now. I can't wait to see what the future brings.
24 February 2009
09 February 2009
Fun With Logging
What is it with logging subsystems? Writing one always seems to bring up the most interesting problems....
One day at $DAYJOB I was called upon to write a logging subsystem. So, I dived in.
In my judgment, the logging subsystem I had to design+implement had to satisfy a number of basic constraints (see the sketch after this list):
- output redirectable to a file or terminal or syslog
- configurable so that engineers working on one subsystem could see their own output and ignore the output generated by other subsystems
- configurable so that several levels of output could be generated: informational, warning, debug, etc.
- ability to generate timestamps with output
- bombproof and reliable
- portable to a number of systems and compilers
- very high performance
- ability to log to a circular memory buffer
- ability to change logfile parameters at runtime
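To make the discussion below more concrete, here is a minimal sketch of the kind of interface these constraints suggest. This is NOT the actual subsystem; every name in it (log_write, log_cfg, LOG_COMPILE_MIN, the subsystem bits) is hypothetical:

    #include <stdarg.h>
    #include <stdio.h>
    #include <time.h>

    /* Hypothetical severity levels and per-subsystem bits. */
    enum log_level { LOG_DEBUG, LOG_INFO, LOG_WARN, LOG_ERROR };
    #define SUBSYS_NET  (1u << 0)
    #define SUBSYS_DISK (1u << 1)

    /* Runtime-changeable configuration: where output goes, the minimum
       severity to emit, and a mask selecting which subsystems to show. */
    struct log_cfg {
        FILE          *out;         /* a file, the terminal, or a pipe */
        enum log_level min_level;
        unsigned       subsys_mask;
    };
    static struct log_cfg cfg = { NULL, LOG_INFO, ~0u };

    static void log_write(unsigned subsys, enum log_level lvl,
                          const char *fmt, ...)
    {
        if (lvl < cfg.min_level || !(subsys & cfg.subsys_mask))
            return;                  /* filtered out: cheap early exit */

        FILE *out = cfg.out ? cfg.out : stderr;
        time_t now = time(NULL);
        char ts[32];
        strftime(ts, sizeof ts, "%Y-%m-%d %H:%M:%S", localtime(&now));
        fprintf(out, "%s [%d] ", ts, (int)lvl);

        va_list ap;
        va_start(ap, fmt);
        vfprintf(out, fmt, ap);
        va_end(ap);
        fputc('\n', out);
    }

    /* Compile-time floor: with a constant level argument the compiler
       can drop the whole call, one way to buy performance from the
       preprocessor. */
    #ifndef LOG_COMPILE_MIN
    #define LOG_COMPILE_MIN LOG_DEBUG
    #endif
    #define LOG(subsys, lvl, ...) \
        do { if ((lvl) >= LOG_COMPILE_MIN) \
                 log_write((subsys), (lvl), __VA_ARGS__); } while (0)

A call site would then look like LOG(SUBSYS_NET, LOG_WARN, "retry %d", n); the circular memory buffer and runtime parameter changes would hang off the same sort of configuration structure.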
With all of these constraints, the logging subsystem was not everything to everybody. For example, I was under CONSIDERABLE pressure to make this subsystem as efficient as possible, so when it came time to choose between making the system easy to use or ultra-efficient, I had to choose the latter design goal. Still, I was pleased with my efforts. I unit-tested my code, finished my docs, checked it in, alerted my co-workers, and started to work on other things.
My co-workers thought my work was OK and they started using my logging subsystem.
This is when the interesting problems began.
The problem that hit me right away was that now that our SQA staff had logfiles they could read, when something went wrong in the product (still in its infancy), most of the time the new bug in the bug-tracking system got assigned to me. What was the rationale for this? Answer (paraphrasing): "The product bombed and I saw an error message in the logfile. Kevin owns the logfile subsystem, so I assigned the problem to Kevin". With this logic, I was getting assigned and bothered by dozens and dozens of bugs. I mean, a logfile message could have said "Fatal error in Ralph's code; please contact Ralph if you see this error" and the bug still likely would have gotten assigned to me. It took me several iterations with SQA to address this problem.
At this point SQA understood that if they wanted to test out my code, they would have to test the logging system ITSELF and not simply look at the messages that happened to be traversing through it. Well, the next test that came up went like this: pump as much {stuff} as possible through the logging system and see if it could handle it. I was a bit dubious of this test, but I had little time to complain. After a week of SQA staying out of my hair, I again started getting many bugs assigned to me. This time the logic went like this: after turning on full logging in every subsystem, heavily loading the system, and then letting it run for a day, the system bombed. Oh, and the system was completely out of disk space too, due to the immense size of the resulting logfile.
I had to roll up my sleeves and investigate this situation. It turned out that my code was not the culprit here either: my code deliberately did not check the return value from the call to write() that sent the data to disk, so a full disk could not take the logger (or the product) down through that path. However, now I was getting assigned lots of bugs that had the following pattern: (1) turn on full logging (2) hammer on the system for a long time (3) observe that it bombs. I did not enjoy getting assigned these sorts of bugs, so I protested to SQA that they were assuming a causal relationship between two things: the fact that the system's disks were filling up and the fact that the system died. I pointed out that these two things might be completely unrelated. Eventually, I had to make this point crystal clear to SQA by writing a tool for them that filled the local disk to 99.999% full before they tested the system. At this point, with a bit of effort, I was able to convince them that if the system didn't die right away when they started their testing, the problem must lie somewhere else. Finally, I sat down in the lab for a few hours to figure out the real root cause, and sure enough, there were several real problems elsewhere.
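For what it's worth, that design decision fits in a couple of lines. This is a sketch with a made-up function name, not the original code:

    #include <unistd.h>

    /* Deliberately ignore write() failures: if the disk fills up, the
       logger drops the message and carries on, so a full disk cannot
       crash the process through the logging path. */
    static void log_flush_to_disk(int fd, const char *buf, size_t len)
    {
        ssize_t rc = write(fd, buf, len);
        (void)rc;   /* losing a log line beats killing the product */
    }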
The last problem I had with the logging subsystem was the weirdest one. My boss's boss had heard of all of the "problems" that were cropping up "in the logging subsystem" and he decided to see what was going on. I explained the situation with the previous two "problems" to him and he understood right away -- he was, after all, a very good engineer. But then over the next few weeks he asked me several times if I was sure that the logging subsystem was efficient enough. In response to his first few queries I gave some short, direct answers: "yeah, it is written very efficiently" and "I used some preprocessor hacks to precompute things", etc. But he kept on asking me about the logging subsystem, and I began to detect a very worried vibe from this gentleman.
The final straw came one day when, for the Nth time, I had to explain that if somebody were to turn on full logging on the system (and you might as well include a few calls to fflush() and fsync() in here, because I made these configurable too), then the system would run slower AND THIS WAS OK. I kept on fielding questions and concerns from various people in the company about this issue, and it was starting to drive me a little bit batty. I'd explain it this way: "look, if you ask a computer to do more work, wouldn't you reasonably expect it to take more time to do that work?". After I managed to extract an answer of "yes" I'd ask "so why are you concerned about our system slowing down if you turn on full debug logging?". I'd also point out that you would NEVER want to do this on a production system.
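Since the fflush()/fsync() knobs keep coming up, here is roughly what a configurable flush path could look like (again, hypothetical names, not the original code); it makes the cost of "turn everything on" obvious:

    #include <stdio.h>
    #include <unistd.h>

    struct flush_cfg {
        int do_fflush;  /* push the stdio buffer to the kernel per message */
        int do_fsync;   /* force the kernel buffer to the disk per message */
    };

    /* Called after each formatted message. With both flags on, every log
       line pays for a library flush, a system call, and a disk round trip,
       which is exactly why full logging slows the whole system down. */
    static void log_maybe_flush(FILE *fp, const struct flush_cfg *fc)
    {
        if (fc->do_fflush)
            fflush(fp);
        if (fc->do_fsync)
            fsync(fileno(fp));
    }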
Anyways, my boss's boss stopped by one more time to ask me about all of this and so I had it out with him. We went back to his office and I explained exactly how the logging subsystem worked and answered all of his questions. Then I got to ask my question: "why are you so concerned about the system running slower when somebody turns on debug logging?". At this point he told me one of HIS war stories: one day when he was working in the telecom world he was involved in a sales demo of a serious, big-iron telephone switch. Not having one of these just lying around for a demo, they brought the potential customer to the central office, where a big-iron switch was operating in a production environment. Well, the demo started, and at one point the potential customer asked "What if something goes wrong on the switch? Are there any debugging capabilities?". The sales engineer, wanting to make a sale, said "No problem! I know how to turn on debug output!" and before anybody could stop him he did exactly that. At this point, operations on the switch slowed to a crawl and THIRTY-FIVE THOUSAND PHONE CALLS WERE DROPPED INSTANTLY. He didn't tell me if they got the sale or not.... (I can only assume that this event was the inspiration behind the "can you hear me now!?" slogan)
Anyways, after hearing this, I understood where my boss's boss was coming from. I never got to make any modifications to the logging subsystem based on his input (like, maybe some level of debug logging should not be available in a production build) because the issue never became all that important and I transitioned to another project a short time later.
Ahh...logging subsystems....never a dull moment.
02 February 2009
Patch for Unix Network Programming Interface Code
I am a big fan of the book _UNIX Network Programming_ by Rich Stevens. Maybe if you are reading this blog you are too.
Anyways, I have discovered that some of the code in this book is suffering from some bit-rot. Specifically, in section 17.6 of the third edition, some code is presented that allows the programmer to traverse the list of network interfaces on the local machine. This code uses ioctl(...., SIOCGIFCONF, ...) to get the information.
The source of the bit-rot is that Stevens' code made some assumptions about the size of (struct ifreq), and these assumptions are no longer valid, at least not on my relatively modern Linux 2.6 box.
So, I've come up with a patch. I'm pretty pleased with it, because:
- the code can be fixed.
- the code can be simplified.
- the code can be made more portable, so that it still works even if the union inside (struct ifreq) changes in the future.
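The patch itself is not reproduced here, but the gist of the portability fix can be sketched. On Linux, every entry returned by SIOCGIFCONF is exactly sizeof(struct ifreq) bytes, so the traversal can step through the buffer in fixed strides instead of computing a variable stride from the embedded sockaddr the way the original code did. A minimal, Linux-only sketch:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        char buf[8192];
        struct ifconf ifc;
        memset(&ifc, 0, sizeof ifc);
        ifc.ifc_len = sizeof buf;
        ifc.ifc_buf = buf;
        if (ioctl(fd, SIOCGIFCONF, &ifc) < 0) {
            perror("SIOCGIFCONF");
            return 1;
        }

        /* Fixed-size stride: correct no matter what is inside the
           union in (struct ifreq), today or in the future. */
        struct ifreq *ifr = ifc.ifc_req;
        size_t n = (size_t)ifc.ifc_len / sizeof(struct ifreq);
        for (size_t i = 0; i < n; i++)
            printf("%s\n", ifr[i].ifr_name);

        close(fd);
        return 0;
    }

(Note that on Linux, SIOCGIFCONF only reports interfaces that have an IPv4 address assigned.)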