Friday
Sep 19, 2014

Implement First, Optimize Second

I've started participating on stackoverflow, mostly by trying to answer questions.  This has made me really want to write some longer articles about programming and programming practices, the kind of pieces that are generally too long and ranty to be helpful answers.  There are issues that I see brought up over and over again on SO, and the advice I want to give does not answer the question, because the asker is asking the wrong question.  For my first piece I am going to discuss why worrying about performance before your program is working is a bad idea.

The type of question that motivated me to write this article is some permutation of

I am planning on writing a program to do X.  It needs to be as fast as possible.  What optimizations should I use when I write my code?

You'll see lots of comments (including a few from me) saying effectively

Don't try to optimize your code until it is working and you can profile/benchmark it

or as I like to put it, "implement first, optimize second".  I hope this article will explain why this is the correct approach.  There are many reasons why you should implement first and optimize second.  I will order the arguments for this approach from what I consider the most to the least compelling.  Opinions may vary.

1. Correctness

The hardest, most resource-consuming part of programming is writing correct code.  It is really easy to write code, but making sure it is correct is hard.  Now when I say correct, I mean correct in an industrial, deliver-to-paying-customers sense of correct, not the much weaker did-my-TA-happen-to-catch-any-bugs sense (a topic for another day...).  The best way to verify correctness is to test.  However, you can't test without code to run.  Thus you should attempt to reach a point where you can test your program as soon as possible.  (For more information on testing and coding, I suggest looking at Test Driven Development.)
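To make that concrete, here is a minimal sketch of the test-first idea in Python (the word_count function and its behaviour are hypothetical, purely for illustration):

import unittest

def word_count(text):
    # The simplest thing that could possibly work: split on whitespace.
    return len(text.split())

class TestWordCount(unittest.TestCase):
    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

    def test_simple_sentence(self):
        self.assertEqual(word_count("implement first optimize second"), 4)

if __name__ == "__main__":
    unittest.main()

Once tests like these pass, they become the safety net for every optimization you try later.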

The fastest way to get correct code?  Write it as simply as possible.  Don't be clever.  Don't use tricks.  Just make it work.  The faster you can get it working the better, even if this means making sub-optimal decisions.  If you've got slow but reliable data structures, use those instead of writing faster ones.  Even if you know they are going to be too slow, use them.  Get your code working first.  Then write tests to verify the correctness of your program, then check to see if your program is too slow.  You may run into performance issues during testing.  That's ok.  You want examples that prove your code is too slow before you optimize.  If you can't find any, maybe your code is just fine without the optimizations (see below).
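As an example of what I mean by simple, here is a sketch (the SpellChecker class is made up for illustration): a linear scan over a plain list, used deliberately even though a set or trie would be faster, until profiling shows the lookups actually matter.

# A deliberately naive spell checker: slow but obviously correct.
class SpellChecker:
    def __init__(self, words):
        self.words = list(words)  # simplest possible storage

    def is_known(self, word):
        return word.lower() in self.words  # O(n) scan, fine for now

checker = SpellChecker(["implement", "first", "optimize", "second"])
print(checker.is_known("First"))  # True
print(checker.is_known("third"))  # False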

Now is the time to start benchmarking, profiling and then optimizing.  Luckily, you now have good tests to make sure that the optimizations you try don't change the program's behaviour.  This means that as you try different optimizations you can verify that the program still works.  Benchmarking and profiling a program that does not work is a bad idea.  If your change doesn't work, you can always compare the optimized with the unoptimized code (you are using a source code management tool, right?) to isolate the problem.  It is much easier to debug when you know exactly what change broke the behaviour.  Nothing wastes time like debugging the wrong code.
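In Python, for example, the standard library is enough to get started; this sketch (hypothetical functions, timings machine-dependent) checks behaviour first and only then compares speed:

import timeit

def total_naive(values):
    # Unoptimized reference version.
    result = 0
    for v in values:
        result += v
    return result

def total_fast(values):
    # Candidate optimization.
    return sum(values)

data = list(range(100000))

# First verify the optimization didn't change behaviour...
assert total_fast(data) == total_naive(data)

# ...then compare the timings.
print("naive:", timeit.timeit(lambda: total_naive(data), number=100))
print("fast: ", timeit.timeit(lambda: total_fast(data), number=100))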

One objection I see raised to this is that it seems like you'll end up coding twice, once the slow way and once the fast way.  Isn't it faster to just write the code once?  No, it is not.  Most programmers spend more time debugging than actually writing code.  Correctness is hard, writing code is not.  Debugging optimized code is harder than debugging simple code.  Debugging isolated changes (a particular optimization) is easier than debugging a whole program (the bug could be anywhere).  You will save time by making the debugging easier.

2. 90/10 Rule

There are a million variations on the 90/10 rule, but the one I am talking about is

90% of your program's time is spent in 10% of its code

Basically, only a small part of your code is probably performance critical, and it may not be the part you think it is.  So how do you determine which code is the important code?  Profiling.  Ideally, use profiling tools to instrument your code so you can actually see which code matters.  Optimizing the 90% of your code that doesn't matter is a waste of time.  If you try to optimize while you implement, the odds are pretty good you will spend time optimizing (and debugging the optimized version of) code that does not matter.
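In Python this is one import away; a sketch with a made-up workload (cheap_step and expensive_step are stand-ins) showing the per-function breakdown cProfile gives you:

import cProfile

def cheap_step():
    return sum(range(100))

def expensive_step():
    return sum(i * i for i in range(100000))

def program():
    for _ in range(50):
        cheap_step()
        expensive_step()

if __name__ == "__main__":
    # Sort by cumulative time to see where the 90% actually goes.
    cProfile.run("program()", sort="cumulative")

The output usually points somewhere surprising, which is exactly the point.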

3. Compilers are smart and do crazy shit

I was going to say that this point only applies to developers using compiled languages like C/C++; however, most interpreted languages are moving towards just-in-time compilers, so it still applies to you Python/Ruby types as well.  Compilers probably know way more about your hardware than you do.  Unless you happen to be a compiler developer, just accept this as a fact.  The compiler will make all sorts of optimizations.  You won't be able to predict what the compiler will do to your code.  Thus you really can't know if the code you think you need to optimize will be handled just fine by the compiler or not.  So wait until the code is written and the compiler has had a chance to do its magic before deciding if you need to step in.

I also suspect (although I am not a compiler developer) that compilers do better with code that is, well, simple.  It is easier for the compiler to optimize straightforward code.  Complex code with weird tricks may actually impede the compiler's ability to optimize.  As the compiler is probably better at optimizing your code than you are, don't second-guess it until you know you need to do better.
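A small illustration of the spirit of this point in Python (timings are machine-dependent): the straightforward version usually wins, because the built-in already does the clever part at a lower level, so the hand-rolled "optimization" just gets in the way.

import timeit

data = list(range(100000))

def straightforward():
    return sum(data)

def hand_optimized():
    # The kind of manual loop people reach for before measuring.
    total = 0
    i = 0
    n = len(data)
    while i < n:
        total += data[i]
        i += 1
    return total

assert straightforward() == hand_optimized()
print("straightforward:", timeit.timeit(straightforward, number=100))
print("hand optimized: ", timeit.timeit(hand_optimized, number=100))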

4.  Stuff changes when you start implementing

When you start implementing your program, you'll discover that some of your plans won't work and you'll need to do something different.  Often this means going back and changing code you've already written.  Spending time on a piece of code only to discover that you don't need it sucks.  If you've also debugged the optimized version of that code, it sucks twice as hard.  So don't spend time optimizing until the implementation is done.

5.  The simple code may work just fine

Many programs are not performance critical.  Many programs don't need to be as fast as possible; they just need to be fast enough.  It may well be that the optimizations you are planning are totally unnecessary.  Your program might be just fine with the simple code.  A program that is fast enough with simple, clear code will be much easier to maintain, especially if someone else ends up maintaining it in the future.

Sunday
Oct 02, 2011

Updating the BIOS of a Thinkpad X220 using Linux

After spending a day trying to figure out various grub, syslinux and memdisk options, I retreated to googling and quickly found this page on the Lenovo support forum.  I am going to copy the contents here to make sure I don't lose them in the future.  All props go to the original author:

1. Get the BIOS update ISO (8duj10uc.iso here) from the Lenovo support site.

2. Get 'geteltorito' and extract the boot image from the ISO (isobar.c probably works too):

$ wget 'http://www.uni-koblenz.de/~krienke/ftp/noarch/geteltorito/geteltorito.pl'

$ perl geteltorito.pl 8duj10uc.iso > biosupdate.img

3. Copy the image to the USB thumb drive:

$ sudo dd if=biosupdate.img of=/dev/usbthumdrive bs=512K

Here is a copy of the geteltorito.pl script for posterity:


#!/usr/bin/perl

use Getopt::Std;

#
# geteltorito.pl: a bootimage extractor
# Script that will extract the first El Torito bootimage from a
# bootable CD image
# R. Krienke 08/2001
# krienke@uni-koblenz.de
# License: GPL
#
# Get latest version from:
# http://userpages.uni-koblenz.de/~krienke/ftp/noarch/geteltorito
#
$utilVersion="0.5";
#
# Version 0.5
#    2009/06/22
#    A patch for harddisk emulation images.
#    For BootMediaType=4 (harddisk emulation) SectorCount is always 1, and geteltorito.pl
#    returns just the MBR. This patch guesses the correct bootimage size
#    from the MBR (offset+size of the first partition).
# Version 0.4
#    2007/02/01
#    A patch from Santiago Garcia to use a virtual sector
#    size (vSecSize) of 512 bytes, as defined on "El Torito" specs and change
#    unpack of the sector count from n to v to get the correct sector count.
# Version 0.3
#    2006/02/21
#    A patch from Ben Collins to make the
#    utility work on PPC machines (change from 'L'-encoding in pack to 'V')
# Version 0.2
#    Several patches included from Nathan Stratton Treadway(nathant@ontko.com)
#    to adjust the platform output as well as fixes for other minor bugs
# Version 0.1
#    Initial release
#
# For information on El Torito see
# http://en.wikipedia.org/wiki/El_torito

$vSecSize=512;
$secSize=2048;
$ret=undef;$version=undef;$opt_h=undef;$loadSegment=undef;$systemType=undef;

#
# Read a particular sector from a file
# sector counting starts at 0, not 1
#
sub getSector{
   my ($secNum, $secCount, $file)=@_;
   my ($sec, $count);

   open(FILE, $file) || die "Cannot open \"$file\" \n";

   seek(FILE, $secNum*$secSize, 0);
   $count=read(FILE, $sec, $vSecSize*$secCount, 0) ;
   if( $count != $vSecSize*$secCount ){
       warn "Error reading from file \"$file\"\n";
   }
   close(FILE);

   return($sec);
}


#
# Write eltorito data into a file
#
sub writeOutputFile{
   my($name)=shift;
   my($value)=shift;

   open(OUT, ">".$name)|| die "$0: Cannot open outputfile \"$name\" for writing. Stop.";
   print OUT $value;
   close(OUT);
}

#
# Usage
#
sub usage{
        warn "\n$0 [-hv] [-o outputfilename] cd-image \n",
            "Script will try to extract an El Torito image from a \n",
            "bootable CD (or cd-image) given by  and write \n",
            "the data extracted to STDOUT or to a file.\n",
            "   -h:        This help. \n",
            "   -v:        Print version of script and exit.\n",
            "   -o : Write extracted data to file  instead of STDOUT.\n",
            "\n\n";
        exit 0;
}


# ---------------------------------------------------------------------
$ret=getopts('hvo:');

if( defined($opt_v) ){
        warn "Version: $utilVersion \n";
        exit 0;
}

if( defined($opt_h) || $#ARGV <0 ){
         usage(0);
}

if( defined($opt_o) ){
   $outputFilename="$opt_o";
}

$imageFile=$ARGV[0];

if( ! -r $imageFile ){
        die "Cannot read image/device \"$imageFile\". Aborting\n";
}

#
# Read Sector 17 from CD which should contain a Boot Record Volume
# descriptor. This descriptor contains at its start the text ($isoIdent)
# CD001     and ($toritoSpec)
# EL TORITO SPECIFICATION
# see http://www.cdpage.com/Compact_Disc_Variations/eltoritoi.html
# for details
#

$sector=getSector(17, 1, $imageFile );
($boot, $isoIdent, $version, $toritoSpec,
        $unUsed, $bootP)= unpack( "Ca5CA32A32V", $sector );

if( $isoIdent ne "CD001" || $toritoSpec ne "EL TORITO SPECIFICATION" ){
        die "This data image does not seem to be a bootable CD-image\n";
}

#
# Now fetch the sector of the booting catalog
#
$sector=getSector($bootP, 1, $imageFile );

print STDERR "Booting catalog starts at sector: $bootP \n";

# The first 32 bytes of this sector contains the validation entry for a
# boot. The first byte has to be 01, the next byte determines the
# architecture the image is designed for, where 00 is i86, 01 is PowerPC
# and 02 is Mac. More data give info about manufacturer, etc.  The
# final two bytes must contain 0x55 and 0xAA respectively (as
# defined by the El Torito standard).

$validateEntry=substr($sector, 0, 32);

($header, $platform, $unUsed, $manufact, $unUsed, $five, $aa)=
               unpack( "CCvA24vCC", $validateEntry);

if( $header != 1 || $five != 0x55 || $aa != 0xaa ){
        die "Invalid Validation Entry on image \n";
}

print STDERR "Manufacturer of CD: $manufact\n";
print STDERR "Image architecture: ";
print STDERR "x86" if( $platform == 0 );
print STDERR "PowerPC" if( $platform == 1 );
print STDERR "Mac" if( $platform == 2 );
print STDERR "unknown ($platform)" if( $platform > 2 );
print STDERR "\n";

#
# Now we examine the initial/default entry which follows the validation
# entry and has a size of 32 bytes.

$initialEntry=substr($sector, 32, 32);

($boot, $media, $loadSegment, $systemType, $unUsed,
       $sCount, $imgStart, $unUsed)=unpack( "CCvCCvVC", $initialEntry);

if( $boot != 0x88 ){
        die "Boot indicator in Initial/Default-Entry is not 0x88. CD is not bootable. \n";
}

print STDERR "Boot media type is: ";
if( $media == 0 ){
        print STDERR "no emulation";
        $count=0;
}
if( $media == 1 ){
        print STDERR "1.2meg floppy";
        $count=1200*1024/$vSecSize;
}
if( $media == 2 ){
        print STDERR "1.44meg floppy";
        $count=1440*1024/$vSecSize;
}
if( $media == 3 ){
        print STDERR "2.88meg floppy";
        $count=2880*1024/$vSecSize;
}
if( $media == 4 ){
        print STDERR "harddisk";
        $MBR=getSector($imgStart, 1, $imageFile );
        $partition1=substr($MBR, 446, 16);
        ($unUsed, $firstSector, $partitionSize) = unpack( "A8VV", $partition1);
        $count=$firstSector + $partitionSize;
}
print STDERR "\n";

# Only use the internal sector counter if the real size is unknown
# ($count==0)
$cnt=$count==0?$sCount:$count;

print STDERR "El Torito image starts at sector $imgStart and has $cnt sector(s) of $vSecSize Bytes\n";

# We are there:
# Now read the bootimage to stdout
$image=getSector($imgStart, $cnt, $imageFile);

if( length($outputFilename) ){
   writeOutputFile($outputFilename, $image);
   print STDERR "\nImage has been written to file \"$outputFilename\".\n";
}else{
   print "$image";
   print STDERR "Image has been written to stdout ....\n";
}
Friday
Aug 19, 2011

A Review of Grand River Rocks

I just got back from my first time climbing at Grand River Rocks, the new climbing gym in Kitchener-Waterloo.  This is going to be a short review based on my first session; I'll post something longer once I get a better sense of the gym over the long term.  Tonight was also their first day open, so I'm not going to judge them entirely on my first experience.  Also, you should know that my regular gym is Climber's Rock in Burlington; however, I've also climbed a little at the Guelph Grotto.

Grand River Rocks [GRR] seems good.  It is smaller than Climber's Rock, but about the same size as or maybe bigger than the Guelph Grotto (I'm not great at estimating sizes.  Plus, as I am mostly a boulderer, I paid much more attention to the bouldering than the routes).  The bouldering is much better than at the Grotto.  The boulder walls, which fill the center of the gym, have a lot of angles and interesting shapes.  However, as the walls are in the center of the gym, it feels a bit more cramped than Climber's Rock.  I found this especially true on the back side of the bouldering, at the inside of the right angle of the L-shaped feature.  How significant this is will totally depend on how crowded the gym gets.  It is all top-out bouldering, which is nice.  The holds are currently very new and very rough, so expect to lose more skin than usual; I'm sure that will only take a few months to resolve.

Another difference between Climber's Rock and GRR is that the bouldering generally starts at a higher level.  Not that it is all hard, but Climber's Rock sets some easy intro problems for beginners.  We found that the lowest grade problem on the bouldering wall was definitely harder than the lowest grade at Climber's Rock.

The problems were generally interesting.  They seem to like low, sit-down starts, which was ok for me, but caused some issues for some of my climbing companions.  I've been told that Climber's Rock sets somewhat easier than other gyms; however, I was occasionally surprised at the choice of holds for some of the easier problems at GRR.  For example, there was a "yellow" problem (their second-easiest grade) that was almost all two-finger pockets.  That seemed a bit harsh for a relatively easy grade.

The route walls looked interesting, although I did not do any routes, so I'll save my opinion until later.  There was space for more routes, and it appears as though they did not have too many routes set that were sub-5.10 (which is what some of my friends are climbing).

I have a few constructive criticisms for the GRR crew.  The tape colours for the bouldering problems are occasionally hard to differentiate, especially the orange/red and the green/brown/black.  I would suggest getting brighter coloured tape, as that would really help make them more distinguishable.  I also think this might make it easier to use ink on the tape to distinguish the holds for different problems of the same colour.  The floor had extra padding around the bouldering wall; however, there were only two gym pads and one crash pad.  I really think they should have more pads.  Hopefully they are on order and simply have not arrived yet (the official grand opening is not until Sept 10).  The other thing that I would really like to know is whether they are going to have a regular schedule for resetting the problems and routes.  At Climber's Rock they try to reset one section of the bouldering wall and a number of the routes each week.  This way you know when new problems will be set.  I find this really nice, and I hope GRR comes up with a similar schedule (and posts it).  Regularly setting new problems is probably the biggest factor in deciding if I will buy a membership (note: I probably will).

In summary, GRR seems to have most of the right pieces to be a good local gym; the big question is how good their problems are going to be and how regularly they set new ones.  This is especially important as they don't have tons of bouldering, so it may not take too long to do all the problems at your level and then be stuck with nothing to do.

Sunday
Aug 14, 2011

Summer Food 2: BBQ Post mortem

Well, it's the morning after.  If you saw my twitter stream, you'll know that things did not go as planned.  Instead of taking 4-6 hours to reach 190 degrees, the meat took about 10 hours to get to 180.  Ultimately the pulled pork tastes quite good, so that, at least, was a success.  I also had problems getting the smoke generated properly.  I checked the smoke box after the cooking was over and less than half the wood had smoldered.  So my plan is to try and deduce what went wrong.  Of course, I only have the one data point, so most of this is based on other experiences.

First, the smoke.  I think the smoke box did not start smoldering because I should have put it on earlier, when I was heating the barbecue, not once I had stabilized the temperature.  The fire was not high enough at that point to get the wood smoking.  If it had already been smoking, I think it would have kept generating smoke, but the barbecue was too cold to get it started.

Second, the time.  The meat was still cold when it went on the bbq.  If you check the initial temperature when I put the meat on, it was still basically at fridge temperature in the center.  It took almost a full hour on the heat before the internal temperature was up to room temp.  Given that it only took an hour to get the internal temp to room temp, I'm not sure that fully explains why it took basically twice the expected time to reach the final temp.  Of course, heat transfer is a complex process, so who knows.

Anyway, the recipe for the rub made more than I used for this experiment, so I figure I should do another in a few weeks.

Another important note: running the bbq for 10 hours at the lowest heat used about 1/4 of a tank of propane.

Here are photos of the meat, taken almost every hour (I missed a few).

Just starting, 12 pm.

1 pm.

2 pm.

3 pm.

4 pm.

6 pm.

7 pm.

8 pm.

9 pm.

10 pm.

12 am.  Post pulling.

Saturday
Aug 13, 2011

Summer Food 2: Adventures in BBQ

As I write this I am an hour and thirty minutes into my first "proper" slow-cooked bbq.  You can read the up-to-the-minute goings-on via my twitter feed.  This longer post, however, describes the tools I am using.

Meat, rub, mustard.  I am cooking a 4 lb pork butt roast, a cut of shoulder meat.  Following the recipe here, I made a rub and applied it by first putting on a layer of mustard and then the rub.  That was yesterday at around 5 pm.

Here is a run down of the tools I'm using.

My gymboss interval timer.  Usually this is used for, you know, exercise.  However, the timer can go as high as 99 minutes, so I have set it for 6 intervals of 60 minutes.  I've also used this timer for grilling: 5 intervals of 2 minutes for cooking steak, pork chops, etc.

A grill thermometer.  I am using this to measure the temperature at the grill, as opposed to the temperature read by the thermometer integrated into the barbecue lid.  More accurate temperature monitoring means more accurate cooking.

This is my little smoke box.  You fill it full of wood chips.  The box is then placed in the bbq and heated until the wood starts to smolder.  This creates smoke that helps to flavour the meat.  Unfortunately, I messed this part up and did not heat the box enough to get the wood smoldering.  So no smoke this time.  :(

The machine.  This is my Weber Genesis, the cooking machine for today's little experiment.  One burner on at the lowest flame seems to maintain the internal temp at 225.

The meat as it just goes on, around 1pm.

The meat after the first hour.