February 8, 2014

This poster was so... what's the term, influential? Not quite, but it had a big impact on me when I was a kid. It came with the computer game Elite and shows the various spaceships in the game. It's an intriguing study in what forms you can make with (I think) strictly-convex simple polygon models.

I finally wrote my pseudo-magnum-opus on automated testing, and the skepticism I often have about it. I'd really appreciate any feedback from software folks, because I really need to grow in this area, or else get better at getting others to agree with me. The link includes this great long passage by Joel Spolsky that I want to include here:
In fact what you'll see is that the hard-core geeks tend to give up on all kinds of useful measures of quality, and basically they get left with the only one they can prove mechanically, which is, does the program behave according to specification. And so we get a very narrow, geeky definition of quality: how closely does the program correspond to the spec. Does it produce the defined outputs given the defined inputs.

The problem, here, is very fundamental. In order to mechanically prove that a program corresponds to some spec, the spec itself needs to be extremely detailed. In fact the spec has to define everything about the program, otherwise, nothing can be proven automatically and mechanically. Now, if the spec does define everything about how the program is going to behave, then, lo and behold, it contains all the information necessary to generate the program! And now certain geeks go off to a very dark place where they start thinking about automatically compiling specs into programs, and they start to think that they've just invented a way to program computers without programming.

Now, this is the software engineering equivalent of a perpetual motion machine. It's one of those things that crackpots keep trying to do, no matter how much you tell them it could never work. If the spec defines precisely what a program will do, with enough detail that it can be used to generate the program itself, this just begs the question: how do you write the spec? Such a complete spec is just as hard to write as the underlying computer program, because just as many details have to be answered by the spec writer as by the programmer. To use terminology from information theory: the spec needs just as many bits of Shannon entropy as the computer program itself would have. Each bit of entropy is a decision taken by the spec-writer or the programmer.

So, the bottom line is that if there really were a mechanical way to prove things about the correctness of a program, all you'd be able to prove is whether that program is identical to some other program that must contain the same amount of entropy as the first program, otherwise some of the behaviors are going to be undefined, and thus unproven. So now the spec writing is just as hard as writing a program, and all you've done is moved one problem from over here to over there, and accomplished nothing whatsoever.

This seems like a kind of brutal example, but nonetheless, this search for the holy grail of program quality is leading a lot of people to a lot of dead ends.
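Here's a toy sketch of my own (not Spolsky's, and the names are made up) of what I take him to mean: even for something as simple as sorting, a "spec" precise enough for a machine to check is itself just more code, with the same decisions baked into it.

```python
# Hypothetical illustration: a mechanically-checkable "spec" for sorting.
# Writing it down precisely means writing another program.

def spec_sorted(xs):
    """Return a checker that accepts only correct outputs of sorting xs."""
    def is_permutation(a, b):
        # To even state "same elements", we end up leaning on a sort ourselves.
        return sorted(a) == sorted(b)

    def is_ordered(a):
        return all(a[i] <= a[i + 1] for i in range(len(a) - 1))

    return lambda out: is_permutation(xs, out) and is_ordered(out)

def my_sort(xs):
    # The program under test.
    return sorted(xs)

if __name__ == "__main__":
    data = [3, 1, 2]
    # The mechanical check: does the program's output satisfy the spec?
    assert spec_sorted(data)(my_sort(data))
```

All the checker can tell you is that one piece of code agrees with another piece of code; somebody still had to make every decision somewhere.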

There is an amusing (and by amusing I mean annoying) bug with iTunes smart playlists... if a song is only on a device because it had a certain rating and so appeared in specific smart playlists, things go wrong if it then loses that rating: its ghost still shows up and automagically streams from the cloud, but anything that could carry the updated metadata is lost, so the thing stays on the list.