Have you ever got to the end of something and thought “you know, if I did that over again, I would do this – and that – and the other differently”?
That, dear reader, is the second system effect. And it is, in my opinion, exactly what RIM are suffering from right now, what Apple suffered from in the 1990s, what Microsoft have been through (twice), and what essentially sealed Multics' fate before it ever began.
There will, of course, be those among you thinking to yourself (probably smugly, while you sit there sipping a martini made with olives grown by Carmelite nuns and washed in the tears of baby unicorns) that I’m referring to the “Osborne effect”. That’s a similar thing, but not quite. Let me explain.
The Osborne effect is named after the disastrous marketing campaign waged by Osborne computers in the early 80s. They made one of the very first portable computers (well, portable-ish: it was ridiculously heavy). CP/M based with a huge 5 inch screen, it launched in April 1981, and by September they were selling a million dollars worth a month. And then they made a mess. Of everything.
What's commonly thought of as the “Osborne effect” is this: a marketing announcement of how good their next computer was going to be was so effective that people stopped buying the current one, and the company ran out of money. In reality, by the time that next computer was ready, Osborne had lost so much ground to the competition that they could never make it back in sales. Game over.
But in fact, they were banking on evolution, rather than revolution, and just couldn’t come up with compelling improvements fast enough.
Of course, this doesn’t apply to either Apple or RIM: they keep churning out new products at a fairly regular rate and selling quite well.
“The second system effect” is quite a common occurrence, however. It's a stage most computer programmers – and technology companies – go through, fairly frequently.
Basically, the idea is a simple one. You decide that what you’re doing is OK, but you can do it so much better. In fact, so much better – and quicker – that it’s better to have a complete revolution, rather than an evolution. Because you’ll complete the entire NEW product in less time than it would have taken to re-engineer the old one.
Of course, that's nonsense. Apple's Copland operating system (the Mac OS 8 that never was) was intended as a ground-up rewrite. The project started round about 1994, and continued for two years, accompanied only by the sound of missed deadlines whooshing past in a blur of sheer, demented panic. In the end, they decided to scrap the idea and buy something in. That something ended up being NeXT, a company started (and owned) by Steve Jobs. If you're partway through his biography and don't want to know how that turned out, then… well, actually, if that's the case, which cave have you been living in for the last 16 years?
Microsoft weren't immune. They did it twice. First, with the ill-fated Cairo OS, which took five years. Of course, that never saw the light of day, either – although bits of it ended up in Windows 95 and Windows NT 4. Deciding that learning from your mistakes is for wimps, they did it again – with Windows Vista. Originally code-named Longhorn (aka “Longwait”), the OS was supposed to be built from the ground up on the .NET Framework. This never happened, or at least not in the way they intended, and a last-minute rewrite came up with Vista. (If you want to argue it's happening again with Windows Phone, I'd say that's the Osborne effect, not the second system effect.)
Of course, it's nothing new – it was in fact named after what happened with Multics. In the 1960s, there was a plethora of competing operating systems – CTSS, DTSS, even the ad-hockery of ITS. What was needed – in true dark side tradition – was a single OS to rule them all.
The specifications for Multics took about two years to write. By 1965, work was underway. But it took time – so much time. So many firsts were to be attempted, it makes the head spin: true virtual memory and paging, and the first OS to be written (almost) entirely in a compiled language, rather than assembler.
By anyone’s standards, a mountain to climb. Especially when the compiler didn’t exist.
But even so, Multics did finally get released, and worked well. In fact, the last Multics system was shut down in 2000. What’s left of Multics has been open sourced, and keen coders have tried to revive it to work on modern hardware.
Of course, this is what RIM are currently trying to do – reinvent themselves with BlackBerry 10, based on the bought-in QNX operating system. It's probably not working, in the same way that Nokia's attempt to rebuild themselves with Windows Phone isn't really showing much financial impact either – with many users saying that if Nokias ran Android, they'd be more likely to buy them.
You see, there is another lesson to be learned in there as well. What really killed Multics was an operating system called Unix. Not controlled by a large corporate body, it was originally a skunkworks project thrown together by Ken Thompson, Dennis Ritchie, and Brian Kernighan. When AT&T worked out what they had, they just concentrated on selling it and left the techies to do the work.
It turned out to be a smart move.
You see, while they may not run a single line of the original Unix code these days, there's a direct lineage from Unix to Android, Linux, FreeBSD, NetBSD, OpenBSD, NeXTSTEP, Mac OS X and iOS.
I’m writing this on a Windows PC. I’m surrounded by an iPad, iPod and Android phone – all, essentially, Unix-heritage devices.
The lesson is clear. To be the all-encompassing system that runs everything is a dream – a lovely one, yes, but a dream. In reality, we don’t need perfection. We need something that works, reliably, does what we want, and is available right now.
It’s why iPads rule the tablet market, why the iPod pretty much killed off other portable music players, and why the iPhone and Android rule the smartphone market right now.
We don’t need the second system. We just need a first one that works. And we need it right now.