Joel Spolsky is always worth a read, and in his post Strategy Letter VI he has a lot of smart things to say about how to think about programming. His basic message is that if you spend too much effort optimizing your code to fit the speed and memory of a current machine, by the time you are done you find yourself run over by competitors who just assumed machines would get faster and used the same programming time to build cooler products.
I just have to take issue with this.
The assumption behind this is that CPU power and memory sizes just keep increasing over time, and that you can take this for granted. But that is exactly what is not happening right now, at least not for single-thread performance. You have to go parallel to get the benefit. For me, this means I would be a bit more careful, unless I was certain that my system and my programmers really knew how to make the application in question scale out well as core counts multiply while single-thread performance stays flat.
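To make the point concrete, here is a minimal sketch (in Python, with a hypothetical sum-of-squares workload) of what "scaling out" means in practice: the computation is split into independent chunks that a pool of worker processes can run on separate cores, so throughput grows with core count even when each core gets no faster.

```python
# Minimal sketch of a scale-out structure: split CPU-bound work into
# independent chunks and farm them out to a process pool, one worker
# per core. The workload (sum of squares) is just an illustration.
from concurrent.futures import ProcessPoolExecutor


def sum_of_squares(chunk):
    """CPU-bound work on one independent range [lo, hi)."""
    lo, hi = chunk
    return sum(i * i for i in range(lo, hi))


def parallel_sum_of_squares(n, workers=4):
    # Partition [0, n) into `workers` roughly equal, independent ranges.
    step = (n + workers - 1) // workers
    chunks = [(i, min(i + step, n)) for i in range(0, n, step)]
    # Each chunk runs in its own process; results are combined at the end.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum_of_squares, chunks))


if __name__ == "__main__":
    print(parallel_sum_of_squares(1000))
```

The hard part, of course, is not the pool plumbing but making the chunks genuinely independent, which is exactly the design work you cannot retrofit later.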
Another problem with Joel’s idea is that sometimes the hardware platform you target exists now and is fixed. If you code for a particular mobile phone that is on the market today, you cannot expect it to magically get faster over time. It is here and you have to live with that. The same goes for certain embedded applications where the hardware stays stable for a long time due to certification and safety concerns. Not everyone can enjoy continuous hardware upgrades…
Finally, sandboxing is GOOD, not bad as Joel says. If there is one thing worth “wasting” processor cycles and memory on, it is security. Isolating software from other software, and keeping code that comes off the Internet tightly locked up in a little sandbox, is a good thing. If we did things more like that, we would have far fewer security problems.