Where I work, we use Exchange as our email server and Outlook as the primary client (at least I do). We also have an email quota that I keep bumping into, since I have a tendency to attract many emails with large attachments like image-happy PowerPoint files or binary code modules to patch things. I am also an extreme user of email folders. My main Outlook account contains some 650 folders, and my offline archive of all my old emails approaches 1300 folders, with many hundreds of thousands of emails for a total of almost 20 GB. So, pretty extreme.
My problem is: what do I do when the email system tells me (and it is serious, I can attest) that I am close to hitting my quota and that soon email will neither be received nor sent? I want to find the folders that are very large and candidates for some archiving. The answer had eluded me for a long time, until I stumbled upon a 2010 YouTube video: http://www.youtube.com/watch?v=3skJOd4GIak, from “tech-informer.com” (which now looks pretty dead). With some modifications, this solved my problem.
Continue reading “Off-Topic: Analyzing Outlook Mailbox Size”
On the Wind River corporate blog, I have put up a blog post about how Wind River Education Services is going to use Simics to teach networking. What is interesting with this approach is that it shows how a virtual platform can be used for tasks like teaching that don’t have much to do with hardware modeling or similar “typical” VP uses. In this case, the key value is encapsulation of a set of machines running real operating systems and software stacks, and with lots of networks connecting them.
When IBM moved their mainframe systems (the S/360 family that is today called System Z) from BiCMOS to mainstream CMOS in 1994, the net result was a severe loss in clock frequency and thus single-processor performance. Still, the move had to be done, since CMOS would scale much better into the future. As a result, IBM introduced additional parallelism to the system in order to maintain performance parity. Parallelism as a patch, essentially.
Continue reading “IBM Mainframe: Parallelism as Patch”
There is a new post at my Wind River blog, about how a team of researchers at the University of Nebraska at Lincoln is using Simics to force rare bugs to manifest themselves as errors. They used Simics to control a target system to force it into rare situations much more likely to trigger latent bugs, requiring far fewer test runs compared to just randomly rerunning tests again and again and hoping to see a bug.
Carbon Design Systems have been quite busy lately with a flurry of blog posts about various aspects of virtual prototype technology. Mostly good stuff, and I tend to agree with their push that a good approach is to mix fast timing-simplified models with RTL-derived cycle-accurate models. There are exceptions to this, in particular exploratory architecture and design where AT-style models are needed. Recently, they posted about their new Swap ‘n’ Play technology, which is an old proven idea that has now been reimplemented using ARM fast simulators and Carbon-generated ARM processor models.
Continue reading “Carbon “Swap ‘n’ Play” – A New Implementation of an Old Idea”
Once upon a time, I was a young man in high school where our little computer club got a new PC with a color screen and a floating-point coprocessor. One fun little program I wrote was a simple gravity simulator, where a number of point-sized bodies, assigned various masses, flew around interacting with each other. We used that program and tried to set up initial settings for the sizes, speeds, and directions of bodies that would result in some kind of stable system. More often than not, all we managed to create were comets that came in, took a sharp corner around a “star”, and disappeared out into the void again. Still, it was great fun. And when I discovered Angry Birds Space it felt like a chance to try that again. Overall, “space” as my son calls it is a great spin on the Angry Birds idea. However, the way it is sold does not make me too happy.
Continue reading “Off-Topic: Angry Birds Space (Good Game, Bad Price)”
There is a new post at my Wind River blog, about how the LDRA code coverage tools have been brought to work on Simics using a simulation-only “back door”.
The most interesting part of this is how a simulator can provide an easy way to get information out of target software, without all the software and driver overhead associated with doing the same on a real target. In this case, all that is needed is a single memory-mapped location that can be written to by software – and it can be placed at a user-mode-accessible address if necessary.
Continue reading “Wind River Blog: Code Coverage over a Back Door”
Once upon a time, all programming was bare metal programming. You coded to the processor core, you took care of memory, and no operating system got in your way. Over time, as computer programmers, users, and designers got more sophisticated and as more clock cycles and memory bytes became available, more and more layers were added between the programmer and the computer. However, I have recently spotted what might seem like a trend away from ever-thicker software stacks, in the interest of performance and, in particular, latency.
Continue reading “Back to Bare Metal”
Wind River recently added a couple of new processor models to Simics: the 30-year-old 80186 and the 32-year-old 8051.
I have a blog post about this up on the Wind River tools blog. Pretty amazing to see us model an 8-bit machine in 2012 – it just proves how long-lived some hardware systems are.
There is a new post at my Wind River blog, about Simics running a model of the new Intel Crystal Forest platform. Crystal Forest is a very complex piece of hardware, but I am pretty happy that we managed to demo it in an understandable way – by essentially using it as a black box and putting a pretty display on top of that (using Eclipse).
I was recently pointed to a 2011 SPLASH presentation by David Ungar, an IBM researcher working on parallel programming for manycore systems. In particular, he works on a project called Renaissance, run together with the Vrije Universiteit Brussel (VUB) in Belgium and Portland State University in the US. The title of the presentation is “Everything You Know (about Parallel Programming) Is Wrong! A Wild Screed about the Future”, and it has provoked some discussion among people I know about just how wrong is wrong.
Continue reading “David Ungar: It is Good to be Wrong”
Ticket to Ride is a nice real-world board game that is generally considered one of the best family and gateway games (and a decent game even for experienced gamers). We recently got it for our iPod Touches, and the weakness of the computer players quickly turned it from “I wonder if I can win this game” into “let’s shoot for the highest score possible”.
Chasing high scores is fairly typical for computer games – playing against human beings you are motivated to win, even if you win by scoring a measly 75 points… while against the computer it becomes about beating your own old scores. Unfortunately, this also turns repetitive after a while, due to some small design flaws that really should be easy to fix.
Continue reading “Off-Topic: Ticket-to-Ride Pocket is Broken”
There is a new post at my Wind River blog, about how you actually do fault injection in Simics. This particular post is pretty detailed, showing the actual architecture of a fault injector in Simics, not just “yes you can do it”. It includes actual diagrams of system components and how you can insert fault injection into an existing system, so it is a bit more technical than most of my Wind River blog posts, which tend to be more conceptual.
In this final part of my series on the history of reverse debugging I will look at the products that launched around the mid-2000s and that finally made reverse debugging available in commercially packaged products, not just research prototypes. Part one of this series provided a background on the technology, and part two discussed various research papers on the topic going back to the early 1970s. The first commercial product featuring reverse debugging was launched in 2003, and since then there has been a steady trickle of new products up until today.
Originally published in January 2012. Post updated 2012-09-28 with a revised timeline for Lauterbach CTS. Post updated 2016-04-05 to include Mozilla RR. Post updated 2016-12-26 to add Simulics. Post updated 2017-10-08 to add Microsoft WinDbg. Post updated 2018-07-28 to add Borland Turbo Debugger.
Continue reading “Reverse History Part Three – Products”
This is the second post in my series on the history of reverse execution, covering various early research papers. It is clear that reverse debugging has been considered a good idea for a very long time. Sadly though, not a practical one (at the time). The idea is too obvious to be considered new. Here are some of the papers that I have found, going back to before reverse debugging got started for real in actual products (around 2003), as well as later interesting research papers that did not make it into products. It is worth noting that in recent times, products and useful software have become the more common way for reverse debugging ideas to be expressed.
Continue reading “Reverse History Part Two – Research”