The new version of Trango’s embedded “secure virtualizer” for the ARM Cortex-A9 MPCore is an interesting solution in that it directly applies virtualization technology to the issue of migrating solutions (complete software stacks) from single-core to multicore. The details of just how this is done are a bit sketchy; there is some hardware support in recent ARM architectures, and a little bit of adaptation of a guest OS using paravirtual techniques is likely not a blocker. It also touches on security, implemented using ARM’s TrustZone technology. All in all, I think this is a typical example of something that we are going to see much more of.
One common use-case for multicore processing on the desktop and elsewhere is “doing many things at the same time”. You could be running many user-interface programs at once, like the “typical today’s teenager template” of tens of IM clients, web sessions, email conversations, music and video players, movie downloads, etc. Or it is the more business-like case of background indexing of hard drives, backups being taken, large business files downloading, software compiling, source code repositories updating, etc.
I have been working in both of these modes to some extent, and the main problem with them, at least on a PC, is that while the processors might be good at multitasking and sharing the CPU load, my I/O system is annoyingly non-parallel.
I just read an EETimes report from a panel at the Power.org Developers Conference (actually, it is more accurately called the Power Architecture Developers Conference, or PADC), about programming multicore processors for the embedded market. Note that I was not there in person, so I can only take the few quotes in the article and comment on them. The main conclusions are that:
- C/C++ is going to be the dominant language for embedded for the near future. Nothing really surprising there.
- C/C++ being dominant means that parallelism in multicore processors, especially shared-memory systems, will be harder to exploit. That is certainly true.
- Tool vendors have no good idea about what to do next.
- You cannot expect to get traction with a new language.
In a sense, this blames the market for not having the good sense to adopt new tools to tackle multicore.
I don’t think things have to be that bleak.
In my work at Virtutech trying to explain Simics and its simulation philosophy, it is often a struggle to get people to accept that what seems like pretty brutal simplifications of the world actually work quite nicely. Recently, I found a nice analogy in a golf game/simulator. The type where you swing a real club and send a real golf ball through the air.
Joel Spolsky is always worth a read, and in his post Strategy Letter VI he has a lot of smart things to say about how to consider programming. His basic message is that if you optimize your code too much to work well and fit in the memory of a current machine, by the time that you are done, you find yourself run over by competitors that just assumed machines would be faster and used the same programming time to implement cooler products.
I just have to take issue with this.
A question that pops up quite often when computer architects and representatives from firms like Intel encounter a crowd today is “just what do you need more computing power for?”. Most regular users are fairly happy with the speed at which they process words, surf the web, read email, make IP phone calls, crunch numbers in Excel, and do other common tasks. It is hard to perceive the need for more speed in everyday tasks, unlike a decade or two ago when you could definitely ask for improvement. I remember scrolling a page in PageMaker on a Mac SE (8 MHz 68000). You counted the clicks and waited for the screen to jump, redraw, jump, redraw, stabilize… quite a different experience from working with modern computers and far more complex software that still responds instantaneously to almost anything.
The “Handbook of Real-Time and Embedded Systems” (ToC, Amazon, CRC Press) is now out. I and my university research colleague and friend Andreas Ermedahl have written a chapter on worst-case execution time analysis. We talk some about the theories and techniques, but we also try to discuss practical experience from actual industrial use. Static, dynamic, and hybrid techniques are all covered.
I just got my personal copy, and my first impression of the book overall is very positive. The contents seem quite practical to a large extent, not as academic as one might have feared. Do check it out if you are into the field. It is not a collection of research papers, but rather instructive chapters informed by solid research and written with applications in mind.
A short update to the previous posting on how to compress video for the nano.
It turns out that the “iPod video” profile of Nero Recode is aimed half at showing video from your iPod on external devices. That is the only good reason for the “high” resolution. I typically got a video size of 15 MB per minute with these settings, which quickly fills up even gigabytes of space.
The “iPod Video-AVC” profile, in contrast, is optimized for viewing on the Nano itself and not on some external device. The resolution is down to 320×200–240 depending on the source aspect ratio, and the resulting files are only about 5 MB per minute, much more manageable for carrying a large video library on an iPod. I cannot see any difference in the quality of the output…
Update (2007-September-23): The default iPod-AVC setting has some issues with rapid cross-fades between scenes. To get around this, I set the quality settings to “2-pass” and “highest quality” in the detailed settings you can make on the second screen before moving on to actually encode things. This created very nice-looking video that had no problems handling even the previously broken fades.
The cost was even more compute time. I think the current settings take some 5 to 10 hours per hour of material to encode (on my Athlon XP 2700+, not exactly a screamer by current standards).
This is not in my self-assigned range of topics, but I like when other people put up their helpful notes of how to accomplish some task that I am researching. Thus, I feel obliged to do the same when I have tested something reasonably new.
The task at hand here is “how to get video into an iPod Nano”.
It just dawned on me recently (and it surely must have been obvious to those working with configuring AMP — asymmetric multiprocessing — systems) that in an AMP setup, the operating systems involved actually know about each other and have to account for the fact that they are sharing a single processor chip with other operating systems. So you cannot just take two single-core operating system images from an existing multiple-processor (local memory) solution, put them on a single chip, and expect things to just work. You do need to prepare the boot process and find a way to nicely share the common I/O devices, timers, accelerator engines, and other resources on the chip. This is materially different from a virtualized setup.
Some more thoughts on how to program multicore machines that did not make it into my original posting from last week. Some of this was discussed at the multicore day, and others I have been thinking about for some time now.
One of the best ways to handle any hard problem is to make it “somebody else’s problem“. In computer science this is also known as abstraction, and it is a very useful principle for designing more productive programming languages and environments. Basically, the idea I am after is to let a programmer focus on the problem at hand, leaving somebody else to fill in the details and map the problem solution onto the execution substrate.
Back in 1996, DVP celebrated its 15th anniversary. When looking through my digital and paper archive, I found this gem: The official badge and logo for the 1996 anniversary! We also produced some mouse pads with this logo on them, one of which I still use for my daily job. Pretty good quality I must say.
The picture shown here was saved as GIF for use on the web. But scarily enough, apart from a few more GIF files, I could not open or even understand the file type of most of the files from that time, only ten years ago. Our digital archives are not very robust — more on that below.
The SICS Multicore Day August 31 was a really great event! We had some fantastic speakers presenting the latest industry research view on multicores and how to program them. Marc Tremblay did the first presentation in Europe of Sun’s upcoming Rock processor. Tim Mattson from Intel tried hard to provoke the crowd, and Vijay Saraswat of IBM presented their X10 language. Erik Hagersten from Uppsala University provided a short scene-setting talk about how multicore is becoming the norm.
A company called Fastscale Technologies has a product that is simple in concept and yet very powerful. Instead of using complete installs of heavy operating systems like Linux or Windows to run applications on virtual machines, they offer tools that provide minimal operating system configurations tailored to the needs of a particular application. Since only that application is going to be run on the virtual machine, this is sufficient. According to press reports, this means that you can run several times more virtual machines on a given host, compared to default OS installs. And boot an order of magnitude faster.
Just found a man with the same name as me who is also blogging: http://jakobengblom.blogspot.com/. Funny. But I know there are a few people with my name in Sweden, so it is not that surprising.
ArsTechnica is running a history of the Amiga, and in part 3, “The first prototype”, they describe a really interesting “simulation” solution for the custom chips in the first Amiga. This was in 1982–83, and there were no VHDL or Verilog simulators, nor any other EDA tools as we know them today. Even if there had been, the Amiga company would not have been able to afford them. So in order to test their design, the Amiga engineers built chip replicas using breadboards and discrete logic chips. All in all, 7200 chips and a very large number of wires. Quite fascinating stuff, and they did manage to interface the main 68000 CPU to the breadboards and get a fully functional, if a bit slow, simulation of a complete Amiga computer with all its unique custom chips.
My dear old education program, DVL, later DVP (which made us call it DV*), is celebrating its 25th anniversary with a large dinner at Norrlands Nation on October 6, 2007. The official site is www.dvp.nu/25. I really hope that I can make it; it would be great to see all of the other alumni from datavetenskapliga linjen/programmet and see where they have ended up and what they are doing now.
They also emailed out a call for pictures from the history of DV*. I’ll look through my old collections of memorabilia and see what I can find. What a chance for a trip down memory lane. It’s been ten years since I graduated. Time flies.
Sun slots transactional memory into Rock | The Register
That is just so cool! For those of us around Stockholm, we can hear Marc Tremblay talk about this at the SICS Multicore Day on August 31, 2007.
RTiS 2007 just took place in Västerås, Sweden. It is a biennial event where Swedish real-time research (and that really means embedded in general these days) presents new results and summarizes results from the past two years. For someone who has worked in the field for ten years, it really feels like a gathering of friends and old acquaintances. And there are always some fresh new faces. Due to a scheduling conflict, I was only able to make it to day one of two.
I presented a short summary of a paper that I and a colleague at Virtutech wrote last year together with Ericsson and TietoEnator, on the Simics-based simulator for the Ericsson CPP system (see the publications page for 2006 and soon for 2007). I also presented the Simics tool and demoed it in the demo session. Overall, it was nice to be talking to the mixed academic-industrial audience.
The Register report “IBM embraces – wtf – Sun’s Solaris across x86 server line” is a very appropriate headline for something quite surprising. The day before this happened, we discussed the announced announcement and said “nah, it can’t be about operating systems”. The idea of IBM in-sourcing Solaris for x86 just felt like the kind of thing that belongs in the same realm as flying pigs, freezing hells, and similar unlikely events.
Matt’s Today in History: The Vasa Sinks, August 10, 1628
is the latest installment in the very good and long-running podcast called “Matt’s Today in History”. I really appreciate the effort going into its production, and the perseverance of Matt in keeping it up for more than two years.
This particular issue was interesting in two regards.
First, I suggested the topic.
Second, it featured what at least seemed like real paid advertising at the start. This is thanks to PodShow, the “media network” used to distribute this podcast. The deal behind PodShow is quite simple for the podcaster: you get bandwidth for free, in return for the possibility of advertising being inserted into the audio.
The reasoning behind PodShow is nicely explained in a podcast from the Stanford Technology Ventures Entrepreneurial Thought Leaders series, in which Ron Bloom and Ray Lane of PodShow describe the way PodShow got started and just what it is. Basically, they are building a new media company to compete with radio and television; it is not just a nice place to find podcasts. A recommended listen for anyone interested in just how podcasting can be monetized. They describe how their staff constantly monitor the various shows that they carry, and find those popular and targeted enough to carry some paid advertising. Other shows carry intros and pointers to various other PodShow shows, to drive audience to more popular properties.
Thus, the conclusion must be that Matt’s Today in History has reached some threshold of audience that makes it valuable enough to carry advertising. Great job, and a sure sign of popularity of the podcast.
I just listened to Episode 103 of the Security Now podcast, where Leo Laporte and Steve Gibson talk to the head of security at PayPal. PayPal is doing the right thing right now, issuing their customers RSA security keys, which give them two-factor authentication (password plus security key passnumber).
But for some reason, they do not enforce the use of security keys on their customers. Even when you have obtained a security key (which is optional, weirdly enough) and said you are using it, you can still log in without it by answering some additional security questions. All in the name of convenience! This basically reduces the added security to nothing, since you can still log in in the traditional fashion.
I just had a nice vacation in the Estonian town of Pärnu. Pärnu is a really nice little town full of summer visitors and still with lots of local character.
Getting there, however, was less pleasant than it could have been, thanks to Tallink where we booked the trip and the hotel nights in Pärnu.
When we booked the trip, they told us that there were convenient buses from Tallinn to Pärnu, and that we did not need to bring a car. They also booked us on a nice brand-new integrated hotel containing a “water land” and spa services, and being located very close to the beach. Sounded perfect.
As it turned out, some of these things fell through:
- The buses to Pärnu left from the central bus station in Tallinn, which is not close to the docks where the ferries arrive, but rather some kilometers away. It would have been nice if this had been clear from the start. Instead, Tallink representatives and information made it sound as if the buses left directly from the docks, or at least from some place very close by.
- The staff on the ferry to Tallinn did not know about the direct local buses from the docks to the central bus station (a tip: it is bus number 2, which stops right outside terminal D. Or walk some more and take tram number 2). They gave us confused and incorrect information as to how to get to the bus station. At least they told us where the bus station was…
- At the last minute (one day before departure) it turned out that our main hotel was overbooked and that we would be given a different hotel. After some discussions they also promised us entrance tickets to the water land in our booked hotel. However, it was not clear how this would work out in practice, or whether our new hotel was any better or worse than the one we were initially booked into. Customer service gave the impression that all would be handled at check-in at their terminal in Stockholm.
- When we checked in in Stockholm, we did get hotel vouchers for the replacement hotel. But for a double room, not the suite that we had initially been promised. And the check-in personnel had no idea about the entrance tickets to the water land: “There is no note of that in the computer system”. We got to talk to a supervisor who told us that things should work out, wrote a note to the hotel on a copy of our booking, and had the good sense to give us a name and phone number to call should things not work out.
- Once we arrived in Pärnu, the hotel that we were staying at did provide an envelope containing the tickets to the water land that we needed. The hotel was also recently renovated and very fresh (it was the St. Petersburg hotel, in a carefully renovated 16th–17th-century building in downtown Pärnu). The location was more convenient for eating out and shopping, if a bit more removed from the beach (a 20-minute walk rather than five minutes).
Thus, in the end, things worked out and we got decent value for our money. Even so, it is still annoying how Tallink handled things, especially since the fixes are mostly a matter of precision in communication and should actually be cheaper for them to get right.
So how could Tallink have done better in our case (and quite probably in general)?
- Run their own bus shuttle from Tallinn to Pärnu and other interesting destinations. They do that in Sweden, so why not in Estonia? We would have been happy to pay some extra for a bus conveniently arriving at the docks to take us straight to the destination.
- Present correct and complete facts about each destination on the phone and on their homepage. If they refer people to the bus service to Pärnu, provide a timetable, a map of how to get to the main bus station, and a map of the end location to help you find your hotel. After all, Tallink has local staff in Tallinn who can easily find this out for you.
- Have their customer service staff be precise and clear. In the end, things did work out and we were not cheated of our vacation. But details like the standard of our replacement hotel, how the water land tickets would work, and similar simple things could have been clearly communicated from the start. That would have saved them lots of phone service time, and us a bunch of unnecessary annoyance and anxiety.
Finally, the main drawback of a trip of this type where you spend a night on the ferry each way is that the ferry trip takes a lot of time from the vacation. This would not be so bad if it was enjoyable time, and they are trying to give off the impression that it is kind of a luxurious experience to travel on their modern ferries to Tallinn. And mostly it is nice. Going on a ship where you can walk around and have lots of space is vastly superior to inhuman modes of transport like long-distance air travel or car trips. For the kids, having a dedicated playroom is great.
But since the length of the trip makes it necessary to eat dinner and breakfast onboard, the food is quite an important component. And here Tallink and most other Baltic ferries I have tried fall down, by simply providing fairly tasteless and disappointing fare. The tradition of a grand buffet is great in principle, but somehow each course is cheapened down to its simplest, least tasty version, creating a rather disappointing experience overall. And there is no indication that the à la carte restaurants are any better. So for now, you eat because you have to, not so much because you enjoy it.
Why this is the case, I don’t know. Either they think their customers do not care or cannot tell a good meal from a poor one, or they lack pride in the kitchen, or they are saving money by using the cheapest stuff they can get away with, or something else.
So I have finally decided to try to write a blog of my own. Having seen the phenomenon grow over the past few years, the urge to have my own place just to write short public posts or essays on interesting things got the better of me.
I might write about anything that I find interesting, but the topic is really business and computer software, and how they interrelate. Which is sufficiently broad to let me write on almost anything.