Andras Vajda of Ericsson wrote an interesting blog post on domain-specific languages (DSLs). Thanks for some success stories and support in what sometimes feels like an uphill battle: getting people to accept that DSLs are a large part of the future of programming, particularly for parallel computing, since they let you hide the complexities of parallel programming.
Quotes from the blog post:
Sequential languages hiding parallelism:
Sequential imperative languages are notoriously good at hiding or at least obfuscating inherent parallelism in the applications – which may be good news for compiler providers who spend – and charge – big bucks for building reverse engineering technologies that can detect parallelism from sequential code; as well as for highly skilled programmers capable of building parallel code using sequential tools; however, it weighs heavily on the total cost of developing software, not to mention the recurring cost of porting to newer HW with e.g. more cores.
Applying DSLs to signal processing:
During the work on the domain-specific language for DSP programming we have seen some interesting results; for example, several pages of optimized C code were re-written to one PowerPoint slide worth of DSL code, while the DSL to C compiler was able to output efficient code comparable in size to the original code.
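To give a feel for why that kind of compression is plausible (this is a hypothetical sketch, not the Ericsson DSL, whose code isn't shown in the post), consider how a declarative filter specification can stand in for hand-written loop code. The `fir` helper and its names below are invented for illustration:

```python
# Hypothetical sketch of an embedded DSL for DSP: the "program" is
# just a declarative list of filter tap coefficients, and the
# machinery below plays the role of the DSL-to-C compiler from the
# quoted example. All names here are invented for illustration.

def fir(coeffs):
    """Build an FIR filter function from a declarative coefficient spec."""
    def apply(signal):
        n = len(coeffs)
        # Convolution: out[i] = sum over k of coeffs[k] * signal[i - k],
        # skipping terms that would read outside the signal.
        return [
            sum(coeffs[k] * signal[i - k]
                for k in range(n) if 0 <= i - k < len(signal))
            for i in range(len(signal))
        ]
    return apply

# A two-tap moving average is a one-line "DSL program":
moving_average = fir([0.5, 0.5])
print(moving_average([2.0, 4.0, 6.0]))  # -> [1.0, 3.0, 5.0]
```

The point of the illustration is the division of labor: the domain expert writes the one-line specification, while the (potentially heavily optimizing) DSL implementation owns the loops, buffering, and, in the parallel case, the scheduling.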
I agree fully with the final note:
The big question is not anymore if DSLs will take off – it’s more if your pain level is higher than the cost of accepting an alternative and different approach; as the cost for taking in a new language tends to stay constant while the complexity of developing software is increasing, odds are that there will be an uptake of domain-specific languages.
Go and read the full article!