Non-Volatile Memory is Different from Non-Volatile Storage

The introduction of non-volatile memory that is accessed and addressed like traditional RAM, instead of through a special interface, has some rather interesting effects on software. It blurs the traditional line between persistent long-term mass storage and volatile memory. On the surface, it sounds pretty simple: you can keep things living in RAM-like memory across reboots and shutdowns of a system. Suddenly, there is no need to reload things into RAM for execution following a reboot. Every piece of data and code can be kept immediately accessible in the memory that the processor uses. A computer could in principle get rid of the whole disk/memory split and have a single huge magic pool of storage that makes life easier. No file system, no complications, an easy programmer life. Or is it that simple?

When you think it through, you realize that non-volatile memory does indeed have the potential to change, if not everything, then at least a rather significant portion of the common assumptions used in software development. The split between volatile memory and non-volatile persistent storage that has been part of computers for the last half century or more is not just a performance annoyance. It is also a rather useful feature.

Non-volatile memory and reset

The most common operation performed on any electronic system is probably resetting it. “Power-cycle it and see if that fixes it” is one of the most universal remedies of the modern age. It does not matter what the device is; you can always reset it to hopefully get it back to working order after it has failed in some way. And most of the time, that works. This is thanks to the fact that resetting a device clears accumulated volatile state out of memory and forces the system to rebuild its state from scratch. Having persistent storage split from volatile working memory is a benefit here. If the system state memory were actually persistent, a reboot would be rather less powerful.

Thus, non-volatile memory has to be implemented wisely to avoid making errors persistent along with useful data. There has to be some way to reset and rebuild, even if all data is just “in memory”. Not an easy problem.
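To make that concrete, here is a minimal sketch of one possible reset-and-rebuild escape hatch, assuming a memory-mapped file stands in for real non-volatile memory (on actual NVM you would flush cache lines, or use a library such as Intel's PMDK, rather than msync). All the names here, like region_hdr and REGION_MAGIC, are invented for illustration:

    /* Sketch: a persistent region that can always be reset and rebuilt. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define REGION_MAGIC 0x4E564D31u   /* "NVM1" */
    #define REGION_SIZE  (1u << 20)

    struct region_hdr {
        uint32_t magic;     /* identifies an initialized region */
        uint32_t version;   /* format version, bumped on change */
        uint32_t dirty;     /* set while an update is in flight */
    };

    /* Map the region; rebuild from scratch if the header is missing,
       from a different format version, or was left dirty by a crash. */
    static void *open_region(const char *path)
    {
        int fd = open(path, O_RDWR | O_CREAT, 0600);
        if (fd < 0) return NULL;
        if (ftruncate(fd, REGION_SIZE) != 0) { close(fd); return NULL; }
        void *base = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, 0);
        close(fd);
        if (base == MAP_FAILED) return NULL;

        struct region_hdr *hdr = base;
        if (hdr->magic != REGION_MAGIC || hdr->version != 1 || hdr->dirty) {
            /* The "reset" path: throw the state away and rebuild,
               just like a reboot clears volatile RAM. */
            memset(base, 0, REGION_SIZE);
            hdr->magic = REGION_MAGIC;
            hdr->version = 1;
            msync(base, REGION_SIZE, MS_SYNC);
        }
        return base;
    }

    int main(void)
    {
        return open_region("/tmp/nvm-region") ? 0 : 1;
    }

The key point is the unconditional fallback: no matter how badly the persistent contents were mangled, the system can always decide to wipe and reinitialize them.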

Non-volatile memory and files

The concepts of “file system” and “file formats” might seem to be obsoleted by non-volatile memory, but that is not really the case.

In-memory data structures representing the contents of a file for editing and display are more complex and less resilient than typical file formats (this is part of the magic of reset, at the level of individual programs rather than the system as a whole). This means that a file on disk is likely to be more robust than the same information loaded into memory. Furthermore, a file on disk is explicitly written out when saved, which makes it possible to keep track of previous versions and revert to them, helping users recover accidentally erased or lost work. Thus, keeping data in “files” using simpler file formats helps keep the system robust.

For example, consider a text file. On disk, this is a sequence of bytes that is rather easy to make sense of. When loaded into a text editor for editing, it becomes a lot more complex. The editor will use a complex set of structures to allow things like quickly inserting text in the middle of a document without moving every single character after it to a new place in memory. There will be display optimizations like cached pointers to where lines start. If these structures go bad due to a program error, the result is typically a total jumble, or a crashed editor. If such structures were used for the persistent representation of the file, it is easy to see that it would be a lot more brittle (of course, the file system used on a disk suffers from many of the same risks – but at least that is a single system that is proven once, not something that each and every program implements on its own). Today, in the worst case you kill the editor and reload from disk, and most of the time the result is a functioning editor. The simple-complex representation duality really adds benefits, not just costs.
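As a toy illustration of this duality (not taken from any real editor), here is the per-program version of the reset: a derived line index that can always be rebuilt from the flat on-disk bytes. The flat form is trivially valid; the index is the kind of fragile, derived structure that gets thrown away and reconstructed on reload:

    #include <stdio.h>
    #include <string.h>

    /* Rebuild an editor-style line index from flat on-disk bytes.
       Returns the number of lines found (capped at max_lines). */
    static size_t build_line_index(const char *text, size_t len,
                                   size_t *starts, size_t max_lines)
    {
        size_t n = 0;
        if (len > 0 && n < max_lines) starts[n++] = 0;
        for (size_t i = 0; i + 1 < len; i++)
            if (text[i] == '\n' && n < max_lines)
                starts[n++] = i + 1;
        return n;
    }

    int main(void)
    {
        const char *flat = "hello\nworld\n";   /* the on-disk form */
        size_t starts[16];
        size_t lines = build_line_index(flat, strlen(flat), starts, 16);
        printf("%zu lines, second starts at byte %zu\n", lines, starts[1]);
        return 0;
    }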

Another aspect of persistence is that there has to be a way to send files (documents, whatever you want to call them) between users and between computing devices. Such a representation has to be host-independent, which for example means not using memory pointers to link between different parts of a document.

Thus, we need file formats and files for exchange purposes. It seems that the trusty old concept of a file is still needed. There has to be something that offers a way to save data in a format that can be read across machines and across time. A format that is simple enough that it can be reliable.
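As a sketch of what “simple enough to be reliable” can mean in practice: fixed-width, explicitly encoded fields and no memory addresses. The record layout below is invented for this post, but the idea is the same in any sane exchange format:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Encode a string as a 4-byte little-endian length followed by
       the bytes themselves - readable on any machine, at any time.
       The caller must provide a large enough output buffer. */
    static size_t put_record(uint8_t *out, const char *s)
    {
        uint32_t len = (uint32_t)strlen(s);
        out[0] = (uint8_t)(len & 0xff);
        out[1] = (uint8_t)((len >> 8) & 0xff);
        out[2] = (uint8_t)((len >> 16) & 0xff);
        out[3] = (uint8_t)((len >> 24) & 0xff);
        memcpy(out + 4, s, len);
        return 4 + len;
    }

    int main(void)
    {
        uint8_t buf[64];
        size_t n = put_record(buf, "portable payload");
        printf("encoded %zu bytes, no pointers involved\n", n);
        return 0;
    }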

On the other hand, we would like to use non-volatile memory to keep important data in memory and avoid the need to reload from files, across power losses, restarts, and resets (persistence only matters in cases where volatile memory loses its content). There are huge benefits to be gained if we can avoid rebuilding the in-memory data structures and reloading gigabytes and gigabytes from comparatively slow disk-style storage. It is just important to make sure that there is a way to reset such structures too.

Memory management

Persistent memory also needs memory management. That memory management has to work essentially like a file system: recycling areas that are no longer needed, and making sure that the persistent memory does not just fill up over time with zombie data from applications that did not properly free the memory they had allocated. Forcibly throwing an application’s data out of RAM if the system runs out of memory is usually not a big problem – the application just has to be restarted and reloaded.

For persistent memory, a model closer to storage is going to be needed. It is not just “malloc”; it gets a bit more complex than that. The allocator has to be built with robustness and persistence in mind – for example, even if a reset happens in the middle of an allocation, no memory should be lost. There is also likely a need for garbage collection, finding data that is old and no longer needed – because the application that allocated it is gone, or the user tells the system to reprioritize, or whatever.
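As a hedged sketch of what that might look like (a general ordering technique, not any particular product's allocator), consider a bump allocator that initializes and persists a new block before advancing the persistent top-of-heap marker. A reset in the middle leaves the marker untouched, so the half-done allocation is simply reused rather than leaked. persist() is a placeholder for real flush-and-fence operations (such as CLWB plus a fence, or pmem_persist() in PMDK):

    #include <stdint.h>
    #include <string.h>

    struct pheap {
        uint64_t top;            /* persistent: offset of next free byte */
        uint8_t  data[1 << 16];  /* persistent: the heap itself          */
    };

    static void persist(const void *addr, size_t len)
    {
        (void)addr; (void)len;   /* cache-line flush + fence on real NVM */
    }

    /* Returns the offset of the new block, or UINT64_MAX on failure. */
    static uint64_t palloc(struct pheap *h, uint64_t size)
    {
        if (h->top + size > sizeof(h->data)) return UINT64_MAX;
        uint64_t off = h->top;
        memset(&h->data[off], 0, size);  /* initialize the block...  */
        persist(&h->data[off], size);    /* ...and make it durable   */
        h->top = off + size;             /* only now publish it      */
        persist(&h->top, sizeof(h->top));
        return off;
    }

    int main(void)
    {
        static struct pheap heap;        /* stand-in: would live in NVM */
        return palloc(&heap, 128) == UINT64_MAX;
    }

The write ordering is the whole trick: the allocation only becomes visible once the last, small, persisted store to the top marker completes.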

Non-volatile memory and addressing

Yet another aspect of non-volatile memory is the nature of addresses. During a reboot, virtual memory mappings are likely to be rebuilt and possibly changed. Thus, pointers based on the virtual memory mappings created during one boot of a system are unlikely to work the next time the system is booted – it is simply very hard to guarantee repeatable allocation as software and the system configuration change.

Thus, data structures in memory will have to be smart about how “pointers” are used – offsets and indirection tables are likely going to be more useful than direct pointers. A plain linked list is not likely to show up in non-volatile memory. There is also the need to have some kind of directory or root in place to allow the system to locate persistent data structures in persistent memory following an application restart. It sounds a lot like file systems…
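For instance, a persistent list might look something like this sketch, with offsets instead of pointers and a root at a fixed location; the layout is invented for illustration, and an ordinary buffer stands in for the mapped NVM region:

    #include <stddef.h>
    #include <stdint.h>

    #define NIL 0   /* offset 0 holds the header, so it doubles as "null" */

    struct node {
        uint64_t next_off;   /* offset of the next node, not a pointer */
        int64_t  value;
    };

    struct region {
        uint64_t root_off;   /* the "directory": where the list starts */
    };

    /* Translate an offset into a pointer valid for this mapping only. */
    static struct node *at(void *base, uint64_t off)
    {
        return off == NIL ? NULL : (struct node *)((char *)base + off);
    }

    static int64_t sum_list(void *base)
    {
        int64_t total = 0;
        struct region *r = base;
        for (struct node *n = at(base, r->root_off); n != NULL;
             n = at(base, n->next_off))
            total += n->value;
        return total;
    }

    int main(void)
    {
        static union { unsigned char bytes[256]; uint64_t align; } mem;
        struct region *r = (struct region *)mem.bytes;
        struct node *a = (struct node *)(mem.bytes + 64);
        struct node *b = (struct node *)(mem.bytes + 96);
        a->value = 1; a->next_off = 96;
        b->value = 2; b->next_off = NIL;
        r->root_off = 64;
        return sum_list(mem.bytes) == 3 ? 0 : 1;
    }

Because every link is relative to the region base, the structure still makes sense when the region is mapped at a different virtual address after the next boot.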

Database example

All of these aspects of how to use non-volatile memory – and more – are addressed in a couple of papers from VLDB 2017 (the 43rd International Conference on Very Large Data Bases), which took place in September 2017. One of them is:

“SAP HANA adoption of non-volatile memory”, by Mihnea Andrei et al. (with authors from SAP and Intel). This paper discusses the initial adoption of non-volatile memory in SAP HANA, an in-memory database from SAP. The approach they take is to explore the benefits without a total rewrite. As they say in the paper:

“We have focused on an early adoption, where HANA consumes NVRAM without heart surgery on the core relational engine.”

The paper is full of interesting observations on the real-world use of NVM and the peculiarities of using it for a database. They use NVM as memory, not as a disk. For SAP HANA, having more memory available at the same cost means better performance and larger datasets. Non-volatility of “RAM” is very useful for reducing the restart costs of the database – with an in-memory database, you normally have to reload terabytes of data from disk to RAM for processing following a reset or reboot. With NVM, that reload can be significantly reduced. Since data structures are already long-lived in memory for HANA, it is not a very large step to make them persistent in memory.

It is worth noting that NVRAM is used as an optimization – features like high availability and redundancy are currently based on disk files and disk accesses, and were not rearchitected to use NVRAM. This means that it is possible to place only a subset of the database in NVRAM without risk of failure.

There are many details in how to make NVRAM usage robust enough for a mission-critical database – there is a lot more to it than just writing stuff to NVRAM. There is still a need to make sure that data is kept consistent across changes, and to have commit mechanisms that track which NVRAM data is valid and which is not.
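One classic shape for such a commit mechanism (a general technique, not necessarily what HANA does) is a double-buffered record with a persistent selector that is flipped last. A crash at any point leaves one complete, valid copy selected; persist() is again a stand-in for real flush instructions:

    #include <stdint.h>
    #include <string.h>

    struct record { char payload[248]; };

    struct committed_pair {
        struct record copy[2];   /* persistent: two versions        */
        uint64_t      active;    /* persistent: which copy is valid */
    };

    static void persist(const void *addr, size_t len)
    {
        (void)addr; (void)len;   /* cache-line flush + fence on real NVM */
    }

    static void update(struct committed_pair *p, const struct record *next)
    {
        uint64_t spare = 1 - p->active;
        memcpy(&p->copy[spare], next, sizeof(*next));  /* write shadow */
        persist(&p->copy[spare], sizeof(*next));       /* make durable */
        p->active = spare;                             /* the commit   */
        persist(&p->active, sizeof(p->active));
    }

    int main(void)
    {
        static struct committed_pair pair;   /* would live in NVM */
        struct record r = { "new version" };
        update(&pair, &r);
        return pair.active == 1 ? 0 : 1;
    }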

Finally, the evaluation of the performance impact is interesting. The paper was written before commercial hardware was available with actual NVRAM DIMMs. Thus, a special variant of an Intel platform is used where the hardware can artificially assign additional latency to certain RAM channels. This makes it possible to simulate the timing overhead of NVRAM, even though the actual persistence is not part of the platform.

Recommended reading: I found the paper well-written and absolutely fascinating. NVM is cool stuff, and it might well be a standard topic in computer science education within a decade.

In addition, Intel Software (where I happen to be working) has a series of introductory videos on NVM available at https://software.intel.com/en-us/persistent-memory/get-started/series that I just discovered and that provide a decent enough introduction to the concepts and programming models.
