There are two constants in data center storage: the need for greater performance and the need for greater capacity. Flash-based storage devices have become the go-to option to address the first challenge. But application owners and users quickly move from initial euphoria over flash performance to demanding more. Since the NAND flash itself is essentially the constant in the equation, the surrounding infrastructure has to evolve to extract optimal performance from the technology. But achieving maximum performance often leads to proprietary architectures and designs. NVMe (Non-Volatile Memory Express) is a new industry standard that enables data centers to realize flash's full potential without compatibility headaches.
As SSDs become more common, you’ll also hear more about Non-Volatile Memory Express, a.k.a. NVM Express or, more commonly, NVMe. NVMe is a communications interface and protocol developed specifically for SSDs by a consortium of vendors including Intel, Samsung, SanDisk, Dell, and Seagate.
Unlike SCSI and SATA, NVMe is designed to take advantage of the unique properties of pipeline-rich, random-access, memory-based storage. The spec also reflects improvements in methods for lowering data latency made since SATA and AHCI were introduced.
Advances include requiring only a single message for 4KB transfers as opposed to two, and the ability to process multiple queues instead of only one. By multiple, I mean a whopping 65,536 of them. That’s going to speed things up a lot for servers processing lots of simultaneous disk I/O requests, though it’ll be of less benefit to consumer PCs.
There is no shortage of flash vendors out there who (rightfully) would have jumped at the chance to set my misinformed self on the straight and narrow. Flash isn’t just “cool”; it allows the coordination, access, and retrieval of data in ways that simply weren’t possible with more traditional media.
There are different ways to use flash, of course, and different architectures abound in the marketplace, from “All-Flash Arrays” (AFAs) and “Hybrid Arrays” (a combination of flash and spinning disk) to more traditional systems that have simply replaced the spinning drives with flash drives, without much change to the architecture.
Even across these architectures, though, flash is still constrained by the very basic tenets of SCSI connectivity. While this type of connectivity works very well (and has a long and illustrious history), the characteristics of flash memory allow for capabilities that SCSI simply wasn’t built to support.
NVMe is the latest high-performance, optimized protocol; it supersedes AHCI and complements PCIe technology. It offers an optimized command and completion path for NVMe-based storage, and it was developed by a consortium of manufacturers specifically for SSDs to overcome the speed bottleneck imposed by the older SATA connection. It is akin to a more efficient language between storage device and PC: one message needs to be sent for a 4KB transfer instead of two; NVMe can handle 65,536 queues, each with 65,536 commands, instead of a single queue with a capacity of 32 commands; and it has only seven major commands (read, write, flush, etc.).
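To put those queue numbers in perspective, here is a back-of-the-envelope sketch (using only the limits quoted above, not any vendor spec sheet) of how many commands each interface can have outstanding at once:

```python
# Back-of-the-envelope comparison of outstanding-command capacity,
# using the queue limits quoted above (AHCI vs. NVMe).

AHCI_QUEUES = 1
AHCI_QUEUE_DEPTH = 32          # one queue, 32 commands

NVME_MAX_QUEUES = 65_536       # 64K I/O queues
NVME_QUEUE_DEPTH = 65_536      # 64K commands per queue

ahci_outstanding = AHCI_QUEUES * AHCI_QUEUE_DEPTH
nvme_outstanding = NVME_MAX_QUEUES * NVME_QUEUE_DEPTH

print(f"AHCI outstanding commands: {ahci_outstanding}")       # 32
print(f"NVMe outstanding commands: {nvme_outstanding:,}")     # 4,294,967,296
print(f"Ratio: {nvme_outstanding // ahci_outstanding:,}x")    # 134,217,728x
```

Real drives and drivers expose far fewer queues in practice (often one per CPU core), but the point stands: the protocol's ceiling is astronomically higher than AHCI's.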
NVMe delivers better performance, reduced latency, and scalability, but at a price! Take a look at Samsung’s offering of this technology in the 950 Pro to see how performance and price compare. This particular drive relies on the new M.2 internal mount on the PC motherboard, as presumably will other future NVMe-based SSDs. NVMe will also be the protocol of choice for next-generation storage technologies such as 3D XPoint.
NVMe: Built for SSDs
If you’ve read our SSD coverage over the past couple of years, it shouldn’t be news that solid state storage has run into a significant hurdle: legacy storage buses. Serial ATA and Serial Attached SCSI (SAS) offer plenty of bandwidth for hard drives, but for increasingly speedy SSDs, they’ve run out of steam.
Because of SATA’s 600MBps ceiling, just about any top-flight SATA SSD will score the same in our testing these days—around 500MBps. Even 12Gbps SAS SSD performance stalls at around 1.5GBps. SSD technology is capable of much more.
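That 600MBps ceiling falls straight out of the link math: SATA III signals at 6Gbps, but 8b/10b encoding puts 10 bits on the wire for every 8 bits of data. A quick sketch of the arithmetic:

```python
# Effective throughput of a SATA III link: 6Gbps line rate with
# 8b/10b encoding (10 bits on the wire per 8 bits of data).

line_rate_gbps = 6.0
data_bits = 8
wire_bits = 10

effective_gbps = line_rate_gbps * data_bits / wire_bits  # 4.8 Gbps of data
effective_MBps = effective_gbps * 1000 / 8               # Gbit/s -> MB/s

print(f"{effective_MBps:.0f} MBps")  # 600 MBps
```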
The industry knew this impasse was coming from the get-go. SSDs have far more in common with fast system memory than with the slow hard drives they emulate. It was simply more convenient to use the existing PC storage infrastructure, putting SSDs on relatively slow (compared to memory) SATA and SAS. For a long time this was fine, as it took a while for SSDs to ramp up in speed. Those days are long gone.
Leveraging existing technology
Fortunately, a suitable high-bandwidth bus technology was already in place—PCI Express, or PCIe. PCIe is the underlying data transport layer for graphics and other add-in cards, as well as Thunderbolt. PCIe 2.x (Gen 2) offers approximately 500MBps per lane, and PCIe 3.x (Gen 3) around 985MBps per lane. Put a card in a x4 (four-lane) slot and you’ve got 2GBps of bandwidth with Gen 2 and nearly 4GBps with Gen 3. That’s a vast improvement, and in the latter case, a wide enough pipe for today’s fastest SSDs.
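Those per-lane figures fall out of the PCIe signaling rates: Gen 2 runs at 5GT/s with 8b/10b encoding, while Gen 3 runs at 8GT/s with the more efficient 128b/130b encoding. A sketch of the arithmetic, including the x4-slot totals mentioned above:

```python
def lane_MBps(gt_per_s, data_bits, wire_bits):
    """Effective per-lane bandwidth in MB/s from transfer rate and line encoding."""
    return gt_per_s * 1000 * data_bits / wire_bits / 8  # GT/s -> MB/s

gen2 = lane_MBps(5, 8, 10)      # Gen 2: 8b/10b encoding
gen3 = lane_MBps(8, 128, 130)   # Gen 3: 128b/130b encoding

print(f"Gen 2: {gen2:.0f} MBps/lane, x4 slot: {gen2 * 4 / 1000:.1f} GBps")
print(f"Gen 3: {gen3:.0f} MBps/lane, x4 slot: {gen3 * 4 / 1000:.2f} GBps")
```

Note how much less Gen 3 loses to encoding overhead: 8b/10b sacrifices 20% of the raw rate, while 128b/130b gives up only about 1.5%.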
PCIe expansion card solutions such as OCZ’s RevoDrive, Kingston’s HyperX Predator M.2/PCIe, Plextor’s M6e, and others have been available for some time now, but to date they have relied on the SCSI or SATA protocols, with their straight-line hard drive methodologies. Obviously, a new approach was required.