Enmotus Blog

Evolution of Storage Software

Posted by Adam Zagorski on Apr 4, 2018 12:37:47 PM

Part 1 … the server and cluster


Since time immemorial, we have used the SCSI-based file stack to define how we talk to drives. Mature, but very verbose, it was an ideal match to single-core CPUs and slow interfaces to very slow hard drives. With this stack, it was perfectly acceptable to initiate an I/O and then swap processes, since the I/O took many milliseconds to complete.
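To see what that model looks like in practice, here is a minimal POSIX sketch (the file path is a placeholder): the read() call blocks, and for the milliseconds a hard drive needs to seek, the kernel simply schedules another process.

```c
/* A millisecond-scale blocking read, the model the classic stack assumes.
 * The path and buffer size are placeholders for illustration. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[4096];
    int fd = open("/tmp/example.dat", O_RDONLY);   /* hypothetical file */
    if (fd < 0) { perror("open"); return 1; }

    /* read() blocks: the process sleeps in the kernel until the I/O
     * completes, which on a hard drive takes several milliseconds --
     * plenty of time to run other work on a single core. */
    ssize_t n = read(fd, buf, sizeof buf);
    printf("read %zd bytes\n", n);
    close(fd);
    return 0;
}
```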

The arrival of flash drives upset this applecart completely. IOPS per drive grew by 1000X in short order and neither SCSI-based SAS nor SATA could keep up. The problem continues to get worse, with the most recent flash card leader, Smart IOPS, delivering 1.7 million IOPS, a 10-fold further increase.

The industry’s answer to this performance problem is to replace SAS and SATA with PCIe as the transport and NVMe as the protocol. This gives us a solution where multiple ring buffers contain queues of storage operations, with each queue bound to a core or even to an app. A batch of operations can be pulled from a queue and processed by the drive using DMA. On the return side, response queues are likewise built up and serviced by the appropriate host, and interrupts are coalesced so that one interrupt services many responses.
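To make the queue-pair model concrete, here is a toy sketch in C. All names and structures are simplified inventions (real NVMe uses 64-byte submission entries, 16-byte completion entries, doorbell registers and MSI-X interrupts), but it shows the batching the new stack relies on: commands are queued, the drive drains the ring, and one "interrupt" reaps every pending completion.

```c
/* Toy model of NVMe-style paired submission/completion rings.
 * Purely illustrative: not the real NVMe register or entry layout. */
#include <stdint.h>
#include <stdio.h>

#define QDEPTH 8u                      /* ring size */

struct sub_entry { uint32_t cmd_id; uint64_t lba; };  /* simplified command  */
struct cpl_entry { uint32_t cmd_id; int status; };    /* simplified response */

struct queue_pair {                    /* one pair per core in a real driver */
    struct sub_entry sq[QDEPTH];       /* submission ring */
    struct cpl_entry cq[QDEPTH];       /* completion ring */
    unsigned sq_tail, cq_head;
};

/* Host: enqueue a command; a real driver would then ring a doorbell. */
static void submit(struct queue_pair *qp, uint32_t id, uint64_t lba)
{
    qp->sq[qp->sq_tail % QDEPTH] = (struct sub_entry){ id, lba };
    qp->sq_tail++;
}

/* "Drive": drain every outstanding command, posting one completion each.
 * Returns the new count of serviced commands (the completion tail). */
static unsigned drive_service(struct queue_pair *qp, unsigned serviced)
{
    while (serviced < qp->sq_tail) {
        struct sub_entry *se = &qp->sq[serviced % QDEPTH];
        qp->cq[serviced % QDEPTH] = (struct cpl_entry){ se->cmd_id, 0 };
        serviced++;
    }
    return serviced;
}

/* Host: one "interrupt" reaps every pending completion in a single pass --
 * the coalescing behaviour described above. */
static void reap(struct queue_pair *qp, unsigned cq_tail)
{
    unsigned n = 0;
    while (qp->cq_head < cq_tail) {
        struct cpl_entry *ce = &qp->cq[qp->cq_head % QDEPTH];
        printf("completed cmd %u (status %d)\n", ce->cmd_id, ce->status);
        qp->cq_head++;
        n++;
    }
    printf("one interrupt serviced %u completions\n", n);
}

int main(void)
{
    struct queue_pair qp = { .sq_tail = 0, .cq_head = 0 };
    for (uint32_t i = 0; i < 4; i++)
        submit(&qp, i, 2048 + i);           /* queue a batch of four reads */
    unsigned tail = drive_service(&qp, 0);  /* drive drains the whole queue */
    reap(&qp, tail);                        /* single interrupt, many responses */
    return 0;
}
```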


Topics: NVMe, big data, Intel Optane, software defined storage

Optimizing Dataflow in Next-Gen Clusters

Posted by Jim O'Reilly on Sep 6, 2017 10:57:55 AM

We are on the edge of some dramatic changes in computing infrastructure. New packaging methods, ultra-dense SSDs and high core counts will change what a cluster looks like. Can you imagine a 1U box having 60 cores and a raw SSD capacity of 1 petabyte? What about drives using 25GbE interfaces (with RDMA and NVMe over Fabrics), accessed by any server in the cluster?

Consider Intel’s new “ruler” drive, the P4500 (shown below with a concept server). It’s easy to see 32 to 40 TB of capacity per drive, which means that the 32 drives in their concept storage appliance give a petabyte of raw capacity (and over 5 PB compressed). It’s a relatively easy step to see those two controllers replaced by ARM-based data movers, which reduce system overhead dramatically and boost performance nearer to available drive performance, but the likely next step is to replace the ARM units with merchant-class GbE switches and talk directly to the drives.
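For the arithmetic behind that petabyte figure, assuming the 32 TB end of the per-drive range and the roughly 5:1 ratio that the “over 5 PB compressed” figure implies:

\[
32 \text{ drives} \times 32\ \text{TB/drive} = 1024\ \text{TB} \approx 1\ \text{PB raw},
\qquad 1\ \text{PB} \times 5 \approx 5\ \text{PB compressed}
\]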

I can imagine a few of these units at the top of each rack with a bunch of 25/50 GbE links to physically compact, but powerful, servers (2 or 4 per rack U) which use NVDIMM as close-in persistent memory.

The clear benefit is that admins can react to the cluster’s changing needs for performance and bulk storage independently of the compute horsepower deployed. This is very important as storage moves from low-capacity structured data to huge-capacity unstructured big data.


Topics: All Flash Array, Intel Optane, Data Center, NVMe over Fibre, data analytics

The Evolution of Hyper-servers

Posted by Jim O'Reilly on Aug 2, 2017 2:51:09 PM

A few months ago, the CTO of one of the companies involved in system design asked me how I would handle and store the flood of data expected from the Square Kilometre Array, the world’s most ambitious radio-astronomy program. The answer came in two parts. First, the data needs compression, which isn’t trivial with astronomy data, and that implies a new way to process at upwards of 100 gigabytes per second. Second, the hardware platforms become very parallel in design, especially in communications.

We are talking designs where each ultra-fast NVMe drive has access to network bandwidth capable of keeping up with the stream. This implies having at least one 50GbE link per drive, since drives with 80 gigabit/second streaming speeds are entering volume production today.
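The link-count arithmetic makes the point: a single 50GbE port cannot carry an 80 gigabit/second stream, so a design that keeps up with such a drive actually needs two 50GbE links (or one 100GbE link) per drive:

\[
\frac{80\ \text{Gbit/s per drive}}{50\ \text{Gbit/s per link}} = 1.6
\quad\Rightarrow\quad
2\ \text{links per drive to sustain the full streaming rate}
\]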


Topics: NVMe, big data, Intel Optane, Data Center

Flash Tiering: The Future of Hyper-converged Infrastructure

Posted by Adam Zagorski on Jan 12, 2017 1:04:00 PM



Topics: NVMe, big data, 3D Xpoint, SSD, Intel Optane, Data Center, hyperconverged

The Evolution Of Storage

Posted by Jim O'Reilly on Nov 29, 2016 4:09:29 PM

The storage industry continues to evolve rapidly, which is both exciting and challenging. I intend this blog to look at the hot news in the industry, as well as take a view of trends and, occasionally, long-term directions.

This promises to be an interesting effort. There are plenty of innovations to describe, and retakes on older ideas crop up quite often. I hope you will find the subject as fascinating as I do.

Trends

1. It’s clear that the high-performance enterprise hard drive is a dying breed. SSDs and all-flash arrays have undercut demand. With improved wear life, flash-based products meet the stringent needs of the datacenter; plus, they are cooler, quieter and smaller, and of course much faster.

Relevant news on this includes:


Topics: All Flash Array, 3D Xpoint, SSD, Intel Optane, Data Center

Delivering Data Faster

Accelerating cloud, enterprise and high performance computing

Enmotus FuzeDrive accelerates your hot data when you need it, stores it on cost-effective media when you don't, and does it all automatically so you don't have to.


  • Visual performance monitoring
  • Graphical management interface
  • Best in class performance/capacity
