Enmotus Blog

Evolution of Storage - Part 2

Posted by Jim O'Reilly on May 10, 2018 10:30:51 AM

Part 2 … The Drive

Over time, the smarts in storage have migrated back and forth between the drive and the host system. Behind this shifting picture are two key factors. First, the intelligence of a micro-controller chip determines what a drive can do; second, the need to correct media errors establishes what a drive must do.

Once SCSI hit the market, the functionality split between host and drive essentially froze, and it stayed that way for nearly three decades. The new error-correction needs of SSDs, combined with the arrival of ARM CPUs that are both cheap and powerful, have made function-shifting interesting once again.

Certainly, some of the new compute power goes to sophisticated multi-tier error correction to compensate for the wear-out of QLC drives or the effects of media variations, but a 4-core or 8-core ARM still has a lot of unused capability. We’ve struggled to figure out how to use that power for meaningful storage functions, and that’s led to a number of early initiatives.

The first to bat was Seagate’s Kinetic drive. Making a play for storing “Big Data” in a more native form, Kinetic replaces the traditional block interface with a key/value store on the drive itself. While the Kinetic interface is an open standard and free to emulate, no other vendor has yet jumped on the bandwagon, and Seagate’s sales are small.
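To make the contrast concrete, here is a minimal sketch of a key/value drive interface next to traditional block access. The function names and signatures are hypothetical illustrations, not the actual Kinetic API:

    #include <stddef.h>
    #include <stdint.h>

    /* Traditional block access: the host addresses fixed-size sectors,
     * and a host-side filesystem decides what lives where. */
    int block_read(uint64_t lba, void *buf, size_t sectors);
    int block_write(uint64_t lba, const void *buf, size_t sectors);

    /* Key/value access in the Kinetic style: the drive itself maps
     * variable-length keys to variable-length values, so "Big Data"
     * objects are stored in something closer to their native form. */
    int kv_put(const char *key, size_t key_len,
               const void *value, size_t value_len);
    int kv_get(const char *key, size_t key_len,
               void *value, size_t *value_len);
    int kv_delete(const char *key, size_t key_len);

The key-to-media mapping moves into drive firmware, which is exactly the sort of work those spare ARM cores can absorb.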


Topics: NVMe, enmotus, software defined storage, SDS, NVDIMM

Evolution of Storage Software

Posted by Adam Zagorski on Apr 4, 2018 12:37:47 PM

Part 1 … the server and cluster


Since time immemorial, we have used the SCSI-based file stack to define how we talk to drives. Mature but very verbose, it was an ideal match for single-core CPUs and for the slow interfaces of very slow hard drives. With this stack, it was perfectly acceptable to initiate an I/O and then swap processes, since the I/O took many milliseconds to complete.
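The arithmetic behind that trade-off is stark: a context switch costs a few microseconds, while a hard-drive seek costs around ten milliseconds, so descheduling the caller wastes a negligible fraction of the wait. The classic blocking pattern looks like this (plain POSIX calls; the device path is just an example):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        char buf[4096];
        int fd = open("/dev/sda", O_RDONLY);   /* any slow block device */
        if (fd < 0) { perror("open"); return 1; }

        /* The process blocks here; the scheduler runs other work for
         * the many milliseconds the hard drive needs to seek. */
        ssize_t n = pread(fd, buf, sizeof buf, 0);
        if (n < 0) perror("pread");

        close(fd);
        return 0;
    }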

The arrival of flash drives upset this applecart completely. IOPS per drive grew by 1000X in short order and neither SCSI-based SAS nor SATA could keep up. The problem continues to get worse, with the most recent flash card leader, Smart IOPS, delivering 1.7 million IOPS, a 10-fold further increase.

The industry’s answer to this performance problem is to replace SAS and SATA with PCIe, and the protocol with NVMe. This gives us a solution where multiple ring buffers hold queues of storage operations, with the queues bound to individual cores or even to apps. Batches of operations are pulled from a queue and processed by the drive using DMA, while on the return side completion queues are likewise built up and serviced by the appropriate host. Interrupts are coalesced so that one interrupt services many responses.
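A simplified model of one submission/completion queue pair is sketched below. The structure layouts and the sentinel used to spot new completions are simplifications for illustration (real NVMe uses a phase-tag bit and much richer command formats); none of this is taken from the spec verbatim:

    #include <stdint.h>

    #define QUEUE_DEPTH 256                    /* entries per ring */

    struct sq_entry { uint8_t opcode; uint64_t lba; uint16_t nblocks; uint16_t cid; };
    struct cq_entry { uint16_t cid; uint16_t status; };

    struct queue_pair {
        struct sq_entry sq[QUEUE_DEPTH];       /* submission ring, host-written */
        struct cq_entry cq[QUEUE_DEPTH];       /* completion ring, drive-written */
        uint32_t sq_tail;                      /* host enqueue position */
        uint32_t cq_head;                      /* host dequeue position */
        volatile uint32_t *sq_doorbell;        /* device register: new tail */
        volatile uint32_t *cq_doorbell;        /* device register: new head */
    };

    /* Host side: queue a read without waiting. One queue pair per core
     * removes the locking that the old single-queue stack needed. */
    static void submit_read(struct queue_pair *qp, uint64_t lba,
                            uint16_t nblocks, uint16_t cid)
    {
        struct sq_entry *e = &qp->sq[qp->sq_tail];
        e->opcode = 0x02;                      /* read command */
        e->lba = lba;
        e->nblocks = nblocks;
        e->cid = cid;
        qp->sq_tail = (qp->sq_tail + 1) % QUEUE_DEPTH;
        *qp->sq_doorbell = qp->sq_tail;        /* drive DMAs the commands */
    }

    /* One coalesced interrupt can cover many completions, so the
     * handler drains everything pending in one pass. */
    static int reap_completions(struct queue_pair *qp, struct cq_entry *out, int max)
    {
        int n = 0;
        while (n < max && qp->cq[qp->cq_head].status != 0xFFFF) {
            out[n++] = qp->cq[qp->cq_head];
            qp->cq[qp->cq_head].status = 0xFFFF;   /* mark slot consumed */
            qp->cq_head = (qp->cq_head + 1) % QUEUE_DEPTH;
        }
        *qp->cq_doorbell = qp->cq_head;
        return n;
    }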


Topics: NVMe, big data, Intel Optane, software defined storage

Storage Analytics and SDS

Posted by Jim O'Reilly on Aug 17, 2017 11:30:36 AM

Software-defined storage (SDS) is part of the drive to make infrastructure virtual by abstracting the control logic software (the control plane) from the low-level data management (the data plane). In the process, the control plane becomes a virtual instance that can reside on any node in the compute cluster.

The SDS approach allows the control micro-services to be scaled for increased demand and chained for more complex operations (Index+compress+encrypt, for example), while making systems generally hardware-agnostic. No longer is it necessary to buy storage units with a fixed set of functions, only to face a forklift upgrade when new features are needed.
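Chaining is easy to picture as a pipeline of data-plane stages composed by the control plane. The sketch below is a generic illustration with hypothetical stage names, not Enmotus code:

    #include <stddef.h>

    /* A data-plane micro-service: transform a buffer in place and
     * report the new length; return 0 on success. */
    typedef int (*stage_fn)(void *buf, size_t *len);

    /* Hypothetical stand-ins for real services; real logic goes here. */
    static int index_stage(void *buf, size_t *len)    { (void)buf; (void)len; return 0; }
    static int compress_stage(void *buf, size_t *len) { (void)buf; (void)len; return 0; }
    static int encrypt_stage(void *buf, size_t *len)  { (void)buf; (void)len; return 0; }

    /* Run each stage in turn; any failure aborts the whole operation. */
    static int run_chain(stage_fn *chain, size_t nstages, void *buf, size_t *len)
    {
        for (size_t i = 0; i < nstages; i++)
            if (chain[i](buf, len) != 0)
                return -1;
        return 0;
    }

    /* The Index+compress+encrypt chain from the text: adding a feature
     * means editing this table, not replacing the storage hardware. */
    static stage_fn write_chain[] = { index_stage, compress_stage, encrypt_stage };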

SDS systems are very dynamic, with mashups of micro-services that may survive only for a few blocks of data. This brings new challenges:

  • Data flow - Network VLAN paths are transient, with rerouting happening continuously for new operations, for failure recovery and for load balancing.
  • Failure detection - Hard failures are readily detectable, allowing a replacement instance to be started and recovery to occur quickly. Soft failures are the real problem: intermittent errors need to be trapped, analyzed and mitigated (a minimal detection sketch follows this list).
  • Bottlenecks - Slowdowns occur in many different places. Code is not perfect, nor is it 100 percent tested bug-free. In complex storage systems we’ll see path or device slowdowns on the storage side, and instance or app issues on the server side. Problems may also arise in the network, caused by collisions both at the endpoints of a VLAN and in the intermediate routing nodes.
  • Everything is virtual - The abstraction of the planes complicates root-cause analysis tremendously.
  • Automation - There is little human intervention in the operation of SDS, so reconstructing and analyzing events manually is very difficult, especially in real time.
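To make "trapping a soft failure" concrete, below is a minimal sketch of the kind of per-device latency-outlier detector an analytics layer might run. The EWMA approach, the structure and the threshold are illustrative assumptions, not a description of any particular SDS product:

    #include <stdbool.h>

    /* Exponentially weighted moving average (EWMA) of per-device I/O
     * latency. A completion far above the running average is flagged
     * as a possible soft failure or path slowdown for deeper analysis. */
    struct latency_monitor {
        double ewma_us;      /* smoothed latency, microseconds */
        double alpha;        /* smoothing factor, e.g. 0.05 */
        double threshold;    /* flag a sample above threshold * ewma */
    };

    static bool record_latency(struct latency_monitor *m, double sample_us)
    {
        bool outlier = m->ewma_us > 0.0 &&
                       sample_us > m->threshold * m->ewma_us;
        m->ewma_us = (m->ewma_us == 0.0)
                         ? sample_us
                         : (1.0 - m->alpha) * m->ewma_us + m->alpha * sample_us;
        return outlier;      /* true: trap this event for root-cause work */
    }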

Topics: Data Center, software defined storage, storage analytics, SDS

Delivering Data Faster

Accelerating cloud, enterprise and high performance computing

Enmotus FuzeDrive accelerates your hot data when you need it, stores it on cost-effective media when you don't, and does it all automatically so you don't have to.

  • Visual performance monitoring
  • Graphical management interface
  • Best-in-class performance/capacity
