Enmotus Blog

Jim O'Reilly

Jim O’Reilly is a well-respected consultant and commentator on the IT industry. He regularly writes for top-tier publishing sites. Jim is also an experienced CEO with a track record that includes leading the development of SCSI and building the industry’s first SCSI chip, now in the Smithsonian. Jim’s profile is at https://www.linkedin.com/in/jamesmoreilly
Recent Posts

A New Age Storage Stack

Posted by Jim O'Reilly on Jun 4, 2018 3:42:54 PM

For over three decades, we’ve lived with a boring truth. Disk drive performance was stuck in a rut, only doubling over all that time. One consequence was that storage architecture became frozen, with little real innovation. RAID added a boost, but at a high price. In fact, we didn’t get a break until SSDs arrived on the scene.

SSDs really upset the applecart. Per drive performance increased 1000X in just a few years and all bets were off at that point. Little did we realize that the potential of SSDs reached into stratospheric levels of millions of IOPS per drive.

All of this performance broke the standard SCSI model of the storage stack in the operating system. An interrupt-driven, verbose stack with up to seven levels of address translation simply can’t sustain the needed I/O rate. The answer is the NVMe stack, which consolidates I/Os and interrupts efficiently and uses the power of RDMA to reduce round-trip counts and overhead dramatically. Rates in excess of 20M IOPS have been demonstrated, and there is still room to speed up the protocol.
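The efficiency gain comes largely from NVMe’s queue-pair model: many commands are posted per doorbell write, and many completions are reaped per interrupt. The sketch below is a toy model of that batching, with invented class and method names, not a real driver API:

```python
# Toy model of NVMe-style queue pairs: commands are batched into a
# submission queue and published with a single doorbell write, and many
# completions are reaped per interrupt (interrupt coalescing).

class ToyNvmeQueuePair:
    def __init__(self):
        self.submission_queue = []
        self.completion_queue = []
        self.doorbell_writes = 0
        self.interrupts = 0

    def submit_batch(self, commands):
        # Queue many commands, then ring the doorbell once for the batch.
        self.submission_queue.extend(commands)
        self.doorbell_writes += 1

    def complete_all(self):
        # Device posts all completions, then raises a single interrupt;
        # the host reaps the whole completion queue in one pass.
        self.completion_queue = list(self.submission_queue)
        self.submission_queue.clear()
        self.interrupts += 1
        reaped = len(self.completion_queue)
        self.completion_queue.clear()
        return reaped

qp = ToyNvmeQueuePair()
qp.submit_batch([f"read-{i}" for i in range(1024)])
done = qp.complete_all()
print(done, qp.doorbell_writes, qp.interrupts)  # 1024 I/Os, 1 doorbell, 1 interrupt
```

Contrast this with the legacy model, where each I/O can cost its own register write and its own interrupt: the per-command overhead, not the media, becomes the limit.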

Read More

Topics: NVMe, autotiering, hyperconverged, NVMe over Fibre, enmotus, data analytics, NVDIMM

Evolution of Storage - Part 2

Posted by Jim O'Reilly on May 10, 2018 10:30:51 AM

Part 2: The Drive

Over time, the smarts in storage have migrated back and forth between the drive and the host system. Behind this shifting picture are two key factors. First, the intelligence of the micro-controller chip determines what a drive can do; second, the need to correct media errors establishes what a drive must do.

Once SCSI hit the market, the functionality split between host and drive essentially froze, and stayed frozen for nearly three decades. The advent of new error-correction needs for SSDs, combined with the arrival of ARM CPUs that are both cheap and powerful, makes function-shifting once again interesting.

Certainly, some of the new compute power goes to sophisticated multi-tier error correction to compensate for the wear-out of QLC drives or the effects of media variations, but a 4-core or 8-core ARM still has a lot of unused capability. We’ve struggled to figure out how to use that power for meaningful storage functions, and that’s led to a number of early initiatives.

The first to bat was Seagate’s Kinetic drive. Making a play for storing “Big Data” in a more native form, Kinetic adds a key/value store to its interface, replacing traditional block access altogether. While the Kinetic interface is an open standard and free to emulate, no other vendor has yet jumped on the bandwagon and Seagate’s sales are small.
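The difference between the two interface styles can be sketched in a few lines. The classes and method names below are invented for illustration (the real Kinetic API is a network protocol, not a Python class); the point is who owns the data layout:

```python
# Block interface: the host manages all layout; the drive only sees
# fixed-size blocks at numeric addresses.
class BlockDrive:
    def __init__(self, block_size=4096, num_blocks=1024):
        self.block_size = block_size
        self.blocks = [bytes(block_size)] * num_blocks

    def write_block(self, lba, data):
        assert len(data) == self.block_size  # caller handles packing/padding
        self.blocks[lba] = data

# Kinetic-style key/value interface: the drive itself maps keys to
# variably sized values, so the host skips file-system layout entirely.
class KeyValueDrive:
    def __init__(self):
        self.store = {}

    def put(self, key, value):
        self.store[key] = value

    def get(self, key):
        return self.store.get(key)

kv = KeyValueDrive()
kv.put(b"sensor/2018-06-04/temp", b"21.5C")
print(kv.get(b"sensor/2018-06-04/temp"))
```

With the key/value model, the drive’s on-board ARM cores do the indexing work that a host file system or object store would otherwise do.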

Read More

Topics: NVMe, enmotus, software defined storage, SDS, NVDIMM

A.I. For Storage

Posted by Jim O'Reilly on Dec 18, 2017 2:12:46 PM

As we saw in the previous part of this two-part series, “Storage for A.I.”, the performance demands of A.I. will combine with technical advances in non-volatile memory to dramatically increase performance and scale within the storage pool, and to move data addressing to a much finer granularity: the byte rather than the 4KB block. This all creates a manageability challenge that must be resolved if we are to attain the potential of A.I. systems (and next-gen computing in general).
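Why granularity matters is simple arithmetic. Updating a small value through a block interface forces a read-modify-write of the whole block, while byte-granular persistent memory touches only the bytes that changed. A back-of-envelope sketch, with the 8-byte update size chosen purely for illustration:

```python
# Write amplification of a small update through a 4 KB block interface
# versus byte-addressable storage.

BLOCK_SIZE = 4096   # bytes moved when the whole block is rewritten
update_size = 8     # assumed size of the value actually being changed

block_bytes_moved = BLOCK_SIZE       # read-modify-write of the full block
byte_addr_bytes_moved = update_size  # just the bytes that changed
amplification = block_bytes_moved / byte_addr_bytes_moved
print(amplification)  # 512.0
```

A 512x difference in data moved per small update is one reason byte-level addressing changes both the performance and the management picture.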

Simply put, storage is getting complex and will become ever more so as we expand the size and use of Big Data. Rapid and agile monetization of data will be the mantra of the next decade. Consequently, the IT industry is starting to look for ways to migrate from today’s essentially manual storage management paradigms to emulate, and exceed, the automation of control demonstrated in public clouds.

Read More

Topics: NVMe, Data Center, NVMe over Fibre, enmotus, data analytics, NVDIMM, artificial intelligence

Storage for Artificial Intelligence

Posted by Jim O'Reilly on Dec 4, 2017 1:17:42 PM

It’s not often I can write about two dissimilar views of the same technology, but recent moves in the industry on the A.I. front mean that not only does storage need to align with A.I. needs better than any traditional storage approach does, but the rise of software-defined storage concepts makes A.I. an inevitable choice for solving advanced problems. The result: this article on “Storage for A.I.” and the second part of the story, “A.I. for Storage”.

The issue is delivery. A.I. is very data hungry. The more data A.I. sees, the better its results. Traditional storage, the world of RAID and SAN, iSCSI and arrays of drives, is a world of bottlenecks, queues and latencies. There’s the much-layered file stack in the requesting server, protocol latency, and then the ultimate choke point, the array controller.

That controller can talk to 64 drives or more via SATA or SAS, but typically has output equivalent to only about 8 SATA ports. This didn’t matter much with HDDs, but SSDs deliver data much faster than spinning rust, so we have a massive choke point simply in funneling those streams down to the capacity of the array’s output ports.
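The fan-in arithmetic makes the choke point concrete. The per-port and per-drive figures below are assumptions for illustration (roughly 550 MB/s usable on a 6 Gb/s SATA link, 150 MB/s sustained for a spinning disk), not measurements of any specific array:

```python
# Rough fan-in arithmetic for the array-controller choke point.

PORT_MB_S = 550          # assumed usable throughput of one 6 Gb/s SATA link
HDD_MB_S = 150           # assumed sustained rate of a spinning disk

drives = 64
output_ports = 8

hdd_demand = drives * HDD_MB_S                 # 9,600 MB/s offered by HDDs
ssd_demand = drives * PORT_MB_S                # 35,200 MB/s offered by SSDs
controller_output = output_ports * PORT_MB_S   # 4,400 MB/s the array can emit

print(hdd_demand / controller_output)   # ~2.2x oversubscribed with HDDs
print(ssd_demand / controller_output)   # 8.0x oversubscribed with SSDs
```

With HDDs the oversubscription was mild and hidden behind seek latency; with SSDs the controller throws away most of the drives’ bandwidth.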

There’s more! That controller sits in the data path and data is queued in its memory, adding latency. Then we need to look at the file stack latency. That stack is a much-patched solution, with layer upon layer of added functionality and virtualization. In fact, the “address” of a block of data is transformed no fewer than seven times before it reaches the actual bits and bytes on the drive. This was very necessary for the array world, but solid-state drives are fundamentally different, and simplicity becomes a real possibility.
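The seven-layer translation chain can be made visible as a list. The specific layer names below are representative of a traditional SAN/RAID stack, not an exact kernel trace:

```python
# Illustrative layers of address translation in a traditional file/array
# stack; each one remaps the address before data is reached.

LAYERS = [
    "file offset -> file extent",
    "extent -> volume logical block",
    "volume -> partition block",
    "partition -> RAID stripe",
    "stripe -> member-drive LBA",
    "LBA -> drive zone/track",
    "zone -> physical sector",
]

def translate(address):
    hops = 0
    for _layer in LAYERS:
        address += 1   # stand-in for each remapping step
        hops += 1
    return address, hops

_, hops = translate(0)
print(hops)  # 7 translations before the data is reached
```

Each hop costs CPU cycles and, in some layers, locks and queueing; a solid-state stack can collapse most of them.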

Read More

Topics: NVMe, SSD, NVDIMM, artificial intelligence, machine learning

Information Storage – A truly novel concept

Posted by Jim O'Reilly on Oct 17, 2017 9:37:29 AM

When you see “storage” mentioned it’s often “data storage”. The implication is that there is nothing in the “data” that is informational, which even at a verbatim read is clearly no longer true. Open the storage up, of course, and the content is a vast source of information, both mined and unmined, but our worldview of storage has been to treat objects as essentially dumb, inanimate things.

This 1970’s view of storage’s mission is beginning to change. The dumb storage appliance is turning into smart software-defined storage services running in virtual clusters or clouds, with direct access to storage drives. As this evolution to SDS has picked up momentum, pioneers in the industry are taking a step beyond and looking at ways to extract useful information from what is stored and convert it to new ways to manage the information lifecycle, protect integrity and security and provide guidance that is information-centric to assist processing and guide the other activities around the object.

Read More

Topics: big data, SSD, Data Center

Car Wrecks and Crashing Computers

Posted by Jim O'Reilly on Sep 13, 2017 12:05:33 PM

We are just starting the self-driving car era. It’s a logical follow-on to having GPS and always-connected vehicles, but we are still in the early days of evolution. Even so, it’s a fair bet that a decade from now, most, if not all, vehicles will have self-driving capability.

What isn’t clear is what it will look like. Getting from point A to point B is easy enough (GPS), and avoiding hitting anything else seems to be in the bag, too. What isn’t figured out yet is how to stop those awful traffic jams. I live in Los Angeles, and a 3-hour commute on a Friday afternoon is commonplace. In fact, Angelenos typically spend between 6 and 20 hours a week in their cars, with the engine running, gas being guzzled and tempers fraying!

It’s particularly true in LA that each car usually has a single occupant, so that’s a lot of gas, metal and pavement space for a small payload. What this leads us to is the idea of:

  1. Automating car control and centralizing routing, which would allow load-balancing the roads and routing around slowdowns via a cloud app
  2. Making the vehicles single- or dual-seat electric mini-cars
  3. Using the mini-cars to pack lanes more effectively and move cars closer together
Read More

Topics: big data, data analytics, cloud storage

Optimizing Dataflow in Next-Gen Clusters

Posted by Jim O'Reilly on Sep 6, 2017 10:57:55 AM

We are on the edge of some dramatic changes in computing infrastructure. New packaging methods, ultra-dense SSDs and high core counts will change what a cluster looks like. Can you imagine a 1U box having 60 cores and a raw SSD capacity of 1 petabyte? What about drives using 25GbE interfaces (with RDMA and NVMe over Fabrics), accessed by any server in the cluster?

Consider Intel’s new “ruler” drive, the P4500 (shown below with a concept server). It’s easy to see 32 to 40 TB of capacity per drive, which means that the 32 drives in their concept storage appliance give a petabyte of raw capacity (and over 5PB compressed). It’s a relatively easy step to see those two controllers replaced by ARM-based data movers, which would reduce system overhead dramatically and boost performance nearer to the available drive performance; the likely next step is to replace the ARM units with merchant-class GbE switches and talk directly to the drives.

I can imagine a few of these units at the top of each rack with a bunch of 25/50 GbE links to physically compact, but powerful, servers (2 or 4 per rack U) which use NVDIMM as close-in persistent memory.

The clear benefit is that admins can react to the changing needs of the cluster for performance and bulk storage independently of the compute horsepower deployed. This is very important as storage moves from low-capacity structured to huge capacity big-data unstructured.

Read More

Topics: All Flash Array, Intel Optane, Data Center, NVMe over Fibre, data analytics

Storage Analytics and SDS

Posted by Jim O'Reilly on Aug 17, 2017 11:30:36 AM

Software-defined storage (SDS) is part of the drive to make infrastructure virtual by abstracting the control logic (the control plane) from the low-level data management (the data plane). In the process, the control plane becomes a virtual instance that can reside anywhere in the compute cluster.

The SDS approach allows the control micro-services to be scaled for increased demand and chained for more complex operations (index+compress+encrypt, for example), while making the systems generally hardware-agnostic. No longer is it necessary to buy storage units with a fixed set of functions, only to face a forklift upgrade when new features are needed.
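The chaining idea is just function composition over a data stream. A minimal sketch of the index+compress+encrypt example, where the XOR "cipher" is a deliberate toy stand-in (a real chain would use a real cipher), and the function names are invented:

```python
import zlib

# Sketch of chaining SDS micro-services: each service consumes the
# previous one's output. Service names are illustrative.

def index(data, catalog):
    catalog[hash(data)] = len(data)   # record a fingerprint and size
    return data

def compress(data):
    return zlib.compress(data)

def toy_encrypt(data, key=0x5A):
    # Placeholder "encryption" only: XOR is NOT secure.
    return bytes(b ^ key for b in data)

def chain(data, services):
    for service in services:
        data = service(data)
    return data

catalog = {}
payload = b"hot block of data" * 100
out = chain(payload, [lambda d: index(d, catalog), compress, toy_encrypt])

# XOR is its own inverse, so the chain unwinds cleanly.
restored = zlib.decompress(toy_encrypt(out))
print(restored == payload)  # True
```

Because each stage is an independent service, any stage can be scaled out, swapped, or dropped per workload, which is exactly the agility the forklift-upgrade model lacks.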

SDS systems are very dynamic, with mashups of micro-services that may survive only for a few blocks of data. This brings new challenges:

  • Data flow - Network VLAN paths are transient, with rerouting continuously happening for new operations, for failure recovery and for load balancing
  • Failure detection - Hard failures are readily detectable, allowing a replacement instance and recovery to occur quickly. Soft failures are the problem. Intermittent errors need to be trapped, analyzed and mitigation exercised
  • Bottlenecks - Slowdowns occur in many different places. Code is not perfect, nor is it 100 percent tested bug-free. In complex storage systems, we’ll see path or device slowdowns, on the storage side, and instance or app issues, on the server side. Moreover, problems may reside in the network caused by collisions both at the endpoints of a VLAN and in the intermediate routing nodes.
  • Everything is virtual - The abstraction of the planes complicates root cause analysis tremendously
  • Automation - There is little human intervention in the operation of SDS. Reconnecting and analyzing manually is naturally very difficult, especially in real-time
Read More

Topics: Data Center, software defined storage, storage analytics, SDS

The Evolution of Hyper-servers

Posted by Jim O'Reilly on Aug 2, 2017 2:51:09 PM

A few months ago, the CTO of one of the companies involved in system design asked me how I would handle and store the literal flood of data expected from the Square Kilometre Array, the world’s most ambitious radio-astronomy program. The answer came in two parts. First, the data needs compression, which isn’t trivial with astronomy data, and this implies a new way to process at upwards of 100 gigabytes per second. Second, the hardware platforms become very parallel in design, especially in communications.

We are talking designs where each ultra-fast NVMe drive has access to network bandwidth capable of keeping up with the stream. This implies having at least one 50GbE link per drive, since drives with 80 gigabit/second streaming speeds are entering volume production today.
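The link-count arithmetic is worth making explicit: at the quoted 80 gigabit/second streaming rate, one 50GbE link per drive is the floor, and keeping up with a sustained stream actually takes two such links per drive:

```python
import math

# How many 50 GbE links does one drive streaming at 80 Gbit/s need?
drive_stream_gbps = 80
link_gbps = 50

links_needed = math.ceil(drive_stream_gbps / link_gbps)
print(links_needed)  # 2
```

This is why the network fabric, not the drives, becomes the dominant design constraint in these hyper-servers.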

Read More

Topics: NVMe, big data, Intel Optane, Data Center

Automating Storage Performance in Hybrid and Private Clouds

Posted by Jim O'Reilly on Jun 27, 2017 10:10:00 AM

Reading current blogs on clouds and storage, it’s impossible not to conclude that most cloud users have abandoned hope of tuning system performance and are simply ignoring the topic. The reality is that our cloud models struggle with performance issues. For example, a server can hold roughly 1,000 virtual machines.

With an SSD delivering 40K IOPS, that’s just 40 IOPS per VM. That is on the low side for many use cases, but now move to Docker containers on the next generation of server. Compute power and, more importantly, DRAM capacity increase to match 4,000 containers per system, but IOPS drop to just 10 per container.
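The per-instance arithmetic is stark when written out (figures as given in the text):

```python
# IOPS available per instance when one local SSD is shared.
ssd_iops = 40_000

vms_per_server = 1_000
iops_per_vm = ssd_iops // vms_per_server              # 40

containers_per_server = 4_000
iops_per_container = ssd_iops // containers_per_server  # 10

print(iops_per_vm, iops_per_container)
```

Denser packing multiplies the sharing factor faster than drive performance grows, so per-instance I/O keeps shrinking.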

Now this is the best we can get with typical instances: one local instance drive, with all the rest networked I/O. The problem is that network storage is also pooled, and this limits the storage available to any instance. The numbers are not brilliant!

We see potential bottlenecks everywhere. Data can be halfway across a datacenter instead of localized to a rack where compute instances are accessing it. Ideally, the data is local (possible with a hyper-converged architecture) so that it avoids crossing multiple switches and routers. This may be impossible to achieve, especially if diverse datasets are being used for an app.

Networks choke and that is true of VLANs used in cloud clusters. The problem with container-based systems is that the instances and VLANs involved are often closed down by the time you get a notification. That’s the downside of agility!

Apps choke, too, and microservices likewise. The fact that these often only exist for short periods makes debug both a glorious challenge and very frustrating. Being able to understand why a given node or instance runs slower than the rest in a pack can fix a hidden bottleneck that slows completion of the whole job stream.

Hybrid clouds add a new complexity. Typically, these are heterogeneous. The cloud stack in the private segment is likely OpenStack, though Azure Stack promises to be an alternative. The public cloud will most likely be AWS, Azure or Google. This means two separate environments, very different from each other in operation, syntax and billing, and an interface between the two.

Read More

Topics: Data Center, data analytics, cloud storage

Delivering Data Faster

Accelerating cloud, enterprise and high performance computing

Enmotus FuzeDrive accelerates your hot data when you need it, stores it on cost effective media when you don't, and does it all automatically so you don't have to.


  • Visual performance monitoring
  • Graphical management interface
  • Best in class performance/capacity
