Enmotus Blog

Jim O'Reilly

Jim O’Reilly is a well-respected consultant and commentator on the IT industry. He regularly writes for top-tier publishing sites. Jim is also an experienced CEO with a track record that includes leading the development of SCSI and building the industry’s first SCSI chip, now in the Smithsonian. Jim’s profile is at https://www.linkedin.com/in/jamesmoreilly

Recent Posts

Storage analytics impact performance and scaling

Posted by Jim O'Reilly on Jun 14, 2017 11:21:10 AM

For the last three decades of computer storage use, we’ve operated essentially blindfolded. What we’ve known about performance has been gleaned from artificial benchmarks such as IOMeter and from guesstimates of IOPS requirements, based on little more than a sense of how fast an application seems to be running.

The result is something like driving a car without a speedometer ... it’s a mess of close calls and inefficient operations.

On the whole, though, we muddled through. That’s no longer adequate in storage’s new age. Storage performance is stellar compared with those early days, with SSDs raising IOPS per drive by as much as 1000X. Wait, you say, tons of IOPS…why do we have problems?

The issue is that we share much of our data across clusters of systems, while the IO demand of any given server has jumped up in response to virtualization, containers and the horsepower of the latest CPUs. In fact, that huge jump in data moving around between nodes makes driving blind impossible even for small virtualized clusters, never mind scaled-out clouds.
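
Getting even a crude speedometer is straightforward at the device level. On Linux, for instance, live per-device IOPS can be read from the kernel’s own counters rather than inferred from a benchmark run. A minimal sketch in Python (the device name nvme0n1 and the one-second window are assumptions, and the field layout is specific to Linux’s /proc/diskstats; nothing here is Enmotus code):

import time

def read_io_counts(device: str) -> tuple[int, int]:
    """Return (reads completed, writes completed) for one block device."""
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == device:
                # Per the kernel's diskstats layout: field 4 is reads
                # completed, field 8 is writes completed.
                return int(fields[3]), int(fields[7])
    raise ValueError(f"device {device!r} not found")

def sample_iops(device: str = "nvme0n1", interval: float = 1.0) -> float:
    """Difference two samples of the counters to get observed IOPS."""
    r0, w0 = read_io_counts(device)
    time.sleep(interval)
    r1, w1 = read_io_counts(device)
    return ((r1 - r0) + (w1 - w0)) / interval

if __name__ == "__main__":
    print(f"observed IOPS: {sample_iops():.0f}")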

All of this is happening against a background of application-based resilience. System uptime is no longer measured by how long a server runs. The key measurement is how long an app runs properly. Orchestrated virtual systems recover from server failures quite quickly; the app is simply restarted on another instance in a different server.
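
If app-level uptime is the metric, it has to be measured at the app. A minimal sketch of that idea, polling a hypothetical HTTP health endpoint (the URL and the polling cadence are illustrative assumptions):

import time
import urllib.request

def app_availability(url: str, checks: int = 60, period: float = 1.0) -> float:
    """Fraction of health checks that succeeded over the sampling window."""
    ok = 0
    for _ in range(checks):
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                ok += resp.status == 200
        except OSError:
            pass  # a server died or the app is mid-restart elsewhere;
                  # only the app's answer counts toward uptime
        time.sleep(period)
    return ok / checks

# e.g. app_availability("http://my-app.example/healthz")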


Topics: All Flash Array, Data Center, hyperconverged, NVMe over Fibre, data analytics

Storage Visions 2017

Posted by Jim O'Reilly on Jan 18, 2017 2:22:42 PM

Here it is. A new year opens up in front of us. This one is going to be lively and storage is no exception. In fact, 2017 should see some real fireworks as we break away from old approaches and move on to some new technologies and software.


Topics: NVMe, SSD, Data Center, data analytics, NVMe over Fibre

The Art of “Storage-as-a-Service”

Posted by Jim O'Reilly on Jan 9, 2017 2:24:50 PM

Most enterprise datacenters today are considering the hybrid cloud model for their future deployments. Agile and flexible, the model is expected to yield higher efficiencies than traditional setups, while allowing a datacenter to be sized to average, as opposed to peak, workloads.

In reality, achieving portability of apps between clouds and reacting rapidly to workload increases both run up against a data placement problem. The agility idea fails if the data is in the wrong cloud when a burst is needed. This is exacerbated by the new containers approach, which can start up a new instance in a few milliseconds.

Data placement is in fact the most critical issue in hybrid cloud deployment. Pre-emptively providing data in the right cloud, before firing up the instances that use it, is the only way to assure those expected efficiency gains.

A number of approaches have been tried, with varying success, but none are truly easy to implement and all require heavy manual intervention. Let’s look at some of these approaches:

  1. Sharding the dataset – By identifying the hottest segment of the dataset (e.g., names beginning with “S”), this approach places a snapshot of those files in the public cloud and periodically updates it. When a cloudburst is needed, locks for any files being changed are passed over to the public cloud and the in-house versions of the files are blocked from updating. The public cloud files are then updated and the locks cleared, as sketched below.
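
A minimal sketch of that hand-off, with all storage calls as hypothetical stubs (a real deployment would use the array’s or cloud provider’s own snapshot and locking APIs):

def is_hot(name: str) -> bool:
    # Illustrative shard predicate from the example above:
    # names beginning with "S" are the hot segment.
    return name.startswith("S")

def push_to_cloud(name: str) -> None:
    print(f"sync {name} -> public cloud")         # stub for a real copy

def transfer_lock(name: str) -> None:
    print(f"lock for {name} now owned by cloud")  # stub for a lock hand-off

blocked_in_house: set[str] = set()

def periodic_sync(changed: list[str]) -> None:
    """Steady state: keep the public-cloud snapshot of hot files fresh."""
    for f in changed:
        if is_hot(f):
            push_to_cloud(f)

def begin_cloudburst(open_files: list[str]) -> None:
    """Burst: hand ownership of in-flight hot files to the public cloud."""
    for f in open_files:
        if is_hot(f):
            blocked_in_house.add(f)  # in-house versions stop updating
            push_to_cloud(f)         # final sync before the lock moves
            transfer_lock(f)         # public cloud may now update the file

# e.g. periodic_sync(["Smith.db", "Jones.db"]); begin_cloudburst(["Sanchez.db"])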

Topics: NVMe, autotiering, big data, SSD, hyperconverged

The Evolution Of Storage

Posted by Jim O'Reilly on Nov 29, 2016 4:09:29 PM

The storage industry continues to evolve rapidly, which is both exciting and challenging. I intend this blog to look at the hot news in the industry, as well as taking a view of trends and occasionally long-term directions.

This promises to be an interesting effort. There are plenty of innovations to describe, while retakes on older ideas crop up quite often. I hope you will find the subject as fascinating as I do.

Trends

1. It’s clear that the high-performance enterprise hard drive is a dying breed. SSDs and all-flash arrays have undercut demand. With improved wear life, flash-based products meet the stringent needs of the datacenter; plus, they are cooler, quieter and smaller, and of course much faster.

Relevant news on this includes:


Topics: All Flash Array, 3D Xpoint, SSD, Intel Optane, Data Center

Why Auto-Tiering is Critical

Posted by Jim O'Reilly on Sep 22, 2016 9:39:46 AM

Storage in IT comes in multiple flavors. We have super-fast NVDIMMs, fast and slow SSDs and snail-paced hard drives. Add in the complexities of networking versus local connection, plus price and capacity, and figuring out the optimum configuration is no fun. Economics and performance goals guarantee that any enterprise configuration will be a hybrid of several storage types.

Enter auto-tiering. This is a deceptively simple concept. Auto-tiering moves data back and forth between the layers of storage, running in the background. This should keep the hottest data on the most accessible tier of storage, while relegating old, cold data to the most distant layer of storage.

A simplistic approach isn’t quite good enough, unfortunately. Computers think in microseconds, while job queues often have a daily or weekly cycle. Data that the computer thinks is cold may suddenly get hotter than Hades when that job hits the system. Similarly, admins know that certain files are created, stored and never seen again.

This layer of user knowledge is handled by incorporating a policy engine into auto-tiering, allowing an admin to anticipate data needs and promote data through the tiers in advance of need.
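
A minimal sketch of the combined idea, assuming two tiers and a crude access-count heat metric. The extent names and the pin() policy hook are illustrative, and a real tiering engine works at the block level with far richer statistics:

from collections import Counter

FAST, SLOW = "ssd", "hdd"

class Tierer:
    def __init__(self, fast_capacity: int):
        self.heat = Counter()            # accesses per extent
        self.fast_capacity = fast_capacity
        self.pinned = set()              # policy: promote ahead of need

    def record_access(self, extent: str) -> None:
        self.heat[extent] += 1

    def pin(self, extent: str) -> None:
        """Policy engine hook: the admin knows this data is about to get hot."""
        self.pinned.add(extent)

    def placement(self) -> dict:
        """Pinned extents first, then the hottest, until the fast tier fills."""
        ranked = list(self.pinned) + [
            e for e, _ in self.heat.most_common() if e not in self.pinned
        ]
        fast = set(ranked[: self.fast_capacity])
        return {e: (FAST if e in fast else SLOW) for e in ranked}

# Usage: t = Tierer(fast_capacity=2)
# t.record_access("cold_archive"); t.record_access("db_index"); t.record_access("db_index")
# t.pin("payroll_batch")        # weekly job the heat map can't foresee
# print(t.placement())          # payroll_batch and db_index land on ssd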


Topics: NVMe, autotiering, big data
