Enmotus Blog

How Many IOPS Do You Need For Real-World Storage Performance?

Posted by Adam Zagorski on Aug 22, 2017 11:12:17 AM

We hear lots of hype today about millions of IOPS from someone’s latest flash offering. It’s true that these units are very fast, but the devil is in the details: in practice the products often deliver much weaker performance than the marketing would lead you to expect. That’s because most vendors measure their performance using highly tweaked benchmark software.

That may sound a bit extreme, but all benchmarks can be tuned for optimal performance, and we never hear about the other, slower results.

What eats up all of that performance? In the real world, events are not as smoothly sequenced as they are in a benchmark. Data requests are not evenly spread over all the storage drives, nor are they evenly spread in time. In fact, I/O goes where the apps direct, which means some files get much more access, making the drives they are on work hard but leaving other drives nearly idling.
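
To see why that matters, here is a minimal sketch (illustrative only, not Enmotus code) that assumes eight drives rated at 100K IOPS each and a Zipf-like popularity skew across files. The hottest drive saturates first, capping the whole array well below its rated total:

```python
# Minimal sketch: skewed access caps effective IOPS far below the rated total.
# Assumptions (not from the post): 8 drives at 100K IOPS each, Zipf-like file
# popularity, files placed round-robin across drives.
import random
from collections import Counter

NUM_DRIVES = 8
RATED_IOPS_PER_DRIVE = 100_000
NUM_FILES = 1_000
REQUESTS = 1_000_000

weights = [1.0 / (i + 1) for i in range(NUM_FILES)]          # Zipf-like skew
requests = random.choices(range(NUM_FILES), weights=weights, k=REQUESTS)
per_drive = Counter(f % NUM_DRIVES for f in requests)        # round-robin placement

# The busiest drive saturates first; if it carries a fraction p of the traffic,
# the array as a whole tops out at roughly (one drive's rated IOPS) / p.
hottest_share = max(per_drive.values()) / REQUESTS
effective_iops = RATED_IOPS_PER_DRIVE / hottest_share

print(f"Rated aggregate:     {NUM_DRIVES * RATED_IOPS_PER_DRIVE:,} IOPS")
print(f"Effective aggregate: {effective_iops:,.0f} IOPS under this skew")
```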

Read More

Topics: NVMe, big data, Data Center, hyperconverged, storage analytics

Storage Analytics and SDS

Posted by Jim O'Reilly on Aug 17, 2017 11:30:36 AM

Software-defined storage (SDS) is part of the drive to virtualize infrastructure by abstracting the control logic (the control plane) from low-level data management (the data plane). In the process, the control plane becomes a virtual instance that can reside anywhere in the compute cluster.

The SDS approach allows the control micro-services to be scaled for increased demand and chained for more complex operations (index + compress + encrypt, for example), while making the systems generally hardware-agnostic. No longer is it necessary to buy storage units with a fixed set of functions only to face a forklift upgrade when new features are needed.
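
As a rough illustration of that chaining idea, here is a minimal sketch in Python; the stage names and toy cipher are assumptions for illustration, not any real SDS API:

```python
# Sketch of control-plane logic composing data-plane micro-services
# (index -> compress -> encrypt) into a chain. Illustrative only.
import zlib
from typing import Callable, List

Stage = Callable[[bytes], bytes]

def index_stage(block: bytes) -> bytes:
    # A real indexing service would update a searchable catalog;
    # here the block simply passes through.
    return block

def compress_stage(block: bytes) -> bytes:
    return zlib.compress(block)

def encrypt_stage(block: bytes) -> bytes:
    # Toy XOR "cipher" purely for illustration; a real chain would call
    # an encryption micro-service (e.g. AES).
    return bytes(b ^ 0x5A for b in block)

def build_chain(stages: List[Stage]) -> Stage:
    def run(block: bytes) -> bytes:
        for stage in stages:
            block = stage(block)
        return block
    return run

pipeline = build_chain([index_stage, compress_stage, encrypt_stage])
print(len(pipeline(b"sample block of data " * 100)))  # compressed + encrypted size
```

Because the chain is just data, the control plane can rebuild it with different stages, or on different hardware, whenever demand or policy changes.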

SDS systems are very dynamic, with mashups of micro-services that may survive only for a few blocks of data. This brings new challenges:

  • Data flow - Network VLAN paths are transient, with rerouting happening continuously for new operations, for failure recovery and for load balancing.
  • Failure detection - Hard failures are readily detectable, allowing a replacement instance to be spun up and recovery to occur quickly. Soft failures are the problem: intermittent errors need to be trapped, analyzed and mitigated.
  • Bottlenecks - Slowdowns occur in many different places. Code is not perfect, nor is it 100 percent tested bug-free. In complex storage systems we’ll see path or device slowdowns on the storage side and instance or app issues on the server side. Problems may also arise in the network, caused by collisions both at the endpoints of a VLAN and in the intermediate routing nodes.
  • Everything is virtual - The abstraction of the planes complicates root-cause analysis tremendously.
  • Automation - There is little human intervention in the operation of SDS, so reconstructing and analyzing events manually is very difficult, especially in real time.
Read More

Topics: Data Center, software defined storage, storage analytics, SDS

Content driven tiering using storage analytics

Posted by Adam Zagorski on Aug 9, 2017 10:05:00 AM

IT has used auto-tiering for years as a way to move data from expensive fast storage to cheaper and slower secondary bulk storage. The approach was at best a crude approximation, able to distinguish between objects only on the basis of age or lack of use. This meant, for instance, that documents and files stayed much longer in expensive storage than was warranted. There simply was no mechanism for sending such files automatically to cheap storage.

Now, to make life even more complicated, we’ve added a new tier of storage at each end of the food chain. At the fast end, we have ultra-fast NVDIMM, an even more expensive and, more importantly, space-limited way to boost access speed, while at the other end of the spectrum the cloud is further reducing the need for in-house long-term storage. Simple auto-tiering doesn’t do enough to optimize the spectrum of storage in a four-tier system like this. We need to get much savvier about where we keep things.

The successor to auto-tiering has to take into account traffic patterns for objects and plan their lifecycle accordingly. For example, a Word document may be stored as a fully editable file in today’s solutions, but the reality is that most of these documents, once editing is complete, become read-only objects that are read in their entirety. If changes occur, a new, renamed version of the document is created and the old one is kept intact.
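
A content-driven policy along those lines could look roughly like this sketch; the tier names, thresholds and access statistics are illustrative assumptions, not the Enmotus implementation:

```python
# Sketch of a traffic-aware placement policy: objects that have stopped
# changing are treated as read-only and demoted toward cheaper tiers.
# All thresholds below are assumed values for illustration.
from dataclasses import dataclass
from typing import Optional
import time

@dataclass
class ObjectStats:
    last_write: float      # epoch seconds of the last modification
    reads_per_day: float   # recent read rate

def choose_tier(stats: ObjectStats, now: Optional[float] = None) -> str:
    now = now or time.time()
    days_since_write = (now - stats.last_write) / 86_400
    if days_since_write < 1 and stats.reads_per_day > 1_000:
        return "nvdimm"    # actively edited and very hot
    if stats.reads_per_day > 100:
        return "flash"     # read-mostly but still busy
    if days_since_write < 90:
        return "bulk"      # finished document, occasional whole-file reads
    return "cloud"         # cold archive

doc = ObjectStats(last_write=time.time() - 30 * 86_400, reads_per_day=2)
print(choose_tier(doc))    # -> "bulk"
```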

Read More

Topics: autotiering, big data, Data Center, NVMe over Fibre, enmotus, data analytics

The Evolution of Hyper-servers

Posted by Jim O'Reilly on Aug 2, 2017 2:51:09 PM

A few months ago, the CTO of one of the companies involved in system design asked me how I would handle and store the literal flood of data expected from the Square-Kilometer Array, the world’s most ambitious radio-astronomy program. The answer came in two parts. First, the data needs compression, which isn’t trivial with astronomy data, and this implies a new way to process streams at upwards of 100 gigabytes per second. Second, the hardware platforms become very parallel in design, especially in communications.

We are talking designs where each ultra-fast NVMe drive has access to network bandwidth capable of keeping up with the stream. This implies having at least one 50GbE link per drive, since drives with 80 gigabit/second streaming speeds are entering volume production today.
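
A quick back-of-envelope check using those figures (plus simple rounding) shows how quickly the per-drive link count grows:

```python
# Links needed per drive, using the figures quoted above.
import math

drive_stream_gbps = 80   # per-drive streaming rate cited in the post
link_gbps = 50           # one 50GbE port

print(math.ceil(drive_stream_gbps / link_gbps))  # -> 2 links per drive
```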

Read More

Topics: NVMe, big data, Intel Optane, Data Center

Automating Storage Performance in Hybrid and Private Clouds

Posted by Jim O'Reilly on Jun 27, 2017 10:10:00 AM

Reading current blogs on clouds and storage, it’s impossible not to conclude that most cloud users have abandoned hope of tuning system performance and are simply ignoring the topic. The reality is that our cloud models struggle with performance issues. For example, a server can hold roughly 1,000 virtual machines.

With an SSD giving 40K IOPS, that’s just 40 IOPS per VM. This is on the low side for many use cases, but now let’s move to Docker containers, using the next generation of server. The compute power and, more importantly, the DRAM space increase to match the 4,000 containers in the system, but IOPS drop to just 10 per container.
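
Spelling out that arithmetic (the per-server figures are the rough examples above; the 100-IOPS target is an assumption added for illustration):

```python
# Per-instance IOPS math from the examples above.
import math

ssd_iops = 40_000
vms_per_server = 1_000
containers_per_server = 4_000

print(ssd_iops / vms_per_server)         # 40 IOPS per VM
print(ssd_iops / containers_per_server)  # 10 IOPS per container

target_per_container = 100               # hypothetical service level
ssds_needed = math.ceil(containers_per_server * target_per_container / ssd_iops)
print(ssds_needed)                       # 10 local SSDs just to reach 100 IOPS each
```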

Now this is the best that we can get with typical instances. One local instance drive and all the rest is networked I/O. The problem is that network storage is also pooled and this limits storage availability to any instance. The numbers are not brilliant!

We see potential bottlenecks everywhere. Data can be halfway across a datacenter instead of localized to a rack where compute instances are accessing it. Ideally, the data is local (possible with a hyper-converged architecture) so that it avoids crossing multiple switches and routers. This may be impossible to achieve, especially if diverse datasets are being used for an app.

Networks choke and that is true of VLANs used in cloud clusters. The problem with container-based systems is that the instances and VLANs involved are often closed down by the time you get a notification. That’s the downside of agility!

Apps choke, too, and microservices likewise. The fact that these often exist only for short periods makes debugging both a glorious challenge and very frustrating. Being able to understand why a given node or instance runs slower than the rest of the pack can fix a hidden bottleneck that slows completion of the whole job stream.

Hybrid clouds add a new complexity. Typically, these are heterogeneous. The cloud stack in the private segment is likely OpenStack, though Azure Stack promises to be an alternative. The public cloud will most likely be one of AWS, Azure or Google. This means two separate environments, very different from each other in operation, syntax and billing, with an interface between the two.

Read More

Topics: Data Center, data analytics, cloud storage

How To Prevent Over-Provisioning - Dynamically Match Workloads With Storage Resources

Posted by Adam Zagorski on Jun 25, 2017 10:05:00 AM

The Greek philosopher Heraclitus said, “The only thing that is constant is change.” This adage rings true in most modern datacenters today. Workload demands tend to be unpredictable, which creates constant change. At any given point in time, an application can have very few demands placed on it, and at a moment’s notice the workload can spike. Satisfying these fluctuations in demand is a serious challenge for datacenters, and solving it can translate into cost savings amounting to millions of dollars.

Traditionally, data centers have thrown more hardware at this problem. Ultimately, they over-provision to make sure they have enough performance to satisfy peak periods of demand. This includes scaling out with more and more servers filled with hard drives, quite often short-stroking the hard drives to minimize latency. While hard drive costs are reasonable, this massive scale-out increases power, cooling and management costs. The figure below shows an example of the disparity between capacity requirements and performance requirements. Achieving capacity goals with HDDs is quite easy, but given that an individual high-performance HDD can achieve only about 200 random IOPS, it takes quite a few HDDs to meet the performance goals of modern database applications.
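
A rough sizing sketch makes the gap concrete; the 200-IOPS figure comes from the discussion above, while the workload target and per-drive capacity are assumptions chosen purely for illustration:

```python
# HDD count needed for capacity vs. performance (assumed targets).
import math

target_iops = 100_000       # assumed database workload
target_capacity_tb = 100    # assumed capacity requirement
hdd_iops = 200              # random IOPS per high-performance HDD (from the post)
hdd_capacity_tb = 8         # assumed per-drive capacity

print(math.ceil(target_capacity_tb / hdd_capacity_tb))  # 13 drives for capacity
print(math.ceil(target_iops / hdd_iops))                # 500 drives for IOPS
```

The 500-drive figure, driven entirely by IOPS, is the over-provisioning described here.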

Today, storage companies are pushing all-flash arrays as the solution to this challenge. This addresses the performance issue as well as power and cooling, but now massive amounts of non-active (cold) data are stored on your most expensive storage media. In addition, not all applications need flash performance. Adding all-flash is just another form of over-provisioning, with a significantly higher cost penalty.

Read More

Topics: NVMe, autotiering, big data, All Flash Array, SSD, Data Center, NVMe over Fibre, data analytics

Storage analytics impact performance and scaling

Posted by Jim O'Reilly on Jun 14, 2017 11:21:10 AM

For the last three decades of computer storage use, we’ve operated essentially blindfolded. What we’ve known about performance has been gleaned from artificial benchmarks such as IOMeter and from guesstimates of IOPS requirements during operations, based on a sense of how fast an application is running.

The result is something like steering a car without a speedometer ... it’s a mess of close calls and inefficient operations.

On the whole, though, we muddled through. That’s no longer adequate in the storage New Age. Storage performance is stellar in comparison to those early days, with SSDs raising IOPS per drive by as much as 1,000X. Wait, you say, with tons of IOPS, why do we have problems?

The issue is that we share much of our data across clusters of systems, while the IO demand of any given server has jumped up in response to virtualization, containers and the horsepower of the latest CPUs. In fact, that huge jump in data moving around between nodes makes driving blind impossible even for small virtualized clusters, never mind scaled-out clouds.

All of this is happening against a background of application-based resilience. System uptime is no longer measured in how long a server runs. The key measurement is how long an app runs properly. Orchestrated virtual systems recover from server failures quite quickly. The app is restarted on another instance in a different server.

Read More

Topics: All Flash Array, Data Center, hyperconverged, NVMe over Fibre, data analytics

Virtual Reality Drives Data Center Demand for Storage

Posted by Andy Mills on Feb 8, 2017 11:39:41 AM

Twenty-seven years ago, in 1989, I attended one of the very early virtual reality (VR) headset demonstrations in the UK. It was put on by a bunch of ex-INMOS engineers demonstrating the use of Transputers and Intel’s i860 to generate real-time image rendering in VR environments, along with the first VR gloves.

Apart from the obvious VR wow factor, a significant memory of the event was someone falling off the stage after losing their balance and orientation, which was quite impressive given the low-resolution graphics of the time (CGA, 640x200 pixels at 4-bit resolution). Luckily, they were not seriously injured.

The killer app presented at the time was remote VR teleconferencing, where individuals would magically appear across the table in front of you and be able to push an electronic document toward you which you could manipulate, read and mark up, all virtually of course. Wind forward to 2017: thanks to dramatic advances in display technologies and smaller, more compact gear, VR is finally making it into some mainstream applications, with far more realistic video and smoother graphics at a much lower cost point, along with a growing amount of web-based and gaming content to fuel demand.

So why do we care about this in the world of storage?

Read More

Topics: Data Center, virtual reality, virtualization, enmotus

Storage Automation In Next Generation Data Centers

Posted by Adam Zagorski on Jan 31, 2017 1:04:37 PM

Automation of device management and performance-monitoring analytics is necessary to control the costs of web-scale data centers, especially as most organizations continually ask their employees to do more with fewer resources.

Big Data and massive data growth are at the forefront of datacenter growth. Imagine what it takes to manage the datacenters that provide us with this information.


According to research conducted by Seagate, time-consuming drive management activities represent the largest storage-related pain points for datacenter managers. In addition to trying to manage potential failures of all of the disk drives, managers must monitor the performance of multiple servers as well. As Seagate indicates, there are tremendous opportunities for cost savings if the timing of retiring disk drives can be optimized. Significant savings can also result from streamlining the management process.


While there is no such thing as a typical datacenter, for the purpose of discussion we will assume that a typical micro-datacenter contains about 10,000 servers while a large-scale data center contains on the order of 100,000 servers. In a webscale hyperconverged environment, if each server housed 15 devices (hard drives and/or flash drives), a datacenter would contain anywhere from 150,000 to 1.5 million devices. That is an enormous number of servers and devices to manage. Even if we scale back by a couple of orders of magnitude, to 50 servers and 750 drives for example, managing a data center is still a daunting task.
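
The device-count arithmetic above, spelled out:

```python
# Devices to manage at each datacenter scale, at 15 devices per server.
devices_per_server = 15

for servers in (50, 10_000, 100_000):
    print(f"{servers:,} servers -> {servers * devices_per_server:,} devices")
# 50 -> 750, 10,000 -> 150,000, 100,000 -> 1,500,000
```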


Read More

Topics: NVMe, big data, All Flash Array, hyperconverged, NVMe over Fibre

Storage Visions 2017

Posted by Jim O'Reilly on Jan 18, 2017 2:22:42 PM

Here it is. A new year opens up in front of us. This one is going to be lively and storage is no exception. In fact, 2017 should see some real fireworks as we break away from old approaches and move on to some new technologies and software.

Read More

Topics: NVMe, SSD, Data Center, data analytics, NVMe over Fibre

Delivering Data Faster

Accelerating cloud, enterprise and high performance computing

Enmotus FuzeDrive accelerates your hot data when you need it, stores it on cost-effective media when you don't, and does it all automatically so you don't have to.


  • Visual performance monitoring
  • Graphical management interface
  • Best in class performance/capacity
