Enmotus Blog

Virtual Reality Drives Data Center Demand for Storage

Posted by Andy Mills on Feb 8, 2017 11:39:41 AM

Twenty-seven years ago, in 1989, I attended one of the very early virtual reality (VR) headset demonstrations in the UK. It was put on by a bunch of ex-INMOS engineers demonstrating the use of Transputers and Intel’s i860 to generate real-time image rendering in VR environments, along with the first VR gloves.

Apart from the obvious VR wow factor, a vivid memory of the event was someone falling off the stage after losing their balance and orientation, which was quite impressive given the low-resolution graphics of the time (CGA, 640x200 pixels at 4-bit color depth). Luckily, they were not seriously injured.

The killer app presented at the time was remote VR teleconferencing, where individuals would magically appear across the table in front of you and be able to push an electronic document toward you, which you could manipulate, read and mark up, all virtually of course. Wind forward to 2017: thanks to dramatic advances in display technology and smaller, more compact VR gear, VR is finally making it into some mainstream applications, with far more realistic video and smoother graphics at a much lower price point, along with a growing amount of web-based and gaming content to fuel demand.

So why do we care about this in the world of storage?

Read More

Topics: Data Center, virtual reality, virtualization, enmotus

Storage Automation In Next Generation Data Centers

Posted by Adam Zagorski on Jan 31, 2017 1:04:37 PM

Automation of device management and performance-monitoring analytics is necessary to control the costs of web-scale data centers, especially as most organizations continually ask their employees to do more with fewer resources.

Big Data and massive data growth are at the forefront of datacenter growth. Imagine what it takes to manage the datacenters that provide us with this information.

 

According to research conducted by Seagate, time-consuming drive-management activities are the largest storage-related pain point for datacenter managers. In addition to managing potential failures across all of their disk drives, managers must monitor the performance of multiple servers as well. As Seagate indicates, there are tremendous opportunities for cost savings if the timing of disk drive retirement can be optimized, and significant savings can also result from streamlining the management process.

 

While there is no such thing as a typical datacenter, for the purpose of discussion we will assume that a typical micro-datacenter contains about 10,000 servers, while a large-scale data center contains on the order of 100,000 servers. In a webscale hyperconverged environment, if each server housed 15 devices (hard drives and/or flash drives), a datacenter would contain anywhere from 150,000 to 1.5 million devices. That is an enormous number of servers and devices to manage. Even if we scale back by two or more orders of magnitude, to 50 servers and 750 drives for example, managing a data center is a daunting task.
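To make that scale concrete, here is a minimal back-of-the-envelope sketch in Python; the server counts and the 15-devices-per-server figure are simply the illustrative assumptions above, not measured data.

    # Rough device-count arithmetic for the datacenter sizes discussed above.
    # All figures are illustrative assumptions, not measurements.
    DEVICES_PER_SERVER = 15  # hard drives and/or flash drives per server (assumed)

    def total_devices(servers, per_server=DEVICES_PER_SERVER):
        """Number of storage devices an operations team must manage."""
        return servers * per_server

    for label, servers in [("micro-datacenter", 10_000),
                           ("large-scale datacenter", 100_000),
                           ("scaled-back example", 50)]:
        print(f"{label}: {servers:,} servers -> {total_devices(servers):,} devices")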

 

Read More

Topics: NVMe, big data, All Flash Array, hyperconverged, NVMe over Fibre

Storage Visions 2017

Posted by Jim O'Reilly on Jan 18, 2017 2:22:42 PM

Here it is. A new year opens up in front of us. This one is going to be lively and storage is no exception. In fact, 2017 should see some real fireworks as we break away from old approaches and move on to some new technologies and software.

Read More

Topics: NVMe, SSD, Data Center, data analytics, NVMe over Fibre

Flash Tiering: The Future of Hyper-converged Infrastructure

Posted by Adam Zagorski on Jan 12, 2017 1:04:00 PM

The Future of Hyper-converged Infrastructure

Read More

Topics: NVMe, big data, 3D Xpoint, SSD, Intel Optane, Data Center, hyperconverged

The Art of “Storage-as-a-Service”

Posted by Jim O'Reilly on Jan 9, 2017 2:24:50 PM

Most enterprise datacenters today are considering the hybrid cloud model for their future deployments. Agile and flexible, the model is expected to yield higher efficiencies than traditional setups, while allowing a datacenter to be sized to average, rather than peak, workloads.

In reality, both achieving portability of apps between clouds and reacting rapidly to workload increases run up against a data placement problem. The agility idea fails when data is in the wrong cloud at the moment a burst is needed. This is exacerbated by the new containers approach, which can start up a new instance in a few milliseconds.

Data placement is in fact the most critical issue in hybrid cloud deployment. Pre-emptively providing data in the right cloud, prior to firing up the instances that use it, is the only way to assure those expected efficiency gains.

A number of approaches have been tried, with varying success, but none are truly easy to implement and all require heavy manual intervention. Let’s look at some of these approaches:

  1. Sharding the dataset – By identifying the hottest segment of the dataset (e.g. names beginning with 'S'), this approach places a snapshot of those files in the public cloud and periodically updates it. When a cloudburst is needed, locks for any files being changed are passed over to the public cloud and the in-house versions of the files are blocked from updating. The public cloud files are then updated and the locks cleared.
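As a rough illustration of that hand-off, the sketch below models the snapshot-and-lock flow with a toy in-memory store; the class and function names are assumptions for illustration, not any vendor's API.

    # Sketch of the sharding approach: keep a snapshot of the hot shard in the
    # public cloud and hand file locks over to that copy during a cloudburst.
    # All names here are illustrative assumptions, not a real product's API.

    class Store:
        """Toy in-memory stand-in for a private or public cloud file store."""
        def __init__(self):
            self.files = {}      # file name -> contents
            self.locked = set()  # files currently blocked from local updates

    def is_hot(name):
        return name.upper().startswith("S")   # assumed shard predicate

    def sync_hot_shard(private, public):
        """Periodically copy a snapshot of the hot shard to the public cloud."""
        for name, data in private.files.items():
            if is_hot(name):
                public.files[name] = data

    def begin_cloudburst(private, public, files_being_changed):
        """Pass locks for in-flight files over to the public cloud copy."""
        for name in files_being_changed:
            private.locked.add(name)     # block the in-house version from updating
            public.locked.discard(name)  # the public cloud copy becomes writable

    def end_cloudburst(private, public):
        """Copy the public cloud updates back in-house and clear the locks."""
        for name in list(private.locked):
            private.files[name] = public.files.get(name, private.files[name])
            private.locked.discard(name)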
Read More

Topics: NVMe, autotiering, big data, SSD, hyperconverged

Hot Trends In Storage

Posted by Adam Zagorski on Dec 13, 2016 2:02:41 PM

Storage continues to be a volatile segment of IT. Hot areas trending in the news this month include NVMe over Fibre Channel, which is being hyped heavily now that the Broadcom acquisition of Brocade is a done deal. Another hot segment is the hyper-converged space, complemented by activity in software-defined storage from several vendors.

Flash is now running ahead of enterprise hard drives in the market, while foundry changeovers to 3D NAND are temporarily putting upward pressure on SSD pricing. High-performance storage solutions built on COTS platforms have been announced too, which will create more pressure to reduce appliance prices.

Let’s cover these topics and more in detail:

  1. NVMe over Fibre Channel is in full hype mode right now. This solution is a major step away from traditional FC insofar as it no longer encapsulates the SCSI block-IO protocol. Instead, it uses a now-standard direct-memory-access approach to reduce overhead and speed up performance significantly.
Read More

Topics: NVMe, SSD, hyperconverged, NVMe over Fibre

The Evolution Of Storage

Posted by Jim O'Reilly on Nov 29, 2016 4:09:29 PM

The storage industry continues to evolve rapidly, which is both exciting and challenging. I intend this blog to look at the hot news in the industry, as well as taking a view of trends and occasionally long-term directions.

This promises to be an interesting effort. There are plenty of innovations to describe, while retakes on older ideas crop up quite often. I hope you will find the subject as fascinating as I do.

Trends

1. It’s clear that the high-performance enterprise hard drive is a dying breed. SSDs and all-flash arrays have undercut demand. With improved wear life, flash-based products meet the stringent needs of the datacenter; plus, they are cooler, quieter and smaller, and of course much faster.

Relevant news on this includes:

Read More

Topics: All Flash Array, 3D Xpoint, SSD, Intel Optane, Data Center

Why Auto-Tiering is Critical

Posted by Jim O'Reilly on Sep 22, 2016 9:39:46 AM

 

Storage in IT comes in multiple flavors. We have super-fast NVDIMMs, fast and slow SSDs, and snail-paced hard drives. Add in the complexities of networking versus local connection, price, and capacity, and figuring out the optimum configuration is no fun. Economics and performance goals guarantee that any enterprise configuration will be a hybrid of several storage types.

Enter auto-tiering. This is a deceptively simple concept. Auto-tiering moves data back and forth between the layers of storage, running in the background. This should keep the hottest data on the most accessible tier of storage, while relegating old, cold data to the most distant layer of storage.

A simplistic approach isn’t quite good enough, unfortunately. Computers think in microseconds, while job queues often have a daily or weekly cycle. Data that the computer thinks is cold may suddenly get hotter than Hades when that job hits the system. Similarly, admins know that certain files are created, stored and never seen again.

This layer of user knowledge is handled by incorporating a policy engine into auto-tiering, allowing an admin to anticipate data needs and promote data through the tiers in advance of need.
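A minimal sketch of that idea is below: a background re-tiering pass that applies admin policy first and a simple heat metric second. The names, thresholds and heat proxy are assumptions for illustration, not how any particular product actually works.

    # Toy auto-tiering decision loop with a policy override, illustrating the
    # concept above. Names, thresholds and the heat metric are assumptions.
    from dataclasses import dataclass

    @dataclass
    class Extent:
        name: str
        access_count: int = 0      # simple "heat" proxy
        tier: str = "capacity"     # "fast" (e.g. SSD) or "capacity" (e.g. HDD)

    # Policy rules an admin can set in advance of need (hypothetical examples):
    PIN_TO_FAST = {"payroll.db"}        # gets hot when the weekly job runs
    PIN_TO_CAPACITY = {"archive.img"}   # created, stored and never seen again

    def retier(extents, promote_at=100, demote_at=10):
        """Apply admin policy first, then heat-based promotion/demotion."""
        for e in extents:
            if e.name in PIN_TO_FAST:
                e.tier = "fast"
            elif e.name in PIN_TO_CAPACITY:
                e.tier = "capacity"
            elif e.access_count >= promote_at:
                e.tier = "fast"        # hottest data on the most accessible tier
            elif e.access_count <= demote_at:
                e.tier = "capacity"    # old, cold data to the most distant tier
        return extents

    for e in retier([Extent("payroll.db", 2), Extent("scratch.tmp", 500), Extent("archive.img", 0)]):
        print(e.name, "->", e.tier)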

Read More

Topics: NVMe, autotiering, big data

Virtual SSDs – All the benefits without the cost

Posted by Adam Zagorski on Sep 1, 2016 11:51:00 AM

Webster defines the word Virtual as “Very close to being something without actually being it.” By that definition, a Virtual SSD is very close to being an SSD without actually being an SSD. From a more practical viewpoint, a virtual SSD provides the benefit of an SSD (performance) without the downsides of an SSD (price and limited capacity).

A virtual SSD blends two or more classes of storage media into a single volume, with the performance characteristics of the fastest storage media and the capacity of the combined media. This flexibility means a virtual SSD can be tailored to meet a wide variety of needs (a conceptual sketch follows the list below). Virtual SSDs are:

  • Cost effective, because you can blend SSDs with cost-effective hard drives
  • Uncompromising, because you can blend high-performing NVMe drives with capacity SAS/SATA SSDs for an all-flash virtual SSD
  • High value, because they offer best-in-class cost/IO as well as cost/GB
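The conceptual sketch below shows what that blending means in practice: one volume exposing the combined capacity, while reads of promoted blocks see fast-media latency. The class name, capacities and latency figures are assumptions for illustration only.

    # Conceptual model of a virtual SSD: one volume whose capacity is the sum of
    # its members and whose reads are served from the fast tier when the data is
    # resident there. All names and numbers are illustrative assumptions.

    class VirtualSSD:
        def __init__(self, fast_dev, capacity_dev):
            self.fast = fast_dev         # e.g. an NVMe SSD
            self.slow = capacity_dev     # e.g. a SATA hard drive or capacity SSD
            self.hot_blocks = set()      # blocks currently promoted to the fast tier

        @property
        def capacity_gb(self):
            # The blended volume exposes the combined capacity of both media.
            return self.fast["size_gb"] + self.slow["size_gb"]

        def read_latency_us(self, block):
            # Promoted blocks see fast-media latency; everything else sees
            # capacity-media latency.
            dev = self.fast if block in self.hot_blocks else self.slow
            return dev["latency_us"]

    vol = VirtualSSD({"size_gb": 512, "latency_us": 90},
                     {"size_gb": 4096, "latency_us": 4000})
    vol.hot_blocks.add(42)
    print(vol.capacity_gb, "GB;", vol.read_latency_us(42), "us hot;",
          vol.read_latency_us(7), "us cold")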
Read More

95% of all storage applications do not need all flash

Posted by Adam Zagorski on Jun 23, 2016 10:35:23 AM

If you could have all-flash performance at 80% of the cost of an all-flash solution, would you be interested? From a financial point of view, it is hard to say no.

 

It is pretty well documented that for most applications, less than 5-10% of the data across a volume is active at any point in time. Given this, it does not make sense to pay all flash prices to store your inactive data. Granted, there are applications that are active across the entire LBA range of a volume, or that cannot tolerate any unpredictable latency. For these applications, by all means, an all-flash solution is the right choice.
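A back-of-the-envelope sketch of that argument is below, using assumed $/GB figures for flash and disk media and a 10% active-data fraction; the numbers are illustrative and cover media cost only.

    # Rough media-cost comparison of all-flash vs. hybrid when ~10% of the data
    # is active. The $/GB figures and the 10% fraction are assumptions for
    # illustration; a complete solution adds controllers, enclosures and
    # software, which narrows the gap.
    FLASH_COST_PER_GB = 0.25   # assumed
    HDD_COST_PER_GB = 0.03     # assumed

    def media_cost(total_gb, hot_fraction):
        all_flash = total_gb * FLASH_COST_PER_GB
        hybrid = (total_gb * hot_fraction * FLASH_COST_PER_GB
                  + total_gb * (1 - hot_fraction) * HDD_COST_PER_GB)
        return all_flash, hybrid

    all_flash, hybrid = media_cost(total_gb=100_000, hot_fraction=0.10)
    print(f"all-flash: ${all_flash:,.0f}   hybrid: ${hybrid:,.0f}   "
          f"hybrid is {100 * hybrid / all_flash:.0f}% of the all-flash media cost")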

 

For the majority of us, a hybrid solution is the optimal choice. Keep in mind that not all hybrid solutions are created equal. For optimal performance, your hybrid solution should have the following characteristics:

  • File pinning, because there are applications for which you want to guarantee full flash performance for all IO
  • Real-time data promotion to performance storage, because the solution should react to dynamic requirements, not yesterday’s activity
  • Full automation, because your time is too valuable to spend constantly managing storage
  • Unrestricted fast-tier capacities, because you need the ability to size your storage to your unique needs
  • Visibility into your storage behavior, because you need information to guarantee your storage is optimized

 

Read More

Delivering Data Faster

Accelerating cloud, enterprise and high performance computing

Enmotus FuzeDrive accelerates your hot data when you need it, stores it on cost effective media when you don't, and does it all automatically so you don't have to.

 

  • Visual performance monitoring
  • Graphical management interface
  • Best in class performance/capacity
