Datacenter Challenge - How much flash do you need?
Disparate workloads create fluctuating demands on datacenter resources. Wide-ranging applications can strain those resources during peak periods, while the same resources sit idle during slow periods.
Flash storage is the accepted standard for enterprise applications. NVMe flash is becoming the flash of choice for performance reasons, but its cost prohibits using it for cold or infrequently accessed data. This leaves datacenter managers with a quandary: how much NVMe flash do they actually need?
If the flash is underprovisioned, costs are contained, but applications will suffer performance problems during peak periods. The alternative is to overprovision, but that solution has its downsides as well: cost of ownership increases, and the most expensive resources sit idle for extended periods. In scale-out datacenters with large numbers of servers, costs can spiral out of control very quickly.
Dynamic Flash Provisioning
Enmotus has developed a software-based technology called Dynamic Flash Provisioning, which accurately measures unpredictable workloads in data center computing environments and dynamically matches them to available storage or memory resources in real time, according to user-configurable policies and rules.
This technology is based on Storage Automation and Analytics (SAA) technology, which enables common pools of performance storage to be deployed intelligently and automatically, without human intervention. The solution has been shown to dramatically improve response times while lowering the operating and capital costs associated with deploying flash and memory-class technologies in modern enterprise data centers.
How it works
Dynamic Flash Provisioning software performs three functions: it Monitors storage device activity and health, Measures active data usage, and Accelerates applications by allocating performance flash to the applications that need it. It is, in effect, a real-time performance provisioning solution.
The device analytics engine is based on the FuzeDrive Virtual SSD technology. The monitor function analyzes device activity and provides feedback on IOPS, throughput, latency, data written, and network bandwidth, as well as Enmotus’ custom region metrics. The region metrics are key to determining the size of the active data set and the location of activity within a volume.
The performance metrics are a critical component in determining the actions required to identify and accelerate high priority volumes.
The key role of the Measure function is to determine the actual working set of each application by monitoring the local servers for performance and capacity usage. Prior to measuring, the selected user volume is converted into a Single Media Tier (SMT). In essence, this step transforms the volume’s underlying physical or virtual machine disk by adding a layer of advanced statistics gathering capable of analyzing both capacity usage by location on the disk and its variance over time. Once this transformation is complete, Enmotus’ technology continuously monitors the volume, identifying the size of the active working set and its performance characteristics, and communicating these metrics to a local storage analytics engine.
These metrics are used to determine the actions to be taken to provision performance storage media. In simple terms, knowing the exact working set size as well as the performance requirements, the analytics engine can easily assess how much performance storage needs to be allocated to precise regions of the user volume.
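A minimal sketch of such a decision, assuming an illustrative policy in which provisioned flash must cover both the measured working set (plus headroom) and the observed IOPS demand. The function name, parameters, and per-GiB IOPS figure are assumptions, not the actual analytics engine:

```python
def flash_to_provision(working_set_bytes, iops_demand,
                       headroom=0.2, flash_iops_per_gib=10_000):
    """Illustrative provisioning policy (hypothetical, not Enmotus').

    Provision enough flash bytes to cover the measured working set
    plus headroom, and enough capacity that the flash tier can also
    sustain the observed IOPS demand.
    """
    capacity_need = working_set_bytes * (1 + headroom)
    iops_need = (iops_demand / flash_iops_per_gib) * 1024 ** 3
    return max(capacity_need, iops_need)

# Example: 3 GiB working set, 50k IOPS demand -> IOPS dominates.
print(round(flash_to_provision(3 * 1024 ** 3, 50_000) / 1024 ** 3, 1), "GiB")  # 5.0 GiB
```

Taking the maximum of the two needs captures the idea that a volume may be capacity-bound or performance-bound, and the allocation must satisfy whichever constraint is tighter.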
Utilizing the metrics from the measurement function, flash is “fuzed” to the existing volume. The flash volume can be internal to the server or shared with the server over a SAN or NVMe over Fabrics. This can be fully automated, eliminating the need for admin intervention. The storage is virtualized with a new flash device, which provides just the right amount of high-speed storage to satisfy the actual working set. It is important to note that the “fuzing” of the flash to the existing volume is done non-disruptively, while the volume remains online.
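In the spirit of this description, the following sketch models region-level remapping: hot regions resolve to a flash device while the rest of the volume continues to resolve to the original device, and promotions can be added while the volume stays online. The class and method names are illustrative assumptions, not the FuzeDrive implementation:

```python
# Hypothetical region-mapping sketch of "fuzing" flash to a volume.
class FuzedVolume:
    def __init__(self, base_dev, flash_dev, region_blocks):
        self.base_dev = base_dev
        self.flash_dev = flash_dev
        self.region_blocks = region_blocks
        self.promoted = {}  # region index -> starting offset on flash

    def promote(self, region_idx, flash_offset):
        """Promote a hot region to flash; future I/O to the region is
        redirected without taking the volume offline."""
        self.promoted[region_idx] = flash_offset

    def resolve(self, lba):
        """Translate a volume LBA to (device, device_lba)."""
        region = lba // self.region_blocks
        if region in self.promoted:
            return (self.flash_dev, self.promoted[region] + lba % self.region_blocks)
        return (self.base_dev, lba)

vol = FuzedVolume("hdd0", "nvme0", region_blocks=262144)
vol.promote(0, flash_offset=0)  # region 0 measured as hot
print(vol.resolve(100))         # served from flash: ('nvme0', 100)
print(vol.resolve(300_000))     # still on the base device: ('hdd0', 300000)
```

Because only the mapping table changes on promotion, in-flight I/O to unaffected regions is untouched, which is what makes the operation non-disruptive.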
A key value of knowing the size of the active data set is that it increases the utilization efficiency of high-performance flash, which, spread over thousands of servers in a web-scale data center, yields significant TCO benefits. The benefits are realized not only through accurate provisioning of flash, but also through lower server count, as each server becomes more efficient. Additional savings come from minimized power consumption, as fewer servers require less power and cooling. Finally, metrics that predict drive failures simplify management and procurement of the physical resources as well.