Enmotus Blog

The Smart 2 In One SSD

Posted by Adam Zagorski on Feb 3, 2021 2:19:23 PM

Marc Sauter, from the German IT periodical Golem.de, recently reviewed the Enmotus FuzeDrive SSD. Below is Marc’s conclusion from the testing; a link to his original review can also be found below.

The FuzeDrive implements an old idea more consistently than previous solutions: while most QLC-based SSDs use a dynamic SLC buffer, and pure caching models such as Intel's Optane Memory H10 even mix two storage types, Enmotus relies on tiered storage plus artificial intelligence.

This ensures that the fast SLC area is always available to write or read data at high speed. Because Enmotus blocks a quarter of the NAND capacity for this, the storage space is reduced from 2 TB to 1.6 TB. However, 128 GB of SLC memory is always available, whereas other TLC/QLC SSDs with a purely dynamic write cache shrink that cache as the drive fills up.
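As a quick back-of-the-envelope check on those numbers, here is a purely illustrative sketch. It assumes the usual 4:1 bit-density ratio between QLC and SLC cells and a nominal 2,048 GB of raw NAND; the variable names are ours, not Enmotus's:

```python
# Rough arithmetic behind the capacity split described above (illustrative only;
# assumes the commonly cited 4:1 bit-density ratio between QLC and SLC cells).

RAW_QLC_GB = 2048            # a nominal 2 TB of QLC NAND
STATIC_SLC_FRACTION = 0.25   # roughly a quarter of the NAND reserved as a static SLC tier

qlc_reserved_gb = RAW_QLC_GB * STATIC_SLC_FRACTION   # 512 GB of QLC cells set aside...
slc_tier_gb = qlc_reserved_gb / 4                    # ...hold ~128 GB when run as SLC (1 bit/cell vs. 4)
qlc_tier_gb = RAW_QLC_GB - qlc_reserved_gb           # ~1,536 GB left as ordinary QLC

print(f"SLC tier ~{slc_tier_gb:.0f} GB + QLC tier ~{qlc_tier_gb:.0f} GB "
      f"= ~{slc_tier_gb + qlc_tier_gb:.0f} GB (about 1.6 TB) of user capacity")
```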

The FuzeDrive SSD is now available for purchase in the US from Amazon as well as NewEgg.

Read More

Topics: NVMe, SSD, Intel Optane, NVDIMM, artificial intelligence, pc gaming

Enmotus And PCSpecialist Accelerate High Performance Computer Gaming

Posted by Adam Zagorski on Jan 22, 2021 10:00:00 AM

FuzeDrive, The World’s Smartest SSD, Now Available With PCSpecialist

Read More

Topics: NVMe, SSD, artificial intelligence, pc gaming

An SSD Renaissance

Posted by Andy Mills on Jan 22, 2021 9:16:17 AM

Revitalizing old ideas seems to be the new marketing trend of the day. Our family are the proud owners of a BMW Mini, a reminder of the first Mini we acquired second-hand in Wales in 1979. Of course, when we get in the new Mini, it’s nothing like the original. No more squeezing past the exceptionally large steering wheel into the driver’s seat, feeling like you are sitting on the floor, and no more rocking back and forth to help the car get up those steep Welsh hills. And what a difference a 1.6 liter with turbo makes versus our original 850 cc engine! It may look similar on the outside, but on the inside and in how it handles, it's a very different story.

Read More

Topics: NVMe, SSD, artificial intelligence, steam gaming

PC Capacity Anxiety - Where Will I Store All of My Data?

Posted by Adam Zagorski on Nov 12, 2019 9:56:22 AM

Computer games are now approaching 175 GB in size. Add some videos to your library, a few high-resolution photos, and suddenly you realize you are out of disk space. Has this happened to you?

Traditionally, if you need performance, an SSD is the only choice, but SSDs are expensive and their capacities have been limited. Hard drives provide the capacity but are too slow to give you optimal performance, especially if you are a gamer. The archaic solution is to buy both and manually move data back and forth to keep your hot applications on your fast SSD. Think about it: this is the 21st century. Cars are already driving themselves, yet you are stuck manually moving data between two types of drives. Really? There has to be a better solution.

Read More

Topics: NVMe, SSD, Intel Optane, virtual reality, machine learning, pc gaming, twitchtv, steam gaming

Storage for Artificial Intelligence

Posted by Jim O'Reilly on Dec 4, 2017 1:17:42 PM

It’s not often I can write about two dissimilar views of the same technology, but recent moves in the industry on the A.I. front mean that not only does storage need to align with A.I.'s needs far better than any traditional storage approach does, but the rise of software-defined storage concepts also makes A.I. an inevitable choice for solving advanced problems. The result: this article on “Storage for A.I.” and a second part of the story on “A.I. for Storage”.

The issue is delivery. A.I. is very data hungry. The more data A.I. sees, the better its results. Traditional storage, the world of RAID and SAN, iSCSI and arrays of drives, is a world of bottlenecks, queues and latencies. There’s the much-layered file stack in the requesting server, protocol latency, and then the ultimate choke point, the array controller.

That controller can talk to 64 drives or more, via SATA or SAS, but typically only has output equivalent to maybe 8 SATA ports. This didn’t matter much with HDDs, but SSDs can deliver data much faster than spinning rust and so we have a massive choke point just in reducing streams to the array output ports’ capability.
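To put rough numbers on that choke point, here is an illustrative sketch. All figures are assumptions based on typical SATA-class throughput, not measurements of any particular array:

```python
# Back-of-the-envelope view of the array-controller choke point. All figures are
# illustrative assumptions: ~550 MB/s per SATA SSD, ~150 MB/s per HDD,
# ~600 MB/s per SATA-equivalent controller output port.

DRIVES = 64
SSD_MBPS = 550
HDD_MBPS = 150
OUTPUT_PORTS = 8
PORT_MBPS = 600

ssd_supply = DRIVES * SSD_MBPS                 # ~35,200 MB/s the SSDs could deliver
hdd_supply = DRIVES * HDD_MBPS                 # ~9,600 MB/s for spinning disks
controller_output = OUTPUT_PORTS * PORT_MBPS   # ~4,800 MB/s the controller can pass on

print(f"SSDs can supply ~{ssd_supply} MB/s but the controller passes only "
      f"~{controller_output} MB/s (~{ssd_supply / controller_output:.0f}x oversubscribed); "
      f"with HDDs the mismatch was only ~{hdd_supply / controller_output:.1f}x")
```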

There’s more! That controller is in the data path and data is queued up in its memory, adding latency. Then we need to look at the file stack latency. That stack is a much-patched solution with layer upon layer of added functionality and virtualization. In fact, the “address” of a block of data is transformed no less than 7 times before it reaches the actual bits and bytes on the drive. This was very necessary for the array world, but solid state drives are fundamentally different and simplicity is a possibility.

Read More

Topics: NVMe, SSD, NVDIMM, artificial intelligence, machine learning

Information Storage – A truly novel concept

Posted by Jim O'Reilly on Oct 17, 2017 9:37:29 AM

When you see “storage” mentioned it’s often “data storage”. The implication is that there is nothing in the “data” that is informational, which even at a verbatim read is clearly no longer true. Open the storage up, of course, and the content is a vast source of information, both mined and unmined, but our worldview of storage has been to treat objects as essentially dumb, inanimate things.

This 1970s view of storage’s mission is beginning to change. The dumb storage appliance is turning into smart software-defined storage services running in virtual clusters or clouds, with direct access to storage drives. As this evolution to SDS has picked up momentum, pioneers in the industry are taking a step beyond, looking at ways to extract useful information from what is stored and convert it into new ways to manage the information lifecycle, protect integrity and security, and provide information-centric guidance to assist processing and the other activities around the object.

Read More

Topics: big data, SSD, Data Center

How To Prevent Over-Provisioning - Dynamically Match Workloads With Storage Resources

Posted by Adam Zagorski on Jun 25, 2017 10:05:00 AM

The Greek philosopher Heraclitus said, “The only thing that is constant is change.” This adage rings true today in most modern datacenters. Workload demands tend to be unpredictable, which creates constant change. At any given point in time, an application can have very few demands placed on it, and at a moment’s notice its workload can spike. Satisfying these fluctuations in demand is a serious challenge for datacenters. Solving this challenge translates into significant cost savings, amounting to millions of dollars, for data centers.

Traditionally, data centers have thrown more hardware at this problem. Ultimately, they over-provision to make sure they have enough performance to satisfy peak periods of demand. This includes scaling out with more and more servers filled with hard drives, quite often short-stroking the hard drives to minimize latency. While hard drive costs are reasonable, this massive scale-out increases power, cooling and management costs. The figure below shows an example of the disparity between capacity requirements and performance requirements. Achieving capacity goals with HDDs is quite easy, but given that an individual high-performance HDD can only deliver about 200 random IOPS, it takes quite a few HDDs to meet the performance goals of modern database applications.
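To make that disparity concrete, here is a rough, hypothetical calculation. The capacity and IOPS targets below are invented examples, and the drive size is an assumption; only the ~200 random IOPS per HDD figure comes from the paragraph above:

```python
import math

# Illustrative over-provisioning arithmetic: drives needed for capacity versus
# drives needed for performance. Targets and drive size are hypothetical;
# ~200 random IOPS per high-performance HDD is the figure cited above.

HDD_CAPACITY_TB = 8        # assumed size of a typical enterprise HDD
HDD_IOPS = 200             # random IOPS per high-performance HDD

target_capacity_tb = 100   # hypothetical dataset size
target_iops = 200_000      # hypothetical database workload

drives_for_capacity = math.ceil(target_capacity_tb / HDD_CAPACITY_TB)  # 13 drives
drives_for_iops = math.ceil(target_iops / HDD_IOPS)                    # 1,000 drives

print(f"{drives_for_capacity} HDDs meet the capacity goal, but "
      f"{drives_for_iops} HDDs are needed to meet the IOPS goal "
      f"({drives_for_iops / drives_for_capacity:.0f}x more hardware, power and cooling)")
```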

Today, storage companies are pushing all-flash arrays as the solution to this challenge. This addresses both the performance issue and the power and cooling costs, but now massive amounts of non-active (cold) data are stored on your most expensive storage media. In addition, not all applications need flash performance. Adding all-flash is just another form of over-provisioning, with a significantly higher cost penalty.

Read More

Topics: NVMe, autotiering, big data, All Flash Array, SSD, Data Center, NVMe over Fibre, data analytics

Storage Visions 2017

Posted by Jim O'Reilly on Jan 18, 2017 2:22:42 PM

Here it is. A new year opens up in front of us. This one is going to be lively and storage is no exception. In fact, 2017 should see some real fireworks as we break away from old approaches and move on to some new technologies and software.

Read More

Topics: NVMe, SSD, Data Center, data analytics, NVMe over Fibre

Flash Tiering: The Future of Hyper-converged Infrastructure

Posted by Adam Zagorski on Jan 12, 2017 1:04:00 PM


Read More

Topics: NVMe, big data, 3D Xpoint, SSD, Intel Optane, Data Center, hyperconverged

The Art of “Storage-as-a-Service”

Posted by Jim O'Reilly on Jan 9, 2017 2:24:50 PM


Most enterprise datacenters are today considering the hybrid cloud model for their future deployments. Agile and flexible, the model is expected to yield higher efficiencies than traditional setups, while allowing a datacenter to be sized to average, as opposed to peak, workloads.

In reality, achieving portability of apps between clouds and reacting rapidly to workload increases both run up against a data placement problem. The agility idea fails when data is in the wrong cloud when a burst is needed. This is exacerbated by the new containers approach, which can start up a new instance in a few milliseconds.

Data placement is in fact the most critical issue in hybrid cloud deployment. Pre-emptively placing data in the right cloud, before firing up the instances that use it, is the only way to assure those expected efficiency gains.

A number of approaches have been tried, with varying success, but none are truly easy to implement and all require heavy manual intervention. Let’s look at some of these approaches:

  1. Sharding the dataset – By identifying the hottest segment of the dataset (e.g., names beginning with “S”), this approach places a snapshot of those files in the public cloud and periodically updates it. When a cloudburst is needed, locks for any files being changed are passed over to the public cloud and the in-house versions of those files are blocked from updating. The public cloud files are then updated and the locks cleared (a minimal sketch of this hand-off appears below).
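Purely as an illustration of that flow, here is a minimal sketch in Python. CloudStore and the helper functions are hypothetical placeholders, not part of any real cloud API or of the approach's actual implementation:

```python
from dataclasses import dataclass, field

# Minimal, hypothetical sketch of the shard-snapshot-and-lock hand-off described
# in item 1. CloudStore and these helpers are placeholders, not any real API.

@dataclass
class CloudStore:
    name: str
    files: dict = field(default_factory=dict)   # path -> contents
    locked: set = field(default_factory=set)    # paths blocked from local updates

def hot_shard(store: CloudStore, is_hot) -> dict:
    """Identify the hottest segment of the dataset (e.g. names beginning with 'S')."""
    return {path: data for path, data in store.files.items() if is_hot(path)}

def sync_snapshot(private: CloudStore, public: CloudStore, is_hot) -> None:
    """Periodically copy the hot shard into the public cloud."""
    public.files.update(hot_shard(private, is_hot))

def begin_cloudburst(private: CloudStore, public: CloudStore, changing: set) -> None:
    """Pass locks for in-flight files to the public cloud; block in-house updates."""
    private.locked |= changing
    sync_snapshot(private, public, lambda path: path in changing)

def end_cloudburst(private: CloudStore, public: CloudStore, changed: set) -> None:
    """Copy the updated public-cloud files back in-house and clear the locks."""
    for path in changed:
        private.files[path] = public.files[path]
    private.locked -= changed
```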
Read More

Topics: NVMe, autotiering, big data, SSD, hyperconverged