Enmotus Blog

The Smart 2 In One SSD

Posted by Adam Zagorski on Feb 3, 2021 2:19:23 PM

Marc Sauter, from the German IT periodical Golem.de, recently reviewed the Enmotus FuzeDrive SSD. Below is Marc’s conclusion from his testing; a link to the original review can also be found below.

The FuzeDrive implements an old idea more consistently than previous solutions: while most QLC-based SSDs use a dynamic SLC buffer, and pure caching models such as Intel's Optane Memory H10 even mix two storage types, Enmotus relies on tiered storage plus artificial intelligence.

This ensures that the fast SLC area is always available to write or read data at high speed. Because Enmotus reserves a quarter of the NAND capacity for this, usable storage drops from 2 TB to 1.6 TB. However, 128 GB of SLC memory is consistently available, whereas other TLC/QLC SSDs with a purely dynamic write cache shrink that cache as the drive fills up.
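
To see how those numbers fit together, here is a minimal sketch of the capacity arithmetic, assuming (the review does not spell this out) that the static tier is QLC flash operated in SLC mode, storing one bit per cell instead of four:

# Rough capacity arithmetic for a static SLC tier carved out of QLC NAND.
# Assumption, not from the review: SLC mode stores 1 bit per cell versus 4 for QLC.

RAW_QLC_GB = 2000        # nominal 2 TB of raw QLC NAND
RESERVED_QLC_GB = 512    # roughly a quarter of the raw NAND set aside for the fast tier
QLC_BITS, SLC_BITS = 4, 1

slc_tier_gb = RESERVED_QLC_GB * SLC_BITS // QLC_BITS   # 512 GB of QLC becomes 128 GB of SLC
qlc_tier_gb = RAW_QLC_GB - RESERVED_QLC_GB             # 1488 GB stays in QLC mode
usable_gb = slc_tier_gb + qlc_tier_gb                  # about 1.6 TB advertised capacity

print(f"SLC tier: {slc_tier_gb} GB, QLC tier: {qlc_tier_gb} GB, usable: {usable_gb} GB")

The point of the static split is that the 128 GB figure never shrinks, whereas a dynamic SLC cache is carved out of whatever QLC capacity happens to be free at the time.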

The FuzeDrive SSD is now available for purchase in the US from Amazon as well as NewEgg.

Read More

Topics: NVMe, SSD, Intel Optane, NVDIMM, artificial intelligence, pc gaming

Enmotus And PCSpecialist Accelerate High Performance Computer Gaming

Posted by Adam Zagorski on Jan 22, 2021 10:00:00 AM

FuzeDrive, The World’s Smartest SSD, Now Available With PCSpecialist

Read More

Topics: NVMe, SSD, artificial intelligence, pc gaming

An SSD Renaissance

Posted by Andy Mills on Jan 22, 2021 9:16:17 AM

Revitalizing old ideas seems to be the new marketing trend of the day. Our family are the proud owners of a BMW Mini, a reminder of the first Mini we acquired second hand in Wales in 1979. Of course, when we get in the new Mini, it’s nothing like the original. No more squeezing past the exceptionally large steering wheel into the driver’s seat, feeling like you are sitting on the floor, and no more rocking back and forth helping the car get up those steep Welsh hills. And what a difference a 1.6 liter with turbo makes versus our original 850 cc engine! It may look similar on the outside, but on the inside, and in how it handles, it's a very different story.

Read More

Topics: NVMe, SSD, artificial intelligence, steam gaming

A.I. For Storage

Posted by Jim O'Reilly on Dec 18, 2017 2:12:46 PM

As we saw in the previous part of this two-part series, “Storage for A.I.”, the performance demands of A.I. will combine with technical advances in non-volatile memory to dramatically increase performance and scale within the storage pool, and also to move data addressing to a much finer granularity: the byte level rather than the 4 KB block. All of this creates a manageability challenge that must be resolved if we are to attain the potential of A.I. systems (and next-generation computing in general).
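
As a concrete illustration of that granularity shift (a sketch under assumed numbers, not anything from the article): with block storage, touching a handful of bytes still moves a whole 4 KB block, while byte-addressable non-volatile memory moves only the bytes requested.

# Illustrative cost of block-granular versus byte-granular access.
# Assumption: a block device transfers whole 4 KiB blocks; byte-addressable
# non-volatile memory transfers only the bytes actually touched.

BLOCK_SIZE = 4096

def bytes_moved_block_device(offset: int, length: int) -> int:
    # Smallest transfer unit is a block, so round the request out to block boundaries.
    first_block = offset // BLOCK_SIZE
    last_block = (offset + length - 1) // BLOCK_SIZE
    return (last_block - first_block + 1) * BLOCK_SIZE

def bytes_moved_byte_addressable(offset: int, length: int) -> int:
    # Byte granularity: move exactly what was asked for.
    return length

# Updating a 64-byte record:
print(bytes_moved_block_device(10_000, 64))      # 4096 bytes moved
print(bytes_moved_byte_addressable(10_000, 64))  # 64 bytes moved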

Simply put, storage is getting complex and will become ever more so as we expand the size and use of Big Data. Rapid and agile monetization of data will be the mantra of the next decade. Consequently, the IT industry is starting to look for ways to migrate from today’s essentially manual storage management paradigms to emulate and exceed the automation of control demonstrated in public clouds.

Read More

Topics: NVMe, Data Center, NVMe over Fibre, enmotus, data analytics, NVDIMM, artificial intelligence

Storage for Artificial Intelligence

Posted by Jim O'Reilly on Dec 4, 2017 1:17:42 PM

It’s not often I can write about two dissimilar views of the same technology, but recent moves in the industry on the A.I. front mean not only that storage needs to align with A.I.'s demands far better than any traditional storage approach does, but also that the rise of software-defined storage concepts makes A.I. an inevitable choice for solving advanced problems. The result: this article on “Storage for A.I.” and the second part of the story, “A.I. for Storage”.

The issue is delivery. A.I. is very data hungry. The more data A.I. sees, the better its results. Traditional storage, the world of RAID and SAN, iSCSI and arrays of drives, is a world of bottlenecks, queues and latencies. There’s the much-layered file stack in the requesting server, protocol latency, and then the ultimate choke point, the array controller.

That controller can talk to 64 drives or more via SATA or SAS, but typically has output equivalent to only about eight SATA ports. This didn’t matter much with HDDs, but SSDs can deliver data much faster than spinning rust, so merely funneling all those streams down to the capability of the array’s output ports creates a massive choke point.
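
A back-of-the-envelope calculation shows the scale of the problem; the per-port figure below is an assumption (roughly SATA III at about 550 MB/s usable), not a number from the article:

# Back-of-the-envelope oversubscription at the array controller.
# Assumption: ~550 MB/s of usable throughput per SATA III port, and SSDs
# that can each sustain roughly a full port's worth of bandwidth.

PORT_MBPS = 550
DRIVES = 64                  # drives behind the controller
OUTPUT_PORT_EQUIVALENT = 8   # controller output roughly equal to 8 SATA ports

drive_side_mbps = DRIVES * PORT_MBPS                  # ~35,200 MB/s the SSDs could deliver
host_side_mbps = OUTPUT_PORT_EQUIVALENT * PORT_MBPS   # ~4,400 MB/s the controller can pass on

print(f"Oversubscription: {drive_side_mbps // host_side_mbps}:1")   # 8:1 choke point

With HDDs sustaining closer to 100 to 200 MB/s each, the same controller was rarely the limit, which is why the choke point only really appears once SSDs are behind it.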

There’s more! That controller sits in the data path and data is queued up in its memory, adding latency. Then we need to look at the file stack latency. That stack is a much-patched solution with layer upon layer of added functionality and virtualization. In fact, the “address” of a block of data is transformed no fewer than seven times before it reaches the actual bits and bytes on the drive. All of this was necessary in the array world, but solid-state drives are fundamentally different, and simplicity becomes possible.
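
The article does not enumerate the seven transformations, but the general shape of the problem can be sketched as a chain of address mappings; the layer names and the simple arithmetic below are placeholders for illustration, not the actual stack:

# Purely illustrative chain of address translations in a layered block stack.
# Layer names and arithmetic are placeholders, not the article's seven steps.

def file_offset_to_extent(byte_offset):
    # Filesystem: file byte offset -> offset within an allocated extent
    return ("extent", byte_offset)

def extent_to_volume_lba(addr):
    # Volume manager: extent offset -> logical volume block address
    return ("volume_lba", addr[1] // 4096)

def volume_lba_to_stripe(addr):
    # RAID layer: volume LBA -> stripe member and member-relative LBA
    return ("stripe", addr[1] % 8, addr[1] // 8)

def stripe_to_drive_lba(addr):
    # Array backend: stripe member -> physical drive LBA
    return ("drive_lba", addr[1], addr[2])

LAYERS = [file_offset_to_extent, extent_to_volume_lba,
          volume_lba_to_stripe, stripe_to_drive_lba]

address = 123_456_789            # a byte offset within a file
for translate in LAYERS:
    address = translate(address) # every hop adds bookkeeping and latency
print(address)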

Read More

Topics: NVMe, SSD, NVDIMM, artificial intelligence, machine learning