SSD performance -- is a slowdown inevitable?

Started by dhilipkumar, May 08, 2009, 05:43 PM


dhilipkumar

SSD performance -- is a slowdown inevitable?


The recent revelation that Intel Corp.'s consumer-class solid-state disk (SSD) drives suffer from fragmentation that can cause a significant performance degradation raises the question: Do all SSDs slow down with use over time?

The answer is yes -- and every drive manufacturer knows it.

Here's the rub: Drive performance and longevity are inherently connected, meaning drive manufacturers work to strike the best balance between blazing speed and endurance. And since SSDs are fairly new to the market, users are finding that while they offer better speed in some ways than hard disk drives, questions remain about how much of that speed they deliver for the long haul.

One thing you can be sure of: the shiny new SSD you just bought isn't likely to continue performing at the same level it did when you first pulled it out of the box. That's important to know, given the speed with which SSDs have proliferated in the marketplace amid claims that they're faster, use less power and can be more reliable -- especially in laptops -- since there are no moving parts.

They also remain more expensive than their spinning-disk hard drive counterparts.

"An empty [SSD] drive will perform better than one written to. We all know that," said Alvin Cox, co-chairman of the Joint Electron Device Engineering Council's (JEDEC) JC-64.8 subcommittee for SSDs, which expects to publish standards this year for measuring drive endurance. Cox, a senior staff engineer at Seagate, said a quality SSD should last between five and 10 years.

The good news is that after an initial dip in performance, SSDs tend to level off, according to Eden Kim, chairman of the Solid State Storage Initiative's Consumer SSD Market Development Task Force. Even if they do drop in performance over time -- undercutting a manufacturer's claims -- consumer flash drives are still vastly faster than traditional hard drives, because they can perform two to five times the input/output operations (I/Os) per second of a hard drive, he said.

Coming soon, standards and specs

In May 2008, the JEDEC subcommittee, co-chaired by Seagate and Micron, held its first meeting to address the standards development needs of the still-emerging SSD market.

JEDEC is among several groups working to publish either standards or specifications for the drives by year's end. Along with IDEMA (International Disk Drive Equipment and Materials Association) and the SSD Alliance, headquartered in Taipei, Taiwan, the Storage Networking Industry Association's (SNIA) Solid State Storage Initiative plans to publish performance specifications no later than the third quarter for vendors to adopt and eventually use on their SSD packaging.


SNIA's specifications will set up standard benchmarks for measuring new drive performance and degradation over time, depending on the applications being used.

Phil Mills, chairman of the Solid State Storage Initiative, said the performance numbers most manufacturers now use for marketing represent a drive's "burst rate" -- not its steady-state or average read rate. "So there's already a huge difference between out-of-the-box versus constant use," he said. "And then, in both burst mode and steady state, there are huge differences in performance between manufacturers."

Because SSDs have no moving parts, when the drives go bad -- and they do on occasion -- what users are apt to see are failures at the controller or chip level, where firmware bugs can affect I/O operations with a computer's operating system. With such relatively new technology, hiccups are possible.

For example, a Computerworld editor who purchased a 120GB SSD from OCZ Technology last month found that the drive failed after only two weeks of use. He's now using a replacement -- and backing up data often.

computerworld

dhilipkumar

Why does performance drop?

Users typically notice that an SSD runs at the manufacturer's stated peak I/O performance at first, but soon after that it begins to drop. That's because, unlike a hard disk drive, any write operation to an SSD requires not one step but two: an erase followed by the write.

When an SSD is new, the NAND flash memory inside it has been pre-erased; users start with a clean slate, so to speak. But as data is written to the drive, data management algorithms in the controller begin to move that data around the flash memory in an operation known as wear-leveling. Even though wear-leveling is meant to prolong the life of the drive, it can eventually lead to performance issues.
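The wear-leveling idea described above can be sketched in a few lines. The following is a toy illustration only -- not any vendor's actual algorithm -- in which each write is steered to the least-erased block so that wear stays even across the flash:

```python
# Toy wear-leveling sketch: steer each write to the block with the
# fewest erases so wear spreads evenly. Not any vendor's real algorithm.

class ToyFlash:
    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks

    def write(self, data):
        # Pick the least-worn block, then erase before writing --
        # the two-step erase-then-write cycle described above.
        block = min(range(len(self.erase_counts)),
                    key=lambda b: self.erase_counts[b])
        self.erase_counts[block] += 1  # erase step wears the block
        return block                   # the write lands here

flash = ToyFlash(4)
for _ in range(8):
    flash.write(b"x")
print(flash.erase_counts)  # wear is even: [2, 2, 2, 2]
```

Real controllers track erase counts per physical block and also migrate rarely changed ("static") data; this sketch shows only the block-selection idea.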

SSD performance and endurance are related. Generally, the poorer the performance of a drive, the shorter the lifespan. That's because the management overhead of an SSD is related to how many writes and erases to the drive take place. The more write/erase cycles there are, the shorter the drive's lifespan. Consumer-grade multi-level cell (MLC) memory can sustain from 2,000 to 10,000 write cycles. Enterprise-class single-level cell (SLC) memory can last through 10 times the number of write cycles of an MLC-based drive.

A brief refresher on the difference between the two technologies: SLC means one bit of data is written to each flash memory cell, while MLC allows two or more bits to be written to each cell. MLC drives are notably less expensive than SLC drives.

Manufacturers manage how long the flash memory in an SSD will last in several ways, but all involve either adding DRAM cache -- so data writes are buffered to reduce the number of write/erase cycles -- or using special firmware in the drive's processor or controller to combine writes for efficiency.

According to Bob Merritt, an analyst with research firm Convergent Semiconductors, another element of SSD longevity is whether extra memory cells are available and, if so, how many. Some manufacturers over-provision storage, so that when blocks of flash memory wear out, additional blocks become available. For example, a drive may be listed as offering 120GB of memory, but may actually contain 140GB of capacity. The extra 20GB remains unused until it's needed.
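Using that 120GB/140GB example, the spare area works out to roughly 17% of the advertised capacity. A quick sketch of the arithmetic (the helper name is mine, for illustration):

```python
def overprovision_pct(advertised_gb, raw_gb):
    """Spare capacity as a percentage of the advertised size."""
    return (raw_gb - advertised_gb) / advertised_gb * 100

# The article's example: 120GB advertised, 140GB actually on board.
print(round(overprovision_pct(120, 140), 1))  # 16.7
```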

The performance problems involving Intel's consumer-grade X25-M SSD were related to its wear-leveling algorithm.

At their most basic, wear-leveling algorithms distribute data more evenly across flash memory so that no one portion wears out faster than another, which prolongs the life of the whole drive. During wear-leveling operations, the SSD's controller keeps a record of where data lands on the drive as it's relocated from one portion to another.

"To accomplish this, you need to move commonly used data to different locations, which naturally leads to some data fragmentation, depending on the size of the data blocks required," said Jim McGregor, chief technology strategist for research firm In-Stat Inc.


computerworld

dhilipkumar

Intel's X25-M issues

In Intel's case, reviewers at PC Perspective spent months testing X25-M SSDs using multiple PCs and applications to study Intel's advanced wear-leveling and write-combining algorithms. The results showed that write speeds dropped from 80MB/sec. when the drives were new to 30MB/sec., and read speeds dropped from 250MB/sec. to 60MB/sec. for some large block writes.

"We found that a 'used' X25-M will always perform worse than a 'new' one, regardless of any adaptive algorithms that may be at play," PC Perspective wrote.

Intel said the drive's performance problem was related to a bug in the firmware that has since been corrected with an upgrade. PC Perspective re-tested the drive and found the problem had, indeed, been fixed.

Another factor contributing to SSD performance and endurance degradation is something native to all NAND flash memory: write amplification. With NAND flash memory, data is laid down in blocks, just as it is on a hard disk drive. But unlike a traditional spinning disk, block sizes on an SSD are fixed; even a small 4KB write can take up a 512KB block of space, depending on the NAND flash memory being used. When any portion of the data on the drive is changed, a block must first be marked for deletion in preparation for accommodating the new data.

When you compare the size of NAND blocks with the typical write request issued by Windows, there's a mismatch, because most writes are small. (Mac OS X is less affected by this issue because its write requests are smaller.)

The amount of space required for each new write can vary, but according to Knut Grimsrud, a director of storage architecture in Intel's research and development laboratory, write amplification on many consumer SSDs is anywhere from 15 to 20. That means for every 1MB of data written to the drive, 15MB to 20MB of space is actually needed.
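In these terms, write amplification is simply the ratio of bytes physically written to flash to bytes the host asked to write. A minimal sketch using the article's figures (the function name is an assumption for illustration):

```python
def write_amplification(nand_mb_written, host_mb_written):
    """Ratio of physical NAND writes to the writes the host requested."""
    return nand_mb_written / host_mb_written

# 1MB from the host costing 15-20MB of NAND writes, per the article:
print(write_amplification(15, 1))  # 15.0
print(write_amplification(20, 1))  # 20.0
```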

Read-write algorithms matter
For example, a read-modify-write algorithm in an SSD controller will take a block about to be written to, retrieve any data already in it, mark the block for deletion, redistribute the old data, then lay down the new data in the old block.

"So you had to write that old data back again," said Grimsrud, whose group developed some of the core technology for Intel's SSDs. "None of that is progress in terms of what the user was trying to do with the new data. It was all just overhead. That's the crux of the problem with NAND [memory] management -- all the granularity involved in managing it."

"It's a general issue of all NAND-based SSDs that these are issues that have to be grappled with, and it's just a matter of how well manufacturers grapple with it," Grimsrud added.

Because of the limited number of writes and erases an SSD can sustain, manufacturers try to reduce write amplification and overhead. Some use algorithms that combine writes to use NAND flash memory space more efficiently; others use cache to store writes in order to lay them down more efficiently. But details about the techniques used are hard to come by, as each manufacturer considers that technology proprietary.
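The read-modify-write sequence Grimsrud describes can be sketched as follows. This is a simplified, block-granularity illustration, not Intel's firmware:

```python
# Simplified read-modify-write at erase-block granularity: to change a
# few bytes, the controller must rewrite an entire block's worth of data.

def read_modify_write(block, offset, new_bytes):
    """Return the new block contents and the bytes physically rewritten."""
    buf = bytearray(block)                        # 1. read the whole block
    buf[offset:offset + len(new_bytes)] = new_bytes  # 2. modify in RAM
    # 3. erase the old block, then 4. write the whole block back.
    physically_written = len(buf)                 # whole block, not just the change
    return bytes(buf), physically_written

block = b"AAAAAAAA"  # a tiny 8-byte "erase block" for illustration
new_block, written = read_modify_write(block, 2, b"ZZ")
print(new_block, written)  # b'AAZZAAAA' 8 -- 2 bytes changed, 8 rewritten
```

The gap between the 2 bytes the "user" changed and the 8 bytes physically rewritten is exactly the overhead Grimsrud calls the crux of NAND management.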

Intel has addressed write amplification through controller firmware that combines writes to reduce the amount of capacity needed to store data. Intel states that its write amplification is a low 1.1, meaning for every 1MB of data written to the SSD, 1.1MB of capacity is actually used. Another manufacturer, Samsung, pegs the "Wear Acceleration Index" for its SSDs at 1.03, a 3% average overhead for writes.

Many SSD manufacturers also cite mean time between failures (MTBF) in their marketing material, a metric inherited from hard disk drives that may or may not be accurate; a drive's actual MTBF depends on how it is used. Intel's X25-M's MTBF is 1.2 million hours, about the same as the average consumer hard disk drive. To put it another way, Intel predicts its X25-M will last for five years -- assuming 100GB or more of write-erase operations per day.
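Those amplification factors translate directly into capacity consumed: multiply host writes by the factor. A quick check of the article's numbers (the helper name is mine):

```python
def nand_consumed_mb(host_mb, amplification):
    """NAND capacity actually used for a given amount of host data."""
    return host_mb * amplification

print(nand_consumed_mb(1, 1.1))   # 1.1  (Intel's stated X25-M figure)
print(nand_consumed_mb(1, 1.03))  # 1.03 (Samsung's stated figure)
```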

Much depends on whether an SSD uses MLC or SLC technology. The SLC version of Intel's X25-E 64GB SSD can handle up to 2 petabytes of random writes. By comparison, the MLC-based X25-M can handle only 15TB of random writes over its lifetime. Intel said users should think about it as analogous to a car.


computerworld

dhilipkumar

Bugs can cause slowdowns, too


Though it's highly regarded, Intel's X25-M SSD had a firmware bug that adjusted the priorities of random and sequential writes, leading to a major fragmentation problem that dropped throughput dramatically. The issue was originally uncovered by PC Perspective after two months of testing. Those tests showed that write speeds dropped from 80MB/sec. to 30MB/sec. over time, and read speeds dropped from 250MB/sec. to 60MB/sec. for some large block writes.

"I suppose if you ran the same tests across many SSDs, most of them have a similar problem...," said Pat Wilkinson, vice president of marketing and business development at SSD vendor STEC Inc.

Algorithms used for wear-leveling are complex and still in their infancy, so while they are likely to improve over time, drive makers cannot eliminate fragmentation altogether, McGregor said.

Although Intel acknowledged that all of its SSDs will suffer from reduced performance because of significant fragmentation, the type of write levels needed to reproduce PC Perspective's results aren't likely for everyday users, whether they're running Windows or Apple's Mac OS X. Even so, Intel still released the firmware upgrade to slow fragmentation.

"The 8820 firmware now services both random and sequential write to ensure that fragmentation does not put the drive in a lower-than-expected performance state," Intel said.

Intel isn't alone when it comes to performance issues. Computerworld recently tested a consumer-grade 120GB SSD from OCZ Inc. Initial tests using the ATTO Disk Benchmark tool showed excellent read and write speeds of 230MB/sec. and 153MB/sec., respectively. But a second test showed that the read/write speeds had dropped to about 178MB/sec. and 80MB/sec., respectively. OCZ acknowledges that its Apex SSDs use a controller made by Taiwan-based JMicron Technology Corp., which is known to have random-write performance issues that can play havoc when a user is multitasking on a computer. An OCZ spokeswoman suggested customers visit its "very popular" community forum, which offers "plenty of tweaks and workarounds for users to optimize their drives."

Synthetic workloads like those produced by benchmarking software are typically not a real-world test of a drive's performance over time, because they do many small writes, which can overtax wear-leveling and data-combining algorithms. While SSDs can slow over time, Intel's problem was "an edge case," according to Gene Ruth, a senior analyst with the Burton Group.


Standard benchmarking metrics
Later in the year, SSD fans should get more -- and better -- information about the various drives and SSD technology on the market. Part of the problem so far in evaluating the drives' longevity and performance has been the lack of standards.

The JEDEC plan to publish standards by the end of this year involves two methods of determining SSD endurance. The first is targeted at original equipment manufacturers, such as Dell and Lenovo, who will be able to determine the number of erases per block an SSD can sustain. The standard will include predictive life modeling based on various workload classes to confirm, or refute, the stated life expectancy.

A second standard -- targeted at SSD manufacturers -- will be an endurance rating based on an SSD's average performance after use with wear-leveling and write-amplification algorithms. The standard will not be based on pre-erased drives, Seagate's Cox said.

"[Drive manufacturers] know the characteristics of their components," he said. "From those numbers, they can determine how many terabytes can be written to that drive or what the drive is capable of [sustaining]. That will be a standardized number you'd see on a manufacturer's box."

Only then are SSD users likely to get answers to the questions about how well their drives will perform over their lives and how long they will last.

computerworld