Data Storage News

Started by VelMurugan, May 07, 2009, 03:33 PM


VelMurugan

Windows 7 will boost SSDs, says Microsoft

Microsoft has given a strong, though qualified, endorsement for running Windows 7 on PCs equipped with solid-state disk (SSD) drives, saying it has tuned the upcoming operating system to run faster on the still-emerging storage technology.

At the same time, Microsoft admitted that it has not solved two lingering problems that can cause SSDs - mostly lower-end, older ones - to perform sluggishly or even worse than conventional hard drives.

Out of the box, Windows 7 should install and "operate efficiently on SSDs without requiring any customer intervention," Microsoft distinguished engineer Michael Fortin wrote in a posting at the Engineering Windows 7 blog.

Users of Windows 7 - the Release Candidate 1 became available for public download today - will experience the full benefit of SSDs in areas where the storage technology shines.

Small chunks of data can be read about 100 times faster from an SSD than from a hard drive, since an SSD has no read/write head that must be physically repositioned over a spinning platter, Fortin wrote. SSDs will also read large files such as videos up to twice as fast as a hard drive, and many SSDs will write large files more quickly than a hard drive, especially when the SSD is new or empty.

The first generation of SSDs, introduced mostly via netbooks two years ago, was largely a disappointment, as the drives were slower and pricier than expected. But performance gains, as well as falling prices, have many PC makers excited anew about SSDs.

Asus has debuted its S121 netbook with a 512GB SSD that will run Windows 7 when it becomes available.

However, Fortin said that Windows 7 users could experience freeze-ups while writing small files and see overall performance slow down over time, depending on the quality and age of the SSD they're using. The freezing problem is caused by the "complex arrangement" of memory cells in flash chips, he said, as well as the fact that data must be erased from cells before new data can be written to them.

And few SSDs today include RAM caches that can speed up performance, as most hard drives do. As a result, "We see the worst of the SSDs producing very long I/O times as well, as much as one half to one full second to complete individual random write and flush requests," Fortin wrote.

"This is abysmal for many workloads and can make the entire system feel choppy, unresponsive and sluggish."

That is despite improvements Microsoft made in Windows 7 such as resizing partitions to better fit SSDs and "reducing the frequency of writes and flushes," wrote Fortin.

Even features such as ReadyBoost, which Microsoft created to let USB flash drives with solid-state memory accelerate the performance of Windows Vista and 7, will actually slow performance when run with most SSDs, wrote Fortin. As a result, Windows 7 will turn off ReadyBoost for SSDs.

Meanwhile, performance degradation over time is caused, again, by the need to erase data before it can be written, and the increasing fragmentation of data on SSDs as they fill up.

Some vendors such as Intel say they have mitigated the problem on their SSDs, but none claim to have solved it.

Unlike with hard drives, automatically defragmenting SSDs is not recommended because it can prematurely wear them out. Windows 7 turns off defragging by default.

Fortin said the performance degradation is not as serious as the freeze-ups. "We do not consider this to be a show stopper," he wrote. "We don't expect users to notice the drop during normal use."

Disk compression is also not recommended for heavily written data such as web browser caches or email files, Fortin said, because of the potential for a slowdown on SSDs, though it is fine for data that is rarely written. However, some features, such as Windows Search and BitLocker encryption, should work as well or better on SSDs, Fortin said.

Source: Techworld

VelMurugan

Clearpace goes virtual with NParchive

British archiving software vendor Clearpace is launching a virtual appliance version of its data compression product NParchive.

NParchive is used to compress and archive structured data, including databases. Data within NParchive is typically compressed to less than 5 percent of its original size, according to Clearpace. The data can then be accessed using normal SQL queries.
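As a rough illustration of what that ratio means in practice (the sub-5-percent figure is Clearpace's claim; the source database size below is simply an assumed example):

# Illustrative arithmetic only: the "less than 5 percent" ratio is Clearpace's
# claim, while the 2TB source database is an assumed example.
original_gb = 2048      # assumed size of the source database, in GB
ratio = 0.05            # compressed to less than 5 percent of original size
archived_gb = original_gb * ratio
print(f"A {original_gb}GB database archives down to roughly {archived_gb:.0f}GB or less")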

Advantages of running the product as a virtual appliance, as opposed to on a traditional server, include easier installation - since the appliance comes prepackaged - and better use of computing resources, according to John Bantleman, CEO at Clearpace.

The advantages with virtual appliances are so big that offering them should be mandatory for all software companies, Bantleman said.

The appliance itself can run on virtualisation platforms from VMware and Microsoft, as well as on Xen, according to Bantleman. He also hinted that the product could become available as a cloud-based service in the near future.

Archiving solutions are generally a good fit for running in the cloud, according to Bantleman. It's not something a customer uses every day and if more performance is needed to speed up archiving, the cloud can supply that, he said.

VelMurugan

Seagate drives hit by firmware glitch

A UK data recovery specialist has warned that users are still at risk from a firmware update from Seagate, which could leave their organisations with "bricked" hard disk drives (HDDs).

The Seagate firmware issue has been known about for some months now. Indeed, UK-based DiskEng Data Recovery said that it had been aware of the problem since late last year, but that since then it had seen "a massive rise in emergency data recovery cases."

This was mostly during the first quarter, but DiskEng warns that it is still seeing cases of bricked Seagate HDDs.

According to DiskEng, the serious firmware issue concerns the 500GB to 1.5TB range of Seagate hard drives. These include the Barracuda 7200.11, Barracuda ES.2 SATA, and DiamondMax 22 drives, with firmware revisions SD15, SD16, SD17, SD18 and MX15.

Alkas Ali, a director at DiskEng, said the original problem with these HDDs is that the "firmware contains an event log (i.e. what it was doing last), which during a certain power cycle of the drive, causes the event log to point to an invalid location, a location that simply does not exist. This causes the drive to hang in a busy state, which the drive cannot come out of once triggered, and therefore the drive then remains inaccessible."
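The mechanism Ali describes can be pictured with a short sketch. This is purely illustrative Python, not Seagate firmware code, and the log size and trigger value are assumptions used only to show how a stored pointer past the end of a fixed-size event log would leave the start-up routine stuck.

# Purely illustrative sketch of the failure mode DiskEng describes; this is
# NOT Seagate's firmware. A power-on routine reads a fixed-size event log,
# and a boundary condition leaves the stored pointer one slot past the end.
LOG_SLOTS = 320                      # assumed size of the on-drive event log

def power_on(last_event_index: int) -> str:
    """Simulate the drive's start-up read of its event log."""
    event_log = [f"event {i}" for i in range(LOG_SLOTS)]
    if last_event_index >= LOG_SLOTS:    # pointer to a non-existent entry
        # The real firmware had no graceful path here: it hung in a busy state.
        return "drive hangs in busy state and remains inaccessible"
    return f"boots normally after reading {event_log[last_event_index]}"

print(power_on(175))    # ordinary case: drive boots
print(power_on(320))    # boundary case: pointer past the end of the log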

According to Ali, Seagate realised there was a problem and issued a firmware update to combat this problem. Unfortunately, the update left some working drives "bricked" (i.e. dead).

"We have had to recover data from these apparently damaged drives and also the ones that failed suddenly because of the firmware bug," said Ali. "What happened is that the firmware update was supposed to correct this problem, but ended up destroying the drive completely."

"Seagate then issued another firmware update, and this second update is fine," he said. But Ali warned that a problem can still occur because of the gap between updates, and people can still end up with bricked drives because they have failed to apply the second firmware update.

"This was an issue that surfaced quite some time ago, and we had posted information for customers on our website as well as on other industry sites to provide a fix," a Seagate spokesman said in an email. He pointed to a Seagate support site that allows users to determine if their hard disk is affected.

Ali acknowledged that the number of faulty drives has now fallen back to normal levels, thanks in part to Seagate releasing the corrective firmware, as well as offering a free data recovery service to anyone who encountered the problem.

But he warned that Seagate's recovery service is too lengthy. Many businesses that typically use these HDDs in their servers need to recover their data within a single day, he said. Seagate, said Ali, first needs to confirm that the firmware issue caused the fault, and that "takes time and takes clearance" before the drive qualifies for free data recovery.

VelMurugan

Tandberg launches virtual tape range

Tandberg Data has launched a new range of virtual tape libraries offering users the chance to export data to both virtual tape and physical tape.

Each unit can connect to up to 100 separate systems and presents a unique virtual library to each host system.

"The ability to move to physical tape gives businesses another option," said James Jackson, product manager for Tandberg UK. "It frees up some storage space and the physical tape can be used for offsite back-up."

Tandberg said that the DPS1000 series offered users an easy way to connect to a tape library. "We offer a direct connection," said Jackson.

He said that the product also benefited from a single management tool, giving users a single, unified view of the system. "There's just one throat to choke, as they say in the US," he said.

There are two models in the series: the DPS1100 and the DPS1200. The 1100 is a 1U model that offers 3TB of storage, while the 1200 is a 2U unit with 6TB of storage. In addition, the 1200 offers a redundant power supply.

The next release of the DPS1000 will include data deduplication, said Jackson. "We're just working through some issues we've encountered with the technology and expect to have a release out in the fourth quarter of this year."

VelMurugan

Samsung expands memory cards to 32GB

Samsung has announced a 32GB NAND memory card, the highest-density embedded memory card to date and one that offers twice the capacity of previous cards.

Samsung's 32GB moviNAND card is the first embedded memory card to use 32Gbit chips based on 30-nanometer lithography technology. Current moviNAND cards use 16Gbit chips based on 40nm-class technology.

Each 32GB moviNAND device incorporates eight Samsung 30nm-class 32Gb NAND chips, a multimedia card (MMC) controller and firmware. Samsung's 30nm moviNAND card is also available in 16GB, 8GB and 4GB capacities.
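The capacity arithmetic behind the headline figure is simple: eight 32Gbit dies total 256Gbit, or 32GB. A quick check:

# How eight 32Gbit NAND dies yield a 32GB moviNAND card.
dies = 8
gbits_per_die = 32
total_gbits = dies * gbits_per_die      # 256 Gbit of raw NAND
total_gbytes = total_gbits / 8          # 8 bits per byte
print(f"{total_gbits} Gbit total = {total_gbytes:.0f} GB")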

Due to an explosion in the amount of personal data stored on mobile devices, the use of higher-capacity memory cards is expected to grow exponentially in the next four years. Research firm iSuppli expects an eight-fold growth in 32GB and larger memory cards by 2013.

About 120 million 16Gbit NAND-based cards have shipped to date, about 13 percent of global memory card shipments. By 2013, 950 million such cards are expected to have shipped, making up 72 percent of total world shipments, according to iSuppli.

The new cards are aimed at high-end phones, music players and other mobile consumer electronics. Samsung said the higher-capacity cards offer better performance for processing and storing large amounts of multimedia content such as videos, video games and TV broadcasts.

Samsung's proprietary moviNAND chip uses a high-speed interface based on the eMMC v4.3 specification, jointly developed by JEDEC and the MMCA (MultiMediaCard Association), which includes a power-on feature that reduces boot-up time and a sleep command to cut power consumption.

VelMurugan

IBM launches new range of SSDs

IBM is continuing its push into SSDs (solid-state drives), announcing flash drives for server and storage platforms as well as new software for allocating data among different types of drives.

Enterprise SSDs allow for faster access to data but cost far more, per bit, than spinning HDDs (hard disk drives). IBM is clearly committed to the emerging technology, as are EMC and other enterprise storage vendors. IBM, though, doesn't believe SSDs will make up more than 5 percent of any average company's total storage capacity.

For the foreseeable future, SSDs will be used as part of tiered storage architectures alongside HDDs, said Charlie Andrews, director of marketing in IBM's Dynamic Infrastructure group. For that reason, the company offers a variety of software to help store "hot" data in SSDs and "cold" data on HDDs. Its latest announcement, the IBM i:ASP Data Balancer, automatically shifts different bits of data to the most appropriate tier in a storage system. The software uses an algorithm that draws upon information such as how often each bit of data has been used, Andrews said. The i:ASP Data Balancer is designed for IBM's iSeries servers, part of the company's Power line.

The Power line became the latest class of IBM servers to have SSD options, with a set of 69GB SSDs going on sale that can be used on all Power6 systems. These SSDs are available in 2.5-inch and 3.5-inch form factors and use a SAS (serial-attached SCSI) controller, which offers greater speed and reliability, according to IBM. List prices for the Power SSDs are about $145 (£92) per gigabyte.

The company also announced availability of new SSDs for System x servers, which have been offered with SSD options since 2007. There is now a 50GB SATA (Serial Advanced Technology Attachment) drive in a 2.5-inch disk package, which can run on 2.1 watts of power. Another 50GB drive, designed for higher I/O performance, comes in either a 2.5-inch or a 3.5-inch form factor. These SSDs can be used with Windows, Linux and VMware's ESX Server. The list price is about $50 per gigabyte.
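Those per-gigabyte list prices translate into hefty per-drive figures. A back-of-the-envelope check, using the approximate prices quoted above, so the totals are only rough:

# Rough per-drive list prices implied by IBM's approximate per-GB figures.
power_ssd_gb, power_price_per_gb = 69, 145      # Power6 SSD: ~$145/GB
systemx_ssd_gb, systemx_price_per_gb = 50, 50   # System x SSD: ~$50/GB

print(f"Power6 69GB SSD:   ~${power_ssd_gb * power_price_per_gb:,}")      # ~$10,000
print(f"System x 50GB SSD: ~${systemx_ssd_gb * systemx_price_per_gb:,}")  # ~$2,500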

IBM has also announced availability of 3.5-inch SSDs for its System Storage DS8000 storage platform.

The new SSDs can improve IBM DB2 transaction performance by as much as 800 percent over HDDs, while reducing the physical space requirement by about 80 percent and energy consumption by as much as 90 percent, IBM said.

Enterprise-class SSDs are built to much higher standards than consumer versions, Andrews said. Consumer flash drives, such as those in a thumb drive or portable music player, can pack much more data into a given amount of space by using a multi-level cell design. The enterprise drives use single-level cells because they need to be longer lasting and more reliable under more intense use, he said.

As a result, enterprise SSDs are more expensive and can't ride the same steep curve toward higher density and lower price per bit, Andrews said. The upside is that most enterprise drives should last five years under typical enterprise use, he said.

VelMurugan

New Amazon service offers quick backup option

Amazon has launched a new option for its S3 cloud storage service, AWS Import/Export, for quickly uploading large amounts of information to its data centres. It uses a well-developed, multi-modal content delivery network that can move terabytes of data faster than a high-speed leased line.

The fact that this network is based on jets, trucks and messengers with walkie-talkies doesn't make it any less useful to enterprises, many of which have been using overnight shipping services for backups for several years, according to 451 Group analyst Henry Baltazar. Just make sure the data's encrypted in case it falls off the back of a truck or otherwise gets lost, he said.

According to an Amazon blog post, AWS Import/Export from Amazon Web Services lets customers send virtually unlimited amounts of data to Amazon when they want to start using S3 for the first time, back up their content offsite, or streamline the Direct Data Interchange process with their partners. All customers will have to do is copy their data to a device, such as an external hard drive, create a manifest file with authentication information and a digital signature, email loading instructions and ship the device. AWS lays out guidelines for the storage devices on an information page at its website.

When it arrives, the device will go to an AWS Import/Export station and the data will be loaded into the customer's S3 bucket, generally the next business day. Customers will pay US$80 (£50) per device handled and $2.49 per hour for the labour involved in loading the data, plus the standard charges for storing that data on S3. The service is available now in beta testing, for importing only, but will be expanded to include exporting in the coming months, Amazon said.

With many enterprise Internet connections, Import/Export will often be faster than online uploads or downloads, according to Amazon. For example, on a 1.5Mbit/s leased line, with 80 percent of that line devoted to the transfer, it would take 82 days to send 1TB of data, Amazon said. As a general rule, S3 customers with such leased lines should think about using Import/Export for sending 100GB or more of data, the company said.

Even a faster leased line (just under 45Mb per second) would take three days to send 1TB, so shipping would be a good option for anything above 2TB, Amazon said. A Gigabit Ethernet Internet connection could send 1TB in less than a day, Amazon said. But even if an enterprise is using a metro Ethernet link like that, it's unlikely to have that amount of capacity all the way to Amazon, 451's Baltazar pointed out.
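Amazon's day-counts can be reproduced with straightforward arithmetic. The sketch below assumes a 1.544Mbit/s T1 with 80 percent of the line usable, a 45Mbit/s line and a Gigabit Ethernet link at full rate, and counts 1TB as 2^40 bytes; with those assumptions it lands close to the figures Amazon quotes.

# Back-of-the-envelope transfer times for 1TB over various links (assumptions:
# 1TB = 2**40 bytes, T1 = 1.544Mbit/s with 80% usable, faster lines at full rate).
TB_BITS = 2**40 * 8                      # one terabyte, in bits

def days_to_send(bits: float, line_bps: float, utilisation: float = 1.0) -> float:
    return bits / (line_bps * utilisation) / 86_400   # 86,400 seconds per day

print(f"T1 (80% utilised):  {days_to_send(TB_BITS, 1.544e6, 0.8):5.1f} days")   # ~82 days
print(f"45Mbit/s leased:    {days_to_send(TB_BITS, 45e6):5.1f} days")           # ~2-3 days
print(f"Gigabit Ethernet:   {days_to_send(TB_BITS, 1e9) * 24:5.1f} hours")      # well under a day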

"If you really have to have that data up there fast, it does make sense," Baltazar said. The method isn't new: For example, when banks set up new branches and want to have large amounts of information available on site, they typically ship drives because they don't have days to wait for a transfer. Online backup and disaster-recovery vendors also offer this approach. It's developed in just the past few years as the growth in data, driven by multimedia, has outpaced the acceleration of Internet connections, he said.

What's new is that Amazon, a cloud storage provider that offers more than just backup, is using the technique. The core business model for AWS is providing storage on S3 for applications that run on Amazon's EC2 cloud computing infrastructure.

VelMurugan

Users set to ditch tape for online storage

Users are set to ditch tape as a storage medium, as one in ten businesses have lost data following the failure of a tape backup system.

That's according to research published by business continuity specialist Connect. The survey of 151 UK IT managers and directors also found that three quarters of SMEs still use traditional backup tapes as the default option for storing their data, but that nearly half (49 percent) of all companies expect to switch to an online backup service within the next three years.

The study also found that one in five have already switched away from traditional backup tapes, with 10 percent expected to shift across over the next 12 months.

Tapes have been used to store data since the 1960s, but Connect believes that tape as a backup method is "hugely vulnerable and problematic" and "not even cheaper than more reliable options", according to the survey.

Mark MacGregor, CEO of Connect, told Techworld that over the last 12 months his company had stopped recommending tape as standard for its clients, mostly down to the poor reliability of tapes for recovering data and the falling cost of online backup.

"Until 18 months ago, our recommendation to our clients was that online backup was not speedy enough and was too expensive," he said. "Online backup was ok for small amounts of data, but over the last year or so, that equation has changed, as the price of online backup has come down and line speed has improved."

MacGregor said that the failure rate of tapes was not so much down to the technology itself as to what people actually did with their tapes. "It is not failure of tapes per se, more failure of the process," he said. "Those process problems, combined with the falling costs of online backup, or alternative methods, make switching to online backup a no-brainer now. Obviously, there can be exceptions though."

However, data storage provider Tandberg Data dismissed the idea that tape has had its day.

"Tape is not going to die, it will evolve," said Simon Anderson, product manager for tape drive and media at Tandberg. "Maybe not as primary backup, but certainly for archiving purposes. The retention period for storing data is increasing, especially considering recent European legislation, and organisations such as Google and Yahoo are now storing data for much longer."

"Connect are obvious painting a rosy picture (of online backup), but if you compare it against it all tape technologies, customers will see the difference," he added. "LTO-4 runs at 120MB/s, so it is more a case of utilising new tape technologies, and customers using new tape technology will benefit from new advantages."

"Online backup is not suitable for everyone... and online backup can be an expensive mistake if you get it wrong," he warned.

"You can store tape in a vault or offsite for 30 years," added Marije Stijnen, director of corporate marketing at Tandberg. "Tape is also a no power medium, with no spinning disks needed, so there are huge power consumption savings," she added.

"Tape faults are more associated with human error, such as leaving tapes too long before rotation etc," said Stijnen. "When you are dealing with huge data sets and quick access, I would hate to rely on online backup."

"Tape is not irrelevant now, and indeed for some companies, it will still be the most suitable medium," concedes Connect's MacGregor. "But the number and type of those companies is decreasing all the time."

"Tape is not dead, but it certainly is not going to be growth industry over the next few years," he added. "That is why we believe that the default option over next three years is going to be online backup. My instinct is that it will be quicker than that."

dhoni

They have given plenty of data storage details here. It would be good to see more information on many other types of data storage too.