

Operating Systems and Servers News

Started by VelMurugan, Mar 17, 2009, 04:57 PM


VelMurugan

Mar 17, 2009, 04:57 PM Last Edit: Mar 17, 2009, 04:59 PM by VelMurugan
Cisco shakes up data centre market with Unified Computing launch

Cisco has launched its Unified Computing System, comprising virtualisation technology, services and blade servers in its bid to shake up the data centre market and put pressure on its previous partners, HP and IBM.

Cisco's new architecture comprises compute, network, storage access and virtualisation within a single rackable system designed to cut IT infrastructure costs and complexity, stretch existing IT investments and allow enterprise customers to build a data centre that could easily be extended for future growth, according to the company.

The company has also taken its first steps in the server market, where it will compete with long-standing partners like HP and IBM. Cisco's new UCS B-Series blades, previously known under the code-name California, are based on the forthcoming Intel Nehalem processors. Cisco said the blades would incorporate extended memory technology for applications with large data sets.

However, Cisco also is teaming up with software partners such as Microsoft, VMware and BMC to provide technology for its new system. Cisco will pre-package, resell and support Windows Server 2003, Windows Server 2008 with Hyper-V technology and Microsoft SQL Server 2008 as part of the Unified Computing System. For its part, VMware is providing virtualisation technology to the new system and BMC is contributing resource-management software.

Systems integrator Accenture has already signed on to be a services partner, introducing four services options for its customers to deploy the Unified Computing System. Cisco also is inviting its broad network of channel partners to work with the company to provide the new infrastructure to enterprise customers.

Prior to Monday's announcement, Cisco had laid out its intention to eliminate the manual integration of computing and storage platforms with networks and virtualisation systems. In a recent blog posting, CTO Padmasree Warrior acknowledged this will lead Cisco to compete with some of its partners.

Most enterprises have been able to realise the benefits of the first phase of virtualisation, consolidating their data centres for economies of scale and simpler management. But it's been hard to achieve full virtualisation, in which virtual machines can continuously move among servers, analysts say. This would allow for adding processing power as demand for an application grows, or for moving tasks off a physical server at night for hardware maintenance.

Vendors of servers, storage and software all can play roles in managing resources in virtualised data centres. Cisco executives have said the network is the best place to tackle many of these tasks because it is the only element of the infrastructure that touches everything.

IBM and HP have not overlooked the importance of networks in controlling data centres. IBM has aligned with Cisco rival Juniper Networks in a broad strategy called Stratus, and HP is expected to increasingly tie its growing ProCurve networking business in with its computing offerings.

Cisco has pushed to bring more functions into the network infrastructure for several years. These have included security, application optimisation, and adaptation of multimedia to fit different clients and purposes. As revenue growth from its core routing and switching businesses slows, the company is aggressively branching out into new areas, including consumer electronics and the Telepresence high-definition videoconferencing line.

Source : TechWorld

VelMurugan

Data centres fail energy efficiency tests

It's not all doom and gloom in the IT industry. A survey of US execs reveals they're spending more on their data centres, although companies are still not curbing power use.

On average, data centre budgets are increasing nearly 7 percent this year, according to an online survey of 300 senior-level IT decision makers at some of the largest North American companies.

More than four out of five companies surveyed are planning data centre expansions in the next 12 to 24 months. Space requirements for these data centres are growing 16 percent year over year, to an average of 21,000 square feet in 2009.

"One finding that may surprise people is that companies are increasing their data centre budgets in 2009. This is a reflection of how companies view their data centres as critical assets for increasing productivity while reducing costs," said Chris Crosby, a senior vice president at Digital Realty Trust. The company, which offers a variety of data centre products and services, conducted the survey in January.

The survey comes on the heels of a study by the AFCOM Data Center Institute, which found a bleaker picture. AFCOM said budget cuts are forcing IT departments to shed older, more experienced workers, and that many data centres are delaying or cancelling planned physical expansions and relocations.
Specific findings from Digital Realty Trust include the following:

    * 84 percent of surveyed companies are planning data centre expansions in the next 12 to 24 months;
    * 64 percent of companies that plan to expand in 2009 will do so in two or more locations;
    * Surveyed companies plan to increase data centre spending 6.6 percent in 2009;
    * Data centres account for 35 percent of the average IT budget in surveyed companies.

Digital Realty Trust also surveyed companies about PUE - or power usage effectiveness, a Green Grid-devised measure that compares the total power used in a facility with the power devoted specifically to IT equipment. A PUE of 2.0 would indicate that for every watt of power consumed by servers and other IT equipment, an additional watt is needed to cool and distribute power to these IT resources. A PUE of 3.0 indicates that two additional watts are needed to cool and distribute power.
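The PUE arithmetic described above is simple enough to sketch in a few lines of Python; the facility figures below are invented for illustration and are not from the survey:

```python
def pue(total_facility_watts, it_equipment_watts):
    """Power Usage Effectiveness, as defined by The Green Grid:
    total facility power divided by the power reaching IT equipment."""
    return total_facility_watts / it_equipment_watts

# A facility drawing 900 kW in total to power 450 kW of servers,
# storage and network gear (figures made up for illustration):
print(pue(900_000, 450_000))  # 2.0 -- one watt of overhead per IT watt

# Four out of five surveyed companies were at this level or worse;
# at PUE 3.0, two watts of cooling and distribution per IT watt:
print(pue(900_000, 300_000))  # 3.0
```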

A large majority of companies surveyed were measuring power usage. Four out of five reported PUE ratings of at least 2.0, and 26 percent reported PUE ratings of at least 3.0.

"Current facilities are not highly energy efficient," Digital Realty Trust concluded.

VelMurugan

Rackspace set to take on Amazon

Hosting company Rackspace is set to take on Amazon with the launch of two new cloud-based services. It has launched Cloud Servers, its pay-as-you-go, on-demand server option, and has taken Cloud Files, its online storage service that competes with Amazon's S3, out of beta.

Emil Sayegh, general manager for Rackspace cloud services, said that the availability of Cloud Servers, Cloud Files and the existing Cloud Sites meant that the company was the only one that offered a complete suite of cloud services. He added that with its dedicated server offerings it was also the only company that offered a complete choice between dedicated and shared servers.

He said that the company was going to use that flexibility to position itself against Amazon, although he also said that the company was going to be a strong player in the space thanks to its service ethos. "We're a service company, contactable by phone, email or chat, 24x7, that's what we're good at." He added that the competition might find that technical provisioning is only one aspect of supplying on-demand products and fall down on the customer service side. "Service is complicated," he said - although Amazon, with its high emphasis on customer satisfaction, is probably aware of that.

Sayegh said that the company was hoping to build even more on its service offering. "At the moment we manage dedicated and cloud through separate consoles, we're aiming to bring them together. We all have separate billing and are looking to merge them into one in the near future," he said.

In addition to the new launches, the company has announced that its sister company Jungle Disk, which it bought last year, will now be offering its file backup services on Rackspace as well as Amazon S3. "Customers logging on to the service will now be offered a choice," he said. Although Jungle Disk users are offered a choice, Rackspace's Erik Carlin blogged that the Rackspace offering provided significant advantages, notably on performance and price.

In a further move, Sayegh announced that the company was dropping the Mosso brand name that it had used in the US. "It was a name that was unknown in the rest of the world," he said. "What was the point of keeping it?"

Pricing for the Rackspace cloud services is currently in dollars, although the company is moving to local currency pricing before the end of the year. "We hope to have UK pricing up well before then," he said. He added that Rackspace was also looking to open new data centres in the UK in the near future but wouldn't be drawn on a date.

VelMurugan

Amazon cloud users get reservation option

Amazon Web Services (AWS) has started offering customers the opportunity to reserve capacity in advance on its  Elastic Compute Cloud infrastructure.

The new pricing, called Reserved Instance, is an additional pricing model alongside the company's pay-as-you-go pricing, said AWS spokeswoman Kay Kinton. As with its current pricing model, customers will still only pay for the compute capacity they consume, she said.

Under the new initiative, which is only available in the US currently, customers pay once to reserve capacity on EC2 in either one-year or three-year terms. If they find they will need more usage than has already been reserved and paid for, they can also add more capacity via the on-demand model and pay for that as needed.

Reserving computing capacity is a good option for customers who have predictable computing needs and plan for that usage in their IT budgets ahead of time, Kinton said. AWS added the pricing model at the request of some larger customers who told them that reserving capacity "more closely aligns with how they budget for IT," she said.

For companies that can use reserved-capacity pricing, it actually reduces their IT costs, she said. For instance, AWS charges $0.10 per hour for what it calls a "small" On-Demand Instance on EC2. A comparable deployment under one-year Reserved Instance pricing costs $325, while a three-year term costs $500. That reserved pricing works out to about $0.03 per hour, according to an AWS pricing chart.
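The economics of that trade-off are easy to work through. The sketch below assumes the one-time fee sits on top of the quoted discounted hourly rate, which matches the description of still paying for capacity consumed; the break-even arithmetic is illustrative, not an AWS pricing chart:

```python
ON_DEMAND = 0.10   # $/hour for a "small" On-Demand Instance
RESERVED = 0.03    # $/hour, the quoted discounted rate under a reservation
UPFRONT = 325.00   # one-time fee for a one-year Reserved Instance

# Hours of use at which the reservation's hourly savings
# have covered the upfront fee:
break_even = UPFRONT / (ON_DEMAND - RESERVED)
print(round(break_even))  # ~4643 hours, a little over six months

# Cost of running one instance continuously for a year under each model:
hours = 24 * 365  # 8760
print(f"on-demand: ${ON_DEMAND * hours:.2f}")           # $876.00
print(f"reserved:  ${UPFRONT + RESERVED * hours:.2f}")  # $587.80
```

So a workload that runs more or less around the clock comes out well ahead on the reserved model, while a bursty workload that racks up only a few thousand hours a year is better off staying on demand.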

AWS expects to offer the new pricing to European customers soon, Kinton said. AWS began offering its EC2 service to Europe in December.

VelMurugan

Hitachi enters server, network monitoring business

Hitachi Data Systems has released three new software products to provide enhanced monitoring of business applications, virtualised servers and the storage they use.

Hitachi said the products address a new market for the storage systems and software provider: a holistic view of everything that an enterprise uses to connect to its storage systems. The products will also work with competing vendors' systems.

The Hitachi IT Operations Analyzer is an application aimed at midsize companies that provides a single-pane view of data centre servers and IP networks, as well as Fibre Channel storage-area networks, LANs and the storage hardware attached through those networks. Hitachi said the application provides performance and availability monitoring across heterogeneous servers, networks and storage systems.

"It's certainly a new kind of product for Hitachi in that they obviously come from a storage background. I think this is a first-generation effort to really reach beyond storage and offer customers and channel partners visibility into both network servers and storage," said Mary Johnston Turner, an analyst at IDC.

Turner said Hitachi will likely use the IT Operations Analyzer as a platform for other monitoring tools in the future. The software features automated root-cause analysis and network visualisation and runs off an agentless architecture. It has a web-based interface.

"It uses I/O path awareness to look at bottlenecks proactively so there's no need for active monitoring or the setting of thresholds," said Sean Moser, vice president of software at Hitachi Data Systems.

Moser said Hitachi's IT Operations Analyzer is designed to help users monitor and improve customer service levels and can easily be managed across a data centre without requiring specialised training.

Hitachi's second release, its Virtual Server Reporter (VSR), uses Aptare's StorageConsole to provide an end-to-end view of virtualised VMware servers and their respective storage usage. VSR is intended for enterprise installations and extends the capabilities of heterogeneous storage management reporting to individual virtual machines.

VSR is an agentless application that automatically discovers all ESX server deployments and then monitors hardware capacity and CPU utilisation. The application can automatically load-balance workload across virtual machine server deployments, "and it helps customers find spare capacity across their server footprint," Moser said.

VSR works independently of storage installations, which is a first for Hitachi Data Systems, a storage-centric vendor.

Moser said that by integrating the reporting of virtual servers, users can increase the visibility of their virtual environments and more effectively manage the storage and backups associated with virtual machines, allowing them to make the best use of storage assets and decrease costs.

Hitachi's third product is the Storage Command Portal, which consolidates storage reporting within the Hitachi Storage Command Suite of management software to report on Hitachi storage systems and the business applications that rely on them.

The software provides a business application view of the Hitachi storage environment, with capacity and performance reporting to help manage service-level agreements.

VelMurugan

AMD drops dual-core server CPUs from lineup

Advanced Micro Devices has revealed it is cutting prices of some Opteron chips by up to 43 percent, and has also withdrawn dual-core server processors from its product lineup.

Dual-core Opteron chips were part of AMD's earlier server processors, but were not included in an updated price list posted on Tuesday. That leaves only quad-core Opteron processors as part of AMD's server chip lineup.

"For the bulk of our customers, our AMD Opteron processor offerings are all now quad-core," said Teresa Osborne, an AMD spokeswoman, in an email.

AMD, however, is still offering dual-core Athlon and Phenom chips for consumer desktops and laptops. It also offers triple-core Phenom chips for desktops.

AMD also slashed prices on some Opteron server chips. It cut the price of Opteron 2374 HE chips by 43 percent, from $450 (£306) to $255 (£173), and the price of Opteron 8376 HE processors by 35 percent, from $1,514 to $989. AMD also cut prices for other Opteron HE chips.

The price cuts went into effect last week with the introduction of four quad-core Opteron HE processors, Osborne said.

The HE chips are part of AMD's family of Opteron server processors code-named Shanghai. The offerings fall between low-power Opteron EE chips and high-end Opteron SE chips, which are speedier and draw more power than HE chips. Shanghai chips draw between 40 watts and 105 watts of power and operate at speeds between 2.1GHz and 3.1GHz.

The cut of dual-core server processors is an incremental step as chip companies like AMD and Intel add cores to improve processor performance while trying to reduce power consumption. Increasing the number of cores helps servers execute more tasks, which could lower the number of servers needed in data centres and cut hardware acquisition and energy costs for customers.

AMD is set to ship the 6-core Opteron processor code-named Istanbul in May. AMD is also planning to release a 12-core chip codenamed Magny-Cours in 2010. Both the chips will be made using the 45-nanometer process.

In an effort to jump ahead of Intel in the core battle, AMD also last week said it would release a chip codenamed Interlagos in 2011 that will include between 12 and 16 processor cores. Intel has only officially announced an 8-core Xeon chip code-named Nehalem-EX, which is due for release in 2010.

Interlagos chips will go into servers with two to four processor sockets, resulting in a maximum of 64 cores per system. The chip will be primarily used in data centre servers, and will be made using the 32-nm process technology. AMD also announced a processor codenamed Valencia with up to 8 cores, also made using the 32-nm process, which is set for release in 2011.

VelMurugan

Intel touts ROI from new machines

Microsoft is not the only vendor smarting from companies delaying PC purchases. Intel has presented new research claiming that large firms buying new PCs equipped with its vPro security and management technology can recoup their investment in less than a year.

A company with 30,000 PCs that upgrades to new Core 2 Duo or Quad computers would make its money back in 17 months - and in just 10 months if those PCs are also equipped with vPro-enabled motherboards, according to Intel.
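Payback claims of this sort reduce to a simple undiscounted calculation: upfront cost divided by monthly savings. The per-PC figures below are invented purely to illustrate the arithmetic and are not taken from Intel's research:

```python
def payback_months(upfront_cost, monthly_savings):
    # Simple (undiscounted) payback period: months until cumulative
    # savings cover the upfront investment.
    return upfront_cost / monthly_savings

# Hypothetical inputs for a 30,000-seat refresh; these numbers are
# illustrative assumptions, not figures from Intel's study:
fleet = 30_000
cost_per_pc = 850.00          # assumed purchase and deployment cost
savings_per_pc_month = 50.00  # assumed support and energy savings

months = payback_months(fleet * cost_per_pc, fleet * savings_per_pc_month)
print(months)  # 17.0 -- only because the inputs were picked to match
```

In this simple model the fleet size cancels out; what moves the payback from 17 months towards 10 is a higher monthly saving per PC, which is where Intel argues vPro's remote-management features come in.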

One analyst, however, said such ROI figures apply only to a limited set of firms and do not encompass other costs of PC upgrades, such as buying or upgrading new software licenses, something many companies have also been delaying.

"This might make sense for IT outsourcing firms or very large companies," said Jim McGregor, an analyst with In-Stat. "But for many of us, this is the worst economic downturn of our lives. Unless it fits in your company budget, it doesn't make sense."

According to an Intel-commissioned survey by Wipro Consulting of 106 North American and European companies, only 32 percent have slowed their PC refresh rates in the last six months. The majority (60 percent) of the companies haven't changed their PC upgrade policy, while 8 percent have actually accelerated it. The companies surveyed had a minimum of 5,000 PCs if they were based in North America, or 2,500 if based in Europe.

"Corporate IT is overdue for an update from Windows XP," said Rob Crook, general manager and vice-president for Intel's business client group. "Yes, it's tough economic times, but those who can, are [upgrading]."

Though still profitable, Intel has blamed delayed PC upgrades in part for its lower revenues in recent months.

Slower PC upgrades mean that many enterprises haven't tried out vPro, which Intel launched in April 2006. The first desktop PCs featuring the technology began shipping several months later.

vPro enables a number of services such as Active Management Technology (AMT) that allow IT managers to remotely configure and set policies for PCs, and Intel Trusted Execution Technology (TXT) to enforce security policies.

According to independent analyst Jack Gold, only 10 to 15 percent of enterprise PCs deployed today have vPro. Gold released research last week showing that upgrading laptops makes financial sense, primarily due to the cost of maintaining breakage-prone, out-of-warranty hardware.

Desktop PCs don't have that problem, Gold said. Upgrading to new vPro-enabled PCs on the desktop side can still be financially smart, though he said the payback is less straightforward than Intel may make it sound.

TXT is supposed to be more secure because it is in the hardware, not software, layer. That didn't stop security experts from breaking TXT earlier this year, and explaining how it was done.

McGregor still praises vPro's features. But he says vPro technology needs to be paired with strong back-end management tools and run in large environments for it to translate into increased security and lower costs.

VelMurugan

Microsoft talks cloud for infrastructure software

Microsoft has begun outlining its plans for both public and private clouds in relation to its stack of data centre infrastructure software.

At its annual Microsoft Management Summit this week, Redmond laid out its vision of a corporate IT world that straddles both "public clouds and private clouds", six months after it launched its Azure cloud OS.

The announcement comes a week after VMware made a similar announcement around private clouds.

But rather than technology and architecture changes, Microsoft is using semantics to align its existing stack of infrastructure and management software with the trendiest topic in network computing.

Microsoft's "private cloud platform" is built on a familiar IT combination of Microsoft software. And the cloud moniker is more or less the third phase of Microsoft's management platform that began in 2003 as the Dynamic Systems Initiative (DSI), morphed into Dynamic IT and is now its public cloud/private cloud strategy.

"Think about the private cloud as your internal data centre optimised for performance and cost and with the compatibility with your existing applications all built on tools you know today - Windows Server and System Center," said Bob Kelly, Microsoft's corporate vice president for infrastructure server marketing, during his opening keynote on Tuesday. "We believe that business enterprise customers will demand the same levels of reliability and predictability [found in the public cloud] for their internal data centres."

The one piece that many corporate IT shops may not be up to speed with at this point is virtualisation, which Microsoft has built into Windows Server with Hyper-V and System Center with its Virtual Machine Manager.

Both of those products will get more sophisticated as new versions are rolled out in the next six months.

Microsoft also is aligning its Visual Studio development tools and .Net platform with both of its cloud environments to make it easy for users to build applications that straddle the two.

While little of this is new, Microsoft is clearly beginning to turn its marketing message toward the cloud phenomenon and create a bridge to its Azure platform for customers already committed to Windows infrastructure internally.

"We think customers will not standardise on one cloud," Kelly said. "We think it will be critical to deploy applications on premises and federate across two clouds."

To that end, Microsoft is building cloud federation features into a forthcoming version of Virtual Machine Manager that will let users manage from a single console virtual machines that are spread across public and private clouds.

Microsoft also is introducing enhancements to its web platform with Internet Information Server 7.5 and supporting ASP.Net on its Server Core, a scaled down installation of Windows Server 2008. The platform will rely heavily on PowerShell, Microsoft's new scripting technology.

And Microsoft is developing new technologies such as server applications virtualisation so users can create application and operating system images that can be combined on the fly for quick and efficient deployment on any cloud-based environment.

Microsoft said public clouds will eventually take on a number of key characteristics.

"Availability will be king," Kelly said. He also said public clouds will be distributed and heterogeneous in nature and be able to expand and contract.

"These four elements are absolutely critical to deliver that service, that foundation to the private cloud," Kelly said.

VelMurugan

Sun updates Solaris with Nehalem

Sun Microsystems has released a new version of its venerable Solaris server operating system - expected to be the last release before the company's takeover by Oracle.

Enhancements include features that boost performance and lower energy consumption for servers powered by Intel's new Nehalem-class Xeon processors, and improvements to virtualisation and storage.

The next major release of Solaris, version 11, is scheduled for the middle of next year, says Larry Wake, group manager for Solaris marketing at Sun. He declined to speculate on the fate of Solaris after the Oracle acquisition.

After agreeing to buy Sun earlier this month, Oracle CEO Larry Ellison called Solaris "by far the best Unix technology available in the market" and the "heart of the business" for Sun.

Sun released Solaris in 1992 as a successor to its earlier SunOS. The then-proprietary operating system was historically tightly integrated with Sun's Sparc CPUs, though that has changed.

"We want to make sure Solaris runs well on the boxes we make, but also make sure it works on everybody else's systems," Wake said.

That commitment resulted in features tailored for Intel's new Nehalem CPUs, such as a "power-aware dispatcher" that enables Solaris to aggregate workloads onto the fewest CPU cores needed and then turn off the rest, he said.
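The idea behind such a dispatcher, packing runnable work onto as few cores as possible so the idle ones can be powered down, can be sketched as a greedy first-fit bin-packing pass. This illustrates the concept only, not Solaris's actual scheduler; the task loads and unit core capacity are invented:

```python
def consolidate(task_loads, core_capacity=1.0):
    """Greedy first-fit: place each task (a fraction of one core's
    capacity) on the first core with room, opening a new core only
    when none fits. Returns how many cores are left busy; the rest
    could be powered down."""
    cores = []  # per-core utilisation
    for load in sorted(task_loads, reverse=True):
        for i, used in enumerate(cores):
            if used + load <= core_capacity:
                cores[i] = used + load
                break
        else:
            cores.append(load)
    return len(cores)

# Eight tasks needing about 2.2 cores' worth of CPU pack onto 3 cores,
# so an 8-core machine could power down the remaining 5:
print(consolidate([0.5, 0.4, 0.4, 0.3, 0.2, 0.2, 0.1, 0.1]))  # 3
```

A real dispatcher must also weigh migration costs and cache locality, which is why consolidating work in the scheduler rather than by hand is the harder part of the feature.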

Solaris now can also offload network traffic processing from the CPU onto compliant network cards.

Sun released Solaris 10 early in 2005, at the same time it announced a plan to gradually make the OS fully open source. Its last major update to Solaris was in mid-2006, when it introduced its 128-bit ZFS file system and its Containers lightweight virtualisation technology, and also bundled the PostgreSQL open-source database.

Both ZFS and Containers were also updated with this release. ZFS now has a high-speed cloning feature that lets a new server point to existing data rather than copy and write a second, unnecessary time. That can speed up the creation of virtual machines under Containers, Wake said.

ZFS also can now recognise solid-state disks (SSD) and take advantage of their faster speed.

Sun officially released OpenSolaris last year. Some analysts speculate that Oracle might want to merge OpenSolaris with Linux. Wake declined to comment.

VelMurugan

Windows 7 could be out as early as October

Forget talk about next year: there are indications that Windows 7 may be available as soon as October.

Speaking at an Acer press launch, a company representative was quoted as saying that a new Windows 7 PC called the Z5600 would be one of the company's key products in the run-up to the Christmas holiday period, and would be on store shelves on 23 October.

Microsoft has so far refused to commit to an exact release date for Windows 7, but speculation has suggested that the company would release it later this year, rather than early 2010 as some executives have been predicting. The operating system was made available as a beta in January and a release candidate (RC) is now with developers.

The Windows 7 RC will be available to the public next week, and those who download it will be able to use it for free for a year.

The distribution of the RC is one of the last steps before the Windows 7 code is locked down and sent off to manufacturers ahead of its commercial release.
