HP-Microsoft Cloud Pact Targets Large Enterprises

It is not known how many customers have signed on to Microsoft's Office 365 service since it launched nearly six months ago, but Office division president Kurt DelBene said last month that 90 percent of them are small businesses. Gunning for the largest corporations and government agencies, Microsoft and Hewlett-Packard said they will jointly offer Office 365 with the HP Enterprise Cloud Services portfolio.

The two companies announced a four-year partnership in which HP will host Microsoft's Exchange, SharePoint and Lync at its datacenters and resell the subscription-based Office 365. The pact is aimed at organizations with more than 5,000 seats, Patricia Wilkey, HP's global director of marketing for workplace services, told me this week.

Enterprises of that size are typically not looking to migrate their entire user bases to public cloud services like Office 365, for reasons that include governance, compliance and the need for specific service levels. By bundling Office 365 with a private cloud implementation of Exchange, SharePoint and Lync, the two companies argue they can offer these large customers an integrated hybrid cloud offering.

"It is a solution at a private cloud level, still allowing rapid scalability, but it is designed to meet segregation of data needs, a single governance model, auditing rights, a higher SLA for customers who might need immediate real-time collaborative access to business applications and support of multiple applications," Wilkey said.

Wilkey said the private-public cloud integration delivered through HP Enterprise Cloud Services would allow for a seamless user experience, including looking up free/busy time on calendars, sharing SharePoint content, combining directories and other interactions among users.

HP hasn't announced any large enterprise wins from this pact, though Wilkey insists there are numerous interested parties.

Posted by Jeffrey Schwartz on December 14, 2011


Amazon Wins Cloud Storage Shootout, Microsoft Places Second

Amazon Web Services came out on top in a 26-month stress test of 16 cloud storage providers that measured scalability, availability, stability and performance.

The company's Simple Storage Service (S3) was one of only six that made the cut, with Microsoft's Windows Azure coming in second. The tests were conducted by Nasuni, a provider of premises-based network attached storage (NAS) gear that uses cloud storage providers for primary storage backups and/or disaster recovery.

While this benchmark is based on one vendor's assessment aligned with its own criteria and service-level requirements, it is the first I have come across that has measured cloud storage providers over a prolonged period of time and publicly disclosed its findings.

In addition to Amazon and Microsoft, AT&T, Nirvanix, Peer1 Hosting and Rackspace all passed Nasuni's stress test. The company declined to name the 10 that didn't make the cut, noting that as those providers mature their offerings, they stand a good chance of passing the tests in the future.

"The large providers certainly have a leg up with regard to economies of scale and tenure of performance," said Jennifer Sullivan, Nasuni's VP of marketing, in an e-mail. "We'll continue to monitor a variety of cloud providers, and as adoption of the cloud increases (e.g., different use cases for the use of cloud in organizations emerge), this will help shape what cloud storage has to become to be adopted and embraced by the enterprise."

Nasuni has maintained that the cloud is merely a component of an overall storage solution, particularly for enterprises with distributed locations. Nasuni's on-premises storage controller, which uses the cloud as a target for data, provides added security and access control.

When offering its solution, Nasuni chooses a cloud storage provider for a customer that will meet the service-level agreements at any given time. "We choose the cloud provider and we can also migrate providers if we feel that one provider offers better performance," Sullivan said, likening the process to computer makers that choose hard disk drives for customers. "We dedicate the cycles to evaluating the providers so our customers don't have to."

Here are some findings from the report:

  • Writing large files: Windows Azure had the highest average speed at 2.38 MB per second, with Nirvanix close behind at 2.32 MB per second. The rest of the six had similar speeds except for Peer1, whose average write speed was 1.49 MB per second.
  • Reading large files: Nirvanix was fastest at 13.3 MB per second, with Windows Azure close behind at 13.2 MB per second. Amazon posted 11.28 MB per second.
  • Writing medium-sized files: Windows Azure led at 2.1 MB per second, followed closely by Amazon S3 at 2.0 MB per second. The remainder came in 28 to 70 percent slower.
  • Reading medium-sized files: Amazon significantly outpaced everyone else at 9.2 MB per second. Microsoft came in second, though 28 percent slower, at 6.6 MB per second (see the sketch after this list for how these percentage gaps are derived).
  • Reading small files: Amazon S3, at 387 files per second, was 41 percent faster than its nearest rival, AT&T.
  • Writing objects: Windows Azure led with 154 files per second, with Amazon S3 coming in second at 135 files per second and AT&T third at 98 files per second. The remaining three were much slower.
  • Outages: Amazon had the fewest, averaging only 1.4 per month with insignificant durations, for an uptime of nearly 100 percent. (Those who experienced some of its major disruptions earlier this year, including April's four-day outage, may beg to differ.) Microsoft had 11.1 outages per month with an overall uptime of 99.9 percent. Peer1 had 6.8 outages per month, Rackspace 10.3 and AT&T 10.4 (though it posted uptime of 99.5 percent). Nirvanix was less fortunate with 332, though the outages apparently were not significant since its uptime still came in at 99.8 percent.
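
For readers who want to check the math, here is a minimal sketch of how the percentage gaps above appear to be derived from the quoted throughput figures. It is my own illustration, not part of the Nasuni report.

    # How the "percent slower/faster" figures in the bullets can be reproduced.
    def percent_slower(leader, other):
        """How much slower `other` is than `leader`, as a percentage of the leader."""
        return (leader - other) / leader * 100.0

    # Reading medium-sized files (MB per second): Amazon S3 at 9.2, Windows Azure at 6.6.
    print(f"{percent_slower(9.2, 6.6):.0f}% slower")    # -> 28% slower, matching the bullet

    # Reading small files: Amazon S3 hit 387 files per second and was said to be 41 percent
    # faster than AT&T, which implies AT&T managed roughly 387 / 1.41 files per second.
    print(f"about {387 / 1.41:.0f} files/sec")          # -> about 274 (a derived estimate)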

Sullivan said it will be interesting to see whether Amazon holds the top spot, noting that Microsoft has a good chance of taking the lead. "Time will tell," she said. A copy of the report is available for download from Nasuni.

Posted by Jeffrey Schwartz on December 13, 2011


Tier 3 Adds .NET to Cloud Foundry

VMware's open source Cloud Foundry Platform as a Service (PaaS) is getting an unlikely addition: support for Microsoft's .NET Framework.

It's not coming from VMware but from cloud provider Tier 3, which announced it is contributing its own .NET fork of Cloud Foundry to the open source community. The fork will allow developers to port their .NET applications to Cloud Foundry.

The move comes just one day after Microsoft announced an upgraded release of its PaaS -- the Windows Azure platform -- which among other things includes a preview of its Hadoop connectors, a Node.js software development kit (SDK) and a JavaScript plug-in for Eclipse developers.

Bellevue, Wash.-based Tier 3 will contribute its .NET fork of Cloud Foundry, called Iron Foundry, as well as its Windows version of the Cloud Foundry Explorer and a Visual Studio plug-in for Cloud Foundry. Tier 3 is making the code available at ironfoundry.org and at GitHub under an Apache 2 license.

"Because developers can run their own instances of Iron Foundry in-house or with any service provider who supports it, developers finally have a truly open, interoperable .NET PaaS solution that can be run inside and outside the firewall," said a company blog post. "And because you can run your own instances of Iron Foundry, it's easy to have a full test, QA, and staging environment before pushing to production. In addition, operations teams now have the freedom to choose among various service providers that meet their needs in areas such as security, compliance, availability, location, etc."

In a bid to accelerate adoption of its .NET fork of Cloud Foundry, Tier 3 is offering developers trial usage consisting of one Web and one database instance for 90 days, running on the company's cloud platform.

Cloud Foundry, launched in April, appears to be gaining momentum: developers rated it the top cloud PaaS in a survey released by Evans Data Corp. last month. Cloud Foundry is designed to run Spring, Rails, Node.js and Scala applications, and the addition of .NET support should only broaden its appeal.

Posted by Jeffrey Schwartz on December 13, 2011


Survey Finds Cost Savings from Cloud Elusive for Some

Access to data from multiple mobile devices outweighs cost savings as a justification for deploying cloud-based solutions.

That's the rather curious finding from a study released this week by CSC, the global integrator based in Falls Church, Va. According to CSC's Cloud Usage Index, a report based on a survey of 3,645 IT decision makers in eight countries, 33 percent cited access to data from mobile devices as the primary reason for adopting cloud computing. Only 17 percent said reducing costs was the most important reason for moving to the cloud.

Meanwhile, 82 percent said their cloud efforts have reduced their IT costs, but in many cases those savings are minimal. In the United States, 23 percent of enterprises and 45 percent of small businesses with fewer than 50 employees said they were not saving any money at all, while 35 percent of U.S. organizations said their savings were less than $20,000.

"Although requirements for business agility and cost savings certainly factor in, neither is the single most important driver for cloud adoption," according to the report. At the same time, "in terms of overall IT performance, an overwhelming 93 percent of respondents say cloud improved their data center efficiency/utilization or another measure. And 80 percent see improvements like these within six months of moving to the cloud."

Among the other findings in CSC's Cloud Usage Index:

  • 14 percent downsized their IT departments after moving to the cloud.
  • 20 percent of organizations hired more cloud experts.
  • 65 percent signed on for subscriptions lasting more than one year.
  • 64 percent reported the cloud has helped lower energy use.
  • 48 percent of U.S. government agencies moved at least one workflow to the cloud in line with the "cloud-first" initiative.

Despite the minimal overall IT savings, 47 percent said they saw lower operating costs after moving to the cloud.

I find it surprising that cost savings isn't a higher priority and outcome. I'd be curious to hear if the IT cost savings are more important and substantial in your organization than CSC's Cloud Index suggests. Leave a comment below or drop me a line at [email protected].

Posted by Jeffrey Schwartz on December 07, 2011


Cisco Outlines Cloud Framework

Like every IT vendor these days, Cisco Systems has talked up the cloud for some time. But now, it has a new umbrella cloud strategy.

The networking giant on Tuesday outlined its framework aimed at tying together private, hybrid and public clouds using its network gear, datacenter infrastructure and apps and services.

The framework, called CloudVerse, is designed to enable customers and partners to construct, connect and manage public, private and hybrid clouds, as well as cloud-based applications.

CloudVerse brings together three of the company's key segments:

  • Unified Data Center, which includes integrated servers, access networking, storage networking and the management of those components.

  • Cloud Intelligent Network, the networking components and management infrastructure aimed at providing connectivity and automation among multiple clouds, including its Nexus and Catalyst switches, and other routers, firewalls and related hardware and software.

  • Cloud Applications and Services, Cisco's portfolio of cloud collaboration offerings, including its WebEx and TelePresence services.

The launch of CloudVerse comes just one week after Cisco released its first Cloud Index Report, in which it forecast twelvefold growth in cloud computing traffic between 2010 and 2015, to 1.6 zettabytes. Cisco sees an opportunity to use its position as a leading supplier of network automation gear to bring together stove-piped clouds and datacenters.

"Until now cloud technology resided in silos, making it harder to build and manage clouds, and to interconnect multiple clouds, posing critical challenges for many organizations," said Padmasree Warrior, Cisco's senior VP of engineering and chief technology officer, in a statement.

Cisco announced several cloud providers and enterprises that are already using CloudVerse, including Fujitsu, LinkedIn, Qualcomm, Silicon Valley Bank, Verizon's Terremark subsidiary and Xerox's Affiliated Computer Services (ACS).

While CloudVerse is largely a framework that ties together existing products and services, Cisco also announced some new offerings intended to advance its goal of linking those cloud silos.

Among them are Cisco Intelligent Automation for the Cloud, an offering that includes automated cloud provisioning and an on-demand orchestration engine; Cisco Network Services Manager 5.0, which lets organizations combine existing network and cloud resources into a multi-tenant datacenter architecture; and its Cloud-to-Cloud Connect based on Cisco's Network Positioning System (NPS), a technology that exposes network intelligence to the cloud.

NPS will be included with Cisco's forthcoming Aggregation Services Routers 1000 and 9000, due out next year. Cisco said the new routers will provide network automation between datacenters and clouds.

Posted by Jeffrey Schwartz on December 06, 2011


SAP Makes Cloud Play with Deal To Acquire SuccessFactors

Over the weekend, SAP announced it has agreed to acquire SuccessFactors, a provider of cloud-based human capital management solutions, for $3.4 billion.

The deal represents a 52 percent premium over SuccessFactors' closing share price on Friday. SAP, known primarily for its premises-based line-of-business and ERP software, is hoping the acquisition will propel its push into the cloud.

"SAP's cloud strategy has been struggling with time-to-market issues, and its core on-premises HR management software has been at a competitive disadvantage with best-of-breed solutions in areas such as employee performance, succession planning, and learning management," said Forrester analyst Paul Hamerman in a blog post. "By acquiring SuccessFactors, SAP puts itself into a much stronger competitive position in human resources applications and reaffirms its commitment to software-as-a-service as a key business model."

Hamerman noted that SAP's subscription revenue has been flat for the first nine months of the year, representing only 3.7 percent of software revenues. With SuccessFactors' 42 percent growth last quarter, he said SAP's SaaS effort -- which includes Business ByDesign (ERP), Sales OnDemand (CRM), Carbon Impact OnDemand (sustainability), Sourcing OnDemand and Travel OnDemand (expense reporting) -- should accelerate.

SAP said SuccessFactors, which has 3,500 customers in 168 countries, is forecast to generate $400 million in revenue in 2012 and grew its revenue 59 percent during the first nine months of this year.

SuccessFactors will operate as an independent SAP business unit, much as database and mobile integration software provider Sybase is run. In addition to heading the new subsidiary, Lars Dalgaard, SuccessFactors founder and CEO, will lead SAP's overall cloud strategy. "Now is the time to take this game to the next level," Dalgaard said on a conference call for analysts Saturday.

"They will provide leadership and expertise to accelerate our cloud strategy," added SAP co-CEO Bill McDermott. "They truly understand the go-to-market dynamics in this fast evolving cloud space, and are one of the fastest growing cloud companies based on 10 years of on demand expertise."

Posted by Jeffrey Schwartz on December 05, 2011


Cisco Forecasts Twelvefold Increase in Cloud Traffic by 2015

According to Cisco's first Global Cloud Index Report released this week, cloud computing traffic will reach 1.6 zettabytes by 2015, a twelvefold increase over last year's traffic, which topped 166 exabytes.

That translates to a 66 percent compound annual growth rate (CAGR). Cloud traffic today represents 11 percent of overall datacenter traffic, which Cisco says is growing at a 33 percent CAGR and is expected to reach 4.8 zettabytes by 2015 (a zettabyte is 1 trillion gigabytes). By then, the cloud will account for 33 percent of datacenter traffic, according to Cisco's forecast.
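
As a quick sanity check on that arithmetic (my own sketch, not Cisco's), the CAGR implied by a twelvefold increase over five years can be computed directly; the small gap between the result and Cisco's stated 66 percent presumably comes from rounding in the underlying traffic figures.

    # Compound annual growth rate implied by an overall growth multiple over `years`.
    def cagr(multiple, years):
        return multiple ** (1.0 / years) - 1.0

    # A twelvefold increase between 2010 and 2015 (five years of growth):
    print(f"{cagr(12, 5):.1%}")   # -> 64.4%, in the neighborhood of Cisco's stated 66%

    # Cloud share of total datacenter traffic in 2015, from the figures above:
    print(f"{1.6 / 4.8:.0%}")     # -> 33%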

Cisco said in a whitepaper that it gathered data such as server shipments from a number of analyst firms, from which it calculated workloads by type and implementation. The company also assembled network statistics from 10 enterprises and Internet centers.

Here are some of Cisco's other findings:

  • The number of workloads per installed traditional server will increase from 1.4 in 2010 to 2.0 in 2015.
  • The number of workloads per installed cloud server will increase from 3.5 in 2010 to 7.8 in 2015.
  • By 2014, more than 50 percent of all workloads will be processed in the cloud.
  • In 2015, global cloud IP traffic will reach 133 exabytes per month.

All of this is further validation of a significant transition of workloads from the datacenter to the cloud, but Cisco doesn't see the in-house systems going away anytime soon. Rather, the cloud will take up a substantial chunk of workloads and storage in the coming years.

Are these findings by Cisco consistent with where you see your organizations headed? Leave a comment below or drop me a line at [email protected].

Posted by Jeffrey Schwartz on November 30, 2011


Would You Heat Your Home with a Cloud Data Furnace?

Are you frustrated by the high cost of heating your home? With the winter weather arriving in many parts and furnaces kicking into high gear, once again we can look forward to exorbitant bills for oil or natural gas.

If you can't justify the hefty investment in solar panels or other alternative energy sources, would you consider replacing that furnace with a cabinet full of servers, storage and network gear?

That's what researchers at Microsoft and the University of Virginia are proposing. They have introduced the concept of the Data Furnace, or DF for short: cabinets of servers that heat homes and office buildings while reducing operational costs for cloud providers by offloading at least some of the expense of running servers in large datacenters, which consume huge amounts of energy and require substantial cooling.

Stocked with the servers that power cloud computing infrastructures, these DFs can generate enough heat to act as a primary heating system in a home or building, the researchers proposed in a paper presented in June at the annual USENIX Workshop on Hot Topics in Cloud Computing in Portland, Ore.

Though the paper got little attention at the time, New York Times columnist Randall Stross wrote about it in his popular Digital Domain column Sunday, thereby exposing the idea to a mass audience.

The authors outlined three primary benefits for cloud providers deploying DFs in homes and office buildings: a reduced carbon footprint, lower total cost of ownership per server, and the ability to bring compute and storage closer to users (by caching data locally, for example). A DF has a footprint similar to that of a typical furnace: a metal cabinet linked to ducts or hot-water pipes.

"DFs create new opportunities for both lower cost and improved quality of service, if cloud computing applications can exploit the differences in the cost structure and resource profile between Data Furnaces and conventional data centers," the authors wrote. "Energy efficiency is not only important to reduce operational costs, but is also a matter of social responsibility for the entire IT industry."

A typical server farm generates exhaust heat of 104 to 122 degrees Fahrenheit (40 to 50 degrees Celsius). While that is not hot enough to regenerate electricity efficiently, it is ideal for powering heating systems, clothes dryers and water heaters, the authors wrote.

Cheaper servers, improved network connectivity and advances in systems management also make this a practical notion, thanks to the ability to remotely re-image or reboot a server.

Still, there are obstacles. Residential electricity to power a DF can cost anywhere from 10 to 50 percent more than what cloud providers pay at an industrial site. Network bandwidth to the home could also be more costly, and maintaining geographically dispersed systems becomes more complex and expensive.
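
To put that electricity premium in perspective, here is a rough back-of-the-envelope sketch. The cabinet wattage and baseline rate are my own illustrative assumptions, not figures from the paper; only the 10 to 50 percent premium comes from the text above.

    # Rough annual electricity-cost penalty for a hypothetical Data Furnace cabinet.
    # The 8 kW draw and $0.07/kWh industrial rate are illustrative assumptions.
    CABINET_POWER_KW = 8.0
    HOURS_PER_YEAR = 24 * 365
    INDUSTRIAL_RATE = 0.07           # assumed $/kWh at an industrial site

    annual_kwh = CABINET_POWER_KW * HOURS_PER_YEAR

    for premium in (0.10, 0.50):     # the 10 to 50 percent residential premium cited above
        extra = annual_kwh * INDUSTRIAL_RATE * premium
        print(f"{int(premium * 100)}% premium -> roughly ${extra:,.0f} more per year")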

Co-author Kamin Whitehouse, an assistant professor of computer science at the University of Virginia, told Stross that he has received more response to this paper than he typically gets when publishing scientific work. In fact, he said he has heard from some people who are already heating their homes with servers, "which shows that it works."

While it may work, I'd like to see some cloud providers try this out, find out how well it works in a home or office building and determine what the total economics are. It seems reasonable for the industry to seriously evaluate the concept.

Posted by Jeffrey Schwartz on November 30, 2011


Skytap Extends VM Portability with Open Virtualization Support

Cloud provider Skytap is looking to simplify the use of its service, particularly when it comes to compatibility with in-house datacenters.

Skytap said it is providing support for the Open Virtualization Format (OVF), a Distributed Management Task Force (DMTF) standard for packaging and distributing virtual machines.

By supporting OVF, Skytap will give users of its cloud service an efficient and flexible way to import and export existing virtualized configurations without making changes to them, said Brett Goodwin, the company's VP of marketing and business development. That means the service will support the VHD format used by Microsoft's Hyper-V, Amazon Web Services' Amazon Machine Image (AMI) format, the Xen disk image format and the QEMU format (qcow2) associated with KVM.

Until now, Skytap users were confined to using VMware's VMDK file format. "It [OVF] improves the portability and decreases the platform dependence, and it also allows IT to leverage a common set of tools when they are working with VM workload software configurations on their end in the private infrastructure and on the hybrid and public cloud," Goodwin said.
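
For those unfamiliar with the format, an OVF package is essentially a directory or archive containing an XML descriptor that references the virtual disks and describes the virtual hardware. The snippet below is a generic sketch of reading such a descriptor; it is not Skytap tooling, and the file name is a placeholder.

    # List the files and virtual systems referenced by an OVF 1.x descriptor.
    # Generic illustration only -- not Skytap's import/export tooling.
    import xml.etree.ElementTree as ET

    OVF_NS = "http://schemas.dmtf.org/ovf/envelope/1"   # OVF 1.x envelope namespace
    ns = {"ovf": OVF_NS}
    href_attr = "{%s}href" % OVF_NS   # ElementTree qualifies attributes as {namespace}name
    id_attr = "{%s}id" % OVF_NS

    envelope = ET.parse("appliance.ovf").getroot()       # placeholder descriptor file name

    # Referenced files -- typically the .vmdk or .vhd disk images in the package
    for f in envelope.findall("./ovf:References/ovf:File", ns):
        print("file:", f.get(href_attr))

    # Virtual systems (the machines the descriptor defines)
    for vs in envelope.findall(".//ovf:VirtualSystem", ns):
        print("virtual system:", vs.get(id_attr))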

In addition, Skytap has added advanced notifications aimed at alerting both end users and IT when thresholds, such as compute or storage usage limits, are exceeded. The capability is intended to avoid surprise bills, Goodwin explained. IT can set customized alerts that inform administrators when users approach certain usage thresholds, such as 90 percent of a budgeted storage quota.
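
Conceptually the new alerts amount to a simple quota check. The sketch below illustrates the idea generically; it does not use Skytap's API, and the numbers and field names are assumptions.

    # Generic usage-threshold check -- an illustration of the concept, not Skytap's API.
    def check_quota(used_gb, quota_gb, warn_at=0.90):
        """Return an alert string once usage crosses the configured fraction of quota."""
        fraction = used_gb / quota_gb
        if fraction >= warn_at:
            return f"ALERT: {fraction:.0%} of the {quota_gb} GB storage quota is in use"
        return None

    print(check_quota(used_gb=460, quota_gb=500))   # hypothetical usage -> ALERT: 92% ...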

The company has also added self-healing network automation for those running hybrid cloud deployments, which are quite common among its customer base, Goodwin said. The self-healing features include auto-detecting VPN connection failures and automatically re-establishing those links.

Goodwin said Skytap's service is primarily used for application development and testing, though it is also used for product and proof-of-concept demonstrations, as well as for IT and technical training.

Posted by Jeffrey Schwartz on November 29, 2011


AT&T Adds PaaS to Its Cloud Portfolio

AT&T has extended its cloud portfolio with a Platform as a Service (PaaS) offering aimed at letting business users, enterprise developers and ISVs build, test and run their apps in the telco's hosted environment.

Launched this week, AT&T Platform as a Service will allow application developers and tech-savvy business people to build and deploy apps using either AT&T-provided tooling or Eclipse-based development tools. Those using AT&T's Web-based tools and templates don't require coding expertise, according to AT&T.

The tools consist of templates that allow either custom development or the use of 50 pre-built apps. The tools also allow developers to configure their apps for mobile devices and add social networking features. The service is built on LongJump's PaaS, a Java-based platform that provides templates enabling non-technical users to build line-of-business apps and lets developers build custom applications with Eclipse-based tools.

AT&T's entrée suggests the PaaS market is poised to mature, according to Forrester analyst Stefan Ried. "AT&T has the potential to get into a real volume business with this offering bridging the gap between consumer style services and corporate usage of PaaS -- similar to what Google managed around email and the rest of Google's applications," Ried wrote in a blog post.

Will telecom giants such as AT&T and Verizon ultimately seize a big piece of the PaaS pie? While they have the advantage of their robust network infrastructures, players such as Google, Microsoft, Red Hat and VMware have aggressive plans with their own PaaS offerings. But the telcos promise to make it an even more heavily contested battle in 2012.

Posted by Jeffrey Schwartz on November 17, 2011


CA To Perform Cloud Assessments with Cloud 360

CA Technologies wants to help enterprises determine what applications may be suited to move to the cloud.

The company launched Cloud 360 at its annual CA World conference, which took place this week in Las Vegas. Cloud 360 is a portfolio of consulting services bundled with CA's software to model and perform cost-benefit and performance analyses of moving apps to the cloud. It also is intended to help customers develop migration plans.

"It lets CIOs determine which apps or services they want to move to the cloud and which cloud they want to move them to, if any," said Andi Mann, CA's VP of strategic solutions. "Some apps and some services will never go to the cloud. This gives CIOs a real deterministic model for understanding what's in their portfolio that they might be able to get a benefit from moving to the cloud."

Among other things, Mann explained Cloud 360 will let CIOs understand what sort of performance, service levels, security and cost and reliability criteria they need to consider. It will match that up against different cloud options -- such as public clouds, private clouds and Software as a Service -- and it will let customers simulate and model their chosen apps and cloud environments to determine if the service they're considering is suited for their requirements, Mann said.

Since the outcome will ultimately result in customers buying CA's various software offerings, the service will appeal most to customers comfortable with going that route. The offering starts with an app portfolio analysis consisting of a one-day workshop, followed by CA's Application Discovery and Portfolio Analysis conducted by the company's consultants using CA's Clarity PPM On Demand tooling.

Once it is determined what apps will be moved to the cloud, CA will help determine service-level agreement requirements using its CA Oblicore On Demand service-level management software. Among other CA wares to be used in helping simulate and determine capacity and virtualization requirements are CA Capacity Management and Reporting Suite, CA Virtual Placement Manager and CA LISA Suite.

The company also launched two new identity and access management (IAM) security services aimed at providing single sign-on to internal and cloud-based applications. Both are cloud-based services that provide access to apps delivered by online providers such as Salesforce.com as well as premises-based systems.

CA IdentityMinder as-a-Service offers password management, user provisioning, access-request management, and reporting and auditing. CA FedMinder as-a-Service provides cross-domain single sign-on; it supports the SAML 2.0 standard and includes policy management capabilities.

Also at CA World, the company launched the Cloud Commons Marketplace and Developer Studio. The Cloud Commons Marketplace is a portal that lets ISVs put their applications up for sale. "This is essentially going to be an app store for the enterprise," Mann said. "Enterprises can go up onto the cloud commons marketplace and buy them and service providers can host them."

The Cloud Commons Developer Studio is a free service that allows developers to build and test apps using the CA AppLogic platform.

Posted by Jeffrey Schwartz on November 16, 2011


ScaleXtreme Extends Management of Multiple Clouds

ScaleXtreme, a company that lets IT administrators and service providers manage public and private clouds, this week updated its service to allow customers to model, configure and launch servers.

The company's new Dynamic Server Assembly lets IT pros who use ScaleXtreme's Web-based Xpress and Xpert services build templates that represent how a machine is built, rather than binding it to a specific cloud provider or virtual machine stack. Administrators can use those templates to manage systems and apps running on multiple public and private clouds.

"We are talking about a new way of modeling and templating machines that allows you to build a canonical expression of a machine and instantiate that on one or more cloud providers so the machine effectively gets built on demand," said ScaleXtreme CEO and Co-Founder Nand Mulchandani.

ScaleXtreme competes with cloud management providers such as RightScale, though Mulchandani argues that his company is better suited for managing both internal private clouds and public clouds. ScaleXtreme itself is a cloud-hosted service and puts agents on internal servers, allowing IT admins or service providers to create and manage virtual machine templates and VMs; start and stop VMs; and, at the OS layer, configure, patch, audit, monitor and remotely access the system.

The service provides consolidated views of multiple cloud services and internal servers and allows admins to browse the file system; monitor and graph OS metrics; and store, edit and run automation scripts in the cloud.

ScaleXtreme offers a free version of its service, which is limited to one administrator and one cloud. A paid service, which costs $15 per month for each server, provides management of an unlimited number of clouds and administrators.

ScaleXtreme manages clouds from Amazon Web Services, Rackspace and those based on OpenStack and VMware's vCloud. The company last month added support for Citrix Systems' CloudStack. With its support for CloudStack, which Citrix picked up earlier this year with its acquisition of Cloud.com, ScaleXtreme claims it can now manage most public and private clouds.

"We probably cover 80 to 90 percent of the footprint of public clouds or semi-private clouds that you can buy capacity from," Mulchandani said. Among those they don't cover are Eucalyptus and Microsoft's Windows Azure and Hyper-V.

"What Microsoft does not have that the other players have in the market have is a templating, cataloging API layer that allows you to programmatically access all the functions of Hyper-V so you can do things like provision machines and manage machines through the APIs," Mulchandani said. He believes once Microsoft delivers a new capability in its System Center 2012 called System Center App Controller 2012,  code-named "Project Concero", that those barriers to managing Azure and HyperV will be lifted. Microsoft released System Center App Controller to beta late last month and said it expects it to be commercially available in the first half of 2012.

Posted by Jeffrey Schwartz on November 16, 2011