eXo Readies Service That Ties Collaboration and Social Networking

A number of companies are looking to offer cloud-based document collaboration services; among the most prominent, of course, are Box and Dropbox. Both are attempting to provide some of the core collaboration features offered in Microsoft SharePoint.

The latest company to enter those sweepstakes is eXo, a French company that last year set up shop in San Francisco after inking a relationship with Red Hat Software, which sourced eXo's technology as the basis of its JBoss Enterprise Portal Platform. As announced this week, eXo is now gunning to meld document collaboration with enterprise social networking.

SharePoint has some social networking features, and Microsoft is expected to beef those up with the anticipated release of the next version, code-named SharePoint 15. While Box and Dropbox have carved out their niche with cloud-based storage and document sharing services, and have plenty of venture money to keep them afloat for a long time, offerings such as Jive Software, Salesforce.com's Chatter and Yammer are better known for their enterprise social networking capabilities.

If eXo has its way, it will attract enterprise customers by offering the best of both worlds. The company has released a free beta service called eXo Cloud Workspaces. Benjamin Mestrallet, eXo's founder and CEO, said the problem with services such as Box and Dropbox is that they allow individuals to share enterprise information without giving IT any control over the data. Though it's easy for users to move data onto those services, "what happens if you need to bring documents back? How can you ensure documents and information can be brought back?" Mestrallet said.

Based on eXo Platform 3.5, the company's premises-based collaboration platform, eXo's Cloud Workspaces is a multitenant platform as a service, according to Mestrallet, who said the company spent two years re-architecting a multitenant version that lets customers share Java Virtual Machines (JVMs) and application servers and spin up additional ones elastically as needed. "Hence we can get density that is 100 times greater than traditional PaaS," Mestrallet claimed.

Cloud Workspaces features a document management repository that lets users organize, tag and share documents with version control. It offers an enterprise wiki that communities can share, activity feeds, and native applications for iOS and Android devices. The company also offers a cloud-based IDE. The ultimate goal is to let IT link content in Cloud Workspaces to eXo Platform 3.5.

During the beta period, eXo is offering customers unlimited storage space, and the service is open to anyone who wants to test it. The company is not saying how much it will charge once the service is commercially released.

Posted by Jeffrey Schwartz on April 11, 2012 | 0 comments


HP Sets Launch of Public Cloud Service

More than a year after promising to offer a public cloud service, Hewlett-Packard today is officially jumping into the fray.

After an eight-month beta test period, the company said its new HP Cloud Services, a portfolio of public cloud infrastructure offerings that includes compute, object storage, block storage, relational database (initially MySQL) and identity services, will be available May 10. As anticipated, HP Cloud Services will come with tools for Ruby, Java, Python and PHP developers. While there is no shortage of infrastructure as a service (IaaS) cloud offerings, HP's entry is noteworthy because the company is one of the leading providers of computing infrastructure.

The IT industry is closely watching how HP will transform itself under its new CEO Meg Whitman, and the cloud promises to play a key role in shaping the company's fortunes. As other key IT players such as Dell, IBM and Microsoft offer public cloud services, critics have pointed to HP's absence. The rise of non-traditional competitors such as Amazon Web Services and Rackspace in recent years has also put pressure on HP, whose servers, storage and networking gear become more dispensable as its traditional customer base moves to cloud services.

Considering more than half of HP's profits come from its enterprise business, the success of HP Cloud Services will be important to the company's long-term future as an infrastructure provider, presuming the trend toward moving more compute and storage to public clouds continues. By the company's own estimates, 43 percent of enterprises will spend between $500,000 and $1 million per year on cloud computing (both public and private) through 2020. Of those, nearly 10 percent will spend more than $1 million.

HP Cloud Services will also test the viability of the OpenStack Project, the widely promoted open source cloud computing initiative spearheaded by NASA and Rackspace and now sponsored by 155 companies with 55 active contributors. Besides Rackspace, HP is the most prominent company yet to launch a public cloud service based on OpenStack. Dell has also pledged support for OpenStack.

But with few major implementations under its belt, the OpenStack effort came under fire last week when onetime supporter Citrix pulled back in favor of its recently acquired CloudStack platform, which it contributed to the Apache Software Foundation. While Citrix hasn't entirely ruled out supporting OpenStack in the future, the move raised serious questions about OpenStack's current readiness.

Biri Singh, senior VP and general manager of HP Cloud Services, told me he didn't see the Citrix move as a reason for pause. Rather, he thought Citrix's move would be good for the future of open source IaaS. "You are seeing the early innings of basic raw cloud infrastructure landscape being vetted out," Singh said. "I think a lot of these architectural approaches are very similar. CloudStack is similar to the compute binding in OpenStack, which is Nova. But OpenStack has other things that have been built out and contributed [such as storage and networking]."

The promise of OpenStack is its portability, which will give customers the option to switch between services and private cloud infrastructure that support the platform. "Being on OpenStack means we can move between HP and Rackspace or someone else," said Jon Hyman, CIO of Appboy, a New York-based startup that has built a management platform for mobile application developers.

Appboy is an early tester of HP Cloud Services. Hyman said he has moved some of Appboy's non-mission critical workloads from Amazon to HP because he believes HP will offer better customer service and more options. "With Amazon Web Services, the support that you get as a smaller company is fairly limited," Hyman said. "You can buy support from Amazon but we're talking a couple of hundred dollars a month at least. With HP, one of the big features they are touting is good customer support. Having that peace of mind was fairly big for us."

The other compelling reason for trying out HP Cloud Services is the larger palette of machine instances and configurations. For example, Hyman wanted machines with more memory but didn't require more CPU cores. However, in order to get the added RAM with Amazon, he had to procure the added cores as well. With HP, he was able to add just the RAM without the cores.

Nonetheless, Amazon is responding by once again lowering its overall prices and offering more options, Hyman has noticed. "They are driving down costs and making machine configurations cheaper," he said. Hyman is still deciding whether to move more workloads from Amazon to HP and will continue testing HP's service. "Depending on how we like it we will probably move over some of our application servers once it is more battle tested," he said.

Today's launch is an important milestone for HP's cloud effort. Singh said the company is emphasizing a hybrid delivery model, called HP Converged Cloud, consisting of premises-based private cloud infrastructure, private cloud services running in HP datacenters and now the public HP Cloud Services. HP gives customers the option of managing its cloud offerings themselves or having them managed by the company's services business.

"It's not just raw compute infrastructure or compute and storage, it's a complete stack," Singh said. "So when an enterprise customer says 'I have a private cloud environment that is managed by HP, now if I need to burst into the public cloud for additional bandwidth or if I need to build out a bunch of new services, I can essentially step into that world as well.' What HP is trying to do is deliver a set of solutions across multiple distributed environments and be able to manage them in a way that is fairly unique."

With its acquisitions last year of Autonomy and Vertica, HP plans to offer a suite of cloud-based analytics services that leverage Big Data, Singh said. On the database side, HP is initially offering MySQL for structured data and plans NoSQL offerings as well.

Initially HP Cloud Services will offer Ubuntu Linux for compute with Windows Server slated to come later this year.

A dual-core configuration with 2 GB of RAM and 60 GB of storage will start at 8 cents per hour. Singh characterized the pricing of HP Cloud Services as competitive but said HP doesn't intend to wage a price war with Amazon (although since we spoke, HP announced it is offering 50 percent off its cloud services for a limited time). "I think the market for low cost WalMart-like scale clouds is something a lot of people out there can do really well, but our goal is to really focus on providing a set of value and a set of quality of service and secure experience for our customers that I think there is definitely a need for," Singh said.
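For context, that entry price works out to roughly $58 a month for an instance left running around the clock ($0.08 x 24 hours x about 30.4 days), presumably before any storage and bandwidth charges.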

Singh also pointed out that while HP will compete with Amazon, it also sees Amazon as a partner. "We work with them on a bunch of things and they are a customer," he said. "I think there is plenty of room in the market, and we have focused our efforts on people looking for a slightly alternative view on how basic compute and storage is provisioned."

Posted by Jeffrey Schwartz on April 10, 2012 | 0 comments


Citrix Dumps OpenStack for Amazon-Style CloudStack

Citrix turned heads Tuesday when it announced it's contributing the CloudStack cloud management platform it acquired last summer from Cloud.com to the Apache Software Foundation, with plans to release its own version of that distribution as the focal point of its cloud infrastructure offering.

The bombshell announcement didn't have to state the obvious: Citrix is dumping its previously planned support for OpenStack, the popular open source cloud management effort led by NASA and Rackspace. While Citrix didn't entirely rule out working with OpenStack in the future, there appears to be no love lost on Citrix's part.

While the announcement didn't even mention OpenStack, Peder Ulander, vice president of marketing for Citrix's cloud platforms group, said in an e-mailed statement that OpenStack was not ready for major cloud implementations, a charge that OpenStack officials disputed.

"Our initial plan was to build on top of the OpenStack platform, adopting key components over time as it matured," Ulander noted. "While we remain supportive of the intent behind OpenStack and will continue to integrate any appropriate technologies into our own products as they mature, we have been disappointed with the rate of progress OpenStack has made over the past year and no longer believe we can afford to bet our cloud strategy on its success."

So where does OpenStack fall short? The biggest problem is proven scale in production, Ulander said. "It's been a full year since we first joined OpenStack, and they still don't have a single customer in production," he said. "Despite all the good intentions, the fact remains that it is not ready for prime time. During the same year, CloudStack has seen hundreds of customers go into full production. These customers include some of the biggest brands in the world, collectively generating over $1 billion in cloud revenue today. No other platform comes close."

Jonathan Bryce, chairman of the OpenStack Project Policy Board and co-founder of Rackspace Cloud, told me Tuesday that's simply not true. OpenStack has a number of customers in production, such as Deutsche Telekom, the San Diego Supercomputer Center and MercadoLibre, a large e-commerce site in Buenos Aires that runs about 7,000 virtual machines on OpenStack. "We've got a broad range of users," Bryce said.

Another reason Citrix decided to focus on CloudStack was its compatibility with Amazon Web Services APIs. "Every customer building a new cloud service today wants some level of compatibility with Amazon," Ulander noted. "Unfortunately, the leaders of OpenStack have decided to focus their energy on establishing an entirely new set of APIs with no assurance of Amazon compatibility (not surprising, since OpenStack is run by Rackspace, an avowed Amazon competitor). We do not believe this approach is in the best interest of the industry and would rather focus all our attention on delivering a platform that is 'Proven Amazon Compatible.'"

But in his most stinging rebuke of OpenStack, Ulander questioned whether it will adhere to true open source principles. "Rather than build OpenStack in a truly open fashion, Rackspace has decided to create a 'pay-to-play' foundation that favors corporate investment and sponsorship to lead governance, rather than developer contributions," he said. "We believe this approach taints the openness of the program, resulting in decisions driven by the internal vendor strategies, rather than what's best for customers."

When I asked Bryce about Ulander's "pay-to-play" charge, he said the process to move stewardship from the auspices of Rackspace to an independent foundation is moving along and will be completed this year. Bryce denied any notion that OpenStack had a "pay for play" model.

"That's just false," Bryce said. "If you look at the process, look at how many contributors there are to the projects. "We are far and away the most inclusive in terms of the number of companies and the number of contributors who are making code submissions. We have large and small startups and big enterprise software companies who are all actively engaged in OpenStack."

But some say OpenStack is still immature relative to CloudStack. Gartner analyst Lydia Leong described it in a blog post as "unstable and buggy and far from feature complete." By comparison, she noted, CloudStack is "at this point in its evolution, a solid product -- it's production-stable and relatively turnkey, comparable to VMware's vCloud Director (some providers who have lab-tested both even claim stability and ease of implementation are better than vCD)."

Forrester analyst James Staten agreed, adding that OpenStack appeared to be a drag on Citrix. "Ever since Citrix joined OpenStack its core technology has been in somewhat of a limbo state," Staten said in a blog post. "The code in cloudstack.org overlaps with a lot of the OpenStack code base and Citrix's official stance had been that when OpenStack was ready, it would incorporate it. This made it hard for a service provider or enterprise to bet on CloudStack today, under fear that they would have to migrate to OpenStack over time. That might still happen, as Citrix has kept the pledge to incorporate OpenStack software if and when the time is right but they are clearly betting their fortunes on cloudstack.org's success."

Timing was clearly an issue, he pointed out. "For a company that needs revenue now and has a more mature solution, a break away from OpenStack, while politically unpopular, is clearly the right business decision."

Despite Citrix's decision to pull the rug out from under OpenStack, Bryce shrugged it off, at least in his response to my questions. "I don't by any means think that it's the death knell for OpenStack or anything like that," Bryce said. "I think the Apache Software Foundation is a great place to run open source projects and we will keep working with the CloudStack software wherever it makes sense."

When it comes to open source cloud computing platforms, OpenStack continues to have the momentum, with more than 155 sponsors and 55 active contributors, and adopters including AT&T, Canonical (the Ubuntu distributor), Cisco, Dell, Hewlett-Packard, Opscode and RightScale.

Analysts say Citrix had good business reasons for shifting its focus to CloudStack. Among them, Gartner's Leong noted, is that despite effectively giving away most of its CloudStack IP, the move should help boost sales of Citrix's XenServer. "Plus Citrix will continue to provide commercial support for CloudStack," she noted. "They rightfully see VMware as the enemy, so explicitly embracing the Amazon ecosystem makes a lot of sense."

Do you think it makes sense? Drop me a line at [email protected].

Posted by Jeffrey Schwartz on April 04, 2012 | 0 comments


Verizon's Terremark Joins Private Cloud Rush

There's a growing trend among providers of public cloud services to offer secure connectivity to the datacenter. The latest provider to do so is Verizon's Terremark, which this week rolled out Enterprise Cloud Private Edition.

Terremark, a major provider of public cloud services acquired last year by Verizon for $1.4 billion, said its new offering is based on its flagship platform but designed to run as a single-tenant environment for large corporate and government clients that require added security, perhaps to meet compliance requirements.

The company described Enterprise Cloud Private Edition as an extension of its hybrid cloud strategy, allowing customers to migrate workloads between dedicated infrastructure and public infrastructure services.

"Our Private Edition solution is designed to meet the strong customer demand we've seen for the agility, cost efficiencies and flexibility of cloud computing, delivered in a single-tenant, dedicated environment," said Ellen Rubin, Terremark's vice president of Cloud Products, in a statement.

Using Terremark's CloudSwitch software, customers can integrate their datacenters with its public cloud services, the company said, adding that the software also allows customers to migrate workloads to other cloud providers.

Posted by Jeffrey Schwartz on April 03, 2012 | 0 comments


Amazon and Eucalyptus Forge API Sharing Pact

Looking to provide tighter integration between its public cloud services and enterprise datacenters, Amazon Web Services has inked an agreement with Eucalyptus Systems to support its platform.

While Eucalyptus already offers APIs designed to provide compatibility between private clouds running its platform and Amazon's Elastic Compute Cloud (EC2) and Simple Storage Service (S3), the deal will provide interoperability blessed and co-developed by both companies.

The move, announced last week, is Amazon's latest effort to tie its popular cloud infrastructure services to private clouds. Amazon in January released the AWS Storage Gateway, an appliance that allows customers to mirror their local storage with Amazon's cloud service.

For Eucalyptus, it can now assure customers that connecting to Amazon will work with Amazon's consent. "You should expect us to deliver more API interoperability faster, and with higher accuracy and fidelity," said Eucalyptus CEO Mårten Mickos in an e-mail. "Customers can run applications in their existing datacenters that are compatible with popular Amazon Web Services such as EC2 and S3."

Enterprises will be able to "take advantage of a common set of APIs that work with both AWS and Eucalyptus, enabling the use of scripts and other management tools across both platforms without the need to rewrite or maintain environment-specific versions," said Terry Wise, director of the AWS partner ecosystem, in a statement. "Additionally, customers can leverage their existing skills and knowledge of the AWS platform by using the same, familiar AWS SDKs and command line tools in their existing data centers."
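Neither company has published sample code for the pact, but the appeal of a common API is easy to illustrate. The sketch below uses the boto library of that era to run the same EC2 calls against Amazon or against a Eucalyptus front end simply by swapping the endpoint; the Eucalyptus hostname and credentials are hypothetical placeholders.

```python
# Illustrative only: one script, two EC2-compatible clouds.
import boto
from boto.ec2.regioninfo import RegionInfo

def connect_ec2(access_key, secret_key, endpoint=None):
    """Connect to Amazon EC2, or to an EC2-compatible Eucalyptus cloud."""
    if endpoint is None:
        # Default: Amazon's own EC2 service.
        return boto.connect_ec2(access_key, secret_key)
    # Same API, different front end: a Eucalyptus cloud in the datacenter.
    # Port and path follow Eucalyptus conventions of the time.
    region = RegionInfo(name="eucalyptus", endpoint=endpoint)
    return boto.connect_ec2(access_key, secret_key, region=region,
                            port=8773, path="/services/Eucalyptus",
                            is_secure=False)

# The calls below run unchanged against either cloud.
conn = connect_ec2("MY_ACCESS_KEY", "MY_SECRET_KEY",
                   endpoint="cloud.example.internal")  # placeholder host
for reservation in conn.get_all_instances():
    for instance in reservation.instances:
        print(instance.id, instance.state)
```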

While the companies have not specified what deliverables will come from the agreement or any timeframe, Mickos said the two are working closely together. Although Eucalyptus does not disclose how many of its customers link their datacenters to Amazon, that capability is one of its key selling points.

Eucalyptus claims it is enabling 25,000 cloud starts each year and that it is working with 20 percent of the Fortune 100. Its customers include the U.S. Department of Defense and a number of other federal agencies, as well as the InterContinental Hotels Group, Puma, Raytheon and Sony.

While Eucalyptus bills its software as open source, critics say it didn't develop a significant community, a problem the company has started to remedy by hiring Red Hat veteran Greg DeKoenigsberg as its VP of Community.

Eucalyptus faces a strong challenge from other open source cloud efforts, including the VMware-led Cloud Foundry effort and the OpenStack effort led by Rackspace Hosting and NASA and supported by the likes of Cisco, Citrix, Dell, Hewlett-Packard and more than 100 other players.

These open source efforts, along with other widely marketed cloud services such as Microsoft's Windows Azure, are all gunning to challenge Amazon's dominance in providing public cloud infrastructure. And they are doing so by forging interoperability with private clouds.

Since Amazon and Eucalyptus have shown no interest in any of those open source efforts to date, this marriage of convenience could benefit both providers and customers who are committed to their respective offerings.

Posted by Jeffrey Schwartz on March 28, 2012 | 1 comment


Opscode Bags $19.5 Million To Grow Cloud Infrastructure Automation Offerings

Cloud infrastructure automation vendor Opscode this week said it has received $19.5 million in Series C funding led by Ignition Partners and joined by existing backers Battery Ventures and Draper Fisher Jurvetson. In concert with the investment, Ignition partner John Connors, the onetime Microsoft CFO, has joined Opscode's board.

The funding will help expand Opscode's development efforts as well as sales and marketing initiatives. The company is aggressively hiring developers at its Seattle headquarters as well as its new development facility in Raleigh, N.C.

Opscode's claim to fame is its Hosted Chef and Private Chef tools, designed to automate the process of building, provisioning and managing public and private clouds, respectively. Chef is designed for environments where continuous deployment and integration of cloud services is the norm. "It's a tool for defining infrastructure once and building it multiple times, iterating through the build process with lots of different configurations," said Opscode CEO Mitch Hill in an interview.

The company claims adoption of its open source Chef cookbooks has grown tenfold since September 2010 with an estimated 800,000 downloads. Its open source community of 13,000 developers has produced 400 "community cookbooks," which cover everything from basic Apache, Java, MySQL and Node.js templates to more specialized infrastructures.

In addition to open source environments, there are also Chef cookbooks for proprietary platforms such as Microsoft's IIS and SQL Server. Opscode said some major customers are using Chef, including Fidelity Investments, LAN Airlines, Ancestry.com and Electronic Arts.

"Virtually all of our customers are using more complex stacks. Many are open source stacks or mixed proprietary and open source stacks, and what we see time and time again is the skills to manage these environments don't exist in the marketplace," Hill said.

"The skills are hard to build. You can't go to school to learn how to do this stuff. You have computer science grads who graduate as competent developers and systems guys but infrastructure engineers who know how to operate at the source, scale and complexity level of the world is operating at are hard to find. That's our opportunity. We feel that Chef is a force multiplier for any company that is trying to do [cloud] computing at scale."

Opscode said it will host its first user conference May 15 to 17, with presentations from representatives of Fidelity, Intuit, Hewlett-Packard, Joyent, Ancestry.com and Fastly, among others.

Posted by Jeffrey Schwartz on March 27, 2012 | 0 comments


Report Finds Major Lags in Some Big Data Cloud Migrations from AWS S3

Test results published by cloud storage provider Nasuni this week suggest it's easier to move terabytes of data to Amazon Web Services' S3 service than to Microsoft's Windows Azure or Rackspace's Cloud Files.

Nasuni found that moving 12 TB blocks of data consisting of approximately 22 million files to Amazon S3 (or between S3 storage buckets) took only four hours from Windows Azure and five hours from Rackspace Cloud Files. Going the other way, though, took considerably longer: 40 hours from Amazon's S3 to Windows Azure, and just under a week from S3 to Rackspace Cloud Files.

[Chart: Estimated minimum hours to transfer a 12 TB volume. Source: Nasuni]
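A back-of-the-envelope check shows how stark the asymmetry is. Assuming decimal terabytes and steady aggregate throughput (the test actually used parallel machines and retries), 12 TB in four hours implies roughly 6.7 Gbps into S3, versus well under 1 Gbps going the other way:

```python
# Implied aggregate throughput for Nasuni's reported transfer times.
# Assumes decimal terabytes (1 TB = 10**12 bytes) and steady throughput;
# real-world parallel streams and retries are not modeled.
def implied_gbps(terabytes, hours):
    bits = terabytes * 10**12 * 8
    return bits / (hours * 3600.0) / 10**9

print(implied_gbps(12, 4))    # Azure -> S3 in 4 hours: ~6.7 Gbps
print(implied_gbps(12, 40))   # S3 -> Azure in 40 hours: ~0.67 Gbps
print(implied_gbps(12, 160))  # S3 -> Cloud Files in ~a week: ~0.17 Gbps
```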

Officials at Nasuni found the results surprising and wondered whether Microsoft and Rackspace throttle bandwidth when data is being written to their respective services. In the case of Windows Azure, Nasuni was able to reach peak bandwidth rates of 25 Mbps, which was deemed good, but throughput dropped off significantly during peak hours, said Nasuni CEO Andres Rodriguez.

"Our suspicion is that Microsoft is throttling the maximum performance to a common data set, to make sure that the quality of service is maintained to the rest of the customers, who are sharing their piece of infrastructure," Rodriguez said. "That's a good thing for everyone. It's not a great thing for those trying to get a lot of performance out of their storage tier."

Rodriguez acknowledged this was his own speculation and that Nasuni had not conferred with Microsoft or Rackspace about the issue. A Microsoft spokeswoman denied his theory outright. "Microsoft does not throttle bandwidth, ever," she said. While Microsoft looked at the report, the company declined further comment, noting it doesn't have "deep insight" into Nasuni's testing methods.

Ironically, Windows Azure performed well in a December test by Nasuni, in which it ranked fastest at writing large files. Amazon, Microsoft and Rackspace were the top performers in that December shootout, which is why they were singled out for the current test. (Also, Amazon is currently Nasuni's preferred storage provider.)

Conversely, Nasuni was able to move files from Windows Azure to Amazon much faster: Amazon received data at more than 270 Mbps, and the 12 TB of data moved in approximately four hours. "This test demonstrated that Amazon S3 had tremendous write performance and bandwidth into S3, and also that Microsoft Azure could provide the data fast enough to support the movement," the report noted.

Nasuni acknowledged that writes are typically slower than reads in most storage systems and that external bandwidth is far more limited than internal bandwidth. The report also noted that the limit it hit could have been Amazon's write limit just as much as Microsoft's read limit, both a function of their respective bandwidth capacity and infrastructure.

Engineers at Nasuni also said they were surprised at EC2's default limit on how many machines a customer can run: only 20 (for more, you have to contact Amazon). To bypass that limit, Nasuni combined machines from multiple accounts.

For its part, Rackspace officials were puzzled as to why it took so much time to move data from Amazon to the Rackspace Cloud Files service. "The results were surprising to us but we are making efforts to understand how the test was run and understand where some of that limitation might have been coming from," said Scott Gibson, the company's director of product for big data. Unlike Microsoft, Gibson acknowledged there are cases when Rackspace does put some limits on requests. But he said the company completed a significant hardware upgrade in mid-February to alleviate those situations. Nasuni conducted the tests between Jan. 31 and Feb. 8.

Gibson was skeptical that this was a burning problem among Rackspace customers. "I wouldn't say it's horribly common," he said. "We do have [some] customers who move large amounts of data between datacenters. If that level of performance was the norm, we would probably hear about it loud and clear."

Rodriguez insisted he had no axe to grind with either provider. In fact, he said he'd like to see these companies and others improve their ability to move large amounts of data between providers, which would give Nasuni more flexibility to offer better price performance and redundancy. He emphasized that the purpose of the test was to gauge how long it would take to move large blocks of data among the providers.

When conducting the tests, Nasuni moved data between the providers over encrypted HTTPS connections. The data, which was encrypted at the source, was never stored on disk in transit. Nasuni scaled from one machine to 40 and saw higher error rates from the providers as the loads increased, though ultimately the transfers completed after a number of retries, the report noted.
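The report doesn't describe Nasuni's retry logic, but the standard pattern for bulk transfers of this kind is to treat provider errors as transient and retry with exponential backoff. A generic sketch (the upload callable is a hypothetical stand-in for an S3, Azure or Cloud Files PUT):

```python
import random
import time

def upload_with_backoff(upload, chunk, max_attempts=5):
    """Retry a flaky upload call with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return upload(chunk)
        except IOError:
            if attempt == max_attempts - 1:
                raise  # out of retries; surface the error to the caller
            # Sleep 1s, 2s, 4s, ... with jitter so parallel workers
            # don't all hammer the provider again at the same moment.
            time.sleep(2 ** attempt + random.random())
```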

It would be hard to conclude from one test that anyone moving large blocks of data from provider to provider would see similar results, but the findings do point to the likelihood that swapping between providers is not going to be a piece of cake.

Posted by Jeffrey Schwartz on March 22, 2012 | 0 comments


Google Steps Up Authentication for its APIs and Servers

Google is adding an extra layer of security for developers building applications that access a variety of its server-side platform services.

The company's new Service Accounts, launched Tuesday, will provide certificate-based authentication to Google APIs for server-to-server interactions. Until now, Google secured its APIs in these scenarios via passwords or shared keys.

"This means, for example, that a request from a Web application to Google Cloud Storage can be authenticated via a certificate instead of a shared key," said Google product manager Justin Smith, in a blog post, noting that unlike passwords and shared keys, certs can't be guessed.

In addition to Cloud Storage, Google's Prediction API, URL Shortener, OAuth 2.0 Authorization Server, APIs Console and its API libraries for Python, Java and PHP will support certificates. Other APIs and client libraries, including Ruby and .NET, will follow over time, according to Smith.

The certificates are implemented as part of an OAuth 2.0 flow. An app using the service generates a JSON structure, signs it with a private key and encodes it as a JSON Web Token (JWT). The app presents the JWT to Google's OAuth 2.0 Authorization Server, which returns an access token; that token is then sent with requests to Google Cloud Storage or the Prediction API.
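Google's documentation spells the flow out in detail; the sketch below shows its general shape using the third-party PyJWT and requests libraries. The token endpoint and grant type follow the OAuth 2.0 JWT-bearer conventions Google published at the time, but the key file, service account ID and scope here are placeholders.

```python
# Illustrative JWT-bearer exchange along the lines Smith describes.
import time

import jwt       # PyJWT
import requests

TOKEN_URL = "https://accounts.google.com/o/oauth2/token"

now = int(time.time())
claims = {
    "iss": "my-app@developer.gserviceaccount.com",  # placeholder account ID
    "scope": "https://www.googleapis.com/auth/devstorage.read_only",
    "aud": TOKEN_URL,   # the authorization server the JWT is addressed to
    "iat": now,
    "exp": now + 3600,  # assertions are short-lived
}

# Sign the claim set with the service account's private key (RS256),
# producing the JWT that stands in for a password or shared key.
with open("service-account-key.pem") as f:
    assertion = jwt.encode(claims, f.read(), algorithm="RS256")

# Exchange the signed JWT for a bearer access token.
resp = requests.post(TOKEN_URL, data={
    "grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
    "assertion": assertion,
})
access_token = resp.json()["access_token"]
# The access token then accompanies calls to Cloud Storage and so on.
```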

Adding certificate-based authentication will be welcome to those who require better security than passwords and shared keys offer, said Forrester Research analyst Eve Maler. "Government agencies and other high-risk players generally demand certificate-based authentication," she said. "This decision by Google enables, for its service ecosystem, this stronger option for those who need it. Google has experimented rapidly to come up with maximally effective API security mechanisms."

Developers can set up Service Accounts on the Google APIs Console.

Posted by Jeffrey Schwartz on March 21, 2012 | 0 comments


Ping to Offer Federated ID Management in the Cloud

While software-as-a-service applications such as Cisco WebEx, Google Apps, Microsoft Office 365 and Salesforce.com, among others, are becoming a popular way of letting organizations deliver apps to their employees, they come with an added level of baggage: managing user authentication.

Every SaaS-based app has its own login and authentication mechanism, meaning users have to sign into each system separately. Likewise, IT has no central means of managing that authentication when an employee joins or leaves a company (or has a change in role). While directories such as Microsoft's Active Directory have helped provide single sign-on to enterprise apps, third parties such as Ping Identity, Okta and Symplified offer tools that provide connectivity to apps not accessible via AD, including SaaS-based apps. But those tools are expensive and complex, and hence typically used by large enterprises.

Ping Identity this week said it will start offering a service in April called PingOne that will provide an alternative and/or adjunct to PingFederate, the company's premises-based identity management platform. PingOne itself is SaaS-based; it will run in a cloud developed by Ping and deployed on Amazon Web Services' EC2 service.

In effect, PingOne, which will start at $5,000 and cost $5 per user per month, will provide single sign-on to enterprise systems and SaaS-based apps alike via Active Directory or an organization's preferred LDAP-based authentication platform. Ping claims it has 800 customers using its flagship PingFederate software, 90 percent of which are large enterprises.

But smaller and mid-sized enterprises, though they typically run Active Directory, are reluctant to deploy additional software to add federated identity management, and in fact many have passed on Microsoft's own free add-on, Active Directory Federation Services (ADFS).

With PingOne, customers won't need to install any software other than an agent in Active Directory, which connects it to the cloud-based PingOne. "With that one connection to the switch you can now reach all your different SaaS vendors or applications providers," said Jonathan Buckley, VP of Ping's On-Demand Business. "This changes federation, which has been a one-to-one networking game."

By moving federated identity management to the cloud, organizations don't need administrators who are knowledgeable about authentication protocols like OAuth, OpenID and SAML, Buckley added. "The tools are designed with a junior- to mid-level IT manager in mind," he said.

Ping is not the first vendor to bring federated identity management to the cloud; Covisint offers a vertical industry portal with single sign-on, as do Symplified and Okta. But Buckley said Ping is trying to bring federated identity management to the masses.

"The model they are pursuing is a very horizontal version of what a number of folks have done in a more vertical space with more limited circles of trust," said Forrester Research analyst Eve Maler. "I think they are trying to build an ecosystem that is global and that could be interesting if they attract the right players on the identity-producing side and the identity-consuming side, namely a lot of SaaS services."

Another effect of these cloud-based identity management services is their potential to lessen the dependence on Active Directory, said Gartner analyst Mark Diodati. PingOne promises to make that happen via its implementation of directory synchronization, Diodati said.

PingOne watches an enterprise's Active Directory closely for changes and replicates them to the service. Specifically, if someone adds a user to an LDAP group, moves him or her to a new organizational unit or assigns a specific attribute, that change will automatically replicate to PingOne.
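Ping hasn't detailed how its agent detects those changes, but a common technique for Active Directory synchronization is to poll for objects whose uSNChanged attribute exceeds the highest update sequence number processed so far. A rough sketch of that generic pattern using the ldap3 library (the server, credentials and base DN are placeholders; this is not Ping's actual implementation):

```python
# Generic AD change-polling pattern, illustrative only.
from ldap3 import Server, Connection, SUBTREE

server = Server("ad.example.internal")  # placeholder domain controller
conn = Connection(server, user="EXAMPLE\\syncagent",
                  password="secret", auto_bind=True)

def changes_since(last_usn):
    """Return user entries modified since the last USN we processed."""
    conn.search(
        search_base="dc=example,dc=internal",
        search_filter="(&(objectClass=user)(uSNChanged>=%d))" % (last_usn + 1),
        search_scope=SUBTREE,
        attributes=["sAMAccountName", "memberOf", "uSNChanged"],
    )
    return conn.entries

# Each polling cycle: replicate new changes (group membership, OU moves
# and attribute edits all bump uSNChanged) and track the highest USN seen.
for entry in changes_since(last_usn=0):
    print(entry.sAMAccountName, entry.uSNChanged)
```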

"The ability to do that directory synchronization, getting identities into the hosted part of it, and also the single sign-on, are extremely difficult to do without on-premises components to pull it off," Diodati said.

Ping stresses that passwords and authentication data are not stored in the cloud. But it stands to reason many companies will have to balance the appeal of simplifying authentication and management of access rights to multiple SaaS-based apps with the novelty of extending that function into the cloud. Do you think customers will be reluctant to move federated identity management to the cloud or will they relish the simplification and cost reduction it promises? Drop me a line at [email protected].

Posted by Jeffrey Schwartz on March 21, 2012 | 0 comments


HP Nearing Launch of Public Cloud Service

One year has passed since Hewlett-Packard announced its plans to launch a public cloud service and it appears that service will arrive in May.

Zorawar Biri Singh, senior vice president and general manager of HP's cloud services business, last week told The New York Times that the service is on pace to go online in two months. While HP is launching a portfolio of public cloud infrastructure services similar to those of Amazon Web Services, Singh is setting modest expectations for taking on the behemoth.

"We won't pull (Amazon's) customers out by the horns but we already have customers in beta who see us as a great alternative,"Singh said, adding HP does not intend to compete on price. Fully aware that Amazon has aggressively cut its rates, Singh said HP will compete by offering more "personal sales and service."

Of course, HP won't stand alone on that front as players such as Rackspace, IBM and Microsoft, just to name a few, promote their focus on customer service. That said, HP can't afford not to offer a viable public cloud service for enterprises. Along with its public cloud service, Singh said HP will offer:

  • Tools for Ruby, Java and PHP developers
  • Support for provisioning and management of workloads remotely
  • An online store where customers can rent software in HP's cloud (as indicated last year)
  • Connectivity to private clouds
  • A platform layer with third party services

Offering software as a service also appears high on the agenda. The first such offering will be a data analytics service, leveraging last year's acquisitions of Vertica and Autonomy.

HP's cloud services will launch initially with datacenters in the United States on both the east and west coasts, with a global rollout to follow. Like any major IT vendor, HP knows it must execute well in the cloud. Even if it isn't a major revenue generator in the short term, a robust cloud portfolio will be critical to HP's future.

Posted by Jeffrey Schwartz on March 15, 2012 | 0 comments


Microsoft Promises Better Communication After Azure Leap Day Outage

It was bad enough that Microsoft's Windows Azure cloud service was unavailable for much of the day on Feb. 29 thanks to the so-called Leap Day bug. But customers struggled to find out what was going on and when service would be restored.

That's because the Windows Azure Dashboard itself wasn't fully available, noted Bill Laing, corporate VP of Microsoft's Server and Cloud division, in a blog post Friday, where he provided an in-depth post-mortem with extensive technical details outlining what went wrong. In very simple terms, it was the result of a coding error that led the system to calculate a future date that didn't exist.
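The failure mode Laing described is easy to reproduce. A minimal Python illustration, hypothetical rather than Microsoft's actual code: naively adding one to the year of a Leap Day date yields Feb. 29, 2013, which doesn't exist.

```python
# Hypothetical illustration of a leap-day date bug, not Azure's code.
from datetime import date, timedelta

valid_from = date(2012, 2, 29)  # Leap Day

try:
    # Naive "one year later": bumping the year produces Feb. 29, 2013,
    # a date that does not exist, so the computation blows up.
    valid_to = valid_from.replace(year=valid_from.year + 1)
except ValueError as e:
    print("invalid expiry date:", e)  # "day is out of range for month"

# One safe alternative: add a fixed number of days instead.
valid_to = valid_from + timedelta(days=365)
print(valid_to)  # 2013-02-28
```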

But others may be less interested in what went wrong than in how reliable Windows Azure and public cloud services will be over the long haul. On that front, Laing was pretty candid, saying, "The three truths of cloud computing are: hardware fails, software has bugs and people make mistakes. Our job is to mitigate all of these unpredictable issues to provide a robust service for our customers."

Did Microsoft do enough to mitigate this issue? Laing admitted Microsoft could have done better to prevent, detect and respond to the problems. In terms of prevention, Microsoft said it will improve testing to discover time-related bugs by upgrading its code analysis tools to uncover those and similar types of coding issues. The problem also took too long -- 75 minutes -- to detect, Laing added, pointing to the difficulty of detecting the fault in the guest agent, where the bug was found.

Exacerbating the whole matter was the breakdown in communication. The Windows Azure Service Dashboard failed to "provide the granularity of detail and transparency our customers need and expect," Laing said. Hourly updates failed to appear and information on the dashboard lacked helpful insight, he acknowledged.

"Customers have asked that we provide more details and new information on the specific work taking place to resolve the issue," he said. "We are committed to providing more detail and transparency on steps we're taking to resolve an outage as well as details on progress and setbacks along the way."

Noting that customer service telephone lines were jammed due to the lack of information on the dashboard, Laing promised users will not be kept in the dark. "We are reevaluating our customer support staffing needs and taking steps to provide more transparent communication through a broader set of channels," he said. Those channels will include Facebook and Twitter, among other forums.

Windows Azure customers affected by the outage will receive a 33 percent credit, which will automatically be applied to their bills. However, such credits, while welcome, rarely make up for the cost associated with downtime. But if Microsoft delivers on Laing's commitments, perhaps the next outage will be less painful.

Posted by Jeffrey Schwartz on March 14, 2012 | 1 comment


Cloud Sherpas Merges with GlobalOne

One of Google's largest cloud partners is now even larger. Cloud Sherpas, a major provider of Google Apps, this week said it has merged with GlobalOne, making it a major Salesforce.com partner, as well.

Since its founding in 2008, Atlanta-based Cloud Sherpas has focused its business on replacing premises-based e-mail and collaboration platforms with cloud services based on the Google Apps stack. New York-based GlobalOne has concentrated on offering CRM services from Salesforce.com since its formation in 2007.

GlobalOne CEO David Northington will now be CEO of Cloud Sherpas, while former Cloud Sherpas CEO Douglas Shepard will become president of Cloud Sherpas' Google business unit.

"Both GlobalOne and Cloud Sherpas were born in the cloud as pure-play cloud service providers," Northington said in a statement. "The combined firm further enables our singular mission -- to help customers transform their businesses by leveraging the power of the cloud. The new Cloud Sherpas has the talent, domain expertise and geographic reach to help businesses improve IT agility and lower costs through a series of cloud consulting, integration and support services."

And giving Cloud Sherpas a further boost, Columbia Capital, which last year invested $15 million in the provider, is adding a $20 million round. Cloud Sherpas said it intends to use the funds to expand into new geographic regions, add to its portfolio of vertical market offerings and extend its cloud-based applications.

The new Cloud Sherpas employs 300 people and has a presence in Atlanta, Ga.; Brisbane, Australia; Chicago; Manila, Philippines; New York; San Francisco; Sydney, Australia; and Wellington, New Zealand.

Is it only a matter of time before a major provider scoops up Cloud Sherpas? For now, Northington told The New York Times, Cloud Sherpas is focused on its own growth, aiming to offer more cloud-based services and broaden its portfolio.

Posted by Jeffrey Schwartz on March 08, 2012 | 0 comments