Dell this week said it is abandoning plans to push forward with Infrastructure as a Service (IaaS) cloud offerings and will deliver cloud hardware and software through partners. The move came on the eve of VMware's announcement that it will launch an IaaS.
The company is discontinuing its VMware-focused cloud IaaS, while putting the brakes on plans to deliver services based on the open source OpenStack platform. Instead, Dell said it will equip and support its partners to build and host such services on various cloud platforms.
Dell's decision to pull out of the public cloud market is not very surprising. Gartner analyst Lydia Leong noted in a blog post that Dell never gained much traction with its VMware-based service; apart from CSC, neither did most other providers. "The writing was mostly on the wall already," Leong said.
While Dell was a major contributor to OpenStack, its service based on that platform never quite got off the ground. Dell's acquisition of Enstratius earlier this month was a further signal that the company was going to focus on helping service providers and enterprises manage multiple clouds.
Also, it's not surprising that Dell was reluctant to invest in building out multiple public cloud services, given its current plan to go private. Ironically, as VMware goes direct (though it insists it's still committed to offering its software and services through its partners), Dell's cloud strategy now goes deeper on enabling enterprises to manage multiple clouds offered by third-party providers.
"Dell is going to need a partner ecosystem of credible, market-leading IaaS offerings. Enstratius already has those partners -- now they need to become part of the Dell solutions portfolio," Leong noted in a separate blog post. "If Dell really wants to be serious about this market, though, it should start scooping up every other vendor that's becoming significant in the public cloud management space that has complementing offerings (everyone from New Relic to Opscode, etc.), building itself into an ITOM vendor that can comprehensively address cloud management challenges."
Posted by Jeffrey Schwartz on May 23, 2013
VMware this week revealed it will launch a public cloud Infrastructure as a Service (IaaS), a move it said doesn't deviate from its commitment to its partners.
Nevertheless, the launch of its vCloud Hybrid Service IaaS is a shift in strategy for VMware, which until recently had indicated it had no plans to roll out a public cloud offering. Although many had predicted that VMware ultimately would do so, rumors of its plans only began to surface a few months ago.
VMware made it official during a launch event webcast Tuesday, when the company said it will offer an early-access program next month in the United States, with general availability slated for the third quarter of this year. The service will run the company's vSphere virtualization and vCloud Director management platforms.
The company will offer the service in two modes. The vCloud Hybrid Service Dedicated Cloud will consist of reserved compute instances that are physically isolated and require an annual contract, at a starting price of 13 cents per hour for redundant 1 GB virtual machines with a single processor. The other offering, vCloud Hybrid Service Virtual Private Cloud, is based on similar hardware but is multitenant, and requires only monthly terms, with pricing starting at 4.5 cents per hour.
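For a rough sense of what those rates translate to, here's a back-of-the-envelope calculation assuming an always-on VM and roughly 730 hours in an average month (actual billing terms and instance definitions may differ):

```python
HOURS_PER_MONTH = 730  # roughly the average number of hours in a month

def monthly_cost(rate_per_hour: float, vms: int = 1) -> float:
    """Approximate cost of running `vms` always-on VMs for a month."""
    return rate_per_hour * HOURS_PER_MONTH * vms

dedicated = monthly_cost(0.13)         # Dedicated Cloud at $0.13/hour
virtual_private = monthly_cost(0.045)  # Virtual Private Cloud at $0.045/hour

print(f"Dedicated Cloud:       ${dedicated:.2f}/month per VM")        # $94.90
print(f"Virtual Private Cloud: ${virtual_private:.2f}/month per VM")  # $32.85
```

At those rates, the multitenant tier comes in at roughly a third of the dedicated price for a comparable VM, before any annual-contract considerations.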
Until now, VMware's cloud strategy revolved solely around having service provider partners deliver IaaS based on its wares. At its launch event, the company insisted that its partners are a key part of its plan and go-to-market strategy, and that they will continue to have the option of running their own VMware-based services or reselling the company's new service.
"Overall, we see this as the most partner-friendly public cloud," said Bill Fathers, a VMware senior VP and general manager of the VMware Hybrid Cloud service, during the Tuesday webcast. "We are enabling thousands of channel and solution provider partners to offer vCloud Hybrid cloud service so our clients will be able to continue to order and get support from the same channel partners they entrusted with their IT purchasing for many years. We'll be making all this technology and IP and the powers of the vCloud Hybrid Service available to our ecosystem, the service provider and systems integration partners so they can deliver cloud-based solutions based on this platform. We also expect to see the same partners develop value-added services on top of and around the offering and may seek to differentiate by industry vertical, by application or perhaps by geography."
Fathers indicated that VMware may turn to partners to facilitate the rollout in Europe and Asia, set for 2014. Despite promising to stick to a partner-only strategy for delivering cloud services, VMware may have had no choice but to offer a public cloud service, said Gartner analyst Lydia Leong in a blog post. CSC is the only partner that gained significant market share, according to Leong, with Bluelock following far behind. Dell's decision to discontinue its IaaS offering, along with service providers' decisions to deploy vSphere without vCloud Director, has also diminished the success of VMware's ecosystem, she added.
With its decision to offer its own IaaS, VMware is poised to have more success, Leong added. "No one should underestimate the power of brand in the cloud IaaS market, particularly since VMware is coming to market with something real," she noted. "VMware has a rich suite of ITOM capabilities that it can begin to build into an offering. It also has CloudFoundry, which it will integrate, and would logically be as synergistic with this offering as any other IaaS/PaaS integration (much as Microsoft believes Azure PaaS and IaaS elements are synergistic)."
Indeed, VMware officials talked up the fact that the 3,700 apps certified to run on its virtualization platform can move seamlessly between the datacenter and its new public cloud, without requiring any modifications. That's the same model Microsoft espouses with its "cloud OS" strategy, designed to let customers move data from Windows Server to Windows Azure.
As a partner, which looks more compelling to you? Or do you see riding on both? Comment below or drop me a line at [email protected].
Posted by Jeffrey Schwartz on May 23, 2013
Dell recently announced its acquisition of Enstratius in a move that extends its push into the multi-cloud management space.
Enstratius (which until last year was known as EnStratus) is regarded as a leading supplier of premises- and Software as a Service (SaaS)-based cloud management platforms. The 5-year-old company competes with RightScale. Both offer cloud management systems that let IT administrators monitor and control various public cloud services, including those offered by Amazon Web Services.
In addition to Amazon, Enstratius' cloud management platform can manage clouds built on the OpenStack environment, VMware's vCloud and Microsoft's Windows Azure. In a statement, Enstratius CEO David Bagley welcomed the resources of Dell to help extend its multi-cloud management story.
"Together, Enstratius and Dell create new opportunities for organizations to accelerate application and IT service delivery across on-premises data centers and private clouds, combined with off-premises public cloud solutions," according to Bagley. "This capability is enhanced with powerful software for systems management, security, business intelligence and application management for customers, worldwide."
Enstratius broadens Dell's overall systems and cloud management portfolio and complements the technology the company acquired from Gale Technologies, whose Active System Manager also manages multiple cloud environments and provides application configuration.
Dell also indicated it will integrate Enstratius with its Foglight performance management tool, Quest One identity and access management software, Boomi cloud integration middleware, and its backup and recovery offerings AppAssure and NetVault.
Posted by Jeffrey Schwartz on May 08, 2013
Amazon recently launched the Amazon Web Services Certification Program to give partners and customers access to training and validation for implementing systems and apps in Amazon's cloud.
While AWS offers a robust portfolio of cloud offerings and, rightfully, claims it operates some of the largest cloud implementations, until now it has lacked a meaningful way of ensuring its partners were certified to implement its services.
Amazon tapped testing partner Kryterion to implement the new training programs. The first available exam will be for the "AWS Certified Solutions Architect - Associate Level." That certification will be for architects and those who design and develop apps that run on AWS, the company said.
In the pipeline are certifications for systems operations (SysOps), administrators and developers, which the company will roll out later this year. The exams will be available at 750 locations throughout 100 countries, Amazon said.
The certifications will allow partners to assert their expertise in the company's cloud offerings as a way of differentiating themselves within a growing partner ecosystem that now boasts 664 (up from 650 earlier in the week) solution providers in the AWS Partner Network (APN) and 735 consultancies.
"Once you complete the certification requirements, you will receive an AWS Certified logo badge that you can use on your business cards and other professional collateral," said AWS evangelist Jeff Barr in a blog post. "This will help you to gain recognition and visibility for your AWS expertise."
Posted by Jeffrey Schwartz on May 02, 2013
At its fourth annual Pulse conference last month in Las Vegas, IBM announced that all of its cloud services and software will be based on open standards, with OpenStack -- the open source effort initiated by Rackspace and NASA nearly three years ago -- at the Infrastructure as a service (IaaS) layer, the Topology and Orchestration Specification for Cloud Applications (TOSCA) for Platform as a Service (PaaS) application portability, and HTML 5 for Software as a Service (SaaS).
While Big Blue was an earlier participant in the project and now a platinum sponsor of the OpenStack Foundation, it waited until last year to publicly acknowledge its involvement in the OpenStack initiative. Now, IBM is throwing all of its weight behind the project.
IBM officials described the announcement as a commitment to lead in the stewardship and support of cloud standards comparable to its support for Linux over a decade ago, Apache and Java 2 Enterprise Edition at the Web application server layer, and Eclipse for standardized integrated development environment (IDE) tools.
"The need for open cloud services is a must," said Robert Leblanc, senior vice president for middleware at IBM, speaking at a press conference at Pulse. "It's not a nice-to-have. I think it has become a must. Clients cannot afford the time and energy it takes to write specific interfaces to all the various cloud environments that are out there today. This has become too important, too large for us not to help clients, and so basing on a set of open standards is key and that's why we are moving all of the SmartCloud capabilities over to cloud standards. We are jumping in full force."
Jay Snyder, director of platform engineering at the insurance giant Aetna, was present at the briefing and said he will only use cloud-based solutions that are standards-based.
"I just can't stress enough the importance of open standards and that's really regardless of platform," Snyder said. "If you think about the cloud, the layers of the stack in the cloud, the hypervisor, operating system and orchestration, we expect those layers of the stack to evolve and change. If we don't have standards, we potentially run the risk of vendor lock-in and that's something we absolutely want to avoid. For us, having those standards in place ensures if -- for financial reasons or functional reasons -- we want to replace a component of the stack, we can do that. And that's critical to our success."
For example, Snyder said his organization wants to be able to select a hypervisor without it locking him into certain cloud management, orchestration and cloud operating systems. "We want to be able to flexibly replace those components as they evolve," he said. "Standards, we think, is a great way to protect freedom of choice and innovation, and that's why we're focused on standards."
The first key deliverable from IBM to come out of this effort is its new SmartCloud Orchestrator software, which lets organizations build new cloud services using patterns or templates with a GUI-based "orchestrator" that enables cloud automation. It automates cloud-based app deployment and lifecycle management, providing configuration of compute, storage and network resources. It also provides a self-service portal to manage and account for the cost of using cloud resources.
Posted by Jeffrey Schwartz on March 13, 2013
One of cloud computing's biggest promises is that it will reduce infrastructure costs while providing compute and storage capacity on demand. But as a pair of recent surveys show, that promise of cost savings isn't necessarily a guarantee.
In a Rackspace survey of 1,300 businesses in the United States and United Kingdom, 66 percent of respondents found cloud computing has reduced their IT costs, while 17 percent said it failed to do so. The remainder had no opinion. Yet another survey commissioned by Internap, which runs 12 datacenters throughout the United States, primarily for colocation but also for its cloud computing business, suggests that of the 65 percent of respondents who said they are considering the use of cloud services, 41 percent expect them to reduce their costs.
This obviously isn't an apples-to-apples comparison since, among other variations, the Rackspace study surveyed those who already use cloud services while the Internap survey didn't query only those running apps in the cloud. But the two surveys offer some interesting data points on the role costs play in determining the value of using cloud computing services.
"It used to be debatable whether the cloud was saving money or not, but apparently the businesses we surveyed believe it is saving them money," said Rackspace CTO John Engates in an interview last month.
But depending on your application, cloud computing can actually cost more, warned Raj Dutt, senior VP of technology at Internap. That's especially the case for applications that have consistent and predictable compute and storage usage, he explained.
"People move to the cloud for perceived cost savings and what we're finding is it gets really expensive compared to colocation, [particularly] if you look at the three-year overall total cost of ownership of an application that is pretty constant," Dutt said.
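Dutt's point is easy to illustrate with a back-of-the-envelope, three-year TCO comparison for a steady workload. The dollar figures below are hypothetical placeholders, not Internap or Rackspace pricing:

```python
MONTHS = 36  # three-year horizon

def cloud_tco(vm_monthly_cost: float, vm_count: int) -> float:
    """Pay-as-you-go cloud: the meter runs every month for every VM."""
    return vm_monthly_cost * vm_count * MONTHS

def colo_tco(hardware: float, rack_monthly: float, admin_monthly: float) -> float:
    """Colocation: up-front hardware plus recurring rack and admin costs."""
    return hardware + (rack_monthly + admin_monthly) * MONTHS

# Hypothetical steady workload: 10 always-on VMs vs. owned gear in a colo rack
cloud = cloud_tco(vm_monthly_cost=200, vm_count=10)
colo = colo_tco(hardware=25_000, rack_monthly=800, admin_monthly=500)

print(f"Cloud over 3 years: ${cloud:,.0f}")  # $72,000
print(f"Colo over 3 years:  ${colo:,.0f}")   # $71,800
```

With made-up but plausible inputs like these, the two approaches land in the same ballpark for a constant workload; the cloud's advantage only shows up when the VM count can be dialed down during quiet periods, which is exactly the distinction Dutt draws.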
The cynic in me says, "Of course, Rackspace is going to share data that finds cloud computing reduces IT costs, and why wouldn't a colocation provider want to deliver numbers that show the benefits of running your own gear in offsite facilities, even if it has a cloud business as well?" But what these two surveys have in common is they both put forth a healthy long-term prognosis for cloud computing. Indeed, Engates pointed out that Rackspace uses the colocation facilities of Equinix.
While nearly two-thirds of those surveyed by Internap are considering cloud services, the company didn't ask if they were already using them. Nonetheless, 57 percent said they were considering hybrid IT infrastructure services, which Dutt said bodes well for the future use of colocation facilities since customers would likely cloud-enable or extend the apps already running in those facilities to Infrastructure as a Service (IaaS) providers.
"What Internap is interested in doing is bringing a lot of the cloud capabilities like remote insight management, APIs, even the ability to control your infrastructure programmatically remotely without having to call the datacenter or send someone to fix your problem in your rack," Dutt said. "We're able to provide the service delivery promise that the cloud offers into the 'colo' world where no one is expecting it, and we're able to do it under a single pane of glass [from a] single vendor and allow you to build your app on the building block that best makes sense for you."
From the Rackspace survey, of those already using cloud computing:
- The largest sample, 41 percent, said cloud computing reduced costs from 10 to 25 percent, while 19 percent said it provided 25 to 50 percent in IT savings, and 27 percent said it cut costs by only 10 percent or less.
- 54 percent said use of cloud services helped accelerate IT project implementation, including application development, while 17 percent begged to differ. The rest weren't sure.
- 56 percent saw increased profits while 18 percent reported no benefit to the bottom line, with 26 percent unsure.
- 49 percent said cloud computing helped grow their businesses, with 21 percent seeing no such benefit, and 30 percent unsure.
- 59 percent said cloud services provided better disaster recovery.
- 56 percent were using open source cloud technology, though in the United States that figure is 70 percent.
If you're using cloud services, is it saving you money? And if so, what are you doing with those savings? And where do colocation facilities fit in your future IT and cloud plans? Share your findings below or drop me a line at [email protected].
Posted by Jeffrey Schwartz on March 07, 2013
VMware's new CEO, Pat Gelsinger, urged VMware partners this week to do whatever it takes to steer customers away from Amazon's public cloud.
"[If] a workload goes to Amazon, you lose, and we have lost forever," Gelsinger told top VMware partners on Wednesday during the company's Partner Exchange Conference in Las Vegas, according to CRN's account of the event.
"We want to own corporate workload," Gelsinger continued. "We all lose if they end up in these commodity public clouds. We want to extend our franchise from the private cloud into the public cloud and uniquely enable our customers with the benefits of both. Own the corporate workload now and forever."
The widely reported remarks resulted in a blunt rebuke by respected Forrester analyst James Staten.
"Forgive my frankness, Mr. Gelsinger, but you just don't get it," Staten charged in a blog post. "Public clouds are not your enemy. And the disruption they are causing to your forward revenues are not their capture of enterprise workloads. The battle lines you should be focusing on are between advanced virtualization and true cloud services and the future placement of Systems of Engagement versus Systems of Record."
Staten argued that vSphere is used primarily to manage static workloads and functions such as live migrations and disaster recovery, where they provide high SLAs for business-critical apps that run in virtual environments.
Furthermore, he argued vSphere has failed to capture modern apps, such as those targeted at mobile devices or those that have unpredictable capacity requirements. "It's not that vSphere isn't capable of hosting these applications -- but that the buyer values functionality that lies at a far higher level than where VMware has its strength," Staten noted.
Most vSphere configurations aren't implemented as self-service infrastructure, he added. "It doesn't provide fast access to fully configured environments. It wouldn't know what to do with a Chef script and it certainly couldn't be had for $5 on a Visa card. For VMware and for enterprise vSphere administrators to capture the new enterprise applications, they need to rethink their approach and make the radical and culturally difficult shift from infrastructure management to service delivery. You need to learn from the clouds, not demonize them."
If that wasn't blunt enough, Staten concluded: "What you should be doing is admitting you screwed up with vCloud Director 1.0 and 1.5 and kicking ass in engineering to get a true cloud to market ASAP."
VMware appears to have had a love-hate relationship with the public cloud for many years. At one point, it is believed VMware was quietly aiming to acquire Terremark (it held a minority stake), which Verizon ultimately scooped up for $1.4 billion two years ago. VMware has said it wouldn't compete with its partners and launch its own public cloud.
Nevertheless, rumors surfaced back in August that VMware is developing a public cloud, code-named Project Zephyr. On Friday, CRN reported that VMware is planning a "top secret" public cloud -- not Project Zephyr, but a service internally known as VMware Public Cloud that is "intended to slow Amazon's momentum and generate more revenue in areas that lie outside its core virtualization business."
Posted by Jeffrey Schwartz on March 01, 2013
Given last week's study that found Windows Azure storage to have the fastest response times out of five large cloud networks -- beating those operated by Amazon Web Services, Google, HP and Rackspace -- this weekend's Windows Azure outage came at a particularly bad time for Microsoft.
Microsoft's Windows Azure cloud storage service went down worldwide late Friday afternoon. An expired SSL certificate was the cause of the outage, Microsoft eventually confirmed. Good thing for Microsoft that Nasuni, the vendor that ran last week's cloud storage study, wasn't testing Windows Azure this weekend.
Once Windows Azure was back up Saturday, I updated my report to say that Microsoft had fixed the problem and users could once again access their data. The company said the service was 99 percent available early Saturday and completely restored by 8 p.m. PST. But the damage was already done -- and many of Microsoft's partners and customers were furious.
In comments posted on a Windows Azure forum, Sepia Labs' Brian Reischl, who first pointed to the SSL certificate as the likely culprit, seemed to feel users should cut Microsoft some slack. Reischl said letting an SSL certificate fall through the cracks is a mistake anyone could make. "I know I have. It's easy to forget, right?" he posted. "It's an amateur mistake, but it happens. You end up with some egg on your face, add a calendar reminder for next year, and move on."
But one has to wonder how Microsoft, which has staked its future on the cloud and has spent billions to build Windows Azure into one of the largest global cloud services, could not have put in safeguards to prevent the domino effect that occurred when that cert expired -- much less have a mechanism in place to know when all certificates are about to expire. Putting it in admins' Outlook calendars would be a good start.
Of course, there are more sophisticated tools to make sure SSL certificates don't expire. Among them is SolarWinds' certificate monitoring and expiration management component of its Server & Application Monitor, a favorite among readers of our sister publication, Redmond. Another option, not so coincidentally, hit my inbox this week: Matt Watson, founder of Stackify, spent a few hours over the weekend developing a free tool called CertAlert.me, which allows site owners to scan the Web sites they own and track SSL and domain name expirations.
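At their core, what these tools automate is a simple check: compare each certificate's expiration date against a warning threshold. Here's a minimal Python sketch of that check; the hostnames and dates are hypothetical, and a real monitor would pull each live certificate's 'notAfter' field (the format ssl.getpeercert() returns) rather than read a hard-coded inventory:

```python
from datetime import datetime, timedelta

# Date format used in the 'notAfter' field of ssl.getpeercert()
NOT_AFTER_FMT = "%b %d %H:%M:%S %Y %Z"  # e.g. "Feb 22 12:00:00 2013 GMT"

def expiring_soon(certs: dict, now: datetime, days: int = 30) -> list:
    """Return hostnames whose certificates expire within `days` of `now`."""
    cutoff = now + timedelta(days=days)
    alerts = []
    for host, not_after in sorted(certs.items()):
        expires = datetime.strptime(not_after, NOT_AFTER_FMT)
        if expires <= cutoff:
            alerts.append(host)
    return alerts

# Hypothetical certificate inventory for illustration only
certs = {
    "store.example.com": "Feb 22 12:00:00 2013 GMT",
    "api.example.com": "Aug 01 00:00:00 2014 GMT",
}
print(expiring_soon(certs, now=datetime(2013, 2, 1)))  # ['store.example.com']
```

Run on a schedule and wired to e-mail or a dashboard, even a check this simple would have caught the expiring certificate weeks in advance.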
"It happens a lot," Watson told me in a brief telephone conversation regarding outages like the one that struck Friday, which affected Stackify. "All you can do is sit on your hands and pray," he said, adding that years ago he had to deal with an expired SSL certificate. "You buy them and you forget about them and the next thing you know, your site's gone. It's one of those things that get overlooked."
Asked about the business opportunity in offering this free service, Watson said he saw it as a way to bring exposure to his startup's namesake offering, a Windows Azure-based server monitoring platform targeted at easing access for developers while ensuring they don't have access to production systems.
Indeed, you can bet Microsoft is going to ensure this doesn't happen again. "Our teams are also working hard on a full root cause analysis (RCA), including steps to help prevent any future reoccurrence," said Steven Martin, Microsoft's general manager of Windows Azure business and operations, in a blog post apologizing for the disruption. Given the scope of the outage, Microsoft will offer credits in conformance with its SLAs, Martin said.
This is not the first outage Microsoft has had to explain and probably won't be the last. And we all know the number of well-publicized outages Amazon Web Services has encountered in recent years.
If you're a Windows Azure customer, did last week's slip-up erode your confidence in storing your data in Microsoft's cloud? Drop me a line at [email protected] or leave a comment below.
Posted by Jeffrey Schwartz on February 26, 2013
In an effort to make it easier for developers to automate the process of modeling, deploying and scaling their apps, Amazon Web Services this week launched an application management service called AWS OpsWorks.
AWS OpsWorks uses management templates from Opscode, called Chef recipes, designed to provide flexible capacity provisioning, configuration management and deployment, while allowing administrators to manage access control and monitor the app, the company said Tuesday. Administrators can use AWS OpsWorks from the AWS Management Console.
"AWS OpsWorks was designed to simplify the process of managing the application lifecycle without imposing arbitrary limits or forcing you to work within an overly constrained model," said AWS evangelist Jeff Barr in a blog post. "You have the freedom to design your application stack as you see fit."
AWS OpsWorks is the latest service aimed at allowing more sophisticated management of the company's cloud services. It follows the release two years ago of AWS Elastic Beanstalk, aimed at rapid deployment and management of apps running among Amazon's portfolio of cloud services. Amazon more recently added CloudFormation, aimed at bringing together and managing various AWS resources.
The launch of AWS OpsWorks comes just days after Amazon made available its data warehousing service called Redshift. Amazon announced its plans to offer Redshift back in November at its first re:Invent partner and customer conference.
Amazon is hoping it can do to the data warehousing business with Redshift what it has done to computing and storage with EC2 and S3, respectively. "We designed Amazon Redshift to deliver 10 times the performance at 1/10th the cost of the on-premises data warehouses that are commonly used today," Barr wrote in an earlier blog post last week. "We used a number of techniques to do this, including columnar data storage, advanced compression, and high-performance disk and network I/O."
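The first two techniques Barr mentions reinforce each other: values within a single column tend to resemble one another, so grouping data by column rather than by row typically gives a compressor much more to work with. A toy Python sketch with synthetic sales data makes the point (this illustrates the general idea only, not Redshift's actual internals):

```python
import zlib

# 1,000 synthetic sales rows: (date, product, amount)
rows = [
    (f"2013-02-{i % 28 + 1:02d}", f"product-{i % 5}", f"{(i % 97) * 1.5:.2f}")
    for i in range(1000)
]

# Row layout interleaves all three fields record by record
row_layout = ";".join(",".join(r) for r in rows).encode()

# Columnar layout groups each field's values together
col_layout = ";".join(",".join(col) for col in zip(*rows)).encode()

row_size = len(zlib.compress(row_layout))
col_size = len(zlib.compress(col_layout))
print(f"row layout: {row_size} bytes, columnar layout: {col_size} bytes")
```

On repetitive, fact-table-like data such as this, the columnar arrangement compresses substantially better because the compressor sees long runs of similar values instead of constantly switching between dates, names and numbers.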
Amazon will be taking on some pretty large and established rivals in the data warehousing market, including Oracle, IBM, Teradata, SAP and Microsoft. Not that taking on entrenched players has ever stopped Amazon before. And many of them are also already partnering with Amazon.
What's your take on Amazon's latest new offerings? Do you think the company will commoditize app management and data warehousing? Drop me a line at [email protected] or leave a comment below.
Posted by Jeffrey Schwartz on February 20, 2013
In Nasuni's second annual comparison of leading providers of public cloud infrastructure services, Microsoft's Windows Azure BLOB storage performed significantly better than last year's runaway winner, Amazon Web Services.
Nasuni is a closely held supplier of turnkey data protection appliances that use public Infrastructure as a Service (IaaS) providers' object storage repositories as backup and recovery targets. While Nasuni officials said they conducted more exhaustive tests for this shootout, such as benchmarking a wider range of file sizes (from 1KB to 1GB), the company compared only five preferred IaaS providers -- Amazon, Google, Hewlett-Packard, Microsoft and Rackspace -- down from 16 last year.
Among the holdovers that didn't make this year's cut were AT&T, Nirvanix and Peer1 Hosting. Nasuni went with fewer providers this year because it only wanted to test those it considered the most likely candidates to use as backup targets for its customers. The company currently uses Amazon exclusively for that purpose, and last year's shootout results appear to have validated that choice.
"Amazon was just head and shoulders ahead of the rest last year," said Conner Fee, Nasuni's director of marketing, who said he was shocked to see Microsoft turn the tables on Amazon this year. Nasuni rated the speed of reads, writes and deletes to Windows Azure BLOB services at 99.96 percent, while Amazon performed at only 68 percent.
Response times when reading, writing and deleting files to Windows Azure averaged a half-second, with Amazon dropping from first place to second, though still performing reasonably well, Fee said. Not faring as well was Rackspace, whose response times ranged from a second and a half to two seconds. Fee said he was also surprised by Google's weak performance.
"This year, Microsoft's Windows Azure took a huge leap forward," Fee said. "It was incredibly surprising to us as we view this as a relative commodity space and we expect the experienced players to be out in front. What we found is that Microsoft's investments in Azure that they've been talking about for a while gave them the opportunity to leapfrog Amazon."
Brad Calder, general manager for Windows Azure storage at Microsoft, spelled out those improvements in a November blog post, describing the company's next-generation storage architecture, called Gen2. Microsoft deployed what it calls a Flat Network Storage (FNS) architecture that enables high-bandwidth links to storage clients. It also replaces traditional hard disk drives (HDDs) with flash-based solid state drives (SSDs). Here's how Calder described FNS:
"This new network design and resulting bandwidth improvements allows us to support Windows Azure Virtual Machines, where we store VM persistent disks as durable network attached blobs in Windows Azure Storage. Additionally, the new network design enables scenarios such as MapReduce and HPC that can require significant bandwidth between compute and storage."
Given that the reason Nasuni conducts these tests is to determine which cloud service providers to use, does this mean Nasuni will shift some or all of the data it backs up for its customers from Amazon to Windows Azure? Not so fast, according to Fee. "Amazon has always been our primary supplier and Azure our distant second," he said. "I think we'll see more opportunities to use them. Will this change this year? Maybe, but probably not. There's a lot more widgets to be made before we're willing to jump ship."
However, in several conversations with Nasuni, officials have described IaaS providers as commodity suppliers of storage, equivalent to the role HDD vendors play for storage system vendors like EMC and NetApp. "We do this testing because we're constantly evaluating suppliers," Fee said. "We test, compare and benchmark because we always want to make sure we're using the best suppliers and want to make sure our customers have the best possible experience."
When speaking to Rackspace CTO John Engates about another matter, I asked if he had heard about his company's poor showing in the Nasuni tests (Fee said the company had shared the findings with all the providers but Rackspace hadn't responded). Engates, though familiar with last year's shootout, said he hadn't heard about this year's findings, hence he didn't want to comment.
But he did say it's tough to draw any conclusions based on any one set of tests or benchmarks. "It depends on what your customers are doing as to whether your cloud is perfect or not," Engates said. Much of the data stored in Rackspace Cloud Files tends to be large data types that are enhanced by its partner Akamai's content delivery network (CDN), Engates said. Likewise, Fee received feedback from Amazon that suggested Amazon felt the tests were biased toward scenarios with lots of small files rather than large data types.
As it turns out, one of the reasons Microsoft's Windows Azure performed so well, Fee said, was that its architecture is optimized for large quantities of small files. "That's where Azure excelled," he said. "We based our tests on real-world customer data. It wasn't something we made up or can change. A lot of these guys were much better at handling larger files, and Azure excelled at small files, and that really influenced the results."
Despite the strong showing for Windows Azure, Fee said he believes that, with the investments all five companies are making, all of them could be contenders moving forward. "It wouldn't surprise me to see a new leader next year," he said.
Posted by Jeffrey Schwartz on February 19, 2013
Dell's 2010 acquisition of cloud integration upstart Boomi was driven by its goal to become a leading provider of connectivity from private to public cloud application services.
The acquisition seems to be paying dividends: Dell this week said its tools are used for 1 million integrations per day.
What does that mean? Boomi founder and GM of the business unit Rick Nucci described an integration as the execution of a process which moves data between two or more applications. For example, when a sales rep closes a deal and enters the data into a CRM system, you need to invoice the customer. That means connecting the CRM system to the billing app, Nucci explained.
A portion of the data may be in a premises-based system and other information may reside in a Software as a Service (SaaS) app. Boomi Atoms provide that connectivity with its SaaS-based platform and software connectors. This SaaS-based messaging middleware offering aims to offer an alternative to messaging middleware from the likes of Microsoft, IBM, Oracle and Tibco.
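Nucci's CRM-to-billing example can be sketched as a simple mapping step. This is a hypothetical illustration of what one such integration does, not Boomi's API: a closed deal record from a CRM system is transformed into an invoice record for a billing application, with the field names invented for the example.

```python
def crm_to_invoice(deal):
    """Map a closed CRM deal record to an invoice for the billing system (illustrative)."""
    if deal["stage"] != "closed-won":
        return None  # only closed deals generate an invoice
    return {
        "customer_id": deal["account_id"],
        "amount": deal["value"],
        "currency": deal.get("currency", "USD"),
        "memo": f"Invoice for deal {deal['id']}",
    }

deal = {"id": "D-42", "account_id": "A-7", "stage": "closed-won", "value": 1200.0}
invoice = crm_to_invoice(deal)
```

A platform like Boomi's runs many such mappings between on-premises and SaaS endpoints; each execution of one is what Nucci counts as an "integration."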
"The cloud is being adopted by large enterprises and as they do so, the way they think of integration is changing," Nucci said. "The way we think of middleware and integration is fundamentally changing and Boomi has built a product meant to solve integration in the cloud era."
The company on Monday said it has partnered with services company Wipro, which will use Dell Boomi's service as part of its cloud integration practice. "They have found the traditional on-premises middleware technologies just don't work to integrate with cloud services," Nucci said. The partnership follows a recent pact with Infosys. Nucci said Dell Boomi will be announcing a number of additional partnerships in the coming months.
Posted by Jeffrey Schwartz on February 12, 2013
Lately, it seems like every day there's a software supplier or service provider offering new options to use the public cloud for storage and data protection.
The latest is Veeam Software, which this week released a connector that lets users of its backup and recovery software use any of 15 public cloud Infrastructure as a Service (IaaS) offerings as backup targets. Among them are Microsoft's Windows Azure, Rackspace's Cloud Files, HP Cloud and Amazon Web Services' S3 storage and Glacier archiving services.
Veeam Backup Cloud Edition addresses data security with support for AES 256-bit encryption and aims to address network performance via its compression and de-duplication algorithms. Customers can also boost performance using WAN accelerators, explained Rick Vanover, Veeam's product strategy specialist. The company has partnerships with WAN optimization vendor Riverbed and cloud gateway supplier TwinStrata.
Customers can back up virtual machines, Vanover said. The offering allows enterprise customers to choose IaaS providers without having to learn their respective APIs. Are customers really looking to replace traditional tape with the cloud as a backup target? "People have been asking for this," Vanover said.
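The idea of targeting any provider without learning its API comes down to a common interface with per-provider connectors behind it. The sketch below is a generic illustration of that pattern, not Veeam's implementation: the names and the in-memory stand-in for a cloud object store are invented, and compression plus a checksum stand in for the pipeline (real connectors would also encrypt and would call each cloud's SDK).

```python
import gzip
import hashlib

class BackupTarget:
    """Provider-agnostic interface; real connectors would wrap S3, Azure Blob, Cloud Files, etc."""
    def put(self, key, data):
        raise NotImplementedError

class InMemoryTarget(BackupTarget):
    """Stand-in for a cloud object store, used here so the example is self-contained."""
    def __init__(self):
        self.objects = {}

    def put(self, key, data):
        self.objects[key] = data

def backup(target, key, payload):
    """Compress and checksum a payload, then hand it to whichever target was chosen."""
    compressed = gzip.compress(payload)
    digest = hashlib.sha256(payload).hexdigest()  # verify integrity on restore
    target.put(key, compressed)
    return digest, len(compressed)

target = InMemoryTarget()
payload = b"virtual machine disk image bytes" * 100
digest, stored_size = backup(target, "vm-disk-001", payload)
```

Because callers only ever see `backup()` and the `BackupTarget` interface, switching providers means swapping the connector, not rewriting the backup logic.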
Last week, cloud provider Savvis announced the release of its Symphony Cloud Storage offering. PJ Farmer, director of Savvis' cloud storage product management, said in a blog post that the service offers "automatic protection from geographic disaster and for easily providing local storage targets for distributed applications."
Based on EMC's Atmos platform, Symphony Cloud Storage offers built-in replication and enables organizations that must address data sovereignty to set policies where data is stored.
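A data-sovereignty policy of the kind described reduces to a placement check: before data is replicated or stored, the candidate regions are filtered against the customer's allowed jurisdictions. This is a minimal, hypothetical sketch of that check, with invented field names rather than anything from the Atmos platform.

```python
def placement_regions(policy, regions):
    """Return the regions where this customer's data may be stored (illustrative policy check)."""
    return [r for r in regions if r["country"] in policy["allowed_countries"]]

regions = [
    {"name": "us-east", "country": "US"},
    {"name": "eu-west", "country": "DE"},
]
# e.g. a customer whose regulations confine data to German data centers
policy = {"allowed_countries": {"DE"}}
eligible = placement_regions(policy, regions)
```

The replication layer then picks its copies only from the eligible list, which is how a policy setting becomes a guarantee about where data physically lives.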
But it's not just the big players that are eyeing storage and backup and recovery. I've talked to a number of providers who target small and medium businesses (SMBs). Cloud storage was a big topic at the Parallels Summit in Las Vegas last week, where the company launched Parallels Cloud Storage, a platform that lets SMB-focused cloud and hosting providers improve storage capacity and utilization by creating self-healing, distributed, high-performance storage pools.
"It's highly available, self-healing and fully fault-tolerant with auto-recovery," explained Parallels CEO Birger Steen. "It looks simple. It's hard to do but conceptually it's pretty simple."
Posted by Jeffrey Schwartz on February 12, 2013