Rackspace Hosting is shutting down its Slicehost service  within the next 12 months, the company said  in a letter to customers. 
		Acquired by Rackspace in 2008, Slicehost is a managed  hosting provider that Rackspace maintained as a separate business unit. The  move is likely to be unwelcome news to those who must migrate from the  Slicehost service.
		"With two brands,  two control panels and two sets of support, engineering and operations teams,  it has been a challenge to keep development parity between the products,"  wrote Mark Interrante, Rackspace VP of product. 
The company's emphasis on its OpenStack open source project and the need to convert to IPv6 are the two primary reasons for shutting down Slicehost. Rackspace plans to convert all Slicehost accounts to Cloud Servers over the next year.
By converting to Rackspace Cloud Servers accounts, the company said, customers will be better positioned for IPv6 and will gain access to Cloud Files, the Cloud Files content delivery network and the recently released Cloud Load Balancers.
		"Naturally, this  decision has not been easy," Interrante said. "There has been  extensive planning, and will continue to be more, to ensure this change is as  seamless as possible for everyone."
		Following the company's announcement, Interrante posted  a detailed Q&A outlining the transition, and Rackspace's rationale for shutting down  Slicehost. "We truly believe this change will be in the best interest  of Slicehost customers over the long term," he said. "A big reason we  purchased Slicehost was to learn from their technology and their customers so  we could build up the Rackspace Cloud solutions to the Slicehost level of  excellence. We want to retain or improve your product experience, not make it  worse." 
 
Posted by Jeffrey Schwartz on May 04, 2011
One of the most popular PC and server backup and recovery software products is Symantec's Backup Exec, and the company said this week that cloud-based support is on the way.
		Symantec announced Backup Exec.cloud at its annual Vision  conference in Las Vegas.  The new offering is targeted at small and medium businesses and branch offices  of larger enterprises that want to offload backups to a cloud-based service.
		Backup Exec.cloud will complement  Symantec's plans to  offer expanded Software as a Service (SaaS) solutions for security, e-mail  management and data protection, the company said. The new service, due out  later this year, will allow customers to stream backups over SSL connections to  Symantec datacenters. 
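Symantec hasn't detailed the wire protocol, but the general mechanics of streaming a backup over an encrypted connection look something like the sketch below; the endpoint URL, authentication header and chunking scheme are hypothetical stand-ins, not Backup Exec.cloud's actual API.

```python
# Hypothetical sketch of streaming a backup archive over an SSL/TLS
# connection to a remote datacenter. The endpoint, auth header and
# chunking scheme are illustrative only, not Symantec's actual
# Backup Exec.cloud protocol.
import requests

BACKUP_ENDPOINT = "https://backup.example.com/v1/streams"  # hypothetical URL

def stream_backup(archive_path, api_key, chunk_size=1024 * 1024):
    """Stream a local archive to the service in fixed-size chunks."""
    def chunks():
        with open(archive_path, "rb") as f:
            while True:
                block = f.read(chunk_size)
                if not block:
                    break
                yield block

    # requests verifies the server certificate by default, so the
    # stream travels over an encrypted TLS channel.
    resp = requests.post(
        BACKUP_ENDPOINT,
        data=chunks(),  # generator body -> chunked transfer encoding
        headers={"Authorization": "Bearer " + api_key},
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()

# Example: stream_backup("/var/backups/nightly.bkf", "API_KEY")
```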
While Symantec will compete with a slew of other cloud providers offering such services, Backup Exec's strong installed base should give it an advantage with shops looking to migrate their backups off site.
The service will allow users to restore individual files, the company said. Pricing was not announced, but it will be subscription-based.
 
Posted by Jeffrey Schwartz on May 04, 2011
    	[UPDATE: Amazon  released a detailed report explaining the cause of the outage on Friday. Read the story here.]
	Amazon Web Services' four-day outage was a defining moment  in the history of cloud computing -- not only for its impact but for the company's  deafening silence.
The widely reported outage at Amazon's Northern Virginia datacenter left a number of sites crippled for several days, though Amazon most recently reported that service has been restored. However, the company has acknowledged that 0.07 percent of the Elastic Block Store (EBS) volumes apparently won't be fully recoverable.
		"Every day, inside companies all over the world, there are  technology outages," Rackspace Chief  Strategy Officer Lew Moorman told The New York Times. "Each episode is smaller, but they add up to far more lost  time, money and business."
		As for the Amazon outage, he added: "We  all have an interest in Amazon handling this well." Did Amazon handle this  well? Let's presume the company did everything in its power to remedy the  problem and get its customers back online. Amazon has promised to issue a  post-mortem once it gets everyone restored and figures out what went wrong. 
		But the company went dark from a  communications perspective. Sure, it posted periodic updates on its Service Health Dashboard, but the company  issued no other public statements on the situation as it was unfolding (though  it was in direct communication with affected customers). Considering how  visible Amazon technologists are on social media, including Twitter, a mere  reference to the dashboard felt shallow. 
		"Most customers are saying today they have not been  very transparent and open about what has exactly happened," Forrester  analyst Vanessa Alverez told  Bloomberg TV. "Their public relations to date has not been up to par."
Consider the communiqué of one of Amazon's customers affected by the outage. In a blog post called "Making it Right..." HootSuite explained to customers what happened and how it was going to make good on the downtime it experienced. Although its terms of service only require reimbursement after a 24-hour outage, and it was down for just 15 hours, HootSuite said it would offer credits.
		"We acknowledge users were  inconvenienced and we want to make things right," the company said.  "We are taking steps to increase  redundancy of our services and data across multiple geographic regions. This  was a bit of a unique outage which is highly unlikely to occur again, but we'll  be even more prepared for future emergencies."
During the outage, and as of this writing a week after it first hit, no such communication has come from Amazon. Pund-IT analyst Charles King said in a research note that datacenter failures, even major ones, are inevitable, but communication is critical. He wrote:
		
				"The fact that disaster is inevitable is why  good communications skills are so crucial for any company to develop, and why  Amazon's anemic public response to the outage made a bad situation far worse  than it needed to be. Yes, the company maintained a site that regularly updated  how repairs were progressing, and, to its credit, Amazon says it will publish a  full analysis of the outage after its investigation is complete.
		
		
				"But while the company has been among the  industry's most vocal cloud services cheerleaders, it seemed essentially tone  deaf to the damage its inaction was doing to public perception of cloud  computing. At the end of the day, we expect Amazon will use the lessons learned  from the EC2 outage to significantly improve its service offerings. But if it  fails to closely evaluate communications efforts around the event, the company's  and its customers' suffering will be wasted." 
		
		I remember during the dotcom boom over a decade ago when  companies like Charles Schwab, E-Trade and eBay had highly visible outages that  affected many thousands of customers. They took big PR hits for their lack of  availability but their Web businesses prospered nonetheless. 
While Amazon's outage will elevate the discussion about the importance of resiliency and redundancy (those discussions were already happening), it seems highly unlikely to alter the move to cloud computing, even if it serves as a historic speed bump. "We shouldn't let Amazon off the hook and should expect a very thorough postmortem. But in no way does this change the landscape for the age-old public-private debate," writes analyst Ben Kepes.
		While Amazon's outage was a black eye for cloud  computing, providers of all sizes, including Amazon, will undoubtedly learn  from the mistakes that were made, both technical and procedural. Hopefully,  that will include better communications moving forward.
 
Posted by Jeffrey Schwartz on April 28, 2011
		Boomi, the provider of cloud integration software acquired  by Dell late  last year, has upgraded its AtomSphere software with improved middleware  connectivity, support for large datasets and extended monitoring capabilities. 
		AtomSphere is designed to connect Software as a Service  cloud offerings from the likes of Salesforce.com, NetSuite and others to on-premises systems.
		"Larger enterprises are continuing to adapt SaaS, and as  a result the integration requirements are growing in scale and complexity,"  said Rick Nucci, Boomi's founder and now CTO of the Dell business unit. "We  are seeing enterprises look at cloud and look at Boomi to help them integrate  and then proceed to fit them into their environment as efficiently as possible  and adhere to current investments that they've made."
AtomSphere Spring 11 includes a new middleware cloud gateway based on a Java Message Service (JMS) connector that links to existing middleware offerings from IBM, Progress Software, Tibco and webMethods. The gateway connects to more than 70 SaaS applications, Nucci said.
		Previously, AtomSphere connected directly to the apps but  Nucci said customers wanted the ability to link to their existing middleware "because  they've built intelligence or logic or validation routines into that middleware."
		The new release also adds support for change data capture,  or CDC, as well as large data processing in the form of hundreds of gigabytes  per atom. For Salesforce.com shops, AtomSphere now offers optimized integration  as a result of support for that company's Bulk API. 
		"It's a pretty complex API," Nucci said. "The  approach we've taken abstracts a lot of those technical details and allows the  user to give the data set to our connector and have our connector optimize and  transmit the data up to Salesforce."
		A new AtomSphere API allows customers to integrate its  monitoring capabilities with their own systems management consoles. 
The company also has launched a partner certification program. Dell Boomi has 70 partners now, many of which are SaaS providers and systems integrators. Nucci said the company is looking to bolster that number since partners are its primary route to market.
		"As part of that scale and growth comes the need to  ensure quality and make sure we have a very scalable and reliable and  consistent means to acknowledge and accredit a partner who is investing in  learning Boomi and really demonstrating that they get it," Nucci said.
		To attain certification, partners will need to complete two  implementations, pass an exam and commit to annual recertification. 
 
Posted by Jeffrey Schwartz on April 27, 2011
		BMC Software has upgraded  its Cloud Lifecycle Management platform to support creation and management of  complete private and hybrid cloud stacks. 
		The introduction of  CLM 2.0 comes a year after the first release, which focused on virtualization  management and datacenter automation, thanks to the company's $800 million  acquisition of BladeLogic. BMC describes CLM 2.0 as a cradle-to-grave cloud  provisioning and management platform.
		BMC said it made  architectural improvements to CLM with two key new features. One is "service blueprints,"  which are geared to enable administrators to create configurable, full-stack,  multi-tiered cloud offerings for their users.
		"They have been  designed to be incredibly flexible and support a really broad range of cloud  services being delivered through the environment," said Lilac Schoenbeck, BMC's  senior manager for solutions marketing and cloud computing.
The second key feature is a service governor, which lets customers set policies governing how cloud services are configured and managed, Schoenbeck said.
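BMC doesn't spell out the blueprint or policy format here, so the snippet below is a purely hypothetical illustration of the kind of information a full-stack, multi-tiered service blueprint and its governing policies might capture; none of the field names come from CLM 2.0.

```python
# Purely hypothetical illustration of the kind of information a
# full-stack, multi-tiered service blueprint and its governing policies
# might capture. None of these field names come from BMC CLM 2.0.
web_app_blueprint = {
    "name": "three-tier-web-app",
    "tiers": [
        {   # Load-balanced web tier that may scale out under load.
            "role": "web",
            "image": "linux-apache",
            "instances": {"min": 2, "max": 8},
            "size": {"vcpus": 2, "memory_gb": 4},
        },
        {   # Application tier.
            "role": "app",
            "image": "linux-tomcat",
            "instances": {"min": 2, "max": 4},
            "size": {"vcpus": 4, "memory_gb": 8},
        },
        {   # Database tier pinned to dedicated hardware.
            "role": "db",
            "image": "linux-oracle",
            "instances": {"min": 1, "max": 1},
            "size": {"vcpus": 8, "memory_gb": 32},
            "placement": "dedicated",
        },
    ],
    # Policies a service governor could evaluate at provisioning time.
    "policies": {"network_zone": "dmz-plus-internal", "backup": "nightly"},
}
```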
		CLM 2.0 also  includes a planning and design tool that helps determine capacity needs. It  allows the use of BMC's ProactiveNet Performance Management (BPPM) tool to monitor  public cloud services running in Amazon's EC2 and Microsoft's Windows Azure environments. Schoenbeck said the company is  working with other service providers to create adaptors but BMC also has an API  that will work with any cloud provider.
 
Posted by Jeffrey Schwartz on April 27, 2011
    		Microsoft International President  Jean-Philippe Courtois earlier this month told  Bloomberg that the company will spend a whopping 90 percent of its $9.6  billion research and development budget on cloud computing this year. 
That brings up the question: Is Microsoft putting  all its eggs in one basket? Sourya Biswas asks that same thing in a blog  post this week. A proponent of cloud computing and, according to his LinkedIn profile, an MBA student at  the University of Notre Dame and a former risk analytics manager at Citigroup,  Biswas wonders if Microsoft is throwing the baby out  with the bathwater. He writes in his blog:
Make  no mistake; I believe that cloud computing is the technology of and for the  future. But allocating 90 percent of the research budget on an emerging  technology without paying adequate attention to established products in which  it has dominance is too big a risk in my book. Especially since that dominance  is under threat, with the rise of Firefox and Chrome against the Microsoft  Internet Explorer, and the growing popularity of Linux versus Microsoft  Windows.
I believe  there may be a sense of hubris in the way Microsoft is neglecting its  established revenue lines. While its Windows still powers more than 80% of the  computers in the world, there are several complaints against the operating  system. In fact, many would argue that a lot of that $9.6 billion R&D  should have been allocated to making the next edition of Windows bug-free,  resource-light and malware-resistant.
Despite Microsoft's preaching that it is "all in" the cloud, the company has taken a measured approach, emphasizing that users will continue to work on local client devices and have access to their data offline.
While keeping its eye on rivals such as Google,  Salesforce.com and Amazon Web Services, Microsoft needs to keep investing in  technologies such as Windows, Office, SharePoint and Lync. Even if they all  ultimately have substantial cloud components, the offline world will remain a  critical component to users and Microsoft customers will expect significant  investments in technologies that support the local device. I think Microsoft  knows and understands this.
Time will tell what Microsoft's R&D emphasis will bring.  But Biswas' point that Microsoft needs to invest in  Windows and Internet Explorer is important. Do you think that Microsoft's plan  to invest 90 percent of its R&D budget on cloud computing is going too far?  Or is the company just putting a cloud tag on everything it does? Drop me a  line at [email protected].
 
Posted by Jeffrey Schwartz on April 21, 2011
    		Rackspace Hosting this week added a new load balancing  service aimed at letting customers rapidly scale capacity.
Called Rackspace Cloud Load Balancers, the service is  intended for those with mission-critical Web apps. It lets customers configure  cloud servers or dedicated hosts with more capacity as workloads require.
"We designed it in a way where a load balancer is  provisioned for a customer in literally a matter of seconds, always under a  minute," said Josh Odom, a product line leader at Rackspace. "It's  designed to be highly configurable."
Rackspace designed the product to be interoperable with its  RackConnect solution, which allows Rackspace cloud customers to mix and match  dedicated server infrastructure with cloud servers, according to Odom.
Once a Rackspace Cloud Server account is established, a customer can log into the control panel and select a cloud load balancer from the Hosting menu, or add one via the API.
The service is powered by Cambridge, U.K.-based Zeus Technology, and includes static IP addresses, built-in high availability, support for multiple protocols and algorithms, API and control panel access, and session persistence, Rackspace said.
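The article doesn't show the API call itself; the sketch below approximates creating a load balancer through the Cloud Load Balancers REST API, with the regional endpoint, account ID, node addresses and exact payload fields treated as assumptions to check against Rackspace's API documentation.

```python
# Sketch of creating a load balancer through the Rackspace Cloud Load
# Balancers REST API (v1.0). The regional endpoint, account ID, node
# addresses and exact payload fields should be treated as assumptions
# and checked against Rackspace's API documentation.
import requests

ENDPOINT = "https://ord.loadbalancers.api.rackspacecloud.com/v1.0/123456"  # placeholder account ID

def create_load_balancer(auth_token):
    payload = {
        "loadBalancer": {
            "name": "web-lb-1",
            "protocol": "HTTP",
            "port": 80,
            "algorithm": "ROUND_ROBIN",
            "virtualIps": [{"type": "PUBLIC"}],
            "nodes": [
                {"address": "10.1.1.1", "port": 80, "condition": "ENABLED"},
                {"address": "10.1.1.2", "port": 80, "condition": "ENABLED"},
            ],
        }
    }
    resp = requests.post(
        ENDPOINT + "/loadbalancers",
        json=payload,
        headers={"X-Auth-Token": auth_token},  # token from the auth service
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["loadBalancer"]["id"]
```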
Pricing for the load balancing service starts at 1.5 cents  an hour, or $10.95 per month. Customers are only charged for the Cloud Server  if they build the server. 
 
Posted by Jeffrey Schwartz on April 20, 2011
    		Hewlett-Packard Co. last week released Cloud Services  Automation 2.0, an upgraded version of its toolset aimed at simplifying the  transformation of premises-based apps to those that can run in the cloud. 
CSA 2.0 not only accelerates the deployment of cloud  infrastructure but it expedites the deployment and configuration of the  applications, said Paul Muller, VP of strategic marketing for HP Software  products.
"Most applications take a considerable amount of manual  time and effort to tune and configure," Muller said. "Even if the  imaging of that application is being automated, it's often the configuration  and tuning of that application to get it ready for production workloads that is  the last mile required to make an application run in an optimal fashion in a  cloud environment. That's exactly what we've done with Cloud Service Automation  2.0, is package up everything from infrastructure through platform through  application deployment."
Among the key capabilities in CSA 2.0 are more than 4,000 new or updated workflows and best practices for deploying infrastructure, applications and middleware, or Platform as a Service (PaaS), according to Muller. Enabling that capability was HP's acquisition of Stratavia back in August.
Stratavia offers deployment, configuration and management  software for databases, middleware and packaged apps. HP now calls that  technology Database Middleware Automation, or DMA. 
CSA 2.0 also includes a service request catalog capability aimed at minimizing the need to use multiple service providers' portals, providing a simpler, consumer-like interface for selecting and requesting services.
"Once the service is requested, the deployment is  seamlessly automated behind the scenes," Muller said. CSA 2.0 employs new  intelligent resource management and policy enforcement that can address the  need for highly available infrastructure, least expensive service or  infrastructure that's pinned to a specific geography.
Pricing for the software starts at $35,000.
 
Posted by Jeffrey Schwartz on April 20, 2011
    		Fresh off Microsoft's  announcement last week that the next version of its Dynamics AX enterprise  resource planning (ERP) suite will be available as a hosted cloud service, systems  integrator Avanade said it will do the same -- but customers don't have to wait.
Avanade, which is 80 percent owned by IT outsourcing firm Accenture with the remaining stake held by Microsoft, launched Cloud ERP at the annual Convergence conference that took place in Atlanta last week. Cloud ERP will work with the forthcoming version, Dynamics AX 2012, as well as the current release.
"We are seeing demand from our current clients for a  cloud-based Software as a Service-type ERP provisioning around Dynamics,"  said Bernd Weidenmueller, VP of Dynamics AX at Avanade. 
The service is available to companies with at least 40  employees but can scale up to the largest of Fortune 500 companies, Weidenmueller  said. Avanade performs the customization of customers' applications and uses a  third-party hosting company to host the application, Weidenmueller explained. 
Genesis Casket Co., an Indianapolis-based startup  manufacturer, is Avanade's first reference customer. The company needed an ERP  solution and didn't want to develop or run it internally. "They were  looking for a provider that can provide an ERP system with the strength of  Dynamics AX but run in a Software as a Service delivery model which could scale  up with their user growth with pricing attached to that," Weidenmueller  said.
 
Posted by Jeffrey Schwartz on April 19, 2011
    		At the MIX 11 conference in Las Vegas  this week, Microsoft revealed a number of new features in its Windows Azure  service, as well as several new offers to those testing the company's cloud  service. 
The new features are targeted at developers to help them  build apps faster, while accelerating the performance of applications and providing  access to those apps via popular identity providers, including Microsoft's  Active Directory, Windows Live ID, Google, Yahoo! and Facebook.
Here's how an MSDN blog post describes each of the new services:
- An update to the Windows Azure SDK that includes a Web Deployment Tool to simplify the migration, management and deployment of IIS Web servers, Web applications and Web sites. This new tool integrates with Visual Studio 2010 and the Web Platform Installer.
- Updates to the Windows Azure AppFabric Access Control service, which provides a single-sign-on experience to Windows Azure applications by integrating with enterprise directories and Web identities.
- Release of the Windows Azure AppFabric Caching service in the next 30 days, which will accelerate the performance of Windows Azure and SQL Azure applications.
- A community technology preview (CTP) of Windows Azure Traffic Manager, a new service that allows Windows Azure customers to more easily balance application performance across multiple geographies.
- A preview of the Windows Azure Content Delivery Network (CDN) for Internet Information Services (IIS) Smooth Streaming capabilities, which allows developers to upload IIS Smooth Streaming-encoded video to a Windows Azure Storage account and deliver that video to Silverlight, iOS and Android Honeycomb clients (see the upload sketch after this list).
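The Smooth Streaming item above ultimately comes down to getting encoded video into Windows Azure blob storage. Here's a minimal sketch of such an upload via the Blob service REST API, assuming a pre-generated Shared Access Signature; the storage account, container and SAS token are placeholders.

```python
# Minimal sketch of uploading an encoded video file to Windows Azure
# blob storage via the Blob service REST API, assuming a pre-generated
# Shared Access Signature (SAS). The storage account, container and SAS
# token below are placeholders, not real credentials.
import requests

ACCOUNT = "mystorageaccount"      # placeholder storage account
CONTAINER = "smooth-streaming"    # placeholder container
SAS_TOKEN = "sv=...&sig=..."      # placeholder SAS query string

def upload_blob(local_path, blob_name):
    url = ("https://" + ACCOUNT + ".blob.core.windows.net/"
           + CONTAINER + "/" + blob_name + "?" + SAS_TOKEN)
    with open(local_path, "rb") as f:
        resp = requests.put(
            url,
            data=f,
            headers={"x-ms-blob-type": "BlockBlob"},  # create a block blob
            timeout=300,
        )
    resp.raise_for_status()
    return url.split("?")[0]  # blob URL without the SAS query string

# Example: upload_blob("bigbuckbunny.ismv", "bigbuckbunny.ismv")
```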
 
On the offer side, Microsoft announced the following,  as described by the MSDN post:
- The extension of the expiration date and increases to the amount of free storage, storage transactions and data transfers in the Windows Azure Introductory Special offer. This promotional offer now includes 750 hours of extra-small instances and 25 hours of small instances of the Windows Azure service, 20GB of storage, 50K of storage transactions, and 40GB of data transfers provided each month at no charge until September 30, 2011. More information can be found here.
  - An existing customer who signed up for the original Windows Azure Introductory Special offer will get a free upgrade. An existing customer who signed up for a different offer (other than the Windows Azure Introductory Special) would need to sign up for the updated Windows Azure Introductory Special offer separately.
- MSDN Ultimate and Premium subscribers will benefit from increased compute, storage and bandwidth benefits for Windows Azure. More information can be found here.
- The Cloud Essentials Pack for Microsoft partners now includes 750 hours of extra-small instances and 25 hours of small instances of the Windows Azure service, 20GB of storage and 50GB of data transfers provided each month at no charge. In addition, the Cloud Essentials Pack also contains other Microsoft cloud services including SQL Azure, Windows Azure AppFabric, Microsoft Office 365, Windows Intune and Microsoft Dynamics CRM Online. More information can be found here.
 
 
Posted by Jeffrey Schwartz on April 13, 2011
Iron Mountain is shutting down its public cloud storage services Virtual File Store and Archive Service Platform, market researcher Gartner reported in a research note last week, making it the third company to exit the market over the past year.
Startup Vaultscape shut down last year, and EMC's Atmos  Online also went offline last year, Gartner noted. "To date, public cloud  storage IaaS has had a modest level of adoption," according to the  research note. "Not incidentally, all three service providers'  go-to-market strategies focused purely on cloud storage unaccompanied by any  cloud compute services."
In a statement, Iron Mountain confirmed that it is shutting down the two services. "Iron Mountain did recently notify customers of our Virtual File Store and Archive Service Platform that we are retiring these two commodity cloud-storage solutions," the company said.
"This decision only affects those using Virtual File  Store, a low-cost cloud storage option for inactive files, and technology  partners who use the Archive Service Platform as a general-purpose cloud for  storing their customers' data," the statement continued. "As the Gartner report notes, public cloud  service offerings like these have seen modest levels of adoption."
While the company stopped taking orders for the services as of April 1, they won't be retired before the first half of 2013. Iron Mountain will transfer Virtual File Store customers next year to its higher-value File System Archiving (FSA) service, a hybrid offering that uses policy-based archiving of data both on-site and in the cloud. There is no migration path for Archive Service Platform customers.
 
Posted by Jeffrey Schwartz on April 13, 2011
A high-profile startup run by key founders of Amazon Web Services' EC2 cloud service is shipping its first product.
Nimbula last week officially released Director 1.0, a cloud  operating system that enterprise customers and service providers can install on  their own servers. The software provides an EC2-like experience, according to  the Mountain View, Calif.-based company.
Using policy-based identity management, Director lets customers manage both on- and off-premises cloud services. Running on x86-based server hardware, Director follows the EC2 model, offering secure multitenancy, orchestration, metering and monitoring, said Martin Buhr, the company's vice president of sales and business development.
The company came out of stealth mode last June, when it  delivered the first private beta of Director, followed by a public beta in  December. "The announcement of GA has been a huge catalyst to us,"  Buhr said. "We were talking to a lot of potential customers and prospects  both with enterprises and with service providers who were waiting for us to  exit beta."
The company is continuing to encourage customers to test its  software. For deployments up to 40 cores, the company is offering Director free  of charge. For larger installations, Nimbula is offering annual subscriptions  that include support and maintenance. 
While there is no shortage of cloud startups, Nimbula has an  all-star cast of founders. CEO Chris Pinkham was VP of engineering at Amazon,  where he was responsible for the company's worldwide hardware and software  infrastructure and shepherded the EC2 project in its early days.
Also, Willem van Biljon, who wrote the business plan for EC2  and managed the original EC2 engineering team, is a Nimbula co-founder and VP  of products. And Buhr himself was on the original EC2 sales and business development  team. 
The company has $20.75 million in funding from Accel  Partners and Sequoia Capital. 
 
Posted by Jeffrey Schwartz on April 12, 2011