IBM's SmartCloud portfolio of public and private cloud offerings is being fleshed out in response to growing demand from IT and enterprise users.
Based on an IBM survey of 500 enterprise IT and business  executives, 33 percent have deployed more than one cloud pilot to date, a  figure poised to double by 2014. The survey also found that 40 percent see the  cloud as bringing "substantial change" to their IT environments. IBM  said it will be supporting 200 million users in the cloud by the end of next  year.
"It's clear to us, what we're seeing is a fundamental  transformation of how our clients are trying to change the economics of their  IT environment and speed the delivery of new innovative products and services,"  said Scott Hebner, VP of market strategy at IBM Tivoli. 
On the public cloud front, IBM plans to launch a platform-as-a-service (PaaS) offering called SmartCloud Application Services, a managed service that provides application infrastructure. The service will offer application lifecycle management, application integration and the ability to manage applications in the hosted environment.
The new PaaS offering, due to go into beta later this quarter, will run atop IBM's SmartCloud Enterprise and SmartCloud Enterprise+, the company's public infrastructure-as-a-service (IaaS) offerings announced in April. IBM this week announced the commercial availability of the IaaS offerings in the United States, with full global availability slated for the end of 2012.
Building on its private cloud portfolio, IBM launched SmartCloud Foundation for customers looking to build and operate cloud infrastructures internally. The company introduced three offerings:
- SmartCloud Entry: a starter kit that allows organizations to build private clouds running on standard x86 or IBM Power processor-based infrastructure. The modular offering lets organizations scale as demand for capacity increases.
- SmartCloud Provisioning: software consisting of a provisioning engine and an image management platform that can spin up more than 4,000 virtual machines in less than an hour, IBM said.
- SmartCloud Monitoring: a tool that provides views of virtual and physical environments, including storage, networks and servers. It offers predictive and historical analytics designed to warn IT admins of potential outages.
IBM also announced the availability of its SAP Managed  Application Services. Announced back in April, the service will allow for the  automated provisioning and management of SAP environments.
 
 
Posted by Jeffrey Schwartz on October 14, 2011

    		Microsoft's SQL Server database server platform and its  cloud-based SQL Azure may share many core technologies but they are not one and the same. As a result, moving data and apps from one to the other is not all  that simple. 
Two companies this week set out to address that during the  annual PASS Summit taking place in Seattle. Attunity and CA  Technologies introduced tools targeted at simplifying the process of moving  data from on-premises databases to SQL Azure. 
Attunity Replicate loads data from SQL Server, Oracle and  IBM's DB2 databases to SQL Azure and does so without requiring major  development efforts, claimed Itamar Ankorion, Attunity's VP of business  development and corporate strategy. 
"The whole idea is to facilitate the adoption of SQL  Azure, allow organizations and ISVs to benefit from cloud environments and the  promise of SQL Server in the cloud," Ankorion said. "One of the main  challenges that customers have is how do they get their data into the cloud.  Today, it requires some development effort, which naturally creates a barrier  to adoption, more risk for people, more investment, with our tools, it's a  click away."
It does so by using Microsoft's SQL Server Integration  Services, which provides data integration and transformation, and Attunity's  change data capture (CDC) technology, designed to efficiently process and  replicate data as it changes. "We wanted to basically provide software  that would allow you to drag a source and drag a target, click, replicate and  go," Ankorion said.
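Attunity hasn't published Replicate's internals, but the SQL Server side of the CDC mechanism it builds on is easy to see. Below is a minimal Python sketch, assuming pyodbc and a hypothetical SalesDb database with an Orders table, that enables CDC and reads the recorded changes a replication tool would forward to a target such as SQL Azure; error handling is omitted.

```python
# Minimal sketch of SQL Server change data capture (the kind of mechanism
# a replication tool builds on). Server, database and table names are
# hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=onprem-sql;"
    "DATABASE=SalesDb;Trusted_Connection=yes"
)
cur = conn.cursor()

# Enable CDC for the database, then for one source table. SQL Server then
# logs every insert/update/delete on dbo.Orders into a change table.
cur.execute("EXEC sys.sp_cdc_enable_db")
cur.execute("""
    EXEC sys.sp_cdc_enable_table
         @source_schema = N'dbo',
         @source_name   = N'Orders',
         @role_name     = NULL
""")
conn.commit()

# Read all changes captured so far for the dbo_Orders capture instance;
# a replication tool would apply each row to the SQL Azure target.
rows = cur.execute("""
    DECLARE @from binary(10) = sys.fn_cdc_get_min_lsn('dbo_Orders');
    DECLARE @to   binary(10) = sys.fn_cdc_get_max_lsn();
    SELECT * FROM cdc.fn_cdc_get_all_changes_dbo_Orders(@from, @to, N'all');
""").fetchall()

for row in rows:
    print(row)
```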
For its part, CA rolled out a new version of its popular  ERwin data modeling tool for SQL Azure. CA ERwin Data Modeler for SQL Azure lets  customers integrate their in-house databases with SQL Azure. 
"CA ERwin Data Modeler for Microsoft SQL Azure provides  visibility into data assets and the complex relationships between them,  enabling customers to remain in constant control of their database  architectures even as they move to public, private and hybrid IT environments,"  said Mike Crest, CA's general manager for data management, in a statement.
CA's tool provides a common interface for combining data assets from both on-premises and cloud-based databases. Customers can use the same modeling procedure to maintain their SQL Azure databases.
 
 
Posted by Jeffrey Schwartz on October 13, 2011

Looking to quell criticism that it has too much control over the OpenStack Project, Rackspace has agreed to establish an independent foundation next year that will assume ownership and governance of the open source cloud platform.
		Rackspace had been under pressure to make such a move, and did  so last week, announcing its intention to form the foundation during the  OpenStack Conference in Boston.
		"This marks a major milestone in the evolution of OpenStack  as a movement to establish the industry standard for cloud software," said  Mark Collier, Rackspace's VP of business and corporate development, in a blog  post. "The promise of a vendor-neutral, truly open cloud standard is  within reach.  By doing this important work together, as a community, we  can achieve something much bigger with a lasting impact on the future of  computing."
OpenStack was developed by NASA and Rackspace, which made the code freely available under the terms of the Apache 2.0 license last year. More than 100 companies have jumped on the OpenStack bandwagon, including Canonical, Citrix, Cisco, Dell, Hewlett-Packard, Intel and SuSE. Many have contributed code and have committed to developing cloud products and services based on OpenStack. But even as the project has grown, Rackspace's control over the effort has remained an ongoing concern.
The company held a meeting last Thursday to take the first step toward creating the foundation. According to meeting notes from Scott Sanchez, director of business development for Rackspace Cloud Builders, the gathering was a session to answer questions, gather input and explain the company's intentions. No decisions were made regarding how the foundation will be structured or funded.
Lew Moorman, Rackspace's president and chief strategy officer, told attendees that the company's motives were not to defray costs or reassign personnel to other tasks but rather to prevent OpenStack from "forking," according to the meeting notes.
One concern was the potential for the transition process to cause new member companies to delay joining, since they will want to see how the organization is structured. The issue, Moorman noted, is to "ensure long term independence for OpenStack" while not creating short-term barriers to the effort's progress.
		Rackspace moved forward this week, creating a mailing  list aimed at getting the discussion going. The initial discussion will  revolve around determining the foundation's mission and scope, Collier noted in  an updated blog  post. 
It appears Rackspace is taking an important step that should be welcomed by the OpenStack community and future stakeholders. The challenge will be to get everyone on board with the structure and funding, neither of which is trivial, without moving so slowly that the process ends up in limbo.
		What's your take on Rackspace's decision to move OpenStack  to an independent foundation? Leave a comment below or drop me a line at [email protected].
 
Posted by Jeffrey Schwartz on October 12, 2011

		In a move that will boost its portfolio of high-performance  computing (HPC) and cloud management software, IBM on Tuesday said it has agreed  to acquire Platform Computing for an undisclosed sum. 
Founded 19 years ago, Toronto-based Platform is regarded as a leading provider of workload management software for distributed, clustered and grid computing environments. IBM boasts that Platform has more than 2,000 customers, including 23 of the world's 30 largest enterprises. Among them are the European Organization for Nuclear Research (better known as CERN), Citigroup and Pratt & Whitney.
		"IBM considers the acquisition of Platform Computing to  be a strategic element for the transformation of HPC into the high growth  segment of technical computing and an important part of our smarter computing  strategy," said Helene Armitage, general  manager of Systems Software at IBM, in a statement.
		That strategy includes allowing customers to move HPC  workloads to public and private clouds. Among its offerings are Platform ISF, a  private cloud management tool that manages workloads across various virtual  machines, operating systems, cloud resources and physical and virtual servers.
Customers can also extend clusters to cloud and hosting providers with Platform LSF, a workload management platform, and Platform MultiCluster, a cluster consolidation tool, enabling policy-based distribution of workloads between in-house HPC clusters and cloud-based resources.
		The addition of Platform will augment IBM's existing efforts  to bridge HPC-based applications to the cloud. Big Blue's HPC Management Suite  for Cloud enables provisioning of different operating system images on bare  metal hardware and virtual machines, provides access to the HPC infrastructure  via a Web-based management interface and allows for the sharing of HPC  resources.
 
Posted by Jeffrey Schwartz on October 11, 2011

    		Hosting provider Savvis this week said it will offer  Microsoft's SQL Server and Oracle's Enterprise  11g RAC databases in the cloud. 
		Savvis said its new Symphony Database lets customers  provision the databases without having to license the software or acquire hardware,  while providing a scale-up and scale-down architecture.
		"Unlike traditional database offerings, Symphony  Database does not require hardware provisioning and software licensing, freeing  enterprises from long-term contracts and expenses," said Brent Juelich, Savvis  senior director of managed services, in a statement.
The database offering is the latest in a series of new services added by Savvis, which earlier this year was acquired by CenturyLink for $2.5 billion. The company also recently launched its Virtual Private Data Center Premier offering, aimed at providing a higher level of performance, security and support for mission-critical applications.
		Savvis is in the midst of expanding its datacenters in North America. The company added new capacity in Atlanta and Seattle and is  set to expand its facilities in Boston, Piscataway, N.J. and Toronto in the coming  weeks. 
 
	
Posted by Jeffrey Schwartz on October 06, 2011

		Looking to convince large enterprises to use its broader  suite of infrastructure and platform cloud services, Google has launched its  Cloud Transformation Program.
		To date, much of the company's enterprise cloud emphasis has  focused on Google Apps, its suite of e-mail, calendaring, collaboration and  productivity tools. Now the company is looking to extend its other cloud  offerings, notably its Google App Engine Platform as a Service (PaaS), to large  enterprises.
		The company has tapped seven partners to help large organizations  use its cloud services, including App Engine, Google Storage for Developers,  Google Apps Script and the Google Prediction API.
		The partners include CSC, Cloud Sherpas, Cognizant, Opera  Solutions, Razorfish, SADA Systems and TempusNova. Google said it intends to  bring onboard additional cloud implementation partners.
Google wants enterprises to use its cloud services to build Web sites and mobile, social media, business process and customer-facing applications using App Engine and Apps Script, said Rahul Sood, Google's global head of enterprise partnerships, in a blog post.
		With the Prediction API, Google sees customers building apps  that detect fraud and analyze customer churn, for example. And with Google  Storage for Developers, Google is pitching an array of services such as backup  and data sharing.
 
Posted by Jeffrey Schwartz on October 06, 2011

		Adobe Systems has been slow to move its traditional desktop  software business to the cloud, but the company will take a key step forward to  change that when it lets users of its Creative Suite of apps share and  synchronize content through a new cloud service it plans to launch next month.
		The company announced Creative Cloud at its annual AdobeMAX  2011 conference in Los Angeles  this week. Initially, the service will offer 20 GB of storage capacity to users  of Adobe Touch Apps, also launched this week, and the flagship Adobe Creative  Suite, enabling collaboration and sharing of the content created with the  software.
Adobe Creative Cloud will be the hub for Adobe Touch Apps, designed to allow creative professionals to deliver content that runs on tablet devices. Content developed with Adobe Touch Apps can be shared and viewed across devices and transferred to Adobe Creative Suite CS5.5, the company said.
Early next year, the service will offer access to Adobe's flagship Creative Suite tools, which include Photoshop, InDesign, Illustrator, Dreamweaver, Premiere Pro, Edge and Muse.
		"The move to  the Creative Cloud is a major component in the transformation of Adobe,"  said Kevin Lynch,  Adobe's chief technology officer, in a statement
		Pricing for the service will be announced next month. 
 
Posted by Jeffrey Schwartz on October 06, 2011

		Startup Piston Computing came out of stealth mode this week,  introducing a hardened operating system based on the open source OpenStack  project for private enterprise clouds.
Piston is led by CEO and co-founder Joshua McKenty, who was technical lead and cloud architect of NASA's Nebula Cloud Computing Platform. NASA and Rackspace co-founded the OpenStack Project. Just last month, former NASA CTO Chris Kemp launched Nebula, which offers a turnkey appliance based on the OpenStack platform. McKenty left NASA last summer to launch Piston with the goal of bringing OpenStack-based private clouds like Nebula to enterprises.
		McKenty maintains that much of the attention on OpenStack  has been on the potential for service providers to build clouds based on the  open source platform, but there has been little emphasis on opportunities for  private clouds.
		"A lot of the early contributions were around service  provider requirements and there seemed to be more and more focus on that side  of the story," McKenty said. "We had enterprise customers showing up  at every [OpenStack] Design Summit saying, 'Hey what about our needs? We need  things to deal with regulatory compliance and security and we know NASA worked  on these -- why aren't they in the code base?' We really set out to rectify that.  In a lot of ways I'm trying to finish what I started [at NASA]."
Piston is launching pentOS, which stands for Piston Enterprise Operating System. The three key attributes of pentOS that Piston is emphasizing are built-in security, interoperability and ease of deployment.
		McKenty said pentOS is based on what the company calls a "null-tier"  architecture that integrates compute, storage and networking on every node,  providing a massively scalable platform.
		Thanks to a hardened custom-built Linux distribution, pentOS  is secure, McKenty said. Enabling IT to securely deploy pentOS is a feature  called Cloud Key, which allows for the automated distribution of the software  onto servers and switches via a USB stick. Admins can configure the OS on a  laptop and then install it onto the hardware. This provides a critical  component of security, McKenty explained, because it minimizes the number of  administrators who need credentials for the physical hardware.
McKenty said 50 percent of all attacks come from insiders, and the fewer people who need credentials, the more secure the environment will be. "This is the largest single concern for enterprise IT security," he said. "So the fewer users that have administrative rights on your physical hardware, the better, in my opinion."
		Piston claims pentOS includes the first implementation of  the Cloud Audit standard, which provides a common interface and namespace, or  repository, for cloud providers to automate audit, assertion, assessment and assurance  of their environments. McKenty, who is on the Cloud Audit working group, said  implementing the standard is important to enterprises who rely on  certifications such as HIPAA, PCI, NIST 800-53 and other compliance frameworks. 
The pentOS software can be installed on any server hardware and, initially, on switches supplied by Arista Networks, with support for Hewlett-Packard and Dell Force10 switches coming shortly and others to follow, McKenty said.
		Founded earlier this year, Piston has $4.5 million in Series  A funding from Hummer Winblad and True Ventures. 
		Piston will issue a developer preview of pentOS next week at  the OpenStack Design Summit with general availability scheduled for Nov. 29. The  company is not yet revealing pricing but it will be based on per-server  licensing and a subscription service for security updates.
 
Posted by Jeffrey Schwartz on September 28, 2011

		Looking to extend the reach of its customer support and  service platform, Salesforce.com last week said it has acquired Assistly for $50  million. 
The company said it will integrate Assistly, a provider of cloud-based online customer service help-desk apps for small and medium businesses, into the Salesforce.com Service Cloud offering. Assistly targets SMBs and startups with its $49 per month, per user subscription fee.
		Founded in 2009, Assistly pulls customer interactions from a  variety of channels and social media including Facebook, Twitter, e-mail and  phone into a common user interface, enabling customer service agents to engage  in real time with customers. 
The move is in line with Salesforce.com's push to bring social networking to enterprises using its platforms. At last month's annual Dreamforce conference in San Francisco, CEO Marc Benioff focused on what he calls the emergence of "the social enterprise."
		"Service Cloud will now enable even the smallest  companies to become a social enterprise," said Alex Dayon, Salesforce.com's  executive VP of applications, in a statement. 
Of its 104,000 customers, Salesforce.com said 17,000 use Service Cloud. The company said the acquisition of Assistly will help it extend its reach to SMBs by lowering the barrier to entry and providing the option of migrating to other Service Cloud services over time.
 
Posted by Jeffrey Schwartz on September 27, 2011

		Cloud infrastructure software vendor Nimbula has released an  upgrade of its Infrastructure as a Service (IaaS) platform.
Nimbula Director 1.5 now supports a geographically distributed cloud, which can be managed from a single management interface. It also features policy-based automation for compute and storage, aimed at simplifying resource requests; a persistent block store, which provides self-service interfaces to cloud storage; and a customizable installer for operating systems, drivers and management software.
		Mountain View, Calif.-based  Nimbula also announced it has expanded its partner ecosystem with the addition  of Atalanta Systems, Electric Cloud and Standing Cloud to its roster. 
		"Nimbula is continuing its commitment in building a  rich, diverse partner base to serve the new use cases and solutions enabled by  cloud computing," said Reza Malekzadeh, Nimbula's VP of marketing, in a  statement. "We are able to strengthen our complete end-to-end solution  offerings by supporting PaaS [Platform as a Service] and DevOps use cases for agile application  development and by supporting additional storage solutions."
		Atalanta Systems will provide integration of Opscode Chef  with Nimbula Director, enabling customers to use Chef to orchestrate and manage  workloads running on the Nimbula platform. Electric Cloud is enabling its  Electric Commander to automate build-test-deploy processes with the Nimbula  cloud platform. Standing Cloud and Nimbula created a solution that allows users  and developers to build, deploy and manage applications that run on Standing  Cloud's PaaS.
 
Posted by Jeffrey Schwartz on September 27, 2011

		The OpenStack consortium on Thursday released the fourth version  of its open source cloud operating system.
Dubbed Diablo, the new release brings improvements in compute scalability, storage and networking. OpenStack is also introducing identity management features and a new Web-based management interface, both slated for the next release of the platform.
OpenStack, founded by NASA and Rackspace, is an open source project to deliver a cloud operating system for enterprises building private clouds and for service providers. More than 100 companies are contributing to the project.
		Diablo takes an important step forward in making OpenStack  more suited for enterprise cloud deployments, said Jonathan Bryce, chairman of  the OpenStack Project Policy Board and a founder of the Rackspace Cloud.
		"It's a pretty significant release for us, it's a  release that opens the door for many more organizations to come in and use  OpenStack," Bryce said. "With the last release [Cactus], the  functionality of the cloud was all there, but if you were not a sophisticated  IT department or development shop, it could be difficult to jump right into it.  I think there are quite a few features in this one that make it a lot easier  and a lot more deployable for many more organizations."
		With 70 new features in Diablo, Bryce emphasized three:
- Compute (Nova): includes a distributed scheduler for the global deployment of virtual machines, a high-availability networking mode and a new authentication system called OpenStack Identity management.
- Object Storage (Swift): a new feature called Container Syncing allows an administrator to pick individual containers or folders of data and mark them for syncing to an entirely separate OpenStack Object Storage cluster. "It will keep track of the changes and make sure the containers stay in sync across separate clusters, not just within a cluster," Bryce said. "It's another added level of scale across multiple locations and data resiliency." (A minimal configuration sketch follows this list.)
- Image Service (Glance): includes new filtering and searching functions through the API.
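On the Swift feature, here is a minimal sketch of how a container might be marked for cross-cluster syncing using python-swiftclient. The cluster URLs, account and sync key are hypothetical, and this illustrates the general container-sync mechanism rather than the exact Diablo workflow.

```python
# Hypothetical sketch: mark a Swift container for syncing to a container on
# a separate cluster. Swift's container-sync process then replicates changes
# between the two in the background.
import swiftclient

conn = swiftclient.Connection(
    authurl="https://cluster-a.example.com/auth/v1.0",
    user="account:admin",
    key="password",
)

conn.post_container("backups", headers={
    # Destination container on the second cluster...
    "X-Container-Sync-To": "https://cluster-b.example.com/v1/AUTH_acct/backups",
    # ...and a shared secret both containers must agree on.
    "X-Container-Sync-Key": "shared-secret",
})
```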
Also introduced in conjunction with Diablo are two  incubation projects that will be core in the next release of the OpenStack  platform, called Essex, which is slated for  release in the next six months. 
		The first is Dashboard, which will let admins provision and  manage cloud resources via a self-service portal. The second is Keystone, aimed  at providing common authentication across all OpenStack projects. Keystone is  not a new directory system, Bryce explained, but rather one that can interface  with existing authentication platforms such as Microsoft's Active Directory and  repositories based on LDAP. 
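As a rough illustration of what that common authentication looks like, a token request against the Keystone v2.0 API might resemble the following sketch; the endpoints, credentials and tenant ID are hypothetical.

```python
# Hypothetical sketch of a Keystone v2.0 token request. The returned token
# is passed to other OpenStack services in the X-Auth-Token header.
import requests

resp = requests.post(
    "http://keystone.example.com:5000/v2.0/tokens",
    json={"auth": {"passwordCredentials": {
        "username": "demo",
        "password": "secret",
    }}},
)
resp.raise_for_status()
token = resp.json()["access"]["token"]["id"]

# Example: use the token to list servers from a Nova endpoint.
servers = requests.get(
    "http://nova.example.com:8774/v2/tenant-id/servers",
    headers={"X-Auth-Token": token},
)
```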
		Coinciding with next year's release of Essex,  OpenStack will release another incubation project called Quantum, which  provides an API that dynamically requests and configures virtual networks and  offers advanced networking and virtualization capabilities, Bryce said.
		Plans for Essex will be discussed at the OpenStack Design  Summit and Conference, to be held in Boston  during the first week of October.
 
Posted by Jeffrey Schwartz on September 22, 2011

    		Wondering what caused the outage that brought down some of  Microsoft's cloud services two weeks ago? While Microsoft attributed it to a  DNS error, that was all the company was saying at the time. 
The outage occurred on the evening of Sept. 8 and took down Windows Live services such as SkyDrive, MSN and Hotmail for three hours. Also affected was Microsoft's Office 365 service.
Arthur de Haan, VP of Windows Live test and service engineering at Microsoft, elaborated on the incident in a blog post on Tuesday night, explaining that a corrupt file in the company's DNS service was to blame.
		Microsoft was in the process of updating a tool that helps  balance network traffic and the update went awry, he noted. Consequently, the  configurations were corrupted, resulting in the outage, he said. 
		"The file corruption was a result of two rare  conditions occurring at the same time," de Haan said.  "The first  condition is related to how the load balancing devices in the DNS service respond  to a malformed input string (i.e., the software was unable to parse an  incorrectly constructed line in the configuration file). The second condition  was related to how the configuration is synchronized across the DNS service to  ensure all client requests return the same response regardless of the  connection location of the client. Each of these conditions was tracked  to the networking device firmware used in the Microsoft DNS service."
He said Microsoft intends to further harden the DNS service by providing greater redundancy and failover capability.
 
Posted by Jeffrey Schwartz on September 21, 2011