Gartner has endorsed Google's Gmail service as a viable alternative to Microsoft's Exchange Online for enterprises with more than 5,000 employees.
The IT market researcher last week released a report called "Google Gmail Emerges as a Significant Threat to Microsoft in the Enterprise." The conclusions are noteworthy given Gartner's clout with CIOs and the cutthroat rivalry between Microsoft and Google to win large enterprise cloud e-mail contracts.
"The road to its enterprise enlightenment has been long and bumpy, but Gmail should now be considered a mainstream cloud e-mail supplier," said Gartner research VP Matthew Cain in a statement.
Gmail accounts for only about 1 percent of all enterprise e-mail, but it has captured close to half the market for enterprise cloud e-mail, Cain noted. And while cloud e-mail represents just 3 to 4 percent of the overall enterprise e-mail installed base, Cain said Gartner forecasts it will account for 20 percent of the e-mail market by 2016 and 55 percent by 2020.
Microsoft and Google are the only two vendors gaining meaningful share, according to Gartner, which noted that Novell GroupWise and IBM Lotus Notes/Domino are struggling. VMware has only recently started focusing its Zimbra cloud e-mail offering on the enterprise, Gartner added.
Posted by Jeffrey Schwartz on September 20, 2011 at 11:58 AM
Joyent has launched a new public cloud service that it says is faster and cheaper than Amazon Web Services' popular Elastic Cloud Compute (EC2), as well as other Infrastructure as a Service (IaaS) offerings.
The San Francisco-based company's upgraded Joyent Cloud service is built on a new platform, called SmartDataCenter 6, which Joyent claimed offers improved performance, management and security.
Customers can now run Windows or Linux instances atop Joyent's SmartOS SmartMachines, which the company said boot in seconds and offer distributed storage. The new platform provides instant CPU bursting and intelligent data caching. The bursting capability can scale up to 400 percent, reducing the number of CPU instances customers have to add, the company said.
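To see what that bursting claim implies for provisioning, here is a back-of-envelope sketch. The numbers are illustrative, not Joyent's, and the model is deliberately simple: a single instance that can temporarily run at up to 400 percent of its base capacity covers a spike that would otherwise require several fixed-size instances.

```python
import math

def instances_needed(peak_load, base_capacity, burst_factor):
    """Fixed-size instances needed to cover a peak load, where
    burst_factor is the multiple of base capacity a single
    instance can temporarily reach (1.0 = no bursting)."""
    return math.ceil(peak_load / (base_capacity * burst_factor))

# Illustrative numbers only: a workload that idles around 1 CPU
# but spikes to 4 CPUs' worth of demand.
without_bursting = instances_needed(4.0, base_capacity=1.0, burst_factor=1.0)  # 4
with_bursting = instances_needed(4.0, base_capacity=1.0, burst_factor=4.0)     # 1
```

In this toy model, 400 percent bursting absorbs a 4x spike on a single instance instead of four.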
"Joyent Cloud provides customers with peace of mind that their infrastructure will just work, just scale, and be fully visible and transparent at all times," said Joyent Cloud GM Steve Tuck in a statement. "We eliminate the compromises inherent in running on other clouds while delivering exceptional performance and speed at a competitive price."
Along with the new service, new APIs allow customers to configure or change their SmartMachines themselves or via third-party cloud management systems. And the company has added two analytics tools to the platform, Joyent Cloud Analytics and New Relic, that provide real-time views of the stack, from the CPU through the application layer, helping pinpoint issues that might be causing latency.
Posted by Jeffrey Schwartz on September 20, 2011 at 11:59 AM
Amazon Web Services (AWS) has picked up another security accreditation from the federal government, which should allow it to further penetrate the public sector with its popular infrastructure services, as well as give it more credibility with businesses.
The U.S. General Services Administration (GSA) last week gave Amazon the green light with its Federal Information Security Management Act (FISMA) Moderate level authorization and accreditation.
"The door is now open for a much wider range of U.S. government agencies to use AWS as their cloud provider," said AWS evangelist Jeff Barr in a blog post. "Based on detailed security baselines established by the National Institute of Standards and Technology (NIST), FISMA Moderate certification and accreditation required us to address an extensive set of security configuration and controls."
The accreditation covers Amazon's Elastic Compute Cloud (EC2), Simple Storage Service (S3) and Virtual Private Cloud (VPC) services and their associated infrastructure, the company said.
Amazon's security and compliance framework already covered FISMA Low, PCI DSS Level 1, FIPS 140-2, ISO 27001 and SAS 70 Type II, and is set up to comply with HIPAA regulations.
Posted by Jeffrey Schwartz on September 19, 2011 at 11:58 AM
The world got to see the next version of Windows this week, and while its new Metro user interface promises to change how users interact with their PCs, it also will leverage the cloud in new ways.
Not that that should be a surprise. With Microsoft's marching orders that it is "all in" the cloud, one would expect Windows to lead the charge. At its Build conference in Anaheim, Calif. this week, Microsoft gave the first in-depth public demos of Windows 8 and Windows Server 8, the code names for the next Windows releases.
Also at Build, Microsoft announced some important updates to its Windows Azure cloud service, including the Windows Azure Toolkit for Windows 8, enabling developers to build cloud-based services for Metro-based apps. (For more on the Windows Azure announcements, see my colleague Kurt Mackie's report).
The new touch-based Metro user interface in Windows 8 replaces traditional Windows icons with tiles, the most significant interface overhaul since Windows' original release. Windows 8 will usher in a new crop of slate-based devices designed to compete with Apple's iPad. But Windows 8 will also work on traditional desktop PCs and laptops.
Windows 8 is intertwined with Windows Live, Microsoft's set of cloud services for individuals. Users will be able to log on to their Windows 8 devices using their Windows Live IDs. Settings ranging from browser history and themes to e-mail accounts will be saved in the cloud and can be shared among a user's multiple PCs and phones.
"Windows 8 gets much better when you connect it up to an incredibly rich set of cloud-based services with Windows Live and use some of the new Metro-style apps that we've written to connect up to Windows Live," said Steven Sinofsky, president of Microsoft's Windows and Windows Live division, during the Build keynote Tuesday.
Chris Jones, Microsoft's corporate VP for Windows Live, demonstrated a connected address book, a Metro-style app that lets users combine their contacts from various e-mail accounts and social networks such as Facebook and LinkedIn. Jones also demonstrated a photo app that uses Windows Live to share photos from other services such as Facebook and Flickr.
Windows 8 also will leverage Microsoft's SkyDrive, a cloud-based storage service that provides every Windows Live user with 25 GB of storage capacity. Users can access files in SkyDrive just as they do in the local file system, Jones explained.
Using the Live APIs for SkyDrive, developers can build their own cloud-connected Metro apps, Sinofsky said.
What's your take on Windows 8? Comment below or drop me a line at email@example.com.
Posted by Jeffrey Schwartz on September 15, 2011 at 11:59 AM
Rackspace plans to deploy the OpenStack open source cloud platform across its entire infrastructure.
OpenStack was originally developed by Rackspace and NASA, which built the NASA Nebula Cloud Computing Platform. The year-old OpenStack Project now has more than 90 members, with a community of developers collaborating on the open source cloud operating system. The OpenStack code is freely available under the Apache 2.0 license.
"Rackspace is very committed to moving onto the OpenStack technology," said Rackspace VP of product Mark Interrante. The company's object storage system, called Cloud Files, is already based on the OpenStack platform.
The next phase is to move Cloud Servers, the Rackspace compute offering, onto OpenStack, Interrante said. Later this year, Rackspace will transition the Cloud Servers infrastructure to OpenStack Compute, code-named "Nova." OpenStack describes Nova as a cloud computing fabric controller, the heart of the Infrastructure as a Service platform.
"It's our plan to move our Cloud Servers infrastructure to OpenStack," Interrante said. "We will have customers running in 2011. It will be some customers at least. We don't have a fully locked-down set of dates for all the transitions yet but we are very happy with the progress we are making. We are definitely doing a lot of performance testing and QA right now."
The impact on customers should be negligible, he said, suggesting it would be the equivalent of typical maintenance activity. The Linux and Windows servers that customers are running shouldn't change, only the way they are controlled.
Posted by Jeffrey Schwartz on September 14, 2011 at 11:59 AM
New backup and recovery software from CA Technologies is designed to support a number of cloud services.
The company's ARCserve r16, released last week, provides a common cloud connection for all forms of data protection, including core file backup, disk imaging, replication and high availability.
It works with such public cloud services as Amazon Web Services Elastic Compute Cloud (EC2) and Microsoft's Windows Azure, as well as cloud services based on software provided by Eucalyptus. It also works with remote monitoring and management platforms from the likes of N-able and LabTech.
The new release also lets customers use Amazon's EC2 as their disaster recovery infrastructure, CA said. The software is designed to let customers back up data both locally and in the cloud.
"With the release of CA ARCserve r16, we are delivering the capabilities and intelligence necessary to protect large and dynamic virtualized private and public cloud environments," said Michael Crest, general manager for the data management customer solutions unit at CA, in a blog post.
The entire ARCserve suite is priced between $10,982 and $27,600, depending on capacity, for a perpetual license. For those opting for a monthly subscription, rates range from $566 to $976.
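Those two price models invite a rough break-even comparison. The arithmetic below is purely illustrative, using the low and high ends of the listed prices and ignoring maintenance fees and capacity-tier differences:

```python
# Back-of-envelope break-even between ARCserve r16's monthly
# subscription and perpetual license, using the listed prices.
# Illustrative only: maintenance fees and capacity tiers ignored.
perpetual_low, perpetual_high = 10_982, 27_600
monthly_low, monthly_high = 566, 976

breakeven_low = perpetual_low / monthly_low      # ~19.4 months
breakeven_high = perpetual_high / monthly_high   # ~28.3 months
```

On these simplified numbers, the subscription overtakes the perpetual price somewhere between roughly 19 and 28 months of use.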
Posted by Jeffrey Schwartz on September 14, 2011 at 11:58 AM
While chaos seems to be reigning at Hewlett-Packard Co., one area where it doesn't appear to be making any strategic shifts is in its push into cloud computing. On Wednesday, the company revealed plans for its public cloud services, making them available to select beta testers. And at last week's VMworld conference in Las Vegas, HP launched a number of products aimed at helping both enterprises and service providers build cloud-based infrastructures.
Perhaps most noteworthy was HP's new VirtualSystem platform consisting of converged servers, storage and networking, controlled by the company's Insight management software.
According to HP, VirtualSystem provides the basis of a consolidated architecture by accelerating virtual machine mobility by 40 percent, doubling throughput and reducing network downtime with the new HP FlexFabric virtualized network platform. With HP LeftHand and 3PAR storage, HP claims VirtualSystem will cut capacity requirements by 50 percent and double virtual machine density.
The VirtualSystem is available in three configurations: VS1, VS2 and VS3. The first, VS1, is targeted at small and medium businesses, consisting of rack-mounted ProLiant servers, LeftHand storage and HP's Virtual Connect network platform to tie it together. The Insight management software is tightly integrated with VMware's new vSphere 5 and ESX software.
VS2 is similar, but instead of ProLiant rack-mounted servers it has HP's BladeSystem blade servers, also tied to the company's LeftHand storage systems. The high-end offering, VS3, can extend across multiple racks and uses HP's 3PAR storage. It's targeted at large enterprises looking to do major VMware consolidation projects, as well as at service providers.
The VS3 can host up to 6,000 virtual machines, said Tom Joyce, VP of marketing strategy and operations for HP storage. "That's a massive VM consolidation platform," Joyce said, noting the offerings also come with HP's TippingPoint intrusion detection software.
The VS3 hardware is identical to the hardware that runs HP's CloudSystem platform. HP launched CloudSystem, a turnkey appliance that comes with the company's Cloud Service Automation software, in January. "If a customer says, 'Look, I'm solving my VMware problem today but I'm looking to build my private cloud in a year from now,' if they want to build their cloud on that part of that same hardware, all they have to do is bring in the software components of CloudSystem," Joyce said.
Pricing for the VirtualSystem platform starts at $167,300.
Posted by Jeffrey Schwartz on September 08, 2011 at 11:59 AM
Dell this week launched its first public and hybrid cloud offerings, following on its announcement earlier this year that it would invest $1 billion to deliver cloud computing infrastructure and services. The company also introduced cloud-based application services for small and medium businesses.
On Monday at the annual VMworld conference in Las Vegas, Dell and VMware jointly released Dell Cloud, which is built upon vCloud Datacenter Services. The two companies said they will jointly offer the Infrastructure as a Service (IaaS) platform to enterprises, hosting and outsourcing providers, systems integrators, and service providers.
The service will include automation, security and availability, and offer on-demand capacity, according to Dell. That will allow organizations to scale infrastructures as workloads require. Dell Cloud will be made available in three options:
- Public: Hosted in Dell datacenters, the service will provide on-demand access to computing, storage and network capacity.
- Private: Dell will build private clouds on a customer's premises or run them in a Dell datacenter.
- Hybrid: Using VMware's vCloud Connector, this configuration offers management of on- and off-premises private clouds and Dell's public cloud offering.
Dell Cloud is in beta now and will be generally available in the United States next quarter. It will be available in Europe and Asia next year.
Separately, the company Tuesday announced Dell Cloud Business Applications Services, a bundle of pre-integrated SaaS applications that can be linked to existing software via its Dell Boomi platform. Dell last year acquired Boomi, a SaaS provider that connects cloud and premises-based applications.
Dell also said it is expanding its partnership with Salesforce.com, under which it will offer integration of the Salesforce Sales Cloud customer relationship management (CRM) service with Intuit's QuickBooks and Microsoft's Dynamics GP software packages.
Salesforce.com and Dell are now jointly offering a CRM service for SMBs. Dell intends to offer additional SaaS-based apps such as marketing and accounting next year. Dell also said it plans to offer cross-application dashboards and reporting capabilities delivered as a service. Called Cloud Integrated Analytics, the service will be released in the first half of next year.
Available now, pricing for the Dell Boomi-Salesforce.com integration starts at $565 per month. Integration service packages start at $5,000.
Posted by Jeffrey Schwartz on September 01, 2011 at 11:58 AM
Verizon Communications has acquired CloudSwitch, whose software makes it possible to move applications and workloads between public and internal datacenters. Terms of the deal were not disclosed.
CloudSwitch's gateway appliance includes software that allows administrators to move workloads from enterprise datacenters to public clouds without changing the application or infrastructure layer. Applications retain their policies when moved between cloud environments and internal datacenters.
CloudSwitch will become part of Verizon's Terremark division. Verizon acquired enterprise cloud provider Terremark Worldwide earlier this year in a deal worth $1.4 billion.
"Our founding vision has always been to create a seamless and secure federation of cloud environments across enterprise data centers and global cloud services," said CloudSwitch CEO John McEleney in a statement. "Together, we will be able to provide enterprises with an unmatched level of flexibility, scalability and control in the cloud with point-and-click simplicity. This will go a long way in helping achieve widespread adoption of the cloud especially when managing complex workloads."
Verizon is betting that the addition of CloudSwitch will ease customer resistance to moving enterprise workloads to the cloud.
Posted by Jeffrey Schwartz on August 26, 2011 at 11:58 AM
VMware's Cloud Foundry project released a beta version of its open source Platform as a Service (PaaS) for developer laptops and desktops.
Micro Cloud Foundry is software for developers who want to build and test applications locally without having to connect to the cloud. Developers can download a virtual machine image of Micro Cloud Foundry, which is compatible with VMware Fusion for Mac OS X, VMware Workstation and VMware Player for Linux and Windows.
"Today we are taking the next step toward providing developers what they need -- a simple PaaS solution you can quickly download and install on your machine," said VMware CTO Steve Herrod in a blog post.
The release of the client-side software, which was expected, is aimed at making it easier for developers to build or repurpose their apps for Cloud Foundry by providing the environment on their local machines.
Herrod said Micro Cloud Foundry supports the Spring (Java), Ruby on Rails, Sinatra and Node.js frameworks, in addition to the MySQL, MongoDB and Redis databases. In addition, it works with Cloud Foundry's scriptable command line interface, called vmc, and with the Eclipse-based SpringSource Tool Suite (STS), Herrod noted. "This allows developers to retarget deployments between on-premise and public environments without code modifications," he said.
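The retargeting Herrod describes can be sketched as a short vmc session. The Micro Cloud domain below is a placeholder (each Micro Cloud VM gets its own name at setup), and the commands reflect the vmc workflow of that era, so treat this as illustrative rather than a verbatim transcript:

```shell
# Target a local Micro Cloud Foundry VM (domain is a placeholder
# chosen when the VM is configured)
vmc target https://api.mycloud.cloudfoundry.me
vmc login
vmc push myapp            # deploy the app in the current directory

# Retarget the same app at the public Cloud Foundry service;
# no code changes required
vmc target https://api.cloudfoundry.com
vmc push myapp
```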
The beta is available for download.
Posted by Jeffrey Schwartz on August 26, 2011 at 11:58 AM
Eucalyptus Systems disclosed plans to roll out the third version of its open source Infrastructure as a Service (IaaS) private cloud software.
The new release, dubbed Eucalyptus 3, adds high availability (HA) to the cloud platform, meaning customers will be assured uptime in the event of a hardware, software or network failure. If the system goes down for any reason, whether a failed disk drive, memory corruption or even a power outage, the software fails over to a "hot spare" service running on different hardware.
"Implementing HA was the obvious next major evolution for Eucalyptus," said Eucalyptus CEO Marten Mickos in a blog post. "After all, a key reason for organizations to run a private (i.e. on-premise) cloud is typically that they want more control than they have over a public cloud. Public clouds provide amazing uptime. But at the end of the day, this uptime is nothing you can influence. In a private cloud, however, it's your hardware and you can set the parameters. You can determine the level of assurance. With Eucalyptus 3 supporting HA in the cloud platform itself, your cloud rises to a new level of availability."
Mickos acknowledged that the company initially intended to offer HA only to customers who requested it, but said it has since determined that HA is a feature most customers will need.
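The hot-spare behavior described above can be sketched in a few lines. This is a minimal conceptual model, not Eucalyptus' actual implementation; the class and service names are invented for illustration:

```python
class FailoverPair:
    """Minimal sketch of an active/hot-spare HA pair: a health
    check promotes the spare when the primary stops responding.
    Names and structure are illustrative, not Eucalyptus code."""

    def __init__(self, primary, spare):
        self.primary = primary
        self.spare = spare
        self.active = primary  # traffic goes to the active service

    def health_check(self, is_up):
        """Call periodically; is_up(service) -> bool."""
        if self.active is self.primary and not is_up(self.active):
            self.active = self.spare  # promote the hot spare
        return self.active

pair = FailoverPair("cc-on-host-a", "cc-on-host-b")
assert pair.health_check(lambda s: True) == "cc-on-host-a"
# Simulate a hardware failure on the primary:
assert pair.health_check(lambda s: s != "cc-on-host-a") == "cc-on-host-b"
```

The key point Mickos makes is that the spare runs on different hardware, so a disk, memory or power failure on one machine leaves the pair serviceable.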
Eucalyptus claims its software has been used to deploy 25,000 private clouds, and says 21 of the Fortune 100 have deployed Eucalyptus-based clouds. Among its customers are Puma, the USDA, Plinga, Aerospace Corp., InterContinental Hotels Group, Wetpaint and USASpending.gov.
The appeal of Eucalyptus software is that it lets customers build internal clouds based on the Amazon Web Services API, allowing customers to move applications and data between AWS and their private clouds. Customers can also run Amazon Machine Images (AMIs) on Eucalyptus and AWS-compatible clouds and can use management tools to administer Eucalyptus clouds.
That could be attractive to organizations that want to create hybrid clouds, where some of their systems run in the public cloud and others remain in the datacenter. It also allows for portability among public and private clouds.
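That API compatibility is concrete: an EC2-style client signs requests the same way whichever cloud it talks to, so switching between AWS and a Eucalyptus cloud is largely a matter of changing the endpoint. The sketch below implements the AWS Signature Version 2 scheme used by the EC2 Query API of that era, with stdlib only; the Eucalyptus host name is a made-up placeholder:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def sign_ec2_query(host, path, params, secret_key):
    """AWS Signature Version 2 for an EC2-style Query API GET.
    Eucalyptus speaks the same API, so only the endpoint differs
    between AWS and a private cloud."""
    canonical_query = "&".join(
        f"{quote(k, safe='')}={quote(str(v), safe='')}"
        for k, v in sorted(params.items())
    )
    string_to_sign = "\n".join(["GET", host, path, canonical_query])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

params = {"Action": "DescribeInstances", "SignatureVersion": "2",
          "SignatureMethod": "HmacSHA256", "Version": "2011-07-15"}

# Hypothetical endpoints; the client code is identical for both.
sig_aws = sign_ec2_query("ec2.amazonaws.com", "/", params, "secret")
sig_euca = sign_ec2_query("cloud.example.internal:8773",
                          "/services/Eucalyptus", params, "secret")
```

Only the host and path change between the two calls, which is why tools and AMIs built for AWS can be pointed at a Eucalyptus private cloud.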
Also new in Eucalyptus 3, users will be able to boot from Amazon Elastic Block Storage (EBS). Eucalyptus 3 will also support the AWS Identity and Access Management (IAM) API, as well as the ability to map identities from LDAP and Microsoft Active Directory servers to Eucalyptus accounts, groups and users.
Eucalyptus 3 will be available next quarter.
Posted by Jeffrey Schwartz on August 25, 2011 at 11:58 AM
Amazon Web Services on Tuesday launched a new caching service that lets customers deploy an in-memory cache for applications running in the cloud.
The company says its new ElastiCache service lets customers add an in-memory cache to their application architectures, boosting application performance by serving reads from the in-memory cache rather than from slower disk-based databases.
ElastiCache is suited to read-heavy workloads such as social networking, gaming and media sharing, Amazon said.
"Caching is a core part of so many Web applications today, but running your own caching infrastructure is time-consuming and rarely adds differentiated value for your business," said Raju Gulabani, AWS' vice president of database services, in a statement. "Until today, businesses have had little choice but to shoulder this responsibility themselves -- and indeed, many AWS customers have built and managed caching solutions on top of AWS for some time. Amazon ElastiCache answers one of the most highly requested functionalities of AWS customers by providing a managed, flexible and resilient caching service in the cloud."
ElastiCache supports Memcached, an open source distributed memory object caching system. Customers with existing apps, tools and code that use Memcached can migrate to ElastiCache with minimal effort, Amazon said.
"If you are already running Memcached on some Amazon EC2 instances, you can simply create a new cluster and point your existing code at the nodes in the cluster," said AWS evangelist Jeff Barr in a blog post. "If you are not using any caching, you'll need to spend some time examining your application architecture in order to figure out how to get started. Memcached client libraries exist for just about every popular programming language."
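The pattern Barr alludes to is the classic cache-aside read: try the cache first, and on a miss fall back to the database and populate the cache. The sketch below is self-contained for illustration, with a dict-backed stand-in where a real Memcached client pointed at ElastiCache node endpoints would go; the function and key names are invented:

```python
import time

class FakeCache:
    """Stand-in for a Memcached client; a real client pointed at
    ElastiCache nodes exposes the same get/set shape."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def set(self, key, value, ttl=300):
        self._data[key] = value  # TTL ignored in this stand-in

def slow_db_lookup(user_id):
    """Pretend disk-based database read."""
    time.sleep(0.01)
    return {"id": user_id, "name": f"user-{user_id}"}

cache = FakeCache()

def get_user(user_id):
    """Cache-aside read: check the cache, and on a miss hit the
    database and populate the cache for subsequent reads."""
    key = f"user:{user_id}"
    value = cache.get(key)
    if value is None:
        value = slow_db_lookup(user_id)
        cache.set(key, value)
    return value
```

After the first `get_user(7)` call, repeat reads are served from memory without touching the database, which is the performance win ElastiCache is selling.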
Via the AWS Management Console, customers can launch a Cache Cluster composed of a number of Cache Nodes. Customers can add or remove nodes to scale the amount of memory tied to a cache cluster.
Customers can monitor performance characteristics associated with Cache Nodes via Amazon's CloudWatch.
Amazon said pricing is based on the size of the Cache Nodes used, with an entry price of $0.095 per hour for a small Cache Node (1.3 GB of memory). A large Cache Node (7.1 GB) costs $0.38 per hour and an extra-large Cache Node (14.6 GB) is $0.76 per hour. ElastiCache is initially available in Amazon's Virginia region and will be rolled out to its other regions in the coming months.
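Those hourly rates translate into rough monthly figures. The arithmetic below is illustrative only, assuming a 720-hour month and on-demand rates with no data-transfer or other charges:

```python
HOURS_PER_MONTH = 24 * 30  # rough 720-hour month

# Listed on-demand hourly rates per Cache Node size
hourly = {"small": 0.095, "large": 0.38, "xlarge": 0.76}  # $/hour

monthly = {size: round(rate * HOURS_PER_MONTH, 2)
           for size, rate in hourly.items()}
# e.g. a small node runs roughly $68.40 over a 720-hour month
```

So an always-on small node costs on the order of $68 a month, a large about $274, and an extra-large about $547, before any other AWS charges.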
Posted by Jeffrey Schwartz on August 23, 2011 at 11:58 AM