Rough Start to 2018 for Microsoft Cortana

Given all that Microsoft has invested in creating the illusion that Cortana has a personality, it's not too weird to think she must be a little depressed.

It's certainly been a rough start to 2018 for Microsoft's virtual assistant.

  • Even inside Microsoft, Cortana's been getting some rejections. On Jan. 5, Microsoft discontinued a public preview of an integration between Cortana and Dynamics 365 that the company had previously promoted. The preview had put Dynamics 365 in Cortana's notebook, and Cortana had prompted users with relevant information about sales activities, accounts, opportunities and meetings.

  • Cortana was supposed to be besties with Alexa right now. Microsoft and Amazon had announced back in August that people would be able to use Cortana on Windows 10 PCs to access Alexa and to use Alexa on the Amazon Echo and other Alexa-enabled devices to access Cortana. The two would become like a team of assistants, allowing Alexa users to reach Cortana specialties like booking meetings or accessing work calendars when near an Echo, and allowing Cortana users to reach Alexa specialties like shopping on Amazon or controlling smart home devices from a Windows 10 PC. The integration was supposed to be done by the end of the year. But the companies missed the deadline and have not provided a new target date.

  • Alexa is elbowing its way onto Windows territory. During CES last week, Acer announced that it would be bringing Alexa to some of its Aspire, Spin, Switch and Swift notebooks starting in the United States in the first quarter of 2018, with broader availability coming in the middle of the year. Other OEMs have discussed Alexa integrations, as well.

  • CES buzz in general was heavy on Alexa, with some Google Assistant thrown in. It was the second big Alexa year in a row for CES. Cortana, on the other hand, did not make any kind of splash at the show. Apple Siri was also a non-factor. Microsoft did try to generate some Cortana CES buzz by highlighting some reference designs from Allwinner, Synaptics, TONLY and Qualcomm.

  • Outsiders haven't been bothering to teach Cortana many new skills. As All About Microsoft's Mary Jo Foley pointed out in mid-December, Cortana is seriously lagging behind Alexa in the skills department. Microsoft released the Cortana Skills Kit in May 2017, and take-up has been slow. Alexa had 25,784 skills to start 2018, by one published count; Cortana had just 230 as of mid-December. The enthusiasm level is reminiscent of Microsoft's efforts to get modern apps for Windows 8 and apps for Windows Phone -- a slow, late start.

It's not surprising that Cortana is so far behind at a moment of such excitement about voice assistants.

For one thing, she's on the wrong platform. Cortana launched as a public face of Windows Phone, and a good one too. With a backstory and fan base from the "Halo" video game franchise, the name was an inspired choice with a built-in personality to draw upon. But Windows Phone went nowhere, so that's not a user base. (Maybe if the Surface Phone materializes, it will be worth revisiting.)

Smartphones are a logical place for voice input -- typing and texting on phones is challenging and annoying, making the annoyances of dealing with a voice interface a reasonable tradeoff. And talking and listening to a phone is theoretically safer than attempting to look at one while driving. There are more than a billion Android smartphones out there, making Google Assistant an automatic player in the voice assistant game. (The inability of Siri to break out as a voice platform is probably more of a strategic concern for Apple than Cortana's position is for Microsoft.)

When it comes to voice-enabled speakers like the Amazon Echo, voice isn't just a competitive interface choice -- it's the only option in most cases. While Amazon is starting from a small base of maybe 20 to 30 million Echo devices sold to date, the company has all the momentum and a lot of industry partner enthusiasm.

Cortana's user base for now is PCs, and when it comes to voice input, it's not a great place to be. The keyboard and mouse/trackpad are an awesome combination -- voice has to get very, very good before it can ever displace those very mature inputs for a user seated in front of a laptop or PC. It's for the same reason that Alexa integration with PCs may be less promising than the PC OEMs make it out to be.

Microsoft's virtual assistant ambitions are bigger than the PC base; in fact, they're bigger than Cortana.

The PC user base is only part of Microsoft's market, and it's a shrinking part. As the company redefines itself as a cloud company, one of its real strengths is its deep history with the enterprise development community and its experience at enabling that community.

Microsoft's official statement about discontinuing the Cortana-Dynamics 365 public preview provides a clear example of the strategy in action:

We are working to deliver a robust and scalable digital assistant experience across all of our Dynamics 365 offerings. This includes natural language integration for customers and partners across multiple channels including Cortana. To that end, we are discontinuing the current Cortana integration preview feature that was made available for Dynamics 365 and we are focusing on building a new long term intelligent solution experience, which will include Cortana digital assistant integration.

Getting developers to use Azure services for voice recognition, chatbots, translation, machine learning and artificial intelligence is a strategic play for Microsoft. Expect the company to keep working to develop first-rate user experiences that evolve the gimmicky aspects of Cortana's personality into a better and better virtual assistant interface for unlocking deeper business value from more and more of Microsoft's advanced cloud services.

Bad start to 2018 or not, Microsoft needs to keep a hand in virtual assistant technologies. As long as that's the case, Cortana will probably continue her role as the public face of that broader and deeper effort.

Posted by Scott Bekker on January 16, 2018 at 2:56 PM

SharePoint Online Deployments Surging, According to Survey

The rate of cloud-based Microsoft SharePoint deployments ballooned by triple digits in 2017, based on a recent industry poll.

SharePoint tools suppliers Sharegate, Hyperfish and Nintex this week released "The SharePoint and Office 365 Industry Survey," which included responses from about 450 SharePoint administrators and IT professionals. The three companies also surveyed a random sample of their combined client pools in 2016, providing lots of data points for comparison.

SharePoint Online deployments increased by an impressive 167 percent from 2016 to 2017. While only 21 percent of respondents in 2016 had SharePoint Online deployed, that number soared to 56 percent in 2017. Even though that means that more than half of companies had SharePoint Online deployed, a lot of them were also still running on-premises SharePoint deployments in parallel.
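The growth math here is worth spelling out, since percent increases and percentage-point changes are easy to conflate. A quick sketch of the arithmetic behind the survey figures:

```python
# Percent increase vs. percentage-point change for the survey's
# SharePoint Online deployment figures.
deployed_2016 = 21  # % of respondents with SharePoint Online in 2016
deployed_2017 = 56  # % of respondents with SharePoint Online in 2017

point_change = deployed_2017 - deployed_2016                     # 35 percentage points
pct_increase = (deployed_2017 - deployed_2016) / deployed_2016 * 100

print(f"{point_change} percentage points")   # 35 percentage points
print(f"{pct_increase:.0f}% increase")       # 167% increase
```

The headline "167 percent" is the relative growth of the deployed share, not the change in the share itself, which rose 35 points.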

Yet another data point in the survey shows more and more users trusting their entire SharePoint workload to the cloud. In 2016, one-fifth of users had SharePoint deployed exclusively online. A year later, that number was nearly a third (31 percent). At the same time, hybrid environments (a mix of SharePoint Online and on-premises SharePoint deployments) dropped by 7 percentage points to 34 percent and on-premises-only environments dropped by 2 percentage points to 35 percent in 2017.

The shift to the cloud in SharePoint is mirrored on the Active Directory (AD) side in the vendor survey. In 2016, a very slight majority of AD deployments involved on-premises AD (51 percent). But in 2017, that number fell to 42 percent, while a mix of on-premises and Azure AD jumped 3 percentage points to 34 percent and pure Azure AD deployments rose 4 percentage points to 16 percent.

The survey also reveals the relative share of the last six on-premises versions of SharePoint, dating all the way back to SharePoint 2001, although that version and SharePoint 2003 are present in low enough numbers to make any conclusions about the trends on those editions statistically questionable.

Among the newer versions, the only one gaining significant share is the most recent, SharePoint 2016, which saw a 67 percent increase in deployments from 2016 to 2017. While impressive, it's gaining share at a much lower rate than SharePoint Online/Office 365 and from a smaller base. SharePoint 2016 ended 2017 with a presence in 25 percent of respondents' shops.

Holding steady and maintaining the largest share of any edition, including SharePoint Online, is SharePoint 2013. Deployed at 66 percent of respondents' sites, SharePoint 2013 won't maintain its lead through 2018 if SharePoint Online continues its momentum.

For 2017, SharePoint Online seemed to be taking most of its share from SharePoint 2007, which dropped 2 percentage points to 18 percent, and especially from SharePoint 2010, which dropped 8 percentage points to 40 percent.

As Office 365 deployments continue to gallop ahead, there is little reason to suspect that SharePoint Online's share of overall SharePoint workloads won't continue to increase. The question is how fast.

As befits a survey fielded by tools vendors, a statement accompanying the data points out that obstacles remain for those still moving to SharePoint Online.

"The move to the cloud is not always as easy as it sounds. Microsoft has released a content migration tool to help customers leave SharePoint 2010 and 2013, but it just isn't enough. Here at Sharegate, we still see a large number of customers leveraging our tools to migrate while keeping their existing site structure and objects," said Benjamin Niaulin, Microsoft Regional Director & Product Advisor at Sharegate.

Among the challenges are ongoing concerns about security, cost constraints, time constraints and difficulties in migrating SharePoint customizations from on-premises to online.

This survey says progress to the cloud in 2017 was rapid. The question for 2018 will be whether that pace can continue. Were we looking at low-hanging fruit, easy wins and pilot projects that could stall slightly this year? Or was it an early majority shift that could bring nearly half of the SharePoint customer base exclusively into the cloud by year's end?

Posted by Scott Bekker on January 10, 2018 at 2:44 PM

Update: Intel Confirms Flaw, Big Patch Week Ahead

The next Patch Tuesday or some date around then is looking like a doozy.

Reports have been bubbling up this week that vendors and open source teams are hustling under embargo to fix a major security flaw affecting Intel processors released over the last decade. The rumored software fix could seriously slow down both personal systems and public clouds.

Here's the top of The Register's report from Tuesday night:

A fundamental design flaw in Intel's processor chips has forced a significant redesign of the Linux and Windows kernels to defang the chip-level security bug.

Programmers are scrambling to overhaul the open-source Linux kernel's virtual memory system. Meanwhile, Microsoft is expected to publicly introduce the necessary changes to its Windows operating system in an upcoming Patch Tuesday: these changes were seeded to beta testers running fast-ring Windows Insider builds in November and December.

Crucially, these updates to both Linux and Windows will incur a performance hit on Intel products. The effects are still being benchmarked, however we're looking at a ballpark figure of five to 30 per cent slow down, depending on the task and the processor model. More recent Intel chips have features -- such as PCID -- to reduce the performance hit. Your mileage may vary.

The next Patch Tuesday is Jan. 9. Microsoft also sent out warnings to some users that their Azure Virtual Machines would undergo an unusual reboot for security and maintenance on Jan. 10, and Amazon Web Services (AWS) e-mailed users of a maintenance reboot on Jan. 5-6, The Register noted. Officially, all the vendors are declining comment.

Patch Tuesdays are always mark-the-date events for IT, but this flaw is looking more like an all-hands-on-deck situation -- both for the security issues and then for the potential of subsequent and permanent performance problems.

UPDATE: Intel released its first statement on the issue Wednesday afternoon, confirming a serious security problem and a fix timeframe for next week, but pushing back partially on the performance hit and on reports that the problem only affected Intel chips. Here's the statement:

Intel Responds to Security Research Findings

Intel and other technology companies have been made aware of new security research describing software analysis methods that, when used for malicious purposes, have the potential to improperly gather sensitive data from computing devices that are operating as designed. Intel believes these exploits do not have the potential to corrupt, modify or delete data.

Recent reports that these exploits are caused by a "bug" or a "flaw" and are unique to Intel products are incorrect. Based on the analysis to date, many types of computing devices -- with many different vendors' processors and operating systems -- are susceptible to these exploits.

Intel is committed to product and customer security and is working closely with many other technology companies, including AMD, ARM Holdings and several operating system vendors, to develop an industry-wide approach to resolve this issue promptly and constructively. Intel has begun providing software and firmware updates to mitigate these exploits. Contrary to some reports, any performance impacts are workload-dependent, and, for the average computer user, should not be significant and will be mitigated over time.

Intel is committed to the industry best practice of responsible disclosure of potential security issues, which is why Intel and other vendors had planned to disclose this issue next week when more software and firmware updates will be available. However, Intel is making this statement today because of the current inaccurate media reports.

Check with your operating system vendor or system manufacturer and apply any available updates as soon as they are available. Following good security practices that protect against malware in general will also help protect against possible exploitation until updates can be applied.

Intel believes its products are the most secure in the world and that, with the support of its partners, the current solutions to this issue provide the best possible security for its customers.

Posted by Scott Bekker on January 03, 2018 at 11:54 AM

Microsoft Dynamics Giants Combine in End-of-Year Merger

Two of the biggest partner companies in the Microsoft Dynamics channel merged in a deal announced Friday.

Edison, N.J.-based SBS Group and Columbus, Ohio-based Socius are combining to form Velosio, which will be led by the executive teams of both organizations.

The new company, which will have its legal headquarters in Columbus, Ohio, has 4,000 clients and more than 300 employees.

SBS and Socius were also Dynamics Master VARs, a business model Microsoft introduced in 2011 to allow a handful of large partners to recruit smaller partners who would sell Dynamics CRM and ERP solutions under the Master VAR's brand. By combining the SBS and Socius Master VAR networks, Velosio will have about 150 affiliates.

"The scale of the organization that we've assembled is unique in our space. It gives us the ability to scale in a lot of ways, much quicker than we could on our own," said Jeff Geisler, who was CEO of Socius and holds the same position at Velosio. "We really see this combination being a platform to help our four major stakeholders," he said in a reference to Velosio's clients, employees, affiliates and publishers, such as Microsoft, Sage and NetSuite.

The deal follows the acquisition in July of the 740-employee Tribridge, another Master VAR, by DXC, the services giant created earlier this year by the merger of CSC and the enterprise services arm of HPE.

Jim Bowman, president and CEO at SBS Group, who is now chairman and chief revenue officer for Velosio, said the new name reflects the organization's aim to grow fast. "We fully expect to have double-digit growth in the number of team members we have, revenue, you name it. We think there's a great opportunity in front of us," Bowman said.

One major potential source of that growth is the Microsoft Cloud Solution Provider (CSP) Program. SBS Group's Stratos Cloud Alliance had been one of a dozen CSP Indirect Providers in the United States. Those companies facilitate the resale of Microsoft cloud subscriptions directly to customers by smaller Microsoft partners, who have been clamoring for the ability to set their own pricing and create their own service bundles rather than just pointing customers to Microsoft to transact the deals.

The Stratos Cloud Alliance differs from the other CSP Indirect Providers in that SBS specializes in facilitating Dynamics 365 subscription sales for non-Dynamics partners, allowing those partners to offer customers ERP and CRM products without building up expertise in those complicated businesses.

"With the addition of Socius' team with our team, we now have more resources that are available to work with partners in deploying Dynamics 365 and implementing it," Bowman said.

Posted by Scott Bekker on December 15, 2017 at 4:05 PM

Huddleston Taking Leave, Schuster To Run Microsoft OCP

Ron Huddleston, corporate vice president of the Microsoft One Commercial Partner (OCP) worldwide organization, is going on an indefinite family leave a little less than a year into the job, and Gavriella Schuster will step into the role.

Schuster previously reported to Huddleston as corporate vice president for worldwide channels and programs, and was also considered Microsoft's "worldwide channel chief," a semi-official designation that she had also held prior to Huddleston's arrival in OCP when she was CVP of the now-discontinued Worldwide Partner Group (WPG).

Huddleston, a former senior executive at Oracle and more recently at Salesforce, where he helped set up the AppExchange business app marketplace, got the nod to lead the newly formed OCP in early 2017. He moved over to the partner role from an initial posting in the Microsoft Dynamics organization in the summer of 2016.

The OCP worldwide organization reports to Judson Althoff, executive vice president for the Microsoft Worldwide Commercial Business. Forming OCP consolidated channel efforts that were previously spread throughout different divisions and groups inside Microsoft. A particular focus was bringing developer enablement into the main channel organization.

This leadership change late in calendar 2017 caps a tumultuous year for Microsoft field and channel employees, as well as the Microsoft partners who interact with them. The creation of the OCP early in the year served as a precursor for a massive reorganization throughout the Microsoft field, with layoffs and reassignments occurring through the summer and the fall.

It remains to be seen what the change will mean for some of the initiatives that Huddleston championed, especially the Solution Maps, also known as OCP Catalogs, which were lists of go-to partners in different geographies and vertical industries for various elements of a solution. Much of the field reorganization involved the staffing of the new Channel Manager roles, which had responsibility for maintaining the maps.

The other major OCP initiative since its creation was to reorganize all Microsoft partner employees and efforts around three motions -- build-with, sell-with and go-to-market.

Posted by Scott Bekker on December 11, 2017 at 11:10 AM

Windows 10 on Snapdragon Reaches a Milestone

A year after unveiling a Microsoft-Qualcomm partnership to bring Windows 10 to the Snapdragon platform and create a new class of Windows on ARM devices, ASUS and HP are showing off systems with shipments set to begin next year.

The ASUS NovaGo, a 2-in-1 convertible, will be available in early 2018. The HP Envy x2 Windows on Snapdragon Mobile PC is supposed to ship in the spring. Both devices were on display for hands-on use at Qualcomm's Snapdragon Technology Summit in Hawaii this week.

Here are six big takeaways from the Windows on Snapdragon developments:

1. This is a category-creation effort.
This is an important new platform for Windows 10 and a potentially significant new class of devices for business and home use. For a while, the market has been mostly segmented into PCs, tablets and smartphones. The Snapdragon-based devices could create some crossover possibilities in the in-between spaces -- less expensive, less bulky and less battery-draining than even the smallest and lightest 2-in-1 PCs, but more capable than tablets or larger smartphones.

2. It's not Microsoft's first ARM rodeo.
Microsoft did try to do Windows on ARM before with the Surface RT, and it was not a hit. Surface RT eventually went away, and Microsoft took some bruising earnings writeoffs in the process. The major limitation of the RT platform was its inability to run regular Windows applications; it supported only modern apps. Those apps had usability problems and suffered from developer disinterest, and buyers were confused about the incompatibilities.

The new ASUS NovaGo.

This time Microsoft is coming at the ARM category in a more traditional way for Redmond -- working closely with the platform provider and enabling OEM partners. The new devices will come pre-installed with Windows 10 S, which has some app compatibility limitations of its own, although they are not as severe as the ones RT presented. Windows 10 Pro is supposed to be an upgrade option, as well.

3. The platform provides some interesting capabilities.
Undergirding the Windows 10 on Snapdragon systems is the Qualcomm Snapdragon 835 Mobile PC platform, as well as Qualcomm LTE modems. Microsoft bills the systems as Always Connected PCs. Terry Myerson, executive vice president of the Windows and Devices group at Microsoft, presented the platform pitch this way: "Always Connected PCs are instantly on, always connected with a week of battery life...these Always PCs have huge benefits for organizations, enabling a new culture of work, better security and lower costs for IT."

In the case of the NovaGo, ASUS is claiming 22 hours of battery life and 30 days of standby. Adding in capabilities from the Snapdragon X16 LTE modem, as well as on-board Wi-Fi and other technologies, ASUS says the system is capable of LTE download speeds of up to 1Gbps and Wi-Fi download speeds of up to 867Mbps. In other words, LTE downloads are faster than Wi-Fi downloads. ASUS claims an LTE download for a two-hour movie is around 10 seconds.
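Those claims are at least internally consistent: at a peak 1Gbps, a 10-second download moves about 1.25GB, which is in the range of a compressed two-hour HD movie. A back-of-the-envelope sketch (the movie size is my assumption, not an ASUS figure):

```python
# Back-of-the-envelope check of the claimed 10-second movie download.
lte_gbps = 1.0     # claimed peak LTE download speed, gigabits per second
movie_gb = 1.25    # assumed size of a compressed two-hour movie, gigabytes

seconds = movie_gb * 8 / lte_gbps   # 8 bits per byte
print(f"{seconds:.0f} seconds")     # 10 seconds
```

Real-world LTE throughput would be well below the 1Gbps peak, so actual downloads would take longer; the claim describes the theoretical ceiling of the modem.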

4. Designs aren't breaking new ground yet.
Designs shown so far are variations on existing PCs from ASUS and HP. While there is the potential for some rethinking of the PC based on the technology platform, the first few models are pretty familiar.

5. Performance is worth watching.
Performance will be an issue, which raises questions about where these systems will find their niche. Early reviews indicate the PCs will be plenty fast enough for Web browsing and e-mail. But the Windows emulation overhead, along with the repurposing of mobile chipsets for PC use, mean the systems won't be as capable as run-of-the-mill PCs at offline processing.

6. Price is a big question.
The other issue determining who ultimately buys these Windows 10 on Snapdragon devices is price. ASUS is charging $599 for 4GB of RAM and 64GB of storage and $799 for 8GB of RAM and 256GB of storage. That's fairly steep to compete with Chromebooks and even mid-range Windows PCs.

On the other hand, LTE connectivity is a wild card in valuing the system. HP and Lenovo (which is also working on a device) haven't released their pricing yet. Where those device prices land will indicate where the Windows 10 on Snapdragon category might find its niche.

Posted by Scott Bekker on December 06, 2017 at 12:08 PM

In MPN 'Ask Me Anything,' Microsoft Gives Pitch for the Cloud Enablement Desk

During an Ask Me Anything (AMA) session this week aimed at new partners, Microsoft gave the hard sell for its Cloud Enablement Desk benefit.

Microsoft hosted the AMA session, called "Partnering with Microsoft," on its Microsoft Partner Community page on Wednesday after taking questions for eight days. Fielding questions for the session was John Mighell, Global Breadth Partner Enablement Lead at Microsoft, who is also responsible for the MPC Partnership 101 discussion board.

As Microsoft emphasizes cloud service sales and consumption in employee compensation and in investor relations, the company is tweaking Microsoft Partner Network (MPN) programs, competencies and incentives to encourage partners to drive cloud sales. The Cloud Enablement Desk is a relatively new and little-known part of that broader effort.

First discussed around the beginning of this calendar year, the Cloud Enablement Desk is a team of Microsoft employees who are standing by to help both brand-new partners -- even those who signed up for the MPN for free -- and longtime partners get going with Microsoft's cloud.

"This is a new and fantastic resource team specifically for supporting non-managed partners as they navigate MPN and build a practice," Mighell said in the AMA. Managed partners are those few among Microsoft's hundreds of thousands of channel partners who have Microsoft field salespeople dedicated to their performance.

Mighell indicated that the desk is a work in progress with continuous investment: "As this is a new process, we are adding additional services and expertise to this team constantly."

Some of the services currently offered through the desk are for MPN's free, lowest tier -- Network Member. Those partners who nominate themselves for cloud desk help are supposed to be contacted by a Cloud Program Specialist within 48 hours with assistance for getting started in the MPN and for attaining a cloud competency. Microsoft's cloud competencies include Cloud Customer Relationship Management, Cloud Productivity, Cloud Platform, Enterprise Mobility and Management, and Small and Midmarket Cloud Solutions.

Partners who pay for a Microsoft Action Pack Subscription or who work through the requirements of, and pay for, one of the competencies are eligible for a little more hands-on help from the desk, including a Cloud Mentor Program. The Action Pack costs $475 per year, while the silver cloud competency fee in the United States is $1,670.

In response to a different question in the AMA -- the evergreen matter for low-profile partners of how to get 1:1 attention from Microsoft's field -- Mighell implied that the cloud desk may be something of a shortcut. "Getting involved with the Cloud Enablement Desk (see my previous reply) is one way to get connected in with local sales and enablement efforts," he said.

Posted by Scott Bekker on November 30, 2017 at 12:27 PM

Microsoft, SAP Take Cloud Partnership to the Dogfood Level

Global technology companies like Microsoft and SAP SE often double as marquee clients for one another.

Taking advantage of their scale, prominence and ongoing strategic partnership, SAP and Microsoft this week announced a new dogfooding arrangement: each company will put the jointly engineered solution -- SAP HANA Enterprise Cloud running on Microsoft Azure -- to work internally, running parts of its actual business.

The partnership to get SAP's enterprise ERP products onto Azure has been publicly discussed for over a year, and fits within a long history of cooperation in pursuit of joint business opportunities in spite of other areas where the companies directly compete. When the joint work on Azure was initially announced, for example, Microsoft and SAP also outlined an arrangement to more tightly integrate SAP offerings with Microsoft Office 365, upsetting some Microsoft partners for whom Office 365 compatibility with Microsoft Dynamics ERP products was a competitive differentiator.

Nor is SAP exclusive to Microsoft in this instance. The company has worked with Amazon Web Services (AWS) since 2011 to get its enterprise cloud products onto that public cloud and is also engaged with the Google Cloud Platform (GCP).

What's qualitatively different about the new Microsoft-SAP arrangement is the public description of the real internal systems where each company will deploy the joint solution.

Microsoft described itself as transforming its internal systems to implement the SAP S/4HANA Finance solution on Azure. The company said those internal systems include legacy SAP finance applications. The use of the word "include" implies other legacy systems are being swept up in the transformation, but those systems aren't disclosed. The plan for Redmond also calls for connecting SAP S/4HANA to Azure AI and analytics services.

SAP is committing to migrate more than a dozen business-critical systems to Azure. While it doesn't look like SAP is putting anything as central as its internal financial applications on the platform since that's not specifically mentioned, SAP will run its SaaS-based travel and expense management company, Concur, off of SAP S/4HANA on Azure. That is no small commitment, as SAP paid $8.3 billion to acquire Concur Technologies in 2014. Also, as part of the announcement, the supply chain management business SAP Ariba, already an Azure customer, will explore expanding Azure usage within its procurement applications.

On the Microsoft side, the dogfooding of the SAP S/4HANA software will certainly lead to continuing awkward questions from both customers and partners, who wonder why Microsoft doesn't bet its own business on Dynamics 365 for Finance and Operations. In the past, Microsoft officials have suggested to Dynamics AX users that the company committed to SAP before it had its own enterprise-scale ERP offering and the extensive customized applications associated with the SAP system made a migration to its own ERP solution impractical.

In fact, Microsoft's experience migrating its own heavily customized SAP implementation into the SAP HANA on Azure environment should prove useful for joint SAP-Microsoft customers such as The Coca-Cola Company, Columbia Sportswear Company and Costco Wholesale Corp.

Look to both Microsoft and SAP for many tips and tricks, detailed migration guidance and other best practices from the dogfooding experience. In addition to co-engineering, go-to-market activities and joint support services, the companies are promising extensive documentation from their internal deployments.

Posted by Scott Bekker on November 29, 2017 at 10:19 AM

HPE Unveils Pay-Per-Use GreenLake Solutions

Hewlett Packard Enterprise (HPE) unveiled five packaged workload solutions that will be billed like cloud services but reside on-premises.

The pay-per-use offerings are called HPE GreenLake, and the initial set will include Big Data, backup, open database, SAP HANA and edge computing. HPE announced them Monday during its Discover conference in Madrid.

"HPE GreenLake offers an experience that is the best of both worlds -- a simple, pay-per-use technology model with the risk management of data that's under the customer's direct control," said Ana Pinczuk, senior vice president and general manager of HPE Pointnext, in a statement.

Pinczuk presented the solutions as the next step in consumption after GreenLake Flex Capacity, HPE's existing offering of on-premises infrastructure that is paid for on an as-used basis. The GreenLake Flex Capacity offering is also being expanded, according to the announcements Monday, to include more technology choices, including Microsoft Azure Stack, HPE SimpliVity or high-performance computing (HPC).

Details are still fluid on when the various packages will be available, and on which products will be sold to which categories of customers through the channel versus by HPE directly.

Here is how HPE describes its five new outcome-based GreenLake solutions:

HPE GreenLake Big Data offers a Hadoop data lake, pre-integrated and tested on the latest HPE technology and Hortonworks or Cloudera software.  

HPE GreenLake Backup delivers on-premises backup capacity using Commvault software pre-integrated on the latest HPE technology with HPE metering technology and management services to run it.

HPE GreenLake Database with EDB Postgres delivered on-premises and built on open source technology to help simplify operations and substantially reduce total cost of ownership for a customer's entire database platform.

HPE GreenLake for SAP HANA offers an on-premises appliance operated by HPE with the right-sized, SAP-certified hardware, operating system, and services to meet workload performance and availability objectives.

HPE GreenLake Edge Compute offers an end-to-end lifecycle framework to accelerate a customer's Internet of Things (IoT) journey.

The HPE conference and GreenLake announcements come a week after HPE announced that CEO Meg Whitman will step down from the CEO role on Feb. 1, 2018, while retaining her seat on the HPE board of directors. Her replacement will be current HPE President Antonio Neri, who will become president and CEO and a member of the board.

Posted by Scott Bekker on November 27, 2017 at 12:41 PM

SkyKick, Ingram Offer Monthly Billing on Office 365, Backup, Migration Bundle

SkyKick and Ingram Micro now offer a bundle of Office 365, SkyKick Cloud Backup and the SkyKick Migration Suite that Microsoft Cloud Solution Provider (CSP) Indirect Resellers can buy from Ingram and sell to customers on a monthly billing basis.

The companies launched the bundle at the IT Nation show in Orlando, Fla., this month.

"We've seen across partners across the world that the more we can reduce friction, the more we can accelerate their business," said Chike Farrell, vice president of marketing at SkyKick, in explaining the thinking behind creating the bundle.

In the Ingram Marketplace, the bundle's official name is "Office 365 with SkyKick Migration & Backup."

If you think about the lifecycle of a customer cloud engagement, the bundle starts with the SkyKick Migration Suite for moving e-mail and data from existing systems into Office 365. The next element is Office 365 itself, which is available in the bundle from Ingram (a CSP Indirect Provider) in any of four versions. The final component for ongoing management is SkyKick's 2-year-old Cloud Backup for Office 365 product, which includes unlimited data storage, up to six daily automatic backups and one-click restore. Also included with the bundle is 24-hour SkyKick phone and e-mail support.

"For partners, it ties nicely into their business model. There's no break between Office 365 and how it's sold," said Peter Labes, vice president of business development at SkyKick. Labes added that SkyKick hopes the bundle will become Ingram's "hero SKU" for CSP Indirect Resellers of Office 365.

The bundle is initially available in the United States, but the companies plan to add other geographies.

Posted by Scott Bekker on November 17, 2017 at 9:11 AM

New Survey Points to Data Recovery Uncertainty, Channel Opportunity

A new survey of 500 U.S. organizations shows IT decision-makers are worried about the effectiveness and frequency of their data backups.

Data management and protection specialist StorageCraft commissioned the study with a third-party research organization.

The survey had three key findings.

Instant Data Recovery: More than half of respondents (51 percent) lacked confidence in the ability of their infrastructure to bring data back online after a failure in a matter of minutes.

Data Growth: The survey results suggested that organizational data growth is out of control. Some 43 percent of respondents reported struggling with data growth. The issue was even more pronounced for organizations with revenues above $500 million. In that larger subset, 51 percent reported data growth as a problem.

Backup Frequency: Also in the larger organization subset, slightly more than half wanted to conduct backups more frequently but felt their existing IT infrastructure prevented it.

In an interview about the results, Douglas Brockett, president of Draper, Utah-based StorageCraft, said the sense of uncertainty among U.S. customers points to opportunities for the channel.

"Despite all the work that's been done, we still had a significant percentage of the IT decision-makers who weren't confident that they could have an RTO [recovery time objective] in minutes rather than hours," Brockett said. "Channel partners have an 'in' here, where they can help these customers get a more robust infrastructure."

Posted by Scott Bekker on November 16, 2017 at 10:16 AM

Live! 360: 15 Lessons Microsoft Learned Running DevOps

Microsoft Visual Studio Team Services/Team Foundation Server (VSTS/TFS) isn't just a toolset for DevOps; the large team at Microsoft behind the products is a long-running experiment in doing DevOps.

During the main keynote at the Live! 360 conference in Orlando, Fla., this week, Buck Hodges shared DevOps lessons learned at Microsoft scale. While Microsoft has tens of thousands of developers engaged to varying degrees in DevOps throughout the company, Hodges, director of engineering for Microsoft VSTS, focused on the 430-person team developing VSTS/TFS.

VSTS and TFS, which share a master code base, provide a set of services for software development and DevOps, including source control, agile planning, build automation, continuous deployment, continuous integration, test case management, release management, package management, analytics and insights, and dashboards. Microsoft updates VSTS every three weeks, while new on-premises versions of TFS ship every four months.

Hodges' narrower lens on the VSTS/TFS team still provides a lengthy and deep set of experiences around DevOps. Hodges started on the TFS team in 2003 and helped lead its transformation into a cloud-focused DevOps team with VSTS. The group's real trial by fire in DevOps began when VSTS went online in April 2011.

"That's the last time we started from scratch. Everything's been an upgrade since then. Along the way, we learned a lot, sometimes the hard way," Hodges said.

Here are 15 DevOps tips gleaned from Hodges' keynote. (Editor's Note: This article has been updated to remove an incorrect reference to "SourceControl.Revert" in the third tip.)

1. Use Feature Flags
The whole point of a fast release cycle is fixing bugs and adding features. When it comes to features, Microsoft is using a technique called feature flags that allows it to designate how an individual feature gets deployed and to whom.

"Feature flags have been an excellent technique for us for both changes that you can see and also changes that you can't see. It allows us to decouple deployment from exposure," Hodges said. "The engineering team can build a feature and deploy it, and when we actually reveal it to the world is entirely separate."

Buck Hodges, director of engineering for Microsoft Visual Studio Team Services, makes the case for feature flags as a key element of DevOps.

The granularity allowed by Microsoft's implementation of feature flags is surprising. For example, Hodges said the team added support for the SSH protocol, which is a feature very few organizations need, but those that do need it are passionate about it. Rather than making it generally available across the codebase, Hodges posted a blog asking users who needed SSH to e-mail him. The VSTS team was able to turn on the feature for those customers individually.
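The decoupling of deployment from exposure that Hodges describes can be sketched in a few lines. The following Python is purely illustrative, not Microsoft's implementation; every name in it is hypothetical:

```python
# Minimal feature-flag store: code ships fully deployed, but a feature is
# only exposed to the users (or everyone) it has been flagged on for.

class FeatureFlags:
    def __init__(self):
        self._flags = {}  # feature name -> set of user IDs, or "all"

    def enable_for(self, feature, user_ids):
        """Expose a deployed feature to specific users (e.g., SSH early adopters)."""
        self._flags.setdefault(feature, set()).update(user_ids)

    def enable_for_all(self, feature):
        self._flags[feature] = "all"

    def is_enabled(self, feature, user_id):
        targets = self._flags.get(feature)
        if targets == "all":
            return True
        return targets is not None and user_id in targets

flags = FeatureFlags()
flags.enable_for("ssh-support", {"alice@example.com"})

assert flags.is_enabled("ssh-support", "alice@example.com")
assert not flags.is_enabled("ssh-support", "bob@example.com")
```

The SSH anecdote above is exactly this per-user granularity: the feature was deployed everywhere, but `enable_for` was effectively run only for the customers who asked.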

2. Define Some Release Stages
In most cases, Microsoft will be working on features of interest to more than a handful of customers. By defining Release Stages, those features can get flagged for and tested by larger and larger circles of users. Microsoft's predefined groups before a feature is fully available are:

  • Stage 0: internal Microsoft
  • Stage 1: a few external customers
  • Stage 2: private preview
  • Stage 3: public preview

3. Use a Revert Button
Wouldn't it be nice to have a big emergency button allowing you to revert from a feature if it starts to cause major problems for the rest of the code? That's another major benefit of the feature flag approach. In cases where Microsoft has found that a feature is causing too many problems, it's possible to turn the feature flag off. The whole system then ignores the troublesome feature's code, and should revert to its previous working state.

4. Make It a Sprint, not a Marathon
In its DevOps efforts around VSTS/TFS, Microsoft is working in a series of well-defined sprints, and that applies to the on-premises TFS, as well as the cloud-based VSTS. You could think of the old Microsoft development model as a marathon, working on thousands of changes for an on-premises server and releasing a new version every few years.

The core timeframe for VSTS/TFS is three weeks. At the end of every sprint, Microsoft takes a release branch that ships to the cloud on the service. Roughly every four months, one of those release branches becomes the new release of TFS. This three-week development motion is pretty well ingrained. The first sprint came in August 2010. In November 2017, Microsoft was working on sprint No. 127.

5. Flip on One Feature at a Time
No lessons-learned list is complete without a disaster. The VSTS team's low point came at the Microsoft Connect() 2013 event four years ago. The plan was to wow customers with a big release. An hour before the keynote, Microsoft turned on about two dozen new features. "It didn't go well. Not only did the service tank, we started turning feature flags off and it wouldn't recover," Hodges said, describing the condition of the service as a "death spiral."

It was two weeks before all the bugs were fixed. Since then, Microsoft has taken to turning on new features one at a time, monitoring them very closely, and turning features on completely at least 24 hours ahead of an event.

6. Split Up into Services
One other big change was partly a response to the Microsoft Connect() 2013 incident. At the time of the big failure, all of VSTS ran as one service. Now Microsoft has split that formerly global instance of VSTS into 31 separate services, giving the product much greater resiliency.

7. Implement Circuit Breakers
Microsoft took a page out of the Netflix playbook and implemented circuit breakers in the VSTS/TFS code. The analogy is to an electrical circuit breaker, and the goal is to stop a failure from cascading across a complex system. Hodges said that while fast failures are usually relatively straightforward to diagnose, it's the slow failures in which system performance slowly degrades that can present the really thorny challenges.

The circuit breaker approach has helped Microsoft protect against latency, failure and concurrency/volume problems, as well as shed load quickly, fail fast and recover more quickly, he said. Additionally, having circuit breakers creates another way to test the code: "Let's say we have 50 circuit breakers in our code. Start flipping them to see what happens," he said.

Hodges offered two warnings about circuit breakers. One is to make sure the team doesn't start treating the circuit breakers as causes rather than symptoms of an event. The other is that it can be difficult to understand what opened a circuit breaker, requiring thoughtful and specialized telemetry.
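The pattern itself is straightforward to sketch. The following illustrative Python assumes nothing about Microsoft's or Netflix's actual code: after a run of failures the breaker opens and callers fail fast, shedding load; after a cooldown a trial call is allowed through.

```python
import time

class CircuitBreaker:
    """Stops a failure from cascading: after too many consecutive failures
    the breaker opens, and further calls fail fast instead of piling load
    onto a sick dependency."""

    def __init__(self, threshold=5, reset_after=30.0):
        self.threshold = threshold      # failures before the breaker opens
        self.reset_after = reset_after  # cooldown before a trial call
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            # Half-open: cooldown elapsed, let one trial call through.
            self.opened_at = None
            self.failures = 0
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()  # open: shed load
            raise
        self.failures = 0
        return result
```

Hodges' testing suggestion maps directly onto the `opened_at` field: forcing it set in a chaos test simulates flipping that breaker open to see how the rest of the system behaves.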

8. Collect Telemetry
Here's a koan for DevOps: The absence of failure doesn't mean a feature is working. In a staged rollout environment like the one Microsoft runs, traffic on new features is frequently low. As the feature is exposed through the larger concentric circles of users in each release stage, it's getting more and more hits. Yet a problem may not become apparent until some critical threshold gets reached weeks or months after full availability of a feature.

In all cases, the more telemetry the system generates, the better. "When you run a 24x7 service, telemetry is absolutely key, it's your lifeblood," Hodges said. "Gather everything." For a benchmark, Microsoft is pulling in 7TB of data on average every day.

"When you run a 24x7 service, telemetry is absolutely key, it's your lifeblood. Gather everything."

Buck Hodges, Director of Engineering, Microsoft Visual Studio Team Services

9. Refine Alerts
A Hoover-up-everything approach is valuable when it comes to telemetry so that there's plenty of data available for pursuing root causes of incidents. The opposite is true on the alert side. "We were drowning in alerts," Hodges admitted. "When there are thousands of alerts and you're ignoring them, there's obviously a problem," he said, adding that too many alerts make it more likely you'll miss real problems. Cleaning up the alert system was an important part of Microsoft's DevOps journey, he said. Microsoft's main rules on alerts now are that every alert must be actionable and alerts should create a sense of urgency.

10. Prioritize User Experience
When deciding which types of problems to prioritize in telemetry, Hodges said Microsoft emphasizes user experience measurements. Early versions might have concluded that performance was fine as long as user requests weren't failing. But user expectations, including the threshold beyond which a delay breaks a user's train of thought, make it important not only to measure failed requests but also to count a request as a problem when it takes too long. "If a request takes more than 10 seconds, we consider that a failed request," Hodges said.
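Folding that rule into telemetry classification is a one-line check. A hypothetical sketch using Hodges' 10-second threshold (function and field names are invented for illustration):

```python
SLOW_THRESHOLD_SECONDS = 10.0  # per Hodges: over 10 seconds counts as failed

def classify_request(status_ok, duration_seconds):
    """A request fails the user-experience bar if it errors OR is too slow."""
    if not status_ok:
        return "failed"
    if duration_seconds > SLOW_THRESHOLD_SECONDS:
        return "failed"  # technically succeeded, but the user gave up waiting
    return "succeeded"

assert classify_request(True, 0.3) == "succeeded"
assert classify_request(True, 12.0) == "failed"
assert classify_request(False, 0.1) == "failed"
```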

11. Optimize DRIs' Time
Microsoft added a brick to the DevOps foundation in October 2013 with the formalization of designated responsible individuals, or DRIs. Responsible for rapid response to incidents involving the production systems, the DRIs represent a formalization on the operations side of DevOps. In Microsoft's case, the DRIs are on-call 24/7 on 12-hour shifts and are rotated weekly. In the absence of incidents, DRIs are supposed to conduct proactive investigation of service performance.

In case of an incident, the goal is to have a DRI on top of the issue in five minutes during the day and 15 minutes at night. Traditional seniority arrangements result in the most experienced people getting the plum day shifts. Microsoft has found that flipping the usual situation works best.

"We found that at night, inexperienced DRIs just needed to wake up the more experienced DRI anyway," Hodges said. As for off-hours DRIs accessing a production system, Microsoft also provides them with custom secured laptops to prevent malware infections, such as those from phishing attacks, from working their way into the system and wreaking havoc.

12. Assign Shield Teams
The VSTS/TFS team is organized into about 40 feature teams of 10 engineers and a program manager or two. With that aggressive every-three-weeks sprint schedule, those engineers need to be heads down on developing new features for the next release. Yet if an incident comes up in the production system involving one of their features, the team has to respond. Microsoft's process for that was to create a rotating "shield team" of two of the 10 engineers. Those engineers are assigned to address or triage any live-site issues or other interruptions, providing a buffer that allows the rest of the team to stay focused on the sprint.

13. Pursue Multiple Theories
In the case of a live site incident, there's usually a temptation to seize on a theory of the problem and dedicate all the available responders to pursuing that theory in the hopes that it will lead to a quick resolution. The problem comes when the theory is wrong. "It's surprising how easy it is to get myopic. If you pursue each of your theories sequentially, it's going to take longer to fix the problem for the customer," Hodges said. "You have to pursue multiple theories."

In a similar vein, Microsoft has found it's important and helpful to rotate out incident responders and bring in fresh, rested replacements if an incident goes more than eight or 10 hours without a resolution.

14. Combine Dev and Test Engineer Roles
One of the most critical evolutions affecting DevOps at Microsoft over the past few years involved a companywide change in the definition of an engineer. Prior to combining them in November 2014, Microsoft had separate development engineer and test engineer roles. Now developers who build code must test the code, which provides significant motivation to make the code more testable.

15. Tune the Tests
The three-week-sprint cycle led to a simultaneous acceleration in testing processes, with the biggest improvements coming in the last three years. As of September 2014, Hodges said the so-called "nightly tests" required 22 hours, while the full run of testing took two days.

A new testing regimen breaks down testing into different levels. A new test taxonomy allows Microsoft to run the code against progressively more involved levels, allowing simple problems to be addressed quickly. The first level tests only binaries and doesn't involve any dependencies. The next level adds the ability to use SQL and file system. A third level tests a service via the REST API, while the fourth level is a full environment to test end to end. The upshot is that Microsoft is running many, many more tests against the code in much less time.

Posted by Scott Bekker on November 16, 2017 at 1:50 PM