SkyKick, Ingram Offer Monthly Billing on Office 365, Backup, Migration Bundle

SkyKick and Ingram Micro now offer a bundle of Office 365, SkyKick Cloud Backup and the SkyKick Migration Suite that Microsoft Cloud Solution Provider (CSP) Indirect Resellers can buy from Ingram and sell to customers on a monthly billing basis.

The companies launched the bundle at the IT Nation show in Orlando, Fla., this month.

"We've seen across partners around the world that the more we can reduce friction, the more we can accelerate their business," said Chike Farrell, vice president of marketing at SkyKick, in explaining the thinking behind creating the bundle.

In the Ingram Marketplace, the bundle's official name is "Office 365 with SkyKick Migration & Backup."

If you think about the lifecycle of a customer cloud engagement, the bundle starts with the SkyKick Migration Suite for moving e-mail and data from existing systems into Office 365. The next element is Office 365 itself, which is available in the bundle from Ingram (a CSP Indirect Provider) in any of four versions. The final component for ongoing management is SkyKick's 2-year-old Cloud Backup for Office 365 product, which includes unlimited data storage, up to six daily automatic backups and one-click restore. Also included with the bundle is 24-hour SkyKick phone and e-mail support.

"For partners, it ties nicely into their business model. There's no break between Office 365 and how it's sold," said Peter Labes, vice president of business development at SkyKick. Labes added that SkyKick hopes the bundle will become Ingram's "hero SKU" for CSP Indirect Resellers of Office 365.

The bundle is initially available in the United States, but the companies plan to add other geographies.

Posted by Scott Bekker on November 17, 2017 at 9:11 AM

New Survey Points to Data Recovery Uncertainty, Channel Opportunity

A new survey of 500 U.S. organizations shows IT decision-makers are worried about the effectiveness and frequency of their data backups.

Data management and protection specialist StorageCraft commissioned the study with a third-party research organization.

The survey had three key findings.

Instant Data Recovery: More than half of respondents (51 percent) lacked confidence in the ability of their infrastructure to bring data back online after a failure in a matter of minutes.

Data Growth: The survey results suggested that organizational data growth is out of control. Some 43 percent of respondents reported struggling with data growth. The issue was even more pronounced for organizations with revenues above $500 million. In that larger subset, 51 percent reported data growth as a problem.

Backup Frequency: Also in the larger organization subset, slightly more than half wanted to conduct backups more frequently but felt their existing IT infrastructure prevented it.

In an interview about the results, Douglas Brockett, president of Draper, Utah-based StorageCraft, said the sense of uncertainty among U.S. customers points to opportunities for the channel.

"Despite all the work that's been done, we still had a significant percentage of the IT decision-makers who weren't confident that they could have an RTO [recovery time objective] in minutes rather than hours," Brockett said. "Channel partners have an 'in' here, where they can help these customers get a more robust infrastructure."

Posted by Scott Bekker on November 16, 2017 at 10:16 AM

Live! 360: 15 Lessons Microsoft Learned Running DevOps

Microsoft Visual Studio Team Services/Team Foundation Server (VSTS/TFS) isn't just a toolset for DevOps; the large team at Microsoft behind the products is a long-running experiment in doing DevOps.

During the main keynote at the Live! 360 conference in Orlando, Fla., this week, Buck Hodges shared DevOps lessons learned at Microsoft scale. While Microsoft has tens of thousands of developers engaged to varying degrees in DevOps throughout the company, Hodges, director of engineering for Microsoft VSTS, focused on the 430-person team developing VSTS/TFS.

VSTS and TFS, which share a master code base, provide a set of software development and DevOps services, including source control, agile planning, build automation, continuous deployment, continuous integration, test case management, release management, package management, analytics and insights, and dashboards. Microsoft updates VSTS every three weeks, while new on-premises versions of TFS ship every four months.

Hodges' narrower lens on the VSTS/TFS team still offers a lengthy and deep set of DevOps experiences. Hodges joined the TFS team in 2003 and helped lead its transformation into a cloud DevOps team with VSTS. The group's real trial by fire in DevOps started when VSTS went online in April 2011.

"That's the last time we started from scratch. Everything's been an upgrade since then. Along the way, we learned a lot, sometimes the hard way," Hodges said.

Here are 15 DevOps tips gleaned from Hodges' keynote. (Editor's Note: This article has been updated to remove an incorrect reference to "SourceControl.Revert" in the third tip.)

1. Use Feature Flags
The whole point of a fast release cycle is fixing bugs and adding features quickly. For features, Microsoft uses a technique called feature flags, which lets the team designate how an individual feature gets deployed and to whom.

"Feature flags have been an excellent technique for us for both changes that you can see and also changes that you can't see. It allows us to decouple deployment from exposure," Hodges said. "The engineering team can build a feature and deploy it, and when we actually reveal it to the world is entirely separate."

Buck Hodges, director of engineering for Microsoft Visual Studio Team Services, makes the case for feature flags as a key element of DevOps.

The granularity allowed by Microsoft's implementation of feature flags is surprising. For example, Hodges said the team added support for the SSH protocol, a feature very few organizations need, but one that those who do need are passionate about. Rather than turning it on for everyone, Hodges posted a blog asking users who needed SSH to e-mail him, and the VSTS team turned the feature on for those customers individually.
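In outline, the technique Hodges describes might look like the sketch below. The class, method, and flag names are illustrative assumptions, not Microsoft's actual implementation; the point is that whether code is deployed is tracked separately from whether a given user can see it, down to per-customer enablement like the SSH example.

```python
class FeatureFlags:
    """Toy feature-flag registry: deployment is decoupled from exposure."""

    def __init__(self):
        self._globally_on = set()  # features exposed to everyone
        self._per_user = {}        # feature -> set of user ids opted in

    def enable_globally(self, feature):
        self._globally_on.add(feature)

    def enable_for_user(self, feature, user_id):
        # e.g. turning on SSH support only for the customers who asked for it
        self._per_user.setdefault(feature, set()).add(user_id)

    def disable(self, feature):
        # the code stays deployed, but is no longer exposed to anyone
        self._globally_on.discard(feature)
        self._per_user.pop(feature, None)

    def is_enabled(self, feature, user_id):
        return (feature in self._globally_on
                or user_id in self._per_user.get(feature, set()))


flags = FeatureFlags()
flags.enable_for_user("ssh-support", "customer-42")
print(flags.is_enabled("ssh-support", "customer-42"))  # True
print(flags.is_enabled("ssh-support", "customer-99"))  # False
```

The engineering team can ship the code behind the flag at any time; exposure becomes a runtime decision rather than a deployment decision.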

2. Define Some Release Stages
In most cases, Microsoft will be working on features of interest to more than a handful of customers. By defining Release Stages, those features can get flagged for and tested by larger and larger circles of users. Microsoft's predefined groups before a feature is fully available are:

  • Stage 0: internal Microsoft
  • Stage 1: a few external customers
  • Stage 2: private preview
  • Stage 3: public preview
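Combined with feature flags, the stages behave like expanding rings: a feature flagged at a given stage is visible to accounts in that ring or any earlier, smaller ring. A hypothetical sketch (the stage labels here are assumptions based on the list above):

```python
# Ordered rings, smallest audience first.
STAGES = ["stage0-internal", "stage1-select-customers",
          "stage2-private-preview", "stage3-public-preview", "ga"]

def exposed_to(feature_stage, account_stage):
    """A feature at a given stage is visible to accounts in that ring
    or any earlier (smaller) ring."""
    return STAGES.index(account_stage) <= STAGES.index(feature_stage)

# A public-preview feature is visible to internal users, but an
# internal-only feature stays hidden from private-preview accounts.
print(exposed_to("stage3-public-preview", "stage0-internal"))   # True
print(exposed_to("stage0-internal", "stage2-private-preview"))  # False
```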

3. Use a Revert Button
Wouldn't it be nice to have a big emergency button that rolls back a feature if it starts to cause major problems for the rest of the code? That's another major benefit of the feature flag approach. When Microsoft finds that a feature is causing too many problems, it can simply turn the feature flag off. The whole system then ignores the troublesome feature's code and should revert to its previous working state.

4. Make It a Sprint, not a Marathon
In its DevOps efforts around VSTS/TFS, Microsoft is working in a series of well-defined sprints, and that applies to the on-premises TFS, as well as the cloud-based VSTS. You could think of the old Microsoft development model as a marathon, working on thousands of changes for an on-premises server and releasing a new version every few years.

The core cadence for VSTS/TFS is three weeks. At the end of every sprint, Microsoft cuts a release branch that ships to the cloud service. Roughly every four months, one of those release branches becomes the new release of TFS. This three-week development motion is well ingrained: the first sprint came in August 2010, and in November 2017 Microsoft was working on sprint No. 127.

5. Flip on One Feature at a Time
No lessons-learned list is complete without a disaster. The VSTS team's low point came at the Microsoft Connect() 2013 event four years ago. The plan was to wow customers with a big release. An hour before the keynote, Microsoft turned on about two dozen new features. "It didn't go well. Not only did the service tank, we started turning feature flags off and it wouldn't recover," Hodges said, describing the condition of the service as a "death spiral."

It was two weeks before all the bugs were fixed. Since then, Microsoft has taken to turning on new features one at a time, monitoring them very closely, and turning features on completely at least 24 hours ahead of an event.

6. Split Up into Services
One other big change was partly a response to the Microsoft Connect() 2013 incident. At the time of the big failure, all of VSTS ran as one service. Now Microsoft has split that formerly global instance of VSTS into 31 separate services, giving the product much greater resiliency.

7. Implement Circuit Breakers
Microsoft took a page out of the Netflix playbook and implemented circuit breakers in the VSTS/TFS code. The analogy is to an electrical circuit breaker, and the goal is to stop a failure from cascading across a complex system. Hodges said that while fast failures are usually relatively straightforward to diagnose, it's the slow failures in which system performance slowly degrades that can present the really thorny challenges.

The circuit breaker approach has helped Microsoft protect against latency, failure and concurrency/volume problems, as well as shed load quickly, fail fast and recover more quickly, he said. Additionally, having circuit breakers creates another way to test the code: "Let's say we have 50 circuit breakers in our code. Start flipping them to see what happens," he said.
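A minimal circuit breaker has three states: closed (calls pass through), open (calls are rejected immediately, shedding load), and half-open (after a cooldown, one probe call tests recovery). The following is a generic sketch of that common pattern, not Microsoft's actual code; the thresholds are arbitrary:

```python
import time

class CircuitBreaker:
    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            # cooldown elapsed: half-open, allow one probe call through
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        else:
            self.failures = 0      # success closes the circuit again
            self.opened_at = None
            return result
```

Deliberately "flipping" a breaker, as Hodges suggests, amounts to forcing it into the open state in a test environment and observing how the rest of the system copes.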

Hodges offered two warnings about circuit breakers. One is to make sure the team doesn't start treating the circuit breakers as causes rather than symptoms of an event. The other is that it can be difficult to understand what opened a circuit breaker, requiring thoughtful and specialized telemetry.

8. Collect Telemetry
Here's a koan for DevOps: The absence of failure doesn't mean a feature is working. In a staged rollout environment like the one Microsoft runs, traffic on new features is frequently low. As the feature is exposed through the larger concentric circles of users in each release stage, it's getting more and more hits. Yet a problem may not become apparent until some critical threshold gets reached weeks or months after full availability of a feature.

In all cases, the more telemetry the system generates, the better. "When you run a 24x7 service, telemetry is absolutely key, it's your lifeblood," Hodges said. "Gather everything." For a benchmark, Microsoft is pulling in 7TB of data on average every day.
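One way to encode the "absence of failure isn't success" point is to refuse a health verdict until a feature has seen enough traffic. This guard is a hypothetical illustration; the sample size and failure-rate threshold are assumptions, not Microsoft's figures:

```python
def health_verdict(successes, failures, min_samples=1000):
    """Return 'healthy', 'unhealthy', or 'insufficient data'.

    A new feature behind a flag may see almost no traffic, so zero
    failures alone proves nothing; withhold judgment until the
    sample is large enough to mean something.
    """
    total = successes + failures
    if total < min_samples:
        return "insufficient data"
    return "healthy" if failures / total < 0.01 else "unhealthy"


print(health_verdict(successes=40, failures=0))     # insufficient data
print(health_verdict(successes=9990, failures=10))  # healthy (0.1% failure)
```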


9. Refine Alerts
A Hoover-up-everything approach is valuable when it comes to telemetry so that there's plenty of data available for pursuing root causes of incidents. The opposite is true on the alert side. "We were drowning in alerts," Hodges admitted. "When there are thousands of alerts and you're ignoring them, there's obviously a problem," he said, adding that too many alerts make it more likely you'll miss problems. Cleaning up the alert system was an important part of Microsoft's DevOps journey, he said. Microsoft's main rules on alerts now are that every alert must be actionable and alerts should create a sense of urgency.

10. Prioritize User Experience
When deciding which types of problems to prioritize in telemetry, Hodges said Microsoft emphasizes user experience measurements. Early approaches might have concluded that performance was fine as long as user requests weren't failing. But a user can lose his or her train of thought during a long delay, so it's important not only to measure failed requests but also to flag requests that take too long. "If a request takes more than 10 seconds, we consider that a failed request," Hodges said.
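Treating slow requests as failed requests is essentially a one-line change to how a request is classified in telemetry. The 10-second threshold comes from the keynote; the function itself is an illustrative sketch:

```python
SLOW_THRESHOLD_SECONDS = 10.0  # per Hodges: more than 10s counts as failed

def classify_request(status_code, duration_seconds):
    """Classify a request for user-experience telemetry.

    A technically successful response that takes too long is still
    a failure from the user's point of view.
    """
    if status_code >= 500:
        return "failed"
    if duration_seconds > SLOW_THRESHOLD_SECONDS:
        return "failed (slow)"
    return "succeeded"


print(classify_request(200, 0.8))   # succeeded
print(classify_request(200, 12.5))  # failed (slow)
```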

11. Optimize DRIs' Time
Microsoft added a brick to the DevOps foundation in October 2013 with the formalization of designated responsible individuals, or DRIs. Responsible for rapid response to incidents involving the production systems, the DRIs represent a formalization on the operations side of DevOps. In Microsoft's case, the DRIs are on-call 24/7 on 12-hour shifts and are rotated weekly. In the absence of incidents, DRIs are supposed to conduct proactive investigation of service performance.

In case of an incident, the goal is to have a DRI on top of the issue in five minutes during the day and 15 minutes at night. Traditional seniority arrangements result in the most experienced people getting the plum day shifts. Microsoft has found that flipping the usual situation works best.

"We found that at night, inexperienced DRIs just needed to wake up the more experienced DRI anyway," Hodges said. As for off-hours DRIs accessing a production system, Microsoft also provides them with custom secured laptops to prevent malware infections, such as those from phishing attacks, from working their way into the system and wreaking havoc.

12. Assign Shield Teams
The VSTS/TFS team is organized into about 40 feature teams of 10 engineers and a program manager or two. With that aggressive every-three-weeks sprint schedule, those engineers need to be heads down on developing new features for the next release. Yet if an incident comes up in the production system involving one of their features, the team has to respond. Microsoft's process for that was to create a rotating "shield team" of two of the 10 engineers. Those engineers are assigned to address or triage any live-site issues or other interruptions, providing a buffer that allows the rest of the team to stay focused on the sprint.

13. Pursue Multiple Theories
In the case of a live site incident, there's usually a temptation to seize on a theory of the problem and dedicate all the available responders to pursuing that theory in the hopes that it will lead to a quick resolution. The problem comes when the theory is wrong. "It's surprising how easy it is to get myopic. If you pursue each of your theories sequentially, it's going to take longer to fix the problem for the customer," Hodges said. "You have to pursue multiple theories."

In a similar vein, Microsoft has found it's important and helpful to rotate out incident responders and bring in fresh, rested replacements if an incident goes more than eight or 10 hours without a resolution.

14. Combine Dev and Test Engineer Roles
One of the most critical evolutions affecting DevOps at Microsoft over the past few years involved a companywide change in the definition of an engineer. Prior to combining them in November 2014, Microsoft had separate development engineer and test engineer roles. Now developers who build code must test the code, which provides significant motivation to make the code more testable.

15. Tune the Tests
The three-week-sprint cycle led to a simultaneous acceleration in testing processes, with the biggest improvements coming in the last three years. As of September 2014, Hodges said the so-called "nightly tests" required 22 hours, while the full run of testing took two days.

A new testing regimen breaks down testing into different levels. A new test taxonomy allows Microsoft to run the code against progressively more involved levels, allowing simple problems to be addressed quickly. The first level tests only binaries and doesn't involve any dependencies. The next level adds the ability to use SQL and file system. A third level tests a service via the REST API, while the fourth level is a full environment to test end to end. The upshot is that Microsoft is running many, many more tests against the code in much less time.
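The level taxonomy described above could be modeled as an ordered set of suites where each level adds dependencies, with cheap levels run first so simple problems surface quickly. The labels here are assumptions based on the description, not Microsoft's actual names:

```python
# Hypothetical labels for the four test levels described in the talk.
TEST_LEVELS = {
    0: "unit: binaries only, no external dependencies",
    1: "adds SQL and the file system",
    2: "service tests via the REST API",
    3: "full environment, end to end",
}

def suites_to_run(max_level):
    """Return the suites up to max_level, cheapest first."""
    return [TEST_LEVELS[level] for level in sorted(TEST_LEVELS)
            if level <= max_level]


print(suites_to_run(1))
# ['unit: binaries only, no external dependencies', 'adds SQL and the file system']
```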

Posted by Scott Bekker on November 16, 2017 at 1:50 PM

Live! 360: Microsoft's 'No. 1 Persona' for BI is the Business User

Partners pitching business intelligence (BI) solutions to business users rather than database administrators (DBAs) appear to have a committed new ally in Microsoft.

When looking at Microsoft's recent enhancements to its BI platforms, the lack of rows, columns and even traditional data management terminology makes it evident that the changes aren't aimed at DBAs.

"Our No. 1 persona, our No. 1 person that we're building for, is the business user," said Charles Sterling, senior program manager in the Microsoft Business Applications Group, during his keynote for the SQL Server Live! track of the Live! 360 conference Tuesday in Orlando, Fla.

Sterling, a 25-year Microsoft veteran, and Ted Pattison of Critical Path Training, delivered the session, "Microsoft BI -- What's New for BI Pros, DBAs and Developers." Their talk centered on demos of Power BI, but also touched on roadmaps for elements of Power BI, and covered the growing role of PowerApps and Flow. The main idea behind PowerApps is to allow business users, without coding, to pull from either simple or complex organizational data sources and create shareable business apps that are usable from mobile devices or in a browser.

All of Microsoft's simplification efforts are driving an explosion of BI usage among organizations. According to a slide from the presentation titled "Power BI by the numbers," there are 11.5 million data models hosted in the Power BI service, 30,000 data models added each day, 10 million monthly publish to Web views, and 2 million report and dashboard queries per hour.

Charles Sterling, senior program manager in the Microsoft Business Applications Group, discusses Power BI improvements during his keynote for the SQL Server Live! track of the Live! 360 conference Tuesday in Orlando, Fla.

Yet one of the biggest obstacles right now preventing business users from running even more wild with the concept is not having access to that organizational data. While a DBA creating a more sophisticated application has, or can quickly get, permissions for the underlying organizational data, the story is different for most business users.

"For [business users] to create an app that goes out and collects data is relatively difficult," Sterling said. "That's what PowerApps is going to enable business people to do in the near future. You can do it right now, but we are actually integrating it into Power BI."

A related concept on the roadmap will actually require more organizational communication across the business, including DBAs educating users to help set expectations. From within their dashboard, business users will be able to use PowerApps to update the database on the back end, a PowerApps roadmap feature called Write Back.

Sterling said the Write Back feature is one of his favorite new features, but he suggested it may be a bit of a struggle for DBAs as business users get going with it.

"For DBAs, empowering business users with the tools to directly update and collect data will likely open a whole new host of problems. Take, for example, a business user updating data being fed into a Hadoop cluster. It is entirely likely that the processing could take hours to days to propagate into the BI system they are viewing. Business users will expect those updates in real time," Sterling said in an interview.

Another big element of the roadmap with business users in mind is storytelling in the Power BI Desktop, Sterling said.

During a demo, Pattison clicked on slicers in a graph within Power BI that were increasing or decreasing. The example application brought up a sentence, based on the underlying data, explaining in business terms what was driving the increase or decrease.

"We're going to continue playing out this whole storytelling [approach]," Sterling said. In another example of such storytelling, Sterling and Pattison ran most of their presentation out of bookmarks in Power BI rather than PowerPoint. The bookmarks recall a preconfigured view of a report page, and pull live data when the bookmark is displayed.

Another demo during the session that caught attendees' attention involved Power BI Report Server, an enterprise reporting solution that received an update release on Nov. 1. Blue Badge Insights Founder and CEO Andrew Brust found the walkthrough of the Power BI Report Server one of the most important demos during the session. "It's an enhanced version of reporting services, but it lets you distribute reports on premises instead of having to push them to the cloud," Brust said.

Sterling said near-term items on the Report Server roadmap include more data sources, more APIs and integration with Microsoft SharePoint.

Posted by Scott Bekker on November 15, 2017 at 9:22 AM

ConnectWise Marshals Smartphone Cams To Enlist Users in Their Own Support

When the ConnectWise Control team told him they were playing around with integrating smartphone cameras into their remote support offering, ConnectWise Chief Product Officer Craig Fulton wasn't sure at first that the idea added enough value to pursue.

After all, end users or field engineers could use products like Apple FaceTime for show and tell with subject-matter experts or senior support staff back at the main office. But the team quickly sold him on the idea, and his demo of the technology with ConnectWise CEO Arnie Bellini was an audience favorite during the main keynote at the company's IT Nation show last week in Orlando.

The official name of the video streaming technology, expected to be available in the second quarter of 2018, is ConnectWise Perspective.

On the ConnectWise platform side, Perspective includes a browser plug-in and integrations with Control (formerly ConnectWise ScreenConnect), as well as integration with ticketing and management in products like ConnectWise Automate. But that's for the tech back at the MSP's headquarters.

ConnectWise CEO Arnie Bellini (right) points his smartphone camera at his tablet, and ConnectWise Chief Product Officer Craig Fulton (left) streams the camera view to his PC. The on-stage preview of ConnectWise Perspective generated a lot of buzz among MSPs at IT Nation. (Source: Scott Bekker)

At the customer site, all the user needs is an iPhone or an Android phone. The headquarters tech would send a URL to the field tech or the end user. By simply tapping the link and entering a code, the on-site user's camera feeds live video into the back-end system, allowing the headquarters tech to see what the user is seeing. That technology is built on the Web Real-Time Communication (WebRTC) protocols and APIs.

With Perspective, pointing the camera at a bar code on the underside of a laptop can instantly give the tech back at the main office all the identifying information about that system. Use of the camera will also cause the session to automatically integrate with tickets and be recorded for billing purposes.

ConnectWise is also working on a feature for Perspective that it's calling Canvas. As a user moves the camera around, the images will be stitched together, similar to the way smartphone cameras build panoramic photos. A tech will be able to keep the canvas open in a separate browser tab from the feed, building a map of an entire room. That map, or canvas, will let them direct the on-site user to other areas. The headquarters tech is also supposed to be able to direct a small box within the smartphone camera display to lead the user's focus toward those things that are important to the tech, Fulton explained in an interview.

Use of technologies like FaceTime introduced some problems in the past that Perspective would address, Fulton said. For one thing, the customer then has a technician's personal phone number, introducing a privacy issue and creating a temptation for the customer to reach out to that tech any time they have a problem, regardless of whether the tech is on duty or not. Additionally, any FaceTime-style interactions fall outside of an MSP's billing and documentation processes.

Potential usage scenarios are extensive. On the one hand, there are the junior tech/senior tech scenarios, where a junior tech can get hands-on experience in the field and stream back to a senior tech for training or advanced troubleshooting.

Additionally, there's augmented self-support for customers. Anything with blinking lights and a confusing array of buttons, ports or switches presents an opportunity for Perspective, especially supporting audio/visual equipment, physical security devices or networking gear.

"For all of the things that really require having to stand in front of it, it's going to be super useful. It's going to turn the customer into a technician," Fulton said. "You see less of a need for field engineers. With cloud, the reach is getting further."

Posted by Scott Bekker on November 13, 2017 at 9:35 AM

IT Nation: With Kit, ConnectWise Hopes To Simplify Integrations

ConnectWise, which is touting its many integrations and industry openness as a differentiator in an increasingly competitive market, previewed a new developer kit that will make it faster and easier for partners to build on its platform.

CEO Arnie Bellini unveiled the ConnectWise Developer Kit on Thursday in the opening keynote of the company's IT Nation conference in Orlando, Fla.

"This is about giving you a tool that will generate code automatically and lets you very quickly add to the ConnectWise platform with your solution," Bellini said of the kit, which is in a pilot phase after a two-year development process and is expected to be generally available in the second quarter of 2018.

The company currently boasts more than 200 third-party integrations, developed over the years through co-engineering projects, APIs and SDKs.

Bellini said the new kit will streamline integrations for those third parties, but will also make it possible for the company's managed service provider (MSP) customers to tailor the ConnectWise platform to their unique needs or to create their own connections to key applications that aren't already integrated.

ConnectWise CEO Arnie Bellini announced a new developer kit and angel investment fund at IT Nation to spur vendor and partner integrations. (Source: Scott Bekker)

"This developer kit is not just about our vendor partners, but it's also about you and especially our larger partners," Bellini told many of the 3,500 attendees during the opening keynote.

In a short on-stage demo, ConnectWise Chief Product Officer Craig Fulton built one of the integrations, called a pod, in about a minute, joking that he's not a developer, but he can drag and drop.

"We'll still give you the APIs, we'll still give you the SDKs, but this is the tool that we see that unifies this ecosystem," Fulton said.

In conjunction with the new tool, Bellini offered a tantalizing hint of new funding ConnectWise will make available for partners to further build out the ecosystem.

"We have raised a fund that's up to $10 million at this point, and we hope to get it up to $25 million," he said. "We want to be angel investors for those of you that can help us connect this ecosystem together." In an interview later, Bellini said the angel investment fund doesn't have a name yet, but will be run out of ConnectWise Capital.

Meanwhile, ConnectWise is moving forward with its traditional style of partnership/API/SDK integrations. The company was joined on stage by Nirav Sheth, vice president of Cisco's Global Partner Organization for Solutions, Architectures & Engineering, to talk about the companies' new partnership effort announced last week. ConnectWise and Cisco are working together on ConnectWise Unite, which will allow Cisco partners to do cloud-based management, automation and billing for Cisco Spark, Cisco Meraki, Cisco Umbrella and Cisco Stealthwatch.

As a privately held company, ConnectWise finds itself surrounded by IT service management companies with private equity backing, including Kaseya, Continuum, SolarWinds MSP and Autotask, which just combined with Datto. Last month, former Tech Data CEO Steve Raymund joined the ConnectWise board of directors and is helping the company evaluate options for outside financing.

Posted by Scott Bekker on November 09, 2017 at 12:04 PM

Microsoft Opens Azure Marketplace to Partner 'Ap/Ops'

While DevOps is a key trend for IT departments, Microsoft hopes to seed a new partner ecosystem within the Azure Marketplace around a related idea that it's calling "Ap/Ops."

The immediate evidence of the effort is general availability this month of Managed Applications in the Azure Marketplace, which is Microsoft's 3-year-old catalog of third-party applications that have been certified or optimized to run on the Azure public cloud platform.

Managed applications differ from regular applications in the Azure Marketplace. Where a customer deploys a regular application to Azure themselves (or has a partner deploy it), a managed application is a turnkey package: the partner who develops the solution packages it with the underlying Azure infrastructure, sells it as a sealed bundle, and handles operations, such as management and lifecycle support of the application, on the customer's behalf.

Corey Sanders, director of compute for Microsoft Azure, described that packaging of the application and the operations as "Ap/Ops" in a blog post announcing managed applications in the Azure Marketplace.

"Managed Service Providers (MSPs), Independent Software Vendors (ISVs) and System Integrators (SIs) can build turnkey cloud solutions using Azure Resource Manager templates. Both the application IP and the underlying pre-configured Azure infrastructure can easily be packaged into a sealed and serviceable solution," Sanders said.

Customers can deploy the managed application in their own Azure service, where they are billed for the application's Azure consumption along with a new line item for any fees the partner charges for lifecycle operations.

Sanders presents the Azure Marketplace offering as a first in public cloud. "This new distribution channel for our partners will change customer expectations in the public cloud. Unlike our competitors, in Azure, a marketplace application can now be much more than just deployment and set-up. Now it can be a fully supported and managed solution," he said.

Three companies were ready to go last week with managed applications for sale in the Azure Marketplace -- the OpsLogix OMS Oracle Solution, Xcalar Data Platform and Cisco Meraki.

Posted by Scott Bekker on November 06, 2017 at 8:21 AM

HPE Is Moving

Hewlett Packard Enterprise (HPE) will move its headquarters across town by the end of 2018.

The company announced the headquarters move from Palo Alto, Calif., to nearby Santa Clara, Calif., on Thursday, almost two years to the day after the official split of the old Hewlett-Packard Co. into HPE and HP Inc.

With the move, HPE will leave the headquarters campus area that it currently still shares with HP Inc.

"Over the past two years we've made tremendous progress towards becoming a simpler, nimbler and more focused company," said HPE CEO Meg Whitman in a statement. "I'm excited to move our headquarters to an innovative new building that provides a next-generation digital experience for our employees, customers and partners."

HPE will consolidate its Silicon Valley real estate for its smaller workforce by selling the Palo Alto building and relocating employees to the Santa Clara offices, as well as existing offices, including those in San Jose and Milpitas.

The Santa Clara office building that will become the HPE headquarters was already a showcase structure for the wireless and collaboration technologies of Aruba Networks, which HP acquired for $2.7 billion in 2015. The Aruba unit started designing the 230,000-square-foot, six-story, open-floor-plan office space after the acquisition. A local business report at the time suggested the facility would have space for hundreds more employees than the nearly 700 that Aruba had at the time.

HPE and HP Inc. will continue to cooperate to support two Silicon Valley landmarks that are part of their shared heritage -- the Hewlett-Packard Garage and the Founder's Office of Bill Hewlett and Dave Packard.

Posted by Scott Bekker on November 02, 2017 at 10:36 AM

Strategic Microsoft Partners Get the 'Swarm' Treatment

The ongoing reorganization of the Microsoft channel operation, One Commercial Partner (OCP), includes a new concept called the "swarm," a pool of technical experts at Microsoft available to strategic partners.

The term came up during the most recent episode of "The Ultimate Guide to Partnering" podcast, as host Vince Menzione interviewed Scott Buth, director of partner development in the Microsoft U.S. OCP. Buth, a 10-year Microsoft veteran, is responsible for improving the capabilities of Microsoft Licensing Solution Provider (LSP) partners, an elite group that currently consists of 16 companies.

Specifically, he works with those partners in the newly defined solution areas -- Modern Workplace, Apps & Infrastructure, Data & AI and Business Applications. Those four topics were established as the major horizontal solution focus areas in the field reorganization laid out in July.

Asked by Menzione to elaborate after mentioning the "swarm" term, Buth offered this description: "It is, essentially, the pool of Cloud Solution Architects that align to our four solution areas...Those resources are aligned to those solution areas, and the [Partner Technical Strategist] who quarterbacks the technology and the solution development for our strategic partners has the ability to engage those individuals and bring them into the conversation, help train and enable the partner technical resources, as well as create a deeper roadmap for where they want to go with their solutions."

The definition came up as Buth was describing Microsoft's overall efforts to deploy Microsoft's partner-supporting business and technical experts in combination for the benefit of partners.

"It's really to get under the hood of our partners' capabilities, understand the types of practices that they have in place today, the offerings that are generating profit for them, and then also work with them in a strategic way to align them on a road map of offering development that we want to work on," Buth said.

Menzione's full interview with Buth is available on "The Ultimate Guide to Partnering" podcast.

While the "swarm" term doesn't seem to be commonly used outside of Microsoft or confidential briefings with individual partners, the concept lines up with floating technical resources in Microsoft OCP organizational charts (see below). Those charts mention Cloud Solution Architects and Partner Technical Architects that are outside the main organization buckets of OCP (Build-with, Go-To-Market and Sell-with).


The "swarm" idea offers hints of how Microsoft intends for those technical experts to be deployed.

Posted by Scott Bekker on October 30, 2017 at 8:43 AM

Investors To Acquire Datto, Merge It with Autotask To Form MSP Powerhouse

Investment firm Vista Equity Partners shook up the managed service provider (MSP) market on Thursday by announcing an agreement to acquire Datto and to combine that data protection specialist company with another of its holdings, MSP tools vendor Autotask Corp.

Assuming the deal closes as expected by the end of this year, it will create a powerhouse MSP tools company with offerings ranging from backup and disaster recovery (BDR) to remote monitoring and management (RMM) to professional services automation (PSA), along with file sync and share and SMB networking. Datto founder and CEO Austin McChord will serve as CEO overseeing a combined management team, while Autotask President and CEO Mark Cattini will act as a strategic advisor to the board of directors. Branding for the combined company has not been determined.

"This unique combination of talent with a track record of success marks a new chapter that will make an even bigger impact for our managed service provider partners, by delivering an unprecedented set of capabilities for them to serve millions of small businesses in the future," McChord said in a statement. Terms of the deal were not disclosed, although Datto has previously been valued as a $1 billion company after securing $75 million in a Series B financing round two years ago.

Datto is based in Norwalk, Conn. Autotask, with headquarters in New York, was acquired by Vista Equity Partners in June 2014. Vista, which focuses on enterprise technology companies and has invested $30 billion since 2000, has offices in Austin, Texas; San Francisco and Chicago.

The companies did make public a few scale metrics: combined, Datto and Autotask serve 13,000 managed service providers and 500,000 SMB customers, with a geographic reach covering 125 countries.

Cattini emphasized the future company's potential for building integrations between the product lines and improving the products. "With the powerful combination of the Autotask Unified PSA-RMM platform and Datto's industry leading business continuity solutions, together we can now deliver unprecedented innovation and unmatched levels of value and service to our customers and partners worldwide," Cattini said in a statement.

Executives from both Datto and Autotask assured both MSPs and industry partners that the new company will continue to work openly with its competitors/partners to ensure that products are interoperable -- allowing MSPs to continue to mix and match among RMM, PSA and BDR tools.

"We're going to continue to have an open philosophy. That's worked well for Autotask in the past, and it's certainly worked well for Datto," said Patrick Burns, vice president of product management for Autotask, in an interview.

Citing BDR and file sharing competitor eFolder and PSA/RMM vendor ConnectWise as examples, Datto Chief Revenue Officer Brooks Borcherding elaborated on the interoperability philosophy: "Going forward we would certainly expect and encourage Autotask partners to continue on with an eFolder or for ourselves to continue to have a strong relationship with ConnectWise and all the importance of the integration and ease of use that come along with that because all the MSPs have their different needs and they're fulfilled better by one or a mix of those different services."

Borcherding and Burns both said that even as Datto-Autotask cooperates with industry partners, the Vista-backed entity will continue to add new elements to its stack through both acquisition and in-house innovation, and to compete vigorously.

Outside of the Vista investment community, other industry figures confirmed the importance of the deal.

"The merger between Autotask and Datto signifies a major shift in the MSP vendor landscape," said Fred Voccola, CEO of Kaseya, in a statement.

"Investor-backed companies, like Datto/Autotask and Kaseya, will continue to acquire technologies that will help them bring more comprehensive solutions to benefit the greater MSP community. On the other hand, those without the necessary financial backing (like ConnectWise, TigerPaw, and others) will steadily lose ground as they are unable to invest in the products to help take their MSPs to the next level of growth," Voccola said. "Kaseya will continue to work harmoniously with Datto/Autotask to deepen the integration between our products and fully support our mutual customers who use VSA by Kaseya."

UPDATE: In a statement Thursday night, ConnectWise CEO Arnie Bellini offered congratulations but emphasized the internal tensions that Datto and Autotask will face in driving forward their own products while integrating with industry partners.

"Today's Datto/Autotask merger announcement is exactly what we expect to see in a rapidly expanding ecosystem," Bellini said. "ConnectWise's acquisition strategy is different. We are focused on building a completely integrated business platform for Technology Solution Providers of all kinds, including MSPs. We also believe in an open and connected ecosystem of choices. For example, ConnectWise currently offers our customers six different data protection solutions: Infrascale, Acronis, Storage Craft, Veeam, Centrestack and Storage Guardian. Those choices are important! It seems the new Datto/Autotask merger will offer a single data protection solution. That may not work out well for them. Regardless, we congratulate them and welcome the competition. It makes all of us better."

Posted by Scott Bekker on October 26, 2017 at 9:11 AM

Cisco Acquiring BroadSoft for $1.9 Billion

Cisco made a major move to expand its collaboration footprint within small and medium businesses worldwide on Monday with the announcement of its intent to acquire cloud-calling and contact center solution vendor BroadSoft for $1.9 billion.

"Following the close of the acquisition, Cisco and BroadSoft will provide a comprehensive SaaS portfolio of cloud based unified communications, collaboration, and contact center software solutions and services for customers of all sizes," said Rob Salvagno, vice president of corporate business development for Cisco, in a blog post about the deal.

BroadSoft is a publicly held company based in Gaithersburg, Md., and the agreement calls for Cisco to pay $55 per share in cash. The board of directors of each company has approved the deal, which is expected to close in the first quarter of 2018.

Rowan Trollope, senior vice president and general manager of Cisco's Applications Business Group, emphasized the way the deal expands the types of customers Cisco can now reach. "We believe that our combined offers, from Cisco's collaboration technology for enterprises to BroadSoft's suite for small and medium businesses delivered through Service Providers will give customers more choice and flexibility," Trollope said in a statement.

BroadSoft executives played up the potential for Cisco's collaboration tools and services to improve the performance and capabilities of BroadSoft's hosted solutions.

According to the companies, BroadSoft partners with 450 telecom carriers in 80 countries, reaching more than 19 million business subscribers.

Once the deal closes, the BroadSoft employees and operations will be part of the Cisco Unified Communications Technology Group run by Vice President and General Manager Tom Puorro. That group is part of Trollope's organization within San Jose, Calif.-based Cisco.

Posted by Scott Bekker on October 23, 2017 at 12:27 PM

KRACK Spells Big Trouble for Wireless Security

A long-standing pillar of modern computer security sustained major damage on Monday when researchers revealed a serious weakness in WPA2, the gold-standard protocol for protecting wireless networks.

The Belgian researcher who discovered the weakness, Mathy Vanhoef of KU Leuven, dubbed the new category of attacks "KRACK" for "key reinstallation attacks."

KRACK exploits a flaw in the way a client joins a WPA2-protected network, a procedure known as the four-way handshake. Critically, Vanhoef noted that the flaw exists even in properly configured wireless networks. "The weaknesses are in the Wi-Fi standard itself, and not in individual products or implementations. Therefore, any correct implementation of WPA2 is likely affected," Vanhoef wrote on a Web site created to explain the vulnerability.

By manipulating and replaying cryptographic handshake messages, KRACK tricks the victim system into re-installing keys that are already in use, Vanhoef wrote. While the attack does not reveal the wireless network password, it does allow some or all of the network traffic to be visible to an attacker, depending on the encryption protocol in use.
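The danger of reinstalling an in-use key is easiest to see with a stream cipher: reinstalling the key resets the transmit nonce, so the same keystream is regenerated, and XORing two ciphertexts produced under the same key-and-nonce pair cancels the keystream entirely. The following is a minimal sketch of that principle, using a toy HMAC-based keystream rather than WPA2's actual CCMP/GCMP ciphers; the key, packets and function names are invented for illustration:

```python
import hmac
import hashlib

def keystream(key: bytes, nonce: int, length: int) -> bytes:
    """Toy keystream: HMAC-SHA256 in counter mode keyed by (key, nonce).
    Stands in for the per-packet keystream a real WPA2 cipher derives."""
    out = b""
    counter = 0
    while len(out) < length:
        msg = nonce.to_bytes(8, "big") + counter.to_bytes(4, "big")
        out += hmac.new(key, msg, hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: int, plaintext: bytes) -> bytes:
    ks = keystream(key, nonce, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

key = b"session-key-from-4way-handshake!"  # hypothetical session key

# Victim sends a packet under nonce 1...
c1 = encrypt(key, 1, b"GET /login HTTP/1.1.............")
# ...then a KRACK-style replay of handshake message 3 resets the nonce,
# so the next packet reuses nonce 1 with the same key.
c2 = encrypt(key, 1, b"password=hunter2 (same nonce!)..")

# Attacker XORs the two ciphertexts: the keystream cancels out, leaving
# plaintext1 XOR plaintext2. Knowing or guessing one plaintext (e.g. a
# predictable HTTP request) reveals the other.
xor = bytes(a ^ b for a, b in zip(c1, c2))
known = b"GET /login HTTP/1.1............."
recovered = bytes(x ^ k for x, k in zip(xor, known))
print(recovered)
```

The attacker never learns the session key itself, which matches Vanhoef's observation that KRACK exposes traffic without revealing the network password.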

Like any wireless attack, KRACK requires the attacker to be within wireless signal range of the target, and only circumvents the encryption provided by WPA2, not the encryption of the underlying data using Transport Layer Security or other types of protection. (In a proof-of-concept video on his Web site, however, Vanhoef used the SSLStrip tool in combination with KRACK methods to simulate a man-in-the-middle attack to view an Android phone user's encrypted Internet traffic.)

"Attackers can use this novel attack technique to read information that was previously assumed to be safely encrypted. This can be abused to steal sensitive information such as credit card numbers, passwords, chat messages, emails, photos, and so on," Vanhoef said. "Depending on the network configuration, it is also possible to inject and manipulate data. For example, an attacker might be able to inject ransomware or other malware into websites."

Vanhoef began notifying affected vendors in mid-July and had originally planned to go public with details in August, but began working with industry organizations as the scope and scale of the problem became evident.

The coordinated public release of the details of the attack Monday morning caused a flurry of activity in the security community. CERT Vulnerability Note VU#228519, titled "[WPA2] handshake traffic can be manipulated to induce nonce and session key reuse," went out Monday with a list of 15 affected vendors, including Cisco, Intel, Juniper Networks, Red Hat, Toshiba and others. Vanhoef's own tests found Android, Linux, Apple, Windows, OpenBSD, MediaTek, Linksys and others vulnerable, although the problems are particularly acute for Android and Linux.

Because of the early notice, Microsoft has already issued fixes for the flaw, a Microsoft spokesperson said in an e-mail: "Microsoft released security updates on October 10th and customers who have Windows Update enabled and applied the security updates, are protected automatically. We updated to protect customers as soon as possible, but as a responsible industry partner, we withheld disclosure until other vendors could develop and release updates."

In a post on his personal blog, Alex Hudson, CTO at the Iron Group, ranked his impressions of risk by platform. "Attacks against Android Phones are very easy!" he wrote. "Best to turn off wifi on these devices until fixes are applied. Windows and Mac OS users are much safer. Updates for other OSes will come quite quickly, the big problem is embedded devices for whom updates are slow / never coming."

Hudson also pointed out that the main attack was against clients, not access points. "Updating your router may or may not be necessary: updating your client devices absolutely is! Keep your laptops patched, and particularly get your Android phone updated."

Meanwhile, vendors that are focused on other layers of security were quick to pounce on the incident as further evidence of the need for multifaceted security approaches.

"There's no stopping users from connecting to public Wi-Fi hotspots, so it's up to the enterprise to layer on protection mechanisms. This vulnerability speaks to the importance of ensuring that all connections from endpoints leverage strong encryption, such as the latest versions of Transport Layer Security (TLS). Intermediary proxies can ensure that regardless of what the application supports, all connections from end-user devices leverage strong encryption," said Rich Campagna, CEO of Bitglass, in a statement.

While WPA2 has not been impervious to attack, the flaw represents a significant chink in the armor of one of the more robust corners of computer security. Previous attacks on WPA2 mostly targeted surrounding technologies, such as vulnerabilities in Wi-Fi Protected Setup (WPS), or relied on password-guessing and precomputed tables of hashed passwords -- approaches that succeed only if the correct password is among the candidates tried.
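Those table-based attacks work because WPA2-Personal derives its Pairwise Master Key (PMK) deterministically from the passphrase and SSID via PBKDF2, so an attacker who has captured a handshake can test candidate passphrases offline. A simplified sketch of the derivation and guessing loop follows; the SSID, passphrase and wordlist are invented, and a real attack would verify candidates against the captured handshake's message integrity code rather than a known PMK:

```python
import hashlib

def derive_pmk(passphrase: str, ssid: str) -> bytes:
    # WPA2-Personal: PMK = PBKDF2-HMAC-SHA1(passphrase, SSID, 4096 iterations, 32 bytes)
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

ssid = "CoffeeShopWiFi"                                  # hypothetical network
target_pmk = derive_pmk("correct horse battery", ssid)   # stands in for handshake data

# Offline guessing: each candidate costs 4,096 HMAC iterations, but the
# attack succeeds whenever the real passphrase appears in the wordlist.
wordlist = ["password", "letmein", "correct horse battery"]
cracked = next((w for w in wordlist if derive_pmk(w, ssid) == target_pmk), None)
print(cracked)
```

The 4,096-iteration key stretching slows each guess, which is why these older attacks fail against strong, unguessable passphrases -- unlike KRACK, which works regardless of passphrase strength.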

It will be possible to issue patches in a backward-compatible manner, meaning that KRACK doesn't create a need for a WPA3, Vanhoef noted. Nonetheless, the combination of unpatched and unpatchable systems means attacks based on this new method are likely to be a factor in wireless network attacking and defending for a long time to come.

Posted by Scott Bekker on October 16, 2017 at 1:04 PM
