News
Microsoft Touts AI Advances at Build 2018
- By Michael Desmond
- May 07, 2018
Microsoft put the spotlight on artificial intelligence (AI) and machine learning at Monday's kickoff of the Build 2018 conference in Seattle.
Presiding over a keynote that debuted new tools and capabilities ranging from the cloud to the Internet of Things (IoT), Microsoft CEO Satya Nadella and Executive Vice President Scott Guthrie described Microsoft's efforts to bring intelligence to the edge, improve tooling for data and AI, and enable what Microsoft calls "multi-sense, multi-device" experiences.
Microsoft's efforts at the edge centered on its Azure IoT Edge runtime, which enables Azure-based AI software to run locally on IoT devices in the field, even when no connection to the cloud is present. During the keynote, drone-maker DJI showed how a camera-equipped drone could be used to survey pipeline infrastructure and automatically flag anomalies for further scrutiny.
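In practice, an IoT Edge module looks much like ordinary device code that calls a locally deployed model. What follows is a minimal sketch using the azure-iot-device Python SDK; the input and output names, the alert threshold and the score_frame inference call are illustrative assumptions rather than part of the announced product:

```python
# Sketch of an Azure IoT Edge module that scores camera frames with a
# locally deployed model and forwards only flagged anomalies upstream.
import json
from azure.iot.device import IoTHubModuleClient, Message

def score_frame(frame_bytes):
    """Hypothetical stand-in for the vision model deployed to the
    device; returns an anomaly score in [0, 1]."""
    return 0.0

def main():
    # Connection details come from the Edge runtime's environment.
    client = IoTHubModuleClient.create_from_edge_environment()
    client.connect()
    try:
        while True:
            msg = client.receive_message_on_input("camera")  # blocks
            score = score_frame(msg.data)
            if score > 0.8:  # only anomalies ever leave the device
                alert = Message(json.dumps({"anomaly_score": score}))
                client.send_message_to_output(alert, "alerts")
    finally:
        client.disconnect()

if __name__ == "__main__":
    main()
```

The point of the pattern is bandwidth and latency: inference happens on the device, and only the interesting results travel to Azure.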
In addition, Microsoft announced that it was extending its family of Cognitive Services APIs to support Azure IoT Edge, starting with the release of the Custom Vision API for camera-based image recognition.
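Once a Custom Vision model is trained in the portal, invoking it amounts to one authenticated HTTP POST. A hedged sketch of a prediction call over REST follows; the endpoint, project ID and key are placeholders taken from the portal, and the URL version segment may differ from the one shown:

```python
# Hypothetical Custom Vision prediction call; all credentials and the
# exact endpoint shape are placeholders, not confirmed values.
import requests

ENDPOINT = "https://<region>.api.cognitive.microsoft.com"
PROJECT_ID = "<project-id>"
PREDICTION_KEY = "<prediction-key>"

def classify(image_path):
    url = f"{ENDPOINT}/customvision/v2.0/Prediction/{PROJECT_ID}/image"
    with open(image_path, "rb") as f:
        resp = requests.post(
            url,
            headers={
                "Prediction-Key": PREDICTION_KEY,
                "Content-Type": "application/octet-stream",
            },
            data=f.read(),
        )
    resp.raise_for_status()
    # Each prediction pairs a trained tag with a confidence score.
    for p in resp.json()["predictions"]:
        print(f'{p["tagName"]}: {p["probability"]:.2%}')
```

With the IoT Edge support, the same trained model can instead be exported and run in a container on the device itself.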
Data and Beyond
On the data front, Microsoft revealed the latest improvements to its massively scalable Cosmos DB database service, including multi-master write capability that enables near-real-time updates across geographic regions. It also debuted updates to its Cognitive Services APIs (Custom Vision and Unified Speech) that add functionality and broaden their use cases. For instance, the Unified Speech API will allow developers to deploy applications that provide speech recognition, text-to-speech, customized voice models and real-time translation.
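The Speech SDK that grew out of the Unified Speech announcement makes basic recognition a few lines of code. Here is a minimal sketch using the azure-cognitiveservices-speech Python package, which shipped after the keynote; the subscription key and region are placeholders:

```python
# One-shot speech-to-text with the Azure Speech SDK; key and region
# are placeholders for values from an Azure Speech resource.
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(subscription="<your-key>",
                                       region="<your-region>")
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)

print("Say something...")
result = recognizer.recognize_once()  # listens on the default microphone

if result.reason == speechsdk.ResultReason.RecognizedSpeech:
    print("Recognized:", result.text)
elif result.reason == speechsdk.ResultReason.NoMatch:
    print("No speech could be recognized.")
```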
Microsoft also highlighted a pair of AI-themed projects that promise to advance the state of the art. Project Kinect for Azure repackages the Kinect sensor platform, first released with Xbox and later used in HoloLens, and adapts it for AI scenarios that rely on spatial comprehension, object recognition and motion tracking. For instance, a Project Kinect for Azure application could analyze factory floor video and detect when a worker engages in unsafe practices or enters a dangerous area. Project Brainwave, meanwhile, is a hardware effort that employs field-programmable gate array (FPGA) silicon from Intel to accelerate AI calculations on edge devices.
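To make the factory-floor example concrete: once an object detector supplies bounding boxes, the core safety check is a simple geometric test. In the toy sketch below, the zone coordinates and detections are invented sample values; a real system would consume live depth and video streams:

```python
# Toy spatial check of the kind a Project Kinect for Azure app might
# perform: flag any detected worker whose bounding box overlaps a
# restricted floor area. Boxes are (x1, y1, x2, y2) in pixels; the
# zone and detections are made-up sample values.
DANGER_ZONE = (400, 300, 640, 480)

def overlaps(box, zone):
    """True if two axis-aligned rectangles intersect."""
    bx1, by1, bx2, by2 = box
    zx1, zy1, zx2, zy2 = zone
    return bx1 < zx2 and bx2 > zx1 and by1 < zy2 and by2 > zy1

detections = [  # (label, bounding box) from an upstream model
    ("worker", (350, 280, 430, 460)),
    ("worker", (50, 60, 120, 240)),
]

for label, box in detections:
    if label == "worker" and overlaps(box, DANGER_ZONE):
        print(f"ALERT: {label} entered restricted area at {box}")
```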
On the multi-sense/multi-device front, Microsoft debuted two new services -- Remote Assist and Layout -- that combine immersive interaction and elements of mixed reality to help people collaborate effectively with distant parties. Remote Assist is aimed at what Microsoft calls "firstline workers": it employs HoloLens headsets that allow workers in the field to both share and manipulate their view of the environment. For instance, a worker could look at a malfunctioning panel and use HoloLens controls to draw a circle around a suspect component, so a remote expert can provide informed input about the issue.
Layout also leverages HoloLens, allowing teams to view imported 3-D models and to create and edit room layouts that can be viewed at real-world scale. The result: Stakeholders can make decisions about room layouts and environments by interacting directly with a hologram of the space, freely moving elements around to develop a better sense of the design.
On the developer front, Microsoft announced a preview of a new feature for the Visual Studio integrated development environment (IDE) called Visual Studio IntelliCode. The feature extends the code-hinting functionality provided by IntelliSense, using AI to offer up code completion suggestions based on code practices gleaned from high-quality GitHub open source projects. IntelliCode also provides smart code scanning and review to help streamline bug fixes and improve code quality.
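The underlying idea is statistical: rank completions by how often they follow similar code in a large training corpus. The toy sketch below illustrates just that ranking notion; the corpus is invented, and IntelliCode's actual model, trained on open source GitHub projects, is far more sophisticated:

```python
# Deliberately simplified illustration of frequency-ranked completion.
# The (receiver_type, member) corpus is invented sample data.
from collections import Counter

corpus = [
    ("string", "Format"), ("string", "IsNullOrEmpty"),
    ("string", "Format"), ("string", "Join"),
    ("string", "IsNullOrEmpty"), ("string", "IsNullOrEmpty"),
]

def suggest(receiver_type, top_n=3):
    """Return the members most often called on this type in the corpus."""
    counts = Counter(m for t, m in corpus if t == receiver_type)
    return [member for member, _ in counts.most_common(top_n)]

print(suggest("string"))  # ['IsNullOrEmpty', 'Format', 'Join']
```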
About the Author
Michael Desmond is an editor and writer for 1105 Media's Enterprise Computing Group.