News
Microsoft Introduces Copilot Cowork with AI Updates to Security Copilot and Foundry
- By Chris Paoli
- March 12, 2026
Microsoft this week introduced a set of AI-focused updates spanning Microsoft 365 Copilot, Security Copilot and the Microsoft Foundry development platform. At the center of the announcements is Copilot Cowork, a new capability designed to move Copilot beyond answering prompts and toward handling multi-step work tasks on behalf of users.
According to Microsoft, the feature lets employees describe a goal and then delegate the steps required to complete it. Instead of simply returning information in a chat response, Copilot can coordinate actions across Microsoft 365 tools and execute parts of a workflow while keeping the user in control of approvals and oversight. The shift moves Copilot from answering questions in chat to taking action on a user's behalf.
Copilot Cowork Moves Beyond Chat
Introduced Monday, Cowork is designed to take a user’s goal and turn it into a structured plan that runs in the background. Microsoft said it pulls context from across Microsoft 365 (Outlook, Teams, Excel, files and meetings) through what it calls Work IQ, then surfaces checkpoints for approval before making changes.
According to Microsoft, Cowork can handle tasks such as resolving calendar conflicts, preparing meeting briefs, compiling research memos with citations from Web and workplace sources and building product launch plans with competitive analysis and pitch decks.
Microsoft said Cowork operates within the existing Microsoft 365 security and governance framework, with identity, permissions and compliance policies enforced by default. Actions and outputs are auditable, the company said.
Microsoft also said Cowork can tap Claude from Anthropic, which Microsoft's Charles Lamanna described as a "multi-model advantage" that allows Copilot to route work to the model best suited for the task.
Cowork is currently available to a limited set of customers through a Research Preview. Microsoft said it expects broader rollout through its Frontier program in late March 2026. The company introduced Frontier earlier this year as an early-access channel for emerging Copilot features.
Security Copilot Gets a Credential-Finding Upgrade
Microsoft on Wednesday also announced the general availability of Agentic Secret Finder, or ASF, in Microsoft Security Copilot.
The feature is aimed at detecting exposed credentials hidden in unstructured data such as emails, chat logs, documents and screenshots. Microsoft said ASF uses a multi-step, multi-agent reasoning process to determine whether a suspicious string is a valid credential and what level of access that credential could provide.
"Unlike regex-based scanners, ASF uses reasoning to identify not just credentials, but the systems they unlock, helping security teams understand exposure and respond faster," Microsoft wrote in the announcement.
Microsoft said that approach is intended to improve triage by reducing the false positives often generated by traditional pattern-matching tools, while also identifying credentials that do not fit known formats.
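To make Microsoft's contrast concrete, the sketch below shows how a traditional pattern-matching scanner works and where it falls short. The patterns, sample text and function names here are illustrative assumptions, not Microsoft's implementation; a fixed-format credential is caught, while a free-text password with no recognizable shape slips through, which is the gap reasoning-based tools like ASF are aimed at.

```python
import re

# Illustrative only: a toy pattern-based scanner of the kind ASF is
# contrasted with. These two patterns are hypothetical examples.
PATTERNS = {
    # AWS access key IDs have a well-known fixed shape: "AKIA" plus
    # 16 uppercase alphanumeric characters.
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    # SSH private keys carry a distinctive PEM header.
    "ssh_private_key": re.compile(r"-----BEGIN (?:RSA |OPENSSH )?PRIVATE KEY-----"),
}


def regex_scan(text: str) -> list[tuple[str, str]]:
    """Return (credential_type, matched_string) pairs found by fixed patterns."""
    hits = []
    for name, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((name, match))
    return hits


sample = (
    "Deploy notes: use key AKIAIOSFODNN7EXAMPLE for the S3 bucket.\n"
    "Also, the staging DB password is hunter2, please rotate it soon."
)

print(regex_scan(sample))
# The scanner flags the AWS-style key, but the plain-text password
# ("hunter2") has no recognizable format, so a pattern-only tool
# misses it entirely.
```

A reasoning-based system, as Microsoft describes ASF, would instead read the surrounding context ("the staging DB password is ...") to judge whether a string is a live credential and what it unlocks.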
In benchmark testing using synthetic datasets across emails, chats, notes and documents, Microsoft said ASF reached 98.33 percent credential recall with zero false positives. Traditional regex-based tools, the company said, detected about 40 percent of the same credentials.
ASF currently supports more than 20 credential types, including Azure Storage Keys, AWS Access Keys, OAuth tokens, SSH private keys and database connection strings. Microsoft said it is also exploring GitHub integration to extend the capability into source code analysis.
Fireworks AI Arrives in Foundry
The third announcement, also made Wednesday, was a public preview that brings Fireworks AI to the Microsoft Foundry model catalog.
Microsoft said the integration gives developers access to Fireworks AI’s cloud-based inference engine inside their Foundry projects, offering low-latency inference for several open-source models.
"For customers needing the latest open-source models from emerging frontier labs, break-neck speed, or the ability to deploy their own post-trained custom models, Fireworks delivers best-in-class inference performance," Microsoft said in the announcement.
At launch, the preview supports both serverless pay-per-token deployments and provisioned throughput across four models: Minimax M2.5, OpenAI’s gpt-oss-120b, MoonshotAI’s Kimi-K2.5 and DeepSeek-v3.2.
Microsoft said customers can also import and deploy their own fine-tuned versions from those model families -- including Qwen3-14B and DeepSeek v3.1 -- through a new Custom Models workflow in Foundry.
The Fireworks integration is opt-in during preview and must be enabled through the Azure portal’s Preview features panel. Microsoft said customers also must be in one of six supported U.S. regions to use the pay-per-token option.