Synopsis
Anthropic Claude Sonnet arriving in Microsoft 365 Copilot is more than another model announcement. It reflects Microsoft’s move toward a more flexible, multi-model Copilot experience, where different models can shape how users draft, summarise, and analyse content in day-to-day work.
For organisations already using Microsoft 365 Copilot, this matters because model choice can influence response style, depth, and consistency. That has clear implications not just for users, but also for admins responsible for support, governance, and adoption.
This update highlights a broader shift in Microsoft 365 Copilot, where model choice is becoming a more visible part of how organisations balance quality, consistency, and trust in everyday AI-assisted work.
Message ID: MC1247880
[Introduction]
We’re expanding model choice in Microsoft 365 Copilot with the addition of Anthropic Claude Sonnet for users with a Microsoft 365 Copilot license. Claude Sonnet is available in Copilot Chat in Frontier, alongside the latest OpenAI models, giving users the flexibility to choose the model best suited for their tasks. This expansion reflects our commitment to delivering the latest AI innovation for work—while maintaining the security, compliance, and privacy standards customers expect from Microsoft.
[When this will happen:]
- Frontier availability: Claude Sonnet is available now in Frontier.
- General availability (web/desktop/macOS/mobile): Rolling out gradually; expected completion late March 2026.
[How this affects your organization:]
Who is affected:
- Users with a Microsoft 365 Copilot license
Who is not affected:
- Tenants in EU/EFTA and the UK
- Government clouds (GCC, GCC High, DoD)
- Sovereign clouds
For these tenants, Anthropic models are not available and no model option will be shown.
What will happen:
- Microsoft 365 Copilot licensed users will be able to select Claude Sonnet as an option in the model selector within Copilot Chat.
- In regions where Anthropic is configured as a subprocessor and is set to Off by default, admins can choose to opt in to make Anthropic models available for their organization.
- Enterprise Data Protection for Microsoft 365 Copilot continues to apply, with no changes to existing protections.
- Anthropic operates as a Microsoft subprocessor under the Microsoft Data Protection Addendum and Product Terms.
- Anthropic models are currently excluded from EU Data Boundary and in‑country processing commitments.
[What you can do to prepare:]
- No action is required. However, it’s recommended that you review internal guidance and communicate availability to users.
- If your tenant is in a region where Anthropic is Off by default, review the subprocessor setting and opt in if you want Anthropic models available to users. Learn more: Anthropic as a subprocessor for Microsoft Online Services | Microsoft Learn
Source: Microsoft



Really interesting update. It feels like the bigger story here is not just that another model has been added, but that Microsoft is quietly turning Copilot into a much broader multi-model experience. That is the part that feels easy to underestimate now, but could end up being one of the more meaningful shifts over time.
I also think you’re right to call out the consistency angle. Once people start noticing that responses can feel a bit different depending on what is sitting behind Copilot, that is going to raise some interesting questions around trust, support, and user expectations. Definitely one of those updates that looks quite small on the surface, but feels like it points to something much bigger.