
Strengthen agent security with real-time protection in Microsoft Copilot Studio


As AI agents become more embedded in critical business workflows, the need for robust security grows. Microsoft Copilot Studio already includes strong built-in protections against agent manipulation, but for organizations that need deeper oversight and proactive, responsive control, a new feature is now in public preview: Advanced real-time protection during agent runtime for enhanced security.

This capability enhances security for AI agents by letting organizations connect their own monitoring systems, whether Microsoft Defender, a security platform from another provider, or a custom-built tool. These integrations allow real-time evaluation and control of agent behavior during runtime.

When connected, the external systems become part of the agent’s decision-making process. They can block unsafe actions before the agent executes them. For example, if the external system determines that the agent is about to send an email that overshares information, it can block the email from being sent.

Admins can apply these protections across multiple agents and environments using the Power Platform Admin Center – no code required.


Copilot Studio agents: secure by default

AI agents face unique threats. One major risk is prompt injection from an external source, known as a cross-prompt injection attack (XPIA), where malicious prompts trick agents into leaking data or misusing tools. Copilot Studio includes default protections against both XPIA and user prompt injection attacks (UPIA). These defenses block suspicious prompts in real time, reducing the risk of data loss or unauthorized actions.

However, for organizations with advanced security needs, built-in protections may not be enough. That’s where advanced real-time protection comes in, adding an extra layer of defense.

Real-time protection in action

With advanced runtime protection, Copilot Studio calls the connected security system during the agent’s runtime. The system reviews the agent’s planned actions and decides whether to approve or block them. If it detects a threat, it stops the agent immediately and notifies the user. If the action is safe, the agent continues without delay or disruption.

This setup gives organizations stronger control over agent behavior while preserving a smooth user experience. It supports a “bring your own protection” model, allowing integration with:

  • Microsoft Defender (available today)
  • Third-party security providers
  • Custom-built monitoring tools

This flexibility helps organizations align security for AI agents with internal policies, industry standards, and regional compliance.


Instant alerts, actionable logs

In addition to blocking threats before they happen, Copilot Studio creates detailed audit logs for every interaction with the external system. Admins can use these logs to track attempted breaches, identify vulnerable agents, and improve future deployments.

These logs also help evaluate how well the external monitoring system performs. Admins can analyze trends, refine policies, and guide agent creators in building more secure agents. This feedback loop strengthens overall security for AI agents.
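As an illustration of the kind of review these logs support, here is a minimal sketch in Python that aggregates blocked actions per agent from a hypothetical log export. The column names (agentId, decision, reason) and the CSV file are assumptions for illustration only, not the actual audit-log schema.

```python
# Hypothetical sketch: spot agents that attract the most blocked actions.
# Column names and the export format are assumed, not the real log schema.
import pandas as pd

# Assume the audit logs were exported to CSV, one row per external-system call.
logs = pd.read_csv("agent_runtime_protection_logs.csv")

# Count blocked actions per agent to flag candidates for a security review.
blocked = logs[logs["decision"] == "block"]
top_agents = (
    blocked.groupby("agentId")
    .size()
    .sort_values(ascending=False)
    .head(10)
)
print(top_agents)

# Break down the most common block reasons to guide policy refinements.
print(blocked["reason"].value_counts().head(5))
```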

How advanced real-time protection works

When a user sends a prompt, the agent formulates a plan to respond. This plan includes the tools and actions it will use. Before the agent begins execution, Copilot Studio sends this plan to the external monitoring system via an API call. The data includes:

  • The user’s prompt and chat history
  • Tool details and input values
  • Metadata like agent ID, user ID, and tenant ID

The external system has one second to respond. If it approves the action, the agent proceeds. If it blocks the action, the agent stops and informs the user. If no response arrives in time, the agent assumes approval and continues.
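To make the flow concrete, here is a minimal sketch of what a custom-built monitoring endpoint could look like in Python. The endpoint path, the payload field names (plannedTools, userPrompt), and the decision values are illustrative assumptions, not the actual Copilot Studio API contract; refer to the Microsoft Learn documentation for the real request and response shapes.

```python
# Minimal sketch of a custom external monitoring endpoint. Payload fields,
# the route, and the response shape are assumptions for illustration only.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical policy: tools this organization never allows agents to invoke.
BLOCKED_TOOLS = {"send_email_external", "export_customer_data"}

@app.route("/agent-runtime/evaluate", methods=["POST"])  # hypothetical path
def evaluate_plan():
    payload = request.get_json(force=True)

    # Illustrative field names; the real payload carries the user's prompt,
    # chat history, tool details with input values, and metadata such as
    # agent, user, and tenant IDs.
    planned_tools = payload.get("plannedTools", [])

    for tool in planned_tools:
        if tool.get("name") in BLOCKED_TOOLS:
            # Block: the agent stops and informs the user.
            return jsonify({
                "decision": "block",
                "reason": f"Tool '{tool['name']}' is not permitted by policy.",
            })

    # Approve: the agent continues without delay.
    return jsonify({"decision": "approve"})

if __name__ == "__main__":
    app.run(port=8080)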

Setup and management

Admins can configure external monitoring in the Power Platform Admin Center. They can apply settings to one environment, multiple environments, or specific environment groups. Different environments can use different monitoring systems. If needed, admins can disable the integration with a single setting.

Data sharing and compliance

To enable split-second decisions, Copilot Studio shares specific data with the external system. This includes prompts, chat history, tool inputs, and metadata. This data sharing is not customizable. Organizations should only enable the feature if they’re comfortable with the data being shared.

External providers may handle data differently than Microsoft. Some may store or process data outside your region. It’s important to review your provider’s policies and ensure they meet your compliance standards.

Why this feature matters

Advanced security for AI agents is no longer optional. As agents are increasingly equipped with autonomous triggers and take on more complex and sensitive tasks, organizations need real-time oversight. External monitoring gives them the tools to enforce compliance, detect and block threats, and gain visibility – without compromising performance.

This new, groundbreaking capability in Copilot Studio empowers organizations to take control of their AI agent security strategy. It’s a critical step toward safer, more reliable AI deployments.

Next steps

The public preview is rolling out worldwide, with availability to all customers by Wednesday, September 10th. To learn how to get started, visit the Microsoft Learn documentation for advanced real-time protection during agent runtime.



Asaf Tzuk

Principal Program Manager
Asaf Tzuk is a Principal Program Manager at Microsoft, where he leads security and governance initiatives for Copilot Studio, specializing in agent lifecycle, data protection, and extensibility features. With deep expertise in program management, product roadmapping, and enterprise security, Asaf plays a key role in shaping strategic capabilities for AI agents across Microsoft platforms.