
Microsoft has announced Security Copilot, a GPT-4-powered assistant that brings generative AI capabilities to its in-house security suite, along with a host of new visualization and analysis functions.
The most impressive features come in the form of integration with Microsoft’s existing security products, including Defender, Sentinel, Entra, Purview, Priva, and Intune. Copilot can interpret data from all of those security products and provide automated, in-depth explanations, as well as suggested remedies.
The system will also be able to take action against some kinds of threats, such as deleting email messages that contain malicious content identified by a previous analysis.
Security Copilot accepts natural language inputs, so security professionals could ask for a summary of a particular vulnerability, feed in files, URLs, or code snippets for analysis, or ask for incident and alert information from other security tools. All prompts and responses are saved, so there's a full audit trail for investigators.
Microsoft said it plans to expand Security Copilot's connectivity beyond the company's own products, but it offered no further details in either the livestream or the official blog post detailing the product.
Microsoft acknowledged that, as a generative AI product, Security Copilot won't give correct answers 100% of the time, and that it will need additional training and feedback from early users to reach its full potential.
Security Copilot is the latest example of Microsoft's big push into AI. Microsoft 365 Copilot looks set to change how Office documents are created, and Microsoft-owned GitHub is supercharging its own Copilot into more of a conversational assistant that helps developers write code.
Microsoft doesn’t appear to be slowing down with its Copilot ambitions, so we’re likely to see this AI assistant technology appear throughout the company’s software and services.