Knowledge base

March 29, 2023

Microsoft Security Copilot brings OpenAI’s GPT-4 to the cybersecurity battlefield

Microsoft Security Copilot is a GPT-4-powered AI service that uses threat intelligence to give companies an edge against cybercrime.

Microsoft’s AI wave in recent months has seen the company adopt intelligent models across its ecosystem. Using GPT-4 and ChatGPT from OpenAI, the company has brought AI to search via Bing Chat, to the cloud via Azure OpenAI Service, and to Office via Microsoft 365 Copilot. Now the company is turning to AI to fight cybercrime with today’s launch of Microsoft Security Copilot.

Announced at the inaugural Microsoft Secure event, Microsoft Security Copilot is powered by GPT-4, OpenAI’s generative multimodal AI model. In an accompanying blog post, the company says the new platform will help defenders meet cybercriminals head-on, giving organizations an edge rather than leaving them constantly catching up with threat actors.

“Today, the odds remain stacked against cybersecurity professionals. Too often, they fight an asymmetric battle against prolific, ruthless and sophisticated attackers. To protect their organizations, defenders must respond to threats that are often hidden among noise. This challenge is compounded by a global shortage of skilled security professionals, leading to an estimated 3.4 million job openings in the field.”

Security Copilot is meant to pick up the slack from the skills shortage while providing scalable, evolving end-to-end defense at machine speed. Microsoft describes the solution as a first of its kind, combining an OpenAI large language model (LLM) with Microsoft’s existing security-specific AI models.

Ask Security Copilot questions in natural language and receive actionable answers. Source: Microsoft

“When Security Copilot receives a prompt from a security professional, it uses the full power of the security-specific model to deploy skills and queries that maximize the value of the latest large language model capabilities. And this is unique to a security use case. Our cyber-trained model adds a learning system to create and tune new skills. Security Copilot can then help capture what other approaches may miss and augment an analyst’s work.”
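
To make that description a little more concrete, here is a minimal, purely illustrative sketch of how a prompt could be routed through security-specific “skills” before a large language model composes the answer. All names (Skill, threat_intel_lookup, incident_query, select_skills) are hypothetical and do not reflect Microsoft’s actual Security Copilot API.

```python
# Purely illustrative sketch; hypothetical names, not Microsoft's actual Security Copilot API.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Skill:
    """A security-specific capability, e.g. a threat-intel lookup or an incident query."""
    name: str
    run: Callable[[str], str]


def threat_intel_lookup(prompt: str) -> str:
    # A real system would query a threat-intelligence feed here.
    return "context: known indicators of compromise related to the prompt"


def incident_query(prompt: str) -> str:
    # A real system would query SIEM/incident data here.
    return "context: recent alerts and incidents matching the prompt"


SKILLS: Dict[str, Skill] = {
    "threat_intel": Skill("threat_intel", threat_intel_lookup),
    "incidents": Skill("incidents", incident_query),
}


def select_skills(prompt: str) -> List[Skill]:
    """Naive stand-in for the security-specific model that decides which skills apply."""
    lowered = prompt.lower()
    selected = []
    if "malware" in lowered or "ioc" in lowered:
        selected.append(SKILLS["threat_intel"])
    if "incident" in lowered or "alert" in lowered:
        selected.append(SKILLS["incidents"])
    return selected or list(SKILLS.values())


def answer(prompt: str, llm: Callable[[str], str]) -> str:
    """Gather skill output as context, then let the LLM compose the final answer."""
    context = "\n".join(skill.run(prompt) for skill in select_skills(prompt))
    return llm(f"{context}\n\nQuestion: {prompt}")


if __name__ == "__main__":
    fake_llm = lambda text: f"[LLM answer based on]\n{text}"  # stand-in for a GPT-4 call
    print(answer("Summarize the latest malware incident on host SRV-01", fake_llm))
```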

Microsoft admits the system is not perfect, simply because AI is still a maturing technology that makes mistakes. To address this, Microsoft Security Copilot uses a closed-loop learning system: it continually learns from user interactions and welcomes feedback to help it improve.
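
As a rough illustration of what closed-loop learning can look like in general, the sketch below records per-skill feedback from analysts and uses it to re-rank which skills are tried first on later prompts. This is a generic pattern with hypothetical names, not Microsoft’s implementation.

```python
# Generic closed-loop feedback sketch with hypothetical names; not Microsoft's implementation.
from collections import defaultdict
from typing import Dict, List


class FeedbackLoop:
    """Track per-skill analyst feedback and prefer skills that users rate as helpful."""

    def __init__(self) -> None:
        self.scores: Dict[str, int] = defaultdict(int)

    def record(self, skill_name: str, helpful: bool) -> None:
        # Analysts flag whether a response was useful; the score nudges future ranking.
        self.scores[skill_name] += 1 if helpful else -1

    def rank(self, skill_names: List[str]) -> List[str]:
        # Try the historically most helpful skills first on the next prompt.
        return sorted(skill_names, key=lambda name: self.scores[name], reverse=True)


loop = FeedbackLoop()
loop.record("threat_intel", helpful=True)
loop.record("incidents", helpful=False)
print(loop.rank(["incidents", "threat_intel"]))  # ['threat_intel', 'incidents']
```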

Improve security

It is worth noting that Microsoft is not positioning Security Copilot as a complete replacement for current cybersecurity measures. I’m sure that’s the ultimate goal, but the AI is far from ready for that kind of responsibility. Instead, Microsoft says the service augments the work of current security professionals with machine speed and scale.

The company points out that the system still relies on “human ingenuity” and is built on three guiding principles:

  • “Simplify the complex: in security, minutes count. With Security Copilot, defenders can respond to security incidents within minutes instead of hours or days. Security Copilot delivers critical step-by-step guidance and context through a natural language-based investigative experience that accelerates incident investigation and response. The ability to quickly summarize any process or event and tailor reporting to a desired audience gives defenders the freedom to focus on the most pressing work.
  • Catch what others miss: attackers hide behind noise and weak signals. Defenders can now detect malicious behavior and threat signals that might otherwise go unnoticed. Security Copilot uncovers prioritized threats in real time and anticipates a threat actor’s next move with continuous reasoning based on Microsoft’s global threat intelligence. Security Copilot also comes with skills that represent security analysts’ expertise in areas such as threat detection, incident response and vulnerability management.
  • Address the talent gap: the capacity of a security team is always limited by the size of the team and the natural limits of human attention. Security Copilot enhances the skills of your defenders with its ability to answer security-related questions – from the basic to the complex. Security Copilot continuously learns from user interactions, adapts to business preferences and advises defenders on the best course of action to achieve safer outcomes. It also supports learning for new team members because it exposes them to new skills and approaches as they develop. This enables security teams to do more with less and to work with the capabilities of a larger, more mature organization.”

Opportunities

It’s not just OpenAI’s GPT-4 that defines Microsoft Security Copilot. The service also directly leverages Microsoft’s security-specific AI and end-to-end services to deliver the following feature set:

  • “Ongoing access to the most advanced OpenAI models to support the most demanding security tasks and applications;
  • A security-specific model that benefits from continuous reinforcement learning and user feedback to meet the unique needs of security professionals;
  • Visibility and evergreen threat intelligence powered by your organization’s security products and the 65 trillion threat signals Microsoft sees every day, to ensure security teams are working with the latest knowledge of attackers, their tactics, techniques and procedures;
  • Integration with Microsoft’s end-to-end security portfolio for a highly efficient experience that builds on security signals;
  • A growing list of unique skills and prompts that enhance the expertise of security teams and raise the bar for what is possible, even with limited resources.”

Safety

The goal of Microsoft Security Copilot is to usher in a new era of AI-driven security. But what about the other way around: what is Microsoft doing to make sure the AI itself is ethical? This is a hot topic, as the company laid off its AI ethics team this month and has not done much to discuss the potential dangers of the AI it releases into the wild.

First, OpenAI has been the anchor of Microsoft’s AI projects in recent months. OpenAI is an organization that takes AI ethics seriously and has a robust commitment to safety. Microsoft itself remains rather vague about Security Copilot, saying only that it stands by its commitment to “impactful and responsible AI practices by innovating responsibly, empowering others and promoting positive impact.”

The company does point out that users of the service can control how and when their data is used. Moreover, Microsoft says that user data is not used to train the AI model.

Microsoft’s AI wave

In recent months, artificial intelligence (AI) has experienced a mainstream explosion. This has been driven primarily by Microsoft and OpenAI, long-term partners through Microsoft’s billion-dollar investments. AI, we are often told, will transform our lives and make them fundamentally better. For the most part, Microsoft’s embrace of AI in recent months has pushed the needle in that direction, but only slightly.

The use of AI in cybersecurity is an area where the technology could be rapidly transformative. That makes Microsoft Security Copilot an intriguing use of GPT-4.

Source: WinBuzzer
