WatchGuard Blog

DBIR 2024: AI Fuels More Cyber Threats, Though Its Impact Remains Limited

The use of artificial intelligence in cyberattacks is now a growing concern. From the automated creation of malware to more sophisticated phishing campaigns, AI’s ability to enhance the scale and effectiveness of threats has become a cause for alarm. 

However, Verizon’s 2024 Data Breach Investigations Report (DBIR) states that only 2% of analyzed data breaches directly involved the use of this technology. This suggests that while AI tools are expanding the attack surface, their impact has yet to translate into a significant number of successful attacks. 

Still, organizations are now facing a major new risk: shadow AI. According to the report, unsupervised use of AI tools by employees, working from either personal or corporate accounts, outside the control of IT departments and without proper authentication, can lead to data exposure and security breaches. That’s why it’s crucial for companies to implement clear protocols and actively monitor internal AI usage to minimize their attack surface and maintain control over digital assets. 

How can we ensure safe AI usage without limiting its potential? 

The study found that 14% of employees accessed AI tools from corporate devices. Among these users, 72% used personal (non-corporate) email addresses to log in, while 17% used corporate emails without proper built-in authentication systems. 

This behavior puts an organization’s tech infrastructure at risk. If an employee uses an AI-driven text generator through a personal account to draft sensitive documents outside corporate channels, it opens the door to vulnerabilities. Cybercriminals could exploit these security gaps by intercepting data through insecure apps or network weaknesses, gaining unauthorized access to information while remaining undetected. 

A lack of visibility into unauthorized AI usage raises the risk level, so organizations should take the following steps: 

  • Define Internal Protocols: Establish clear rules for AI use: which tools are approved, how to respond to incidents, and what security standards must be applied. This helps reduce operational risks for the organization and enables employees to work more confidently, knowing the tools they are using are secure.
  • Assess AI-Specific Risks: Conduct targeted risk assessments in environments that integrate AI. This allows organizations to identify weak points before they are exploited and to strengthen security from the design stage using practices such as data breach simulations or manipulation tests.
  • Protect the Data: Set security and access policies that govern how data is entered and used, safeguarding sensitive information through role-based access controls, encryption, and clear limits on the types of data that can be shared with AI tools.
  • Control the Use of AI Tools at the Endpoint: WatchGuard Endpoint Security offers a layered approach to managing AI tool usage in corporate environments. The Zero-Trust Application Service blocks unknown or unclassified applications—such as emerging AI tools—until they are explicitly authorized, preventing the execution of untrusted software. The Application Control module allows administrators to block or monitor applications by name or MDR category, ensuring that only approved tools can run. To restrict access to online AI services, the Web Access Control feature enables blocking of specific URLs or entire site categories related to generative AI tools. These policies can be applied to individual devices or device groups, giving organizations the flexibility to tailor protection levels to each department or risk profile, reducing exposure without impacting productivity.
  • Invest in Training: The more informed users are about secure AI usage, the better equipped they will be to detect anomalies, act wisely, and protect organizational assets. This not only strengthens the company’s cyber resilience but also empowers employees and grows digital maturity. 
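The policy ideas above (an allowlist of approved tools, category-based blocking of unvetted AI services, and role-based limits on what data may be shared) can be sketched in code. This is a minimal, hypothetical illustration: the tool names, categories, and role rules are invented for the example and do not reflect WatchGuard's actual products or any vendor's API.

```python
# Hypothetical AI-usage policy checker. All names, categories, and rules
# below are invented for illustration only.

APPROVED_AI_TOOLS = {"internal-llm.example.com"}        # allowlist of vetted services
BLOCKED_CATEGORIES = {"generative-ai", "unclassified"}  # categories denied by default

# Per-role limits on the data classifications a user may submit to AI tools
ROLE_DATA_LIMITS = {
    "engineer": {"public", "internal"},
    "analyst": {"public"},
}

def categorize(host: str) -> str:
    """Toy URL categorizer; a real deployment would query a web-filtering service."""
    return "approved" if host in APPROVED_AI_TOOLS else "generative-ai"

def may_use_ai_tool(role: str, host: str, data_class: str) -> bool:
    """Allow only approved hosts, and only for data the role is cleared to share."""
    if categorize(host) in BLOCKED_CATEGORIES:
        return False
    return data_class in ROLE_DATA_LIMITS.get(role, set())

print(may_use_ai_tool("engineer", "internal-llm.example.com", "internal"))  # True
print(may_use_ai_tool("analyst", "chat.example-ai.com", "public"))          # False
```

The point of the sketch is the default-deny posture: anything not explicitly approved falls into a blocked category, mirroring the zero-trust approach described above.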

Artificial intelligence doesn’t just unlock new opportunities; it also introduces new vulnerabilities within organizations. Risks aren’t always external; many stem from unregulated or misunderstood internal usage, bringing security challenges that aren’t immediately visible. To mitigate these risks and protect enterprise assets, it’s essential to establish clear usage guidelines and adopt a layered security strategy that blends training, internal policies, and technological solutions, enabling an implementation that is intelligent, proactive, and responsible. 
