Oct 07, 2025

Why This 30-Year Cybersecurity Veteran Predicts Major AI Breaches

When someone who spent 30 years defending against nation-state cyberattacks builds a digital version of themselves, then warns that major AI companies will face massive security breaches, business leaders should pay attention.

Ron Gula co-founded Gula Tech Adventures after taking Tenable Network Security public following 16 years as CEO. His background includes time at the National Security Agency, and he currently invests in 30+ cybersecurity and AI startups that serve everyone from first-time security buyers to Pentagon operations.

In this episode of Lead with AI, hosted by Dr. Tamara Nall, Ron Gula shares why he created a 3D avatar of himself that holds independent conversations, demonstrating how AI can scale expertise while raising questions about vulnerabilities that few executives are discussing. His insights come from watching cybersecurity evolve through multiple technology waves and now funding the companies building tomorrow's defenses. The message he delivers cuts through typical AI hype: artificial intelligence has already embedded itself invisibly into your business operations, synthetic data threatens model quality in ways most don't understand, and the security dependencies you're building today could become critical vulnerabilities tomorrow.

The Invisible AI Already Running Your Business  

AI integration has happened quietly across business tools without the dramatic announcements or conscious adoption decisions that executives might expect. Animation software now produces Pixar-quality output through AI assistance that users don't see. Cybersecurity products detect threats using machine learning models that operate behind the scenes. Development platforms suggest code completions and generate functions that save hours of programming time.

This invisible embedding matters because it changes how companies should think about hiring and resource allocation. Gula tells founders in his portfolio who plan to hire 10-15 developers for new features to instead hire 3-5 people who effectively leverage AI tools. This recommendation isn't speculation about future capabilities. Software development already relies heavily on libraries and frameworks built by other programmers. AI-generated code simply continues this pattern of building on existing work rather than creating everything from scratch.

The shift happens at the tool level rather than through strategic initiatives or transformation programs. Your development team already uses GitHub Copilot or similar assistants. Your marketing team leverages AI writing tools. Your design team experiments with generative image models. These adoptions often happen without formal approval processes because they feel like productivity enhancements rather than fundamental changes to how work gets done. But the cumulative effect represents a significant dependency on AI systems that most organizations haven't formally assessed for risk.

The Security Breach Prediction

Gula's prediction that major AI companies will face significant security breaches comes from watching cybersecurity challenges evolve across his time at the NSA, his years building Tenable, and his current investments in defense technologies. The comparison he makes to Google search history illustrates the stakes clearly. If attackers access your search queries, they learn what you're researching and thinking about. But AI interactions reveal more than searches. They expose business decisions, strategic thinking, competitive concerns, and problem-solving approaches in ways that create unprecedented intelligence value for adversaries.

The fundamental challenge is that cybersecurity remains unsolved despite decades of investment and innovation. The private cybersecurity industry essentially functions as the front line in ongoing cyber warfare between major powers. Unlike physical security threats with clear escalation paths, cyber attacks exist in gray zones where responsibility and response mechanisms remain unclear. A spam email doesn't warrant calling the Pentagon, but nation-state attacks on critical infrastructure do. The vast middle ground gets handled by private companies serving as the defensive layer between individual users and sophisticated adversaries.

This reality drives Gula's investment thesis toward companies helping organizations develop internal AI capabilities rather than relying entirely on external providers. The reasoning connects to more than just pricing control or data sovereignty. When major breaches occur at AI companies providing services to thousands of businesses, organizations with no internal capabilities face critical vulnerabilities with no immediate alternatives. Building at least some internal AI capacity creates options and reduces single points of failure in increasingly critical business systems.

Data Care as Essential Infrastructure  

Gula introduces data care as a framework parallel to healthcare in its complexity and necessity. Medical care ranges from basic first aid to specialized surgery depending on what needs treatment and the consequences of failure. Data management requires a similar assessment based on what you're protecting, who might want access, and what happens if protections fail.

The framework applies to decisions executives regularly face but often make without structured thinking. Should your company enable location tracking on employee devices? The answer depends on what you're trying to achieve, what risks this creates, and whether employees understand and accept these trade-offs. How should digital assets be managed when team members leave? The approach differs dramatically between a small business and a defense contractor. What rights do you grant when accepting software terms of service? Most people click through without reading, potentially agreeing to data sharing or liability transfers they wouldn't accept if they understood them clearly.

Gula suggests copying lengthy end-user license agreements into ChatGPT or similar tools to get plain-language explanations of what you're agreeing to before clicking accept. This small habit change can reveal surprising permissions or liability shifts that might warrant negotiation or choosing alternative providers. The broader principle is treating data decisions with the same deliberate assessment you'd apply to healthcare choices rather than handling them as administrative tasks that don't warrant executive attention.
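
For teams that want to make this habit repeatable, the same idea can be scripted. The sketch below is a minimal illustration rather than anything from the episode: it assumes the openai Python package and an API key, the model name and prompt wording are placeholders, and pasting the text into ChatGPT's interface works just as well.

```python
# Minimal sketch: ask an LLM for a plain-language summary of a license
# agreement before accepting it. Assumes the `openai` package is installed
# and OPENAI_API_KEY is set; the model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()

def summarize_eula(eula_text: str) -> str:
    prompt = (
        "Summarize this end-user license agreement in plain language. "
        "Call out data sharing, liability transfers, and anything that "
        "limits my rights or would warrant negotiation:\n\n" + eula_text
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("eula.txt") as f:  # the agreement you are about to accept
        print(summarize_eula(f.read()))
```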

The Synthetic Data Crisis Accelerating Model Degradation  

The comparison Gula makes between synthetic data and the Harrier jet's design flaw illustrates a technical problem with real business implications. The vertical-takeoff aircraft can re-ingest its own exhaust during hover operations, eventually causing engine failure. AI systems increasingly train on synthetic data because insufficient human-generated content exists to meet growing model development needs. This creates quality degradation risks when systems learn primarily from other systems rather than original human knowledge and creativity.
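
The feedback loop is easy to see in a toy setting. The sketch below is not from the episode; it is a common "model collapse" style demonstration: fit a simple model to data, generate synthetic samples from it, refit on those samples, and repeat. With finite samples, estimation error compounds from one generation to the next.

```python
# Toy "model collapse" illustration: each generation fits a Gaussian to
# samples drawn from the previous generation's fitted model instead of the
# original data. Estimation error compounds, and over many generations the
# fitted distribution tends to drift away from, and narrow relative to,
# the original human-generated data.
import numpy as np

rng = np.random.default_rng(0)
original = rng.normal(loc=0.0, scale=1.0, size=200)  # the "human" data

mu, sigma = original.mean(), original.std()
print(f"gen  0: mean={mu:+.3f} std={sigma:.3f}")
for generation in range(1, 21):
    synthetic = rng.normal(mu, sigma, size=200)      # train only on model output
    mu, sigma = synthetic.mean(), synthetic.std()    # refit on synthetic data
    if generation % 5 == 0:
        print(f"gen {generation:2d}: mean={mu:+.3f} std={sigma:.3f}")
```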

The concern isn't purely theoretical. As AI-generated content floods the internet and AI companies train new models on data that includes previous AI outputs, the potential for quality decline increases. Gula references how this pattern could lead to what he calls an "AI idiocracy" future, where declining quality becomes normalized because standards shift alongside the degradation. The worry extends beyond technical accuracy to human cognition and capability as people delegate increasingly complex thinking to machines rather than just automating repetitive tasks.

For business leaders, this matters because the AI tools your teams use today will influence how they approach problems tomorrow. Automation that handles repetitive work differs fundamentally from delegation that transfers actual thinking to machines. Organizations need to maintain human expertise and judgment even while leveraging AI assistance. The risk isn't that AI becomes too capable but that human capabilities atrophy through disuse while AI quality declines through synthetic data training, creating a double degradation that leaves organizations worse off than before AI adoption.

Key considerations for managing synthetic data risks:

  • Understand what data trained the AI systems your business depends on

  • Maintain human expertise in critical domains rather than complete AI delegation

  • Audit AI outputs for quality rather than assuming accuracy

  • Build internal capabilities to reduce dependency on external AI providers

  • Invest in companies like StarSeer that help assess AI model origins and biases

  • Recognize that AI quality may decline over time rather than continuously improve

Taking Action on AI Dependencies Before Breaches Occur  

The time to assess AI security risks and dependencies is before incidents occur rather than during crisis response. Gula's experience across three decades and 30+ portfolio companies suggests that organizations succeeding with AI focus on practical applications solving specific problems rather than pursuing general-purpose capabilities. The companies in his portfolio that deliver value address real customer pain points with measurable solutions rather than chasing AI trends or implementing technology for its own sake.

Start by mapping where AI has already embedded itself in your operations. Your team uses AI-powered tools daily whether leadership has formally approved this or not. Development environments, writing assistants, design tools, and security products all incorporate AI capabilities that create dependencies worth understanding. Which of these dependencies would become critical vulnerabilities if the provider faced a security breach? Where have you built expertise to maintain operations if external AI services become unavailable?
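
One lightweight way to start that mapping is a plain inventory that gets reviewed like any other risk register. The sketch below is purely illustrative; the tools, vendors, fields, and criticality labels are hypothetical placeholders, not a method described in the episode.

```python
# Hypothetical AI-dependency inventory: list the AI-powered tools in use,
# who provides them, how critical they are, and whether an internal
# fallback exists, then flag the single points of failure.
from dataclasses import dataclass

@dataclass
class AIDependency:
    tool: str              # illustrative names only
    provider: str
    business_function: str
    criticality: str       # "low" | "medium" | "high"
    internal_fallback: bool

inventory = [
    AIDependency("code assistant", "Vendor A", "software development", "high", False),
    AIDependency("writing assistant", "Vendor B", "marketing", "medium", True),
    AIDependency("threat detection", "Vendor C", "security operations", "high", False),
]

for dep in inventory:
    if dep.criticality == "high" and not dep.internal_fallback:
        print(f"single point of failure: {dep.tool} ({dep.provider}) "
              f"supporting {dep.business_function}")
```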

The security implications can't be delegated entirely to IT departments. When AI becomes embedded in business decision-making, understanding how these systems work, what data trains them, and where vulnerabilities exist becomes a core executive responsibility. Gula's prediction about major AI company breaches will test whether organizations developed sufficient independence or created dependencies that became critical vulnerabilities during security incidents. Assess these risks proactively, build at least some internal AI capabilities, and treat data care with the structured thinking you apply to other critical business infrastructure.

For more insights on how AI is transforming business and cybersecurity, subscribe to the Lead with AI podcast, where we explore innovations with the leaders shaping technology's future.

Follow or Subscribe to Lead with AI Podcast on your favorite platforms:

Website: LeadwithAIPodcast.com
Apple Podcasts: Lead-with-AI
Spotify: Lead with AI
Podbean: Lead-with-AI-Podcast
Email: Tamara@LeadwithAIPodcast.com

Follow Dr. Tamara Nall:

LinkedIn: @TamaraNall
Website: TamaraNall.com
Email: Tamara@LeadwithAIPodcast.com

Follow Ron Gula:

Website: Gula.Tech
LinkedIn: @Gula-Tech-Adventures
YouTube: @GulaTechAdventures
LinkedIn: @RonGula
Twitter/X: @RonGula
