How to Address the Network Security Challenges Related to Agentic AI

Agentic artificial intelligence (AI) represents the next frontier of AI, promising to go beyond even the capabilities of generative AI (GenAI). Unlike most GenAI systems, which rely on human prompts or oversight, agentic AI is proactive: it can solve complex, multi-step problems without requiring user input. By leveraging a digital ecosystem of large language models (LLMs), machine learning (ML), and natural language processing (NLP), agentic AI performs tasks autonomously on behalf of a human or system, massively improving productivity and operations.

While agentic AI is still in its early stages, experts have highlighted some ground-breaking use cases. Consider a customer service environment for a bank where an AI agent does more than answer a user’s questions. Instead, the agent will actually complete transactions or tasks, such as moving funds, when prompted by the user. Another example could be in a financial analysis setting, where agentic AI systems assist human analysts by autonomously and quickly analyzing large amounts of data to generate audit-ready reports for data-informed decision-making.

The incredible possibilities of agentic AI are undeniable. However, like any new technology, it raises security, governance, and compliance concerns. The unique nature of these AI agents presents several security and governance challenges for organizations. Enterprises must address these challenges not only to reap the rewards of agentic AI but also to ensure network security and efficiency.

What Network Security Challenges Does Agentic AI Create for Organizations?

AI agents have four basic operations. The first is perception and data collection: hundreds, thousands, or even millions of agents gather data from multiple places, whether the cloud, on-premises, or the edge, and that data may originate anywhere rather than in one specific geographic location. The second is decision-making: once the agents have collected data, they use AI and ML models to make decisions. The third is action and execution: having decided, the agents act accordingly to carry out that decision. The last is learning, where the agents use the data gathered before and after their decision to adapt correspondingly.
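The four operations above can be sketched as a simple loop. This is a minimal, hypothetical illustration; the class and method names are assumptions for clarity, not any real agent framework's API:

```python
# Hypothetical sketch of the four basic agent operations:
# perceive -> decide -> act -> learn. Names are illustrative only.

class Agent:
    def __init__(self):
        self.experience = []  # data retained for the learning step

    def perceive(self, sources):
        """1. Perception and data collection: pull data from many sources
        (cloud, on-premises, edge), represented here as callables."""
        return [source() for source in sources]

    def decide(self, observations):
        """2. Decision-making: a stand-in for an AI/ML model inference.
        Here, a trivial policy that picks the largest observation."""
        return max(observations)

    def act(self, decision):
        """3. Action and execution: carry out the decision."""
        return f"executed:{decision}"

    def learn(self, observations, result):
        """4. Learning: record data from before and after the action
        so the agent can adapt on the next cycle."""
        self.experience.append((observations, result))

# One pass through the loop with mocked data sources.
agent = Agent()
obs = agent.perceive([lambda: 1, lambda: 5, lambda: 3])
result = agent.act(agent.decide(obs))
agent.learn(obs, result)
print(result)  # executed:5
```

In a real deployment, `perceive` would span network boundaries and `decide` would call out to hosted models, which is precisely where the security challenges discussed below arise.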

In this process, agentic AI requires access to enormous datasets to function effectively. Agents will typically integrate with data systems that handle or store sensitive information, such as financial records, healthcare databases, and other personally identifiable information (PII). Unfortunately, agentic AI complicates efforts to secure network infrastructure against vulnerabilities, particularly with cross-cloud connectivity. It also presents egress security challenges, making it difficult for businesses to guard against exfiltration as well as command-and-control breaches. Should an AI agent become compromised, sensitive data could easily be leaked or stolen. Likewise, agents could be hijacked by malicious actors and used to generate and distribute disinformation at scale. When breaches occur, there are not only financial penalties but also reputational consequences.

Agentic AI can frustrate key capabilities like observability and traceability because it is difficult to track which datasets AI agents are accessing, increasing the risk of data being exposed to or accessed by unauthorized users. Similarly, agentic AI’s dynamic learning and adaptation can impede traditional security audits, which rely on structured logs to track data flow. Agentic AI is also ephemeral, dynamic, and continually running, creating a 24/7 need to maintain optimum visibility and security. Scale is another challenge. The attack surface has grown exponentially, extending beyond the on-premises data center and the cloud to include the edge. In fact, depending on the organization, agentic AI can add thousands to millions of new endpoints at the edge. These agents operate across numerous locations, whether different clouds, on-premises environments, or the edge, making the network more vulnerable to attack.

A Comprehensive Approach to Addressing Agentic AI Security Challenges

Organizations can address the security challenges of agentic AI by applying security solutions and best practices at each of the four basic operational steps:

  1. Perception and Data Collection: Businesses need high-bandwidth, end-to-end encrypted network connectivity to enable their agents to collect the enormous amount of data required to function. Recall that this data could be sensitive or highly valuable, depending on the use case. Companies should deploy a high-speed encrypted connectivity solution between all of these data sources to protect sensitive data and PII.
  2. Decision-Making: Companies must ensure their AI agents have access to the correct models and AI and ML infrastructure to make the right decisions. By implementing a cloud firewall, enterprises can obtain the connectivity and security their AI agents need to access the correct models in an auditable fashion.
  3. Action and Execution: AI agents take action based on the decision. However, businesses must be able to identify which agent, out of the hundreds or thousands in operation, made that decision. They also need to know how their agents communicate with each other to avoid conflict or “robots fighting robots.” As such, organizations need observability and traceability of the actions taken by their AI agents. Observability is the ability to track, monitor, and understand the internal states and behavior of AI agents in real time. Traceability is the ability to track and document the data, decisions, and actions made by an AI agent.
  4. Learning and Adaptation: Companies spend millions, if not hundreds of millions or more, to tune their algorithms, which increases the value and precision of these agents. If a bad actor gets hold of that model and exfiltrates it, all those resources could be in their hands in minutes. Businesses can protect their investments through egress security features that guard against exfiltration and command-and-control breaches.
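One concrete building block for the observability and traceability described in step 3 is a tamper-evident audit trail of agent actions. The sketch below is a minimal, hypothetical illustration, not a standard schema or a real product feature; the field names (`agent_id`, `dataset`, `action`) are assumptions:

```python
# Hypothetical audit trail giving traceability over agent actions.
# Each entry is hash-chained to the previous one so that any later
# tampering with a recorded decision is detectable.

import hashlib
import json
from datetime import datetime, timezone

class AuditTrail:
    def __init__(self):
        self.entries = []

    def record(self, agent_id, dataset, action, decision):
        """Append one entry documenting which agent touched which
        dataset, what it did, and what it decided."""
        entry = {
            "agent_id": agent_id,
            "dataset": dataset,
            "action": action,
            "decision": decision,
            "ts": datetime.now(timezone.utc).isoformat(),
        }
        prev_hash = self.entries[-1]["hash"] if self.entries else ""
        entry["hash"] = hashlib.sha256(
            (prev_hash + json.dumps(entry, sort_keys=True)).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self):
        """Recompute the hash chain; False means an entry was altered."""
        prev_hash = ""
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            expected = hashlib.sha256(
                (prev_hash + json.dumps(body, sort_keys=True)).encode()
            ).hexdigest()
            if e["hash"] != expected:
                return False
            prev_hash = e["hash"]
        return True

trail = AuditTrail()
trail.record("agent-42", "customer_accounts", "transfer_funds", "approve")
print(trail.verify())  # True
```

In practice, such a log would feed a security audit pipeline, addressing the point above that agentic AI’s dynamic behavior otherwise impedes audits that rely on structured logs.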

Capitalizing on Agentic AI in a Secure and Responsible Manner

Agentic AI holds remarkable potential, empowering companies to reach new heights of productivity and efficiency. But, like any emerging technology in the AI space, organizations must take precautions to safeguard their networks and sensitive data. Security is especially crucial today given highly sophisticated, well-organized, nation-state-funded threat actors, such as Salt Typhoon and Silk Typhoon, which continue to conduct large-scale attacks.

Organizations should partner with cloud security experts to develop a robust, scalable, and future-ready security strategy capable of addressing the unique challenges of agentic AI. These partners can enable enterprises to track, manage, and secure their AI agents, and they give companies the visibility they need to meet compliance and governance standards.

The post How to Address the Network Security Challenges Related to Agentic AI appeared first on Unite.AI.