Why AI governance matters: what recent AI Agent events have revealed
2026.04.10
An open-source AI Agent launched in 2025 quickly went viral, drawing attention for its ability to run locally on user devices, such as PCs and Macs, and to autonomously execute tasks on users’ behalf. Positioned as a personal AI assistant, it gained rapid popularity.
However, within six months, users began to raise concerns and discuss the potential issues and risks associated with the AI Agent. In response, some local authorities issued warnings to alert users to these risks.
This article explores the issues that emerged and explains why robust AI governance is increasingly necessary as AI becomes more widely adopted across organizations and society.
OVERVIEW OF THE AI AGENT AND REPORTED CONCERNS
Unlike AI chatbots (e.g., ChatGPT) that primarily answer questions, an AI Agent is designed to take actions. It can follow instructions, interact with external tools and applications (e.g., messaging apps, email, file systems, and banking apps), and complete tasks autonomously without constant user input.
Agents can also maintain long-term memory, preserving context across sessions (e.g., user preferences and prior decisions). They perform tasks through “skills,” which can be added to expand capabilities, such as calling APIs, querying databases, and operating other applications, so the Agent can plan and execute actions to complete the assigned tasks.
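The skill-based design described above can be sketched as follows. This is a minimal illustration only; the class and skill names are hypothetical and do not reflect the actual Agent’s API.

```python
# Minimal sketch of a skill-based agent (hypothetical names,
# not the actual product's API).
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Agent:
    """An agent that completes tasks by dispatching to registered skills."""
    memory: List[str] = field(default_factory=list)   # long-term context across sessions
    skills: Dict[str, Callable[[str], str]] = field(default_factory=dict)

    def add_skill(self, name: str, fn: Callable[[str], str]) -> None:
        self.skills[name] = fn                        # adding a skill expands capabilities

    def run(self, skill: str, task: str) -> str:
        result = self.skills[skill](task)             # execute the task autonomously
        self.memory.append(f"{skill}: {task} -> {result}")  # preserve context for later sessions
        return result

agent = Agent()
agent.add_skill("summarize", lambda text: text[:20] + "...")
print(agent.run("summarize", "Quarterly report: revenue grew 12% year over year."))
# -> Quarterly report: re...
```

A real agent would route a task description through an LLM to pick the skill and arguments; the registry-plus-memory shape is the same.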
Since its launch, the AI Agent has been popular among individual users and organizations for handling daily and complex tasks to improve efficiency and productivity. For instance, users set it up to scan inboxes, categorize emails by urgency, and draft replies for review, and developers connect the AI Agent to external AI coding assistants (e.g., Claude Code) to help write software code. To enable an Agent to act, users must grant elevated permissions (e.g., administrative rights on devices) and provide sensitive information (e.g., email credentials).
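The email-triage use case above can be sketched roughly as follows. The helper names are hypothetical, and the keyword check stands in for what a real agent would do with an email API plus an LLM classifier.

```python
# Sketch of the email-triage use case (hypothetical helper names;
# a real agent would call an email API and an LLM classifier).
from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    subject: str

# Naive stand-in for an LLM urgency classifier.
URGENT_KEYWORDS = ("asap", "urgent", "deadline")

def categorize(email: Email) -> str:
    """Categorize an email by urgency."""
    subject = email.subject.lower()
    return "urgent" if any(k in subject for k in URGENT_KEYWORDS) else "routine"

def draft_reply(email: Email) -> str:
    """Draft a reply for human review; never auto-send."""
    return f"Hi {email.sender}, thanks for your message about '{email.subject}'."

inbox = [Email("alice", "URGENT: invoice deadline"), Email("bob", "Lunch next week?")]
for mail in inbox:
    print(categorize(mail), "->", draft_reply(mail))
```

Note that `draft_reply` only produces a draft: keeping the human approval step ("draft replies for review") is exactly the kind of oversight boundary discussed later in this article.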
This rapid adoption was soon accompanied by increasing digital trust challenges. Users widely reported issues and risks, including critical vulnerabilities (e.g., CVE-2026-25253), alleged unauthorized actions (e.g., deleting important emails), and exfiltration of sensitive data (e.g., account passwords and crypto-wallet information). As a result, organizations and individuals began questioning whether the Agent could be trusted.
Due to these risks, some authorities have issued formal warnings, including the Digital Policy Office (DPO) and the Privacy Commissioner for Personal Data (PCPD) in Hong Kong, as well as Chinese government agencies.
Source links:
- The PCPD Issues Alert over the Privacy Risks of OpenClaw and Agentic AI and Reminds Organisations and the Public to Use AI Safely (Date: 16 March 2026)
- Hong Kong Generative Artificial Intelligence Technical and Application Guideline
AI AGENTS ARE THE TREND, AND GOVERNANCE IS THE ENABLER
Despite the disruption, the AI Agent case offers insights:
- AI Agents represent the next phase in AI adoption: The widespread adoption of this AI Agent demonstrates strong demand among users and organizations for autonomous AI Agents. A shift from AI-assisted operations to AI-executed operations is likely and may happen quickly.
- Governance enables safe adoption of AI Agents: Organizations that casually introduce AI Agent systems without established governance structures and policies are prone to risks and drawbacks. The AI Agent incident is an early warning.
To introduce AI Agents safely, organizations need to consider the following:
| Aspect | Description |
|---|---|
| AI roles and responsibilities | Define the roles and responsibilities of the organization, operational units, and employees in adopting AI Agents. |
| Accountability | Clearly define who is accountable for actions executed by AI Agents; machines do not bear accountability. |
| Compliance | Ensure AI Agents comply with applicable AI laws and regulations (e.g., the EU AI Act) throughout their lifecycle. |
| Risk assessment | Identify the potential impacts and risks of AI Agents and implement relevant controls. |
| Responsible AI Agent development | Whether AI Agents are self-developed or externally acquired, organizations need to ensure they are built responsibly, with consideration of fairness, robustness (e.g., resistance to attacks), safety, security, data management, and privacy. |
| Trustworthiness | Verify and validate the large language models (LLMs) behind AI Agents to ensure they are trustworthy, for instance, that they provide reliable and non-discriminatory outputs. |
| Human oversight | Define which types of tasks AI Agents can execute autonomously and which require a human in the loop. |
| Transparency | Ensure decisions and actions made by AI Agents are logged and explainable. |
| Ongoing monitoring | Establish monitoring mechanisms to ensure AI Agents perform as expected in a changing environment. |
| Incident response | Put mechanisms in place to handle incidents involving AI Agents. |
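In practice, several of these aspects reinforce one another. A minimal sketch of how human oversight and transparency might be combined, assuming a hypothetical policy that gates risky actions behind explicit human approval and writes every decision to an audit log:

```python
# Sketch of human-in-the-loop gating with an audit log
# (hypothetical policy and action names, for illustration only).
import logging
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")

# Human oversight: which tasks run autonomously vs. need human review.
AUTONOMOUS = {"categorize_email", "draft_reply"}
REQUIRES_HUMAN = {"delete_email", "send_payment"}

def execute(action: str, payload: str, approve: Callable[[str], bool]) -> str:
    """Run an action, gating risky ones behind a human approval callback."""
    if action in REQUIRES_HUMAN and not approve(action):
        log.info("BLOCKED %s (%s): human approval denied", action, payload)
        return "blocked"
    log.info("EXECUTED %s (%s)", action, payload)  # transparency: audit trail
    return "executed"

# A human reviewer who rejects everything risky:
print(execute("draft_reply", "msg-42", approve=lambda a: False))   # executed
print(execute("delete_email", "msg-42", approve=lambda a: False))  # blocked
```

The design choice worth noting is that the approval callback and the log sit outside the agent’s own logic, so oversight and transparency hold even when the model misbehaves.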
FRAMEWORKS AND BEST PRACTICES FOR AI GOVERNANCE
Addressing the above aspects of AI is not easy, nor is it a one-off task. Organizations should build a governance framework and develop the necessary capabilities. Because there is no one-size-fits-all approach, many standards, guidelines, and best practices can help organizations design a suitable framework that reflects their context.
- EU AI Act: The world’s first comprehensive legal framework for AI, designed to ensure that AI systems used in the EU are safe, ethical, and respect fundamental rights. It bans AI systems that pose unacceptable risk (e.g., mass surveillance in public spaces by law enforcement) and imposes stricter regulatory requirements on high-risk AI systems than on limited-risk and minimal/no-risk AI systems.
- ISO/IEC 42001:2023: The first international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS). It provides a comprehensive approach for organizations to systematically address risks, ethical considerations, and regulatory compliance for AI technologies. Organizations need to determine their AI roles (e.g., AI provider, AI producer, or AI user). Unlike voluntary frameworks such as the NIST AI RMF, ISO/IEC 42001 is a certifiable standard: organizations can undergo third-party audits to demonstrate their commitment to responsible AI.
- NIST AI Risk Management Framework (RMF): A voluntary framework developed by the U.S. National Institute of Standards and Technology (NIST). This framework provides organizations with guidance to identify, assess, manage, and monitor AI risks. It emphasizes trustworthiness, transparency, and accountability throughout the AI lifecycle. It is structured around four core functions: Govern, Map, Measure, and Manage. The AI RMF Playbook provides step-by-step implementation guidance with actionable suggestions for each core function.
CONCLUSION
This case exemplifies both the promise and the peril of AI Agents. Their capabilities offer transformative potential, but they also introduce risks that demand vigilant governance. By embracing robust frameworks such as the EU AI Act, ISO/IEC 42001:2023, and the NIST AI RMF, organizations can foster responsible innovation and ensure that AI Agents contribute positively to their businesses. Looking forward, ongoing adaptation and continuous improvement of AI governance frameworks will be essential to address new challenges and maintain stakeholder trust in AI.