Bangkok: In an era where artificial intelligence (AI) is becoming an integral part of daily life, many people are turning to AI to work faster and more efficiently. However, Dr. Prinya Hom-anek has warned about a phenomenon called "Shadow AI," in which employees use AI for work without any official organizational policy in place. It is becoming the number one risk worrying executives worldwide.
According to Thai News Agency, Dr. Prinya points out that most employees do not intend to harm their organization; rather, the need to "upskill" or "retain their jobs" in a weak economy leads them to misuse AI inadvertently. Examples include customer data leaking when employees enter customers' names and surnames into AI tools for help writing emails; source code leaking when companies upload it to an AI for review; and violations of data privacy laws such as the PDPA when hospitals use AI to analyze X-ray images containing patient information. Another risk is loss of credibility when AI generates false answers, known as "hallucinations," which employees may pass on to customers without verification.
Dr. Prinya highlights the terrifying consequence of losing "data sovereignty," in which the AI uses the data provided to it for training and may later surface that information in answers to other users' queries. He shared a personal experience in which an AI generated a detailed 17-page report on a friend's personal information without permission, illustrating the potential loss of "AI sovereignty."
The risk has also escalated: it is no longer limited to employees manually typing prompts, as AI agents can now directly access local files and company database servers to analyze data. If such an agent is not properly secured, a company-wide data leak becomes possible.
Dr. Prinya proposes several approaches for organizations to address these risks. First, there must be AI governance and policies that clearly communicate what is permissible and what is not, accompanied by awareness training for employees. Second, organizations should implement an AI gateway or firewall that filters data before it is sent to an AI service, for example by blocking uploads of personal data. Third, organizations should elevate their AI management standards to align with ISO 42001 to build customer confidence. Finally, a human-in-the-loop approach is vital, in which humans perform final verification of AI-generated output before it is released.
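To make the second recommendation concrete, below is a minimal sketch in Python of what gateway-side filtering might look like, using simple regular expressions to detect and redact obvious personal data before a prompt is forwarded to an external AI service. The pattern set and the `gateway_send` policy are illustrative assumptions for this article, not the behavior of any specific product or of anything Dr. Prinya described.

```python
import re

# A minimal sketch of the "AI gateway" idea: scan each outbound prompt
# for obvious personal data before it ever reaches an external AI service.
# The patterns and the blocking policy below are illustrative assumptions.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b0\d{1,2}[- ]?\d{3}[- ]?\d{4}\b"),  # Thai-style numbers
    "national_id": re.compile(r"\b\d{13}\b"),                  # Thai citizen ID: 13 digits
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace detected personal data with placeholders; report what was found."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt, findings

def gateway_send(prompt: str, block_on_pii: bool = True) -> str:
    """Filter a prompt at the gateway before forwarding it to an AI service."""
    clean, findings = redact(prompt)
    if findings and block_on_pii:
        # Policy choice: block outright rather than silently redact.
        raise PermissionError(f"Prompt blocked: contains {', '.join(findings)}")
    return clean  # a real gateway would now forward this to the AI API

if __name__ == "__main__":
    draft = "Please write an apology email to somchai@example.com, ID 1234567890123."
    clean, found = redact(draft)
    print(found)  # ['email', 'national_id']
    print(clean)  # "...email to [EMAIL REDACTED], ID [NATIONAL_ID REDACTED]."
```

Real gateways typically combine such pattern matching with ML-based classifiers and per-category policies (redact, block, or require approval), which also dovetails with the human-in-the-loop step: flagged prompts can be routed to a reviewer rather than silently dropped.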
In summary, outright prohibiting employees from using AI is no longer realistic, so executives must urgently develop robust policies and safeguards to ensure their organizations can leverage AI without compromising privacy or cybersecurity in the age of Shadow AI.