
Board members need to take artificial intelligence (AI) governance seriously as AI becomes integral to business operations. Understanding its impact is key to fulfilling their duty of care, managing risk and guiding their organisations to use AI responsibly and ethically.
As I took on more governance roles, I came to see that boards have an increasingly critical part to play in overseeing AI, yet many are unprepared for this responsibility. This led me to focus on how boards can proactively manage AI governance rather than be left behind as AI becomes more prevalent.
Shadow AI, for example, is an alarming trend: the unsanctioned use of generative AI within an organisation. Employees may turn to AI tools for a range of tasks, from drafting copy to writing code, often outside approved channels. A recent Cyberhaven report, which analysed the activity of 3 million workers, found that 73.8% of workplace ChatGPT usage occurs through public accounts. Generative AI tools can help drive innovation and agility, but they also carry significant risks. The 2023 Samsung incident is a stark reminder: employees used generative AI to analyse sensitive data and inadvertently leaked source code, a clear example of the dangers of using public AI tools without robust safeguards.
Headlines like these should serve as a wake-up call for boards to proactively tackle AI governance. The challenge is deciding which uses of AI to permit and which to restrict, balancing support for employees with the security of the business.
But where should boards start?
By Karen Rolleston
Karen is a chartered member of the Institute of Directors and a professional director. If you would like to know more about AI board training, please email karen@kaw.net.nz