In a recent incident in Alaska, AI-generated content played a controversial role in policy-making. Alaska’s Department of Education and Early Development (DEED) used AI to draft a policy restricting cellphone use in schools, incorporating AI-generated citations that turned out to be fabricated. This situation highlights the risks of using unverified AI data in government decisions, especially without clear disclosure.
The AI-Generated Policy and Its Errors
Alaska Education Commissioner Deena Bishop used generative AI to draft a cellphone policy, relying on citations generated by AI. These references, intended to support the proposed policy, were neither accurate nor verified, yet they appeared in the draft presented to the Alaska State Board of Education. Although Bishop claimed she revised the errors before the meeting, AI “hallucinations”—or plausible but false information produced by AI—were still included in the final document voted on by the board.
The policy, now posted on DEED’s website, includes six citations. Four of these, supposedly from reputable journals, turned out to be fabricated, with links leading to unrelated content. This incident underscores the need for robust verification processes when using AI in government documents.
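One simple verification step is an automated sanity check that each citation's link actually points to the source it claims to come from. The sketch below illustrates the idea with hypothetical citation data (the titles, domains, and URLs are invented for illustration, not taken from the Alaska policy); it compares each link's domain against the domain of the claimed publisher and flags mismatches for human review:

```python
from urllib.parse import urlparse

def flag_suspect_citations(citations):
    """Return titles of citations whose link domain does not match
    the domain of the claimed source (a cue for manual review)."""
    suspect = []
    for c in citations:
        claimed = c["expected_domain"].lower()
        actual = urlparse(c["url"]).netloc.lower()
        # Accept exact matches and subdomains (e.g. www.jstor.org for jstor.org)
        if not (actual == claimed or actual.endswith("." + claimed)):
            suspect.append(c["title"])
    return suspect

# Hypothetical citation list for demonstration only
citations = [
    {"title": "Phones and classroom attention",
     "expected_domain": "jstor.org",
     "url": "https://www.jstor.org/stable/12345"},
    {"title": "Screen time and outcomes",
     "expected_domain": "apa.org",
     "url": "https://example.com/unrelated-page"},
]

print(flag_suspect_citations(citations))  # ['Screen time and outcomes']
```

A check like this cannot prove a citation is genuine, but it would have caught links that lead to unrelated content, which is exactly the failure mode reported here; a reviewer would still need to confirm that the cited work exists and says what the policy claims.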
AI Hallucinations: A Growing Challenge Across Fields
AI hallucinations are not unique to policy-making; similar issues have surfaced in law and academia. Lawyers have faced sanctions for presenting fictitious case citations generated by AI, and AI-assisted academic papers have included inaccurate data and sources. Because generative AI produces content by predicting plausible patterns in its training data rather than retrieving verified facts, it can fabricate convincing-looking citations unless a human checks them.
Implications for Education Policy
Using unverified AI-generated data in educational policy poses serious risks. Policies based on fictitious data may misallocate resources and overlook genuine solutions. For example, a cellphone restriction policy based on faulty evidence may divert focus from more effective, research-backed methods to improve student outcomes.
This incident also threatens public trust in both AI technology and the policy-making process, highlighting the importance of transparency and accuracy in AI’s role in sensitive decision-making.
Alaska’s Response and Lessons Learned
Alaska officials initially downplayed the issue, labeling the fake citations as “placeholders” for later revision. However, the placeholder document was used as the basis for a board vote, underscoring the importance of rigorous oversight when using AI. This incident serves as a reminder that human review remains essential in AI-assisted processes, especially those affecting public policy.
Sources: https://www.artificialintelligence-news.com/news/ai-hallucinations-gone-wrong-as-alaska-uses-fake-stats-in-policy/, https://www.cnet.com/tech/hallucinations-why-ai-makes-stuff-up-and-whats-being-done-about-it/