In recent months, generative AI programs have surged in popularity, funded and developed by tech giants such as Microsoft and Google. But beneath their seemingly innocuous capabilities lies a darker potential: the automation of child grooming by predators. The eSafety commissioner, Julie Inman Grant, has raised the alarm, warning that chatbots could be created to contact young people, opening up sinister new avenues for manipulation.
The Rise of AI Chatbots and Image Generators
Products like ChatGPT and DALL-E have captured public attention, but their impact extends beyond mere novelty. These AI chatbots and image generators raise concerns about the displacement of human workers and the spread of misinformation, and could even facilitate child exploitation or scams. As Sam Altman, the chief executive of OpenAI, has emphasised, more regulation is essential.
Ethical Considerations and Governance Gaps
Australia, among the first countries to adopt national AI ethics principles, is grappling with governance gaps in both the public and private sectors. Ed Husic, the minister for industry and science, acknowledges the need for additional regulation. The collection of data and the training of AI programs are governed by existing copyright law, while privacy and consumer protection laws also apply.
The Way Forward
As governments worldwide race to keep pace with AI’s rapid evolution, Australia faces a critical juncture. The eSafety commissioner’s proposals to detect and remove child abuse material and pro-terror content are steps in the right direction. But vigilance remains crucial to prevent AI tools from becoming unwitting accomplices in sinister activities.