This question has recently taken on new urgency thanks to growing concern about the dangers that can arise when children interact with AI chatbots. For years Big Tech asked for birthdays (which anyone could make up) to avoid violating children’s privacy laws, but they weren’t required to moderate content accordingly. Two developments over the past week show how quickly things are changing in America and how this issue is becoming a new battleground, even among parents and child safety advocates.
In one corner is the Republican Party, which has supported laws passed in several states requiring sites with adult content to verify the age of users. Critics say this provides cover for banning anything deemed “harmful to minors”, which could include sex education. Other states, such as California, are drafting laws aimed at protecting children who talk to AI chatbots, requiring the companies behind them to identify which users are minors. Meanwhile, President Trump is trying to keep AI regulation a national issue rather than allowing states to make their own rules. Support for various bills in Congress is constantly in flux.
So what could happen? The debate is increasingly moving away from whether age verification is necessary and towards who will be responsible for it. This liability is a hot potato that no company wants to carry.
In a blog post last Tuesday, OpenAI revealed that it plans to roll out automated age prediction. In short, the company will apply a model that uses signals such as the time of day, among others, to predict whether the person chatting is under 18. For users it identifies as teens or children, ChatGPT will apply filters to “reduce exposure” to content such as graphic violence or sexual material. YouTube launched something similar last year.
If you support age verification but are concerned about privacy, this might feel like a win-win. But there is a catch: no such system is perfect, so it may classify a child as an adult or vice versa. People who are wrongly labeled as under 18 can verify their age by submitting a selfie or government ID to a company called Persona.
Selfie verification has problems of its own: it fails more often for people of color and for some people with disabilities. Sameer Hinduja, who co-directs the Cyberbullying Research Center, says another weak point is that the scheme would require companies to hold government IDs and biometric data for millions of people. “When they’re breached, we’ve simultaneously exposed the population at large,” he says.
Hinduja instead advocates device-level authentication, where parents specify a child’s age when first setting up the child’s phone. This information is then stored on the device and shared securely with apps and websites.
Apple CEO Tim Cook has more or less made this case in recent calls to US lawmakers. Cook was pushing back against lawmakers who want to require app stores to verify users’ ages, which would saddle Apple with a great deal of the responsibility.