AI Training on Children's Photos Raises Privacy Concerns
Human Rights Watch reports widespread misuse of Australian children's images in AI datasets without consent.
- Human Rights Watch found 190 photos of Australian children in the LAION-5B dataset, which has been used to train AI image generators such as Stable Diffusion and Midjourney.
- Photos included identifiable information such as names, ages, and locations, posing significant privacy risks.
- Indigenous children are particularly vulnerable due to cultural restrictions on image reproduction.
- Current AI models cannot forget data they have been trained on, even if the images are later removed from the dataset.
- Australia's upcoming Privacy Act reforms may introduce new protections for children's online privacy.