Artificial intelligence is changing the world we live in, but it also poses new challenges for data privacy.
In my latest column for The Drum, I explain why consumer skepticism could wind up slamming the brakes on the AI revolution.
The data used to train AI tools doesn't disappear: it lingers in the finished model, and it's often possible to extract that data through attacks on the trained system, such as membership inference. Safeguarding personal information will require a new approach to data privacy, with front-end privacy protections and clear processes for "un-training" AI tools in response to data deletion requests.
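To make the point concrete, here is a deliberately simplified toy sketch (not from the column, and not a real attack implementation): a "model" that memorizes its training records, a membership-inference check that tells seen records from unseen ones by the model's confidence, and an "un-training" step that honors a deletion request. Real models memorize in degree rather than verbatim, and real machine unlearning is an open research problem; every function name here is hypothetical.

```python
def train(records):
    """'Train' by memorizing (feature, label) pairs -- the extreme
    case of the partial memorization that overfit models exhibit."""
    return list(records)

def confidence(model, x):
    """Confidence rises as x approaches a memorized example."""
    nearest = min(abs(x - fx) for fx, _ in model)
    return 1.0 / (1.0 + nearest)

def was_in_training_set(model, x, threshold=0.99):
    """Toy membership-inference test: unusually high confidence
    suggests the record was seen during training."""
    return confidence(model, x) >= threshold

train_data = [(1.0, "a"), (2.0, "b"), (3.0, "a")]
model = train(train_data)

print(was_in_training_set(model, 2.0))   # seen record: True
print(was_in_training_set(model, 2.5))   # unseen record: False

# "Un-training" after a deletion request: trivial here (drop the
# record); for real neural networks this is the hard part.
model = [(fx, y) for fx, y in model if fx != 2.0]
print(was_in_training_set(model, 2.0))   # now False
```

The takeaway is that deleting a record from a database does nothing about the copy the model has effectively absorbed, which is why deletion requests need a story for the model itself.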
As algorithms are trained on ever-larger datasets, the potential for data leakage grows with them. The risk is real, and we need to act now to secure our data in an era of ubiquitous AI.