Meta has announced that it will begin training its artificial intelligence (AI) systems using public content shared by adult users across Facebook and Instagram in the U.K. in the coming months.
“This means that our generative AI models will reflect British culture, history, and idiom, and that U.K. companies and institutions will be able to utilize the latest technology,” the social media behemoth said.
As part of the process, users aged 18 and above are expected to receive in-app notifications on both Facebook and Instagram starting this week, explaining how the training works and how they can access an objection form to opt out of having their data used to train the company’s generative AI models.
The company said it will honor users’ choices and won’t contact users who have already objected to their data being used for this purpose. It also noted that the training will not include private messages with friends and family, or information from the accounts of minors.
Furthermore, Meta said the move follows its engagement with the U.K. Information Commissioner’s Office (ICO), whose guidance it said supports its reliance on Legitimate Interests as the legal basis, which the company described as a valid mechanism for using first-party data to train its AI models.
“While our original approach was more transparent than our industry counterparts, we’ve incorporated feedback from the ICO to make our objection form even simpler, more prominent and easier to find,” Meta added.
It’s worth noting that Meta paused similar efforts in the European Union in June 2024 following a request from the Irish Data Protection Commission (DPC), calling the move a “step backwards for European innovation.”
Austrian privacy non-profit noyb has since accused the company of shifting the burden onto users – i.e., making the process opt-out rather than opt-in – and of failing to provide adequate information on how it plans to use the publicly accessible Facebook and Instagram data.
The development also comes after Meta suspended the use of its generative AI tools in Brazil when the country’s data protection authority issued a preliminary ban objecting to its new privacy policy.
The ICO, in response to Meta’s plans, said it intends to monitor the situation as the company notifies users and begins processing their data.
“We have been clear that any organization using its users’ information to train generative AI models needs to be transparent about how people’s data is being used,” Stephen Almond, executive director of regulatory risk at the ICO, said.
“Organizations should put effective safeguards in place before they start using personal data for model training, including providing a clear and simple route for users to object to the processing. The ICO has not provided regulatory approval for the processing and it is for Meta to ensure and demonstrate ongoing compliance.”