Google’s Bard and other AI chatbots remain under privacy watch in the EU
As we reported earlier, Google’s AI chatbot Bard has finally launched in the European Union. We understand it did so after making some changes to boost transparency and user controls — but the bloc’s privacy regulators remain watchful and big decisions on how to enforce the bloc’s data protection law on generative AI remain to be taken.
Google’s lead data protection regulator in the region, the Irish Data Protection Commission (DPC), told us it will be continuing to engage with the tech giant on Bard post-launch. The DPC also said Google has agreed to carry out a review and report back to the watchdog in three months’ time (around mid-October). So the coming months will see more regulatory attention on the AI chatbot — if not (yet) a formal investigation.
At the same time the European Data Protection Board (EDPB) has a taskforce looking into AI chatbots’ compliance with the pan-EU General Data Protection Regulation (GDPR). The taskforce was initially focused on OpenAI’s ChatGPT but we understand Bard matters will be incorporated into the work, which aims to coordinate actions that may be taken by different data protection authorities (DPAs) to try to harmonize enforcement.
“Google have made a number of changes in advance of [Bard’s] launch, in particular increased transparency and changes to controls for users. We will be continuing our engagement with Google in relation to Bard post-launch and Google have agreed to carrying out a review and providing a report to the DPC after three months of Bard becoming operational in the EU,” said DPC deputy commissioner Graham Doyle.
“In addition, the European Data Protection Board set up a task force earlier this year, of which we are a member, which will look at a wide variety of issues in this space,” he added.
The EU launch of Google’s ChatGPT rival was delayed last month after the Irish regulator urgently sought information Google had failed to provide. This included sight of a data protection impact assessment (DPIA) — a critical compliance document for identifying potential risks to fundamental rights and assessing mitigation measures. So failing to stump up a DPIA is one very big regulatory red flag.
Doyle confirmed to TechCrunch the DPC has now seen a DPIA for Bard.
He said the DPIA will be one of the documents forming part of the three-month review, along with other “relevant” documentation, adding: “DPIAs are living documents and are subject to change.”
In an official blog post Google did not immediately offer any detail on specific steps taken to shrink its regulatory risk in the EU — but claimed it has “proactively engaged with experts, policymakers and privacy regulators on this expansion”.
We reached out to the tech giant with questions about the transparency and user control tweaks made ahead of launching Bard in the EU and a spokeswoman highlighted a number of areas it has paid attention to which she suggested would ensure it’s rolling out the tech responsibly — including limiting access to Bard to users aged 18+ who have a Google Account.
One big change she flagged is a new Bard Privacy Hub, which she suggested makes it easy for users to review explanations of the available privacy controls.
Per information on this Hub, Google’s claimed legal bases for Bard include performance of a contract and legitimate interests. Although it appears to be leaning most heavily on the latter basis for the bulk of associated processing. (It also notes that as the product develops it may ask for consent to process data for specific purposes.)
Also per the Hub, the only clearly labelled data deletion option Google seems to be offering users is the ability to delete their own Bard usage activity — there’s no obvious way for users to ask Google to delete personal data used to train the chatbot.
Although it does offer a web form which lets people report a problem or a legal issue — where it specifies users can ask for a correction to false information generated about them or object to processing of their data (the latter being a requirement if you’re relying on legitimate interests for the processing under EU law).
Another web form Google offers lets users request the removal of content under its own policies or applicable laws (which, most obviously, implies copyright violations — but Google is also suggesting users avail themselves of this form if they want to object to its processing of their data or request a correction, so this, seemingly, is as close as you get to a ‘delete my data from your AI model’ option).
Other tweaks Google’s spokeswoman pointed to relate to user controls over its retention of their Bard activity data — or indeed the ability not to have their activity logged.
“Users can also choose how long Bard stores their data with their Google Account — by default, Google stores their Bard activity in their Google Account for up to 18 months but users can change this to three or 36 months if preferred. They can also switch this off completely and easily delete their Bard activity at g.co/bard/myactivity,” the spokeswoman said.
At first glance, Google’s approach in the area of transparency and user control with Bard looks pretty similar to changes OpenAI made to ChatGPT following regulatory scrutiny by the Italian DPA.
The Garante grabbed eyeballs earlier this year by ordering OpenAI to suspend service locally — simultaneously flagging a laundry list of data protection concerns.
ChatGPT was able to resume service in Italy after a few weeks by acting on the initial DPA to-do list. This included adding privacy disclosures about the data processing used to develop and train ChatGPT; providing users with the ability to opt out of data processing for training its AIs; and offering a way for Europeans to ask for their data to be deleted, including if it was unable to rectify errors generated about people by the chatbot.
OpenAI was also required to add an age-gate in the near term and work on adding more robust age assurance technology to shrink child safety concerns.
Additionally, Italy ordered OpenAI to remove references to performance of a contract for the legal basis claimed for the processing — saying it could only rely on either consent or legitimate interests. (In the event, when ChatGPT resumed service in Italy OpenAI appeared to be relying on LI as the legal basis.) And, on that front, we understand legal basis is one of the issues the EDPB taskforce is looking at.
As well as forcing OpenAI to make a series of immediate changes in response to its concerns, the Italian DPA opened its own investigation of ChatGPT. A spokesman for the Garante confirmed to us today that that investigation remains ongoing.
Other EU DPAs have also said they’re investigating ChatGPT — which is open to regulatory inquiry from across the bloc since, unlike Google, OpenAI does not have a main establishment in any Member State.
That means there’s potentially greater regulatory risk and uncertainty for OpenAI’s chatbot vs Google’s (which, as we say, isn’t under formal investigation by the DPC as yet) — certainly it’s a more complex compliance picture as the company has to deal with inbound from multiple regulators, rather than just a lead DPA.
The EDPB taskforce may help shrink some of the regulatory uncertainty in this area if EU DPAs can agree on common enforcement positions on AI chatbots.
That said, some authorities are already setting out their own strategic stall on generative AI technologies. France’s CNIL, for example, published an AI action plan earlier this year in which it stipulated it would be paying special attention to protecting publicly available data on the web against scraping — a practice that OpenAI and Google both rely on for developing large language models like ChatGPT and Bard.
So it’s unlikely the taskforce will lead to complete consensus between DPAs on how to tackle chatbots and some differences of approach seem inevitable.