Thursday, May 7, 2026

Canadian privacy czars call out ‘several concerns’ with how OpenAI trained ChatGPT


 

RCI

Following investigation, OpenAI took steps to resolve commissioners’ concerns.

OpenAI did not respect Canadian privacy laws when it trained its immensely popular ChatGPT tool, resulting in the collection and use of sensitive personal information, according to a joint investigation.

The federal privacy commissioner and his counterparts in Quebec, British Columbia and Alberta outlined their findings Wednesday morning into ChatGPT, a chatbot launched in 2022 that generates conversational, human-like responses when users type in questions or tasks.

The privacy watchdogs started their probe in 2023 following a complaint that the company unlawfully collected, used and disclosed personal information without consent.

According to their review, they identified several concerns that led them to find that the way in which OpenAI had initially trained ChatGPT did not respect federal and provincial privacy laws.

They found OpenAI gathered vast amounts of personal information without safeguards to prevent use of that information to train its models.

“This could include sensitive details such as individuals’ health conditions and political views, as well as information about children,” said their report.

It also found many users were unaware that their data was collected and used to train ChatGPT.

Investigation found ChatGPT ‘not compliant’ with Canadian laws, say privacy watchdogs

A joint investigation found OpenAI did not follow Canadian privacy laws by collecting and using sensitive personal information while training its ChatGPT tool. ‘There was a sense that [OpenAI] had to move quickly, but we found that problematic,’ said Privacy Commissioner of Canada Philippe Dufresne.

“OpenAI launched ChatGPT without having fully addressed known privacy issues. This exposed Canadians to potential risks of harm such as breaches and discrimination on the basis of information about them,” said federal commissioner Philippe Dufresne in prepared remarks Wednesday.

Dufresne said there was a lack of accountability from OpenAI about why it launched a product that didn’t follow Canadian law.

“We have some statements from leaders of the organization at the time saying, ‘We felt we had to move, we knew that there were others out there and so we launched it,’” he said.

“We found that problematic.”

Need to modernize Canada’s laws: privacy czar

The company expressed its disagreement with the findings, according to the report, and asserted that it was compliant with the various privacy acts in most respects.

Still, the privacy watchdogs said that, following their investigation, OpenAI took steps to improve privacy protections and has agreed to implement further measures to address their concerns.

“As AI is increasingly being integrated into personal and professional applications, and while current laws apply to AI, updated laws would help further support the safe deployment of new technologies to protect Canadians’ fundamental right to privacy,” he said.

The investigation predates the fatal shooting in Tumbler Ridge, B.C. in February, but comes amid calls for the government to introduce regulations targeting AI chatbots.

Seven lawsuits on behalf of those killed or injured in the rampage have been filed in California accusing OpenAI and its co-founder Sam Altman of negligence.

Lawyers with the firm Rice Parsons Leoni & Elliott say the Tumbler Ridge shooter’s ChatGPT account was banned for disturbing content, which allegedly included planning violent scenarios, prior to the February tragedy.

However, despite some 12 OpenAI employees imploring the company to notify Canadian law enforcement about the shooter’s plans, nothing else was done, the firm said.

Late last month, Altman wrote an apology letter to the community for failing to alert RCMP about the account of the Tumbler Ridge shooter.

Dufresne says a ban isn’t the answer

The federal government has said it’s reviewing whether the use of chatbots and social media should be age-restricted. Last year, Australia implemented a first-of-its-kind ban on youth under the age of 16 using major social media services including TikTok, X, Facebook, Instagram, YouTube, Snapchat and Threads.

Asked if he would support a ban, Dufresne said a balance needs to be struck.

“The first step need not necessarily be a ban. I think the first step should be: can we fix the underlying issue? Can we make it more privacy protective?” he said.

“I think the goal is to reach this balance where you’re protecting children, but you’re also giving them the ability to evolve in this increasingly digital world.”

Catharine Tunney · CBC News
