Artificial intelligence threatens individual privacy: commissioner

Artificial intelligence (AI) may provide great benefits for society but must be overseen rigorously to protect Canadians’ privacy, the federal privacy watchdog says.

The Office of the Privacy Commissioner of Canada said AI uses are based on individuals’ personal information and can have serious consequences for privacy as AI models have the capability to analyze, infer and predict aspects of behaviour and interests.

“Artificial intelligence has immense promise, but it must be implemented in ways that respect privacy, equality and other human rights,” said Commissioner Daniel Therrien. “A rights-based approach will support innovation and the responsible development of artificial intelligence.”

A problem with the growing use of AI, though, explained Ignacio Cofone, a professor in McGill University’s faculty of law, is that people cannot opt out of data collection.

He said new laws should not define AI or be technologically specific.

Rather, he said, “The statute should focus on the risks to human rights posed by technology.”

“Sharing personal information is much more embedded in our daily lives than it was in 2001, including tools and services that people cannot opt out of if they wish to continue functioning normally in our society, such as email and cell phones,” Cofone said in a report released Nov. 12. “Aggregated and inferred data about us has become more valuable and poses greater risks than almost any type of personal information shared. Aggregation and inferences are particularly strengthened by AI.”

Cofone said a rights-based approach as suggested by Therrien is “essential to protecting other human rights in the context of commercial activity.”

Further, he said, such protections are needed to comply with the European Union’s General Data Protection Regulation, long considered the global gold standard, in order for Canada to continue doing business with the continental trade bloc.

Moreover, Cofone said, new legislation should give the commissioner’s office the ability to issue binding orders and impose penalties for noncompliance.

“They would, for example, help avoid situations where organizations refuse to comply with Canadian law, as Facebook recently did,” Cofone said.

An updated federal Personal Information Protection and Electronic Documents Act (PIPEDA) should incorporate penalties of up to $10 million or 2% of an organization’s worldwide turnover, Cofone said.

Further, Cofone said, the issue of individual consent to the use of data remains problematic, noting most people do not read privacy notices or change default privacy settings.

“It is neither realistic nor reasonable to expect individuals to make informed choices in the modern information economy,” Cofone said. “Assessing the costs and benefits of their personal information is close to an impossible task and the power asymmetry between organizations and individuals is enormous.”

Therrien’s office said AI offers help addressing some of today’s most pressing issues.

Such uses include analyzing medical image patterns to help diagnose illness, improving energy efficiency by forecasting power grid demand, delivering individualized learning for students and managing traffic flows to reduce accidents.

“It also stands to increase efficiency, productivity and competitiveness – factors that are critical to economic recovery and long-term prosperity,” the office said.

AI systems can also use such insights to make automated decisions about people, including whether they get a job offer, qualify for a loan, pay a higher insurance premium or are suspected of unlawful behaviour.

Those computer-based decisions have an impact on people’s lives and raise concerns about how decisions are reached, as well as issues of fairness, accuracy, bias, and discrimination, the office said.

As such, Therrien is calling for amendments to PIPEDA to:

  • allow personal information to be used for new purposes toward responsible AI innovation and for societal benefits;
  • authorize such uses within a rights-based framework that would entrench privacy as a human right and a necessary element for the exercise of other fundamental rights;
  • create a right to meaningful explanation for automated decisions and a right to contest those decisions to ensure they are made fairly and accurately;
  • strengthen accountability by requiring a demonstration of privacy compliance upon request by the regulator;
  • empower the commissioner’s office to issue binding orders and proportional financial penalties to incentivize compliance with the law; and
  • require organizations to design AI systems from their conception in a way that protects privacy and human rights.
