Quebec can become a leader in responsible artificial intelligence

Text by Lahcen Fatah, ethicist of technology and PhD student in science, technology and society at the Interuniversity Center for Research in Science and Technology. He is also a board member of Nord Ouvert and a lecturer in applied engineering ethics at Polytechnique Montréal.

READERS’ MAIL. How should Quebec regulate artificial intelligence (AI)? After consulting several hundred experts, the Quebec Innovation Council recently presented its Ready for AI report, which appears to lay the groundwork for an AI framework law in the province. The report caps a consultation process that brought together numerous experts, along with what was billed as a public forum on artificial intelligence.

First of all, we welcome this initiative, because a provincial law would allow Quebec to assert its leadership in responsible artificial intelligence. However, this future bill must not reproduce the shortcomings of its federal counterpart, or worse, create new ones.

Note that the recent consultation launched by Ottawa on a code of conduct for generative artificial intelligence systems was heavily criticized, particularly for its lack of transparency. The absence of consultation on the original version of the Artificial Intelligence and Data Act (LIAD) was seen as similarly worrying.

Quebec could avoid these mistakes at the provincial level by meeting some of the key goals of the consultations, such as public education and democratic engagement.

It is therefore worth drawing attention, starting now, to the pitfalls this framework may encounter.

Let’s first look at how the Quebec Innovation Council’s report took shape, starting with its public forum on AI, held November 2. Far from being a space for debate, it offered the already familiar narrative of AI in Quebec and Canada, delivered by a small number of researchers and practitioners – a panel of experts selected according to opaque criteria and sorely lacking in diversity.

The forum did offer avenues for reflection on the expected impacts of AI on society, including “the impacts of AI on work and the labor market in Quebec” and “Quebec’s role in the international framework of AI and as a leader in the responsible development and deployment of AI.” But in the absence of a space for dialogue in which stakeholders – citizens, civil society, experts, academic researchers, businesses, etc. – could exchange and share their ideas and opinions, it was more a sequence of speeches, allowing at best a few brief questions from the public.

Supporting innovation while protecting the public

In addition, Quebec must avoid succumbing to fears that overly strict oversight of AI will slow innovation. This rhetoric came up again and again during the discussions leading up to the Innovation Council report. Innovation must of course be allowed to take its course, but it must not be at the expense of public protection.

This fear of regulation recalls the recent conflict in Canada between the tech giants and the news media. All indications are that, had the web giants’ practices been better regulated from the start, some of the abuses – including the blocking of news on certain platforms – would not have occurred.

That said, these elements should serve as lessons for the coming debates on AI oversight in Quebec, which are fast approaching. In other words, since algorithms are already at our doorstep, we see this as an opportunity to design adequate regulation and, ultimately, to prevent rather than cure.

Focus on transparency and rights protection

In these early days of generative artificial intelligence, the Quebec government would do well to create a framework suited to current and future technological abuses, in particular by establishing principles for the governance and transparency of algorithmic data.

Just as eating harmful food can seriously damage our health, feeding algorithms with distorted data (or data whose use is unregulated) can be dangerous. Consider the rise of inequality, or any other abuse that could endanger citizens and hinder the proper functioning of democracy.

Just think of the discrimination driven by AI systems trained on pre-existing data that was problematic, to say the least. One of the most notable cases is Amazon, whose recruiting software relied on an algorithm that unfairly screened out suitably qualified women. Also worth noting is COMPAS, a tool used by US courts to assess a defendant’s likelihood of reoffending, which was systematically unfavorable to the African-American community.

Among the risks to democracy is the rise of disinformation spread by certain algorithms, which are difficult to control and known to be particularly dangerous, especially during election periods. Imagine the impact such systems could have on society if more people used them – and not always with the best intentions.

Let us be aware that while human prejudices stem from our moral values (which may be shaped by our faith, the environment in which we grow up, or other factors), the biases of artificial intelligence systems stem mainly from the data their algorithms process – the basis, so to speak, of these machines’ “moral values.” We should therefore be able to analyze the state of the data with which the algorithms work, that is, to facilitate the opening of their data when necessary.

As I have proposed for federal Bill C-27 and its LIAD, Quebec should also prohibit systems that could violate rights and freedoms, the principle of non-discrimination or the right to dignity, or undermine the values of equality and justice. This applies, for example, to biometric recognition systems, whose dangers were highlighted by the Privacy Commissioner of Canada’s report on the RCMP’s use of Clearview AI. Think also of systems for the social scoring of individuals initiated or mandated by public authorities, such as the social credit system piloted in China, whose creation could limit individual freedom and create new forms of social inequality.

Preventing the ravages of deepfakes

The systems behind deepfakes should be given special attention.

Last year, then Canadian Heritage Minister Pablo Rodriguez appointed a group of experts to consider a bill on online harms, looking in particular at deepfake photos and videos, disinformation and any other software capable of spreading falsehoods. Last November, members of the expert group urged the government to speed up the introduction of this bill on the harms of digital platforms, citing the growing risk of harm to Canadian children, violations of victims’ privacy, and online harassment on the platforms they use daily.

This is all the more important given that, recently, explicit deepfake images of students at a Winnipeg school were created from photos collected on social media.

Although the recent Bill C-63 finally addresses deepfakes of a sexual nature, particularly involving children, it is curious that the LIAD raised no objection to the misuse of artificial intelligence systems designed to create deepfakes, specifically those that could undermine democracy. At best, it contains risk provisions requiring the person responsible for a high-impact system to implement, in accordance with the regulations, measures to identify, assess and mitigate the risks of harm or biased output that could result from the use of the system.

In this context, if Quebec proves less tolerant than the federal government towards these systems – particularly by imposing sanctions specific to the creation and/or dissemination of malicious deepfakes, or of any other ethically questionable algorithmic program capable of harming human rights – it could establish itself as one of the main leaders in responsible artificial intelligence. All the more so if Quebec includes government institutions in the oversight of this technology, unlike the LIAD, which for now is unfortunately limited to the private sector.

In short, only increased collaboration among Quebec’s AI experts, in a multidisciplinary dynamic, will produce a solid regulatory proposal for responsible AI whose potential could be recognized on a global scale. Quebec already plays a pivotal role: it is where the influential Montreal Declaration for the Responsible Development of Artificial Intelligence was born.

This text was originally published in Policy Options magazine.
