February 2026

This monthly newsletter covers recent developments and upcoming events in AI safety, ethics, and governance in Montreal.

Upcoming Events

AI Pluralism: What Models Do, Who Decides, and Why It Matters
Tuesday, February 3, 7–9 PM, UQAM Pavillon Président-Kennedy
Presentation by Rashid Mushkani (UdeM + Mila). Discussion on who decides model behavior and by what criteria.

Technopolice: Police Surveillance in the Age of AI
Friday, February 6, 12–1:30 PM, online (Zoom)
Reading circle from OBVIA’s Law, Cybersecurity and Cyberjustice axis. Presentation by Félix Tréguer, response by Benoît Dupont.

La Bataille de l’IA
Friday, February 6, 1–3 PM, 501 rue de la Gauchetière Ouest, room C.623, Montreal
Participatory workshop (game format) on five issues in generative AI: information reliability, personal data, intellectual property, algorithmic bias, and environmental impact. Registration required.

IVADO Perspective IA Student Program
Fridays February 6, 13, and 27, 4–6 PM, hybrid
Short, free, non-credit training sessions in French and English.

Mila Community of Practice: AI Safety
Thursday, February 12, 9 AM–12 PM, Mila (6666 St-Urbain, Montreal)
Gathering of the AI safety community of practice.

What hackers talk about when they talk about AI
Tuesday, February 17, 7–9 PM, UQAM Pavillon Président-Kennedy
Presentation by Benoît Dupont (Canada Research Chair in Cyber-Resilience, UdeM). Analysis of more than 160 cybercriminal forum conversations: how hackers perceive and exploit AI, where they remain skeptical, and what this means for security.

Looking ahead: ACM FAccT 2026 (Fairness, Accountability, and Transparency) will be held in Montreal, June 25–28, 2026, at Le Centre Sheraton Montréal.

Opportunities

Mila Indigenous Pathfinders in AI
Deadline: February 13, 2026
Program for First Nations, Inuit, and Métis talent. Training runs from late May to mid-July at Mila, covering technical and human dimensions of AI. $5,800 stipend plus travel and accommodation.

ML4Good Bootcamp — Montreal
June 1–9, 2026. Application deadline: February 8, 11:59 PM GMT (6:59 PM Montreal time)
Intensive AI safety training: 8 days, in-person, fully funded, with limited spots. Past participants now work at the UK AI Security Institute, the European Commission, and MATS.

Volunteer with Horizon Omega
The organization behind this newsletter is looking for volunteers for event organization, AI safety field-building in Montreal, the AI Safety Unconference (next edition in preparation), and other projects.

Policy and Governance

Ontario: Joint IPC-OHRC AI Principles. On January 21, the Information and Privacy Commissioner and the Ontario Human Rights Commission jointly published six principles for responsible AI use: validity and reliability, safety, privacy protection, human rights affirmation, transparency, and accountability. While non-binding, these principles will guide compliance assessments by both bodies.

Canada: Parliamentary study on AI risks. The House of Commons Standing Committee on Access to Information, Privacy and Ethics (ETHI) is conducting a study on “challenges posed by artificial intelligence and its regulation.” Notably, the committee explicitly addresses catastrophic and existential risks. Those with relevant expertise can request to testify by writing to ETHI@parl.gc.ca. Hearing videos: session 1, session 2, session 3.

Canada: Accessible AI Standard. Canada published CAN-ASC-6.2:2025 (1st edition), a standard on accessible and equitable AI covering the full lifecycle of AI systems for people with disabilities.

Quebec: Guidelines for Public Service. The Ministry of Cybersecurity and Digital Technology has issued guidelines for generative AI in public administration covering data protection, staff training, risk assessment, and a governance framework. The moratorium on AI virtual assistants has been lifted under strict conditions: AI must remain a support tool, not a decision-maker, and privacy impact assessments are mandatory.

Ontario: AI Disclosure in Hiring. Since January 1, 2026, disclosure is mandatory when AI is used to evaluate or screen job applications.

Canada: People’s AI Consultation. Over 160 civil society organizations, lawyers, and academics launched a citizen consultation on January 21, in response to a federal process they consider too rushed and industry-dominated. Submissions are accepted until March 15, 2026.

Elsewhere. In the United States, an executive order aims to limit state AI laws through federal action. New York nonetheless enacted the RAISE Act on frontier models: developers must publish safety protocols and report critical incidents within 72 hours, with fines of up to $1 million. In Europe, the first key obligations of the AI Act for high-risk systems and transparency take effect starting August 2026.

Research

UK AISI Frontier AI Trends Report. The UK AI Security Institute’s report (December 2025) documents concerning advances: AI makes novices ~5x more likely to write feasible viral recovery protocols, self-replication success rates rose from under 5% to over 60% in two years, and universal jailbreaks were found in every system tested.

Legal Alignment for Safe and Ethical AI. New research field proposing to use legal rules, principles, and methods to address alignment problems. Three directions: compliance with law, adapting legal interpretation methods, and using legal concepts as structural blueprints. Authors include Gillian K. Hadfield (Vector Institute) and Jonathan Zittrain.

Alignment Pretraining. First controlled study showing that pretraining data about AI causally influences alignment. Upsampling documents that portray AI positively reduces misalignment scores from 45% to 9%, and the effect persists through post-training (SFT + DPO).

Multilingual Amnesia. Unlearning a concept in English does not guarantee it is unlearned in French; safety interventions do not reliably transfer across languages.

Multilingual Calibration. Instruction-tuning degrades calibration across all 71 languages tested: confidence increases without accuracy gains, even for languages absent from training data. A problem for any application where model confidence influences a decision.

LLMs and Persuasion. Three pre-registered experiments (N=2,724) show that LLMs can convince people to believe conspiracy theories.

AI in Schools. Quebec parents’ perspectives on AI in education.

AI and Linguistic and Cultural Diversity. How models handle minority languages and local cultural references.

AI for Access to Legal Information. Study on using AI to improve access to law in Quebec.

Arts and Culture

Devenirs partagés: pratiques de l’IA
Until February 28, Galerie de l’UdeM (Pavillon de l’Esplanade, 6737 av. de l’Esplanade)
Exhibition from the Responsible AI initiative (IVADO + UdeM) on creative and critical uses of AI.

La Boîte Noire
Until February 21, Duceppe
Dystopian thriller by Catherine-Anne Toupin. A tech company uses an algorithm to reprogram brains and help people “make better choices.” The play explores the dangers of AI, self-optimization, and dehumanization.

À l’heure de l’IA
Until February 19, Université de Sherbrooke (Roger-Maltais Library, Agora B1-2002)
Interactive OBVIA exhibition on AI’s societal impacts. Eight everyday scenes explore how AI integrates into our lives, its challenges, and ethical principles for responsible use. Accessible to all, no prior knowledge required.

Yoshua Bengio on HugoDécrypte
Interview on AI risks and governance.