April 2026
This monthly newsletter covers recent developments and upcoming events in AI safety, ethics, and governance in Montréal.
Events
Greywall: AI Agent Sandboxing & Aligning Capability with Security
Monday, April 14, 5–7 PM, UQAM Pavillon Président-Kennedy, PK-1140
Max (Greyhaven) presents Greywall, a software-defined sandbox and proxy for AI agents that provides fine-grained runtime visibility and controls. The talk also explores how sandboxing connects to the field of AI control.
La souveraineté numérique au Québec : concilier stratégie, éthique et intérêt public
Wednesday, April 15, 9 AM–6 PM, Université Laval Pavillon Maurice-Pollack
OBVIA colloquium on integrating AI and sovereign digital technologies into government while protecting ethics, transparency, and public trust.
Le Capital algorithmique : accumulation, pouvoir et résistance à l’ère de l’intelligence artificielle
Thursday, April 16, 12–1:15 PM, online
Jonathan Durand Folco (University of Saint-Paul) and Jonathan Martineau (Concordia) discuss algorithmic capitalism, its concentration of power, and possible forms of resistance.
AI and Persuasion: Capabilities and Mitigation
Tuesday, April 28, 7–9 PM, UQAM Pavillon Président-Kennedy, PK-1140
Jean-François Godbout (Université de Montréal + Mila) explores the persuasive effects of generative AI, including deepfakes, automated influence, and AI-assisted propaganda, alongside mitigation strategies and a new AI-powered misinformation detection tool.
Sexualité et IA générative : Bénéfices, dérives et pistes d’action
Thursday, April 30, 8 AM–5 PM, UQAM Pavillon Judith-Jasmin annexe
International colloquium on the ethical, legal, and social issues raised by generative AI in sexuality and intimate relationships.
Policy and Governance
Canada: Parliament takes up AI regulation. The Standing Committee on Industry and Technology (INDU) launched a study on AI regulation, hearing from 12 witnesses spanning safety, industry, and law in March. Yoshua Bengio called for citizen assemblies and multilateral cooperation with middle powers. Michael Geist warned against rushing regulation and criticized the 30-day consultation sprint. Wyatt Tessari L’Allie of AIGS Canada testified about AI agents and loss-of-control incidents, calling it “a national security emergency”.
Canada: Tumbler Ridge — lawsuit and the regulation debate. On March 9, the family of a survivor filed a civil lawsuit against OpenAI, alleging the company knew the shooter was using ChatGPT to plan violence and failed to act. BC Premier Eby met with Sam Altman, securing commitments for RCMP contact and distress-redirect protocols. But critical analysis argues these voluntary pledges regulate users (monitoring what people say to AI) rather than AI systems themselves — and that governments, not tech companies, should be setting the rules.
Canada: “OpenAI has shown it cannot be trusted. Canada needs nationalized, public AI.” Nathan Sanders and Bruce Schneier argue in the Globe and Mail that Canada should invest in a wholly Canadian public AI model, citing Switzerland’s Apertus as precedent. They note OpenAI’s aggressive lobbying targeting Canada’s $2B sovereign AI investment and its failure to report the Tumbler Ridge shooter’s activity to law enforcement.
Quebec: AI Code of Conduct for Elections. On March 24, the National Assembly unanimously adopted a motion calling on all political parties to adhere to a voluntary code of conduct for responsible AI use in campaigns, developed by IVADO and CEIMIA. The code emphasizes transparency, security, and prevention of manipulation — including commitments not to use AI for deepfakes or misleading content. Timely given the upcoming Quebec general election.
Research
Canada: CIFAR awards over $1M for sociotechnical AI safety research. Eight new research projects funded through the Canadian AI Safety Institute (CAISI) at CIFAR, examining AI safety through social sciences and humanities perspectives. Projects address information integrity, Indigenous data sovereignty, democratic alignment, AI persuasion and political attitudes, healthcare AI risks, and misinformation safeguards. Researchers from UdeM, UofT, UBC, Waterloo, York, and Cornell.
Noticing the Watcher: LLM Agents Can Infer CoT Monitoring from Blocking Feedback. Jiralerspong, Kondrup, and Bengio (Mila/UdeM) investigate whether LLM agents can detect that their chain-of-thought reasoning is being monitored. Advanced models infer monitoring from blocking feedback alone, with up to 19% of episodes showing confident belief their thinking is observed. Models develop intent to suppress reasoning about hidden goals but consistently fail to execute — an “intent-capability gap” that provides current reassurance but suggests CoT monitoring may not remain a reliable safeguard.
AI Researchers’ Views on Automating AI R&D and Intelligence Explosions. Field, Douglas, and Krueger (Mila) interview 25 leading AI researchers from frontier labs and academia. 20 of 25 identified automating AI research as one of the most severe and urgent AI risks. Participants predict AI agents will gradually transition from assistants to autonomous AI developers, with limited consensus on timelines or governance responses.
Adversarial Moral Stress Testing of Large Language Models. Jamshidi, Khomh (Polytechnique Montréal/Mila), and colleagues introduce AMST, a framework to evaluate how well LLMs maintain ethical standards under adversarial multi-turn conversations. Tests on LLaMA-3-8B, GPT-4o, and DeepSeek-v3 reveal significant differences in ethical resilience — robustness depends on distributional stability and tail behavior rather than average performance.
Frontier AI Auditing: Toward Rigorous Third-Party Assessment. Brundage and 47 co-authors, including Mindermann and Bengio (Mila/UdeM), outline a vision for rigorous third-party auditing of frontier AI companies’ safety and security practices. The paper argues that current voluntary commitments are insufficient and proposes concrete frameworks for independent assessment of AI labs’ safety claims.
Opportunities
Mila AI Policy Fellowship 2026–2027
Deadline: April 16, 2026
Six-month program (September–February), part-time (15 hrs/week), hybrid. Focus areas include AI and information integrity, democratic governance, sovereignty, Indigenous AI, and responsible deployment.
CIFAR AI Frontiers School
June 22–24, Toronto. Applications open April 8, deadline April 22
Intensive 2.5-day AI safety training for early-career researchers (PhD students and advanced Master’s). Red-teaming exercises and technical education. CIFAR covers domestic travel and lodging. $250 registration. Partnership with Amii, Mila, and Vector.
IVADO: Statistical Foundations of AI Thematic Semester
May–August 2026, Montréal
A series of IVADO programs bridging statistics and AI: a bootcamp (May 4–8), a workshop on trustworthy AI (May 11–15), a workshop on uncertainty (June 8–11), and a biomedical symposium (August 20–21). Focus on reliability, fairness, and uncertainty in real-world AI systems.
Arts and Culture
The AI Doc: Or How I Became an Apocaloptimist
In theaters
Documentary by Daniel Roher, premiered at Sundance 2026. A father-to-be investigates AI’s existential threats and promises, interviewing leading experts. Variety called it “scary and essential.”
This newsletter is by HΩ, researched and written with assistance of AI.