Newsletter

Events

Upcoming meetups, talks, and workshops

Tue, Apr 14 · 5:00 PM
Greywall: AI Agent Sandboxing & Aligning Capability with Security
UQAM - Pavillon Président-Kennedy, Montréal
Max (Greyhaven) presents Greywall, a software-defined sandbox and proxy for AI agents that provides fine-grained runtime visibility and controls. The talk also explores how sandboxing connects to the field of AI control.
Wed, Apr 15 · 9:00 AM
Digital Sovereignty in Quebec: Reconciling Strategy, Ethics, and the Public Interest
Université Laval - Pavillon Maurice-Pollack, Québec
Colloquium on integrating AI and sovereign digital technologies into government while protecting ethics, transparency, and public trust.
Thu, Apr 16 · 12:00 PM
Algorithmic Capital: Accumulation, Power, and Resistance in the Age of AI
Jonathan Durand Folco (Saint Paul University) and Jonathan Martineau (Concordia) discuss algorithmic capitalism, its concentration of power, and possible forms of resistance.
Tue, Apr 28 · 7:00 PM
AI and Persuasion: Capabilities and Mitigation
UQAM - Pavillon Président-Kennedy, Montréal
Jean-François Godbout (Université de Montréal and Mila) explores the persuasive effects of generative AI, including deepfakes, automated influence, and AI-assisted propaganda, alongside mitigation strategies and a new AI-powered misinformation detection tool.
Thu, Apr 30 · 8:00 AM
Sexuality and Generative AI: Benefits, Harms, and Paths Forward
UQAM - Pavillon Judith-Jasmin annexe, Montréal
International colloquium on the ethical, legal, and social issues raised by generative AI in sexuality and intimate relationships.
Mon, May 4 · 9:00 AM – May 8
Bootcamp: Statistical Insights into Modern AI Systems
Montréal
Five-day bootcamp on statistical foundations of modern AI, part of the IVADO thematic semester on Statistical Foundations of AI.
Mon, May 11 · 9:30 AM – May 15
Workshop: Statistics in Trustworthy AI
Montréal
Five-day workshop on statistical methods for trustworthy AI, part of the IVADO thematic semester on Statistical Foundations of AI.
Mon, Jun 8 · 9:30 AM – Jun 11
Workshop: Uncertainty in AI
Montréal
Four-day workshop on uncertainty quantification in AI, part of the IVADO thematic semester on Statistical Foundations of AI.

Organizations

Montréal's AI safety ecosystem: research labs, policy institutes, and community groups

  • AI Safety Reading Group (Mila)
    Biweekly research seminar series at Mila inviting authors to present their own AI safety papers.
  • AI Alignment McGill (AIAM)
    Student club at McGill focused on AI alignment and safety; organizes reading groups, hackathons, and related activities.
  • Canadian AI Safety Institute (CAISI)
    Canada's federal AI safety institute. Funds research through CIFAR and NRC programs and develops safety tools and guidance. A member of the International Network of AI Safety Institutes.
  • CEIMIA
    Independent organization managing the Montréal hub of the GPAI Network of Centres. Implements high-impact applied projects for responsible AI grounded in ethics and human rights.
  • CIFAR — AI & Society
    Program under the Pan-Canadian AI Strategy that convenes interdisciplinary meetings and publishes reports on AI's societal impacts for policymakers and the public.
  • McGill Centre for Media, Technology & Democracy (CMTD)
    Research centre at McGill's Max Bell School focused on AI governance, transparency and democratic oversight. Publishes analyses and convenes workshops informing Canadian and international policy.
  • Encode Canada
    Youth-led group that builds AI literacy and early-career capacity through student fellowships, hands-on workshops, public events, and creative programs.
  • Goodheart AI
    Builds AI systems to safely accelerate R&D of defensive technologies, aiming for a world robust to powerful AI.
  • Horizon Omega (HΩ)
    Montréal-based nonprofit hub supporting local AI safety, ethics and governance community through meetups, coworking, workshops and collaborations.
  • IVADO — R³AI / R10: AI Safety & Alignment
    Multi-year research program on AI safety spanning three axes: evaluating harmful behaviors, understanding AI decision-making, and algorithmic approaches for safe AI.
  • Krueger AI Safety Lab (KASL)
    Technical AI safety research group at Mila led by David Krueger, with research on misgeneralization, mechanistic interpretability, and reward specification.
  • LawZero
    Nonprofit AI safety research organization launched by Yoshua Bengio. Focuses on a non-agentic 'Scientist AI' architecture as an alternative to frontier-lab approaches.
  • Montréal AI Ethics Institute (MAIEI)
    International non-profit founded in 2018 that equips citizens concerned about AI's societal impacts to take action. Produces the AI Ethics Brief and State of AI Ethics reports.
  • Mila – Québec AI Institute
    Academic deep learning research center with 140+ affiliated professors. Work spans technical alignment, interpretability, and responsible AI development.
  • Montréal AI Governance, Ethics & Safety Meetup
    Public meetup community hosting talks, workshops, and discussions on AI governance, ethics, and safety. Co-organized with Horizon Omega.
  • OBVIA
    Inter-university observatory on the societal impacts of AI. A network of researchers from Québec institutions publishing research across seven thematic hubs.
  • PauseAI Montréal
    Volunteer community advocating to mitigate AI risks and pause the development of superhuman AI until it can be made safe.