Newsletter

Events

Tue, Mar 3 · 7:00 PM
When Is a Human Actually "Overseeing" an AI System?
Maison du développement durable, Montréal
Shalaleh Rismani (McGill/Mila) presents research on how humans actually oversee AI systems — finding that greater understanding doesn't necessarily lead to better oversight.
Fri, Mar 6 · 10:00 AM
AI Safety Coworking & Bouldering
Bloc Shop Mile-Ex, Montréal
Working on something related to AI safety or governance (e.g. research, engineering, policy)? Come work on your projects alongside others in the field — and boulder between sessions. We meet at Bloc Shop Mile-Ex, 10 AM–2 PM. Entry is ~$13 on Fridays. The gym has a café with tables and wifi; you may want to bring a lunch.
Tue, Mar 10 – Mar 13
Social Reasoning and the Ecology of Thought
IVADO, Montréal
Workshop on reasoning in multi-agent systems: theory of mind, argumentation, and distributed reasoning — 22 speakers from AI, neuroscience, and philosophy. Part of IVADO's thematic semester on computational reasoning. $40–$240.
Tue, Mar 10 · 7:00 PM
What should Montréal's role be in AI safety?
Montréal
Discussion on how Montréal can shape its role in AI safety — leveraging its unique position as a major AI research hub.
Mon, Mar 16 – Mar 23
Building Safer AI for Youth Mental Health
Mila, Montréal
Week-long hackathon with three tracks: adversarial stress-testing, logic hardening, and synthetic data augmentation for safer conversational AI. Prizes include $10K and a Mila AI Safety Studio internship.
Thu, Mar 19 · 5:00 PM
Artificial Intelligence: Where Does Consciousness Begin?
Université de Montréal - Campus MIL, Montréal
A discussion on artificial intelligence and the notion of consciousness — where does it begin?
Fri, Mar 20 · 6:00 PM – Mar 22
AI Control Hackathon
Montréal
Three-day hackathon to develop control protocols, build evaluation tools, and stress-test safety measures using ControlArena and SHADE-Arena.

Organizations

  • AI Safety Reading Group (Mila)
    Biweekly research seminar series at Mila inviting authors to present their own AI safety papers.
  • AI Alignment McGill (AIAM)
    Student club at McGill focused on AI alignment and safety; organizes reading groups, hackathons and related activities.
  • Canadian AI Safety Institute (CAISI)
Federal government AI safety institute. Funds research through CIFAR and NRC programs and develops safety tools and guidance. A member of the International Network of AI Safety Institutes.
  • CEIMIA
Independent organization managing the Montréal hub of the GPAI Network of Centres. Implements high-impact applied projects for responsible AI grounded in ethics and human rights.
  • CIFAR — AI & Society
Program under the Pan-Canadian AI Strategy that convenes interdisciplinary meetings and publishes reports on AI's societal impacts for policymakers and the public.
  • McGill Centre for Media, Technology & Democracy (CMTD)
    Research centre at McGill's Max Bell School focused on AI governance, transparency and democratic oversight. Publishes analyses and convenes workshops informing Canadian and international policy.
  • Encode Canada
    Youth-led group that builds AI literacy and early-career capacity through student fellowships, hands-on workshops, public events, and creative programs.
  • Goodheart AI
Builds AI systems to safely accelerate R&D of defensive technologies, aiming for a world robust to powerful AI.
  • Horizon Omega (HΩ)
Montréal-based nonprofit hub supporting the local AI safety, ethics, and governance community through meetups, coworking, workshops, and collaborations.
  • IVADO — R³AI / R10: AI Safety & Alignment
Multi-year research program on AI safety organized around three axes: evaluating harmful behaviors, understanding AI decision-making, and algorithmic approaches for safe AI.
  • Krueger AI Safety Lab (KASL)
    Technical AI safety research group at Mila led by David Krueger. Research in misgeneralization, mechanistic interpretability, and reward specification.
  • LawZero
Nonprofit AI safety research organization launched by Yoshua Bengio. Focuses on a non-agentic "Scientist AI" architecture as an alternative to frontier-lab approaches.
  • Montréal AI Ethics Institute (MAIEI)
International non-profit, founded in 2018, that equips citizens concerned about AI's societal impacts to take action. Produces the AI Ethics Brief and State of AI Ethics reports.
  • Mila – Québec AI Institute
Academic deep learning research centre with 140+ affiliated professors. Research spans technical alignment, interpretability, and responsible AI development.
  • Montréal AI Governance, Ethics & Safety Meetup
Public meetup community hosting talks, workshops, and discussions on AI governance, ethics, and safety. Co-organized with Horizon Omega.
  • OBVIA
Inter-university observatory on the societal impacts of AI. A network of researchers from Québec institutions publishing research across seven thematic hubs.
  • PauseAI Montréal
Volunteer community advocating to mitigate AI risks and to pause the development of superhuman AI until it can be made safe.