Connect with community members, organizations, and practitioners through a monthly newsletter, local events, and an organization directory.

Newsletter

February 2026

Events

Fri, Feb 6 · 4:00 PM – Feb 27
IVADO Student Program: Perspective AI
IVADO
Hybrid
Short, free, non-credit training sessions in French and English, held Fridays Feb 6, 13, and 27.
Tue, Feb 24 · 7:00 PM
How the Future Rights of AI Workers Will Also Protect Human Rights
Horizon Omega
UQAM - Pavillon Président-Kennedy, Montréal
Panel discussion on rights balancing: how frameworks protecting AI workers also reinforce human rights protections. Speakers include Heather Alexander and Jonathan Simon.
Wed, Feb 25 · 6:30 PM
ORI Community Roundtable: People's Consultation on AI
Open Roboethics Institute (ORI)
Concordia SHIFT Centre, Montréal
Community roundtable for the People's Consultation on AI. Share and discuss concerns about the impact of robotics and AI across sectors and what we want the government to do. No technical background needed. Limited FR/EN translation available.
Tue, Mar 3 · 7:00 PM
When Is a Human Actually "Overseeing" an AI System?
Horizon Omega
UQAM - Pavillon Président-Kennedy, Montréal
Talk by Shalaleh Rismani (McGill/Mila, Open Roboethics Institute). AI systems are often described as being "under human oversight," but what does that actually mean in practice? Drawing on a study of AI writing assistants, this talk examines oversight as a human behavior — showing that greater system understanding doesn't necessarily lead to better oversight, and can even result in worse outcomes. The talk also raises broader questions about oversight of AI assistants and agents, including coding tools.
Tue, Mar 10 · 7:00 PM
What should Montréal's role be in AI safety?
Horizon Omega
Montréal
Montréal is home to many organizations working on AI safety, ethics, and governance: Mila, LawZero, HΩ, PauseAI, IVADO, CAISI, OBVIA... But do we actually function as a community, or are we just co-located? Brief overview of the landscape followed by structured small-group discussions: What's missing? Who should be talking? What do we do in the next few months?

Organizations

  • AI Safety Reading Group (Mila)
    Biweekly research seminar series at Mila inviting authors to present their own AI safety papers.
  • AI Alignment McGill (AIAM)
    Student club at McGill focused on AI alignment and safety; organizes reading groups, hackathons and related activities.
  • Canadian AI Safety Institute (CAISI)
    Federal government AI safety institute. Funds research through CIFAR and NRC programs, develops safety tools/guidance. Member of International Network of AI Safety Institutes.
  • CEIMIA
    Independent organization managing Montréal hub of GPAI Network of Centres. Implements high-impact applied projects for responsible AI based on ethics and human rights.
  • CIFAR — AI & Society
    Program under Pan-Canadian AI Strategy that convenes interdisciplinary meetings and publishes reports on AI's societal impacts for policymakers and public.
  • McGill Centre for Media, Technology & Democracy (CMTD)
    Research centre at McGill's Max Bell School focused on AI governance, transparency and democratic oversight. Publishes analyses and convenes workshops informing Canadian and international policy.
  • Encode Canada
    Youth-led group that builds AI literacy and early-career capacity through student fellowships, hands-on workshops, public events, and creative programs.
  • Goodheart AI
    Builds AI systems to safely accelerate R&D of defensive technologies, aiming for a world robust to powerful AI.
  • Horizon Omega (HΩ)
    Montréal-based nonprofit hub supporting local AI safety, ethics and governance community through meetups, coworking, workshops and collaborations.
  • IVADO — R³AI / R10: AI Safety & Alignment
    Multi-year research program on AI safety across axes: evaluating harmful behaviors, understanding AI decision-making, algorithmic approaches for safe AI.
  • Krueger AI Safety Lab (KASL)
    Technical AI safety research group at Mila led by David Krueger. Research in misgeneralization, mechanistic interpretability, and reward specification.
  • LawZero
    Nonprofit AI safety research organization launched by Yoshua Bengio. Focuses on a non-agentic "Scientist AI" architecture as an alternative to frontier lab approaches.
  • Montréal AI Ethics Institute (MAIEI)
    International non-profit founded in 2018 that equips citizens concerned about AI and its societal impacts to take action. Produces the AI Ethics Brief and State of AI Ethics reports.
  • Mila – Québec AI Institute
    Academic deep learning research center with 140+ affiliated professors. Technical alignment, interpretability, responsible AI development.
  • Montréal AI Governance, Ethics & Safety Meetup
    Public meetup community hosting talks, workshops, discussions on AI governance, ethics and safety. Co-organized with Horizon Omega.
  • OBVIA
    Inter-university observatory on societal impacts of AI. Network of researchers from Quebec institutions publishing research across 7 thematic hubs.
  • PauseAI Montréal
    Volunteer community advocating for mitigating AI risks and pausing development of superhuman AI until it can be done safely.