Events
Fri, Feb 6 · 4:00 PM – Feb 27
IVADO Student Program: Perspective AI
Short, free, non-credit training sessions offered in French and English. Fridays Feb 6, 13, and 27.
Tue, Feb 24 · 7:00 PM
How the Future Rights of AI Workers will also Protect Human Rights
Panel discussion on rights balancing: how frameworks protecting AI workers also reinforce human rights protections. Speakers include Heather Alexander and Jonathan Simon.
Wed, Feb 25 · 6:30 PM
ORI Community Roundtable: People's Consultation on AI
Community roundtable for the People's Consultation on AI. Share and discuss concerns about the impact of robotics and AI across sectors and what we want the government to do. No technical background needed. Limited FR/EN translation available.
Tue, Mar 3 · 7:00 PM
What should Montréal's role be in AI safety?
Discussion on how Montréal can shape its role in AI safety — leveraging its unique position as a major AI research hub.
Organizations
- Biweekly research seminar series at Mila inviting authors to present their own AI safety papers.
- Student club at McGill focused on AI alignment and safety; organizes reading groups, hackathons and related activities.
- Federal government AI safety institute. Funds research through CIFAR and NRC programs, develops safety tools/guidance. Member of International Network of AI Safety Institutes.
- Independent organization managing Montréal hub of GPAI Network of Centres. Implements high-impact applied projects for responsible AI based on ethics and human rights.
- Program under Pan-Canadian AI Strategy that convenes interdisciplinary meetings and publishes reports on AI's societal impacts for policymakers and public.
- Research centre at McGill's Max Bell School focused on AI governance, transparency and democratic oversight. Publishes analyses and convenes workshops informing Canadian and international policy.
- Youth-led group that builds AI literacy and early-career capacity through student fellowships, hands-on workshops, public events, and creative programs.
- Goodheart is building AI systems to safely accelerate R&D of defensive technologies to create a world robust to powerful AI.
- Montréal-based nonprofit hub supporting local AI safety, ethics and governance community through meetups, coworking, workshops and collaborations.
- Multi-year research program on AI safety spanning three axes: evaluating harmful behaviors, understanding AI decision-making, and algorithmic approaches for safe AI.
- Technical AI safety research group at Mila led by David Krueger. Research in misgeneralization, mechanistic interpretability, and reward specification.
- Nonprofit AI safety research organization launched by Yoshua Bengio. Focuses on non-agentic 'Scientist AI' architecture as alternative to frontier lab approaches.
- International non-profit founded in 2018 that equips citizens concerned about AI and its societal impacts to take action. Produces the AI Ethics Brief and the State of AI Ethics reports.
- Academic deep learning research centre with 140+ affiliated professors. Research spans technical alignment, interpretability, and responsible AI development.
- Public meetup community hosting talks, workshops, and discussions on AI governance, ethics, and safety. Co-organized with Horizon Omega.
- Inter-university observatory on societal impacts of AI. Network of researchers from Quebec institutions publishing research across 7 thematic hubs.
- Volunteer community advocating to mitigate AI risks and to pause development of superhuman AI until it can be built safely.