Events
Tue, Mar 3 · 7:00 PM
When Is a Human Actually "Overseeing" an AI System?
Shalaleh Rismani (McGill/Mila) presents research on how humans actually oversee AI systems, finding that greater understanding does not necessarily lead to better oversight.
Tue, Mar 10 · 7:00 PM
What should Montréal's role be in AI safety?
Discussion on how Montréal can shape its role in AI safety, leveraging its unique position as a major AI research hub.
Fri, Mar 20 · 6:00 PM – Mar 22
AI Control Hackathon
Three-day hackathon to develop control protocols, build evaluation tools, and stress-test safety measures using ControlArena and SHADE-Arena.
Organizations
- Biweekly research seminar series at Mila that invites authors to present their own AI safety papers.
- Student club at McGill focused on AI alignment and safety; organizes reading groups, hackathons, and related activities.
- Federal government AI safety institute. Funds research through CIFAR and NRC programs, develops safety tools/guidance. Member of International Network of AI Safety Institutes.
- Independent organization managing the Montréal hub of the GPAI Network of Centres. Implements high-impact applied projects for responsible AI grounded in ethics and human rights.
- Program under the Pan-Canadian AI Strategy that convenes interdisciplinary meetings and publishes reports on AI's societal impacts for policymakers and the public.
- Research centre at McGill's Max Bell School focused on AI governance, transparency, and democratic oversight. Publishes analyses and convenes workshops informing Canadian and international policy.
- Youth-led group that builds AI literacy and early-career capacity through student fellowships, hands-on workshops, public events, and creative programs.
- Goodheart is building AI systems to safely accelerate R&D of defensive technologies to create a world robust to powerful AI.
- Montréal-based nonprofit hub supporting local AI safety, ethics and governance community through meetups, coworking, workshops and collaborations.
- Multi-year research program on AI safety spanning three axes: evaluating harmful behaviors, understanding AI decision-making, and algorithmic approaches for safe AI.
- Technical AI safety research group at Mila led by David Krueger. Researches goal misgeneralization, mechanistic interpretability, and reward specification.
- Nonprofit AI safety research organization launched by Yoshua Bengio. Focuses on a non-agentic "Scientist AI" architecture as an alternative to frontier-lab approaches.
- International non-profit, founded in 2018, that equips citizens concerned about AI and its societal impacts to take action. Produces the AI Ethics Brief and State of AI Ethics reports.
- Academic deep learning research centre with 140+ affiliated professors. Work spans technical alignment, interpretability, and responsible AI development.
- Public meetup community hosting talks, workshops, and discussions on AI governance, ethics, and safety. Co-organized with Horizon Omega.
- Inter-university observatory on the societal impacts of AI. Network of researchers from Quebec institutions publishing research across seven thematic hubs.
- Volunteer community advocating to mitigate AI risks and to pause development of superhuman AI until it can be made safe.