About
What we mean by AI safety
AI safety is the effort to understand and reduce the risks that increasingly capable AI systems pose to people, to institutions, and to the broader world. It asks how we can develop and deploy these systems responsibly, with risks understood, managed, and governed.
The field spans technical research (alignment, interpretability, robustness), governance and policy, ethics, and societal impact. We take an inclusive view: the field encompasses everything from formal verification of neural networks to international policy coordination, and from AI consciousness to surveillance and workers' rights.
For a comprehensive overview of current risks and capabilities, see the International AI Safety Report, a global review led by Yoshua Bengio with contributions from over 100 independent experts.
What we do
Montréal hosts one of the world's largest AI research ecosystems, anchored by Mila and surrounded by leading labs and startups. This community hub connects the people working on making that ecosystem safer.
- Events: Reading groups, talks, coworking sessions, hackathons, and workshops covering topics from technical alignment to governance and policy.
- Newsletter: Monthly roundup of research highlights, local events, and community news.
- Directory: A map of the local ecosystem, including research labs, policy institutes, nonprofits, and community groups.
Who runs this
This site and most events are organized by Horizon Omega, a Canadian nonprofit working to reduce AI risk through community building, research, and public engagement. The community includes 1,600+ members across AI safety, ethics, and governance.
Get involved
- Come to an event
- Reach out to list your event or organization: contact@horizonomega.org
- Subscribe to the newsletter