About us

Why this group?

We think it is important to help future professionals and researchers make sense of the developments in AI and the range of possible governance measures.

While much of society's resources will go toward building new and helpful services with AI, our focus is on exploring the foundational concepts that matter to people working to make the technology ethical and safe.

  • We think AI technologies have presented, and will continue to present, many new challenges to society.
  • Lots of people will offer takes on how society should deal with AI, but most things are uncertain. We want to create a space where we start from this place of uncertainty and help each other make sense of what we’re seeing.
  • The history of technology (e.g. the proliferation of social media) has shown us that civil society needs to engage critically with developments of powerful technology, and hold creators and regulators accountable.
  • Like many other technologies, AI models are dual use: their role in the world isn’t inherently good or bad. AI is already doing significant damage, and many of these harms were documented well before the systems causing them were being called AI. The current and likely future risks mean we need more people to delve into the topics relevant to ensuring our application of AI models supports the public interest.

What do we cover in seminars?

AI ethics and governance is a trans-disciplinary field, which makes it challenging to cover meaningfully in a monthly seminar series. It seems like the most valuable thing we can do at the outset is to help each other become less confused about key concepts.

Most of the recent surge of AI capabilities has come from the scaling of foundation models. For now, we’ll therefore be looking into definitions and concepts to do with these models, how they are deployed, and how they are being regulated.

You can find some recommended resources here.


Who can join?

This group is for you if you think the recent progress in AI models is important for society, and want to be in a better position to make sure it is stewarded well. You’re probably a good match if you’d like to work in policy, NGOs, tech, journalism or research.

We’re hoping to meet folks studying a broad range of disciplines, and you don’t need any prior knowledge of AI.


What do we expect from participants?

We expect that you’re motivated to deepen your understanding of AI and technology governance.

Our meeting format and frequency depend on your preferences, so please let us know in the application form. We’ll reach out within a few days of your submission.