<aside> ❇️

About Us


We think that reducing risks from advanced artificial intelligence is one of the most important problems of our time. We also think it's a highly interesting and exciting problem, with open opportunities for many more researchers to make progress on it.

Check out our new website! http://aisafetyuci.com/

Apply for our Intro Fellowship here: Application


</aside>

<aside> 📗

What We Do


We are building a community at UCI aimed at reducing AI risks, training the next generation of researchers, engaging the public, and steering the trajectory of AI development for the better. Learn more about getting involved below.


</aside>

<aside> 🐉

Get Involved

</aside>

Join our Discord

This is our main form of communication. Click to join for updates and access to resources.

Technical Fundamentals Fellowship

AIS UCI runs an 8-week introductory reading group on AI safety, covering topics like neural network interpretability,¹ learning from human feedback,² goal misgeneralization in reinforcement learning agents,³ and eliciting latent knowledge. The fellowship meets weekly in small groups, with dinner provided and no additional work outside of meetings.

Apply by April 3! The application is here.

AI Safety Technical Fundamentals Syllabus

Membership

Being a member of the AIS UCI community comes with both a number of opportunities and a number of responsibilities. Membership entails: