Irene Solaiman is a leading AI safety and policy expert and Head of Global Policy at Hugging Face, the most popular community-oriented company and platform working to democratize good machine learning, where she conducts social impact research and leads public policy. Irene serves on the Partnership on AI’s Policy Steering Committee and the Center for Democracy and Technology’s AI Governance Lab Advisory Committee. Irene also advises responsible AI initiatives at OECD and IEEE. She is the foremost expert in AI releases across the open and closed source landscape, and her research includes AI value alignment, social impact, and combating misuse and malicious use. Irene was named to MIT Technology Review’s 35 Innovators Under 35 in 2023 for her research.
Irene formerly initiated and led bias and social impact research at OpenAI, where she also led public policy. Her research on adapting GPT-3 behavior received a spotlight at NeurIPS 2021. She was recently a Tech Ethics and Policy Mentor at Stanford University and an International Strategy Forum Fellow at Schmidt Futures. She also built AI policy at Zillow Group and advised policymakers on responsible autonomous decision-making and privacy as a fellow at Harvard’s Berkman Klein Center.
Irene has worked on generative AI systems and especially language models since the early GPT-2 release, starting the first sociotechnical work on GPT systems. She believes both technical and policy insights are needed to guide responsible development and deployment.
Working on AI models that are developed and deployed across most continents raises policy considerations around potential security risks and opportunities for international cooperation.
Having built and led AI policy in industry for years and spoken at the world’s most prominent AI policy convenings, Irene knows that translating technical AI concepts into policy solutions requires deep knowledge of the state of AI and practical policy recommendations. Governments, industry, and nonprofits all need AI policy tools.
Whether deploying or using AI, cybersecurity practices and hygiene are key, and those practices differ slightly from basic internet cybersecurity. From serving your own AI models to using an API, cybersecurity skills must be updated.
A central debate of “open versus closed” AI often overlooks important nuances. As one of the thought leaders in researching generative AI release strategies, Irene has noted that release is a spectrum and that both open and closed systems carry benefits, risks, and tradeoffs; each has strengths in a given deployment environment.
Ensuring AI systems are safe, from product trust and safety to aligning increasingly intelligent models, is both a technical and social science effort.
AI’s rapid development and integration require foresight into how AI will continue to impact the economy and society.