Margaret Mitchell is a researcher focused on the ins and outs of machine learning and ethics-informed AI development in tech. She has published around 100 papers on natural language generation, assistive technology, computer vision, and AI ethics, and holds multiple patents in the areas of conversation generation and sentiment classification.
She was recently recognized as one of Time’s Most Influential People of 2023. She is currently Chief Ethics Scientist at Hugging Face, where she drives forward work on the ML development ecosystem, ML data governance, AI evaluation, and AI ethics.
Margaret Mitchell previously worked at Google AI as a Staff Research Scientist, where she founded and co-led Google’s Ethical AI group, focused on foundational AI ethics research and on operationalizing AI ethics within Google. Before joining Google, she was a researcher at Microsoft Research, working on computer vision-to-language generation, and a postdoc at Johns Hopkins, focused on Bayesian modeling and information extraction.
She holds a PhD in Computer Science from the University of Aberdeen and a Master’s degree in computational linguistics from the University of Washington. While earning her degrees, she worked from 2005 to 2012 on machine learning, neurological disorders, and assistive technology at Oregon Health and Science University.
Margaret Mitchell has spearheaded a number of workshops and initiatives at the intersections of diversity, inclusion, computer science, and ethics. Her work has received awards from Secretary of Defense Ash Carter and the American Foundation for the Blind, and has been implemented by multiple technology companies. She likes gardening, dogs, and cats.
By examining how past priorities in AI development shaped the AI of today, we can make some predictions about how what we prioritize today will shape the AI of the future. In this talk, I will work through how AI (machine learning) systems came to be what they are now, the different biases and values at play throughout the AI development lifecycle, what these suggest for how AI will evolve, and what’s worth focusing on now to create beneficial AI in the future.
Data is a fundamental component of machine learning, serving as the foundation for what ML models learn. However, relatively little attention has been paid to how ML datasets can be collected, curated, measured, stored, and shared responsibly. In this talk, I will dig into these issues, covering details on datasets and data governance, and provide some guidance on how we can use human values to shape what (and who) ML data represents.
I will walk through what it means to operationalize AI ethics within the product lifecycle, focusing on how to develop machine learning models informed by ethical considerations. This talk covers the role of human cognitive biases, the utility of different artifacts during development, some fun mathematical ideas for ML models and larger AI systems, and methods for ethics-focused launch protocols.
In this talk, I will go over how to understand data collection for training machine learning models from the perspective of human values and human biases. I will walk through how the data interacts with model training and evaluation protocols, and how to approach model development from the perspective of ethical goals. Time permitting, I will further discuss how these ideas can be applied to a company at large, shaping the kind of work that is done and the technology that gets deployed.
“Diversity” can be seen as a function of the proportions of different marginalized subpopulations; “Inclusion” can be seen as a function of each individual’s sense of belonging. In other words, Diversity means lots of different people at the table, while Inclusion means each person feels comfortable talking at the table. This talk will tease apart these differences culturally and algorithmically, focusing on methods to improve both diversity and inclusion at an organizational level as well as at an “AI” level.