Yim Register
✨ the future of AI can be kind. ✨
Let’s meet at the intersection of joy and justice.
I received my PhD as an NSF GRFP Fellow from the University of Washington Information School. My dissertation is entitled The Future of AI Can Be Kind: Strategies for Embedded Ethics in AI Education. I study the ways AI algorithms can cause harm and best practices for identifying and remedying such algorithmic harms. My main focus is AI education: using trauma-informed computing to teach AI in empowering, inclusive, and supportive ways. I study the full life cycle of ML algorithms, from data collection to model selection to model evaluation and deployment, all with the goals of societal benefit, user safety, and empowerment. From basic regression to large language models (LLMs), my goals are to quantify bias in data and model output and to identify potential harms that may come from the technology we create.
When we create technology with compassion, we create a better world for everyone. I’ve worked with RStudio, Code.org, MD4SG, and the Center for an Informed Public. I also give visiting talks and workshops, such as The Future of AI Can Be Kind or Mental Health, Social Media, and Empowerment, and I often create educational resources and guest lecture on AI/ML topics.

I am also an artist, an avid reader of poetry, a dancer, and an active member of my communities. I advocate for addiction recovery, mental health support, autism appreciation, prison reform, and overdose awareness. I am a former child advocate for foster children, and I even used to teach kindergarten before I found my way into my PhD! I use my experiences to expand my perspectives and beliefs about what is possible.
If we work together, ✨the future of AI can be kind✨.
latest posts
- Sep 4, 2024: Advice to PhD Students
- Jul 21, 2024: The Future of AI Can Be Kind, and I wrote my Dissertation about it
- Jan 26, 2024: These days, AI is like Fast Fashion
selected publications
- Learning machine learning with personal data helps stakeholders ground advocacy arguments in model mechanics. In Proceedings of the 2020 ACM Conference on International Computing Education Research, 2020.
- Attached to “The Algorithm”: Making Sense of Algorithmic Precarity on Instagram. In Proceedings of the CHI Conference on Human Factors in Computing Systems, 2023.
- Beyond Initial Removal: Lasting Impacts of Discriminatory Content Moderation to Marginalized Creators on Instagram. In Proceedings of the CSCW ACM SIGCHI Conference on Computer-Supported Cooperative Work and Social Computing, 2024.