Jiaming Zeng is a Senior Machine Learning Researcher at AKASA. She received her PhD from Stanford University working with Ross Shachter, Susan Athey, and Daniel Rubin. Her research focused on employing causal models and machine learning to develop interpretable models for medical decision making. While an undergrad at MIT, she worked with Cynthia Rudin to develop interpretable machine learning models for predicting prisoner recidivism.
She did her postdoc at IBM Research’s Computational Health group, working on identifying and mitigating bias present in clinical data. Previously, she worked as an AI Resident at X, the Moonshot Factory (formerly Google X), on increasing sustainable fishing and protecting the ocean. She was also an AI Research intern on the NVIDIA AI Infrastructure team, working on practical ways to capture uncertainty in neural networks. Currently, as a Senior ML Researcher at AKASA, she leads research and development efforts for pretraining, finetuning, and evaluating LLMs to automate clinical workflows.
Jiaming’s research has been published in venues such as Nature Communications, JCO Clinical Cancer Informatics, and NeurIPS. Her work has also been featured in various news outlets. In her free time, she enjoys reading, writing, being out in nature, and learning about other cultures.
PhD in Management Science and Engineering, 2021
MEng in Management Science and Engineering, 2018
BSc in Mathematics with Computer Science, 2015
Massachusetts Institute of Technology
We explore how unstructured clinical text can be used to reduce selection bias and improve medical practice. We present this proof-of-concept study to enable more credible causal inference using observational data, uncover meaningful insights from clinical text, and inform high-stakes medical decisions.
We develop a natural language processing approach with structured electronic medical records and unstructured clinical notes to identify the initial treatment administered to patients with cancer.
Our work explores whether fully Bayesian networks are needed to successfully capture model uncertainty.