Amit Sharma — AI Researcher
Machine learning researcher working on improving AI reasoning. I've developed training and explanation algorithms that have thousands of citations and are used by millions of people around the world. For instance, I developed DoWhy, an open-source framework for causal reasoning that has informed government policy, health outcomes, and business decisions globally. It has over three million downloads and has spawned multiple startups that provide reasoning services on top of core DoWhy algorithms. I also led the development of DiCE, a popular counterfactual explanation method that has been integrated into the Microsoft Responsible AI platform.

My work on advancing AI reasoning through causality has been featured on multiple podcasts, including Humans of AI, Microsoft AI Frontiers, and Causal Bandits. I've been fortunate to be recognized with multiple awards, such as Nasscom AI Gamechangers, Yahoo Key Scientific Challenges, Honda Young Engineer and Scientist, and Best Paper and Featured Paper awards at premier computer science research conferences.

To build the causal AI assistant of the future, in 2021 I co-founded PyWhy, an open-source organization that brings together a global team of researchers and engineers from CMU, Columbia, Microsoft, Amazon, and other institutions. Its latest project, PyWhy-LLM, has developed large language model-based reasoning algorithms that are state-of-the-art on causal reasoning tasks. PyWhy's scientific impact has led to keynote talks across disciplines, including conferences on environmental science (Cambridge University), finance (CFA, New York), and medicine (Leibnizhaus, Germany).

At a technical level, I work on combining two seemingly incompatible ideas: the messy but generalizable capabilities of language models and the principled but rigid capabilities of causal models (or formal reasoning models). Early in 2023, I saw the potential of LLMs for inferring causal relationships, a key part of scientific discovery.
This has led to LLM-based algorithms that achieve up to 96% accuracy at inferring cause and effect across scientific fields, including medicine (COVID-19), climate science (Arctic sea ice coverage), and engineering. The key insight in my work is that, with the right training procedure, even small models can match the accuracy of large models such as GPT-4. These days, I'm most excited by Axiomatic Training, a framework for training language models that enables even small models to outperform large frontier models on reasoning tasks. For more details on my work, visit www.amitsharma.in.
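The kind of causal-effect estimation that frameworks like DoWhy automate can be illustrated with a toy, standard-library-only sketch of backdoor adjustment (this is an illustrative example of the underlying idea, not DoWhy's API; the data-generating numbers are made up for the demo):

```python
import random

random.seed(0)

# Toy data: a binary confounder Z raises both the chance of treatment T
# and the outcome Y. The true causal effect of T on Y is +1.0.
n = 100_000
rows = []
for _ in range(n):
    z = random.random() < 0.5
    t = random.random() < (0.8 if z else 0.2)   # confounded treatment
    y = 1.0 * t + 2.0 * z + random.gauss(0, 0.1)
    rows.append((z, t, y))

def mean(xs):
    return sum(xs) / len(xs)

# Naive comparison E[Y|T=1] - E[Y|T=0] is biased by the confounder.
naive = (mean([y for z, t, y in rows if t])
         - mean([y for z, t, y in rows if not t]))

# Backdoor adjustment: compare within each stratum of Z, then average
# the per-stratum differences weighted by P(Z).
adjusted = 0.0
for zval in (False, True):
    stratum = [(t, y) for z, t, y in rows if z == zval]
    pz = len(stratum) / n
    y1 = mean([y for t, y in stratum if t])
    y0 = mean([y for t, y in stratum if not t])
    adjusted += pz * (y1 - y0)

print(f"naive:    {naive:.2f}")     # inflated by the confounder
print(f"adjusted: {adjusted:.2f}")  # close to the true effect of 1.0
```

Libraries such as DoWhy generalize this idea: given a causal graph, they identify which adjustment is valid and then estimate and stress-test the effect.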
Location: Bengaluru, Karnataka, India
Experience: 15 yrs 2 mos
Skills
- Machine Learning
- Artificial Intelligence
- Research
- Computer Science
- Teaching
- Causal Inference
- Data Analysis
- User Experience Research
- Software Engineering
Career Highlights
- Developed DoWhy, impacting global policy and health outcomes.
- Co-founded PyWhy, advancing causal reasoning with LLMs.
- Recognized with multiple prestigious awards in AI research.
Work Experience
Microsoft Research
Principal Researcher (4 yrs 6 mos)
Senior Researcher (3 yrs)
Researcher (1 yr 9 mos)
Postdoctoral Researcher (1 yr 11 mos)
Intern (3 mos)
Cornell University
Co-Instructor (4 mos)
Research Assistant (4 yrs 9 mos)
User Experience Research Intern (3 mos)
Software Engineering Intern (3 mos)
EPFL
Research Intern (2 mos)
IBM India Research Lab
Summer Intern (2 mos)
Education
Doctor of Philosophy (Ph.D.) at Cornell University
B.Tech. (Hons.) at Indian Institute of Technology, Kharagpur
High School at Sardar Patel Vidyalaya, New Delhi