Rajkumar Pujari

AI Researcher

West Lafayette, Indiana, United States · 10 yrs 11 mos experience

Key Highlights

  • PhD in NLP with strong research background.
  • Experience in DARPA's CCU program.
  • Publications in top NLP conferences.

Skills

Core Skills

Large Language Models (LLM) · Reinforcement Learning · Natural Language Processing · Deep Learning

Other Skills

Post-training · Deep Reinforcement Learning · World Models · Computational Social Science · Human-in-the-Loop Machine Learning · Parameter-Efficient Training · Cross-Cultural Systems · Culturally-Aware Language Models · Discourse Analysis · Graph Neural Networks · Dataset Design · Python (Programming Language) · C · C++ · Java

About

I am an Applied Scientist at Amazon AGI, specializing in LLM post-training and graph-based methods. With experience in DARPA's CCU program, a PhD in NLP from Purdue University, and industry internships at Microsoft Research and Amazon Alexa, I have a strong background in impactful and imaginative applied AI research. My work has resulted in publications at top NLP conferences such as ACL, NAACL, and EMNLP. My PhD research focused on pragmatic language understanding, with an emphasis on cultural grounding and political contextualization. I have worked on novel architectures for graph foundation models, on steering LLMs toward culturally aligned reasoning chains, and on multi-agent architectures for assessing social media information quality.

Experience

Amazon

Applied Scientist II

Aug 2025 – Present · 7 mos · Sunnyvale, California, United States · On-site

  • Working on the AGI Foundations team: post-training general-purpose frontier LLMs, LLM-as-a-judge, inference-time scaling
Post-training · Large Language Models (LLM) · Reinforcement Learning · Deep Reinforcement Learning

Microsoft

Research Intern

May 2021 – Aug 2021 · 3 mos · Redmond, Washington, United States

  • Worked with Elnaz Nouri, Erik Oveson, and Priyanka Kulkarni on a low-resource reinforcement learning framework for stereotype detection in text.

Purdue University

2 roles

Graduate Research Assistant

Promoted

Aug 2019 – Jul 2025 · 5 yrs 11 mos · West Lafayette, Indiana

  • My research developed computational methods that embed social, political, and discourse context into language models, enabling more faithful interpretation of real-world text such as political messaging, news, and multi-choice reasoning tasks. I introduced graph-structured and relation-aware modeling frameworks, along with new datasets and evaluation paradigms, demonstrating that socially and relationally grounded models yield more robust, human-aligned understanding than standard pretrained LMs.
  • As part of the DARPA CCU program, I led all TA1 text-focused tasks, and the systems I built performed strongly in blind evaluations. This work was also integrated into prototype systems for cross-cultural, cross-lingual intelligent human-AI interaction.
Deep Learning · World Models · Computational Social Science · Human-in-the-Loop Machine Learning · Large Language Models (LLM) · Parameter-Efficient Training +3

Graduate Teaching Assistant

Aug 2017 – May 2019 · 1 yr 9 mos · West Lafayette, United States

  • I worked on 'Machine Comprehension using Common Sense Inference'. Specifically, I investigated the problem of augmenting given textual information with world knowledge, in order to effectively answer questions requiring inferences beyond the information available in the text.

Amazon

Applied Scientist Intern

May 2019 – Aug 2019 · 3 mos · Greater Seattle Area

  • Worked on the 'Alexa Conversational Search' team as an Applied Scientist Intern for the summer.

Indian Institute of Technology, Bombay

Research Project Assistant

Jan 2016 – May 2017 · 1 yr 4 mos · Mumbai Area, India

  • I was involved in academic research on deep learning for natural language processing. I worked on improving the quality of pre-trained English word embeddings and on English–Hindi WordNet linking using synset embeddings. I also developed a taxonomy for English-language questions, along with corresponding classification algorithms; the taxonomy was used to improve the performance of semantic question matching, specifically in a restricted-domain setting.

WorldQuant LLC

2 roles

Senior Quantitative Researcher

Jul 2015 – Dec 2015 · 5 mos

  • I was responsible for researching financial and mathematical literature and analyzing various datasets to seek out sources of market inefficiency, converting them into predictive, profitable models called alphas. I built alphas using diverse datasets such as textual news data, price–volume data, and fundamental data from balance sheets, income statements, and cash flow statements.

Quantitative Researcher

Jul 2014 – Jun 2015 · 11 mos

Yahoo

Summer Intern

May 2013 – Jul 2013 · 2 mos · Bangalore

  • I worked on improving relevance-based ranking of comments on news articles.
  • I designed and implemented a component of the comment-ranking module for Yahoo! articles that scores comments based on the relevance of their text to the article.

Education

Purdue University

Doctor of Philosophy - PhD — Natural Language Processing (Computer Science)

Jul 2017 – Aug 2025

Indian Institute of Technology, Kharagpur

Bachelor of Technology (Hons.) — Computer Science and Engineering

Jan 2010 – Jan 2014

Sri Chaitanya Junior College

High School — MPC

Jan 2008 – Jan 2010

Sainik School, Korukonda

10th — Schooling

Jan 2003 – Jan 2008

Geethanjali Public School

Schooling

Jan 1996 – Jan 2002
