Weixin Liang — Product Engineer
On the job market for 2025. Recruiters, please feel free to reach out to discuss opportunities or collaborations! I am completing my PhD in Computer Science at Stanford University, specializing in multi-modal Large Language Models (LLMs) and efficient model architectures. Homepage: https://ai.stanford.edu/~wxliang/

My research advances the efficiency frontier of large-scale LLMs, focusing on pretraining optimization and sparse architectures, particularly for multi-modal AI. I recently introduced Mixture-of-Transformers (MoT), a sparse architecture that achieves performance comparable to dense models while reducing FLOPs by up to 66% for multi-modal LLM pretraining, a significant step toward more resource-efficient AI systems. At Meta FAIR, I extended this work to large-scale LLM applications, demonstrating MoT's effectiveness across text, image, and speech modalities.

My industry experience spans roles at Amazon, Apple, and Tencent, where I contributed to production-scale ML systems and infrastructure. I am actively exploring opportunities in industry research labs and technology companies to continue advancing multi-modal LLMs and resource-efficient AI. Let's connect to discuss collaboration opportunities or the future of AI.

Education: Stanford PhD CS (2025), Stanford MS EE (2021), Zhejiang University BE CS (2019).
Location: Stanford, California, United States
Experience: 6 yrs 6 mos
Skills
- Large Language Models (LLMs)
- Multi-Modal LLMs
- Trustworthy AI
- Natural Language Processing (NLP)
- Multi-Modal AI
- Conversational AI
- Data Science
Career Highlights
- PhD candidate specializing in multi-modal Large Language Models.
- Introduced the Mixture-of-Transformers (MoT) architecture, reducing multi-modal LLM pretraining FLOPs by up to 66%.
- Published research on AI documentation practices in Nature Machine Intelligence.
Work Experience
Meta
Research Scientist Intern at FAIR, Multi-Modal LLM, Pre-training (6 mos)
Hugging Face
Visiting Researcher, Trustworthy AI (6 mos)
Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI)
Graduate Research Assistant, Dermatology AI
Stanford Office of Technology Licensing (OTL)
Graduate Research Assistant | Foundation Models, AI, NLP
Amazon
Applied Scientist Intern | Multi-Modal AI (3 mos)
Stanford University Department of Computer Science
Graduate Research Assistant (6 yrs 6 mos)
Tencent
Machine Learning Engineer Intern (2 mos)
Apple
Software Engineer Intern (2 mos)
University of Illinois Urbana-Champaign
Summer Internship
Education
Doctor of Philosophy - PhD, Computer Science at Stanford University
Master of Science (M.S.), Electrical Engineering at Stanford University
Bachelor of Engineering - BE, Computer Science at Zhejiang University
Honors degree at Zhejiang University
Summer Internship at University of Illinois Urbana-Champaign
High School Diploma at Guangdong Experimental High School