Avinash Paliwal

Co-Founder

Seattle, Washington, United States · 8 yrs 5 mos experience

Key Highlights

  • Expert in 3D Computer Vision and Generative Video Models.
  • Published in top-tier conferences like CVPR and SIGGRAPH Asia.
  • Led innovative projects in video generation and scene reconstruction.

Skills

Core Skills

Video Generation · 3D Computer Vision · Machine Learning · Deep Learning · Computer Vision · Research · Data Analysis · Embedded Systems

Other Skills

Python · Diffusion Models · 3D Gaussian Splatting · Novel View Synthesis · Real-Time Systems · Stereo View-Time Interpolation · 3D Displays · Java · SQL · RTOS Development · Debugging · Generative AI · C · Matlab · HTML

About

Founding AI Researcher at Morphic Inc., working at the intersection of 3D Computer Vision and Generative Video Models. I lead the development of production-grade video generation features, such as image-to-video (3D Motion) and video-to-video camera control. My expertise centers on diffusion models and scene reconstruction/generation using techniques like 3D Gaussian Splatting. I hold a Ph.D. from Texas A&M University, with prior research experience at Meta Reality Labs and Leia Inc. My work has been published in top-tier conferences including CVPR, ICCV, ECCV, and SIGGRAPH Asia.

Experience

Nuance Labs

Founding Research Scientist

Mar 2026 – Present · Seattle, Washington, United States

Morphic

Founding AI Researcher

Apr 2025 – Mar 2026 · 11 mos · San Jose, California, United States

  • Led development of production video generation features, including 3D Motion for custom camera videos from a single image, and video inpainting/outpainting.
  • Building an upcoming feature based on “Reshoot-Anything” (under review): a self-supervised video-to-video camera control model trained on arbitrary monocular videos without 3D annotations or static/dynamic scene filtering.
  • Trained large video models using multi-node distributed training with multiple fine-tuning strategies, including full fine-tuning, LoRA, and context adapters, enabling faster experimentation and improved controllability.
  • Built parallelized data processing pipelines for video curation, filtering, and preprocessing to support robust training and production workflows.
  • Exploring agentic text-to-trajectory planning with LLMs/MLLMs to generate scene-aware camera paths from prompts, with optional human-in-the-loop refinement via intermediate visualizations and text/3D edits.
Video Generation · 3D Computer Vision · Machine Learning · Deep Learning · Python

Meta

2 roles

Student Researcher

Jan 2024 – Apr 2024 · 3 mos

  • Improved 360° sparse-view novel view synthesis (3–9 views) by integrating diffusion-based repair/inpainting priors into 3D Gaussian Splatting optimization (ICCV 2025).

Research Scientist Intern

Aug 2023 – Dec 2023 · 4 mos

  • Developed a real-time sparse novel view synthesis system using coherent 3D Gaussian Splatting to improve geometry consistency and reconstruction quality (ECCV 2024).
3D Gaussian Splatting · Novel View Synthesis · Computer Vision

Leia Inc.

Research Intern

Sep 2021 – Dec 2021 · 3 mos · College Station, Texas, United States · Remote

  • Developed a near real-time stereo view-time interpolation method for handheld 3D displays using multi-plane disparities and non-uniform coordinates (CVPR 2023).
Stereo View-Time Interpolation · 3D Displays · Computer Vision

Texas A&M University

3 roles

Graduate Teaching Assistant

Aug 2020 – Dec 2020 · 4 mos · College Station, Texas, United States

  • CSCE 441 Computer Graphics

Graduate Teaching Assistant

Jan 2020 – May 2020 · 4 mos · College Station, Texas, United States

  • CSCE 489/689 Computational Photography

Graduate Research Assistant

Sep 2019 – Mar 2025 · 5 yrs 6 mos · College Station, Texas, United States

  • Developed a single-image 360° 3D scene generation pipeline using diffusion-based panorama/depth synthesis followed by 3D Gaussian Splatting optimization (SIGGRAPH Asia 2025).
  • Proposed a modular pixel reshading network that predicts view-dependent shading (e.g., moving specular highlights) for novel views, improving single-image novel view synthesis realism (SIGGRAPH Asia 2023).
  • Published additional work on dynamic scene interpolation (WACV 2023), GAN-based raw video denoising (ICCP 2021), and hybrid imaging slow-motion reconstruction (TPAMI 2020).
Diffusion Models · 3D Gaussian Splatting · Computer Vision · Research

Amazon

Software Development Engineer Intern

May 2019 – Aug 2019 · 3 mos · Seattle, Washington

  • Built a Java/SQL pipeline to analyze customer interaction history and generate personalized recommendations to improve user engagement.
Java · SQL · Data Analysis

Bajaj Auto Ltd

Senior R&D Engineer

Jul 2016 – Jul 2018 · 2 yrs · Pune/Pimpri-Chinchwad Area

  • Developed core modules for BALOS (company’s first in-house RTOS), including OTA bootloader, fault management, and a lightweight CAN protocol reducing bandwidth by 40%.
  • Built an emulation and debugging environment to run the application + RTOS stack without hardware, enabling faster root-cause analysis and iteration.
RTOS Development · Embedded Systems · Debugging · C

Bharat Electronics

Intern

May 2015 – Jun 2015 · 1 mo · Bengaluru, Karnataka, India

Embedded Systems · Debugging

Education

Texas A&M University

Doctor of Philosophy (Ph.D.) — Computer Engineering

Jan 2018 – Jan 2024

Visvesvaraya National Institute of Technology

Bachelor of Technology (B.Tech.) — Electronics and Communications Engineering

Jan 2012 – Jan 2016
