Ayush Sharma

Machine Learning Engineer

San Francisco, California, United States · 9 yrs 9 mos experience

Key Highlights

  • 7+ years in neural networks and scalable software solutions.
  • Led interdisciplinary teams at Amazon and a unicorn startup.
  • Developed patented systems for logistics and robotics.

Skills

Core Skills

Computer Vision · Deep Learning · Software Development · Algorithm Development · Machine Learning · Computer Engineering

Other Skills

AB Testing · Active Learning · Adobe Photoshop · Agile Methodologies · Agile Project Management · Analytical Skills · Android · Android Development · C · C++ Language · Camera Calibration · Computer Science · Core Java · Data Analysis · Data Annotation

About

7+ years of experience developing neural networks and building scalable software solutions for perception, image processing, backend systems, and data-driven technologies. Built new systems from the ground up at an early-stage B2B startup (now a unicorn) and delivered large-scale, consumer-focused launches at Amazon. Led and worked across interdisciplinary teams.

Experience

Waymo

Sr. Machine Learning Engineer, Perception

Apr 2025 – Present · 11 mos · Mountain View, California, United States · Hybrid

Motional

Sr. Machine Learning Engineer

Sep 2023 – Feb 2025 · 1 yr 5 mos · Milpitas, California, United States · Hybrid

Dexterity, Inc.

Computer Vision Engineer

Feb 2020 – Jul 2023 · 3 yrs 5 mos · Redwood City, California · On-site

  • Early employee of a now-unicorn startup that builds full-stack software for warehouse robots. Delivered segmentation and error-detection systems, and built robot camera-calibration tools.
  • Conceptualized and deployed a new error-detection system for parcel-singulation robots at a top-3 logistics firm [Patented]
  • The system, developed in Python, detected and flagged errors in real time (<100 ms) on high-speed conveyors, reducing parcel mis-slotting from 5% to 0.01% and avoiding parcel-robot collisions
  • Developed a semi-supervised learning approach in PyTorch; containerized the system and deployed it with NVIDIA Triton Inference Server (using CUDA shared memory)
  • Integrated the system with GCS to store live data and used Google BigQuery to generate comparison charts; created Kibana dashboards to visualize the system's live performance
  • Created a one-click, self-serve tool for robot camera calibration and verification using a gRPC API in Python [Patent pending]
  • Delivered automated processes for faster calibration of robot cameras (both static and end-effector-mounted) and its verification, achieving sub-mm accuracy
  • Reduced calibration time by 10 minutes and produced a comprehensive report
  • Created a system-diagnostic testing platform using gRPC APIs, allowing engineering teams to create system tests at scale
  • Initiated the effort; led meetings with Engineering, UI, and DevOps to understand their pain points
  • Developed the backend using a graph-based workflow, which modularized the engineering, content, and UX
  • Revamped the segmentation pipeline for parcel singulation, helping Dexterity acquire two of its first three clients
  • Revised data annotation by removing redundancy and used a committee-based active-learning policy to select high-value samples and reduce cost
  • Trained segmentation models in PyTorch to detect rigid as well as non-rigid parcels and used hard-negative mining to improve mAP by approx. 15%; used GCS for A/B testing of models and Plotly for visualizations
PyTorch · Scikit-Learn · OpenCV · Python · Software Development · Agile Project Management · +3

Amazon Lab126

Applied Scientist

Jul 2018 – Feb 2020 · 1 yr 7 mos · Sunnyvale, California, United States · On-site

  • Amazon Halo (health tracker): worked on the Visual Body Fat (VBF) measurement feature
  • Trained multi-view convolutional neural networks with supervised learning in PyTorch to predict body-fat percentage from multi-pose images; increased test-set accuracy by approx. 25%; used AWS for multi-GPU training
  • Identified issues with data annotation; worked with fitness experts and the data team to make significant changes to the process, and as a result we hit our KPIs for bounding errors
  • Amazon Ring cameras: trained neural networks in Keras for the human-detection pipeline for indoor cameras
  • Used model distillation and active learning to increase mean precision over classes by 12%
  • Worked with a team of engineers and QAs to iteratively improve the models and get them deployed in an Android app
Computer Engineering · PyTorch · OpenCV · Machine Learning · Analytical Skills · Problem Solving · +11

Amazon

Applied Scientist Intern, Amazon Go

May 2017 – Aug 2017 · 3 mos · Greater Boston

  • Developed an augmented-reality system that projects onto a real-world environment in real time, using information about the environment captured by a camera. Designed and implemented a custom algorithm to calibrate the camera and projector.
  • Key responsibilities:
  • Took ownership of the project and developed it from scratch, from building the hardware setup to delivering a working demo.
  • The projection was accurate and instantaneous to human perception.
Computer Engineering · Problem Solving · Algorithm Development · Research and Development (R&D) · Pattern Recognition

Samsung Electronics

2 roles

Senior Software Engineer, Samsung Research Institute Bangalore

Mar 2016 – Jul 2016 · 4 mos · Bengaluru, Karnataka, India

  • Developed the Sync feature of the Samsung SHealth Android fitness application, which allows users to sync their fitness data from other Android applications with SHealth.
  • Exposure: Agile programming, REST APIs, multi-threaded Java programming, object-oriented design and architecture.
Computer Engineering · Problem Solving

Software Engineer, Samsung Research Institute Bangalore

Jun 2014 – Mar 2016 · 1 yr 9 mos · Bengaluru, Karnataka, India

  • Samsung Research Institute, Bangalore was my first company, and working with the people there for two years was a valuable learning experience.
  • Developed the ‘S-Fit’ framework using R and Java to reduce user drop-off in SHealth. The framework used machine learning models to extract useful patterns from wearable-sensor data and user food inputs and give recommendations; ontology-based knowledge was then used to cross-validate those recommendations.
  • Published at the IEEE International Conference on Bioinformatics and Biomedicine (pp. 875–882): “S-Fit: Knowledge guided fitness pattern mining framework”, Ayush Sharma et al., DOI: 10.1109/BIBM.2015.7359800
  • Exposure: R, Data Analysis, Machine Learning, Java.
Computer Engineering · Problem Solving · Research and Development (R&D)

Samsung R&D Institute India

Software Developer Intern

May 2013 – Jul 2013 · 2 mos · Bengaluru Area, India

  • Worked on Content-Based Image Retrieval (CBIR) algorithms and how they can be adapted to the Android platform; built an Android application demonstrating this.
  • Exposure: CBIR, Java, Object-Oriented Design, LIRE.
Computer Engineering · Problem Solving

Education

University of Massachusetts Amherst

Master of Science (MS) — Computer Science

Jan 2016 – Jan 2018

Indian Institute of Technology (Banaras Hindu University), Varanasi

Bachelor of Technology - BTech — Computer Science

Jul 2010 – May 2014
