Balaji Sundareshan

AI Researcher

San Francisco, California, United States · 7 yrs 6 mos experience

Key Highlights

  • Led development of first office-ergonomics product.
  • Co-inventor on a U.S. patent.
  • Achieved 20x speed-up in real-time object detection.

Skills

Core Skills

Computer Vision · Machine Learning · Robotics

Other Skills

3D Face Model Rendering · 3D Facial Landmarks Detection · 3D Object Detection · Activity Recognition · Algorithms · Artificial Intelligence (AI) · Batch Processing · Binarization · Center-point Keypoint Detection · Cloud Computing · Communication · Computer Vision Module Development · Creative Problem Solving · Driver Activity Detection · Electrical Engineering

About

I'm a Senior CV/ML Research Engineer at Tumeke, where I design, train, and deploy real-time computer-vision systems that keep workplaces safe. Since June 2023, I've led the development of our first office-ergonomics product: reconstructing 3D office scenes, fusing them with SAM2 segmentation masks, and measuring metric distances to deliver actionable insights. I also developed a coaching app that provides real-time ergonomic feedback using quantized CoreML models; it runs natively on the iPad (M2) and is now used by over 30,000 flight attendants.

Before Tumeke, I was a Computer Vision Researcher at Motional, where I improved monocular 3D object detection and velocity prediction by 40% through novel inter-frame fusion techniques and contributed to a U.S. patent. As Lead Research Engineer at Edgetensor, I engineered real-time object detectors and trackers with center-point keypoint detection, deploying them via ONNX and TensorRT to achieve a 20x speed-up while mentoring teams and architecting scalable ML systems. My earliest work, at DeepSight AI Labs, accelerated custom object detectors by 40% using batch processing and model quantization.

I hold an M.S. in Electrical and Computer Engineering (Computer Vision & ML) from Northeastern University, where I co-authored a CVPR 2023 workshop paper on transformer-based egocentric action segmentation, and a B.Tech. in Engineering Physics (Robotics minor) from IIT Delhi. I'm also a co-inventor on patent application US 17929405.

Experience

Tumeke Ergonomics

2 roles

Senior Machine Learning Engineer

Promoted

Jul 2025 – Present · 8 mos · Hybrid

Machine Learning Engineer

Jun 2023 – Jul 2025 · 2 yrs 1 mo · Hybrid

Motional

Research Intern - Computer Vision

May 2022 – Sep 2022 · 4 mos · Santa Monica, California, United States · Hybrid

  • Implemented monocular video-based 3D object detection that predicts the 3D bounding boxes of objects, along with their velocities, from a temporal sequence of images on the nuScenes dataset.
  • Designed several inter-frame fusion techniques, including attention models, improving the velocity predictions of detected objects by 40% with only 15% computation overhead.
  • Filed a U.S. patent covering these techniques.
3D Object Detection · Inter-frame Fusion Techniques · US Patent Filing · Computer Vision · Machine Learning

Northeastern University

Graduate Research Assistant

Sep 2021 – May 2023 · 1 yr 8 mos · Boston, Massachusetts, United States · On-site

  • Lab: Robust Systems Lab
  • Advisor: Prof. Octavia Camps
  • Implemented a binarization module that performs non-zero thresholding to reduce noise and perspective distortion, increasing model accuracy from 87.5% to 92.9%.
  • Designed a pipeline combining activity recognition and object detection to estimate fish counts, using PyTorch.
  • Integrated a transformer-based encoder-decoder network into a multi-view action-recognition pipeline, increasing accuracy by 10%.
PyTorch · Activity Recognition · Object Detection · Binarization · Transformer-based Networks · Computer Vision

Edgetensor

3 roles

Lead Research Engineer

Promoted

Dec 2019 – Sep 2021 · 1 yr 9 mos · Bangalore Urban, Karnataka, India

  • Built a real-time object detector and tracker using a center-point-based keypoint detection technique in MXNet.
  • Designed a camera-pose-invariant 3D box object detector trained on 3D CAD models rendered with Blender.
  • Increased model inference speed by 20x while maintaining the same accuracy by exporting the network graph to ONNX and deploying it with TensorRT.
Real-time Object Detection · Center-point Keypoint Detection · ONNX · TensorRT · Computer Vision · Machine Learning

Research Engineer

Jan 2019 – Nov 2019 · 10 mos · Bangalore Urban, Karnataka, India

  • Implemented a driver-activity-detection feature to spot activities such as smoking, phone use, eating, and drinking.
  • Rendered face images for different head poses by fitting a dense 3D face model and rotating it in 3D space.
Driver Activity Detection · 3D Face Model Rendering · Computer Vision · Machine Learning

Computer Vision Engineer Intern

Apr 2018 – Dec 2018 · 8 mos · Bangalore Urban, Karnataka, India

  • Worked on 3D facial landmarks detection, face tracking, eye and mouth tracking, and head-pose detection.
  • Implemented an emotion-detection feature to classify the driver's state as angry, disgusted, sad, happy, surprised, fearful, or neutral.
3D Facial Landmarks Detection · Emotion Detection · Computer Vision · Machine Learning

DeepSight AI Labs Pvt Ltd

Computer Vision Software Engineer

Apr 2018 – Jul 2018 · 3 mos · Gurgaon, India

  • Increased processing speed by 40% by implementing batch processing for a custom real-time object detector.
  • Quantized the 32-bit model to 16-bit and 8-bit variants to reduce memory-access cost and increase compute efficiency; trained and evaluated the models on the COCO dataset.
  • Used computer-vision techniques to develop a new, fully integrated feature in the company's security solution, which was integrated into the product after rigorous long-term testing at multiple customer locations.
Batch Processing · Model Quantization · Computer Vision · Machine Learning

Indian Institute of Technology, Delhi

3 roles

Learning Manipulation Task from Visual Demonstration: Computer Vision Researcher

Aug 2017 – Aug 2018 · 1 yr

  • Trained a KUKA KR5 robot to perform the task of arranging objects when sequences of the same task are provided as an input.
  • Employed a Single Shot MultiBox Detector (SSD) to detect all objects in the frame in 2D space.
  • Calculated the location and orientation of objects from the 3D point cloud using the Point Cloud Library (PCL).
KUKA KR5 Robot Training · Single Shot MultiBox Detector · Point Cloud Library · Robotics · Computer Vision

Robotics Club, IIT Delhi

Oct 2016 – Mar 2017 · 5 mos

  • Built PCBs for sensor-based object detection and for integrating an Arduino MegaADK, Sabertooth motor driver, encoders, and a Raspberry Pi 2 into a single module.
  • Worked on face recognition and human interaction, and improved the battery efficiency of the overall system.
  • Implemented the Large-Scale Direct (feature-less) monocular SLAM algorithm for 3D reconstruction of the environment.
  • Built a maze-solving, two-wheel-drive mobile robot using a line sensor, a camera, and PID-based control on a Raspberry Pi 2.
  • Received a 6,000 USD seed grant from IIT Delhi for project development.
PCB Design · Face Recognition · SLAM Algorithm Implementation · Robotics · Computer Vision

Self-Driving Car: Computer Vision Researcher

Jul 2016 – Apr 2018 · 1 yr 9 mos

  • Developed the computer-vision module for a Level 2 self-driving car.
  • Automated traffic-sign detection and lane detection, and programmed the vehicle to avoid bumps, using TensorFlow.
  • Selected among the top 13 teams in the Mahindra Rise Driverless Challenge out of 600+ applicants in 2017.
Computer Vision Module Development · Traffic Sign Detection · Lane Detection · Computer Vision · Machine Learning

Education

Indian Institute of Technology, Delhi

Bachelor of Technology (B.Tech.)

Northeastern University

Master of Science - MS
