
Harsh Verma

CTO

Redmond, Washington, United States · 16 yrs 6 mos experience

Key Highlights

  • Expert in metadata scalability and data governance.
  • Led cross-functional teams to deliver enterprise-scale solutions.
  • Innovative architect with a focus on performance optimization.

Skills

Core Skills

Distributed Systems · Data Governance · Software Development · Data Management · System Architecture · Search Technologies

Other Skills

Metadata Scalability · Schema Management · Cross-functional Collaboration · Performance Optimization · Observability · Graph Query Engine · Zero Downtime Upgrade · Technical Leadership · Graph Database Management · Microservices Architecture · Configuration Management · Feature Development · AWS · Web Development · Search Engine Optimization

About

With a strong foundation in distributed systems, metadata systems, and enterprise-scale data solutions, I focus on building resilient, scalable, and performant architectures. My approach involves:

  • Breaking down complex, ambiguous problems into scalable solutions
  • Collaborating cross-functionally with engineering, product, UX, and business teams
  • Driving technical direction and influencing stakeholders on strategic initiatives

I am passionate about solving the next frontier of metadata scalability, data governance, and schema management.

Experience

16 yrs 6 mos
Total Experience
5 yrs 8 mos
Average Tenure
5 yrs 2 mos
Current Experience

Salesforce

Software Engineering Architect

Mar 2021 – Present · 5 yrs 2 mos

  • I drive core innovations in metadata scalability, data governance, and schema management within Data Cloud to support enterprise-scale workloads. Leading strategic initiatives, I collaborate closely with engineering, product, and platform teams to deliver scalable, performant, and future-ready solutions.
  • Key Contributions & Leadership:
  • Metadata Scalability: Designed and executed a roadmap to scale metadata handling to 100K+ entities per tenant. Partnered with testing teams to establish performance baselines and identify optimization areas. Collaborated with UX and API teams to enhance scalability through efficient queries, pagination, and lazy loading, reducing performance bottlenecks. Implemented caching layers to achieve sub-200ms API response times (90th percentile) while maintaining data correctness via proper invalidation and access control. Enhanced reliability and reduced latency for metadata synchronization using bulk processing and asynchronous workflows.
  • Schema Management & Data Evolution: Led the design and rollout of multi-currency support across Data Cloud, coordinating dependencies among 20+ teams for a successful deployment. Expanded schema capabilities by integrating complex data types such as Geospatial, ensuring cross-platform interoperability and contributing to open-source specifications (Parquet, Iceberg). Collaborated with query teams to refine storage layers, leveraging data statistics and indices to deliver high-performance geospatial functionality.
  • Data Governance & Billing Accuracy: Partnered with consumption-based billing and observability teams to improve transparency in billing and usage reporting. Developed comprehensive observability data and insightful reports, enabling sales and account executives to clearly articulate cost attributions to customers, supporting continued contracts valued at over $100 million.
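The pagination and caching pattern described above can be sketched in a few lines. This is an illustrative Python sketch with hypothetical names, not Salesforce code: an LRU cache with explicit invalidation, plus cursor-based pages so clients lazily load large entity sets instead of one huge response.

```python
from collections import OrderedDict

class MetadataCache:
    """Tiny LRU cache with explicit invalidation -- sketch of keeping
    metadata reads fast while preserving correctness on writes."""
    def __init__(self, capacity=1000):
        self.capacity = capacity
        self._store = OrderedDict()

    def get(self, key):
        if key in self._store:
            self._store.move_to_end(key)  # mark as recently used
            return self._store[key]
        return None

    def put(self, key, value):
        self._store[key] = value
        self._store.move_to_end(key)
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used

    def invalidate(self, key):
        self._store.pop(key, None)  # drop stale entry when the source changes

def fetch_entities_page(all_entities, cursor=0, page_size=100):
    """Cursor-based pagination: return one page and the next cursor
    (None when exhausted), so 100K+ entities are loaded lazily."""
    page = all_entities[cursor:cursor + page_size]
    next_cursor = cursor + page_size if cursor + page_size < len(all_entities) else None
    return page, next_cursor
```

Explicit invalidation on writes is what keeps a cache like this compatible with the "data correctness" requirement mentioned above.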
Metadata Scalability · Data Governance · Schema Management · Cross-functional Collaboration · Performance Optimization · Distributed Systems

Tableau Software

2 roles

Staff Software Engineer

Jan 2018 – Mar 2021 · 3 yrs 2 mos · Greater Seattle Area

  • In my role as Tech Lead for Tableau's Data Catalog product, I led the team from concept to launch, partnering closely with the product team to define strategic roadmaps for continuous enhancements.
  • Key Contributions & Leadership:
  • Designed and developed a high-performance graph query engine leveraging the TinkerPop stack, powering Data Lineage within Tableau's Data Catalog. This significantly enhanced Tableau Server's data management capabilities, fostering greater trust, visibility, and data discoverability.
  • Led the architecture and implementation of a Zero Downtime Graph Upgrade solution, enabling seamless, in-place upgrades of the graph database. This eliminated downtime and data availability issues previously experienced by customers during major releases.
  • Facilitated rapid onboarding of new team members by establishing an annual two-day bootcamp. This training, involving multiple subject matter experts, provided comprehensive technical insights into the product's core architecture and functionalities.
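The core operation behind a data-lineage query is a traversal over "feeds into" edges. The real engine ran on the Apache TinkerPop stack; the sketch below is plain illustrative Python showing the underlying idea as a breadth-first walk.

```python
from collections import deque

def downstream_lineage(edges, source):
    """Breadth-first walk over directed (upstream, downstream) edges,
    returning everything reachable from `source` in visit order --
    an illustrative stand-in for a lineage graph query."""
    adjacency = {}
    for upstream, downstream in edges:
        adjacency.setdefault(upstream, []).append(downstream)
    seen, order = {source}, []
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nxt in adjacency.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                order.append(nxt)
                queue.append(nxt)
    return order
```

In TinkerPop terms this corresponds to a repeated `out()` traversal from a source vertex; the BFS form makes the reachability semantics explicit.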
Graph Query Engine · Zero Downtime Upgrade · Technical Leadership · Software Development · Data Management

Senior Software Developer

Dec 2014 – Dec 2017 · 3 yrs · Greater Seattle Area

  • Tech Lead on the team, re-architecting the system: breaking it into microservices spread across multiple geographic regions and running on AWS.
  • Re-designed the configuration management pipeline in Tableau Server to support per-service isolation.
  • Delivered many features to the product, working on them end-to-end:
  • Migrated blob data out of the PostgreSQL database into S3, reducing database size from around 800 GB to less than 200 GB.
  • Built the Activity Feed on Tableau Public, letting users follow the activity of people in their network; introduced Cassandra as the NoSQL data store in the stack.
  • Enabled Google Sheets–connected visualizations on Tableau Public that stay updated with the latest data — a first for Tableau Public, where all other visualizations' data is static.
  • Developed a testing framework to run tests conditionally based on the capabilities of the environment under test.
Microservices Architecture · Configuration Management · Feature Development · Software Development · System Architecture

Microsoft

Software Development Engineer

Oct 2009 – Dec 2014 · 5 yrs 2 mos

  • A few projects I worked on with the Bing Relevance team:
  • 1. Locally relevant deeplinks: Introduced locally relevant deeplinks (secondary links shown under the top web result on the search results page) on Bing, significantly improving the user experience. This involved building a model to identify the location of interest for a link and ranking results by distance from the user's location.
  • 2. Deeplink titles: Improved the titles of deeplinks, driving the defect rate down by 50% and increasing the share of links with titles by 6%.
  • A few projects I worked on with the Bing Platform team:
  • 1. Workflow Debugger: Developed a web app for debugging nodes in a complex multi-node, highly parallel execution workflow, based on metadata associated with each node.
  • 2. Search whole-page composition: Ported a monolithic service to a new framework, splitting it into components, which greatly improved development agility across the whole division. Individually, designed and developed a component to quickly and easily create associations between multiple search results.
  • 3. Query Statistics Serving: Developed the next-generation pipeline for serving query statistics, doubling the pipeline's data-serving capacity.
  • 4. Bing–Yahoo Integration: Developed services for transferring terabytes of crawl data between Bing and Yahoo, and for injecting data provided by Yahoo into the Bing index. Set up web servers using IIS and implemented the transfer protocol using ASP.NET and Bing's proprietary distributed storage and computing solutions.
  • 5. Crawler Dashboard: Designed and developed a dashboard for Bing's web crawling service, providing day-to-day monitoring. Analyzed billions of crawler-generated records to formalize metrics for health and performance measurement, and defined and computed the crawler's theoretical maximum efficiency.
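The distance-based ranking behind locally relevant deeplinks can be sketched with the standard haversine great-circle formula — the field names and structure here are hypothetical, showing only the ranking idea.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius ~6371 km

def rank_deeplinks(links, user_lat, user_lon):
    """Order candidate deeplinks by distance from the user's location
    (illustrative: a real ranker would blend distance with relevance)."""
    return sorted(links,
                  key=lambda l: haversine_km(l["lat"], l["lon"],
                                             user_lat, user_lon))
```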
Web Development · Search Engine Optimization · Data Transfer Services · Software Development · Search Technologies

Microsoft Research

Research Intern

May 2008 – Jul 2008 · 2 mos

  • The aim of the research was to study NAND flash properties and experiment with various file systems on it to optimize the performance and lifetime of the device.
  • Set up a test bed for measuring the performance of NAND flash.
  • Developed a disk-trace collection tool to replay load.
  • Exposed APIs for the underlying NAND device driver to enable experimenting with various file systems.
NAND Flash Properties · File System Optimization

Education

Indian Institute of Technology, Kanpur

Bachelor of Technology — Computer Science and Engineering

Jan 2005 – Jan 2009

