Rameshwari Jadhav

Data Engineer

Los Angeles, California, United States · 4 yrs 1 mo experience

Key Highlights

  • Proficient in Python, SQL, and data engineering.
  • Experience with AI and machine learning technologies.
  • Strong work ethic and exceptional teamwork abilities.

Skills

Core Skills

Data Engineering · Cloud Infrastructure · Artificial Intelligence · Machine Learning

Other Skills

ETL pipelines · Dataflow · Cloud Composer · BigQuery · SQL · Python · AI · TensorFlow · PyTorch · GCP · Azure · Apache Airflow · Data migration · Data transformation · AWS

About

I am seeking a position that will allow me to leverage my skills as an administrator while further developing my technical expertise. I am passionate about programming in Python, SQL, and Java, and have recently started expanding my knowledge of C++ and Data Structures & Algorithms (DSA). As a motivated fresher, I possess a strong work ethic, exceptional teamwork abilities, and the capability to manage multiple tasks simultaneously.

In addition to my core programming skills, I am actively enhancing my capabilities in the rapidly evolving fields of Artificial Intelligence (AI) and Machine Learning (ML). I have experience working with Large Language Models (LLMs) and Generative AI (GenAI) technologies, and I am familiar with platforms such as Vertex AI for building and deploying AI solutions. I am eager to apply my skills in data engineering, model development, and cloud platforms (AWS, Azure, etc.) to contribute to impactful and innovative projects.

Experience

4 yrs 1 mo
Total Experience
2 yrs
Average Tenure
2 yrs
Current Experience

Onix

Data Engineer

May 2024 – Present · 2 yrs · Tampa, Florida, United States · Remote

  • Building and validating ETL pipelines using Dataflow, Cloud Composer, and BigQuery
  • Designing data quality checks, row-count validation, and schema mapping logic
  • Automating metadata extraction from legacy scripts using custom Python parsers
  • Writing optimized SQL transformations for sales, inventory, and product catalogs
  • Developing audit logging and row-level lineage for pipeline observability
  • Building an export-import framework using GCS as an intermediary layer
  • Working with Looker and Data Studio to transition dashboards from legacy BI tools
  • Creating delta-load orchestration using Airflow and GCS change tracking
ETL pipelines · Dataflow · Cloud Composer · BigQuery · SQL · Python · +2
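The row-count validation described above can be sketched in plain Python. This is a minimal illustration, not the actual pipeline code: the table names and counts are hypothetical, and in practice the counts would come from the source warehouse and from BigQuery `COUNT(*)` queries.

```python
def validate_row_counts(source_counts, target_counts, tolerance=0.0):
    """Compare per-table row counts between a source system and the target.

    Returns a list of (table, source, target) tuples for tables whose
    counts diverge by more than the allowed tolerance (a fraction of
    the source count).
    """
    mismatches = []
    for table, src in source_counts.items():
        tgt = target_counts.get(table, 0)
        if abs(src - tgt) > src * tolerance:
            mismatches.append((table, src, tgt))
    return mismatches

# Illustrative counts only.
source = {"sales": 1_000_000, "inventory": 52_340, "product_catalog": 9_800}
target = {"sales": 1_000_000, "inventory": 52_100, "product_catalog": 9_800}

print(validate_row_counts(source, target))  # flags only the inventory mismatch
```

A real check would typically run per partition or per load batch so a mismatch points at the specific delta that failed, rather than the whole table.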

California State University, Northridge

Graduate Student Assistant (Software Engineer)

Aug 2023 – May 2024 · 9 mos · Los Angeles, California, United States · On-site

  • As a Teaching Assistant for graduate-level courses in Artificial Intelligence, Machine Learning, and Data Science, I supported academic and research activities focused on advanced technologies shaping today’s digital world.
  • 1. AI & Machine Learning Support
  • Mentored students on supervised and unsupervised learning algorithms, including decision trees, SVM, k-means, and ensemble methods.
  • Provided 1-on-1 guidance on model development, hyperparameter tuning, and evaluation techniques using tools such as Scikit-learn, TensorFlow, and PyTorch.
  • 2. Deep Learning & NLP
  • Led lab sessions covering neural network architectures including CNNs, RNNs, and Transformers, with real-world applications in image classification and language modeling.
  • Guided projects involving sentiment analysis, topic modeling, and text classification using NLP libraries like NLTK, spaCy, and Hugging Face Transformers.
  • 3. Cloud Computing & Big Data Integration
  • Helped students deploy ML models and data pipelines using Google Cloud Platform (Vertex AI, BigQuery) and Azure (Data Factory, Databricks).
  • Assisted with labs focused on scalable ML, AutoML, and serverless architecture using tools like Apache Beam, Dataflow, and Cloud Composer.
  • 4. Capstone & Research Projects
  • Actively reviewed and advised final-year AI projects, ensuring proper application of ethical AI principles, reproducibility, and compliance with data privacy standards (GDPR/CCPA).
  • 5. Skills Developed & Shared
  • Python, SQL, TensorFlow, PyTorch, Hugging Face, GCP, Azure, Apache Airflow, BigQuery, Docker, Git, NLP, Data Visualization, and responsible AI practices.
AI · Machine Learning · Python · SQL · TensorFlow · PyTorch · +5
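As one example of the unsupervised-learning material from those lab sessions, the core k-means loop can be written in a few lines of pure Python. This is a teaching sketch on toy 1-D data, assuming Euclidean distance; the actual labs used Scikit-learn.

```python
import random

def kmeans_1d(points, k, iters=100, seed=0):
    """Tiny 1-D k-means: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Keep a centroid in place if its cluster came up empty.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

data = [1.0, 1.2, 0.8, 10.0, 10.5, 9.5]
print(kmeans_1d(data, 2))  # centroids near 1.0 and 10.0
```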

Datametica

2 roles

Data Engineer II

Promoted

Jul 2022 – Aug 2023 · 1 yr 1 mo · Pune, Maharashtra, India

  • At Datametica, I played a pivotal role in architecting and implementing large-scale data migration and transformation solutions across cloud platforms including Google Cloud Platform (GCP), Amazon Web Services (AWS), and Microsoft Azure. I contributed to multiple end-to-end data engineering projects in highly regulated environments, ensuring scalability, efficiency, and compliance.
  • 1. Google Cloud Platform (GCP)
  • Led the migration of enterprise data warehouses from Teradata and Hive to BigQuery, ensuring schema integrity and performance optimization.
  • Designed and deployed Apache Airflow DAGs using Cloud Composer to orchestrate ETL workflows.
  • Utilized Dataflow (Apache Beam) for real-time and batch data processing pipelines.
  • Built monitoring dashboards to track data quality and pipeline performance.
  • 2. Amazon Web Services (AWS)
  • Implemented ELT pipelines using AWS Glue, Lambda, and Athena to enable scalable data ingestion and transformation from S3-based data lakes.
  • Developed serverless jobs for data cleansing, enrichment, and aggregation using Step Functions and Glue Jobs.
  • Integrated Redshift with downstream analytics tools like Tableau and Power BI for BI reporting.
  • 3. Microsoft Azure
  • Built enterprise-grade pipelines using Azure Data Factory (ADF) to extract and transform data across SQL and NoSQL sources.
  • Leveraged Databricks on Azure for distributed processing and machine learning model integration.
  • Implemented data validation and transformation logic using PySpark and integrated with Azure SQL Database and Blob Storage for persistence.
  • 4. Core Responsibilities
  • Engineered robust data models for reporting, forecasting, and ML pipelines using star/snowflake schema and partitioning strategies.
  • Developed and optimized complex SQL queries across distributed datasets for performance improvement (~40% reduction in processing time).
  • Ensured data governance, version control, and end-to-end documentation to meet compliance and audit requirements.
Data migration · Data transformation · GCP · AWS · Azure · Apache Airflow · +4
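The star-schema modeling mentioned above can be illustrated with a self-contained SQLite example. The table and column names here are hypothetical, chosen only to show the shape of the design: a central fact table with foreign keys into dimension tables, queried by joining fact to dimensions and aggregating.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE dim_date    (date_id    INTEGER PRIMARY KEY, day  TEXT);
    CREATE TABLE fact_sales  (
        product_id INTEGER REFERENCES dim_product(product_id),
        date_id    INTEGER REFERENCES dim_date(date_id),
        amount     REAL
    );
    INSERT INTO dim_product VALUES (1, 'widget'), (2, 'gadget');
    INSERT INTO dim_date    VALUES (1, '2023-01-01'), (2, '2023-01-02');
    INSERT INTO fact_sales  VALUES (1, 1, 10.0), (1, 2, 15.0), (2, 1, 7.5);
""")

# Typical star-schema query: join the fact table to a dimension and aggregate.
rows = conn.execute("""
    SELECT p.name, SUM(f.amount) AS total
    FROM fact_sales f
    JOIN dim_product p USING (product_id)
    GROUP BY p.name
    ORDER BY total DESC
""").fetchall()
print(rows)  # [('widget', 25.0), ('gadget', 7.5)]
```

In BigQuery the same shape would additionally be partitioned (e.g. by date) and clustered so that queries scan only the relevant slices of the fact table.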

Data Engineer I

Jul 2021 – Aug 2022 · 1 yr 1 mo · Pune, Maharashtra, India

  • Writing optimized and scalable SQL for petabyte-scale datasets
  • Automating data quality checks using Python and shell scripting
  • Collaborating in Agile teams with cross-functional stakeholders
  • Gaining exposure to tools like DBT, Airflow, and version control via Git
SQL · Python · DBT · Airflow · Git · Data Engineering
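The automated data quality checks mentioned above can be sketched as a small pure-Python pass over a batch of records. The field names and the two checks shown (missing required fields, duplicate keys) are illustrative assumptions, not the actual production checks.

```python
def quality_report(records, required_fields, key_field):
    """Run basic data quality checks over a batch of record dicts:
    flag missing/empty required fields and duplicate primary keys."""
    report = {"missing": [], "duplicates": []}
    seen = set()
    for i, rec in enumerate(records):
        for field in required_fields:
            if rec.get(field) in (None, ""):
                report["missing"].append((i, field))
        key = rec.get(key_field)
        if key in seen:
            report["duplicates"].append((i, key))
        seen.add(key)
    return report

# Illustrative batch with two deliberate defects.
batch = [
    {"id": 1, "sku": "A-1", "qty": 5},
    {"id": 2, "sku": "",    "qty": 3},   # missing sku
    {"id": 1, "sku": "B-9", "qty": 1},   # duplicate id
]
print(quality_report(batch, required_fields=["sku", "qty"], key_field="id"))
```

A shell-scripted wrapper would typically run such checks after each load and fail the job (or raise an alert) when the report is non-empty.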

Education

California State University, Northridge

Master's — Computer Engineering

Aug 2023 – Aug 2025

Dr. Babasaheb Ambedkar Technological University

Bachelor of Technology - BTech — Computer Science

Jan 2018 – Jan 2022
