Chaitanya Reddy — Data Engineer
Highly skilled Data Engineer with 14+ years of IT experience, including 10+ years specializing in data engineering. Proven expertise in designing, building, and optimizing scalable, high-performance data architectures across Big Data, cloud, and real-time analytics.

Proficient in Big Data frameworks such as Apache Hadoop (HDFS, Hive, MapReduce), Apache Spark (PySpark, Spark Streaming), Apache Flink, and Elasticsearch/OpenSearch, with extensive experience on distributed computing platforms including Databricks, Amazon EMR, Google Dataflow, and Azure Synapse. Expert in Python (6+ years), Java (7+ years), and Scala (9+ years), with a strong foundation in ETL/ELT design, data pipeline development, and data modeling (dimensional modeling, OLAP/OLTP, star/snowflake schemas).

Extensive hands-on experience in real-time and batch data processing using Apache Kafka, Spark Streaming, Apache Pulsar, and Amazon Kinesis, with strong pipeline orchestration skills in Apache Airflow, Prefect, Dagster, Apache NiFi, Azure Data Factory (ADF), and Apache Oozie. Proficient in SQL and NoSQL databases, including MySQL, PostgreSQL, Oracle, SQL Server, Apache HBase, MongoDB, and DynamoDB, with expertise in data warehousing technologies such as Snowflake, BigQuery, Amazon Redshift, and Azure Synapse Analytics. Skilled in Lakehouse architectures with Delta Lake, Apache Iceberg, and Apache Hudi.

Strong background in Cloud Data Engineering across AWS (EMR, Glue, S3, Lambda, Redshift, Kinesis), Azure (Data Factory, Databricks, Synapse, Microsoft Fabric), and GCP (BigQuery, Dataflow, Pub/Sub), with hands-on experience in Infrastructure as Code (IaC) using Terraform, AWS CloudFormation, and Azure Bicep. Deep knowledge of DataOps and DevOps best practices, including CI/CD pipelines (GitHub Actions, Jenkins, GitLab CI), data governance (GDPR, CCPA, HIPAA compliance), data security, metadata management, and data quality frameworks (Great Expectations, dbt).
Experience with containerization and orchestration using Docker, Kubernetes (EKS, AKS, GKE), along with monitoring and logging solutions such as Grafana, Prometheus, Datadog, AWS CloudWatch, ELK Stack, and Splunk. Familiarity with serverless computing & event-driven architectures, leveraging AWS Lambda, Azure Functions, GCP Cloud Functions, and message-driven workflows using Kafka, AWS EventBridge, SNS, and SQS. Knowledgeable in MLOps and AI/ML integration, working with ML pipelines (MLflow, TFX, SageMaker) and Feature Stores (Feast, Databricks Feature Store) for machine learning-powered data solutions.
Experience: 14 yrs 3 mos
Skills
- Data Engineering
- Cloud Data Engineering
- Cloud Migration
- AI/ML Solutions
- Full Stack Development
- Cloud Solutions
Career Highlights
- Over 14 years of IT experience with a focus on data engineering.
- Expert in designing scalable data architectures across multiple platforms.
- Strong background in Cloud Data Engineering and DataOps best practices.
Work Experience
JLL Technologies
Lead Data Engineer & AI/ML Solutions Architect – Azure | AWS | ADB | Spark | MS Fabric | Snowflake (3 yrs 5 mos)
Synechron
Senior Data Engineer & Cloud Migration Specialist – GCP | AWS | Azure | Spark | Microservices (1 yr 5 mos)
Monetary Authority of Singapore (MAS)
Senior Data Engineer & AI/ML Solutions Specialist – Cloudera | AWS | Spark | Java | Microservices (1 yr 4 mos)
IBM
Big Data Engineer & Full Stack Developer – AWS | Spark | Java | Microservices (3 yrs 7 mos)
Autodesk
Data Engineer (1 yr 2 mos)
Etisbew Technology Group, Inc. (A CMMI Level 3 Company)
Software Engineer (3 yrs 4 mos)
Education
Bachelor's degree, University of Madras