Naresh Dulam

CTO

McKinney, Texas, United States · 17 yrs 4 mos experience
AI Enabled · AI/ML Practitioner

Key Highlights

  • Led enterprise-scale Data Lakes and AI solutions.
  • Published books on Generative AI and data technologies.
  • Recognized mentor in AI and technology communities.

Skills

Core Skills

Cloud Platforms · Big Data & AI/ML Technologies · AI Frameworks & Libraries

Other Skills

Linux Kernel · Critical Thinking · Key Performance Indicators · Data Strategies · Enterprise Software · Data Privacy · Software Projects · Implementation Experience · Master Data · Business Insights · Leadership · Python (Programming Language) · Object Oriented Design · Social Influence · AWS SageMaker

About

**Opinions expressed are solely my own and do not reflect the views or opinions of my employer.**

As the Senior Vice President of Software Engineering, I lead the design and implementation of enterprise-scale Data Lakes and Generative AI solutions across diverse cloud platforms and cutting-edge technologies. With a robust background in Data Engineering, AI/ML Architecture, Cloud Infrastructure, and Enterprise Software Development, I deliver scalable, cost-effective, and impactful solutions.

Professional Highlights:

  • Author: Published books and technical articles sharing insights on Generative AI and data technologies. https://play.google.com/store/books/details?id=4vE6EQAAQBAJ
  • Technical Reviewer: Critically evaluating books and technical materials to ensure excellence.
  • Global Public Speaker: Delivering keynotes and talks at prominent AI and technology forums.
  • Top-Rated Mentor: Recognized for guiding and empowering individuals on platforms like Topmate and ADPList.
  • AI Ambassador: Advocating for responsible AI innovation and its transformative potential at AI Frontier Network.

Core Skills and Expertise:

  • Cloud Platforms: Proficient in AWS and Azure services, enabling efficient cloud-based architectures.
  • Big Data & AI/ML Technologies: Hands-on with Snowflake, Databricks, Spark, and TensorFlow.
  • Programming & Tools: Skilled in Java, Python, Docker, Kubernetes, Terraform, and Airflow.
  • AI Frameworks & Libraries: Expertise in LangChain, LlamaIndex, Scikit-learn, PyTorch, and TensorFlow.
  • Advanced AI Techniques: Experienced in supervised learning, XGBoost, LightGBM, linear regression, random forest, and SVM.

Vision & Passion: I thrive on collaborating with cross-functional teams to optimize data architecture and storage solutions, creating robust data ecosystems that are both scalable and cost-effective. Beyond technology, I am deeply passionate about mentorship and knowledge sharing, fostering growth in the tech community.
I am committed to staying at the forefront of emerging technologies, continuously learning, and driving data-driven innovation in every initiative.

Experience

Global Alliance for Artificial Intelligence

Community Delegate (Volunteer – Non-Operational)

Jul 2025 – Present · 8 mos · Remote

  • As part of a global community initiative, I contribute to advancing responsible, transparent, and ethical AI adoption through thought leadership, public engagement, and strategic discussions.
  • My role involves participation in global forums, policy dialogues, and mentorship initiatives aimed at strengthening AI awareness, governance, and education.
  • This is a voluntary, unpaid, non-operational role focused on advocacy, community building, and knowledge sharing.
  • I do not engage in technical execution, paid services, or commercial activities.

Houston Christian University

Community Contributions/Volunteer - Strategic Artificial Intelligence Advisory Member

May 2025 – Present · 10 mos · United States · Remote

  • An unpaid, voluntary advisory role supporting the university’s strategic direction in Artificial Intelligence education and ethics.
  • My contributions focus on:
  • Providing high-level strategic guidance on AI curriculum vision
  • Offering guest lectures and academic mentorship
  • Supporting responsible AI discussions across academia and industry
  • I do not perform operational, paid, or research execution work.

CAIO Circle

Community Founding Contributor (Volunteer)

Jan 2025 – Present · 1 yr 2 mos · Dallas, Texas, United States · Remote

  • Active member and founding contributor to a regional community of Chief AI Officers and AI leaders.
  • Activities include:
  • Knowledge sharing through professional forums and podcast discussions
  • Community mentoring and thought leadership in enterprise AI strategy
  • Supporting collaboration among AI leaders across industry, academia, and startups
  • Role is strictly community-focused and voluntary.

JPMorgan Chase & Co.

2 roles

Senior Vice President of Software Engineering

Promoted

Oct 2019 – Present · 6 yrs 5 mos · Plano, Texas, United States · Hybrid

  • Built WM Risk & Control Analytics, a next-generation data platform ("Controls Data as a Product"), from the ground up to replace the legacy data warehouse and Hadoop systems as part of modernization, with a seamless user experience for 1,500+ business users. This helped the organization reduce vendor license and consulting costs. Built on data mesh principles and open-source cloud-native technologies, the new platform reduced storage costs and eliminated data duplication.
  • Envisioned, architected, and operationalized the WM Data Analytics Hybrid Platform to run MIS reporting, data science, and analytics workloads for the Private Bank.
  • Operated a data lake with 1.5 PB capacity supporting more than 2,000 on-prem users (PB/IPB data), plus a cloud data lake for market/index data serving 200+ data scientists for AI/ML use cases.
  • Designed and built a generic data ingestion pipeline framework, enabling faster data onboarding onto the platform.
  • Architected a self-serve design pattern for both ingestion and consumption; it was adopted as a standard by other technology teams, reducing build time across lines of business.
  • JPMC Advanced Data Ecosystem is a firm-wide public-cloud data platform composed of reusable components that lines of business use to publish machine-learnable datasets to the public cloud, making them available for AI/ML, analytics, and BI use cases.
  • As part of cloud migrations, every application team had been building its own cloud pipelines, duplicating work; a shared framework adhering to JPMC security standards was needed.
  • Architected and developed cloud pipelines that lines of business reuse in their cloud migration journeys.
  • Responsible for building containerized applications for both data ingestion to the cloud and transformation.
  • Migrated existing containerized applications from the private cloud to public cloud AWS (use case: Airflow with OIDC authentication).
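The "generic data ingestion pipeline framework" mentioned above can be sketched as a config-driven registry of source readers. All names here (`register_source`, `IngestionPipeline`, the config keys) are invented for illustration, not the actual JPMC framework:

```python
# Minimal sketch of a config-driven ingestion framework: each source type
# registers a reader function, and a pipeline is assembled from a
# declarative config. All names are illustrative, not production code.
from dataclasses import dataclass
from typing import Callable, Dict, Iterable, List

READERS: Dict[str, Callable[[dict], Iterable[dict]]] = {}

def register_source(kind: str):
    """Decorator that registers a reader for a source type (e.g. 'csv', 'api')."""
    def wrap(fn):
        READERS[kind] = fn
        return fn
    return wrap

@register_source("inline")
def read_inline(cfg: dict) -> Iterable[dict]:
    # Trivial reader used for demonstration; a real one would read
    # files, queues, or database extracts.
    return cfg["rows"]

@dataclass
class IngestionPipeline:
    config: dict

    def run(self) -> List[dict]:
        reader = READERS[self.config["source"]["kind"]]
        rows = reader(self.config["source"])
        # Apply declarative column renames before landing the data.
        renames = self.config.get("renames", {})
        return [{renames.get(k, k): v for k, v in r.items()} for r in rows]

pipeline = IngestionPipeline({
    "source": {"kind": "inline", "rows": [{"acct": 1, "amt": 10}]},
    "renames": {"acct": "account_id"},
})
print(pipeline.run())  # one normalized record
```

Onboarding a new source then means registering one reader and writing a config, rather than building a new pipeline per team, which is the build-time saving the bullets describe.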

Big Data Developer

Apr 2018 – Apr 2019 · 1 yr · Plano

  • During the COVID pandemic, sudden market changes meant the front office team wanted to do far more analysis: bring in their own data and identify patterns using ML and AI algorithms. Although we had the data on our platform, they could only leverage it in limited ways because they lacked the right tools and setup. We designed and built sandboxes in a matter of weeks to help our advisors deliver for their clients.
  • Monetized data and made it available to business quants and analysts via a self-serve platform.
  • A huge value-add for the analysts, helping them generate more revenue by providing targeted WM solutions.
  • Completed design, build, and rollout in less than 3 months, over multiple iterations.

Blue Cross and Blue Shield of Illinois, Montana, New Mexico, Oklahoma & Texas

Big Data Engineer

Apr 2019 – Oct 2019 · 6 mos · Richardson, Texas · Hybrid

  • The Provider Analytics platform supports third-party vendors in claim processing. This initiative gives users a 360-degree overview of claims and vendor consumption.
  • Implemented the data ingestion framework using Spark to consume XML and mainframe files.
  • Developed custom input formats to read ORC data from Hive ACID tables in Spark jobs.
  • Designed and developed physical and logical models for data used in different extracts.
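The XML-consumption step of an ingestion framework like the one above can be illustrated in miniature: flatten claim records from an XML document into plain dicts. The real job ran inside Spark; here the parsing logic is shown standalone, and the `<claim>` schema is invented:

```python
# Sketch: flatten <claim> records from an XML document into dicts.
# The element names and attributes are invented for illustration.
import xml.etree.ElementTree as ET

def parse_claims(xml_text: str):
    root = ET.fromstring(xml_text)
    out = []
    for claim in root.iter("claim"):
        out.append({
            "id": claim.get("id"),
            "provider": claim.findtext("provider"),
            "amount": float(claim.findtext("amount", "0")),
        })
    return out

sample = """<claims>
  <claim id="c1"><provider>P100</provider><amount>250.00</amount></claim>
  <claim id="c2"><provider>P200</provider><amount>75.50</amount></claim>
</claims>"""
records = parse_claims(sample)
print(records[0]["provider"])
```

In Spark this per-record logic would run inside a map over file splits; the flattening itself is the same.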

AT&T

Sr. Hadoop/Java Developer

Apr 2017 – Apr 2018 · 1 yr · Dallas-Fort Worth Metroplex

  • As a Senior Big Data Consultant, built data marts for different application teams across AT&T.
  • Migrated existing MapReduce jobs built on a vendor framework to open-source Spark jobs.
  • Made datasets from different sources available on Hadoop per ad-hoc customer requirements.
  • Developed a monitoring and alerting framework using Kafka Connect that operations teams use to monitor applications in production.
  • Built Sqoop jobs to bulk-load data into HBase for complex data types.
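A Sqoop-to-HBase bulk load like the one described above is typically invoked as a command line; a small builder keeps per-table options declarative. The flags shown are standard Sqoop import options, but the JDBC URL and table names are placeholders, not the actual AT&T configuration:

```python
# Sketch: assemble a Sqoop import command that bulk-loads an RDBMS table
# into HBase. Flags are standard Sqoop options; connection details and
# table names are placeholders.
import shlex

def sqoop_hbase_import(jdbc_url, table, hbase_table, column_family,
                       row_key, mappers=4):
    args = [
        "sqoop", "import",
        "--connect", jdbc_url,
        "--table", table,
        "--hbase-table", hbase_table,
        "--column-family", column_family,
        "--hbase-row-key", row_key,
        "--num-mappers", str(mappers),
    ]
    # Quote each token so the command is safe to paste into a shell.
    return " ".join(shlex.quote(a) for a in args)

cmd = sqoop_hbase_import("jdbc:oracle:thin:@db:1521/ORCL", "ORDERS",
                         "orders", "cf", "ORDER_ID")
print(cmd)
```

Generating the command from parameters rather than hand-editing it per table is what makes such loads repeatable across many source tables.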

Bank of America

Hadoop Engineer

Oct 2016 – Apr 2017 · 6 mos · Charlotte Metro

  • Hadoop as a Service (HAAS) was a new initiative at BoA to build one centralized cluster for all applications. As part of this initiative, all 53 Hadoop clusters across the bank were merged into the HAAS cluster; its configuration, resources, tools, and standards were defined by the HAAS Architecture Team.
  • As a key member of the Architecture Team, I had the opportunity to work on the tasks below. The biggest challenge on the platform was supporting multiple tenants on the clusters.
  • Evaluated frameworks such as Spark, Spark Streaming, Kafka, Flume, HBase, and other tools to empower tenants of the HAAS clusters.
  • Worked closely with Cloudera support to debug multiple P1/P2 issues on the clusters.
  • Analyzed dynamic resource pools (fair scheduler) in depth and proposed resource pools to handle varied workloads from different tenants.
  • Reviewed and recommended Hadoop cluster configurations for optimal utilization.
  • Defined standards and guidelines for Hadoop and Hive.
  • As part of a security tool review, evaluated Flume and defined workarounds for its security loopholes.
  • Configured Flume nodes with multiple flows; used Morphlines to convert JSON data to Avro.
  • Designed high availability for critical flows that cannot tolerate any outage.
  • Designed an Active-Active setup in which both the production and DR clusters are available at all times.
  • Ran an HDFS encryption POC to support PCI guidelines on the cluster.
  • Implemented a Spark POC using Scala as part of a tool acceptance test.
  • Evaluated multiple projects and provided consulting to internal application teams on implementing Big Data projects.
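Per-tenant resource pools for a multi-tenant cluster, as analyzed above, are expressed in YARN's fair-scheduler allocations file. A sketch of generating such a file, one queue per tenant; the element names follow the standard `fair-scheduler.xml` schema, while the tenant names and weights are invented:

```python
# Sketch: generate a YARN fair-scheduler allocations file with one queue
# (resource pool) per tenant, weighted by workload. <queue>, <weight>,
# and <maxRunningApps> are standard fair-scheduler elements; the tenant
# names and numbers are invented examples.
import xml.etree.ElementTree as ET

def build_allocations(tenants):
    root = ET.Element("allocations")
    for name, weight, max_apps in tenants:
        q = ET.SubElement(root, "queue", name=name)
        ET.SubElement(q, "weight").text = str(weight)
        ET.SubElement(q, "maxRunningApps").text = str(max_apps)
    return ET.tostring(root, encoding="unicode")

# A heavy ETL tenant gets 3x the cluster share of the ad-hoc tenant.
xml_out = build_allocations([("etl", 3.0, 20), ("adhoc", 1.0, 5)])
print(xml_out)
```

Weights give each tenant a proportional share of the cluster while letting idle capacity flow to whoever needs it, which is the point of fair scheduling over hard partitioning.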

T-Mobile

Hadoop developer

Feb 2016 – Oct 2016 · 8 mos · United States

  • Exported data using Sqoop from HDFS to Teradata on a regular basis.
  • Converted MapReduce/Pig jobs into Spark jobs; also transformed Hive jobs into Spark SQL.
  • Implemented a POC using Spark and Cassandra.
  • Wrote Hive queries for data analysis to meet business requirements (also created tables using HiveQL).
  • Created workflows to run multiple Hive and Pig jobs that run independently based on time and data availability.
  • Developed Pig Latin scripts for the analysis of semi-structured data; involved in debugging Pig scripts.
  • Worked closely with the data warehouse architect and business intelligence analysts to develop solutions.
  • Designed and developed the Hive data model, loaded it with data, and wrote Java UDFs for Hive.
  • Developed Pig Latin scripts and used Pig as an ETL tool for transformations, event joins, and filtering.
  • Designed and developed Sqoop scripts to extract data from relational databases into Hadoop.

Oracle

Hadoop Developer

Jan 2013 – Jan 2016 · 3 yrs

  • Developed multiple MapReduce jobs in Java for data cleaning and preprocessing.
  • Imported and exported data into HDFS and Hive using Sqoop jobs.
  • Extracted files from Oracle DB through Sqoop, placed them in HDFS, and processed them; developed Hive queries and Hive UDFs according to business requirements.
  • Developed Pig Latin scripts for the analysis of semi-structured data; involved in debugging Pig scripts.
  • Involved in designing the data models for Hive/Cassandra tables.
  • Troubleshot existing jobs while migrating them to a higher version.
  • Developed Pig scripts to preprocess data before moving it into final tables.
  • Loaded and transformed large sets of structured, semi-structured, and unstructured data.
  • Supported MapReduce programs running on the cluster.
  • Wrote shell scripts to monitor the health of Hadoop daemon services and respond to any warning or failure conditions.
  • Involved in loading data from the UNIX file system to HDFS, configuring Hive, and writing Hive UDFs.
  • Used Java and MySQL day to day to debug and fix issues with client processes.
  • Managed and reviewed log files.
  • Implemented partitioning, dynamic partitions, and buckets in Hive.
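The Hive partitioning and bucketing pattern in the last bullet has a standard HiveQL shape: a table partitioned by a date column and clustered into buckets, plus the session setting a dynamic-partition insert needs. The statements below are generated as strings; the table and column names are invented examples:

```python
# Sketch: HiveQL for a table partitioned by date and clustered into
# buckets, plus a dynamic-partition insert. The DDL/DML syntax is
# standard HiveQL; the table and column names are invented.
def partitioned_table_ddl(table, cols, part_col, bucket_col, buckets):
    col_list = ", ".join(f"{c} STRING" for c in cols)
    return (
        f"CREATE TABLE {table} ({col_list}) "
        f"PARTITIONED BY ({part_col} STRING) "
        f"CLUSTERED BY ({bucket_col}) INTO {buckets} BUCKETS "
        f"STORED AS ORC"
    )

def dynamic_partition_insert(table, src, cols, part_col):
    # The partition column must come last in the SELECT list so Hive
    # can route each row to its partition dynamically.
    select = ", ".join(cols + [part_col])
    return (
        "SET hive.exec.dynamic.partition.mode=nonstrict;\n"
        f"INSERT OVERWRITE TABLE {table} PARTITION ({part_col}) "
        f"SELECT {select} FROM {src}"
    )

ddl = partitioned_table_ddl("events", ["user_id", "action"], "dt", "user_id", 16)
ins = dynamic_partition_insert("events", "staging_events",
                               ["user_id", "action"], "dt")
print(ddl)
print(ins)
```

Partitioning prunes whole date directories at query time, while bucketing on `user_id` spreads rows evenly within each partition and speeds up joins on that key.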

Infor

Software Engineer

Sep 2008 – Dec 2012 · 4 yrs 3 mos

  • Oracle Certified Java Professional with over 8 years of experience in Information Technology and a wide range of progressive experience in product specification, design, analysis, development, testing, documentation, coding, and implementation of business technology solutions.
  • Well experienced in training, mentoring, and motivating members of my own and other teams to achieve company goals.
  • Worked with development and maintenance teams on Oracle Fusion product development.
  • Performed admin tasks such as installing and configuring the WebLogic server and the Fusion tech stack for Fusion Applications.
  • Well-versed in SQL/PL-SQL.
  • Good exposure to Big Data technologies like Hadoop.

Education

Jyothisamithi Institute of Technological Science

Bachelor's degree — Computer Science
