JAGADISH DAMODARAN

Frisco, TX

Summary

Data engineering and cloud leader with 17+ years of specialized expertise implementing Data Lake solutions across Banking, Retail, Life Sciences, Healthcare, and Insurance, including 8+ years managing AWS Cloud and Big Data implementations spanning Hadoop, Hive, Glue, SageMaker, EC2, S3, Lambda, Athena, Kinesis Firehose, EMR, and Redshift. Proficient in the Hadoop framework, data modeling, data mining, and machine learning, with hands-on familiarity with HDFS, MapReduce, Hive, and Sqoop. As a design architect, spearheaded seamless migration projects from on-premises systems to the AWS Cloud, demonstrating adeptness in data architecture, ETL workflows, AWS services, and data warehousing solutions. Experienced with AWS services such as EC2, S3, RDS, Elastic Load Balancing, CloudWatch, and Kinesis Firehose for effective application design, with a track record of executing Data Warehouse migrations to Amazon Redshift. Proficient in ETL and pipeline development with tools such as Delta Lake, Apache Spark, and Python, and in building machine learning models with AWS SageMaker for predictive analytics and data-driven insights. Led large-scale enterprise data pipeline development with a resolute commitment to data quality and integrity. Adept at improving security, operational processes, and systems implementation procedures; proficient in disaster recovery planning, emergency response, project mapping, and training. Analytical IT manager skilled at driving efficiency and productivity through the development, delivery, and support of strategic plans, dedicated to guiding organizational adoption of modern technology, with a proven history of translating technical requirements into business solutions and strong communication skills that foster company rapport.

Overview

18 years of professional experience
4 certifications

Work History

Senior Software Engineering Manager

Cognizant Technology Solutions
Frisco, TX
06.2022 - Current
  • Led and managed a high-performing team of data engineers, machine learning specialists, and cloud architects
  • Architected, designed, and implemented end-to-end data solutions on AWS, leveraging services such as EMR, SageMaker, Redshift, and Lambda
  • Standardized ROM/LOE estimation procedures
  • Constructed annual R&D enhancement roadmaps to support application governance
  • Directed the development of data pipelines using tools such as Apache Spark, Kafka, and Airflow for seamless data processing and analysis
  • Spearheaded the implementation of machine learning models for predictive analytics, driving data-driven insights for strategic decision-making
  • Collaborated with cross-functional teams to define data requirements, ensuring alignment with business goals
  • Implemented cloud-based monitoring solutions, enhancing observability, and ensuring data quality and system performance
  • Managed and optimized data warehouse structures, enabling efficient storage, retrieval, and analysis of large datasets
  • Mentored team members, conducted performance reviews, and provided technical leadership and guidance.
  • Developed data security and disaster recovery procedures.
  • Facilitated meetings with engineering teams to discuss project progress and address any roadblocks.
  • Implemented automation tools to improve build times, reduce deployment cycles, optimize resource utilization and enhance productivity.
  • Reviewed and approved project plans prior to implementation.
  • Developed processes to deliver high-quality products and improve customer satisfaction.
  • Supervised projects to track feature requirements, milestones, resources, build releases and change requests.
  • Recruited engineering and technical staff and decided on project team formation.
  • Led and mentored software engineers on proper techniques and direction.
  • Examined metrics and prepared IT project progress reports.
  • Provided technical guidance to the team on coding standards, design patterns, code reviews and testing best practices.
  • Led software teams in optimizing test plans and improving test designs.
  • Developed and implemented software development processes and methodologies to ensure quality, scalability, and performance.
  • Architected design and development of high visibility customer specific applications.
  • Managed risks and worked with cross-functional team leads to produce deliverables and meet customer requirements.
  • Provided technical leadership by fostering innovation through research and experimentation while maintaining high standards of quality assurance throughout the development process.
  • Monitored system performance metrics to identify potential issues before they became problems.
  • Collaborated with stakeholders to define product requirements and develop project plans for successful delivery of initiatives.
  • Reviewed program plans to develop and coordinate activities.
  • Managed a team of 15 software engineers including hiring, training, mentoring, assigning tasks and evaluating performance.

Principal Engineer

Sun Life Financial
Kansas City, KS
12.2018 - 07.2022
  • Architected, designed, and developed data reporting structures for enterprise data warehouses and Data Lakes
  • Designed and developed a Python parser to auto-convert HiveQL code into equivalent PySpark (Spark SQL) jobs to leverage Spark capabilities on AWS EMR, reducing conversion time by over 90%
  • Reduced job workflow creation time by 80% through an automated Oozie workflow creation framework
  • Designed monitoring services to track active EMR clusters and SageMaker instances running across all regions
  • Used the Boto3 library and deployed the solution on Lambda
  • Configured business notifications via SES, scheduled via CloudWatch
  • Enabled Amazon SageMaker for risk modeling (machine learning), leveraging Spark on SageMaker both locally and through a remote EMR cluster
  • Persisted SageMaker Jupyter notebooks in S3 instead of a local EBS volume
  • Executed Big Data solutions on AWS and On-prem platforms
  • Provided technical expertise, crafting software design proposals
  • Utilized MapReduce, HDFS, Hive, and MongoDB effectively
  • Managed global DevOps projects under Agile and integrated Hadoop into traditional ETL for efficient data processing
  • Applied data science packages in Python/R and loaded aggregate data for cost-saving insights
  • Developed Spark scripts and UDFs for data aggregation and wrote data back into RDBMS via Sqoop
  • Designed Hive schemas using techniques like partitioning and bucketing.
  • Developed and implemented engineering strategies to improve production efficiency.
  • Developed ballpark estimates and monitored operating budget of several multi-year projects in various functional areas.
  • Conceptualized and developed solutions using simulations or modeling to support expansions and change.
  • Wrote performance requirements for product development or engineering projects.
  • Managed and directed material analysis actions to test for durability, brittleness, flexibility, and porosity.
  • Executed investigation processes to examine or assess functionality and practicability of engineering structures and elements.
  • Designed engineering experiments.
  • Implemented proven methods to optimize production while meeting financial goals and providing cost savings.
  • Communicated with clients and coworkers about analysis results.
  • Analyzed design or requirement information for equipment or systems.
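The multi-region EMR monitoring service mentioned above could look roughly like the sketch below: a Lambda handler that lists active clusters via Boto3 and emails a report through SES. The region list, sender/recipient addresses, and report format are illustrative assumptions, not the actual implementation.

```python
# Minimal sketch of a Lambda-style monitor for active EMR clusters across
# regions. Addresses and region names below are placeholders.

def summarize_clusters(clusters_by_region):
    """Build a plain-text report from {region: [cluster_name, ...]} data."""
    lines = []
    for region, names in sorted(clusters_by_region.items()):
        for name in names:
            lines.append(f"{region}: {name}")
    return "\n".join(lines) if lines else "No active EMR clusters."

def collect_active_clusters(regions):
    """Query each region for EMR clusters in RUNNING or WAITING state."""
    import boto3  # available by default in the AWS Lambda Python runtime
    result = {}
    for region in regions:
        emr = boto3.client("emr", region_name=region)
        page = emr.list_clusters(ClusterStates=["RUNNING", "WAITING"])
        result[region] = [c["Name"] for c in page["Clusters"]]
    return result

def handler(event, context):
    """Lambda entry point: gather clusters and email the report via SES."""
    import boto3
    report = summarize_clusters(
        collect_active_clusters(["us-east-1", "us-west-2"]))  # assumed regions
    boto3.client("ses").send_email(
        Source="alerts@example.com",  # assumed, SES-verified sender
        Destination={"ToAddresses": ["team@example.com"]},
        Message={"Subject": {"Data": "Active EMR clusters"},
                 "Body": {"Text": {"Data": report}}},
    )
    return report
```

Scheduling the handler with a CloudWatch Events rule gives the periodic check; splitting the pure report-building step from the AWS calls keeps the logic testable without credentials.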

Lead Data Engineer

Bank of the West
Arizona City, AZ
02.2018 - 12.2018
  • Led a team of data engineers in designing and developing scalable data processing systems using Hadoop, Spark, and AWS services
  • Successfully migrated more than 40 client applications from on-premises to the AWS Cloud in under 2 years, resulting in a 55% cost savings for customers
  • Coached engineers individually, providing regular feedback and aligning individual skills and growth areas with company needs and trajectory
  • Coordinated cross-team projects around security, operational readiness, and product features
  • Worked closely with Product and Design to plan and schedule core features
  • Developed the team's proficiency in project management, operational excellence, and technical mastery
  • Built new teams by hiring and onboarding engineers and other managers
  • Facilitated architecture discussions, process improvements, planning, and other agile rituals
  • Managed end-to-end ETL processes, from data ingestion to transformation and loading into data warehouses
  • Implemented DevOps practices to automate deployment, monitoring, and scaling of data pipelines
  • Oversaw the creation of data architecture, data models, and data governance frameworks to ensure data accuracy and consistency
  • Collaborated with Data Scientists to deploy machine learning models into production environments, enabling real-time decision-making
  • Conducted performance tuning and optimization of ETL processes, reducing processing times by 30%
  • Acted as a technical point of contact for stakeholders, providing expert guidance on data engineering best practices.
  • Followed industry innovations and emerging trends through scientific articles, conference papers or self-directed research.
  • Led development projects from conception through deployment while managing resources effectively.
  • Conducted research on new technologies related to big data analytics and machine learning.
  • Developed new functions and applications to conduct analyses.
  • Deployed cloud-based solutions such as AWS EMR clusters for distributed computing operations at scale.
  • Performed root cause analysis on failed jobs by troubleshooting issues with underlying systems or applications.
  • Identified potential risks associated with data processing activities and proposed mitigation strategies.
  • Troubleshot and identified current issues, providing effective solutions.
  • Created SQL scripts to query, update, and manage databases as well as optimize queries for performance improvement.
  • Designed and implemented ETL process for transforming raw data into meaningful insights.
  • Ensured compliance with industry standards like GDPR while handling customers' personal information securely.
  • Tested, validated and reformulated models to foster accurate prediction of outcomes.

Lead Data Engineer

H&R Block
Kansas City, MO
10.2014 - 02.2018
  • Served as Lead Data Engineer and DevOps ETL Tech Lead, responsible for scalable distributed data solutions using Hadoop
  • Evaluated business requirements, created specifications, and developed MapReduce jobs using Hive for data cleansing and downstream loading
  • Analyzed large datasets to optimize aggregation and reporting methods
  • Managed global teams and the DevOps process, handling data import from diverse sources, transformations, and extraction using Sqoop
  • Implemented partitioned tables in Hive, created technical documentation, and managed Hadoop log files
  • Designed, customized, and managed data models for a real-time data warehouse supporting multiple sources
  • Constructed ETL architecture and source-to-target mappings, and implemented dimensional modeling (star schema)
  • Utilized Type 1 and Type 2 SCD mappings for data updates and modifications
  • Enhanced existing mappings and developed UNIX shell scripts for FTP and repository management
  • Performed performance tuning and optimized sources, targets, mappings, and sessions
  • Created migration documents for seamless movement of mappings across development, testing, and production repositories.
  • Led development projects from conception through deployment while managing resources effectively.
  • Conducted research on new technologies related to big data analytics and machine learning.
  • Created graphs and charts detailing data analysis results.
  • Performed root cause analysis on failed jobs by troubleshooting issues with underlying systems or applications.
  • Deployed cloud-based solutions such as AWS EMR clusters for distributed computing operations at scale.
  • Designed surveys, opinion polls and assessment tools to collect data.
  • Created SQL scripts to query, update, and manage databases as well as optimize queries for performance improvement.
  • Recommended data analysis tools to address business issues.
  • Designed and implemented ETL process for transforming raw data into meaningful insights.
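The Type 2 slowly-changing-dimension mappings mentioned above can be sketched in miniature as follows. This is an illustrative, in-memory version that tracks history with effective/end dates; the field names ("key", "value", "start", "end") are assumptions, not the actual warehouse schema.

```python
from datetime import date

def apply_scd2(dimension, incoming, today=None):
    """Apply a Type 2 slowly-changing-dimension update.

    dimension: list of rows, each {"key", "value", "start", "end"};
    the current row for a key has end=None.  incoming: {key: new_value}.
    Changed rows are closed out and a new current row is appended, so
    full history is preserved.
    """
    today = today or date.today()
    current = {r["key"]: r for r in dimension if r["end"] is None}
    for key, value in incoming.items():
        row = current.get(key)
        if row is not None and row["value"] == value:
            continue                  # no change: keep the current row open
        if row is not None:
            row["end"] = today        # expire the old version (Type 2)
        dimension.append({"key": key, "value": value,
                          "start": today, "end": None})
    return dimension
```

A Type 1 mapping would instead overwrite `row["value"]` in place, keeping no history; the choice between the two is per attribute, depending on whether the business needs to report "as of" a past date.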

ETL Tech Lead

H&R Block, India
10.2009 - 10.2014
  • Performed end-to-end data analysis, design, implementation, testing, and support of ETL processes for Stage, ODS, and Mart
  • Developed technical specifications, conducted unit tests, and created necessary programs and scripts
  • Conducted DB analysis, SQL performance tuning, and utilized Change Data Capture (CDC) for efficient ETL
  • Utilized debugger and UNIX scripting for troubleshooting, optimization, and workflow monitoring
  • Provided technical expertise for ETL solutions, collaborating with Business Analysts, Data Modelers, and BI Leads
  • Performed data profiling, quality checks, and utilized Erwin for data modeling
  • Managed ETL process metadata and implemented performance tuning techniques
  • Troubleshot production failures, conducted root cause analysis, and applied emergency code fixes.
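The Change Data Capture idea referenced above can be illustrated with a minimal snapshot-comparison sketch: diff the previous and current state of a table keyed by primary key, and emit only the rows that were inserted, updated, or deleted. This is only a conceptual illustration; real CDC in an ETL pipeline typically reads database-level change logs rather than comparing snapshots.

```python
def capture_changes(previous, current):
    """Diff two {primary_key: row} snapshots into CDC events.

    Returns a list of (action, key, row) tuples, where action is
    "insert", "update", or "delete". Only changed keys appear, which
    is what makes incremental ETL loads cheap.
    """
    events = []
    for key, row in current.items():
        if key not in previous:
            events.append(("insert", key, row))
        elif previous[key] != row:
            events.append(("update", key, row))
    for key in previous:
        if key not in current:
            events.append(("delete", key, previous[key]))
    return events
```

Feeding only these events downstream, instead of reloading the full table, is what lets a Stage/ODS/Mart pipeline process just the delta on each run.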

Software Engineer

B1C Solutions, India
06.2006 - 09.2009
  • Effectively involved in requirements gathering, writing ETL Specs and preparing design documents
  • Designed & developed Informatica mappings for data sharing between interfaces utilizing SCD type 2 and CDC methodologies
  • Fixed various performance bottlenecks involving huge data sets by utilizing Informatica partitioning, pushdown optimizations and SQL overrides
  • Worked on parameters, variables, procedures, scheduling, and pre/post session shell scripts
  • Built sample MicroStrategy reports to validate BI requirements and loaded data
  • Designed migration plan and cutover documents; created and monitored Informatica batches
  • Worked on requirement traceability matrix, provided support for integration and user acceptance testing.

Education

Bachelor of Engineering in Electronics & Communication

Anna University

Master of Business Administration in Information Technology

University of Madras

Skills

  • PySpark, SQL, PL/SQL, HTML, XML, UNIX Shell Scripting, Python, PowerShell
  • Netezza, Oracle, SQL Server, MongoDB, DynamoDB, Teradata
  • ThoughtSpot, Tableau, Power BI, SQL Server Reporting Services, Dundas Dashboard
  • Alteryx, Delta Lake, Data Pipelines, QDC, Talend, Informatica PowerCenter
  • Agile Work Processes
  • Staff Hiring
  • Product Development
  • Security Improvements
  • Business Development Support
  • Project Planning
  • Project Coordination
  • Infrastructure Planning
  • Project Leadership
  • Budget Administration
  • Requirements Analysis
  • Security Planning
  • Infrastructure Development

Certification

  • AWS Certified Machine Learning - Specialty
  • AWS Certified Data Analytics - Specialty
  • AWS Certified Solutions Architect - Associate
  • PCAP - Certified Associate in Python Programming

Timeline

Senior Software Engineering Manager

Cognizant Technology Solutions
06.2022 - Current

Principal Engineer

Sun Life Financial
12.2018 - 07.2022

Lead Data Engineer

Bank of the West
02.2018 - 12.2018

Lead Data Engineer

H&R Block, Kansas City, MO
10.2014 - 02.2018

ETL Tech Lead

H&R Block, India
10.2009 - 10.2014

Software Engineer

B1C Solutions, India
06.2006 - 09.2009

Bachelor of Engineering in Electronics & Communication

Anna University

Master of Business Administration in Information Technology

University of Madras