Jagan Mohan Kanimetta

Summary

Senior Data Engineer with extensive experience in data platforms, data migration, and data warehousing. Specialized skills in data modeling, ETL development, and cloud computing solutions. Successfully led teams in creating innovative data solutions that improve system efficiency and business decision-making. Demonstrated impact through enhanced data availability and accuracy in previous roles. Excel at problem-solving, teamwork, and communication, ensuring successful project outcomes and effective collaboration with cross-functional teams.

Overview

17 years of professional experience
1 Certification

Work History

Senior Data Engineer/Technical Lead

Bank of America
Charlotte, NC
09.2023 - Current
  • Designed and implemented data pipelines using Azure Data Factory to ingest, process, and transform data from multiple sources into Azure Data Lake and SQL databases.
  • Built and maintained large-scale Azure Data Lake solutions to store unstructured and semi-structured data, enabling high-performance analytics.
  • Leveraged Azure Databricks and Apache Spark to process large volumes of data efficiently, reducing processing times by 30%.
  • Worked with ADF and its infrastructure, including Copy activity, Get Metadata, Web activity, Execute Pipeline, Data Flows, integration runtimes (IRs), dataset and linked service implementation, IAM, triggers, and Synapse.
  • Executed complex data processing tasks using PySpark and Python, optimizing data workflows for performance across distributed systems.
  • Created ETL workflows for data transformation and cleansing, improving data quality and reporting accuracy.
  • Implemented Azure Databricks notebooks to handle complex file transformations across source formats such as CSV, Parquet, and JSON.
  • Implemented Unity Catalog within Databricks to streamline data access and define role-based access control (RBAC), enforcing security policies.
  • Utilized PySpark RDDs (Resilient Distributed Datasets) and DataFrames for efficient data manipulation and analysis in distributed computing environments.
  • Implemented Slowly Changing Dimensions (SCD) using Delta tables and Change Data Feed.
  • Developed and maintained detailed documentation for all data engineering processes, including data models, ETL workflows, and data transformation logic, ensuring transparency and ease of knowledge transfer.
  • Created DAGs in Airflow for task orchestration, written in Python using Airflow operators.
  • Environment: Azure Databricks, Data Factory, PySpark, Spark, Spark SQL, Azure SQL, Informatica 10.x.

Data Engineer/Technical Lead

Mitsubishi UFJ Financial Group (MUFG)
Charlotte, NC
03.2015 - 08.2023
  • Projects: EDP Data Lake Pillar2, Application Production Support, OFSAA 6.1 upgrade, etc.
  • Performed data modeling and data architecture; reviewed and designed serverless ETL jobs and pipelines using AWS Glue to organize and cleanse data.
  • Served on the integration and configuration team for Enterprise Data Catalog & Data Governance: lineage creation using Postman APIs, data dictionary loads, support for the curation process, etc.
  • Designed and developed Oracle PL/SQL packages, procedures, and triggers on premises and on RDS and EC2.
  • Supported Amazon RDS engines (Oracle, SQL Server, MySQL, PostgreSQL, MariaDB, and Aurora) with read replicas and Auto Scaling.
  • Performed testing, bug tracking, and software maintenance in a CI/CD environment for database and development environments with Git and Jenkins.
  • Designed and developed scalable data pipelines using Databricks, managing workflows integrated with AWS services such as S3 and Glue.
  • Developed ingestion, curation, and consumption processes in AWS for new and existing sources.
  • Prepared and executed functional test cases; logged and tracked defects in Jira.
  • Reported and discussed status in scrum calls and attended all other meetings per Agile practice.
  • Worked on AWS Glue, Step Functions, Lambda, SQS, and Redshift pipelines for development, test execution, and data validation.
  • Analyzed business requirements and transformation rules, converting them into data validation test scripts.
  • Handled BAU activities and production support for various applications, ensuring no impact on the business.
  • Drove business development and delegated work across teams by task priority and team efficiency, while mentoring team members.
  • Designed, developed, and supported Extraction, Transformation, and Load (ETL) processes for data migration with Informatica 10.x/9.x and PL/SQL packages.
  • Developed ETL mapping documents, including High-Level Design (HLD), Low-Level Design (LLD), and mapping specification documents for every mapping, enabling smooth transfer of the project from development to testing and then to production.
  • Performed walkthroughs of low-level designs, unit test plans, and implementation plans prepared by the team at various project stages; ensured all team members followed PMP standards.
  • Developed shell scripts and PL/SQL procedures as part of Oracle data loads.
  • Cloud Environment: AWS, Python, Databricks.
  • Environment: Informatica 10.x/9.5.1, Oracle 12c/11g, PL/SQL and UNIX Shell Scripting.

Informatica Lead

CIGNA - IM (CCW Accel Rx/Rebates)
Bloomfield, CT
05.2012 - 02.2015
  • Understood the business rules based on high-level design specifications and implemented data transformation methodologies.
  • Drove business development and delegated work across teams by task priority and team efficiency, while mentoring team members.
  • Handled offshore-onsite-client communication; prepared functional design documents and reviewed deliverables and quality documentation.
  • Designed, developed, and supported Extraction, Transformation, and Load (ETL) processes for data migration with Informatica 9.x against a Teradata database.
  • Developed ETL mapping documents, including High-Level Design (HLD) and Low-Level Design (LLD) for every mapping, plus a data migration document for smooth transfer of the project from development to testing and then to production.
  • Performed walkthroughs of low-level designs, unit test plans, and implementation plans prepared by the team at various project stages; ensured all team members followed PMP standards; interacted with the client to obtain approvals for design, coding, and implementation.
  • Environment: Informatica 9.1.1, Teradata 14, Oracle 11g and UNIX Shell Scripting.

ETL Developer

Liberty Mutual
Hyderabad, India
01.2010 - 04.2012
  • Gathered system requirements and created a mapping document detailing source-to-target mappings and business rule implementation.
  • Drafted Business Requirement Documents, System Requirement Specifications, Business Work Flow Diagram, Use Case Diagram, Data Flow Diagram, Cross Functional Diagram to represent Business and System requirements.
  • Designed, developed and debugged ETL mappings using Informatica designer tool.
  • Created complex mappings using Aggregator, Expression, Joiner, Filter, Sequence Generator, Stored Procedure, Connected and Unconnected Lookup, and Update Strategy transformations in Informatica PowerCenter Designer.
  • Extensively used ETL to load data from sources such as flat files and XML into Oracle.
  • Worked on mapping parameters and variables for calculations performed in the Aggregator transformation.
  • Implemented slowly changing dimensions to retain the full history of account and transaction information.
  • Tuned and monitored Informatica workflows using the Workflow Manager and Workflow Monitor tools.
  • Environment: Informatica PowerCenter 8.6.1, Informatica PowerExchange, Teradata, UNIX, and Mainframe.

Mainframe Developer

Marks and Spencer, UK
Chennai, India
04.2008 - 12.2009
  • Attended client workgroup meetings and gathered requirements during the design phase.
  • Prepared low-level designs.
  • Coordinated and communicated with the offshore team through weekly status calls.
  • Reviewed offshore design documents and code deliverables, ensuring the coding was in line with the design specifications.
  • Ensured the quality process was followed at every stage of enhancement.
  • Trained and mentored new joiners on the team and other teams by conducting KT sessions.
  • Worked on the CFTO and CSSM applications and implemented them successfully in production.
  • Wrote system test scripts and test scenarios for the applications developed.
  • Environment: COBOL II, JCL, DB2.

Education

Master of Technology

JNTU
Hyderabad, India
01.2005

Bachelor of Technology

JNTU
Hyderabad, India
01.2002

Skills

  • Databricks
  • PySpark
  • Azure Data Factory
  • Azure SQL
  • AWS Glue
  • AWS Lambda
  • Python
  • S3
  • EMR
  • RDS
  • EC2
  • Athena
  • Snowflake
  • Informatica 10.x
  • Autosys
  • Control-M
  • SQL Server
  • Oracle 11g
  • Teradata 14.0
  • DB2
  • PL/SQL
  • Unix Shell Scripting
  • Apache Airflow
  • UNIX/Linux

Certification

  • AWS Certified Solutions Architect - Associate
  • DB2 UDB Certified
  • AINS 21 Certified
