
Mounika Kancharla

Houston, TX

Summary

Data engineering professional with 9 years of experience in asset and wealth management, working across a diverse technology stack. Skilled in designing and developing high-performance data processing solutions and building scalable pipelines using Spark and modern big-data/AWS technologies. Proven ability to work effectively in agile teams and deliver reliable, analytics-ready data for business intelligence.

Overview

12 years of professional experience
1 Certification

Work History

Lead Software Engineer

JP Morgan Chase
01.2025 - Current

Technologies: AWS Services (Glue, EMR, SQS, S3, Athena), Spark, Terraform, Python

  • Led a team of five to migrate the Security Classification Mechanism from a legacy platform to AWS, improving scalability and modernization across the application.
  • Executed the NADC migration end-to-end, coordinating seamlessly with multiple cross-functional teams to ensure a smooth and uninterrupted transition.
  • Directed multi-application resiliency (SRE) and network isolation initiatives that proactively reduced risk of application outages and minimized user-impacting downtime.
  • Fostered strong collaboration through consistent communication with internal and external stakeholders, enabling efficient delivery of complex engineering tasks.
  • Implemented a structured knowledge-sharing framework that improved team efficiency and reduced recurring issue resolution time.
  • Partnered with platform engineering teams to enhance system stability and optimize performance for large-scale data processing workloads.

Sr. Associate Engineer II

JP Morgan Chase
01.2022 - 12.2024

Technologies: AWS Services, Airflow, Linux, HDFS, Hive, Python

  • Led the redesign and migration of source system integrations to eliminate uncertified connections, transitioning to standardized NDM-based data exchange for improved security and compliance.
  • Directed the architecture and migration of applications from physical servers to Virtual Server Infrastructure (VSI), enhancing scalability, reliability, and resource utilization.
  • Designed and implemented a centralized authentication cache layer to optimize interactions with Hadoop (HDFS), improving performance and reducing repeated authentication overhead.
  • Standardized all applications on the firm’s encryption protocols, reducing reliance on external services and lowering associated operational costs.
  • Architected secure upstream connectivity aligned with audit and security requirements, eliminating unsecured interfaces and mitigating compliance vulnerabilities.
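
The centralized authentication cache described above can be sketched in plain Python. This is a minimal illustration, not the production implementation: all names here are hypothetical, and the real version would hold Kerberos/HDFS delegation tokens rather than strings.

```python
import time
from threading import Lock

class AuthTokenCache:
    """Cache authentication tokens per principal so repeated HDFS
    interactions reuse a valid token instead of re-authenticating.
    Tokens expire after a fixed TTL."""

    def __init__(self, fetch_token, ttl_seconds=300):
        self._fetch_token = fetch_token   # callable: principal -> token
        self._ttl = ttl_seconds
        self._cache = {}                  # principal -> (token, expiry)
        self._lock = Lock()

    def get(self, principal):
        now = time.monotonic()
        with self._lock:
            entry = self._cache.get(principal)
            if entry and entry[1] > now:
                return entry[0]           # still valid: skip the auth round trip
            token = self._fetch_token(principal)
            self._cache[principal] = (token, now + self._ttl)
            return token

# Usage: count how many real authentications are performed
calls = []
def kerberos_auth(principal):             # stand-in for the real Kerberos call
    calls.append(principal)
    return f"token-for-{principal}"

cache = AuthTokenCache(kerberos_auth, ttl_seconds=60)
cache.get("etl_user")
cache.get("etl_user")                     # served from cache; no second auth
```

The single lock keeps the cache safe under concurrent pipeline workers; a production variant would also refresh tokens slightly before expiry to avoid in-flight failures.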

Sr. Associate Engineer I

JP Morgan Chase
01.2020 - 12.2021

Technologies: Scala, Hadoop FS, Spark, Kafka, ZooKeeper, Hive, REST API

  • Designed and developed a passwordless authentication framework for Spark-based data ingestion into Cassandra, enabling secure, seamless user authorization and improving pipeline reliability.
  • Architected a standalone ingestion pipeline leveraging Kerberos authentication to transform and load large-scale datasets into Cassandra.
  • Designed and implemented a lightweight Scala-based microservice to replace dependency on legacy scheduling tools, eliminating over 400 Autosys jobs and significantly reducing operational overhead in production.
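
The scheduler-replacement service above was built in Scala; as an illustrative sketch of its core idea, the following Python snippet (hypothetical names, heavily simplified) runs jobs in dependency order, which is the essential function an external scheduler chain otherwise provides.

```python
from graphlib import TopologicalSorter

def run_jobs(jobs, dependencies):
    """Execute jobs in dependency order -- the core of a lightweight
    scheduler that replaces externally managed job chains.

    jobs:          mapping of job name -> zero-arg callable
    dependencies:  mapping of job name -> set of prerequisite job names
    Returns the execution order."""
    order = list(TopologicalSorter(dependencies).static_order())
    for name in order:
        jobs[name]()
    return order

# Usage: load waits for transform, which waits for extract
ran = []
jobs = {name: (lambda n=name: ran.append(n))
        for name in ("extract", "transform", "load")}
deps = {"transform": {"extract"}, "load": {"transform"}}
order = run_jobs(jobs, deps)   # -> ["extract", "transform", "load"]
```

A real service would add retries, per-job status persistence, and parallel execution of independent jobs; the topological sort is the piece that makes the external scheduler dependencies redundant.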

Associate Engineer

JP Morgan Chase
01.2018 - 12.2019

Technologies: Hive, SQL, HDFS, Autosys

  • Designed and supported an event-driven ingestion pipeline using Spark to load and process incoming data files into Hadoop-based storage systems.
  • Collaborated with the Data Modeling team to align ingestion logic and transformation rules with defined data models and project requirements.
  • Implemented secure file transfer mechanisms between source systems and application servers, ensuring compliant and reliable data movement.
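
The event-driven ingestion pipeline above can be illustrated by its dispatch core: route an arriving file to the right handler by filename pattern. This is a sketch with hypothetical names; in production the events would come from a file-arrival trigger and the handlers would submit Spark loads into HDFS.

```python
import fnmatch

class IngestionRouter:
    """Route incoming data files to handlers by filename glob pattern --
    the dispatch step of an event-driven ingestion pipeline."""

    def __init__(self):
        self._routes = []                 # list of (glob pattern, handler)

    def register(self, pattern, handler):
        self._routes.append((pattern, handler))

    def on_file_arrived(self, filename):
        for pattern, handler in self._routes:
            if fnmatch.fnmatch(filename, pattern):
                return handler(filename)
        raise ValueError(f"no handler registered for {filename}")

# Usage: handlers are stubs standing in for Spark load jobs
router = IngestionRouter()
router.register("positions_*.csv", lambda f: f"loaded {f} into positions table")
router.register("trades_*.csv",    lambda f: f"loaded {f} into trades table")
result = router.on_file_arrived("trades_20240101.csv")
```

Keeping routing separate from load logic lets new feeds be onboarded by registering one pattern, without touching existing handlers.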

Analyst

JP Morgan Chase
05.2016 - 12.2017

Technologies: SQL, Informatica PowerCenter, MQ Series for messaging

  • Integrated MQ topics with Informatica PowerCenter to enable near-real-time ingestion and processing of client transaction messages.
  • Developed end-to-end ETL mappings using CDC-1 methodology to extract, transform, and load streaming data into dual database targets in parallel.
  • Improved data availability for business advisors by ensuring processed data was delivered before end-of-day decision-making cycles.
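
The dual-target CDC load above reduces to one core operation: apply each change event to every target so they stay in sync. A minimal sketch, assuming events arrive as (operation, key, value) tuples and using plain dicts as stand-ins for the two databases:

```python
def apply_cdc(events, targets):
    """Apply a stream of change-data-capture events to several targets
    in parallel, keeping them identical."""
    for op, key, value in events:
        for target in targets:
            if op in ("insert", "update"):
                target[key] = value       # upsert the latest image of the row
            elif op == "delete":
                target.pop(key, None)     # idempotent: ignore missing keys
            else:
                raise ValueError(f"unknown CDC operation: {op}")

# Usage: both targets end up with the same final state
primary, reporting = {}, {}
events = [
    ("insert", "acct-1", {"balance": 100}),
    ("update", "acct-1", {"balance": 250}),
    ("insert", "acct-2", {"balance": 75}),
    ("delete", "acct-2", None),
]
apply_cdc(events, [primary, reporting])
```

Applying operations idempotently (upserts, tolerant deletes) is what makes replays safe when a streaming source redelivers messages.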

Informatica Developer

Aplomb Technologies
08.2015 - 02.2016

Technologies: SQL, Informatica PowerCenter, Control-M for scheduling

Informatica Developer

Cloud Softech
06.2013 - 11.2013

Technologies: SQL, Informatica PowerCenter

Education

Master's - Computer Science

Texas A&M University-Kingsville
Texas, USA
05.2015

Bachelor's - Computer Science

Jawaharlal University
Kakinada, Andhra Pradesh, India
05.2013

Skills

  • AWS: Glue, S3, Athena, CloudWatch, EMR, Amazon SQS, Lambda
  • Big Data Ecosystems: HDFS, Zookeeper, Hive, Yarn, Spark, Kafka
  • Programming Languages: Scala, Python (PySpark), SQL
  • ETL Tools: Informatica PowerCenter, Pentaho
  • CI/CD: Jenkins, Jules, Aim, Terraform
  • Middleware: MQ Series integration in Informatica
  • Orchestration: AutoSys, Airflow

Certification

  • AWS Certified Cloud Practitioner
  • CKAD: Certified Kubernetes Application Developer
  • Apache Spark Essential Training: Big Data Engineering
