
SAI CHANDU SUNKARA

Jersey City, NJ

Summary

Dynamic Data Engineer with five years of experience extracting, transforming, and analyzing data to enhance reporting, performance tracking, and strategic decision-making. Expertise in SQL, Python, and data visualization tools supports trend identification, KPI monitoring, and operational efficiency improvements. Proven track record of designing dashboards, conducting statistical analyses, and interpreting large datasets to solve complex business challenges, collaborating with cross-functional teams to translate business requirements into actionable analytical solutions. Committed to building reproducible data workflows and robust data validation processes that ensure accuracy and consistency in fast-paced environments.

Overview

5 years of professional experience

Work History

Data Engineer

CoreWave
01.2025 - Current
  • Collaborated on ETL (Extract, Transform, Load) tasks, maintaining data integrity and verifying pipeline stability.
  • Fine-tuned query performance and optimized database structures for faster, more accurate data retrieval and reporting.
  • Enhanced data quality by performing thorough cleaning, validation, and transformation tasks.
  • Streamlined complex workflows by breaking them down into manageable components for easier implementation and maintenance.
  • Provided technical guidance and mentorship to junior team members, fostering a collaborative learning environment within the organization.
  • Optimized data processing by implementing efficient ETL pipelines and streamlining database design.
  • Migrated legacy systems to modern big-data technologies, improving performance and scalability while minimizing business disruption.
  • Increased efficiency of data-driven decision making by creating user-friendly dashboards that enable quick access to key metrics.
  • Designed scalable and maintainable data models to support business intelligence initiatives and reporting needs.
  • Automated routine tasks using Python scripts, increasing team productivity and reducing manual errors.

Data Engineer

JPMorgan Chase & Co.
09.2021 - 07.2023
  • Led the design and automation of data pipelines using Python, SQL, and Airflow, reducing manual data processing time by 45% and enabling scalable data ingestion for enterprise-level financial datasets.
  • Partnered with analytics and business intelligence teams to deliver Tableau dashboards that monitored compliance KPIs, improving decision-making time by 30% across audit and risk departments.
  • Implemented robust ETL processes to streamline the aggregation of structured and semi-structured data across internal platforms, achieving 99.7% data accuracy and improved accessibility.
  • Engineered cloud-based data solutions using AWS Glue and S3, resulting in a 60% improvement in query response times for large datasets and reduced infrastructure cost through optimization.
  • Collaborated cross-functionally with DevOps, analysts, and product teams to deliver production-grade pipelines for ML model deployment, enhancing real-time fraud detection accuracy by over 20%.
  • Proactively contributed to the documentation of all data workflows and participated in Agile sprint cycles, ensuring consistent progress tracking and high team velocity.

Data Engineer

Citigroup Inc.
05.2020 - 08.2021
  • Developed and maintained scalable ETL pipelines using SQL and Python to process over 2TB of financial transaction data weekly, enabling timely data access for downstream analytical teams.
  • Created custom data wrangling scripts in Python to clean and normalize multi-source data, improving model training quality and supporting customer segmentation analysis.
  • Partnered with business stakeholders to build interactive dashboards using Power BI, empowering non-technical users with self-service analytics and reducing ad hoc report requests by 50%.
  • Optimized SQL queries and implemented indexing strategies on PostgreSQL and Oracle databases, improving report generation speeds by up to 70%.
  • Utilized PySpark for batch processing and integrated Kafka streams to prepare near real-time datasets for reporting and ML readiness.
  • Played a key role in implementing data governance and quality checks, improving regulatory compliance with a 98% pass rate during internal audits.

Education

Master of Science - Information Systems

Pace University
New York, NY
05.2025

Bachelor of Technology - Electrical and Electronics Engineering

Jawaharlal Nehru Technological University (JNTUK)
09.2020

Skills

  • Python for data processing, scripting, and automation
  • SQL expertise across relational databases (RDBMS), including performance tuning and query optimization
  • NoSQL databases
  • Big data processing with PySpark, Spark, and Hadoop
  • Data visualization with Seaborn and Matplotlib
  • Data modeling strategies and data architecture design
  • Data transformation, integration, and pipeline optimization
  • Data governance and data management techniques
  • Handling high-volume datasets
  • Machine learning techniques, including hyperparameter configuration
  • Database integration techniques
  • Basic proficiency in container technologies
  • Stakeholder collaboration and data-driven decision making
  • Familiar with JIRA
