Mounika Ullam

Summary

Data Engineer with over 4 years of experience designing and optimizing data pipelines and architectures. Proficient in Python, SQL, and modern data engineering tools, with a focus on cloud platforms. Specializes in developing scalable ETL/ELT workflows, data lakes, and real-time streaming solutions to improve data processing efficiency and support data-driven decisions.

Overview

5 years of professional experience

Work History

AWS Data Engineer

Evernorth Health Services
St. Louis, USA
08.2024 - Current
  • Designed and developed ETL/ELT data pipelines with AWS Glue, Python, and SQL, enhancing transformation efficiency by 40%.
  • Built and maintained scalable data warehousing solutions on AWS Redshift and Snowflake for over five business units, improving query performance by 35%.
  • Automated CI/CD pipelines using AWS CodePipeline, AWS CodeBuild, and Jenkins, decreasing deployment time by 80% and eliminating manual errors.
  • Executed large-scale migrations from SQL Server to Snowflake, migrating over 10 TB of historical data without downtime.
  • Implemented real-time streaming pipelines with AWS Lambda, AWS Kinesis, and AWS S3, processing millions of records daily with sub-second latency.
  • Optimized big data pipelines on AWS EMR using Hadoop and Spark, reducing job execution times by 60% and lowering infrastructure costs by 30%.
  • Integrated Tableau and Power BI with AWS pipelines, delivering over 20 interactive dashboards that accelerated decision-making by 25%.
  • Monitored and troubleshot workflows using AWS CloudWatch and CloudTrail, enhancing reliability and reducing downtime incidents by 40%.

Environment:

Python, SQL, AWS Glue, Amazon Redshift, Snowflake, SQL Server, Apache Spark, Hadoop, AWS EMR, AWS Lambda, AWS Kinesis, Amazon S3, Jenkins, AWS CodePipeline, AWS CodeBuild, AWS CloudWatch, AWS CloudTrail, Tableau, and Power BI.

Azure Data Engineer

Accenture
Mumbai, India
12.2022 - 07.2023
  • Designed and developed ETL/ELT pipelines using Azure Data Factory, Python, and SQL, reducing data ingestion time by 40%.
  • Built and maintained data lakes on Azure Data Lake Storage and warehousing solutions on Azure Synapse and Snowflake, cutting query times by 30% for seven departments.
  • Executed large-scale data migrations from on-prem SQL Server and Oracle to Azure Synapse and Snowflake, transferring over 15 TB of data with zero downtime.
  • Automated CI/CD pipelines using Azure DevOps and GitHub Actions, decreasing deployment cycles by 70%.
  • Integrated Power BI with Azure pipelines to create over 25 interactive dashboards, accelerating business decision-making by 35%.
  • Optimized big data pipelines on Azure Databricks with Spark and Delta Lake, improving job performance by 55%.
  • Implemented real-time streaming pipelines using Azure Event Hubs, processing millions of events daily with sub-second latency.
  • Monitored and troubleshot data pipelines with Azure Monitor, reducing downtime incidents by 40%.

Environment:

Python, SQL, Azure Data Factory (ADF), Azure Data Lake Storage (ADLS Gen2), Azure Synapse Analytics, Snowflake, SQL Server, Oracle, Azure Databricks, Apache Spark, Delta Lake, Azure DevOps, GitHub Actions, Power BI, Azure Event Hubs, and Azure Monitor.

Data Engineer

HID Global
Mumbai, India
04.2020 - 11.2022
  • Designed and developed ETL/ELT pipelines using Python, SQL, and Apache Airflow, enhancing data ingestion and transformation efficiency by 40%.
  • Optimized big data pipelines with Apache Spark and Hadoop, improving job performance by 50% while reducing processing costs by 30%.
  • Built real-time streaming pipelines using Apache Kafka, Apache Flink, and Spark Streaming, achieving sub-second latency with millions of events processed daily.
  • Executed large-scale data migrations from SQL Server, Oracle, and MySQL into modern data warehouses, transferring over 10 TB of historical data.
  • Integrated Tableau, Power BI, and Looker with backend pipelines to deliver more than 20 interactive dashboards that accelerated business decision-making by 35%.
  • Automated CI/CD pipelines with Jenkins and GitHub Actions, cutting deployment cycles by 70% and minimizing manual errors.
  • Developed infrastructure-as-code (IaC) using Terraform and Ansible, reducing provisioning time by 60% for consistent deployments.
  • Monitored and troubleshot pipelines with Prometheus, Grafana, and the ELK Stack, decreasing downtime incidents by 40% and improving system reliability.

Environment:

Python, SQL, Apache Airflow, Apache Spark, Hadoop, Apache Kafka, Apache Flink, Spark Streaming, SQL Server, Oracle, MySQL, Tableau, Power BI, Looker, Jenkins, GitHub Actions, Terraform, Ansible, Prometheus, Grafana, and ELK Stack.

Education

Master's - Computer Science

University of Missouri-Kansas City
Kansas City, MO
05.2025

Skills

  • Programming and scripting: Python, SQL, Bash
  • Big data processing: Apache Spark, Hadoop, Databricks
  • Cloud services: AWS Glue, Redshift, S3
  • Azure services: Data Factory, Synapse Analytics
  • Data warehousing: Snowflake, SQL Server
  • ETL orchestration: Apache Airflow, NiFi
  • Streaming technologies: Kafka, Spark Streaming
  • Container management: Kubernetes, Terraform
  • Data visualization: Tableau, Power BI
  • CI/CD tools: Git, Jenkins

Timeline

AWS Data Engineer

Evernorth Health Services
08.2024 - Current

Azure Data Engineer

Accenture
12.2022 - 07.2023

Data Engineer

HID Global
04.2020 - 11.2022

Master's - Computer Science

University of Missouri-Kansas City