
SAI CHAITANYA KAMMILA

Overland Park, KS

Summary

  • 4+ years of experience in the IT industry, including hands-on experience in big data technologies, SQL, and application development with Python.
  • Capable of building large-scale distributed applications, including stream and batch processing of large volumes of structured, semi-structured, and unstructured data with Spark.
  • Extensive experience in architecting and building data platforms, including data warehouses, data lakes, and ML feature stores, from scratch with Spark, Hadoop, and MapReduce.
  • Expertise in data architecture, data modeling, data migration, data profiling, data cleansing, transformation, integration, data import, and data export using multiple ETL tools (including Advantage Architect and HighJump).
  • Good knowledge of Python, Unix shell scripting, and Docker for automating deployments and other routine tasks.

Overview

3 years of professional experience
1 certification

Work History

Senior Software Engineer

Ashley Furniture
11.2021 - 08.2022

Contribution:

  • Streamlined data ingestion from different sources, including streaming, batch, and database sources, as part of the AWS data lake implementation using Hadoop and Spark on AWS Databricks with Delta Lake.
  • Contributed to multiple ETL batch and stream-processing Spark jobs to transform and load data into the different data zones of the Pearson Data Lake.
  • Implemented workflows using Apache Airflow and AWS Databricks Workflows to orchestrate job execution.
  • Built the prediction stream that scores streaming data against more than 100 models deployed on MLflow, using PySpark.
  • Discovered and implemented generic REST services for the data lake, including a config service, a metadata service, and many reusable client components, with Python Django and Spring Boot.
  • In-depth understanding of Spark architecture, including Spark Core, Spark SQL, and DataFrames.
  • Wrote transformations and actions on DataFrames and used Spark SQL to load Hive tables into Spark for faster data processing (see the sketch after this list).
  • Worked on the manufacturing material-movement process; wrote and maintained SQL code for MS SQL Server databases to deliver solutions in accordance with business needs.
  • Created database objects such as tables, views, sequences, stored procedures, and functions.
  • Performed root-cause analysis of defects and fixed them.
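A minimal illustrative sketch of the Spark SQL / DataFrame pattern referenced above, assuming a Hive-enabled Spark session and a Delta Lake dependency; the database, table, column, and path names are hypothetical placeholders rather than project code:

    # Illustrative only: hypothetical table, column, and path names.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (
        SparkSession.builder
        .appName("hive-to-delta-batch")
        .enableHiveSupport()   # lets Spark SQL read Hive tables directly
        .getOrCreate()
    )

    # Transformation: load a Hive table into a DataFrame via Spark SQL and reshape it.
    orders = spark.sql("SELECT * FROM sales_db.orders")
    daily_totals = (
        orders
        .filter(F.col("status") == "SHIPPED")
        .groupBy("order_date")
        .agg(F.sum("amount").alias("total_amount"))
    )

    # Action: write the result to a curated data zone as a Delta table.
    daily_totals.write.format("delta").mode("overwrite").save("s3://data-lake/curated/daily_totals")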

Software Engineer

Gratis
02.2021 - 10.2021

Contribution:

  • Extensively used Apache Spark to build real-time streaming and batch jobs that process on the order of 10^6 events per minute. Contributed to multiple streaming and Structured Streaming jobs that handle ingesting data from multiple channels into the platform, real-time batch processing of streaming data, streaming aggregations, event analyzers, and an InfluxDB sink (see the sketch after this list).
  • Designed and developed data integration and engineering workflows on big data technologies and platforms (Hadoop, Spark, Hive).
  • Imported data from various data sources, performed transformations using Spark, and loaded the data into S3.
  • Processed S3 data, created external tables using Hive, and developed reusable scripts to ingest and repair tables across the project.
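A minimal illustrative sketch of a Structured Streaming aggregation job of the kind described above, assuming a Kafka source (spark-sql-kafka package on the classpath); the broker address, topic, schema, and checkpoint path are hypothetical placeholders:

    # Illustrative only: broker, topic, schema, and paths are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

    spark = SparkSession.builder.appName("event-stream-aggregation").getOrCreate()

    event_schema = StructType([
        StructField("channel", StringType()),
        StructField("value", DoubleType()),
        StructField("event_time", TimestampType()),
    ])

    # Ingest raw events from Kafka.
    raw = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "platform-events")
        .load()
    )

    # Parse the JSON payload and count events per channel over 1-minute windows.
    events = (
        raw.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
        .select("e.*")
    )
    counts = (
        events
        .withWatermark("event_time", "5 minutes")
        .groupBy(F.window("event_time", "1 minute"), "channel")
        .count()
    )

    # Stream the aggregates out; a production job might feed an InfluxDB sink via foreachBatch.
    query = (
        counts.writeStream
        .outputMode("update")
        .option("checkpointLocation", "/tmp/checkpoints/event-counts")
        .format("console")
        .start()
    )
    query.awaitTermination()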


Developer

HIMS
08.2020 - 01.2021

Contribution:

  • Built a framework for the Customer Master Data Platform to profile millions of customers; the framework enables different clients to derive new custom attributes (using Spark, SQL, and Python).
  • Deployed the Customer Master Data Platform pipeline on AWS, with all orchestration handled by Lambda, Step Functions, and AWS CloudWatch. Built REST APIs on top of the framework using Python Django.
  • Designed and developed a Customer Unified User Profile framework (audience discovery) to identify profiled users and provide personalized offers; deployed it on a 12-node EMR cluster, caching terabytes of data.
  • Worked on URL tracking with Spark Streaming to predict results in real time.

Developer

Sports Direct
01.2020 - 07.2020

Contribution:

  • Modification of the existing Packing process via Architect per the customer's requirements.
  • Development of a new process (Close Cage) via Architect per the customer's requirements.
  • Deployment of code from DEV to QA.
  • Development of new report pages and search pages through the Webwise Page Editor.
  • Unit testing of the developed report pages.
  • Providing support for LIVE (production) issues.
  • Ensuring that reported issues are fixed promptly.

Developer

Beauty Bay
06.2019 - 01.2020

Contribution:

  • Development of the Directed Receipt process via Architect per the customer's requirements.
  • Development of the Outbound Sort process via Architect per the customer's requirements.
  • Development of the Pack Move process via Architect per the customer's requirements.
  • Preparation of the technical specification for the changes.
  • Unit testing of the developed changes.

Education

Master of Science - Computer Science

University of Central Missouri
Warrensburg, MO
12.2023

Bachelor of Science - Software Engineering

Vellore Institute of Technology
Vellore, India
05.2019

Skills

  • AWS
  • Spark
  • Hadoop
  • Hive
  • Python
  • Django
  • SQL
  • MongoDB
  • HighJump software: Advantage Architect, Webwise Page Editor, HighJump One Platform, HighJump One Workspace

Certification

  • HighJump
  • Microsoft SQL Server 2016 Certification (70-762) on Udemy (8.5 hours)
