
VISHNU VARDHAN REDDY

Little Elm, TX 75068

Summary

Collaborative, strategic Data Engineering Manager accountable for the successful delivery of multiple enterprise software systems. Combines technical expertise with strong business acumen to facilitate effective communication between leadership and engineering teams. Oversees the end-to-end development and maintenance of enterprise-wide technology platforms, swiftly addressing complex challenges and adapting to new concepts and technologies. A demonstrated history of collaborating with industry leaders such as AWS and innovative startups such as MPACT2W0, which secured the very first AMEL (Another Means of Emission Limitation) approval from the EPA (Environmental Protection Agency), underscores a robust and varied professional background.

Overview

13 years of professional experience
1 Certification

Work History

Sr Engineer

Grainger
Chicago, IL
07.2023 - Current

Led a team of 4+ Data Engineers, driving enhancements to the company's enterprise data lakehouse. Collaborated with cross-functional teams to enhance product functionality and equip leadership with essential business insights.

  • Introduced a software-engineering paradigm for data pipeline design and developed reusable frameworks, resulting in a 40% decrease in costs and a 20% increase in productivity.
  • Implemented Test-Driven Development (TDD) and Trunk-Based Git Flow methodologies to significantly boost efficiency in data engineering processes.
  • Architected and deployed a CI/CD system (Terraform, AWS CodePipeline)
  • Balanced 60% individual contributor and 40% lead responsibilities

Data Engineer - Streaming

AMAZON WEB SERVICES (AWS)
Dallas, TX
11.2022 - Current

Empowering AWS enterprise clients with end-to-end solutions in data management and analytics, focusing on scalability and modernization. Developing customized deployable tools in popular programming languages to tackle complex industry challenges. Collaborating across AWS divisions to guide clients in leveraging a suite of services and cutting-edge technologies. Responsibilities span from pre-sales engagements to optimizing data infrastructure and fostering a data-centric culture within client organizations.

  • Helping AWS enterprise customers develop, deliver, and implement Data Lake, Data Warehouse, Data Analytics, and IoT projects
  • Specifically focusing on large-scale data warehousing and data warehouse modernization
  • Building custom deployable artifacts in Java, Scala, and Python to solve customers' real-world, cutting-edge challenges
  • Collaborating with various AWS teams to help customers learn and use AWS services such as MSK, Glue, S3, Redshift, CDK, Athena, EMR, EKS, and RDS, as well as open-source technologies such as Apache Flink and Spark Streaming
  • Delivering technical engagements, including participating in pre-sales, understanding customer requirements, and defining the end-to-end data platform architecture
  • Customer engagements include hands-on deliverables to assess and optimize customers' Data Lake and Data Warehouse implementations
  • Engaging with customers' business and technology stakeholders to create a compelling vision of a data-driven enterprise in their environment.

Data Engineering Lead

Molex
Irving, TX
09.2019 - 11.2022

Operated effectively within a dynamic startup environment, collaborating to architect and construct an end-to-end data platform from scratch. Leveraged open-source technologies and developed a shippable product tailored for multiple customer deployments.

  • Developed and managed a real-time streaming platform from scratch using Flink, Kafka, and MongoDB to enable remote monitoring of industrial assets with sub-second latency on an end-to-end cloud IoT platform
  • Led development of a time-series API that unifies storage across multiple SQL and NoSQL databases, spanning existing and new systems
  • Overall results were a 10x reduction in cloud spend and a 10-100x improvement in ingest and query performance
  • Redesigned existing data pipelines into robust, scalable windowed stateful stream-processing applications to handle latency and scale
  • Overall results were an 8x reduction in cloud spend and reduced overall maintenance
  • Hands-on experience configuring network architecture on AWS, including VPCs, subnets, internet gateways, NAT, and route tables
  • Built the AWS cloud infrastructure from the ground up and adept at using AWS services such as EMR, EC2, Lambda, CloudWatch, and IoT.

Sr. Software Engineer

Walgreens Boots Alliance Inc.
Deerfield, IL
08.2017 - 09.2019

Transitioned existing pipelines from ETL tools like Informatica and DataStage to Spark, improving efficiency and scalability. Developed reusable frameworks to streamline and enhance pipeline development and management.

  • Installed, configured, and performance-tuned Kafka Connect to handle multi-million-event workloads
  • Architected, designed, and developed end-to-end data pipelines using Spark, Hive, and HBase for the data engineering team, optimizing physical storage formats, selecting fit-for-purpose technologies, and handling job failures and data quality issues
  • Developed Spark Structured Streaming and RDD streaming jobs to consume events from Kafka, performed real-time ETL (Extract, Transform, and Load), and wrote the raw/aggregated output to Hive tables and HBase
  • Single-handedly built generic frameworks in Unix shell and Scala to ingest data from various sources into Hadoop, scheduled them using Oozie, and provided monitoring, alerting, and traceability capabilities
  • Built the Microsoft Azure cloud environment for Event Hubs, Data Factory, and Azure Databricks from the ground up and adept at using them.

Hadoop Developer

USAA
San Antonio, TX
05.2015 - 08.2017


Designed, developed, and maintained big data solutions using Hadoop ecosystem tools such as HDFS, MapReduce, Hive, and Spark. Collaborated closely with data engineers and architects to optimize data processing workflows, ensure data quality, and implement scalable data storage solutions.

  • Designed & developed custom Map Reduce Job to ingest Click-stream Data received from Adobe & IVR Data received from Nuance into Hadoop
  • Developed several custom User defined functions in Hive & Pig using Java & python
  • Improving the performance and optimization of the existing algorithms in Hadoop using Spark-SQL and Data Frame API
  • Designed & developed External & Managed Hive tables with data formats such as Text, Avro, Sequence File, RC, ORC, parquet
  • Developed Custom SQOOP tool to import data residing in any relational databases, tokenize an identified sensitive column on the fly and store it into Hadoop.

ETL Developer

TCS
Chennai, TN
08.2011 - 05.2015


Designed and implemented efficient data extraction, transformation, and loading processes using tools such as Informatica and custom frameworks built on Apache Spark. Collaborated with stakeholders to understand data requirements, wrote optimized SQL queries, and ensured data accuracy and timeliness. Additionally, monitored ETL jobs, troubleshot issues, and stayed current with industry trends to deliver effective data integration solutions.

  • Gathered requirements and developed ETL jobs using Informatica, DataStage, and Apache NiFi, and built dashboard reports using Business Objects
  • Directly involved in the DB2 to IBM Netezza database migration, delivered successfully across multiple lines of business
  • Worked on change requests (CRs) and fixed defects involving ETL/Informatica code changes, UNIX script implementation, and database changes, including correcting historical data errors
  • Designed and developed scripts to build Type 1 and Type 2 dimensions and facts using Informatica and DataStage, and wrote complex SQL queries
  • Worked on multiple projects using Agile and Waterfall methodologies, based on requirements.

Education

Bachelor of Science - Information and Communication Technology

SASTRA UNIVERSITY, TANJORE
01.2011

Skills

  • Software/Data Engineering Management
  • Data Pipelines/Big Data
  • CI/CD Automation
  • System Architecture
  • Cross-Functional Leader/Mentor
  • Project Management/Execution
  • Cost Oversight/Savings
  • AI/ML Pipeline
  • DevOps/Security

Certification

  • AWS Certified Solutions Architect - Associate
  • AINS 21 - Property and Liability Insurance Principles

References

  • Doug Rhoades, AWS Practice Manager, Streaming, dmrhoad@amazon.com
  • Rajendra Prasad Munagala, Director, Software Engineering, Molex, Rajendra.PrasadM@molex.com
