PONNAM SAI VINEETH

Irving, TX

Summary

  • 4 years of experience in the development and implementation of data warehousing solutions.
  • Hands-on experience with the Azure cloud platform, specializing in building Extract, Transform, Load (ETL) ingestion flows using Azure Data Factory and leveraging Azure SQL DB and Azure SQL DW databases.
  • Extensively worked on Azure Databricks to build ETL data pipelines using PySpark and Spark SQL; familiar with Azure Key Vault for managing secrets and environment configuration.
  • Proficient in SnowSQL and Snowpipe for continuous data ingestion, as well as writing complex stored procedures and executing SQL statements within Snowflake environments.
  • Experienced in working within cross-functional Agile Scrum teams, with a solid understanding of reporting objects, dimensional data modeling, and SQL across multiple dialects.
  • Work effectively in team environments, with excellent communication and interpersonal skills.

Overview

6 years of professional experience

Work History

DATA ENGINEER

Albertsons
08.2021 - Current

Roles and Responsibilities:

  • The primary objective of this project is to deliver EDM data that consumers use to calculate PPH metrics and to run flash slots.
  • Data Integration and Transformation: Successfully integrated, transformed, and consolidated data from various structured and unstructured data systems into structures suitable for analytics solutions.
  • Offshore Team Management: Led and managed an offshore team, ensuring project goals were achieved and high-quality results were delivered.
  • Historical Data Load: Executed a comprehensive historical data load spanning two years, ensuring accurate and efficient data migration and integration.
  • Airflow and Snowflake: Utilized Airflow to orchestrate and schedule complex data workflows involving Snowflake, enabling the execution of SQL queries, data loading, and extraction tasks (see the sketch after this list).
  • Technical Skills: Worked with various Snowflake components such as Snowpipe, stages, streams, tasks, SnowSQL, views, data modelling, ServiceNow tickets, CI/CD deployments, clones, Time Travel, and stored procedures.
  • Experienced in handling different file formats such as JSON, XML, and CSV.
  • Utilized tools such as Offset Explorer, PuTTY, WinSCP, and Kafka Control Center for data validation and monitoring.
  • Involved in data modelling activities such as data profiling, data cleaning, and data preparation.
  • ETL and Automation: Collaborated with leaders to identify needs, gather requirements, and deliver solutions through the development of tools, reports, and predictive models. Implemented efficient ETL processes for data extraction, transformation, and loading, enabling the migration of extensive historical datasets. Played a key role in implementing autoscaling and archival frameworks to automate processes and improve overall performance.
  • Tools: Snowpipe, SnowSQL, stored procedures, Snowflake, Airflow, Kafka, Streams, Tasks, WinSCP, Kafka Control Center, PuTTY, Offset Explorer, Jira.
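
A minimal sketch of the Airflow-to-Snowflake orchestration pattern described above, assuming Airflow 2.x with the Snowflake provider installed and a "snowflake_default" connection configured; the DAG name, schedule, and table/stream names are hypothetical placeholders, not the production pipeline:

    from datetime import datetime

    from airflow import DAG
    from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator

    with DAG(
        dag_id="edm_daily_load",            # hypothetical DAG name
        start_date=datetime(2021, 8, 1),
        schedule_interval="0 6 * * *",      # run daily at 06:00 UTC
        catchup=False,
    ) as dag:
        # Consume new change records captured by a Snowflake stream and
        # merge them into the target table (placeholder object names).
        merge_stream = SnowflakeOperator(
            task_id="merge_stream_into_target",
            snowflake_conn_id="snowflake_default",
            sql="""
                MERGE INTO analytics.edm_target t
                USING analytics.edm_target_stream s
                    ON t.id = s.id
                WHEN MATCHED THEN UPDATE SET t.value = s.value
                WHEN NOT MATCHED THEN INSERT (id, value) VALUES (s.id, s.value);
            """,
        )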

ASSOCIATE DATA ENGINEER

Katalyst
Hyderabad
12.2017 - 03.2019

Roles and Responsibilities:

  • The primary objective of this project is to migrate data from on-premises systems (Oracle, Teradata) to Azure Data Lake Storage (ADLS).
  • Linked Services and Data Sources: Created linked services for various data stores including Oracle, Teradata, SQL Server, SAP HANA, on-premises file shares, Blob Storage, and ADLS Gen2, enabling seamless data integration from diverse sources.
  • Azure Data Factory (ADF): Utilized ADF activities such as Lookups, Stored Procedures, conditionals, loops, variable manipulation, metadata retrieval, and filtering to design and implement data pipelines.
  • ADF Triggers and Monitoring: Configured and scheduled ADF pipelines using triggers and monitored pipeline execution; implemented alert notifications for pipeline failures.
  • Self-Hosted Integration Runtime: Configured and managed the self-hosted integration runtime, facilitating data integration between on-premises and cloud environments.
  • Databricks and Spark: Implemented logging frameworks, ETL logging, validation frameworks, user-defined frameworks, and various data handling techniques (e.g., SCD Type 1 and 2, UPSERT) using Databricks notebooks (see the sketch after this list). Leveraged Azure Databricks and ADF to run Spark-Python and Spark-Scala notebooks.
  • Research and Documentation: Conducted research and documented best practices and standards for utilizing our BI tools, ensuring optimal usage and adherence to industry trends and guidelines.
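
A minimal sketch of the UPSERT (SCD Type 1) pattern referenced above, assuming a Databricks notebook where the Delta Lake library is available and the target is a Delta table; the paths and column names are hypothetical placeholders:

    # Runs inside a Databricks notebook, where `spark` is predefined.
    from delta.tables import DeltaTable

    # Incoming batch landed in a hypothetical ADLS staging folder.
    updates_df = spark.read.parquet("/mnt/adls/staging/customers")

    target = DeltaTable.forPath(spark, "/mnt/adls/curated/customers")

    (
        target.alias("t")
        .merge(updates_df.alias("s"), "t.customer_id = s.customer_id")
        .whenMatchedUpdateAll()      # overwrite matched rows (SCD Type 1)
        .whenNotMatchedInsertAll()   # insert rows for brand-new keys
        .execute()
    )

An SCD Type 2 variant of the same MERGE would instead close out the matched row (e.g., set an end date and a current-record flag) and insert the new version, rather than updating in place.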

Education

Master of Science - Computer Science

Wichita State University
05.2021

Bachelor of Technology - Computer Science

Gitam University
Hyderabad, INDIA
04.2018

Skills

Operating Systems: Unix and Windows

Languages: SQL, PL/SQL

Database Tools: NoSQL, MongoDB, Teradata, Oracle, SQL Server, Azure SQL DW

Reporting Tools: Tableau, Power BI

Data Modelling: Star schema, Snowflake schema

Scripting Languages: PySpark, Python, R, SQL, SnowSQL

Version Control: GitHub

SDLC Methodologies: Agile, Scrum

Azure Services: Azure Data Factory, Azure Databricks, ADLS Gen1 & Gen2, Azure Key Vault, Blob Storage, Event Hub, Log Analytics, Cosmos DB, ADLA

Certification: Microsoft Certified Azure Fundamentals (Certificate Number: H802-3508)
