Experienced Data Professional specializing in designing, developing, and deploying data-driven solutions across AWS, Azure, and Google Cloud. Proven ability to optimize data pipelines, build scalable data warehouses, and automate workflows and infrastructure. Skilled in DevOps practices and tools like Docker and Terraform, ensuring efficient and reliable deployments. Collaborative team player with strong problem-solving abilities, dedicated to delivering high-quality, data-driven solutions that drive business impact.
Overview
9 years of professional experience
Work History
Senior Data Engineer
GEVO INC., VERITY TRACKING
06.2023 - Current
Led cross-platform cloud optimization initiatives, reducing Snowflake warehousing costs by 15% through workload restructuring and automated shutdowns, while improving efficiency by 20% via strategic warehouse adjustments and notifications
Architected and deployed a comprehensive cloud-native data pipeline using Prefect on Google Cloud, leveraging BigQuery and Cloud Run, later expanding the architecture to incorporate Snowflake and GIS databases for enhanced geospatial data processing
Spearheaded the rearchitecting of the Enterprise Data Warehouse, implementing a clear, robust design that streamlined schema-change processes for 5 teams, ensuring 100% data consistency and reducing errors by 30%
Pioneered the development of a Python Docker microservice for real-time monitoring and adjustment of Snowflake costs, resulting in monthly savings of $10,000, while also enhancing Azure DevOps pipelines with Python and Azure CI/CD within the PLF framework, increasing efficiency by 15%
Implemented critical enhancements across multiple cloud platforms, including optimizing AWS SageMaker workflows, using Snowflake (SQL and Python) for problem identification, and leveraging Terraform to manage Google Cloud Storage, collectively improving data storage efficiency by 25% and saving 50 hours of development time.
Data Engineer
SONOBI, PUSH DATA TEAM
02.2022 - 06.2023
Led the overhaul and optimization of Amply Media data pipelines, orchestrating the migration to Snowflake, Airflow, and AWS, resulting in a 30% improvement in overall system efficiency and seamless integration with Sonobi's data infrastructure
Spearheaded the implementation of advanced Snowflake features, boosting data processing efficiency and pipeline performance, while deciphering and optimizing thousands of lines of legacy SQL code serving critical revenue dashboards
Architected and implemented a comprehensive testing protocol for data from 10+ API sources, ensuring 99% data quality and lineage throughout the integration process, crucial for maintaining accuracy in revenue reporting
Pioneered the adoption of AWS Lambda, converting 20 Python scripts into event-driven processes and enabling real-time handling of user-behavior data from website interactions in the new system.
Data Engineer Consultant
GENUENT
11.2019 - 02.2022
MUFG Union Bank (Service Team)
Developed an innovative ELT solution using Python and Azure services, generating labor cost savings of over $100,000 semi-annually and reducing time-to-insight for executives, directly enhancing leadership decision-making capabilities
Led quality assurance and architectural design for multiple projects within the data migration initiative, maintaining rigorous standards that reduced errors, while mentoring three developers to foster a culture of excellence
Designed and refined 12 Tableau dashboards using advanced data modeling techniques, advancing data visualization capabilities for production teams and enabling data-driven decision-making.
Enterprise Products, Big Data Team
Created a Flask web application for data validation and power optimization, incorporating dimensional modeling to enhance data integrity and achieving a 10% reduction in data errors
Designed and optimized ETL pipelines using Alteryx and Apache NiFi, with Jenkins handling the test and deploy phases, boosting pipeline performance by 2% and enhancing data processing capabilities
Developed a PI Web API application on Azure using Python, improving InfluxDB data extraction efficiency by 5% and enhancing data accessibility
Assisted in implementing Hadoop (MapR) and Apache Spark to manage 500 GB of data, contributing to a 3% improvement in query response time for big data processing.
Data Analyst
ALLIANTGROUP, DATA TEAM
01.2019 - 11.2019
Streamlined data processing for faster results by implementing advanced analytics tools
Presented findings to executive leadership teams through concise presentations, influencing future strategy development
Integrated multiple sources of disparate data into cohesive datasets using ETL processes, improving overall analytic capabilities
Provided actionable insights through comprehensive reports and dashboards, supporting strategic initiatives.
Python Developer
CLARITY FINANCIAL GROUP
01.2016 - 12.2018
Spearheaded the migration of legacy systems to more scalable and maintainable Python-based solutions
Increased code reusability with modular programming techniques, facilitating faster development cycles
Managed database security by implementing appropriate access controls, maintaining compliance with industry regulations and company policies
Automated routine maintenance tasks through scripting, freeing up valuable time for higher-priority projects
Consolidated disparate data sources into unified datasets using ETL processes, strengthening downstream analytic capabilities.