Pratyush Juneja

Seattle

Summary

Solutions-oriented Customer Engineer with deep experience designing and implementing real-time data pipelines using CDC technologies and modern cloud data platforms such as Snowflake, BigQuery, and Oracle. Proven ability to support both pre-sales and post-sales technical engagements, driving scalable streaming data solutions that reduce latency and maximize business impact.

Overview

6 years of professional experience

Work History

Field Engineer

Striim
10.2021 - Current
  • Architected and deployed real-time data pipelines using Kafka, Striim, and Change Data Capture (CDC) across distributed systems like Oracle, SQL Server, Snowflake, and BigQuery, enabling sub-second replication and supporting use cases like cloud migration and hybrid analytics.
  • Built and optimized streaming solutions for enterprise customers, contributing to a 20% increase in CSAT across a $4.7M ARR portfolio.
  • Automated Python-based field tools to support incremental loads, data transformation, and schema evolution during onboarding, cutting deployment time by 30%.
  • Acted as a trusted advisor during pre-sales engagements: scoped technical requirements, built tailored POCs, and demonstrated low-latency data replication and Kafka integration capabilities.
  • Partnered with Product and Engineering to close feedback loops from the field, influencing feature development around Kafka connectors, Avro schema registry usage, and operational observability.
  • Developed internal AI tools to assist cross-functional teams in documentation navigation and integration debugging.

Data Engineer

ServiceNow
San Diego
03.2021 - 06.2021
  • Collaborated with Engineering teams to develop a backend data tool, improving table creation efficiency by 2.5x using Python and SQL.
  • Created execution script that reduced processing time by 2 seconds per million SQL records, cutting total execution time by 2 hours.
  • Implemented parallel processing strategies, resulting in an 8x increase in productivity for SQL DDL queries.

Data Science Developer

University of California, San Diego
San Diego
07.2020 - 03.2021
  • Conceptualized, designed, and implemented a user interface tool that eliminated the need for manual SQL queries, boosting productivity by at least 75%.
  • Spearheaded the development of user screens and prototypes using PyQt5 to address database issues, saving 6 hours weekly, and presented them to the CIO of UC San Diego.
  • Collaborated with a cross-functional team to develop back-end scripts and SQL queries, facilitating the seamless integration of the front-end user interface with a SAP HANA database.

Decision Analyst Intern

FICO
San Diego
07.2019 - 09.2019
  • Developed a PySpark-driven cataloguing tool that boosted Transaction Analytics' productivity by 20x, demonstrating adeptness in leveraging advanced technology for operational enhancement.
  • Employed Bash, Python, and Perl to optimize client data consistency, resulting in a minimum 10% increase in usable data for each client and demonstrating a hands-on approach to delivering actionable insights.

Education

Bachelor of Science - Data Science

University of California, San Diego
La Jolla, CA
06.2021

Skills

  • Data Streaming & Integration: Kafka, CDC, SQL, Avro, JSON
  • Databases: Oracle, SQL Server, Postgres
  • Programming: Python, PySpark, Bash
  • Tools & Frameworks: PyQt5, Linux, Git, Excel
  • Cloud & Data Platforms: Azure, GCP, Snowflake, Kafka-based messaging
  • Effective communicator in both pre-sales demos and post-sales delivery
  • Strategic problem-solving and client relationship management
