Sankara Subramanian Karthic M

Seattle, USA

Summary

Senior Data Engineer with 15 years of experience in Amazon's data ecosystem, specializing in analytics platforms for Pricing, Seller, Returns, and Finance. Expertise includes data architecture, ETL pipeline development, and cloud-native solutions. Led a team of 5+ Data Engineers, influencing architectural decisions and developing long-term strategies. Recognized for delivering scalable, compliant data solutions that align with business objectives and support leadership reporting.

Work History

Data Engineer II

Amazon (Finance Domain)
Seattle, USA
11.2022 - Current

Over the past several years at Amazon, I have worked at the intersection of data engineering, finance technology, and large-scale analytics—designing, developing, and scaling critical data pipelines and platforms that directly influence multi-billion-dollar decision-making. My role centers on enabling the WW Stores FinTech and Global Finance teams with reliable, secure, and timely data that leadership depends on for operational visibility, compliance, and strategic planning.
I specialize in architecting cloud-native pipelines on AWS, optimizing Redshift workloads, building BI reporting solutions, and implementing governance standards across global datasets. In doing so, I have delivered innovations in real-time revenue insights, platform scalability, compliance readiness, and operational reliability. Beyond technical execution, I lead engineering initiatives, mentor Data and BI Engineers, and partner with leadership to align long-term strategy with Amazon’s business objectives.
Below is a detailed account of the scope and impact of my work at Amazon:

Real-Time Insights | Insist on the Highest Standards + Data Modeling
One of my most impactful contributions was the design and deployment of a layered hourly dataset for intra-day ASIN-level revenue insights. Previously, financial reporting relied heavily on daily aggregates, limiting leadership’s ability to respond to fast-moving market and operational shifts.
I built a multi-layered data pipeline with raw, staging, and denormalized layers that systematically transformed order and item activity into a robust hourly dataset. To ensure accuracy and alignment with daily customer order item (DUCOI) metrics, I implemented:
• MD5 hash change detection for efficient deduplication and change tracking.
• Record versioning to maintain historical integrity and allow point-in-time analysis.
• Business rule transformations to handle revenue allocation, cancellations, and late-arriving data.
• Time zone normalization for global financial reconciliation.
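The change-detection and versioning steps above can be sketched in Python. This is a minimal illustration, not the production pipeline: the field names, the in-memory store, and the `upsert_with_versioning` helper are hypothetical stand-ins for what would be MERGE logic against staging tables.

```python
import hashlib
from datetime import datetime, timezone

def record_hash(record, tracked_fields):
    """Concatenate the tracked fields and hash them, so any change to a
    tracked field yields a different digest."""
    payload = "|".join(str(record.get(f, "")) for f in tracked_fields)
    return hashlib.md5(payload.encode("utf-8")).hexdigest()

def upsert_with_versioning(store, record, tracked_fields, key="order_item_id"):
    """Write a new version only when the record's hash differs from the
    latest stored version; otherwise skip it as an unchanged duplicate.
    Returns True when a new version is written."""
    digest = record_hash(record, tracked_fields)
    versions = store.setdefault(record[key], [])
    if versions and versions[-1]["md5"] == digest:
        return False  # hash unchanged -> deduplicated
    versions.append({
        "md5": digest,
        "version": len(versions) + 1,          # point-in-time history
        "effective_at": datetime.now(timezone.utc),
        "data": dict(record),
    })
    return True
```

Keeping every version (rather than overwriting) is what enables the point-in-time analysis mentioned above; the hash comparison keeps reprocessed but unchanged rows from inflating the history.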
The result was a high-fidelity, intra-day reporting dataset that empowers the Amazon Stores Finance Team to conduct near real-time financial analysis. This solution now supports decision-making across leadership levels, enabling faster insights into ASIN-level performance, demand spikes, and regional sales patterns. It has directly influenced how the finance organization evaluates performance in fast-moving business contexts such as promotions, product launches, and holiday events.

ETL Automation & Reliability | Customer Obsession + SLA Governance
Reliability is the cornerstone of financial data reporting. In 2024, I led a pipeline reliability program for the Daily Sales Flash (DSF)—a flagship dataset used by S-Team (including the CEO and VPs) to track sales.
Key initiatives included:
• Pipeline failure reduction: Identified and eliminated systemic failure points by refactoring scripts and improving orchestration logic.
• Proactive alerting: Integrated Klaxon alarms across all DSF profiles to provide real-time monitoring and automated escalation.
• SLA governance: Designed automated SLA adherence checks and validation layers that alert engineers and stakeholders of potential breaches.
These improvements increased pipeline SLA adherence to 99.7% in 2024, ensuring that leadership always has access to accurate, timely DSF metrics. This program directly reinforced the WW Stores FinTech team’s mission of delivering trustworthy financial data to inform strategy, investor relations, and quarterly financial reporting.
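The core of an automated SLA adherence check like the one described is a small piece of logic; a sketch follows, with the states, margin, and function name chosen for illustration rather than taken from the actual DSF tooling.

```python
from datetime import datetime, timedelta, timezone

def sla_status(expected_by, actual_arrival=None,
               warn_margin=timedelta(minutes=30), now=None):
    """Classify a dataset delivery against its SLA deadline.
    Returns 'met' or 'breached' once the data lands, and 'at_risk' when
    the deadline is near but the data has not arrived yet, so alerting
    can escalate before the miss actually happens."""
    now = now or datetime.now(timezone.utc)
    if actual_arrival is not None:
        return "met" if actual_arrival <= expected_by else "breached"
    if now > expected_by:
        return "breached"
    if expected_by - now <= warn_margin:
        return "at_risk"
    return "on_track"
```

The 'at_risk' state is what makes the check proactive: it is the hook for paging engineers and notifying stakeholders of a potential breach rather than reporting one after the fact.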

Scalable Data Platforms | Think Big + Data Architecture
Amazon’s platform scalability demands constant reinvention. I took ownership of the Andes 3.0 migration, one of the largest data infrastructure modernization efforts, spanning nearly 7,000 tasks across multiple organizations.
My contributions included:
• Designing scalable orchestration workflows to migrate pipelines with zero downtime.
• Refactoring data tasks into modular, reusable components, reducing maintenance overhead.
• Establishing documentation best practices to onboard new engineers and accelerate adoption.
• Partnering with cross-org stakeholders to align migration milestones with VP-level goals.
The Andes 3.0 migration not only modernized infrastructure but also improved cost efficiency, reliability, and scalability of financial data pipelines. The success of this initiative has become a reference point for future migrations, demonstrating how large-scale platform shifts can be executed with minimal business disruption.

Performance Optimization | Dive Deep + Redshift WLM Tuning
Redshift sits at the heart of Amazon’s finance reporting ecosystem. Performance bottlenecks in clusters were impacting SLA compliance and delaying leadership dashboards.
To address this, I:
• Tuned Workload Management (WLM): Segmented queries into prioritized queues based on business criticality.
• Optimized ETL pipelines: Re-engineered query logic, restructured tables with appropriate sort/dist keys, and applied compression.
• Implemented monitoring dashboards: Built near real-time performance metrics dashboards to track cluster utilization.
These improvements reduced query times, eliminated SLA breaches, and ensured leadership reports—including S-Team sales trackers—were delivered on time. The initiative exemplified operational excellence by balancing performance with cost efficiency.
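The queue-segmentation idea above can be sketched as a small routing table. The queue names and tag rules here are hypothetical; in Redshift the real queues live in the cluster's parameter group, and a session is matched to one via `query_group` (or `user_group`), which is the only Redshift-specific piece shown.

```python
# Hypothetical queue names and routing rules for illustration; the actual
# WLM queues are defined in the Redshift parameter group.
QUEUE_RULES = [
    ("sla_critical", {"s_team_flash", "daily_sales_flash"}),
    ("etl", {"etl", "ingestion"}),
]

def route_query(tags, default="adhoc"):
    """Pick the first (highest-priority) queue whose trigger tags overlap
    the query's tags; fall back to the ad-hoc queue."""
    tags = set(tags)
    for queue, triggers in QUEUE_RULES:
        if tags & triggers:
            return queue
    return default

def set_query_group_sql(queue):
    """The statement a session runs so Redshift WLM assigns its queries
    to the matching queue."""
    return f"SET query_group TO '{queue}';"
```

Running `set_query_group_sql(route_query(...))` at the start of each session is what keeps leadership-facing workloads from queuing behind heavy ad-hoc scans.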

BI Reporting & Leadership Insights | Deliver Results + Business Intelligence
Recognizing the need for accessible financial insights, I developed BI dashboards and reports consumed directly by Amazon’s highest levels of leadership.
Highlights include:
• IN FP&A Daily Sales Flash Metrics: A critical dataset that integrates into finance workflows across multiple marketplaces.
• QuickSight dashboards: Built executive-facing dashboards enabling S-Team leaders to track near real-time sales trends.
• Self-service tooling: Designed dashboards with drill-down capabilities, empowering finance analysts to explore data without engineering dependency.
These solutions have been instrumental in leadership decision-making, offering visibility into financial performance across countries, marketplaces, and product categories.

Data Ingestion & Accuracy | Frugality + ETL Automation
Data ingestion for external and cross-team datasets is often prone to manual errors. I automated ingestion for critical datasets, including:
• Foreign Exchange (FX) Rates
• Marketplace Finance (MP Finance)
• Curves Upload for forecasting models
By implementing validation layers, deduplication checks, and schema-driven ingestion workflows, I reduced manual errors by 80%, significantly improving the accuracy and reliability of downstream financial models and reports.
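A schema-driven validation-plus-deduplication pass like the one described can be sketched as follows; the FX-rate schema, business key, and function name are illustrative assumptions, not the actual ingestion workflow.

```python
SCHEMA = {  # hypothetical schema for an FX-rate feed
    "currency": str,
    "rate": float,
    "as_of_date": str,
}

def validate_and_dedupe(rows, schema=SCHEMA, key=("currency", "as_of_date")):
    """Drop rows that fail the schema (missing field or wrong type) and
    collapse duplicates on the business key, keeping the first occurrence.
    Returns (clean_rows, rejected_count)."""
    seen, clean, rejected = set(), [], 0
    for row in rows:
        if not all(isinstance(row.get(col), typ) for col, typ in schema.items()):
            rejected += 1          # schema violation -> quarantine
            continue
        k = tuple(row[c] for c in key)
        if k in seen:
            continue               # duplicate on business key -> skip
        seen.add(k)
        clean.append(row)
    return clean, rejected
```

Driving the checks from a declared schema, rather than hand-written per-feed code, is what lets the same workflow cover FX rates, MP Finance, and curve uploads alike.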

Compliance & Governance | Earn Trust + Data Governance
Compliance is non-negotiable in global finance. I spearheaded several governance initiatives to align with regulatory and internal audit standards:
• DMA and GDPR readiness: Ensured data pipelines adhered to European data privacy and portability requirements.
• Kale Attestation & EU Seller Tagging: Standardized tagging and attestation across 20+ datasets for legal compliance.
• Credential management: Migrated legacy Chime webhook and EC2 script credentials into AWS Secrets Manager, enhancing security posture and reducing operational risk.
These initiatives not only ensured compliance but also improved stakeholder confidence in Amazon’s ability to handle sensitive financial data securely and transparently.
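The credential migration pattern above, sketched with an injectable client so it stays testable. `create_secret` and `get_secret_value` are real Secrets Manager operations (in boto3, `boto3.client("secretsmanager")`); the helper names and secret paths are illustrative.

```python
def migrate_credential(secrets_client, name, plaintext_value, tags=None):
    """Store a legacy plaintext credential (e.g. a webhook URL that used
    to live in an EC2 script) in Secrets Manager; returns the new ARN."""
    resp = secrets_client.create_secret(
        Name=name,
        SecretString=plaintext_value,
        Tags=tags or [],
    )
    return resp["ARN"]

def fetch_credential(secrets_client, name):
    """Resolve the secret at call time so no credential ever sits in
    code or on disk."""
    return secrets_client.get_secret_value(SecretId=name)["SecretString"]
```

Once callers switch to `fetch_credential`, rotation becomes a server-side concern: the secret can be updated in Secrets Manager without touching any script.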

QuickSight Dashboarding & Operational Excellence | Think Big + BI Tooling
Beyond leadership reporting, I developed operational dashboards to drive accountability and governance across teams:
• Permission Audit Dashboard: Gave leaders visibility into dataset access, reinforcing least-privilege principles.
• SLA Tracker Dashboard: Provided real-time adherence tracking, enabling proactive resolution of potential misses.
• Category Details Dashboard: Delivered category-level financial insights to support pricing and product strategy.
By standardizing reporting and reducing manual tracking, these dashboards improved transparency, governance, and leadership decision velocity.

Leadership & Mentorship | Learn and Be Curious + Enablement
Technical impact is amplified through people. I have consistently invested in mentoring and enablement:
• Mentored new Data Engineers and BI Engineers, accelerating their onboarding and career development.
• Conducted SQL workshops to upskill engineers and analysts across teams.
• Actively participated in Amazon hiring panels, ensuring high standards in recruiting top talent.
These contributions fostered a culture of technical excellence, curiosity, and continuous learning.

Leadership Principles in Action
Across all these initiatives, my work has consistently delivered outcomes aligned with Amazon’s leadership principles:
• Customer Obsession: Ensuring reliable financial datasets for leadership and finance teams.
• Insist on the Highest Standards: Driving accuracy through rigorous validation and data modeling practices.
• Think Big: Leading large-scale migrations like Andes 3.0.
• Dive Deep: Optimizing Redshift performance to resolve bottlenecks.
• Deliver Results: Enabling S-Team to make decisions with confidence.
• Earn Trust: Strengthening compliance and governance across critical datasets.


Through these efforts, I’ve established myself as a trusted leader within the WW Stores FinTech organization, recognized for both technical depth and cross-functional influence. My work continues to shape how Amazon handles finance reporting at scale, blending technical innovation with governance, compliance, and operational excellence.

Data Engineer

Amazon (Returns Domain)
Hyderabad, India
03.2017 - 11.2022

As a Data Engineer in Amazon’s Returns domain, I played a pivotal role in building and scaling the data ecosystem that powers Amazon’s global returns operations. This work directly influenced how millions of customer return requests are tracked, processed, and reconciled worldwide, with downstream impact on customer experience, finance reporting, and operational excellence.
Over my 5+ years in this role, I specialized in data architecture, migration, modeling, and performance optimization, while also modernizing Amazon’s big data platform footprint for returns. My contributions were not limited to technical execution—I took ownership of large-scale migrations, simplified analytics workflows, optimized infrastructure costs, and enabled governance frameworks to ensure data traceability. These initiatives established a foundation that continues to support Amazon’s returns ecosystem with accuracy, scalability, and compliance.

Data Architecture | Ownership
One of my earliest and most critical projects was the Oracle-to-Redshift migration (Program Jiyu), which spanned 26+ high-impact returns tables. These datasets contained the backbone of customer returns data across multiple marketplaces, and the migration required careful planning, testing, and execution to ensure zero business disruption.
Key aspects of my contribution included:
• Schema redesign: Modeled Redshift schemas optimized for analytical workloads, eliminating legacy bottlenecks from Oracle’s transactional structures.
• Data validation pipelines: Built row-level and aggregate validation frameworks to reconcile Oracle vs. Redshift records, ensuring accuracy and completeness.
• Performance enhancements: Refactored queries that previously ran in hours into optimized Redshift workloads that completed in minutes.
By deprecating legacy Oracle systems, this migration not only streamlined Amazon’s returns reporting but also significantly improved query efficiency across critical datasets consumed by finance, operations, and product teams.
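The reconciliation step in a migration like this boils down to comparing per-table statistics captured on both sides; a sketch follows, with the stat names (`row_count`, `amount_sum`) and the `reconcile` helper invented for illustration.

```python
def reconcile(source_stats, target_stats, tolerance=0.0):
    """Compare per-table row counts and a monetary checksum captured
    from the source (Oracle) and target (Redshift) sides; return the
    tables whose numbers disagree beyond the tolerance."""
    mismatches = []
    for table, src in source_stats.items():
        tgt = target_stats.get(table)
        if tgt is None:
            mismatches.append((table, "missing in target"))
            continue
        if src["row_count"] != tgt["row_count"]:
            mismatches.append((table, "row count"))
        elif abs(src["amount_sum"] - tgt["amount_sum"]) > tolerance:
            mismatches.append((table, "aggregate"))
    return mismatches
```

Checking counts first and aggregates second catches both missing rows and silently corrupted values; a non-zero tolerance absorbs benign float rounding differences between engines.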

Data Modeling | Invent and Simplify
Beyond migrations, I designed and launched new data models and datasets that simplified analytics for global returns reporting. Notable datasets included:
• BAD_RETURN – captured details of non-compliant or defective returns for root cause analysis.
• CONCESSION – provided visibility into concessions issued during return and replacement flows.
• TRANS_METRICS – enabled operational leaders to monitor transaction-level return metrics at scale.
These datasets streamlined how business and analytics teams interacted with returns data, removing dependency on complex joins and manual reconciliations. By packaging insights into domain-specific datasets, I accelerated time-to-analysis and enabled stakeholders to focus on decision-making rather than data wrangling.

Big Data Platform Modernization | Are Right, A Lot
In parallel, I led the Cradle Migration Project, a major modernization effort to transition from Amazon’s legacy Datanet jobs into Apache Spark-based workflows.
My role included:
• Job discovery and cataloging: Conducted a full audit of existing Datanet jobs supporting returns datasets.
• Migration design: Re-engineered job logic into Spark pipelines, leveraging distributed processing to improve efficiency.
• Workload balancing: Migrated processing-heavy jobs out of Redshift and into Spark clusters, alleviating Redshift workload constraints and reducing contention for compute resources.
This initiative boosted processing efficiency, reduced Redshift overhead, and created a modern foundation for big data processing in the returns domain. By moving to Spark, we improved performance and ensured future scalability as data volumes continued to grow year-over-year.

Operational Excellence | Success and Scale Bring Broad Responsibility
Performance optimization was a recurring theme in my role. Returns data pipelines were mission-critical, and query delays or cluster bottlenecks could have direct downstream effects on reporting and customer experience.
To address this, I drove Redshift WLM (Workload Management) and CPU resource tuning:
• Created prioritized queues for critical vs. non-critical workloads.
• Tuned CPU/memory allocations to improve throughput across concurrent queries.
• Partnered with analysts and engineers to redesign heavy workloads for efficiency.
As a result, I significantly improved query performance and cluster efficiency, ensuring that returns pipelines scaled reliably while reducing operational firefighting. These improvements translated into faster reporting SLAs, reduced cost of compute, and higher confidence in the data ecosystem.

Cost Optimization | Think Big
As datasets scaled into the hundreds of terabytes, storage and compute costs became an increasing focus. I led a cost optimization program that involved migrating 377TB of redundant or under-utilized storage to lower-cost tiers while ensuring compliance with Amazon’s retention policies.
This initiative required a delicate balance of frugality and governance:
• Identified datasets with duplicate or redundant copies.
• Partnered with compliance and legal teams to ensure retention standards were preserved.
• Re-architected data archival processes to maintain accessibility while reducing high-cost storage.
The outcome was a substantial reduction in infrastructure costs, proving that large-scale cost savings can be achieved without compromising compliance or business requirements.

Metadata Management | Dive Deep
Data governance and traceability are critical in domains like returns, where insights drive financial reconciliation, customer refunds, and regulatory reporting. To strengthen governance, I designed and maintained the metadata pipeline.
This pipeline cataloged returns datasets, capturing lineage, schema changes, and access metadata. It became the single source of truth for dataset governance within the returns domain, enabling:
• Easier onboarding of new engineers and analysts.
• Faster impact analysis during schema changes.
• Stronger audit readiness for compliance requirements.
By embedding governance directly into the data ecosystem, I enabled returns teams to move faster while maintaining confidence in data traceability and quality.
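The impact-analysis use case above rests on one capability: walking dataset lineage downstream. A minimal sketch, with hypothetical dataset names and a toy in-memory catalog in place of the real metadata pipeline:

```python
class MetadataCatalog:
    """Tiny lineage catalog: records each dataset's direct upstream
    sources so schema-change impact can be traced downstream."""

    def __init__(self):
        self.upstream = {}  # dataset -> set of direct source datasets

    def register(self, dataset, sources):
        self.upstream[dataset] = set(sources)

    def downstream_of(self, source):
        """All datasets impacted, directly or transitively, by a change
        to `source` (a breadth-first walk over the lineage graph)."""
        impacted, frontier = set(), {source}
        while frontier:
            frontier = {d for d, srcs in self.upstream.items()
                        if srcs & frontier and d not in impacted}
            impacted |= frontier
        return impacted
```

Answering "what breaks if this table's schema changes?" from the catalog, rather than by reading pipeline code, is what shortens impact analysis and audit responses.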

Technical Skills & Tools Applied
Throughout this role, I leveraged a deep toolkit of technologies and best practices to deliver high-scale, resilient solutions. My technical expertise spanned:
• Data Modeling & Architecture: Redshift schema optimization, domain-specific dataset design.
• ETL Development: SQL, Datanet, Spark pipelines for ingestion, transformation, and validation.
• Performance Optimization: Redshift WLM tuning, cluster CPU/memory balancing, query optimization.
• Data Governance: Metadata pipeline development, compliance adherence.
• AWS Ecosystem: S3 for data lake storage, Klaxon for proactive monitoring and alerting.
• Monitoring & Reliability: Building alarms and validation layers to ensure SLA compliance.

Leadership Principles in Action
My work in the Returns domain consistently embodied Amazon’s leadership principles:
• Ownership: Took full responsibility for Program Jiyu’s Oracle-to-Redshift migration, ensuring seamless adoption.
• Invent and Simplify: Designed new returns datasets to simplify analytics and reduce manual effort.
• Are Right, A Lot: Modernized the platform by leading the Cradle Spark migration.
• Dive Deep: Built metadata pipelines to provide complete traceability and governance.
• Success and Scale Bring Broad Responsibility: Enhanced WLM and CPU resource tuning to safeguard operational excellence at scale.
• Think Big: Drove cost optimization efforts, saving infrastructure costs while maintaining compliance.

Impact and Legacy
During my tenure in the Returns domain, I:
• Migrated 26+ critical returns datasets from Oracle to Redshift, modernizing analytics infrastructure.
• Launched new domain datasets that simplified global reporting and improved efficiency for analysts.
• Modernized processing by migrating workloads from Datanet to Spark, reducing Redshift bottlenecks.
• Optimized Redshift WLM and cluster CPU, improving performance across mission-critical pipelines.
• Saved costs by migrating 377TB of redundant storage, balancing frugality with compliance.
• Built and maintained metadata governance pipelines to strengthen trust, lineage, and audit readiness.


These contributions established a robust, scalable, and cost-efficient returns data ecosystem that continues to serve global operations, finance, and leadership reporting at Amazon.

Support Engineer

Amazon (Seller Domain)
Hyderabad, India
06.2015 - 03.2017

During my tenure as a Support Engineer in Amazon’s Seller Domain, I focused on enabling seller acquisition and onboarding through a combination of data-driven analysis, automation, and tooling development. This role sat at the intersection of business and engineering, where the ability to extract, cleanse, and deliver actionable seller data directly influenced Amazon’s growth strategy across India and international markets.
My work directly supported Amazon’s mission to expand the seller ecosystem, enhance marketplace diversity, and improve customer experience by ensuring access to high-quality, reliable seller data. Over the course of this role, I developed and deployed automation pipelines, scraping scripts, and integration workflows that scaled seller onboarding and contributed to the successful addition of 3,500+ new sellers.

Customer Obsession | Identifying High-Potential Sellers
A central part of my responsibility was identifying high-potential sellers from external marketplaces. To do this effectively, I partnered with business teams to define key attributes that signaled seller quality and growth potential. These attributes included:
• Contact accuracy (emails, phone numbers, addresses).
• Product category and subcategory distribution.
• Shipping policies, timelines, and return policies.
• Historical customer ratings and external feedback.
To scale this process, I automated web data extraction pipelines that continuously gathered structured seller information from multiple third-party websites. The extracted data was cleaned, validated, and ingested into internal Amazon databases, enabling business stakeholders to prioritize seller outreach with confidence.
This initiative exemplified Customer Obsession by ensuring that Amazon onboarded sellers who could meet customer expectations, thereby improving the end-to-end shopping experience.

Dive Deep | Automation & Data Completeness
To support the above workflows, I designed and maintained custom scraping scripts and automation tools using Python, Shell scripting, and SQL. These tools were engineered to fetch seller information from diverse external sources with varying structures.
Key technical elements included:
• Data parsing and validation: Built regex-based and XPath-based extraction logic to ensure data accuracy.
• Scalability: Modularized scripts for multiple websites, ensuring reusability and reduced maintenance overhead.
• Compliance handling: Incorporated checks to ensure data extraction aligned with Amazon’s governance policies.
• Completeness: Automated periodic runs with retry logic to capture missing fields and reduce data gaps.
Through these systems, I enabled Amazon to maintain a near-complete and up-to-date external seller dataset, ensuring consistency between external information and Amazon’s internal databases.
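The regex-based extraction and retry logic above can be sketched as follows; the field set, patterns, and helper names are illustrative (real pages would also go through XPath selectors for structured fields, as noted above).

```python
import re
import time

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def extract_seller_fields(page_html):
    """Pull contact fields out of raw page text with regexes."""
    return {
        "emails": EMAIL_RE.findall(page_html),
        "phones": [p.strip() for p in PHONE_RE.findall(page_html)],
    }

def fetch_with_retry(fetch, url, attempts=3, backoff=0.0):
    """Retry a flaky fetch callable with exponential backoff before
    giving up -- the 'completeness' piece that recaptures fields missed
    on transient failures."""
    last_error = None
    for i in range(attempts):
        try:
            return fetch(url)
        except Exception as exc:  # sketch only; real code narrows this
            last_error = exc
            time.sleep(backoff * (2 ** i))
    raise last_error
```

Passing the fetch function in (rather than hard-coding an HTTP client) keeps the retry policy reusable across the per-site scraper modules described above.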

Deliver Results | Seamless Seller Onboarding
The true impact of my work came when these automation outputs were integrated into internal seller onboarding pipelines. I partnered with engineering and business teams to:
• Build ETL workflows to ingest scraped data into internal databases.
• Normalize attributes across marketplaces for consistent representation.
• Provide APIs and reporting layers for business development teams to access actionable seller information.
This automation reduced manual effort, improved accuracy, and drastically increased the pace of onboarding. In total, my contributions directly enabled the successful onboarding of 3,500+ new sellers, spanning India and international markets. This directly translated into higher product availability, competitive pricing, and expanded customer choice—boosting Amazon’s revenue potential and marketplace growth.

Technical Toolkit
While this role leaned heavily on applied problem-solving, I also strengthened my technical foundation across multiple domains:
• SQL: Writing optimized queries to cleanse, transform, and reconcile extracted seller datasets.
• Python: Developing scraping scripts, automation tools, and ETL integration components.
• Shell Scripting: Scheduling and orchestration for data pipelines.
• Version Control (Git): Managing codebases for scraping tools and automation scripts.
• CI/CD Pipelines: Streamlining deployment of scraping tools and automation workflows for continuous updates.
This combination of tools enabled me to build solutions that were not only functional but also maintainable, scalable, and production-ready.

Leadership Principles in Action
Even in a Support Engineer capacity, I consistently operated by Amazon’s Leadership Principles:
• Customer Obsession: Prioritized onboarding sellers who would deliver the best value and experience to Amazon customers.
• Dive Deep: Developed resilient scraping scripts that handled diverse and complex third-party website structures.
• Deliver Results: Integrated external seller data into Amazon’s onboarding ecosystem, enabling measurable impact through 3,500+ seller onboardings.
• Invent and Simplify: Automated manual seller identification and onboarding workflows, drastically reducing turnaround time.
• Learn and Be Curious: Continuously explored new scraping libraries, data validation methods, and automation practices to stay ahead of technical challenges.

Impact and Legacy
The work I delivered during this role had both immediate and lasting impact:
• Built data pipelines and automation tools that accelerated seller onboarding and reduced manual workload.
• Improved data completeness and accuracy, enabling business teams to make confident outreach decisions.
• Contributed to the onboarding of 3,500+ sellers, expanding Amazon’s product catalog, strengthening customer experience, and boosting marketplace competitiveness.
• Established a repeatable automation framework that future teams could adapt for seller acquisition in other markets.


This role served as a critical foundation in my career, allowing me to develop hands-on skills in data engineering, automation, and problem-solving that I later scaled into large-scale data architecture and financial systems at Amazon.

Sr. Support Analyst / Catalog Lead

Amazon (Pricing Domain)
Bangalore, India
10.2013 - 06.2015

As a Sr. Support Analyst and Catalog Lead in Amazon’s Pricing Domain, I was responsible for ensuring accuracy, scalability, and efficiency in global pricing operations. My work spanned process automation, BI reporting, quality assurance, and people management, directly impacting how Amazon maintained competitive prices and ensured consistent, customer-focused pricing across regions.
This role required a strong balance of technical problem-solving, operational leadership, and cross-functional collaboration. I not only drove end-to-end pricing accuracy initiatives but also led a team of 20 analysts, coached new hires, and developed BI dashboards that gave near real-time visibility into pricing performance. My efforts were aligned with Amazon’s leadership principles, particularly Ownership, Insist on the Highest Standards, Invent and Simplify, and Hire and Develop the Best.

Ownership | Streamlining Global Pricing Operations
At Amazon’s scale, pricing is a global, high-stakes process with billions of data points influencing customer decisions daily. Recognizing inefficiencies in existing workflows, I took ownership of cross-functional initiatives to streamline pricing operations across multiple regions.
Key contributions included:
• Partnering with product, engineering, and operations teams to identify gaps in pricing workflows.
• Designing scalable solutions that automated validations, reduced manual interventions, and improved visibility for stakeholders.
• Driving alignment between regional pricing teams to ensure standardization of processes and metrics.
By championing these initiatives, I delivered solutions that improved pricing accuracy, consistency, and transparency, reducing delays and errors in how Amazon priced its catalog worldwide.

Insist on the Highest Standards | Pricing Accuracy & Audit Controls
Pricing accuracy is critical not only for customer trust but also for Amazon’s competitive positioning. I led efforts to establish standardized validation frameworks and rigorous audit controls that significantly raised the bar for data quality.
Highlights include:
• Improving price validation accuracy from 85% to 99% by deploying structured, repeatable QA frameworks.
• Reducing false alarms from 33% to just 2% by implementing intelligent validation rules and automated monitoring.
• Building audit dashboards to continuously track validation effectiveness and detect anomalies.
These improvements minimized noise in the system, freed up analyst bandwidth, and ensured that pricing decisions were made based on reliable data. By raising the quality bar, I directly contributed to customer trust and satisfaction in Amazon’s pricing.
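One way a false-alarm-reducing validation rule of this kind can work is by combining a relative band with an absolute floor; the thresholds and function below are hypothetical, chosen only to illustrate the idea.

```python
def flag_price(old_price, new_price, band=0.30, floor=1.00):
    """Alarm only when a price moves more than `band` (30%) relative to
    its previous value AND the absolute change clears a small floor,
    which filters out the penny-level swings on cheap items that used
    to generate false alarms."""
    if old_price <= 0:
        return True  # cannot validate against a non-positive baseline
    change = abs(new_price - old_price)
    return (change / old_price) > band and change >= floor
```

Tuning `band` and `floor` per category is the kind of "intelligent validation rule" that trades a tiny amount of sensitivity for a large drop in noise.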

Hire and Develop the Best | People Leadership
As a Catalog Lead, I had the opportunity to manage and mentor a team of 20 analysts, including 4 direct reports. My leadership responsibilities included:
• Coaching team members on SQL, data analysis, and problem-solving techniques.
• Designing structured onboarding programs that enabled new hires to contribute productively within weeks.
• Conducting regular performance reviews and providing actionable feedback for skill development.
• Fostering a culture of continuous improvement, where analysts proactively identified and resolved process gaps.
Through this mentorship, I significantly improved team productivity and morale. By scaling knowledge effectively, I enabled the team to support more complex pricing workflows with reduced error rates. Several team members I mentored went on to take larger roles within Amazon, a testament to the long-term impact of people development.

Invent and Simplify | BI Analytics & Dashboards
Recognizing the need for actionable insights, I led BI analytics and warehousing initiatives to simplify how stakeholders consumed pricing data.
• Designed and launched pricing dashboards using Amazon-native BI tools and Power BI.
• Built real-time data visualizations that allowed pricing managers to track anomalies, validations, and exceptions.
• Automated previously manual reporting workflows, reducing turnaround time from days to near real-time.
These dashboards empowered decision-makers with visibility into key pricing metrics across regions, enabling faster interventions and better strategic decisions. The automation also freed up analysts to focus on higher-value analytical work.

Learn and Be Curious | Documentation & Knowledge Sharing
To ensure long-term success and scalability, I invested in knowledge sharing and standardization:
• Authored technical documentation for certificate implementations, validation logic, and BI workflows.
• Conducted training sessions for new analysts and cross-team partners, reducing onboarding time.
• Created reusable templates for validation and reporting tasks, ensuring consistency across teams.
By institutionalizing knowledge, I reduced reliance on individual expertise and strengthened the overall resilience of pricing operations.

Technical Toolkit & Skills
This role deepened my expertise across data engineering and BI tools:
• SQL: Advanced query development for pricing validation and reporting.
• Quality Assurance: Framework design for data validation and anomaly detection.
• Datanet (ETLM tool): ETL development for data movement and transformation.
• Power BI & Amazon-native BI tools: Dashboard creation and reporting automation.
• Cross-Team Collaboration: Bridging business and technical teams for pricing operations.
• Problem Solving: Root-cause analysis and remediation for pricing anomalies.

Leadership Principles in Action
Throughout my Pricing Domain tenure, I consistently demonstrated Amazon’s principles:
• Ownership: Drove cross-functional initiatives to improve pricing globally.
• Insist on the Highest Standards: Raised validation accuracy from 85% to 99%.
• Hire and Develop the Best: Managed and coached a 20-member team.
• Invent and Simplify: Automated reporting workflows and launched BI dashboards.
• Learn and Be Curious: Documented and trained teams for consistent execution.

Impact and Legacy
The initiatives I led had a profound impact on Amazon’s global pricing function:
• Accuracy: Increased validation accuracy by 14 percentage points (85% to 99%) and cut false alarms by 31 percentage points (33% to 2%).
• Efficiency: Automated reporting and streamlined workflows, saving countless analyst hours.
• People Development: Built and scaled a high-performing team of 20 analysts.
• Strategic Enablement: Delivered dashboards and frameworks that empowered leadership with real-time insights.
• Sustained Excellence: Institutionalized knowledge and standardized processes for long-term resilience.


This role laid the foundation for my career as a data-focused engineer and leader. It gave me the opportunity to blend technical expertise with people leadership, preparing me for increasingly complex challenges in Amazon’s Returns and Finance domains.

Support Analyst

Amazon (Pricing Domain)
Chennai, India
03.2010 - 10.2013

As a Support Analyst in Amazon’s Pricing Domain, I worked at the critical frontline of Amazon’s pricing operations, where accuracy, timeliness, and reliability were paramount to customer trust and business competitiveness. My responsibilities blended data analysis, system support, process automation, and quality assurance, ensuring that pricing systems operated seamlessly across millions of products globally.
This role provided me with hands-on experience in analyzing high-traffic product data, developing automation tools, implementing machine learning models, and maintaining 24/7 operational support. More importantly, it introduced me to Amazon’s Leadership Principles, which I consistently applied to deliver measurable improvements in accuracy, efficiency, and customer experience.

Customer Obsession | Driving Price Competitiveness Through Insights
At Amazon, pricing plays a critical role in attracting customers, maintaining trust, and ensuring long-term growth. To support this mission, I focused on analyzing data from high-traffic, high-visibility products—items that directly influenced customer purchase decisions.
Key contributions included:
• Data-driven insights: Developed reporting mechanisms that flagged anomalies and provided actionable insights to pricing teams.
• Cross-functional collaboration: Partnered with category managers and finance stakeholders to deliver pricing intelligence that directly shaped strategy.
• Competitive benchmarking: Analyzed market data and competitor pricing to ensure Amazon remained aligned with customer expectations.
Through these efforts, I helped improve Amazon’s price competitiveness, enabling leadership to make fast, data-informed decisions that enhanced the customer experience.

Invent and Simplify | Transforming Pricing Quality
One of my most impactful initiatives was leading a quality transformation program for pricing operations. Pricing discrepancies not only created operational inefficiencies but also risked damaging customer trust.
To address this, I:
• Designed standardized audit processes to systematically track, review, and resolve pricing errors.
• Built automated detection tools using XPath, Regex, and shell scripting to proactively identify anomalies before they reached customers.
• Partnered with stakeholders to embed these tools into standard operating procedures.
The impact was significant: pricing errors fell by 80%, cutting manual rework, improving data reliability, and ensuring customers always saw accurate, competitive prices.

Dive Deep | Leveraging Data & Machine Learning Models
Complex datasets and proprietary tools were central to pricing operations. To maximize efficiency and accuracy, I consistently applied Dive Deep:
• Data exploration: Analyzed large-scale datasets using internal tools to identify discrepancies and root causes of pricing mismatches.
• ML implementation: Experimented with and deployed machine learning models to recommend competitive prices at scale.
• XPath automation: Applied XPath-based extraction logic to automate data validation and reduce dependency on manual checks.
These solutions improved both speed and accuracy, reducing pricing discrepancies by 63% and enabling pricing teams to act with greater confidence in their decisions.

Bias for Action | 24/7 System Support
Supporting Amazon’s global pricing systems required an unwavering commitment to availability and reliability. I provided round-the-clock support for both internal and customer-facing systems, ensuring uninterrupted pricing operations.
Responsibilities included:
• Incident resolution: Monitored and resolved trouble tickets, consistently meeting SLA commitments.
• Root-cause analysis: Investigated recurring system issues and implemented fixes to prevent recurrence.
• Collaboration with engineering teams: Escalated and resolved high-severity incidents, minimizing downtime and customer impact.
This commitment to operational excellence ensured that Amazon’s pricing systems maintained near-constant uptime, enabling smooth customer experiences and uninterrupted sales operations.

Technical Toolkit & Skills Applied
During this role, I developed and strengthened a diverse set of technical skills, including:
• Python (Basics): Developing automation scripts for data validation and anomaly detection.
• Shell Scripting: Orchestrating automated workflows and monitoring tools.
• XPath & Regex: Extracting and validating structured/unstructured data at scale.
• HTML/CSS: Supporting front-end validation for customer-facing pricing systems.
• Trouble Ticketing Systems: Managing incidents, SLA tracking, and resolution workflows.
• Customer Focus: Ensuring customer trust remained the centerpiece of all initiatives.

Leadership Principles in Action
Throughout my tenure as a Support Analyst, I consistently lived Amazon’s principles:
• Customer Obsession: Ensured competitive pricing decisions by analyzing high-traffic product data.
• Invent and Simplify: Automated detection tools that cut pricing errors by 80%.
• Dive Deep: Leveraged ML models and XPath automation to reduce pricing discrepancies by 63%.
• Bias for Action: Delivered 24/7 support for pricing systems, maintaining SLA and uptime.
• Insist on the Highest Standards: Established audit frameworks that raised pricing quality globally.

Impact and Legacy
My contributions in this role created both immediate and long-term value for Amazon:
• Improved price competitiveness through timely, data-driven insights on high-traffic products.
• Reduced pricing errors by 80% with audit frameworks and automated detection tools.
• Cut discrepancies by 63% using machine learning and XPath-based solutions.
• Maintained uninterrupted pricing operations through 24/7 SLA-driven support.
• Laid the foundation for scalable pricing data operations that continue to support global teams.


This role was foundational in my career, helping me transition from analyst-level problem solving into large-scale data engineering and leadership initiatives in subsequent Amazon roles. It provided me with both technical depth and a strong grounding in Amazon’s leadership culture—skills that I carried into my work in Returns and Finance domains.

Education

Bachelor of Technology (B.Tech) - Information Technology - Anna University

Dhanalakshmi Srinivasan College of Engineering
Tamil Nadu, India
04.2009

Skills

  • Leadership: Cross-Team Collaboration, AI-driven Project Management, Mentorship & Coaching, Hiring Panel Participation, Stakeholder Engagement, Quality Assurance
  • Data Engineering Tools: Apache Airflow (Orchestration), Apache Spark, Glue, EMR, Cloud Dataflow, Datanet
  • Querying Languages: SQL (Redshift SQL, HiveQL, SparkSQL, Athena SQL), Python (Basics), Shell Scripting, XPath
  • Business Intelligence Reporting: Amazon QuickSight, Tableau, Power BI
  • AWS Technologies: AWS Redshift, DynamoDB, S3, Glue, Athena, Lambda, EMR, EC2, IAM, CloudWatch, CloudFormation, AWS Secrets Manager, SQS/SNS, Datacraft, Step Functions
  • Data Governance: GDPR, DMA Compliance, Kale Attestation, Data Tagging, Anvil Policy, SAS (Sensitive Application Security) Risks
  • Operational Excellence: SLA Governance, Redshift WLM Tuning, Cluster Optimization, Proactive Monitoring (e.g., Klaxon, CloudWatch), Trouble Ticketing Systems, On-Call Management
  • AI & LLMs: Prompt Engineering, LLM Integration

Timeline

Data Engineer II

Amazon (Finance Domain)
11.2022 - Current

Data Engineer

Amazon (Returns Domain)
03.2017 - 11.2022

Support Engineer

Amazon (Seller Domain)
06.2015 - 03.2017

Sr. Support Analyst / Catalog Lead

Amazon (Pricing Domain)
10.2013 - 06.2015

Support Analyst

Amazon (Pricing Domain)
03.2010 - 10.2013

Bachelor of Technology (B.Tech) - Information Technology - Anna University

Dhanalakshmi Srinivasan College of Engineering
04.2009