Sneha Marrivada

Manchester, CT

Summary

Results-driven Prompt Response Evaluator and Technical Analyst with over four years of specialized experience in code evaluation, data validation, and technical analysis across Python, SQL, Java, JavaScript, and R. Demonstrated expertise in analyzing model-generated outputs, validating code correctness, debugging complex logic errors, and ensuring compliance with technical standards and guidelines. Track record includes evaluating 10,000+ code samples, implementing rigorous quality assurance checks, creating comprehensive test cases, and documenting technical specifications with detailed annotations. Strong analytical skills and hands-on experience assessing algorithm correctness and code quality support detail-oriented work on prompt response evaluation and code annotation projects requiring SDE-level technical understanding.

Overview

7 years of professional experience

Work History

Senior Prompt Response Evaluator – Technical Operations

Humana
11.2023 - Current
  • Evaluate and annotate 200+ daily model-generated code responses across Python, SQL, R, and JavaScript, assessing correctness, logic flow, adherence to coding standards, syntax accuracy, and functional requirements using systematic rubrics and quality guidelines.
  • Write and optimize complex SQL queries with multi-table joins (INNER, LEFT, RIGHT, FULL OUTER), CTEs, window functions (ROW_NUMBER, RANK, PARTITION BY), subqueries, and aggregate functions to validate data outputs and identify discrepancies across healthcare datasets containing 50,000+ monthly records; see the SQL validation sketch after this list.
  • Develop Python validation scripts using pandas, numpy, json, and unittest libraries to automate code testing, perform regression analysis, execute test cases, compare expected vs. actual outputs, and generate detailed evaluation reports with pass/fail classifications; see the test-harness sketch after this list.
  • Create comprehensive test cases and validation procedures for evaluating code correctness, documenting edge cases, boundary conditions, error scenarios, and expected behaviors with detailed annotations that reduced evaluation errors by 25%.
  • Debug and troubleshoot code logic errors by analyzing Python scripts, SQL queries, R statistical code, and JavaScript functions, identifying root causes, documenting findings with technical explanations, and proposing corrected implementations with comprehensive rationale.
  • Review and assess algorithm implementations across multiple programming languages, evaluating efficiency, correctness, code quality, adherence to best practices, proper error handling, and alignment with functional specifications using established guidelines.
  • Build automated evaluation frameworks using Python (pytest, unittest) to validate model outputs against ground truth datasets, implementing scoring algorithms, statistical comparisons, and quality metrics that processed 1,000+ daily evaluations.
  • Design interactive dashboards in Tableau and Power BI with calculated fields, parameters, and drill-through capabilities to visualize code evaluation metrics, quality trends, annotator performance, and compliance rates across 15 technical dimensions.
  • Perform statistical analysis on code evaluation datasets using Python (scipy, statsmodels) and R to identify patterns, calculate inter-rater reliability, measure quality distributions, conduct hypothesis testing, and generate insights for process improvement.
  • Conduct peer code reviews following established guidelines, providing detailed feedback on code structure, logic correctness, syntax adherence, documentation quality, and best practice compliance for 50+ weekly submissions.
  • Create technical documentation including annotation guidelines, evaluation rubrics, coding standards, quality assurance procedures, test case specifications, and training materials with visual flowcharts that reduced onboarding time by 35%.
  • Validate data transformation logic by writing comparative SQL queries, Python data processing scripts, and R statistical tests to ensure accuracy of ETL pipelines, identifying 200+ monthly discrepancies through systematic validation protocols.
  • Develop guideline compliance checks using regex patterns, parsing algorithms, and rule-based validation in Python to automatically flag code submissions violating standards, reducing manual review time by 40%; see the compliance-check sketch after this list.
  • Reverse-engineer complex algorithms by analyzing source code in Python, SQL, JavaScript, and R, documenting logic flow with pseudocode, identifying optimization opportunities, and creating detailed technical explanations with code annotations.
  • Execute User Acceptance Testing (UAT) for ML model outputs, creating test plans, defining acceptance criteria, documenting test results, comparing model predictions against expected outcomes, and providing structured feedback for model improvements.
  • Analyze large-scale code repositories using Git version control, conducting code quality assessments, reviewing commit histories, evaluating coding patterns, and documenting best practices across team projects.
  • Build data validation pipelines integrating Hadoop ecosystem tools (Hive, HDFS) with Python automation to process and validate large datasets, ensuring data quality, schema compliance, and transformation accuracy across healthcare analytics workflows.
  • Leverage machine learning evaluation frameworks including MLflow for tracking model performance metrics, comparing evaluation scores, analyzing prediction accuracy, and documenting model behavior across diverse test scenarios.
  • Conducted detailed assessments of program implementations to identify areas for improvement, driving enhancements in process efficiency.
  • Environment: Python (pandas, numpy, unittest, pytest, scipy), SQL (Complex Queries, CTEs, Window Functions), R, JavaScript, Tableau, Power BI, Git, Hadoop, MLflow, HIPAA Compliance, JIRA, Google Workspace
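
A minimal sketch of the window-function validation pattern referenced in the SQL bullet above. The table, columns, and rows are hypothetical, and it assumes a SQLite build with window-function support (3.25+) reached through Python's standard sqlite3 module:

    import sqlite3

    # Hypothetical data: table name, columns, and values are invented.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE claims (member_id INTEGER, claim_date TEXT, amount REAL);
        INSERT INTO claims VALUES
            (1, '2024-01-05', 120.00),
            (1, '2024-02-10',  95.50),
            (2, '2024-01-20', 300.00);
    """)

    # Keep only the most recent claim per member using ROW_NUMBER() over a
    # PARTITION BY window, the kind of query used to spot duplicate records.
    query = """
        SELECT member_id, claim_date, amount
        FROM (
            SELECT *,
                   ROW_NUMBER() OVER (
                       PARTITION BY member_id
                       ORDER BY claim_date DESC
                   ) AS rn
            FROM claims
        ) AS ranked
        WHERE rn = 1
        ORDER BY member_id;
    """
    for row in conn.execute(query):
        print(row)   # (1, '2024-02-10', 95.5) then (2, '2024-01-20', 300.0)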
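
The pass/fail evaluation scripting described above can be sketched roughly as follows, using only the standard library; evaluate_response, the test names, and the payloads are invented for illustration:

    import json
    import unittest

    def evaluate_response(generated, expected):
        """Score a model-generated output against ground truth: 'pass' or 'fail'."""
        return "pass" if generated == expected else "fail"

    class TestModelResponses(unittest.TestCase):
        def test_json_output_matches_ground_truth(self):
            expected = {"status": "ok", "count": 3}
            # Key order differs from the ground truth; dict equality ignores it.
            generated = json.loads('{"count": 3, "status": "ok"}')
            self.assertEqual(evaluate_response(generated, expected), "pass")

        def test_wrong_count_is_flagged(self):
            expected = {"status": "ok", "count": 3}
            generated = {"status": "ok", "count": 2}
            self.assertEqual(evaluate_response(generated, expected), "fail")

    if __name__ == "__main__":
        unittest.main()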
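
A minimal sketch of a rule-based compliance check like the one described above; the two rules (no bare except clauses, no print calls) are hypothetical stand-ins for real project guidelines:

    import re

    # Hypothetical guideline rules; real rubrics would be project-specific.
    RULES = {
        "bare_except": re.compile(r"^\s*except\s*:", re.MULTILINE),
        "print_call": re.compile(r"\bprint\s*\("),
    }

    def flag_violations(code):
        """Return the names of every guideline rule the submission violates."""
        return [name for name, pattern in RULES.items() if pattern.search(code)]

    submission = "try:\n    run()\nexcept:\n    print('failed')\n"
    print(flag_violations(submission))   # ['bare_except', 'print_call']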

Code Evaluation Analyst

Algocode
08.2021 - 07.2022
  • Evaluated 500+ weekly code submissions across Python, Java, and JavaScript, assessing syntax correctness, logic accuracy, algorithm efficiency, adherence to specifications, code readability, and proper documentation using standardized evaluation criteria.
  • Developed automated testing scripts in Python using unittest and pytest frameworks to validate code functionality, execute regression tests, compare outputs, identify edge case failures, and generate detailed test reports with error classifications.
  • Created comprehensive annotation guidelines for code evaluation projects, defining quality standards, scoring rubrics, evaluation criteria, edge case handling, and documentation requirements that improved consistency across 10+ evaluators.
  • Performed source-to-target validation for data migration projects, writing SQL comparison queries, Python data reconciliation scripts, and validation reports to ensure 100% accuracy of transformed datasets across multiple systems; see the reconciliation sketch after this list.
  • Designed ETL validation pipelines using AWS Glue and Python to test data transformation logic, validate schema mappings, verify data quality rules, and document discrepancies with detailed error logs for debugging.
  • Built interactive evaluation dashboards in Power BI displaying code quality metrics, error distributions, evaluator performance, guideline compliance rates, and trend analysis across 20+ evaluation dimensions.
  • Conducted technical code reviews for machine learning implementations, evaluating model training code, data preprocessing logic, feature engineering approaches, and validation methodology with detailed technical feedback.
  • Wrote technical specifications for code evaluation standards, including Python style guides, SQL query conventions, error handling patterns, testing requirements, and documentation standards used across evaluation teams.
  • Analyzed algorithm correctness by creating test datasets, executing code with various inputs, comparing results against mathematical proofs, identifying logic errors, and documenting findings with corrected implementations.
  • Performed statistical validation of model outputs using R and Python (scipy, statsmodels), conducting hypothesis tests, calculating confidence intervals, measuring prediction accuracy, and documenting statistical significance of results.
  • Developed quality assurance frameworks implementing multi-level validation checks, automated scoring algorithms, consensus measurement tools, and calibration procedures that improved evaluation accuracy by 30%.
  • Collaborated with cross-functional teams to define evaluation requirements, clarify technical specifications, resolve ambiguous cases, establish baseline standards, and continuously improve annotation quality through feedback loops.
  • Environment: Python (pandas, numpy, unittest, pytest, scipy), Java, JavaScript, SQL, R, AWS Glue, Power BI, Tableau, Azure, ETL, Git, statistical modeling
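
The source-to-target reconciliation approach above might look roughly like this pandas sketch; the DataFrames stand in for real SQL extracts, and the key and column names are invented:

    import pandas as pd

    # Invented source and target extracts; in practice these would come from
    # SQL pulls against the legacy and migrated systems.
    source = pd.DataFrame({"id": [1, 2, 3], "value": [10.0, 20.0, 30.0]})
    target = pd.DataFrame({"id": [1, 2, 4], "value": [10.0, 20.5, 30.0]})

    # Outer-join on the key, then flag rows missing on either side or whose
    # values disagree.
    merged = source.merge(target, on="id", how="outer",
                          suffixes=("_src", "_tgt"), indicator=True)
    mismatches = merged[(merged["_merge"] != "both")
                        | (merged["value_src"] != merged["value_tgt"])]
    print(mismatches)   # ids 2 (value drift), 3 (missing in target), 4 (extra)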

Technical Data Analyst

Techcore
03.2019 - 05.2021
  • Validated code outputs and data transformations by writing SQL queries with complex joins and aggregations, Python validation scripts using pandas, and R statistical tests to ensure correctness of analytical pipelines processing 10,000+ daily records.
  • Created test cases for data validation defining input scenarios, expected outputs, error conditions, and acceptance criteria, executing tests using Python automation, and documenting results with detailed pass/fail analysis.
  • Performed exploratory data analysis (EDA) using Python (pandas, matplotlib, seaborn) and R to identify patterns, detect anomalies, validate data quality, and generate statistical summaries supporting code evaluation projects; see the anomaly-screen sketch after this list.
  • Developed automated reporting scripts in Python using Jupyter Notebooks to process evaluation metrics, generate visualizations, calculate quality scores, and produce formatted reports reducing manual effort by 25 hours weekly.
  • Conducted data mining projects using SQL and Python to extract insights, identify trends, validate business logic implementations, and support analytical algorithm evaluation across multiple datasets.
  • Built validation dashboards in Tableau displaying data quality metrics, validation results, error distributions, and trend analysis enabling real-time monitoring of code evaluation quality.
  • Analyzed algorithm implementations by reviewing Python and R code, testing with diverse datasets, validating statistical correctness, comparing against established methods, and documenting evaluation findings.
  • Performed code debugging for data processing scripts, identifying logical errors, syntax issues, performance bottlenecks, and incorrect implementations with detailed root cause analysis and corrected solutions.
  • Created technical documentation including data dictionaries, validation procedures, quality standards, SQL query explanations, Python code documentation, and evaluation guidelines used by analysis teams.
  • Executed API integration testing by writing Python scripts to validate data exchange, test error handling, verify response formats, and document API behavior with comprehensive test scenarios.
  • Environment: Python (pandas, matplotlib, seaborn), SQL, R, Tableau, Power BI, Excel, Jupyter Notebooks, Git, data validation, EDA, statistical analysis
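
A minimal sketch of the kind of anomaly screen used in the EDA work above, assuming pandas; the record counts and dates are invented for illustration:

    import pandas as pd

    # Illustrative daily record counts; real volumes would come from the pipeline.
    counts = pd.Series(
        [10250, 10180, 10320, 10290, 4020, 10310],
        index=pd.date_range("2021-03-01", periods=6, freq="D"),
    )

    # Flag days whose volume sits more than two standard deviations from the
    # mean: a simple screen for failed or partial loads.
    z = (counts - counts.mean()) / counts.std()
    print(counts[z.abs() > 2])   # 2021-03-05: 4020, a likely partial load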

Education

Master of Science - Computer Science

University of Bridgeport
USA
05.2024

Bachelor of Technology - Computer Science

Ramachandra College of Engineering
India
07.2021

Skills

Programming & Query Languages
  • Python (pandas, numpy, unittest, pytest, scipy, statsmodels, json, requests)
  • SQL (Complex Queries, Joins, CTEs, Window Functions, Subqueries, Views)
  • R (statistical analysis, data validation)
  • JavaScript (code evaluation, logic review)
  • Java (code assessment)
  • Bash/Shell Scripting
  • HTML5/CSS3
  • DAX (Power BI)
  • M Language (Power Query)

Code Evaluation & Testing
  • Code Review & Annotation
  • Algorithm Evaluation
  • Test Case Development
  • Quality Assurance
  • Regression Testing
  • Unit Testing (pytest, unittest)
  • Code Debugging
  • Logic Validation
  • Syntax Verification
  • Guidelines Compliance
  • Performance Analysis
  • Root Cause Analysis
  • Technical Documentation

Data Analysis & Validation
  • Data Extraction & ETL
  • Data Validation & Quality Checks
  • Statistical Analysis
  • Model Output Validation
  • Data Mining
  • Exploratory Data Analysis (EDA)
  • Source-to-Target Mapping
  • Data Reconciliation
  • Pattern Recognition
  • Anomaly Detection

Technical Tools & Platforms
  • Git/GitHub (version control, code review)
  • Jupyter Notebooks
  • Visual Studio Code
  • Power BI (dashboards, DAX)
  • Tableau
  • Excel (Advanced Formulas, VBA, Power Query)
  • Azure Data Lake
  • AWS (Glue, Lambda, S3)
  • Hadoop (Hive, HDFS)
  • MLflow
  • JIRA
  • Confluence

Analytics & Visualization
  • Statistical Modeling
  • Predictive Analytics
  • Machine Learning Evaluation
  • Dashboard Development
  • KPI Development
  • Trend Analysis
  • Performance Metrics
  • Data Storytelling
  • Report Generation

Development Methodologies & Professional Skills
  • SDLC
  • Agile/Scrum
  • Waterfall
  • User Acceptance Testing (UAT)
  • Requirements Analysis
  • Cross-functional Collaboration
  • Attention to Detail
  • Documentation Standards
  • Performance Assessment
  • Analytical Reasoning
  • Data Interpretation
  • Project Evaluation

Timeline

Senior Prompt Response Evaluator – Technical Operations

Humana
11.2023 - Current

Code Evaluation Analyst

Algocode
08.2021 - 07.2022

Technical Data Analyst

Techcore
03.2019 - 05.2021

Master of Science - Computer Science

University of Bridgeport

Bachelor of Technology - Computer Science

Ramachandra College of Engineering