ANIRUDH RAVIPUDI

Shrewsbury, USA

Summary

Results-driven AI/ML Engineer with 4 years of experience delivering end-to-end data and machine learning solutions across diverse domains, including investment analytics, retail forecasting, and healthcare operations. Proven track record in designing scalable models, building robust data pipelines, and extracting insights from large, complex datasets to inform strategic business decisions. Adept in Python, SQL, and cloud platforms (AWS, Databricks), with strong command over supervised learning, NLP, and model interpretability techniques. Known for leading client-facing implementations, collaborating cross-functionally with engineers, analysts, and stakeholders, and translating technical results into actionable outcomes. Passionate about building intelligent systems that drive efficiency, automation, and real-world value.

Overview

4 years of professional experience

Work History

Data Analyst Intern

Foxfire Technologies
01.2021 - 12.2021

The client, a field service solutions provider, required an internal analytics system to monitor field operations efficiency, track service request volumes, and reduce technician dispatch errors across regions. They needed a centralized reporting framework to support operational planning, identify high-delay zones, and automate the flagging of SLA violations. The goal was to help the business reduce downtime, optimize technician assignments, and improve customer satisfaction across high-volume regions.

Responsibilities:

  • Analyzed technician dispatch and job completion logs using Python (pandas, NumPy) to identify delays and repeat visits.
  • Cleaned and preprocessed service-level data from legacy systems and SQL-based field apps.
  • Developed dashboards in Power BI highlighting regional SLA compliance, technician productivity, and repeat service requests.
  • Collaborated with operations teams to define KPIs for on-site issue resolution time and technician performance.
  • Implemented Excel-based daily trackers for first-response accuracy and zone-level performance metrics.
  • Automated weekly reporting workflows, reducing manual data prep time by over 60%.
  • Worked alongside backend engineers to align reporting outputs with CRM data pipelines.
  • Presented biweekly insights to senior analysts and operations managers to inform staffing and training decisions.

ML Engineer

Deloitte
01.2021 - 12.2021

The client required a machine learning-based inventory forecasting system to support demand-driven procurement across their retail network. The goal was to develop a scalable solution capable of processing large transactional datasets, producing accurate SKU-level forecasts, and integrating seamlessly with existing ERP and POS platforms. Key priorities included reducing overstock and stockouts, improving visibility into supply chain trends, and ensuring compliance with data governance standards.

Responsibilities:

  • Designed and trained time-series forecasting models using XGBoost and LSTM to predict SKU-level demand across regions with 85–90% accuracy.
  • Built a scalable data ingestion pipeline using Python and Apache Airflow to automate preprocessing of transactional and POS data from multiple sources.
  • Developed custom feature engineering scripts to encode temporal, seasonal, and regional attributes for inventory trend modeling.
  • Deployed models via Docker containers and integrated them with the client’s cloud-based ERP system using REST APIs.
  • Conducted backtesting and implemented forecast error monitoring using MAPE and RMSE to continuously validate model performance.
  • Worked closely with data engineers and ERP teams to ensure end-to-end deployment from data capture to dashboard delivery.
  • Created Jupyter-based internal documentation and model explainability notebooks using SHAP for business stakeholders.
  • Ensured all model development and deployment followed enterprise security, data privacy, and audit requirements.

AI/ML Engineer

Gradient AI
09.2022 - 08.2023

The client, a regional healthcare provider, required modernization of its fragmented claims data system. The objective was to integrate structured and semi-structured data across EMR, billing, and compliance systems into a centralized analytics platform. This integration aimed to support value-based care initiatives and predictive cost modeling.

Responsibilities:

  • Collaborated with clinical stakeholders to identify reporting challenges and translated them into data modeling objectives, supporting predictive risk stratification for patient readmissions.
  • Utilized Python (pandas, NumPy, SciPy) and SQL to clean, join, and validate over 10 million patient and claims records from four disparate source systems, enhancing model accuracy.
  • Developed automated ETL pipelines using Python and Airflow to ingest daily transactional data into a centralized PostgreSQL warehouse.
  • Partnered with data engineering teams to define schemas for a unified patient-provider-payment dimension model, enabling cross-system analysis.
  • Created Power BI dashboards tracking KPIs such as cost-per-patient, denial rates, and care outcome metrics, improving executive visibility.
  • Applied statistical methods (ANOVA, t-tests, linear regression) to evaluate pilot program impacts on cost reduction and clinical efficiency.
  • Designed a framework for ongoing data quality audits and validation checks, improving data integrity for analytics consumers across departments.

AI/ML Engineer

State Street
12.2023 - 12.2024

The client required a robust AI-driven investment analytics platform capable of processing large-scale financial datasets to optimize portfolio strategies, improve risk assessment accuracy, and deliver actionable insights to institutional investors. Their objective was to integrate real-time and historical market data sources, enhance predictive modeling for asset performance, and enable intelligent decision-making through explainable AI solutions. Additionally, they sought automation of investment trend detection, sentiment analytics from financial news, and scenario simulations to support the investment team's forecasting accuracy. Compliance with SEC and internal audit policies, along with data transparency, interpretability, and scalability, was paramount.

Responsibilities:

  • Developed supervised learning models using XGBoost and LightGBM to predict portfolio-level risk-adjusted returns, incorporating macroeconomic indicators and company fundamentals.
  • Engineered pipelines to ingest and preprocess live and historical financial data from Bloomberg and Refinitiv APIs into scalable Spark-based environments.
  • Designed and deployed sentiment analysis modules using NLP (BERT and VADER) on earnings calls and financial news to improve asset rebalancing strategies.
  • Collaborated with quants and portfolio managers to co-develop explainable AI dashboards using SHAP and Power BI for investment model transparency.
  • Implemented data versioning and reproducibility protocols using MLflow and DVC for regulatory compliance and audit readiness.
  • Integrated Monte Carlo simulations to model multi-scenario forecasts and evaluate downside risks across different portfolio allocations.
  • Contributed to developing an internal AI knowledge base and conducted workshops to educate stakeholders on interpreting model outcomes.
  • Automated performance monitoring and alerts for model drift using statistical process control (SPC) charts and time-series deviation checks.


Education

Master's - Data Science

Florida International University
Miami, FL

Bachelor of Science - Information Technology

VNR VJIET
Hyderabad, Telangana

Skills

    Programming: Python (NumPy, Pandas, Scikit-learn, XGBoost, TensorFlow, Keras, Matplotlib, Seaborn), R (basic statistical modeling), SQL (PostgreSQL, MySQL, T-SQL), Bash/Shell Scripting (for deployment pipelines), PL/SQL (Oracle environments)

    Big Data & Databases: Snowflake, Databricks, Google BigQuery, Oracle, SQL Server

    ML/AI: Supervised & Unsupervised Learning, Time-Series Forecasting (Prophet, LSTM), Regression, Classification, Clustering (K-Means, Hierarchical), NLP (BERT, spaCy, NLTK, text summarization, sentiment analysis), A/B Testing & Statistical Hypothesis Testing, Model Evaluation (RMSE, MAPE, Confusion Matrix, ROC-AUC), Feature Engineering and Selection, Explainability: SHAP, LIME

    Data Engineering: Data Preprocessing & ETL (Airflow, pandas, openpyxl, Informatica), Data Pipelines (Apache Airflow, Luigi), API Integration (RESTful APIs), Data Warehousing Concepts (Star Schema, Fact/Dim Tables)

    Visualization & Reporting: Power BI, Tableau, Matplotlib / Seaborn (Python), Excel Dashboards (PivotTables, VLOOKUP, Data Validation)

    Cloud & DevOps Tools: AWS (S3, EC2, Lambda, SageMaker, CloudWatch), Docker (model containerization), Git (version control), GitHub/GitLab, Jenkins (basic CI/CD)
