
Nikhila B

Hartford, CT

Summary

Accomplished MLOps Engineer with a proven track record at CIGNA, improving model efficiency and cutting infrastructure costs by 20% through deployments on AWS with Docker and Kubernetes. Skilled in Python and collaborative project management, with a focus on automating and optimizing ML pipelines to deliver scalable solutions and actionable insights in healthcare analytics.

Overview

9 years of professional experience

Work History

MLOps Engineer

CIGNA
Bloomfield, CT
06.2023 - Current
  • Designed and implemented end-to-end machine learning pipelines on AWS SageMaker, achieving a 30% reduction in model training time
  • Automated data preprocessing using AWS Glue and Lambda, handling over 1TB of healthcare claims data daily
  • Built and deployed containerized ML models using Docker and Kubernetes, ensuring scalable and reliable infrastructure
  • Developed CI/CD pipelines for ML models using Jenkins and Terraform, reducing deployment cycles by 40%
  • Conducted model testing and evaluation using Python libraries like Scikit-learn and TensorFlow, improving model accuracy by 15%
  • Designed dimensional data models for healthcare analytics using Snowflake, enabling faster reporting and insights
  • Integrated AWS RDS and DynamoDB for structured and semi-structured data storage, ensuring high availability and fault tolerance
  • Created Redshift clusters optimized for complex queries, reducing report generation times by 50%
  • Implemented AWS CloudFormation templates for automated infrastructure provisioning, ensuring consistency across environments
  • Monitored model performance with Prometheus and Grafana, identifying issues and minimizing downtime
  • Designed and maintained data lakes on AWS S3 for large-scale data processing and analytics
  • Conducted A/B testing for healthcare policy recommendation models, optimizing patient outcomes
  • Created interactive dashboards in Power BI for visualizing model predictions and healthcare trends
  • Enhanced ETL processes using PySpark, reducing data processing time by 25%
  • Partnered with data scientists to optimize feature engineering workflows, improving model performance
  • Migrated on-premise ML workflows to AWS, achieving a 20% reduction in infrastructure costs
  • Defined and implemented data governance policies to ensure compliance with HIPAA regulations
  • Built real-time anomaly detection systems using SageMaker and Lambda, enhancing fraud detection capabilities
  • Implemented Snowflake pipelines for integrating data from various sources, ensuring data consistency
  • Deployed NLP models for analyzing patient feedback, improving customer satisfaction by 10%
  • Developed monitoring scripts for database performance, identifying bottlenecks and optimizing queries
  • Automated the generation of healthcare KPIs using Python and SQL, reducing manual effort by 60%
  • Conducted root cause analysis for pipeline failures, reducing recurring failures by 95% and improving system reliability
  • Integrated Kafka for real-time data streaming, supporting dynamic model updates
  • Optimized SageMaker endpoints for low-latency predictions, reducing response times by 30%
  • Conducted workshops for teams on AWS services and MLOps best practices, improving collaboration
  • Collaborated with stakeholders to define project requirements and deliver actionable insights
  • Designed and deployed scalable dimensional models, enabling advanced analytics and decision-making
  • Ensured robust security measures for ML pipelines, including encryption and access controls

Machine Learning Engineer

ClearResults ATLAS
Austin, TX
04.2018 - 12.2022
  • Built and deployed scalable ML pipelines using AWS SageMaker, enhancing operational efficiency by 35%
  • Automated data transformation workflows using AWS Glue, handling over 500GB of energy consumption data daily
  • Designed ETL processes with PySpark and Snowflake, reducing data transformation time by 40%
  • Created CI/CD pipelines for model deployment using Jenkins and Docker, ensuring robust version control
  • Developed and maintained data warehouses using Redshift and DynamoDB, supporting large-scale analytics
  • Designed dimensional data models for energy usage analytics, enabling faster reporting
  • Integrated Tableau and Power BI for creating dashboards to visualize energy-saving trends and forecasts
  • Conducted model testing and validation using Python, ensuring deployment of high-accuracy models
  • Monitored ML pipeline performance with Prometheus, identifying and resolving bottlenecks
  • Built real-time prediction systems for energy demand forecasting, improving accuracy by 20%
  • Automated infrastructure provisioning using Terraform and AWS CloudFormation, ensuring reliability
  • Enhanced system monitoring and logging using CloudWatch, improving incident response times
  • Migrated legacy systems to AWS, achieving a 25% reduction in operational costs
  • Partnered with stakeholders to implement machine learning solutions aligned with business goals
  • Conducted A/B testing to optimize model predictions, improving energy savings by 15%
  • Utilized Apache Kafka for real-time data ingestion, supporting dynamic ML workflows
  • Designed secure pipelines for sensitive energy data, ensuring compliance with industry standards
  • Developed training modules for teams on MLOps tools and best practices, fostering skill development
  • Created interactive dashboards to track ML model performance and business KPIs
  • Conducted root cause analysis for pipeline failures, achieving a 98% uptime for critical workflows
  • Integrated Snowflake pipelines for seamless data transformation and analytics
  • Improved database query performance by indexing and optimizing Redshift configurations
  • Automated the creation of energy consumption reports using Python, reducing manual effort by 50%
  • Deployed NLP models for customer feedback analysis, enhancing service quality
  • Built APIs for integrating ML predictions into business applications, enabling real-time decision-making
  • Conducted code reviews to ensure adherence to MLOps best practices
  • Enhanced AWS S3 storage configurations to support cost-efficient data archiving

MLOps/Data Engineer

Tricon Infotech Pvt. Ltd.
Bangalore, India
11.2015 - 03.2018
  • Designed and implemented ETL pipelines using PySpark, Snowflake, and AWS Glue, processing over 1TB of logistics data daily
  • Built dimensional data models to support supply chain analytics, improving reporting efficiency by 40%
  • Created AWS S3 data lakes for centralized data storage and analytics
  • Automated model deployment workflows using Docker and Kubernetes, reducing deployment times by 30%
  • Developed CI/CD pipelines with Jenkins, ensuring consistent and error-free deployments
  • Optimized database performance for Oracle and MySQL, achieving faster query execution
  • Conducted model validation and testing using Python, ensuring reliable predictions
  • Created dashboards in Power BI and Tableau to monitor logistics KPIs and trends
  • Enhanced system reliability by implementing data validation scripts and monitoring tools
  • Collaborated with teams to define data governance policies, ensuring data quality and compliance
  • Migrated on-premise systems to AWS, achieving a 20% reduction in costs
  • Implemented streaming solutions with Kafka for real-time logistics tracking
  • Conducted A/B testing to optimize supply chain models, improving efficiency by 15%
  • Built and maintained secure pipelines for sensitive data, ensuring regulatory compliance
  • Designed and deployed real-time anomaly detection systems, reducing operational disruptions
  • Automated daily reporting tasks using Python and SQL, saving 10+ hours weekly
  • Enhanced operational dashboards to provide actionable insights for decision-makers
  • Conducted workshops for teams on ETL and MLOps tools, fostering knowledge sharing
  • Partnered with stakeholders to implement machine learning solutions tailored to business needs
  • Developed and deployed APIs for integrating predictive models into business applications
  • Improved the efficiency of logistics operations by implementing data-driven optimization models

Education

Master of Science - Data Science

University of Connecticut
Hartford, CT
05.2024

Bachelor - Computer Science and Engineering

Jawaharlal Nehru Technological University
India
04.2016

Skills

  • Python
  • SQL
  • R
  • Java
  • Shell Scripting
  • MATLAB
  • NoSQL
  • Docker
  • Kubernetes
  • Jenkins
  • Terraform
  • PySpark
  • MLflow
  • Airflow
  • Kafka
  • Model Deployment
  • Hyperparameter Tuning
  • A/B Testing
  • Feature Engineering
  • AWS
  • GCP
  • Azure
  • MySQL
  • PostgreSQL
  • MongoDB
  • Redshift
  • Snowflake
  • SparkSQL
  • Power BI
  • Tableau
  • Seaborn
  • Matplotlib
