Subhash Mohan Danda

Summary

Skilled DevOps Engineer with around 2 years of hands-on experience in DevOps, Build & Release, and AWS Cloud. Involved in optimizing mission-critical deployments in AWS by leveraging configuration management, CI/CD, and DevOps processes. Experienced in supporting and monitoring new platforms and application stacks, and in measuring and optimizing system performance through various DevOps tools. Proficient in Linux administration, including installation, configuration, and troubleshooting of Linux operating systems. Additionally adept at working with databases such as PostgreSQL, including managing and optimizing database performance, data modeling, and SQL query optimization, with good working knowledge of programming languages such as Python, R, and Java.

Overview

1 year of professional experience
1 Certification

Work History

DevOps Engineer

Cisco Systems, Inc.
02.2023 - Current


  • Worked with various AWS services: EC2, ELB, Route 53, S3, CloudFront, SNS, RDS, IAM, CloudWatch, CloudFormation, Elastic Beanstalk, Lambda, and CloudTrail.
  • Worked on the AWS Lambda service to maintain a serverless architecture.
  • Used Amazon Elastic Beanstalk to automatically handle deployments, from capacity provisioning, load balancing, and auto-scaling through to application health monitoring, along with the SQS, SNS, and SWF services.
  • Used Terraform to map more complex dependencies and identify network issues, and worked with key Terraform features such as infrastructure as code, execution plans, resource graphs, and change automation.
  • Configured and maintained Jenkins to implement the CI process and integrated it with Ant, Maven, and Gradle to schedule builds.
  • Used Jenkins to automate and schedule build processes, and combined Jenkins with Shell and Python scripts to automate routine jobs.
  • Used Jenkins to create a CI/CD pipeline for Artifactory using the JFrog plugin, and integrated it with build tools and CI/CD pipelines for streamlined software delivery and deployment.
  • Knowledgeable in deploying and managing applications on PCF, a popular cloud-native platform.
  • Proficient in integrating PCF with CI/CD pipelines to automate the deployment and delivery of applications.
  • Utilized PCF's built-in monitoring and logging capabilities to monitor application performance, troubleshoot issues, and optimize system health.
  • Designed, installed, and implemented an Ansible configuration management system and used Ansible to manage web applications, environment configuration files, users, mount points, and packages.
  • Wrote Ansible playbooks with Python SSH as a wrapper to manage configurations, and tested the playbooks on AWS instances using Python.
  • Created interactive dashboards and visualizations in Splunk for monitoring system performance and key metrics, and configured alerting mechanisms in Splunk to proactively monitor system health.
  • Proficient in utilizing PCF for containerization, scaling, and managing cloud-based applications.
  • Knowledgeable in utilizing Dynatrace for application performance monitoring and management.
  • Experienced in installing, configuring, and managing Docker containers and images for web and application servers such as Apache and Tomcat, and integrated them with an Amazon RDS for MySQL database.
  • Used scripting languages such as Shell, Python, and Ruby in various scenarios while assisting new recruits.

DevOps Engineer

Komodo Health
07.2022 - 02.2023


  • Involved in designing and deploying a multitude of applications utilizing much of the AWS stack, including EC2, Elastic Beanstalk, Route 53, S3, RDS, ECS, EKS, Lambda, SNS, SQS, IAM, and CloudWatch, with a focus on high availability, fault tolerance, and auto-scaling.
  • Configured and designed EC2 instances in all environments to meet high-availability and security requirements, setting up CloudWatch alerts for EC2 instances and using them in Auto Scaling launch configurations.
  • Good knowledge of architecting and deploying fault-tolerant, cost-effective, highly available, and secure servers in AWS.
  • Implemented Blue/Green deployments with AWS CodeDeploy, in which the new version of the application is tested in a separate deployment rather than in place, and traffic is then diverted to the latest deployment.
  • Extensively involved in infrastructure as code, execution plans, resource graphs, and change automation using Terraform, and managed infrastructure as code with Terraform.
  • Worked on Jira for defect/issue logging and tracking and documented all work in Confluence.
  • Created Python scripts that integrated with the Amazon API to control instance operations.
  • Managed the code repository by maintaining code in Git and improved branching and code-merge practices to fit the needs of the development team.
  • Experience in working with Jenkins to achieve Continuous Integration and Continuous Deployment methodologies for end-to-end automation.
  • Responsible for creating and maintaining automated builds for projects written in Java and PHP using Jenkins.
  • Designed and implemented a CI (Continuous Integration) system, configuring Jenkins servers and nodes and creating the required scripts in Python.
  • Experience in using Tomcat, Apache and Nginx application servers for deployments, hosting, load balancing and proxy configurations.
  • Worked on Ansible playbooks, inventory files, and the Vault feature to configure servers, deploy software, encrypt data files, and orchestrate continuous deployments and zero-downtime rolling updates.
  • Involved in Docker container snapshots, attaching to a running container, removing images, managing Directory structures, and managing containers.
  • Worked on XL Release (XLR) for release management and orchestration, and integrated it with other DevOps tools to enable release management and coordination.
  • Configured and managed Nexus repositories for efficient artifact storage, versioning, and distribution.
  • Configured CloudWatch and Datadog to monitor real-time granular metrics of all AWS services and configured individual dashboards for each resource and its agents. Integrated the Datadog cloud monitoring tool with PagerDuty to trigger real-time alerts.
  • Worked with DB admins in setting up AWS databases such as PostgreSQL, MSSQL, and DynamoDB.

Education

Master of Science - Business Analytics

University of New Haven
West Haven, CT
05.2022

Skills

  • Cloud Platform: Amazon Web Services (AWS), PCF
  • Programming Languages: Java, Python, R
  • Databases: MongoDB, PostgreSQL, SQL Server, MySQL
  • Version Control: Git, GitHub, Bitbucket
  • Monitoring Tools: AWS CloudWatch, Splunk, Datadog, Dynatrace
  • Infrastructure as Code: Terraform, CloudFormation
  • Scripting: Shell, Bash, Python, PowerShell
  • Containerization Tools: AWS ECS, EKS, Docker
  • Automation & Configuration Tools: Ansible, Jenkins
  • Orchestration Tools: Kubernetes
  • Application Servers: WebSphere Application Server, Apache Tomcat, WebLogic, Nginx, IIS
  • Identity Provider: ADFS, AWS SSO, SAML 2.0
  • Operating Systems: Linux, Red Hat, Ubuntu, Windows

Certification

  • Data Analysis - SQL, Python (Udemy)
