Hi, I’m

Sai Varun. R

DEVOPS ENGINEER
Dallas, TX

Summary

  • Professional AWS/DevOps Engineer with 7+ years of IT experience and adept knowledge of containerization ecosystems such as Docker and Kubernetes and of configuration management tools such as Ansible and Chef.
  • Experienced in Continuous Integration and Continuous Delivery (CI/CD), build and release, Linux, and system administration across multi-cloud platforms including Amazon Web Services (AWS), Azure, and GCP.
  • Proficient in the principles and best practices of Software Configuration Management in Agile, Scrum, and Waterfall methodologies.
  • Expert in migrating on-premises infrastructure to AWS; involved in virtualization using VMware and OpenStack services (Compute, Network, Storage, Dashboard, Image, Identity, Monitoring) and in infrastructure orchestration using containerization technologies such as Docker and Kubernetes.
  • Created snapshots and Amazon Machine Images (AMIs) of instances for backup and for creating clone instances.
  • Experienced in designing and deploying AWS infrastructure using services such as EC2, RDS, VPC with managed networking and security, Route 53, Direct Connect, IAM, CloudFormation, AWS OpsWorks (operations automation), Elastic Beanstalk, AWS S3, Amazon Glacier, and CloudWatch monitoring.
  • Created and managed AWS CloudFormation stacks and integrated them with CloudTrail to monitor the infrastructure; stored versioned CloudFormation templates in Git, visualized them as diagrams, and modified them with the AWS CloudFormation Designer.
  • Designed multiple VPCs and public/private subnets with the required number of IPs using CIDR blocks, route tables, security groups, and Elastic Load Balancers; created IAM policies to grant granular permissions to specific AWS users, groups, and roles.
  • Automated AWS deployment and configuration tasks using Lambda; a minimal example of this kind of automation follows this list.
  • Configured an Ansible control machine and wrote Ansible playbooks with Ansible roles; used the file module in playbooks to copy and remove files on EC2 instances.
  • Experienced in automating, configuring, and deploying instances in Azure environments and data centers, and in migrating on-premises workloads to Windows Azure using Azure Site Recovery and Azure Backup.
  • Experienced in designing Azure Resource Manager (ARM) templates, with extensive experience designing custom build steps using PowerShell.
  • Experienced in using Azure Service Fabric to package, deploy, and manage scalable and reliable microservices and containers; developed various types of Azure Functions, including HTTP, Timer, Service Bus, and Event Hub triggers.
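
As a brief illustration of the Lambda/AMI backup automation mentioned above, the following is a minimal Python (boto3) sketch; the region, instance ID, and tag values are hypothetical placeholders rather than details from the original environment.

    """Minimal sketch of AMI-based instance backup automation (hypothetical IDs)."""
    from datetime import datetime, timezone

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    def backup_instance(instance_id: str) -> str:
        """Create an AMI of the given instance and tag it for later cleanup."""
        timestamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
        image = ec2.create_image(
            InstanceId=instance_id,
            Name=f"backup-{instance_id}-{timestamp}",
            NoReboot=True,  # snapshot without stopping the instance
        )
        ec2.create_tags(
            Resources=[image["ImageId"]],
            Tags=[{"Key": "CreatedBy", "Value": "backup-automation"}],
        )
        return image["ImageId"]

    if __name__ == "__main__":
        print(backup_instance("i-0123456789abcdef0"))  # hypothetical instance ID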

Work History

Fresenius Medical Care

Sr. AWS Cloud Developer

Job overview

  • Set up and built AWS infrastructure using VPC, EC2, S3, RDS, DynamoDB, IAM, EBS, Route 53, SNS, SES, SQS, CloudWatch, CloudTrail, security groups, and Auto Scaling with CloudFormation templates
  • Automated deployments on AWS by creating IAM roles, used the CodePipeline plugin to integrate Jenkins with AWS, and created EC2 instances to provide virtual servers
  • Wrote several AWS Lambda functions in Python and invoked Python scripts for data transformations and analytics on large data sets in EMR clusters and AWS Kinesis data streams
  • Created the required compatible AWS architectures and an end-to-end migration plan for moving 1200+ Linux/Windows servers and 150+ applications into the AWS cloud using AWS Data Pipeline
  • Implemented an ETL process to ingest analytical data stored in S3 into a Redshift data warehouse cluster using AWS Lambda microservices
  • Designed AWS CloudFormation templates to create custom-sized VPCs, subnets, and NAT to ensure successful deployment of web application and database templates, and used CloudWatch, CloudTrail, and CloudFront to set up and manage cached content delivery
  • Configured AWS S3 versioning and lifecycle policies to back up and archive files in Glacier, and created a Lambda function to automate scheduled snapshot backups on AWS (see the sketch after this list)
  • Configured the Kubernetes provider in Terraform to interact with Kubernetes-supported resources and create objects such as Deployments, Services, Ingress rules, ConfigMaps, and Secrets
  • Configured Jenkins in a Kubernetes container environment, using Kubernetes and Docker as the runtime environment for the CI/CD system to build, test, and deploy
  • Used Ansible Galaxy, a shared repository for roles, to download, share, and manage roles
  • Integrated Docker container-based test infrastructure into the Jenkins CI test flow and set up the build environment, integrating with Git and Jira to trigger builds using webhooks and slave machines
  • Enabled cloud security using IAM roles and policies, providing users role-based access to limit the actions performed in the AWS console or CLI
  • Implemented AWS solutions using EC2, S3, RDS, EBS, Elastic Load Balancer, and Auto Scaling groups
  • Created Amazon EC2 instances and set up security groups
  • Performed data encryption using AWS KMS keys to secure data stored in services such as S3, RDS, EFS, and EBS
  • Developed and implemented an automated CI/CD pipeline utilizing Docker for a microservice-based application
  • Worked with Terraform to automate VPCs, ELBs, security groups, SQS queues, and S3 buckets, continuing to rebuild the infrastructure as code from scratch, and created Terraform scripts for EC2 instances, Elastic Load Balancers, and S3 buckets
  • Implemented the docker-maven-plugin in the Maven POM to build Docker images for all microservices, and later used a Dockerfile to build the Docker images from the Java JAR files
  • Used Jenkins for continuous integration and deployment into the Tomcat application server, and used the Jenkins AWS CodeDeploy plugin to deploy to AWS
  • Maintained Terraform and Ansible code in GitHub, creating a repository for each AWS account (development, test, prod-parallel, and production)
  • Worked on encrypting Terraform secrets in the Git vault
  • Created Kafka clusters and provided producer endpoints for application teams to write to the Kafka topics
  • Provided Kubernetes clusters and created namespaces to orchestrate Docker containers across the accounts
  • Provisioned highly available EC2 instances using Terraform and CloudFormation, and wrote new plugins to support new functionality in Terraform
  • Implemented a Continuous Delivery framework using Jenkins, Ansible, shell scripts, and Artifactory in a Linux environment
  • Developed a fully automated continuous integration system using Git, Jenkins, MySQL, and custom tools developed in Python and Bash
  • Automated tasks using scripting languages such as Ruby, shell, and Python.
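
A minimal sketch of the scheduled snapshot-backup Lambda described above, written in Python with boto3. The Backup=true tag used to select volumes and the scheduled trigger are assumptions for illustration, not details from the original role.

    """Sketch of a scheduled EBS snapshot backup Lambda (assumed scheduled trigger)."""
    import boto3

    ec2 = boto3.client("ec2")

    def lambda_handler(event, context):
        # Select volumes opted in to backups via a (hypothetical) Backup=true tag
        volumes = ec2.describe_volumes(
            Filters=[{"Name": "tag:Backup", "Values": ["true"]}]
        )["Volumes"]

        snapshot_ids = []
        for volume in volumes:
            snapshot = ec2.create_snapshot(
                VolumeId=volume["VolumeId"],
                Description=f"Scheduled backup of {volume['VolumeId']}",
            )
            snapshot_ids.append(snapshot["SnapshotId"])

        # Returned for CloudWatch Logs visibility
        return {"snapshots_created": snapshot_ids}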

RDS

AWS Solutions Architect
10.2019

Job overview

  • Configured AWS application deployment infrastructure services such as VPC, EC2, S3, DynamoDB, IAM, EBS, Route 53, SNS, SES, SQS, CloudWatch, CloudTrail, security groups, Auto Scaling Groups (ASG), and RDS using CloudFormation templates
  • Created and managed a Docker deployment pipeline for custom application images in the cloud using Jenkins
  • Experience pulling Docker images from Docker Hub and uploading them to AWS ECR, and uploading and downloading files from AWS S3
  • Created and handled multiple Docker images, primarily for middleware installations and domain configuration
  • Wrote Ansible playbooks and roles to automate infrastructure in AWS, including web servers, SQL Server, and monitoring tools
  • Mitigated AWS costs by writing an Ansible playbook to automatically start and stop AWS resources at set times of day, triggered from Jenkins (see the sketch after this list)
  • Deployed and configured Elasticsearch, Logstash, and Kibana (ELK) for log analytics, full-text search, and application monitoring, integrated with AWS Lambda and CloudWatch
  • Experience converting existing AWS infrastructure to a serverless architecture (AWS Lambda, Kinesis), deployed via Terraform and AWS CloudFormation templates
  • Integrated Terraform into the current software release process to help provision AWS resources and deploy artifacts and services
  • Automated Datadog dashboards for the stack through Terraform scripts
  • Configured CloudWatch and Datadog to monitor real-time granular metrics of all the AWS services and configured individual dashboards and agents for each resource
  • Wrote automation scripts for creating resources in the OpenStack cloud using Python and Terraform modules
  • Created and managed Cinder volumes in OpenStack Cloud
  • Set up Jenkins as a service inside the Docker Swarm cluster to reduce failover downtime to minutes and to automate Docker container deployments without using a configuration management tool
  • Developed Docker images to support development and testing teams and their pipelines, including distributed Jenkins, Selenium, and JMeter images as well as Elasticsearch, Kibana, and Logstash (ELK/EFK) images
  • Established a real-time data analysis platform that collects data from Jenkins builds and the Gerrit cluster to provide analysis and decision support, and used AppDynamics and Datadog for performance and log monitoring
  • Implemented a continuous Delivery Pipeline with Docker, Jenkins, and GitHub
  • Whenever new GitHub branches are created, Jenkins automatically attempts to build a new Docker container from them
  • Worked on Docker with Kubernetes to create pods for applications and implemented Kubernetes to deploy a web application across a multi-node Kubernetes cluster
  • Set up two-step load balancing by configuring a load balancer such as NGINX as the front end to the Kubernetes cluster, and configured the back end as a Kubernetes Service to proxy traffic to individual pods
  • Experience in Configuration management tools such as Chef
  • Wrote Chef recipes and cookbooks in Ruby
  • Configured Chef infrastructure in a variety of settings for our project
  • Wrote several automation cookbooks and used Chef Knife to develop cookbooks and recipes for automated Linux package installation
  • Configured Chef Cookbooks to handle builds and deployments.
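
The off-hours start/stop cost mitigation above was done with an Ansible playbook triggered from Jenkins; as a purely illustrative stand-in, the following Python (boto3) sketch shows the same start/stop logic. The AutoStop=true tag is a hypothetical selection convention.

    """Illustrative start/stop script (the original used an Ansible playbook)."""
    import sys

    import boto3

    ec2 = boto3.client("ec2")

    def instances_with_tag(key: str, value: str) -> list:
        """Return IDs of instances carrying the given tag."""
        reservations = ec2.describe_instances(
            Filters=[{"Name": f"tag:{key}", "Values": [value]}]
        )["Reservations"]
        return [i["InstanceId"] for r in reservations for i in r["Instances"]]

    def main(action: str) -> None:
        # Hypothetical AutoStop=true tag marks instances safe to stop overnight
        ids = instances_with_tag("AutoStop", "true")
        if not ids:
            return
        if action == "stop":
            ec2.stop_instances(InstanceIds=ids)
        elif action == "start":
            ec2.start_instances(InstanceIds=ids)

    if __name__ == "__main__":
        main(sys.argv[1])  # e.g. a scheduled Jenkins job runs: python ec2_schedule.py stop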

DevOps Engineer
12.2017

Job overview

  • Created instances in AWS and facilitated migration from the data center by implementing Terraform infrastructure-as-code scripts and creating resources including VPC, EC2, Elastic Load Balancing, Auto Scaling, S3, RDS, SES, SNS, AMIs, Route 53, and IAM
  • Worked in a high-level environment where our team managed infrastructure and configuration using Terraform and Chef for all applications, comprising some legacy applications and a few early-stage applications
  • Experience working with on-premises network, application, and server monitoring tools, and on AWS with the CloudWatch monitoring tool
  • Designed AWS CloudFormation templates to create custom-sized VPCs, subnets, and NAT (Network Address Translation) to ensure successful deployment of web application and database templates
  • Recovered RDS instances by taking snapshots and restoring to a point in time (see the sketch after this list)
  • Integrated Jenkins with Git to pull code and with Maven to push artifacts to AWS S3
  • Managed Ubuntu, RHEL (Linux), and Windows virtual servers on AWS EC2 using Chef, and wrote a Chef cookbook from scratch to upgrade these servers serially to reduce downtime for running sites
  • Worked on designing, planning, and implementing the migration of existing on-premises applications to Azure Cloud (ARM)
  • Demonstrated an in-depth understanding of various services offered in MS Azure
  • Configured internal load balancers, load-balanced sets, and Azure Traffic Manager
  • Configured the Chef control machine and wrote cookbooks using the file resource in Chef to copy and remove files on EC2 instances
  • Rolled out Chef to all servers and used the Chef node database to drive host configuration, DNS zones, monitoring, and backups
  • Wrote Ansible playbooks with Python SSH as the wrapper to manage configurations of OpenStack nodes, and tested playbooks on AWS instances using Python
  • Implemented jobs to create Azure and AWS infrastructure from GitHub repositories containing Terraform code, and set up on-premises Active Directory authentication using automation with Ansible playbooks
  • Designed and configured Azure App/Cloud Services, PaaS, Azure Data Factory, Azure Blob Storage, Web API, VM creation, ARM templates, PowerShell scripts, IaaS, storage, network, and database
  • Configured Azure Virtual Networks, subnets, DHCP address blocks, Azure network settings, DNS settings, security policies and routing
  • Deployed Azure IaaS virtual machines and cloud services (PaaS role instances) into secure virtual networks and subnets
  • Implemented Continuous Integration, Continuous Delivery, and other tooling as needed to support internal and customer development efforts to customize and enhance OpenStack
  • Maintained different branches in Bitbucket and performed the merging strategy by creating pull requests to build the master branch for each application
  • Created a best-practices build environment using Jenkins, immutable instances, and AWS
  • Virtualized servers using Docker for test and development environment needs, and automated configuration using Docker containers
  • Wrote Terraform scripts to build the infrastructure in different AWS accounts for different environments
  • Installed a Docker registry for local upload and download of Docker images, in addition to Docker Hub
  • Created Chef Cookbooks to provision Apache Web servers, Tomcat servers, Nginx, Apache Spark and other applications
  • Used Chef as a configuration management tool to automate repetitive tasks, quickly deploy critical applications, and proactively manage change
  • Set up build and deployment automation for Terraform scripts using Jenkins
  • Automated configuration using Chef and Docker containers
  • Worked with Bitbucket to manage different repositories for different applications
  • Extensively worked on Jenkins and Docker for continuous integration and for end-to-end automation of all builds and deployments
  • Provisioned highly available EC2 instances using Terraform and CloudFormation, and wrote new plugins to support new functionality in Terraform
  • Experienced in supporting database systems including Oracle and MySQL in Linux/Unix and Windows environments
  • Implemented a Continuous Delivery framework using Jenkins, Chef, shell scripts, and Artifactory in a Linux environment
  • Developed a fully automated continuous integration system using Git, Jenkins, MySQL, and custom tools developed in Python and Bash
  • Built scripts using Maven build tools in Jenkins and Build Forge to move from one environment to other environments
  • Automated tasks using scripting languages such as Ruby, shell, and Python
  • Experienced with different health-monitoring tools such as Nagios and Zabbix, and with searching and reporting in Splunk 5.0
  • Experience working with applications such as application APIs, data APIs, frontends, and business processes
  • Monitored 24x7 high-performance, scalable systems
  • Installed, upgraded, and configured Splunk apps
  • Guided all the SMEs in using Splunk to create dashboards, reports, alerts, etc.
  • Configured various network services such as LDAP, NFS, NIS, DHCP, DNS, and Sendmail on Red Hat Linux.
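
A minimal Python (boto3) sketch of the point-in-time RDS restore mentioned above; both DB instance identifiers are hypothetical placeholders.

    """Sketch of an RDS point-in-time restore (hypothetical identifiers)."""
    import boto3

    rds = boto3.client("rds")

    def restore_latest(source_id: str, target_id: str) -> None:
        """Restore a new DB instance from the latest restorable time of the source."""
        rds.restore_db_instance_to_point_in_time(
            SourceDBInstanceIdentifier=source_id,
            TargetDBInstanceIdentifier=target_id,
            UseLatestRestorableTime=True,
        )
        # Block until the restored instance is available before cutting over
        waiter = rds.get_waiter("db_instance_available")
        waiter.wait(DBInstanceIdentifier=target_id)

    if __name__ == "__main__":
        restore_latest("orders-db", "orders-db-restored")  # hypothetical names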

NIS

Linux Systems Admin
12.2017

Job overview

  • Deployed and implemented DHCP and DNS environments, implemented and maintained a proxy server under Linux, resolved issues regarding Samba servers, and performed hardware and software installs/upgrades
  • Hardened the local Linux server by utilizing the ACL security model, restricting access to sensitive information by employees within the organization
  • Performed monitoring and log management on RHEL/CentOS servers, including processes, crash dumps, and swap management, along with password recovery and performance tuning
  • Performed hardening, patching, and release upgrades on standalone servers (single-user mode) and on production servers (live upgrade)
  • Configured and managed storage volumes such as LVM and VERITAS on RHEL/CentOS systems
  • Created and cloned Linux virtual machines and templates using VMware Virtual Client 3.5 and migrated servers between ESX hosts
  • Wrote shell scripts to automate daily tasks, documented the changes that happen in the environment and on each server, and analyzed error logs, user logs, and /var/log/messages
  • Wrote Bash, Perl, and Python scripts to ping the servers and add users to the boxes (see the sketch after this list)
  • Resolved system errors, crashes, disk space problems, huge file sizes, and file system full errors
  • Configured SSH and connected through SSH clients such as PuTTY
  • Extensively worked on troubleshooting problems with VM disk devices during initialization, replacement, mirroring, encapsulation, and removal
  • Supported developers in resolving issues
  • Helped the development team improve the build process
  • Developed automation and deployment utilities using Ruby, Bash, PowerShell, Python.
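
A small Python sketch of the server-reachability checks described above; the host names are hypothetical examples.

    """Sketch of a server reachability check (hypothetical host list)."""
    import subprocess

    HOSTS = ["app01.example.com", "db01.example.com"]  # hypothetical hosts

    def is_reachable(host: str) -> bool:
        """Return True if the host answers a single ICMP ping within 2 seconds."""
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "2", host],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        return result.returncode == 0

    if __name__ == "__main__":
        for host in HOSTS:
            print(f"{host}: {'up' if is_reachable(host) else 'DOWN'}")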

Software Release Engineer
04.2016

Job overview

  • Worked along with developers & application teams to design DevOps process for orchestrating Test, Build, Release and Deployment
  • Wrote Puppet modules for installing and configuring various deployments of third-party applications, with knowledge of Vagrant, and developed Puppet modules for installing and managing Java versions
  • Managed Puppet classes, resources, packages, nodes, and other common tasks using the Puppet console dashboard and live management
  • Architected, built, and maintained a highly available, secure environment utilizing Puppet with Bamboo for continuous integration
  • Installed and administered Artifactory repository to deploy the artifacts generated by Maven and to store the dependent jars which are used during the build
  • Designed, planned, and configured continuous integration/inspection tools such as SonarQube, Artifactory, Bamboo, and SVN for a full DevOps stack setup
  • Developed processes, tools, and automation for the Bamboo-based build system and software build delivery, and managed user authentication and authorization in both Subversion and Perforce
  • Created the naming strategy for branches and labels, integrated the continuous integration system with the SVN version control repository, and continually built as check-ins came in from developers
  • Configured the Bamboo setup, defined and scheduled jobs, and invoked external scripts and executables triggered from Bamboo at defined intervals
  • Worked on high-volume crash collecting and reporting system, built with Python
  • Performed dispatcher role to distribute tasks assigned to the onshore team
  • Tested the application manually and ran the JUnit test suites; wrote Ant and Maven scripts to automate the build process (see the sketch after this list)
  • Configured Nagios for all mission-critical applications and used Nagios effectively for application troubleshooting and monitoring post go-live.
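
An illustrative Python wrapper of the kind a Bamboo task or build script might invoke to run the Maven build described above and fail the build on errors; the Maven goals shown are assumptions, not details from the original project.

    """Illustrative build wrapper (assumed Maven goals: clean verify)."""
    import subprocess
    import sys

    def run_build() -> int:
        """Run the Maven build and return its exit code so the CI task fails on errors."""
        result = subprocess.run(["mvn", "clean", "verify"], check=False)
        return result.returncode

    if __name__ == "__main__":
        sys.exit(run_build())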

Accomplishments

  • Worked on Amazon Web Services (AWS) cloud to support Enterprise Data Warehouse hosting including Virtual Private Cloud (VPC), Public and Private Subnets, Security Groups, Route Tables, Elastic Load Balancer
  • Good exposure to creating monitors, alarms, and notifications for EC2 hosts using AWS CloudWatch (see the sketch after this list), with insight into monitoring tools such as Nagios, Ganglia, and Splunk
  • Knowledge on various Docker components like Docker Engine, Hub, Machine, Compose and Docker Registry
  • Expertise in using Docker and setting up ELK with Docker and Docker-Compose
  • Actively involved in deployments on Docker using Kubernetes
  • Created Ansible Playbooks for different environments for release and converted Puppet scripts into Docker
  • Integrated Jenkins with various DevOps tools such as Nexus, SonarQube, and Puppet, and used a Jenkins CI/CD system in a Kubernetes container environment, with Kubernetes and Docker as the runtime environment to build, test, and deploy
  • Expertise in using build tools such as Maven and Ant to build deployable artifacts such as WAR and EAR files from source code
  • Application deployments and environment configuration using Chef and Puppet
  • Experience writing Jenkins Pipeline Groovy scripts for continuous integration and build workflows; used Jenkins to upload artifacts into the Nexus repository; and automated various day-to-day administration tasks by developing Bash, Ruby, JSON, Perl, PowerShell, and Python scripts
  • Hands on experience in using JIRA as bug tracking system
  • Configured various workflows, customizations, and plug-ins for the JIRA bug/issue tracker, and integrated Jenkins with Jira/GitHub to track change requests and bug fixes and to manage tickets for the corresponding sprints
  • Proficient in implementing relational and NoSQL database management systems such as MySQL, MSSQL, Oracle, PostgreSQL, Cassandra, and MongoDB
  • Installation, configuration, and administration of Red Hat Linux 5.x/6.x, and worked on Windows Server 2003/2008/2012 R2 installation, deployment, troubleshooting, and automation
  • Experienced in all phases of the software development life cycle (SDLC) with specific focus on the build and release of quality software
  • Experienced in Waterfall and Agile/Scrum methodologies
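
A minimal Python (boto3) sketch of the CloudWatch alarm setup mentioned above; the instance ID and SNS topic ARN are hypothetical placeholders.

    """Sketch of an EC2 CPU alarm in CloudWatch (hypothetical IDs/ARNs)."""
    import boto3

    cloudwatch = boto3.client("cloudwatch")

    def create_cpu_alarm(instance_id: str, sns_topic_arn: str) -> None:
        """Alarm when the instance's average CPU stays above 80% for 10 minutes."""
        cloudwatch.put_metric_alarm(
            AlarmName=f"high-cpu-{instance_id}",
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            Statistic="Average",
            Period=300,              # 5-minute datapoints
            EvaluationPeriods=2,     # two consecutive breaches
            Threshold=80.0,
            ComparisonOperator="GreaterThanThreshold",
            AlarmActions=[sns_topic_arn],  # notify an SNS topic
        )

    if __name__ == "__main__":
        create_cpu_alarm(
            "i-0123456789abcdef0",
            "arn:aws:sns:us-east-1:123456789012:ops-alerts",
        )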

Languages

English: Full Professional
Telugu: Full Professional
Hindi: Limited Working
