
Siva D

Orlando, FL

Summary

A DevOps and Build & Release Engineer with 9+ years of experience encompassing a thorough understanding of, and hands-on experience with, DevOps methodology and workflow, Continuous Integration (CI)/Continuous Delivery (CD) oriented build engineering, configuration management, containerization, cloud services, and system administration. Able to meld development and operations to ensure quick delivery of an effective and efficient end product. As a Software Engineer, experienced with cloud computing platforms, designing and deploying scalable solutions by adopting automation and DevOps processes for Agile projects. Familiar with the stack from OS to frontend, and able to strategize the project portfolio to match deliverable timelines.

Expertise:

  • Extensive experience in the design and implementation of Continuous Integration, Continuous Delivery, Continuous Deployment (CI/CD) and DevOps processes.
  • Extensive experience using Terraform to build and provision infrastructure, including writing modules for a variety of services such as APIs, queues, and functions.
  • Experience implementing monitoring solutions with Ansible, Terraform, Docker, and Jenkins.
  • Hands on experience working with Configuration Management tools like Ansible, Chef.
  • Experienced in creating branches, merging, rebasing, reverting, tagging, and maintaining versions across environments using SCM tools like Git in Linux and Windows environments.
  • Experienced in managing software artifacts required for development using repository managers like ECR, Nexus and JFrog Artifactory.
  • Hands-on experience in container-based technologies Kubernetes, Docker and ECS.
  • Experience with CI/CD in a containerized micro services environment.
  • Involved in the design and deployment of a multitude of cloud services on AWS stack such as EC2, Route53, S3, RDS, DynamoDB, IAM, while focusing on high-availability, fault tolerance, and auto-scaling.
  • Worked with Vault and Secrets Manager for storing secrets, key-value pairs, and other security parameters.
  • Experience with automating Build, Test and Deployment processes using CI/CD pipelines in Jenkins by developing scripts using Groovy, Bash etc.
  • Automated the configuration, installation, and deployment of many systems within cloud services, including the monitoring system, using Ansible and Python.
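The Terraform module pattern described above can be sketched minimally. The module layout, names, and the choice of an SQS queue here are illustrative assumptions, not taken from any specific project:

```hcl
# modules/queue/main.tf -- a hypothetical reusable module that provisions
# one SQS queue and exposes its URL to callers.
variable "name" {
  type        = string
  description = "Queue name"
}

resource "aws_sqs_queue" "this" {
  name = var.name
}

output "queue_url" {
  value = aws_sqs_queue.this.id # the queue URL
}

# A root configuration would consume the module like this (illustrative):
# module "orders_queue" {
#   source = "./modules/queue"
#   name   = "orders"
# }
```

Keeping each service behind a small module like this is what makes the same definitions reusable across environments.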

Critical-thinking DevOps Engineer with an extensive understanding of high-availability architecture and concepts. Purpose-driven professional with the capacity to be a strong team player and to work effectively independently.

Overview

10 years of professional experience
1 certification

Work History

DevOps Engineer

Verizon
08.2021 - Current
  • Work in setting up the CI/CD pipelines using Jenkins, GitHub, GitOps, Helm and AWS
  • Monitor and manage Kibana logs on environments like Dev, QA, and Prod
  • Use Kubernetes to orchestrate the deployment, scaling and management of Docker Containers
  • Maintaining container-based deployments using Docker, working with Docker images, Docker Hub and Docker-registries and Kubernetes
  • Use Jenkins pipelines to drive all microservices builds out to the Docker registry and then deploy them to Kubernetes; created and managed Pods using Kubernetes
  • Work extensively on scheduling, deploying, and managing container replicas onto nodes using Kubernetes; experienced in creating Kubernetes clusters and working with Helm charts running on the same cluster resources
  • Point of contact on OpenShift for creating new Projects and Services for load balancing and adding them to Routes to be accessible from outside, troubleshooting pods through SSH and logs, and modifying BuildConfigs, templates, ImageStreams, etc
  • Managing the Openshift cluster that includes scaling up and down the AWS app nodes
  • Managing logs, splitting them out to the ELK stack where they can be analyzed and queried
  • Set up and manage Elasticsearch and its cluster
  • Experienced with Fluent Bit: every node runs a Fluent Bit agent deployed via a DaemonSet, which watches the logs generated on that node and keeps pushing them to Elasticsearch
  • Work heavily with AWS and its infrastructure, including EC2, ECS, ElastiCache, Elasticsearch, RDS, VPC implementation, IAM, S3, ELB, Route 53, and Security Group management
  • Implemented the ArgoCD UI on all environments (Dev, QA, Prod), where the default ArgoCD controllers automatically sync with Git and deploy our applications
  • Manage multiple AWS accounts with multiple VPCs for both production and non-prod, where primary objectives include automation, build-out, integration, and cost control
  • Improving tooling, processes, security, and infrastructure that support all of Motocho's cloud environments
  • Identifying, designing, and developing automation solutions to create, manage and improve cloud infrastructure, builds, and deployments
  • Leading from proof of concept to implementation of new DevOps tools and solutions; Maintaining strong code hygiene (Groovy, Terraform, Python, Java) and doing peer code reviews on a daily basis
  • Automated operation, installation and monitoring of search ecosystem components in our open source infrastructure stack; specifically: Solr, Zookeeper, Message Queues (RabbitMQ/Kafka/ActiveMQ), Redis
  • Set up ElastiCache using Memcached
  • Configured Redis & Memcached services on Linux environment for faster session access
  • Experienced with SQL and relational databases (PostgreSQL, MySQL) and NoSQL databases (MongoDB, Redis)
  • Identify, troubleshoot and resolve issues related to the build and deploy process
  • Owning critical infrastructure components or systems, and continuously working to improve them; Diving deep to resolve problems at their root, looking for failure patterns and driving resolution
  • Ensuring stability, reliability, and performance of AWS infrastructure; Improving applications performance
  • Executing operational and maintenance activities via planned work
  • Troubleshooting, monitoring and resolving high priority incidents within SLA; Proactively monitoring system performance and capacity planning; Participating as a subject matter expert on process improvement, training & tool development
  • Environment: Jenkins, Terraform, AWS (EC2, Route53, S3, VPC, EBS, Auto Scaling), Kubernetes, Helm, Tekton, Elasticsearch, Kibana, FluxCD/ArgoCD, Git, Unix/Linux environment, Bash scripting, GitHub Actions.
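The Fluent Bit log-shipping setup described above (one agent per node via a DaemonSet, forwarding to Elasticsearch) can be sketched as a minimal manifest. The namespace, image tag, and Elasticsearch service name below are placeholder assumptions:

```yaml
# Minimal sketch of a per-node Fluent Bit agent. Kubernetes schedules one
# pod of a DaemonSet onto every node, so each node's logs get shipped.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: logging            # placeholder namespace
spec:
  selector:
    matchLabels:
      app: fluent-bit
  template:
    metadata:
      labels:
        app: fluent-bit
    spec:
      containers:
        - name: fluent-bit
          image: fluent/fluent-bit:2.2          # placeholder tag
          env:
            - name: FLUENT_ELASTICSEARCH_HOST
              value: elasticsearch.logging.svc  # placeholder service name
          volumeMounts:
            - name: varlog
              mountPath: /var/log
              readOnly: true    # agent only reads host logs
      volumes:
        - name: varlog
          hostPath:
            path: /var/log      # container logs live under the host's /var/log
```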

DevOps Engineer

S&P Global
03.2018 - 08.2021
  • Responsible for a multitude of applications, working with different team members and managers to maintain end-to-end CI/CD processes
  • Worked on developing automation scripts, Terraform Scripts, Google Cloud Platform, Kubernetes, analyzing the current build, release and deployment process and implementing CI/CD pipelines in Jenkins
  • Coordinated with the team to handle challenges during script development, including libraries and software development kits required to run the software on various operating systems and for deployment
  • Good understanding of storage, load balancers, virtualization, web, database and messaging services with the ability to dive deep into any of these areas when necessary
  • Ensured availability of Production and Development systems based on AWS (EC2 and EKS) and Google Cloud Platform
  • Migrated the existing infrastructure in DFW Datacenter to AWS and GCP cloud
  • It started with creation of landing zones, governance and IAM policies and setting up the VPCs and security
  • Created these infrastructure components using CloudFormation templates and AWS Vending Machines
  • Migrated the containerized applications to GCP GKE, along with secrets management with KMS
  • After the migration, deployed the infrastructure per project requirements and supported project teams
  • Worked with Security teams to integrate complete AWS access and IAM roles and permissions with Okta
  • Managed secrets with GCP’s KMS
  • Worked on a POC to deploy the API components on GKE cluster on GCP for flexibility
  • Created clusters in Google Cloud and managed them using Kubernetes (k8s); worked on setting up Kubernetes clusters for running microservices
  • Automated the provisioning of infrastructure and created infrastructure as code using Terraform Scripts and maintained Docker container clusters managed by Kubernetes, Linux, Bash, GIT, Docker, on Google Cloud Platform
  • Worked in setting up the CI/CD pipelines using Jenkins, Maven, Nexus, GitHub, and AWS
  • Involved in the maintenance of source code in GIT, branching, creating LABELS, merging of code on GIT for QA, Testing and Release
  • Set up Kafka using Terraform, managed MSK Kafka, monitored parameters, checked performance, managed access on Kafka, and gave Kafka access to developers
  • Used Stackdriver for monitoring CPU Utilization, Memory Management and Database
  • Increased stability and availability for live production sites
  • Configured 24/7 monitoring for live sites
  • Scheduled maintenance to warm up the infrastructure
  • Scheduled and involved in offtime deployments with releases
  • Served one full week per month in the on-call rotation to handle incidents (using PagerDuty)
  • Served one full week per month as fire chief, facing live client and developer requirements
  • Configured Apica synthetic monitoring checks with health-check alerts for application endpoints
  • Set up Datadog monitoring across different servers and AWS services
  • Created Datadog dashboards for various applications and monitored real-time and historical metrics
  • Created system alerts using various Datadog tools and alerted application teams based on the escalation matrix
  • Monitored performance and history of infrastructure with tools such as CloudWatch and Datadog
  • Environment: Terraform, GCP, AWS (EC2, Route53, S3, VPC, EBS, Auto Scaling), Kubernetes, Ansible, Git, Jenkins, CI/CD, Docker, Nagios, Unix/Linux environment, Bash scripting.
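The synthetic-monitoring and alerting work described above comes down to classifying endpoint probes into severities. A small sketch of that logic follows; the thresholds, endpoints, and severity names are hypothetical, and a real setup would use Apica's or Datadog's own alert configuration rather than hand-rolled code:

```python
# Hypothetical sketch of health-check classification behind synthetic
# monitoring: each probe records a status code and latency, and the
# classifier maps it to an alert severity. Thresholds are illustrative.
from dataclasses import dataclass


@dataclass
class CheckResult:
    endpoint: str
    status_code: int
    latency_ms: float


def classify(result: CheckResult,
             warn_ms: float = 500.0,
             crit_ms: float = 2000.0) -> str:
    """Map a probe result to an alert severity."""
    if result.status_code >= 400 or result.latency_ms >= crit_ms:
        return "critical"       # erroring or severely slow: page on-call
    if result.latency_ms >= warn_ms:
        return "warning"        # serving but slow: notify, don't page
    return "ok"


if __name__ == "__main__":
    probes = [
        CheckResult("/login", 200, 120.0),
        CheckResult("/search", 200, 800.0),
        CheckResult("/checkout", 503, 30.0),
    ]
    for p in probes:
        print(p.endpoint, classify(p))
```

An escalation matrix, as mentioned above, would then decide which team each severity notifies.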

AWS Engineer

Palo Alto Networks
02.2016 - 03.2018
  • Experience with automation/integration tools like Jenkins
  • Implementing CI/CD pipelines using Jenkins, Maven, Nexus, GitHub, and AWS
  • Installed and configured Jenkins for Automating Builds and Deployments through integration of Git into Jenkins to automate the code check-out thus providing an automation solution
  • Developed build and deployment scripts using Ant and Maven as build tools in Jenkins to move from one environment to other environments
  • Administering multiple Pre-Production environment configurations, controls, code deployments and code integrity using tools such as GIT, Jenkins
  • Working closely with development teams to understand and support builds environments and integration requirements for application
  • Prepare installation and troubleshooting documents, including build and deployment process documents, production active plan and application release cycle activity
  • Involved in all the activities related with build, packaging, deployment and Maintenance of the applications
  • Single point of contact for any Build-Release and Environment related requirement or issues
  • Debugging system and software problems by working with cross functional developing and testing teams
  • Merging and maintaining upstream branches/future release branches on a weekly basis and resolving merge conflicts using GIT Bash
  • Replicating production environment to lower level environments such as Development environment to conduct performance testing with latest code fixes
  • Used various plug-ins to extend the base functionality of Jenkins to deploy, integrate tests and display reports
  • Managed Maven Repository using Nexus tool and used the same to share the snapshots and releases of internal projects
  • Good hands-on knowledge of Source Code Management (Version Control System) tool like Git
  • Handled the Build and release of software baselines, code merges, branch and label creation in GIT and interfaced between development and infrastructure
  • Involved in the maintenance of source code in GIT
  • Branching, creating labels, and merging code in Git for QA, Testing, and Release
  • Experience in developing Continuous Integration/ Delivery pipelines
  • Developed Continuous Integration, Nightly and On-demand build system from scratch with Jenkins, Maven, Ant
  • Involved in Installing Jenkins on a Linux machine and created a master and slave configuration to implement multiple parallel builds through a build farm
  • Implemented a CD pipeline involving Jenkins; GIT to complete the automation from commit to deployment
  • Knowledge of major cloud service providers, like AWS, Azure etc
  • Actively manage the day to day AWS accounts, make recommendations on how best to support our global infrastructure and interact with Developers and Architects in cross functional areas
  • Work with Security division to design and manage IAM roles for users, vendors and other third-party vendors
  • Work with business unit managers to understand project scope, suggest possible alternatives and document each step of the design
  • Work with internal teams to create the migration process of legacy systems to the AWS cloud
  • Ability to design high availability applications on AWS across availability zones and availability regions
  • Ability to design applications on AWS taking advantage of disaster recovery design guidelines
  • Prior experience in automated build pipeline, continuous integration and continuous deployment
  • Knowledge of monitoring, logging and cost management tools that integrate with AWS
  • Experience in AWS Networking: Direct Connect, VPC, NACLs, security groups, etc
  • Managed containers using Docker by writing Dockerfiles and setting up automated builds on Docker Hub
  • Creating Docker images for micro-services to work in AWS ECS and configuring Application Load Balancer and Auto Scaling Groups for high availability of applications in the cloud
  • Environment: AWS (EC2, VPC, ELB, S3, RDS, CloudWatch, Route53), Kubernetes, Git, Maven, Jenkins, Ansible, Terraform, Docker.
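The container work described above (Dockerfiles for microservice images destined for ECS behind a load balancer) can be sketched as a minimal Dockerfile; the base image, artifact path, and port are illustrative assumptions, not taken from any actual service:

```dockerfile
# Hypothetical minimal image for a Maven-built microservice.
# Base image, jar name, and port are placeholders.
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY target/service.jar /app/service.jar   # artifact name is illustrative
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app/service.jar"]
```

An automated build on Docker Hub (or a CI job) would rebuild this image on each push, and ECS would pull the resulting tag.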

Build/Support, Configuration and Release Engineer

CISCO
04.2014 - 03.2016
  • Participated in the release cycle of the product, which involved environments like Development, SIT, QA, UAT and Production
  • Responsible for the building and deploying the artifacts into DEV, SIT and QA Environments
  • Used Subversion and GIT as version Control management systems
  • Used Nexus as the internal repository for storing and sharing artifacts with the company
  • Involved in Installing Jenkins on a Linux machine and created a master and slave configuration to implement multiple parallel builds through a build farm
  • Implemented a CD pipeline involving Jenkins; GIT to complete the automation from commit to deployment
  • Installed and configured Tools for Continuous Integration environment – Jenkins, Nexus and Sonar
  • Experience in managing Source control systems GIT and SVN
  • Managed Jenkins and Bamboo as a CI server for different projects
  • Developed Continuous Integration, Nightly and On-demand build system from scratch with Jenkins, Maven, Ant
  • Responsible and accountable for the implementation of software configuration and release migration across the web and mobile applications
  • Facilitated the technical and business release implementation review meetings, identifying the dependencies and coordinated activities for smooth execution of plan
  • Proficient in deploying and supporting applications on WebSphere, Tomcat, and WebLogic application servers; used shell scripts to automate the deployment process
  • Executed user administration and maintenance tasks including creating users and groups, reports and queries
  • Maintained and coordinated environment configuration, controls, code integrity, and code conflict resolution
  • Used JIRA for ticket tracking, change management, and Agile project management
  • Provisioned and managed Linux servers (EC2 instances) in AWS and support DEV and TEST teams
  • Performed all the Integration, Quality Assurance and Production release implementation and deployments
  • Implemented AWS solutions using EC2, S3, RDS, EBS, Elastic Load Balancer, and Auto Scaling groups
  • Environment: Linux, Git, Nexus, AWS, Jenkins, Apache Tomcat, JIRA.
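The Nexus artifact management described above follows the standard Maven convention of separating snapshots from releases. A tiny sketch, with hypothetical repository names:

```python
# Hypothetical helper showing the Maven convention for routing artifacts
# to a Nexus repository: -SNAPSHOT versions go to the snapshots repo
# (mutable, overwritten on each build), everything else to releases
# (immutable, published once). Repository names are illustrative.
def repo_for(version: str) -> str:
    """Pick the Nexus repository for a Maven artifact version."""
    if version.endswith("-SNAPSHOT"):
        return "maven-snapshots"
    return "maven-releases"


if __name__ == "__main__":
    for v in ("1.4.2", "2.0.0-SNAPSHOT"):
        print(v, "->", repo_for(v))
```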

Education

Master of Science - Computer And Information Systems Security

University of Dallas
Irving, TX
12.2013

Skills

  • Linux
  • Unix
  • Ubuntu
  • Centos
  • Windows
  • Docker
  • Kubernetes
  • Ansible
  • Puppet
  • Python
  • Bash
  • Apache Tomcat
  • WebLogic
  • WebSphere
  • GCP / AWS EC2
  • IAM
  • AMI
  • Elastic Load Balancer (ELB)
  • DynamoDB
  • S3
  • SNS
  • CloudFormation
  • Route53
  • VPC
  • VPN
  • Security groups
  • CloudWatch
  • EBS
  • Athena
  • EMR

Certification

AWS Certified Developer Associate

AWS Certified DevOps Engineer Professional

Google Cloud certified Cloud Network Engineer

Google Cloud certified Cloud Architect
