VijayaLaxmi Rapati

Austin, TX, USA

Summary

Dynamic SRE/DevOps Engineer with a proven track record at Home Depot, improving infrastructure automation and deployment efficiency by 40% using AWS, Terraform, and Kubernetes. Expert in Linux and Python, with strong problem-solving and communication skills; I excel in fast-paced environments, driving innovation and continuous improvement in cloud-based solutions.

Overview

12 years of professional experience

Work History

SRE DevOps Engineer

Home Depot
Austin, TX
06.2022 - Current
  • Created CI/CD pipelines using Terraform for deployment of infrastructure in the cloud
  • Supported self-migrated applications, performing root cause analysis of vulnerabilities
  • Gathered requirements from clients about existing applications in order to apply security measures
  • Performed L3 and L4 full life-cycle triage for all events on production servers, including incident logging, troubleshooting, and management of production crisis events
  • Created, validated, and reviewed solutions and effort estimates for converting existing workloads from classic to ARM-based Azure cloud environments
  • Responsible for integrating AWS with Apptio to create cost transparency reports leveraging the cost allocation tags to report at a granular level & auditing the enterprise accounts for Non-Standard Resources
  • Managed the automation of manual reporting using Tableau and Apptio
  • Maintained source code in Git and GitHub repositories
  • Developed Kubernetes clusters using Kops and Kubespray on AWS and VMware environments; configured the etcd key-value store, Flannel for networking between pods, and ingress controllers
  • Created shell scripts to monitor administrative tasks and automated them with cron jobs to free up resources
  • Implemented deployment automation using YAML scripts for large-scale builds and releases
  • Worked with Azure PaaS services including Automation Accounts, Scheduler, Notification Hubs, IoT Hub, and Log Analytics
  • Used tools such as Visual Studio Community Edition, Visual Studio Code, PowerShell ISE, and SQL Server Management Studio
  • Worked with Agile methodology in XL Deploy and XL Release, CI/CD automation from scratch, Docker, OpenShift
  • Worked on Jenkins to implement continuous integration and deployment to Tomcat/WebLogic application servers
  • Integrated Kubernetes with HashiCorp Vault to inject configurations at runtime for each service using init and config sidecars and persistent volume sharing between app and config containers
  • Developed applications and methods with Python for ETL, writing and reviewing code for server-side Python applications
  • Worked for 5 scrum teams (Java, Jenkins, Ant, Maven, SVN, git, Agile methodology, cucumber scripts, sonar, XL Deploy and XL Release, SharePoint, CI/CD automation from scratch, Docker)
  • Conducted Dry-Run Tests to ensure fool-proof execution of customized scripts before execution in production environments
  • Implemented backup methodologies via PowerShell scripts for Azure services such as Azure SQL Database, Key Vault, Storage blobs, and App Services
  • Created Azure services using ARM templates (JSON) and ensured no changes in the present infrastructure while doing incremental deployment
  • Implemented Spring boot microservices to process the messages into the Kafka cluster setup
  • Used Kibana and Elasticsearch to identify Kafka message failure scenarios
  • 4+ years of work experience with AWS EKS provisioned using CloudFormation
  • Implemented reprocessing of failed Kafka messages using offset IDs
  • Implemented Kafka producer and consumer applications
  • Used Spring Kafka API calls to process messages reliably on the Kafka cluster
  • Knowledgeable in Kafka message partitioning and setting replication factors in Kafka clusters
  • Worked on big data integration and analytics based on Hadoop, Solr, Spark, Kafka, Storm, and webMethods
  • Analyzed application performance during testing using Dynatrace, and created dashboards and reports for several applications
  • Administered and developed application monitoring solutions using tools such as Dynatrace and Splunk
  • Troubleshot and diagnosed issues in Dynatrace servers, collectors, and the performance warehouse
  • Set up Datadog monitoring across different servers and AWS services
  • Created Datadog dashboards for various applications and monitored real-time and historical metrics
  • Monitored infrastructure performance and history with tools such as CloudWatch and Datadog
  • Collaborated with cross-functional teams (firewall, database, and application teams) in executing this project
  • Played a major role in understanding logs and server data, delivering insights from the data to users
  • Worked as a Splunk admin, creating and managing apps, users, roles, and permissions for knowledge objects
  • Used databases such as MySQL, DynamoDB, IBM DB2, and ElastiCache to orchestrate and manage data
  • Worked with administrators to ensure Splunk is actively and accurately running and monitoring on the current infrastructure implementation
  • Experienced in using Elasticsearch, Kibana, Fluentd, CloudWatch, Nagios, Splunk, Prometheus, and Grafana for logging and monitoring
  • Interacted with the data warehousing team on extracting data and suggested standard data formats so that Splunk would identify most fields
  • Knowledge of Splunk architecture and components (indexer, forwarder, search head, deployment server), heavy and universal forwarders, and the license model
  • Analyzed security events and risks and reported incidents
  • Installed and configured Red Hat Enterprise Linux servers and services such as HTTP, NGINX, NFS, and FTP
  • Prepared, arranged, and tested Splunk search strings and operational strings, including writing regexes
  • Used ServiceNow for managing incidents and change request tickets

DevOps Engineer

Experian
Bangalore, India
06.2017 - 12.2020
  • Utilized GitHub, VSTS, Jenkins, Azure DevOps, and Terraform for application building and deployment.
  • Experience interacting with REST APIs, YAML, JSON, and Git
  • Specialized in implementing disaster recovery strategies, ensuring operational continuity.
  • Deployed containerized applications using Docker into a Kubernetes cluster managed by Amazon Elastic Kubernetes Service (AWS EKS)
  • Designed and developed a cloud-native application using AWS services such as ECS, EKS, and Fargate, resulting in a 40% increase in application scalability
  • Created microservices applications with integrations to AWS services using Amazon EKS, while providing access to the full suite of Kubernetes functionality
  • Working knowledge of collaboration tooling used for software development such as Azure DevOps Boards (or Jira), MS Teams (or Skype), ServiceNow
  • Performed operations tasks using cloud-native tools such as Log Analytics, Azure Monitor, and Azure Security Center, or other monitoring tooling
  • Experience with DevOps technologies such as Jenkins, Artifactory, Terraform, GitHub and basics of Kubernetes
  • Experience with networking and network/system security, including firewalls, VPN, routing, switching, load balancers, monitoring, security, and DNS
  • Experience with open-source tools like Linux, Git, Ansible, and configuration knowledge in Apache and Nginx
  • Execution of service provisioning to meet Service Level Agreement (SLA) requirements and report against infrastructure Key Performance Indicators (KPI)
  • Analyze code and communicate detailed reviews to development teams to ensure a marked improvement in applications and the timely completion of projects
  • Promote adherence to corporate infrastructure processes, procedures, and standards; Change Management, Disaster Recovery /Business Continuity Plan, (DR/BCP), and Security / Government regulatory compliance
  • Experienced in writing scripts, deployment frameworks, tracers, monitors, and self-healing/auto-remediation tools, and in automating processes
  • Knowledge and understanding of REST, JSON, and other API based systems
  • Develop and maintain design and troubleshooting documentation
  • Experience in large-scale engagements leading the functional and technical design, installation, and configuration functions for the full stack of infrastructure elements
  • Knowledge of modern technology service architectural hosting, security, and risk management concerns such as IAM, access control, monitoring, IaaS/PaaS
  • Knowledge of infrastructure automation, configuration management, developer workflows, and practices
  • Proficient in troubleshooting using Splunk
  • Configured the SSO integration add-on app for user authentication and single sign-on in Splunk Web
  • Expert in using Splunk with shell scripts for activities such as generating server status and health reports and performing deployments across large-scale server configurations
  • Migrated the on-premises workloads to the Azure cloud-based on the requirement
  • Experience with Splunk technical implementation, Planning, customization, integration with big data and statistical and analytical modeling
  • Experience in Splunk development (creating apps, dashboards, data models, all knowledge Objects etc.)
  • Created and Managed Splunk DB connect Identities, Database Connections, Database Inputs, Outputs, lookups, access controls
  • Strong design and live migration experience from on-premises to Azure IaaS & PaaS
  • Resolved merge conflicts and created branching strategies for the Code repositories in GitHub and Azure Repos
  • Experience with Azure Network components (Virtual Network, Network Security Group, User Defined Route, Gateway, Load Balancer)
  • Experienced in working on DevOps/Agile operations process
  • Experienced in computing cloud cost models, network topology, platform services, and storage options
  • Worked on version control Tool Git to track changes made by developers involving concepts like branching, Merging
  • Created Playbooks to check the system's Resiliency and failure
  • Knowledge in KQL Queries to investigate the logs from Log Analytics
  • Experience on collaboration tools like Slack, Confluence, OneDrive, Jira for working on sprint stories
  • Experienced in using Sourcetree for code merging strategy, along with Atom and VSTS as code editors
  • Built a DevOps workflow covering all stages: SCM commit build; integration build compiling code, JUnit test cases, and code coverage; build and bundle; publish with lead approvals; and deployment of artifacts
  • Created and configured Jenkins servers using Terraform and Ansible
  • Launched Amazon EC2 cloud instances using bootstrappers and CloudFormation templates
  • Installed application on AWS EC2 instances and configured the storage on S3 buckets
  • Managed IAM policies, providing access to different AWS resources, design and refine the workflows used to grant access
  • Worked with Git to maintain code, and deployed code through Travis and Terraform to Amazon Web Services
  • Integrated with TeamCity and Octopus for continuous integration and continuous delivery
  • Development was incremental: once changes were checked in to TFS, the daily build ran, executed the unit test cases, and deployed the changes to the CI environment
  • Enhanced existing automated Build/Deploy process and architect the next generation centralized deployment processes using Octopus
  • Created a virtual network on Windows Azure to connect all the servers
  • Handled escalated support tickets through closure for the MS Azure PaaS platform
  • Configured Azure AD Connect for federation with on-premises ADFS and Azure AD
  • Experienced in using Microsoft Azure
  • Provisioned EC2 instances, configured auto scaling, and defined CloudFormation JSON templates using Ansible modules
  • Accomplished tasks of client SCM team and worked on the migration of existing code repository from ClearCase multisite to TFS
  • Defined application servers on WebLogic Server, created nodes and horizontal clusters, and configured the Oracle JDBC provider to supply connectivity via a data source to the application
  • Deployed and managed web applications and services in AWS using Elastic Beanstalk
  • Designed AWS CloudFormation templates to create custom-sized VPCs, subnets, and NAT to ensure successful deployment of web applications and database templates
  • Created Maven POMs to automate the build process for the new projects and integrated them with third party tools like SonarQube, Nexus
  • Involved in creation of virtual machines and infrastructure in the Azure Cloud environment
  • Created Chef automation tools and builds, and improved manual processes overall
  • Wrote Chef cookbooks for various DB configurations to modularize and optimize the end product; wrote Java APIs for AWS Lambda to manage some of the AWS services
  • Designed and implemented a continuous build-test-deploy (CI/CD) system with multi-component, multibranch pipelines in Jenkins to support weekly releases and out-of-cycle releases based on business needs, serving microservices orchestrated on Kubernetes
  • Created monitors, alarms, and notifications using GCP billing and quotas and Stackdriver for cloud services such as GKE, NAT, Cloud Functions, and Pub/Sub
  • Containerized web applications by building Docker images
  • Worked on Kubernetes cluster creation, management, and orchestration of Docker containers, and managing cloud native applications
  • Worked as technical support to monitor and resolve the server and services related issues
  • Worked as admin to configure Linux servers for various applications and responsible for network management for Cloud using VPC
  • Worked for 5 scrum teams (Java, AEM, Jenkins, Ant, Maven, SVN, git, Agile methodology, cucumber scripts, sonar, XL Deploy and XL Release, SharePoint, CI/CD automation from scratch, Docker)
  • Designed and built Azure V2 network infrastructure, including Site-to-Site connections through a Meraki VPN/firewall appliance and Point-to-Site connections (V1 network to V2 network), for migration of Azure VMs from the classic portal to the Azure portal
  • Implemented custom procedures to unify streamline and automate application development and deployment process with Linux container technology using Docker
  • Wrote shell (Bash), Perl, Python, and PowerShell scripts for automating tasks
  • Designed, Installed and Implemented CI/CD automation system
  • Used Ant, Maven as a build tool on java projects for the development of build artifacts on the source code
  • Implemented Maven Release Plug-in through Jenkins Jobs for deploying the artifacts to Nexus, Artifactory
  • Maintained Nexus for storing artifacts and for searching the dependencies of a project based on GAV coordinates
  • Managed Version Control Subversion (SVN) and GIT Enterprise; and Automated current build process with Jenkins with proposed Branching strategies to accommodate code in various testing cycles
  • Provisioned deployment, orchestration, and operations across multiple data centers and cloud providers, covering cloud-specific infrastructure, systems, and cloud architecture planning
  • Developed rollout plans, monitored project milestones, and prepared detailed progress reports.
  • Launched Amazon EC2 cloud instances using Amazon Machine Images (Linux/Ubuntu) and configured launched instances for specific applications
  • Worked with AWS APIs to manage resources for services such as EC2, S3, VPC, CloudWatch, ELB, Auto Scaling, and SNS; created Python scripts using AWS API calls to manage all resources deployed on AWS
  • Automated single-click deployments onto AWS with Chef.
  • Maintained JIRA for tracking and updating project defects and tasks
  • Experience in monitoring infrastructure using Nagios

DevOps Engineer

GETOT
Bangalore, India
12.2012 - 06.2017
  • Developed and implemented Software Release Management strategies for various applications in an agile environment
  • Administered Linux and Windows environments using Ansible, Chef, and Puppet based on need
  • Built/Deployed custom Docker images from Artifactory into EKS k8s cluster as part of a Gitlab CI pipeline
  • Utilized Ansible and Jenkins to automate provisioning of our identity management solution, which implements single sign-on for AWS; EKS authentication was integrated with SSO as well
  • Created and Configured Jenkins server using Ansible and Shell scripts
  • Set up build and deployment automation for Terraform scripts using Jenkins
  • Involved in daily Scrum meetings
  • Also involved in iteration/sprint planning meetings to plan effort for upcoming sprints based on priority and estimates
  • Maintained and administered Git source code repositories using GitHub Enterprise
  • Implemented a new Docker container creation process in which each GitHub branch triggers a build on Jenkins as the continuous integration server
  • Planned and designed automation frameworks using QTP (VBScript) and Selenium, and developed automation scripts for banking and investment applications in Java and Python
  • Integrated Kafka source to read the payment confirmation messages
  • Implemented Maven as build tool on Java projects for the development of build artifacts on the source code
  • Developed and implemented Ant and Maven scripts to automate the build process, test the application, and run JUnit test suites in TDD fashion
  • Developed Shell/Python scripts to automate the deployment process
  • Owned upgrading, administration, plug-in management, user management, and job creation in Jenkins

Education

Bachelor of Science - Computer Science

Osmania University
Hyderabad
06.2010

Skills

  • Linux (CentOS)
  • Ubuntu
  • Unix
  • Windows
  • HDFS
  • MapReduce
  • Hive
  • YARN
  • Cassandra
  • Avro
  • Spark
  • ZooKeeper
  • Solr
  • Kafka
  • GIT
  • SVN
  • Bitbucket
  • Jenkins
  • CloudBees
  • Ant
  • Maven
  • Gradle
  • Terraform
  • Shell
  • PowerShell
  • Bash
  • Python
  • Docker
  • Kubernetes
  • VirtualBox
  • VMware
  • Nagios
  • Prometheus
  • CloudWatch
  • Grafana
  • Datadog
  • Splunk
  • Dynatrace
  • Microsoft Azure
  • AWS
  • Network fundamentals
  • Release management
  • Infrastructure automation
  • Monitoring and logging
  • Scripting languages
  • Performance management
  • Effective communication
  • Continuous integration
  • Containerization technologies
  • Meeting participation
  • IT solution development
  • Task prioritization
  • Developer collaboration
  • Continuous deployment
  • Incident management
  • Linux operating system
  • Configuration management

Timeline

SRE DevOps Engineer

Home Depot
06.2022 - Current

DevOps Engineer

Experian
06.2017 - 12.2020

DevOps Engineer

GETOT
12.2012 - 06.2017

Bachelor of Science - Computer Science

Osmania University