
PRAVEEN KUMAR MADDINENI

Summary

Organized and dependable professional, successful at managing multiple priorities with a positive attitude and willing to take on added responsibilities to meet team goals.

Overview

9 years of professional experience
1 certification

Work History

Sr DevOps AWS/GCP Engineer

Freddie Mac
06.2023 - Current
  • Involved in design and deployment of multiple applications across the AWS ecosystem, leveraging EC2, Route53, S3, RDS, DynamoDB, VPC, SNS, SQS, and IAM to ensure high availability, fault tolerance, and auto-scaling using AWS CloudFormation
  • Spearheaded the seamless migration of multi-tier applications from on-premises to AWS/GCP, achieving a 40% reduction in operational costs and enhancing scalability
  • Executed seamless migration of production infrastructure to AWS, utilizing a comprehensive suite of services including AWS Server Migration Service (SMS), AWS Database Migration Service, Elastic Beanstalk, CloudFormation, Redshift, DynamoDB, CodeDeploy, CodeCommit, and EBS, transitioning off Google Cloud
  • Integrated SAST and DAST tools (SonarQube, OWASP ZAP) into the CI/CD process, identifying and mitigating 60% more vulnerabilities, substantially improving application security
  • Excelled in designing, deploying, and managing scalable and fault-tolerant API architectures on AWS and Google Cloud, demonstrating advanced cloud infrastructure management skills
  • Created BigQuery authorized views for row-level security and for exposing data to other teams
  • Gained proficiency in deploying GCP solutions, including Dataproc, GCS, Cloud Functions, and BigQuery, to optimize data processing and storage
  • Managed AWS storage solutions (EBS, S3, Glacier) and database services (RDS, DynamoDB), including automating data synchronization to Glacier and executing region-to-region migrations
  • Experience in configuring VPC, Route Tables, Direct Connect, Internet Gateway, Security Groups and CloudWatch monitoring Alerts
  • Designed AWS CloudFormation templates to create custom-sized VPCs, subnets, and NAT gateways to ensure successful deployment of web applications and database templates
  • Protect applications by integrating them to Okta through modern authentication protocols like SAML, OAuth and OIDC
  • Created and managed Snowflake roles and permissions to control user access to data
  • Optimized Snowflake queries to improve performance and reduce costs
  • Created new features and fixed bugs in Java and C# applications
  • Developed microservices for new features using Spring Boot
  • Proficient in utilizing SonarQube for conducting comprehensive static code analysis and quality assessments to ensure the delivery of high-quality, maintainable, and secure software solutions
  • Experienced in building, testing, and deploying applications using DevOps tools such as Git, Chef, Ansible, Jenkins, Docker, and Kubernetes
  • Good working experience with Amazon EKS on AWS, provisioned via CloudFormation
  • Loaded data into Amazon Redshift and used Amazon CloudWatch to collect and monitor AWS RDS instances within Confidential
  • Developed and executed a migration strategy to move Data Warehouse from Oracle platform to AWS Redshift
  • Expertise in converting existing AWS infrastructure to serverless architecture (AWS Lambda, Kinesis), deployed via Terraform or AWS CloudFormation
  • Extensive experience integrating load balancers with cloud services, including AWS Elastic Load Balancing (ELB) and Google Cloud Load Balancing
  • Developed and maintained high-performance JSON REST APIs, improving data integration and system efficiency by 25%
  • Performed automated deployments on AWS by creating IAM roles and policies, used the CodePipeline plugin to integrate Jenkins with AWS, and created EC2 instances to provide virtual servers
  • Used Terraform, a tool for building, changing, and versioning infrastructure safely and efficiently, and worked with key Terraform features such as infrastructure as code, execution plans, resource graphs, and change automation
  • Reduced build and deployment times by designing and implementing Docker workflow
  • Built and maintained Docker container clusters managed by Kubernetes; utilized Kubernetes and Docker as the runtime environment of the CI/CD system to build, test, and deploy
  • Expertise in writing gitlab-ci.yml at the repository root to configure CI/CD for microservices in GitLab/Jenkins
  • Developed applications using Golang
  • Worked on a command-line tool to interact with a RESTful API using Golang
  • Developed microservices in Go and wrote corresponding test cases
  • Responsible for Ansible setup for AWS and GCP environments, authoring playbooks for automated provisioning and configuration, enhancing infrastructure as code practices
  • Optimized costs for AWS services and built serverless architectures using Lambda, Step Functions, Athena, Glue, S3, and CloudWatch metrics
  • Created Datadog dashboards for various applications and monitored real-time and historical metrics
  • Configured and integrated Git into the continuous integration (CI) environment along with CloudBees Jenkins, and wrote scripts to containerize applications with Ansible and Docker and orchestrate them with Kubernetes
  • Moved applications to multi-cloud AWS with automation using Terraform, Kubernetes, Nomad, and Chef
  • Experience with Docker components such as Docker Engine, Docker Hub, Docker Machine, Docker Swarm, and Docker Scout
  • Skilled in Groovy programming, leveraging its dynamic nature and concise syntax to develop efficient and readable scripts and applications for various purposes, including automation, build processes, and custom tooling
  • Developed end-to-end MLOps pipelines leveraging AWS SageMaker for model training, deployment, and lifecycle management, AWS Lambda for serverless data processing, and AWS Step Functions for orchestrating machine learning workflows
  • This architecture streamlined the model deployment process, reducing deployment cycles by 30% and enhancing model accuracy
  • Automated stack monitoring with Nagios via Ansible playbooks and integrated Nagios with CloudBees Jenkins
  • Environment: AWS Services, GCP Services, Cloudera, Golang, HashiCorp Vault, JIRA, VMware, GitHub, Docker, YAML, Snowflake, Grafana, SageMaker, Artifactory, Shell, Glue, Terraform, GitLab CI/CD, Dynatrace, TypeScript, Maven, Python, Jenkins, Dataproc, Google BigQuery, GKE, SDK, CLI.
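
As an illustration of the S3-to-Glacier archiving automation mentioned above, a minimal boto3 sketch might look like the following; the bucket name, prefix, and 90-day window are assumptions for illustration, not details from this role.

```python
def glacier_lifecycle_rule(prefix: str, days: int) -> dict:
    """Build an S3 lifecycle rule that transitions objects under
    `prefix` to Glacier after `days` days."""
    return {
        "ID": f"archive-{prefix.strip('/') or 'all'}",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Transitions": [{"Days": days, "StorageClass": "GLACIER"}],
    }


def apply_rule(bucket: str, rule: dict) -> None:
    """Apply the rule to a bucket (requires AWS credentials)."""
    import boto3  # imported lazily so the rule builder stays testable offline

    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket=bucket,
        LifecycleConfiguration={"Rules": [rule]},
    )
```

Hypothetical usage: `apply_rule("my-log-bucket", glacier_lifecycle_rule("logs/", 90))`.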

Sr DevOps/GCP Infrastructure Engineer

USAA
10.2022 - 03.2023
  • Expertly configured Google Cloud VPC and Database Subnet Groups, achieving resource isolation and enhanced network security
  • Led the deployment and monitoring of scalable infrastructure using Terraform, Docker, and GKE, streamlining configuration management and operational efficiency
  • Utilized Cloud SDK and GCloud CLI for deploying web applications, managing GKE clusters, and implementing autoscaling, optimizing resource utilization based on traffic demand
  • Orchestrated PostgreSQL database management, ensuring data accuracy and operational efficiency through proactive security patch updates and database optimization
  • Developed and executed shell and Python scripts to automate administrative tasks
  • Administered Jenkins and Bamboo tools, facilitating CI/CD processes, and significantly accelerating service development
  • Implemented monitoring solutions using Dynatrace, Splunk, and NewRelic, enhancing visibility and proactive management of network services and resources
  • Experienced in TensorFlow and PyTorch for neural network training in image recognition projects, achieving a 47% reduction in classification errors
  • Experience in executing data transformation pipelines, contributing to efficient data pre-processing and analytics
  • Leveraged Cloud Shell SDK for configuring essential GCP services like DataProc, Storage, and BigQuery, enhancing data management and processing capabilities
  • Designed and implemented comprehensive CI/CD pipelines, ensuring rigorous testing and quality assurance before production deployments
  • Developed Terraform templates for creating tailored VPCs, subnets, and NATs, ensuring successful web and database application deployments
  • Created Ansible playbooks for GCP VM configuration management, utilizing Python SSH for efficient playbook execution and testing on GCP compute instances
  • Managed GCP's Cloud Identity and Access Management (IAM), defining user roles and privileges to safeguard cloud resources and data access
  • Expertise in designing scalable JSON REST APIs that support web and mobile applications, enhancing user experience
  • Experience in building distributed high-performance systems using Spark and Scala
  • Engineered data pipelines using Cloud Pub/Sub and Cloud Dataflow, facilitating seamless data ingestion into BigQuery, enhancing data analytics capabilities
  • Managed Google Cloud Storage (GCS) buckets, implementing policy management and utilizing GCS Coldline for cost-effective storage solutions and backups
  • Coordinated with developers on GIT source control practices, establishing branching and labeling conventions to streamline version control and collaboration
  • Environment: Red Hat, Windows, Git, Power BI, Trivy, OWASP, Cloudera, GitHub, Chef, BigQuery, gcloud, IAM, Jenkins, Ubuntu, Maven, Ruby, Shell Scripting, Subversion, Apache, Google Filestore, Anthos, ML Concepts, Cloud Storage, VMware, Concourse, YAML, Terraform, GKE, GCP DevOps, Compute Engine.
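
The IAM role management described above comes down to editing policy bindings; a minimal sketch follows, with the role and member names invented for illustration.

```python
def add_binding(policy: dict, role: str, member: str) -> dict:
    """Add `member` to `role` in an IAM policy dict, creating the
    binding if needed and avoiding duplicate members."""
    for binding in policy.setdefault("bindings", []):
        if binding["role"] == role:
            if member not in binding["members"]:
                binding["members"].append(member)
            return policy
    policy["bindings"].append({"role": role, "members": [member]})
    return policy
```

In practice the policy dict would be fetched from and written back to the resource via its get/set IAM policy methods; this sketch only shows the binding edit itself.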

Azure DevOps / Cloud Engineer

State of Florida
06.2021 - 07.2022
  • Created CI/CD pipelines for deploying services and tools to Kubernetes clusters hosted on bare metal
  • Deployed CNFs on Kubernetes clusters using Helm charts and the TCA tool
  • Created value files based on test deployments on test clusters and elevated them to production clusters
  • Installed and configured the ELK stack to ship logs from applications hosted on the cluster
  • Configured and automated the Azure DevOps Pipelines & Jenkins Pipelines/Build jobs for Continuous Integration and Deployments Dev to Production environments
  • Designed and developed the pipelines using Databricks and automated the pipelines for the ETL processes and further maintenance of the workloads in the process
  • Worked on continuous integration and continuous delivery jobs for several teams in dev and test environments using shell, Groovy
  • Enabled security parameters using ACL and gossip encryption key features on Consul
  • Worked with ETL operations in Azure Databricks by connecting to different relational databases using Kafka and used Informatica for creating, executing, and monitoring sessions and workflows
  • Automated cron jobs that schedule dev, model, and prod jobs and disable themselves after execution, offered as self-service to developers
  • Restricted user and service account access to Jenkins jobs by assigning and managing roles, for security in development and test environments
  • Used Databricks, Scala, and Spark for creating the data workflows and capturing the data from Delta tables in Delta Lakes
  • Monitored and resolved disk space issues on nodes connected to Jenkins in dev and test environments
  • Generated Jenkins reports on jobs executed for each business channel over a given period to aid metrics reviews
  • Created hooks on Bitbucket repositories to help automate Jenkins jobs
  • Configured Azure Key Vault services for development teams to handle secrets in dev, test, and production environments, using both the UI and the CLI in Jenkins jobs
  • Configured on-prem servers on Jenkins to support dev and test deployments for several teams, managing and maintaining credentials on Jenkins
  • Created and managed Azure and AWS cloud infrastructure for applications from various channels in the organization using Terraform
  • Configured Azure VMs to launch new instances with the same configuration using custom VM images
  • Deployed artifacts to staging and production environments from artifact registries such as ECR and ACR
  • Built Docker images and published them to a DTR repository
  • Monitored deployed applications using performance monitoring tools such as ELK and Grafana
  • Monitored Kubernetes cluster jobs and performance
  • Upgraded Kubernetes clusters and commissioned and decommissioned nodes and pods
  • Environment: Jira, Confluence, Bitbucket, Jenkins, Azure Cloud (VM, Blob, VMSS, VNET, AKS), Elasticsearch, Kibana, Ubuntu, Linux (RHEL), Windows, Terraform, Python, Shell scripting, GitLab CI/CD, Kubernetes, Vault, YAML, TCA, Grafana.
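
Elevating Helm value files from test to production, as described above, is essentially a deep merge of a base values document with per-environment overrides. A minimal sketch (the keys shown are illustrative, not from an actual chart):

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Return a new dict where `override` wins on conflicts, merging
    nested dicts recursively instead of replacing them wholesale."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged


# Base values validated on a test cluster, plus production overrides.
base = {"replicaCount": 1, "resources": {"limits": {"cpu": "500m"}}}
prod = {"replicaCount": 3, "resources": {"limits": {"memory": "1Gi"}}}
values = deep_merge(base, prod)
```

Note that Helm performs a similar merge itself when multiple `-f` value files are passed; this sketch only makes the elevation step explicit.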

Site Reliability Engineer /Cloud DevOps Engineer

Dignity Health
11.2020 - 04.2021
  • Configured Elastic Load Balancers with EC2 Auto scaling groups
  • Implemented AWS solutions using EC2, S3, RDS, EBS, Elastic Load Balancer, VPC, Auto scaling groups
  • Implemented a continuous delivery framework using Jenkins, JFrog, Maven, and Nexus in a Linux environment
  • Used the Jenkins AWS CodeDeploy plugin to deploy into AWS
  • Good knowledge of PHP programming, including OO and procedural styles, data structures, and design patterns
  • Developed data ingestion modules using AWS step functions, AWS Glue and python modules
  • Solid PHP coding skills, including code and performance optimization, debugging, and unit testing
  • Deployment and implementation of Chef for infrastructure as code initiative
  • Configured Ansible control machine and wrote playbooks with Ansible roles
  • Used file module in playbook to copy and remove files on EC2 instances
  • Integrated HashiCorp Consul with Jenkins to replace property values in scripts from the key/value store
  • Created a HashiCorp Vault server and pulled secrets automatically at provisioning time with Terraform
  • Created inventory in Ansible for automating the continuous deployment and wrote playbooks using YAML scripting
  • Developed AWS CLI script automation for EMR (end-to-end) and other AWS services, and built serverless architectures using Lambda (boto3) and Step Functions
  • Experience with container-based deployments using Docker, working with Docker images, Docker Hub, Docker registries, Kubernetes, and JFrog Artifactory
  • Used Typescript to write the Angular components, Modules, Services and Models
  • Experienced in leveraging Splunk for log and data analysis, enabling proactive monitoring, security threat detection, and actionable insights to support decision-making and enhance system performance
  • Dockerized Jenkins with Master and Slave architecture in OpenShift platform and automated the build jobs
  • Optimized volumes and EC2 instances
  • Used IAM to create new accounts, roles, and groups
  • Resolved system issues and inconsistencies in coordination with quality assurance and engineering teams
  • Hands on experience on JIRA for creating bug tickets, storyboarding, pulling reports from dashboard
  • Configured Nagios for monitoring and log analysis
  • Environment: Git, Jenkins, OpenShift, Ansible, Maven, HashiCorp Vault, Docker, AWS Glue, Consul, Terraform, JIRA, AWS, EC2, WebSphere, Python, Kubernetes, Nagios, JFrog, TypeScript.
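
The Ansible inventory work described above can be sketched as a minimal dynamic-inventory script: Ansible invokes it with `--list` and reads a JSON document from stdout. The group and host names here are invented for illustration.

```python
import json


def build_inventory(hosts_by_group: dict) -> dict:
    """Shape host data into Ansible's dynamic-inventory JSON format:
    one key per group plus a `_meta` section for per-host variables."""
    inventory = {"_meta": {"hostvars": {}}}
    for group, hosts in hosts_by_group.items():
        inventory[group] = {"hosts": list(hosts)}
    return inventory


if __name__ == "__main__":
    # In a real setup the host list would come from the EC2 API;
    # here it is hard-coded as a stand-in.
    print(json.dumps(build_inventory({"web": ["ec2-host-1", "ec2-host-2"]})))
```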

DevOps/AWS Engineer

Constella Intelligence
03.2018 - 06.2020
  • Involved in migration to AWS and implemented serverless architecture using various AWS services such as API Gateway, CloudWatch, Elasticsearch, SQS, DynamoDB, Lambda, CloudFormation, and S3
  • Automated the cloud infrastructure using AWS CloudFormation templates and Serverless Application Model templates, and deployed the infrastructure using Jenkins
  • Designed, developed, deployed the complete CI/CD on cloud and managed services on AWS
  • Created various stacks in CloudFormation which includes services like Amazon EC2, Amazon S3, API Gateway, Amazon RDS, Amazon Elastic Load Balancing, Athena
  • Involved in Architectural design and implemented the CloudFormation Templates for the whole AWS infrastructure
  • Worked with Auth0 and JSON web tokens for authentication and authorization security configurations using Node.js
  • Implemented data forking/mirroring of HTTP requests to the AWS cloud and on-prem servers using the mirror module in NGINX Plus
  • Created CloudWatch alarms for monitoring the Application performance and live traffic, Throughput, Latencies, Error codes and notify users using the SNS
  • Set up Ansible and Terraform, installing and upgrading them to the latest versions
  • Used Kubernetes to orchestrate the deployment scaling and management of Docker containers
  • Hands-on experience with Amazon EKS to manage Containers as a Service (CaaS) and simplify Kubernetes deployments in AWS
  • Customized and developed Puppet modules and Ruby templates for applications such as New Relic, NGINX Plus, SVN mirror, RabbitMQ, DB patching, backup, and updates
  • Created SSL and Digital Certificates for secured communication between servers using OpenSSL and Key tool
  • Developed, Supported, and Monitored the application in case of any Production issues and worked on-call support
  • Environment: AWS, RHEL (6, 7), Ansible, Jenkins, Bitbucket, SonarQube, NGINX Plus (r16), WSO2 ESB (4.9.0), New Relic, Splunk, YAML, JSON, JMeter, Shell Script.
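
A typical building block of the SQS/Lambda serverless flow described above is a handler that iterates over the batched SQS records. This is a generic sketch, not code from the role, and the message payload shape is an assumption.

```python
import json


def handler(event: dict, context=None) -> dict:
    """Lambda entry point for an SQS trigger: SQS delivers a batch of
    records, each carrying a JSON string in its `body` field."""
    processed = 0
    for record in event.get("Records", []):
        body = json.loads(record["body"])
        # ... business logic on `body` would go here ...
        processed += 1
    return {"statusCode": 200, "processed": processed}
```

Failures raised inside the loop would cause SQS to redeliver the batch, so real handlers usually add per-record error handling or partial-batch responses.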

Build & Release Engineer

Maveric Systems
06.2016 - 12.2018
  • Developed and supported the Software Release Management and procedures
  • Responsible for design and maintenance of the Subversion/GIT Repositories, views, and the access control strategies
  • Performed all necessary day-to-day Subversion/GIT support for different projects
  • Implemented & maintained the branching and build/release strategies utilizing Subversion/GIT
  • Familiarity with the fundamentals of Linux scripting languages and experience with Linux servers in virtualized environments
  • Involved in periodic archiving and storage of the source code for disaster recovery
  • Used ANT and MAVEN as a build tool on java projects for the development of build artifacts on the source code
  • Deployed Puppet, Puppet Dashboard, and Puppet DB for configuration management to existing infrastructure
  • Wrote Puppet manifests for deploying, configuring, and managing collectd for metric collection and monitoring
  • Managed Puppet infrastructure through major version upgrades
  • Worked on Java-based applications, responsible for writing business logic using Java, Maven, Spring Boot, and REST web services
  • Deployed Java applications into web application servers such as WebLogic
  • Expertise in Unix and Linux system installation, configuration, administration, the development and testing of backup and recovery methodologies, troubleshooting, capacity and performance planning, performance tuning, preventative maintenance, monitoring and alerting setup and security hardening
  • Worked as a system administrator for the build and deployments process on the enterprise server
  • Environment: Subversion, Git, AnthillPro, Python, Java/J2EE, Spring Boot, ANT, Maven, JIRA, Linux, UNIX, XML, WebLogic, MySQL, Perl scripts, Puppet, Shell scripts
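
Release management of the kind described above usually includes tagging builds with a semantic version. As a small illustrative sketch (the MAJOR.MINOR.PATCH tag format is an assumption):

```python
def bump(version: str, part: str = "patch") -> str:
    """Increment one component of a MAJOR.MINOR.PATCH version string,
    resetting the lower-order components as appropriate."""
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"
```

Such a helper would feed a `git tag` or `svn copy` step in the release pipeline.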

Education

Master of Science - Information Science

University of North Texas
Denton, TX
05-2024

Bachelor of Technology - Electrical And Electronics Engineering

Koneru Lakshmaiah Education Foundation
Vaddeswaram, AP
05-2019

Skills

    Docker

    Kubernetes

    Ansible

    Terraform

    Git

    GitHub

    Grafana

    Prometheus

    Splunk

    Python

    Bash

    Shell Scripting

Certification

  • Certified Google Cloud DevOps Engineer [Sep 2026]
  • Certified AWS Solutions Architect – Associate

