Sambasiva CH

San Ramon, CA

Summary

Dedicated and skilled DevOps Engineer with a proven track record of implementing access and authorization automation, applying RBAC principles, and operating in zero-trust environments. Proficient in managing credentials, automating code project creation, and ensuring the security and compliance of DevOps tools. Adept at facilitating Kubernetes and Rancher upgrades, maintaining container images, and evangelizing best practices for cloud infrastructure provisioning. Responsible Site Reliability Engineer who coordinates multiple third-party and internal providers, applying deep technical knowledge and pragmatic management skills to complete projects under budget and within prescribed timeframes.

Overview

10 years of professional experience
1 certification

Work History

Site Reliability Engineer

CyberArk
Hymera, IN
05.2022 - 09.2024
  • Monitored system performance using metrics such as latency, throughput, and availability.
  • Implemented automation tools to increase efficiency in deployment processes.
  • Ensured high availability and scalability of applications across multiple environments.
  • Performed root cause analysis of production incidents and provided recommendations for improvement.
  • Documented best practices and procedures for incident response activities.
  • Troubleshot complex issues related to application architecture and system configurations.

✔ Managed tenant creation for customers and maintained the Vault, PVWA, and Connector servers.

✔ Configured and managed build and release pipelines on the AWS platform.

✔ Patched and upgraded AWS EC2 servers using the AWS SSM service.

✔ Created custom Datadog dashboards to monitor server performance.

✔ Created alerts and notifications with Datadog queries.

✔ Served as on-call firefighter in PagerDuty.

✔ Managed deployments using AWX (Ansible Tower).

✔ Performed installation, upgrade, configuration, maintenance, patch/package management, and troubleshooting of operating systems and services.

✔ Handled file system creation and maintenance, including volume group creation, file system creation/extension, and logical volume management.

✔ Maintained Ansible playbooks to automate server installation and configuration (see the playbook sketch at the end of this role).

✔ Worked on use-case-specific custom roles and policies.

✔ Created S3 paths and managed S3 buckets with custom roles and policies.

✔ Maintained and modified CloudFormation templates based on requirements.

✔ Configured pipelines to add packages for build and deployment in the DEV, QA, and production environments.

✔ Created work tasks on the Jira Kanban board to track backlog, project status, and reporting.

✔ Installed multiple monitoring agents (Splunk, Dynatrace, etc.) on IaaS infrastructure.

✔ Created custom scripts to automate support processes where applicable, including powering off unused VMs and identifying and resolving issues at the Azure level.

✔ Joined calls with developers and the Microsoft team to troubleshoot production issues as needed.

✔ Monitored system performance, including virtual memory, swap space, disk utilization, and CPU utilization.

✔ Worked on AWS resources including VPC, VPC endpoints, Step Functions, and SageMaker.

✔ Developed PowerShell scripts and ARM templates to automate the provisioning and deployment process.

✔ Deployed an Azure Databricks workspace into an existing virtual network with public and private subnets and properly configured network security groups.

✔ Served as sole Artifactory administrator, responsible for backups, upgrades to the latest Artifactory versions, and granting required access to authorized users.

  • Created a Splunk dashboard for the Artifactory application, monitored server logs, and configured email notifications.
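
For illustration, a minimal Ansible playbook of the kind maintained for server patching and configuration; the inventory group, package scope, and reboot behavior are hypothetical placeholders, not details from the work above.

- name: Patch application servers (illustrative sketch)
  hosts: app_servers                  # hypothetical inventory group
  become: true
  tasks:
    - name: Apply the latest security and bugfix updates
      ansible.builtin.yum:
        name: "*"
        state: latest
      register: patch_result

    - name: Reboot only when packages actually changed
      ansible.builtin.reboot:
        reboot_timeout: 600
      when: patch_result is changed

Playbooks along these lines can be launched from AWX so patch cycles stay repeatable and auditable.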

DevOps Engineer

Nomi Health
Austin, TX
08.2023 - 12.2023
  • Implemented access and authorization automation via AD group assignments and RBAC permissions across all DevOps tools, ensuring seamless onboarding and offboarding of users.
  • Managed the creation and rotation of credentials leveraging service accounts and PAM to enforce a zero-trust environment.
  • Created configuration as code to define objects and modifications across DevOps tools, enhancing efficiency and consistency.
  • Automated code project and repository creation with standard configurations such as PR approvals, protected branches, etc.
  • Coordinated with product vendor support to troubleshoot integration issues and address feature requests promptly.
  • Ensured DevOps tools and plug-ins remained up to date with the latest security and feature enhancements.
  • Assisted development teams in their automation journey by driving pipeline creation, deployment scripts, and strategies.
  • Provided guidance and standardization on project structure and build tools, promoting consistent artifact creation.
  • Integrated SAST and SCA scanning into pipelines to enforce secure code practices and supply chain security.
  • Automated tech stack patching using tools like Ansible to ensure application dependencies were up to date.
  • Created CI and CD pipelines with Jenkins and Docker to automate the build process of applications.
  • Configured, managed, and monitored cloud-based services such as AWS EC2, S3, EBS, ELB, RDS using Terraform and Ansible.

Senior Infrastructure Engineer

Cognizant Technologies
Hyde, PA
04.2021 - 05.2022
  • Managed large-scale projects related to infrastructure upgrades and migrations.
  • Configured, monitored and maintained Windows servers for optimal performance.

  • Assisted in the evaluation of new products by researching features and compatibility with existing systems.

· Worked on AWS resources including VPC, VPC endpoints, Step Functions, and SageMaker.

· Performed Snowflake administration and case onboardings.

· Created data warehouses, databases, schemas, and system accounts.

· Granted role permissions on schemas and tables.

· Created Databricks clusters and maintained the workspaces.

· Troubleshot S3 path access issues and connection issues from Databricks.

· Created scopes and secrets to establish connectivity between Snowflake and Databricks.

· Troubleshot connectivity issues from serverless accounts to Snowflake.

· Evaluated performance trends and expected changes in demand and capacity, and established appropriate scalability plans for cost optimization.

· Configured Virtual Private Cloud (VPC), public and private subnets, security groups, and Nginx load balancers.

· Provided process improvement recommendations for high availability of Azure VMs.

· Worked in an environment heavily focused on Infrastructure as Code (IaC), execution plans, resource graphs, and change automation using Terraform; managed AWS infrastructure as code with Terraform.

· Maintained Nexus to store the different artifact types built during the CI process.

· Worked on Splunk for log file integration and dashboard creation, with alerting integrated into ServiceNow.

· Performed blue/green deployments, canary deployments, and rolling updates in Kubernetes clusters to achieve zero application downtime.

· Used Azure Pipelines to build, test, and deploy with CI/CD across languages, platforms, and clouds (see the pipeline sketch at the end of this role).

· Participated in all phases of system development, deployment, configuration, and monitoring, including performance and availability, alerting, data integrity, security, and disaster recovery planning.
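
As a sketch of the Azure Pipelines usage mentioned above, a minimal pipeline definition; the branch name, agent image, and script names are hypothetical placeholders.

trigger:
  branches:
    include:
      - main                          # hypothetical default branch

pool:
  vmImage: ubuntu-latest

stages:
  - stage: Build
    jobs:
      - job: BuildAndTest
        steps:
          - script: ./build.sh        # hypothetical build script
            displayName: Build application
          - script: ./run_tests.sh    # hypothetical test script
            displayName: Run tests

  - stage: Deploy
    dependsOn: Build
    jobs:
      - job: DeployToDev
        steps:
          - script: ./deploy.sh dev   # hypothetical deploy script
            displayName: Deploy to DEV

The same pattern extends to QA and production by adding further stages.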

Senior DevOps Engineer

Wipro Technologies
Bengaluru, India
11.2017 - 04.2021
  • Configured cloud services utilizing Amazon Web Services.
  • Created detailed documentation of processes, procedures, and standards utilized in the environment.
  • Analyzed existing infrastructure and systems for optimization opportunities.

✔ Automated manual tasks with Ansible playbooks to save effort.

✔ Created Ansible playbooks to deploy images in the Kubernetes environment.

✔ Created Ansible roles to set up the infrastructure (Kubernetes, Kafka/ZooKeeper, TimescaleDB, and Postgres) in the DEV, QA, PERF, UAT, and PROD environments.

✔ Configured the monitoring setup from scratch with Prometheus and Grafana.

✔ Created dashboards for node metrics and Docker metrics, with alerting.

✔ Created Kubernetes Service and Deployment manifests for application deployment (see the manifest sketch at the end of this role).

✔ Created Jinja2 templates for all server configurations and modifications.

✔ Created a local repository in yum.repos.d to install and update packages using yum and rpm.

✔ Coordinated with developers and managers to ensure code was deployed to the production environment.

✔ Participated in backlog grooming and retrospective meetings.

✔ Automated DAG deployment by invoking a Lambda function from Groovy.

✔ Worked on serverless account creation and assume-role setup.

✔ Worked on IAM roles, policy attachments, and updates.

✔ Worked on use-case-specific custom roles and policies.

✔ Created S3 paths and managed S3 buckets with custom roles and policies.

✔ Maintained and modified CloudFormation templates based on requirements.
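
For illustration, a minimal Kubernetes Deployment and Service manifest of the kind described above; the application name, image, replica count, and ports are hypothetical placeholders.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app                    # hypothetical application name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
        - name: sample-app
          image: registry.example.com/sample-app:1.0.0   # hypothetical image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: sample-app
spec:
  selector:
    app: sample-app
  ports:
    - port: 80
      targetPort: 8080

Manifests like these can be applied directly with kubectl or templated and rolled out through the Ansible playbooks noted above.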

Linux Administrator

Ricoh
Bengaluru, India
07.2014 - 09.2017
  • Troubleshot network issues and resolved them using Linux system administration tools.
  • Monitored performance of applications running on Linux systems and tuned them as needed.
  • Installed and configured software packages on Linux systems according to customer requirements.
  • Configured, maintained, and secured various Linux systems for multiple users.
  • Provided technical support for end-users experiencing problems with their Linux systems or applications running on them.

✔ Created multiple ad hoc scripts for production validation and easier issue debugging.

✔ Hands-on experience with build tools such as Maven and Ant for building deployable artifacts (WAR and JAR) from source code.

✔ Worked with the Puppet configuration management tool to deploy WARs to target environments.

✔ Knowledge of blue/green deployments in ECS clusters for achieving zero application downtime.

✔ Worked extensively with the version control systems SVN (Subversion) and Git.

✔ Experienced in installing and configuring Linux distributions.

✔ Integrated WordPress with single sign-on using Okta.

✔ Involved in building and deploying .NET and Java/J2EE-based applications to multiple application servers in an Agile continuous integration environment, automating the end-to-end process.

✔ Contributed to automating post-deployment validation of application API calls with SoapUI and PowerShell.

✔ Worked on MSSQL for data reloads, permission setup for different service accounts, and data-related issues to help QA teams.

✔ Handled day-to-day tickets opened in ServiceNow.

Education

Master's in Computer Science - Computer and Information Sciences

Southern Arkansas University
Magnolia, AR
12.2023

Skills

  • Access and Authorization Automation
  • RBAC Implementation
  • Privileged Access Management (PAM)
  • Configuration as Code (CaC)
  • DevOps Tool Automation (Bitbucket, Jenkins, etc.)
  • Infrastructure as Code (IaC)
  • Kubernetes and Rancher Administration
  • Continuous Integration/Continuous Deployment (CI/CD)
  • Security Scanning (SAST, SCA)
  • Cloud Infrastructure Provisioning (AWS CloudFormation, Terraform)
  • Containerization (Docker, Kubernetes)
  • Scripting and Automation (Ansible, Python, etc.)

Timeline

DevOps Engineer

Nomi Health
08.2023 - 12.2023

Site Reliability Engineer

CyberArk
05.2022 - 09.2024

Senior Infrastructure Engineer

Cognizant Technologies
04.2021 - 05.2022

Senior DevOps Engineer

Wipro Technologies
11.2017 - 04.2021

Linux Administrator

Ricoh
07.2014 - 09.2017

Master's in Computer Science - Computer and Information Sciences

Southern Arkansas University