Madhusudhan P

Summary

Dynamic Infrastructure Engineer with a proven track record at Wells Fargo of enhancing system reliability and performance. Expert in Linux, proficient in Bash scripting, and a collaborative team player who significantly reduced mean time to resolution by leveraging advanced monitoring tools such as Splunk and AppDynamics. Passionate about driving efficiency and innovation through automation and keen problem-solving skills.

Overview

12 years of professional experience

Work History

Infrastructure Engineer

Wells Fargo
Charlotte
07.2023 - Current
  • Skilled in working with Linux and Windows operating systems, with the ability to detect, isolate, document, report, and resolve system outages and activate system recovery as needed
  • Confer with the client team on analysis, migration, and resolution of technical challenges in areas such as capacity planning, security management, and problem determination and resolution
  • Worked on a database migration project from Oracle to MongoDB
  • Troubleshoot and manage Ansible playbooks during release runs in multiple environments
  • Extensively used the AppDynamics and Splunk monitoring tools to analyze and troubleshoot web application dashboards for day-to-day production issues
  • Coordinate with various teams and vendors to ensure F5 and web application certificates remain in a valid state
  • Mentor and assist 2nd-level engineers in troubleshooting and resolving production problems within the Service Level Agreement (SLA)
  • Utilized APM tools like Dynatrace for real-time application performance monitoring, reducing mean time to resolution (MTTR) for critical issues
  • Worked with multiple teams on new releases; responsible for identifying code issues and guiding teams per production requirements
  • Perform software deployments in multiple data centers using the UrbanCode Deploy tool without application outages
  • Analyze log irregularities/bugs that may impact online application availability using Splunk, Grafana, AppDynamics and APM tools
  • Point of contact on OpenShift for creating new projects and services for load balancing, adding them to routes for external access, troubleshooting pods through SSH and logs, and modifying build configs and templates
  • Disable and enable nodes through the BIG-IP F5 load balancers for application traffic routing (a minimal sketch follows below)
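
A minimal sketch of disabling and re-enabling a BIG-IP node around maintenance, assuming the standard iControl REST node endpoint (/mgmt/tm/ltm/node); the host, node name, and credentials below are hypothetical placeholders.

    import requests

    BIGIP_HOST = "bigip.example.com"      # hypothetical BIG-IP management host
    NODE_NAME = "~Common~app-node-01"     # hypothetical node path

    def set_node_session(host, node, enabled, auth):
        """Enable or disable an LTM node via iControl REST (assumed endpoint layout)."""
        url = f"https://{host}/mgmt/tm/ltm/node/{node}"
        payload = {"session": "user-enabled" if enabled else "user-disabled"}
        resp = requests.patch(url, json=payload, auth=auth, verify=False)
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        creds = ("admin", "changeme")  # placeholder credentials
        set_node_session(BIGIP_HOST, NODE_NAME, enabled=False, auth=creds)  # drain traffic
        # ... perform maintenance ...
        set_node_session(BIGIP_HOST, NODE_NAME, enabled=True, auth=creds)   # restore traffic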

Web System Engineer

Wells Fargo
Charlotte
05.2020 - 04.2022
  • Worked with multiple teams on new releases; responsible for identifying code issues and guiding teams per production requirements
  • 24/7 on-call production support (L3)
  • Perform production deployments in multiple data centers using the UrbanCode Deploy tool without application outages
  • Work with architects on new projects to provide infrastructure-related input
  • Analyze log irregularities/bugs that may impact online application availability using Splunk, AppDynamics, Dynatrace and APM tools
  • Experience with BIG-IP F5 load balancers for application traffic routing
  • Install the required fixes for problem resolution
  • Work closely with all partners and vendors when troubleshooting and resolving application related production issues
  • Configure, upgrade and maintain devices to latest code releases and functional improvements
  • Prepare and execute scripts for log rotation and log analysis and document the results (see the sketch after this list)
  • Coordinate with onshore-offshore team & business stakeholders on daily activities and contribute with peers to the overall architecture
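
A minimal sketch of the kind of log-analysis script referenced above; the log path and message format are hypothetical placeholders.

    import re
    from collections import Counter
    from pathlib import Path

    LOG_FILE = Path("/var/log/app/application.log")    # hypothetical log location
    LEVEL_PATTERN = re.compile(r"\b(ERROR|FATAL)\b")    # assumed log levels of interest

    def summarize_errors(log_file):
        """Count ERROR/FATAL lines to surface irregularities that could affect availability."""
        counts = Counter()
        with log_file.open(errors="replace") as fh:
            for line in fh:
                match = LEVEL_PATTERN.search(line)
                if match:
                    counts[match.group(1)] += 1
        return counts

    if __name__ == "__main__":
        for level, count in summarize_errors(LOG_FILE).most_common():
            print(f"{level}: {count}")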

Site Reliability Engineer

FedEx
01.2020 - 04.2022
  • Responsible for on-call rotation for 24/7 application support
  • Responsible for maintaining Kubernetes clusters on AWS and on-premises
  • Hands-on experience writing Ansible playbooks and automating day-to-day tasks for users as part of self-service jobs
  • Experience with continuous deployment and blue-green deployment strategies
  • Engineered and maintained CI/CD pipelines using Jenkins, streamlining the software development lifecycle and enabling continuous integration and deployment
  • Actively participated in incident response and problem-solving, resolving critical issues promptly to ensure uninterrupted service delivery
  • Automated routine tasks through scripting languages, significantly reducing manual effort and improving system reliability
  • Collaborated with development and operations teams to troubleshoot and resolve complex issues, ensuring seamless application performance
  • Set up and managed build and deployment pipelines for new environments using Jenkins and Atlassian Bamboo
  • Integrated Prometheus and Grafana for monitoring and alerting, ensuring high availability and performance
  • Established and maintained documentation for configurations, processes, and troubleshooting procedures for knowledge sharing and training
  • Automated all manual infrastructure changes as code via Python, Jenkins pipelines, and Ansible
  • Created a private cloud using Docker and Kubernetes that supports DEV, TEST, and PROD environments
  • Development of Terraform integrations, distributed systems, and infrastructure automation tooling
  • Created Python scripts to automate security group creation and management from state files versioned in GitHub (see the sketch at the end of this job entry)
  • Developed analytical and statistical methods and real-time data analysis with Python for data analysis, processing, and integration into standardized and non-standardized reports
  • Documented all build and release process related items
  • Level one support for all the build and deploy issues encountered during the build process
  • Automated backups using shell scripts on Linux and PowerShell scripts, transferring data to S3 buckets
  • Worked on Splunk as an Infrastructure Monitoring tool
  • Environment: OpenShift, Docker, Kubernetes, Google Cloud, Jenkins, Ansible, SonarQube, Terraform, Maven, PowerShell, Linux, Git, JUnit, Nagios, Python, Jira, Nexus, Kafka, Splunk, Grafana, ELK Stack, WebLogic
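
A minimal sketch, under stated assumptions, of the security-group automation mentioned above: the state-file layout, path, and group details are hypothetical, while the calls shown are standard boto3 EC2 client operations.

    import json
    import boto3

    STATE_FILE = "security_groups.json"  # hypothetical state file versioned in GitHub

    def ensure_security_group(ec2, spec):
        """Create the security group described in the state file if it does not already exist."""
        existing = ec2.describe_security_groups(
            Filters=[{"Name": "group-name", "Values": [spec["name"]]}]
        )["SecurityGroups"]
        if existing:
            return existing[0]["GroupId"]
        group = ec2.create_security_group(
            GroupName=spec["name"], Description=spec["description"], VpcId=spec["vpc_id"]
        )
        ec2.authorize_security_group_ingress(
            GroupId=group["GroupId"], IpPermissions=spec["ingress"]
        )
        return group["GroupId"]

    if __name__ == "__main__":
        ec2 = boto3.client("ec2")
        with open(STATE_FILE) as fh:
            for spec in json.load(fh):
                print(spec["name"], "->", ensure_security_group(ec2, spec))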

MuleSoft Developer

Advance Auto Parts
Plano, Texas
08.2019 - 04.2020
  • Involved in various phases of the Software Development Life Cycle (SDLC), such as requirement gathering, modeling, analysis, architecture design, and development, using Agile methodology
  • Developed flows/orchestrations integrating components such as connectors, transformers, and scopes, written on top of different internal platforms using Mule ESB, for XML-to-CSV conversion
  • Worked on an authentication and authorization project using OAuth 2.0 and SSL
  • Experience in developing flows using Anypoint Studio
  • Experience in designing, developing, mocking, and managing APIs using MuleSoft's Anypoint Platform
  • Built integration solutions in MuleSoft using the API-led connectivity approach
  • Experience in designing and consuming RESTful and SOAP APIs
  • Data changes and synchronization are performed in near real time
  • Responsible for developing integration workflows using the Mule ESB framework; implemented DataWeave transformations and content-based routing in Mule ESB
  • Coordinated in all testing phases and worked closely with Performance testing team to create a baseline for the new application
  • Environment: MuleSoft Anypoint Studio, MuleSoft 4.x, Anypoint Platform, Java JDK, Salesforce, XML, JSON, Oracle, Mule Management Console, RAML 1.0, GitHub, Jenkins, UrbanCode Deploy, Connectors

MuleSoft Developer

Main Event Entertainment
Plano, TX
08.2018 - 07.2019
  • Mule ESB experience in implementing interfaces as per requirements
  • Experience in preparing technical design documents and participating in reviews of code, design, and test plans
  • Experience in analyzing and developing code as per the requirements
  • Involved in conducting unit and system integration testing
  • Used Java for various kinds of validations and for enhancing services in WSDL files
  • Experience in providing post development support
  • Involved in various stages of testing like functional and business acceptance
  • Worked with various Mule connectors such as HTTPS, File, SFTP, VM, DB, JMS, and Amazon S3
  • Strong application integration experience using Mule ESB with connectors, transformations, routing, and ActiveMQ; migrated Mule ESB 3.7.2 to Mule ESB 4.1.4 and updated all project dependencies
  • Worked with the Twilio connector to send text messages
  • Generated CSV reports from Taleo Client Connect (TCC) and integrated them with MuleSoft
  • Applied security mechanisms to Mule ESB applications using OAuth
  • Developed RESTful web services in Mule ESB based on SOA architecture
  • Used encryption algorithms to encrypt the fields in the environment properties
  • Used Database Connectors to connect with respective systems using Mule ESB
  • Involved in exposing and consuming SOAP and RESTful (JSON) web services based on SOA architecture
  • Involved in MuleSoft API development using RAML
  • Used Batch scope for bulk transfer of data in MuleSoft
  • Scheduled triggers using Poll and Quartz, and used Mule Requester to connect to FTP on demand
  • Environment: Anypoint Studio, Anypoint Platform, Twilio, Taleo Client Connect (TCC), SQL Database, Maven, RESTful web services

Linux Administrator

AMEO Solutions
India
04.2013 - 11.2015
  • Installation of patches and packages using RPM and YUM in Red Hat Linux
  • Installed and configured SAMBA server for Windows and Linux connectivity
  • Installed and configured Apache / Tomcat web server
  • Installed and configured VNC (Virtual Network Computing) server/client
  • Monitored system activities such as CPU, memory, disk, and swap space usage to avoid performance issues (see the sketch at the end of this job entry)
  • Created and modified users and groups with SUDO permission
  • Created and modified application related objects, created Profiles, users, roles and maintained system security
  • Responsible for setting up cron job scripts on production servers
  • Responsible for writing/modifying scripts using sh, ksh, and bash for day-to-day administration
  • Modified Kernel parameters to improve the server performance in Linux
  • Creation of Logical Volume Manager (LVM) for Linux operating systems
  • Involved in the design, configuration, installation, implementation, management, maintenance, and support of corporate Linux servers running RHEL 4.x, 5.x, SLES 9, and CentOS 5.x
  • Coordinating with 24x7 on-call support personnel on debugging
  • Coordinating with users on server activities that may involve major software changes or hardware-related issues
  • Maintained proper documentation of all the activities carried out during the project
  • Worked with DBA team for database performance issues, network related issues on Linux Servers
  • Environment: Red Hat Linux (RHEL 4/5), Logical Volume Manager, Global File System, Red Hat Cluster Servers, Oracle, MySQL, DNS, NIS, NFS, Apache, Tomcat
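
A minimal sketch of the kind of resource check described above, written in Python with the psutil library (an assumption, since the original checks may have been shell-based); the thresholds are hypothetical, and the script is meant to be scheduled from cron so its output is mailed to the operations alias.

    import psutil

    THRESHOLDS = {"cpu": 90.0, "memory": 90.0, "disk": 85.0, "swap": 50.0}  # hypothetical limits (%)

    def collect_usage():
        """Gather CPU, memory, disk, and swap usage as percentages."""
        return {
            "cpu": psutil.cpu_percent(interval=1),
            "memory": psutil.virtual_memory().percent,
            "disk": psutil.disk_usage("/").percent,
            "swap": psutil.swap_memory().percent,
        }

    if __name__ == "__main__":
        for name, value in collect_usage().items():
            if value >= THRESHOLDS[name]:
                print(f"WARNING: {name} usage at {value:.1f}% (threshold {THRESHOLDS[name]}%)")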

Education

Master's in IT

Wilmington University
Delaware
12.2018

Bachelor of Technology (B.Tech.)

Jawaharlal Nehru Technological University
Hyderabad
01.2013

Skills

  • Linux (Red Hat, CENTOS)
  • Ubuntu
  • Windows
  • Bash Scripting
  • GitHub
  • Subversion (SVN)
  • Maven
  • Ant
  • WebSphere Application Server
  • Apache Tomcat
  • WebLogic
  • Nginx
  • Tomcat/TCServer
  • TCP/IP
  • DNS
  • DHCP
  • NAT
  • WAN
  • LAN
  • FTP/TFTP
  • SMTP
  • LDAP
  • Splunk
  • AppDynamics
  • Dynatrace
  • Grafana
  • JIRA
  • VMware
  • VirtualBox
  • Postman
  • Oracle 10g/11g
  • MS SQL Server
  • MS Access
