Collaborate across multiple functional and technical teams to deliver an Agile-based project
Led projects as a DevOps Engineer, creating technology infrastructure and automation tools and maintaining configuration management
Accountable for conducting training sessions for junior team members and other groups on building processes whose dependencies are expressed in code, and for answering architectural and technical DevOps infrastructure questions from the team
Responsible for creating deployment strategies essential to the successful release of software in the work environment
Helped a client troubleshoot issues after their migration to the cloud
Work with clients to meet business needs and solve problems using the cloud
Used Kafka to collect, process, and store streaming event data (data with no discrete beginning or end)
Work with developers, architects, system administrators and other stakeholders to architect and configure Dev / Stage / QA and Prod environments in AWS (VPC, subnets, Security groups, EC2 instances, load balancer, Database, Redis, Route53, etc.)
Design and implement end-to-end Continuous Integration and Continuous Delivery (CI/CD) pipelines using both Jenkins and AWS pipeline
Used Helm, the package manager for Kubernetes, to define, install, and upgrade even the most complex Kubernetes applications
Automated the deployment and testing of resources using Infrastructure as Code (Terraform and CloudFormation) through pipelines built on DevOps principles, allowing customers to rapidly build, test, and release code while minimizing errors
Used containerization, a lightweight alternative to virtual machines that packages an application with its dependencies while sharing the host operating system, to deliver portable, lightweight, standardized, and easy-to-deploy workloads
Work with developers to build, deploy and orchestrate containers using Docker and Kubernetes
GitLab, Bamboo
Worked on the back end and have extensive knowledge of Amazon Web Services
Experienced with Azure and TCP/IP networking
Go, Typescript and SQL
Provide technical guidance and mentoring to peers, less experienced engineers, and client personnel
Designing for high availability and business continuity using self-healing architectures, failover routing policies, multi-AZ deployment of EC2 instances, ELB health checks, Auto Scaling, and other disaster recovery models
AWS Platform: AWS CloudFormation, AWS Lambda, AWS Systems Manager, S3, VPC, EC2, ELB, RDS, SNS, SQS, SES, Route53, CloudFront, Service Catalog, AWS Auto Scaling, Trusted Advisor, CloudWatch, Direct Connect, Transit Gateway, DynamoDB, etc.
Leveraged AWS Control Tower, AWS Organizations, etc., to set up and govern a secure, multi-account AWS environment based on the company's requirements
Implemented and managed Ansible Tower to scale automation and handle complex deployments
Monitored and optimized the environment to ensure cost and performance scale on demand
Utilize Kubernetes software to automate deployment, scaling, and operations of application containers across clusters of hosts
Designed and deployed scalable, highly available, fault-tolerant, and reliable resources in AWS
Hands-on experience with the AWS CLI, including deploying CFTs and managing S3, EC2, and IAM from the CLI
Recommended and implemented security best practices in AWS, including MFA, access key rotation, encryption using KMS, firewalls (security groups and NACLs), S3 bucket policies and ACLs, mitigating DDoS attacks, etc.
Experience with integrating multiple data sources (Oracle, SQL Server, Teradata, MongoDB, Excel, CSV)
Experience with Talend desktop enterprise studio products (DI, Big Data, MDM, etc.)
Experience with IaaS using Ansible
Knowledge & experience using Enterprise scheduling tools (e.g., Control-M)
Experience with MongoDB, SQL Server, Teradata, Postgres
Linux and Shell scripting
Used Azure DevOps as a set of integrated software development tools for sharing code, tracking work, and shipping software on-premises
My team utilized all DevOps features, or selected only what we needed, to enhance our collaborative workflows
We used Azure DevOps Server, which is compatible with the client's existing editor or integrated development environment (IDE), enabling a cross-functional team to complete projects of any size
Approximately half of my cloud experience is with Azure
Hands-on experience with AWS MGN (AWS Application Migration Service)
Used Groovy, a powerful, optionally typed, dynamic language for the Java platform with static-typing and static-compilation capabilities, aimed at improving developer productivity. It integrates smoothly with any Java program and immediately adds powerful features to an application, including scripting capabilities, domain-specific language authoring, runtime and compile-time metaprogramming, and functional programming
Managed build artifacts, the by-products of the software development process, which may consist of project source code, dependencies, binaries, or resources and may use different layouts depending on the technology. Used JFrog Artifactory, a universal DevOps solution providing end-to-end automation and management of binaries and artifacts through the application delivery process, improving productivity across the development ecosystem
Used SonarQube, a code quality assurance tool that collects and analyzes source code and reports on a project's code quality. It combines static and dynamic analysis tools and enables quality to be measured continually over time
Implemented an Ansible dynamic inventory using Python code available in AWS to configure the Ansible control server so that it identifies target servers based on their tags
DevOps Engineer/AWS Infrastructure Engineer
Bank of America
Arlington, VA
08.2017 - 05.2019
Worked with clients to understand their workflows including strengths and weaknesses, to identify new tech/solutions/improvements to make processes more efficient
Designed and deployed scalable, highly available, fault-tolerant, and reliable resources in AWS
Hands-on experience with the AWS CLI, including deploying CFTs and managing S3, EC2, and IAM from the CLI
11.2015 - 11.2018
Developed Solution Definition Documents (SDD) and Low-Level Design Documents for the public cloud
Architected Amazon RDS with Multi-AZ deployment for automatic failover at the database tier
Worked closely with customers, internal staff, and other stakeholders to determine planning, implementation, and integration of system-oriented projects
Designed and developed aspects of the migration journey (assess, mobilize, and migrate phases), leveraging CART, ADS, Migration Evaluator, DMS, CloudEndure, etc.
Leveraged AWS Control Tower, AWS Organizations, etc., to set up and govern a secure, multi-account AWS environment based on the company's requirements
AWS Platform: AWS CloudFormation, AWS Lambda, AWS Systems Manager, S3, VPC, EC2, ELB, RDS, SNS, SQS, SES, Route53, CloudFront, Service Catalog, AWS Auto Scaling, Trusted Advisor, CloudWatch, Direct Connect, Transit Gateway, DynamoDB, etc.
Implemented security best practices in AWS, including access key rotation, multi-factor authentication, role-based permissions, enforcing a strong password policy, configuring security groups and NACLs, S3 bucket policies, etc.
Managed and monitored all installed systems for the highest level of availability
Built high-performing, available, resilient, and efficient 3-tier architecture for customer applications, and performed reviews for architecture and infrastructure builds, following AWS best practices
Provisioned and managed AWS infrastructures using Terraform
Optimized cost through Reserved Instances, right-sizing EC2 instance types based on resource needs, S3 storage classes and lifecycle policies, leveraging Auto Scaling, etc.
Recommended and implemented security best practices in AWS, including MFA, access key rotation, encryption using KMS, firewalls (security groups and NACLs), S3 bucket policies and ACLs, mitigating DDoS attacks, etc.
Configured VPC peering with other accounts, allowing services and users in separate accounts to access and route traffic to one another
Responsible for installation, configuration, management, and maintenance of Linux systems, including managing users and groups
Used Jira to plan, track, support and close requests, tickets, and incidents
Network: VPC, VGW, TGW, CGW, IGW, NGW, etc
Monitored end-to-end infrastructure using CloudWatch and SNS for notifications
Used AWS IAM to provision authentication and authorization into AWS accounts and restrict/assign access to users and other AWS services
Solutions Architect/Cloud Engineer
Uber
San Francisco, CA
03.2014 - 07.2017
Leveraged AWS Control Tower, AWS Organizations, etc., to set up and govern a secure, multi-account AWS environment based on the company's requirements
AWS Platform: AWS CloudFormation, AWS Lambda, AWS Systems Manager, S3, VPC, EC2, ELB, RDS, SNS, SQS, SES, Route53, CloudFront, Service Catalog, AWS Auto Scaling, Trusted Advisor, CloudWatch, Direct Connect, Transit Gateway, DynamoDB, etc.
Implemented security best practices in AWS, including access key rotation, multi-factor authentication, role-based permissions, enforcing a strong password policy, configuring security groups and NACLs, S3 bucket policies, etc.
Managed and monitored all installed systems for the highest level of availability
Built high-performing, available, resilient, and efficient 3-tier architecture for customer applications, and performed reviews for architecture and infrastructure builds, following AWS best practices
Provisioned and managed AWS infrastructures using Terraform
Optimized cost through Reserved Instances, right-sizing EC2 instance types based on resource needs, S3 storage classes and lifecycle policies, leveraging Auto Scaling, etc.
Recommended and implemented security best practices in AWS, including MFA, access key rotation, encryption using KMS, firewalls (security groups and NACLs), S3 bucket policies and ACLs, mitigating DDoS attacks, etc.
Configured VPC peering with other accounts, allowing services and users in separate accounts to access and route traffic to one another
Responsible for installation, configuration, management, and maintenance of Linux systems, including managing users and groups
Used Jira to plan, track, support and close requests, tickets, and incidents
Network: VPC, VGW, TGW, CGW, IGW, NGW, etc
Monitored end-to-end infrastructure using CloudWatch and SNS for notifications
Used AWS IAM to provision authentication and authorization into AWS accounts and restrict/assign access to users and other AWS services
Health Information Technology Specialist
Nemours Children Hospital
Wilmington, DE
01.2008 - 06.2013
Review records for completeness, accuracy, and compliance with regulations
Process all clinical documents received for imaging to EMR
Protect the security of medical records to ensure confidentiality is maintained
Perform quality analysis on the images to ensure readability, correct indexing, proper orientation and accessibility
Review all documents prior to scanning into EMR to avoid entry of duplicate images into patient record
Assist providers and nurses with locating necessary clinical information from outside healthcare institutions
Plan, develop, maintain, or operate a variety of health record indexes or storage and retrieval systems to collect, classify, store, or analyze information
Process patient admission, discharge and Emergency Department presentation documents
Demonstrates a professional and courteous manner when communicating with others, with the ability to clearly and accurately state the agreed-upon resolution
Follows all policies and procedures of PPCP
Maintains a HIPAA compliant environment at all times
Completes all required PPCP education initiatives and yearly compliance training
Consistently demonstrate ability to respond to changing situations in a flexible manner in order to meet current needs, such as reprioritizing work as necessary
Minimize non-productive time and fill slow periods with activities that prepare for the future needs of the System (education, organizing, housekeeping, and assisting others)
Organize job functions and work area to be able to effectively complete varied assignments within established time frames
Maintains a cooperative relationship among health care teams by communicating information; responding to requests; building rapport; participating in team problem-solving methods.
Education
Bachelor of Science - Health Information Management Technology
I am a results-driven professional with over 10 years of experience in cloud computing and designing data security architectures. I have a proven track record of successfully directing and executing tactical operations plans: supporting and collaborating with clients, technical teams, and managers; migrating workloads to the cloud; designing and building reliable, secure, efficient, and cost-effective cloud infrastructures; and automating and optimizing mission-critical cloud deployments by leveraging configuration management, CI/CD, and DevOps processes along with troubleshooting and problem-solving skills. I am also a team player who works with all team members to ensure successful project execution.
Certification
AWS Solutions Architect Associate - 2022
DevOps Engineer Professional Associate - 2023