Around 9 years of IT experience with Amazon Web Services (Amazon EC2, Amazon S3, AWS Lambda, Amazon CloudWatch, Elastic Load Balancing, Amazon SimpleDB, Amazon RDS, Elasticsearch, Amazon MQ, Amazon SQS, AWS Identity and Access Management, Amazon EBS, and AWS CloudFormation).
Experience working with AWS CodePipeline to deploy Docker containers to Amazon ECS using services such as CloudFormation, CodeBuild, and CodeDeploy.
Capable of using AWS services such as EMR, S3, and CloudWatch to run and monitor Hadoop and Spark jobs on Amazon Web Services (AWS).
Experienced in automating, configuring, and deploying instances in AWS, Azure, and data-center environments, and in managing security groups on AWS.
Good knowledge of systems that process massive amounts of data in highly distributed environments on Cloudera and Hortonworks Hadoop distributions and on Amazon AWS.
Experience developing Spark applications using Spark SQL in Databricks for data extraction, transformation, and aggregation from multiple file formats, analyzing and transforming the data to uncover insights into customer usage patterns.
Experience designing Azure cloud architectures and implementation plans for hosting complex application workloads on Microsoft Azure.
Experience migrating SQL databases to Azure Data Lake, Azure Data Lake Analytics, Azure SQL Database, Databricks, and Azure SQL Data Warehouse; controlling and granting database access; and migrating on-premises databases to Azure Data Lake Store using Azure Data Factory.
Experience with Azure IaaS: Virtual Networks, Virtual Machines, Cloud Services, Resource Groups, ExpressRoute, Traffic Manager, VPN, Load Balancing, Application Gateways, and Autoscaling.
Experience building Power BI reports on Azure Analysis Services for better performance compared to DirectQuery against GCP BigQuery.
Good understanding of Big Data Hadoop and YARN architecture, along with Hadoop daemons such as JobTracker, TaskTracker, NameNode, DataNode, and Resource/Cluster Manager, as well as Kafka (distributed stream processing).
Hands-on experience with Hadoop ecosystem components such as Hadoop, Hive, Pig, Sqoop, HBase, Cassandra, Spark, Spark Streaming, Spark SQL, Oozie, ZooKeeper, Kafka, Flume, MapReduce, YARN, Scala, and Hue.
Experience importing and exporting data between HDFS and RDBMS with Sqoop and migrating data according to client requirements.
Ingested data into the Snowflake cloud data warehouse using Snowpipe.
Extensive experience with micro-batching to ingest millions of files into Snowflake as they arrive in the staging area.
Results-driven individual with a solid track record of delivering quality work. Known for excellent communication and teamwork abilities, with a commitment to achieving company goals and delivering exceptional service. Passionate about continuous learning and professional development.