" Experienced data engineer with a strong foundation as a .NET full stack developer, bringing five years of proven expertise in software development and a demonstrated trajectory of career growth. Eager to apply my technical proficiency in data engineering, ETL processes, and data warehousing to architect and optimize data solutions that empower businesses to extract actionable insights. Committed to leveraging my diverse background to bridge the gap between software development and data engineering, while continuously expanding my knowledge and contributing to innovative data-driven projects." Summary of Experience: Over 5 Years of strong experience in Software Development Life Cycle (SDLC) including Requirements Analysis, Design Specification, and Testing as per Cycle in both Waterfall and Agile methodologies. Strong Experience with Amazon Web Services (AWS) Cloud Platform which includes services like EC2, S3, EMR, IAM, DynamoDB, Cloud Front, Cloud Watch, Route 53, Auto Scaling, and Security Groups. Experience in Microsoft Azure/Cloud Services like SQL Data Warehouse, Azure SQL Server, Azure Databricks, Azure Data Lake, Azure Blob Storage, and Azure Data Factory. Hands-on experience on Google Cloud Platform (GCP) in all the big data products BigQuery, Cloud Data Proc, Google Cloud Storage, and Composer (Air Flow as a service). Strong experience in using major components of Hadoop ecosystem components like HDFS, YARN, MapReduce, Hive, Impala, Pig, Sqoop, HBase, Spark, Spark SQL, Kafka, Spark Streaming, Flume, Oozie, Zookeeper, Hue. Excellent programming skills with experience in Java, PL/SQL, SQL, Scala, and Python Programming. Hands-on experience in writing Map Reduce programs using Java to handle data sets using Map and Reduce tasks. 
Extensive knowledge of writing Hadoop jobs for data analysis per business requirements using Hive: writing HiveQL queries for data extraction and join operations, writing custom UDFs as required, and optimizing Hive queries.
Experience with ETL concepts using Informatica PowerCenter and Ab Initio.
Experience importing and exporting data with Sqoop between HDFS and relational database systems, and loading it into partitioned Hive tables.
Developed custom Kafka producers and consumers for publishing to and subscribing from Kafka topics.
Involved in converting Hive/SQL queries into Spark transformations using Spark DataFrames and Scala.
Experience integrating Kafka brokers with Spark Streaming to process live data streams.
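The publish/subscribe pattern behind the custom Kafka producers and consumers described above can be sketched with Python's standard library alone. Here an in-memory queue.Queue stands in for a Kafka topic, and the record values are illustrative assumptions; real code would use a Kafka client library against an actual broker.

```python
import queue
import threading

topic = queue.Queue()   # in-memory stand-in for a Kafka topic partition
SENTINEL = object()     # signals the consumer that the stream has ended

def producer(records):
    """Publish each record to the topic, then signal completion."""
    for record in records:
        topic.put(record)
    topic.put(SENTINEL)

def consumer(out):
    """Subscribe to the topic, collecting records until the sentinel arrives."""
    while True:
        record = topic.get()
        if record is SENTINEL:
            break
        out.append(record)

received = []
c = threading.Thread(target=consumer, args=(received,))
p = threading.Thread(target=producer, args=(["evt-1", "evt-2", "evt-3"],))
c.start()
p.start()
p.join()
c.join()
```

Unlike this sketch, a real Kafka consumer polls a durable, partitioned log and tracks offsets, so messages survive consumer restarts; the threads here only mirror the producer/consumer decoupling.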