- 4 years of professional IT experience.
- Expertise with big data on AWS cloud services including S3, Auto Scaling, Glue, EMR, EC2, Lambda, Step Functions, CloudWatch, CloudFormation, Athena, DynamoDB, and Redshift.
- Strong experience in core Python, SQL, PL/SQL, and RESTful web services; 1 year of Java experience.
- Expertise with Hadoop-ecosystem and related tools such as MapReduce, Pig, Hive, ZooKeeper, Airflow, Snowflake, and Spark for data storage and analysis.
- Converted Hive/SQL queries into Spark transformations using Spark RDDs and Python.
- Strong understanding of data warehouse modeling and of reporting and analytics platforms such as Snowflake.
- Developed robust ETL pipelines with Apache Spark and Airflow, ensuring efficient data ingestion, transformation, and loading.
- Collaborated with the engineering team to design and develop SQL stored procedures that automate data collection and preprocessing.
- Created dashboards in Power BI and Tableau to deliver data-driven insights that inform business decisions.
- Hands-on experience with code versioning, automation, and workflow orchestration tools such as GitHub, Ansible, Slurm, Airflow, and Terraform.
- Experienced in monitoring database performance, troubleshooting issues, and optimizing database environments.
- Strong analytical and problem-solving skills, with a deep understanding of database technologies and systems.
- Equally confident working independently and collaboratively, with a strong focus on team goals and excellent verbal and written communication skills.