· 4+ years of development experience on cloud platforms (AWS and GCP), with solid experience building ETL ingestion flows on AWS.
· Experienced with distributed computing architectures, including AWS services (EC2, Redshift, EMR, Elasticsearch, Athena, and Lambda), Hadoop, Python, and Spark, and in the effective use of MapReduce, SQL, and Cassandra to solve big data problems.
· Hands-on experience designing and implementing data engineering pipelines and analyzing data using the AWS stack (EMR, Glue, EC2, Lambda, Athena, and Redshift) along with Sqoop and Hive.
· Experienced in working with structured data using Hive and optimizing Hive queries.
· Experience with client-server application development using Oracle PL/SQL, SQL*Plus, SQL Developer, TOAD, and SQL*Loader.
· Working experience migrating databases from multiple platforms to Snowflake.
· Strong experience architecting highly performant databases using MySQL and MongoDB.
· Extensive experience loading and analyzing large datasets with the Hadoop framework (MapReduce, HDFS, Pig, Hive, Flume, and Sqoop).
· Developed and maintained AWS Glue ETL workflows to process and transform millions of rows of data daily.
· Hands-on experience in application development using Java, RDBMS, and Linux shell scripting, including Object-Oriented Programming (OOP), multithreading in Core Java, and JDBC.
· Excellent working experience with Scrum/Agile and Waterfall project execution methodologies.
· Good experience with analysis tools such as Tableau for regression analysis, pie charts, and bar graphs.