Over 17 years of IT experience, with a strong background in data engineering and software development.

- Expertise in designing and architecting scalable data solutions, including data pipelines and big data platforms.
- Proficient in ETL automation using Redpoint; skilled in big data technologies such as PySpark, Hadoop, and Hive.
- Extensive experience with data lakehouse implementations, including MinIO object storage, Iceberg, and Kubernetes.
- Proven track record of migrating legacy data processing systems to modern data platforms.
- Specialized in building automated data pipelines, improving data accessibility, and driving business insights.
- Strong programming skills in C#, Python, Java, and JavaScript.
- Committed to agile methodologies, with experience using GitLab and following Scrum practices for project delivery.
- Experienced in delivering solutions for go-to-market business strategies and optimizing data workflows.
- Expertise in data integration and data quality solutions, including logical and physical design and implementation.
- Expertise in a variety of ETL tools, including Talend, Redpoint, and SSIS, with hands-on experience integrating with their APIs.
- Hands-on experience with AWS Glue.
- Proficient in OOP concepts and design patterns.
- Skilled in developing business plans, requirements specifications, user documentation, and architectural systems research.
- Strong design and integration problem-solving skills.
- Open to new technologies and a quick learner.