With over 9 years of IT experience in Data Engineering, Data Analysis, and Machine Learning Engineering, I have developed expertise in building data pipelines and ETL scripts using Python, Groovy (JVM), SQL, AWS, GCP, Kafka, and Spark. I have built batch ETL processes that load data from sources such as MySQL, Mixpanel, REST APIs, and text files into the Snowflake data warehouse, and I have used Databricks for distributed data processing, transformation, validation, and cleaning to ensure data quality and integrity. I have collaborated with analytics and data science teams to support and deploy models using Docker, Python, Flask, and AWS Elastic Beanstalk, and I have built data pipelines with Spark on AWS EMR that process data from S3 and load it into Snowflake.

I am proficient with development tools such as Eclipse, PyCharm, PyScripter, Notepad++, and Sublime Text, and I have a strong foundation in Object-Oriented Programming, writing extensible, reusable, and maintainable code. My work also includes Python libraries and modules such as NumPy, Matplotlib, Beautiful Soup, and pickle, as well as building web services, API gateways, and CI/CD pipelines with Git, CodeBuild, CodeDeploy, and CodePipeline. I am skilled at writing efficient Python code and diagnosing and resolving performance issues. I bring strong client-facing communication, presentation, and leadership skills, with proven success working both independently and within teams.
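As a representative example, the following is a minimal sketch of the kind of batch pipeline described above: a PySpark job (as would run on an EMR cluster) that reads raw events from S3, applies basic cleaning and validation, and appends the result to Snowflake via the Snowflake Spark connector. The bucket path, table names, and credentials are hypothetical placeholders, and real jobs would pull secrets from a secrets manager rather than literals.

```python
# Sketch of an S3 -> clean/validate -> Snowflake batch load with PySpark.
# Requires the spark-snowflake connector package on the cluster; all
# names below (bucket, table, credentials) are illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3-to-snowflake-batch").getOrCreate()

# Read one day's partition of raw events from S3 (path is illustrative).
raw = spark.read.parquet("s3://example-bucket/events/dt=2024-01-01/")

# Basic cleaning and validation: drop rows missing required keys,
# deduplicate on the event id, and normalize the timestamp column.
clean = (
    raw.dropna(subset=["event_id", "user_id"])
       .dropDuplicates(["event_id"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
)

# Connection options for the Snowflake Spark connector.
sf_options = {
    "sfURL": "example.snowflakecomputing.com",
    "sfUser": "ETL_USER",
    "sfPassword": "***",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "LOAD_WH",
}

# Append the cleaned batch into the target Snowflake table.
(clean.write.format("net.snowflake.spark.snowflake")
      .options(**sf_options)
      .option("dbtable", "EVENTS")
      .mode("append")
      .save())
```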