- Designed and implemented data pipelines for generative AI models, leveraging Python and PyTorch for deep learning applications.
- Collaborated with cross-functional teams to integrate generative AI solutions into existing workflow systems, ensuring seamless deployment and scalability.
- Optimized and fine-tuned generative models for performance and efficiency, utilizing AWS Bedrock for model deployment.
- Developed LLMOps pipelines to streamline merging and deploying changes to generative AI models.
- Conducted prompt engineering to enhance the performance of large language models (LLMs) for specific use cases.
- Researched and implemented recent advancements in generative AI technologies to keep deployed solutions current.
- Diagnosed and resolved issues in generative AI models to ensure smooth operation in production environments.
- Created and maintained comprehensive documentation for generative AI models and their applications.
- Communicated complex technical concepts and findings to non-technical stakeholders, facilitating informed decision-making.
- Developed and deployed AI models on AWS using services like Lambda, S3, and EMR, ensuring scalability and performance.
- Utilized Python and PySpark for data processing and transformation, integrating machine learning models into data pipelines.
- Collaborated with data scientists to implement generative AI solutions, leveraging AWS Bedrock for model deployment.
- Built and maintained CI/CD pipelines for AI model deployment, ensuring efficient and secure updates.
- Conducted data analysis and visualization using Python libraries like Matplotlib and Seaborn to support AI model development.
- Worked with cross-functional teams to ensure alignment between technical solutions and business objectives.
Environment: Python, PyTorch, AWS Bedrock, LLMOps, Prompt Engineering, Generative AI, MERN Stack, Git, CI/CD.