I co-teach this course with Matei Zaharia, Sam Raymond, and Joseph Bradley.
This course is aimed at developers, data scientists, and engineers looking to build LLM-centric applications with the latest and most popular frameworks. You will use Hugging Face to solve natural language processing (NLP) problems, leverage LangChain to perform complex, multi-stage tasks, and dive deep into prompt engineering. You will use embeddings and vector databases to augment LLM pipelines. Additionally, you will fine-tune LLMs with domain-specific data to improve performance and reduce cost, and weigh the benefits and drawbacks of proprietary models. You will assess the societal, safety, and ethical considerations of using LLMs. Finally, you will learn how to deploy your models at scale, leveraging LLMOps best practices.
By the end of this course, you will have built an end-to-end LLM workflow that is ready for production!
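To give a flavor of where the workflow starts, here is a minimal sketch of the kind of Hugging Face pipeline the early modules build on. It is illustrative only, not the course's exact code: the task and model name are assumptions chosen for brevity.

```python
# A minimal, illustrative Hugging Face pipeline (not the course's exact code).
from transformers import pipeline

# Task and model are assumptions; the course covers how to pick these deliberately.
summarizer = pipeline(
    "summarization",
    model="sshleifer/distilbart-cnn-12-6",
)

text = (
    "Large language models can be adapted to many NLP tasks, from summarization "
    "to question answering, by combining pre-trained models with task-specific prompts."
)
print(summarizer(text, max_length=60, min_length=10)[0]["summary_text"])
```

From this starting point, the course layers on LangChain for multi-stage tasks, embeddings and vector databases for retrieval, fine-tuning, and finally LLMOps for deployment.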
Refer to Part 2 here: