What's Inside this 470-page Book (Updated October 2024)?

Please note: this e-book is an interactive resource, not a downloadable PDF.

  • Hands-on Guide to LLMs, Prompting, Retrieval-Augmented Generation (RAG) & Fine-tuning

  • Roadmap for Building Production-Ready Applications using LLMs

  • Fundamentals of LLM Theory

  • Simple-to-Advanced LLM Techniques & Frameworks

  • Code Projects with Real-World Applications

  • Colab Notebooks that you can run right away

  • Community access and our own AI Tutor

Cyber Monday Discount!

Use code: CYBER_MONDAY_2024 at checkout for 15% off and start learning right away at your own pace!

Industry Leaders on the Book

“This is the most comprehensive...to date on building LLM applications, and helps learners understand everything from fundamentals to the simple-to-advanced building blocks of constructing LLM applications. The application topics include prompting, RAG, agents, fine-tuning, and deployment - all essential topics in an AI Engineer's toolkit.”

Jerry Liu, Co-founder and CEO of LlamaIndex

“A truly wonderful resource that develops understanding of LLMs from the ground up, from theory to code and modern frameworks. Grounds your knowledge in research trends and frameworks that develop your intuition around what's coming. Highly recommend.”

Pete Huang, Co-founder of The Neuron

“An indispensable guide for anyone venturing into the world of large language models...Covering everything from theory to practical deployment, it’s a must-have in the library of every aspiring and seasoned AI professional.”

Shashank Kalanithi, Data Engineer at Meta

“It contains thorough explanations and code for you to start using and deploying LLMs, as well as optimizing their performance. Very highly recommended!”

Luis Serrano, PhD, Founder of Serrano.Academy and author of Grokking Machine Learning

“It covers the foundational aspects of LLMs as well as advanced use-cases like finetuning LLMs, Retrieval Augmented Generation and Agents. This will be valuable to anyone looking to dive into the field quickly and efficiently.”

Jeremy Pinto, Senior Applied Research Scientist at Mila

Chapter Overview

    1. Table of Contents
    2. About The Book

    1. Introduction
    2. Why Prompt Engineering, Fine-Tuning, and RAG?
    3. Coding Environment and Packages

    1. A Brief History of Language Models
    2. What are Large Language Models?
    3. Building Blocks of LLMs
    4. Tutorial: Translation with LLMs (GPT-3.5 API)
    5. Tutorial: Control LLMs Output with Few-Shot Learning
    6. Recap

    1. Understanding Transformers
    2. Transformer Model’s Design Choices
    3. Transformer Architecture Optimization Techniques
    4. The Generative Pre-trained Transformer (GPT) Architecture
    5. Introduction to Large Multimodal Models
    6. Proprietary vs. Open Models vs. Open-Source Language Models
    7. Applications and Use-Cases of LLMs
    8. Recap

    1. Understanding Hallucinations and Bias
    2. Reducing Hallucinations by Controlling LLM Outputs
    3. Evaluating LLM Performance
    4. Recap

    1. Prompting and Prompt Engineering
    2. Prompting Techniques
    3. Prompt Injection and Security
    4. Recap

Who is it for?

  • AI Practitioners, Programmers & Tinkerers
  • AI/ML Engineers & Computer Science Professionals
  • Students/Researchers & Job Seekers

The Only AI Engineering Toolkit You Need!

To build scalable and reliable products with LLMs

LLM Fundamentals, Architecture, & LLMs in Practice

Foundations

  • Building blocks of LLMs: language modeling, tokenization, embeddings, emergent abilities, scaling laws, context size…
  • Transformer Architecture: attention mechanism, design choices, encoder-only transformers, decoder-only transformers, encoder-decoder transformers, GPT Architecture, Masked Self-Attention, MinGPT
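
To ground the "Masked Self-Attention" item above, here is a minimal NumPy sketch of causal (masked) self-attention in the spirit of MinGPT; the shapes, variable names, and random weights are illustrative only and are not taken from the book's code.

```python
# A minimal NumPy sketch of masked (causal) self-attention.
import numpy as np

def masked_self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model); w_q, w_k, w_v: (d_model, d_head)."""
    queries, keys, values = x @ w_q, x @ w_k, x @ w_v
    scores = queries @ keys.T / np.sqrt(keys.shape[-1])   # scaled dot-product scores
    future = np.triu(np.ones_like(scores), k=1)           # 1s above the diagonal = future tokens
    scores = np.where(future == 1, -1e9, scores)          # block attention to the future
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # row-wise softmax
    return weights @ values                               # weighted sum of value vectors

# Toy usage: a sequence of 4 token embeddings with d_model = d_head = 8
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(masked_self_attention(x, w_q, w_k, w_v).shape)  # (4, 8)
```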


LLMs in Practice

  • Hallucinations & Biases: Mitigation strategies, controlling LLM outputs
  • Decoding methods: greedy search, sampling, beam search, top-k sampling, top-p sampling
  • Objective functions and evaluation metrics: the perplexity metric, plus the GLUE, SuperGLUE, BIG-Bench, HELM, and FLASK benchmarks…
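
As a concrete taste of the evaluation material, here is a toy computation of the perplexity metric listed above, i.e. the exponential of the average per-token negative log-likelihood; the token probabilities below are invented for illustration.

```python
# Perplexity = exp(mean negative log-likelihood per token).
import math

def perplexity(token_probs):
    """token_probs: the probability the model assigned to each actual next token."""
    nll = [-math.log(p) for p in token_probs]   # per-token negative log-likelihood
    return math.exp(sum(nll) / len(nll))

print(perplexity([0.25, 0.10, 0.50, 0.05]))  # more "surprise" -> higher perplexity
```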

Prompting & Frameworks

Prompting

  • Prompting techniques (a few-shot sketch follows this list): zero-shot, in-context, few-shot, role prompting, chains, and chain-of-thought…
  • Prompt Injection and Prompt Hacking
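
Here is the few-shot prompting sketch referenced in the list above, using the OpenAI Python client; the model name, the sentiment-labeling task, and the example reviews are placeholders, and an OPENAI_API_KEY is assumed to be set in the environment.

```python
# A hedged sketch of few-shot prompting: labeled examples are placed in the
# conversation so the model imitates the pattern on a new input.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

few_shot_messages = [
    {"role": "system", "content": "Classify the sentiment of each review as Positive or Negative."},
    {"role": "user", "content": "Review: I loved every chapter."},
    {"role": "assistant", "content": "Positive"},
    {"role": "user", "content": "Review: The examples were broken and outdated."},
    {"role": "assistant", "content": "Negative"},
    {"role": "user", "content": "Review: Clear, practical, and worth the price."},
]

response = client.chat.completions.create(model="gpt-3.5-turbo", messages=few_shot_messages)
print(response.choices[0].message.content)  # expected: "Positive"
```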


Frameworks

  • LangChain: prompt templates, output parsers, summarization chain, QA chains
  • LlamaIndex: vector stores, embeddings, data connectors, nodes, indexes
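
To illustrate the LlamaIndex items above, a hedged quickstart sketch: it assumes a recent llama-index install, an OPENAI_API_KEY for the default embedding model and LLM, and a placeholder ./data folder of documents.

```python
# A minimal LlamaIndex sketch: ingest documents, index them, and ask a question.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data").load_data()  # data connector -> Document objects
index = VectorStoreIndex.from_documents(documents)     # nodes + embeddings in a vector store
query_engine = index.as_query_engine()                 # retriever + response synthesizer
print(query_engine.query("What is this book about?"))
```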

RAG & Fine-Tuning

Retrieval-Augmented Generation Components

  • Data Ingestion (PDFs, web pages, Google Drive), text splitters, LangChain Chains
  • Embeddings, Vector Stores with Activeloop's Deep Lake
  • Querying in LlamaIndex: query construction, expansion, transformation, splitting, customizing a retriever engine…
  • Reranking Documents: recursive, small-to-big
  • RAG Metrics: Mean Reciprocal Rank (MRR), Hit Rate, Mean Average Precision (MAP), and Normalized Discounted Cumulative Gain (NDCG)...
  • Evaluation Tools: evaluating with ragas, custom evaluation of RAG pipelines
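
As an example of custom evaluation of a RAG pipeline, here is a toy sketch computing two of the retrieval metrics listed above, Hit Rate and Mean Reciprocal Rank (MRR); the ranked results and ground-truth document ids are invented.

```python
# Toy retrieval metrics over a tiny, made-up evaluation set.
def hit_rate(ranked_ids, relevant_id, k=3):
    """1 if the relevant document appears in the top-k results, else 0."""
    return 1.0 if relevant_id in ranked_ids[:k] else 0.0

def reciprocal_rank(ranked_ids, relevant_id):
    """1/rank of the relevant document, or 0 if it was not retrieved."""
    for rank, doc_id in enumerate(ranked_ids, start=1):
        if doc_id == relevant_id:
            return 1.0 / rank
    return 0.0

# One query per tuple: (retriever's ranked document ids, ground-truth id)
eval_set = [(["d3", "d1", "d7"], "d1"), (["d2", "d9", "d4"], "d4"), (["d5", "d6"], "d8")]
mrr = sum(reciprocal_rank(r, gt) for r, gt in eval_set) / len(eval_set)
hits = sum(hit_rate(r, gt) for r, gt in eval_set) / len(eval_set)
print(f"MRR={mrr:.2f}  HitRate@3={hits:.2f}")  # MRR=0.28  HitRate@3=0.67
```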


Fine-Tuning Optimization Techniques

  • LoRA, QLoRA, supervised fine-tuning (SFT), and RLHF
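
A hedged sketch of the LoRA idea using Hugging Face PEFT: small low-rank adapter matrices are trained on top of a frozen base model. The base checkpoint, rank, and target modules below are placeholder choices, not the book's exact configuration.

```python
# Attach LoRA adapters to a small causal LM; only the adapters are trainable.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
lora_config = LoraConfig(
    r=8,                                   # rank of the low-rank update matrices
    lora_alpha=16,                         # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # which attention projections get adapters
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # a small fraction of the total parameters
```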

Agents, Optimization & Deployment

Agents

  • Using AutoGPT & BabyAGI with LangChain
  • Agent Simulation Project: CAMEL, Generative Agents
  • Building Agents, LangGPT, OpenAI Assistants
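
To make the agent concept concrete, here is a toy, framework-free sketch of the observe/act loop that frameworks such as AutoGPT, BabyAGI, and the OpenAI Assistants automate; the "LLM" is stubbed out so the example runs offline, and the search tool is a placeholder.

```python
# A toy agent loop: the policy picks an action, a tool runs it, and the
# observation is fed back until the policy decides to finish.
def search(query: str) -> str:
    """Stub tool standing in for a real web-search or database call."""
    return "Paris is the capital of France."

TOOLS = {"search": search}

def stub_llm(task, scratchpad):
    """Stub policy: call the search tool once, then answer from its output."""
    if not scratchpad:
        return {"action": "search", "input": task}
    return {"action": "finish", "input": scratchpad[-1]}

def run_agent(task, max_steps=5):
    scratchpad = []                                   # observations fed back each turn
    for _ in range(max_steps):
        step = stub_llm(task, scratchpad)
        if step["action"] == "finish":
            return step["input"]
        scratchpad.append(TOOLS[step["action"]](step["input"]))
    return "Stopped: step limit reached."

print(run_agent("What is the capital of France?"))
```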

Optimization & Deployment

  • Challenges, quantization, pruning, distillation, cloud deployment, CPU and GPU optimization & deployment, creating APIs from open-source LLMs
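
As one concrete optimization example, here is a hedged sketch of post-training dynamic quantization with PyTorch, which stores Linear-layer weights in int8 for smaller, CPU-friendly deployment; the tiny model is a stand-in, not an actual LLM.

```python
# Dynamic quantization: Linear weights are converted to int8 after training.
import os
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

def size_mb(m, path="tmp.pt"):
    """Serialize the state dict and report its size on disk."""
    torch.save(m.state_dict(), path)
    size = os.path.getsize(path) / 1e6
    os.remove(path)
    return size

print(f"fp32: {size_mb(model):.2f} MB -> int8: {size_mb(quantized):.2f} MB")
```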

Why Should You Read This Book?

  • Future-Proof Skills

    This book explores various methods to adapt "foundational" LLMs to specific tasks with enhanced accuracy, reliability, and scalability. It tackles the unreliability of “out-of-the-box” LLMs by teaching the AI developer tech stack of the future: Prompting, Fine-Tuning, RAG, and Tool Use.

  • Scalable Solutions

    The book aims to guide developers through creating LLM products ready for production, leveraging the potential of AI across various industries. It breaks down techniques that are scalable for enterprise-level workflows, helping both independent developers and small companies with limited resources create AI products that deliver value to paying customers.

  • Practical Expertise for Everyone

    The book is for anyone who wants to build LLM products that can serve real use cases today. It comes with access to our webpage, where we also share lots of additional up-to-date content, code, notebooks, and resources. However, the coding parts of the book are tailored for readers with an intermediate knowledge of Python.

More Reviews from Towards AI's Book Readers

5-star rating

Comprehensive Coverage and Practical

Priyankar Kumar

The book has great coverage of nearly all the important topics related to LLMs and application-building with LLMs. I also liked the focus on the hands-on projects, so that you are not just reading but also trying things out.

5-star rating

Simply a must-read

Eugenio Galioto

The second version of this book can easily be considered a must-read as well as the first version. It's great to have key and evolving concepts explained like this!

5-star rating

Best book on the topic

Hiroto Matsushima

FAQ

  • What skills do I learn?

    The book is packed with theories, concepts, projects, applications, and experience that you can confidently put on your CV. You can add these skills straight to your resume: Large Language Models (LLMs) | LangChain | LlamaIndex | Vector databases | RAG | Prompting | Fine-tuning | Agents | Deployment & Deployment Optimizations | Creating chatbots | Chat with PDFs | Summarization | AI Assistants | RLHF

  • What are the prerequisites to read the book?

    The book is written for readers without prior knowledge of AI or NLP. It introduces topics from the ground up, aiming to help you feel comfortable using the power of AI in your next project or to elevate your current project to the next level. A basic understanding of Python helps in comprehending the code and implementations, while advanced use cases of the coding techniques are explained in detail in the course.

  • How do we make sure the book is not outdated?

    We ensure the book remains relevant by focusing on the core principles of building production products with LLMs, which are foundational and transferable across generations of models. While the field is fast-evolving and new techniques will emerge, today's LLM developer stack will still be crucial for adapting future models to specific industries and data. Additionally, we provide access to an up-to-date webpage with extra content, code, notebooks, and resources, ensuring readers stay current with the latest advancements.

  • Does it come with a physical copy?

    No. This e-book version is hosted on the platform (not a PDF). You can purchase a soft or hard copy of the book on Amazon (https://amzn.to/4bqYU9b). If you have a physical copy, email Louis-François at [email protected], and we'd be happy to give you a discount on the e-book!

  • Do you have a referral or affiliate program?

    If you refer three or more people, we’ll send you a physical copy of our book as a thank you! Additionally, we have an affiliate program for individuals with an audience. By joining, you can earn commissions for every successful referral made through your unique affiliate link. Please email Louis-François at [email protected] with proof of referral.

  • Can I take this course within my company?

    Yes! We offer both course bundles and custom training solutions tailored specifically for companies. For more information on company packages or to discuss a customized training plan, reach out to Louis at [email protected].