
CourseWWWork

    14.1 - Understanding Large Language Models (LLMs) (5:55)
    14.2 - Deep Dive into the GPT Architecture (9:07)
    14.3 - How LLMs Work Under the Hood (7:16)
    14.5 - Implementing a Custom Tokenizer in Python (4:15)
    14.7 - Deep Diving into Vector Embeddings (9:09)
    14.8 - Role of Positional Encodings in Transformers (3:20)
    16.1 - Prompt Fundamentals: Encoding Instructions for LLMs (0:56)
    16.2 - Prompting Types: Zero-Shot, Few-Shot, One-Shot (3:53)
    16.3 - One-Shot Prompting for Deterministic Inference (3:23)
    16.4 - Few-Shot Prompting for Contextual Generalization (3:31)
    16.5 - Structured Outputs with Few-Shot Prompting (3:13)
    16.6 - Chain-of-Thought (CoT) for Reasoning (12:49)
    16.7 - Auto-CoT: Automated Reasoning Prompt Generation (8:47)
    16.8 - Persona-Based Prompting (5:22)
    17.1 - Introduction to Prompt Serialization Styles (2:00)
    17.2 - Alpaca Prompt Template for Instruction Tuning (2:49)
    17.3 - ChatML Schema: OpenAI's Structured Prompt Format (1:30)
    17.4 - INST Format: LLaMA-2 Instruction Specification (1:54)
    18.1 - Ollama Overview: Local LLM Runtime Engine (2:24)
    18.2 - Dockerized Environment Setup for LLMs (4:03)
    18.3 - Running Ollama Models with Docker Runner (3:15)
    18.5 - FastAPI Environment Setup & Dependencies (4:01)
    19.1 - Hugging Face Model Deployment: Section Intro (3:01)
    19.2 - Configuring and Securing a Hugging Face Account (2:36)
    19.3 - Accessing Instruct-Tuned Models (Google Gemma) (1:58)