
CourseWWWork

    21.8 - LangChain Document Loaders for PDF (3:38) – sketch below the list
    21.7 - LangChain Installation & Setup (3:08)
    21.6 - Local Vector DB Setup with Docker Compose (4:43)
    21.5 - RAG Pipeline – Retrieval Mechanism in Depth (5:36) – sketch below the list
    21.4 - RAG Pipeline – Indexing Workflow Explained (5:22)
    21.3 - Naive Retrieval-Based Solution Approach (4:49)
    21.2 - Defining the Core Problem in RAG Systems (4:12)
    21.1 - Intro to RAG & LangChain – Section Overview (0:45)
    20.5 - Building a CLI Coding Agent (Claude Code) from Scratch (9:50)
    20.4 - Enforcing Structured Outputs with Pydantic (7:34) – sketch below the list
    20.3 - Coding Your First AI Agent (22:37)
    20.2 - What Exactly Are AI Agents (Core Concepts) (10:08)
    20.1 - Agentic AI Fundamentals – Section Intro (1:01)
    19.5 - Model Downloading & Execution from HF Hub (2:48) – sketch below the list
    19.4 - Installing and Using Hugging Face CLI Tools (2:37)
    19.3 - Accessing Instruct-Tuned Models (Google Gemma) (1:58)
    19.2 - Configuring and Securing Hugging Face Account (2:36)
    19.1 - Hugging Face Model Deployment – Section Intro (3:01)
    18.6 - Integrating Ollama with FastAPI & Python APIs (6:19) – sketch below the list
    18.5 - FastAPI Environment Setup & Dependencies (4:01)
    18.4 - Configuring OpenWebUI with Ollama Backend (7:24)
    18.3 - Running Ollama Models with Docker Runner (3:15)
    18.2 - Dockerized Environment Setup for LLMs (4:03)
    18.1 - Ollama Overview – Local LLM Runtime Engine (2:24)
    17.4 - INST Format – LLaMA-2 Instruction Specification (1:54)
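
A minimal sketch for 21.8 (loading a PDF into LangChain documents), assuming the langchain-community and pypdf packages are installed; the file name is illustrative, not taken from the course:

    from langchain_community.document_loaders import PyPDFLoader

    # Load a PDF into LangChain Document objects, one per page by default.
    loader = PyPDFLoader("example.pdf")  # illustrative path
    docs = loader.load()
    print(len(docs), "pages loaded;", docs[0].metadata)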
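
A library-free toy for 21.5, showing the retrieval idea only (score stored chunk embeddings against a query embedding, keep the top-k); the vectors and chunk texts are made up, and the course presumably does this through LangChain retrievers rather than by hand:

    import math

    def cosine(a, b):
        # Cosine similarity between two equal-length vectors.
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b)

    # Pretend these chunk embeddings were produced during the indexing step (21.4).
    index = {
        "chunk about Ollama setup": [0.9, 0.1, 0.0],
        "chunk about RAG retrieval": [0.1, 0.9, 0.2],
        "chunk about FastAPI routes": [0.0, 0.2, 0.9],
    }

    query_embedding = [0.2, 0.8, 0.1]  # stand-in for an embedded user question
    top_k = sorted(index, key=lambda text: cosine(index[text], query_embedding), reverse=True)[:2]
    print(top_k)  # the chunks that would be passed to the LLM as context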
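
A minimal sketch for 20.4, validating a pretend LLM reply with Pydantic v2; the ToolCall model and the raw string are illustrative assumptions:

    from pydantic import BaseModel, ValidationError

    class ToolCall(BaseModel):
        tool: str
        arguments: dict

    raw = '{"tool": "read_file", "arguments": {"path": "notes.txt"}}'  # pretend LLM reply
    try:
        call = ToolCall.model_validate_json(raw)  # Pydantic v2 API
        print(call.tool, call.arguments)
    except ValidationError as err:
        print("LLM returned malformed output:", err)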
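
A minimal sketch for 19.4-19.5, pulling a model snapshot from the Hugging Face Hub; the repo id google/gemma-2b-it is an assumption based on 19.3 mentioning Gemma, and gated models like this one require accepting the licence on the Hub and a prior huggingface-cli login:

    from huggingface_hub import snapshot_download

    # Downloads the model files to the local cache and returns the directory path.
    local_dir = snapshot_download(repo_id="google/gemma-2b-it")  # illustrative repo id
    print("model files stored under:", local_dir)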
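
A minimal sketch for 18.6, a FastAPI route that forwards a prompt to a local Ollama server on its default port 11434; the model name llama3 is illustrative:

    import httpx
    from fastapi import FastAPI

    app = FastAPI()

    @app.post("/generate")
    async def generate(prompt: str):
        # Proxy the prompt to Ollama's REST API and return its JSON response.
        async with httpx.AsyncClient(timeout=120) as client:
            resp = await client.post(
                "http://localhost:11434/api/generate",
                json={"model": "llama3", "prompt": prompt, "stream": False},
            )
        return resp.json()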