Generative AI with RAG and VectorDB

(C) Copyright Elephant Scale

April 14, 2024

Course Description

  • Generative AI with Large Language Models (LLMs) opens the way to building smart applications as never before.
  • The most popular architecture for this is Retrieval-Augmented Generation (RAG). RAG systems are built on semantic search.
  • In this course, students learn how to build RAG systems.
    • For the semantic search component, we use VectorDB from DataStax.
    • For the generative AI component, we offer a choice of LLMs, such as ChatGPT or a local Llama for HIPAA compliance.
    • For the implementation, we cover the cloud platforms and architectures best suited to your project.

After the course, you will be able to do the following:

  • Communicate with an LLM effectively.
  • Script LLM interactions for programmatic use.
  • Organize your private documents for the implementation and break them into meaningful fragments for storing in the semantic search engine (VectorDB or Pinecone).
  • Structure the flow of conversation with an LLM about your private documents.
  • Implement the system in production.
  • Architect testing and continuous improvement.

Audience

  • Developers, data scientists, team leads, project managers

Skill Level

  • Intermediate to advanced.

Duration

  • Three days
  • Can be broken into introductory and advanced parts of appropriate length

Prerequisites

  • General familiarity with machine learning

Format

  • Lectures and hands-on labs (50% – 50%)

Lab environment

  • Zero Install: There is no need to install software on students’ machines!
  • A lab environment in the cloud will be provided for students.

Students will need the following

  • A reasonably modern laptop with unrestricted connection to the Internet. Laptops with overly restrictive VPNs or firewalls may not work properly.
    • A checklist to verify connectivity will be provided
  • Chrome browser

Detailed outline

Prompt Engineering

  • Introduction to AI and LLMs
  • Iterative development
    • How to iteratively analyze and refine your prompts to generate marketing copy from a product fact sheet.
  • Summarizing
    • How to make an LLM summarize a document with different requirements and in different formats
  • Inferring
    • How to make an LLM infer sentiment and topics from product reviews and news articles.
  • Transforming
    • How to use Large Language Models for text transformation tasks such as language translation, spelling and grammar checking, tone adjustment, and format conversion.
  • Expanding
    • How to generate customer service emails that are tailored to each customer’s review.
  • Chatbot
    • How to use an LLM to have extended conversations with chatbots personalized or specialized for specific tasks or behaviors.
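The prompt-engineering techniques above all come down to assembling instructions programmatically before sending them to a model. As a minimal sketch, the helper below builds a summarization prompt with explicit requirements (word limit, audience); the function name and parameters are illustrative, and the actual model call, which any chat-completion API would accept, is left out:

```python
# Sketch: scripting a summarization prompt rather than typing it by hand.
# build_summary_prompt is a hypothetical helper, not part of any library.

def build_summary_prompt(text: str, max_words: int = 30, audience: str = "general") -> str:
    """Assemble a summarization prompt with explicit, adjustable requirements."""
    return (
        f"Summarize the text below in at most {max_words} words "
        f"for a {audience} audience.\n\n"
        f"Text:\n```{text}```"
    )

prompt = build_summary_prompt(
    "The product ships in three colors and includes a two-year warranty.",
    max_words=20,
    audience="technical",
)
print(prompt)
```

Iterative development then amounts to adjusting these parameters and rerunning, instead of rewriting prose prompts from scratch.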

Semantic Search with VectorDB or Pinecone

  • Organize your private documents for the implementation and break them into meaningful fragments for storing in the semantic search engine (VectorDB or Pinecone)
  • Semantic search
  • Retrieval Augmented Generation (RAG)
  • Recommender systems
  • Hybrid search
  • Facial similarity search
  • Anomaly detection
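At its core, semantic search ranks stored document fragments by vector similarity to a query embedding. The sketch below uses tiny hand-made vectors and plain-Python cosine similarity purely for illustration; in practice the embeddings come from an embedding model and the index lives in VectorDB or Pinecone:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" of document fragments (illustrative values only).
index = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.1],
    "warranty terms": [0.8, 0.2, 0.1],
}

def search(query_vec, k=2):
    """Return the k fragment names most similar to the query vector."""
    ranked = sorted(index.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [name for name, _ in ranked[:k]]

print(search([0.85, 0.15, 0.05]))  # → ['refund policy', 'warranty terms']
```

The same nearest-neighbor mechanism underlies the other use cases in this section, such as recommender systems and facial similarity search; only the objects being embedded change.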

LangChain (glue to put it together)

  • Models, prompts, and parsers
  • Memory
  • Chains
  • Q&A
  • Evaluation
  • Conversational bot
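LangChain packages the retrieve-then-generate flow into retriever and chain abstractions. As a sketch of what such a chain does under the hood, the code below retrieves a relevant fragment, stuffs it into a prompt, and hands that to a (stubbed) LLM; the keyword retriever and the store contents are illustrative stand-ins for a real vector search:

```python
# Minimal RAG-flow sketch: the retriever, store, and stub LLM are
# illustrative; LangChain wraps these steps in its own abstractions.

def retrieve(question: str, store: dict) -> str:
    """Hypothetical keyword retriever standing in for a vector search."""
    for keyword, text in store.items():
        if keyword in question.lower():
            return text
    return ""

def answer(question: str, store: dict,
           llm=lambda prompt: f"[LLM answer based on: {prompt[:60]}...]") -> str:
    """Chain the steps: retrieve context, build a prompt, call the model."""
    context = retrieve(question, store)
    prompt = (
        "Use only this context to answer.\n"
        f"Context: {context}\n"
        f"Question: {question}"
    )
    return llm(prompt)

store = {
    "refund": "Refunds are issued within 14 days.",
    "shipping": "Orders ship in 2-3 business days.",
}
print(answer("What is your refund policy?", store))
```

Memory, output parsers, and evaluation (also covered in this section) slot into this same pipeline before and after the model call.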

Architecture, testing, and continuous improvements

  • Overview of the Amazon, Azure, and Google clouds for RAG
  • Evaluating and debugging Generative AI
  • Practical examples and demos
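One simple way to support continuous improvement is a regression-style evaluation: score each answer by the fraction of expected key phrases it contains, and track the average across a fixed test set. The harness below is a sketch under that assumption; the stub "system" and the test case are illustrative:

```python
# Sketch of a regression-style evaluation for a RAG system.
# score_answer and evaluate are hypothetical helpers, not a library API.

def score_answer(answer: str, expected_phrases: list) -> float:
    """Fraction of expected phrases found in the answer (case-insensitive)."""
    hits = sum(1 for p in expected_phrases if p.lower() in answer.lower())
    return hits / len(expected_phrases)

def evaluate(system, cases) -> float:
    """Average score of a question-answering system over (question, phrases) cases."""
    return sum(score_answer(system(q), phrases) for q, phrases in cases) / len(cases)

# Stub system that always returns the same answer, for illustration.
stub = lambda question: "Refunds are issued within 14 days of purchase."
cases = [("What is the refund window?", ["14 days", "refund"])]
print(evaluate(stub, cases))  # → 1.0
```

Rerunning such a suite after every prompt or retrieval change turns "evaluating and debugging Generative AI" into an ordinary regression-testing workflow.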