Imagine an AI assistant that:
- remembers every conversation you have ever had with it
- reads and summarizes your files
- searches the web and runs tasks for you
- runs entirely on your own computer, for free, even offline
This is not a dream. With three free tools — Ollama, Supabase, and Python — you can build exactly this. This is Class 01. By the end, you will understand the full architecture. In the next classes, we will build it step by step.
Most AI tools like ChatGPT are cloud-based. That means:
- your data leaves your computer and lives on someone else's servers
- you pay for usage, message after message
- nothing works without an internet connection
A local AI solves all of this. You run the model on your own computer. Your data stays with you. Zero cost per message. Works offline.
Think of a CA firm that wants an AI assistant trained on their client files. They cannot use ChatGPT because client data is confidential. A local AI is the only safe option. Same applies to hospitals, law firms, HR departments — any place where privacy matters.
Ollama lets you download and run large language models (LLMs) on your own computer. Think of it as the engine that powers your AI.
Supabase is an open-source database platform built on PostgreSQL. We use it to store:
- Conversation history: every message saved permanently.
- Vector embeddings: the mathematical meaning of each message, for smart search.
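As a rough sketch of what that storage could look like, here is one possible table you might create in Supabase's SQL editor. The table and column names are our own, and the 768 dimension is an assumption that depends on which embedding model you use:

```sql
-- pgvector ships with Supabase; enable it once per project
create extension if not exists vector;

-- One row per chat message, with its embedding for similarity search
create table messages (
  id bigint generated always as identity primary key,
  role text not null,          -- 'user' or 'assistant'
  content text not null,       -- the message text itself
  embedding vector(768),       -- dimension must match your embedding model
  created_at timestamptz default now()
);
```

We will build the real schema together in Class 03; this is only to make the idea concrete.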
Python is the glue. It receives your message, searches memory, sends context to Ollama, and saves replies.
Short-term memory: within one conversation, the AI remembers everything you said, because Python passes the full history of the current session to Ollama on every turn.
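Passing the running history can be sketched as a small helper that builds the request body for Ollama's `/api/chat` endpoint. The function name is ours, not part of Ollama:

```python
def build_chat_payload(history, user_message, model="llama3"):
    """Build the JSON body for Ollama's /api/chat endpoint.

    `history` is a list of {"role": ..., "content": ...} dicts from the
    current session; the new user message is appended so the model sees
    the whole conversation on every turn.
    """
    messages = history + [{"role": "user", "content": user_message}]
    return {"model": model, "messages": messages, "stream": False}

# Each turn: POST this payload to http://localhost:11434/api/chat,
# then append the model's reply to `history` for the next turn.
payload = build_chat_payload(
    [{"role": "user", "content": "My name is Asha."},
     {"role": "assistant", "content": "Nice to meet you, Asha!"}],
    "What is my name?",
)
```

Because the whole history rides along in `messages`, the model can answer "What is my name?" correctly even though the name appeared turns earlier.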
Long-term memory: vector embeddings let it search months-old messages for similar meaning and inject the matches as context.
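In production, pgvector runs this similarity search inside the database; a toy in-Python version shows the idea. The texts and the tiny 3-dimensional vectors below are made up (real embeddings have hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def most_similar(query_vec, stored):
    """Return the stored (text, vector) pair closest in meaning to the query."""
    return max(stored, key=lambda item: cosine_similarity(query_vec, item[1]))

# Toy memory: each old message stored alongside its embedding
memory = [
    ("I love pizza",    [0.9, 0.1, 0.0]),
    ("Python is great", [0.1, 0.9, 0.2]),
]
# A query vector that points in roughly the same direction as "I love pizza"
text, _ = most_similar([0.85, 0.15, 0.05], memory)  # -> "I love pizza"
```

The key point: similarity is computed on meaning (vector direction), not on matching keywords, which is why a months-old message can be found by a differently worded question.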
| Task Type | How It Works |
|---|---|
| Answer questions | Uses LLM knowledge + your memory context |
| Read & summarize files | Python reads .txt/.pdf, passes content to Ollama |
| Write and save code | AI generates code, Python saves it to disk |
| Remember preferences | Stored in Supabase, retrieved via memory search |
| Search the web | Python calls search API, passes results to Ollama |
| Run system commands | Python executes shell commands based on AI instructions |
1. Class 01: Understand the full system (this class)
2. Class 02: Install Ollama, run a first model, chat via Python
3. Class 03: Create the database and tables, connect from Python
4. Class 04: Embed messages, store and search with pgvector
5. Class 05: Complete chatbot with long-term memory
6. Class 06: File reading, web search, task execution
7. Class 07: Your personal AI assistant — fully working
```
local-ai/
├── main.py           # Main chat loop
├── memory.py         # Supabase memory: save + retrieve
├── embeddings.py     # Convert text to vectors using Ollama
├── tools.py          # Extra abilities: files, web search
├── config.py         # Settings: model name, DB URL, etc.
└── requirements.txt  # Python libraries needed
```
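To make the layout concrete, here is one way `config.py` could start out. Every value below is a placeholder or an assumption (in particular the embedding model name), not something fixed by the series:

```python
# config.py: central settings shared by the other modules.
# All values here are illustrative placeholders.

OLLAMA_URL = "http://localhost:11434"   # default Ollama port
MODEL_NAME = "llama3"                   # chat model pulled via `ollama pull llama3`
EMBED_MODEL = "nomic-embed-text"        # assumed embedding model; any Ollama one works
SUPABASE_URL = "https://your-project.supabase.co"  # placeholder project URL
SUPABASE_KEY = "your-anon-key"                     # placeholder API key
```

Keeping settings in one module means swapping models or databases later touches a single file.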
```python
import requests

# Ask the local Ollama server for a one-shot completion
response = requests.post('http://localhost:11434/api/generate', json={
    'model': 'llama3',
    'prompt': 'Hello! Who are you?',
    'stream': False  # wait for the full reply instead of streaming tokens
})
print(response.json()['response'])
```

1. Go to ollama.com and download Ollama for your OS
2. Install it and run `ollama pull llama3` in the terminal
3. Run `ollama run llama3` and type "Hello" to verify
4. Create a free account at supabase.com
This series is designed so that anyone with basic Python knowledge can build a production-grade AI agent. Stay tuned for Class 02 where we write our first lines of code.
Master the hottest skills in the industry — from local LLMs to Vector Databases.