AI Learning Syllabus
Overview
This project is a 12-week, build-first AI course for a motivated 16-year-old who learns best by making things, breaking things, and rebuilding them with clearer understanding. The sequence starts with single neurons and gradient descent, then moves through backprop, text generation, attention, GPT-style models, PyTorch, API chatbots, retrieval-augmented generation, and a final independent project.
The course is inspired by Andrej Karpathy's style of teaching, but it is not limited to Karpathy resources. It deliberately mixes:
- build-first coding projects
- short, high-value videos
- written reflections
- small experiments
- CPU-safe models and datasets
Who It's For
- A student who is curious about AI and wants to understand how it works under the hood
- A parent or mentor who wants a concrete weekly structure
- A learner using a MacBook Air without relying on a GPU or cloud setup
MacBook Air / CPU Note
Everything in this syllabus is designed to run locally on a CPU-only MacBook Air with Python 3.11+. Training loops are intentionally small, datasets are tiny, and every week is scoped to finish in a reasonable amount of time on consumer hardware.
Quick Start
- Open Terminal and cd into this folder.
- Run bash setup.sh.
- Activate the virtual environment: source .venv/bin/activate
- Run the environment check.
- Start with week01_neural_network_basics/README.md.
Living Syllabus
This is not meant to behave like an old static course binder. The baseline weekly structure stays stable, but the content layer can evolve.
Use these files as the live layer:
- CURRENT_UPDATES.md for the freshest curated additions
- UPDATE_LOG.md for the history of changes
- live/source_registry.json for the watched sources
The update process is designed to notice new material from tracked creators and repos, map it into the syllabus, and produce an announcement draft for OnPoint/OpenClaw.
How To Use This Repo
Each week follows the same rhythm:
- Read the week's README.md.
- Watch the required videos before or during coding.
- Run the code and change at least one thing.
- Answer the reflection prompts.
- Save screenshots, plots, or notes as evidence of learning.
Recommended weekly rhythm:
- Day 1: Watch, read, and set up
- Day 2: Build the main exercise
- Day 3: Modify the code and debug
- Day 4: Reflect and explain what changed
Expected Outcomes
By the end of 12 weeks, the student should be able to:
- Explain how a neuron, loss, gradient, and update step fit together
- Read and modify simple neural network code without fear
- Describe why attention helps language models
- Build a tiny character model and a local RAG system
- Use PyTorch for a small training loop
- Call a modern LLM API from Python without hardcoding secrets
- Design and ship a small AI-powered final project
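The first outcome above is the neuron-loss-gradient-update loop. A minimal sketch of that loop for a single neuron with no activation, fitting y = 2x + 1 by gradient descent (the data, learning rate, and step count here are illustrative choices, not taken from the course materials):

```python
# A single neuron (one weight + one bias) learning y = 2x + 1 by gradient descent.
# Minimal sketch; the dataset, learning rate, and step count are illustrative.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]

w, b = 0.0, 0.0   # start from zero; training should move these toward 2 and 1
lr = 0.05         # learning rate (step size for each update)

for step in range(500):
    # forward pass: predictions and mean squared error loss
    preds = [w * x + b for x in xs]
    loss = sum((p - y) ** 2 for p, y in zip(preds, ys)) / len(xs)
    # backward pass: gradients of the loss with respect to w and b
    grad_w = sum(2 * (p - y) * x for p, y, x in zip(preds, ys, xs)) / len(xs)
    grad_b = sum(2 * (p - y) for p, y in zip(preds, ys)) / len(xs)
    # update step: move each parameter against its gradient
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # should land near w = 2, b = 1
```

Being able to point at each of the four pieces in this loop (forward pass, loss, gradients, update) is exactly the first checkpoint in the outcomes list.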
Required Learning Habit
Do not just run the scripts once and move on. Every week should include at least one deliberate modification such as:
- changing hyperparameters
- changing a dataset
- changing an activation function
- printing extra intermediate values
- writing down a prediction before running the code
That is where real understanding starts.
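Two of the habits above pair naturally: change a hyperparameter, but write down a prediction first. A minimal sketch of that experiment on a toy one-parameter loss (the loss function and the two learning rates are illustrative assumptions, not course material). Before running it, guess which learning rate converges and which one blows up:

```python
# Predict first, then run: does a 10x larger learning rate converge faster,
# or overshoot the minimum on every step? Toy loss: (w - 3)**2, minimized at w = 3.
def train(lr, steps=50):
    w = 0.0
    for _ in range(steps):
        grad = 2 * (w - 3)  # gradient of the toy loss
        w -= lr * grad      # update step
    return w

print(train(0.1))  # small steps: settles toward 3
print(train(1.1))  # oversized steps: each update lands farther from 3
```

Comparing the written-down prediction to the actual output is the cheapest experiment log entry a week can produce.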
Objective
Use small, runnable projects to build real intuition for how modern AI systems work, while also collecting structured student feedback that improves both the course and OnPoint itself.
Tasks
- Complete one week at a time in order.
- Watch the required videos with notes.
- Run the code and make at least one meaningful change every week.
- Fill out reflections and include one course or product improvement idea.
Deliverables
- Runnable weekly code
- Reflection notes
- Experiment logs
- A final project
- A growing list of co-author feedback from the student
Checkpoint Questions
- What did I build this week?
- What do I understand better than last week?
- What still feels confusing?
- What would improve the course or the product for the next version?