Understanding AI Tools
Know your tools — understand how AI actually works (non-technically) and which tool to use for which job.
INFO
- Time: ~45 minutes
- Difficulty: Beginner
- What you'll learn: How LLMs work, context windows, choosing the right AI tool
This Page Covers
- How LLMs work (without the math)
- Context windows and tokens explained
- Different AI models and their trade-offs
- Platform overview and how to choose
How LLMs Work (Non-Technical)
Pattern Matching, Not Thinking
Large Language Models (LLMs) work by predicting the next word in a sequence. When you ask a question, the AI looks at your text and calculates: "Based on everything I've seen during training, what word is most likely to come next?"
Think of it as very sophisticated autocomplete. When your phone suggests the next word while texting, it's doing a simpler version of what LLMs do. The AI doesn't "understand" what you're asking — it recognizes patterns and generates statistically likely responses.
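The "sophisticated autocomplete" idea can be sketched in a few lines. This toy predictor just counts which word most often follows another in a tiny sample text; real LLMs use neural networks over tokens, not word-count tables, so treat this purely as an illustration of the prediction idea:

```python
# A toy "next word" predictor: pick the word that most often followed
# the current word in some training text. Real LLMs are vastly more
# sophisticated -- this only illustrates the core prediction concept.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ran on the road"
words = training_text.split()

# Count, for each word, which words follow it and how often.
followers = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent follower of `word` in the training text."""
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- seen twice, vs "mat"/"road" once each
```

Note that the predictor has no idea what a cat is; it only knows that "cat" followed "the" more often than anything else. That is the sense in which LLMs pattern-match rather than understand.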
Why this matters for you: Don't anthropomorphize AI. It's not thinking, reasoning, or understanding. It's pattern-matching at massive scale. This explains why:
- AI can write beautiful code but make silly logical errors
- The same question can get different answers
- AI confidently states things that are completely wrong
Training Data and Knowledge Cutoff
LLMs learn from training data — billions of text samples from the internet, books, academic papers, code repositories, and more. During training, the model reads this text and learns patterns: how sentences are structured, what topics relate to each other, how code syntax works.
Knowledge cutoff refers to when the training data stops. For example, if a model was trained on data up to early 2025, it has no knowledge of events after that date. It won't know about:
- Recent news or events
- New software releases or features
- Updated documentation or APIs
This is why you might ask about a late-2025 or 2026 technology and get outdated information — the AI literally doesn't know it exists.
Why AI Makes Mistakes
Hallucinations happen when AI confidently generates plausible-sounding but incorrect information. Remember: the AI predicts what sounds right, not what is right.
Common hallucination scenarios:
- Made-up citations: AI invents author names, book titles, or URLs that don't exist
- Fake functions: Generates code using methods that the library doesn't have
- Confident nonsense: States incorrect facts with perfect confidence
The AI has no way to verify its outputs. It doesn't "know" facts — it generates text that follows patterns from training data. When the pattern doesn't match reality, you get confidently wrong answers.
The key insight: AI has no internal fact-checker. It can't distinguish between true and false. It only knows "likely" and "unlikely" based on training patterns.
Context Windows and Tokens
What Tokens Are
AI models don't read text character by character — they process tokens. A token is roughly 4 characters, or about 3/4 of a word.
Examples:
- "Hello" = 1 token
- "artificial intelligence" = 2-3 tokens
- A 500-word document = roughly 650-750 tokens
You don't need to count tokens precisely. Just remember: more text = more tokens = more of your context budget used up.
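The rule of thumb above (1 token is roughly 4 characters) is enough for a ballpark estimate. A minimal sketch, assuming that heuristic — exact counts require the model's actual tokenizer:

```python
# Rough token estimate using the "1 token is about 4 characters" rule
# of thumb. Real tokenizers give exact counts; this is only a ballpark.

def estimate_tokens(text: str) -> int:
    """Approximate token count: total characters divided by 4."""
    return max(1, len(text) // 4)

doc = "word " * 500  # stand-in for a 500-word document
print(estimate_tokens("Hello"))  # 1
print(estimate_tokens(doc))      # lands in the same ballpark as the text above
```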
Context Window Limits
The context window is how much text the AI can "see" at once — both your input and its response combined. Think of it as the AI's working memory.
Common context window sizes (check current model docs for exact limits):
- OpenAI GPT (flagship): Varies by plan/model
- OpenAI GPT mini: Varies by plan/model
- Claude (Opus/Sonnet/Haiku): Up to 200K tokens
When you exceed the context window, the AI starts "forgetting" earlier parts of the conversation. This is why long chats get weird — the AI loses track of what you discussed earlier.
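That "forgetting" behavior can be sketched as a trimming step: keep the most recent messages that fit within a token budget and drop the oldest. This is a simplified illustration, not how any particular provider implements it — the token costs use the rough 4-characters-per-token estimate:

```python
# A minimal sketch of context-window trimming: keep the newest messages
# that fit a token budget, dropping the oldest first. This mirrors why
# long chats lose track of early turns.

def trim_history(messages: list[str], budget: int) -> list[str]:
    """Return the newest messages whose combined estimated tokens fit the budget."""
    kept = []
    used = 0
    for msg in reversed(messages):    # walk newest to oldest
        cost = max(1, len(msg) // 4)  # rough 4-chars-per-token estimate
        if used + cost > budget:
            break                     # this message and everything older is "forgotten"
        kept.append(msg)
        used += cost
    return list(reversed(kept))       # restore chronological order

chat = ["intro " * 40, "question " * 40, "answer " * 40, "follow-up " * 10]
print(trim_history(chat, budget=200))  # the oldest message gets dropped
```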
Why Context Matters for Code
For coding projects, context is precious. A medium-sized codebase might have:
- 50+ files
- Thousands of lines of code
- Configuration files, tests, documentation
You can't dump everything into a single prompt. Context management means strategically choosing what to share with the AI:
- Share only relevant files
- Summarize context instead of including full code
- Break large tasks into smaller pieces
This is why tools like Cursor and Claude Code exist — they help manage which files and context the AI sees.
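The file-selection idea behind those tools can be sketched as a simple relevance filter: rank files by how often the task's keywords appear, then keep only those that fit a token budget. This is a toy illustration with made-up file contents, not how Cursor or Claude Code actually work:

```python
# A toy sketch of context management: pick only files relevant to the task,
# ranked by how often task keywords appear, stopping at a token budget.
# File contents are inlined for illustration; a real tool reads from disk.

def pick_files(files: dict[str, str], task: str, budget: int) -> list[str]:
    """Rank files by keyword overlap with the task; keep those fitting the budget."""
    keywords = set(task.lower().split())
    ranked = sorted(
        files,
        key=lambda name: sum(files[name].lower().count(w) for w in keywords),
        reverse=True,
    )
    chosen, used = [], 0
    for name in ranked:
        cost = max(1, len(files[name]) // 4)  # rough token estimate
        if used + cost > budget:
            continue                          # skip files that don't fit
        chosen.append(name)
        used += cost
    return chosen

project = {
    "auth.py": "def login(user): ... # login logic, password check " * 5,
    "todo.py": "def add_todo(item): ... # todo list storage " * 5,
    "README.md": "Project overview " * 50,
}
print(pick_files(project, task="fix the login password bug", budget=80))
```

The relevant file makes the cut; the unrelated ones are left out, preserving context budget for what matters.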
Different AI Models
Claude (Anthropic)
Claude is developed by Anthropic with a focus on safety and helpfulness. It excels at:
- Long document analysis: 200K context window handles entire codebases
- Coding tasks: Strong at writing, reviewing, and debugging code
- Following instructions: Tends to stay on task and follow complex prompts
- Nuanced reasoning: Good at understanding edge cases
Available versions (as of early 2026) include Claude Opus 4.5 (highest capability, supports extended thinking), Claude Sonnet 4.5 (best coding model, up to 1M token context available), and Claude Haiku 4.5 (near-frontier performance, fastest and most cost-effective).
GPT (OpenAI)
OpenAI's GPT models power ChatGPT and are available through their API. Strengths include:
- General purpose: Handles almost any task reasonably well
- Multimodal: Can process text, images, and audio
- Broad knowledge: Extensive training data coverage
- Ecosystem: Massive plugin and tool ecosystem
Current models (as of early 2026) include:
- GPT-5.2 Instant — Fast everyday model for general tasks
- GPT-5.2 Thinking — Best for professional knowledge work requiring deeper reasoning
- GPT-5.2 Pro — Smartest model for difficult questions
- GPT-5.2-Codex — Advanced agentic coding model
ChatGPT is the consumer chat interface; the API is for developers building applications.
Open Source Options
Open source models like Llama (Meta) and Mistral offer alternatives:
- Free to use: No per-token costs
- Run locally: Privacy-conscious usage
- Customizable: Can be fine-tuned for specific tasks
Trade-offs: Generally less capable than frontier models, require technical setup to run locally.
Strengths and Trade-offs
| Model | Strengths | Context Window | Cost |
|---|---|---|---|
| Claude Opus 4.5 | Highest capability, extended thinking | Up to 200K tokens | $$$$ |
| Claude Sonnet 4.5 | Best for coding, balanced speed/quality | Up to 200K (1M available) | $$$ |
| Claude Haiku 4.5 | Near-frontier performance, fastest | Up to 200K tokens | $ |
| GPT-5.2 Pro | Smartest, difficult questions | Varies by model | $$$$ |
| GPT-5.2 Thinking | Professional knowledge work | Varies by model | $$$ |
| GPT-5.2 Instant | Fast everyday tasks | Varies by model | $ |
| GPT-5.2-Codex | Advanced agentic coding | Varies by model | $$$$ |
| Llama 3 (Meta) | Open source, runs locally | Varies by setup | Free |
| Mistral | Open source, efficient | Varies by setup | Free |
Cost considerations:
- Premium models charge per token (input and output)
- A heavy coding session might cost $1-10 in API usage
- Subscription plans (ChatGPT Plus, Claude Pro) offer better value for regular users
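The per-token billing above is simple arithmetic. A back-of-the-envelope sketch — the prices used here are hypothetical placeholders, so check your provider's current pricing page for real rates:

```python
# Back-of-the-envelope API cost estimate. The prices below are HYPOTHETICAL
# placeholders -- always check the provider's current pricing page.

def session_cost(input_tokens: int, output_tokens: int,
                 in_price_per_m: float, out_price_per_m: float) -> float:
    """Cost in dollars, given per-million-token prices for input and output."""
    return (input_tokens / 1_000_000) * in_price_per_m \
         + (output_tokens / 1_000_000) * out_price_per_m

# A heavy coding session: 500K tokens in, 100K tokens out,
# at an assumed $3/M input and $15/M output.
print(f"${session_cost(500_000, 100_000, 3.0, 15.0):.2f}")  # $3.00
```

Note that output tokens typically cost several times more than input tokens, which is why long generated responses dominate the bill.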
AI Chat vs AI Agents
Chat: Back-and-Forth Conversation
Traditional AI chat works like a conversation:
- You ask a question
- AI generates a response
- You read the response and copy-paste code manually
- Repeat
You are the driver. The AI only provides text responses — it can't edit files, run commands, or take actions. Every implementation step requires your manual intervention.
Agents: Autonomous Task Execution
AI agents go beyond chat — they can take actions on your behalf:
- Read and write files on your computer
- Run terminal commands
- Browse documentation
- Create entire project structures
Examples:
- ChatGPT/Claude.ai: Chat (you copy-paste)
- Claude Code: Agent (runs commands, edits files)
- Cursor/Windsurf: Agent (directly modifies your codebase)
When to Use Which
Use chat when:
- Learning concepts or getting explanations
- Brainstorming ideas
- Small code snippets you can copy-paste
- You want full control over every change
Use agents when:
- Building entire features or projects
- Making changes across multiple files
- Running multi-step workflows
- You trust the AI to make reasonable decisions
Agents are more powerful but require more trust. Start with chat to understand what the AI suggests, then graduate to agents when you're comfortable.
Platform Overview
Lovable/Bolt/v0 (Full-Stack Builders)
What they do: You describe what you want in natural language, and they generate a complete, working application.
How it works:
- Describe your app: "A todo list with user authentication and dark mode"
- AI generates all the code, deploys it, gives you a live URL
- Iterate with more prompts to refine
What is React?
These tools generate React applications by default. React is a JavaScript library that makes building interactive user interfaces easier. Instead of writing raw HTML/CSS/JS, React lets developers build reusable "components" — like building blocks for your UI. Other popular options include Vue and Svelte. You don't need to understand React to use these builders, but knowing this helps when you look at the generated code.
Best for:
- Rapid prototyping
- MVPs and proof of concepts
- Non-technical founders validating ideas
- Learning how apps are structured
Limitations: Less control over code quality and architecture. Good for starting, but you may outgrow them.
Cursor/Windsurf (AI-Enhanced Editors)
What they do: Code editors with AI built in. The AI can see your entire project, understand relationships between files, and make changes directly.
How it works:
- Open your project in the editor
- Chat with AI about what you want to build
- AI reads your codebase, suggests changes, applies them with your approval
Best for:
- Existing codebases
- Developers who want AI assistance
- Projects requiring specific architecture decisions
- Learning by seeing AI's reasoning
GitHub Copilot (Code Completion + Chat)
What it does: Inline autocomplete as you type, plus a chat panel for conversations. Like aggressive autocomplete that suggests entire functions, with the ability to ask questions.
How it works:
- You start typing code
- Copilot suggests completions (gray text)
- Press Tab to accept, or keep typing to ignore
- Open Copilot Chat for conversations and explanations
Best for:
- Faster typing of boilerplate code
- Learning syntax patterns
- Working in familiar codebases
- Developers who prefer to stay in flow
Limitations: Less project-wide awareness than Cursor or Claude Code; chat is helpful but not as powerful as dedicated AI coding tools.
Claude Code (Terminal Agent)
What it does: AI agent that runs in your terminal. Can execute commands, edit files, browse the web, and complete complex multi-step tasks.
How it works:
- Describe a task: "Set up a React project with TypeScript and Tailwind"
- Claude Code creates files, runs npm commands, configures everything
- You review and approve changes
Best for:
- Complex, multi-step tasks
- System administration and DevOps
- Power users comfortable with terminal
- Tasks requiring tool integrations
How to Choose the Right Tool
Decision Matrix
| If you want to... | Use this |
|---|---|
| Build a complete app from a description | Lovable, Bolt, or v0 |
| Get AI help while coding in an editor | Cursor or Windsurf |
| Faster autocomplete while typing | GitHub Copilot |
| Automate complex tasks from terminal | Claude Code |
| Learn concepts or ask questions | ChatGPT or Claude.ai |
Matching Tools to Tasks
Just starting out? Start with Lovable or Bolt. You'll get quick wins and see results immediately. Understanding what's possible builds motivation.
Learning to code? Use VS Code + GitHub Copilot. You stay in control while getting helpful suggestions. Great for building intuition about code.
Working on serious projects? Cursor or Claude Code. Full project awareness, proper version control, and the power to make complex changes.
Quick questions or explanations? ChatGPT or Claude.ai. Sometimes you just need a conversation. Chat interfaces are perfect for learning and exploration.
The Opinionated Guide
If you're completely non-technical:
- Start with Lovable for instant gratification
- Graduate to Cursor when you want more control
- Learn Claude Code when you're ready for power-user mode
If you have some coding experience:
- Start with Cursor or VS Code + Copilot
- Add Claude Code for complex tasks
- Use chat interfaces for learning new concepts
What If Your Tool Changes or Disappears?
AI tools evolve rapidly. The tool you learn today might change significantly — or even shut down — tomorrow. This is not a reason to avoid learning; it is a reason to learn the right way.
The Skills Transfer
The good news: skills transfer between tools. If you learn to build with Lovable, you can adapt to Bolt or v0 in an afternoon. The core skills are the same:
- Writing clear prompts that describe what you want
- Understanding the generated code enough to debug it
- Knowing how to deploy and maintain what you build
- Recognizing when AI output is wrong or incomplete
These skills do not depend on any specific tool. They work across all AI builders.
Alternative Full-Stack Builders
If Lovable changes its pricing, limits free usage, or disappears, here are alternatives:
| Tool | Strengths | How It Differs |
|---|---|---|
| Bolt.new | Very similar to Lovable, fast iteration | Slightly different UI, same concept |
| v0 (Vercel) | Great for UI components, React-focused | More component-focused than full apps |
| Replit | Full IDE + AI, runs in browser | More coding-focused, steeper learning curve |
The workflow is nearly identical: describe what you want, AI generates code, you iterate with follow-up prompts.
Exporting Your Code
Most AI builders let you export your code. This is crucial — it means you are never locked in.
How to export (general pattern):
- Look for "Export" or "Download" in the tool's menu
- Choose to connect to GitHub or download as ZIP
- Your code is now yours, independent of the platform
What you get: A standard project (usually React/Next.js) that you can run anywhere. It does not need Lovable or any specific tool to work.
Do this now: If you built something in Module 0, try exporting it. Push it to your own GitHub repository. This protects your work and teaches you the export process before you need it urgently.
Future-Proofing Your Learning
To stay adaptable:
Learn the underlying tech, not just the tool. Understanding HTML/CSS/JS and React basics means you can work with any builder's output.
Export regularly. Keep your code in your own GitHub, not just in the tool's cloud.
Follow AI news. Things move fast. Tools get acquired, pricing changes, new options appear. Knowing what is available keeps you flexible.
Trust the patterns. The prompt → generate → iterate cycle works everywhere. Master the pattern, and new tools become easy.
The goal of this course is not to make you dependent on any specific tool. It is to give you skills that remain valuable regardless of which tools dominate next year.
Key Takeaways
- LLMs predict text patterns, they don't "think"
- Context windows limit how much AI can remember
- Different models have different strengths
- Match the tool to the task (builder vs editor vs agent)
- AI chat is conversational; agents take autonomous action
