Deep (Learning) Focus


Sitemap - 2025 - Deep (Learning) Focus

AI Agents from First Principles

A Guide for Debugging LLM Training Data

Llama 4: The Challenges of Creating a Frontier-Level LLM

Vision Large Language Models (vLLMs)

nanoMoE: Mixture-of-Experts (MoE) LLMs from Scratch in PyTorch

Demystifying Reasoning Models

Mixture-of-Experts (MoE) LLMs

Scaling Laws for LLMs: From GPT-3 to o3

© 2025 Cameron R. Wolfe