Guard against infinite loop when no training shards exist, fix README typo

Add assertion after filtering val_path from parquet_paths for the "train"
split so an empty list fails fast instead of spinning in a silent infinite
loop. Also remove stray article "a" in README ("a three files" → "three
files").

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Hugh Brown 2026-03-10 21:34:40 -06:00
parent c12eef778e
commit 09ebea439d
2 changed files with 2 additions and 1 deletion

README.md

@@ -8,7 +8,7 @@ The idea: give an AI agent a small but real LLM training setup and let it experi
## How it works
-The repo is deliberately kept small and only really has a three files that matter:
+The repo is deliberately kept small and only really has three files that matter:
- **`prepare.py`** — fixed constants, one-time data prep (downloads training data, trains a BPE tokenizer), and runtime utilities (dataloader, evaluation). Not modified.
- **`train.py`** — the single file the agent edits. Contains the full GPT model, optimizer (Muon + AdamW), and training loop. Everything is fair game: architecture, hyperparameters, optimizer, batch size, etc. **This file is edited and iterated on by the agent**.

prepare.py

@@ -258,6 +258,7 @@ def _document_batches(split, tokenizer_batch_size=128):
val_path = os.path.join(DATA_DIR, VAL_FILENAME)
if split == "train":
parquet_paths = [p for p in parquet_paths if p != val_path]
+        assert len(parquet_paths) > 0, "No training shards found."
else:
parquet_paths = [val_path]
epoch = 1
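
The fail-fast pattern above can be sketched in isolation. The helper below is a hypothetical stand-in (`select_shards` is not a function in the repo, and the `DATA_DIR`/`VAL_FILENAME` values are placeholder assumptions): it reproduces the filtering logic from the hunk so the assertion's effect is easy to test. Without the assertion, an empty training list would give the epoch loop nothing to iterate over, so it would spin forever yielding no batches.

```python
import os

# Placeholder constants; the real ones are fixed in prepare.py.
DATA_DIR = "data"
VAL_FILENAME = "val.parquet"

def select_shards(split, parquet_paths):
    """Pick the parquet shards for a split, failing fast if "train" is empty.

    Mirrors the filtering in _document_batches: the validation shard is
    excluded from the training set, and an empty result raises immediately
    instead of letting the caller loop over nothing indefinitely.
    """
    val_path = os.path.join(DATA_DIR, VAL_FILENAME)
    if split == "train":
        parquet_paths = [p for p in parquet_paths if p != val_path]
        assert len(parquet_paths) > 0, "No training shards found."
    else:
        parquet_paths = [val_path]
    return parquet_paths
```

With only the validation shard on disk, `select_shards("train", [...])` now raises `AssertionError` up front rather than producing an empty iterable.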