diff --git a/.gitignore b/.gitignore
index 84617d1de..d54c8bae1 100644
--- a/.gitignore
+++ b/.gitignore
@@ -4,8 +4,6 @@
/coverage/
/dist/
/docs/.observablehq/dist/
-/docs/theme/*.md
-/docs/themes.md
/test/build/
/test/output/**/*-changed.*
/test/output/build/**/*-changed/
diff --git a/docs/config.md b/docs/config.md
index 1063f5a4e..d5174af75 100644
--- a/docs/config.md
+++ b/docs/config.md
@@ -155,6 +155,10 @@ The pages list should _not_ include the home page (`/`) as this is automatically
Whether to show the previous & next links in the footer; defaults to true. The pages are linked in the same order as they appear in the sidebar.
+## dynamicPaths
+
+The list of [parameterized pages](./params) and [dynamic pages](./page-loaders) to generate, either as a (synchronous) iterable of strings, or a function that returns an async iterable of strings if you wish to load the list of dynamic pages asynchronously.
+
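+For example, a minimal sketch in `observablehq.config.js` (the paths here are illustrative):
+
+```js run=false
+export default {
+  dynamicPaths: [
+    "/products/100",
+    "/products/101"
+  ]
+};
+```
+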
## head
An HTML fragment to add to the head. Defaults to the empty string. If specified as a function, receives an object with the page’s `title`, (front-matter) `data`, and `path`, and must return a string.
@@ -224,7 +228,7 @@ These additional results may also point to external links if the **path** is spe
```js run=false
export default {
search: {
- async* index() {
+ async *index() {
yield {
path: "https://example.com",
title: "Example",
@@ -237,7 +241,7 @@ export default {
## interpreters
-The **interpreters** option specifies additional interpreted languages for data loaders, indicating the file extension and associated interpreter. (See [loader routing](./loaders#routing) for more.) The default list of interpreters is:
+The **interpreters** option specifies additional interpreted languages for data loaders, indicating the file extension and associated interpreter. (See [loader routing](./data-loaders#routing) for more.) The default list of interpreters is:
```js run=false
{
diff --git a/docs/convert.md b/docs/convert.md
index db194acfb..354a52647 100644
--- a/docs/convert.md
+++ b/docs/convert.md
@@ -384,7 +384,7 @@ The `convert` command only supports code cell modes: Markdown, JavaScript, HTML,
## Databases
-Database connectors can be replaced by [data loaders](./loaders).
+Database connectors can be replaced by [data loaders](./data-loaders).
## Secrets
diff --git a/docs/data-loaders.md b/docs/data-loaders.md
new file mode 100644
index 000000000..50b482bd3
--- /dev/null
+++ b/docs/data-loaders.md
@@ -0,0 +1,324 @@
+---
+keywords: server-side rendering, ssr
+---
+
+# Data loaders
+
+**Data loaders** generate static snapshots of data during build. For example, a data loader might query a database and output CSV data, or server-side render a chart and output a PNG image.
+
+Why static snapshots? Performance is critical for dashboards: users don’t like to wait, and dashboards only create value if users look at them. Data loaders practically force your app to be fast because data is precomputed and thus can be served instantly — you don’t need to run queries separately for each user on load. Furthermore, data can be highly optimized (and aggregated and anonymized), minimizing what you send to the client. And since data loaders run only during build, your users don’t need direct access to your data warehouse, making your dashboards more secure and robust.
+
+
+Data loaders are optional. You can use `fetch` or `WebSocket` if you prefer to load data at runtime, or you can store data in static files.
+
+You can use continuous deployment to rebuild data as often as you like, ensuring that data is always up-to-date.
+
+Data loaders can be written in any programming language. They can even invoke binary executables such as ffmpeg or DuckDB. For convenience, Framework has built-in support for common languages: JavaScript, TypeScript, Python, and R. Naturally you can use any third-party library or SDK for these languages, too.
+
+A data loader can be as simple as a shell script that invokes [curl](https://curl.se/) to fetch recent earthquakes from the [USGS](https://earthquake.usgs.gov/earthquakes/feed/v1.0/geojson.php):
+
+```sh
+curl https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_day.geojson
+```
+
+Data loaders use [file-based routing](#routing), so assuming this shell script is named `quakes.json.sh`, a `quakes.json` file is then generated at build time. You can access this file from the client using [`FileAttachment`](./files):
+
+```js echo
+FileAttachment("quakes.json").json()
+```
+
+A data loader can transform data to perfectly suit the needs of a dashboard. The JavaScript data loader below uses [D3](./lib/d3) to output [CSV](./lib/csv) with three columns representing the _magnitude_, _longitude_, and _latitude_ of each earthquake.
+
+```js run=false echo
+import {csvFormat} from "d3-dsv";
+
+// Fetch GeoJSON from the USGS.
+const response = await fetch("https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_day.geojson");
+if (!response.ok) throw new Error(`fetch failed: ${response.status}`);
+const collection = await response.json();
+
+// Convert to an array of objects.
+const features = collection.features.map((f) => ({
+ magnitude: f.properties.mag,
+ longitude: f.geometry.coordinates[0],
+ latitude: f.geometry.coordinates[1]
+}));
+
+// Output CSV.
+process.stdout.write(csvFormat(features));
+```
+
+Assuming the loader above is named `quakes.csv.js`, you can access its output from the client as `quakes.csv`:
+
+```js echo
+const quakes = FileAttachment("quakes.csv").csv({typed: true});
+```
+
+Now you can display the earthquakes in a map using [Observable Plot](./lib/plot):
+
+```js
+const world = await fetch(import.meta.resolve("npm:world-atlas/land-110m.json")).then((response) => response.json());
+const land = topojson.feature(world, world.objects.land);
+```
+
+```js echo
+Plot.plot({
+ projection: {
+ type: "orthographic",
+ rotate: [110, -30]
+ },
+ marks: [
+ Plot.graticule(),
+ Plot.sphere(),
+ Plot.geo(land, {stroke: "var(--theme-foreground-faint)"}),
+ Plot.dot(quakes, {x: "longitude", y: "latitude", r: "magnitude", stroke: "#f43f5e"})
+ ]
+})
+```
+
+During preview, the preview server automatically runs the data loader the first time its output is needed and [caches](#caching) the result; if you edit the data loader, the preview server will automatically run it again and push the new result to the client.
+
+## Archives
+
+Data loaders can generate multi-file archives such as ZIP files; individual files can then be pulled from archives using `FileAttachment`. This allows a data loader to output multiple (often related) files from the same source data in one go. Framework also supports _implicit_ data loaders, _extractors_, that extract referenced files from static archives. So whether an archive is static or generated dynamically by a data loader, you can use `FileAttachment` to pull files from it.
+
+The following archive extensions are supported:
+
+- `.zip` - for the [ZIP](https://en.wikipedia.org/wiki/ZIP_(file_format)) archive format
+- `.tar` - for [tarballs](https://en.wikipedia.org/wiki/Tar_(computing))
+- `.tar.gz` and `.tgz` - for [compressed tarballs](https://en.wikipedia.org/wiki/Gzip)
+
+Here’s an example of loading an image from `lib/muybridge.zip`:
+
+```js echo
+FileAttachment("lib/muybridge/deer.jpeg").image({width: 320, alt: "A deer"})
+```
+
+You can do the same with static HTML:
+
+```html run=false
+<img src="lib/muybridge/deer.jpeg" width="320" alt="A deer">
+```
+
+Below is a TypeScript data loader, `quakes.zip.ts`, that uses [JSZip](https://stuk.github.io/jszip/) to generate a ZIP archive of two files, `metadata.json` and `features.csv`. Note that the data loader must serialize the `metadata` and `features` objects into the formats implied by their file extensions (`.json` and `.csv`); data loaders are responsible for doing their own serialization.
+
+```js run=false
+import {csvFormat} from "d3-dsv";
+import JSZip from "jszip";
+
+// Fetch GeoJSON from the USGS.
+const response = await fetch("https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_day.geojson");
+if (!response.ok) throw new Error(`fetch failed: ${response.status}`);
+const collection = await response.json();
+
+// Convert to an array of objects.
+const features = collection.features.map((f) => ({
+ magnitude: f.properties.mag,
+ longitude: f.geometry.coordinates[0],
+ latitude: f.geometry.coordinates[1]
+}));
+
+// Output a ZIP archive to stdout.
+const zip = new JSZip();
+zip.file("metadata.json", JSON.stringify(collection.metadata, null, 2));
+zip.file("features.csv", csvFormat(features));
+zip.generateNodeStream().pipe(process.stdout);
+```
+
+To load data in the browser, use `FileAttachment`:
+
+```js run=false
+const metadata = FileAttachment("quakes/metadata.json").json();
+const features = FileAttachment("quakes/features.csv").csv({typed: true});
+```
+
+The ZIP file itself can also be referenced as a whole — for example if the names of the files are not known in advance — with [`file.zip`](./lib/zip):
+
+```js echo
+const zip = FileAttachment("quakes.zip").zip();
+const metadata = zip.then((zip) => zip.file("metadata.json").json());
+```
+
+As with any other file, files from generated archives are live in preview (refreshing automatically if the corresponding data loader is edited), and are added to the build only if [statically referenced](./files#static-analysis) by `FileAttachment`.
+
+## Routing
+
+Data loaders live in the source root (typically `src`) alongside your other source files. When a file is referenced from JavaScript via `FileAttachment`, if the file does not exist, Framework will look for a file of the same name with a double extension to see if there is a corresponding data loader. By default, the following second extensions are checked, in order, with the corresponding language and interpreter:
+
+- `.js` - JavaScript (`node`)
+- `.ts` - TypeScript (`tsx`)
+- `.py` - Python (`python3`)
+- `.R` - R (`Rscript`)
+- `.rs` - Rust (`rust-script`)
+- `.go` - Go (`go run`)
+- `.java` - Java (`java`; requires Java 11+ and [single-file programs](https://openjdk.org/jeps/330))
+- `.jl` - Julia (`julia`)
+- `.php` - PHP (`php`)
+- `.sh` - shell script (`sh`)
+- `.exe` - arbitrary executable
+
+
+The **interpreters** [configuration option](./config#interpreters) can be used to extend the list of supported extensions.
+
+For example, for the file `quakes.csv`, the following data loaders are considered: `quakes.csv.js`, `quakes.csv.ts`, `quakes.csv.py`, _etc._ The first match is used.
+
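+As a sketch, you could register a hypothetical Perl interpreter in `observablehq.config.js` like so (the extension and command here are illustrative):
+
+```js run=false
+export default {
+  interpreters: {
+    ".pl": ["perl"]
+  }
+};
+```
+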
+## Execution
+
+To use an interpreted data loader (anything other than `.exe`), the corresponding interpreter must be installed and available on your `$PATH`. Any additional modules, packages, libraries, _etc._, must also be installed. Some interpreters are not available on all platforms; for example `sh` is only available on Unix-like systems.
+
+
+You can use a virtual environment in Python, such as [venv](https://docs.python.org/3/tutorial/venv.html) or [uv](https://github.com/astral-sh/uv), to install libraries locally to the project. This is useful when working in multiple projects, and when collaborating; you can also track dependencies in a `requirements.txt` file.
+
+To create a virtual environment with venv:
+
+```sh
+python3 -m venv .venv
+```
+
+Or with uv:
+
+```sh
+uv venv
+```
+
+To activate the virtual environment on macOS or Linux:
+
+```sh
+source .venv/bin/activate
+```
+
+Or on Windows:
+
+```sh
+.venv\Scripts\activate
+```
+
+To install required packages:
+
+```sh
+pip install -r requirements.txt
+```
+
+You can then run the `observable preview` or `observable build` (or `npm run dev` or `npm run build`) commands as usual; data loaders will run within the virtual environment. Run the `deactivate` command or use Control-D to exit the virtual environment.
+
+Data loaders are run in the same working directory in which you run the `observable build` or `observable preview` command, which is typically the project root. In Node, you can access the current working directory by calling `process.cwd()`, and the data loader’s source location with [`import.meta.url`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/import.meta). To compute the path of a file relative to the data loader source (rather than relative to the current working directory), use [`import.meta.resolve`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/import.meta/resolve). For example, a data loader in `src/summary.txt.js` could read the file `src/table.txt` as:
+
+```js run=false
+import {readFile} from "node:fs/promises";
+import {fileURLToPath} from "node:url";
+
+const table = await readFile(fileURLToPath(import.meta.resolve("./table.txt")), "utf-8");
+```
+
+Executable (`.exe`) data loaders are run directly and must have the executable bit set. This is typically done via [`chmod`](https://en.wikipedia.org/wiki/Chmod). For example:
+
+```sh
+chmod +x src/quakes.csv.exe
+```
+
+While a `.exe` data loader may be any binary executable (_e.g.,_ compiled from C), it is often convenient to specify another interpreter using a [shebang](https://en.wikipedia.org/wiki/Shebang_(Unix)). For example, to write a data loader in Perl:
+
+```perl
+#!/usr/bin/env perl
+
+print("Hello World\n");
+```
+
+If multiple requests are made concurrently for the same data loader, the data loader will only run once; each concurrent request will receive the same response.
+
+## Output
+
+Data loaders must output to [standard output](https://en.wikipedia.org/wiki/Standard_streams). The first extension (such as `.csv`) does not affect the generated snapshot; the data loader is solely responsible for producing the expected output (such as CSV). If you wish to log additional information from within a data loader, be sure to log to standard error, for example by using [`console.warn`](https://developer.mozilla.org/en-US/docs/Web/API/console/warn) or `process.stderr`; otherwise the logs will be included in the output file and sent to the client.
+
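+For example, here is a minimal sketch of a JavaScript data loader that logs progress to standard error while writing only CSV to standard output:
+
+```js run=false
+const rows = [{id: 1}, {id: 2}];
+
+// Diagnostics go to standard error, so they are not included in the output file.
+console.warn(`processing ${rows.length} rows`);
+
+// Only the CSV itself is written to standard output.
+process.stdout.write("id\n" + rows.map((d) => d.id).join("\n"));
+```
+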
+## Building
+
+Data loaders generate files at build time that live alongside other [static files](./files) in the `_file` directory of the output root. For example, to generate a `quakes.json` file at build time by fetching and caching data from the USGS, you could write a data loader in a shell script like so:
+
+```ini
+.
+├─ src
+│ ├─ index.md
+│ └─ quakes.json.sh
+└─ ...
+```
+
+Where `quakes.json.sh` is:
+
+```sh
+curl https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_day.geojson
+```
+
+This will produce the following output root:
+
+```ini
+.
+├─ dist
+│ ├─ _file
+│ │ └─ quakes.99da78d9.json
+│ ├─ _observablehq
+│ │ └─ ... # additional assets for serving the site
+│ └─ index.html
+└─ ...
+```
+
+As another example, say you have a `quakes.zip` archive that includes yearly files for observed earthquakes. If you reference `FileAttachment("quakes/2021.csv")`, Framework will pull the `2021.csv` from `quakes.zip`. So this source root:
+
+```ini
+.
+├─ src
+│ ├─ index.md
+│ └─ quakes.zip
+└─ ...
+```
+
+Becomes this output:
+
+```ini
+.
+├─ dist
+│ ├─ _file
+│ │ └─ quakes
+│ │ └─ 2021.e5f2eb94.csv
+│ ├─ _observablehq
+│ │ └─ ... # additional assets for serving the site
+│ └─ index.html
+└─ ...
+```
+
+A data loader is only run during build if its corresponding output file is referenced in at least one page. Framework does not scour the source root (typically `src`) for data loaders.
+
+## Caching
+
+When a data loader runs successfully, its output is saved to a cache which lives in `.observablehq/cache` within the source root (typically `src`).
+
+During preview, Framework considers the cache “fresh” if the modification time of the cached output is newer than the modification time of the corresponding data loader source. If you edit a data loader or update its modification time with `touch`, the cache is invalidated; when previewing a page that uses the data loader, the preview server will detect that the data loader was modified and automatically run it, pushing the new data down to the client and re-evaluating any referencing code — no reload required!
+
+During build, Framework ignores modification times and only runs a data loader if its output is not cached. Continuous integration caches typically don’t preserve modification times, so this design makes it easier to control which data loaders to run by selectively populating the cache.
+
+To purge the data loader cache and force all data loaders to run on the next build, delete the entire cache. For example:
+
+```sh
+rm -rf src/.observablehq/cache
+```
+
+To force a specific data loader to run on the next build instead, delete its corresponding output from the cache. For example, to rebuild `src/quakes.csv`:
+
+```sh
+rm -f src/.observablehq/cache/quakes.csv
+```
+
+See [Automated deploys: Caching](./deploying#caching) for more on caching during CI.
+
+## Errors
+
+When a data loader fails, it _must_ return a non-zero [exit code](https://en.wikipedia.org/wiki/Exit_status). If a data loader produces a zero exit code, Framework will assume that it was successful and will cache and serve the output to the client. Empty output is not by itself considered an error; however, a warning is displayed in the preview server and build logs.
+
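+For example, a shell data loader can propagate failure explicitly (the URL is illustrative; `curl --fail` exits non-zero on HTTP errors):
+
+```sh
+curl --fail https://example.com/quakes.geojson || exit 1
+```
+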
+During preview, data loader errors will be shown in the preview server log, and a 500 HTTP status code will be returned to the client that attempted to load the corresponding file. This typically results in an error such as:
+
+```
+RuntimeError: Unable to load file: quakes.csv
+```
+
+When any data loader fails, the entire build fails.
diff --git a/docs/files.md b/docs/files.md
index 8b66f2991..b019f577c 100644
--- a/docs/files.md
+++ b/docs/files.md
@@ -4,7 +4,7 @@ keywords: file, fileattachment, attachment
# Files
-Load files — whether static or generated dynamically by a [data loader](./loaders) — using the built-in `FileAttachment` function. This is available by default in Markdown, but you can import it explicitly like so:
+Load files — whether static or generated dynamically by a [data loader](./data-loaders) — using the built-in `FileAttachment` function. This is available by default in Markdown, but you can import it explicitly like so:
```js echo
import {FileAttachment} from "npm:@observablehq/stdlib";
@@ -16,7 +16,7 @@ The `FileAttachment` function takes a path and returns a file handle. This handl
FileAttachment("volcano.json")
```
-Like a [local import](./imports#local-imports), the path is relative to the calling code’s source file: either the page’s Markdown file or the imported local JavaScript module. To load a remote file, use [`fetch`](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API), or use a [data loader](./loaders) to download the file at build time.
+Like a [local import](./imports#local-imports), the path is relative to the calling code’s source file: either the page’s Markdown file or the imported local JavaScript module. To load a remote file, use [`fetch`](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API), or use a [data loader](./data-loaders) to download the file at build time.
Calling `FileAttachment` doesn’t actually load the file; the contents are only loaded when you invoke a [file contents method](#supported-formats). For example, to load a JSON file:
@@ -32,7 +32,7 @@ volcano
## Static analysis
-The `FileAttachment` function can _only_ be passed a static string literal; constructing a dynamic path such as `FileAttachment("my" + "file.csv")` is invalid syntax. Static analysis is used to invoke [data loaders](./loaders) at build time, and ensures that only referenced files are included in the generated output during build. This also allows a content hash in the file name for cache breaking during deploy.
+The `FileAttachment` function can _only_ be passed a static string literal; constructing a dynamic path such as FileAttachment(\`frame$\{i}.png\`) is invalid syntax. Static analysis is used to invoke [data loaders](./data-loaders) at build time, and ensures that only referenced files are included in the generated output during build. This also allows a content hash in the file name for cache breaking during deploy.
If you have multiple files, you can enumerate them explicitly like so:
diff --git a/docs/getting-started.md b/docs/getting-started.md
index 1477753f1..568bd4e81 100644
--- a/docs/getting-started.md
+++ b/docs/getting-started.md
@@ -295,7 +295,7 @@ If this barfs a bunch of JSON in the terminal, it’s working as intended. 😅
### File attachments
-Framework uses [file-based routing](./loaders#routing) for data loaders: the data loader forecast.json.js serves the file forecast.json. To load this file from src/weather.md we use the relative path ./data/forecast.json. In effect, data loaders are simply a naming convention for generating “static” files — a big advantage of which is that you can edit a data loader and the changes immediately propagate to the live preview without needing a reload.
+Framework uses [file-based routing](./data-loaders#routing) for data loaders: the data loader forecast.json.js serves the file forecast.json. To load this file from src/weather.md we use the relative path ./data/forecast.json. In effect, data loaders are simply a naming convention for generating “static” files — a big advantage of which is that you can edit a data loader and the changes immediately propagate to the live preview without needing a reload.
To load a file in JavaScript, use the built-in [`FileAttachment`](./files). In `weather.md`, replace the contents of the JavaScript code block (the parts inside the triple backticks ```) with the following code:
@@ -557,7 +557,7 @@ forecast = requests.get(station["properties"]["forecastHourly"]).json()
json.dump(forecast, sys.stdout)
```
-To write the data loader in R, name it forecast.json.R. Or as shell script, forecast.json.sh. You get the idea. See [Data loaders: Routing](./loaders#routing) for more. The beauty of this approach is that you can leverage the strengths (and libraries) of multiple languages, and still get instant updates in the browser as you develop.
+To write the data loader in R, name it forecast.json.R. Or as shell script, forecast.json.sh. You get the idea. See [Data loaders: Routing](./data-loaders#routing) for more. The beauty of this approach is that you can leverage the strengths (and libraries) of multiple languages, and still get instant updates in the browser as you develop.
### Deploy automatically
diff --git a/docs/index.md b/docs/index.md
index de4077c5c..0d64abb44 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -141,7 +141,7 @@ index: false
**Observable Framework** is an [open-source](https://github.com/observablehq/framework) static site generator for data apps, dashboards, reports, and more. Framework includes a preview server for local development, and a command-line interface for automating builds & deploys.
-You write simple [Markdown](./markdown) pages — with interactive charts and inputs in [reactive JavaScript](./javascript), and with data snapshots generated by [loaders](./loaders) in _any_ programming language (SQL, Python, R, and more) — and Framework compiles it into a static site with instant page loads for a great user experience. Since everything is just files, you can use your preferred editor and source control, write unit tests, share code with other apps, integrate with CI/CD, and host projects anywhere.
+You write simple [Markdown](./markdown) pages — with interactive charts and inputs in [reactive JavaScript](./javascript), and with data snapshots generated by [data loaders](./data-loaders) in _any_ programming language (SQL, Python, R, and more) — and Framework compiles it into a static site with instant page loads for a great user experience. Since everything is just files, you can use your preferred editor and source control, write unit tests, share code with other apps, integrate with CI/CD, and host projects anywhere.
Framework includes thoughtfully-designed [themes](./themes), [grids](./markdown#grids), and [libraries](./imports) to help you build displays of data that look great on any device, including [Observable Plot](./lib/plot), [D3](./lib/d3), [Mosaic](./lib/mosaic), [Vega-Lite](./lib/vega-lite), [Graphviz](./lib/dot), [Mermaid](./lib/mermaid), [Leaflet](./lib/leaflet), [KaTeX](./lib/tex), and myriad more. And for working with data in the client, there’s [DuckDB](./lib/duckdb), [Arquero](./lib/arquero), [SQLite](./lib/sqlite), and more, too.
diff --git a/docs/lib/mosaic.md b/docs/lib/mosaic.md
index 6b89cc25f..f8b159ad5 100644
--- a/docs/lib/mosaic.md
+++ b/docs/lib/mosaic.md
@@ -7,7 +7,7 @@ sql:
[Mosaic](https://uwdata.github.io/mosaic/) is a system for linking data visualizations, tables, and inputs, leveraging [DuckDB](./duckdb) for scalable processing. Mosaic includes an interactive grammar of graphics, [Mosaic vgplot](https://uwdata.github.io/mosaic/vgplot/), built on [Observable Plot](./plot). With vgplot, you can interactively visualize and explore millions — even billions — of data points.
-The example below shows the pickup and dropoff locations of one million taxi rides in New York City from Jan 1–3, 2010. The dataset is stored in a 8MB [Apache Parquet](./arrow#apache-parquet) file, generated with a [data loader](../loaders).
+The example below shows the pickup and dropoff locations of one million taxi rides in New York City from Jan 1–3, 2010. The dataset is stored in a 8MB [Apache Parquet](./arrow#apache-parquet) file, generated with a [data loader](../data-loaders).
${maps}
diff --git a/docs/lib/sqlite.md b/docs/lib/sqlite.md
index 722f5419f..b5259410d 100644
--- a/docs/lib/sqlite.md
+++ b/docs/lib/sqlite.md
@@ -26,7 +26,7 @@ const db = SQLiteDatabaseClient.open(FileAttachment("chinook.db"));
(Note that unlike [`DuckDBClient`](./duckdb), a `SQLiteDatabaseClient` takes a single argument representing _all_ of the tables in the database; that’s because a SQLite file stores multiple tables, whereas DuckDB typically uses separate Apache Parquet, CSV, or JSON files for each table.)
-Using `FileAttachment` means that referenced files are automatically copied to `dist` during build, and you can even generate SQLite files using [data loaders](../loaders). But if you want to “hot” load a live file from an external server, pass a string to `SQLiteDatabaseClient.open`:
+Using `FileAttachment` means that referenced files are automatically copied to `dist` during build, and you can even generate SQLite files using [data loaders](../data-loaders). But if you want to “hot” load a live file from an external server, pass a string to `SQLiteDatabaseClient.open`:
```js run=false
const db = SQLiteDatabaseClient.open("https://static.observableusercontent.com/files/b3711cfd9bdf50cbe4e74751164d28e907ce366cd4bf56a39a980a48fdc5f998c42a019716a8033e2b54defdd97e4a55ebe4f6464b4f0678ea0311532605a115");
diff --git a/docs/lib/zip.md b/docs/lib/zip.md
index 92a6052c2..4d984af95 100644
--- a/docs/lib/zip.md
+++ b/docs/lib/zip.md
@@ -24,7 +24,7 @@ To pull out a single file from the archive, use the `archive.file` method. It re
muybridge.file("deer.jpeg").image({width: 320, alt: "A deer"})
```
-That said, if you know the name of the file within the ZIP archive statically, you don’t need to load the ZIP archive; you can simply request the [file within the archive](../loaders#archives) directly. The specified file is then extracted from the ZIP archive at build time.
+That said, if you know the name of the file within the ZIP archive statically, you don’t need to load the ZIP archive; you can simply request the [file within the archive](../data-loaders#archives) directly. The specified file is then extracted from the ZIP archive at build time.
```js echo
FileAttachment("muybridge/deer.jpeg").image({width: 320, alt: "A deer"})
@@ -38,7 +38,7 @@ For images and other media, you can simply use static HTML.
```
-One reason to load a ZIP archive is that you don’t know the files statically — maybe there are lots of files and you don’t want to enumerate them statically, or maybe you expect them to change over time and the ZIP archive is generated by a [data loader](../loaders). For example, maybe you want to display an arbitrary collection of images.
+One reason to load a ZIP archive is that you don’t know the files statically — maybe there are lots of files and you don’t want to enumerate them statically, or maybe you expect them to change over time and the ZIP archive is generated by a [data loader](../data-loaders). For example, maybe you want to display an arbitrary collection of images.
```js echo
Gallery(await Promise.all(muybridge.filenames.map((f) => muybridge.file(f).image())))
diff --git a/docs/loaders.md b/docs/loaders.md
index aeb7f33e8..75bb3ef78 100644
--- a/docs/loaders.md
+++ b/docs/loaders.md
@@ -1,320 +1,3 @@
-# Data loaders
+
-**Data loaders** generate static snapshots of data during build. For example, a data loader might query a database and output CSV data, or server-side render a chart and output a PNG image.
-
-Why static snapshots? Performance is critical for dashboards: users don’t like to wait, and dashboards only create value if users look at them. Data loaders practically force your app to be fast because data is precomputed and thus can be served instantly — you don’t need to run queries separately for each user on load. Furthermore, data can be highly optimized (and aggregated and anonymized), minimizing what you send to the client. And since data loaders run only during build, your users don’t need direct access to your data warehouse, making your dashboards more secure and robust.
-
-
-Data loaders are optional. You can use fetch or WebSocket if you prefer to load data at runtime, or you can store data in static files.
-
-You can use continuous deployment to rebuild data as often as you like, ensuring that data is always up-to-date.
-
-Data loaders can be written in any programming language. They can even invoke binary executables such as ffmpeg or DuckDB. For convenience, Framework has built-in support for common languages: JavaScript, TypeScript, Python, and R. Naturally you can use any third-party library or SDK for these languages, too.
-
-A data loader can be as simple as a shell script that invokes [curl](https://curl.se/) to fetch recent earthquakes from the [USGS](https://earthquake.usgs.gov/earthquakes/feed/v1.0/geojson.php):
-
-```sh
-curl https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_day.geojson
-```
-
-Data loaders use [file-based routing](#routing), so assuming this shell script is named `quakes.json.sh`, a `quakes.json` file is then generated at build time. You can access this file from the client using [`FileAttachment`](./files):
-
-```js echo
-FileAttachment("quakes.json").json()
-```
-
-A data loader can transform data to perfectly suit the needs of a dashboard. The JavaScript data loader below uses [D3](./lib/d3) to output [CSV](./lib/csv) with three columns representing the _magnitude_, _longitude_, and _latitude_ of each earthquake.
-
-```js run=false echo
-import {csvFormat} from "d3-dsv";
-
-// Fetch GeoJSON from the USGS.
-const response = await fetch("https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_day.geojson");
-if (!response.ok) throw new Error(`fetch failed: ${response.status}`);
-const collection = await response.json();
-
-// Convert to an array of objects.
-const features = collection.features.map((f) => ({
- magnitude: f.properties.mag,
- longitude: f.geometry.coordinates[0],
- latitude: f.geometry.coordinates[1]
-}));
-
-// Output CSV.
-process.stdout.write(csvFormat(features));
-```
-
-Assuming the loader above is named `quakes.csv.js`, you can access its output from the client as `quakes.csv`:
-
-```js echo
-const quakes = FileAttachment("quakes.csv").csv({typed: true});
-```
-
-Now you can display the earthquakes in a map using [Observable Plot](./lib/plot):
-
-```js
-const world = await fetch(import.meta.resolve("npm:world-atlas/land-110m.json")).then((response) => response.json());
-const land = topojson.feature(world, world.objects.land);
-```
-
-```js echo
-Plot.plot({
- projection: {
- type: "orthographic",
- rotate: [110, -30]
- },
- marks: [
- Plot.graticule(),
- Plot.sphere(),
- Plot.geo(land, {stroke: "var(--theme-foreground-faint)"}),
- Plot.dot(quakes, {x: "longitude", y: "latitude", r: "magnitude", stroke: "#f43f5e"})
- ]
-})
-```
-
-During preview, the preview server automatically runs the data loader the first time its output is needed and [caches](#caching) the result; if you edit the data loader, the preview server will automatically run it again and push the new result to the client.
-
-## Archives
-
-Data loaders can generate multi-file archives such as ZIP files; individual files can then be pulled from archives using `FileAttachment`. This allows a data loader to output multiple (often related) files from the same source data in one go. Framework also supports _implicit_ data loaders, _extractors_, that extract referenced files from static archives. So whether an archive is static or generated dynamically by a data loader, you can use `FileAttachment` to pull files from it.
-
-The following archive extensions are supported:
-
-- `.zip` - for the [ZIP](https://en.wikipedia.org/wiki/ZIP_(file_format)) archive format
-- `.tar` - for [tarballs](https://en.wikipedia.org/wiki/Tar_(computing))
-- `.tar.gz` and `.tgz` - for [compressed tarballs](https://en.wikipedia.org/wiki/Gzip)
-
-Here’s an example of loading an image from `lib/muybridge.zip`:
-
-```js echo
-FileAttachment("lib/muybridge/deer.jpeg").image({width: 320, alt: "A deer"})
-```
-
-You can do the same with static HTML:
-
-<img src="lib/muybridge/deer.jpeg" width="320" alt="A deer">
-
-```html run=false
-<img src="lib/muybridge/deer.jpeg" width="320" alt="A deer">
-```
-
-Below is a TypeScript data loader `quakes.zip.ts` that uses [JSZip](https://stuk.github.io/jszip/) to generate a ZIP archive of two files, `metadata.json` and `features.csv`. Note that the data loader is responsible for serializing the `metadata` and `features` objects into the format implied by each file extension (`.json` and `.csv`); data loaders must do their own serialization.
-
-```js run=false
-import {csvFormat} from "d3-dsv";
-import JSZip from "jszip";
-
-// Fetch GeoJSON from the USGS.
-const response = await fetch("https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_day.geojson");
-if (!response.ok) throw new Error(`fetch failed: ${response.status}`);
-const collection = await response.json();
-
-// Convert to an array of objects.
-const features = collection.features.map((f) => ({
- magnitude: f.properties.mag,
- longitude: f.geometry.coordinates[0],
- latitude: f.geometry.coordinates[1]
-}));
-
-// Output a ZIP archive to stdout.
-const zip = new JSZip();
-zip.file("metadata.json", JSON.stringify(collection.metadata, null, 2));
-zip.file("features.csv", csvFormat(features));
-zip.generateNodeStream().pipe(process.stdout);
-```
-
-To load data in the browser, use `FileAttachment`:
-
-```js run=false
-const metadata = FileAttachment("quakes/metadata.json").json();
-const features = FileAttachment("quakes/features.csv").csv({typed: true});
-```
-
-The ZIP file itself can also be referenced as a whole with [`file.zip`](./lib/zip), for example if the names of the files are not known in advance:
-
-```js echo
-const zip = FileAttachment("quakes.zip").zip();
-const metadata = zip.then((zip) => zip.file("metadata.json").json());
-```
-
-Like with any other file, files from generated archives are live in preview (refreshing automatically if the corresponding data loader is edited), and are added to the build only if [statically referenced](./files#static-analysis) by `FileAttachment`.
-
-## Routing
-
-Data loaders live in the source root (typically `src`) alongside your other source files. When a file is referenced from JavaScript via `FileAttachment`, if the file does not exist, Framework will look for a file of the same name with a double extension to see if there is a corresponding data loader. By default, the following second extensions are checked, in order, with the corresponding language and interpreter:
-
-- `.js` - JavaScript (`node`)
-- `.ts` - TypeScript (`tsx`)
-- `.py` - Python (`python3`)
-- `.R` - R (`Rscript`)
-- `.rs` - Rust (`rust-script`)
-- `.go` - Go (`go run`)
-- `.java` - Java (`java`; requires Java 11+ and [single-file programs](https://openjdk.org/jeps/330))
-- `.jl` - Julia (`julia`)
-- `.php` - PHP (`php`)
-- `.sh` - shell script (`sh`)
-- `.exe` - arbitrary executable
-
-
-The [**interpreters**](./config#interpreters) configuration option can be used to extend the list of supported extensions.
-
-For example, for the file `quakes.csv`, the following data loaders are considered: `quakes.csv.js`, `quakes.csv.ts`, `quakes.csv.py`, _etc._ The first match is used.
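
The lookup can be pictured with a short sketch (an illustration only; `candidateLoaders` is a hypothetical helper, not Framework's API):

```javascript
// Given a requested file such as "quakes.csv", list candidate data loaders
// by appending each known interpreter extension in precedence order.
const loaderExtensions = [".js", ".ts", ".py", ".R", ".rs", ".go", ".java", ".jl", ".php", ".sh", ".exe"];

function candidateLoaders(file) {
  return loaderExtensions.map((extension) => file + extension);
}

// The first candidate that exists on disk wins; here the first three are
// quakes.csv.js, quakes.csv.ts, and quakes.csv.py.
console.log(candidateLoaders("quakes.csv").slice(0, 3));
```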
-
-## Execution
-
-To use an interpreted data loader (anything other than `.exe`), the corresponding interpreter must be installed and available on your `$PATH`. Any additional modules, packages, libraries, _etc._, must also be installed. Some interpreters are not available on all platforms; for example `sh` is only available on Unix-like systems.
-
-
-
-You can use a virtual environment in Python, such as [venv](https://docs.python.org/3/tutorial/venv.html) or [uv](https://github.com/astral-sh/uv), to install libraries locally to the project. This is useful when working across multiple projects and when collaborating; you can also track dependencies in a `requirements.txt` file.
-
-To create a virtual environment with venv:
-
-```sh
-python3 -m venv .venv
-```
-
-Or with uv:
-
-```sh
-uv venv
-```
-
-To activate the virtual environment on macOS or Linux:
-
-```sh
-source .venv/bin/activate
-```
-
-Or on Windows:
-
-```sh
-.venv\Scripts\activate
-```
-
-To install required packages:
-
-```sh
-pip install -r requirements.txt
-```
-
-You can then run the `observable preview` or `observable build` (or `npm run dev` or `npm run build`) commands as usual; data loaders will run within the virtual environment. Run the `deactivate` command or use Control-D to exit the virtual environment.
-
-
-
-Data loaders are run in the same working directory in which you run the `observable build` or `observable preview` command, which is typically the project root. In Node, you can access the current working directory by calling `process.cwd()`, and the data loader’s source location with [`import.meta.url`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/import.meta). To compute the path of a file relative to the data loader source (rather than relative to the current working directory), use [`import.meta.resolve`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/import.meta/resolve). For example, a data loader in `src/summary.txt.js` could read the file `src/table.txt` as:
-
-```js run=false
-import {readFile} from "node:fs/promises";
-import {fileURLToPath} from "node:url";
-
-const table = await readFile(fileURLToPath(import.meta.resolve("./table.txt")), "utf-8");
-```
-
-Executable (`.exe`) data loaders are run directly and must have the executable bit set. This is typically done via [`chmod`](https://en.wikipedia.org/wiki/Chmod). For example:
-
-```sh
-chmod +x src/quakes.csv.exe
-```
-
-While a `.exe` data loader may be any binary executable (_e.g.,_ compiled from C), it is often convenient to specify another interpreter using a [shebang](https://en.wikipedia.org/wiki/Shebang_(Unix)). For example, to write a data loader in Perl:
-
-```perl
-#!/usr/bin/env perl
-
-print("Hello World\n");
-```
-
-If multiple requests are made concurrently for the same data loader, the data loader will only run once; each concurrent request will receive the same response.
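
This de-duplication can be pictured with a small sketch (an illustration only; `runOnce` is a hypothetical helper, not Framework's API):

```javascript
// Concurrent requests for the same data loader share a single run: the first
// caller starts the loader, and later callers receive the same result.
const running = new Map();

function runOnce(key, run) {
  if (!running.has(key)) running.set(key, run()); // first caller starts the run
  return running.get(key); // later callers reuse it
}

let runs = 0;
const load = () => { runs += 1; return `output #${runs}`; };

const a = runOnce("quakes.csv", load);
const b = runOnce("quakes.csv", load);
// a === b, and the loader body ran only once
```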
-
-## Output
-
-Data loaders must output to [standard output](https://en.wikipedia.org/wiki/Standard_streams). The first extension (such as `.csv`) does not affect the generated snapshot; the data loader is solely responsible for producing the expected output (such as CSV). If you wish to log additional information from within a data loader, be sure to log to stderr, say by using [`console.warn`](https://developer.mozilla.org/en-US/docs/Web/API/console/warn); otherwise the logs will be included in the output file and sent to the client.
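
For instance, a minimal JavaScript loader can keep this separation explicit (a sketch; `toCSV` is a hypothetical helper standing in for a real serializer such as d3's `csvFormat`):

```javascript
// Diagnostics go to stderr (console.warn); only the data goes to stdout.
function toCSV(rows) {
  const columns = Object.keys(rows[0]);
  const lines = rows.map((row) => columns.map((column) => row[column]).join(","));
  return [columns.join(","), ...lines].join("\n");
}

const rows = [
  {magnitude: 4.2, longitude: -120.1, latitude: 36.5},
  {magnitude: 5.1, longitude: 142.3, latitude: 38.2}
];

console.warn(`writing ${rows.length} rows`); // stderr: excluded from the output file
process.stdout.write(toCSV(rows)); // stdout: becomes the generated file
```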
-
-## Building
-
-Data loaders generate files at build time that live alongside other [static files](./files) in the `_file` directory of the output root. For example, to generate a `quakes.json` file at build time by fetching and caching data from the USGS, you could write a data loader in a shell script like so:
-
-```ini
-.
-├─ src
-│ ├─ index.md
-│ └─ quakes.json.sh
-└─ ...
-```
-
-Where `quakes.json.sh` is:
-
-```sh
-curl https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_day.geojson
-```
-
-This will produce the following output root:
-
-```ini
-.
-├─ dist
-│ ├─ _file
-│ │ └─ quakes.99da78d9.json
-│ ├─ _observablehq
-│ │ └─ ... # additional assets for serving the site
-│ └─ index.html
-└─ ...
-```
-
-As another example, say you have a `quakes.zip` archive that includes yearly files for observed earthquakes. If you reference `FileAttachment("quakes/2021.csv")`, Framework will pull the `2021.csv` from `quakes.zip`. So this source root:
-
-```ini
-.
-├─ src
-│ ├─ index.md
-│ └─ quakes.zip
-└─ ...
-```
-
-Becomes this output:
-
-```ini
-.
-├─ dist
-│ ├─ _file
-│ │ └─ quakes
-│ │ └─ 2021.e5f2eb94.csv
-│ ├─ _observablehq
-│ │ └─ ... # additional assets for serving the site
-│ └─ index.html
-└─ ...
-```
-
-A data loader is only run during build if its corresponding output file is referenced in at least one page. Framework does not scour the source root (typically `src`) for data loaders.
-
-## Caching
-
-When a data loader runs successfully, its output is saved to a cache which lives in `.observablehq/cache` within the source root (typically `src`).
-
-During preview, Framework considers the cache “fresh” if the modification time of the cached output is newer than the modification time of the corresponding data loader source. If you edit a data loader or update its modification time with `touch`, the cache is invalidated; when previewing a page that uses the data loader, the preview server will detect that the data loader was modified and automatically run it, pushing the new data down to the client and re-evaluating any referencing code — no reload required!
-
-During build, Framework ignores modification times and only runs a data loader if its output is not cached. Continuous integration caches typically don’t preserve modification times, so this design makes it easier to control which data loaders to run by selectively populating the cache.
-
-To purge the data loader cache and force all data loaders to run on the next build, delete the entire cache. For example:
-
-```sh
-rm -rf src/.observablehq/cache
-```
-
-To force a specific data loader to run on the next build instead, delete its corresponding output from the cache. For example, to rebuild `src/quakes.csv`:
-
-```sh
-rm -f src/.observablehq/cache/quakes.csv
-```
-
-See [Automated deploys: Caching](./deploying#caching) for more on caching during CI.
-
-## Errors
-
-When a data loader fails, it _must_ return a non-zero [exit code](https://en.wikipedia.org/wiki/Exit_status). If a data loader produces a zero exit code, Framework will assume that it was successful and will cache and serve the output to the client. Empty output is not by itself considered an error; however, a warning is displayed in the preview server and build logs.
-
-During preview, data loader errors will be shown in the preview server log, and a 500 HTTP status code will be returned to the client that attempted to load the corresponding file. This typically results in an error such as:
-
-```
-RuntimeError: Unable to load file: quakes.csv
-```
-
-When any data loader fails, the entire build fails.
+Moved to [Data loaders](./data-loaders).
diff --git a/docs/page-loaders.md.js b/docs/page-loaders.md.js
new file mode 100644
index 000000000..7006c85d9
--- /dev/null
+++ b/docs/page-loaders.md.js
@@ -0,0 +1,60 @@
+process.stdout.write(`---
+keywords: server-side rendering, ssr
+---
+
+# Page loaders
+
+Page loaders are a special type of [data loader](./data-loaders) for dynamically generating (or “server-side rendering”) pages. Page loaders are programs that emit [Markdown](./markdown) to standard out, and have a double extension starting with \`.md\`, such as \`.md.js\` for a JavaScript page loader or \`.md.py\` for a Python page loader.
+
+By “baking” dynamically-generated content into static Markdown, you can further improve the performance of pages since the content exists on page load rather than waiting for JavaScript to run. You may even be able to avoid loading additional assets and JavaScript libraries.
+
+For example, to render a map of recent earthquakes into static inline SVG using D3, you could use a JavaScript page loader as \`quakes.md.js\`:
+
+~~~js run=false
+import * as d3 from "d3-geo";
+import * as topojson from "topojson-client";
+
+const quakes = await (await fetch("https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_week.geojson")).json();
+const world = await (await fetch("https://cdn.jsdelivr.net/npm/world-atlas@2.0.2/land-110m.json")).json();
+const land = topojson.feature(world, world.objects.land);
+
+const projection = d3.geoOrthographic().rotate([110, -40]).fitExtent([[2, 2], [638, 638]], {type: "Sphere"});
+const path = d3.geoPath(projection);
+
+process.stdout.write(\`# Recent quakes
+
+<svg width="640" height="640" viewBox="0 0 640 640" fill="none" stroke="currentColor">
+  <path d="\${path({type: "Sphere"})}"></path>
+  <path d="\${path(land)}" stroke-opacity="0.3"></path>
+  <path d="\${path(quakes)}" stroke="red"></path>
+</svg>
+\`);
+~~~
+
+See the [data loaders](./data-loaders) documentation for more on execution, routing, and caching.
+
+Page loaders often use [parameterized routes](./params) to generate multiple pages from a single program.
+
+When using page loaders, keep an eye on the generated page size, particularly with complex maps and data visualizations in SVG. To keep the page size small, consider server-side rendering a low-fidelity placeholder and then replacing it with the full graphic using JavaScript on the client.
+
+To allow importing of a JavaScript page loader without running it, have the page loader check whether \`process.argv[1]\` is the same as \`import.meta.url\` before running:
+
+~~~js run=false
+import {fileURLToPath} from "node:url";
+
+if (process.argv[1] === fileURLToPath(import.meta.url)) {
+ process.stdout.write(\`# Hello page\`);
+}
+~~~
+
+`);
diff --git a/docs/params.md b/docs/params.md
new file mode 100644
index 000000000..c3f907043
--- /dev/null
+++ b/docs/params.md
@@ -0,0 +1,157 @@
+# Parameterized routes
+
+Parameterized routes allow a single [Markdown](./markdown) source file or [page loader](./page-loaders) to generate many pages, or a single [data loader](./data-loaders) to generate many files.
+
+A parameterized route is denoted by square brackets, such as `[param]`, in a file or directory name. For example, the following project structure could be used to generate a page for many products:
+
+```
+.
+├─ src
+│ ├─ index.md
+│ └─ products
+│ └─ [product].md
+└─ ⋯
+```
+
+(File and directory names can also be partially parameterized such as `prefix-[param].md` or `[param]-suffix.md`, or contain multiple parameters such as `[year]-[month]-[day].md`.)
+
+The [**dynamicPaths** config option](./config#dynamicPaths) would then specify the list of product pages:
+
+```js run=false
+export default {
+ dynamicPaths: [
+ "/products/100736",
+ "/products/221797",
+ "/products/399145",
+ "/products/475651",
+ …
+ ]
+};
+```
+
+Rather than hard-coding the list of paths as above, you’d more commonly use code to enumerate them, say by querying a database for products. In this case, you can either use [top-level await](https://v8.dev/features/top-level-await) or specify the **dynamicPaths** config option as a function that returns an [async iterable](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Iteration_protocols#the_async_iterator_and_async_iterable_protocols). For example, using [Postgres.js](https://github.com/porsager/postgres/blob/master/README.md#usage) you might say:
+
+```js run=false
+import postgres from "postgres";
+
+const sql = postgres(); // Note: uses psql environment variables
+
+export default {
+ async *dynamicPaths() {
+ for await (const {id} of sql`SELECT id FROM products`.cursor()) {
+ yield `/products/${id}`;
+ }
+ }
+};
+```
+
+## Params in JavaScript
+
+Within a parameterized page, `observable.params.param` exposes the value of the parameter `param` to JavaScript [fenced code blocks](./javascript#fenced-code-blocks) and [inline expressions](./javascript#inline-expressions), and likewise for any imported [local modules](./imports#local-imports) with parameterized routes. For example, to display the value of the `product` parameter in Markdown:
+
+```md run=false
+The current product is ${observable.params.product}.
+```
+
+Since parameter values are known statically at build time, you can reference parameter values in calls to `FileAttachment`. For example, to load the JSON file `/products/[product].json` for the corresponding product from the page `/products/[product].md`, you could say:
+
+```js run=false
+const info = FileAttachment(`${observable.params.product}.json`).json();
+```
+
+This is an exception: otherwise `FileAttachment` only accepts a static string literal as an argument since Framework uses [static analysis](./files#static-analysis) to find referenced files. If you need more flexibility, consider using a [page loader](./page-loaders) to generate the page.
+
+## Params in data loaders
+
+Parameter values are passed as command-line arguments such as `--product=42` to parameterized [data loaders](./data-loaders). In a JavaScript data loader, you can use [`parseArgs`](https://nodejs.org/api/util.html#utilparseargsconfig) from `node:util` to parse command-line arguments.
+
+For example, here is a parameterized data loader `sales-[product].csv.js` that generates a CSV of daily sales totals for a particular product by querying a PostgreSQL database:
+
+```js run=false
+import {parseArgs} from "node:util";
+import {csvFormat} from "d3-dsv";
+import postgres from "postgres";
+
+const sql = postgres(); // Note: uses psql environment variables
+
+const {
+ values: {product}
+} = parseArgs({
+ options: {product: {type: "string"}}
+});
+
+const sales = await sql`
+ SELECT
+ DATE(sale_date) AS sale_day,
+ SUM(quantity) AS total_quantity_sold,
+ SUM(total_amount) AS total_sales_amount
+ FROM
+ sales
+ WHERE
+ product_id = ${product}
+ GROUP BY
+ DATE(sale_date)
+ ORDER BY
+ sale_day
+`;
+
+process.stdout.write(csvFormat(sales));
+
+await sql.end();
+```
+
+Using the above data loader, you could then load `sales-42.csv` to get the daily sales data for product 42.
+
+## Params in page loaders
+
+As with data loaders, parameter values are passed as command-line arguments such as `--product=42` to parameterized [page loaders](./page-loaders). In a JavaScript page loader, you can use [`parseArgs`](https://nodejs.org/api/util.html#utilparseargsconfig) from `node:util` to parse command-line arguments. You can then bake parameter values into the resulting page code, or reference them dynamically [in client-side JavaScript](#params-in-java-script) using `observable.params`.
+
+For example, here is a parameterized page loader `sales-[product].md.js` that renders a chart with daily sales numbers for a particular product, loading the data from the parameterized data loader `sales-[product].csv.js` shown above:
+
+~~~~js run=false
+import {parseArgs} from "node:util";
+
+const {
+ values: {product}
+} = parseArgs({
+ options: {product: {type: "string"}}
+});
+
+process.stdout.write(`# Sales of product ${product}
+
+~~~js
+const sales = FileAttachment(\`sales-${product}.csv\`).csv({typed: true});
+~~~
+
+~~~js
+Plot.plot({
+ x: {interval: "day", label: null},
+ y: {grid: true},
+ marks: [
+ Plot.barY(sales, {x: "sale_day", y: "total_sales_amount", tip: true}),
+ Plot.ruleY([0])
+ ]
+})
+~~~
+
+`);
+~~~~
+
+In a page generated by a JavaScript page loader, you typically don’t reference `observable.params`; instead, bake the current parameter values directly into the generated code. (You can still reference `observable.params` in the generated client-side JavaScript if you want to.) Framework’s [theme previews](./themes) are implemented as parameterized page loaders; see [their source](https://github.com/observablehq/framework/blob/main/docs/theme/%5Btheme%5D.md.ts) for a practical example.
+
+## Precedence
+
+If multiple sources match a particular route, Framework chooses the most specific match. Exact matches are preferred over parameterized matches, and higher directories (closer to the root) are given priority over lower directories.
+
+For example, for the page `/product/42`, the following sources might be considered:
+
+* `/product/42.md` (exact match on static file)
+* `/product/42.md.js` (exact match on page loader)
+* `/product/[id].md` (parameterized static file)
+* `/product/[id].md.js` (parameterized page loader)
+* `/[category]/42.md` (static file in parameterized directory)
+* `/[category]/42.md.js` (page loader in parameterized directory)
+* `/[category]/[product].md` (etc.)
+* `/[category]/[product].md.js`
+
+(For brevity, only JavaScript page loaders are shown above; in practice Framework will consider all registered interpreters when checking for page loaders. [Archive data loaders](./data-loaders#archives) are also not shown.)
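
The segment-matching part of this rule can be sketched as follows (an illustration only, not Framework's implementation; `specificity` and `best` are hypothetical helpers, and the static-file-vs-loader tiebreak is omitted):

```javascript
// Score each path segment: 1 for an exact segment, 0 for a parameterized one
// such as "[id]". Earlier segments (closer to the root) are compared first.
function specificity(route) {
  return route.split("/").filter(Boolean).map((segment) => (segment.includes("[") ? 0 : 1));
}

// Pick the most specific route: exact segments beat parameterized segments,
// with differences in higher directories taking priority.
function best(routes) {
  return [...routes].sort((a, b) => {
    const sa = specificity(a);
    const sb = specificity(b);
    for (let i = 0; i < Math.max(sa.length, sb.length); i++) {
      if ((sb[i] ?? 0) !== (sa[i] ?? 0)) return (sb[i] ?? 0) - (sa[i] ?? 0);
    }
    return 0;
  })[0];
}

// The exact match wins over both parameterized candidates.
console.log(best(["/[category]/42.md", "/product/[id].md", "/product/42.md"]));
```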
diff --git a/docs/project-structure.md b/docs/project-structure.md
index 7b4000655..bd535c872 100644
--- a/docs/project-structure.md
+++ b/docs/project-structure.md
@@ -38,7 +38,7 @@ This is the “source root” — where your source files live. It doesn’t hav
#### `src/.observablehq/cache`
-This is where the [data loader](./loaders) cache lives. You don’t typically have to worry about this since it’s autogenerated when the first data loader is referenced. You can `rm -rf src/.observablehq/cache` to clean the cache and force data loaders to re-run.
+This is where the [data loader](./data-loaders) cache lives. You don’t typically have to worry about this since it’s autogenerated when the first data loader is referenced. You can `rm -rf src/.observablehq/cache` to clean the cache and force data loaders to re-run.
#### `src/.observablehq/deploy.json`
@@ -50,7 +50,7 @@ You can put shared [JavaScript modules](./imports) anywhere in your source root,
#### `src/data`
-You can put [data loaders](./loaders) or static files anywhere in your source root, but we recommend putting them here.
+You can put [data loaders](./data-loaders) or static files anywhere in your source root, but we recommend putting them here.
#### `src/index.md`
@@ -69,10 +69,10 @@ Framework uses file-based routing: each page in your project has a corresponding
├─ src
│ ├─ hello.md
│ └─ index.md
-└─ ...
+└─ ⋯
```
-
+
When the site is built, the output root (`dist`) will contain two corresponding static HTML pages (`hello.html` and `index.html`), along with a few additional assets needed for the site to work.
@@ -80,12 +80,18 @@ When the site is built, the output root (`dist`) will contain two corresponding
.
├─ dist
│ ├─ _observablehq
-│ │ └─ ... # additional assets for serving the site
+│ │ └─ ⋯ # additional assets for serving the site
│ ├─ hello.html
│ └─ index.html
-└─ ...
+└─ ⋯
```
+
+While normally a Markdown file generates only a single page, Framework also supports [parameterized pages](./params) (also called _dynamic routes_), allowing a Markdown file to generate many pages with different data.
+
For this site, routes map to files as:
```
@@ -103,11 +109,11 @@ Pages can live in folders. For example:
.
├─ src
│ ├─ missions
-| | ├─ index.md
-| | ├─ apollo.md
+│ │ ├─ index.md
+│ │ ├─ apollo.md
│ │ └─ gemini.md
│ └─ index.md
-└─ ...
+└─ ⋯
```
With this setup, routes are served as:
@@ -125,11 +131,11 @@ As a variant of the above structure, you can move the `missions/index.md` up to
.
├─ src
│ ├─ missions
-| | ├─ apollo.md
+│ │ ├─ apollo.md
│ │ └─ gemini.md
│ ├─ missions.md
│ └─ index.md
-└─ ...
+└─ ⋯
```
Now routes are served as:
diff --git a/docs/sql.md b/docs/sql.md
index 86167f608..05361025e 100644
--- a/docs/sql.md
+++ b/docs/sql.md
@@ -5,9 +5,9 @@ sql:
# SQL
-This page covers client-side SQL using DuckDB. To run a SQL query on a remote database such as PostgreSQL or Snowflake, use a [data loader](./loaders).
+This page covers client-side SQL using DuckDB. To run a SQL query on a remote database such as PostgreSQL or Snowflake, use a [data loader](./data-loaders).
-Framework includes built-in support for client-side SQL powered by [DuckDB](./lib/duckdb). You can use SQL to query data from [CSV](./lib/csv), [TSV](./lib/csv), [JSON](./files#json), [Apache Arrow](./lib/arrow), [Apache Parquet](./lib/arrow#apache-parquet), and DuckDB database files, which can either be static or generated by [data loaders](./loaders).
+Framework includes built-in support for client-side SQL powered by [DuckDB](./lib/duckdb). You can use SQL to query data from [CSV](./lib/csv), [TSV](./lib/csv), [JSON](./files#json), [Apache Arrow](./lib/arrow), [Apache Parquet](./lib/arrow#apache-parquet), and DuckDB database files, which can either be static or generated by [data loaders](./data-loaders).
To use SQL, first register the desired tables in the page’s [front matter](./markdown#front-matter) using the **sql** option. Each key is a table name, and each value is the path to the corresponding data file. For example, to register a table named `gaia` from a Parquet file:
@@ -27,7 +27,7 @@ sql:
---
```
-For performance and reliability, we recommend using local files rather than loading data from external servers at runtime. You can use a [data loader](./loaders) to take a snapshot of remote data during build if needed.
+For performance and reliability, we recommend using local files rather than loading data from external servers at runtime. You can use a [data loader](./data-loaders) to take a snapshot of remote data during build if needed.