Compare commits

..

3 Commits

| Author | SHA1 | Message | Date |
| ---------- | ---------- | ------------------------------------------- | -------------------------- |
| Aaron Pham | 43d7da143e | chore: update link url | 2026-01-30 02:45:38 -05:00 |
| Aaron Pham | 2d7793062b | fix: type error | 2026-01-30 02:32:37 -05:00 |
| Aaron Pham | dba5a9c920 | feat(bases): migrate from vault to upstream | 2026-01-30 02:25:53 -05:00 |

All commits Signed-off-by: Aaron Pham <contact@aarnphm.xyz>
91 changed files with 7312 additions and 3157 deletions

.gitignore vendored

@@ -9,19 +9,3 @@ tsconfig.tsbuildinfo
private/
.replit
replit.nix
erl_crash.dump
# content/ is generated by the export script; only keep the placeholder
content/*
!content/.gitkeep
# static/ox-hugo/ is populated by ox-hugo during export
static/ox-hugo/
# Elixir/Mix build artifacts for the pipeline project
scripts/pipeline/_build/
scripts/pipeline/deps/
scripts/pipeline/erl_crash.dump
# Test helpers (not needed in production)
scripts/test.bib
scripts/test_pipeline.exs
/org-garden/deps/
/org-garden/_build/
/org-garden/result

AGENTS.md

@@ -1,254 +0,0 @@
# AGENTS.md - Coding Agent Instructions
This document provides essential information for AI coding agents working in this repository.
## Project Overview
**Quartz** is a static site generator for publishing digital gardens and notes as websites.
Built with TypeScript, Preact, and unified/remark/rehype for markdown processing.
| Stack | Technology |
| ------------- | ----------------------------------------- |
| Language | TypeScript 5.x (strict mode) |
| Runtime | Node.js >=22 (v22.16.0 pinned) |
| Package Mgr | npm >=10.9.2 |
| Module System | ES Modules (`"type": "module"`) |
| UI Framework | Preact 10.x (JSX with `react-jsx` pragma) |
| Build Tool | esbuild |
| Styling | SCSS via esbuild-sass-plugin |
## Environment
This is a Nix project. Use the provided `flake.nix` to enter a dev shell with Node.js 22 and npm:
```bash
nix develop
```
All `npm` commands below must be run inside the dev shell.
## Build, Lint, and Test Commands
```bash
# Type check and format check (CI validation)
npm run check
# Auto-format code with Prettier
npm run format
# Run all tests
npm run test
# Run a single test file
npx tsx --test quartz/util/path.test.ts
# Run tests matching a pattern (use --test-name-pattern)
npx tsx --test --test-name-pattern="typeguards" quartz/util/path.test.ts
# Build the static site
npx quartz build
# Build and serve with hot reload
npx quartz build --serve
# Profile build performance
npm run profile
```
### Test Files Location
Tests use Node.js native test runner via `tsx`. Test files follow the `*.test.ts` pattern:
- `quartz/util/path.test.ts`
- `quartz/util/fileTrie.test.ts`
- `quartz/components/scripts/search.test.ts`
## Code Style Guidelines
### Prettier Configuration (`.prettierrc`)
```json
{
"printWidth": 100,
"tabWidth": 2,
"semi": false,
"trailingComma": "all",
"quoteProps": "as-needed"
}
```
**No ESLint** - only Prettier for formatting. Run `npm run format` before committing.
### TypeScript Configuration
- **Strict mode enabled** (`strict: true`)
- `noUnusedLocals: true` - no unused variables
- `noUnusedParameters: true` - no unused function parameters
- JSX configured for Preact (`jsxImportSource: "preact"`)
### Import Conventions
```typescript
// 1. External packages first
import { PluggableList } from "unified"
import { visit } from "unist-util-visit"
// 2. Internal utilities/types (relative paths)
import { QuartzTransformerPlugin } from "../types"
import { FilePath, slugifyFilePath } from "../../util/path"
import { i18n } from "../../i18n"
```
### Naming Conventions
| Element | Convention | Example |
| ---------------- | ------------ | ----------------------------------- |
| Files (utils) | camelCase | `path.ts`, `fileTrie.ts` |
| Files (comps) | PascalCase | `TableOfContents.tsx`, `Search.tsx` |
| Types/Interfaces | PascalCase | `QuartzComponent`, `FullSlug` |
| Type Guards | `is*` prefix | `isFilePath()`, `isFullSlug()` |
| Constants | UPPER_CASE | `QUARTZ`, `UPSTREAM_NAME` |
| Options types | `Options` | `interface Options { ... }` |
### Branded Types Pattern
This codebase uses branded types for type-safe path handling:
```typescript
type SlugLike<T> = string & { __brand: T }
export type FilePath = SlugLike<"filepath">
export type FullSlug = SlugLike<"full">
export type SimpleSlug = SlugLike<"simple">
// Always validate with type guards before using
export function isFilePath(s: string): s is FilePath { ... }
```
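A minimal runnable sketch of the guard in action — the validation logic below is simplified for illustration (the real checks live in `quartz/util/path.ts`), but it shows how the guard narrows a plain `string` to the branded type:

```typescript
type SlugLike<T> = string & { __brand: T }
export type FilePath = SlugLike<"filepath">

// Simplified guard: the real implementation enforces more invariants
export function isFilePath(s: string): s is FilePath {
  return s.length > 0 && !s.startsWith("/") && s.includes(".")
}

// Only accepts the branded type, never a raw string
function readNote(fp: FilePath): string {
  return `reading ${fp}`
}

const candidate = "notes/index.md"
if (isFilePath(candidate)) {
  // Inside this branch, candidate is narrowed to FilePath
  readNote(candidate)
}
```

Passing an unvalidated `string` to `readNote` is a compile error, which is the point of the pattern.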
### Component Pattern (Preact)
Components use a factory function pattern with attached static properties:
```typescript
export default ((userOpts?: Partial<Options>) => {
const opts: Options = { ...defaultOptions, ...userOpts }
const ComponentName: QuartzComponent = ({ cfg, displayClass }: QuartzComponentProps) => {
return <div class={classNames(displayClass, "component-name")}>...</div>
}
ComponentName.css = style // SCSS styles
ComponentName.afterDOMLoaded = script // Client-side JS
return ComponentName
}) satisfies QuartzComponentConstructor
```
### Plugin Pattern
Three plugin types: transformers, filters, and emitters.
```typescript
export const PluginName: QuartzTransformerPlugin<Partial<Options>> = (userOpts) => {
const opts = { ...defaultOptions, ...userOpts }
return {
name: "PluginName",
markdownPlugins(ctx) { return [...] },
htmlPlugins(ctx) { return [...] },
externalResources(ctx) { return { js: [], css: [] } },
}
}
```
### Testing Pattern
Use Node.js native test runner with `assert`:
```typescript
import test, { describe, beforeEach } from "node:test"
import assert from "node:assert"
describe("FeatureName", () => {
test("should do something", () => {
assert.strictEqual(actual, expected)
assert.deepStrictEqual(actualObj, expectedObj)
assert(condition) // truthy assertion
assert(!condition) // falsy assertion
})
})
```
### Error Handling
- Use `try/catch` for critical operations (file I/O, parsing)
- Custom `trace` utility for error reporting with stack traces
- `process.exit(1)` for fatal errors
- `console.warn()` for non-fatal issues
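A sketch of the non-fatal branch of that flow — `reportError` and `parseConfig` are hypothetical stand-ins (the real error reporting goes through the `trace` utility), but the try/catch-then-warn shape matches the convention:

```typescript
// Hypothetical stand-in for the trace utility's warning path
function reportError(msg: string, err: Error): void {
  console.warn(`${msg}: ${err.message}`)
}

// Non-fatal parse: warn and fall back to defaults instead of exiting
function parseConfig(raw: string): Record<string, unknown> {
  try {
    return JSON.parse(raw) as Record<string, unknown>
  } catch (err) {
    reportError("failed to parse config, using defaults", err as Error)
    return {}
  }
}
```

A truly fatal error (e.g. an unreadable content directory) would instead log via `trace` and call `process.exit(1)`.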
### Async Patterns
- Prefer `async/await` over raw promises
- Use async generators (`async *emit()`) for streaming file output
- Use `async-mutex` for concurrent build protection
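The async-generator style can be sketched as follows — the `emit` name and output paths are illustrative only, not Quartz's actual emitter API, and the `async-mutex` guard is omitted to keep the sketch dependency-free:

```typescript
// Illustrative emitter: yields each output path as it is "written"
async function* emit(files: string[]): AsyncGenerator<string> {
  for (const file of files) {
    // In a real emitter, each yield follows an awaited disk write
    yield `public/${file.replace(/\.md$/, ".html")}`
  }
}

// Consumers drain the generator with for-await, so output streams
// instead of accumulating in memory
async function collect(): Promise<string[]> {
  const out: string[] = []
  for await (const p of emit(["index.md", "about.md"])) out.push(p)
  return out
}
```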
## Project Structure
```
quartz/
├── bootstrap-cli.mjs # CLI entry point
├── build.ts # Build orchestration
├── cfg.ts # Configuration types
├── components/ # Preact UI components
│ ├── *.tsx # Components
│ ├── scripts/ # Client-side scripts (*.inline.ts)
│ └── styles/ # Component SCSS
├── plugins/
│ ├── transformers/ # Markdown AST transformers
│ ├── filters/ # Content filters
│ ├── emitters/ # Output generators
│ └── types.ts # Plugin type definitions
├── processors/ # Build pipeline (parse/filter/emit)
├── util/ # Utility functions
└── i18n/ # Internationalization (30+ locales)
```
## Branch Workflow
This is a fork of [jackyzha0/quartz](https://github.com/jackyzha0/quartz) with org-roam customizations.
| Branch | Purpose |
| ----------- | ------------------------------------------------ |
| `main` | Clean mirror of upstream quartz — no custom code |
| `org-roam` | Default branch — all customizations live here |
| `feature/*` | Short-lived branches off `org-roam` |
### Pulling Upstream Updates
```bash
git checkout main
git fetch upstream
git merge upstream/main
git checkout org-roam
git merge main
# Resolve conflicts if any, then commit
```
### Working on Features
```bash
git checkout org-roam
git checkout -b feature/my-feature
# ... work ...
git checkout org-roam
git merge feature/my-feature
git branch -d feature/my-feature
```
**Merge direction:** `upstream → main → org-roam → feature/*`
## Important Notes
- **Client-side scripts**: Use `.inline.ts` suffix, bundled via esbuild
- **Isomorphic code**: `quartz/util/path.ts` must not use Node.js APIs
- **Incremental builds**: Plugins can implement `partialEmit` for efficiency
- **Markdown flavors**: Supports Obsidian (`ofm.ts`) and Roam (`roam.ts`) syntax
- **Pipeline build artifacts**: `scripts/pipeline/_build/` and `scripts/pipeline/deps/`
are gitignored — run `mix deps.get` inside `scripts/pipeline/` after a fresh clone


@@ -1,96 +1,13 @@
# Quartz v4 — org-roam edition
# Quartz v4
> "[One] who works with the door open gets all kinds of interruptions, but [they] also occasionally gets clues as to what the world is and what might be important." — Richard Hamming
> [One] who works with the door open gets all kinds of interruptions, but [they] also occasionally gets clues as to what the world is and what might be important. — Richard Hamming
Quartz is a set of tools that helps you publish your [digital garden](https://jzhao.xyz/posts/networked-thought) and notes as a website for free.
This fork adds first-class support for [org-roam](https://www.orgroam.com/) notes via [ox-hugo](https://ox-hugo.scripter.co/).
🔗 Upstream documentation: https://quartz.jzhao.xyz/
🔗 Read the documentation and get started: https://quartz.jzhao.xyz/
[Join the Discord Community](https://discord.gg/cRFFHYye7t)
## Quick Start
### Prerequisites
This project uses Nix. Enter the development shell, which provides Node.js 22, Elixir, and Emacs with ox-hugo:
```bash
nix develop
```
All commands below must be run inside this shell.
```bash
npm install
```
### Building from org-roam notes
Your org-roam notes live in a separate directory. Point `NOTES_DIR` at it:
```bash
# Export notes to content/ and build the site
NOTES_DIR=/path/to/notes npm run build:notes
# Export, build, and serve with hot reload
NOTES_DIR=/path/to/notes npm run serve:notes
# Export only (wipes content/ and re-exports all .org files)
NOTES_DIR=/path/to/notes npm run export
```
The export pipeline runs in four phases:
1. **Wipe** `content/` clean
2. **Export** every `.org` file via `emacs --batch` + ox-hugo → Markdown
3. **Transform** — post-process the Markdown (citation resolution, etc.)
4. **Index** — generate a fallback `index.md` if none was exported
#### Citations (org-citar → Zotero links)
org-citar references (`[cite:@key]`) are resolved to clickable Zotero links.
With Zotero running and the [Better BibTeX](https://retorque.re/zotero-better-bibtex/)
plugin installed, no extra configuration is needed — the pipeline detects it
automatically and links directly to the PDF in your library.
```bash
# Use a local .bib file as fallback when Zotero is not running
BIBTEX_FILE=/path/to/refs.bib NOTES_DIR=/path/to/notes npm run export
# Control warning verbosity for unresolved keys
CITATION_MODE=strict NOTES_DIR=/path/to/notes npm run export
```
| Env var | Default | Purpose |
| --------------- | ------------------------ | ----------------------------------------- |
| `BIBTEX_FILE` | — | Path to `.bib` file for citation fallback |
| `ZOTERO_URL` | `http://localhost:23119` | Zotero Better BibTeX base URL |
| `CITATION_MODE` | `warn` | `silent` / `warn` / `strict` |
### Building without org-roam notes
If you manage `content/` directly with Markdown files:
```bash
# Build the site
npx quartz build
# Build and serve with hot reload
npx quartz build --serve
```
The site is generated in `public/`. When serving, visit http://localhost:8080.
### Development
```bash
npm run check # type check + format check
npm run format # auto-format with Prettier
npm run test # run tests
```
## Sponsors
<p align="center">


@@ -10,8 +10,10 @@ By default, Quartz ships with the [[ObsidianFlavoredMarkdown]] plugin, which is
It also ships with support for [frontmatter parsing](https://help.obsidian.md/Editing+and+formatting/Properties) with the same fields that Obsidian uses through the [[Frontmatter]] transformer plugin.
Finally, Quartz also provides [[CrawlLinks]] plugin, which allows you to customize Quartz's link resolution behaviour to match Obsidian.
Quartz also provides [[CrawlLinks]] plugin, which allows you to customize Quartz's link resolution behaviour to match Obsidian.
For dynamic database-like views, Quartz supports [[bases|Obsidian Bases]] through the [[ObsidianBases]] transformer and [[BasePage]] emitter plugins.
## Configuration
This functionality is provided by the [[ObsidianFlavoredMarkdown]], [[Frontmatter]] and [[CrawlLinks]] plugins. See the plugin pages for customization options.
This functionality is provided by the [[ObsidianFlavoredMarkdown]], [[ObsidianBases]], [[Frontmatter]] and [[CrawlLinks]] plugins. See the plugin pages for customization options.

docs/features/bases.md Normal file

@@ -0,0 +1,42 @@
---
title: Bases
tags:
- feature/transformer
- feature/emitter
---
Quartz supports [Obsidian Bases](https://help.obsidian.md/bases), which allow you to create dynamic, database-like views of your notes. See the [official Obsidian documentation](https://help.obsidian.md/bases/syntax) for the full syntax reference.
## Quick Example
Create a `.base` file in your content folder:
```yaml
filters:
and:
- file.hasTag("task")
views:
- type: table
name: "Task List"
order:
- file.name
- status
- due_date
```
Each view gets its own page at `<base-name>/<view-name>`.
## Wikilinks
Link to base views using the standard [[navigation.base#Plugins|wikilink]] syntax:
```markdown
[[my-base.base#Task List]]
```
This resolves to `my-base/Task-List`.
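That resolution can be sketched as a small string transform — `resolveBaseLink` is a hypothetical helper, a simplification of what the ObsidianBases transformer does:

```typescript
// Hypothetical helper: turns a base wikilink target like
// "my-base.base#Task List" into its emitted page path
function resolveBaseLink(target: string): string {
  const [base, view] = target.split("#")
  const baseName = base.replace(/\.base$/, "")
  // Spaces in view names become dashes in the page slug
  return `${baseName}/${(view ?? "").replace(/ /g, "-")}`
}
```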
## Configuration
This functionality is provided by the [[ObsidianBases]] transformer plugin (which parses `.base` files) and the [[BasePage]] emitter plugin (which generates the pages).

docs/navigation.base Normal file

@@ -0,0 +1,93 @@
filters:
and:
- file.ext == "md"
formulas:
doc_type: |
if(file.hasTag("plugin/transformer"), "transformer",
if(file.hasTag("plugin/emitter"), "emitter",
if(file.hasTag("plugin/filter"), "filter",
if(file.hasTag("component"), "component",
if(file.inFolder("features"), "feature",
if(file.inFolder("advanced"), "advanced",
if(file.inFolder("plugins"), "plugin", "guide")))))))
last_modified: file.mtime.relative()
section: |
if(file.inFolder("plugins"), "plugins",
if(file.inFolder("features"), "features",
if(file.inFolder("advanced"), "advanced",
if(file.inFolder("tags"), "tags", "core"))))
properties:
title:
displayName: Title
formula.doc_type:
displayName: Type
formula.last_modified:
displayName: Updated
formula.section:
displayName: Section
views:
- type: table
name: All Documentation
groupBy:
property: formula.section
direction: ASC
order:
- file.name
- title
- formula.doc_type
- formula.section
- formula.last_modified
sort:
- property: formula.doc_type
direction: ASC
- property: file.name
direction: ASC
columnSize:
file.name: 185
note.title: 268
formula.doc_type: 146
formula.section: 276
- type: table
name: Plugins
filters:
or:
- file.hasTag("plugin/transformer")
- file.hasTag("plugin/emitter")
- file.hasTag("plugin/filter")
groupBy:
property: formula.doc_type
direction: ASC
order:
- file.name
- title
- formula.doc_type
- formula.last_modified
- type: table
name: Components & Features
filters:
or:
- file.hasTag("component")
- file.inFolder("features")
order:
- file.name
- title
- formula.doc_type
- formula.last_modified
- type: list
name: Recently Updated
order:
- file.name
- formula.last_modified
limit: 15
- type: table
name: Core Guides
filters:
not:
- file.inFolder("plugins")
- file.inFolder("features")
- file.inFolder("advanced")
- file.inFolder("tags")
order:
- file.name
- title
- formula.last_modified

docs/plugins/BasePage.md Normal file

@@ -0,0 +1,18 @@
---
title: BasePage
tags:
- plugin/emitter
---
This plugin emits pages for each view defined in `.base` files. See [[bases]] for usage.
> [!note]
> For information on how to add, remove or configure plugins, see the [[configuration#Plugins|Configuration]] page.
Pages use `defaultListPageLayout` from `quartz.layout.ts` with `BaseContent` as the page body. To customize the layout, edit `quartz/components/pages/BaseContent.tsx`.
## API
- Category: Emitter
- Function name: `Plugin.BasePage()`.
- Source: [`quartz/plugins/emitters/basePage.tsx`](https://github.com/jackyzha0/quartz/blob/v4/quartz/plugins/emitters/basePage.tsx).


@@ -0,0 +1,20 @@
---
title: ObsidianBases
tags:
- plugin/transformer
---
This plugin parses `.base` files and compiles them for rendering. See [[bases]] for usage.
> [!note]
> For information on how to add, remove or configure plugins, see the [[configuration#Plugins|Configuration]] page.
## Configuration
- `emitWarnings`: If `true` (default), emits parse errors and type mismatches as warnings during build.
## API
- Category: Transformer
- Function name: `Plugin.ObsidianBases()`.
- Source: [`quartz/plugins/transformers/bases.ts`](https://github.com/jackyzha0/quartz/blob/v4/quartz/plugins/transformers/bases.ts).

flake.lock generated

@@ -1,126 +0,0 @@
{
"nodes": {
"flake-utils": {
"inputs": {
"systems": "systems"
},
"locked": {
"lastModified": 1731533236,
"narHash": "sha256-l0KFg5HjrsfsO/JpG+r7fRrqm12kzFHyUHqHCVpMMbI=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "11707dc2f618dd54ca8739b309ec4fc024de578b",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
"flake-utils_2": {
"inputs": {
"systems": "systems_2"
},
"locked": {
"lastModified": 1731533236,
"narHash": "sha256-l0KFg5HjrsfsO/JpG+r7fRrqm12kzFHyUHqHCVpMMbI=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "11707dc2f618dd54ca8739b309ec4fc024de578b",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
"nixpkgs": {
"locked": {
"lastModified": 1771008912,
"narHash": "sha256-gf2AmWVTs8lEq7z/3ZAsgnZDhWIckkb+ZnAo5RzSxJg=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "a82ccc39b39b621151d6732718e3e250109076fa",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"nixpkgs_2": {
"locked": {
"lastModified": 1771369470,
"narHash": "sha256-0NBlEBKkN3lufyvFegY4TYv5mCNHbi5OmBDrzihbBMQ=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "0182a361324364ae3f436a63005877674cf45efb",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"org-garden": {
"inputs": {
"flake-utils": "flake-utils_2",
"nixpkgs": "nixpkgs_2"
},
"locked": {
"path": "./org-garden",
"type": "path"
},
"original": {
"path": "./org-garden",
"type": "path"
},
"parent": []
},
"root": {
"inputs": {
"flake-utils": "flake-utils",
"nixpkgs": "nixpkgs",
"org-garden": "org-garden"
}
},
"systems": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
},
"systems_2": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
}
},
"root": "root",
"version": 7
}


@@ -1,47 +0,0 @@
{
description = "Quartz org-roam org notes to website";
inputs = {
nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
flake-utils.url = "github:numtide/flake-utils";
org-garden.url = "path:./org-garden";
};
outputs = { self, nixpkgs, flake-utils, org-garden }:
flake-utils.lib.eachDefaultSystem (system:
let
pkgs = import nixpkgs { inherit system; };
# Re-export org-garden's packages
orgGardenPkgs = org-garden.packages.${system};
# Convenience aliases
orgGardenApp = orgGardenPkgs.default;
in
{
# All packages come from org-garden
packages = orgGardenPkgs // {
default = orgGardenApp;
};
# Apps
apps = {
default = { type = "app"; program = "${orgGardenApp}/bin/org-garden"; };
org-garden = { type = "app"; program = "${orgGardenApp}/bin/org-garden"; };
};
# Dev shell for working on the repo
devShells.default = pkgs.mkShell {
buildInputs = [
pkgs.nodejs_22
pkgs.elixir
];
shellHook = ''
echo "Node $(node --version) / npm $(npm --version)"
elixir --version 2>/dev/null | head -1 || true
'';
};
});
}

Binary file not shown.


@@ -1,16 +0,0 @@
:PROPERTIES:
:ID: emt-madrid
:END:
#+title: EMT Madrid (urban bus)
Empresa Municipal de Transportes (EMT) operates the urban bus network
within the municipality of Madrid — around 200 lines.
* Notable lines
- *Line 27* — connects Embajadores with Barrio de la Concepción, one of the
oldest routes in the network.
- *Line 34* — Argüelles to Carabanchel, crossing the city centre via Gran Vía.
- *Búho (owl) lines* — night buses running from Cibeles from midnight to 6 am.
* See also
- [[id:madrid-transport][Madrid Public Transport]]


@@ -1,13 +0,0 @@
#+title: Example: Citation Reference
This file demonstrates how org-citar citations pass through ox-hugo into
markdown, where the pipeline transform resolves them.
The methodology described in [cite:@podlovics2021journalArticle] provides a
useful framework for analysis.
Multiple citations can appear together:
[cite:@podlovics2021journalArticle;@petersen2022book]
Older bare-cite style (org-roam v1 / older citar) also works:
cite:@podlovics2021journalArticle


@@ -1,33 +0,0 @@
:PROPERTIES:
:ID: example-images
:END:
#+title: Example: Image References
This note demonstrates the three image reference scenarios that the pipeline
must handle.
* Scenario 1: External image (URL)
An image hosted on the web — ox-hugo passes the URL through as-is and no
local file handling is needed.
#+attr_html: :link "https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcSkzsTuLOt8esM6enoKwkzqA52G3p9hldlf2g&s"
[[file:quartz-logo-external.png]]
* Scenario 2: Local image (same notes directory)
An image sitting next to this .org file inside the notes directory.
ox-hugo copies files referenced with a relative path into the Hugo =static/=
assets tree automatically.
#+CAPTION: Quartz logo (local, same notes dir)
[[file:quartz-logo.png]]
* Scenario 3: External image (outside notes directory)
An image that lives outside the notes directory entirely — for example a
shared assets folder or a system path. ox-hugo still copies it into =static/=
and rewrites the reference.
#+CAPTION: Quartz logo (outside notes dir)
[[file:../notes-external/external-location-image.png]]


@@ -1,17 +0,0 @@
:PROPERTIES:
:ID: madrid-transport
:END:
#+title: Madrid Public Transport
Madrid has one of the most extensive public transport networks in Europe,
operated primarily by [[id:crtm][Consorcio Regional de Transportes de Madrid]] (CRTM).
* Modes
- [[id:metro-madrid][Metro de Madrid]] — 13 lines, ~300 km of track
- [[id:emt-madrid][EMT Bus]] — urban buses within the city
- Cercanías — suburban rail run by Renfe
- Interurbano — regional buses to the wider Community of Madrid
* Ticketing
A single [[https://www.crtm.es][tarjeta transporte]] (transport card) works across all modes.
The Multi card covers zones A–C and is topped up at any metro station.


@@ -1,18 +0,0 @@
:PROPERTIES:
:ID: metro-madrid
:END:
#+title: Metro de Madrid
The Madrid Metro is the main rapid transit network in the city, opened in 1919.
It is the oldest metro in the Iberian Peninsula, predating Barcelona's by five years.
* Key Lines
| Line | Name | Colour | Terminals |
|------+-----------------+--------+------------------------------|
| L1 | Pinar de Chamartín–Valdecarros | Blue | Pinar de Chamartín / Valdecarros |
| L6 | Circular | Grey | Circular (loop) |
| L10 | — | Dark blue | Hospital Infanta Sofía / Tres Olivos |
* See also
- [[id:madrid-transport][Madrid Public Transport]]
- [[id:sol-interchange][Sol interchange]]


@@ -1,12 +0,0 @@
:PROPERTIES:
:ID: sol-interchange
:END:
#+title: Sol (interchange)
Sol is the busiest interchange station in the Madrid Metro, sitting beneath
Puerta del Sol in the city centre.
Lines serving Sol: [[id:metro-madrid][L1]], L2, L3.
It also connects to the Cercanías hub underneath, making it the de-facto
zero point of Madrid's public transport.

Binary file not shown.


@@ -1,22 +0,0 @@
:PROPERTIES:
:ID: crtm
:END:
#+title: CRTM — Consorcio Regional de Transportes de Madrid
The CRTM is the regional authority that coordinates public transport across
the Community of Madrid. It does not operate services directly but sets
fares, zones, and integration policy.
* Fare zones
| Zone | Coverage |
|-------+-----------------------------|
| A | Municipality of Madrid |
| B1 | Inner ring municipalities |
| B2 | Outer ring municipalities |
| B3 | Further suburban area |
| C1–C2 | Commuter belt |
* Related
- [[id:madrid-transport][Madrid Public Transport]]
- [[id:metro-madrid][Metro de Madrid]]
- [[id:emt-madrid][EMT Madrid]]


@@ -1,19 +0,0 @@
:PROPERTIES:
:ID: m30
:END:
#+title: M-30
The M-30 is Madrid's innermost ring road, a loop of roughly 32 km circling the
city centre.
It runs mostly underground through the Madrid Río tunnel section along the
Manzanares river, built during the 2004–2007 renovation that reclaimed the
riverbank as a public park.
* Key junctions
- Nudo Norte — connects to A-1 (Burgos) and A-6 (La Coruña)
- Nudo Sur — connects to A-4 (Cádiz) and A-42 (Toledo)
* See also
- [[id:crtm][CRTM]]
- [[id:madrid-transport][Madrid Public Transport]]


@@ -1,10 +0,0 @@
{
"$schema": "https://opencode.ai/config.json",
"mcp": {
"nixos": {
"type": "local",
"command": ["mcp-nixos"],
"enabled": true
}
}
}

org-garden/flake.lock generated

@@ -1,61 +0,0 @@
{
"nodes": {
"flake-utils": {
"inputs": {
"systems": "systems"
},
"locked": {
"lastModified": 1731533236,
"narHash": "sha256-l0KFg5HjrsfsO/JpG+r7fRrqm12kzFHyUHqHCVpMMbI=",
"owner": "numtide",
"repo": "flake-utils",
"rev": "11707dc2f618dd54ca8739b309ec4fc024de578b",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
"nixpkgs": {
"locked": {
"lastModified": 1771369470,
"narHash": "sha256-0NBlEBKkN3lufyvFegY4TYv5mCNHbi5OmBDrzihbBMQ=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "0182a361324364ae3f436a63005877674cf45efb",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
"root": {
"inputs": {
"flake-utils": "flake-utils",
"nixpkgs": "nixpkgs"
}
},
"systems": {
"locked": {
"lastModified": 1681028828,
"narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=",
"owner": "nix-systems",
"repo": "default",
"rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e",
"type": "github"
},
"original": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
}
},
"root": "root",
"version": 7
}


@@ -1,147 +0,0 @@
{
description = "Org-garden org-roam to website publishing pipeline";
inputs = {
nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
flake-utils.url = "github:numtide/flake-utils";
};
outputs = { self, nixpkgs, flake-utils }:
flake-utils.lib.eachDefaultSystem (system:
let
pkgs = import nixpkgs { inherit system; };
fs = pkgs.lib.fileset;
# =========================================================================
# Emacs with ox-hugo
# =========================================================================
# Needed at runtime by the escript (export calls `emacs --batch` with ox-hugo)
emacsWithOxHugo = (pkgs.emacsPackagesFor pkgs.emacs-nox).emacsWithPackages
(epkgs: [ epkgs.ox-hugo ]);
# =========================================================================
# Elixir Pipeline
# =========================================================================
# Pre-fetched Hex/Mix dependencies
mixDeps = pkgs.beamPackages.fetchMixDeps {
pname = "org-garden-mix-deps";
version = "0.1.0";
src = fs.toSource {
root = ./.;
fileset = fs.unions [
./mix.exs
./mix.lock
];
};
sha256 = "sha256-si7JAomY1HZ33m6ihUJP5i6PO39CE1clYvuMtn0CbPU=";
};
# Compiled org-garden escript
orgGardenEscript = pkgs.beamPackages.mixRelease {
pname = "org-garden";
version = "0.1.0";
src = fs.toSource {
root = ./.;
fileset = fs.unions [
./mix.exs
./mix.lock
./lib
];
};
escriptBinName = "org_garden";
mixFodDeps = mixDeps;
stripDebug = true;
};
# =========================================================================
# Quartz (fetched from upstream, patched)
# =========================================================================
# Pin to specific upstream commit
quartzVersion = "4.5.2";
quartzRev = "ec00a40aefca73596ab76e3ebe3a8e1129b43688";
# Fetch upstream Quartz source
quartzSrc = pkgs.fetchFromGitHub {
owner = "jackyzha0";
repo = "quartz";
rev = quartzRev;
hash = "sha256-HdtQB5+SRWiypOvAJuJa3Nodl4JHehp2Mz6Rj5gOG0w=";
};
# Apply our patches to Quartz
quartzPatched = pkgs.runCommand "quartz-patched-${quartzVersion}" {
src = quartzSrc;
} ''
cp -r $src $out
chmod -R u+w $out
cd $out
patch -p1 < ${./patches/01-glob-gitignore.patch}
patch -p1 < ${./patches/02-build-gitignore.patch}
patch -p1 < ${./patches/03-static-hugo.patch}
patch -p1 < ${./patches/04-oxhugofm-figure.patch}
'';
# Pre-fetch Quartz npm dependencies
quartzDeps = pkgs.buildNpmPackage {
pname = "org-garden-quartz-deps";
version = quartzVersion;
src = quartzPatched;
npmDepsHash = "sha256-7u+VlIx44B3/ivM9vLMIOn+e4TL4eS6B682vhS+Ikb4=";
dontBuild = true;
installPhase = ''
mkdir -p $out
cp -r node_modules $out/node_modules
'';
};
# =========================================================================
# Combined Application
# =========================================================================
# Wrapped org-garden with Quartz bundled
orgGardenApp = pkgs.writeShellApplication {
name = "org-garden";
runtimeInputs = [ emacsWithOxHugo pkgs.inotify-tools pkgs.nodejs_22 ];
text = ''
# Set up Quartz working directory
QUARTZ_WORK=$(mktemp -d)
trap 'rm -rf "$QUARTZ_WORK"' EXIT
# Copy patched Quartz source
cp -r ${quartzPatched}/. "$QUARTZ_WORK/"
chmod -R u+w "$QUARTZ_WORK"
# Copy default config files
cp ${./quartz-config/quartz.config.ts} "$QUARTZ_WORK/"
cp ${./quartz-config/quartz.layout.ts} "$QUARTZ_WORK/"
cp ${./quartz-config/globals.d.ts} "$QUARTZ_WORK/"
cp ${./quartz-config/index.d.ts} "$QUARTZ_WORK/"
# Link pre-built node_modules
ln -s ${quartzDeps}/node_modules "$QUARTZ_WORK/node_modules"
export QUARTZ_PATH="$QUARTZ_WORK"
export NODE_PATH="${pkgs.nodejs_22}/bin/node"
exec ${orgGardenEscript}/bin/org_garden "$@"
'';
};
in
{
packages.default = orgGardenApp;
packages.escript = orgGardenEscript;
packages.quartz-patched = quartzPatched;
devShells.default = pkgs.mkShell {
buildInputs = [
pkgs.elixir
pkgs.inotify-tools
emacsWithOxHugo
pkgs.nodejs_22
];
};
});
}


@@ -1,189 +0,0 @@
defmodule OrgGarden do
@moduledoc """
Org-roam to website publishing pipeline.
Orchestrates:
1. Org → Markdown export (via Emacs + ox-hugo)
2. Markdown transforms (citations, etc.)
3. Markdown → HTML + serving (via Quartz)
## Usage
opts = %{
zotero_url: "http://localhost:23119",
bibtex_file: System.get_env("BIBTEX_FILE"),
citation_mode: :warn # :silent | :warn | :strict
}
# Batch: all .md files in a directory
OrgGarden.run(content_dir, [OrgGarden.Transforms.Citations], opts)
# Targeted: specific files only
OrgGarden.run_on_files(["content/foo.md"], [OrgGarden.Transforms.Citations], opts)
# With pre-initialized transforms (for watch mode, avoids re-init)
initialized = OrgGarden.init_transforms([OrgGarden.Transforms.Citations], opts)
OrgGarden.run_on_files_with(["content/foo.md"], initialized, opts)
"""
require Logger
@type transform :: module()
@type initialized_transform :: {module(), term()}
@type opts :: map()
@doc "One-shot build: org files → static site"
def build(notes_dir, opts \\ []) do
OrgGarden.CLI.handle_build([notes_dir | opts_to_args(opts)])
end
@doc "Development server: watch + live reload"
def serve(notes_dir, opts \\ []) do
OrgGarden.CLI.handle_serve([notes_dir | opts_to_args(opts)])
end
@doc "Export only: org files → markdown (no Quartz)"
def export(notes_dir, opts \\ []) do
OrgGarden.CLI.handle_export([notes_dir | opts_to_args(opts)])
end
defp opts_to_args(opts) do
Enum.flat_map(opts, fn
{:output, v} -> ["--output", v]
{:port, v} -> ["--port", to_string(v)]
{:ws_port, v} -> ["--ws-port", to_string(v)]
{:watch, true} -> ["--watch"]
{:watch, false} -> []
_ -> []
end)
end
@doc """
Initialize transform modules. Returns a list of `{module, state}` tuples.
Call this once and reuse the result with `run_on_files_with/3` to avoid
re-initializing transforms on every file change (e.g., in watch mode).
"""
@spec init_transforms([transform()], opts()) :: [initialized_transform()]
def init_transforms(transforms, opts) do
Enum.map(transforms, fn mod ->
state = mod.init(opts)
{mod, state}
end)
end
@doc """
Tear down previously initialized transforms, releasing any resources.
"""
@spec teardown_transforms([initialized_transform()]) :: :ok
def teardown_transforms(initialized) do
Enum.each(initialized, fn {mod, state} ->
if function_exported?(mod, :teardown, 1) do
mod.teardown(state)
end
end)
:ok
end
@doc """
Run all transforms over every `.md` file under `content_dir`.
Initializes and tears down transforms automatically.
Returns `{:ok, stats}` where stats maps each transform to a count of files it changed.
"""
@spec run(String.t(), [transform()], opts()) :: {:ok, map()}
def run(content_dir, transforms, opts \\ %{}) do
md_files =
content_dir
|> Path.join("**/*.md")
|> Path.wildcard()
if md_files == [] do
Logger.warning("OrgGarden: no .md files found in #{content_dir}")
{:ok, %{}}
else
Logger.info(
"OrgGarden: processing #{length(md_files)} markdown files " <>
"with #{length(transforms)} transform(s)"
)
initialized = init_transforms(transforms, opts)
stats = apply_transforms(md_files, initialized, opts)
teardown_transforms(initialized)
{:ok, stats}
end
end
@doc """
Run all transforms over specific `.md` files only.
Initializes and tears down transforms automatically.
Files that don't exist are silently skipped.
"""
@spec run_on_files([String.t()], [transform()], opts()) :: {:ok, map()}
def run_on_files(file_paths, transforms, opts \\ %{}) do
existing = Enum.filter(file_paths, &File.exists?/1)
if existing == [] do
Logger.debug("OrgGarden: no files to process")
{:ok, %{}}
else
Logger.info("OrgGarden: processing #{length(existing)} file(s)")
initialized = init_transforms(transforms, opts)
stats = apply_transforms(existing, initialized, opts)
teardown_transforms(initialized)
{:ok, stats}
end
end
@doc """
Run pre-initialized transforms over specific `.md` files.
Does NOT call `init` or `teardown` — the caller manages the transform
lifecycle. Use this in watch mode to avoid re-initializing on every change.
"""
@spec run_on_files_with([String.t()], [initialized_transform()], opts()) :: {:ok, map()}
def run_on_files_with(file_paths, initialized, opts) do
existing = Enum.filter(file_paths, &File.exists?/1)
if existing == [] do
Logger.debug("OrgGarden: no files to process")
{:ok, %{}}
else
stats = apply_transforms(existing, initialized, opts)
{:ok, stats}
end
end
# -------------------------------------------------------------------
# Private
# -------------------------------------------------------------------
defp apply_transforms(md_files, initialized, opts) do
Enum.reduce(md_files, %{}, fn path, acc ->
original = File.read!(path)
{transformed, file_stats} =
Enum.reduce(initialized, {original, %{}}, fn {mod, state}, {content, fstats} ->
result = mod.apply(content, state, opts)
changed = if result != content, do: 1, else: 0
{result, Map.update(fstats, mod, changed, &(&1 + changed))}
end)
if transformed != original do
File.write!(path, transformed)
Logger.debug("OrgGarden: updated #{Path.relative_to_cwd(path)}")
end
Map.merge(acc, file_stats, fn _k, a, b -> a + b end)
end)
end
end
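
The change-counting fold in `apply_transforms/3` threads each file's content through every transform while tallying which transforms changed anything. A standalone sketch of the same fold, with plain `{name, fun}` pairs standing in for the real `{module, state}` tuples and an in-memory map standing in for files on disk:

```elixir
transforms = [
  {:upcase_foo, fn content -> String.replace(content, "foo", "FOO") end},
  {:noop, fn content -> content end}
]

files = %{"a.md" => "foo bar", "b.md" => "bar baz"}

stats =
  Enum.reduce(files, %{}, fn {_path, original}, acc ->
    {_final, file_stats} =
      Enum.reduce(transforms, {original, %{}}, fn {name, fun}, {content, fstats} ->
        result = fun.(content)
        changed = if result != content, do: 1, else: 0
        {result, Map.update(fstats, name, changed, &(&1 + changed))}
      end)

    # Per-file counts are summed across files, one counter per transform.
    Map.merge(acc, file_stats, fn _k, a, b -> a + b end)
  end)

stats  # => %{upcase_foo: 1, noop: 0}
```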

@@ -1,14 +0,0 @@
defmodule OrgGarden.Application do
@moduledoc false
use Application
@impl true
def start(_type, _args) do
children = [
{Finch, name: OrgGarden.Finch}
]
opts = [strategy: :one_for_one, name: OrgGarden.AppSupervisor]
Supervisor.start_link(children, opts)
end
end

@@ -1,375 +0,0 @@
defmodule OrgGarden.CLI do
@moduledoc """
Escript entry point for the org-garden pipeline.
## Commands
org-garden serve <notes-dir> [--port 8080] [--ws-port 3001]
org-garden build <notes-dir> [--output <path>]
org-garden export <notes-dir> [--watch]
### serve
Development server with watch + live reload. Starts both the org→md
watcher and Quartz in serve mode.
### build
One-shot build for CI/production. Exports org files, runs transforms,
then builds static site with Quartz.
### export
Just export org→md (current pipeline behavior). Use --watch for
incremental re-export on file changes.
## Arguments
notes-dir Path to the directory containing `.org` notes (required).
Also accepts the `NOTES_DIR` env var.
## Options
--output <path> Output root directory (used as ox-hugo base dir).
Defaults to the `OUTPUT_DIR` env var, or the current
working directory.
--content-dir <p> Output directory for exported Markdown. Defaults to
`<output>/content`.
--port <n> HTTP server port (default: 8080). Only for `serve`.
--ws-port <n> WebSocket hot reload port (default: 3001). Only for `serve`.
--watch After initial batch, watch notes-dir for changes and
incrementally re-export affected files. Only for `export`.
## Environment Variables
BIBTEX_FILE Path to a `.bib` file used as citation fallback.
ZOTERO_URL Zotero Better BibTeX base URL (default: http://localhost:23119).
CITATION_MODE silent | warn (default) | strict.
QUARTZ_PATH Path to quartz directory (required for serve/build).
NODE_PATH Node.js executable (default: node).
"""
require Logger
@transforms [OrgGarden.Transforms.Citations]
def main(argv) do
Application.ensure_all_started(:org_garden)
case argv do
["serve" | rest] -> handle_serve(rest)
["build" | rest] -> handle_build(rest)
["export" | rest] -> handle_export(rest)
# Legacy: treat bare args as export command for backward compatibility
[_ | _] -> handle_export(argv)
_ -> abort("Usage: org-garden <serve|build|export> <notes-dir> [options]")
end
end
# ---------------------------------------------------------------------------
# Command: serve
# ---------------------------------------------------------------------------
def handle_serve(argv) do
require_quartz_env()
{notes_dir, output_dir, content_dir, opts} = parse_serve_args(argv)
pipeline_opts = build_pipeline_opts()
# Initial batch export
wipe(content_dir)
export_all(notes_dir, output_dir)
run_pipeline(content_dir, pipeline_opts)
generate_index(content_dir)
IO.puts("==> Starting development server...")
{:ok, _pid} =
OrgGarden.Supervisor.start_link(
notes_dir: notes_dir,
output_dir: output_dir,
content_dir: content_dir,
pipeline_opts: pipeline_opts,
transforms: @transforms,
port: opts[:port] || 8080,
ws_port: opts[:ws_port] || 3001
)
IO.puts("==> Server running at http://localhost:#{opts[:port] || 8080}")
IO.puts("==> Watching #{notes_dir} for changes (Ctrl+C to stop)")
Process.sleep(:infinity)
end
defp parse_serve_args(argv) do
{opts, positional, _invalid} =
OptionParser.parse(argv,
strict: [
output: :string,
content_dir: :string,
port: :integer,
ws_port: :integer
]
)
notes_dir = extract_notes_dir(positional, "serve")
output_dir = extract_output_dir(opts)
content_dir = extract_content_dir(opts, output_dir)
{notes_dir, output_dir, content_dir, opts}
end
# ---------------------------------------------------------------------------
# Command: build
# ---------------------------------------------------------------------------
def handle_build(argv) do
quartz_path = require_quartz_env()
{notes_dir, output_dir, content_dir, _opts} = parse_build_args(argv)
pipeline_opts = build_pipeline_opts()
# Full batch export
wipe(content_dir)
export_all(notes_dir, output_dir)
run_pipeline(content_dir, pipeline_opts)
generate_index(content_dir)
node_path = System.get_env("NODE_PATH", "node")
IO.puts("==> Building static site with Quartz...")
{output, status} =
System.cmd(
node_path,
[
Path.join(quartz_path, "quartz/bootstrap-cli.mjs"),
"build",
"--directory",
content_dir,
"--output",
Path.join(output_dir, "public")
],
cd: quartz_path,
stderr_to_stdout: true
)
IO.puts(output)
if status != 0 do
abort("Quartz build failed with status #{status}")
end
IO.puts("==> Build complete. Output: #{Path.join(output_dir, "public")}")
end
defp parse_build_args(argv) do
{opts, positional, _invalid} =
OptionParser.parse(argv,
strict: [output: :string, content_dir: :string]
)
notes_dir = extract_notes_dir(positional, "build")
output_dir = extract_output_dir(opts)
content_dir = extract_content_dir(opts, output_dir)
{notes_dir, output_dir, content_dir, opts}
end
# ---------------------------------------------------------------------------
# Command: export (original pipeline behavior)
# ---------------------------------------------------------------------------
def handle_export(argv) do
{notes_dir, output_dir, content_dir, watch?} = parse_export_args(argv)
pipeline_opts = build_pipeline_opts()
# Phase 1-4: full batch export
wipe(content_dir)
export_all(notes_dir, output_dir)
run_pipeline(content_dir, pipeline_opts)
generate_index(content_dir)
md_count =
content_dir
|> Path.join("**/*.md")
|> Path.wildcard()
|> length()
IO.puts("==> Done. #{md_count} markdown files in #{content_dir}")
# Phase 5: optional watch mode
if watch? do
IO.puts("==> Watching #{notes_dir} for .org changes... (Ctrl+C to stop)")
{:ok, _pid} =
OrgGarden.Watcher.start_link(
notes_dir: notes_dir,
output_dir: output_dir,
content_dir: content_dir,
pipeline_opts: pipeline_opts,
transforms: @transforms
)
Process.sleep(:infinity)
end
end
defp parse_export_args(argv) do
{opts, positional, _invalid} =
OptionParser.parse(argv,
strict: [output: :string, content_dir: :string, watch: :boolean]
)
notes_dir = extract_notes_dir(positional, "export")
output_dir = extract_output_dir(opts)
content_dir = extract_content_dir(opts, output_dir)
watch? = Keyword.get(opts, :watch, false)
{notes_dir, output_dir, content_dir, watch?}
end
# ---------------------------------------------------------------------------
# Shared argument extraction
# ---------------------------------------------------------------------------
defp extract_notes_dir(positional, command) do
notes_dir =
case positional do
[dir | _] ->
dir
[] ->
System.get_env("NOTES_DIR") ||
abort("Usage: org-garden #{command} <notes-dir> [options]")
end
notes_dir = Path.expand(notes_dir)
unless File.dir?(notes_dir) do
abort("Error: notes directory does not exist: #{notes_dir}")
end
notes_dir
end
defp extract_output_dir(opts) do
(opts[:output] || System.get_env("OUTPUT_DIR") || File.cwd!())
|> Path.expand()
end
defp extract_content_dir(opts, output_dir) do
(opts[:content_dir] || Path.join(output_dir, "content"))
|> Path.expand()
end
# ---------------------------------------------------------------------------
# Phase 1: Wipe content/
# ---------------------------------------------------------------------------
defp wipe(content_dir) do
IO.puts("==> Wiping #{content_dir}")
File.mkdir_p!(content_dir)
content_dir
|> File.ls!()
|> Enum.reject(&(&1 == ".gitkeep"))
|> Enum.each(fn entry ->
Path.join(content_dir, entry) |> File.rm_rf!()
end)
end
# ---------------------------------------------------------------------------
# Phase 2: Export org files via Emacs + ox-hugo
# ---------------------------------------------------------------------------
defp export_all(notes_dir, output_dir) do
IO.puts("==> Exporting org files from #{notes_dir}")
case OrgGarden.Export.export_all(notes_dir, output_dir) do
{:ok, 0} ->
IO.puts("No .org files found in #{notes_dir}")
System.halt(0)
{:ok, count} ->
IO.puts(" exported #{count} file(s)")
{:error, failures} ->
IO.puts(:stderr, "\nFailed to export #{length(failures)} file(s):")
Enum.each(failures, fn {f, {:error, reason}} ->
IO.puts(:stderr, " #{f}: #{inspect(reason)}")
end)
System.halt(1)
end
end
# ---------------------------------------------------------------------------
# Phase 3: Markdown transformation pipeline
# ---------------------------------------------------------------------------
defp run_pipeline(content_dir, pipeline_opts) do
IO.puts("==> Running markdown pipeline")
{:ok, stats} = OrgGarden.run(content_dir, @transforms, pipeline_opts)
Enum.each(stats, fn {mod, count} ->
IO.puts(" #{inspect(mod)}: #{count} file(s) modified")
end)
end
# ---------------------------------------------------------------------------
# Phase 4: Generate default index.md if none was exported
# ---------------------------------------------------------------------------
defp generate_index(content_dir) do
IO.puts("==> Generating index")
OrgGarden.Index.generate(content_dir)
end
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
defp require_quartz_env do
case System.get_env("QUARTZ_PATH") do
nil ->
abort("""
Error: QUARTZ_PATH environment variable not set.
The 'serve' and 'build' commands require Quartz to be available.
Use the wrapper scripts that set up the environment:
nix run .#notes -- <notes-dir> # for serve
nix run .#build -- <notes-dir> # for build
Or set QUARTZ_PATH manually to point to a quartz-org-roam checkout
with node_modules installed.
For export-only mode (no Quartz), use:
org-garden export <notes-dir> [--watch]
""")
path ->
unless File.exists?(Path.join(path, "quartz/bootstrap-cli.mjs")) do
abort("Error: QUARTZ_PATH=#{path} does not contain quartz/bootstrap-cli.mjs")
end
path
end
end
defp build_pipeline_opts do
%{
zotero_url: System.get_env("ZOTERO_URL", "http://localhost:23119"),
bibtex_file: System.get_env("BIBTEX_FILE"),
citation_mode:
case System.get_env("CITATION_MODE", "warn") do
"silent" -> :silent
"strict" -> :strict
_ -> :warn
end
}
end
defp abort(message) do
IO.puts(:stderr, message)
System.halt(1)
end
end

@@ -1,135 +0,0 @@
defmodule OrgGarden.Export do
@moduledoc """
Org-to-Markdown export via Emacs batch + ox-hugo.
Provides both single-file and batch export, plus a helper to compute
the expected `.md` output path for a given `.org` source file.
"""
require Logger
@doc """
Export a single `.org` file to Markdown via `emacs --batch` + ox-hugo.
Returns `{:ok, exit_code}` with the emacs exit code (0 = success),
or `{:error, reason}` if the command could not be executed.
"""
@spec export_file(String.t(), String.t(), String.t()) :: {:ok, non_neg_integer()} | {:error, term()}
def export_file(orgfile, notes_dir, output_dir) do
section =
orgfile
|> Path.dirname()
|> Path.relative_to(notes_dir)
# ox-hugo requires static/ to exist for image asset copying
File.mkdir_p!(Path.join(output_dir, "static"))
{output, exit_code} =
System.cmd(
"emacs",
[
"--batch",
"--eval", "(require 'ox-hugo)",
"--eval", """
(org-cite-register-processor 'passthrough
:export-citation
(lambda (citation _style _backend _info)
(let ((keys (mapcar (lambda (ref)
(concat "@" (org-element-property :key ref)))
(org-cite-get-references citation))))
(format "[cite:%s]" (string-join keys ";")))))
""",
"--eval", "(setq org-cite-export-processors '((t passthrough)))",
"--eval", ~s[(setq org-hugo-base-dir "#{output_dir}")],
"--eval", ~s[(setq org-hugo-default-section-directory "#{section}")],
"--visit", orgfile,
"--funcall", "org-hugo-export-to-md"
],
stderr_to_stdout: true
)
filtered =
output
|> String.split("\n")
|> Enum.reject(&String.match?(&1, ~r/^Loading|^ad-handle|^For information/))
|> Enum.join("\n")
if filtered != "", do: Logger.info("emacs: #{filtered}")
if exit_code == 0 do
{:ok, exit_code}
else
{:error, {:emacs_exit, exit_code, filtered}}
end
rescue
e -> {:error, e}
end
@doc """
Export all `.org` files found under `notes_dir`.
Returns `{:ok, count}` where `count` is the number of successfully
exported files, or `{:error, failures}` if any files failed.
"""
@spec export_all(String.t(), String.t()) :: {:ok, non_neg_integer()} | {:error, list()}
def export_all(notes_dir, output_dir) do
org_files =
Path.join(notes_dir, "**/*.org")
|> Path.wildcard()
if org_files == [] do
Logger.warning("No .org files found in #{notes_dir}")
{:ok, 0}
else
Logger.info("Exporting #{length(org_files)} org file(s) from #{notes_dir}")
results =
Enum.map(org_files, fn orgfile ->
IO.puts(" exporting: #{orgfile}")
{orgfile, export_file(orgfile, notes_dir, output_dir)}
end)
failures =
Enum.filter(results, fn
{_, {:ok, _}} -> false
{_, {:error, _}} -> true
end)
if failures == [] do
{:ok, length(results)}
else
{:error, failures}
end
end
end
@doc """
Compute the expected `.md` path for a given `.org` file.
Uses the same section-mapping logic as ox-hugo: the relative directory
of the `.org` file within `notes_dir` becomes the section directory
under `content_dir`.
## Examples
iex> OrgGarden.Export.expected_md_path("/notes/bus/emt.org", "/notes", "/out/content")
"/out/content/bus/emt.md"
iex> OrgGarden.Export.expected_md_path("/notes/top-level.org", "/notes", "/out/content")
"/out/content/top-level.md"
"""
@spec expected_md_path(String.t(), String.t(), String.t()) :: String.t()
def expected_md_path(orgfile, notes_dir, content_dir) do
section =
orgfile
|> Path.dirname()
|> Path.relative_to(notes_dir)
basename = Path.basename(orgfile, ".org") <> ".md"
case section do
"." -> Path.join(content_dir, basename)
_ -> Path.join([content_dir, section, basename])
end
end
end

@@ -1,83 +0,0 @@
defmodule OrgGarden.Index do
@moduledoc """
Generates a fallback `index.md` in the content directory if none was
exported from an `.org` file.
The generated index lists all markdown pages alphabetically with links.
"""
@doc """
Generate `content_dir/index.md` if it does not already exist.
If an `index.md` was already created by ox-hugo (from an `index.org`),
it is left untouched.
"""
@spec generate(String.t()) :: :ok
def generate(content_dir) do
index_path = Path.join(content_dir, "index.md")
unless File.exists?(index_path) do
IO.puts(" generating default index.md")
pages =
Path.join(content_dir, "**/*.md")
|> Path.wildcard()
|> Enum.map(fn path ->
slug = Path.relative_to(path, content_dir) |> Path.rootname()
title =
path
|> File.read!()
|> then(fn content ->
case Regex.run(~r/^title\s*=\s*"(.+)"/m, content) do
[_, t] -> t
_ -> slug
end
end)
{slug, title}
end)
|> Enum.sort_by(fn {_, title} -> title end)
|> Enum.map(fn {slug, title} -> "- [#{title}](#{slug})" end)
|> Enum.join("\n")
File.write!(index_path, """
---
title: Index
---
#{pages}
""")
end
:ok
end
@doc """
Regenerate the index by removing any previously generated one first.
Only removes the index if it was generated by us (contains `title: Index`).
User-exported index files (from `index.org`) are left untouched.
"""
@spec regenerate(String.t()) :: :ok
def regenerate(content_dir) do
index_path = Path.join(content_dir, "index.md")
if File.exists?(index_path) do
content = File.read!(index_path)
if generated_index?(content) do
File.rm!(index_path)
end
end
generate(content_dir)
end
defp generated_index?(content) do
# Our generated index uses "title: Index" in YAML frontmatter.
# ox-hugo uses TOML frontmatter (title = "..."), so this won't
# match user-exported files.
String.contains?(content, "title: Index")
end
end
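
The title extraction in `generate/1` relies on ox-hugo emitting TOML frontmatter (`title = "..."`). A standalone sketch of that lookup, against an invented page:

```elixir
extract_title = fn content, fallback ->
  case Regex.run(~r/^title\s*=\s*"(.+)"/m, content) do
    [_, t] -> t
    _ -> fallback
  end
end

toml = """
+++
title = "Emergency Medical Technician"
+++
body text
"""

title = extract_title.(toml, "bus/emt")            # => "Emergency Medical Technician"
missing = extract_title.("no frontmatter", "slug") # => "slug"
```

Because the generated index uses YAML frontmatter (`title: Index`) instead, `generated_index?/1` can tell the two apart.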

@@ -1,118 +0,0 @@
defmodule OrgGarden.Quartz do
@moduledoc """
Manages Quartz Node.js process as an Erlang Port.
Required environment:
- QUARTZ_PATH: path to quartz repo (with node_modules)
- NODE_PATH: path to node executable (default: "node")
Starts Quartz in serve mode (`npx quartz build --serve`) and forwards
all stdout/stderr output to the Logger with a `[quartz]` prefix.
If Quartz exits, this GenServer will stop, which triggers the supervisor
to restart the entire supervision tree (strategy: :one_for_all).
"""
use GenServer
require Logger
defstruct [:port, :quartz_path, :content_dir, :http_port, :ws_port]
# -------------------------------------------------------------------
# Client API
# -------------------------------------------------------------------
@doc """
Start the Quartz process as a linked GenServer.
## Options
* `:content_dir` — directory where markdown files are located (required)
* `:port` — HTTP server port (default: 8080)
* `:ws_port` — WebSocket hot reload port (default: 3001)
"""
def start_link(opts) do
GenServer.start_link(__MODULE__, opts, name: __MODULE__)
end
# -------------------------------------------------------------------
# GenServer callbacks
# -------------------------------------------------------------------
@impl true
def init(opts) do
quartz_path =
System.get_env("QUARTZ_PATH") ||
raise "QUARTZ_PATH environment variable not set"
node_path = System.get_env("NODE_PATH", "node")
content_dir = Keyword.fetch!(opts, :content_dir)
http_port = Keyword.get(opts, :port, 8080)
ws_port = Keyword.get(opts, :ws_port, 3001)
cli_path = Path.join(quartz_path, "quartz/bootstrap-cli.mjs")
unless File.exists?(cli_path) do
raise "Quartz CLI not found at #{cli_path}. Check QUARTZ_PATH."
end
args = [
cli_path,
"build",
"--serve",
"--directory", content_dir,
"--port", to_string(http_port),
"--wsPort", to_string(ws_port)
]
Logger.info("[quartz] Starting: #{node_path} #{Enum.join(args, " ")}")
Logger.info("[quartz] Working directory: #{quartz_path}")
port =
Port.open({:spawn_executable, node_path}, [
:binary,
:exit_status,
:stderr_to_stdout,
args: args,
cd: quartz_path,
env: [{~c"NODE_NO_WARNINGS", ~c"1"}]
])
state = %__MODULE__{
port: port,
quartz_path: quartz_path,
content_dir: content_dir,
http_port: http_port,
ws_port: ws_port
}
{:ok, state}
end
@impl true
def handle_info({port, {:data, data}}, %{port: port} = state) do
data
|> String.split("\n", trim: true)
|> Enum.each(&Logger.info("[quartz] #{&1}"))
{:noreply, state}
end
@impl true
def handle_info({port, {:exit_status, status}}, %{port: port} = state) do
Logger.error("[quartz] Process exited with status #{status}")
{:stop, {:quartz_exit, status}, state}
end
@impl true
def terminate(_reason, %{port: port}) when is_port(port) do
# Attempt graceful shutdown
Port.close(port)
:ok
rescue
_ -> :ok
end
def terminate(_reason, _state), do: :ok
end
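
The Port lifecycle above (spawn, `{:data, _}` messages, then `{:exit_status, _}`) can be exercised with any executable. A minimal round-trip, assuming a Unix `echo` is on `PATH`:

```elixir
echo = System.find_executable("echo")

port =
  Port.open({:spawn_executable, echo}, [
    :binary,
    :exit_status,
    args: ["hello from port"]
  ])

# stdout arrives as {port, {:data, binary}} messages...
data =
  receive do
    {^port, {:data, d}} -> d
  end

# ...followed by a single {port, {:exit_status, n}} once the process exits.
status =
  receive do
    {^port, {:exit_status, s}} -> s
  end

String.trim(data)  # => "hello from port"
status             # => 0
```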

@@ -1,178 +0,0 @@
defmodule OrgGarden.Resolvers.BibTeX do
@moduledoc """
Resolves citation keys from a local BibTeX (.bib) file.
Configured via the `BIBTEX_FILE` environment variable, or passed directly
as `opts.bibtex_file`. The file is parsed once at init time and the
resulting entry map is reused for all lookups.
Supports extracting: author last names, year, title, DOI, URL.
BibTeX entry format parsed:
@type{citationkey,
author = {Last, First and Last2, First2},
year = {2021},
title = {Some Title},
doi = {10.xxxx/yyyy},
url = {https://example.com},
}
Returns `{:ok, %{label: "Author, Year", url: "..."}}` or `:error`.
"""
require Logger
# ------------------------------------------------------------------
# Public API
# ------------------------------------------------------------------
@doc """
Parse a .bib file and return a map of `%{citation_key => entry_map}`.
Returns `{:ok, entries}` or `{:error, reason}`.
"""
@spec load(String.t()) :: {:ok, map()} | {:error, term()}
def load(path) do
case File.read(path) do
{:ok, content} ->
entries = parse_entries(content)
Logger.info("BibTeX: loaded #{map_size(entries)} entries from #{path}")
{:ok, entries}
{:error, reason} ->
{:error, reason}
end
end
@doc """
Resolve a citation key from pre-loaded BibTeX entries.
"""
@spec resolve(String.t(), map()) :: {:ok, map()} | :error
def resolve(key, entries) do
case Map.fetch(entries, key) do
{:ok, entry} ->
label = build_label(entry)
url = build_url(entry)
{:ok, %{label: label, url: url}}
:error ->
:error
end
end
# ------------------------------------------------------------------
# Parsing
# ------------------------------------------------------------------
# Match @type{key, ...fields...}
# Entries are split on "@type{" boundaries and each chunk is parsed with
# regexes; @field_regex below tolerates one level of nested braces.
@entry_header ~r/@\w+\s*\{\s*([^,\s]+)\s*,/
defp parse_entries(content) do
# Split on "@" boundaries, then parse each chunk
content
|> String.split(~r/(?=@\w+\s*\{)/, trim: true)
|> Enum.reduce(%{}, fn chunk, acc ->
case Regex.run(@entry_header, chunk) do
[_, key] ->
fields = parse_fields(chunk)
Map.put(acc, String.trim(key), fields)
_ ->
acc
end
end)
end
# Extract key = {value} or key = "value" pairs from an entry block.
# Handles simple single-depth braces; good enough for common fields.
@field_regex ~r/(\w+)\s*=\s*(?:\{([^{}]*(?:\{[^{}]*\}[^{}]*)*)\}|"([^"]*)")/
defp parse_fields(chunk) do
@field_regex
|> Regex.scan(chunk)
|> Enum.reduce(%{}, fn match, acc ->
field_name = Enum.at(match, 1) |> String.downcase()
# Value is in capture group 2 (braces) or 3 (quotes)
value =
case {Enum.at(match, 2, ""), Enum.at(match, 3, "")} do
{"", q} -> q
{b, _} -> b
end
Map.put(acc, field_name, String.trim(value))
end)
end
# ------------------------------------------------------------------
# Label & URL building
# ------------------------------------------------------------------
defp build_label(entry) do
author_part =
entry
|> Map.get("author", "")
|> parse_authors()
|> format_authors()
year = Map.get(entry, "year", Map.get(entry, "date", ""))
year = extract_year(year)
if year && author_part != "", do: "#{author_part}, #{year}", else: author_part
end
defp parse_authors(""), do: []
defp parse_authors(author_str) do
author_str
|> String.split(" and ", trim: true)
|> Enum.map(&extract_last_name/1)
|> Enum.reject(&(&1 == ""))
end
# Handles "Last, First" and "First Last" formats
defp extract_last_name(name) do
name = String.trim(name)
cond do
String.contains?(name, ",") ->
name |> String.split(",") |> List.first() |> String.trim()
String.contains?(name, " ") ->
name |> String.split(" ") |> List.last() |> String.trim()
true ->
name
end
end
defp format_authors([]), do: "Unknown"
defp format_authors([single]), do: single
defp format_authors([first | rest]), do: "#{first} & #{List.last(rest)}"
defp extract_year(""), do: nil
defp extract_year(str) do
case Regex.run(~r/\b(\d{4})\b/, str) do
[_, year] -> year
_ -> nil
end
end
defp build_url(entry) do
cond do
doi = Map.get(entry, "doi", "") |> non_empty() ->
"https://doi.org/#{doi}"
url = Map.get(entry, "url", "") |> non_empty() ->
url
true ->
nil
end
end
defp non_empty(""), do: nil
defp non_empty(v), do: v
end
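
It helps to see `@field_regex` run against a concrete entry; the entry below is made up for illustration. Both value styles are exercised: braces (with one level of nesting) and quotes.

```elixir
field_regex = ~r/(\w+)\s*=\s*(?:\{([^{}]*(?:\{[^{}]*\}[^{}]*)*)\}|"([^"]*)")/

entry = """
@article{smith2021,
  author = {Smith, Jane and Doe, John},
  year = {2021},
  title = "A {Useful} Title",
}
"""

fields =
  field_regex
  |> Regex.scan(entry)
  |> Map.new(fn match ->
    # Value sits in capture 2 (braces) or capture 3 (quotes); the unused
    # capture comes back as "" (or is absent, hence the Enum.at defaults).
    value =
      case {Enum.at(match, 2, ""), Enum.at(match, 3, "")} do
        {"", q} -> q
        {b, _} -> b
      end

    {String.downcase(Enum.at(match, 1)), String.trim(value)}
  end)

fields["author"]  # => "Smith, Jane and Doe, John"
fields["title"]   # => "A {Useful} Title"
```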

@@ -1,18 +0,0 @@
defmodule OrgGarden.Resolvers.DOI do
@moduledoc """
Last-resort citation resolver — always succeeds.
If the citation key looks like a DOI (starts with "10."), returns a
`https://doi.org/...` link. Otherwise returns the key itself as a
plain label with no URL.
"""
@spec resolve(String.t()) :: {:ok, map()}
def resolve(key) do
if String.starts_with?(key, "10.") do
{:ok, %{label: key, url: "https://doi.org/#{key}"}}
else
{:ok, %{label: key, url: nil}}
end
end
end
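
The same fallback in standalone form, with hypothetical keys: DOI-shaped keys get a doi.org URL, everything else becomes a bare label.

```elixir
resolve = fn key ->
  if String.starts_with?(key, "10.") do
    %{label: key, url: "https://doi.org/#{key}"}
  else
    %{label: key, url: nil}
  end
end

doi = resolve.("10.1000/xyz123")
bare = resolve.("smith2021")

doi.url   # => "https://doi.org/10.1000/xyz123"
bare.url  # => nil
```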

@@ -1,182 +0,0 @@
defmodule OrgGarden.Resolvers.Zotero do
@moduledoc """
Resolves citation keys via Zotero Better BibTeX's JSON-RPC API.
Requires Zotero to be running with the Better BibTeX plugin installed.
Default endpoint: http://localhost:23119/better-bibtex/json-rpc
Resolution strategy:
1. Search by citation key via `item.search`
2. If found, try to get a PDF attachment link (zotero://open-pdf/...)
3. Fall back to zotero://select/items/@key
Returns `{:ok, %{label: "Author, Year", url: "zotero://..."}}` or `:error`.
"""
require Logger
@rpc_path "/better-bibtex/json-rpc"
@doc """
Attempt to resolve `key` against a running Zotero instance.
`base_url` defaults to `http://localhost:23119`.
"""
@spec resolve(String.t(), String.t()) :: {:ok, map()} | :error
def resolve(key, base_url \\ "http://localhost:23119") do
url = base_url <> @rpc_path
payload =
Jason.encode!(%{
jsonrpc: "2.0",
method: "item.search",
params: [
[["citationKey", "is", key]]
],
id: 1
})
case Req.post(url,
body: payload,
headers: [{"content-type", "application/json"}],
receive_timeout: 5_000,
finch: OrgGarden.Finch
) do
{:ok, %{status: 200, body: body}} ->
parse_response(body, key, base_url)
{:ok, %{status: status}} ->
Logger.debug("Zotero: unexpected HTTP #{status} for key #{key}")
:error
{:error, reason} ->
Logger.debug("Zotero: connection failed for key #{key}: #{inspect(reason)}")
:error
other ->
Logger.debug("Zotero: unexpected result for key #{key}: #{inspect(other)}")
:error
end
rescue
e ->
Logger.debug("Zotero: exception resolving key #{key}: #{inspect(e)}")
:error
end
# ------------------------------------------------------------------
# Private helpers
# ------------------------------------------------------------------
defp parse_response(%{"result" => [item | _]}, key, base_url) do
label = build_label(item)
url = resolve_url(item, key, base_url)
{:ok, %{label: label, url: url}}
end
defp parse_response(%{"result" => []}, key, _base_url) do
Logger.debug("Zotero: no item found for key #{key}")
:error
end
defp parse_response(%{"error" => err}, key, _base_url) do
Logger.debug("Zotero: RPC error for key #{key}: #{inspect(err)}")
:error
end
defp parse_response(body, key, _base_url) do
Logger.debug("Zotero: unexpected response shape for key #{key}: #{inspect(body)}")
:error
end
defp fetch_pdf_url(key, base_url) do
payload =
Jason.encode!(%{
jsonrpc: "2.0",
method: "item.attachments",
params: [key],
id: 2
})
case Req.post(base_url <> @rpc_path,
body: payload,
headers: [{"content-type", "application/json"}],
receive_timeout: 5_000,
finch: OrgGarden.Finch
) do
{:ok, %{status: 200, body: %{"result" => attachments}}} when is_list(attachments) ->
Enum.find_value(attachments, fn att ->
open = Map.get(att, "open", "")
path = Map.get(att, "path", "")
# Require a non-empty "open" link: an empty string is truthy, so without
# this guard find_value could return "" instead of falling through.
if String.ends_with?(path, ".pdf") and open != "", do: open
end)
_ ->
nil
end
rescue
_ -> nil
end
# CSL-JSON format: authors are under "author" with "family"/"given" keys.
# Year is under "issued" -> "date-parts" -> [[year, month, day]].
defp build_label(item) do
authors = Map.get(item, "author", [])
year = extract_year(item)
author_part =
case authors do
[] ->
"Unknown"
[single] ->
Map.get(single, "family", Map.get(single, "literal", "Unknown"))
[first | rest] ->
first_name = Map.get(first, "family", Map.get(first, "literal", "Unknown"))
last_name =
rest
|> List.last()
|> then(&Map.get(&1, "family", Map.get(&1, "literal", "Unknown")))
"#{first_name} & #{last_name}"
end
if year, do: "#{author_part}, #{year}", else: author_part
end
# "issued": {"date-parts": [["2021", 2, 3]]}
defp extract_year(item) do
case get_in(item, ["issued", "date-parts"]) do
[[year | _] | _] -> to_string(year)
_ -> nil
end
end
defp resolve_url(item, key, base_url) do
# Prefer zotero://open-pdf/... for items with a PDF attachment.
# Fall back to zotero://select/library/items/KEY to open the item in Zotero.
# The "id" field is a URI like "http://zotero.org/users/123/items/ABCD1234".
pdf_url = fetch_pdf_url(key, base_url)
if pdf_url do
pdf_url
else
item_key =
item
|> Map.get("id", "")
|> String.split("/")
|> List.last()
|> non_empty()
if item_key do
"zotero://select/library/items/#{item_key}"
else
"zotero://select/items/@#{key}"
end
end
end
defp non_empty(nil), do: nil
defp non_empty(""), do: nil
defp non_empty(v), do: v
end
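
The CSL-JSON label building can be sketched against a hand-written item (all field values below are invented; the real resolver also falls back to `"literal"` names, omitted here for brevity):

```elixir
item = %{
  "author" => [
    %{"family" => "Smith", "given" => "Jane"},
    %{"family" => "Doe", "given" => "John"},
    %{"family" => "Roe", "given" => "Rick"}
  ],
  "issued" => %{"date-parts" => [[2021, 2, 3]]}
}

# "issued" -> "date-parts" -> [[year, month, day]]
year =
  case get_in(item, ["issued", "date-parts"]) do
    [[y | _] | _] -> to_string(y)
    _ -> nil
  end

# One author: family name; several: "First & Last".
label =
  case item["author"] do
    [] -> "Unknown"
    [single] -> Map.get(single, "family", "Unknown")
    [first | rest] ->
      "#{Map.get(first, "family")} & #{Map.get(List.last(rest), "family")}"
  end

citation = if year, do: "#{label}, #{year}", else: label
citation  # => "Smith & Roe, 2021"
```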

@@ -1,40 +0,0 @@
defmodule OrgGarden.Supervisor do
@moduledoc """
Supervises development server components.
Strategy: :one_for_all
If either child fails, restart both to ensure consistent state.
Children:
1. OrgGarden.Watcher - watches .org files for changes
2. OrgGarden.Quartz - runs Quartz Node.js server
## Usage
OrgGarden.Supervisor.start_link(
notes_dir: "/path/to/notes",
output_dir: "/path/to/output",
content_dir: "/path/to/output/content",
pipeline_opts: %{zotero_url: "...", ...},
transforms: [OrgGarden.Transforms.Citations],
port: 8080,
ws_port: 3001
)
"""
use Supervisor
def start_link(opts) do
Supervisor.start_link(__MODULE__, opts, name: __MODULE__)
end
@impl true
def init(opts) do
children = [
{OrgGarden.Watcher,
Keyword.take(opts, [:notes_dir, :output_dir, :content_dir, :pipeline_opts, :transforms])},
{OrgGarden.Quartz, Keyword.take(opts, [:content_dir, :port, :ws_port])}
]
Supervisor.init(children, strategy: :one_for_all)
end
end

@@ -1,48 +0,0 @@
defmodule OrgGarden.Transform do
@moduledoc """
Behaviour that all markdown transform modules must implement.
## Callbacks
- `init/1` — called once before processing; returns transform-specific state.
Default implementation returns the opts map unchanged.
- `apply/3` — called per .md file; returns the (possibly modified) content.
- `teardown/1` — optional cleanup after all files are processed.
## Example
defmodule MyTransform do
@behaviour OrgGarden.Transform
@impl true
def init(opts), do: %{some_state: opts[:value]}
@impl true
def apply(content, state, _opts) do
String.replace(content, "foo", state.some_state)
end
end
"""
@doc "One-time initialisation. Returns opaque state passed to apply/3."
@callback init(opts :: map()) :: term()
@doc "Transform file content. Returns the (possibly modified) content string."
@callback apply(content :: String.t(), state :: term(), opts :: map()) :: String.t()
@doc "Optional cleanup after all files are processed."
@callback teardown(state :: term()) :: :ok
@optional_callbacks teardown: 1
defmacro __using__(_) do
quote do
@behaviour OrgGarden.Transform
@impl OrgGarden.Transform
def init(opts), do: opts
defoverridable init: 1
end
end
end

View File

@@ -1,231 +0,0 @@
defmodule OrgGarden.Transforms.Citations do
@moduledoc """
Markdown transform: resolves org-citar citation keys to hyperlinks.
## Recognised citation syntax (as output by ox-hugo from org-citar)
[cite:@key] → org-cite / citar standard (most common)
[cite:@key1;@key2] → multiple citations
cite:key → older roam-style bare cite syntax
## Resolution chain (in order)
1. Zotero (live instance via Better BibTeX JSON-RPC) — preferred
2. BibTeX file (BIBTEX_FILE env var) — fallback
3. DOI / bare key — always succeeds
## Modes (opts.citation_mode)
:silent — silently use DOI/bare-key fallback when Zotero+BibTeX fail
:warn — (default) emit a Logger.warning for unresolved keys
:strict — raise on unresolved keys (aborts pipeline)
## Format
Resolved citations are rendered as:
[Label](url) when a URL is available
[Label] when no URL could be determined (bare key fallback)
Multiple semicolon-separated keys become space-separated links:
[cite:@a;@b] → [Author A, 2020](url_a) [Author B, 2019](url_b)
## init/1 callback
Loads the BibTeX file (if configured) once before processing begins,
and probes Zotero availability, emitting warnings as appropriate.
"""
@behaviour OrgGarden.Transform
require Logger
alias OrgGarden.Resolvers.Zotero
alias OrgGarden.Resolvers.BibTeX
alias OrgGarden.Resolvers.DOI
# Match [cite:@key] and [cite:@key1;@key2;...] (org-cite / citar style)
@cite_bracket_regex ~r/\[cite:(@[^\]]+)\]/
# Match bare cite:key or cite:@key (older roam style, no brackets, optional @ prefix)
@cite_bare_regex ~r/(?<![(\[])cite:@?([a-zA-Z0-9_:-]+)/
# ------------------------------------------------------------------
# OrgGarden callbacks
# ------------------------------------------------------------------
@doc """
Called once before processing any files. Loads BibTeX, probes Zotero.
Returns a state map passed to every `apply/3` call.
"""
def init(opts) do
bibtex_entries = load_bibtex(opts)
zotero_available = probe_zotero(opts)
if not zotero_available and bibtex_entries == %{} do
Logger.warning(
"Citations: neither Zotero nor a BibTeX file is available. " <>
"All citations will fall back to bare-key rendering. " <>
"Set BIBTEX_FILE env var or start Zotero with Better BibTeX to resolve citations."
)
end
%{
bibtex_entries: bibtex_entries,
zotero_available: zotero_available,
zotero_url: Map.get(opts, :zotero_url, "http://localhost:23119"),
citation_mode: Map.get(opts, :citation_mode, :warn)
}
end
@doc """
Apply citation resolution to a single markdown file's content.
"""
def apply(content, state, _opts) do
content
|> resolve_bracket_citations(state)
|> resolve_bare_citations(state)
end
# ------------------------------------------------------------------
# Resolution passes
# ------------------------------------------------------------------
defp resolve_bracket_citations(content, state) do
Regex.replace(@cite_bracket_regex, content, fn _full, keys_str ->
keys_str
|> String.split(";")
|> Enum.map(&String.trim/1)
|> Enum.map(&String.trim_leading(&1, "@"))
|> Enum.map(&resolve_key(&1, state))
|> Enum.join(" ")
end)
end
defp resolve_bare_citations(content, state) do
Regex.replace(@cite_bare_regex, content, fn _full, key ->
resolve_key(key, state)
end)
end
# ------------------------------------------------------------------
# Single-key resolution chain
# ------------------------------------------------------------------
defp resolve_key(key, state) do
info =
with :error <- try_zotero(key, state),
:error <- try_bibtex(key, state) do
handle_unresolved(key, state)
else
{:ok, citation_info} -> citation_info
end
format_result(info)
end
defp try_zotero(_key, %{zotero_available: false}), do: :error
defp try_zotero(key, %{zotero_url: url}) do
Zotero.resolve(key, url)
end
defp try_bibtex(_key, %{bibtex_entries: entries}) when map_size(entries) == 0, do: :error
defp try_bibtex(key, %{bibtex_entries: entries}) do
BibTeX.resolve(key, entries)
end
defp handle_unresolved(key, %{citation_mode: mode}) do
case mode do
:strict ->
raise "Citations: could not resolve citation key '#{key}' and mode is :strict"
:warn ->
Logger.warning("Citations: unresolved citation key '#{key}' — using bare-key fallback")
{:ok, result} = DOI.resolve(key)
result
:silent ->
{:ok, result} = DOI.resolve(key)
result
end
end
defp format_result(%{label: label, url: nil}), do: "[#{label}]"
defp format_result(%{label: label, url: url}), do: "[#{label}](#{url})"
# ------------------------------------------------------------------
# Init helpers
# ------------------------------------------------------------------
defp load_bibtex(opts) do
path = Map.get(opts, :bibtex_file) || System.get_env("BIBTEX_FILE")
cond do
is_nil(path) ->
Logger.debug("Citations: BIBTEX_FILE not set — BibTeX resolver disabled")
%{}
not File.exists?(path) ->
Logger.warning("Citations: BIBTEX_FILE=#{path} does not exist — BibTeX resolver disabled")
%{}
true ->
case BibTeX.load(path) do
{:ok, entries} -> entries
{:error, reason} ->
Logger.warning("Citations: failed to load BibTeX file #{path}: #{inspect(reason)}")
%{}
end
end
end
defp probe_zotero(opts) do
url = Map.get(opts, :zotero_url, "http://localhost:23119")
# Use a no-op JSON-RPC call to probe availability.
# /better-bibtex/cayw is intentionally avoided — it blocks waiting for
# user interaction and never returns without a pick.
payload =
Jason.encode!(%{
jsonrpc: "2.0",
method: "item.search",
params: [[[]]],
id: 0
})
result =
try do
Req.post(url <> "/better-bibtex/json-rpc",
body: payload,
headers: [{"content-type", "application/json"}],
receive_timeout: 3_000,
finch: OrgGarden.Finch
)
rescue
e -> {:error, e}
end
case result do
{:ok, %{status: 200}} ->
Logger.info("Citations: Zotero Better BibTeX is available at #{url}")
true
{:ok, %{status: status}} ->
Logger.warning(
"Citations: Zotero responded HTTP #{status} at #{url}; " <>
"is Better BibTeX installed?"
)
false
_ ->
Logger.warning(
"Citations: Zotero not reachable at #{url}; " <>
"start Zotero with Better BibTeX or set BIBTEX_FILE as fallback"
)
false
end
end
end
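The bracket pass above splits `[cite:@a;@b]` on `;`, strips the `@` prefix, resolves each key, and joins the results with spaces. A TypeScript sketch of that pass, with a lookup callback standing in for the Zotero → BibTeX → DOI chain (names hypothetical):

```typescript
type CitationInfo = { label: string; url: string | null }

const citeBracketRegex = /\[cite:(@[^\]]+)\]/g

// [Label](url) when a URL is available, [Label] otherwise.
function formatResult(info: CitationInfo): string {
  return info.url ? `[${info.label}](${info.url})` : `[${info.label}]`
}

// resolve stands in for the full resolution chain; an unresolved key
// should return a bare-key label, matching :silent mode.
function resolveBracketCitations(
  content: string,
  resolve: (key: string) => CitationInfo,
): string {
  return content.replace(citeBracketRegex, (_full, keysStr: string) =>
    keysStr
      .split(";")
      .map((k) => k.trim().replace(/^@/, ""))
      .map((k) => formatResult(resolve(k)))
      .join(" "),
  )
}
```

For example, `[cite:@a;@b]` with a resolver that only knows `a` yields one hyperlink and one bare-key bracket, separated by a space.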

View File

@@ -1,236 +0,0 @@
defmodule OrgGarden.Watcher do
@moduledoc """
File-watching GenServer that detects `.org` file changes and triggers
incremental export + transform for only the affected files.
Uses the `file_system` package (inotify on Linux, fsevents on macOS)
to watch the notes directory. Events are debounced per-file (500ms)
to coalesce rapid writes (e.g., Emacs auto-save).
## Lifecycle
Started dynamically by `OrgGarden.CLI` after the initial batch export.
Transforms are initialized once at startup and reused across all
incremental rebuilds to avoid repeated Zotero probes and BibTeX loads.
## Usage
OrgGarden.Watcher.start_link(
notes_dir: "/path/to/notes",
output_dir: "/path/to/output",
content_dir: "/path/to/output/content",
pipeline_opts: %{zotero_url: "...", ...},
transforms: [OrgGarden.Transforms.Citations]
)
"""
use GenServer
require Logger
@debounce_ms 500
# -------------------------------------------------------------------
# Client API
# -------------------------------------------------------------------
@doc """
Start the watcher as a linked process.
## Options
* `:notes_dir` — directory to watch for `.org` changes (required)
* `:output_dir` — ox-hugo base dir (required)
* `:content_dir` — directory where `.md` files are written (required)
* `:pipeline_opts` — opts map passed to transforms (required)
* `:transforms` — list of transform modules (default: `[OrgGarden.Transforms.Citations]`)
"""
def start_link(opts) do
GenServer.start_link(__MODULE__, opts, name: __MODULE__)
end
# -------------------------------------------------------------------
# GenServer callbacks
# -------------------------------------------------------------------
@impl true
def init(opts) do
notes_dir = Keyword.fetch!(opts, :notes_dir)
output_dir = Keyword.fetch!(opts, :output_dir)
content_dir = Keyword.fetch!(opts, :content_dir)
pipeline_opts = Keyword.fetch!(opts, :pipeline_opts)
transforms = Keyword.get(opts, :transforms, [OrgGarden.Transforms.Citations])
# Initialize transforms once — reused for all incremental rebuilds
initialized_transforms = OrgGarden.init_transforms(transforms, pipeline_opts)
# Start the file system watcher
{:ok, watcher_pid} = FileSystem.start_link(dirs: [notes_dir], recursive: true)
FileSystem.subscribe(watcher_pid)
Logger.info("Watcher: monitoring #{notes_dir} for .org changes")
{:ok,
%{
notes_dir: notes_dir,
output_dir: output_dir,
content_dir: content_dir,
pipeline_opts: pipeline_opts,
watcher_pid: watcher_pid,
initialized_transforms: initialized_transforms,
pending: %{}
}}
end
@impl true
def handle_info({:file_event, _pid, {path, events}}, state) do
path = to_string(path)
if org_file?(path) and not temporary_file?(path) do
event_type = classify_events(events)
Logger.debug("Watcher: #{event_type} event for #{path}")
{:noreply, schedule_debounce(path, event_type, state)}
else
{:noreply, state}
end
end
@impl true
def handle_info({:file_event, _pid, :stop}, state) do
Logger.warning("Watcher: file system monitor stopped unexpectedly")
{:stop, :watcher_stopped, state}
end
@impl true
def handle_info({:debounced, path, event_type}, state) do
state = %{state | pending: Map.delete(state.pending, path)}
case event_type do
:deleted ->
handle_delete(path, state)
_created_or_modified ->
handle_change(path, state)
end
{:noreply, state}
end
@impl true
def terminate(_reason, state) do
OrgGarden.teardown_transforms(state.initialized_transforms)
:ok
end
# -------------------------------------------------------------------
# Event handling
# -------------------------------------------------------------------
defp handle_change(orgfile, state) do
%{
notes_dir: notes_dir,
output_dir: output_dir,
content_dir: content_dir,
pipeline_opts: pipeline_opts,
initialized_transforms: initialized_transforms
} = state
md_path = OrgGarden.Export.expected_md_path(orgfile, notes_dir, content_dir)
IO.puts("==> Changed: #{Path.relative_to(orgfile, notes_dir)}")
case OrgGarden.Export.export_file(orgfile, notes_dir, output_dir) do
{:ok, _} ->
IO.puts(" exported: #{Path.relative_to(md_path, content_dir)}")
{:ok, stats} = OrgGarden.run_on_files_with([md_path], initialized_transforms, pipeline_opts)
Enum.each(stats, fn {mod, count} ->
if count > 0, do: IO.puts(" #{inspect(mod)}: #{count} file(s) modified")
end)
regenerate_index(content_dir)
IO.puts("==> Done")
{:error, reason} ->
Logger.error("Watcher: export failed for #{orgfile}: #{inspect(reason)}")
end
end
defp handle_delete(orgfile, state) do
%{notes_dir: notes_dir, content_dir: content_dir} = state
md_path = OrgGarden.Export.expected_md_path(orgfile, notes_dir, content_dir)
IO.puts("==> Deleted: #{Path.relative_to(orgfile, notes_dir)}")
if File.exists?(md_path) do
File.rm!(md_path)
IO.puts(" removed: #{Path.relative_to(md_path, content_dir)}")
# Clean up empty parent directories left behind
cleanup_empty_dirs(Path.dirname(md_path), content_dir)
end
regenerate_index(content_dir)
IO.puts("==> Done")
end
# -------------------------------------------------------------------
# Index generation
# -------------------------------------------------------------------
defp regenerate_index(content_dir) do
OrgGarden.Index.regenerate(content_dir)
end
# -------------------------------------------------------------------
# Helpers
# -------------------------------------------------------------------
defp schedule_debounce(path, event_type, state) do
# Cancel any existing timer for this path
case Map.get(state.pending, path) do
nil -> :ok
old_ref -> Process.cancel_timer(old_ref)
end
ref = Process.send_after(self(), {:debounced, path, event_type}, @debounce_ms)
%{state | pending: Map.put(state.pending, path, ref)}
end
defp org_file?(path), do: String.ends_with?(path, ".org")
defp temporary_file?(path) do
basename = Path.basename(path)
# Emacs creates temp files like .#file.org and #file.org#
String.starts_with?(basename, ".#") or
(String.starts_with?(basename, "#") and String.ends_with?(basename, "#"))
end
defp classify_events(events) do
cond do
:removed in events or :deleted in events -> :deleted
:created in events -> :created
:modified in events or :changed in events -> :modified
# :renamed can mean moved in or moved out depending on direction;
# treat it as :modified and let the export step handle a vanished file
:renamed in events -> :modified
true -> :modified
end
end
defp cleanup_empty_dirs(dir, stop_at) do
dir = Path.expand(dir)
stop_at = Path.expand(stop_at)
if dir != stop_at and File.dir?(dir) do
case File.ls!(dir) do
[] ->
File.rmdir!(dir)
cleanup_empty_dirs(Path.dirname(dir), stop_at)
_ ->
:ok
end
end
end
end
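The temp-file filter and event classifier above are pure functions and port directly; a TypeScript sketch (function names hypothetical):

```typescript
// Emacs writes lock/backup files like .#note.org and #note.org#;
// these must never trigger an export.
function isTemporaryFile(path: string): boolean {
  const basename = path.split("/").pop() ?? path
  return (
    basename.startsWith(".#") ||
    (basename.startsWith("#") && basename.endsWith("#"))
  )
}

type EventType = "deleted" | "created" | "modified"

// Collapse the raw file-system event list into a single action,
// mirroring classify_events/1 (renames are treated as modifications).
function classifyEvents(events: string[]): EventType {
  if (events.includes("removed") || events.includes("deleted")) return "deleted"
  if (events.includes("created")) return "created"
  return "modified"
}
```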

View File

@@ -1,34 +0,0 @@
defmodule OrgGarden.MixProject do
use Mix.Project
def project do
[
app: :org_garden,
version: "0.1.0",
elixir: "~> 1.17",
start_permanent: Mix.env() == :prod,
deps: deps(),
escript: escript()
]
end
def application do
[
extra_applications: [:logger],
mod: {OrgGarden.Application, []}
]
end
defp escript do
[main_module: OrgGarden.CLI]
end
defp deps do
[
{:finch, "~> 0.19"},
{:req, "~> 0.5"},
{:jason, "~> 1.4"},
{:file_system, "~> 1.0"}
]
end
end

View File

@@ -1,12 +0,0 @@
%{
"file_system": {:hex, :file_system, "1.1.1", "31864f4685b0148f25bd3fbef2b1228457c0c89024ad67f7a81a3ffbc0bbad3a", [:mix], [], "hexpm", "7a15ff97dfe526aeefb090a7a9d3d03aa907e100e262a0f8f7746b78f8f87a5d"},
"finch": {:hex, :finch, "0.21.0", "b1c3b2d48af02d0c66d2a9ebfb5622be5c5ecd62937cf79a88a7f98d48a8290c", [:mix], [{:mime, "~> 1.0 or ~> 2.0", [hex: :mime, repo: "hexpm", optional: false]}, {:mint, "~> 1.6.2 or ~> 1.7", [hex: :mint, repo: "hexpm", optional: false]}, {:nimble_options, "~> 0.4 or ~> 1.0", [hex: :nimble_options, repo: "hexpm", optional: false]}, {:nimble_pool, "~> 1.1", [hex: :nimble_pool, repo: "hexpm", optional: false]}, {:telemetry, "~> 0.4 or ~> 1.0", [hex: :telemetry, repo: "hexpm", optional: false]}], "hexpm", "87dc6e169794cb2570f75841a19da99cfde834249568f2a5b121b809588a4377"},
"hpax": {:hex, :hpax, "1.0.3", "ed67ef51ad4df91e75cc6a1494f851850c0bd98ebc0be6e81b026e765ee535aa", [:mix], [], "hexpm", "8eab6e1cfa8d5918c2ce4ba43588e894af35dbd8e91e6e55c817bca5847df34a"},
"jason": {:hex, :jason, "1.4.4", "b9226785a9aa77b6857ca22832cffa5d5011a667207eb2a0ad56adb5db443b8a", [:mix], [{:decimal, "~> 1.0 or ~> 2.0", [hex: :decimal, repo: "hexpm", optional: true]}], "hexpm", "c5eb0cab91f094599f94d55bc63409236a8ec69a21a67814529e8d5f6cc90b3b"},
"mime": {:hex, :mime, "2.0.7", "b8d739037be7cd402aee1ba0306edfdef982687ee7e9859bee6198c1e7e2f128", [:mix], [], "hexpm", "6171188e399ee16023ffc5b76ce445eb6d9672e2e241d2df6050f3c771e80ccd"},
"mint": {:hex, :mint, "1.7.1", "113fdb2b2f3b59e47c7955971854641c61f378549d73e829e1768de90fc1abf1", [:mix], [{:castore, "~> 0.1.0 or ~> 1.0", [hex: :castore, repo: "hexpm", optional: true]}, {:hpax, "~> 0.1.1 or ~> 0.2.0 or ~> 1.0", [hex: :hpax, repo: "hexpm", optional: false]}], "hexpm", "fceba0a4d0f24301ddee3024ae116df1c3f4bb7a563a731f45fdfeb9d39a231b"},
"nimble_options": {:hex, :nimble_options, "1.1.1", "e3a492d54d85fc3fd7c5baf411d9d2852922f66e69476317787a7b2bb000a61b", [:mix], [], "hexpm", "821b2470ca9442c4b6984882fe9bb0389371b8ddec4d45a9504f00a66f650b44"},
"nimble_pool": {:hex, :nimble_pool, "1.1.0", "bf9c29fbdcba3564a8b800d1eeb5a3c58f36e1e11d7b7fb2e084a643f645f06b", [:mix], [], "hexpm", "af2e4e6b34197db81f7aad230c1118eac993acc0dae6bc83bac0126d4ae0813a"},
"req": {:hex, :req, "0.5.17", "0096ddd5b0ed6f576a03dde4b158a0c727215b15d2795e59e0916c6971066ede", [:mix], [{:brotli, "~> 0.3.1", [hex: :brotli, repo: "hexpm", optional: true]}, {:ezstd, "~> 1.0", [hex: :ezstd, repo: "hexpm", optional: true]}, {:finch, "~> 0.17", [hex: :finch, repo: "hexpm", optional: false]}, {:jason, "~> 1.0", [hex: :jason, repo: "hexpm", optional: false]}, {:mime, "~> 2.0.6 or ~> 2.1", [hex: :mime, repo: "hexpm", optional: false]}, {:nimble_csv, "~> 1.0", [hex: :nimble_csv, repo: "hexpm", optional: true]}, {:plug, "~> 1.0", [hex: :plug, repo: "hexpm", optional: true]}], "hexpm", "0b8bc6ffdfebbc07968e59d3ff96d52f2202d0536f10fef4dc11dc02a2a43e39"},
"telemetry": {:hex, :telemetry, "1.3.0", "fedebbae410d715cf8e7062c96a1ef32ec22e764197f70cda73d82778d61e7a2", [:rebar3], [], "hexpm", "7015fc8919dbe63764f4b4b87a95b7c0996bd539e0d499be6ec9d7f3875b79e6"},
}

View File

@@ -1,19 +0,0 @@
diff --git a/quartz/util/glob.ts b/quartz/util/glob.ts
index 7a71160..91fbaa7 100644
--- a/quartz/util/glob.ts
+++ b/quartz/util/glob.ts
@@ -10,12 +10,13 @@ export async function glob(
pattern: string,
cwd: string,
ignorePatterns: string[],
+ respectGitignore: boolean = true,
): Promise<FilePath[]> {
const fps = (
await globby(pattern, {
cwd,
ignore: ignorePatterns,
- gitignore: true,
+ gitignore: respectGitignore,
})
).map(toPosixPath)
return fps as FilePath[]

View File

@@ -1,13 +0,0 @@
diff --git a/quartz/build.ts b/quartz/build.ts
index b98f4a8..3166a06 100644
--- a/quartz/build.ts
+++ b/quartz/build.ts
@@ -71,7 +71,7 @@ async function buildQuartz(argv: Argv, mut: Mutex, clientRefresh: () => void) {
console.log(`Cleaned output directory \`${output}\` in ${perf.timeSince("clean")}`)
perf.addEvent("glob")
- const allFiles = await glob("**/*.*", argv.directory, cfg.configuration.ignorePatterns)
+ const allFiles = await glob("**/*.*", argv.directory, cfg.configuration.ignorePatterns, false)
const markdownPaths = allFiles.filter((fp) => fp.endsWith(".md")).sort()
console.log(
`Found ${markdownPaths.length} input files from \`${argv.directory}\` in ${perf.timeSince("glob")}`,

View File

@@ -1,34 +0,0 @@
diff --git a/quartz/plugins/emitters/static.ts b/quartz/plugins/emitters/static.ts
index 0b45290..8b34049 100644
--- a/quartz/plugins/emitters/static.ts
+++ b/quartz/plugins/emitters/static.ts
@@ -7,6 +7,7 @@ import { dirname } from "path"
export const Static: QuartzEmitterPlugin = () => ({
name: "Static",
async *emit({ argv, cfg }) {
+ // Copy Quartz's own internal static assets (quartz/static/) → output/static/
const staticPath = joinSegments(QUARTZ, "static")
const fps = await glob("**", staticPath, cfg.configuration.ignorePatterns)
const outputStaticPath = joinSegments(argv.output, "static")
@@ -18,6 +19,21 @@ export const Static: QuartzEmitterPlugin = () => ({
await fs.promises.copyFile(src, dest)
yield dest
}
+
+ // Copy user-facing static assets (static/) → output/ preserving paths.
+ // This mirrors Hugo's convention: static/ox-hugo/foo.png is served at /ox-hugo/foo.png,
+ // which matches the src="/ox-hugo/..." paths that ox-hugo writes into exported markdown.
+ const userStaticPath = "static"
+ if (fs.existsSync(userStaticPath)) {
+ const userFps = await glob("**", userStaticPath, cfg.configuration.ignorePatterns, false)
+ for (const fp of userFps) {
+ const src = joinSegments(userStaticPath, fp) as FilePath
+ const dest = joinSegments(argv.output, fp) as FilePath
+ await fs.promises.mkdir(dirname(dest), { recursive: true })
+ await fs.promises.copyFile(src, dest)
+ yield dest
+ }
+ }
},
async *partialEmit() {},
})

View File

@@ -1,44 +0,0 @@
diff --git a/quartz/plugins/transformers/oxhugofm.ts b/quartz/plugins/transformers/oxhugofm.ts
index 303566e..4fb5e2c 100644
--- a/quartz/plugins/transformers/oxhugofm.ts
+++ b/quartz/plugins/transformers/oxhugofm.ts
@@ -27,7 +27,10 @@ const defaultOptions: Options = {
const relrefRegex = new RegExp(/\[([^\]]+)\]\(\{\{< relref "([^"]+)" >\}\}\)/, "g")
const predefinedHeadingIdRegex = new RegExp(/(.*) {#(?:.*)}/, "g")
const hugoShortcodeRegex = new RegExp(/{{(.*)}}/, "g")
-const figureTagRegex = new RegExp(/< ?figure src="(.*)" ?>/, "g")
+// Matches the full Hugo {{< figure src="..." ... >}} shortcode and captures src.
+// Must run before the generic shortcode stripper to avoid partial-match issues
+// with captions that contain HTML (e.g. <span class="figure-number">).
+const figureShortcodeRegex = new RegExp(/{{<\s*figure\b[^}]*\bsrc="([^"]*)"[^}]*>}}/, "g")
// \\\\\( -> matches \\(
// (.+?) -> Lazy match for capturing the equation
// \\\\\) -> matches \\)
@@ -70,19 +73,19 @@ export const OxHugoFlavouredMarkdown: QuartzTransformerPlugin<Partial<Options>>
})
}
- if (opts.removeHugoShortcode) {
+ if (opts.replaceFigureWithMdImg) {
src = src.toString()
- src = src.replaceAll(hugoShortcodeRegex, (_value, ...capture) => {
- const [scContent] = capture
- return scContent
+ src = src.replaceAll(figureShortcodeRegex, (_value, ...capture) => {
+ const [imgSrc] = capture
+ return `![](${imgSrc})`
})
}
- if (opts.replaceFigureWithMdImg) {
+ if (opts.removeHugoShortcode) {
src = src.toString()
- src = src.replaceAll(figureTagRegex, (_value, ...capture) => {
- const [src] = capture
- return `![](${src})`
+ src = src.replaceAll(hugoShortcodeRegex, (_value, ...capture) => {
+ const [scContent] = capture
+ return scContent
})
}
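The reordering in this patch matters: if the generic `{{(.*)}}` stripper ran first, it would consume the figure shortcode before the image rewrite could see it. A TypeScript sketch of the new order, with both regexes copied from the patch (the wrapper function name is hypothetical):

```typescript
// Full Hugo figure shortcode, capturing the src attribute.
const figureShortcodeRegex = /{{<\s*figure\b[^}]*\bsrc="([^"]*)"[^}]*>}}/g
// Generic shortcode stripper: keeps only the inner content.
const hugoShortcodeRegex = /{{(.*)}}/g

// 1) figure shortcodes -> markdown images (must run first)
// 2) any remaining shortcodes -> their inner content
function stripShortcodes(src: string): string {
  src = src.replace(figureShortcodeRegex, (_m, imgSrc: string) => `![](${imgSrc})`)
  src = src.replace(hugoShortcodeRegex, (_m, inner: string) => inner)
  return src
}
```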

View File

@@ -1,17 +0,0 @@
export declare global {
interface Document {
addEventListener<K extends keyof CustomEventMap>(
type: K,
listener: (this: Document, ev: CustomEventMap[K]) => void,
): void
removeEventListener<K extends keyof CustomEventMap>(
type: K,
listener: (this: Document, ev: CustomEventMap[K]) => void,
): void
dispatchEvent<K extends keyof CustomEventMap>(ev: CustomEventMap[K] | UIEvent): void
}
interface Window {
spaNavigate(url: URL, isBack?: boolean): void
addCleanup(fn: (...args: any[]) => void): void
}
}

View File

@@ -1,15 +0,0 @@
declare module "*.scss" {
const content: string
export = content
}
// dom custom event
interface CustomEventMap {
prenav: CustomEvent<{}>
nav: CustomEvent<{ url: FullSlug }>
themechange: CustomEvent<{ theme: "light" | "dark" }>
readermodechange: CustomEvent<{ mode: "on" | "off" }>
}
type ContentIndex = Record<FullSlug, ContentDetails>
declare const fetchData: Promise<ContentIndex>

View File

@@ -1,101 +0,0 @@
import { QuartzConfig } from "./quartz/cfg"
import * as Plugin from "./quartz/plugins"
/**
* Quartz 4 Configuration
*
* See https://quartz.jzhao.xyz/configuration for more information.
*/
const config: QuartzConfig = {
configuration: {
pageTitle: "Quartz 4",
pageTitleSuffix: "",
enableSPA: true,
enablePopovers: true,
analytics: {
provider: "plausible",
},
locale: "en-US",
baseUrl: "quartz.jzhao.xyz",
ignorePatterns: ["private", "templates", ".obsidian"],
defaultDateType: "modified",
theme: {
fontOrigin: "googleFonts",
cdnCaching: true,
typography: {
header: "Schibsted Grotesk",
body: "Source Sans Pro",
code: "IBM Plex Mono",
},
colors: {
lightMode: {
light: "#faf8f8",
lightgray: "#e5e5e5",
gray: "#b8b8b8",
darkgray: "#4e4e4e",
dark: "#2b2b2b",
secondary: "#284b63",
tertiary: "#84a59d",
highlight: "rgba(143, 159, 169, 0.15)",
textHighlight: "#fff23688",
},
darkMode: {
light: "#161618",
lightgray: "#393639",
gray: "#646464",
darkgray: "#d4d4d4",
dark: "#ebebec",
secondary: "#7b97aa",
tertiary: "#84a59d",
highlight: "rgba(143, 159, 169, 0.15)",
textHighlight: "#b3aa0288",
},
},
},
},
plugins: {
transformers: [
Plugin.FrontMatter({ delimiters: "+++", language: "toml" }),
Plugin.CreatedModifiedDate({
priority: ["frontmatter", "git", "filesystem"],
}),
Plugin.SyntaxHighlighting({
theme: {
light: "github-light",
dark: "github-dark",
},
keepBackground: false,
}),
// OxHugoFlavouredMarkdown must come before GitHubFlavoredMarkdown.
// Note: not compatible with ObsidianFlavoredMarkdown — use one or the other.
// If ox-hugo exports TOML frontmatter, change FrontMatter to:
// Plugin.FrontMatter({ delimiters: "+++", language: "toml" })
Plugin.OxHugoFlavouredMarkdown(),
Plugin.GitHubFlavoredMarkdown(),
Plugin.TableOfContents(),
Plugin.CrawlLinks({ markdownLinkResolution: "shortest" }),
Plugin.Description(),
Plugin.Latex({ renderEngine: "katex" }),
],
filters: [Plugin.RemoveDrafts()],
emitters: [
Plugin.AliasRedirects(),
Plugin.ComponentResources(),
Plugin.ContentPage(),
Plugin.FolderPage(),
Plugin.TagPage(),
Plugin.ContentIndex({
enableSiteMap: true,
enableRSS: true,
}),
Plugin.Assets(),
Plugin.Static(),
Plugin.Favicon(),
Plugin.NotFoundPage(),
// Comment out CustomOgImages to speed up build time
Plugin.CustomOgImages(),
],
},
}
export default config

View File

@@ -1,68 +0,0 @@
import { PageLayout, SharedLayout } from "./quartz/cfg"
import * as Component from "./quartz/components"
// components shared across all pages
export const sharedPageComponents: SharedLayout = {
head: Component.Head(),
header: [],
afterBody: [],
footer: Component.Footer({
links: {
GitHub: "https://github.com/jackyzha0/quartz",
"Discord Community": "https://discord.gg/cRFFHYye7t",
},
}),
}
// components for pages that display a single page (e.g. a single note)
export const defaultContentPageLayout: PageLayout = {
beforeBody: [
Component.ConditionalRender({
component: Component.Breadcrumbs(),
condition: (page) => page.fileData.slug !== "index",
}),
Component.ArticleTitle(),
Component.ContentMeta(),
Component.TagList(),
],
left: [
Component.PageTitle(),
Component.MobileOnly(Component.Spacer()),
Component.Flex({
components: [
{
Component: Component.Search(),
grow: true,
},
{ Component: Component.Darkmode() },
{ Component: Component.ReaderMode() },
],
}),
Component.Explorer(),
],
right: [
Component.Graph(),
Component.DesktopOnly(Component.TableOfContents()),
Component.Backlinks(),
],
}
// components for pages that display lists of pages (e.g. tags or folders)
export const defaultListPageLayout: PageLayout = {
beforeBody: [Component.Breadcrumbs(), Component.ArticleTitle(), Component.ContentMeta()],
left: [
Component.PageTitle(),
Component.MobileOnly(Component.Spacer()),
Component.Flex({
components: [
{
Component: Component.Search(),
grow: true,
},
{ Component: Component.Darkmode() },
],
}),
Component.Explorer(),
],
right: [],
}

View File

@@ -17,10 +17,7 @@
"check": "tsc --noEmit && npx prettier . --check",
"format": "npx prettier . --write",
"test": "tsx --test",
"profile": "0x -D prof ./quartz/bootstrap-cli.mjs build --concurrency=1",
"export": "elixir scripts/export.exs",
"build:notes": "elixir scripts/export.exs && npx quartz build",
"serve:notes": "elixir scripts/export.exs && npx quartz build --serve"
"profile": "0x -D prof ./quartz/bootstrap-cli.mjs build --concurrency=1"
},
"engines": {
"npm": ">=10.9.2",

View File

@@ -55,7 +55,7 @@ const config: QuartzConfig = {
},
plugins: {
transformers: [
Plugin.FrontMatter({ delimiters: "+++", language: "toml" }),
Plugin.FrontMatter(),
Plugin.CreatedModifiedDate({
priority: ["frontmatter", "git", "filesystem"],
}),
@@ -66,16 +66,13 @@ const config: QuartzConfig = {
},
keepBackground: false,
}),
// OxHugoFlavouredMarkdown must come before GitHubFlavoredMarkdown.
// Note: not compatible with ObsidianFlavoredMarkdown — use one or the other.
// If ox-hugo exports TOML frontmatter, change FrontMatter to:
// Plugin.FrontMatter({ delimiters: "+++", language: "toml" })
Plugin.OxHugoFlavouredMarkdown(),
Plugin.ObsidianFlavoredMarkdown({ enableInHtmlEmbed: false }),
Plugin.GitHubFlavoredMarkdown(),
Plugin.TableOfContents(),
Plugin.CrawlLinks({ markdownLinkResolution: "shortest" }),
Plugin.Description(),
Plugin.Latex({ renderEngine: "katex" }),
Plugin.ObsidianBases(),
],
filters: [Plugin.RemoveDrafts()],
emitters: [
@@ -94,6 +91,7 @@ const config: QuartzConfig = {
Plugin.NotFoundPage(),
// Comment out CustomOgImages to speed up build time
Plugin.CustomOgImages(),
Plugin.BasePage(),
],
},
}

View File

@@ -71,8 +71,8 @@ async function buildQuartz(argv: Argv, mut: Mutex, clientRefresh: () => void) {
console.log(`Cleaned output directory \`${output}\` in ${perf.timeSince("clean")}`)
perf.addEvent("glob")
const allFiles = await glob("**/*.*", argv.directory, cfg.configuration.ignorePatterns, false)
const markdownPaths = allFiles.filter((fp) => fp.endsWith(".md")).sort()
const allFiles = await glob("**/*.*", argv.directory, cfg.configuration.ignorePatterns)
const markdownPaths = allFiles.filter((fp) => fp.endsWith(".md") || fp.endsWith(".base")).sort()
console.log(
`Found ${markdownPaths.length} input files from \`${argv.directory}\` in ${perf.timeSince("glob")}`,
)

View File

@@ -0,0 +1,218 @@
import { JSX } from "preact"
import { QuartzComponent, QuartzComponentConstructor, QuartzComponentProps } from "./types"
import { classNames } from "../util/lang"
import { resolveRelative } from "../util/path"
// @ts-ignore
import script from "./scripts/base-view-selector.inline"
import baseViewSelectorStyle from "./styles/baseViewSelector.scss"
const icons: Record<string, JSX.Element> = {
table: (
<svg
xmlns="http://www.w3.org/2000/svg"
width="16"
height="16"
viewBox="0 0 24 24"
fill="none"
stroke="currentColor"
stroke-width="2"
stroke-linecap="round"
stroke-linejoin="round"
>
<rect width="18" height="18" x="3" y="3" rx="2" />
<path d="M3 9h18" />
<path d="M3 15h18" />
<path d="M9 3v18" />
<path d="M15 3v18" />
</svg>
),
list: (
<svg
xmlns="http://www.w3.org/2000/svg"
width="16"
height="16"
viewBox="0 0 24 24"
fill="none"
stroke="currentColor"
stroke-width="2"
stroke-linecap="round"
stroke-linejoin="round"
>
<line x1="8" x2="21" y1="6" y2="6" />
<line x1="8" x2="21" y1="12" y2="12" />
<line x1="8" x2="21" y1="18" y2="18" />
<line x1="3" x2="3.01" y1="6" y2="6" />
<line x1="3" x2="3.01" y1="12" y2="12" />
<line x1="3" x2="3.01" y1="18" y2="18" />
</svg>
),
chevronsUpDown: (
<svg
xmlns="http://www.w3.org/2000/svg"
width="16"
height="16"
viewBox="0 0 24 24"
fill="none"
stroke="currentColor"
stroke-width="2"
stroke-linecap="round"
stroke-linejoin="round"
>
<path d="m7 15 5 5 5-5" />
<path d="m7 9 5-5 5 5" />
</svg>
),
chevronRight: (
<svg
xmlns="http://www.w3.org/2000/svg"
width="16"
height="16"
viewBox="0 0 24 24"
fill="none"
stroke="currentColor"
stroke-width="2"
stroke-linecap="round"
stroke-linejoin="round"
>
<path d="m9 18 6-6-6-6" />
</svg>
),
x: (
<svg
xmlns="http://www.w3.org/2000/svg"
width="16"
height="16"
viewBox="0 0 24 24"
fill="none"
stroke="currentColor"
stroke-width="2"
stroke-linecap="round"
stroke-linejoin="round"
>
<path d="M18 6 6 18" />
<path d="m6 6 12 12" />
</svg>
),
map: (
<svg
xmlns="http://www.w3.org/2000/svg"
width="16"
height="16"
viewBox="0 0 24 24"
fill="none"
stroke="currentColor"
stroke-width="2"
stroke-linecap="round"
stroke-linejoin="round"
>
<path d="M15 6v12a3 3 0 1 0 3-3H6a3 3 0 1 0 3 3V6a3 3 0 1 0-3 3h12a3 3 0 1 0-3-3" />
</svg>
),
card: (
<svg
xmlns="http://www.w3.org/2000/svg"
width="16"
height="16"
viewBox="0 0 24 24"
fill="none"
stroke="currentColor"
stroke-width="2"
stroke-linecap="round"
stroke-linejoin="round"
>
<rect width="7" height="7" x="3" y="3" rx="1" />
<rect width="7" height="7" x="14" y="3" rx="1" />
<rect width="7" height="7" x="14" y="14" rx="1" />
<rect width="7" height="7" x="3" y="14" rx="1" />
</svg>
),
}
const viewTypeIcons: Record<string, JSX.Element | undefined> = {
table: icons.table,
list: icons.list,
gallery: icons.card,
board: icons.table,
calendar: icons.table,
map: icons.map,
cards: icons.card,
}
const BaseViewSelector: QuartzComponent = ({ fileData, displayClass }: QuartzComponentProps) => {
const baseMeta = fileData.basesMetadata
if (!baseMeta || baseMeta.allViews.length <= 1) {
return null
}
const currentViewName = baseMeta.currentView
const allViews = baseMeta.allViews
const currentIcon =
viewTypeIcons[allViews.find((view) => view.name === currentViewName)?.type ?? ""] ?? icons.table
return (
<div class={classNames(displayClass, "bases-toolbar")} data-base-view-selector>
<div class="bases-toolbar-item bases-toolbar-views-menu">
<span
class="text-icon-button"
aria-label="Select view"
aria-expanded="false"
aria-haspopup="true"
role="button"
tabindex={0}
>
<span class="text-button-icon">{currentIcon}</span>
<span class="text-button-label">{currentViewName.toLowerCase()}</span>
<span class="text-button-icon mod-aux">{icons.chevronsUpDown}</span>
</span>
</div>
<div class="menu-scroll" data-dropdown>
<div class="bases-toolbar-menu-container">
<div class="search-input-container">
<input type="search" placeholder="Search..." data-search-input />
<div class="search-input-clear-button" data-clear-search hidden>
{icons.x}
</div>
</div>
<div class="bases-toolbar-items">
<div class="suggestion-group" data-group="views" data-view-list>
{allViews.map((view) => {
const isActive = view.name === currentViewName
const icon = viewTypeIcons[view.type] || icons.table
const href = resolveRelative(fileData.slug!, view.slug)
return (
<a
href={href}
data-slug={view.slug}
class={
isActive
? "suggestion-item bases-toolbar-menu-item mod-active is-selected"
: "suggestion-item bases-toolbar-menu-item"
}
data-view-name={view.name}
data-view-type={view.type}
>
<div class="bases-toolbar-menu-item-info">
<div class="bases-toolbar-menu-item-info-icon">{icon}</div>
<div class="bases-toolbar-menu-item-name">{view.name.toLowerCase()}</div>
</div>
<div class="clickable-icon bases-toolbar-menu-item-icon">
{icons.chevronRight}
</div>
</a>
)
})}
</div>
</div>
</div>
</div>
</div>
)
}
BaseViewSelector.css = baseViewSelectorStyle
BaseViewSelector.afterDOMLoaded = script
export default (() => BaseViewSelector) satisfies QuartzComponentConstructor

View File

@@ -51,7 +51,9 @@ export default ((opts?: Partial<BreadcrumbOptions>) => {
ctx,
}: QuartzComponentProps) => {
const trie = (ctx.trie ??= trieFromAllFiles(allFiles))
const slugParts = fileData.slug!.split("/")
const baseMeta = fileData.basesMetadata
const slugParts = (baseMeta ? baseMeta.baseSlug : fileData.slug!).split("/")
const pathNodes = trie.ancestryChain(slugParts)
if (!pathNodes) {
@@ -64,14 +66,24 @@ export default ((opts?: Partial<BreadcrumbOptions>) => {
crumb.displayName = options.rootName
}
// For last node (current page), set empty path
if (idx === pathNodes.length - 1) {
crumb.path = ""
if (baseMeta) {
crumb.path = resolveRelative(fileData.slug!, simplifySlug(baseMeta.baseSlug))
} else {
crumb.path = ""
}
}
return crumb
})
if (baseMeta && options.showCurrentPage) {
crumbs.push({
displayName: baseMeta.currentView.replaceAll("-", " "),
path: "",
})
}
if (!options.showCurrentPage) {
crumbs.pop()
}

View File

@@ -1,6 +1,8 @@
import Content from "./pages/Content"
import TagContent from "./pages/TagContent"
import FolderContent from "./pages/FolderContent"
import BaseContent from "./pages/BaseContent"
import BaseViewSelector from "./BaseViewSelector"
import NotFound from "./pages/404"
import ArticleTitle from "./ArticleTitle"
import Darkmode from "./Darkmode"
@@ -29,6 +31,8 @@ export {
Content,
TagContent,
FolderContent,
BaseContent,
BaseViewSelector,
Darkmode,
ReaderMode,
Head,

View File

@@ -0,0 +1,20 @@
import { QuartzComponent, QuartzComponentConstructor, QuartzComponentProps } from "../types"
import style from "../styles/basePage.scss"
import { htmlToJsx } from "../../util/jsx"
export default (() => {
const BaseContent: QuartzComponent = (props: QuartzComponentProps) => {
const { fileData, tree } = props
return (
<div class="popover-hint">
<article class={["base-content", ...(fileData.frontmatter?.cssclasses ?? [])].join(" ")}>
{htmlToJsx(fileData.filePath!, fileData.basesRenderedTree ?? tree)}
</article>
</div>
)
}
BaseContent.css = style
return BaseContent
}) satisfies QuartzComponentConstructor

View File

@@ -0,0 +1,144 @@
let documentClickHandler: ((e: MouseEvent) => void) | null = null
function setupBaseViewSelector() {
const selectors = document.querySelectorAll("[data-base-view-selector]")
if (selectors.length === 0) return
if (!documentClickHandler) {
documentClickHandler = (e: MouseEvent) => {
document.querySelectorAll("[data-base-view-selector]").forEach((selector) => {
if (selector.contains(e.target as Node)) return
const trigger = selector.querySelector(".text-icon-button") as HTMLElement | null
if (trigger?.getAttribute("aria-expanded") === "true") {
selector.dispatchEvent(new CustomEvent("close-dropdown"))
}
})
}
document.addEventListener("click", documentClickHandler)
window.addCleanup(() => {
if (documentClickHandler) {
document.removeEventListener("click", documentClickHandler)
documentClickHandler = null
}
})
}
selectors.forEach((selector) => {
if (selector.hasAttribute("data-initialized")) return
selector.setAttribute("data-initialized", "true")
const triggerEl = selector.querySelector(".text-icon-button") as HTMLElement | null
const searchInputEl = selector.querySelector("[data-search-input]") as HTMLInputElement | null
const clearButtonEl = selector.querySelector("[data-clear-search]") as HTMLElement | null
const viewListEl = selector.querySelector("[data-view-list]") as HTMLElement | null
if (!triggerEl || !searchInputEl || !clearButtonEl || !viewListEl) return
const trigger = triggerEl
const searchInput = searchInputEl
const clearButton = clearButtonEl
const viewList = viewListEl
function toggleDropdown() {
if (trigger.getAttribute("aria-expanded") === "true") {
closeDropdown()
return
}
openDropdown()
}
function openDropdown() {
trigger.setAttribute("aria-expanded", "true")
trigger.classList.add("has-active-menu")
setTimeout(() => searchInput.focus(), 10)
}
function closeDropdown() {
trigger.setAttribute("aria-expanded", "false")
trigger.classList.remove("has-active-menu")
searchInput.value = ""
clearButton.hidden = true
filterViews("")
}
function filterViews(query: string) {
const items = viewList.querySelectorAll<HTMLElement>(".bases-toolbar-menu-item")
const lowerQuery = query.toLowerCase()
items.forEach((item) => {
const viewName = (item.getAttribute("data-view-name") || "").toLowerCase()
const viewType = (item.getAttribute("data-view-type") || "").toLowerCase()
const matches = viewName.includes(lowerQuery) || viewType.includes(lowerQuery)
item.style.display = matches ? "" : "none"
})
}
function handleSearchInput() {
const query = searchInput.value
filterViews(query)
clearButton.hidden = query.length === 0
}
function clearSearch() {
searchInput.value = ""
clearButton.hidden = true
filterViews("")
searchInput.focus()
}
const handleTriggerClick = (e: MouseEvent) => {
e.stopPropagation()
toggleDropdown()
}
const handleTriggerKeydown = (e: KeyboardEvent) => {
if (e.key === "Enter" || e.key === " ") {
e.preventDefault()
toggleDropdown()
}
}
const handleSearchKeydown = (e: KeyboardEvent) => {
if (e.key === "Escape") {
if (searchInput.value) {
clearSearch()
} else {
closeDropdown()
}
}
}
const handleClearClick = (e: MouseEvent) => {
e.stopPropagation()
clearSearch()
}
trigger.addEventListener("click", handleTriggerClick)
trigger.addEventListener("keydown", handleTriggerKeydown)
searchInput.addEventListener("input", handleSearchInput)
searchInput.addEventListener("keydown", handleSearchKeydown)
clearButton.addEventListener("click", handleClearClick)
const viewLinks = viewList.querySelectorAll(".bases-toolbar-menu-item")
viewLinks.forEach((link) => {
link.addEventListener("click", closeDropdown)
window.addCleanup(() => link.removeEventListener("click", closeDropdown))
})
selector.addEventListener("close-dropdown", closeDropdown)
window.addCleanup(() => {
trigger.removeEventListener("click", handleTriggerClick)
trigger.removeEventListener("keydown", handleTriggerKeydown)
searchInput.removeEventListener("input", handleSearchInput)
searchInput.removeEventListener("keydown", handleSearchKeydown)
clearButton.removeEventListener("click", handleClearClick)
selector.removeEventListener("close-dropdown", closeDropdown)
selector.removeAttribute("data-initialized")
closeDropdown()
})
})
}
document.addEventListener("nav", setupBaseViewSelector)

View File

@@ -0,0 +1,299 @@
.base-content {
width: 100%;
}
.base-view {
width: 100%;
overflow-x: auto;
}
.base-table {
width: 100%;
border-collapse: collapse;
font-size: 0.875rem;
th,
td {
padding: 0.5rem 0.75rem;
text-align: left;
border-bottom: 1px solid var(--lightgray);
}
th {
font-weight: 600;
color: var(--darkgray);
background: var(--light);
position: sticky;
top: 0;
}
tbody tr:hover {
background: var(--light);
}
a.internal {
color: var(--secondary);
text-decoration: none;
&:hover {
text-decoration: underline;
}
}
}
.base-group-header td {
font-weight: 600;
background: var(--light);
color: var(--dark);
padding-top: 1rem;
}
.base-summary-row {
background: var(--light);
font-weight: 500;
.base-summary-cell {
border-top: 2px solid var(--lightgray);
color: var(--darkgray);
}
}
.base-checkbox {
pointer-events: none;
width: 1rem;
height: 1rem;
accent-color: var(--secondary);
}
.base-list {
list-style: none;
padding: 0;
margin: 0;
li {
padding: 0.375rem 0;
border-bottom: 1px solid var(--lightgray);
&:last-child {
border-bottom: none;
}
}
a.internal {
color: var(--secondary);
text-decoration: none;
&:hover {
text-decoration: underline;
}
}
}
.base-list-container {
display: flex;
flex-direction: column;
gap: 1.5rem;
}
.base-list-group {
.base-list-group-header {
font-size: 1rem;
font-weight: 600;
margin-bottom: 0.5rem;
color: var(--dark);
}
}
.base-list-nested {
list-style: none;
padding-left: 1rem;
margin-top: 0.25rem;
font-size: 0.8125rem;
color: var(--darkgray);
}
.base-list-meta-label {
font-weight: 500;
}
.base-card-grid {
--base-card-min: 200px;
--base-card-aspect: 1.4;
display: grid;
grid-template-columns: repeat(auto-fill, minmax(var(--base-card-min), 1fr));
gap: 1rem;
}
.base-card-container {
display: flex;
flex-direction: column;
gap: 1.5rem;
}
.base-card-group {
.base-card-group-header {
font-size: 1rem;
font-weight: 600;
margin-bottom: 0.75rem;
color: var(--dark);
}
}
.base-card {
display: flex;
flex-direction: column;
border: 1px solid var(--lightgray);
border-radius: 8px;
overflow: hidden;
background: var(--light);
transition: box-shadow 0.15s ease;
&:hover {
box-shadow: 0 4px 12px rgba(0, 0, 0, 0.1);
}
}
.base-card-image-link {
display: block;
aspect-ratio: var(--base-card-aspect);
background-position: center;
background-repeat: no-repeat;
}
.base-card-content {
padding: 0.75rem;
display: flex;
flex-direction: column;
gap: 0.5rem;
}
.base-card-title-link {
text-decoration: none;
color: inherit;
&:hover .base-card-title {
color: var(--secondary);
}
}
.base-card-title {
font-size: 0.9375rem;
font-weight: 600;
margin: 0;
line-height: 1.3;
transition: color 0.15s ease;
}
.base-card-meta {
display: flex;
flex-direction: column;
gap: 0.25rem;
font-size: 0.8125rem;
color: var(--darkgray);
}
.base-card-meta-item {
display: flex;
gap: 0.25rem;
}
.base-card-meta-label {
font-weight: 500;
&::after {
content: ":";
}
}
.base-calendar-container {
display: flex;
flex-direction: column;
gap: 1.5rem;
}
.base-calendar-group {
.base-calendar-group-header {
font-size: 0.9375rem;
font-weight: 600;
margin-bottom: 0.5rem;
color: var(--dark);
font-variant-numeric: tabular-nums;
}
}
.base-map {
width: 100%;
min-height: 400px;
background: var(--light);
border: 1px solid var(--lightgray);
border-radius: 8px;
display: flex;
align-items: center;
justify-content: center;
color: var(--darkgray);
&::before {
content: "Map view requires client-side JavaScript";
font-size: 0.875rem;
}
}
.base-diagnostics {
background: #fff3cd;
border: 1px solid #ffc107;
border-radius: 4px;
padding: 1rem;
margin-bottom: 1rem;
font-size: 0.875rem;
}
.base-diagnostics-title {
font-weight: 600;
margin-bottom: 0.5rem;
color: #856404;
}
.base-diagnostics-meta {
display: flex;
gap: 0.5rem;
margin-bottom: 0.75rem;
color: #856404;
}
.base-diagnostics-page {
font-family: var(--codeFont);
}
.base-diagnostics-list {
list-style: none;
padding: 0;
margin: 0;
display: flex;
flex-direction: column;
gap: 0.5rem;
}
.base-diagnostics-item {
background: white;
padding: 0.5rem;
border-radius: 4px;
}
.base-diagnostics-label {
font-weight: 500;
color: #856404;
}
.base-diagnostics-message {
color: #664d03;
margin: 0.25rem 0;
}
.base-diagnostics-source {
display: block;
font-size: 0.8125rem;
color: #6c757d;
white-space: pre-wrap;
word-break: break-all;
}

View File

@@ -0,0 +1,275 @@
@use "../../styles/variables.scss" as *;
.bases-toolbar {
position: relative;
display: inline-block;
margin: 1rem 0;
font-family: var(--bodyFont);
.bases-toolbar-item {
display: inline-block;
position: relative;
&.bases-toolbar-views-menu {
.text-icon-button {
display: flex;
align-items: center;
gap: 0.375rem;
padding: 0.375rem 0.75rem;
background: var(--light);
border: 1px solid var(--lightgray);
border-radius: 6px;
color: var(--darkgray);
font-size: 0.875rem;
font-weight: 500;
cursor: pointer;
transition: all 0.15s ease;
user-select: none;
&:hover {
background: var(--highlight);
border-color: var(--gray);
}
&.has-active-menu {
border-color: var(--secondary);
background: var(--highlight);
}
.text-button-icon {
display: flex;
align-items: center;
justify-content: center;
width: 16px;
height: 16px;
color: var(--gray);
flex-shrink: 0;
svg {
width: 16px;
height: 16px;
}
&.mod-aux {
opacity: 0.7;
}
}
.text-button-label {
font-size: 0.875rem;
color: var(--dark);
font-weight: 500;
}
}
}
}
.menu-scroll {
position: absolute;
top: calc(100% + 0.5rem);
left: 0;
z-index: 100;
max-height: 400px;
background: var(--light);
border: 1px solid var(--lightgray);
border-radius: 8px;
box-shadow:
0 4px 6px -1px rgb(0 0 0 / 0.1),
0 2px 4px -2px rgb(0 0 0 / 0.1);
overflow: hidden;
min-width: 280px;
display: none;
}
&:has(.text-icon-button.has-active-menu) .menu-scroll {
display: block;
}
.bases-toolbar-menu-container {
display: flex;
flex-direction: column;
max-height: 400px;
.search-input-container {
position: relative;
padding: 0.5rem;
border-bottom: 1px solid var(--lightgray);
input[type="search"] {
width: 100%;
padding: 0.375rem 0.75rem;
padding-right: 2rem;
background: var(--light);
border: 1px solid var(--secondary);
border-radius: 6px;
font-size: 0.875rem;
color: var(--dark);
outline: none;
transition: box-shadow 0.15s ease;
font-family: var(--bodyFont);
&::placeholder {
color: var(--gray);
opacity: 0.7;
}
&:focus {
box-shadow: 0 0 0 2px var(--highlight);
}
&::-webkit-search-cancel-button {
display: none;
}
}
.search-input-clear-button {
position: absolute;
right: 1rem;
top: 50%;
transform: translateY(-50%);
display: flex;
align-items: center;
justify-content: center;
width: 20px;
height: 20px;
cursor: pointer;
opacity: 0.5;
transition: opacity 0.15s ease;
color: var(--gray);
&:hover {
opacity: 1;
}
&[hidden] {
display: none;
}
svg {
width: 14px;
height: 14px;
}
}
}
.bases-toolbar-items {
overflow-y: auto;
max-height: 340px;
&::-webkit-scrollbar {
width: 8px;
}
&::-webkit-scrollbar-track {
background: transparent;
}
&::-webkit-scrollbar-thumb {
background: var(--lightgray);
border-radius: 4px;
&:hover {
background: var(--gray);
}
}
.suggestion-group {
&[data-group="views"] {
padding: 0.25rem 0;
text-transform: lowercase;
}
}
.suggestion-item {
display: block;
text-decoration: none;
color: inherit;
cursor: pointer;
&.bases-toolbar-menu-item {
display: flex;
align-items: center;
justify-content: space-between;
padding: 0.5rem 0.75rem;
margin: 0 0.25rem;
border-radius: 4px;
transition: background 0.15s ease;
&:hover {
background: var(--lightgray);
}
&.mod-active {
font-weight: $semiBoldWeight;
}
&.is-selected {
.bases-toolbar-menu-item-info {
.bases-toolbar-menu-item-name {
font-weight: 600;
color: var(--secondary);
}
}
}
.bases-toolbar-menu-item-info {
display: flex;
align-items: center;
gap: 0.5rem;
flex: 1;
.bases-toolbar-menu-item-info-icon {
display: flex;
align-items: center;
justify-content: center;
width: 16px;
height: 16px;
color: var(--gray);
flex-shrink: 0;
svg {
width: 16px;
height: 16px;
}
}
.bases-toolbar-menu-item-name {
font-size: 0.875rem;
color: var(--dark);
}
}
.clickable-icon.bases-toolbar-menu-item-icon {
display: flex;
align-items: center;
justify-content: center;
width: 16px;
height: 16px;
opacity: 0;
transition: opacity 0.15s ease;
color: var(--gray);
flex-shrink: 0;
svg {
width: 16px;
height: 16px;
}
}
&:hover .clickable-icon.bases-toolbar-menu-item-icon {
opacity: 0.5;
}
}
}
}
}
}
@media all and ($mobile) {
.bases-toolbar {
.menu-scroll {
min-width: 240px;
left: auto;
}
}
}

View File

@@ -7,8 +7,12 @@ import { Argv } from "../../util/ctx"
import { QuartzConfig } from "../../cfg"
const filesToCopy = async (argv: Argv, cfg: QuartzConfig) => {
// glob all non-MD files in the content folder and copy them over
return await glob("**", argv.directory, ["**/*.md", ...cfg.configuration.ignorePatterns])
// glob all non-MD/.base files in the content folder and copy them over
return await glob("**", argv.directory, [
"**/*.md",
"**/*.base",
...cfg.configuration.ignorePatterns,
])
}
const copyFile = async (argv: Argv, fp: FilePath) => {
@@ -37,7 +41,7 @@ export const Assets: QuartzEmitterPlugin = () => {
async *partialEmit(ctx, _content, _resources, changeEvents) {
for (const changeEvent of changeEvents) {
const ext = path.extname(changeEvent.path)
if (ext === ".md") continue
if (ext === ".md" || ext === ".base") continue
if (changeEvent.type === "add" || changeEvent.type === "change") {
yield copyFile(ctx.argv, changeEvent.path)

View File

@@ -0,0 +1,184 @@
import { QuartzEmitterPlugin } from "../types"
import { QuartzComponentProps } from "../../components/types"
import HeaderConstructor from "../../components/Header"
import BodyConstructor from "../../components/Body"
import { pageResources, renderPage } from "../../components/renderPage"
import { ProcessedContent, QuartzPluginData } from "../vfile"
import { FullPageLayout } from "../../cfg"
import { pathToRoot } from "../../util/path"
import { defaultListPageLayout, sharedPageComponents } from "../../../quartz.layout"
import { BaseContent, BaseViewSelector } from "../../components"
import { write } from "./helpers"
import { BuildCtx } from "../../util/ctx"
import { StaticResources } from "../../util/resources"
import {
renderBaseViewsForFile,
RenderedBaseView,
BaseViewMeta,
BaseMetadata,
} from "../../util/base/render"
import { BaseFile } from "../../util/base/types"
interface BasePageOptions extends FullPageLayout {}
function isBaseFile(data: QuartzPluginData): boolean {
return Boolean(data.basesConfig && (data.basesConfig as BaseFile).views?.length > 0)
}
function getBaseFiles(content: ProcessedContent[]): ProcessedContent[] {
return content.filter(([_, file]) => isBaseFile(file.data))
}
async function processBasePage(
ctx: BuildCtx,
baseFileData: QuartzPluginData,
renderedView: RenderedBaseView,
allViews: BaseViewMeta[],
allFiles: QuartzPluginData[],
opts: FullPageLayout,
resources: StaticResources,
) {
const slug = renderedView.slug
const cfg = ctx.cfg.configuration
const externalResources = pageResources(pathToRoot(slug), resources)
const viewFileData: QuartzPluginData = {
...baseFileData,
slug,
frontmatter: {
...baseFileData.frontmatter,
title: renderedView.view.name,
},
basesRenderedTree: renderedView.tree,
basesAllViews: allViews,
basesCurrentView: renderedView.view.name,
basesMetadata: {
baseSlug: baseFileData.slug!,
currentView: renderedView.view.name,
allViews,
},
}
const componentData: QuartzComponentProps = {
ctx,
fileData: viewFileData,
externalResources,
cfg,
children: [],
tree: renderedView.tree,
allFiles,
}
const content = renderPage(cfg, slug, componentData, opts, externalResources)
return write({
ctx,
content,
slug,
ext: ".html",
})
}
export const BasePage: QuartzEmitterPlugin<Partial<BasePageOptions>> = (userOpts) => {
const baseOpts: FullPageLayout = {
...sharedPageComponents,
...defaultListPageLayout,
pageBody: BaseContent(),
...userOpts,
}
const opts: FullPageLayout = {
...baseOpts,
beforeBody: [
...baseOpts.beforeBody.filter((component) => component.name !== "ArticleTitle"),
BaseViewSelector(),
],
}
const { head: Head, header, beforeBody, pageBody, afterBody, left, right, footer: Footer } = opts
const Header = HeaderConstructor()
const Body = BodyConstructor()
return {
name: "BasePage",
getQuartzComponents() {
return [
Head,
Header,
Body,
...header,
...beforeBody,
pageBody,
...afterBody,
...left,
...right,
Footer,
]
},
async *emit(ctx, content, resources) {
const allFiles = content.map((c) => c[1].data)
const baseFiles = getBaseFiles(content)
for (const [_, file] of baseFiles) {
const baseFileData = file.data
const { views, allViews } = renderBaseViewsForFile(baseFileData, allFiles)
for (const renderedView of views) {
yield processBasePage(
ctx,
baseFileData,
renderedView,
allViews,
allFiles,
opts,
resources,
)
}
}
},
async *partialEmit(ctx, content, resources, changeEvents) {
const allFiles = content.map((c) => c[1].data)
const baseFiles = getBaseFiles(content)
const affectedBaseSlugs = new Set<string>()
for (const event of changeEvents) {
if (!event.file) continue
const slug = event.file.data.slug
if (slug && isBaseFile(event.file.data)) {
affectedBaseSlugs.add(slug)
}
}
for (const [_, file] of baseFiles) {
const baseFileData = file.data
const baseSlug = baseFileData.slug
if (!baseSlug || !affectedBaseSlugs.has(baseSlug)) continue
const { views, allViews } = renderBaseViewsForFile(baseFileData, allFiles)
for (const renderedView of views) {
yield processBasePage(
ctx,
baseFileData,
renderedView,
allViews,
allFiles,
opts,
resources,
)
}
}
},
}
}
declare module "vfile" {
interface DataMap {
basesRenderedTree?: import("hast").Root
basesAllViews?: BaseViewMeta[]
basesCurrentView?: string
basesMetadata?: BaseMetadata
}
}

View File

@@ -83,6 +83,8 @@ export const ContentPage: QuartzEmitterPlugin<Partial<FullPageLayout>> = (userOp
containsIndex = true
}
if (file.data.filePath!.endsWith(".base")) continue
// only process home page, non-tag pages, and non-index pages
if (slug.endsWith("/index") || slug.startsWith("tags/")) continue
yield processContent(ctx, tree, file.data, allFiles, opts, resources)
@@ -112,6 +114,7 @@ export const ContentPage: QuartzEmitterPlugin<Partial<FullPageLayout>> = (userOp
for (const [tree, file] of content) {
const slug = file.data.slug!
if (!changedSlugs.has(slug)) continue
if (file.data.filePath!.endsWith(".base")) continue
if (slug.endsWith("/index") || slug.startsWith("tags/")) continue
yield processContent(ctx, tree, file.data, allFiles, opts, resources)

View File

@@ -10,3 +10,4 @@ export { ComponentResources } from "./componentResources"
export { NotFoundPage } from "./404"
export { CNAME } from "./cname"
export { CustomOgImages } from "./ogImage"
export { BasePage } from "./basePage"

View File

@@ -7,7 +7,6 @@ import { dirname } from "path"
export const Static: QuartzEmitterPlugin = () => ({
name: "Static",
async *emit({ argv, cfg }) {
// Copy Quartz's own internal static assets (quartz/static/) → output/static/
const staticPath = joinSegments(QUARTZ, "static")
const fps = await glob("**", staticPath, cfg.configuration.ignorePatterns)
const outputStaticPath = joinSegments(argv.output, "static")
@@ -19,21 +18,6 @@ export const Static: QuartzEmitterPlugin = () => ({
await fs.promises.copyFile(src, dest)
yield dest
}
// Copy user-facing static assets (static/) → output/ preserving paths.
// This mirrors Hugo's convention: static/ox-hugo/foo.png is served at /ox-hugo/foo.png,
// which matches the src="/ox-hugo/..." paths that ox-hugo writes into exported markdown.
const userStaticPath = "static"
if (fs.existsSync(userStaticPath)) {
const userFps = await glob("**", userStaticPath, cfg.configuration.ignorePatterns, false)
for (const fp of userFps) {
const src = joinSegments(userStaticPath, fp) as FilePath
const dest = joinSegments(argv.output, fp) as FilePath
await fs.promises.mkdir(dirname(dest), { recursive: true })
await fs.promises.copyFile(src, dest)
yield dest
}
}
},
async *partialEmit() {},
})

View File

@@ -0,0 +1,521 @@
import * as yaml from "js-yaml"
import { QuartzTransformerPlugin } from "../types"
import { FilePath, getFileExtension } from "../../util/path"
import {
BaseFile,
BaseView,
BaseFileFilter,
parseViews,
parseViewSummaries,
BUILTIN_SUMMARY_TYPES,
BuiltinSummaryType,
} from "../../util/base/types"
import {
parseExpressionSource,
compileExpression,
buildPropertyExpressionSource,
ProgramIR,
BasesExpressions,
BaseExpressionDiagnostic,
Span,
} from "../../util/base/compiler"
export interface BasesOptions {
/** Whether to emit diagnostics as warnings during build */
emitWarnings: boolean
}
const defaultOptions: BasesOptions = {
emitWarnings: true,
}
type FilterStructure =
| string
| { and?: FilterStructure[]; or?: FilterStructure[]; not?: FilterStructure[] }
function compileFilterStructure(
filter: FilterStructure | undefined,
file: string,
diagnostics: BaseExpressionDiagnostic[],
context: string,
): ProgramIR | undefined {
if (!filter) return undefined
if (typeof filter === "string") {
const result = parseExpressionSource(filter, file)
if (result.diagnostics.length > 0) {
for (const diag of result.diagnostics) {
diagnostics.push({
kind: diag.kind as "lex" | "parse" | "runtime",
message: diag.message,
span: diag.span,
context,
source: filter,
})
}
}
if (!result.program.body) return undefined
return compileExpression(result.program.body)
}
const compileParts = (
parts: FilterStructure[],
combiner: "&&" | "||",
negate: boolean,
): ProgramIR | undefined => {
const compiled: ProgramIR[] = []
for (const part of parts) {
const partIR = compileFilterStructure(part, file, diagnostics, context)
if (partIR) compiled.push(partIR)
}
if (compiled.length === 0) return undefined
if (compiled.length === 1) {
if (negate) {
return wrapWithNot(compiled[0])
}
return compiled[0]
}
let result = compiled[0]
for (let i = 1; i < compiled.length; i++) {
result = combineWithLogical(result, compiled[i], combiner, negate)
}
return result
}
if (filter.and && filter.and.length > 0) {
return compileParts(filter.and, "&&", false)
}
if (filter.or && filter.or.length > 0) {
return compileParts(filter.or, "||", false)
}
if (filter.not && filter.not.length > 0) {
return compileParts(filter.not, "&&", true)
}
return undefined
}
function wrapWithNot(ir: ProgramIR): ProgramIR {
const span = ir.span
return {
instructions: [
...ir.instructions,
{ op: "to_bool" as const, span },
{ op: "unary" as const, operator: "!" as const, span },
],
span,
}
}
function combineWithLogical(
left: ProgramIR,
right: ProgramIR,
operator: "&&" | "||",
negateRight: boolean,
): ProgramIR {
const span: Span = {
start: left.span.start,
end: right.span.end,
file: left.span.file,
}
const rightIR = negateRight ? wrapWithNot(right) : right
if (operator === "&&") {
const jumpIfFalseIndex = left.instructions.length + 1
const jumpIndex = jumpIfFalseIndex + rightIR.instructions.length + 2
return {
instructions: [
...left.instructions,
{ op: "jump_if_false" as const, target: jumpIndex, span },
...rightIR.instructions,
{ op: "to_bool" as const, span },
{ op: "jump" as const, target: jumpIndex + 1, span },
{
op: "const" as const,
literal: { type: "Literal" as const, kind: "boolean" as const, value: false, span },
span,
},
],
span,
}
} else {
const jumpIfTrueIndex = left.instructions.length + 1
const jumpIndex = jumpIfTrueIndex + rightIR.instructions.length + 2
return {
instructions: [
...left.instructions,
{ op: "jump_if_true" as const, target: jumpIndex, span },
...rightIR.instructions,
{ op: "to_bool" as const, span },
{ op: "jump" as const, target: jumpIndex + 1, span },
{
op: "const" as const,
literal: { type: "Literal" as const, kind: "boolean" as const, value: true, span },
span,
},
],
span,
}
}
}
function collectPropertiesFromViews(views: BaseView[]): Set<string> {
const properties = new Set<string>()
for (const view of views) {
if (view.order) {
for (const prop of view.order) {
properties.add(prop)
}
}
if (view.groupBy) {
const groupProp = typeof view.groupBy === "string" ? view.groupBy : view.groupBy.property
properties.add(groupProp)
}
if (view.sort) {
for (const sortConfig of view.sort) {
properties.add(sortConfig.property)
}
}
if (view.image) properties.add(view.image)
if (view.date) properties.add(view.date)
if (view.dateField) properties.add(view.dateField)
if (view.dateProperty) properties.add(view.dateProperty)
if (view.coordinates) properties.add(view.coordinates)
if (view.markerIcon) properties.add(view.markerIcon)
if (view.markerColor) properties.add(view.markerColor)
}
return properties
}
function compilePropertyExpressions(
properties: Set<string>,
file: string,
diagnostics: BaseExpressionDiagnostic[],
): Record<string, ProgramIR> {
const expressions: Record<string, ProgramIR> = {}
for (const property of properties) {
const source = buildPropertyExpressionSource(property)
if (!source) continue
const result = parseExpressionSource(source, file)
if (result.diagnostics.length > 0) {
for (const diag of result.diagnostics) {
diagnostics.push({
kind: diag.kind as "lex" | "parse" | "runtime",
message: diag.message,
span: diag.span,
context: `property.${property}`,
source,
})
}
}
if (result.program.body) {
expressions[property] = compileExpression(result.program.body)
}
}
return expressions
}
function compileFormulas(
formulas: Record<string, string> | undefined,
file: string,
diagnostics: BaseExpressionDiagnostic[],
): Record<string, ProgramIR> {
if (!formulas) return {}
const compiled: Record<string, ProgramIR> = {}
for (const [name, source] of Object.entries(formulas)) {
const trimmed = source.trim()
if (!trimmed) continue
const result = parseExpressionSource(trimmed, file)
if (result.diagnostics.length > 0) {
for (const diag of result.diagnostics) {
diagnostics.push({
kind: diag.kind as "lex" | "parse" | "runtime",
message: diag.message,
span: diag.span,
context: `formulas.${name}`,
source: trimmed,
})
}
}
if (result.program.body) {
compiled[name] = compileExpression(result.program.body)
}
}
return compiled
}
function compileSummaries(
summaries: Record<string, string> | undefined,
file: string,
diagnostics: BaseExpressionDiagnostic[],
): Record<string, ProgramIR> {
if (!summaries) return {}
const compiled: Record<string, ProgramIR> = {}
for (const [name, source] of Object.entries(summaries)) {
const trimmed = source.trim()
if (!trimmed) continue
const normalized = trimmed.toLowerCase()
if (BUILTIN_SUMMARY_TYPES.includes(normalized as BuiltinSummaryType)) {
continue
}
const result = parseExpressionSource(trimmed, file)
if (result.diagnostics.length > 0) {
for (const diag of result.diagnostics) {
diagnostics.push({
kind: diag.kind as "lex" | "parse" | "runtime",
message: diag.message,
span: diag.span,
context: `summaries.${name}`,
source: trimmed,
})
}
}
if (result.program.body) {
compiled[name] = compileExpression(result.program.body)
}
}
return compiled
}
function compileViewSummaries(
views: BaseView[],
topLevelSummaries: Record<string, string> | undefined,
file: string,
diagnostics: BaseExpressionDiagnostic[],
): Record<string, Record<string, ProgramIR>> {
const result: Record<string, Record<string, ProgramIR>> = {}
for (let i = 0; i < views.length; i++) {
const view = views[i]
if (!view.summaries) continue
const viewSummaryConfig = parseViewSummaries(
view.summaries as Record<string, string>,
topLevelSummaries,
)
if (!viewSummaryConfig?.columns) continue
const viewExpressions: Record<string, ProgramIR> = {}
for (const [column, def] of Object.entries(viewSummaryConfig.columns)) {
if (def.type !== "formula" || !def.expression) continue
const parseResult = parseExpressionSource(def.expression, file)
if (parseResult.diagnostics.length > 0) {
for (const diag of parseResult.diagnostics) {
diagnostics.push({
kind: diag.kind as "lex" | "parse" | "runtime",
message: diag.message,
span: diag.span,
context: `views[${i}].summaries.${column}`,
source: def.expression,
})
}
}
if (parseResult.program.body) {
viewExpressions[column] = compileExpression(parseResult.program.body)
}
}
if (Object.keys(viewExpressions).length > 0) {
result[String(i)] = viewExpressions
}
}
return result
}
export const ObsidianBases: QuartzTransformerPlugin<Partial<BasesOptions>> = (userOpts) => {
const opts = { ...defaultOptions, ...userOpts }
return {
name: "ObsidianBases",
textTransform(_ctx, src) {
return src
},
markdownPlugins(_ctx) {
return [
() => {
return (_tree, file) => {
const filePath = file.data.filePath as FilePath | undefined
if (!filePath) return
const ext = getFileExtension(filePath)
if (ext !== ".base") return
const content = file.value.toString()
if (!content.trim()) return
const diagnostics: BaseExpressionDiagnostic[] = []
const filePathStr = filePath
try {
const parsed = yaml.load(content, { schema: yaml.JSON_SCHEMA }) as Record<
string,
unknown
>
if (!parsed || typeof parsed !== "object") {
diagnostics.push({
kind: "parse",
message: "Base file must contain a valid YAML object",
span: {
start: { offset: 0, line: 1, column: 1 },
end: { offset: 0, line: 1, column: 1 },
file: filePathStr,
},
context: "root",
source: content.slice(0, 100),
})
file.data.basesDiagnostics = diagnostics
return
}
const rawViews = parsed.views
if (!Array.isArray(rawViews) || rawViews.length === 0) {
diagnostics.push({
kind: "parse",
message: "Base file must have at least one view defined",
span: {
start: { offset: 0, line: 1, column: 1 },
end: { offset: 0, line: 1, column: 1 },
file: filePathStr,
},
context: "views",
source: "views: []",
})
file.data.basesDiagnostics = diagnostics
return
}
const views = parseViews(rawViews)
const filters = parsed.filters as BaseFileFilter | undefined
const properties = parsed.properties as
| Record<string, { displayName?: string }>
| undefined
const summaries = parsed.summaries as Record<string, string> | undefined
const formulas = parsed.formulas as Record<string, string> | undefined
const baseConfig: BaseFile = {
filters,
views,
properties,
summaries,
formulas,
}
const compiledFilters = compileFilterStructure(
filters as FilterStructure | undefined,
filePathStr,
diagnostics,
"filters",
)
const viewFilters: Record<string, ProgramIR> = {}
for (let i = 0; i < views.length; i++) {
const view = views[i]
if (view.filters) {
const compiled = compileFilterStructure(
view.filters as FilterStructure,
filePathStr,
diagnostics,
`views[${i}].filters`,
)
if (compiled) {
viewFilters[String(i)] = compiled
}
}
}
const compiledFormulas = compileFormulas(formulas, filePathStr, diagnostics)
const compiledSummaries = compileSummaries(summaries, filePathStr, diagnostics)
const compiledViewSummaries = compileViewSummaries(
views,
summaries,
filePathStr,
diagnostics,
)
const viewProperties = collectPropertiesFromViews(views)
for (const name of Object.keys(compiledFormulas)) {
viewProperties.add(`formula.${name}`)
}
const propertyExpressions = compilePropertyExpressions(
viewProperties,
filePathStr,
diagnostics,
)
const expressions: BasesExpressions = {
filters: compiledFilters,
viewFilters,
formulas: compiledFormulas,
summaries: compiledSummaries,
viewSummaries: compiledViewSummaries,
propertyExpressions,
}
file.data.basesConfig = baseConfig
file.data.basesExpressions = expressions
file.data.basesDiagnostics = diagnostics
const existingFrontmatter = (file.data.frontmatter ?? {}) as Record<string, unknown>
file.data.frontmatter = {
title: views[0]?.name ?? file.stem ?? "Base",
tags: ["base"],
...existingFrontmatter,
}
if (opts.emitWarnings && diagnostics.length > 0) {
for (const diag of diagnostics) {
console.warn(
`[bases] ${filePathStr}:${diag.span.start.line}:${diag.span.start.column} - ${diag.message}`,
)
}
}
} catch (err) {
const message = err instanceof Error ? err.message : String(err)
diagnostics.push({
kind: "parse",
message: `Failed to parse base file: ${message}`,
span: {
start: { offset: 0, line: 1, column: 1 },
end: { offset: 0, line: 1, column: 1 },
file: filePathStr,
},
context: "root",
source: content.slice(0, 100),
})
file.data.basesDiagnostics = diagnostics
if (opts.emitWarnings) {
console.warn(`[bases] ${filePathStr}: ${message}`)
}
}
}
},
]
},
}
}
declare module "vfile" {
interface DataMap {
basesConfig?: BaseFile
basesExpressions?: BasesExpressions
basesDiagnostics?: BaseExpressionDiagnostic[]
}
}


@@ -11,3 +11,4 @@ export { SyntaxHighlighting } from "./syntax"
export { TableOfContents } from "./toc"
export { HardLineBreaks } from "./linebreaks"
export { RoamFlavoredMarkdown } from "./roam"
export { ObsidianBases } from "./bases"


@@ -289,8 +289,11 @@ export const ObsidianFlavoredMarkdown: QuartzTransformerPlugin<Partial<Options>>
}
}
// internal link
const url = fp + anchor
const isBaseFile = fp.endsWith(".base")
const basePath = isBaseFile ? fp.slice(0, -5) : fp
const url = isBaseFile
? basePath + (anchor ? `/${anchor.slice(1).replace(/\s+/g, "-")}` : "")
: fp + anchor
return {
type: "link",
@@ -298,7 +301,7 @@ export const ObsidianFlavoredMarkdown: QuartzTransformerPlugin<Partial<Options>>
children: [
{
type: "text",
value: alias ?? fp,
value: alias ?? basePath,
},
],
}


@@ -27,10 +27,7 @@ const defaultOptions: Options = {
const relrefRegex = new RegExp(/\[([^\]]+)\]\(\{\{< relref "([^"]+)" >\}\}\)/, "g")
const predefinedHeadingIdRegex = new RegExp(/(.*) {#(?:.*)}/, "g")
const hugoShortcodeRegex = new RegExp(/{{(.*)}}/, "g")
// Matches the full Hugo {{< figure src="..." ... >}} shortcode and captures src.
// Must run before the generic shortcode stripper to avoid partial-match issues
// with captions that contain HTML (e.g. <span class="figure-number">).
const figureShortcodeRegex = new RegExp(/{{<\s*figure\b[^}]*\bsrc="([^"]*)"[^}]*>}}/, "g")
const figureTagRegex = new RegExp(/< ?figure src="(.*)" ?>/, "g")
// \\\\\( -> matches \\(
// (.+?) -> Lazy match for capturing the equation
// \\\\\) -> matches \\)
@@ -73,14 +70,6 @@ export const OxHugoFlavouredMarkdown: QuartzTransformerPlugin<Partial<Options>>
})
}
if (opts.replaceFigureWithMdImg) {
src = src.toString()
src = src.replaceAll(figureShortcodeRegex, (_value, ...capture) => {
const [imgSrc] = capture
return `![](${imgSrc})`
})
}
if (opts.removeHugoShortcode) {
src = src.toString()
src = src.replaceAll(hugoShortcodeRegex, (_value, ...capture) => {
@@ -89,6 +78,14 @@ export const OxHugoFlavouredMarkdown: QuartzTransformerPlugin<Partial<Options>>
})
}
if (opts.replaceFigureWithMdImg) {
src = src.toString()
src = src.replaceAll(figureTagRegex, (_value, ...capture) => {
const [src] = capture
return `![](${src})`
})
}
if (opts.replaceOrgLatex) {
src = src.toString()
src = src.replaceAll(inlineLatexRegex, (_value, ...capture) => {


@@ -104,12 +104,16 @@ export function createFileParser(ctx: BuildCtx, fps: FilePath[]) {
file.data.relativePath = path.posix.relative(argv.directory, file.path) as FilePath
file.data.slug = slugifyFilePath(file.data.relativePath)
const ast = processor.parse(file)
const isBaseFile = fp.endsWith(".base")
const ast: MDRoot = isBaseFile ? { type: "root", children: [] } : processor.parse(file)
const newAst = await processor.run(ast, file)
res.push([newAst, file])
if (argv.verbose) {
console.log(`[markdown] ${fp} -> ${file.data.slug} (${perf.timeSince()})`)
console.log(
`[${isBaseFile ? "base" : "markdown"}] ${fp} -> ${file.data.slug} (${perf.timeSince()})`,
)
}
} catch (err) {
trace(`\nFailed to process markdown \`${fp}\``, err as Error)


@@ -0,0 +1,92 @@
# bases compiler + runtime (quartz implementation)
status: active
last updated: 2026-01-28
this directory contains the obsidian bases compiler, interpreter, and runtime helpers used by quartz to render `.base` files. it is designed to match obsidian bases syntax and semantics with deterministic evaluation and consistent diagnostics.
you can test it out with any of the base files in my vault:
```bash
npx tsx quartz/util/base/inspect-base.ts docs/navigation.base > /tmp/ast-ir.json
jq '.expressions[] | {context, kind, source, ast}' /tmp/ast-ir.json
jq '.expressions[] | {context, kind, ir}' /tmp/ast-ir.json
```
## scope
- parse base expressions (filters, formulas, summaries, property expressions)
- compile expressions to bytecode ir
- interpret bytecode with a deterministic stack vm
- resolve file, note, formula, and property values
- render views (table, list, cards/gallery, board, calendar, map)
- surface parse and runtime diagnostics in base output
## architecture (pipeline)
1. parse `.base` yaml (plugin: `quartz/plugins/transformers/bases.ts`)
2. parse expressions into ast (`compiler/parser.ts`)
3. compile ast to ir (`compiler/ir.ts`)
4. evaluate ir per row with caches (`compiler/interpreter.ts`)
5. render views and diagnostics (`render.ts`)
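the compile-then-interpret shape above can be sketched with a toy instruction set. this is illustrative only: the names `Instr`, `compile`, and `run` are assumptions, and the real instruction set in `compiler/ir.ts` and vm in `compiler/interpreter.ts` cover far more ops.

```typescript
// toy ast and instruction set, illustrative only
type Expr =
  | { type: "Literal"; value: number }
  | { type: "Binary"; operator: "+" | "*"; left: Expr; right: Expr }

type Instr = { op: "const"; value: number } | { op: "binary"; operator: "+" | "*" }

// post-order compile: operands first, then the operator
function compile(expr: Expr, out: Instr[] = []): Instr[] {
  if (expr.type === "Literal") {
    out.push({ op: "const", value: expr.value })
  } else {
    compile(expr.left, out)
    compile(expr.right, out)
    out.push({ op: "binary", operator: expr.operator })
  }
  return out
}

// deterministic stack vm: each instruction pops and pushes predictably
function run(program: Instr[]): number {
  const stack: number[] = []
  for (const instr of program) {
    if (instr.op === "const") {
      stack.push(instr.value)
    } else {
      const right = stack.pop()!
      const left = stack.pop()!
      stack.push(instr.operator === "+" ? left + right : left * right)
    }
  }
  return stack.pop()!
}

// (1 + 2) * 3
const ast: Expr = {
  type: "Binary",
  operator: "*",
  left: {
    type: "Binary",
    operator: "+",
    left: { type: "Literal", value: 1 },
    right: { type: "Literal", value: 2 },
  },
  right: { type: "Literal", value: 3 },
}
const result = run(compile(ast))
```

compiling once and interpreting per row is what keeps evaluation linear over large vaults.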
## modules
- `compiler/lexer.ts`: tokenizer with span tracking and regex support
- `compiler/parser.ts`: pratt parser for expression grammar and error recovery
- `compiler/ir.ts`: bytecode instruction set + compiler
- `compiler/interpreter.ts`: stack vm, value model, coercions, methods, functions
- `compiler/diagnostics.ts`: diagnostics types and helpers
- `compiler/schema.ts`: summary config schema and builtins
- `compiler/properties.ts`: property expression builder for columns and config keys
- `render.ts`: view rendering and diagnostics output
- `query.ts`: summaries and view summary helpers
- `types.ts`: base config types and yaml parsing helpers
## value model (runtime)
runtime values are tagged unions with explicit kinds:
- null, boolean, number, string
- date, duration
- list, object
- file, link
- regex, html, icon, image
coercions are permissive to match obsidian behavior. comparisons prefer type-aware equality (links resolve to files when possible, dates compare by time, etc), with fallbacks when resolution fails.
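a minimal sketch of the tagged value idea, assuming only four kinds; the real `Value` type in `compiler/interpreter.ts` also covers lists, objects, files, links, regex, html, icons, and images.

```typescript
// illustrative tagged-union value model, not the real Value type
type Value =
  | { kind: "null" }
  | { kind: "number"; value: number }
  | { kind: "string"; value: string }
  | { kind: "date"; value: Date }

// permissive string coercion used as the comparison fallback
function coerceString(v: Value): string {
  switch (v.kind) {
    case "null":
      return ""
    case "number":
      return String(v.value)
    case "string":
      return v.value
    case "date":
      return v.value.toISOString()
  }
}

// type-aware equality: dates compare by time, same-kind values compare
// by payload, mixed kinds fall back to string coercion
function valuesEqual(a: Value, b: Value): boolean {
  if (a.kind === "date" && b.kind === "date") return a.value.getTime() === b.value.getTime()
  if (a.kind === "null" && b.kind === "null") return true
  if (a.kind === "number" && b.kind === "number") return a.value === b.value
  if (a.kind === "string" && b.kind === "string") return a.value === b.value
  return coerceString(a) === coerceString(b)
}

const sameDate = valuesEqual(
  { kind: "date", value: new Date("2025-01-01") },
  { kind: "date", value: new Date("2025-01-01") },
)
const coerced = valuesEqual({ kind: "number", value: 5 }, { kind: "string", value: "5" })
```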
## expression features (spec parity)
- operators: `==`, `!=`, `>`, `<`, `>=`, `<=`, `&&`, `||`, `!`, `+`, `-`, `*`, `/`, `%`
- member and index access
- function calls and method calls
- list literals and regex literals
- `this` binding with embed-aware scoping
- list helpers (`filter`, `map`, `reduce`) using implicit locals `value`, `index`, `acc`
- summary context helpers: `values` (column values) and `rows` (row files)
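the implicit-locals convention for list helpers can be sketched like this; the environment-as-map shape is an assumption, since the real interpreter threads `value`, `index`, and `acc` through its vm scope rather than a plain map.

```typescript
// illustrative: a sub-expression is modeled as a function of an env map
type Env = Map<string, unknown>
type SubProgram = (env: Env) => unknown

// filter binds the implicit locals `value` and `index` per element
function runFilter(list: unknown[], body: SubProgram): unknown[] {
  return list.filter((value, index) => {
    const env: Env = new Map<string, unknown>([
      ["value", value],
      ["index", index],
    ])
    return Boolean(body(env))
  })
}

// reduce additionally binds `acc`, the running accumulator
function runReduce(list: unknown[], body: SubProgram, initial: unknown): unknown {
  let acc = initial
  list.forEach((value, index) => {
    const env: Env = new Map<string, unknown>([
      ["value", value],
      ["index", index],
      ["acc", acc],
    ])
    acc = body(env)
  })
  return acc
}

// filter(value > 1) over [1, 2, 3]
const filtered = runFilter([1, 2, 3], (env) => (env.get("value") as number) > 1)
// reduce(acc + value, 0) over [1, 2, 3]
const total = runReduce(
  [1, 2, 3],
  (env) => (env.get("acc") as number) + (env.get("value") as number),
  0,
)
```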
## diagnostics
- parser diagnostics are collected with spans at compile time
- runtime diagnostics are collected during evaluation and deduped per context
- base views render diagnostics above the view output
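per-context dedup can be sketched as keying on context plus message; the key shape here is an assumption, not the actual implementation.

```typescript
// illustrative diagnostic shape, narrower than BaseExpressionDiagnostic
type Diag = { context: string; message: string }

// keep the first occurrence of each (context, message) pair
function dedupeDiagnostics(diags: Diag[]): Diag[] {
  const seen = new Set<string>()
  const out: Diag[] = []
  for (const d of diags) {
    const key = `${d.context}::${d.message}`
    if (seen.has(key)) continue
    seen.add(key)
    out.push(d)
  }
  return out
}

const deduped = dedupeDiagnostics([
  { context: "filters", message: "x is not a number" },
  { context: "filters", message: "x is not a number" },
  { context: "views[0].filters", message: "x is not a number" },
])
```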
## this scoping
- main base file: `this` resolves to the base file
- embedded base: `this` resolves to the embedding file
- row evaluation: `file` resolves to the row file
## performance decisions
- bytecode ir keeps evaluation linear and stable
- per-build backlink index avoids n^2 scans
- property cache memoizes property expressions per file
- formula cache memoizes formula evaluation per file
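the per-file memoization idea can be sketched as a keyed cache; the key shape and names are illustrative, not the real cache api.

```typescript
// a cache key might combine file path and expression name
type CacheKey = string // e.g. `${filePath}::${expressionName}`

// wrap an evaluator so repeated lookups for the same key are free
function makeEvaluationCache<T>(evaluate: (key: CacheKey) => T) {
  const cache = new Map<CacheKey, T>()
  let misses = 0
  return {
    get(key: CacheKey): T {
      const hit = cache.get(key)
      if (hit !== undefined) return hit
      misses += 1
      const value = evaluate(key)
      cache.set(key, value)
      return value
    },
    stats: () => ({ misses }),
  }
}

// stand-in evaluator; a real one would run a formula's ProgramIR
const cache = makeEvaluationCache((key) => key.length)
cache.get("a.md::formula.total")
cache.get("a.md::formula.total") // second call hits the cache
const misses = cache.stats().misses
```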
## view rendering
- table, list, cards/gallery, board, calendar, map
- map rendering expects coordinates `[lat, lon]` and map config fields
- view filters combine with base filters via logical and
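combining base and view filters via logical and can be sketched as predicate composition; the signatures here are assumptions, since the real code evaluates compiled filter ir rather than plain predicates.

```typescript
// illustrative: a filter as a boolean predicate over a row
type RowPredicate = (row: Record<string, unknown>) => boolean

// a missing filter is treated as always-true, matching "combine via and"
function combineFilters(base: RowPredicate | null, view: RowPredicate | null): RowPredicate {
  return (row) => (base ? base(row) : true) && (view ? view(row) : true)
}

const baseFilter: RowPredicate = (row) => row.status === "active"
const viewFilter: RowPredicate = (row) => typeof row.due === "string"
const keep = combineFilters(baseFilter, viewFilter)({ status: "active", due: "2026-01-01" })
const drop = combineFilters(baseFilter, viewFilter)({ status: "done", due: "2026-01-01" })
```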


@@ -0,0 +1,76 @@
export type Position = { offset: number; line: number; column: number }
export type Span = { start: Position; end: Position; file?: string }
export type Program = { type: "Program"; body: Expr | null; span: Span }
export type Expr =
| Literal
| Identifier
| UnaryExpr
| BinaryExpr
| LogicalExpr
| CallExpr
| MemberExpr
| IndexExpr
| ListExpr
| ErrorExpr
export type LiteralKind = "number" | "string" | "boolean" | "null" | "date" | "duration" | "regex"
export type NumberLiteral = { type: "Literal"; kind: "number"; value: number; span: Span }
export type StringLiteral = { type: "Literal"; kind: "string"; value: string; span: Span }
export type BooleanLiteral = { type: "Literal"; kind: "boolean"; value: boolean; span: Span }
export type NullLiteral = { type: "Literal"; kind: "null"; value: null; span: Span }
export type DateLiteral = { type: "Literal"; kind: "date"; value: string; span: Span }
export type DurationLiteral = { type: "Literal"; kind: "duration"; value: string; span: Span }
export type RegexLiteral = {
type: "Literal"
kind: "regex"
value: string
flags: string
span: Span
}
export type Literal =
| NumberLiteral
| StringLiteral
| BooleanLiteral
| NullLiteral
| DateLiteral
| DurationLiteral
| RegexLiteral
export type Identifier = { type: "Identifier"; name: string; span: Span }
export type UnaryExpr = { type: "UnaryExpr"; operator: "!" | "-"; argument: Expr; span: Span }
export type BinaryExpr = {
type: "BinaryExpr"
operator: "+" | "-" | "*" | "/" | "%" | "==" | "!=" | ">" | ">=" | "<" | "<="
left: Expr
right: Expr
span: Span
}
export type LogicalExpr = {
type: "LogicalExpr"
operator: "&&" | "||"
left: Expr
right: Expr
span: Span
}
export type CallExpr = { type: "CallExpr"; callee: Expr; args: Expr[]; span: Span }
export type MemberExpr = { type: "MemberExpr"; object: Expr; property: string; span: Span }
export type IndexExpr = { type: "IndexExpr"; object: Expr; index: Expr; span: Span }
export type ListExpr = { type: "ListExpr"; elements: Expr[]; span: Span }
export type ErrorExpr = { type: "ErrorExpr"; message: string; span: Span }
export function spanFrom(start: Span, end: Span): Span {
return { start: start.start, end: end.end, file: start.file || end.file }
}


@@ -0,0 +1,9 @@
import { Span } from "./ast"
export type BaseExpressionDiagnostic = {
kind: "lex" | "parse" | "runtime"
message: string
span: Span
context: string
source: string
}


@@ -0,0 +1,3 @@
import { Span } from "./ast"
export type Diagnostic = { kind: "lex" | "parse"; message: string; span: Span }


@@ -0,0 +1,10 @@
import { ProgramIR } from "./ir"
export type BasesExpressions = {
filters?: ProgramIR
viewFilters: Record<string, ProgramIR>
formulas: Record<string, ProgramIR>
summaries: Record<string, ProgramIR>
viewSummaries: Record<string, Record<string, ProgramIR>>
propertyExpressions: Record<string, ProgramIR>
}


@@ -0,0 +1,44 @@
export { lex } from "./lexer"
export { parseExpressionSource } from "./parser"
export type { ParseResult } from "./parser"
export type { Diagnostic } from "./errors"
export type { Program, Expr, Span, Position } from "./ast"
export type { BaseExpressionDiagnostic } from "./diagnostics"
export type { BasesExpressions } from "./expressions"
export type { Instruction, ProgramIR } from "./ir"
export { compileExpression } from "./ir"
export { buildPropertyExpressionSource } from "./properties"
export type {
SummaryDefinition,
ViewSummaryConfig,
PropertyConfig,
BuiltinSummaryType,
} from "./schema"
export { BUILTIN_SUMMARY_TYPES } from "./schema"
export {
evaluateExpression,
evaluateFilterExpression,
evaluateSummaryExpression,
valueToUnknown,
} from "./interpreter"
export type {
EvalContext,
Value,
NullValue,
BooleanValue,
NumberValue,
StringValue,
DateValue,
DurationValue,
ListValue,
ObjectValue,
FileValue,
LinkValue,
RegexValue,
HtmlValue,
IconValue,
ImageValue,
ValueKind,
ValueOf,
} from "./interpreter"
export { isValueKind } from "./interpreter"


@@ -0,0 +1,73 @@
import assert from "node:assert"
import test from "node:test"
import { FilePath, FullSlug, SimpleSlug } from "../../path"
type ContentLayout = "default" | "article" | "page"
import { evaluateExpression, valueToUnknown, EvalContext } from "./interpreter"
import { compileExpression } from "./ir"
import { parseExpressionSource } from "./parser"
const parseExpr = (source: string) => {
const result = parseExpressionSource(source, "test")
if (!result.program.body) {
throw new Error(`expected expression for ${source}`)
}
return compileExpression(result.program.body)
}
const makeCtx = (): EvalContext => {
const fileA = {
slug: "a" as FullSlug,
filePath: "a.md" as FilePath,
frontmatter: { title: "A", pageLayout: "default" as ContentLayout },
links: [] as SimpleSlug[],
}
const fileB = {
slug: "b" as FullSlug,
filePath: "b.md" as FilePath,
frontmatter: { title: "B", pageLayout: "default" as ContentLayout },
links: ["a"] as SimpleSlug[],
}
return { file: fileA, allFiles: [fileA, fileB] }
}
test("link equality resolves to file targets", () => {
const expr = parseExpr('link("a") == file("a")')
const value = valueToUnknown(evaluateExpression(expr, makeCtx()))
assert.strictEqual(value, true)
})
test("link equality matches raw string targets", () => {
const expr = parseExpr('link("a") == "a"')
const value = valueToUnknown(evaluateExpression(expr, makeCtx()))
assert.strictEqual(value, true)
})
test("date arithmetic handles month additions", () => {
const expr = parseExpr('date("2025-01-01") + "1M"')
const value = valueToUnknown(evaluateExpression(expr, makeCtx()))
assert.ok(value instanceof Date)
assert.strictEqual(value.toISOString().split("T")[0], "2025-02-01")
})
test("date subtraction returns duration in ms", () => {
const expr = parseExpr('date("2025-01-02") - date("2025-01-01")')
const value = valueToUnknown(evaluateExpression(expr, makeCtx()))
assert.strictEqual(value, 86400000)
})
test("list summary helpers compute statistics", () => {
const meanExpr = parseExpr("([1, 2, 3]).mean()")
const medianExpr = parseExpr("([1, 2, 3]).median()")
const stddevExpr = parseExpr("([1, 2, 3]).stddev()")
const sumExpr = parseExpr("([1, 2, 3]).sum()")
const ctx = makeCtx()
assert.strictEqual(valueToUnknown(evaluateExpression(meanExpr, ctx)), 2)
assert.strictEqual(valueToUnknown(evaluateExpression(medianExpr, ctx)), 2)
assert.strictEqual(valueToUnknown(evaluateExpression(sumExpr, ctx)), 6)
const stddev = valueToUnknown(evaluateExpression(stddevExpr, ctx))
assert.strictEqual(typeof stddev, "number")
if (typeof stddev === "number") {
assert.ok(Math.abs(stddev - Math.sqrt(2 / 3)) < 1e-6)
}
})

File diff suppressed because it is too large


@@ -0,0 +1,164 @@
import { BinaryExpr, Expr, Literal, Span, UnaryExpr } from "./ast"
export type JumpInstruction = {
op: "jump" | "jump_if_false" | "jump_if_true"
target: number
span: Span
}
export type Instruction =
| { op: "const"; literal: Literal; span: Span }
| { op: "ident"; name: string; span: Span }
| { op: "load_formula"; name: string; span: Span }
| { op: "load_formula_index"; span: Span }
| { op: "member"; property: string; span: Span }
| { op: "index"; span: Span }
| { op: "list"; count: number; span: Span }
| { op: "unary"; operator: UnaryExpr["operator"]; span: Span }
| { op: "binary"; operator: BinaryExpr["operator"]; span: Span }
| { op: "to_bool"; span: Span }
| { op: "call_global"; name: string; argc: number; span: Span }
| { op: "call_method"; name: string; argc: number; span: Span }
| { op: "call_dynamic"; span: Span }
| { op: "filter"; program: ProgramIR | null; span: Span }
| { op: "map"; program: ProgramIR | null; span: Span }
| { op: "reduce"; program: ProgramIR | null; initial: ProgramIR | null; span: Span }
| JumpInstruction
export type ProgramIR = { instructions: Instruction[]; span: Span }
const compileExpr = (expr: Expr, out: Instruction[]) => {
switch (expr.type) {
case "Literal":
out.push({ op: "const", literal: expr, span: expr.span })
return
case "Identifier":
out.push({ op: "ident", name: expr.name, span: expr.span })
return
case "UnaryExpr":
compileExpr(expr.argument, out)
out.push({ op: "unary", operator: expr.operator, span: expr.span })
return
case "BinaryExpr":
compileExpr(expr.left, out)
compileExpr(expr.right, out)
out.push({ op: "binary", operator: expr.operator, span: expr.span })
return
case "LogicalExpr": {
if (expr.operator === "&&") {
compileExpr(expr.left, out)
const jumpFalse: JumpInstruction = { op: "jump_if_false", target: -1, span: expr.span }
out.push(jumpFalse)
compileExpr(expr.right, out)
out.push({ op: "to_bool", span: expr.span })
const jumpEnd: JumpInstruction = { op: "jump", target: -1, span: expr.span }
out.push(jumpEnd)
const falseTarget = out.length
jumpFalse.target = falseTarget
out.push({
op: "const",
literal: { type: "Literal", kind: "boolean", value: false, span: expr.span },
span: expr.span,
})
jumpEnd.target = out.length
return
}
compileExpr(expr.left, out)
const jumpTrue: JumpInstruction = { op: "jump_if_true", target: -1, span: expr.span }
out.push(jumpTrue)
compileExpr(expr.right, out)
out.push({ op: "to_bool", span: expr.span })
const jumpEnd: JumpInstruction = { op: "jump", target: -1, span: expr.span }
out.push(jumpEnd)
const trueTarget = out.length
jumpTrue.target = trueTarget
out.push({
op: "const",
literal: { type: "Literal", kind: "boolean", value: true, span: expr.span },
span: expr.span,
})
jumpEnd.target = out.length
return
}
case "MemberExpr":
if (expr.object.type === "Identifier" && expr.object.name === "formula") {
out.push({ op: "load_formula", name: expr.property, span: expr.span })
return
}
compileExpr(expr.object, out)
out.push({ op: "member", property: expr.property, span: expr.span })
return
case "IndexExpr":
if (expr.object.type === "Identifier" && expr.object.name === "formula") {
compileExpr(expr.index, out)
out.push({ op: "load_formula_index", span: expr.span })
return
}
compileExpr(expr.object, out)
compileExpr(expr.index, out)
out.push({ op: "index", span: expr.span })
return
case "ListExpr":
for (const element of expr.elements) {
compileExpr(element, out)
}
out.push({ op: "list", count: expr.elements.length, span: expr.span })
return
case "CallExpr": {
if (expr.callee.type === "Identifier") {
for (const arg of expr.args) {
compileExpr(arg, out)
}
out.push({
op: "call_global",
name: expr.callee.name,
argc: expr.args.length,
span: expr.span,
})
return
}
if (expr.callee.type === "MemberExpr") {
const method = expr.callee.property
if (method === "filter" || method === "map" || method === "reduce") {
compileExpr(expr.callee.object, out)
const exprArg = expr.args[0]
const program = exprArg ? compileExpression(exprArg) : null
if (method === "filter") {
out.push({ op: "filter", program, span: expr.span })
return
}
if (method === "map") {
out.push({ op: "map", program, span: expr.span })
return
}
const initialArg = expr.args[1]
const initial = initialArg ? compileExpression(initialArg) : null
out.push({ op: "reduce", program, initial, span: expr.span })
return
}
compileExpr(expr.callee.object, out)
for (const arg of expr.args) {
compileExpr(arg, out)
}
out.push({ op: "call_method", name: method, argc: expr.args.length, span: expr.span })
return
}
compileExpr(expr.callee, out)
out.push({ op: "call_dynamic", span: expr.span })
return
}
case "ErrorExpr":
out.push({
op: "const",
literal: { type: "Literal", kind: "null", value: null, span: expr.span },
span: expr.span,
})
return
}
}
export const compileExpression = (expr: Expr): ProgramIR => {
const instructions: Instruction[] = []
compileExpr(expr, instructions)
return { instructions, span: expr.span }
}


@@ -0,0 +1,53 @@
import assert from "node:assert"
import test from "node:test"
import { lex } from "./lexer"
test("lexes bracket access with hyphenated keys", () => {
const result = lex('note["my-field"]')
const types = result.tokens.map((token) => token.type)
assert.deepStrictEqual(types, ["identifier", "punctuation", "string", "punctuation", "eof"])
const value = result.tokens[2]
if (value.type !== "string") {
throw new Error("expected string token")
}
assert.strictEqual(value.value, "my-field")
})
test("lexes bracket access with escaped quotes", () => {
const result = lex('note["my\\\"field"]')
const value = result.tokens.find((token) => token.type === "string")
if (!value || value.type !== "string") {
throw new Error("expected string token")
}
assert.strictEqual(value.value, 'my"field')
})
test("lexes regex literals with flags", () => {
const result = lex('name.replace(/:/g, "-")')
const regexToken = result.tokens.find((token) => token.type === "regex")
if (!regexToken || regexToken.type !== "regex") {
throw new Error("expected regex token")
}
assert.strictEqual(regexToken.pattern, ":")
assert.strictEqual(regexToken.flags, "g")
})
test("lexes regex literals with escaped slashes", () => {
const result = lex("path.matches(/\\//)")
const regexToken = result.tokens.find((token) => token.type === "regex")
if (!regexToken || regexToken.type !== "regex") {
throw new Error("expected regex token")
}
assert.strictEqual(regexToken.pattern, "\\/")
assert.strictEqual(regexToken.flags, "")
})
test("lexes division as operator, not regex", () => {
const result = lex("a / b")
const operatorToken = result.tokens.find(
(token) => token.type === "operator" && token.value === "/",
)
assert.ok(operatorToken)
const regexToken = result.tokens.find((token) => token.type === "regex")
assert.strictEqual(regexToken, undefined)
})


@@ -0,0 +1,300 @@
import { Position, Span } from "./ast"
import { Diagnostic } from "./errors"
import {
Operator,
Punctuation,
Token,
StringToken,
RegexToken,
NumberToken,
BooleanToken,
NullToken,
ThisToken,
IdentifierToken,
OperatorToken,
PunctuationToken,
EofToken,
} from "./tokens"
type LexResult = { tokens: Token[]; diagnostics: Diagnostic[] }
const operatorTokens: Operator[] = [
"==",
"!=",
">=",
"<=",
"&&",
"||",
"+",
"-",
"*",
"/",
"%",
"!",
">",
"<",
]
const punctuationTokens: Punctuation[] = [".", ",", "(", ")", "[", "]"]
const isOperator = (value: string): value is Operator =>
operatorTokens.some((token) => token === value)
const isPunctuation = (value: string): value is Punctuation =>
punctuationTokens.some((token) => token === value)
export function lex(input: string, file?: string): LexResult {
const tokens: Token[] = []
const diagnostics: Diagnostic[] = []
let index = 0
let line = 1
let column = 1
let canStartRegex = true
const makePosition = (offset: number, lineValue: number, columnValue: number): Position => ({
offset,
line: lineValue,
column: columnValue,
})
const currentPosition = (): Position => makePosition(index, line, column)
const makeSpan = (start: Position, end: Position): Span => ({ start, end, file })
const advance = (): string => {
const ch = input[index]
index += 1
if (ch === "\n") {
line += 1
column = 1
} else {
column += 1
}
return ch
}
const peek = (offset = 0): string => input[index + offset] ?? ""
const addDiagnostic = (message: string, span: Span) => {
diagnostics.push({ kind: "lex", message, span })
}
const updateRegexState = (token: Token | null) => {
if (!token) {
canStartRegex = true
return
}
if (token.type === "operator") {
canStartRegex = true
return
}
if (token.type === "punctuation") {
canStartRegex = token.value === "(" || token.value === "[" || token.value === ","
return
}
canStartRegex = false
}
const isWhitespace = (ch: string) => ch === " " || ch === "\t" || ch === "\n" || ch === "\r"
const isDigit = (ch: string) => ch >= "0" && ch <= "9"
const isIdentStart = (ch: string) =>
(ch >= "a" && ch <= "z") || (ch >= "A" && ch <= "Z") || ch === "_"
const isIdentContinue = (ch: string) => isIdentStart(ch) || isDigit(ch)
while (index < input.length) {
const ch = peek()
if (isWhitespace(ch)) {
advance()
continue
}
const start = currentPosition()
if (ch === "=" && peek(1) !== "=") {
let offset = 1
while (isWhitespace(peek(offset))) {
offset += 1
}
if (peek(offset) === ">") {
advance()
for (let step = 1; step < offset; step += 1) {
advance()
}
if (peek() === ">") {
advance()
}
const end = currentPosition()
addDiagnostic(
"arrow functions are not supported, use list.filter(expression)",
makeSpan(start, end),
)
continue
}
}
if (ch === '"' || ch === "'") {
const quote = advance()
let value = ""
let closed = false
while (index < input.length) {
const curr = advance()
if (curr === quote) {
closed = true
break
}
if (curr === "\\") {
const next = advance()
if (next === "n") value += "\n"
else if (next === "t") value += "\t"
else if (next === "r") value += "\r"
else if (next === "\\" || next === "'" || next === '"') value += next
else value += next
} else {
value += curr
}
}
const end = currentPosition()
const span = makeSpan(start, end)
if (!closed) addDiagnostic("unterminated string literal", span)
const token: StringToken = { type: "string", value, span }
tokens.push(token)
updateRegexState(token)
continue
}
if (ch === "/" && canStartRegex) {
const next = peek(1)
if (next !== "/" && next !== "") {
advance()
let pattern = ""
let closed = false
let inClass = false
while (index < input.length) {
const curr = advance()
if (curr === "\\" && index < input.length) {
const escaped = advance()
pattern += `\\${escaped}`
continue
}
if (curr === "[" && !inClass) inClass = true
if (curr === "]" && inClass) inClass = false
if (curr === "/" && !inClass) {
closed = true
break
}
pattern += curr
}
let flags = ""
while (index < input.length) {
const flag = peek()
if (!/^[gimsuy]$/.test(flag)) break
flags += advance()
}
const end = currentPosition()
const span = makeSpan(start, end)
if (!closed) addDiagnostic("unterminated regex literal", span)
const token: RegexToken = { type: "regex", pattern, flags, span }
tokens.push(token)
updateRegexState(token)
continue
}
}
if (isDigit(ch)) {
let num = ""
while (index < input.length && isDigit(peek())) {
num += advance()
}
if (peek() === "." && isDigit(peek(1))) {
num += advance()
while (index < input.length && isDigit(peek())) {
num += advance()
}
}
const end = currentPosition()
const span = makeSpan(start, end)
const token: NumberToken = { type: "number", value: Number(num), span }
tokens.push(token)
updateRegexState(token)
continue
}
if (isIdentStart(ch)) {
let ident = ""
while (index < input.length && isIdentContinue(peek())) {
ident += advance()
}
const end = currentPosition()
const span = makeSpan(start, end)
if (ident === "true" || ident === "false") {
const token: BooleanToken = { type: "boolean", value: ident === "true", span }
tokens.push(token)
updateRegexState(token)
continue
}
if (ident === "null") {
const token: NullToken = { type: "null", span }
tokens.push(token)
updateRegexState(token)
continue
}
if (ident === "this") {
const token: ThisToken = { type: "this", span }
tokens.push(token)
updateRegexState(token)
continue
}
const token: IdentifierToken = { type: "identifier", value: ident, span }
tokens.push(token)
updateRegexState(token)
continue
}
const twoChar = ch + peek(1)
if (isOperator(twoChar)) {
advance()
advance()
const end = currentPosition()
const span = makeSpan(start, end)
const token: OperatorToken = { type: "operator", value: twoChar, span }
tokens.push(token)
updateRegexState(token)
continue
}
if (isOperator(ch)) {
advance()
const end = currentPosition()
const span = makeSpan(start, end)
const token: OperatorToken = { type: "operator", value: ch, span }
tokens.push(token)
updateRegexState(token)
continue
}
if (isPunctuation(ch)) {
advance()
const end = currentPosition()
const span = makeSpan(start, end)
const token: PunctuationToken = { type: "punctuation", value: ch, span }
tokens.push(token)
updateRegexState(token)
continue
}
advance()
const end = currentPosition()
addDiagnostic(`unexpected character: ${ch}`, makeSpan(start, end))
}
const eofPos = currentPosition()
const eofSpan = makeSpan(eofPos, eofPos)
const eofToken: EofToken = { type: "eof", span: eofSpan }
tokens.push(eofToken)
updateRegexState(eofToken)
return { tokens, diagnostics }
}


@@ -0,0 +1,261 @@
import assert from "node:assert"
import test from "node:test"
import { parseExpressionSource } from "./parser"
const isRecord = (value: unknown): value is Record<string, unknown> =>
typeof value === "object" && value !== null
const strip = (node: unknown): unknown => {
if (!isRecord(node)) return node
const type = node.type
if (type === "Identifier") {
return { type, name: node.name }
}
if (type === "Literal") {
const kind = node.kind
const value = node.value
const flags = node.flags
return flags !== undefined ? { type, kind, value, flags } : { type, kind, value }
}
if (type === "UnaryExpr") {
return { type, operator: node.operator, argument: strip(node.argument) }
}
if (type === "BinaryExpr" || type === "LogicalExpr") {
return { type, operator: node.operator, left: strip(node.left), right: strip(node.right) }
}
if (type === "CallExpr") {
const args = Array.isArray(node.args) ? node.args.map(strip) : []
return { type, callee: strip(node.callee), args }
}
if (type === "MemberExpr") {
return { type, object: strip(node.object), property: node.property }
}
if (type === "IndexExpr") {
return { type, object: strip(node.object), index: strip(node.index) }
}
if (type === "ListExpr") {
const elements = Array.isArray(node.elements) ? node.elements.map(strip) : []
return { type, elements }
}
if (type === "ErrorExpr") {
return { type, message: node.message }
}
return node
}
test("ebnf to ast mapping snapshots", () => {
const cases: Array<{ source: string; expected: unknown }> = [
{
source: 'status == "done"',
expected: {
type: "BinaryExpr",
operator: "==",
left: { type: "Identifier", name: "status" },
right: { type: "Literal", kind: "string", value: "done" },
},
},
{
source: "!done",
expected: {
type: "UnaryExpr",
operator: "!",
argument: { type: "Identifier", name: "done" },
},
},
{
source: "file.ctime",
expected: {
type: "MemberExpr",
object: { type: "Identifier", name: "file" },
property: "ctime",
},
},
{
source: 'note["my-field"]',
expected: {
type: "IndexExpr",
object: { type: "Identifier", name: "note" },
index: { type: "Literal", kind: "string", value: "my-field" },
},
},
{
source: "date(due) < today()",
expected: {
type: "BinaryExpr",
operator: "<",
left: {
type: "CallExpr",
callee: { type: "Identifier", name: "date" },
args: [{ type: "Identifier", name: "due" }],
},
right: { type: "CallExpr", callee: { type: "Identifier", name: "today" }, args: [] },
},
},
{
source: "now() - file.ctime",
expected: {
type: "BinaryExpr",
operator: "-",
left: { type: "CallExpr", callee: { type: "Identifier", name: "now" }, args: [] },
right: {
type: "MemberExpr",
object: { type: "Identifier", name: "file" },
property: "ctime",
},
},
},
{
source: "(pages * 2).round(0)",
expected: {
type: "CallExpr",
callee: {
type: "MemberExpr",
object: {
type: "BinaryExpr",
operator: "*",
left: { type: "Identifier", name: "pages" },
right: { type: "Literal", kind: "number", value: 2 },
},
property: "round",
},
args: [{ type: "Literal", kind: "number", value: 0 }],
},
},
{
source: 'tags.containsAny("a","b")',
expected: {
type: "CallExpr",
callee: {
type: "MemberExpr",
object: { type: "Identifier", name: "tags" },
property: "containsAny",
},
args: [
{ type: "Literal", kind: "string", value: "a" },
{ type: "Literal", kind: "string", value: "b" },
],
},
},
{
source: "list(links).filter(value.isTruthy())",
expected: {
type: "CallExpr",
callee: {
type: "MemberExpr",
object: {
type: "CallExpr",
callee: { type: "Identifier", name: "list" },
args: [{ type: "Identifier", name: "links" }],
},
property: "filter",
},
args: [
{
type: "CallExpr",
callee: {
type: "MemberExpr",
object: { type: "Identifier", name: "value" },
property: "isTruthy",
},
args: [],
},
],
},
},
{
source: '["a", "b", "c"].length',
expected: {
type: "MemberExpr",
object: {
type: "ListExpr",
elements: [
{ type: "Literal", kind: "string", value: "a" },
{ type: "Literal", kind: "string", value: "b" },
{ type: "Literal", kind: "string", value: "c" },
],
},
property: "length",
},
},
{
source: "this.file.name",
expected: {
type: "MemberExpr",
object: {
type: "MemberExpr",
object: { type: "Identifier", name: "this" },
property: "file",
},
property: "name",
},
},
{
source: "a || b && c",
expected: {
type: "LogicalExpr",
operator: "||",
left: { type: "Identifier", name: "a" },
right: {
type: "LogicalExpr",
operator: "&&",
left: { type: "Identifier", name: "b" },
right: { type: "Identifier", name: "c" },
},
},
},
{
source: "values[0]",
expected: {
type: "IndexExpr",
object: { type: "Identifier", name: "values" },
index: { type: "Literal", kind: "number", value: 0 },
},
},
]
for (const entry of cases) {
const result = parseExpressionSource(entry.source)
assert.strictEqual(result.diagnostics.length, 0)
assert.deepStrictEqual(strip(result.program.body), entry.expected)
}
})
test("syntax doc samples parse", () => {
const samples = [
'note["price"]',
"file.size > 10",
"file.hasLink(this.file)",
'date("2024-12-01") + "1M" + "4h" + "3m"',
"now() - file.ctime",
"property[0]",
'link("filename", icon("plus"))',
'file.mtime > now() - "1 week"',
'/abc/.matches("abcde")',
'name.replace(/:/g, "-")',
'values.filter(value.isType("number")).reduce(if(acc == null || value > acc, value, acc), null)',
]
for (const source of samples) {
const result = parseExpressionSource(source)
assert.strictEqual(result.diagnostics.length, 0)
assert.ok(result.program.body)
}
})
test("string escapes are decoded", () => {
const result = parseExpressionSource('"a\\n\\"b"')
assert.strictEqual(result.diagnostics.length, 0)
const literal = strip(result.program.body)
if (!isRecord(literal)) {
throw new Error("expected literal record")
}
assert.strictEqual(literal.type, "Literal")
assert.strictEqual(literal.kind, "string")
assert.strictEqual(literal.value, 'a\n"b')
})
test("parser reports errors and recovers", () => {
const result = parseExpressionSource("status ==")
assert.ok(result.diagnostics.length > 0)
assert.ok(result.program.body)
})


@@ -0,0 +1,370 @@
import {
BinaryExpr,
CallExpr,
ErrorExpr,
Expr,
Identifier,
IndexExpr,
ListExpr,
Literal,
LogicalExpr,
MemberExpr,
Program,
UnaryExpr,
spanFrom,
} from "./ast"
import { Diagnostic } from "./errors"
import { lex } from "./lexer"
import { Operator, Token } from "./tokens"
export type ParseResult = { program: Program; tokens: Token[]; diagnostics: Diagnostic[] }
type InfixInfo = { lbp: number; rbp: number; kind: "binary" | "logical" }
const infixBindingPowers: Record<string, InfixInfo> = {
"||": { lbp: 1, rbp: 2, kind: "logical" },
"&&": { lbp: 3, rbp: 4, kind: "logical" },
"==": { lbp: 5, rbp: 6, kind: "binary" },
"!=": { lbp: 5, rbp: 6, kind: "binary" },
">": { lbp: 7, rbp: 8, kind: "binary" },
">=": { lbp: 7, rbp: 8, kind: "binary" },
"<": { lbp: 7, rbp: 8, kind: "binary" },
"<=": { lbp: 7, rbp: 8, kind: "binary" },
"+": { lbp: 9, rbp: 10, kind: "binary" },
"-": { lbp: 9, rbp: 10, kind: "binary" },
"*": { lbp: 11, rbp: 12, kind: "binary" },
"/": { lbp: 11, rbp: 12, kind: "binary" },
"%": { lbp: 11, rbp: 12, kind: "binary" },
}
const isLogicalOperator = (value: Operator): value is LogicalExpr["operator"] =>
value === "&&" || value === "||"
const isBinaryOperator = (value: Operator): value is BinaryExpr["operator"] =>
value === "+" ||
value === "-" ||
value === "*" ||
value === "/" ||
value === "%" ||
value === "==" ||
value === "!=" ||
value === ">" ||
value === ">=" ||
value === "<" ||
value === "<="
export function parseExpressionSource(source: string, file?: string): ParseResult {
const { tokens, diagnostics } = lex(source, file)
const parser = new Parser(tokens, diagnostics)
const program = parser.parseProgram()
return { program, tokens, diagnostics }
}
class Parser {
private tokens: Token[]
private index: number
private diagnostics: Diagnostic[]
constructor(tokens: Token[], diagnostics: Diagnostic[]) {
this.tokens = tokens
this.index = 0
this.diagnostics = diagnostics
}
parseProgram(): Program {
const start = this.tokens[0]?.span ?? this.tokens[this.tokens.length - 1].span
const body = this.peek().type === "eof" ? null : this.parseExpression(0)
const end = this.tokens[this.tokens.length - 1]?.span ?? start
return { type: "Program", body, span: spanFrom(start, end) }
}
private parseExpression(minBp: number): Expr {
let left = this.parsePrefix()
left = this.parsePostfix(left)
while (true) {
const token = this.peek()
if (token.type !== "operator") break
const info = infixBindingPowers[token.value]
if (!info || info.lbp < minBp) break
this.advance()
const right = this.parseExpression(info.rbp)
const span = spanFrom(left.span, right.span)
if (info.kind === "logical" && isLogicalOperator(token.value)) {
left = { type: "LogicalExpr", operator: token.value, left, right, span }
} else if (info.kind === "binary" && isBinaryOperator(token.value)) {
left = { type: "BinaryExpr", operator: token.value, left, right, span }
} else {
this.error("unexpected operator", token.span)
}
}
return left
}
private parsePrefix(): Expr {
const token = this.peek()
if (token.type === "operator" && (token.value === "!" || token.value === "-")) {
this.advance()
const argument = this.parseExpression(13)
const span = spanFrom(token.span, argument.span)
const node: UnaryExpr = { type: "UnaryExpr", operator: token.value, argument, span }
return node
}
return this.parsePrimary()
}
private parsePostfix(expr: Expr): Expr {
let current = expr
while (true) {
const token = this.peek()
if (token.type === "punctuation" && token.value === ".") {
this.advance()
const propToken = this.peek()
if (propToken.type !== "identifier") {
this.error("expected identifier after '.'", propToken.span)
return current
}
this.advance()
const span = spanFrom(current.span, propToken.span)
const node: MemberExpr = {
type: "MemberExpr",
object: current,
property: propToken.value,
span,
}
current = node
continue
}
if (token.type === "punctuation" && token.value === "[") {
this.advance()
const indexExpr = this.parseExpression(0)
const endToken = this.peek()
if (!(endToken.type === "punctuation" && endToken.value === "]")) {
this.error("expected ']'", endToken.span)
this.syncTo("]")
} else {
this.advance()
}
const span = spanFrom(current.span, endToken.span)
const node: IndexExpr = { type: "IndexExpr", object: current, index: indexExpr, span }
current = node
continue
}
if (token.type === "punctuation" && token.value === "(") {
this.advance()
const args: Expr[] = []
while (this.peek().type !== "eof") {
const next = this.peek()
if (next.type === "punctuation" && next.value === ")") {
this.advance()
break
}
const arg = this.parseExpression(0)
args.push(arg)
const sep = this.peek()
if (sep.type === "punctuation" && sep.value === ",") {
this.advance()
const maybeClose = this.peek()
if (maybeClose.type === "punctuation" && maybeClose.value === ")") {
this.advance()
break
}
continue
}
if (sep.type === "punctuation" && sep.value === ")") {
this.advance()
break
}
this.error("expected ',' or ')'", sep.span)
this.syncTo(")")
const maybeClose = this.peek()
if (maybeClose.type === "punctuation" && maybeClose.value === ")") {
this.advance()
}
break
}
const endToken = this.previous()
const span = spanFrom(current.span, endToken.span)
const node: CallExpr = { type: "CallExpr", callee: current, args, span }
current = node
continue
}
break
}
return current
}
private parsePrimary(): Expr {
const token = this.peek()
if (token.type === "number") {
this.advance()
const node: Literal = {
type: "Literal",
kind: "number",
value: token.value,
span: token.span,
}
return node
}
if (token.type === "string") {
this.advance()
const node: Literal = {
type: "Literal",
kind: "string",
value: token.value,
span: token.span,
}
return node
}
if (token.type === "boolean") {
this.advance()
const node: Literal = {
type: "Literal",
kind: "boolean",
value: token.value,
span: token.span,
}
return node
}
if (token.type === "null") {
this.advance()
const node: Literal = { type: "Literal", kind: "null", value: null, span: token.span }
return node
}
if (token.type === "regex") {
this.advance()
const node: Literal = {
type: "Literal",
kind: "regex",
value: token.pattern,
flags: token.flags,
span: token.span,
}
return node
}
if (token.type === "identifier") {
this.advance()
const node: Identifier = { type: "Identifier", name: token.value, span: token.span }
return node
}
if (token.type === "this") {
this.advance()
const node: Identifier = { type: "Identifier", name: "this", span: token.span }
return node
}
if (token.type === "punctuation" && token.value === "(") {
this.advance()
const expr = this.parseExpression(0)
const closeToken = this.peek()
if (closeToken.type === "punctuation" && closeToken.value === ")") {
this.advance()
} else {
this.error("expected ')'", closeToken.span)
this.syncTo(")")
const maybeClose = this.peek()
if (maybeClose.type === "punctuation" && maybeClose.value === ")") {
this.advance()
}
}
return expr
}
if (token.type === "punctuation" && token.value === "[") {
return this.parseList()
}
this.error("unexpected token", token.span)
this.advance()
const node: ErrorExpr = { type: "ErrorExpr", message: "unexpected token", span: token.span }
return node
}
private parseList(): Expr {
const startToken = this.peek()
this.advance()
const elements: Expr[] = []
while (this.peek().type !== "eof") {
const next = this.peek()
if (next.type === "punctuation" && next.value === "]") {
this.advance()
const span = spanFrom(startToken.span, next.span)
const node: ListExpr = { type: "ListExpr", elements, span }
return node
}
const element = this.parseExpression(0)
elements.push(element)
const sep = this.peek()
if (sep.type === "punctuation" && sep.value === ",") {
this.advance()
const maybeClose = this.peek()
if (maybeClose.type === "punctuation" && maybeClose.value === "]") {
this.advance()
const span = spanFrom(startToken.span, maybeClose.span)
const node: ListExpr = { type: "ListExpr", elements, span }
return node
}
continue
}
if (sep.type === "punctuation" && sep.value === "]") {
this.advance()
const span = spanFrom(startToken.span, sep.span)
const node: ListExpr = { type: "ListExpr", elements, span }
return node
}
this.error("expected ',' or ']'", sep.span)
this.syncTo("]")
const maybeClose = this.peek()
if (maybeClose.type === "punctuation" && maybeClose.value === "]") {
const endToken = maybeClose
this.advance()
const span = spanFrom(startToken.span, endToken.span)
const node: ListExpr = { type: "ListExpr", elements, span }
return node
}
break
}
const endToken = this.previous()
const span = spanFrom(startToken.span, endToken.span)
return { type: "ListExpr", elements, span }
}
private error(message: string, span: Token["span"]) {
this.diagnostics.push({ kind: "parse", message, span })
}
private syncTo(value: ")" | "]") {
while (this.peek().type !== "eof") {
const token = this.peek()
if (token.type === "punctuation" && token.value === value) {
return
}
this.advance()
}
}
private peek(): Token {
return this.tokens[this.index]
}
private previous(): Token {
return this.tokens[Math.max(0, this.index - 1)]
}
private advance(): Token {
const token = this.tokens[this.index]
if (this.index < this.tokens.length - 1) this.index += 1
return token
}
}
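The `infixBindingPowers` table above encodes a standard Pratt scheme: `rbp = lbp + 1` makes each level left-associative, and `&&` (3/4) binds tighter than `||` (1/2). A minimal standalone sketch of that loop (illustrative only — flat string tokens and parenthesized output instead of AST nodes):

```typescript
// Standalone Pratt-loop sketch mirroring the binding powers in the parser above.
// Tokens are plain strings; the result string shows grouping via parentheses.
type Info = { lbp: number; rbp: number }
const bp: Record<string, Info> = {
  "||": { lbp: 1, rbp: 2 },
  "&&": { lbp: 3, rbp: 4 },
  "+": { lbp: 9, rbp: 10 },
}
function parse(tokens: string[], minBp = 0): string {
  let left = tokens.shift()! // operand
  while (tokens.length > 0) {
    const info = bp[tokens[0]]
    if (!info || info.lbp < minBp) break // binds weaker than caller: stop
    const op = tokens.shift()!
    const right = parse(tokens, info.rbp) // rbp = lbp + 1 -> left associativity
    left = `(${left} ${op} ${right})`
  }
  return left
}
console.log(parse(["a", "||", "b", "&&", "c"])) // → (a || (b && c))
console.log(parse(["x", "+", "y", "+", "z"])) // → ((x + y) + z)
```

The same recursion shape appears in `parseExpression` above, with `parsePrefix`/`parsePostfix` supplying the operands.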


@@ -0,0 +1,27 @@
import assert from "node:assert"
import test from "node:test"
import { parseExpressionSource } from "./parser"
import { buildPropertyExpressionSource } from "./properties"
test("builds property expression sources", () => {
const cases: Array<{ input: string; expected: string }> = [
{ input: "status", expected: "note.status" },
{ input: "note.status", expected: "note.status" },
{ input: "file.name", expected: "file.name" },
{ input: "file.my-field", expected: 'file["my-field"]' },
{ input: "my-field", expected: 'note["my-field"]' },
{ input: 'note["my field"]', expected: 'note["my field"]' },
{ input: "formula.total", expected: "formula.total" },
{ input: "this.file.name", expected: "this.file.name" },
{ input: "a.b-c.d", expected: 'note.a["b-c"].d' },
{ input: "date(file.ctime)", expected: "date(file.ctime)" },
]
for (const entry of cases) {
const result = buildPropertyExpressionSource(entry.input)
assert.strictEqual(result, entry.expected)
const parsed = parseExpressionSource(entry.expected)
assert.strictEqual(parsed.diagnostics.length, 0)
assert.ok(parsed.program.body)
}
})


@@ -0,0 +1,27 @@
const simpleIdentifierPattern = /^[A-Za-z_][A-Za-z0-9_]*$/
export function buildPropertyExpressionSource(property: string): string | null {
const trimmed = property.trim()
if (!trimmed) return null
if (trimmed.includes("(") || trimmed.includes("[") || trimmed.includes("]")) {
return trimmed
}
const parts = trimmed.split(".")
const root = parts[0]
const rest = parts.slice(1)
const buildAccess = (base: string, segments: string[]) => {
let source = base
for (const segment of segments) {
if (simpleIdentifierPattern.test(segment)) {
source = `${source}.${segment}`
} else {
source = `${source}[${JSON.stringify(segment)}]`
}
}
return source
}
if (root === "file" || root === "note" || root === "formula" || root === "this") {
return buildAccess(root, rest)
}
return buildAccess("note", parts)
}


@@ -0,0 +1,36 @@
export const BUILTIN_SUMMARY_TYPES = [
"count",
"sum",
"average",
"avg",
"min",
"max",
"range",
"unique",
"filled",
"missing",
"median",
"stddev",
"checked",
"unchecked",
"empty",
"earliest",
"latest",
] as const
export type BuiltinSummaryType = (typeof BUILTIN_SUMMARY_TYPES)[number]
export interface SummaryDefinition {
type: "builtin" | "formula"
builtinType?: BuiltinSummaryType
formulaRef?: string
expression?: string
}
export interface ViewSummaryConfig {
columns: Record<string, SummaryDefinition>
}
export interface PropertyConfig {
displayName?: string
}


@@ -0,0 +1,42 @@
import { Span } from "./ast"
export type Operator =
| "=="
| "!="
| ">="
| "<="
| ">"
| "<"
| "&&"
| "||"
| "+"
| "-"
| "*"
| "/"
| "%"
| "!"
export type Punctuation = "." | "," | "(" | ")" | "[" | "]"
export type NumberToken = { type: "number"; value: number; span: Span }
export type StringToken = { type: "string"; value: string; span: Span }
export type BooleanToken = { type: "boolean"; value: boolean; span: Span }
export type NullToken = { type: "null"; span: Span }
export type IdentifierToken = { type: "identifier"; value: string; span: Span }
export type ThisToken = { type: "this"; span: Span }
export type OperatorToken = { type: "operator"; value: Operator; span: Span }
export type PunctuationToken = { type: "punctuation"; value: Punctuation; span: Span }
export type RegexToken = { type: "regex"; pattern: string; flags: string; span: Span }
export type EofToken = { type: "eof"; span: Span }
export type Token =
| NumberToken
| StringToken
| BooleanToken
| NullToken
| IdentifierToken
| ThisToken
| OperatorToken
| PunctuationToken
| RegexToken
| EofToken


@@ -0,0 +1,278 @@
import yaml from "js-yaml"
import fs from "node:fs/promises"
import path from "node:path"
import {
parseExpressionSource,
compileExpression,
buildPropertyExpressionSource,
BUILTIN_SUMMARY_TYPES,
} from "./compiler"
import { Expr, LogicalExpr, UnaryExpr, spanFrom } from "./compiler/ast"
import { Diagnostic } from "./compiler/errors"
const isRecord = (value: unknown): value is Record<string, unknown> =>
typeof value === "object" && value !== null && !Array.isArray(value)
type CollectedExpression = {
kind: string
context: string
source: string
ast: Expr | null
ir: unknown
diagnostics: Diagnostic[]
}
const parseToExpr = (source: string, filePath: string) => {
const result = parseExpressionSource(source, filePath)
return { expr: result.program.body ?? null, diagnostics: result.diagnostics }
}
const buildLogical = (operator: "&&" | "||", expressionsList: Expr[]): Expr | null => {
if (expressionsList.length === 0) return null
let current: Expr | null = null
for (const next of expressionsList) {
if (!current) {
current = next
continue
}
const span = spanFrom(current.span, next.span)
const node: LogicalExpr = { type: "LogicalExpr", operator, left: current, right: next, span }
current = node
}
return current
}
const negateExpressions = (expressionsList: Expr[]): Expr[] =>
expressionsList.map((expr) => {
const node: UnaryExpr = {
type: "UnaryExpr",
operator: "!",
argument: expr,
span: spanFrom(expr.span, expr.span),
}
return node
})
const buildFilterExpr = (
raw: unknown,
context: string,
diagnostics: Diagnostic[],
filePath: string,
): Expr | null => {
if (typeof raw === "string") {
const parsed = parseToExpr(raw, filePath)
diagnostics.push(...parsed.diagnostics)
return parsed.expr
}
if (!isRecord(raw)) return null
if (Array.isArray(raw.and)) {
const parts = raw.and
.map((entry, index) =>
buildFilterExpr(entry, `${context}.and[${index}]`, diagnostics, filePath),
)
.filter((entry): entry is Expr => Boolean(entry))
return buildLogical("&&", parts)
}
if (Array.isArray(raw.or)) {
const parts = raw.or
.map((entry, index) =>
buildFilterExpr(entry, `${context}.or[${index}]`, diagnostics, filePath),
)
.filter((entry): entry is Expr => Boolean(entry))
return buildLogical("||", parts)
}
if (Array.isArray(raw.not)) {
const parts = raw.not
.map((entry, index) =>
buildFilterExpr(entry, `${context}.not[${index}]`, diagnostics, filePath),
)
.filter((entry): entry is Expr => Boolean(entry))
return buildLogical("&&", negateExpressions(parts))
}
return null
}
const collectPropertyExpressions = (
views: unknown[],
): Map<string, { source: string; context: string }> => {
const entries = new Map<string, { source: string; context: string }>()
const addProperty = (property: string, context: string) => {
const key = property.trim()
if (!key || entries.has(key)) return
const source = buildPropertyExpressionSource(key)
if (!source) return
entries.set(key, { source, context })
}
views.forEach((view, viewIndex) => {
if (!isRecord(view)) return
const viewContext = `views[${viewIndex}]`
if (Array.isArray(view.order)) {
view.order.forEach((entry, orderIndex) => {
if (typeof entry === "string") {
addProperty(entry, `${viewContext}.order[${orderIndex}]`)
}
})
}
if (Array.isArray(view.sort)) {
view.sort.forEach((entry, sortIndex) => {
if (isRecord(entry) && typeof entry.property === "string") {
addProperty(entry.property, `${viewContext}.sort[${sortIndex}].property`)
}
})
}
if (typeof view.groupBy === "string") {
addProperty(view.groupBy, `${viewContext}.groupBy`)
} else if (isRecord(view.groupBy) && typeof view.groupBy.property === "string") {
addProperty(view.groupBy.property, `${viewContext}.groupBy.property`)
}
if (view.summaries && isRecord(view.summaries)) {
const columns =
"columns" in view.summaries && isRecord(view.summaries.columns)
? view.summaries.columns
: view.summaries
for (const key of Object.keys(columns)) {
addProperty(key, `${viewContext}.summaries.${key}`)
}
}
if (typeof view.image === "string") {
addProperty(view.image, `${viewContext}.image`)
}
if (view.type === "map") {
const coords = typeof view.coordinates === "string" ? view.coordinates : "coordinates"
addProperty(coords, `${viewContext}.coordinates`)
if (typeof view.markerIcon === "string") {
addProperty(view.markerIcon, `${viewContext}.markerIcon`)
}
if (typeof view.markerColor === "string") {
addProperty(view.markerColor, `${viewContext}.markerColor`)
}
}
})
return entries
}
const main = async () => {
const inputPath = process.argv[2] ?? "content/antilibrary.base"
const filePath = path.resolve(process.cwd(), inputPath)
const raw = await fs.readFile(filePath, "utf8")
const parsed = yaml.load(raw)
const config = isRecord(parsed) ? parsed : {}
const collected: CollectedExpression[] = []
if (config.filters !== undefined) {
const diagnostics: Diagnostic[] = []
const expr = buildFilterExpr(config.filters, "filters", diagnostics, filePath)
collected.push({
kind: "filters",
context: "filters",
source: typeof config.filters === "string" ? config.filters : JSON.stringify(config.filters),
ast: expr,
ir: expr ? compileExpression(expr) : null,
diagnostics,
})
}
if (isRecord(config.formulas)) {
for (const [name, value] of Object.entries(config.formulas)) {
if (typeof value !== "string") continue
const parsedExpr = parseToExpr(value, filePath)
collected.push({
kind: "formula",
context: `formulas.${name}`,
source: value,
ast: parsedExpr.expr,
ir: parsedExpr.expr ? compileExpression(parsedExpr.expr) : null,
diagnostics: parsedExpr.diagnostics,
})
}
}
const topLevelSummaries = isRecord(config.summaries) ? config.summaries : {}
for (const [name, value] of Object.entries(topLevelSummaries)) {
if (typeof value !== "string") continue
const parsedExpr = parseToExpr(value, filePath)
collected.push({
kind: "summary",
context: `summaries.${name}`,
source: value,
ast: parsedExpr.expr,
ir: parsedExpr.expr ? compileExpression(parsedExpr.expr) : null,
diagnostics: parsedExpr.diagnostics,
})
}
if (Array.isArray(config.views)) {
config.views.forEach((view, index) => {
if (!isRecord(view)) return
if (view.filters !== undefined) {
const diagnostics: Diagnostic[] = []
const expr = buildFilterExpr(view.filters, `views[${index}].filters`, diagnostics, filePath)
collected.push({
kind: "view.filter",
context: `views[${index}].filters`,
source: typeof view.filters === "string" ? view.filters : JSON.stringify(view.filters),
ast: expr,
ir: expr ? compileExpression(expr) : null,
diagnostics,
})
}
if (view.summaries && isRecord(view.summaries)) {
const columns =
"columns" in view.summaries && isRecord(view.summaries.columns)
? view.summaries.columns
: view.summaries
const builtins = new Set<string>(BUILTIN_SUMMARY_TYPES)
for (const [column, summaryValue] of Object.entries(columns)) {
if (typeof summaryValue !== "string") continue
const normalized = summaryValue.toLowerCase().trim()
if (builtins.has(normalized)) continue
const summarySource =
summaryValue in topLevelSummaries && typeof topLevelSummaries[summaryValue] === "string"
? String(topLevelSummaries[summaryValue])
: summaryValue
const parsedExpr = parseToExpr(summarySource, filePath)
collected.push({
kind: "view.summary",
context: `views[${index}].summaries.${column}`,
source: summarySource,
ast: parsedExpr.expr,
ir: parsedExpr.expr ? compileExpression(parsedExpr.expr) : null,
diagnostics: parsedExpr.diagnostics,
})
}
}
})
}
const views = Array.isArray(config.views) ? config.views : []
const propertyExpressions = collectPropertyExpressions(views)
for (const entry of propertyExpressions.values()) {
const parsedExpr = parseToExpr(entry.source, filePath)
collected.push({
kind: "property",
context: entry.context,
source: entry.source,
ast: parsedExpr.expr,
ir: parsedExpr.expr ? compileExpression(parsedExpr.expr) : null,
diagnostics: parsedExpr.diagnostics,
})
}
const payload = { file: inputPath, count: collected.length, expressions: collected }
process.stdout.write(JSON.stringify(payload, null, 2))
}
main().catch((error) => {
console.error(error)
process.exit(1)
})

248
quartz/util/base/query.ts Normal file

@@ -0,0 +1,248 @@
import { QuartzPluginData } from "../../plugins/vfile"
import { evaluateSummaryExpression, valueToUnknown, EvalContext, ProgramIR } from "./compiler"
import { SummaryDefinition, ViewSummaryConfig, BuiltinSummaryType } from "./types"
type SummaryValueResolver = (
file: QuartzPluginData,
column: string,
allFiles: QuartzPluginData[],
) => unknown
type SummaryContextFactory = (file: QuartzPluginData) => EvalContext
export function computeColumnSummary(
column: string,
files: QuartzPluginData[],
summary: SummaryDefinition,
allFiles: QuartzPluginData[] = [],
valueResolver: SummaryValueResolver,
getContext: SummaryContextFactory,
summaryExpression?: ProgramIR,
): string | number | undefined {
if (files.length === 0) {
return undefined
}
const values = files.map((file) => valueResolver(file, column, allFiles))
if (summary.type === "builtin" && summary.builtinType) {
return computeBuiltinSummary(values, summary.builtinType)
}
if (summary.type === "formula" && summary.expression) {
if (summaryExpression) {
const summaryCtx = getContext(files[0])
summaryCtx.diagnosticContext = `summaries.${column}`
summaryCtx.diagnosticSource = summary.expression
summaryCtx.rows = files
const value = evaluateSummaryExpression(summaryExpression, values, summaryCtx)
const unknownValue = valueToUnknown(value)
if (typeof unknownValue === "number" || typeof unknownValue === "string") {
return unknownValue
}
return undefined
}
}
return undefined
}
function computeBuiltinSummary(
values: any[],
type: BuiltinSummaryType,
): string | number | undefined {
switch (type) {
case "count":
return values.length
case "sum": {
const nums = values.filter((v) => typeof v === "number")
if (nums.length === 0) return undefined
return nums.reduce((acc, v) => acc + v, 0)
}
case "average":
case "avg": {
const nums = values.filter((v) => typeof v === "number")
if (nums.length === 0) return undefined
const sum = nums.reduce((acc, v) => acc + v, 0)
return Math.round((sum / nums.length) * 100) / 100
}
case "min": {
const comparable = values.filter(
(v) => typeof v === "number" || v instanceof Date || typeof v === "string",
)
if (comparable.length === 0) return undefined
const normalized = comparable.map((v) => (v instanceof Date ? v.getTime() : v))
const nums = normalized.filter((v): v is number => typeof v === "number")
// Math.min() over an empty list returns Infinity (never NaN), so test for
// emptiness directly before falling back to string comparison.
if (nums.length === 0) {
const strings = comparable.filter((v) => typeof v === "string") as string[]
if (strings.length === 0) return undefined
return strings.sort()[0]
}
const min = Math.min(...nums)
if (comparable.some((v) => v instanceof Date)) {
return new Date(min).toISOString().split("T")[0]
}
return min
}
case "max": {
const comparable = values.filter(
(v) => typeof v === "number" || v instanceof Date || typeof v === "string",
)
if (comparable.length === 0) return undefined
const normalized = comparable.map((v) => (v instanceof Date ? v.getTime() : v))
const nums = normalized.filter((v): v is number => typeof v === "number")
// Same guard as "min": Math.max() of nothing is -Infinity, not NaN.
if (nums.length === 0) {
const strings = comparable.filter((v) => typeof v === "string") as string[]
if (strings.length === 0) return undefined
return strings.sort().reverse()[0]
}
const max = Math.max(...nums)
if (comparable.some((v) => v instanceof Date)) {
return new Date(max).toISOString().split("T")[0]
}
return max
}
case "range": {
const comparable = values.filter(
(v) => typeof v === "number" || v instanceof Date || typeof v === "string",
)
if (comparable.length === 0) return undefined
const normalized = comparable.map((v) => (v instanceof Date ? v.getTime() : v))
const nums = normalized.filter((v) => typeof v === "number")
if (nums.length === 0) return undefined
const min = Math.min(...nums)
const max = Math.max(...nums)
if (comparable.some((v) => v instanceof Date)) {
return `${new Date(min).toISOString().split("T")[0]} - ${new Date(max).toISOString().split("T")[0]}`
}
return `${min} - ${max}`
}
case "unique": {
const nonNull = values.filter((v) => v !== undefined && v !== null && v !== "")
const unique = new Set(nonNull.map((v) => (v instanceof Date ? v.toISOString() : String(v))))
return unique.size
}
case "filled": {
const filled = values.filter((v) => v !== undefined && v !== null && v !== "")
return filled.length
}
case "missing": {
const missing = values.filter((v) => v === undefined || v === null || v === "")
return missing.length
}
case "median": {
const nums = values.filter((v) => typeof v === "number") as number[]
if (nums.length === 0) return undefined
const sorted = [...nums].sort((a, b) => a - b)
const mid = Math.floor(sorted.length / 2)
if (sorted.length % 2 === 0) {
return (sorted[mid - 1] + sorted[mid]) / 2
}
return sorted[mid]
}
case "stddev": {
const nums = values.filter((v) => typeof v === "number") as number[]
if (nums.length === 0) return undefined
const mean = nums.reduce((acc, v) => acc + v, 0) / nums.length
const variance = nums.reduce((acc, v) => acc + (v - mean) * (v - mean), 0) / nums.length
return Math.round(Math.sqrt(variance) * 100) / 100
}
case "checked":
return values.filter((v) => v === true).length
case "unchecked":
return values.filter((v) => v === false).length
case "empty": {
const count = values.filter(
(v) =>
v === undefined ||
v === null ||
v === "" ||
(Array.isArray(v) && v.length === 0) ||
(typeof v === "object" && v !== null && !Array.isArray(v) && Object.keys(v).length === 0),
).length
return count
}
case "earliest": {
const dates = values.filter(
(v) =>
v instanceof Date ||
(typeof v === "string" && /^\d{4}-\d{2}-\d{2}/.test(v)) ||
typeof v === "number",
)
if (dates.length === 0) return undefined
const timestamps = dates.map((v) => {
if (v instanceof Date) return v.getTime()
if (typeof v === "string") return new Date(v).getTime()
return v
})
const earliest = Math.min(...timestamps)
return new Date(earliest).toISOString().split("T")[0]
}
case "latest": {
const dates = values.filter(
(v) =>
v instanceof Date ||
(typeof v === "string" && /^\d{4}-\d{2}-\d{2}/.test(v)) ||
typeof v === "number",
)
if (dates.length === 0) return undefined
const timestamps = dates.map((v) => {
if (v instanceof Date) return v.getTime()
if (typeof v === "string") return new Date(v).getTime()
return v
})
const latest = Math.max(...timestamps)
return new Date(latest).toISOString().split("T")[0]
}
default:
return undefined
}
}
export function computeViewSummaries(
columns: string[],
files: QuartzPluginData[],
summaryConfig: ViewSummaryConfig | undefined,
allFiles: QuartzPluginData[] = [],
getContext: SummaryContextFactory,
valueResolver: SummaryValueResolver,
summaryExpressions?: Record<string, ProgramIR>,
): Record<string, string | number | undefined> {
const results: Record<string, string | number | undefined> = {}
if (!summaryConfig?.columns) {
return results
}
for (const column of columns) {
const summary = summaryConfig.columns[column]
if (summary) {
const expression = summaryExpressions ? summaryExpressions[column] : undefined
results[column] = computeColumnSummary(
column,
files,
summary,
allFiles,
valueResolver,
getContext,
expression,
)
}
}
return results
}
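The "empty" builtin in the switch above counts undefined, null, empty strings, empty arrays, and empty plain objects, while treating `0` and `false` as real values. A standalone sketch of that predicate (the helper name is ours, not from the codebase):

```typescript
// Hypothetical standalone version of the "empty" summary predicate above.
const isEmptyValue = (v: unknown): boolean =>
  v === undefined ||
  v === null ||
  v === "" ||
  (Array.isArray(v) && v.length === 0) ||
  (typeof v === "object" && v !== null && !Array.isArray(v) && Object.keys(v).length === 0)

// 0 and false are values, not emptiness, so only the first four entries count.
const emptyCount = [undefined, null, "", [], { a: 1 }, 0, false, "x"].filter(isEmptyValue).length
console.log(emptyCount) // 4
```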

quartz/util/base/render.ts (new file, 1335 lines; diff suppressed because it is too large)

@@ -0,0 +1,31 @@
import assert from "node:assert"
import test from "node:test"
import { parseViews, parseViewSummaries } from "./types"
test("parseViews preserves raw filters", () => {
const views = parseViews([
{ type: "table", name: "test", filters: 'status == "done"', order: ["file.name"] },
])
assert.strictEqual(views.length, 1)
assert.strictEqual(views[0].filters, 'status == "done"')
assert.deepStrictEqual(views[0].order, ["file.name"])
})
test("parseViews rejects missing type/name", () => {
assert.throws(() => parseViews([{}]))
})
test("parseViewSummaries resolves builtin and formula refs", () => {
const summaries = parseViewSummaries(
{ price: "Average", score: "avgScore", extra: "values.length" },
{ avgScore: "values.mean()" },
)
assert.ok(summaries)
if (!summaries) return
assert.strictEqual(summaries.columns.price.type, "builtin")
assert.strictEqual(summaries.columns.score.type, "formula")
assert.strictEqual(summaries.columns.score.formulaRef, "avgScore")
assert.strictEqual(summaries.columns.extra.type, "formula")
})

quartz/util/base/types.ts (new file, 119 lines)

@@ -0,0 +1,119 @@
import {
SummaryDefinition,
ViewSummaryConfig,
PropertyConfig,
BuiltinSummaryType,
BUILTIN_SUMMARY_TYPES,
} from "./compiler/schema"
export type { SummaryDefinition, ViewSummaryConfig, PropertyConfig, BuiltinSummaryType }
export { BUILTIN_SUMMARY_TYPES }
const isRecord = (value: unknown): value is Record<string, unknown> =>
typeof value === "object" && value !== null && !Array.isArray(value)
const isNonEmptyString = (value: unknown): value is string =>
typeof value === "string" && value.trim().length > 0
export type BaseFileFilter =
| string
| { and: BaseFileFilter[] }
| { or: BaseFileFilter[] }
| { not: BaseFileFilter[] }
export interface BaseFile {
filters?: BaseFileFilter
views: BaseView[]
properties?: Record<string, PropertyConfig>
summaries?: Record<string, string>
formulas?: Record<string, string>
}
export interface BaseView {
type: "table" | "list" | "gallery" | "board" | "calendar" | "card" | "cards" | "map"
name: string
order?: string[]
sort?: BaseSortConfig[]
columnSize?: Record<string, number>
groupBy?: string | BaseGroupBy
limit?: number
filters?: BaseFileFilter
summaries?: Record<string, string> | ViewSummaryConfig
image?: string
cardSize?: number
cardAspect?: number
nestedProperties?: boolean
indentProperties?: boolean
separator?: string
date?: string
dateField?: string
dateProperty?: string
coordinates?: string
markerIcon?: string
markerColor?: string
defaultZoom?: number
defaultCenter?: [number, number]
clustering?: boolean
groupSizes?: Record<string, number>
groupAspects?: Record<string, number>
}
export interface BaseSortConfig {
property: string
direction: "ASC" | "DESC"
}
export interface BaseGroupBy {
property: string
direction: "ASC" | "DESC"
}
export function parseViews(raw: unknown[]): BaseView[] {
return raw.map((entry) => {
if (!isRecord(entry)) throw new Error("Each view must be an object")
const { type, name } = entry
if (!isNonEmptyString(type) || !isNonEmptyString(name)) {
throw new Error("Each view must have 'type' and 'name' fields")
}
return { ...entry, type, name } as BaseView
})
}
export function parseViewSummaries(
viewSummaries: Record<string, string> | ViewSummaryConfig | undefined,
topLevelSummaries?: Record<string, string>,
): ViewSummaryConfig | undefined {
if (!viewSummaries || typeof viewSummaries !== "object") return undefined
if ("columns" in viewSummaries && typeof viewSummaries.columns === "object") {
return viewSummaries as ViewSummaryConfig
}
const columns: Record<string, SummaryDefinition> = {}
for (const [column, summaryValue] of Object.entries(viewSummaries)) {
if (typeof summaryValue !== "string") continue
const normalized = summaryValue.toLowerCase().trim()
if (BUILTIN_SUMMARY_TYPES.includes(normalized as BuiltinSummaryType)) {
columns[column] = { type: "builtin", builtinType: normalized as BuiltinSummaryType }
continue
}
if (topLevelSummaries && summaryValue in topLevelSummaries) {
columns[column] = {
type: "formula",
formulaRef: summaryValue,
expression: topLevelSummaries[summaryValue],
}
continue
}
if (summaryValue.includes("(") || summaryValue.includes(".")) {
columns[column] = { type: "formula", expression: summaryValue }
}
}
return Object.keys(columns).length > 0 ? { columns } : undefined
}
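`parseViewSummaries` resolves each per-column string in three steps, in order of precedence: a case-insensitive builtin name wins, then a key into the top-level `summaries` map, then anything that syntactically looks like an expression (contains `(` or `.`); everything else is silently dropped. A self-contained sketch of that precedence — the builtin set here is only the subset visible in the switch earlier in this diff, since the full `BUILTIN_SUMMARY_TYPES` list lives in `compiler/schema` and is not shown:

```typescript
type SummaryDef =
  | { type: "builtin"; builtinType: string }
  | { type: "formula"; formulaRef?: string; expression: string }

// Only the builtins visible in the switch earlier in this diff; the real list
// is BUILTIN_SUMMARY_TYPES from compiler/schema.
const BUILTINS = new Set(["checked", "unchecked", "empty", "earliest", "latest"])

function normalizeSummary(
  value: string,
  topLevel: Record<string, string> = {},
): SummaryDef | undefined {
  const normalized = value.toLowerCase().trim()
  if (BUILTINS.has(normalized)) return { type: "builtin", builtinType: normalized }
  if (value in topLevel) return { type: "formula", formulaRef: value, expression: topLevel[value] }
  if (value.includes("(") || value.includes(".")) return { type: "formula", expression: value }
  return undefined // dropped silently, as in parseViewSummaries
}

console.log(normalizeSummary("Earliest")) // builtin, case-insensitive
console.log(normalizeSummary("avgScore", { avgScore: "values.mean()" })) // formula ref
console.log(normalizeSummary("values.length")) // inline formula
console.log(normalizeSummary("plainword")) // undefined
```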


@@ -10,13 +10,12 @@ export async function glob(
   pattern: string,
   cwd: string,
   ignorePatterns: string[],
-  respectGitignore: boolean = true,
 ): Promise<FilePath[]> {
   const fps = (
     await globby(pattern, {
       cwd,
       ignore: ignorePatterns,
-      gitignore: respectGitignore,
+      gitignore: true,
     })
   ).map(toPosixPath)
   return fps as FilePath[]


@@ -73,7 +73,7 @@ export function slugifyFilePath(fp: FilePath, excludeExt?: boolean): FullSlug {
   fp = stripSlashes(fp) as FilePath
   let ext = getFileExtension(fp)
   const withoutFileExt = fp.replace(new RegExp(ext + "$"), "")
-  if (excludeExt || [".md", ".html", undefined].includes(ext)) {
+  if (excludeExt || [".md", ".html", ".base", undefined].includes(ext)) {
     ext = ""
   }
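The effect of the change above is that `.base` now joins `.md` and `.html` as an extension that slugification strips, so a path like `notes/tasks.base` slugifies without its extension while asset paths keep theirs. An illustrative sketch of just that stripping step — quartz's real `getFileExtension` differs, and this regex stand-in is ours:

```typescript
// Sketch of the extension handling changed above: ".base" is now stripped
// alongside ".md" and ".html". The regex here is an assumption, not quartz's
// actual getFileExtension.
const STRIPPED_EXTS = [".md", ".html", ".base"]

function stripKnownExt(fp: string): string {
  const match = /\.[^./]+$/.exec(fp)
  const ext = match?.[0]
  return ext !== undefined && STRIPPED_EXTS.includes(ext) ? fp.slice(0, -ext.length) : fp
}

console.log(stripKnownExt("notes/tasks.base")) // "notes/tasks"
console.log(stripKnownExt("img/logo.png")) // unchanged: "img/logo.png"
```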

quartz/util/wikilinks.ts (new file, 94 lines)

@@ -0,0 +1,94 @@
import { FilePath, FullSlug, slugifyFilePath } from "./path"
export type WikilinkWithPosition = {
wikilink: ParsedWikilink
start: number
end: number
}
export type ParsedWikilink = {
raw: string
target: string
anchor?: string
alias?: string
embed: boolean
}
export type ResolvedWikilink = {
slug: FullSlug
anchor?: string
}
const wikilinkRegex = /^!?\[\[([^\]|#]+)(?:#([^\]|]+))?(?:\|([^\]]+))?\]\]$/
export function parseWikilink(text: string): ParsedWikilink | null {
const trimmed = text.trim()
const match = wikilinkRegex.exec(trimmed)
if (!match) return null
const [, target, anchor, alias] = match
return {
raw: trimmed,
target: target?.trim() ?? "",
anchor: anchor?.trim(),
alias: alias?.trim(),
embed: trimmed.startsWith("!"),
}
}
export function resolveWikilinkTarget(
parsed: ParsedWikilink,
currentSlug: FullSlug,
): ResolvedWikilink | null {
const target = parsed.target.trim()
if (!target) return null
if (target.startsWith("/")) {
const slug = slugifyFilePath(target.slice(1).replace(/\\/g, "/") as FilePath)
return { slug, anchor: parsed.anchor }
}
const currentParts = currentSlug.split("/")
const currentDir = currentParts.slice(0, -1)
const targetParts = target.replace(/\\/g, "/").split("/")
const resolved: string[] = [...currentDir]
for (const part of targetParts) {
if (part === "..") {
resolved.pop()
} else if (part !== "." && part.length > 0) {
resolved.push(part)
}
}
const slug = slugifyFilePath(resolved.join("/") as FilePath)
return { slug, anchor: parsed.anchor }
}
const globalWikilinkRegex = /!?\[\[([^\]|#]+)(?:#([^\]|]+))?(?:\|([^\]]+))?\]\]/g
export function extractWikilinksWithPositions(text: string): WikilinkWithPosition[] {
const results: WikilinkWithPosition[] = []
let match: RegExpExecArray | null
globalWikilinkRegex.lastIndex = 0
while ((match = globalWikilinkRegex.exec(text)) !== null) {
const [fullMatch, target, anchor, alias] = match
results.push({
wikilink: {
raw: fullMatch,
target: target?.trim() ?? "",
anchor: anchor?.trim(),
alias: alias?.trim(),
embed: fullMatch.startsWith("!"),
},
start: match.index,
end: match.index + fullMatch.length,
})
}
return results
}