mirror of
https://github.com/blakeblackshear/frigate.git
synced 2026-04-28 23:06:13 +02:00
Docs refactor (#22703)
* add generation script: read YAML code blocks from docs markdown files and generate corresponding "Frigate UI" tab instructions based on the JSON schema, i18n, section configs (hidden fields), and nav mappings
* first pass
* components
* add to gitignore
* second pass
* fix broken anchors
* fixes
* clean up tabs
* version bump
* tweaks
* remove role mapping config from ui
184
docs/scripts/README.md
Normal file
@@ -0,0 +1,184 @@
# Documentation Scripts

## generate_ui_tabs.py

Automatically generates "Frigate UI" tab content for documentation files based on the YAML config examples already in the docs.

Instead of manually writing UI instructions for every YAML block, this script reads three data sources from the codebase and generates the UI tabs:

1. **JSON Schema** (from Pydantic config models) -- field names, types, defaults
2. **i18n translation files** -- the exact labels shown in the Settings UI
3. **Section mappings** (from Settings.tsx) -- config key to UI navigation path

### Prerequisites

Run from the repository root. The script imports Frigate's Python config models directly, so the `frigate` package must be importable:

```bash
# From repo root -- no extra install needed if your environment can import frigate
python3 docs/scripts/generate_ui_tabs.py --help
```

### Usage

#### Preview (default)

Shows what would be generated for each bare YAML block, without modifying any files:

```bash
# Single file
python3 docs/scripts/generate_ui_tabs.py docs/docs/configuration/record.md

# All config docs
python3 docs/scripts/generate_ui_tabs.py docs/docs/configuration/
```

#### Inject

Wraps bare YAML blocks with `<ConfigTabs>` and inserts the generated UI tab. Also adds the required imports (`ConfigTabs`, `TabItem`, `NavPath`) after the frontmatter if missing.

Already-wrapped blocks are skipped (idempotent).

```bash
python3 docs/scripts/generate_ui_tabs.py --inject docs/docs/configuration/record.md
```

#### Check

Compares existing UI tabs against what the script would generate from the current schema and i18n files. Prints a unified diff for each drifted block and exits with code 1 if any drift is found.

Use this in CI to catch stale docs after schema or i18n changes.

```bash
python3 docs/scripts/generate_ui_tabs.py --check docs/docs/configuration/
```

#### Regenerate

Replaces the UI tab content in existing `<ConfigTabs>` blocks with freshly generated content. The YAML tab is preserved exactly as-is. Only blocks that have actually changed are rewritten.

```bash
# Preview changes without writing
python3 docs/scripts/generate_ui_tabs.py --regenerate --dry-run docs/docs/configuration/

# Apply changes
python3 docs/scripts/generate_ui_tabs.py --regenerate docs/docs/configuration/
```

#### Output to directory (`--outdir`)

Write generated files to a separate directory instead of modifying the originals. The source directory structure is mirrored. Files without changes are copied as-is, so the output is a complete snapshot suitable for diffing.

Works with `--inject` and `--regenerate`.

```bash
# Generate into a named directory
python3 docs/scripts/generate_ui_tabs.py --inject --outdir /tmp/generated docs/docs/configuration/

# Then diff original vs generated
diff -rq docs/docs/configuration/ /tmp/generated/

# Or let an AI agent compare them
diff -ru docs/docs/configuration/record.md /tmp/generated/record.md
```

This is useful for AI agents that need to review the generated output before applying it, or for previewing what `--inject` or `--regenerate` would do across an entire directory.

#### Verbose mode

Add `-v` to any mode for detailed diagnostics (skipped blocks, skip reasons, unchanged blocks):

```bash
python3 docs/scripts/generate_ui_tabs.py -v docs/docs/configuration/
```

### Typical workflow

```bash
# 1. Preview what would be generated (output to temp dir, originals untouched)
python3 docs/scripts/generate_ui_tabs.py --inject --outdir /tmp/ui-preview docs/docs/configuration/
# Compare: diff -ru docs/docs/configuration/ /tmp/ui-preview/

# 2. Apply: inject UI tabs into the actual docs
python3 docs/scripts/generate_ui_tabs.py --inject docs/docs/configuration/

# 3. Review and hand-edit where needed (the script gets you 90% there)

# 4. Later, after schema or i18n changes, check for drift
python3 docs/scripts/generate_ui_tabs.py --check docs/docs/configuration/

# 5. If drifted, preview then regenerate
python3 docs/scripts/generate_ui_tabs.py --regenerate --outdir /tmp/ui-regen docs/docs/configuration/
# Compare: diff -ru docs/docs/configuration/ /tmp/ui-regen/

# 6. Apply regeneration
python3 docs/scripts/generate_ui_tabs.py --regenerate docs/docs/configuration/
```

### How it decides what to generate

The script detects two patterns from the YAML block content:

**Pattern A -- Field table.** When the YAML has inline comments (e.g., `# <- description`), the script generates a markdown table with field names and descriptions:

```markdown
Navigate to <NavPath path="Settings > Global configuration > Recording" />.

| Field | Description |
|-------|-------------|
| **Continuous retention > Retention days** | Days to retain recordings. |
| **Motion retention > Retention days** | Days to retain recordings. |
```

**Pattern B -- Set instructions.** When the YAML has concrete values without comments, the script generates step-by-step instructions:

```markdown
Navigate to <NavPath path="Settings > Global configuration > Recording" />.

- Set **Enable recording** to on
- Set **Continuous retention > Retention days** to `3`
- Set **Alert retention > Event retention > Retention days** to `30`
- Set **Alert retention > Event retention > Retention mode** to `all`
```

**Camera-level config** is auto-detected when the YAML is nested under `cameras:`. The output uses a generic camera reference rather than the example camera name from the YAML:

```markdown
1. Navigate to <NavPath path="Settings > Camera configuration > Recording" /> and select your camera.
   - Set **Enable recording** to on
   - Set **Continuous retention > Retention days** to `5`
```
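The two-pattern decision above can be sketched as a small heuristic. This is an illustrative stand-in, not the script's actual implementation (the real logic lives in `lib/ui_generator.py` and may use more signals):

```python
import re


def choose_pattern(yaml_text: str) -> str:
    """Pick an output style for a YAML example block.

    Blocks whose lines carry inline '#' comments read as field
    documentation (Pattern A); blocks with bare concrete values
    read as settings to apply (Pattern B).
    """
    has_inline_comments = any(
        # a value followed by an inline comment, e.g. "days: 3  # keep 3 days"
        re.search(r"\S\s+#\s*\S", line)
        for line in yaml_text.splitlines()
    )
    return "field_table" if has_inline_comments else "set_instructions"


print(choose_pattern("record:\n  retain:\n    days: 3  # days to keep"))  # field_table
print(choose_pattern("record:\n  enabled: true"))  # set_instructions
```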

### What gets skipped

- YAML blocks already inside `<ConfigTabs>` (for `--inject`)
- YAML blocks whose top-level key is not a known config section (e.g., `go2rtc`, `docker-compose`, `scrape_configs`)
- Fields listed in `hiddenFields` in the section configs (e.g., `enabled_in_config`)

### File structure

```
docs/scripts/
├── generate_ui_tabs.py          # CLI entry point
├── README.md                    # This file
└── lib/
    ├── __init__.py
    ├── schema_loader.py         # Loads JSON schema from Pydantic models
    ├── i18n_loader.py           # Loads i18n translation JSON files
    ├── section_config_parser.py # Parses TS section configs (hiddenFields, etc.)
    ├── yaml_extractor.py        # Extracts YAML blocks and ConfigTabs from markdown
    ├── ui_generator.py          # Generates UI tab markdown content
    └── nav_map.py               # Maps config sections to Settings UI nav paths
```

### Data sources

| Source | Path | What it provides |
|--------|------|------------------|
| Pydantic models | `frigate/config/` | Field names, types, defaults, nesting |
| JSON schema | Generated from Pydantic at runtime | Full schema with `$defs` and `$ref` |
| i18n (global) | `web/public/locales/en/config/global.json` | Field labels for global settings |
| i18n (cameras) | `web/public/locales/en/config/cameras.json` | Field labels for camera settings |
| i18n (menu) | `web/public/locales/en/views/settings.json` | Sidebar menu labels |
| Section configs | `web/src/components/config-form/section-configs/*.ts` | Hidden fields, advanced fields, field order |
| Navigation map | Hardcoded from `web/src/pages/Settings.tsx` | Config section to UI path mapping |
660
docs/scripts/generate_ui_tabs.py
Normal file
@@ -0,0 +1,660 @@
#!/usr/bin/env python3
"""Generate Frigate UI tab content for documentation files.

This script reads YAML code blocks from documentation markdown files and
generates corresponding "Frigate UI" tab instructions based on:
- JSON Schema (from Pydantic config models)
- i18n translation files (for UI field labels)
- Section configs (for hidden/advanced field info)
- Navigation mappings (for Settings UI paths)

Usage:
    # Preview generated UI tabs for a single file
    python docs/scripts/generate_ui_tabs.py docs/docs/configuration/record.md

    # Preview all config docs
    python docs/scripts/generate_ui_tabs.py docs/docs/configuration/

    # Inject UI tabs into files (wraps bare YAML blocks with ConfigTabs)
    python docs/scripts/generate_ui_tabs.py --inject docs/docs/configuration/record.md

    # Regenerate existing UI tabs from current schema/i18n
    python docs/scripts/generate_ui_tabs.py --regenerate docs/docs/configuration/

    # Check for drift between existing UI tabs and what would be generated
    python docs/scripts/generate_ui_tabs.py --check docs/docs/configuration/

    # Write generated files to a temp directory for comparison (originals unchanged)
    python docs/scripts/generate_ui_tabs.py --inject --outdir /tmp/generated docs/docs/configuration/

    # Show detailed warnings and diagnostics
    python docs/scripts/generate_ui_tabs.py --verbose docs/docs/configuration/
"""

import argparse
import difflib
import shutil
import sys
import tempfile
from pathlib import Path

# Ensure frigate package is importable
sys.path.insert(0, str(Path(__file__).resolve().parents[1].parent))

from lib.i18n_loader import load_i18n
from lib.nav_map import ALL_CONFIG_SECTIONS
from lib.schema_loader import load_schema
from lib.section_config_parser import load_section_configs
from lib.ui_generator import generate_ui_content, wrap_with_config_tabs
from lib.yaml_extractor import (
    extract_config_tabs_blocks,
    extract_yaml_blocks,
)


def process_file(
    filepath: Path,
    schema: dict,
    i18n: dict,
    section_configs: dict,
    inject: bool = False,
    verbose: bool = False,
    outpath: Path | None = None,
) -> dict:
    """Process a single markdown file for initial injection of bare YAML blocks.

    Args:
        outpath: If set, write the result here instead of modifying filepath.

    Returns:
        Stats dict with counts of blocks found, generated, skipped, etc.
    """
    content = filepath.read_text()
    blocks = extract_yaml_blocks(content)

    stats = {
        "file": str(filepath),
        "total_blocks": len(blocks),
        "config_blocks": 0,
        "already_wrapped": 0,
        "generated": 0,
        "skipped": 0,
        "warnings": [],
    }

    if not blocks:
        return stats

    # For injection, we need to track replacements
    replacements: list[tuple[int, int, str]] = []

    for block in blocks:
        # Skip non-config YAML blocks
        if block.section_key is None or (
            block.section_key not in ALL_CONFIG_SECTIONS
            and not block.is_camera_level
        ):
            stats["skipped"] += 1
            if verbose and block.config_keys:
                stats["warnings"].append(
                    f"  Line {block.line_start}: Skipped block with keys "
                    f"{block.config_keys} (not a known config section)"
                )
            continue

        stats["config_blocks"] += 1

        # Skip already-wrapped blocks
        if block.inside_config_tabs:
            stats["already_wrapped"] += 1
            if verbose:
                stats["warnings"].append(
                    f"  Line {block.line_start}: Already inside ConfigTabs, skipping"
                )
            continue

        # Generate UI content
        ui_content = generate_ui_content(
            block, schema, i18n, section_configs
        )

        if ui_content is None:
            stats["skipped"] += 1
            if verbose:
                stats["warnings"].append(
                    f"  Line {block.line_start}: Could not generate UI content "
                    f"for section '{block.section_key}'"
                )
            continue

        stats["generated"] += 1

        if inject:
            full_block = wrap_with_config_tabs(
                ui_content, block.raw, block.highlight
            )
            replacements.append((block.line_start, block.line_end, full_block))
        else:
            # Preview mode: print to stdout
            print(f"\n{'='*60}")
            print(f"File: {filepath}")
            print(f"Line {block.line_start}: section={block.section_key}, "
                  f"camera={block.is_camera_level}")
            print(f"{'='*60}")
            print()
            print("--- Generated UI tab ---")
            print(ui_content)
            print()
            print("--- Would produce ---")
            print(wrap_with_config_tabs(ui_content, block.raw, block.highlight))
            print()

    # Apply injections in reverse order (to preserve line numbers)
    if inject and replacements:
        lines = content.split("\n")
        for start, end, replacement in reversed(replacements):
            # start/end are 1-based line numbers.
            # The YAML block spans from the opening ``` line before start
            # to the closing ``` line at end; replace that whole range.
            block_start = start - 2  # 0-based index of ```yaml line
            block_end = end - 1  # 0-based index of closing ``` line

            replacement_lines = replacement.split("\n")
            lines[block_start : block_end + 1] = replacement_lines

        new_content = "\n".join(lines)

        # Ensure imports are present
        new_content = _ensure_imports(new_content)

        target = outpath or filepath
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(new_content)
        print(f"  Injected {len(replacements)} ConfigTabs block(s) into {target}")
    elif outpath is not None:
        # No changes but outdir requested -- copy original so the output
        # directory contains a complete set of files for diffing.
        outpath.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(filepath, outpath)

    return stats


def regenerate_file(
    filepath: Path,
    schema: dict,
    i18n: dict,
    section_configs: dict,
    dry_run: bool = False,
    verbose: bool = False,
    outpath: Path | None = None,
) -> dict:
    """Regenerate UI tabs in existing ConfigTabs blocks.

    Strips the current UI tab content and regenerates it from the YAML tab
    using the current schema and i18n data.

    Args:
        outpath: If set, write the result here instead of modifying filepath.

    Returns:
        Stats dict
    """
    content = filepath.read_text()
    tab_blocks = extract_config_tabs_blocks(content)

    stats = {
        "file": str(filepath),
        "total_blocks": len(tab_blocks),
        "regenerated": 0,
        "unchanged": 0,
        "skipped": 0,
        "warnings": [],
    }

    if not tab_blocks:
        return stats

    replacements: list[tuple[int, int, str]] = []

    for tab_block in tab_blocks:
        yaml_block = tab_block.yaml_block

        # Skip non-config blocks
        if yaml_block.section_key is None or (
            yaml_block.section_key not in ALL_CONFIG_SECTIONS
            and not yaml_block.is_camera_level
        ):
            stats["skipped"] += 1
            if verbose:
                stats["warnings"].append(
                    f"  Line {tab_block.line_start}: Skipped (not a config section)"
                )
            continue

        # Generate fresh UI content
        new_ui = generate_ui_content(
            yaml_block, schema, i18n, section_configs
        )

        if new_ui is None:
            stats["skipped"] += 1
            if verbose:
                stats["warnings"].append(
                    f"  Line {tab_block.line_start}: Could not regenerate "
                    f"for section '{yaml_block.section_key}'"
                )
            continue

        # Compare with existing
        existing_ui = tab_block.ui_content
        if _normalize_whitespace(new_ui) == _normalize_whitespace(existing_ui):
            stats["unchanged"] += 1
            if verbose:
                stats["warnings"].append(
                    f"  Line {tab_block.line_start}: Unchanged"
                )
            continue

        stats["regenerated"] += 1

        new_full = wrap_with_config_tabs(
            new_ui, yaml_block.raw, yaml_block.highlight
        )
        replacements.append(
            (tab_block.line_start, tab_block.line_end, new_full)
        )

        if dry_run or verbose:
            print(f"\n{'='*60}")
            print(f"File: {filepath}, line {tab_block.line_start}")
            print(f"Section: {yaml_block.section_key}")
            print(f"{'='*60}")
            _print_diff(existing_ui, new_ui, filepath, tab_block.line_start)

    # Apply replacements
    if not dry_run and replacements:
        lines = content.split("\n")
        for start, end, replacement in reversed(replacements):
            block_start = start - 1  # 0-based index of <ConfigTabs> line
            block_end = end - 1  # 0-based index of </ConfigTabs> line
            replacement_lines = replacement.split("\n")
            lines[block_start : block_end + 1] = replacement_lines

        new_content = "\n".join(lines)
        target = outpath or filepath
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(new_content)
        print(
            f"  Regenerated {len(replacements)} ConfigTabs block(s) in {target}",
            file=sys.stderr,
        )
    elif outpath is not None:
        outpath.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(filepath, outpath)

    return stats


def check_file(
    filepath: Path,
    schema: dict,
    i18n: dict,
    section_configs: dict,
    verbose: bool = False,
) -> dict:
    """Check for drift between existing UI tabs and what would be generated.

    Returns:
        Stats dict with drift info. Non-zero "drifted" means the file is stale.
    """
    content = filepath.read_text()
    tab_blocks = extract_config_tabs_blocks(content)

    stats = {
        "file": str(filepath),
        "total_blocks": len(tab_blocks),
        "up_to_date": 0,
        "drifted": 0,
        "skipped": 0,
        "warnings": [],
    }

    if not tab_blocks:
        return stats

    for tab_block in tab_blocks:
        yaml_block = tab_block.yaml_block

        if yaml_block.section_key is None or (
            yaml_block.section_key not in ALL_CONFIG_SECTIONS
            and not yaml_block.is_camera_level
        ):
            stats["skipped"] += 1
            continue

        new_ui = generate_ui_content(
            yaml_block, schema, i18n, section_configs
        )

        if new_ui is None:
            stats["skipped"] += 1
            continue

        existing_ui = tab_block.ui_content
        if _normalize_whitespace(new_ui) == _normalize_whitespace(existing_ui):
            stats["up_to_date"] += 1
        else:
            stats["drifted"] += 1
            print(f"\n{'='*60}")
            print(f"DRIFT: {filepath}, line {tab_block.line_start}")
            print(f"Section: {yaml_block.section_key}")
            print(f"{'='*60}")
            _print_diff(existing_ui, new_ui, filepath, tab_block.line_start)

    return stats


def _normalize_whitespace(text: str) -> str:
    """Normalize whitespace for comparison (strip lines, collapse blanks)."""
    lines = [line.rstrip() for line in text.strip().splitlines()]
    # Collapse multiple blank lines into one
    result: list[str] = []
    prev_blank = False
    for line in lines:
        if line == "":
            if not prev_blank:
                result.append(line)
            prev_blank = True
        else:
            result.append(line)
            prev_blank = False
    return "\n".join(result)


def _print_diff(existing: str, generated: str, filepath: Path, line: int):
    """Print a unified diff between existing and generated UI content."""
    existing_lines = existing.strip().splitlines(keepends=True)
    generated_lines = generated.strip().splitlines(keepends=True)

    diff = difflib.unified_diff(
        existing_lines,
        generated_lines,
        fromfile=f"{filepath}:{line} (existing)",
        tofile=f"{filepath}:{line} (generated)",
        lineterm="",
    )
    diff_text = "\n".join(diff)
    if diff_text:
        print(diff_text)
    else:
        print("  (whitespace-only difference)")


def _ensure_imports(content: str) -> str:
    """Ensure ConfigTabs/TabItem/NavPath imports are present in the file."""
    lines = content.split("\n")

    needed_imports = []
    if "<ConfigTabs>" in content and "import ConfigTabs" not in content:
        needed_imports.append(
            'import ConfigTabs from "@site/src/components/ConfigTabs";'
        )
    if "<TabItem" in content and "import TabItem" not in content:
        needed_imports.append('import TabItem from "@theme/TabItem";')
    if "<NavPath" in content and "import NavPath" not in content:
        needed_imports.append(
            'import NavPath from "@site/src/components/NavPath";'
        )

    if not needed_imports:
        return content

    # Insert imports after frontmatter (---)
    insert_idx = 0
    frontmatter_count = 0
    for i, line in enumerate(lines):
        if line.strip() == "---":
            frontmatter_count += 1
            if frontmatter_count == 2:
                insert_idx = i + 1
                break

    # Add blank line before imports if needed
    import_block = [""] + needed_imports + [""]
    lines[insert_idx:insert_idx] = import_block

    return "\n".join(lines)


def main():
    parser = argparse.ArgumentParser(
        description="Generate Frigate UI tab content for documentation files"
    )
    parser.add_argument(
        "paths",
        nargs="+",
        type=Path,
        help="Markdown file(s) or directory to process",
    )

    mode_group = parser.add_mutually_exclusive_group()
    mode_group.add_argument(
        "--inject",
        action="store_true",
        help="Inject generated content into files (wraps bare YAML blocks)",
    )
    mode_group.add_argument(
        "--regenerate",
        action="store_true",
        help="Regenerate UI tabs in existing ConfigTabs from current schema/i18n",
    )
    mode_group.add_argument(
        "--check",
        action="store_true",
        help="Check for drift between existing UI tabs and current schema/i18n (exit 1 if drifted)",
    )

    parser.add_argument(
        "--outdir",
        type=Path,
        default=None,
        help="Write output files to this directory instead of modifying originals. "
        "Mirrors the source directory structure. Use with --inject or --regenerate.",
    )
    parser.add_argument(
        "--dry-run",
        action="store_true",
        help="With --regenerate, show diffs but don't write files",
    )
    parser.add_argument(
        "--verbose", "-v",
        action="store_true",
        help="Show detailed warnings and diagnostics",
    )
    args = parser.parse_args()

    # Collect files and determine base directory for relative path computation
    files: list[Path] = []
    base_dirs: list[Path] = []
    for p in args.paths:
        if p.is_dir():
            files.extend(sorted(p.glob("**/*.md")))
            base_dirs.append(p.resolve())
        elif p.is_file():
            files.append(p)
            base_dirs.append(p.resolve().parent)
        else:
            print(f"Warning: {p} not found, skipping", file=sys.stderr)

    if not files:
        print("No markdown files found", file=sys.stderr)
        sys.exit(1)

    # Use the first input path's directory as the base for relative paths
    base_dir = base_dirs[0] if base_dirs else Path.cwd()

    # Resolve outdir: create a temp directory if --outdir is given as "auto"
    outdir: Path | None = args.outdir
    created_tmpdir = False
    if outdir is not None:
        if str(outdir) == "auto":
            outdir = Path(tempfile.mkdtemp(prefix="frigate-ui-tabs-"))
            created_tmpdir = True
        outdir.mkdir(parents=True, exist_ok=True)

    # Build file->outpath mapping
    file_outpaths: dict[Path, Path | None] = {}
    for f in files:
        if outdir is not None:
            try:
                rel = f.resolve().relative_to(base_dir)
            except ValueError:
                rel = Path(f.name)
            file_outpaths[f] = outdir / rel
        else:
            file_outpaths[f] = None

    # Load data sources
    print("Loading schema from Pydantic models...", file=sys.stderr)
    schema = load_schema()
    print("Loading i18n translations...", file=sys.stderr)
    i18n = load_i18n()
    print("Loading section configs...", file=sys.stderr)
    section_configs = load_section_configs()
    print(f"Processing {len(files)} file(s)...\n", file=sys.stderr)

    if args.check:
        _run_check(files, schema, i18n, section_configs, args.verbose)
    elif args.regenerate:
        _run_regenerate(
            files, schema, i18n, section_configs,
            args.dry_run, args.verbose, file_outpaths,
        )
    else:
        _run_inject(
            files, schema, i18n, section_configs,
            args.inject, args.verbose, file_outpaths,
        )

    if outdir is not None:
        print(f"\nOutput written to: {outdir}", file=sys.stderr)


def _run_inject(files, schema, i18n, section_configs, inject, verbose, file_outpaths):
    """Run default mode: preview or inject bare YAML blocks."""
    total_stats = {
        "files": 0,
        "total_blocks": 0,
        "config_blocks": 0,
        "already_wrapped": 0,
        "generated": 0,
        "skipped": 0,
    }

    for filepath in files:
        stats = process_file(
            filepath, schema, i18n, section_configs,
            inject=inject, verbose=verbose,
            outpath=file_outpaths.get(filepath),
        )

        total_stats["files"] += 1
        for key in ["total_blocks", "config_blocks", "already_wrapped",
                    "generated", "skipped"]:
            total_stats[key] += stats[key]

        if verbose and stats["warnings"]:
            print(f"\n{filepath}:", file=sys.stderr)
            for w in stats["warnings"]:
                print(w, file=sys.stderr)

    print("\n" + "=" * 60, file=sys.stderr)
    print("Summary:", file=sys.stderr)
    print(f"  Files processed: {total_stats['files']}", file=sys.stderr)
    print(f"  Total YAML blocks: {total_stats['total_blocks']}", file=sys.stderr)
    print(f"  Config blocks: {total_stats['config_blocks']}", file=sys.stderr)
    print(f"  Already wrapped: {total_stats['already_wrapped']}", file=sys.stderr)
    print(f"  Generated: {total_stats['generated']}", file=sys.stderr)
    print(f"  Skipped: {total_stats['skipped']}", file=sys.stderr)
    print("=" * 60, file=sys.stderr)


def _run_regenerate(files, schema, i18n, section_configs, dry_run, verbose, file_outpaths):
    """Run regenerate mode: update existing ConfigTabs blocks."""
    total_stats = {
        "files": 0,
        "total_blocks": 0,
        "regenerated": 0,
        "unchanged": 0,
        "skipped": 0,
    }

    for filepath in files:
        stats = regenerate_file(
            filepath, schema, i18n, section_configs,
            dry_run=dry_run, verbose=verbose,
            outpath=file_outpaths.get(filepath),
        )

        total_stats["files"] += 1
        for key in ["total_blocks", "regenerated", "unchanged", "skipped"]:
            total_stats[key] += stats[key]

        if verbose and stats["warnings"]:
            print(f"\n{filepath}:", file=sys.stderr)
            for w in stats["warnings"]:
                print(w, file=sys.stderr)

    action = "Would regenerate" if dry_run else "Regenerated"
    print("\n" + "=" * 60, file=sys.stderr)
    print("Summary:", file=sys.stderr)
    print(f"  Files processed: {total_stats['files']}", file=sys.stderr)
    print(f"  ConfigTabs blocks: {total_stats['total_blocks']}", file=sys.stderr)
    print(f"  {action}: {total_stats['regenerated']}", file=sys.stderr)
    print(f"  Unchanged: {total_stats['unchanged']}", file=sys.stderr)
    print(f"  Skipped: {total_stats['skipped']}", file=sys.stderr)
    print("=" * 60, file=sys.stderr)


def _run_check(files, schema, i18n, section_configs, verbose):
    """Run check mode: detect drift without modifying files."""
    total_stats = {
        "files": 0,
        "total_blocks": 0,
        "up_to_date": 0,
        "drifted": 0,
        "skipped": 0,
    }

    for filepath in files:
        stats = check_file(
            filepath, schema, i18n, section_configs, verbose=verbose,
        )

        total_stats["files"] += 1
        for key in ["total_blocks", "up_to_date", "drifted", "skipped"]:
            total_stats[key] += stats[key]

    print("\n" + "=" * 60, file=sys.stderr)
    print("Summary:", file=sys.stderr)
    print(f"  Files processed: {total_stats['files']}", file=sys.stderr)
    print(f"  ConfigTabs blocks: {total_stats['total_blocks']}", file=sys.stderr)
    print(f"  Up to date: {total_stats['up_to_date']}", file=sys.stderr)
    print(f"  Drifted: {total_stats['drifted']}", file=sys.stderr)
    print(f"  Skipped: {total_stats['skipped']}", file=sys.stderr)
    print("=" * 60, file=sys.stderr)

    if total_stats["drifted"] > 0:
        print(
            f"\n{total_stats['drifted']} block(s) have drifted from schema/i18n. "
            "Run with --regenerate to update.",
            file=sys.stderr,
        )
        sys.exit(1)
    else:
        print("\nAll UI tabs are up to date.", file=sys.stderr)


if __name__ == "__main__":
    main()
0
docs/scripts/lib/__init__.py
Normal file
139
docs/scripts/lib/i18n_loader.py
Normal file
@@ -0,0 +1,139 @@
|
||||
"""Load i18n translation files for Settings UI field labels."""

import json
from pathlib import Path
from typing import Any

# Base path for locale files
WEB_LOCALES = Path(__file__).resolve().parents[3] / "web" / "public" / "locales" / "en"


def load_i18n() -> dict[str, Any]:
    """Load and merge all relevant i18n files.

    Returns:
        Dict with keys: "global", "cameras", "settings_menu", "value_labels"
    """
    global_path = WEB_LOCALES / "config" / "global.json"
    cameras_path = WEB_LOCALES / "config" / "cameras.json"
    settings_path = WEB_LOCALES / "views" / "settings.json"

    result: dict[str, Any] = {}

    with open(global_path) as f:
        result["global"] = json.load(f)

    with open(cameras_path) as f:
        result["cameras"] = json.load(f)

    with open(settings_path) as f:
        settings = json.load(f)
    result["settings_menu"] = settings.get("menu", {})

    # Build a unified enum value → label lookup from all known sources.
    # Merges multiple maps so callers don't need to know which file
    # a particular enum lives in.
    value_labels: dict[str, str] = {}

    config_form = settings.get("configForm", {})

    # FFmpeg preset labels (preset-vaapi → "VAAPI (Intel/AMD GPU)")
    value_labels.update(
        config_form.get("ffmpegArgs", {}).get("presetLabels", {})
    )

    # Timestamp position (tl → "Top left")
    value_labels.update(settings.get("timestampPosition", {}))

    # Input role options (detect → "Detect")
    value_labels.update(
        config_form.get("inputRoles", {}).get("options", {})
    )

    # GenAI role options (vision → "Vision")
    value_labels.update(
        config_form.get("genaiRoles", {}).get("options", {})
    )

    result["value_labels"] = value_labels

    return result


def get_field_label(
    i18n: dict[str, Any],
    section_key: str,
    field_path: list[str],
    level: str = "global",
) -> str | None:
    """Look up the UI label for a field.

    Args:
        i18n: Loaded i18n data from load_i18n()
        section_key: Config section (e.g., "record")
        field_path: Path within section (e.g., ["continuous", "days"])
        level: "global" or "cameras"

    Returns:
        The label string, or None if not found.
    """
    source = i18n.get(level, {})
    node = source.get(section_key, {})

    for key in field_path:
        if not isinstance(node, dict):
            return None
        node = node.get(key, {})

    if isinstance(node, dict):
        return node.get("label")
    return None


def get_field_description(
    i18n: dict[str, Any],
    section_key: str,
    field_path: list[str],
    level: str = "global",
) -> str | None:
    """Look up the UI description for a field."""
    source = i18n.get(level, {})
    node = source.get(section_key, {})

    for key in field_path:
        if not isinstance(node, dict):
            return None
        node = node.get(key, {})

    if isinstance(node, dict):
        return node.get("description")
    return None


def get_value_label(
    i18n: dict[str, Any],
    value: str,
) -> str | None:
    """Look up the display label for an enum/option value.

    Args:
        i18n: Loaded i18n data from load_i18n()
        value: The raw config value (e.g., "preset-vaapi", "tl")

    Returns:
        The human-readable label (e.g., "VAAPI (Intel/AMD GPU)"), or None.
    """
    return i18n.get("value_labels", {}).get(value)


def get_section_label(
    i18n: dict[str, Any],
    section_key: str,
    level: str = "global",
) -> str | None:
    """Get the top-level label for a config section."""
    source = i18n.get(level, {})
    section = source.get(section_key, {})
    if isinstance(section, dict):
        return section.get("label")
    return None
120   docs/scripts/lib/nav_map.py   Normal file
@@ -0,0 +1,120 @@
"""Map config section keys to Settings UI navigation paths."""

# Derived from web/src/pages/Settings.tsx section mappings
# and web/public/locales/en/views/settings.json menu labels.
#
# Format: section_key -> (group_label, page_label)
# Navigation path: "Settings > {group_label} > {page_label}"

GLOBAL_NAV: dict[str, tuple[str, str]] = {
    "detect": ("Global configuration", "Object detection"),
    "ffmpeg": ("Global configuration", "FFmpeg"),
    "record": ("Global configuration", "Recording"),
    "snapshots": ("Global configuration", "Snapshots"),
    "motion": ("Global configuration", "Motion detection"),
    "objects": ("Global configuration", "Objects"),
    "review": ("Global configuration", "Review"),
    "audio": ("Global configuration", "Audio events"),
    "live": ("Global configuration", "Live playback"),
    "timestamp_style": ("Global configuration", "Timestamp style"),
    "notifications": ("Notifications", "Notifications"),
}

CAMERA_NAV: dict[str, tuple[str, str]] = {
    "detect": ("Camera configuration", "Object detection"),
    "ffmpeg": ("Camera configuration", "FFmpeg"),
    "record": ("Camera configuration", "Recording"),
    "snapshots": ("Camera configuration", "Snapshots"),
    "motion": ("Camera configuration", "Motion detection"),
    "objects": ("Camera configuration", "Objects"),
    "review": ("Camera configuration", "Review"),
    "audio": ("Camera configuration", "Audio events"),
    "audio_transcription": ("Camera configuration", "Audio transcription"),
    "notifications": ("Camera configuration", "Notifications"),
    "live": ("Camera configuration", "Live playback"),
    "birdseye": ("Camera configuration", "Birdseye"),
    "face_recognition": ("Camera configuration", "Face recognition"),
    "lpr": ("Camera configuration", "License plate recognition"),
    "mqtt": ("Camera configuration", "MQTT"),
    "onvif": ("Camera configuration", "ONVIF"),
    "ui": ("Camera configuration", "Camera UI"),
    "timestamp_style": ("Camera configuration", "Timestamp style"),
}

ENRICHMENT_NAV: dict[str, tuple[str, str]] = {
    "semantic_search": ("Enrichments", "Semantic search"),
    "genai": ("Enrichments", "Generative AI"),
    "face_recognition": ("Enrichments", "Face recognition"),
    "lpr": ("Enrichments", "License plate recognition"),
    "classification": ("Enrichments", "Object classification"),
    "audio_transcription": ("Enrichments", "Audio transcription"),
}

SYSTEM_NAV: dict[str, tuple[str, str]] = {
    "go2rtc_streams": ("System", "go2rtc streams"),
    "database": ("System", "Database"),
    "mqtt": ("System", "MQTT"),
    "tls": ("System", "TLS"),
    "auth": ("System", "Authentication"),
    "networking": ("System", "Networking"),
    "proxy": ("System", "Proxy"),
    "ui": ("System", "UI"),
    "logger": ("System", "Logging"),
    "environment_vars": ("System", "Environment variables"),
    "telemetry": ("System", "Telemetry"),
    "birdseye": ("System", "Birdseye"),
    "detectors": ("System", "Detector hardware"),
    "model": ("System", "Detection model"),
}

# All known top-level config section keys
ALL_CONFIG_SECTIONS = (
    set(GLOBAL_NAV)
    | set(CAMERA_NAV)
    | set(ENRICHMENT_NAV)
    | set(SYSTEM_NAV)
    | {"cameras"}
)


def get_nav_path(section_key: str, level: str = "global") -> str | None:
    """Get the full navigation path for a config section.

    Args:
        section_key: Config section key (e.g., "record")
        level: "global", "camera", "enrichment", or "system"

    Returns:
        NavPath string like "Settings > Global configuration > Recording",
        or None if not found.
    """
    nav_tables = {
        "global": GLOBAL_NAV,
        "camera": CAMERA_NAV,
        "enrichment": ENRICHMENT_NAV,
        "system": SYSTEM_NAV,
    }

    table = nav_tables.get(level)
    if table is None:
        return None

    entry = table.get(section_key)
    if entry is None:
        return None

    group, page = entry
    return f"Settings > {group} > {page}"


def detect_level(section_key: str) -> str:
    """Detect whether a config section is global, camera, enrichment, or system."""
    if section_key in SYSTEM_NAV:
        return "system"
    if section_key in ENRICHMENT_NAV:
        return "enrichment"
    if section_key in GLOBAL_NAV:
        return "global"
    if section_key in CAMERA_NAV:
        return "camera"
    return "global"
88   docs/scripts/lib/schema_loader.py   Normal file
@@ -0,0 +1,88 @@
"""Load JSON schema from Frigate's Pydantic config models."""

from typing import Any


def load_schema() -> dict[str, Any]:
    """Generate and return the full JSON schema for FrigateConfig."""
    from frigate.config.config import FrigateConfig
    from frigate.util.schema import get_config_schema

    return get_config_schema(FrigateConfig)


def resolve_ref(schema: dict[str, Any], ref: str) -> dict[str, Any]:
    """Resolve a $ref pointer within the schema."""
    # ref format: "#/$defs/RecordConfig"
    parts = ref.lstrip("#/").split("/")
    node = schema
    for part in parts:
        node = node[part]
    return node


def resolve_schema_node(
    schema: dict[str, Any], node: dict[str, Any]
) -> dict[str, Any]:
    """Resolve a schema node, following $ref and allOf if present."""
    if "$ref" in node:
        node = resolve_ref(schema, node["$ref"])
    if "allOf" in node:
        merged: dict[str, Any] = {}
        for item in node["allOf"]:
            resolved = resolve_schema_node(schema, item)
            merged.update(resolved)
        return merged
    return node


def get_section_schema(
    schema: dict[str, Any], section_key: str
) -> dict[str, Any] | None:
    """Get the resolved schema for a top-level config section."""
    props = schema.get("properties", {})
    if section_key not in props:
        return None
    return resolve_schema_node(schema, props[section_key])


def get_field_info(
    schema: dict[str, Any], section_key: str, field_path: list[str]
) -> dict[str, Any] | None:
    """Get schema info for a specific field path within a section.

    Args:
        schema: Full JSON schema
        section_key: Top-level section (e.g., "record")
        field_path: List of nested keys (e.g., ["continuous", "days"])

    Returns:
        Resolved schema node for the field, or None if not found.
    """
    section = get_section_schema(schema, section_key)
    if section is None:
        return None

    node = section
    for key in field_path:
        props = node.get("properties", {})
        if key not in props:
            return None
        node = resolve_schema_node(schema, props[key])

    return node


def is_boolean_field(field_schema: dict[str, Any]) -> bool:
    """Check if a schema node represents a boolean field."""
    return field_schema.get("type") == "boolean"


def is_enum_field(field_schema: dict[str, Any]) -> bool:
    """Check if a schema node is an enum."""
    return "enum" in field_schema


def is_object_field(field_schema: dict[str, Any]) -> bool:
    """Check if a schema node is an object with properties."""
    return field_schema.get("type") == "object" or "properties" in field_schema
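`resolve_ref` follows a local JSON-pointer-style `$ref` segment by segment. A self-contained sketch against a toy schema (the `RecordConfig` shape here is illustrative, not Frigate's full Pydantic output):

```python
from typing import Any

# Toy JSON-schema fragment in the shape Pydantic emits for nested models
toy_schema: dict[str, Any] = {
    "$defs": {
        "RecordConfig": {
            "type": "object",
            "properties": {"enabled": {"type": "boolean"}},
        }
    },
    "properties": {"record": {"$ref": "#/$defs/RecordConfig"}},
}


def resolve(schema: dict[str, Any], ref: str) -> dict[str, Any]:
    """Follow a local "#/..." pointer one path segment at a time."""
    node: Any = schema
    # lstrip("#/") strips the leading "#/" characters, leaving "$defs/RecordConfig"
    for part in ref.lstrip("#/").split("/"):
        node = node[part]
    return node


resolved = resolve(toy_schema, toy_schema["properties"]["record"]["$ref"])
enabled_type = resolved["properties"]["enabled"]["type"]
```

Note `lstrip` strips a character set rather than a prefix, which is fine for `#/$defs/...` pointers but would misbehave on a ref that legitimately started with those characters; `removeprefix("#/")` would be the stricter choice.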
130   docs/scripts/lib/section_config_parser.py   Normal file
@@ -0,0 +1,130 @@
"""Parse TypeScript section config files for hidden/advanced field info."""

import re
from pathlib import Path
from typing import Any

SECTION_CONFIGS_DIR = (
    Path(__file__).resolve().parents[3]
    / "web"
    / "src"
    / "components"
    / "config-form"
    / "section-configs"
)


def _extract_string_array(text: str, field_name: str) -> list[str]:
    """Extract a string array value from TypeScript object literal text."""
    pattern = rf"{field_name}\s*:\s*\[(.*?)\]"
    match = re.search(pattern, text, re.DOTALL)
    if not match:
        return []
    content = match.group(1)
    return re.findall(r'"([^"]*)"', content)


def _parse_section_file(filepath: Path) -> dict[str, Any]:
    """Parse a single section config .ts file."""
    text = filepath.read_text()

    # Extract base block
    base_match = re.search(r"base\s*:\s*\{(.*?)\n \}", text, re.DOTALL)
    base_text = base_match.group(1) if base_match else ""

    # Extract global block
    global_match = re.search(r"global\s*:\s*\{(.*?)\n \}", text, re.DOTALL)
    global_text = global_match.group(1) if global_match else ""

    # Extract camera block
    camera_match = re.search(r"camera\s*:\s*\{(.*?)\n \}", text, re.DOTALL)
    camera_text = camera_match.group(1) if camera_match else ""

    result: dict[str, Any] = {
        "fieldOrder": _extract_string_array(base_text, "fieldOrder"),
        "hiddenFields": _extract_string_array(base_text, "hiddenFields"),
        "advancedFields": _extract_string_array(base_text, "advancedFields"),
    }

    # Merge global-level hidden fields
    global_hidden = _extract_string_array(global_text, "hiddenFields")
    if global_hidden:
        result["globalHiddenFields"] = global_hidden

    # Merge camera-level hidden fields
    camera_hidden = _extract_string_array(camera_text, "hiddenFields")
    if camera_hidden:
        result["cameraHiddenFields"] = camera_hidden

    return result


def load_section_configs() -> dict[str, dict[str, Any]]:
    """Load all section configs from TypeScript files.

    Returns:
        Dict mapping section name to parsed config.
    """
    # Read sectionConfigs.ts to get the mapping of section keys to filenames
    registry_path = SECTION_CONFIGS_DIR.parent / "sectionConfigs.ts"
    registry_text = registry_path.read_text()

    configs: dict[str, dict[str, Any]] = {}

    for ts_file in SECTION_CONFIGS_DIR.glob("*.ts"):
        if ts_file.name == "types.ts":
            continue

        section_name = ts_file.stem
        configs[section_name] = _parse_section_file(ts_file)

    # Map section config keys from the registry (handles renames like
    # "timestamp_style: timestampStyle")
    key_map: dict[str, str] = {}
    for match in re.finditer(
        r"(\w+)(?:\s*:\s*\w+)?\s*,", registry_text[registry_text.find("{") :]
    ):
        key = match.group(1)
        key_map[key] = key

    # Handle explicit key mappings like `timestamp_style: timestampStyle`
    for match in re.finditer(r"(\w+)\s*:\s*(\w+)\s*,", registry_text):
        key_map[match.group(1)] = match.group(2)

    return configs


def get_hidden_fields(
    configs: dict[str, dict[str, Any]],
    section_key: str,
    level: str = "global",
) -> set[str]:
    """Get the set of hidden fields for a section at a given level.

    Args:
        configs: Loaded section configs
        section_key: Config section name (e.g., "record")
        level: "global" or "camera"

    Returns:
        Set of hidden field paths (e.g., {"enabled_in_config", "sync_recordings"})
    """
    config = configs.get(section_key, {})
    hidden = set(config.get("hiddenFields", []))

    if level == "global":
        hidden.update(config.get("globalHiddenFields", []))
    elif level == "camera":
        hidden.update(config.get("cameraHiddenFields", []))

    return hidden


def get_advanced_fields(
    configs: dict[str, dict[str, Any]],
    section_key: str,
) -> set[str]:
    """Get the set of advanced fields for a section."""
    config = configs.get(section_key, {})
    return set(config.get("advancedFields", []))
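Rather than running a TypeScript parser, the module above scrapes `field: [...]` string arrays out of the `.ts` source with two regexes. The same approach on a hypothetical snippet:

```python
import re

# Illustrative snippet in the shape of a section-config .ts file
ts_text = 'base: { hiddenFields: ["enabled_in_config", "sync_recordings"] }'


def extract_string_array(text: str, field_name: str) -> list[str]:
    """Pull the quoted strings out of `field_name: [...]` in TS source."""
    match = re.search(rf"{field_name}\s*:\s*\[(.*?)\]", text, re.DOTALL)
    if not match:
        return []
    # First capture the bracketed contents, then pick out each quoted string
    return re.findall(r'"([^"]*)"', match.group(1))


hidden = extract_string_array(ts_text, "hiddenFields")
missing = extract_string_array(ts_text, "advancedFields")
```

Regex scraping is fragile against formatting changes in the `.ts` files, but it keeps the docs tooling in pure Python with no Node dependency.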
283   docs/scripts/lib/ui_generator.py   Normal file
@@ -0,0 +1,283 @@
"""Generate UI tab markdown content from parsed YAML blocks."""

from typing import Any

from .i18n_loader import get_field_description, get_field_label, get_value_label
from .nav_map import ALL_CONFIG_SECTIONS, detect_level, get_nav_path
from .schema_loader import get_field_info, is_boolean_field, is_object_field
from .section_config_parser import get_hidden_fields
from .yaml_extractor import YamlBlock, get_leaf_paths


def _format_value(
    value: object,
    field_schema: dict[str, Any] | None,
    i18n: dict[str, Any] | None = None,
) -> str:
    """Format a YAML value for UI display.

    Looks up i18n labels for enum/option values when available.
    """
    if field_schema and is_boolean_field(field_schema):
        return "on" if value else "off"
    if isinstance(value, bool):
        return "on" if value else "off"
    if isinstance(value, list):
        if len(value) == 0:
            return "an empty list"
        items = []
        for v in value:
            label = get_value_label(i18n, str(v)) if i18n else None
            items.append(f"`{label}`" if label else f"`{v}`")
        return ", ".join(items)
    if value is None:
        return "empty"

    # Try i18n label for the raw value (enum translations)
    if i18n and isinstance(value, str):
        label = get_value_label(i18n, value)
        if label:
            return f"`{label}`"

    return f"`{value}`"


def _build_field_label(
    i18n: dict[str, Any],
    section_key: str,
    field_path: list[str],
    level: str,
) -> str:
    """Build the display label for a field using i18n labels.

    For a path like ["continuous", "days"], produces
    "Continuous retention > Retention days" using the actual i18n labels.
    """
    parts: list[str] = []

    for depth in range(len(field_path)):
        sub_path = field_path[: depth + 1]
        label = get_field_label(i18n, section_key, sub_path, level)

        if label:
            parts.append(label)
        else:
            # Fallback to title-cased field name
            parts.append(field_path[depth].replace("_", " ").title())

    return " > ".join(parts)


def _is_hidden(
    field_key: str,
    full_path: list[str],
    hidden_fields: set[str],
) -> bool:
    """Check if a field should be hidden from UI output."""
    # Check exact match
    if field_key in hidden_fields:
        return True

    # Check dotted path match (e.g., "alerts.enabled_in_config")
    dotted = ".".join(str(p) for p in full_path)
    if dotted in hidden_fields:
        return True

    # Check wildcard patterns (e.g., "filters.*.mask")
    for pattern in hidden_fields:
        if "*" in pattern:
            parts = pattern.split(".")
            if len(parts) == len(full_path):
                match = all(
                    p == "*" or p == fp for p, fp in zip(parts, full_path)
                )
                if match:
                    return True

    return False


def generate_ui_content(
    block: YamlBlock,
    schema: dict[str, Any],
    i18n: dict[str, Any],
    section_configs: dict[str, dict[str, Any]],
) -> str | None:
    """Generate UI tab markdown content for a YAML block.

    Args:
        block: Parsed YAML block from a doc file
        schema: Full JSON schema
        i18n: Loaded i18n translations
        section_configs: Parsed section config data

    Returns:
        Generated markdown string for the UI tab, or None if the block
        can't be converted (not a config block, etc.)
    """
    if block.section_key is None:
        return None

    # Determine which config data to walk
    if block.is_camera_level:
        # Camera-level: unwrap cameras.{name}.{section}
        cam_data = block.parsed.get("cameras", {})
        cam_name = block.camera_name or next(iter(cam_data), None)
        if not cam_name:
            return None
        inner = cam_data.get(cam_name, {})
        if not isinstance(inner, dict):
            return None
        level = "camera"
    else:
        inner = block.parsed
        # Determine level from section key
        level = detect_level(block.section_key)

    # Collect sections to process (may span multiple top-level keys)
    sections_to_process: list[tuple[str, dict]] = []
    for key in inner:
        if key in ALL_CONFIG_SECTIONS or key == block.section_key:
            val = inner[key]
            if isinstance(val, dict):
                sections_to_process.append((key, val))
            else:
                # Simple scalar at section level (e.g., record.enabled = True)
                sections_to_process.append((key, {key: val}))

    # If inner is the section itself (e.g., parsed = {"record": {...}})
    if not sections_to_process and block.section_key in inner:
        section_data = inner[block.section_key]
        if isinstance(section_data, dict):
            sections_to_process = [(block.section_key, section_data)]

    if not sections_to_process:
        # Try treating the whole inner dict as the section data
        sections_to_process = [(block.section_key, inner)]

    # Choose pattern based on whether YAML has comments (descriptive) or values
    use_table = block.has_comments

    lines: list[str] = []
    step_num = 1

    for section_key, section_data in sections_to_process:
        # Get navigation path
        i18n_level = "cameras" if level == "camera" else "global"
        nav_path = get_nav_path(section_key, level)
        if nav_path is None:
            # Try global as fallback
            nav_path = get_nav_path(section_key, "global")
        if nav_path is None:
            continue

        # Get hidden fields for this section
        hidden = get_hidden_fields(section_configs, section_key, level)

        # Get leaf paths from the YAML data
        leaves = get_leaf_paths(section_data)

        # Filter out hidden fields
        visible_leaves: list[tuple[tuple[str, ...], object]] = []
        for path, value in leaves:
            path_list = list(path)
            if not _is_hidden(path_list[-1], path_list, hidden):
                visible_leaves.append((path, value))

        if not visible_leaves:
            continue

        if use_table:
            # Pattern A: Field table with descriptions
            lines.append(f'Navigate to <NavPath path="{nav_path}" />.')
            lines.append("")
            lines.append("| Field | Description |")
            lines.append("|-------|-------------|")

            for path, _value in visible_leaves:
                path_list = list(path)
                label = _build_field_label(
                    i18n, section_key, path_list, i18n_level
                )
                desc = get_field_description(
                    i18n, section_key, path_list, i18n_level
                )
                if not desc:
                    desc = ""
                lines.append(f"| **{label}** | {desc} |")
        else:
            # Pattern B: Set instructions
            multi_section = len(sections_to_process) > 1

            if multi_section:
                camera_note = ""
                if block.is_camera_level:
                    camera_note = " and select your camera"
                lines.append(
                    f'{step_num}. Navigate to <NavPath path="{nav_path}" />{camera_note}.'
                )
            else:
                if block.is_camera_level:
                    lines.append(
                        f'1. Navigate to <NavPath path="{nav_path}" /> and select your camera.'
                    )
                else:
                    lines.append(f'Navigate to <NavPath path="{nav_path}" />.')
            lines.append("")

            for path, value in visible_leaves:
                path_list = list(path)
                label = _build_field_label(
                    i18n, section_key, path_list, i18n_level
                )
                field_info = get_field_info(schema, section_key, path_list)
                formatted = _format_value(value, field_info, i18n)

                if multi_section or block.is_camera_level:
                    lines.append(f" - Set **{label}** to {formatted}")
                else:
                    lines.append(f"- Set **{label}** to {formatted}")

            step_num += 1

    if not lines:
        return None

    return "\n".join(lines)


def wrap_with_config_tabs(
    ui_content: str, yaml_raw: str, highlight: str | None = None
) -> str:
    """Wrap UI content and YAML in ConfigTabs markup.

    Args:
        ui_content: Generated UI tab markdown
        yaml_raw: Original YAML text
        highlight: Optional highlight spec (e.g., "{3-4}")

    Returns:
        Full ConfigTabs MDX block
    """
    highlight_str = f" {highlight}" if highlight else ""

    return f"""<ConfigTabs>
<TabItem value="ui">

{ui_content}

</TabItem>
<TabItem value="yaml">

```yaml{highlight_str}
{yaml_raw}
```

</TabItem>
</ConfigTabs>"""
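`_is_hidden` matches a field against three pattern kinds: a bare key, a dotted path, and a `*` wildcard path of the same length. A standalone sketch of that matching (the sample patterns are illustrative):

```python
def is_hidden(path: list[str], hidden: set[str]) -> bool:
    """Match a field path against exact keys, dotted paths, and * wildcards."""
    # Bare key or full dotted path
    if path[-1] in hidden or ".".join(path) in hidden:
        return True
    # Wildcard patterns must have the same number of segments as the path
    for pattern in hidden:
        if "*" in pattern:
            parts = pattern.split(".")
            if len(parts) == len(path) and all(
                p == "*" or p == fp for p, fp in zip(parts, path)
            ):
                return True
    return False


hidden = {"enabled_in_config", "filters.*.mask"}
by_key = is_hidden(["alerts", "enabled_in_config"], hidden)       # bare key
by_wildcard = is_hidden(["filters", "person", "mask"], hidden)    # wildcard
visible = is_hidden(["filters", "person", "min_area"], hidden)    # no match
```

Matching on the bare last segment means a key like `enabled_in_config` is hidden everywhere it appears, which is exactly what the section configs use it for.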
283   docs/scripts/lib/yaml_extractor.py   Normal file
@@ -0,0 +1,283 @@
"""Extract YAML code blocks from markdown documentation files."""
|
||||
|
||||
import re
|
||||
from dataclasses import dataclass, field
|
||||
|
||||
import yaml
|
||||
|
||||
|
||||
@dataclass
|
||||
class YamlBlock:
|
||||
"""A YAML code block extracted from a markdown file."""
|
||||
|
||||
raw: str # Original YAML text
|
||||
parsed: dict # Parsed YAML content
|
||||
line_start: int # Line number in the markdown file (1-based)
|
||||
line_end: int # End line number
|
||||
highlight: str | None = None # Highlight spec (e.g., "{3-4}")
|
||||
has_comments: bool = False # Whether the YAML has inline comments
|
||||
inside_config_tabs: bool = False # Already wrapped in ConfigTabs
|
||||
section_key: str | None = None # Detected top-level config section
|
||||
is_camera_level: bool = False # Whether this is camera-level config
|
||||
camera_name: str | None = None # Camera name if camera-level
|
||||
config_keys: list[str] = field(
|
||||
default_factory=list
|
||||
) # Top-level keys in the YAML
|
||||
|
||||
|
||||
def extract_yaml_blocks(content: str) -> list[YamlBlock]:
|
||||
"""Extract all YAML fenced code blocks from markdown content.
|
||||
|
||||
Args:
|
||||
content: Markdown file content
|
||||
|
||||
Returns:
|
||||
List of YamlBlock instances
|
||||
"""
|
||||
blocks: list[YamlBlock] = []
|
||||
lines = content.split("\n")
|
||||
i = 0
|
||||
in_config_tabs = False
|
||||
|
||||
while i < len(lines):
|
||||
line = lines[i]
|
||||
|
||||
# Track ConfigTabs context
|
||||
if "<ConfigTabs>" in line:
|
||||
in_config_tabs = True
|
||||
elif "</ConfigTabs>" in line:
|
||||
in_config_tabs = False
|
||||
|
||||
# Look for YAML fence opening
|
||||
fence_match = re.match(r"^```yaml\s*(\{[^}]*\})?\s*$", line)
|
||||
if fence_match:
|
||||
highlight = fence_match.group(1)
|
||||
start_line = i + 1 # 1-based
|
||||
yaml_lines: list[str] = []
|
||||
i += 1
|
||||
|
||||
# Collect until closing fence
|
||||
while i < len(lines) and not lines[i].startswith("```"):
|
||||
yaml_lines.append(lines[i])
|
||||
i += 1
|
||||
|
||||
end_line = i + 1 # 1-based, inclusive of closing fence
|
||||
raw = "\n".join(yaml_lines)
|
||||
|
||||
# Check for inline comments
|
||||
has_comments = any(
|
||||
re.search(r"#\s*(<-|[A-Za-z])", yl) for yl in yaml_lines
|
||||
)
|
||||
|
||||
# Parse YAML
|
||||
try:
|
||||
parsed = yaml.safe_load(raw)
|
||||
except yaml.YAMLError:
|
||||
i += 1
|
||||
continue
|
||||
|
||||
if not isinstance(parsed, dict):
|
||||
i += 1
|
||||
continue
|
||||
|
||||
# Detect config section and level
|
||||
config_keys = list(parsed.keys())
|
||||
section_key = None
|
||||
is_camera = False
|
||||
camera_name = None
|
||||
|
||||
if "cameras" in parsed and isinstance(parsed["cameras"], dict):
|
||||
is_camera = True
|
||||
cam_entries = parsed["cameras"]
|
||||
if len(cam_entries) == 1:
|
||||
camera_name = list(cam_entries.keys())[0]
|
||||
inner = cam_entries[camera_name]
|
||||
if isinstance(inner, dict):
|
||||
inner_keys = list(inner.keys())
|
||||
if len(inner_keys) >= 1:
|
||||
section_key = inner_keys[0]
|
||||
elif len(config_keys) >= 1:
|
||||
section_key = config_keys[0]
|
||||
|
||||
blocks.append(
|
||||
YamlBlock(
|
||||
raw=raw,
|
||||
parsed=parsed,
|
||||
line_start=start_line,
|
||||
line_end=end_line,
|
||||
highlight=highlight,
|
||||
has_comments=has_comments,
|
||||
inside_config_tabs=in_config_tabs,
|
||||
section_key=section_key,
|
||||
is_camera_level=is_camera,
|
||||
camera_name=camera_name,
|
||||
config_keys=config_keys,
|
||||
)
|
||||
)
|
||||
|
||||
i += 1
|
||||
|
||||
return blocks
|
||||
|
||||
|
||||
@dataclass
|
||||
class ConfigTabsBlock:
|
||||
"""An existing ConfigTabs block in a markdown file."""
|
||||
|
||||
line_start: int # 1-based line of <ConfigTabs>
|
||||
line_end: int # 1-based line of </ConfigTabs>
|
||||
ui_content: str # Content inside the UI TabItem
|
||||
yaml_block: YamlBlock # The YAML block inside the YAML TabItem
|
||||
raw_text: str # Full raw text of the ConfigTabs block
|
||||
|
||||
|
||||
def extract_config_tabs_blocks(content: str) -> list[ConfigTabsBlock]:
|
||||
"""Extract existing ConfigTabs blocks from markdown content.
|
||||
|
||||
Parses the structure:
|
||||
<ConfigTabs>
|
||||
<TabItem value="ui">
|
||||
...ui content...
|
||||
</TabItem>
|
||||
<TabItem value="yaml">
|
||||
```yaml
|
||||
...yaml...
|
||||
```
|
||||
</TabItem>
|
||||
</ConfigTabs>
|
||||
|
||||
Returns:
|
||||
List of ConfigTabsBlock instances
|
||||
"""
|
||||
blocks: list[ConfigTabsBlock] = []
|
||||
lines = content.split("\n")
|
||||
i = 0
|
||||
|
||||
while i < len(lines):
|
||||
if "<ConfigTabs>" not in lines[i]:
|
||||
i += 1
|
||||
continue
|
||||
|
||||
block_start = i # 0-based
|
||||
|
||||
# Find </ConfigTabs>
|
||||
j = i + 1
|
||||
while j < len(lines) and "</ConfigTabs>" not in lines[j]:
|
||||
j += 1
|
||||
|
||||
if j >= len(lines):
|
||||
i += 1
|
||||
continue
|
||||
|
||||
block_end = j # 0-based, line with </ConfigTabs>
|
||||
block_text = "\n".join(lines[block_start : block_end + 1])
|
||||
|
||||
# Extract UI content (between <TabItem value="ui"> and </TabItem>)
|
||||
ui_match = re.search(
|
||||
r'<TabItem\s+value="ui">\s*\n(.*?)\n\s*</TabItem>',
|
||||
block_text,
|
||||
re.DOTALL,
|
||||
)
|
||||
ui_content = ui_match.group(1).strip() if ui_match else ""
|
||||
|
||||
# Extract YAML block from inside the yaml TabItem
|
||||
yaml_tab_match = re.search(
|
||||
r'<TabItem\s+value="yaml">\s*\n(.*?)\n\s*</TabItem>',
|
||||
block_text,
|
||||
re.DOTALL,
|
||||
)
|
||||
|
||||
yaml_block = None
|
||||
if yaml_tab_match:
|
||||
yaml_tab_text = yaml_tab_match.group(1)
|
||||
fence_match = re.search(
|
||||
r"```yaml\s*(\{[^}]*\})?\s*\n(.*?)\n```",
|
||||
yaml_tab_text,
|
||||
re.DOTALL,
|
||||
)
|
||||
if fence_match:
|
||||
highlight = fence_match.group(1)
|
||||
yaml_raw = fence_match.group(2)
|
||||
has_comments = bool(
|
||||
re.search(r"#\s*(<-|[A-Za-z])", yaml_raw)
|
||||
)
|
||||
|
||||
try:
|
||||
parsed = yaml.safe_load(yaml_raw)
|
||||
except yaml.YAMLError:
|
||||
parsed = {}
|
||||
|
||||
if isinstance(parsed, dict):
|
||||
config_keys = list(parsed.keys())
|
||||
section_key = None
|
||||
is_camera = False
|
||||
camera_name = None
|
||||
|
||||
if "cameras" in parsed and isinstance(
|
||||
parsed["cameras"], dict
|
||||
):
|
||||
is_camera = True
|
||||
cam_entries = parsed["cameras"]
|
||||
if len(cam_entries) == 1:
|
||||
camera_name = list(cam_entries.keys())[0]
|
||||
inner = cam_entries[camera_name]
|
||||
if isinstance(inner, dict):
|
||||
inner_keys = list(inner.keys())
|
||||
if len(inner_keys) >= 1:
|
||||
section_key = inner_keys[0]
|
||||
elif len(config_keys) >= 1:
|
||||
section_key = config_keys[0]
|
||||
|
||||
yaml_block = YamlBlock(
|
||||
raw=yaml_raw,
|
||||
parsed=parsed,
|
||||
line_start=block_start + 1,
|
||||
line_end=block_end + 1,
|
||||
highlight=highlight,
|
||||
has_comments=has_comments,
|
||||
inside_config_tabs=True,
|
||||
section_key=section_key,
|
||||
is_camera_level=is_camera,
|
||||
camera_name=camera_name,
|
||||
config_keys=config_keys,
|
||||
)
|
||||
|
||||
if yaml_block:
|
||||
blocks.append(
|
||||
ConfigTabsBlock(
|
||||
line_start=block_start + 1, # 1-based
|
||||
line_end=block_end + 1, # 1-based
|
||||
ui_content=ui_content,
|
||||
yaml_block=yaml_block,
|
||||
raw_text=block_text,
|
||||
)
|
||||
)
|
||||
|
||||
i = j + 1
|
||||
|
||||
return blocks


def get_leaf_paths(
    data: dict, prefix: tuple[str, ...] = ()
) -> list[tuple[tuple[str, ...], object]]:
    """Walk a parsed YAML dict and return all leaf key paths with values.

    Args:
        data: Parsed YAML dict
        prefix: Current key path prefix

    Returns:
        List of (key_path_tuple, value) pairs,
        e.g., [(("record", "continuous", "days"), 3), ...]
    """
    results: list[tuple[tuple[str, ...], object]] = []

    for key, value in data.items():
        path = prefix + (str(key),)
        if isinstance(value, dict):
            results.extend(get_leaf_paths(value, path))
        else:
            results.append((path, value))

    return results
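A minimal, self-contained sketch of how `get_leaf_paths` flattens a parsed YAML dict into `(key_path, value)` pairs. The function is restated here only so the example runs on its own; the sample config dict is hypothetical:

```python
# Standalone sketch of get_leaf_paths for illustration; mirrors the
# recursive walk in the script above.
def get_leaf_paths(data: dict, prefix: tuple = ()) -> list:
    results = []
    for key, value in data.items():
        path = prefix + (str(key),)
        if isinstance(value, dict):
            # Descend into nested mappings, extending the key path.
            results.extend(get_leaf_paths(value, path))
        else:
            # Non-dict value: record the full path to this leaf.
            results.append((path, value))
    return results


# Hypothetical parsed YAML, as yaml.safe_load would produce it.
parsed = {"record": {"continuous": {"days": 3}}, "detect": {"enabled": True}}
print(get_leaf_paths(parsed))
# [(('record', 'continuous', 'days'), 3), (('detect', 'enabled'), True)]
```

Because dicts preserve insertion order, the leaf paths come back in the same order the keys appear in the YAML, which keeps the generated UI instructions stable across runs.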