Deep Memory

Video Processing Pipeline Prompt Template

Use this template when spawning sub-agents to process videos through the Deep Memory pipeline.

Template

````
Process [SOURCE_NAME]'s [VIDEO_DESCRIPTION] through the Deep Memory pipeline.

**Video:** [VIDEO_URL]
**Title:** "[VIDEO_TITLE]"
**Duration:** [DURATION]
**Today's date:** [CURRENT_DATE]

Your mission: Complete Layer 1 & 2 extraction.

LAYER 1: EXTRACTION

1. Get the transcript using the summarize skill:

   ```bash
   summarize [VIDEO_URL] --transcript-only
   ```

   Save the transcript in JSON format as `video-[ITEM_ID]-transcript.json`:

   ```json
   {
     "success": true,
     "videoId": "[YOUTUBE_ID]",
     "method": "summarize",
     "transcript": [
       { "text": "full transcript text here..." }
     ]
   }
   ```

   CRITICAL: Must be .json format with a transcript array, NOT a .txt file.

2. Extract domain-relevant assertions from the FULL transcript:
   - [SOURCE_NAME]'s domain: [DOMAIN_DESCRIPTION]
   - Filter FOR: [RELEVANT_TOPICS]
   - Filter OUT: personal stories, ad-hoc tangents, modern politics

3. Convert assertions to knowledge graph triplets.
   Format: `{subject, relation, object, evidence: [{content, citation}]}`
   Example:

   ```json
   {
     "from": "[example-subject]",
     "to": "[example-object]",
     "relation": "[example-relation]",
     "evidence": [{
       "content": "[assertion about relationship]",
       "citation": "[SOURCE_ID]-[ITEM_ID]-001",
       "videoId": "[ITEM_ID]"
     }]
   }
   ```

LAYER 2: SYNTHESIS

4. Create nodes from triplets:
   - Deduplicate subjects/objects into unique nodes
   - Assign node types: term, concept, structure, process, substance, person, text
   - Format:

     ```json
     {
       "id": "[node-id]",
       "label": "[Display Name]",
       "type": "[node-type]",
       "sourceId": "[SOURCE_ID]"
     }
     ```

5. Create terms (wiki.json):
   - Identify 5-10 key concepts from the graph
   - Write Wikipedia-style definitions (summary → details → citations)
   - Include termId (001, 002, etc.)

6. Draft THEORY.md:
   - Read the FULL transcript + all triplets + all terms
   - Synthesize [SOURCE_NAME]'s core thesis about [RESEARCH_QUESTION]
   - Write a coherent narrative, not a summary
   - Answer: "[THEORY_QUESTION]"

**OUTPUT FILES:**

Create these in `/home/head/repos/deepmemory/site/sources/[source-slug]/`:
- `graph.json` (nodes + edges with evidence)
- `wiki.json` (5-10 terms with definitions)
- `THEORY.md` (coherent synthesis)

Also create:
- `/home/head/repos/deepmemory/site/sources/[source-slug]/archive/video-[ITEM_ID]-metadata.json`
- `/home/head/repos/deepmemory/site/sources/[source-slug]/archive/video-[ITEM_ID]-transcript.json` (JSON format with transcript array, NOT .txt)

**CRITICAL:**
- Read the FULL transcript at each synthesis step
- All citations use the format `[SOURCE_ID]-[ITEM_ID]-XXX`
- Domain-filter aggressively - only [DOMAIN_FOCUS] assertions
- Use today's date ([CURRENT_DATE]) for processedDate in metadata

Report back when complete with a summary of findings.
````
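Because downstream synthesis steps depend on the transcript being saved as JSON with a `transcript` array (not a `.txt` file) and on every edge carrying citations in the `[SOURCE_ID]-[ITEM_ID]-XXX` format, it can help to validate a sub-agent's output before accepting it. This is a minimal sketch, not part of the pipeline itself; the citation regex is an assumption about how the IDs compose (e.g. a lowercase slug plus a zero-padded counter):

```python
import json
import re

def validate_transcript(path):
    """Check a video-*-transcript.json file has the shape the template requires."""
    with open(path) as f:
        data = json.load(f)  # raises ValueError if the file is plain .txt text
    assert data.get("success") is True, "missing success flag"
    assert isinstance(data.get("transcript"), list), "transcript must be an array"
    assert all("text" in seg for seg in data["transcript"]), "each segment needs a text field"
    return True

# Hypothetical citation pattern for [SOURCE_ID]-[ITEM_ID]-XXX, e.g. "drumm-001-003".
CITATION_RE = re.compile(r"^[a-z0-9-]+-\d{3}$")

def validate_graph(path):
    """Check every edge in graph.json carries evidence with well-formed citations."""
    with open(path) as f:
        graph = json.load(f)
    for edge in graph.get("edges", []):
        assert edge.get("evidence"), f"edge {edge.get('from')} -> {edge.get('to')} has no evidence"
        for ev in edge["evidence"]:
            assert CITATION_RE.match(ev["citation"]), f"bad citation: {ev['citation']}"
    return True
```

Running both checks against the `archive/` and source directories after a sub-agent reports back catches the most common failure named above: a transcript saved as plain text instead of JSON.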

Variable Guide

Replace these placeholders when using the template:
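Since the placeholders are plain bracketed tokens, filling the template can also be scripted rather than done by hand. A hypothetical sketch; the variable values are taken from the Geoffrey Drumm example below, and unknown placeholders are deliberately left intact so missing substitutions stay visible:

```python
# Fill bracketed placeholders in the pipeline prompt template.
# Variable names mirror the template; values here are the example usage below.
VARIABLES = {
    "[SOURCE_NAME]": "Geoffrey Drumm",
    "[VIDEO_DESCRIPTION]": "first YouTube video",
    "[VIDEO_URL]": "https://www.youtube.com/watch?v=ktNdOsXFOTg",
    "[VIDEO_TITLE]": "The Land Of Chem: An Initiation Into Ancient Chemistry "
                     "Through The Degrees Of The Egyptian Pyramids",
    "[DURATION]": "16:56",
    "[CURRENT_DATE]": "2026-02-15",
}

def fill_template(template: str, variables: dict) -> str:
    """Replace each known [PLACEHOLDER]; leave unrecognized ones untouched."""
    for placeholder, value in variables.items():
        template = template.replace(placeholder, value)
    return template

prompt = fill_template(
    "Process [SOURCE_NAME]'s [VIDEO_DESCRIPTION] through the Deep Memory pipeline.",
    VARIABLES,
)
```

Leaving unknown placeholders untouched (rather than erroring) makes a half-filled prompt easy to spot before it is handed to a sub-agent.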

Source ID Reference

Example Usage (Geoffrey Drumm)

```
Process Geoffrey Drumm's first YouTube video through the Deep Memory pipeline.

**Video:** https://www.youtube.com/watch?v=ktNdOsXFOTg
**Title:** "The Land Of Chem: An Initiation Into Ancient Chemistry Through The Degrees Of The Egyptian Pyramids"
**Duration:** 16:56
**Today's date:** 2026-02-15

Your mission: Complete Layer 1 & 2 extraction.

[... rest of template with Geoffrey-specific variables filled in ...]
```