📡 Signals

Atomic units of lived data. The foundation of pattern recognition and synthesis.

What is a Signal?

A signal is the atomic unit of lived data in Autonomy. It represents a single moment of documented reality — raw capture that forms the input layer for AI-powered pattern detection and synthesis.

Signals are not content. They're not posts. They're timestamped documentation, optionally geolocated, preserved with full structural fidelity.

Every signal belongs to a realm and has a type (the medium) and optional context (the intent).
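A signal at capture time can be sketched as a plain record. This is a hedged illustration, not the actual storage format; the ULID values below are hypothetical placeholders.

```python
# A minimal signal at capture time; the ULID values are hypothetical
# placeholders, and the payload follows the DOCUMENT shape.
signal = {
    "signal_id": "01J0EXAMPLEULID0000000000",  # hypothetical ULID
    "realm_id": "01J0EXAMPLEREALM00000000",    # every signal has a realm
    "signal_type": "DOCUMENT",                 # the medium
    "signal_context": "NOTE",                  # the intent (optional)
    "signal_payload": {
        "content": "Quick thought captured on the go.",
        "format": "plain",
    },
}
```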

Signal Types (Medium)

DOCUMENT

Text in any form — writing, notes, code, references. The most common signal type.

PHOTO

Visual capture. Images preserved with technical metadata when available.

TRANSMISSION

Audio or video recordings. YouTube videos. Podcast episodes. Processed via transcript.

CONVERSATION

Dialogue logs. AI chat transcripts. Co-created content from back-and-forth exchange.

Signal Context (Intent)

Context describes why a signal was created. It's optional but provides valuable metadata for AI synthesis and pattern recognition.

CAPTURE

Default — generic documentation, intent to be determined

NOTE

Quick capture, ephemeral thought

JOURNAL

Reflective writing, daily log

CODE

Technical artifact, implementation

REFERENCE

External source, citation

OBSERVATION

Field note, documented reality

ARTIFACT

Created work, finished piece
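Taken together, medium and intent can be modeled as two small enumerations. This is a sketch for illustration; the string-valued `Enum` classes are an assumption, not the actual implementation.

```python
from enum import Enum

class SignalType(Enum):
    """The medium of a signal."""
    DOCUMENT = "DOCUMENT"
    PHOTO = "PHOTO"
    TRANSMISSION = "TRANSMISSION"
    CONVERSATION = "CONVERSATION"

class SignalContext(Enum):
    """The intent behind a signal. CAPTURE is the default."""
    CAPTURE = "CAPTURE"
    NOTE = "NOTE"
    JOURNAL = "JOURNAL"
    CODE = "CODE"
    REFERENCE = "REFERENCE"
    OBSERVATION = "OBSERVATION"
    ARTIFACT = "ARTIFACT"
```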

Signal Structure

Core Fields

  • signal_id - Unique identifier (ULID format)
  • realm_id - Which realm this signal belongs to
  • signal_type - Medium: DOCUMENT, PHOTO, TRANSMISSION, CONVERSATION
  • signal_context - Intent: CAPTURE, NOTE, JOURNAL, CODE, REFERENCE, OBSERVATION, ARTIFACT
  • signal_title - Brief title (initially from synthesis)
  • signal_description - Longer description (initially from synthesis)
  • signal_author - Who created/captured this signal
  • signal_temperature - Importance (-1.0 to 1.0, default 0.0)
  • signal_status - ACTIVE, PENDING, REJECTED, FAILED, or ARCHIVED
  • signal_visibility - PUBLIC, PRIVATE, SANCTUM, or SHARED
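The core fields can be sketched as a dataclass. The defaults for temperature and context follow the descriptions above; the types, the example ID values, and the PRIVATE default for visibility are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Signal:
    # Core fields; types are illustrative, not the actual schema.
    signal_id: str                             # ULID
    realm_id: str                              # owning realm
    signal_type: str                           # DOCUMENT, PHOTO, TRANSMISSION, CONVERSATION
    signal_context: str = "CAPTURE"            # intent; CAPTURE is the default
    signal_title: Optional[str] = None         # initially from synthesis
    signal_description: Optional[str] = None   # initially from synthesis
    signal_author: Optional[str] = None
    signal_temperature: float = 0.0            # importance, -1.0 to 1.0
    signal_status: str = "ACTIVE"
    signal_visibility: str = "PRIVATE"         # assumed default

# Hypothetical ULIDs, used only to show construction.
example = Signal(signal_id="01J0EXAMPLE", realm_id="01J0REALM",
                 signal_type="DOCUMENT")
```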

Temporal Data

  • stamp_created - When the original content was captured/created
  • stamp_imported - When the signal was ingested into the system
  • stamp_updated - Last modification timestamp (auto-updated)

Geospatial Data (Optional)

  • signal_location - PostGIS Point (PostgreSQL), GeoJSON format
  • signal_latitude / signal_longitude - Decimal coordinates (MySQL)
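The same position takes two shapes depending on the backend. A sketch of both, with hypothetical coordinates; note that GeoJSON orders coordinates as [longitude, latitude], the reverse of the usual spoken order.

```python
# Hypothetical coordinates for illustration (New York City).
latitude, longitude = 40.7128, -74.0060

# PostgreSQL/PostGIS representation: a GeoJSON Point.
# GeoJSON puts longitude first.
signal_location = {"type": "Point", "coordinates": [longitude, latitude]}

# MySQL representation: two decimal columns.
mysql_columns = {"signal_latitude": latitude, "signal_longitude": longitude}
```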

Type-Specific Data

  • signal_metadata - Technical/immutable facts about the signal
  • signal_payload - The actual content data

DOCUMENT - Metadata & Payload

Metadata:

  • word_count - Number of words
  • character_count - Number of characters
  • language - Language code (e.g., 'en', 'es')
  • file_extension - File extension (e.g., '.md', '.txt')
  • encoding - Character encoding (e.g., 'utf-8')
  • mime_type - MIME type (e.g., 'text/plain')

Payload:

  • content - The actual text content
  • format - Rendering format: 'plain', 'markdown', or 'html'
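An illustrative DOCUMENT signal, with the content made up but the counts derived from the payload so metadata and payload stay consistent:

```python
# Hypothetical DOCUMENT payload; counts in the metadata are computed
# from the content so the two stay consistent.
document_payload = {
    "content": "# Field notes\n\nSix words of text.",
    "format": "markdown",
}

document_metadata = {
    "word_count": len(document_payload["content"].split()),
    "character_count": len(document_payload["content"]),
    "language": "en",
    "file_extension": ".md",
    "encoding": "utf-8",
    "mime_type": "text/plain",
}
```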

PHOTO - Metadata & Payload

Metadata (EXIF & Properties):

  • camera - Camera model
  • lens - Lens information
  • iso - ISO sensitivity
  • aperture - Aperture value (e.g., 'f/1.5')
  • shutter_speed - Shutter speed (e.g., '1/120')
  • focal_length - Focal length in mm
  • width - Image width in pixels
  • height - Image height in pixels
  • file_size - File size in bytes
  • mime_type - MIME type (e.g., 'image/jpeg')
  • color_space - Color space (e.g., 'sRGB')
  • timestamp_original - Original capture timestamp from EXIF
  • gps_altitude - GPS altitude in meters

Payload:

  • file_path - Local path or cloud storage URL
  • thumbnail_path - Optimized thumbnail path
  • original_filename - Original filename when uploaded
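A sketch of a PHOTO signal's metadata and payload; every value here, including the camera model and file paths, is hypothetical.

```python
# Illustrative EXIF-derived metadata for a PHOTO signal; all values
# are hypothetical, including the paths in the payload.
photo_metadata = {
    "camera": "Example Camera X100",
    "iso": 400,
    "aperture": "f/1.5",
    "shutter_speed": "1/120",
    "focal_length": 23,       # mm
    "width": 6000,            # pixels
    "height": 4000,           # pixels
    "file_size": 8_400_000,   # bytes
    "mime_type": "image/jpeg",
    "color_space": "sRGB",
}

photo_payload = {
    "file_path": "/storage/photos/example.jpg",
    "thumbnail_path": "/storage/thumbs/example.jpg",
    "original_filename": "DSCF0001.jpg",
}
```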

TRANSMISSION - Metadata & Payload

Metadata:

  • source_type - 'audio', 'video', or 'other'
  • source_url - YouTube URL, file path, etc.
  • youtube_id - YouTube video ID (if applicable)
  • youtube_channel - YouTube channel name
  • youtube_published_at - YouTube publish timestamp
  • youtube_thumbnail - YouTube thumbnail URL
  • timestamps - Array of topic markers: [{topic, timestamp}]
  • duration - Duration in seconds
  • bitrate - Bitrate in kbps
  • sample_rate - Sample rate in Hz
  • channels - Audio channels (1=mono, 2=stereo)
  • codec - Codec (e.g., 'h264', 'aac')
  • file_size - File size in bytes
  • mime_type - MIME type (e.g., 'video/mp4')
  • width - Video width in pixels
  • height - Video height in pixels
  • framerate - Framerate in fps
  • has_transcript - Boolean
  • transcript_method - 'ai', 'manual', or 'imported'

Payload:

  • file_path - Local file or cloud storage URL
  • transcript - Plain text transcript
  • timed_transcript - Array of timestamped segments: [{text, start, end}]
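A sketch of TRANSMISSION data showing the relationship between the timed and plain transcripts; the segment text and timings are hypothetical.

```python
# Hypothetical timed transcript segments for a TRANSMISSION signal.
timed_transcript = [
    {"text": "Welcome to the show.", "start": 0.0, "end": 3.2},
    {"text": "Today we discuss signals.", "start": 3.2, "end": 7.5},
]

# The plain-text transcript can be derived by joining the segments.
transcript = " ".join(segment["text"] for segment in timed_transcript)

transmission_metadata = {
    "source_type": "audio",
    "duration": timed_transcript[-1]["end"],  # seconds, from last segment
    "has_transcript": True,
    "transcript_method": "ai",
}
```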

CONVERSATION - Metadata & Payload

Metadata:

  • platform - 'claude', 'chatgpt', 'gemini', 'remnant', or 'other'
  • model - Model identifier (e.g., 'claude-sonnet-4')
  • message_count - Total number of messages
  • turn_count - Number of back-and-forth exchanges
  • duration_minutes - Estimated conversation duration
  • total_tokens - Total token count (if available)
  • started_at - First message timestamp
  • ended_at - Last message timestamp

Payload:

  • messages - Array of messages: [{role, content, timestamp, metadata}]
  • summary - AI-generated conversation summary
  • key_points - Array of extracted key insights
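A sketch of a CONVERSATION signal with its counts derived from the message array; the message contents are hypothetical, and counting one turn per user message is an assumption about how exchanges are tallied.

```python
# Hypothetical CONVERSATION messages.
messages = [
    {"role": "user", "content": "What is a signal?"},
    {"role": "assistant", "content": "The atomic unit of lived data."},
    {"role": "user", "content": "And what is a realm?"},
    {"role": "assistant", "content": "The container every signal belongs to."},
]

conversation_metadata = {
    "platform": "claude",
    "message_count": len(messages),
    # One turn per user message is an assumption about how
    # back-and-forth exchanges are counted.
    "turn_count": sum(1 for m in messages if m["role"] == "user"),
}
```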

Additional Fields

  • signal_tags - Array of tags (initially from synthesis)
  • signal_embedding - Vector embedding (1536 dimensions) for semantic search
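Semantic search over embeddings typically ranks signals by cosine similarity. A minimal sketch of the metric, using tiny toy vectors rather than 1536-dimensional ones:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional stand-ins for signal_embedding vectors.
similarity = cosine_similarity([1.0, 0.0, 0.0], [1.0, 0.0, 0.0])
```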

History & Annotations

  • signal_history - Audit trail: [{timestamp, action, field, user_id}]
  • signal_annotations - User notes and synthesis feedback: {user_notes[], synthesis_feedback[]}

Visibility Levels

PUBLIC

Anyone can view this signal (if they have access to the realm).

SANCTUM

Only users with SANCTUM role or higher can view.

PRIVATE

Only the realm owner can view this signal.

SHARED

Reserved for future multi-user realm features.
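The four levels might gate access roughly as follows. The PUBLIC, SANCTUM, and PRIVATE rules follow the descriptions above; the role ranking below SANCTUM is an assumption, and SHARED is denied here because it is reserved.

```python
# Assumed role hierarchy; only "SANCTUM role or higher" comes from the text.
ROLE_RANK = {"VIEWER": 0, "SANCTUM": 1, "OWNER": 2}

def can_view(visibility: str, role: str, is_owner: bool) -> bool:
    if visibility == "PUBLIC":
        return True                 # anyone with access to the realm
    if visibility == "SANCTUM":
        return ROLE_RANK.get(role, 0) >= ROLE_RANK["SANCTUM"]
    if visibility == "PRIVATE":
        return is_owner             # realm owner only
    return False                    # SHARED: reserved for future use
```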

Signal Lifecycle

  1. Capture — Signal is created with minimal data: type, context, raw payload
  2. Synthesis — AI processes signal and generates METADATA/SURFACE (title, description, tags)
  3. Enrichment — Title/description/tags copied to signal table for display
  4. Clustering — Can be grouped with related signals in clusters
  5. Deep Synthesis — STRUCTURE, PATTERNS analysis for cross-signal insights
  6. Reflection — MIRROR, MYTH, NARRATIVE generation at cluster level
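The steps above can be sketched as an ordered pipeline. The stage names follow the numbered list; the helper function is purely illustrative.

```python
from typing import Optional

# Lifecycle stages in order, per the numbered list above.
LIFECYCLE = [
    "capture",         # signal created with type, context, raw payload
    "synthesis",       # AI generates title, description, tags
    "enrichment",      # display fields copied onto the signal record
    "clustering",      # grouped with related signals
    "deep_synthesis",  # STRUCTURE / PATTERNS analysis across signals
    "reflection",      # MIRROR / MYTH / NARRATIVE at cluster level
]

def next_stage(current: str) -> Optional[str]:
    """Return the stage after `current`, or None for the final stage."""
    i = LIFECYCLE.index(current)
    return LIFECYCLE[i + 1] if i + 1 < len(LIFECYCLE) else None
```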

Key Principles

Signals are input for pattern recognition

Not content for consumption. Raw documentation of lived reality that AI synthesis processes to identify patterns and generate insights.

Every signal belongs to a realm

Signals don't exist in isolation. They're always part of a realm, ensuring clear ownership and access control.

Title/description/tags come from synthesis

These display fields are initially AI-generated, then user-editable. Changes are tracked in signal_history.

Location is metadata, not a signal type

Geographic coordinates attach to any signal. Places are clustering context, not signals themselves.

Related Concepts