
SEOCHO | The OS for Agentic Knowledge Graphs

SEOCHO is open source

Ontology-First
Graph Memory for Agents

1. Define one ontology and use it across extraction, querying, validation, and runtime artifacts.

2. Keep graph behavior inspectable with schema-aware query planning and bounded repair.

3. Promote the same local contract into a runtime other agents can consume over HTTP.

[Neuro-symbolic knowledge graph visualization]
seocho-ontology-first
from seocho import Seocho, Ontology, NodeDef, RelDef, P
from seocho.store import Neo4jGraphStore, OpenAIBackend

ontology = Ontology(
    name="finance_core",
    nodes={
        "Company": NodeDef(properties={"name": P(str, unique=True)}),
    },
    relationships={
        "ACQUIRED": RelDef(source="Company", target="Company"),
    },
)

client = Seocho(
    ontology=ontology,
    graph_store=Neo4jGraphStore("bolt://localhost:7687", "neo4j", "password"),
    llm=OpenAIBackend(model="gpt-4o-mini"),
)

client.add("ACME acquired Beta in 2024.")
print(client.ask("Who did ACME acquire?", reasoning_mode=True))

Everything you need to keep
agents aligned to one graph contract.

Ontology-first extraction and querying, graph-backed runtime APIs, bounded repair, and an explicit advanced mode reserved for when comparison work is worth the cost.

Ontology As The Runtime Contract

One ontology controls extraction hints, query prompts, SHACL-derived validation, and runtime semantic artifacts instead of drifting across multiple schema layers.
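The single-contract idea can be sketched in plain Python, independent of the SEOCHO API. In this illustrative sketch (the `ONTOLOGY` dict and both helper functions are hypothetical, not part of the SEOCHO package), one ontology object derives both the extraction hint and the validation check, so the two layers cannot drift apart:

```python
# Illustrative sketch: one ontology definition feeds every downstream layer.
# ONTOLOGY and both helpers are hypothetical, not SEOCHO's actual API.
ONTOLOGY = {
    "nodes": {"Company": {"name": str}},
    "relationships": {"ACQUIRED": ("Company", "Company")},
}

def extraction_hint(ontology: dict) -> str:
    """Derive an extraction prompt fragment from the ontology."""
    rels = ", ".join(
        f"{r} ({s} -> {t})" for r, (s, t) in ontology["relationships"].items()
    )
    return f"Extract only these relationship types: {rels}"

def validate_triple(ontology: dict, src: str, rel: str, dst: str) -> bool:
    """Reject any triple that falls outside the ontology's contract."""
    spec = ontology["relationships"].get(rel)
    return spec == (src, dst)
```

Because both functions read the same object, tightening the ontology tightens extraction and validation in one move.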

Graph-Native Query Path

Ontology profile, SHACL, vocabulary, and graph metadata constrain retrieval before query generation. The default path stays inspectable and evidence-grounded.
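One way to picture schema-aware constraint, sketched here with hypothetical names rather than SEOCHO internals: before a generated Cypher query runs, every label and relationship type it mentions is checked against the ontology's allowed sets, assuming the usual Cypher convention of CamelCase node labels and ALL_CAPS relationship types:

```python
import re

# Hypothetical sketch of schema-aware query gating, not SEOCHO internals.
# Allowed sets would come from the ontology profile in practice.
ALLOWED_LABELS = {"Company"}
ALLOWED_RELS = {"ACQUIRED"}

def query_is_in_contract(cypher: str) -> bool:
    """Check that a candidate Cypher query only touches known schema."""
    tokens = set(re.findall(r":([A-Z][A-Za-z]*)\b", cypher))
    # Convention: relationship types are ALL_CAPS, node labels CamelCase.
    rels = {t for t in tokens if t.isupper()}
    labels = tokens - rels
    return rels <= ALLOWED_RELS and labels <= ALLOWED_LABELS
```

A query that strays outside the contract is rejected before execution, which keeps the default path inspectable.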

Semantic Debate Engine
Deterministic Routing

Bounded Repair

When retrieval is insufficient, SEOCHO retries within a bounded repair budget instead of jumping straight to uncontrolled free-form query generation.
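The bounded-repair idea can be shown as a minimal loop, assuming a generic `retrieve` and `repair` pair (this is a conceptual sketch, not the SEOCHO implementation): retries happen inside a fixed budget, and when the budget is spent the loop stops rather than falling back to unconstrained query generation.

```python
# Conceptual sketch of a bounded repair loop (not SEOCHO internals):
# retry retrieval with repaired input up to repair_budget times, then stop.
from typing import Callable, Optional

def retrieve_with_repair(
    retrieve: Callable[[str], Optional[str]],
    repair: Callable[[str], str],
    question: str,
    repair_budget: int = 2,
) -> Optional[str]:
    answer = retrieve(question)
    for _ in range(repair_budget):
        if answer is not None:
            return answer
        question = repair(question)  # e.g. relax a filter, add a synonym
        answer = retrieve(question)
    return answer  # may still be None: the budget is a hard stop
```

The `repair_budget=2` default mirrors the value used in the quickstart's `semantic(...)` call.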

Local Authoring, Runtime Consumption

Build locally with your ontology and graph store, then expose the same contract through HTTP runtime surfaces for other agents and clients.

Choose Your Entry Path

Start from the surface that matches your job.

SEOCHO has three distinct entry points: product framing, local ontology-first authoring, and runtime consumption. Pick one instead of reading the repo like a file dump.

Runtime Walkthrough

Quickstart

Stand up the runtime, ingest data, and ask ontology-aware questions without starting from the full architecture first.

1 Start Services

git clone https://github.com/tteon/seocho.git
cd seocho && make setup-env && make up

2 Ingest Your Data

from seocho import Seocho

client = Seocho(base_url="http://localhost:8001", workspace_id="default")
client.raw_ingest(
    [
        {"id": "r1", "content": "ACME acquired Beta in 2024."},
        {"id": "r2", "content": "Beta provides risk analytics to ACME."},
    ],
    target_database="kgruntime",
)

3 Query Through the Semantic Layer

from seocho import Seocho

client = Seocho(base_url="http://localhost:8001", workspace_id="default")
semantic = client.semantic(
    "What is ACME related to?",
    databases=["kgruntime"],
    reasoning_mode=True,
    repair_budget=2,
)
print(semantic.response)

System Map

System Architecture

From raw ingestion to semantic-layer-constrained retrieval, with advanced debate only when needed.

graph TD
    classDef external fill:transparent,stroke:#52525B,stroke-width:1px,stroke-dasharray:5 5,color:#A1A1AA
    classDef pipeline fill:#ffffff10,stroke:#d4d4d8,stroke-width:1px,color:#e4e4e7
    classDef db fill:#ffffff05,stroke:#a1a1aa,stroke-width:1px,color:#d4d4d8
    classDef agent fill:#ffffff15,stroke:#f4f4f5,stroke-width:1px,color:#ffffff
    classDef user fill:transparent,stroke:none,color:#E4E4E7,font-weight:bold
    classDef mode fill:#ffffff05,stroke:#a1a1aa,stroke-width:1px,color:#ffffff,stroke-dasharray:5 5

    User(("User Query")):::user -->|Chat| UI[Custom Platform :8501]:::external
    UI -->|Toggle| Mode[Execution Mode]:::mode

    Mode -->|Router| Router[Router Agent]:::agent
    Mode -->|Debate| Debate[DebateOrchestrator]:::agent
    Mode -->|Semantic QA| Sem[Semantic Layer]:::agent

    subgraph Router_Mode[Router Mode]
        Router --> Graph[GraphAgent]:::agent
        Router --> Vector[VectorAgent]:::agent
        Router --> Web[WebAgent]:::agent
        Graph --> DBA[GraphDBA]:::pipeline
        DBA -->|Cypher| Neo4j[(Neo4j / DozerDB Partitions)]:::db
        Vector -->|Search| FAISS[(FAISS)]:::db
        DBA --> Sup1[Supervisor]:::agent
        Vector --> Sup1
    end

    subgraph Debate_Mode[Parallel Debate Mode]
        Debate -->|Fan-out| A1[Agent_kgnormal]:::agent
        Debate -->|Fan-out| A2[Agent_kgfibo]:::agent
        Debate -->|Fan-out| AN[Agent_...]:::agent
        A1 --> Collect[Collect]:::pipeline
        A2 --> Collect
        AN --> Collect
        Collect --> Sup2[Supervisor Synthesis]:::agent
    end

    subgraph Semantic_Mode[Semantic Agent Flow]
        Sem --> DedupResolve[Entity Dedup & Fulltext Resolve]:::pipeline
        DedupResolve --> Route2[RouterAgent]:::agent
        Route2 --> LPG[LPGAgent]:::agent
        Route2 --> RDF[RDFAgent]:::agent
        LPG --> Ans[AnswerGenerationAgent]:::agent
        RDF --> Ans
    end

    subgraph Pipeline[Data Extraction Pipeline]
        DS[Raw Sources]:::external --> Bridge[OntologyPromptBridge]:::pipeline
        Bridge --> Extract[EntityExtractor]:::pipeline
        Extract --> Link[EntityLinker]:::pipeline
        Link --> Dedup[EntityDeduplicator]:::pipeline
        Dedup --> DBM[DatabaseManager]:::pipeline
        DBM -->|CREATE DB| Neo4j
        DBM --> AF[AgentFactory]:::pipeline
    end
        

Writing

The SEOCHO Blog

Concepts, reasoning, and major release announcements.

All Posts

Repository Signal

Latest Updates

Continuously integrated signals from GitHub Releases.

View Commits
  • 2026-04-19 Sync integration established. Awaiting first release. #init

Operator Notes

FAQ

How do I put my own data into SEOCHO?

Start with `raw_ingest(...)` if you already have records or documents, or `add(...)` for one-off text. Use one `target_database` per dataset, then query it through `ask(...)`, `semantic(...)`, and only later `advanced(...)` if you need graph comparison.

When should I turn on reasoning mode?

Turn on `reasoning_mode=True` when the first semantic retrieval pass is too weak, a relation path is missing, or slot filling is incomplete. It is the preferred escalation path before debate because it stays bounded and evidence-focused.

When should I use advanced debate?

Use `advanced(...)` when the point of the request is explicit cross-graph comparison, disagreement analysis, or multi-agent synthesis. It is not the default retrieval mode and it should not replace the semantic-layer-first path for ordinary developer queries.
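The fan-out / collect / synthesize shape behind debate mode (shown in the architecture diagram) can be sketched generically. This is illustrative only, not the `DebateOrchestrator` implementation: each per-graph agent answers independently, and a trivial supervisor either reports consensus or surfaces the disagreement for comparison work.

```python
# Conceptual sketch of the fan-out / collect / synthesize pattern,
# illustrative only, not the DebateOrchestrator implementation.
from typing import Callable, Dict

def debate(agents: Dict[str, Callable[[str], str]], question: str) -> str:
    # Fan out the same question to every graph-specific agent.
    answers = {name: agent(question) for name, agent in agents.items()}
    # Collect and synthesize: report consensus or expose the disagreement.
    distinct = set(answers.values())
    if len(distinct) == 1:
        return f"consensus: {distinct.pop()}"
    return "disagreement: " + "; ".join(
        f"{name}={ans}" for name, ans in sorted(answers.items())
    )
```

The real value of the mode is in the disagreement branch; when every graph already agrees, the semantic-layer path would have answered more cheaply.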

Do I need to author ontology, JSON-LD, or SHACL before first use?

No. Start by ingesting representative data and validating the semantic path. Add approved ontology or shape artifacts when you need stronger constraints, more repeatable retrieval, or tighter governance around graph behavior.