HyperBinder SDK¶
A Python SDK for HyperBinder: a neurosymbolic database and reasoning substrate. It introduces a new category of engineering for modern AI applications, one that sits between the data layer and the AI/application layer: the knowledge layer.
The Knowledge Layer¶
Modern AI applications need structured knowledge, but data lives in fragmented systems. HyperBinder is the Knowledge Layer.
Most agent applications dump assorted markdown files into the context window, call that "knowledge" or "memory," and expect the LLM to make sense of it. But knowledge should have articulated structure: it should be possible to pipe different facts together and switch seamlessly between encoding types such as semantic, exact, or numeric.
Dumping prompts and hoping for the best isn't engineering. What's needed is a real cognitive architecture for your agent.
With HyperBinder, you model your domain instead of your plumbing:
```mermaid
flowchart TB
    A["LLMs · Agents · RAG"]
    K["<b>KNOWLEDGE DESIGN LAYER</b><br/><i>Compounds · Intersections · Unified Queries</i>"]
    D["Your Data Sources"]
    A <-->|"Declarative API"| K
    K <-->|"Ingestion"| D
    style K fill:#4a5568,stroke:#a0aec0,stroke-width:2px,color:#fff
```
| Layer | Traditional Stack | With HyperBinder |
|---|---|---|
| Application | Custom orchestration across systems | Declarative queries |
| Knowledge | (missing - you build it yourself) | Compounds + Intersections |
| Data | PostgreSQL + Elasticsearch + Neo4j + ... | Any data source |
What Makes HyperBinder Different?¶
HyperBinder unifies concepts from several database traditions, including vector, graph, and relational, considerably reducing infrastructure burden. Although it has the bones of a database, it is something more: it introduces a new mathematical framework for working with embeddings, one that lets you compose rich "mental models" of a domain for your agent to work with.
Traditional vector databases treat embeddings as opaque black boxes:
- Data goes in → embedding comes out
- Query by similarity only
- No structural understanding
HyperBinder uses Hyperdimensional Computing (HDC) to enable:
| Capability | Traditional Vector DB | HyperBinder |
|---|---|---|
| Similarity search | Yes | Yes |
| Structural decomposition | No | Yes - extract components from composed structures |
| Analogical reasoning | No | Yes - A:B :: C:? queries |
| Schema-aware queries | No | Yes - query by slot (subject, predicate, object) |
| Multi-hop traversal | Limited | Yes - semantic graph walking |
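The structural capabilities above come from HDC's core operators: binding (elementwise multiplication of bipolar hypervectors) and bundling (majority vote). The following is a plain-Python illustration of the general HDC idea, not HyperBinder's actual internals. It shows how a single vector can hold a role-filler triple and still be decomposed, because binding is its own inverse:

```python
import random

DIM = 10_000  # hypervectors are very high dimensional

def rand_vec():
    # Random bipolar hypervector: each component is +1 or -1.
    return [random.choice((-1, 1)) for _ in range(DIM)]

def bind(a, b):
    # Elementwise multiplication binds role and filler; self-inverse.
    return [x * y for x, y in zip(a, b)]

def bundle(*vecs):
    # Elementwise majority vote superimposes several hypervectors.
    return [1 if sum(c) > 0 else -1 for c in zip(*vecs)]

def similarity(a, b):
    # Normalized dot product: near 0 for unrelated vectors, 1 for identical.
    return sum(x * y for x, y in zip(a, b)) / DIM

random.seed(0)
# Role vectors (schema slots) and filler vectors (values).
subject, predicate, obj = rand_vec(), rand_vec(), rand_vec()
einstein, developed, relativity = rand_vec(), rand_vec(), rand_vec()

# Compose one triple as a bundle of slot-filler bindings.
fact = bundle(bind(subject, einstein),
              bind(predicate, developed),
              bind(obj, relativity))

# Structural decomposition: unbinding the "subject" role recovers
# a noisy copy of "einstein" — clearly similar to it, and not to
# the other fillers.
recovered = bind(fact, subject)
print(similarity(recovered, einstein))    # clearly positive
print(similarity(recovered, relativity))  # near zero
```

The same mechanism underlies slot-aware queries: matching against a role-unbound vector is what makes "query by subject" possible at all.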
When you specify a schema in HyperBinder, you define a compositional structure that captures the shape of your knowledge. These connected concepts are both human- and machine-readable, providing an essential interpretability bridge between developers and AI.
If that sounds complicated, don't worry: you don't need to understand the math to use the SDK. The high-level API is designed to be intuitive. It encourages you to think in your domain, whether that is your business's org chart, the layout of a codebase, or the structure of a scientific problem. HyperBinder scaffolds your agent's mental model in terms of the natural logical entities and relationships that constitute it; the low-level mathematical details are abstracted away.
Is your data shaped like a hierarchy, such as an organization? Use the Hierarchy compound. Need to connect a set of facts in sequence for a particular workflow? Use Sequence. Need to connect the relevant parts of two documents to each other for faster, more reliable lookup? Use Documents and specify an intersection between them. The agent will follow the pathways you define for it (or that you allow it to define for itself).
Quick Example¶
```python
from hybi import HyperBinder
from hybi.compose import Triple, Field, Encoding

# Client mode (connects to a server)
hb = HyperBinder("http://localhost:8000")

# Or local mode (embedded, no Docker needed)
# hb = HyperBinder(local=True)

# Define a knowledge graph schema
schema = Triple(
    subject=Field("entity", encoding=Encoding.SEMANTIC),
    predicate=Field("relation", encoding=Encoding.EXACT),
    object=Field("target", encoding=Encoding.SEMANTIC),
)

# Ingest with the schema
hb.ingest(knowledge_df, collection="facts", schema=schema)

# Query by structure - find what Einstein is related to
q = hb.query("facts", schema=schema)
results = q.find(subject="Einstein")

# Analogical reasoning - Einstein:Relativity :: Darwin:?
results = hb.analogy(
    "Einstein", "Relativity", "Darwin",
    field_name="subject", collection="facts",
)
```
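Why does an analogy query work at all? Because HDC binding is self-inverse: binding A with B extracts whatever relation connects them, and binding that relation with C yields the analogous D. A noiseless toy sketch in plain Python follows; it illustrates the general HDC mechanism, not the SDK's internal representation:

```python
import random

DIM = 10_000
random.seed(1)

def rand_vec():
    return [random.choice((-1, 1)) for _ in range(DIM)]

def bind(a, b):
    # Elementwise multiplication; self-inverse for bipolar vectors.
    return [x * y for x, y in zip(a, b)]

def similarity(a, b):
    return sum(x * y for x, y in zip(a, b)) / DIM

# Toy setup: each "theory" is its author bound with a shared relation.
developed = rand_vec()
einstein, darwin = rand_vec(), rand_vec()
relativity = bind(einstein, developed)
evolution = bind(darwin, developed)

# Einstein:Relativity :: Darwin:?
# Extract the relation, then apply it to Darwin.
relation = bind(einstein, relativity)  # recovers `developed` exactly here
answer = bind(darwin, relation)
print(similarity(answer, evolution))   # 1.0 in this noiseless toy
```

In real data the recovered relation is noisy, so the answer is found by nearest-neighbor search over stored vectors rather than exact equality.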
The Compose System¶
HyperBinder's power comes from its Compose system, a schema language that defines how your data is structured and queried.
```mermaid
flowchart LR
    subgraph molecules [MOLECULES]
        m["Pair, Triple, Bundle,<br/>Sequence, Tree, Graph"]
    end
    subgraph compounds [COMPOUNDS]
        c["KnowledgeGraph, Catalog, TimeSeries,<br/>Hierarchy, Document, Network"]
    end
    molecules --> compounds
```
- Molecules: Composable structures you define schemas with
- Compounds: Pre-configured molecules for common domains
These primitives are each suited to different operations: some to semantic search, others to graph traversal, others to table lookup. You can specify intersections that pipe connections between them, blending these operations together seamlessly.
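To give a flavor of how an order-aware molecule like Sequence can be realized, here is the classic HDC permutation trick in plain Python: cyclically shifting a hypervector by its position before bundling keeps order information recoverable. This is an illustration of the general technique, not HyperBinder's actual encoding:

```python
import random

DIM = 10_000
random.seed(2)

def rand_vec():
    return [random.choice((-1, 1)) for _ in range(DIM)]

def shift(v, n):
    # Cyclic shift by n positions encodes "position n".
    n %= DIM
    return v[-n:] + v[:-n] if n else list(v)

def unshift(v, n):
    # Inverse cyclic shift.
    n %= DIM
    return v[n:] + v[:n] if n else list(v)

def bundle(*vecs):
    # Elementwise majority vote.
    return [1 if sum(c) > 0 else -1 for c in zip(*vecs)]

def similarity(a, b):
    return sum(x * y for x, y in zip(a, b)) / DIM

a, b, c = rand_vec(), rand_vec(), rand_vec()

# Encode the ordered sequence (a, b, c) as a single vector.
seq = bundle(shift(a, 0), shift(b, 1), shift(c, 2))

# "What is at position 1?" — undo that position's shift and compare.
probe = unshift(seq, 1)
print(similarity(probe, b) > similarity(probe, a))  # True: b is at position 1
```

Because shifting decorrelates a vector from its unshifted self, the same item can appear at different positions without the readings interfering.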
(Note: a lower-level layer, atoms, exposes the underlying mathematical operators. Atoms currently operate internally only; we plan to expose this layer in future releases so users can define their own molecules and compounds.)
Learn more about the Compose system →
Installation¶
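Assuming the package is published on PyPI under its import name, `hybi` (an assumption — check your distribution channel), installation is a single pip command:

```shell
# Package name assumed from the import name used throughout these docs.
pip install hybi
```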
Next Steps¶
- Getting Started - First queries in 5 minutes
- Concepts - Understand the architecture
- API Reference - Complete API documentation