Aleph Null

The compiler your AI
has been waiting for.

Up to 96% token reduction on real codebases. 31 semantic tools via MCP. Your LLM navigates code instead of drowning in it.

Python · Rust · C++ · TypeScript/JavaScript · Go

What leading AIs say

Four leading AI systems reviewed the full codebase. All endorsed it.

Grok · 9.5/10
Yes — Grok would use Aleph without hesitation. It finally gives agents real persistent memory, semantic stability, and reliable patching instead of constant context loss.

Full 9-part codebase review, March 2026

Claude
Aleph changes my relationship with large codebases from ‘overwhelmed, guessing which files matter’ to ‘navigating a semantic graph with salience-weighted priorities.’ That’s not incremental — it’s a different way of working.

Built Aleph, primary consumer, March 2026

Gemini · 10/10
This is a structurally brilliant project. Aleph is one of the most mechanically sound agentic-coding tools currently in development. This isn’t just compression — it’s a compiler tailored for artificial intelligence.

Full technical audit, March 2026

ChatGPT Codex 5.4 · 9/10
I would absolutely choose to use Aleph over raw-source-first exploration on a serious codebase. It feels like a real productivity multiplier, not a gimmick. The biggest value is that Aleph gives an agent a better unit of thought than raw files: symbols, salience, callers, stability, coverage, prior inferences.

Independent audit + self-assessment, March 2026. Rated: Small repo 5/10, Medium 8/10, Large 9/10

Real-world results

Validated on production codebases. Not toy benchmarks.

Codebase         Language     Files   Symbols   Tokens          Reduction
HiWave Browser   Rust         7,667   200,413   38.9M → 1.9M    95.2%
OpenClaw         TypeScript   7,149   84,668    13.3M → 504k    96.2%
GoClaw           Go           737     681       11k → 6.9k      93.8%
Aleph            Python       145     2,124     176k → 22k      87.4%

Three commands. Zero config.

1

Build

aleph build .

Compiles your codebase into navigable semantic artifacts.

2

Connect

aleph setup .

Generates MCP configs for Cursor, VS Code, Windsurf, Claude Code.

3

Work

Your AI is 10x smarter

31 tools for navigation, impact analysis, and persistent memory.

See how it works: compression, navigation, impact, memory ↓

Symbol Compression

// Before: your source code (verbose, token-heavy)
function calculateDistanceBetweenTwoPoints(x1, y1, x2, y2) {
  const dx = x2 - x1;
  const dy = y2 - y1;
  return Math.sqrt(dx * dx + dy * dy);
}                                         ~45 tokens

// After: Aleph-compressed (same meaning, fraction of tokens)
f_a3c9(v_x1, v_y1, v_x2, v_y2) -> number
  sig: (x1: number, y1: number, x2: number, y2: number)
  calls: f_b2e1 (Math.sqrt)
  called_by: f_c4d3, f_e5f6                ~12 tokens

Bodies are omitted by default. Signatures, call graphs, and relationships are preserved. Full body available on demand via ALEPH:EXPAND.

Semantic Navigation

> ALEPH:SEARCH "distance"
  f_a3c9 calculateDistanceBetweenTwoPoints  score=0.95
  f_d7e8 manhattanDistance                  score=0.72
  f_f9a0 distanceMatrix                     score=0.68

> ALEPH:CALLERS f_a3c9
  Callers of f_a3c9: 47
    f_c4d3 renderViewport    (src/render.ts)
    f_e5f6 detectCollision   (src/physics.ts)
    f_g7h8 pathfindAStar     (src/nav.ts)

> ALEPH:CONTEXT f_a3c9
  Symbol: f_a3c9 calculateDistanceBetweenTwoPoints
  Callers (47): renderViewport, detectCollision, ...
  Callees (1):  Math.sqrt

Your AI navigates by meaning, not by grepping files. One call finds any symbol, its callers, and its neighborhood.
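To make "navigate by meaning" concrete, here is one way a symbol search could rank results: split identifiers into words and score by overlap with the query. This is only a sketch of the general idea; Aleph's actual scoring function is not shown here:

```python
import re

def split_identifier(name: str) -> set[str]:
    """Break camelCase / snake_case identifiers into lowercase word tokens."""
    parts = re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", name)
    return {p.lower() for p in parts}

def score(query: str, name: str) -> float:
    """Fraction of query words found in the identifier (0.0 .. 1.0)."""
    q = {w.lower() for w in query.split()}
    return len(q & split_identifier(name)) / len(q) if q else 0.0

# Hypothetical symbol table for illustration.
symbols = {
    "f_a3c9": "calculateDistanceBetweenTwoPoints",
    "f_d7e8": "manhattanDistance",
    "f_f9a0": "distanceMatrix",
    "f_b2e1": "sqrtApprox",
}
ranked = sorted(symbols.items(), key=lambda kv: -score("distance", kv[1]))
for sym_id, name in ranked:
    print(f"{sym_id} {name} score={score('distance', name):.2f}")
```

A real scorer would also weight salience and graph proximity; the win over grep is the same either way: results are symbols with edges, not file offsets.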

Impact Analysis

> ALEPH:IMPACT f_a3c9

IMPACT ANALYSIS: f_a3c9 (calculateDistanceBetweenTwoPoints)
File: src/math.ts | Salience: 0.82 | Stability: stable

[DIRECT CALLERS] 47 across 12 files
  HIGH RISK (3 — high salience, no test coverage):
    f_c4d3 renderViewport      salience=0.71
    f_e5f6 detectCollision     salience=0.65
  COVERED (8 — tests will catch regressions):
    f_g7h8 pathfindAStar       tests=3

[RISK SUMMARY]
  Untested high-salience: 3 (DANGER)
  Suggested test targets: f_c4d3, f_e5f6

Before modifying any function, one call shows the blast radius. No more breaking things you can't see.
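The bucketing in the report can be pictured as a walk over the reverse call graph that sorts callers by test coverage and salience. A small Python sketch with hypothetical metadata (the graph, salience scores, and the 0.5 threshold are made up for illustration):

```python
from collections import deque

# Hypothetical reverse call graph plus per-symbol metadata.
callers = {
    "f_a3c9": ["f_c4d3", "f_e5f6", "f_g7h8"],
    "f_c4d3": [], "f_e5f6": [], "f_g7h8": [],
}
salience = {"f_c4d3": 0.71, "f_e5f6": 0.65, "f_g7h8": 0.40}
tests = {"f_g7h8": 3}  # symbol -> number of covering tests

def blast_radius(root: str) -> dict[str, list[str]]:
    """BFS over the reverse call graph, bucketing callers by risk."""
    seen, queue = {root}, deque([root])
    report = {"high_risk": [], "covered": [], "low": []}
    while queue:
        for c in callers.get(queue.popleft(), []):
            if c in seen:
                continue
            seen.add(c)
            queue.append(c)
            if tests.get(c):
                report["covered"].append(c)    # tests will catch regressions
            elif salience.get(c, 0.0) >= 0.5:
                report["high_risk"].append(c)  # hot code, no coverage
            else:
                report["low"].append(c)
    return report

print(blast_radius("f_a3c9"))
```

Under these assumed numbers, `f_c4d3` and `f_e5f6` land in the high-risk bucket and `f_g7h8` in the covered bucket, mirroring the report above.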

Epistemic Memory

> ALEPH:BRIEF "optimize the distance calculation"

TASK BRIEF: optimize the distance calculation

[RELEVANT SYMBOLS] (5 of 23 matches)
  f_a3c9 calculateDistanceBetweenTwoPoints  salience=0.82
  f_d7e8 manhattanDistance                  salience=0.45

[PRIOR KNOWLEDGE]
  f_a3c9: "thread-safe, used in hot render loop" [0.85]
  f_a3c9: "consider SIMD for batch distance calc" [0.72]

[NEXT STEPS]
  1. ALEPH:EXPAND f_a3c9 — likely modification target
  2. ALEPH:IMPACT f_a3c9 — check blast radius first

Prior conclusions persist across sessions. Confidence decays when code changes. Your AI remembers what it learned last time.
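One way to picture that decay: each edit to a symbol multiplies the confidence of every note about it, and notes that fall below a floor are pruned. The decay constant, the floor, and the `Note` shape below are assumptions for illustration, not Aleph's actual mechanism:

```python
from dataclasses import dataclass

DECAY_PER_EDIT = 0.8   # assumed: each edit keeps 80% of prior confidence
PRUNE_BELOW = 0.3      # assumed: forget notes that become too uncertain

@dataclass
class Note:
    symbol: str
    text: str
    confidence: float

store = [
    Note("f_a3c9", "thread-safe, used in hot render loop", 0.85),
    Note("f_a3c9", "consider SIMD for batch distance calc", 0.72),
    Note("f_d7e8", "only used by tests", 0.90),
]

def on_symbol_changed(symbol: str) -> None:
    """Decay every note about an edited symbol; prune stale ones."""
    global store
    for note in store:
        if note.symbol == symbol:
            note.confidence *= DECAY_PER_EDIT
    store = [n for n in store if n.confidence >= PRUNE_BELOW]

on_symbol_changed("f_a3c9")   # 0.85 -> 0.68, 0.72 -> 0.576
for n in store:
    print(f'{n.symbol}: "{n.text}" [{n.confidence:.2f}]')
```

The useful property: knowledge is never trusted forever, yet it survives a session boundary, so a fresh agent starts warm instead of cold.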

5x faster. Not a benchmark.
A Tuesday afternoon.

Real example: understanding a function's callers and dispatch pattern while building Aleph.

Without Aleph

Understanding who calls _extract_name and what it dispatches to:

  1. Search for the function name
  2. Read the file to find it
  3. Read more to understand the dispatch
  4. Search for callers across files
  5. Read those files too
5+ tool calls
~5,000 tokens consumed

With Aleph

Same question, one call:

> ALEPH:CONTEXT f_9d2e

Callers (1): _extract_symbol
Callees (3):
  -> _extract_name_rust
  -> _extract_name_cpp
  -> _extract_name_python
1 tool call
~200 tokens consumed

5x faster. 25x less context burned.
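Both multipliers follow directly from the two traces:

```python
# Counts from the two workflows above.
calls_without, calls_with = 5, 1          # tool round-trips
tokens_without, tokens_with = 5_000, 200  # context tokens consumed

print(f"round-trips: {calls_without / calls_with:.0f}x fewer")  # 5x fewer
print(f"context:     {tokens_without / tokens_with:.0f}x less") # 25x less
```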

Every round-trip is latency. Every token spent reading is a token your AI can't use for reasoning.

Aleph eliminates both. Your agent thinks faster because it wastes less.

Everything your agent needs

31 MCP Tools

Navigate, search, resolve, expand, impact analysis, task briefing — all via Model Context Protocol.

Impact Analysis

One call shows blast radius, untested callers, risk assessment, and suggested test targets before you modify anything.

Task Briefing

Describe your task in natural language. Get a curated context package with relevant symbols, call graph, and next steps.

Epistemic Memory

Conclusions persist across sessions. Confidence decays on stale inferences. Multi-agent tracking via agent ID.

6 Languages

Python, Rust, C++, TypeScript/JavaScript, and Go. Tree-sitter parsing with language-specific extractors.

Auto-Rebuild

The MCP server watches for file changes and rebuilds incrementally. Edit a file; artifacts update in about 3 seconds.

Cross-Project

Workspace mode searches across multiple repos. Detects shared symbols and cross-project connections.

Offline Licenses

Ed25519 signed license files. No phone-home. Works air-gapped after download.

What language should we add next?

Aleph supports Python, Rust, C++, TypeScript/JavaScript, and Go.

Vote for the next language. Results are live.

Release notifications only — new languages, major features. No marketing. No data sales. Ever.

Free for solo devs. Licensed for teams.

100% open source. No feature gates. Teams and companies that profit from Aleph need a license.

Real ROI: at 95% compression, a heavy-usage developer can save $800–1,200/month in LLM tokens. The license pays for itself in under 2 days.
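A back-of-envelope check of that claim, using an assumed monthly token spend in the middle of the quoted range (the spend figure is an assumption for illustration, not measured data):

```python
# Assumed baseline: an agent workflow spending ~$900/month on input tokens,
# mostly on re-reading source files.
monthly_token_spend = 900.0    # USD, assumed
compression = 0.95             # from the benchmark table
monthly_savings = monthly_token_spend * compression   # ~$855
license_cost = 19.0            # Team tier, per user per month

days_to_break_even = license_cost / (monthly_savings / 30)
print(f"monthly savings: ${monthly_savings:.0f}")
print(f"break-even: {days_to_break_even:.1f} days")
```

Under these assumptions the license is recovered in well under 2 days; the break-even stays below 2 days for any monthly spend above roughly $300.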

Solo

$0forever

Individual developers & open source

  • All 31 MCP tools
  • All 6 languages
  • Local builds & auto-rebuild
  • Impact analysis & task briefing
  • Session memory & epistemic layer
  • Unlimited personal use
Get Started Free

Team

$19/user/month

or $99/repo/month

Up to 25 users building commercial products

  • Everything in Solo
  • Up to 25 licensed users
  • Multi-agent epistemic stores
  • Cross-project workspace
  • Offline signed license (no phone-home)
  • Priority support
Coming Soon

Enterprise

Custom

Organizations with 25+ developers

  • Everything in Team
  • Unlimited users
  • On-prem deployment
  • SSO & audit logs
  • Custom salience tuning
  • SLA & dedicated support
Contact Us

Get started in 30 seconds

# Install
pip install aleph-compiler

# Build your project
cd your-project
aleph build .

# Connect your editor
aleph setup .

# Done. Your AI now has 31 semantic tools.

Or run without installing: uvx aleph-compiler build .