We present an architecture in which the wall-clock time to solve a SAT instance stays polynomial while the computational cost is paid in dollars instead. A single Kissat process handles easy instances in milliseconds. Hard instances are decomposed by a dictionary-guided binary splitter that dispatches parallel solver workers on AWS Lambda. The dictionary learns from every UNSAT branch, pruning the search tree exponentially. With enough workers available, time complexity decouples from problem size.
A LogicSpace Dictionary maps assumption sets to deduction sets: if all literals in A hold, then all literals in B hold, where A ⊆ B. Three polynomial operations maintain it: SEED probes single-literal assumptions with unit propagation, CHAIN closes entries transitively, and CONFLICT promotes contradictory assumptions to forced literals.
After SEED+CHAIN+CONFLICT, the dictionary contains all unit-propagation consequences, available through a constant-time lookup structure. For 3-SAT at the phase transition, building it takes under 100 ms for up to 10,000 variables.
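The dictionary and its three operations can be sketched as follows. This is a minimal illustration: the glosses of SEED, CHAIN and CONFLICT are our reading of the surrounding text, and all class and method names are illustrative, not the system's actual API.

```python
class LogicSpaceDictionary:
    """Maps frozensets of assumed literals to the literals they force."""

    def __init__(self, clauses):
        self.clauses = [frozenset(c) for c in clauses]  # CNF as literal sets
        self.entries = {}       # frozenset(assumptions) -> set(forced literals)
        self.backbones = set()  # literals the formula forces unconditionally

    def _unit_propagate(self, assumed):
        """Close `assumed` under unit propagation; None signals a conflict."""
        lits = set(assumed)
        changed = True
        while changed:
            changed = False
            for clause in self.clauses:
                if clause & lits:                 # clause already satisfied
                    continue
                live = {l for l in clause if -l not in lits}
                if not live:                      # every literal falsified
                    return None
                if len(live) == 1:                # unit clause: force it
                    lits |= live
                    changed = True
        return lits

    def seed(self, variables):
        """SEED: probe each single-literal assumption with unit propagation."""
        for v in variables:
            for lit in (v, -v):
                closed = self._unit_propagate({lit})
                if closed is not None:
                    self.entries[frozenset({lit})] = closed

    def chain(self):
        """CHAIN: if A forces everything in B, A also forces B's deductions."""
        for a, da in list(self.entries.items()):
            for b, db in self.entries.items():
                if b <= da:
                    self.entries[a] = da = da | db

    def conflict(self, variables):
        """CONFLICT: an assumption that propagates to a conflict is refuted."""
        for v in variables:
            for lit in (v, -v):
                if self._unit_propagate({lit}) is None:
                    self.backbones.add(-lit)      # ¬lit becomes a backbone

    def deduce(self, branch):
        """Look up everything a branch forces; None means the branch is pruned."""
        forced = set(branch) | self.backbones
        for a, d in self.entries.items():
            if a <= forced:
                forced |= d
        return None if any(-l in forced for l in forced) else forced
```

All three maintenance operations reduce to unit propagation over the clause set, which keeps them polynomial; `deduce` is then a pure lookup over the compiled entries.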
The optimal CNF is the original formula enhanced with dictionary knowledge:
```
optimal = original clauses
        + backbone unit clauses [l] for each forced literal
        + learned implication clauses [¬a₁, …, ¬aₖ, g] from entries {a₁, …, aₖ} ⇒ g
```
This formula is equisatisfiable with the original but contains pre-compiled propagation chains. In practice, Kissat on the optimal formula is faster than on the raw formula, since it no longer has to rediscover those chains.
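The construction above can be sketched directly, assuming the dictionary exposes its backbones and entries as in the earlier description (the function name and argument shapes are illustrative):

```python
def build_optimal(original_clauses, backbones, entries):
    """Augment the raw CNF with the dictionary's compiled knowledge.

    backbones: forced literals; entries: frozenset(assumptions) -> set(deduced).
    Every added clause is already implied by the original formula, so the
    result is equisatisfiable with it.
    """
    optimal = [tuple(c) for c in original_clauses]
    optimal += [(lit,) for lit in backbones]        # backbone unit clauses
    for assumptions, deduced in entries.items():
        for g in deduced - assumptions:
            # A ⇒ g becomes the clause (¬a₁ ∨ … ∨ ¬aₖ ∨ g)
            optimal.append(tuple(-a for a in assumptions) + (g,))
    return optimal
```

Adding only implied clauses is what preserves equisatisfiability: no model is gained or lost, but the solver reaches each consequence in one propagation step.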
When Kissat alone cannot solve the instance within budget:
D.deduce(branch) prunes contradictory branches before any dispatch. Each surviving branch feeds the learning loop:

1. Solver(branch=[a, b, c]) → UNSAT
2. Dictionary learns the blocking clause (¬a ∨ ¬b ∨ ¬c)
3. CHAIN propagates → new backbones emerge
4. CONFLICT detects → more forced literals
5. Optimal rebuilt → smaller search space for the next solvers
- The deduced set grows exponentially with each split level: at level k, the dictionary has accumulated blocking clauses from all previous UNSAT branches.
- D.deduce(branch) at level k therefore forces far more variables than at level 0.
- Most branches at level k are pruned before any solver is dispatched.
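The splitter described above can be sketched as a recursion that consults the dictionary before paying for any worker. Here `deduce` returns None for refuted branches, `dispatch` stands in for firing a Lambda solver, and the lowest-undecided-variable heuristic is purely illustrative:

```python
def split_and_dispatch(branch, n_vars, deduce, dispatch, max_depth, depth=0):
    """Binary split with dictionary pruning (illustrative sketch).

    deduce(branch)   -> set of forced literals, or None if contradictory
    dispatch(branch) -> hands the branch to a parallel solver worker
    Returns the number of workers actually dispatched.
    """
    forced = deduce(branch)
    if forced is None:                 # dictionary refutes the branch: pruned
        return 0
    if depth == max_depth:             # leaf: pay for one real solver call
        dispatch(branch)
        return 1
    # pick the lowest-numbered variable not yet decided
    var = next((v for v in range(1, n_vars + 1)
                if v not in forced and -v not in forced), None)
    if var is None:                    # everything already forced
        dispatch(branch)
        return 1
    return sum(split_and_dispatch(branch + [lit], n_vars, deduce,
                                  dispatch, max_depth, depth + 1)
               for lit in (var, -var))
```

Because `deduce` runs before `dispatch`, a stronger dictionary at level k translates directly into fewer billed solver invocations.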
| Quantity | Complexity | Notes |
|---|---|---|
| T_compile | O(m^0.47) | sub-linear in clause count m (empirical) |
| T_round | O(atom + λ) | atom = per-worker Kissat budget, λ = Lambda overhead (~200 ms) |
| T_total | O(R × (atom + λ)) | R = rounds to convergence |
| R | O(log n) with learning | dictionary prunes exponentially |
| $_total | O(W × R × atom) | W = workers per round |
Key insight: with enough workers, wall-clock time decouples from problem size. Every variable is probed in parallel, so the bottleneck is rounds × (atom + network latency), not n × atom.
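Plugging illustrative numbers into the cost model above makes the time/dollars split concrete. All inputs here are assumptions for the sake of the arithmetic, and spend is measured in worker-seconds rather than a specific Lambda price:

```python
import math

atom, lam = 2.0, 0.2         # seconds: Kissat budget per worker, Lambda overhead
n, W = 500, 1000             # variables in the instance; workers per round
R = math.ceil(math.log2(n))  # rounds, assuming the O(log n) convergence above

wall_clock = R * (atom + lam)   # what the user waits for: independent of W
spend = W * R * atom            # worker-seconds billed: the "dollars" axis
print(wall_clock, spend)        # ~19.8 s of wall clock vs 18000 worker-seconds
```

Doubling W leaves `wall_clock` untouched and doubles `spend`: the knob trades money for coverage of the search tree, not for latency.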
```
Client → EC2 (Kissat, 2s) → SAT/UNSAT? done.
   ↓ TIMEOUT
LogicSpace Λ (SEED+CHAIN, 30ms, sole S3 writer)
   ↓
Tree Λ (generate branches, fire solvers)
   ↓
N × Solver Λ (Kissat on optimal + assumptions)
   ↓ on completion
LogicSpace Λ (learn clause, chain, update optimal)
   ↓
Tree Λ (new branches on stronger optimal)
   ↓
… converge …
```
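The control flow in the diagram can be sketched as a single driver loop. The four callables are hypothetical stand-ins for the EC2 solver and the LogicSpace/Tree/Solver Lambda functions; in the real system the inner loop runs in parallel, not sequentially:

```python
def solve_rounds(formula, solve, learn, generate_branches, max_rounds=20):
    """Round loop: solve directly, else split, learn, and re-split.

    solve(formula, assumptions)  -> "SAT", "UNSAT", or "TIMEOUT"
    learn(formula, branch)       -> stronger formula after an UNSAT branch
    generate_branches(formula)   -> branches for the next round's workers
    """
    result = solve(formula, [])            # EC2 Kissat first: easy case
    if result != "TIMEOUT":
        return result
    for _ in range(max_rounds):
        branches = generate_branches(formula)
        if not branches:                   # every branch already refuted
            return "UNSAT"
        for branch in branches:            # dispatched in parallel in practice
            result = solve(formula, branch)
            if result == "SAT":
                return "SAT"
            if result == "UNSAT":          # learn the blocking clause, re-chain
                formula = learn(formula, branch)
    return "TIMEOUT"
```

Each UNSAT branch strengthens `formula` before the next round, which is what makes later rounds cheaper than earlier ones.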
| Instance | Kissat direct | Method |
|---|---|---|
| 3 vars, 2 clauses (trivial) | 1.4ms | EC2 Kissat |
| 100 vars, 430 clauses (UNSAT, phase transition) | 31ms | EC2 Kissat |
| 500 vars, 2130 clauses (hard) | TIMEOUT | Splitter swarm |
The dictionary is a LogicSpace (Dana, 2026) — a polynomial-time knowledge compiler for SAT. The theoretical foundation is the Dana Theorem (2024): any indicator function over a finite discrete domain can be encoded as a SAT instance in polynomial time. Decision tree bucketing reduces this to linear time.
AUMA (Dana, 2023) established that any f: {0,1}ⁿ → ℝ can be expressed as a weighted MAX-SAT instance. npdollars applies this insight operationally: the dictionary compiles SAT structure in polynomial time, and parallel Kissat workers exploit that structure to solve in dollars what would otherwise cost in time.
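To see the flavor of the indicator-function claim, here is a naive table-driven encoding: each zero of f contributes one blocking clause, so the construction is polynomial in the size of the finite domain. The cited decision-tree bucketing presumably does much better; this sketch is ours, not the paper's construction.

```python
from itertools import product

def indicator_to_cnf(f, n):
    """Encode f: {0,1}^n -> {0,1} as a CNF whose models are exactly f's 1-set.

    Variables are 1..n; one blocking clause per zero of f.
    """
    clauses = []
    for bits in product((0, 1), repeat=n):
        if f(bits) == 0:
            # block this assignment: at least one variable must differ
            clauses.append(tuple(-(i + 1) if b else (i + 1)
                                 for i, b in enumerate(bits)))
    return clauses

def satisfies(bits, clauses):
    """Check a 0/1 assignment tuple against the CNF."""
    return all(any((bits[abs(l) - 1] == 1) == (l > 0) for l in c)
               for c in clauses)
```

For example, the indicator of 3-bit AND yields seven blocking clauses, one per rejected assignment.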
Charles Dana · Monce SAS · npdollars.aws.monce.ai · April 2026
Co-Authored-By: Claude (Anthropic)