Pathfinding Performance Benchmark:
A* vs Field-Based Navigation

Rigorous benchmarks on standardized grids, with every returned path validated. All results are reproducible, and benchmark scripts are available to evaluation partners.

Test Conditions

Grid: 256×256, 8-directional movement, √2 diagonal cost, 20% obstacle density
Agents: 100, with dynamic obstacle changes between ticks
Seeds: 20 random seeds per scenario for statistical robustness
Validation: 100% of paths verified obstacle-free
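The validation criterion (every path obstacle-free, using only legal 8-directional steps) can be expressed as a short check. This is an illustrative sketch, not the benchmark's actual validation script; `validate_path` and its grid encoding (0 = free, 1 = obstacle) are assumptions:

```python
def validate_path(grid, path):
    """Return True if every cell is obstacle-free (grid value 0) and
    each consecutive pair of cells is one legal 8-directional step."""
    for r, c in path:
        if grid[r][c] != 0:
            return False  # path passes through an obstacle
    for (r1, c1), (r2, c2) in zip(path, path[1:]):
        if max(abs(r1 - r2), abs(c1 - c2)) != 1:
            return False  # step is a jump, or not a move at all
    return True
```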

Head-to-Head Comparison

Scenario                                     A* Baseline   StrataNav       Speedup
Game tick (100 agents, dynamic)              111.7 ms      7.06 ms         15×
Replanning (1,000 queries, query-only)       baseline      13.49× faster   13×
Replanning (incl. amortized precompute)      baseline      8.78× faster    8.8×
Batch pathfinding (50 queries, 256² grid)    baseline      2× faster       2×
Path success rate                            100%          100%            parity
Path quality vs. optimal                     optimal       ~1.28× cost     near-optimal

Why the Difference Is Architectural

A* Scaling

Cost per tick = Agents × Grid complexity × Obstacle changes

All three factors multiply. At 100 agents, A* consumes 111.7 ms per tick, more than an entire 60 fps frame budget (16.6 ms).
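The multiplicative cost is visible in a minimal A* sketch under the benchmark's grid conventions (8-connected, √2 diagonals). This is illustrative, not the benchmark's baseline implementation: each call searches the grid from scratch, so with dynamic obstacles the whole search repeats for every agent, every tick.

```python
import heapq
import math

def astar(grid, start, goal):
    """Minimal 8-connected A* with an octile-distance heuristic.
    grid: 2D list, 0 = free, 1 = obstacle. Returns a list of cells or None."""
    def h(p):  # octile distance: admissible for sqrt(2)-cost diagonals
        dr, dc = abs(p[0] - goal[0]), abs(p[1] - goal[1])
        return (math.sqrt(2) - 1) * min(dr, dc) + max(dr, dc)

    rows, cols = len(grid), len(grid[0])
    g = {start: 0.0}          # best known cost-so-far per cell
    came = {}                 # back-pointers for path reconstruction
    pq = [(h(start), start)]  # frontier ordered by f = g + h
    while pq:
        _, cur = heapq.heappop(pq)
        if cur == goal:
            path = [cur]
            while cur in came:
                cur = came[cur]
                path.append(cur)
            return path[::-1]
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == dc == 0:
                    continue
                nxt = (cur[0] + dr, cur[1] + dc)
                if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                        and grid[nxt[0]][nxt[1]] == 0):
                    ng = g[cur] + (math.sqrt(2) if dr and dc else 1.0)
                    if ng < g.get(nxt, math.inf):
                        g[nxt] = ng
                        came[nxt] = cur
                        heapq.heappush(pq, (ng + h(nxt), nxt))
    return None  # goal unreachable
```

With N agents and obstacles changing between ticks, this whole search runs N times per tick, which is the multiplicative blowup the formula describes.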

StrataNav Scaling

Cost per tick = Number of queries

Grid complexity is absorbed at precompute time, goal changes are free, and agent count does not affect per-query cost.
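StrataNav's internals are not shown here, but the general pattern it describes, a goal-rooted distance field computed once so that each agent query is a constant-time neighbor lookup, can be sketched generically. The names `build_field` and `next_step` are illustrative assumptions:

```python
import heapq
import math

def build_field(grid, goal):
    """Dijkstra from the goal over an 8-connected grid (sqrt(2) diagonals).
    Paid once per field, regardless of how many agents will query it."""
    rows, cols = len(grid), len(grid[0])
    dist = {goal: 0.0}
    pq = [(0.0, goal)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if d > dist.get((r, c), math.inf):
            continue  # stale queue entry
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == dc == 0:
                    continue
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                    nd = d + (math.sqrt(2) if dr and dc else 1.0)
                    if nd < dist.get((nr, nc), math.inf):
                        dist[(nr, nc)] = nd
                        heapq.heappush(pq, (nd, (nr, nc)))
    return dist

def next_step(dist, pos):
    """Per-agent query: constant work -- inspect at most 8 neighbors and
    move to the one with the lowest precomputed distance-to-goal."""
    r, c = pos
    best, best_d = pos, dist.get(pos, math.inf)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            nd = dist.get((r + dr, c + dc), math.inf)
            if nd < best_d:
                best, best_d = (r + dr, c + dc), nd
    return best
```

One `build_field` call serves any number of agents heading to the same goal; each agent's per-tick work is a bounded neighbor scan, which is why per-query cost is independent of agent count and grid size.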

Benchmark scripts are available to evaluation partners for independent verification. Absolute timings vary with hardware, but the speedup ratios compare both implementations on the same machine.

See It On Your Maps

7-day evaluation. Your grids. Your workloads. No cost. No integration required. Just connect over HTTP and measure.

Request Evaluation Access