Pathfinding Performance Benchmark:
A* vs Field-Based Navigation
Rigorous benchmarks on standardized grids, with every returned path validated as obstacle-free. All results are reproducible; benchmark scripts are available to evaluation partners.
Test Conditions
- **Grid:** 256×256, 8-directional movement, √2 diagonal cost, 20% obstacle density
- **Agents:** 100, with dynamic obstacle changes between ticks
- **Seeds:** 20 random seeds per scenario for statistical robustness
- **Validation:** 100% of paths verified obstacle-free
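For reference, the A* baseline under these conditions can be sketched as follows. This is a minimal illustrative implementation, not the benchmark harness itself; it uses the standard octile heuristic for 8-directional movement with √2 diagonal cost, matching the grid setup above:

```python
import heapq, math

SQRT2 = math.sqrt(2)
# 8-directional moves: straight steps cost 1, diagonal steps cost √2.
MOVES = [(dx, dy, SQRT2 if dx and dy else 1.0)
         for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

def octile(a, b):
    """Admissible heuristic for 8-directional grids with √2 diagonals."""
    dx, dy = abs(a[0] - b[0]), abs(a[1] - b[1])
    return (dx + dy) + (SQRT2 - 2) * min(dx, dy)

def astar(grid, start, goal):
    """grid[y][x] == 1 marks an obstacle. Returns path cost, or None if unreachable."""
    h, w = len(grid), len(grid[0])
    open_heap = [(octile(start, goal), 0.0, start)]
    best = {start: 0.0}
    while open_heap:
        f, g, (x, y) = heapq.heappop(open_heap)
        if (x, y) == goal:
            return g
        if g > best.get((x, y), float("inf")):
            continue  # stale heap entry
        for dx, dy, step in MOVES:
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h and not grid[ny][nx]:
                ng = g + step
                if ng < best.get((nx, ny), float("inf")):
                    best[(nx, ny)] = ng
                    heapq.heappush(open_heap,
                                   (ng + octile((nx, ny), goal), ng, (nx, ny)))
    return None
```

The validation step in the harness corresponds to checking that every cell on a returned path is obstacle-free; here, obstacles are simply excluded during expansion.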
Head-to-Head Comparison
| Scenario | A* Baseline | StrataNav | Speedup |
|---|---|---|---|
| Game tick (100 agents, dynamic) | 111.7 ms | 7.06 ms | 15.8× |
| Replanning (1000 queries, query-only) | 1× (baseline) | 13.49× faster | 13.5× |
| Replanning (incl. amortized precompute) | 1× (baseline) | 8.78× faster | 8.8× |
| Batch pathfinding (50 queries, 256² grid) | 1× (baseline) | 2× faster | 2× |
| Path success rate | 100% | 100% | parity |
| Path quality vs. optimal | optimal | ~1.28× path cost | near-optimal |
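The gap between the two replanning rows comes from spreading the one-off field precompute across the query batch. A minimal sketch of that arithmetic; the timings below are hypothetical placeholders chosen only to illustrate the effect, not the benchmark's measurements:

```python
def amortized_speedup(t_baseline, t_query, t_precompute, n_queries):
    """Speedup after spreading a one-off precompute cost over n_queries.
    All times share one unit (e.g. ms per query); values below are hypothetical."""
    return t_baseline / (t_query + t_precompute / n_queries)

# Query-only speedup ignores the precompute entirely:
query_only = amortized_speedup(t_baseline=1.0, t_query=0.074,
                               t_precompute=0.0, n_queries=1000)
# Including the precompute always lowers the ratio:
amortized = amortized_speedup(t_baseline=1.0, t_query=0.074,
                              t_precompute=40.0, n_queries=1000)
assert amortized < query_only
```

As the batch grows, the precompute term `t_precompute / n_queries` shrinks, so the amortized ratio approaches the query-only ratio.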
Why the Difference Is Architectural
A*: Cost = Agents × Complexity × Changes
All three factors multiply together. At 100 agents, A* consumes 111.7 ms per tick, more than an entire 60 fps frame budget (16.6 ms).
StrataNav: Cost = Number of Queries
Grid complexity is absorbed at precompute time, goal changes are free, and agent count does not affect per-query cost.
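The two cost models can be made concrete with a toy calculation. The per-agent A* time is implied by the table (111.7 ms / 100 agents ≈ 1.117 ms); the field-side precompute and lookup times below are assumed values for illustration, not benchmark measurements:

```python
def astar_tick_cost_ms(agents, per_query_ms):
    # A*: every agent replans against the changed grid each tick,
    # so per-tick cost grows linearly with agent count.
    return agents * per_query_ms

def field_tick_cost_ms(queries, per_lookup_ms, precompute_ms):
    # Field-based: one precompute per grid change absorbs the complexity;
    # each agent query is then a cheap field lookup.
    return precompute_ms + queries * per_lookup_ms

# Per-agent A* cost implied by the table: 111.7 ms / 100 agents = 1.117 ms.
print(astar_tick_cost_ms(100, 1.117))        # ≈ 111.7 ms, over a 60 fps frame

# Hypothetical field numbers: a 6 ms precompute plus 0.01 ms per lookup
# stays within a 16.6 ms frame even at 1000 agents.
print(field_tick_cost_ms(1000, 0.01, 6.0))   # 16.0 ms
```

The structural point is the exponent on agent count: the A* curve is linear in agents per tick, while the field curve pays its dominant cost once per grid change.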
Benchmark scripts are available to evaluation partners for independent verification. Because both systems were measured on the same machine, the reported speedup ratios are largely hardware-independent.