
| Prev | Next |

10. Findings and Recommendations

The following findings and recommendations are based on the results of the empirical study as well as informal observations made during the development of the Central64 library. Unless otherwise stated, the discussion pertains mainly to the heuristic methods.

  1. The Central A* algorithm in Goldstein et al. (2022) is still recommended as a simple and effective means of extending a typical grid-based A* solver to produce more direct paths. The Central64 project involved a number of attempts to accelerate the path counting procedure. One such attempt was to (1) eliminate the priority queue from the path counting operation, (2) replace it with a standard C++ vector, and (3) rely on the structure of the directed acyclic graph to ensure vertices are processed in a viable order. This change produced no observable benefit, so the priority queue was restored. As previously mentioned, the Central64 library does introduce a technique of traversing g-costs to produce the directed acyclic graph. This technique is advantageous in that it allows central path planning to be applied to variations of Jump Point Search. However, if one intends to implement Central A* without jumping, the results suggest that the algorithm in the original paper may introduce slightly less overhead than the new g-cost traversal approach.
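The path counting procedure described above can be illustrated with a minimal sketch. The `Vertex` struct and `CountPaths` function below are hypothetical simplifications for illustration, not the Central64 API: vertices are popped from a priority queue in order of nondecreasing g-cost, and shortest-path counts are accumulated along the edges of the directed acyclic graph.

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

// Hypothetical sketch of forward path counting over a shortest-path DAG.
// Each DAG edge points from a vertex to a successor with a strictly
// greater g-cost, so popping vertices in order of nondecreasing g-cost
// guarantees that a vertex's count is complete before it is processed.
struct Vertex {
    double g = 0.0;              // g-cost from the source
    std::vector<int> successors; // DAG edges directed toward the goal
    uint64_t count = 0;          // number of shortest grid paths from source
};

void CountPaths(std::vector<Vertex>& dag, int source) {
    using Entry = std::pair<double, int>; // (g-cost, vertex index)
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> queue;
    std::vector<bool> processed(dag.size(), false);
    dag[source].count = 1;
    queue.push({dag[source].g, source});
    while (!queue.empty()) {
        int u = queue.top().second;
        queue.pop();
        if (processed[u]) continue; // skip duplicate queue entries
        processed[u] = true;
        for (int v : dag[u].successors) {
            if (dag[v].count == 0) queue.push({dag[v].g, v});
            dag[v].count += dag[u].count; // accumulate counts along DAG edges
        }
    }
}
```

A vertex reached along two branches of the DAG, as in a diamond-shaped subgraph, accumulates the sum of the counts of its predecessors.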

  2. The Bounded Jump Point Search method of Sturtevant & Rabin (2016) is recommended as the fastest grid path search method that does not rely on precomputation. The Central64 project involved a number of attempts to accelerate this established search procedure. These attempts included alternative ways of bounding the jump operation, such as bounding the f-cost instead of the g-cost. None of these attempts produced a noticeable benefit. Different jump costs were also informally tested, but all values between roughly 4 and 16 appeared to be equally effective. A jump cost of 8, as used in the experiment, seems to be a reasonable default. Bounded Jump Point Search is marginally faster than Jump Point Search for the tested benchmark set, and roughly twice as fast as A*. The relative speedup obtained by jumping is slightly reduced for central grid path planning, since jumping accelerates the search without accelerating the path counting operation.
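The idea of bounding the jump operation can be sketched as follows. The `Grid` type and `BoundedJumpEast` function are hypothetical illustrations rather than the library's implementation, and the forced-neighbor and goal checks of full Jump Point Search are omitted for brevity: the scan simply stops after `bound` steps (8 in the experiment) or just before a blocked cell, and the node reached becomes an ordinary successor.

```cpp
#include <cassert>
#include <vector>

// Hypothetical occupancy grid used to illustrate a bounded jump.
struct Grid {
    int width = 0, height = 0;
    std::vector<bool> blocked; // row-major occupancy flags
    bool Open(int x, int y) const {
        return x >= 0 && x < width && y >= 0 && y < height &&
               !blocked[y * width + x];
    }
};

// Scans in the +x direction and returns the number of steps taken before
// the scan stops, whether at the cost bound, a blocked cell, or the edge
// of the grid. Bounding by step count here stands in for bounding the
// g-cost accumulated along the jumped run.
int BoundedJumpEast(const Grid& grid, int x, int y, int bound = 8) {
    int steps = 0;
    while (steps < bound && grid.Open(x + steps + 1, y)) {
        ++steps;
    }
    return steps;
}
```

Bounding the scan in this way caps the worst-case work per jump, which is the property that makes the bounded variant competitive on maps with long open corridors.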

  3. The new mixed search methods were not effective at improving runtimes relative to the established search methods. Mixed A* was the slowest of all the methods tested. Mixed Jump Point Search was faster than A*, but failed to surpass the established variations of Jump Point Search. One of the original intentions of the mixed search methods was to improve cache locality by representing all nodes in each block using a contiguous chunk of memory. However, all attempts to improve performance by changing the memory layout failed, so the traditional row-based memory scheme was ultimately adopted. Different block sizes were also informally tested, but the results were either similar to or worse than those of the 8-by-8 convention used in the experiment. The results suggest that precomputing path lengths within each block, as in the Block A* method proposed by Yap et al. (2011), may be necessary to fully benefit from a block-based search.
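The block-based memory layout that was attempted can be illustrated by its index mapping. The `BlockIndex` function below is a hypothetical sketch, assuming 8-by-8 blocks stored contiguously in row-major order within the node array; this is the kind of layout that was tried and abandoned in favor of the traditional row-based scheme.

```cpp
#include <cassert>

// Hypothetical block-based index mapping: the 64 nodes of each 8-by-8
// block occupy a contiguous chunk of the node array, and blocks are
// themselves laid out in row-major order across the grid.
constexpr int kBlock = 8;

int BlockIndex(int x, int y, int gridWidth) {
    int blocksPerRow = (gridWidth + kBlock - 1) / kBlock; // ceiling division
    int blockId = (y / kBlock) * blocksPerRow + (x / kBlock);
    int offset = (y % kBlock) * kBlock + (x % kBlock);
    return blockId * kBlock * kBlock + offset;
}
```

In principle, nodes expanded within the same block then share cache lines, which was the motivation for the attempt; in practice, as noted above, the change produced no measurable benefit.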

  4. Adopting central grid path planning is generally recommended over doubling the neighborhood size, if one must choose between the two enhancements. An exception to this rule may be made for the 4-neighborhood, where path counting introduces considerable overhead. However, the 4-neighborhood performs poorly relative to the 8-neighborhood, and should generally be reserved for applications where a 4-neighbor grid path is the desired output. For the 8-neighborhood and up, switching from regular to central path planning leads to a greater improvement in path quality than doubling the neighborhood size, and in many cases these higher quality results are obtained in a shorter amount of time. For example, 8-Neighbor Central A* dominates 16-Neighbor Regular A*. Also, 16-Neighbor Central Jump Point Search dominates 32-Neighbor Regular Jump Point Search, regardless of whether jump distances are bounded. These observations assume that all grid paths are subsequently smoothed.

  5. Overall, 16-Neighbor Central Bounded Jump Point Search with Tentpole Smoothing is recommended as the investigated method that provides the best tradeoff between quality and speed. The speed of this variation is essentially the same as the standard method, 8-Neighbor A* Search with Greedy Smoothing, but the quality of the resulting paths is dramatically improved. The recommended method also appears to offer a clear performance advantage over the basic Theta* any-angle path planning method by Daniel et al. (2010), which integrates line-of-sight checks into the search procedure. The recommended combination of 16-Neighbor Central Bounded Jump Point Search with Tentpole Smoothing yielded an average runtime 4 times lower than the runtime of Theta* reported by Goldstein et al. (2022). The 4-fold difference in speed may be partially due to differences in the design of the underlying data structures, though both experiments were implemented in C++ and executed on the same machine. The recommended method yielded an angular suboptimality score roughly 3 times lower than that of Theta*, so both speed and quality are significantly improved.

  6. For the all-nodes methods such as Dijkstra Search and Canonical Dijkstra Search, the relative runtime cost of path counting is higher than it is for the corresponding heuristic methods. The relative runtime cost of increasing the neighborhood size, however, is lower. This observation assumes that roughly 25 paths are sampled for each source vertex, and therefore the central path approach requires roughly 25 path counting operations for every search. Similar to the heuristic methods, 16-Neighbor Central Bounded Canonical Dijkstra Search with Tentpole Smoothing provides a reasonable trade-off between quality and speed. The 32-neighbor variation of the same method produces even higher quality paths in exchange for a modest increase in runtime, though it is not clear that this additional improvement in quality is necessary for most applications. Regardless of whether path counting is employed, the results show no significant difference in performance between Canonical Dijkstra Search and the less conventional Bounded Canonical Dijkstra Search.

| Prev | Next |