Beam search
In the main A* loop, the OPEN set stores all the nodes that may need to be searched to find a path. Beam search is a variation of A* that places a limit on the size of the OPEN set. If the set becomes too large, the node with the worst chances of giving a good path is dropped. One drawback is that you have to keep your set sorted to do this, which limits the kinds of data structures you'd choose.
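Here's a minimal sketch in Python of a capped OPEN set of this kind; the beam width of 50 and the BeamOpenSet name are illustrative choices, not part of any standard implementation:

    import bisect
    import itertools

    BEAM_WIDTH = 50              # illustrative cap on the OPEN set size
    counter = itertools.count()  # tie-breaker so nodes are never compared

    class BeamOpenSet:
        def __init__(self):
            self.entries = []    # kept sorted by f value, best first

        def push(self, f_value, node):
            bisect.insort(self.entries, (f_value, next(counter), node))
            if len(self.entries) > BEAM_WIDTH:
                self.entries.pop()            # drop the worst node (largest f)

        def pop_best(self):
            _, _, node = self.entries.pop(0)  # node with the smallest f
            return node

Keeping the entries sorted makes each insertion O(n) on a plain list; that is the data-structure cost mentioned above, and a balanced tree or other sorted container would do better.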
Iterative deepening
Iterative Deepening is an approach used in many AI algorithms to start with an approximate answer, then make it more accurate. The name comes from game tree searches, where you look some number of moves ahead (for example, in Chess). You can try to deepen the tree by looking ahead more moves. Once your answer doesn't change or improve much, you assume that you have a pretty good answer, and it won't improve when you try to make it more accurate again. In ID-A*, the "depth" is a cutoff for f values. When the f value is too large, the node won't even be considered (i.e., it won't be added to the OPEN set). The first time through you process very few nodes. Each subsequent pass, you increase the number of nodes you visit. If you find that the path improves, then you continue to increase the cutoff; otherwise, you can stop. For more details, read these lecture notes on ID-A*.
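Here's a minimal recursive ID-A* sketch in Python, assuming a neighbors(node) function that yields (next_node, cost) pairs with positive costs and a heuristic(node) estimate of the distance to the goal; both names are placeholders:

    def ida_star(start, goal, neighbors, heuristic):
        cutoff = heuristic(start)
        while True:
            result = search(start, 0, cutoff, goal, neighbors, heuristic)
            if isinstance(result, list):
                return result        # found a path within this cutoff
            if result == float('inf'):
                return None          # no path exists at all
            cutoff = result          # deepen: smallest f that exceeded the cutoff

    def search(node, g, cutoff, goal, neighbors, heuristic):
        f = g + heuristic(node)
        if f > cutoff:
            return f                 # too expensive for this pass
        if node == goal:
            return [node]
        minimum = float('inf')
        for next_node, cost in neighbors(node):
            result = search(next_node, g + cost, cutoff, goal,
                            neighbors, heuristic)
            if isinstance(result, list):
                return [node] + result
            minimum = min(minimum, result)
        return minimum

Each pass revisits nodes from earlier passes, trading extra time for the memory saved by not keeping an OPEN set; that is the trade-off discussed below.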
I personally don't see much need for ID-A* for finding paths on game maps. ID algorithms tend to increase computation time while reducing memory requirements. In map pathfinding, however, the "nodes" are very small―they are simply coordinates. I don't see a big win from not storing those nodes.
Dynamic weighting
With dynamic weighting, you assume that at the beginning of your search, it's more important to get (anywhere) quickly; at the end of the search, it's more important to get to the goal.
f(p) = g(p) + w(p) * h(p)
There is a weight (w >= 1) associated with the heuristic. As you get closer to the goal, you decrease the weight; this decreases the importance of the heuristic, and increases the relative importance of the actual cost of the path.
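Here's a minimal sketch of one way to compute such a weight, assuming a two-argument heuristic(a, b) distance estimate; the linear falloff and the W_MAX value of 2.0 are illustrative choices, not prescribed by the technique:

    W_MAX = 2.0  # weight at the start of the search; an illustrative value

    def dynamic_f(g, node, start, goal, heuristic):
        total = heuristic(start, goal)     # rough scale of the whole search
        remaining = heuristic(node, goal)  # estimated distance still to go
        if total > 0:
            # w falls linearly from W_MAX toward 1 as the node nears the goal
            w = 1.0 + (W_MAX - 1.0) * (remaining / total)
        else:
            w = 1.0
        return g + w * remaining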
Bandwidth search
There are two properties of Bandwidth Search that some people may find useful. This variation assumes that h is an overestimate, but that it doesn't overestimate by more than some number e. If this is the case in your search, then the path you get will have a cost that doesn't exceed the best path's cost by more than e. Once again, the better you make your heuristic, the better your solution will be.
Another property you get is that you may be able to drop some nodes in the OPEN set. Whenever h+d is greater than the true cost of the path (for some d), you can drop any node that has an f value that's at least e+d higher than the f value of the best node in OPEN. This is a strange property. You have a "band" of good values for f; everything outside this band can be dropped, because there is a guarantee that it will not be on the best path.
Curiously, you can use different heuristics for the two properties, and things still work out. You can use one heuristic to guarantee that your path isn't too bad, and another one to determine what to drop in the OPEN set.
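Here's a minimal sketch of that pruning rule, assuming the OPEN set is held as a list of (f, node) pairs and that e and d are the bounds described above; all names are placeholders:

    def prune_open_set(open_set, e, d):
        # open_set is a list of (f, node) pairs; e and d are the bounds above
        if not open_set:
            return open_set
        best_f = min(f for f, _ in open_set)
        band = best_f + e + d
        # anything above the band is guaranteed not to be on the best path
        return [(f, node) for f, node in open_set if f <= band]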
Bidirectional search
Instead of searching from the start to the finish, you can start two searches in parallel―one from start to finish, and one from finish to start. When they meet, you should have a good path.
This sounds like a good idea, but I don't think it'll give you very much. The idea behind bidirectional searches is that searching results in a "tree" that fans out over the map. A big tree is much worse than two small trees, so it's better to have two small search trees. With A*, however, my experiments suggest that you don't get a tree. You get a path that has nearby map areas explored, but it doesn't fan out like Dijkstra's algorithm. In fact, that's what makes A* so fast―no matter how long your path is, it doesn't search like crazy, unless the path really is crazy. It tends to search only small areas of the map. Bidirectional search may be more useful if your map is complex.
The front-to-front variation links the two searches together. Instead of choosing the best forward-search node―g(start,x) + h(x,goal)―or the best backward-search node―g(y,goal) + h(start,y)―this algorithm chooses a pair of nodes with the best g(start,x) + h(x,y) + g(y,goal).
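Here's a minimal sketch of that selection rule, assuming each search keeps its g values in a dictionary and that h(a, b) is a two-argument heuristic; all names are placeholders:

    def best_pair(forward_open, backward_open, g_forward, g_backward, h):
        # choose (x, y) minimizing g(start,x) + h(x,y) + g(y,goal)
        return min(
            ((x, y) for x in forward_open for y in backward_open),
            key=lambda pair: g_forward[pair[0]] + h(*pair) + g_backward[pair[1]],
        )

Note that scanning every (x, y) pair is quadratic in the size of the two frontiers, which is one reason front-to-front evaluation is expensive in practice.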
The retargeting approach abandons simultaneous searches in the forward and backward directions. Instead, it performs a forward search for a short time, chooses the best forward candidate, and then performs a backward search―not to the starting point, but to that candidate. After a while, it chooses a best backward candidate and performs a forward search from the best forward candidate to the best backward candidate. This process continues until the two candidates are the same point.
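Here's a sketch of that alternation, assuming a hypothetical bounded_search(a, b) helper that runs A* from a toward b for a short time and returns the best candidate node found:

    def retargeting_search(start, goal, bounded_search):
        # bounded_search(a, b) is a hypothetical helper: it runs A* from a
        # toward b for a short time and returns the best candidate node found
        forward, backward = start, goal
        while forward != backward:
            forward = bounded_search(forward, backward)    # forward pass
            if forward == backward:
                break
            backward = bounded_search(backward, forward)   # backward pass
        return forward  # the point where the two candidates meet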
Dynamic A* and Lifelong Planning A*
There are variants of A* that allow for changes to the world after the
initial path is computed. D* is intended for use when you don't have
complete information. If you don't have all the information, A* can
make mistakes; D*'s contribution is that it can correct those mistakes
without taking much time. LPA* is intended for use when the costs are
changing. With A*, the path may be invalidated by changes to the map;
LPA* can re-use previous A* computations to produce a new path.
However, both D* and LPA* require a lot of space―essentially
you run A* and keep around its internal information (OPEN/CLOSED
sets, path tree, g values), and then when the map
changes, D* or LPA* will tell you if you need to adjust your path to
take into account the map changes.
For a game with lots of moving units, you usually don't want to keep all
that information around, so D* and LPA* aren't applicable. They were
designed for robotics, where there is just one robot―you don't need
to reuse the memory for some other robot's path. If your game has
only one or a small number of units, you may want to investigate D* or
LPA*.