If the problem says «find the cheapest/fastest/shortest path» from here to there, the answer is Dijkstra's Algorithm. Anyone who says otherwise is lying or selling something.

The usual trick is turning your input into a graph in order to run Dijkstra on it. The real trick is using Dijkstra without an actual graph...

## The Problem

There's a two-dimensional elevation map. Each position holds a single lowercase character representing the elevation, with `a` being the lowest and `z` being the highest. There's a starting point holding an `S`, having the same height as an `a`. There's a destination point holding an `E`, having the same height as a `z`. Both `S` and `E` could be *anywhere* in the grid.

The sample input looks like this:

```
Sabqponm
abcryxxl
accszExk
acctuvwj
abdefghi
```

You can move horizontally or vertically, one step at a time. You can move from any position to a neighboring one with an altitude equal to or at most one level higher than the current one, e.g. if you're currently on a position having elevation `c`, you can move to any neighboring `c` or `d`, but not to `e` or higher. You can move from any position to a neighboring one having any lower altitude, with no limitations, e.g. if you're currently on a position having elevation `h`, you can move to any neighboring position with any elevation from `h` down to `a`.
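Both rules collapse into a single comparison: a step is legal exactly when the destination is at most one level above the source. A minimal sketch of that idea (the name `canStep` is mine, not part of the solution, and it ignores the special `S`/`E` markers):

```haskell
-- A step is legal when the destination elevation is at most
-- one level above the source; any descent is allowed.
canStep :: Char -> Char -> Bool
canStep from to = to <= succ from
```

For example, `canStep 'c' 'd'` and `canStep 'h' 'a'` hold, while `canStep 'c' 'e'` does not.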

Starting from `S`, the goal is to reach `E` in as few steps as possible. That's as obvious an application of Dijkstra's Algorithm as it gets.

## Main Program

The main program is pretty much the same as Day 1.

## Parsing the elevation map

Solving Day 8 included a trick to load a two-dimensional `Array` without needing to know the number of rows or columns, as long as all rows have the same length. I will use the same trick again, without describing it.

First, we parse a row of letters. I'm not checking to see if they're only lowercase letters, or `'S'`, or `'E'`, because we assume the input is correct.

```
parseRow :: Parser [Char]
parseRow = many1 (satisfy isLetter)
```

Second, we parse the matrix one row at a time, producing a list of tuples holding positions `(Int,Int)` and the `Char` stored there.

```
parseMatrix :: Int -> Parser [((Int,Int),Char)]
parseMatrix r = (eof >> pure [])
            <|> do cs <- parseRow <* newline
                   rs <- parseMatrix (succ r)
                   pure $ zip (zip (repeat r) [0..]) cs ++ rs
```
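The index-generation expression `zip (zip (repeat r) [0..]) cs` can be tried in isolation; here it is pulled out as a standalone function (my name for it, not part of the solution):

```haskell
-- Pair every character of row r with its (row, column) coordinate.
indexRow :: Int -> [Char] -> [((Int, Int), Char)]
indexRow r cs = zip (zip (repeat r) [0..]) cs
```

`indexRow 0 "abc"` yields `[((0,0),'a'),((0,1),'b'),((0,2),'c')]`: the infinite `repeat`/`[0..]` lists are cut down by `zip` to the length of the row.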

The only trick worth noting is dealing with the special `'S'` and `'E'` in the input, as they could appear *anywhere*. One way of doing that is by embedding the search within the `Parser`, maybe using state, and saving the coordinates when we find them. I opted for a simpler solution because the array is not that big, and it needs to be read in full anyway: look for the starting position after loading the full list.

The main function of the `Data.Array` API is

```
array :: Ix i => (i, i) -> [(i, e)] -> Array i e
```

The second argument holds the list of pairs with indexes and elements to place, precisely what `parseMatrix` produces. The documentation mentions that in case of *multiple* tuples with *the same* coordinate, the *last* one in the list will be the one going into the array.
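That behavior is easy to check on a toy array. A small sketch (this is how GHC's `Data.Array` resolves duplicate indices; other implementations may differ):

```haskell
import Data.Array (array, (!))

-- Index 0 appears twice; with GHC, the later 'a' overwrites the 'x'.
lastWins :: Char
lastWins = array (0, 1) [(0, 'x'), (1, 'y'), (0, 'a')] ! 0
```

`lastWins` evaluates to `'a'`, which is exactly what the overwriting trick below relies on.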

The trick is quite simple:

1. Load everything using `parseMatrix`.
2. Find the coordinates of the position holding the `S` and save it.
3. Pass everything to `array` and *add* a new `'a'` with the coordinates saved in step (2), so it ends up overwriting the original one.

This design decision came while solving Part A, and it worked. For Part B, we are asked to find the shortest path starting from *any* of the positions holding an `'a'`. This did not change the trick much: instead of returning a *single* starting point, return a *list* of starting points. In the interest of brevity, I present the final version of the top-level parsing function, implementing the latter.

```
loadMatrix :: String -> (Array (Int,Int) Char, [(Int,Int)])
loadMatrix input =
  case parse (parseMatrix 0) "loadMatrix" input of
    Left _e -> undefined
    Right l -> (array', start:allOtherAs)
      where
        (start,_)  = head $ filter ((=='S').snd) l
        allOtherAs = map fst $ filter ((=='a').snd) l
        array'     = array ((0,0), maximum $ map fst l)
                           (l ++ [(start,'a')])
```

## Graph-less Dijkstra

Dijkstra's Algorithm requires a set of nodes and the weighted edges connecting them. In our case, each position is a node, the edges are the valid steps from one position to another, and all have the same unit weight («one step», duh).

At this point, one tries to transform the `Array` into a graph data structure, such as Haskell's `Data.Graph`, and then either use a library implementation of Dijkstra's Algorithm, or choose a language with better libraries. You do not implement Dijkstra's Algorithm unless you are doing homework or being punished for your (or your manager's) poor selection of programming languages.

I won't be implementing Dijkstra's Algorithm in Haskell. As a matter of fact, I won't be loading the data into a graph *at all*. There's already a Haskell implementation in the form of a higher-order function that can navigate the graph *implicitly*, without it needing to exist as a data structure.

There will never be a need.

### Graph Searching Algorithms

The search-algorithms package provides efficient implementations of BFS, DFS, Dijkstra's Algorithm, and A*: arguably the most frequently used graph-searching algorithms. It provides both pure and monad-embeddable versions, which means you can use them as standalone functions, as I will, or within the context of whatever Monad makes sense for your more complex computation.

On top of that, the implementations are written as higher
order functions: the graph does not need to exist explicitly
as a data structure, it will be explored *implicitly*. This
makes the implementations as general as they can be, because
you can use them with or without explicit graphs, as long
as you feed them the helper functions they need.

So, to use the pure graph-less Dijkstra implementation

```
import Algorithm.Search ( dijkstra )
```

and let's figure out how to build the helper functions it needs. The signature is as follows; the comments are mine, to make the explanation easier.

```
dijkstra :: (Foldable f, Num cost, Ord cost, Ord state)
         => (state -> f state)        -- Successors
         -> (state -> state -> cost)  -- Cost of step
         -> (state -> Bool)           -- Are we there yet?
         -> state                     -- Initial state
         -> Maybe (cost,[state])
```

As mentioned before, each position in our array corresponds to a state in the graph. We can substitute `state ~ (Int,Int)` because tuples have `Ord` instances, satisfying the constraint.

```
dijkstra :: (Foldable f, Num cost, Ord cost)
         => ((Int,Int) -> f (Int,Int))        -- Successors
         -> ((Int,Int) -> (Int,Int) -> cost)  -- Cost of step
         -> ((Int,Int) -> Bool)               -- Are we there yet?
         -> (Int,Int)                         -- Initial state
         -> Maybe (cost,[(Int,Int)])
```

The fourth argument is just the initial state. We get that out of our parsing, so it's a matter of passing it to `dijkstra`.

The second argument is a function that returns the cost of moving between two particular states. For our problem, the cost is always one. For starters, we can make `cost ~ Int`

```
dijkstra :: (Foldable f)
         => ((Int,Int) -> f (Int,Int))       -- Successors
         -> ((Int,Int) -> (Int,Int) -> Int)  -- Cost of step
         -> ((Int,Int) -> Bool)              -- Are we there yet?
         -> (Int,Int)                        -- Initial state
         -> Maybe (Int,[(Int,Int)])
```

and since *every* step costs 1, it follows we can use

```
(\_ _ -> 1)
```

as the second argument. We can also write it as

```
(const $ const 1)
```

which is fancier and longer to type.

The third argument is a function that, given a state, decides if the search has reached its target state. Let's say we have loaded our input into an array `mx`; we can check whether a state is the final state with

```
(\s -> mx ! s == 'E')
```

Finally, the first argument is a function that, given a state, produces *all valid successors* in a `Foldable` of our choice. Since we don't need anything particularly fancy for this problem, let's make `f ~ []` to have

```
dijkstra :: ((Int,Int) -> [(Int,Int)])      -- Successors
         -> ((Int,Int) -> (Int,Int) -> Int) -- Cost of step
         -> ((Int,Int) -> Bool)             -- Are we there yet?
         -> (Int,Int)                       -- Initial state
         -> Maybe (Int,[(Int,Int)])
```

and now the signature for the first argument is complete: given a particular state, we need to produce a list of all the successors that are valid under the rules of the particular graph. Since our elevation map is stored in an `Array`, we write a function that receives the array as its first argument and, thanks to currying, fits nicely as a successor generator.

```
viableNeighbors :: Array (Int,Int) Char
                -> (Int,Int)
                -> [(Int,Int)]
viableNeighbors arr (r,c) =
  [ (r',c') | (r',c') <- [ (r - 1, c)
                         , (r, c - 1)
                         , (r, c + 1)
                         , (r + 1, c)
                         ]
            , (minR,maxR) `inRange` r'
            , (minC,maxC) `inRange` c'
            , arr ! (r,c) >= arr ! (r',c') ||
              arr ! (r,c) == pred (arr ! (r',c')) ||
              arr ! (r,c) `elem` ['y','z'] && arr ! (r',c') == 'E'
  ]
  where
    ((minR,minC),(maxR,maxC)) = bounds arr
```

A list comprehension is the cheapest, cleanest way to generate the four *viable* successors of a particular position:

- The generator computes the positions up, left, right, and down of the given one.
- The first two guards keep only those within the bounds of the array.
- The last guard verifies that the move climbs zero or one level up, descends any amount, or is the border case of reaching the `'E'`.
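The bounds guards rely on `inRange` from `Data.Ix`, which tests membership in an inclusive range. A quick illustration with made-up bounds:

```haskell
import Data.Ix (inRange)

-- inRange takes an inclusive (low, high) pair and tests membership.
rowOk, colOk :: Bool
rowOk = inRange (0, 4) 3  -- 0 <= 3 <= 4 holds
colOk = inRange (0, 7) 9  -- 9 is past the upper bound
```

`rowOk` is `True` and `colOk` is `False`, so a guard like `(minC,maxC) \`inRange\` c'` silently drops out-of-bounds neighbors.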

## Solving Part A

Part A asks for the number of steps (the cost) of the cheapest path from the start node `S` to the target node `E`. Using `dijkstra` and all the auxiliary functions, we write

```
partA :: String -> Int
partA input = fromJust $ fst <$> dijkstra (viableNeighbors mx)
                                          (\_ _ -> 1)
                                          (\s -> mx ! s == 'E')
                                          start
  where
    (mx,start:_) = loadMatrix input
```

As discussed before, `loadMatrix` not only loads the data into the array `mx`, but also figures out the list of possible starting positions. For Part A, we're only interested in the original one that had the `'S'`, which is conveniently returned first in the list of possibilities.

With the appropriate arguments in place, `dijkstra` will try to find the cheapest path from the `start`ing position to the desired destination. The return value is a tuple holding the cost and the actual path, wrapped in a `Maybe`. On account of the input being assumed correct, we are sure there will be a tuple wrapped in a `Just`, so we `fmap` `fst` over it to extract the cost, and then remove the `Just` wrapper to get the result.
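The extraction step can be replayed on a toy value shaped like `dijkstra`'s result (the cost and path here are made up):

```haskell
import Data.Maybe (fromJust)

-- A stand-in for dijkstra's return value: Maybe (cost, path).
found :: Maybe (Int, [(Int, Int)])
found = Just (31, [(0, 0), (0, 1)])

-- fmap fst keeps only the cost inside the Maybe; fromJust unwraps it.
cost :: Int
cost = fromJust (fst <$> found)
```

`cost` is `31`; had `found` been `Nothing`, `fromJust` would crash, which is acceptable here only because the input is assumed correct.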

## Solving Part B

Part B asks us to find the cheapest path considering *all positions holding* `'a'` as possible starting positions. It was at that point that I quickly refactored `loadMatrix` to return a list of all the possible starting states, and adjusted `partA` to use only the first element.

The solution for Part B is essentially the same, with a convenient tweak.

```
partB :: String -> Int
partB input = minimum
            $ map fst
            $ mapMaybe (dijkstra (viableNeighbors mx)
                                 (\_ _ -> 1)
                                 (\s -> mx ! s == 'E')
                       )
                       allStarts
  where
    (mx,allStarts) = loadMatrix input
```

We want to apply `dijkstra` to *every* element of the list of possible starting positions `allStarts` built by `loadMatrix`. That sounds like a `map`. But each answer from `dijkstra` would be a `Maybe`, so we'd need to extract each result. Function `mapMaybe` from `Data.Maybe` is quite convenient

```
mapMaybe :: (a -> Maybe b) -> [a] -> [b]
```

because it `map`s a function returning `Maybe` over a list, and does the unwrapping as well. We extract (`fst`) the costs from all the paths found, and compute the `minimum`.
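A quick illustration of `mapMaybe` both mapping and discarding the `Nothing`s (the predicate here is arbitrary):

```haskell
import Data.Maybe (mapMaybe)

-- Halve the even numbers; odd inputs become Nothing and are dropped.
halves :: [Int]
halves = mapMaybe (\x -> if even x then Just (x `div` 2) else Nothing) [1 .. 6]
```

`halves` is `[1,2,3]`: the evens 2, 4, and 6 survive halved, while 1, 3, and 5 vanish, just as unreachable starting positions vanish from the list of costs.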

## Conclusion

As mentioned on Day 9, there are cues for selecting data structures and algorithms. This challenge screams Dijkstra's Algorithm, and it's just a matter of using it.
If you did not know about algorithms, I encourage you to get
a book on Algorithms & Data Structures and read about the many
algorithms already proven to be *the best* for particular problems.

Graph searching is a fundamental skill for programmers. You must know when and how to use BFS, DFS, Dijkstra, A* and a few others. More importantly, you need to choose a language that has decent library support so you don't need to write these algorithms by hand. These algorithms have applications in game development, robotics, generalized parsing, and other areas of planning and relation analysis.

I made a point of *not* building an explicit graph. It would've meant re-processing the `Array` into a `Data.Graph` structure. It's not hard, but it's an extra step that results in a data structure requiring slightly more memory. That's not so bad for this tiny graph, but still.

Sometimes graphs are so big that they are impractical to reify as a data structure in memory, yet easy to represent as a relation from states to successors. As a matter of fact, there are problems that can be modeled as *globally infinite graphs* (as in an infinite number of nodes and edges) that are *locally finite* (as in every node has a finite number of successors). You cannot model them in memory, but you can model them as *functions*: generate the successors of a node, check if a node is the target, and so on.

Having a Dijkstra implementation that works on *explicit*
finite graphs is great. Having an implementation that works on
*implicit* graphs is better, because it can handle both. In
a functional setting, a polymorphic higher-order implementation
is the right way to go.