# Block Puzzles and Graph Search

September 08, 2019

In this post we are going to take a wooden block puzzle game, map it into a framework for solving it computationally, and then use that framework to generate optimal solutions for any given puzzle layout.

[picture of grid with all cars placed]

This is the kind of result we want to generate: a minimal list of moves required to solve a puzzle layout.

[Example moves]

## Sections

We will build up our understanding of the puzzle and how to generate solutions by looking at distinct concepts, with each concept building on the previous. Here is a quick summary of the sections to come.

• Initially I will simply describe the puzzle, going over how its pieces are set up, how they move, the various rules, etc.
• Next, we'll go over a reusable and general modeling abstraction which we can use to describe not just this puzzle but many other problems like it.
• Then we'll take our newly learned abstraction and see how we can map it to our specific puzzle.
• We'll then learn an algorithm to search over our model and find an optimal solution.
• Finally we'll recap and provide motivation for applying similar concepts to any future problems you might want to solve.

## Setup

To set up the puzzle, you take wooden blocks, which I sometimes also refer to as cars, and arrange them on a grid. Each car sits in either a vertical track or a horizontal track.

The grid is uniform except on the right side, where there is a notch cut out. This notch allows a car to escape the board if it is placed on the same row. For all layouts, a special car is placed along that row such that the other cars block it and have to be carefully moved out of the way.

When you have copied the layout, you begin to move the cars on the uniform grid. The puzzle is solved when you're able to have the specially marked car, red and labeled X, escape the board through the notch.

[Gif of puzzle being solved]

There are a couple of subtle features I want to call out right now, so we can repeatedly refer back to them as we go.

• The game's board is a 6 by 6 grid (we can remove this limitation later).
• There are only two possible directions a piece may move: vertically or horizontally. Later I'll refer to these as the Y and X axes respectively.
• Each car has a name, a width, and a location. One important note: we assume the location coordinate (x, y) is the leftmost and topmost cell of the car. [TODO]
• The puzzle is complete not just when the row is open, but when the special car makes it all the way outside the board.
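To make these features concrete, here is one way a single car could be represented; a minimal Python sketch (the field and constant names are my own choice, not part of the puzzle):

```python
from dataclasses import dataclass

# Orientation constants: horizontal cars slide along the X axis,
# vertical cars along the Y axis.
HORIZONTAL = "H"
VERTICAL = "V"

@dataclass(frozen=True)
class Car:
    name: str          # e.g. "X" for the special red car
    width: int         # how many grid cells the car covers
    x: int             # leftmost cell (column), 0-indexed
    y: int             # topmost cell (row), 0-indexed
    orientation: str   # HORIZONTAL or VERTICAL

    def cells(self):
        """All (x, y) grid cells this car occupies."""
        if self.orientation == HORIZONTAL:
            return [(self.x + i, self.y) for i in range(self.width)]
        return [(self.x, self.y + i) for i in range(self.width)]
```

For example, `Car("X", 2, 0, 2, HORIZONTAL).cells()` gives `[(0, 2), (1, 2)]`, matching the leftmost/topmost convention above.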

We can now take our descriptions of our physical wooden puzzle and convert them into a representation we can work with computationally.

But before we jump in, I want to provide a framework which will allow us to create a representation that is useful not just for this puzzle but for many other problems.

This framework originates from Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig.

Now with a model we can use simple graph structures to construct search trees and apply search algorithms on our model to find solutions.

If you're not familiar with the infrastructure required for search algorithms, you might be asking: why do we need to talk about graphs?

Well, search algorithms need some way to keep track of the states they have encountered. By maintaining, after each action, a pointer to the previous state, we generate a search tree linking each state to the next. Each node `n` of the tree will usually be represented like so:

``````Node {
// the snapshot of the environment this node represents
state
// the previous node which generated this node
parentNode
// the action taken from the previous state to
// generate this node's state
action
// the additive cost of the path all the way from the initial
// state, through each parent, to this node
pathCost
}``````

Well, if we buy into this infrastructure of nodes and edges forming a graph, we can apply many different types of algorithms. The graph structure of nodes and edges is the O.G. app framework.

With such a graph structure in place, we can now use it to construct search trees by applying various different search algorithms.

## Model Representations

In agent-oriented models, you consider an agent taking actions, or planning, in order to maximize some goal. These models are made from the following components.

### The Components

• Environment & State - The space in which agents participate; the descriptions of the environment at a point in time are called states. The initial state is the state of the agents and environment at time `t=0`.
• Transition function - The description of actions an agent may take in the environment. As actions are taken in succession, they create a path: a sequence of states resulting from each subsequent action by an agent.
• Path costs - Numeric values assigned to the paths taken, providing a way to compare the different paths taken to arrive at states.
• Goals - A test or metric by which one can deterministically tell whether the current state is a goal or end state.
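These components translate naturally into an abstract interface that any concrete problem can implement; a sketch with method names of my own choosing:

```python
class SearchProblem:
    """Abstract agent/environment model: subclass and fill in the pieces."""

    def initial_state(self):
        raise NotImplementedError   # state of the environment at time t=0

    def actions(self, state):
        raise NotImplementedError   # actions available to the agent in `state`

    def result(self, state, action):
        raise NotImplementedError   # transition function: next state

    def step_cost(self, state, action):
        return 1                    # path costs accumulate these values

    def is_goal(self, state):
        raise NotImplementedError   # deterministic goal test
```

A search algorithm written against this interface never needs to know it is solving a block puzzle, which is the whole point of the abstraction.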

### Search Data Structure

Footnote: In Artificial Intelligence: A Modern Approach, the model is formulated in terms of a single agent throughout the text.

## Parking Lot Representation

Our wooden puzzle is already pretty simplified, and it doesn't take much to map it into a representation we can work with computationally. This is a nice property for learning how to model, and even though it's just a toy problem, it is instructive for showing how to leverage abstractions to solve problems. However, in real-world problems it often takes considerable effort to prune away the messy details and convert the problem domain into a model one can work with.

### Prebuilt abstraction

Given our simplified problem domain, the wooden puzzle game, where there is no fuzzy, complex goal and everything is directly observable, we can begin to read off the properties of the game and map them into a model representation. For example, we are given an explicit goal: take the X car, colored red, and move it outside the grid. We can therefore simply read off the properties of the game and relate them to the earlier descriptions of each component.

### State

The state of the board is determined by the location and orientation of all the pieces on the board. Given the grid `6^2`, the pieces `2x3` TODO

The initial state is provided as the puzzle card.
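Since the search will need to compare and deduplicate board configurations, one convenient encoding is an immutable, hashable snapshot of every car's position; a sketch assuming simple `(name, x, y)` entries (my own convention, orientation and width can be kept in a separate static lookup since they never change):

```python
# A state is a frozen snapshot: one (name, x, y) entry per car,
# sorted by name so two equal board layouts always compare equal.
def make_state(cars):
    return tuple(sorted((name, x, y) for (name, x, y) in cars))

# The initial state is read straight off the puzzle card.
initial = make_state([("X", 0, 2), ("A", 3, 0), ("B", 2, 4)])

# States can now live in sets and dicts for O(1) duplicate detection.
seen = {initial}
```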

### Transition function

The possible ways a car can move are constrained to its track, either horizontal or vertical, depending on the car's starting orientation. Each move slides a car along its track as far as it can, left/right or up/down, until it runs into another car, a wall, or, in the case of the specially marked X car, out of the board.
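That sliding rule can be sketched as: advance one cell at a time along the car's track until a wall or another car blocks it. A simplified Python illustration, using my own coordinate conventions and ignoring the notch/escape case for the X car:

```python
GRID = 6  # 6x6 board

def slide(x, y, dx, dy, width, occupied):
    """Slide a car of `width` cells from leftmost/topmost cell (x, y) one
    step at a time along direction (dx, dy), where (dx, dy) is one of
    (1,0), (-1,0), (0,1), (0,-1). `occupied` holds the cells of the OTHER
    cars. Returns the farthest reachable (x, y)."""
    while True:
        nx, ny = x + dx, y + dy
        # cells the car would cover after taking one more step
        cells = [(nx + i * abs(dx), ny + i * abs(dy)) for i in range(width)]
        if any(not (0 <= cx < GRID and 0 <= cy < GRID) for cx, cy in cells):
            break  # would leave the board
        if any(c in occupied for c in cells):
            break  # blocked by another car
        x, y = nx, ny
    return x, y
```

On an empty board, `slide(0, 2, 1, 0, 2, set())` pushes a width-2 car from the left edge all the way to `(4, 2)`, the rightmost position where it still fits.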

### Path Costs

Each move of the car simply costs 1, so the total path cost is the number of moves taken.

### Goals

The puzzle is completed when we take the X Car, colored red, and move it outside the grid through the notch. We can therefore use this condition as a simple goal state.
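Assuming the state is encoded as `(name, x, y)` entries per car on a 6-wide grid (my own convention), the goal test reduces to checking the X car's position:

```python
GRID = 6

def is_goal(state):
    """state: a tuple of (name, x, y) entries, one per car. The puzzle is
    solved when the special X car has moved completely past the right edge
    of the grid (i.e. out through the notch)."""
    for name, x, y in state:
        if name == "X":
            return x >= GRID  # leftmost cell is already outside the board
    return False
```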

### Uniform-cost Searching

This algorithm improves upon general graph search by using a sorted set to keep track of the cheapest paths found thus far.

To be specific about the implementation, uniform-cost search needs two capabilities in the data structure holding candidate paths: it needs to maintain sorted order as paths are inserted, and it needs efficient `O(1)` lookup.

The first capability is needed to prioritize searching from the shortest paths found so far, and the second is needed to check whether a given state is already in the frontier.
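As an illustration of those two capabilities in Python, one can pair a heap with a dict: the heap keeps paths sorted by cost, and the dict gives `O(1)` membership; stale heap entries are skipped lazily on pop. This is one sketch of such a structure, not the only way to build it:

```python
import heapq
import itertools

class Frontier:
    """Priority queue keyed by path cost, with O(1) membership by state."""

    def __init__(self):
        self._heap = []                    # (cost, tiebreak, state, node)
        self._best = {}                    # state -> cheapest known cost
        self._counter = itertools.count()  # tiebreak for equal costs

    def set(self, state, cost, node=None):
        # Insert, or overwrite if this path is cheaper; the old heap
        # entry becomes stale and is filtered out in pop().
        if state not in self._best or cost < self._best[state]:
            self._best[state] = cost
            heapq.heappush(self._heap, (cost, next(self._counter), state, node))

    def has(self, state):
        return state in self._best         # O(1) lookup

    def pop(self):
        while self._heap:
            cost, _, state, node = heapq.heappop(self._heap)
            if self._best.get(state) == cost:  # skip stale entries
                del self._best[state]
                return state, cost, node
        raise KeyError("pop from empty frontier")

    def is_empty(self):
        return not self._best
```

The lazy-deletion trick sidesteps the fact that Python's `heapq` has no efficient "decrease key" operation.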

``````UniformCostSearch(environment) => returns (Solution, error)

// setup the initial state conditions
node = environment.getInitialNode()
frontier = new Set&PriorityQueue({})
frontier.set(node)

explored = new Set({})

loop and do
  if (frontier.isEmpty()) return error // loop invariant

  // node is the lowest cost path in the frontier
  node = frontier.pop()

  // this goal is the first, and shortest, solution possible
  if (environment.isGoal(node.state)) return node.solution()

  explored.set(node.state)

  possibleActions = node.actions()
  for action in possibleActions
    // applies the costs involved and moves nextNode into a
    // possibly different state, or maybe an existing state
    // reached with a different path cost
    nextNode = transitionFunction(environment, node, action)

    // never before seen state
    if (!explored.has(nextNode.state) && !frontier.has(nextNode.state))
      frontier.set(nextNode)
      continue

    // replace the node in the frontier when the newly found
    // path to the same state is cheaper
    if (frontier.has(nextNode.state))
      existingNode = frontier.get(nextNode.state)
      if (existingNode.pathCost() > nextNode.pathCost())
        frontier.set(nextNode)``````
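The pseudocode translates almost line for line into runnable Python. This sketch uses `heapq` with lazy re-insertion in place of an explicit replace operation, and assumes a problem object exposing `initial_state()`, `actions(state)`, `result(state, action)`, `step_cost(state, action)`, and `is_goal(state)` (method names of my own choosing):

```python
import heapq
import itertools

def uniform_cost_search(problem):
    """Returns (actions, total_cost) for a cheapest solution, or None."""
    counter = itertools.count()  # tiebreak so the heap never compares states
    start = problem.initial_state()
    # heap entries: (path_cost, tiebreak, state, actions_taken)
    frontier = [(0, next(counter), start, [])]
    best_cost = {start: 0}       # cheapest known cost per frontier state
    explored = set()

    while frontier:
        cost, _, state, path = heapq.heappop(frontier)
        if state in explored:
            continue             # stale entry for an already-expanded state
        if problem.is_goal(state):
            return path, cost    # first goal popped off the heap is optimal
        explored.add(state)
        for action in problem.actions(state):
            next_state = problem.result(state, action)
            next_cost = cost + problem.step_cost(state, action)
            if next_state in explored:
                continue
            # insert, or re-insert when a cheaper path was found;
            # the old heap entry is skipped as stale when popped
            if next_state not in best_cost or next_cost < best_cost[next_state]:
                best_cost[next_state] = next_cost
                heapq.heappush(
                    frontier,
                    (next_cost, next(counter), next_state, path + [action]))
    return None                  # frontier exhausted: no solution exists
```

On a toy graph where A→B costs 1, A→C costs 4, and B→C costs 1, the search correctly prefers the two-step path A→B→C of cost 2 over the direct edge of cost 4.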