Topic 2 — Algorithm Design
Revisiting Prim’s Algorithm
Remember from Topic 1 that Prim’s Algorithm finds the minimum spanning tree of a weighted and undirected graph. The algorithm was originally devised in 1930 by Vojtěch Jarník, and was then independently rediscovered and republished by Robert Prim in 1957. As such, the algorithm is sometimes called "Jarník’s Algorithm".
Prim’s algorithm [JonesEtAl2016]:
-
Choose a starting node (it doesn’t matter which one you go with).
-
Look at the edges coming out of that node and choose the one with the lowest weight (if there are multiple with the same weight, it doesn’t matter which one you pick). These nodes and the edge that connects them will form the start of the MST.
-
Now look at all the edges coming out of the nodes currently in your tree and once again select the edge with the smallest weight. But do not pick an edge that would connect the tree to itself (that is, one whose endpoints are both already in the tree).
-
Repeat this process until all the nodes are connected.
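To make these steps concrete, here’s a minimal sketch in Python. This is my own translation, not from the study design: I’m assuming the graph is given as a dict mapping each node to a dict of {neighbour: weight}, and using a heapq priority queue to grab the lowest-weight edge leaving the tree.

import heapq

def prim_mst(graph, start):
    mst = []                               # edges chosen so far
    visited = {start}                      # nodes already in the tree
    # heap of (weight, u, v) for every edge leaving the tree
    frontier = [(w, start, v) for v, w in graph[start].items()]
    heapq.heapify(frontier)
    while frontier and len(visited) < len(graph):
        weight, u, v = heapq.heappop(frontier)
        if v in visited:                   # would connect the tree to itself
            continue
        visited.add(v)
        mst.append((u, v, weight))
        for nxt, w in graph[v].items():    # new edges now leaving the tree
            if nxt not in visited:
                heapq.heappush(frontier, (w, v, nxt))
    return mst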
Prim’s algorithm is considered a greedy algorithm. That means that it will always take the best option that is immediately available, even if there is a better option that appears later on. To prove that Prim’s algorithm works, even though it is greedy, we need to use a specific proof method. In this case, it is proof by contradiction, but there are other kinds of proof that will be discussed later.
Proof by Contradiction
The basic idea of proof by contradiction is that, given a statement that you want to prove is true, assume that the statement is false, and then show that assumption leads to nonsense [Hammack2009.6]. From this, you know that it is wrong to make the assumption that the statement is false, and therefore the statement must be true.
Let’s have a look at the statement "If \(a,b \in \Z\), then \(a^2 - 4b \neq 2\)".
Suppose that the statement is false. This would mean that there are integers \(a\) and \(b\) for which the equation \(a^2 - 4b \neq 2\) is false. Thus, there exist integers for which \(a^2 - 4b = 2\).
This equation can be rewritten as \(a^2 = 2(1 + 2b)\). Because the right-hand side is \(2\) times an integer, it is even, and therefore \(a^2\) must also be even. And since \(a^2\) is even, \(a\) itself must be even (an odd \(a\) would give an odd \(a^2\)). This gives us \(a = 2c\) for some integer \(c\).
We can then substitute that into our equation \(a^2 - 4b = 2\) to get \((2c)^2 - 4b = 2\), that is, \(4c^2 - 4b = 2\). Dividing through by 2 gives us \(2c^2 - 2b = 1\). Therefore, \(1 = 2(c^2 - b)\). And because of the multiplication by 2 on the right-hand side, this tells us that 1 is even.
But hang on: 1 is not even, so something must have gone wrong. Since we know that all the logic after the first assumption is correct, the fault must lie with the first line, where we assumed the statement to be false. Therefore, the statement "If \(a,b \in \Z\), then \(a^2 - 4b \neq 2\)" must be true. \(\blacksquare\)
Basically, to prove a statement \(P\), assume \(\neg P\) and then deduce \(C \land \neg C\) for some statement \(C\). Note that you probably won’t know what the statement \(C\) is when you begin the proof.
Proof That Prim’s Algorithm Works
Now that we’ve learnt about Proof by Contradiction, we can prove that Prim’s algorithm always finds the minimum spanning tree (MST).
-
Let \(G\) be a graph with vertex set \(V\) and a known MST, \(T^*\).
-
Let \(T\) be the MST generated by running Prim’s algorithm on \(G\).
-
Assume that \(T \neq T^*\) (we’re assuming that \(T\) is not of minimum weight). Then there is some edge \((u, v)\) that is in \(T\) but not in \(T^*\).
-
When \((u, v)\) was added to \(T\) by Prim’s algorithm, it was the minimum edge between any vertex already in \(T\) and any other one not in \(T\) yet. Let the set of vertices already in \(T\) before \((u, v)\) was added be \(S\).
-
Let \((x, y)\) be an edge on the path from \(u\) to \(v\) in \(T^*\), such that \(x\) is in \(S\) and \(y\) is in \(V-S\). (Such an edge must exist, because the path starts inside \(S\) and ends outside it.)
-
Form a new tree, \(T^{**}\), by adding \((u, v)\) to \(T^*\) and removing \((x, y)\). (Adding \((u, v)\) creates a cycle, and removing \((x, y)\) breaks that cycle, so \(T^{**}\) is still a spanning tree.)
-
Because \((u, v)\) was the minimum-weight edge between \(S\) and \(V - S\) when Prim’s algorithm chose it, and \((x, y)\) also crosses between \(S\) and \(V - S\), the weight of \((u, v)\) is less than the weight of \((x, y)\) (assuming distinct edge weights).
-
And because the edge that was added has a lesser weight than the one that was removed, the weight of \(T^{**}\) is less than the weight of \(T^*\). This contradicts \(T^*\) being a minimum spanning tree, so our assumption must be wrong: therefore \(T = T^*\), and Prim’s algorithm finds the MST.
(In a really hand-wavey way: Prim’s works because any other edge not chosen by the algorithm is worse)
To really get this proof into your head, try running it on a graph of your own devising. You’ll have to make up a \(T^*\) that’s wrong (so that it satisfies \(T \neq T^*\)), but you’ll see that the graph created by Prim’s algorithm is correct. (And that if \(T = T^*\), then you’ll notice that \((u, v) = (x, y)\))
Graph Traversal
Graph traversal describes moving throughout a graph. There are many times when this will be useful in VCE Algorithmics. For example, you might need to find a path through a maze, or one that visits every point on a map. Both of these problems can be described by using graph traversal. There are two main ways to traverse (or search) a graph: depth-first, and breadth-first.
Depth-First Search
A Depth-First Search follows a single pathway, going all the way down one route until it reaches a dead end, at which point it turns around and backtracks to the last point where it had a choice. It then goes down the other option, and repeats until the entire graph has been traversed, or until a solution has been reached.
g.nodes("a,b,c,d,e,f".split(",")).add();
const edges = "ad ab ac bc bd ef fd fa ed".split(" ");
g.edges(edges).add();
g.node('a').color('blue').highlight().pause(1);
"ad de ef fe ed db bc".split(" ").forEach(edge => {
g.edge(edge).traverse('blue').pause(0.5);
g.node(edge[1]).color('blue').highlight();
});
// Adapted from wikipedia.
function DFS(G, v):
// input: G is a graph that contains the vertex v
label v as discovered
for all directed edges from v to w that are in G.adjacentEdges(v):
if vertex w is not labelled as discovered:
call DFS(G, w)
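And here is the same idea as runnable Python. The adjacency-dict representation (each node mapping to an iterable of its neighbours) is my own assumption, not something fixed by the source.

def dfs(graph, v, discovered=None):
    # graph maps each node to an iterable of its neighbours
    if discovered is None:
        discovered = set()
    discovered.add(v)                      # label v as discovered
    for w in graph[v]:
        if w not in discovered:            # only recurse into new nodes
            dfs(graph, w, discovered)
    return discovered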
Breadth-First Search
A Breadth-First Search visits all neighbours of the current node, then visits the unseen neighbours of those nodes, and so on. It repeats this until either the entire graph has been traversed, or until the goal has been found.
g.nodes("a,b,c,d,e,f".split(",")).add();
const edges = "ad ab ac bc bd ef fd fa ed".split(" ");
g.edges(edges).add();
g.node('a').color('blue').highlight().pause(1);
["ad af ac ab", "fe fd fa", "de df db da", "bd bc ba", "cb ca"].forEach(edges => {
edges.split(" ").forEach(edge => {
g.edge(edge).traverse('blue');
g.node(edge[1]).color('blue').highlight();
});
g.pause(1)
});
// Adapted from wikipedia.
function BFS(G, v):
let Q be a queue
label v as explored
Q.enqueue(v)
while Q is not empty:
v = Q.serve()
if v is the goal:
return v
for all edges from v to w in G.adjacentEdges(v):
if w is not labelled as explored:
label w as explored
Q.enqueue(w)
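And a Python version of the same pseudocode, under the same adjacency-dict assumption as the DFS sketch above (deque’s popleft plays the role of serve):

from collections import deque

def bfs(graph, v, goal):
    explored = {v}                         # label v as explored
    q = deque([v])
    while q:
        node = q.popleft()                 # serve the front of the queue
        if node == goal:
            return node
        for w in graph[node]:
            if w not in explored:
                explored.add(w)
                q.append(w)
    return None                            # goal not reachable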
Best-First Search
Best-First Search just means that at any point, the algorithm chooses the "best" available option to explore next. What counts as the "best" choice is defined by a heuristic that the algorithm is given. A heuristic is like a "rule of thumb" for the algorithm to go by. For example, in a maze the heuristic might be "always take the left turn".
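One common flavour of this is greedy best-first search. Here’s a hypothetical sketch of it, assuming a heuristic function h(node) that scores how promising a node looks (lower is better); both h and the adjacency-dict graph are my own assumptions.

import heapq

def best_first_search(graph, start, goal, h):
    frontier = [(h(start), start)]         # priority queue ordered by h
    seen = {start}
    while frontier:
        _, node = heapq.heappop(frontier)  # take the "best" node so far
        if node == goal:
            return True
        for w in graph[node]:
            if w not in seen:
                seen.add(w)
                heapq.heappush(frontier, (h(w), w))
    return False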
Decision Trees
A Decision Tree is a tree whose branches represent the outcomes of a series of decisions. Decision Trees can be used to compare outcomes and choose decisions that lead to good results. They can even be used to solve Sudoku puzzles.
Recursive Algorithms
A Recursive Algorithm is one that calls itself during its runtime. As an example, here is a simple recursive algorithm to calculate \(n \mapsto n!\).
def fact(n):
if n == 1:
return 1
return n * fact(n-1)
When the function is run, it calls fact() with decreasing values of n until n equals one. At that point it can return a value, which then prompts the previous call to return, and then the one above that, and so on.
So, for n == 3, it would look like:
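fact(3)
= 3 * fact(2)
= 3 * (2 * fact(1))
= 3 * (2 * 1)
= 6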
In the above function, the n == 1 check is at the top because recursive functions need to be told when to stop calling themselves and start returning values. This case, where n == 1, is called the base case. Every other case in the function is the recursive case.
Tail Recursion
Tail Recursion is a special case of recursion where the last thing the function does is return a call to itself. This means all the working data has to be passed down to the base case through the function’s arguments. As an example, the factorial program from before becomes:
def fact(n, val=1):
if n == 1:
return val
return fact(n-1, val * n)
Which then looks a little something like:
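fact(3, 1)
= fact(2, 3)
= fact(1, 6)
= 6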
Iterative Algorithms
An Iterative Algorithm is one that "loops" some process a given number of times, or until it satisfies a specific condition. So, instead of calling itself like in a recursive algorithm, an iterative algorithm iterates over a set of data or values. If you’re coming from a Python background, iterative algorithms are probably more what you’re used to.
The factorial example from before looks like this when done iteratively:
def fact(n):
val = 1
for i in range(1, n + 1):
    val = val * i
return val
Dijkstra’s Algorithm
Dijkstra’s Algorithm is an algorithm that finds the shortest paths between one node and all other nodes in a weighted and directed graph [Black2006]. The algorithm was invented by a Dutch computer scientist called Edsger Wybe Dijkstra (pronounced "Dyke-stra"). Dijkstra’s algorithm is useful in situations where you want to figure out which location out of a list of different places is the closest to you. It can also be used to find the shortest path from one specific node to another specific target node, though since it explores every node cheaper to reach than the target, it may do more work than necessary.
Dijkstra’s Algorithm [GassAndFu2013]:
-
Select a starting node, let’s call it \(v_i\).
-
Let every node be "unvisited".
-
Assign every node (\(v\)) a cost value (\(d(v)\)) that represents an upper bound on the length of the shortest path between that node and \(v_i\). So \(d(v_i)\) would be \(0\) and all the other nodes would be \(\infty\).
-
Give every node (\(v\)) a value representing the node before \(v\) on the shortest path from \(v_i\) to \(v\). At the start, this will be "none" for all of them.
-
Let the unvisited node with the lowest cost be the current node (\(v_c\)). Look at the unvisited neighbours of \(v_c\) and redefine their costs according to the rule \(d(v) = \text{min}(d(v), d(v_c) + \text{distance between } v_c \text{ and } v)\), where \(v\) is a neighbour. Whenever this lowers \(d(v)\), set the previous node value of \(v\) to \(v_c\) (and leave it unchanged if \(d(v)\) was already the minimum). Mark \(v_c\) as visited.
-
Repeat step five until all nodes have been visited.
-
Use the previous node values to work backwards from the target node, recovering the shortest path from \(v_i\) to it.
The algorithm can be sped up in the case that you only need to find the distance between one start node and one target node. In this case you just tell it to stop once the target node has been visited.
Dijkstra’s algorithm might break if there is a negative weight edge in the graph. This is because Dijkstra’s algorithm operates under the assumption that traversing an edge always increases the cost, while a negative weight means the cost can decrease. And once a node is marked as visited, it is assumed that the shortest path between it and \(v_i\) has been found, so its cost is never updated again. A more detailed explanation can be found on GeeksforGeeks.
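Before the worked example, here’s a minimal sketch of these steps in Python. It’s not from the study design: the adjacency-dict graph representation and the heapq priority queue are implementation choices of mine.

import heapq

def dijkstra(graph, start):
    # graph maps each node to a dict of {neighbour: weight};
    # weights must be non-negative
    cost = {v: float('inf') for v in graph}     # the d(v) upper bounds
    previous = {v: None for v in graph}         # previous node values
    cost[start] = 0
    visited = set()
    heap = [(0, start)]
    while heap:
        d, vc = heapq.heappop(heap)             # lowest-cost unvisited node
        if vc in visited:
            continue
        visited.add(vc)                         # mark v_c as visited
        for v, w in graph[vc].items():
            if v not in visited and d + w < cost[v]:
                cost[v] = d + w                 # shorter path found via v_c
                previous[v] = vc
                heapq.heappush(heap, (cost[v], v))
    return cost, previous

Working backwards through previous from any node then gives the shortest path from start to that node.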
Let’s try running Dijkstra’s algorithm on this graph:
g.nodes("a,b,c,d,e".split(",")).add();
const edgeLengths = {
"a,b":1,
"b,c":4,
"d,e":2,
"b,d":3,
"a,c":3,
"a,e":1
};
const edges = Object.keys(edgeLengths).map(k => k.split(','));
g.edges(edges).data(Object.values(edgeLengths)).add({
length: d => d,
labels: d => ({ 0: { 'text': d } })
});
-
We’ll start with node a as the beginning node.
-
Then we assign node a a cost value of \(0\) and all the other nodes a cost of \(\infty\), as well as setting their previous node values to none.
Node | Cost | Previous
a | \(0\) | none
b | \(\infty\) | none
c | \(\infty\) | none
d | \(\infty\) | none
e | \(\infty\) | none
-
The current node is going to be node a, and its neighbours are e, b and c.
The cost of e will be \(d(e) = \text{min}(d(e), d(v_c) + \text{distance between } v_c \text{ and } e) = \text{min}(\infty, 0 + 1) = 1\). The cost of b will be \(d(b) = \text{min}(\infty, 0 + 1) = 1\). The cost of c will be \(d(c) = \text{min}(\infty, 0 + 3) = 3\).
We can update our table now, marking node a as visited and updating the previous node column.
Node | Cost | Previous
a (visited) | \(0\) | none
b | \(1\) | a
c | \(3\) | a
d | \(\infty\) | none
e | \(1\) | a
-
The current node is now b (as it is the lowest cost unvisited node). Its unvisited neighbours are c and d.
The cost of c will be \(d(c) = \text{min}(3, 1 + 4) = 3\). The cost of d will be \(d(d) = \text{min}(\infty, 1 + 3) = 4\).
We can now update the table and mark b as visited.
Node | Cost | Previous
a (visited) | \(0\) | none
b (visited) | \(1\) | a
c | \(3\) | a
d | \(4\) | b
e | \(1\) | a
Note that only d was updated; for the other neighbour, c, there is already a shorter path between it and node a that doesn’t go via node b.
-
The unvisited node with the smallest cost is node e, so it is the current node. Its unvisited neighbour is d.
The cost of d will be \(d(d) = \text{min}(4, 1 + 2) = 3\). Note that because the distance to d from a is shorter going via e than b, the previous node value for d will change.
Node | Cost | Previous
a (visited) | \(0\) | none
b (visited) | \(1\) | a
c | \(3\) | a
d | \(3\) | e
e (visited) | \(1\) | a
-
The current node is now c. It has no unvisited neighbours, so nothing changes in the table except for c being marked as visited.
Node | Cost | Previous
a (visited) | \(0\) | none
b (visited) | \(1\) | a
c (visited) | \(3\) | a
d | \(3\) | e
e (visited) | \(1\) | a
-
The current node is now d. It has no unvisited neighbours, so once again, nothing but the visited marking changes.
Node | Cost | Previous
a (visited) | \(0\) | none
b (visited) | \(1\) | a
c (visited) | \(3\) | a
d (visited) | \(3\) | e
e (visited) | \(1\) | a
-
Now that every node has been marked as visited, we can find the shortest distance from node a to any other node in the graph. As an example, let’s find the shortest path from a to d.
Remember that the previous node column for node \(v\) tells us about the node before \(v\) on the path from the starting node to node \(v\). So we look at the previous node column for node d and see that it is node e. We then look at the previous node column for node e and see that it is node a. From this we are able to see that the shortest path between a and d is \(a \to e \to d\).
The Bellman-Ford Algorithm
Dijkstra’s algorithm is great and all, but it cannot cope with negative weight edges. Clearly we need a better algorithm, and that’s where the Bellman-Ford algorithm comes in. The Bellman-Ford Algorithm finds the shortest distance from one node to every other node in a directed graph, and can deal with negative weight edges, but it is slower than Dijkstra’s algorithm. The algorithm was first proposed in 1955 by Alfonso Shimbel, but is named after Richard Bellman and Lester Ford Jr who published it in 1958 and 1956 respectively.
Note that the Bellman-Ford algorithm works only for directed graphs. If you want to use it on an undirected graph, you have to consider each edge as two directed edges (one going one way and one going the other direction).
The Bellman-Ford Algorithm [Cormen2013]:
-
Let the starting node be \(v_i\).
-
Initialise the distances of the nodes in the same way as Dijkstra’s (\(0\) for \(v_i\), and \(\infty\) for all other nodes).
-
Create an order for the edges to be looked through (the order can be anything you want, it just needs to stay constant throughout).
-
Let the number of nodes in the graph be \(n\), and repeat step five \(n-1\) times or until the distance values stop changing.
-
Go through all the edges in order and update distance values according to the rule \(d(\text{end node}) = \text{min}(d(\text{end node}), d(\text{start node}) + \text{edge weight})\). If the distance value changes, store the edge’s start node as the end node’s previous node.
-
To detect a negative weight cycle, run step five again, and if the distance values change, then there is a negative weight cycle present.
The Bellman-Ford algorithm will generate paths of at most length \(i\) on the \(i\)th iteration.
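Here’s a small Python sketch of the algorithm (the edge-list representation is my own choice). Note one difference from the worked example below: this common in-place variant uses distances updated earlier in the same pass, whereas the example sticks strictly to the previous iteration’s values, so the intermediate tables can differ even though the final answer doesn’t.

def bellman_ford(edges, nodes, start):
    # edges is a list of (u, v, weight) directed edges
    dist = {v: float('inf') for v in nodes}
    prev = {v: None for v in nodes}
    dist[start] = 0
    for _ in range(len(nodes) - 1):        # repeat step five n-1 times
        changed = False
        for u, v, w in edges:              # fixed edge order
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w      # d(end) = min(d(end), d(start) + w)
                prev[v] = u                # store the edge's start node
                changed = True
        if not changed:                    # distances stopped changing
            break
    for u, v, w in edges:                  # run step five once more
        if dist[u] + w < dist[v]:
            raise ValueError("negative weight cycle detected")
    return dist, prev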
Let’s have a go at running the Bellman-Ford algorithm on this graph:
g.nodes("c,b,a,d".split(",")).add();
const edgeLengths = {
"b,c":3,
"c,a":-2,
"a,b":7,
"c,d":4
};
const edges = Object.keys(edgeLengths).map(k => k.split(','));
g.edges(edges).data(Object.values(edgeLengths)).add({
directed: true,
length: d => d,
labels: d => ({ 0: { 'text': d } })
});
-
Let’s make our starting node a.
-
We can set up our distance table first, setting node a's distance to \(0\) and the other nodes to \(\infty\).
Node | init | 1st | 2nd | 3rd
a | \(0\) | | |
b | \(\infty\) | | |
c | \(\infty\) | | |
d | \(\infty\) | | |
-
And let’s define the edge order: \((a,b), (b,c), (c,a), (c,d)\).
-
Now for the first loop, let’s go through every edge in order.
-
\((a,b)\): \(d(b) = \text{min}(d(b), d(a) + 7) = 7\)
-
\((b,c)\): \(d(c) = \text{min}(d(c), d(b) + 3) = \infty\) (uses the distance value from the previous iteration, in this case the initialisation step)
-
\((c,a)\): \(d(a) = \text{min}(d(a), d(c) + (-2)) = \text{min}(0, \infty - 2) = 0\)
-
\((c, d)\): \(d(d) = \text{min}(d(d), d(c) + 4) = \infty\)
-
We can update the table (saving the previous node for the changed value), and it looks like this
Node | init | 1st | 2nd | 3rd
a | \(0\) | \(0\) | |
b | \(\infty\) | \(7\) (a) | |
c | \(\infty\) | \(\infty\) | |
d | \(\infty\) | \(\infty\) | |
-
We then rerun the step where we look through all the edges.
-
\((a,b)\): \(d(b) = \text{min}(d(b), d(a) + 7) = 7\)
-
\((b,c)\): \(d(c) = \text{min}(d(c), d(b) + 3) = 10\) (uses the values from the previous iteration)
-
\((c,a)\): \(d(a) = \text{min}(d(a), d(c) + (-2)) = 0\)
-
\((c,d)\): \(d(d) = \text{min}(d(d), d(c) + 4) = \infty\)
-
So the table now looks like this:
Node | init | 1st | 2nd | 3rd
a | \(0\) | \(0\) | \(0\) |
b | \(\infty\) | \(7\) (a) | \(7\) (a) |
c | \(\infty\) | \(\infty\) | \(10\) (b) |
d | \(\infty\) | \(\infty\) | \(\infty\) |
Note that on the second iteration, the only paths found are a maximum of two edges long.
-
And for the final round:
-
\((a,b)\): \(d(b) = \text{min}(d(b), d(a) + 7) = 7\)
-
\((b,c)\): \(d(c) = \text{min}(d(c), d(b) + 3) = 10\)
-
\((c,a)\): \(d(a) = \text{min}(d(a), d(c) + (-2)) = 0\)
-
\((c,d)\): \(d(d) = \text{min}(d(d), d(c) + 4) = 14\)
-
So the table looks like:
Node | init | 1st | 2nd | 3rd
a | \(0\) | \(0\) | \(0\) | \(0\)
b | \(\infty\) | \(7\) (a) | \(7\) (a) | \(7\) (a)
c | \(\infty\) | \(\infty\) | \(10\) (b) | \(10\) (b)
d | \(\infty\) | \(\infty\) | \(\infty\) | \(14\) (c)
From this table you can find the shortest path from node a to any other node in the graph. Figuring out that path works the same as it does in Dijkstra’s algorithm: work backwards from the end node. So the path from a to d would be \(a \to b \to c \to d\).
Warshall’s Algorithm
At some point you may want to know whether a path actually exists between two nodes. Warshall’s algorithm does just that. Warshall’s Algorithm finds the transitive closure of a graph; that is, it determines, for every pair of nodes, whether a path exists from one to the other. And, as we will see, it can be modified to tell you what that path is, not just that it exists.
As an example of transitive closure, take the following graph.
g.nodes("a,b,c,d".split(",")).add();
const edgeLengths = {
"b,a":1,
"a,c":1,
"b,d":1,
"d,b":1
};
const edges = Object.keys(edgeLengths).map(k => k.split(','));
g.edges(edges).data(Object.values(edgeLengths)).add({
length: d => d,
directed: true
});
If we add (dashed) edges that represent its transitive closure, it looks like:
g.nodes("a,b,c,d".split(",")).add();
let edgeLengths = {
"b,a":1,
"a,c":1,
"b,d":1,
"d,b":1
};
let edges = Object.keys(edgeLengths).map(k => k.split(','));
g.edges(edges).data(Object.values(edgeLengths)).add({
length: d => d,
directed: true
});
edgeLengths = {
"b,b":1,
"b,c":1,
"d,d":1,
"d,c":1,
"d,a":1
};
edges = Object.keys(edgeLengths).map(k => k.split(','));
g.edges(edges).data(Object.values(edgeLengths)).add({
directed: true,
svgattrs: { 'stroke-dasharray': 4 }
});
This tells us information about the graph’s connectivity. For example, that there is a path between nodes b and c.
Warshall’s algorithm is designed to find this transitive closure of a graph from its boolean adjacency matrix, that is, an adjacency matrix that only tells us whether an edge exists or not. Below is the algorithm’s pseudocode [Skiena2020].
n <- the number of nodes in the graph
T(0) <- the boolean adjacency matrix of the graph
for k = 1 to n:
for i = 1 to n:
for j = 1 to n:
T(k)[i, j] = T(k-1)[i, j] or (T(k-1)[i, k] and T(k-1)[k, j])
return T(n)
The innermost statement is divided into two "rules", either one of which has to be true for the nodes to be connected in the transitive closure graph [Lin2011].
- Rule one, T(k-1)[i, j]: if the element at this position in the previous iteration’s matrix is true, then it stays true.
- Rule two, (T(k-1)[i, k] and T(k-1)[k, j]): if the element at position [i, j] in the previous iteration’s matrix is false, then it is changed to true if and only if the elements at positions [i, k] and [k, j] in the previous matrix are both true. Basically, this rule says that if i is connected to k, and k is connected to j, then i is connected to j.
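Here’s what that looks like as runnable Python, a direct translation of the pseudocode (the list-of-lists matrix representation is my own choice):

def warshall(adj):
    # adj is a boolean adjacency matrix: adj[i][j] is True
    # when there is an edge from node i to node j
    n = len(adj)
    t = [row[:] for row in adj]            # copy, so the input survives
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # rule one keeps existing connections;
                # rule two adds i -> j whenever i -> k and k -> j
                t[i][j] = t[i][j] or (t[i][k] and t[k][j])
    return t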
So let’s take the previous graph and see how Warshall’s algorithm would find its transitive closure matrix. Below is the initial adjacency matrix of the graph, where \(0\) is false and \(1\) is true, with rows and columns both ordered a, b, c, d (I’ll be leaving off the labels in further matrices).
\[\left[ \begin{array}{c c c c } 0 & 0 & 1 & 0\\ 1 & 0 & 0 & 1\\ 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 \end{array} \right]\]
Which then, after the four iterations, looks like:
\[\left[ \begin{array}{c c c c } 0 & 0 & 1 & 0\\ 1 & 1 & 1 & 1\\ 0 & 0 & 0 & 0\\ 1 & 1 & 1 & 1 \end{array} \right]\]
As you can see, this adjacency matrix matches what we would expect looking at the transitive closure graph for this graph.
g.nodes("a,b,c,d".split(",")).add();
let edgeLengths = {
"b,a":1,
"a,c":1,
"b,d":1,
"d,b":1
};
let edges = Object.keys(edgeLengths).map(k => k.split(','));
g.edges(edges).data(Object.values(edgeLengths)).add({
length: d => d,
directed: true
});
edgeLengths = {
"b,b":1,
"b,c":1,
"d,d":1,
"d,c":1,
"d,a":1
};
edges = Object.keys(edgeLengths).map(k => k.split(','));
g.edges(edges).data(Object.values(edgeLengths)).add({
directed: true,
svgattrs: { 'stroke-dasharray': 4 }
});
The Floyd-Warshall Algorithm
By slightly adjusting Warshall’s algorithm, we get the Floyd-Warshall algorithm. The Floyd-Warshall Algorithm solves the "all pairs, shortest paths" problem. That is, it finds the shortest path for all pairs of nodes in the graph.
It is almost exactly the same as Warshall’s algorithm, but the inner assignment is changed to the following, and it takes the graph’s weighted adjacency matrix as input (not just the boolean one).
T(k)[i, j] = min(T(k-1)[i, j], T(k-1)[i, k] + T(k-1)[k, j])
Basically, it sets the distance between i and j to either itself, or to the distance via k, depending on which one is shorter.
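A Python sketch of the modified loop, under the same assumptions as the Warshall version above, plus two of my own conventions: float('inf') marks a missing edge, and dist[i][i] is 0.

def floyd_warshall(dist):
    # dist is the weighted adjacency matrix
    n = len(dist)
    d = [row[:] for row in dist]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # keep the current distance, or go via k if that is shorter
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d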
PageRank
The PageRank algorithm was the original algorithm used by Google to order its search results. It was used to determine the importance of a website: the more important the site, the higher up the search results it appeared.
PageRank scores a site by how many other sites link to it, on the assumption that more important sites have more links pointing to them from elsewhere on the web. It also takes into account the random chance that a user visits the site without clicking any links to it (this hypothetical user is called the "random surfer").
Here is pseudocode for the PageRank algorithm, where the internet is represented as a graph of websites with edges (links) between them:
N <- number of nodes in the graph
d <- 0.85 // this is the "random surfer" factor
for each node in graph:
pagerank_of_node <- 1/N
repeat until PageRank values stop changing:
for each node:
n_outgoing <- the number of outgoing edges from the node
set outgoing edge weights to (pagerank_of_node / n_outgoing)
for each node:
total <- 0
for each incoming edge into node:
total <- total + incoming edge weight
pagerank_of_node <- (1 - d) / N + d * total
And here’s the formula it’s based on:
\[PR(n) = \frac{1 - d}{N} + d \sum_{m \in B_n} \frac{PR(m)}{L(m)}\]
Where \(PR(n)\) is the PageRank score of page \(n\), \(B_n\) is the set of pages that link to \(n\), and \(L(m)\) is the number of outgoing links from page \(m\). This means that any given page’s PageRank score is largely dependent on the PageRank scores of the sites linking to it.
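As a rough Python sketch of the same idea (the link-dict representation and the convergence tolerance are my own choices, and real implementations also have to handle pages with no outgoing links):

def pagerank(links, d=0.85, tol=1e-6):
    # links maps each page to the list of pages it links to
    n = len(links)
    pr = {page: 1 / n for page in links}
    while True:
        new_pr = {}
        for page in links:
            # sum the shares passed along by pages linking to this one
            total = sum(pr[m] / len(links[m])
                        for m in links if page in links[m])
            new_pr[page] = (1 - d) / n + d * total
        if max(abs(new_pr[p] - pr[p]) for p in pr) < tol:
            return new_pr                  # scores stopped changing
        pr = new_pr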
Algorithm Design Patterns
An Algorithm Design Pattern is basically the method, strategy, or technique used in solving a problem with an algorithm. You’ve already met some design patterns before, but there are some you haven’t seen yet. Here are the ones that we consider in VCE Algorithmics.
A Brute Force approach means trying every single option until you find the solution. It is often considered a pretty simple solution to a problem, but there are cases where all other algorithm design patterns fail, and you will have to use a brute force algorithm. This is because a brute force algorithm is guaranteed to find the right solution for most problems eventually, even if it takes ages.
Greedy algorithms are ones that pick the best option at each step. Prim’s algorithm is an example of a greedy algorithm because it always selects the edge with the lowest weight at every step. Greedy algorithms usually work well, but there can be situations where a greedy algorithm will select a worse overall solution because a good option led it down a bad path.
Divide and Conquer algorithms work by dividing a problem into several sub-problems. These sub-problems are then solved and are joined back together to get the solution to the original problem.
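As a quick illustration (my example, not one from the study design), merge sort is a classic divide and conquer algorithm: split the list in half, sort each half, then merge the sorted halves back together.

def merge_sort(items):
    if len(items) <= 1:                    # base case: already sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])         # divide and solve sub-problems
    right = merge_sort(items[mid:])
    merged = []                            # join the solutions back together
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]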
Decrease and Conquer algorithms work by shrinking the main problem into a smaller problem that can then be solved more easily. The solution to this smaller problem can then be used as the basis for the solution to the main problem.
Transform and Conquer involves converting a given problem into a different problem that still would give the same answer as required by the original problem. The problem that the original problem is being transformed into is usually easier to solve, or in some cases already has a known solution.
Backtracking involves systematically generating possible solutions to a problem and then going back and regenerating a part of the solution when it reaches a dead end or an incorrect solution.
-
Construct a matrix that represents the transitive closure of the following graph.
g.nodes("a,b,c,d".split(",")).add(); const edgeLengths = { "a,d":1, "d,b":1, "b,c":1, "c,a":1, "a,c":1 }; const edges = Object.keys(edgeLengths).map(k => k.split(',')); g.edges(edges).data(Object.values(edgeLengths)).add({ length: d => d, directed: true });
-
Jeremy has written the following algorithm for finding the \(n\)th Fibonacci number:
function fib(n, a=0, b=1):
    if n == 0:
        return a
    if n == 1:
        return b
    else:
        return fib(n-1, b, a+b)
-
What type of algorithm has Jeremy written?
-
Write an iterative version of Jeremy’s algorithm.
-
Jordan wants to organise his bookshelf, placing the most important books at the start of his bookshelf, and less important ones at the end. What is the name of an algorithm that he could use to organise them, and how could it be used?
-
Given the following graph and starting at node \(a\):
g.nodes("a,b,c,d,e".split(",")).add(); const edgeLengths = { "a,d":1, "a,b":1, "b,c":1, "c,e":1, "d,b":1, "d,c":1 }; const edges = Object.keys(edgeLengths).map(k => k.split(',')); g.edges(edges).data(Object.values(edgeLengths)).add({ length: d => d, });
-
Determine the number of steps it would take a depth-first search to reach node \(e\), assuming that when it needed to pick a path it would pick the one closest to the start of the alphabet.
-
Repeat the above question, but for breadth-first search.
-
For the following algorithmic puzzles, determine the best algorithm design pattern to solve the problem and state why you chose it. (You don’t need to actually create the algorithm, just explain why)
-
You’ve just started a job at Aden’s music shop, and as a test, you’ve been given a pile of coins and are told that one of the coins is a fake. And it’s a really good fake: the only difference between it and a normal coin is that the fake weighs slightly less. Find the fake coin.
-
A salesperson wants to quickly travel around the state and sell their wares in all the towns, starting and ending in the same town. Find the shortest path that the salesperson could use to visit all towns in the state.
-
There are four knights on a three by three chessboard. Two white knights in the upper corners, and two black in the lower corners of the board. The goal is to switch the knights in the minimum amount of moves so that the white ones are at the bottom and the black ones are at the top of the board.
-
Prove, using contradiction,
Answers
-
Here is the transitive closure matrix:
\[\left[ \begin{array}{c c c c } 1 & 1 & 1 & 1\\ 1 & 1 & 1 & 1\\ 1 & 1 & 1 & 1\\ 1 & 1 & 1 & 1 \end{array} \right]\]
-
Jeremy’s Fibonacci algorithm
-
Jeremy has written a tail recursive algorithm.
-
An example of an iterative version of the Fibonacci algorithm
function fib(n):
    if n == 0:
        return 0
    a = 0
    b = 1
    for i = 1 to n:
        c = a + b
        a = b
        b = c
    return a
-
Jordan could use the PageRank algorithm to sort his books. Assuming the books reference one another, he could build a graph where the nodes are books and the edges are references from one book to another, then place the books in order of their PageRank scores.
-
Graph search question
-
DFS: 5 (or 4 if you didn’t count the start node).
-
BFS: 5 again (or 4 if you didn’t count the start node).
-
Algorithmic Puzzles
-
Decrease and Conquer. You could divide the pile in half and figure out which half weighs less, then repeat the halving with that half. Repeat until you end up with two coins; the one that weighs less is the fake.
-
Brute Force. To find the least cost path, you would need to try every single possible path. This is because for the algorithm to know that it is the absolute lowest cost path, it would need to have every other path to compare with.
-
Transform and Conquer. You can transform the possible moves for the knights into a graph, where the nodes represent the squares on the board and the edges represent a legal knight move between the two squares they connect. Placing the knights on this graph makes it easy to see that the problem can be solved by moving the knights clockwise or anticlockwise around the graph.
References
-
[Black2006] https://xlinux.nist.gov/dads/HTML/dijkstraalgo.html
-
[JonesEtAl2016] https://www.cambridge.edu.au/education/titles/Further-Mathematics-VCE-Units-34-Revised-print-and-interactive-textbook-powered-by-HOTmaths%3ACambridge-Senior-Mathematics-Australian-Curriculum-VCE/ (Graphs and networks are covered in Unit 4 of Further Mathematics)
-
[Hammack2009.6] Book of Proof by Richard Hammack (Chapter 6: Proof by Contradiction) https://cse.unl.edu/~choueiry/S11-235/files/BookOfProof.pdf
-
[Cormen2013] https://mitpress.mit.edu/books/algorithms-unlocked
-
[Skiena2020] https://doi.org/10.1007/978-3-030-54256-6_2
-
[VCAA2018] https://www.vcaa.vic.edu.au/Documents/exams/algorithmics/2018/2018algorithmics-w.pdf
-
[GassAndFu2013] Dijkstra’s Algorithm. In: Gass S.I., Fu M.C. (eds) Encyclopedia of Operations Research and Management Science. Springer, Boston, MA. https://doi.org/10.1007/978-1-4419-1153-7_200148