Content
Brute-Force
Greedy Algorithms
Examples with Brute-Force and Greedy Algorithm
Divide and Conquer Algorithm
Combine and Conquer Algorithm
Dynamic Programming Algorithms
Backtracking & Branch and Bound Algorithms
University of Michigan at Ann Arbor
Last Edit Date: 06/08/2023
Disclaimer and Terms of Use:
We do not guarantee the accuracy and completeness of the summary content. Some of the course material may not be included, and some of the content in the summary may not be correct. You should use this file properly and legally. We are not responsible for any results from using this file.
This personal note is adapted from Professors Marcus Darden and David Paoletti. Please contact us to delete this file if you think your rights have been violated.
This work is licensed under a Creative Commons Attribution 4.0 International License.
Brute-Force
Definition: Solves a problem in the simplest, most direct, or most obvious way
Not distinguished by structure or form
Pros:
Simple to design and implement, and correctness is usually easy to argue
Cons:
May do more work than necessary
May be efficient, but typically is not
Sometimes, not that obvious
Example: Counting Change
Problem Definition:
Cashier has collection of coins of various denominations
Goal is to return a specified sum using the smallest number of coins
Brute-Force Counting Change
Try all subsets S of the coin collection C to make change totaling A
Since there are n coins, there are $2^n$ possible subsets
Check if the sum of the subset's coins equals A
Called "feasible solution" set
$O(n)$
Pick a feasible subset that minimizes |S|
Called "objective function"
$O(n)$
Time Complexity
Best Case: $\Omega(n 2^n)$
Worst Case: $O(n 2^n)$
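The subset-enumeration steps above can be sketched in C++ (the function name is hypothetical, used only for illustration). Each bitmask encodes one of the $2^n$ subsets, and each subset gets an $O(n)$ feasibility check:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Brute-force change making: try all 2^n subsets of the coin collection,
// keep the feasible ones (sum == target), and minimize the subset size.
// Assumes n < 32 so the bitmask fits in 32 bits. Returns -1 if no
// feasible subset exists.
int bruteForceChange(const std::vector<int>& coins, int target) {
    int n = static_cast<int>(coins.size());
    int best = -1;
    for (std::uint32_t mask = 0; mask < (1u << n); ++mask) {
        int sum = 0, count = 0;
        for (int i = 0; i < n; ++i) {      // O(n) feasibility check
            if (mask & (1u << i)) {
                sum += coins[i];
                ++count;
            }
        }
        if (sum == target && (best == -1 || count < best))
            best = count;                  // objective: minimize |S|
    }
    return best;
}
```

With coins {25, 10, 10, 10, 1, 1, 1, 1, 1} and a target of 30, the minimum feasible subset is the three dimes.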
Greedy Algorithms
Definition: An algorithm that makes a sequence of decisions (taking the best option at each point) and never reconsiders a decision once it has been made
Must show that locally optimal decisions lead to globally optimal solution
Pros:
Simple and typically fast
Cons:
Does not always produce an optimal solution
Greedy Counting Change
Go from largest to smallest denomination
Return the largest remaining coin $p_i$ from P such that its denomination $d_i \le A$
$A = A - d_i$
Find next largest coin …
If money is already sorted (by value), then the algorithm is $O(n)$
Always pick quarter if possible
Pick dimes if possible
Pick nickels if possible
Pick pennies if possible
However, the greedy algorithm does not always produce the optimal result. For example, suppose only pennies, dimes, and quarters are available to make 30 cents.
The greedy algorithm picks a quarter, then five pennies: 6 coins total.
The brute-force algorithm finds three dimes: 3 coins total.
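A minimal C++ sketch of the greedy procedure (function name hypothetical): walk the denominations from largest to smallest, taking as many of each coin as still fit. Run on the 30-cent example above, it reproduces the suboptimal 6-coin answer:

```cpp
#include <cassert>
#include <vector>

// Greedy change making on denominations sorted largest-first:
// repeatedly take the largest coin whose denomination still fits.
// Returns the coin count, or -1 if the amount cannot be made exactly.
int greedyChange(const std::vector<int>& denoms, int amount) {
    int count = 0;
    for (int d : denoms) {      // denoms assumed sorted descending
        count += amount / d;    // take as many of this coin as fit
        amount %= d;
    }
    return amount == 0 ? count : -1;
}
```

With denominations {25, 10, 1} and amount 30, greedy yields 6 coins (one quarter, five pennies); with the full US set {25, 10, 5, 1} it yields the optimal 2 coins (a quarter and a nickel).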
Proving Greedy Optimality
Need an optimal substructure
Need a greedy-choice property
Applied recursively, though often programmed iteratively
Examples with Brute-Force and Greedy Algorithm
Example 1: Sorting
Precond: A random array of int called myArr[]
Postcond: For all i < n - 1, myArr[i] <= myArr[i + 1]
Brute-Force Approach
Generate all permutations of array myArr[]: $O(n!)$
For each permutation, check if all myArr[i] <= myArr[i + 1]: $O(n)$
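The brute-force approach can be sketched as follows (the helper names are hypothetical): recursively generate the $n!$ permutations and stop at the first one that passes the $O(n)$ postcondition check.

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Postcondition check: myArr[i] <= myArr[i + 1] for all i < n - 1.
static bool isSorted(const std::vector<int>& a) {
    for (std::size_t i = 0; i + 1 < a.size(); ++i)
        if (a[i] > a[i + 1]) return false;   // O(n) per permutation
    return true;
}

// Generate all permutations by swapping each remaining element into
// position k; return true as soon as a sorted arrangement is found,
// leaving the array in that arrangement.
static bool permuteAndCheck(std::vector<int>& a, std::size_t k) {
    if (k == a.size()) return isSorted(a);
    for (std::size_t i = k; i < a.size(); ++i) {
        std::swap(a[k], a[i]);
        if (permuteAndCheck(a, k + 1)) return true;  // keep this order
        std::swap(a[k], a[i]);                       // undo, try next
    }
    return false;
}

std::vector<int> bruteForceSort(std::vector<int> a) {
    permuteAndCheck(a, 0);
    return a;
}
```

This is $O(n \cdot n!)$ overall, which is exactly why no one sorts this way.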
Greedy Approach
Find smallest item, move to first location: $n$ operations
Find next smallest item, move to second location: $n - 1$ operations
Repeat until the last location
Leave the largest item in the final location: 1 operation
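The greedy procedure above is selection sort. A minimal sketch: at each location, greedily place the smallest remaining item, for $n + (n-1) + \dots + 1 = O(n^2)$ comparisons.

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Selection sort: the greedy sorting strategy described above.
// Each pass finds the smallest remaining item and moves it into place;
// the decision is never reconsidered.
void selectionSort(std::vector<int>& a) {
    for (std::size_t i = 0; i + 1 < a.size(); ++i) {
        std::size_t minIdx = i;
        for (std::size_t j = i + 1; j < a.size(); ++j)
            if (a[j] < a[minIdx]) minIdx = j;  // find smallest remaining
        std::swap(a[i], a[minIdx]);            // move it to location i
    }
}
```

Unlike greedy change making, this greedy strategy is always correct: the smallest remaining item provably belongs in the next location.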
Example 2: Mountain Climbing
Brute-Force Approach
Lay out a grid in the area around the mountain
Visit all possible locations in the grid
The highest measured altitude is the top
Greedy Approach
Take a step that increases altitude
Iterate until the altitude is no longer increasing in any direction
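The greedy climb can be sketched as hill climbing on a hypothetical altitude grid (the function and grid are illustrative, not from the course):

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Greedy mountain climbing: from the current cell, step to any strictly
// higher neighbor; stop when no direction increases altitude. Note the
// stopping point is only a local maximum, not necessarily the summit.
std::pair<int, int> hillClimb(const std::vector<std::vector<int>>& alt,
                              int r, int c) {
    const int dr[] = {-1, 1, 0, 0};
    const int dc[] = {0, 0, -1, 1};
    bool moved = true;
    while (moved) {
        moved = false;
        for (int k = 0; k < 4; ++k) {
            int nr = r + dr[k], nc = c + dc[k];
            if (nr >= 0 && nr < static_cast<int>(alt.size()) &&
                nc >= 0 && nc < static_cast<int>(alt[0].size()) &&
                alt[nr][nc] > alt[r][c]) {
                r = nr; c = nc;   // take a step that increases altitude
                moved = true;
                break;
            }
        }
    }
    return {r, c};
}
```

On a grid that rises monotonically toward one corner, the climb reaches the true top; on a grid with several peaks it can stall on a foothill, which is the classic failure mode of this greedy approach.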
Divide and Conquer Algorithm
Definition: Divide a problem into two (or more) smaller subproblems, preferably of equal size. (Top down)
Often recursive
Often involve $\log n$
Pros:
Efficiency
"Elegance" of recursion
Cons:
Recursive calls to small subdomains often expensive
Sometimes dependent upon initial state of subdomains
Combine and Conquer Algorithm
Definition: Start with the smallest subdomain possible, then combine increasingly larger subdomains until size = n. (Bottom up)
Dynamic Programming Algorithms
Definition: Remember partial solutions when smaller instances are related
Solves small instances first, stores the results, and looks them up when needed
Pros:
Avoids recomputing shared subproblems, often turning exponential running times into polynomial ones
Cons:
Requires extra memory to store the partial solutions
Example: Fibonacci
Fibonacci numbers
$F_0 = 0$
$F_1 = 1$
$F_n = F_{n-1} + F_{n-2}$
$F_{50} = F_{49} + F_{48} = (F_{48} + F_{47}) + (F_{47} + F_{46}) = ((F_{47} + F_{46}) + (F_{46} + F_{45})) + ((F_{46} + F_{45}) + (F_{45} + F_{44})) = \dots$
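The expansion above shows the naive recursion recomputing the same subproblems over and over ($F_{47}$, $F_{46}$, ... each appear many times). The dynamic-programming fix is to solve small instances first and store each result:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Bottom-up dynamic-programming Fibonacci: compute each F_i exactly
// once, storing it in a table, for O(n) additions instead of the
// exponential blow-up of the naive recursion.
std::uint64_t fib(int n) {
    std::vector<std::uint64_t> f(std::max(n + 1, 2));
    f[0] = 0;
    f[1] = 1;
    for (int i = 2; i <= n; ++i)
        f[i] = f[i - 1] + f[i - 2];  // look up stored subproblems
    return f[n];
}
```

With the table, $F_{50}$ takes 49 additions; the naive recursion would make over a billion calls.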
Backtracking & Branch and Bound Algorithms
Types of Algorithm Problems
Constraint Satisfaction Problems
Can we satisfy all given constraints?
If yes, how do we satisfy them? (need a specific solution)
May have more than one solution
Examples: sorting, mazes, spanning tree
Stop when a satisfying solution is found
Can rely on Backtracking algorithms
Optimization Problems
Must satisfy all constraints
Must minimize an objective function subject to those constraints
Examples: giving change, MST
Usually cannot stop early
Must develop a set of possible solutions
Called the feasibility set
Usually just the best complete solution so far, plus the current partial solution being developed
When done, the best solution seen is the answer
Can rely on Branch and Bound algorithms
For particular problems, there may be much more efficient approaches
Think of these as a fallback: a more sophisticated version of a brute-force approach
i. Backtracking Algorithms
Definition: Systematically consider all possible outcomes of each decision, but prune searches that do not satisfy constraint(s). (Think of as DFS with Pruning)
Pros:
Pruning can eliminate large portions of the search space
Cons:
Worst-case running time is still exponential
Example: graph coloring in four colors
Assign colors to vertices such that no two vertices connected by an edge have the same color
Some graphs can be 4-colored, and some cannot
Graph properties
Cartographic maps can be drawn as planar graphs
Planar graph: a graph that can be drawn with no crossing edges
Conversion of a map to a planar graph
States become nodes
Shared borders become edges
From Enumeration to Backtracking
Enumeration
Take vertex $v_1$, consider 4 branches (colors)
Then take vertex $v_2$, consider 4 branches
Then take vertex $v_3$, consider 4 branches
...
Suppose there is an edge $(v_1, v_2)$: enumeration still generates every coloring in which $v_1$ and $v_2$ share a color, even though all of them are invalid
Backtracking
Branch on every possibility
Must maintain the current partial solution being developed
Might print or maintain all complete solutions
Check every partial solution for validity
If a partial solution violates some constraint, it makes no sense to extend it (so drop it), i.e., backtrack
M-Coloring Algorithm
Input: n (number of nodes), m (number of colors), W[0..n)[0..n) (adjacency matrix; W[i][j] is true if there is an edge from node i to node j, and false otherwise)
Output: all possible colorings of the graph, represented by int vcolor[0..n), where vcolor[i] is the color associated with node i
Algorithm m_coloring(index i = 0)
    if (i == n)
        print vcolor[0] through vcolor[n - 1]
        return
    for (color = 0; color < m; color++)
        vcolor[i] = color
        if (promising(i))
            m_coloring(i + 1)

bool promising(index i)
    for (index j = 0; j < i; ++j)
        if (W[i][j] and vcolor[i] == vcolor[j])
            return false
    return true
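The pseudocode above can be made runnable roughly as follows (a sketch that counts the colorings rather than printing them; the class wrapper is an implementation choice, not part of the course pseudocode):

```cpp
#include <cassert>
#include <vector>

// Backtracking m-coloring: W is the adjacency matrix, vcolor[i] the
// color of node i. A branch is abandoned (backtracked) as soon as two
// adjacent nodes share a color.
struct MColoring {
    int n, m;
    std::vector<std::vector<bool>> W;
    std::vector<int> vcolor;
    int solutions = 0;

    MColoring(int n_, int m_)
        : n(n_), m(m_), W(n_, std::vector<bool>(n_)), vcolor(n_) {}

    bool promising(int i) const {
        for (int j = 0; j < i; ++j)
            if (W[i][j] && vcolor[i] == vcolor[j])
                return false;   // constraint violated: prune this branch
        return true;
    }

    void color(int i = 0) {
        if (i == n) { ++solutions; return; }  // complete valid coloring
        for (int c = 0; c < m; ++c) {
            vcolor[i] = c;
            if (promising(i))
                color(i + 1);   // extend only promising partial solutions
        }
    }
};
```

On a triangle (three mutually adjacent nodes) with three colors, there are $3 \cdot 2 \cdot 1 = 6$ valid colorings, and the pruning prevents any invalid partial coloring from being extended.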
When is Backtracking Efficient?
Backtracking avoids looking at large portions of the search space by pruning, but this does not necessarily improve the asymptotic complexity over brute force.
However, backtracking works well for constraint satisfaction problems that are either:
Highly-constrained: Constraint violations are detected early in partial solutions and lead to MASSIVE amounts of pruning.
Under-constrained: Acceptable solutions are densely distributed, so it is quite likely we find one early and can terminate.