An algorithm is defined as a finite sequence of unambiguous instructions for solving a problem.
- Every algorithm requires an input
- The output depends only on the input
- The algorithm must terminate in a finite time for all inputs (it cannot get stuck in an endless loop)
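As a sketch of these properties, Euclid's algorithm for the greatest common divisor (not part of the original notes, chosen here only as an illustration) takes an input, gives an output that depends only on that input, and always terminates:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite sequence of unambiguous instructions.

    - requires an input (a, b)
    - the output depends only on the input
    - terminates for all inputs, because b strictly decreases each pass
    """
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(300, 30))  # 30
```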
The order of an algorithm is a measure of the approximate run time for an algorithm, depending on the size of the problem. For large problems we only need to consider the dominant term to get an approximate run time.
The efficiency of an algorithm is a measure of how well an algorithm copes with an increase in the size, n, of a problem. The efficiency decreases as run time increases.
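The "dominant term" idea can be checked numerically. Using a made-up run-time formula with lower-order terms (the coefficients here are purely illustrative), the ratio of the exact formula to its dominant term approaches 1 as n grows:

```python
def exact(n: int) -> int:
    # hypothetical run-time formula with lower-order terms (illustrative only)
    return n**3 + 100 * n**2 + 5 * n

def dominant(n: int) -> int:
    # keep only the dominant term
    return n**3

for n in (10, 100, 1000, 10000):
    # for small n the lower-order terms matter; for large n they are negligible
    print(n, round(exact(n) / dominant(n), 3))
```

For n = 10 the lower-order terms multiply the estimate by more than 10, but by n = 10000 the ratio is within about 1% of 1, which is why only the dominant term is kept for large problems.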
| Order       | Run time proportional to |
|-------------|--------------------------|
| linear      | n                        |
| quadratic   | n²                       |
| cubic       | n³                       |
| exponential | 2ⁿ                       |
A computer takes 2 seconds to solve a problem of size 30. Estimate the time taken to solve a problem of size 300 if the algorithm used has:
- an order n
- an order n²
- an order n³
- an order 2ⁿ
Therefore, using n = 300, the problem size has increased by a factor of 300 ÷ 30 = 10, so the run time scales by that factor raised to the order:
- order n: 2 × 10 = 20 seconds
- order n²: 2 × 10² = 200 seconds
- order n³: 2 × 10³ = 2000 seconds
- order 2ⁿ: 2 × 2³⁰⁰⁻³⁰ = 2 × 2²⁷⁰ seconds, far too long to be practical
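The polynomial cases of this worked example can be computed with a short sketch (variable names are my own):

```python
size_old, size_new, time_old = 30, 300, 2.0
ratio = size_new / size_old  # the size has grown by a factor of 10

# for an algorithm of order n^k, run time scales by ratio**k
for k, label in [(1, "n"), (2, "n²"), (3, "n³")]:
    time_new = time_old * ratio**k
    print(f"order {label}: {time_new:g} seconds")

# order 2ⁿ scales by 2**(300 - 30) = 2**270, which is astronomically large
```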
See the other D1 notes: