Sunday, September 3, 2017

Basic Big O Notation Rough Draft

Measuring the "time" it takes for a machine to run a particular algorithm brings with it several difficulties. It's not as simple as passing an algorithm to a computer and clocking how long it takes to complete the instructions. Different hardware and software may implement algorithms and operations in distinct manners; computers differ in their use of cache, in the number and speed of their processors, in the size of their RAM, in their use of an SSD vs. an HDD, etc. This makes it impractical to truly understand the speed of an algorithm itself as distinct from the architecture running it. Therefore, to measure the speed of an algorithm we count the number of operations it performs as the input grows, using Big O notation, rather than relying on a simplistic measure of actual clock time.

Imagine we have an array of integers that we wish to sum. To do this we iterate over the array one element at a time, adding each integer to a running total. If we were to double the number of elements in the array, we would double the number of operations we have to perform. This is known as linear time (due to the fact that the number of operations increases directly with the size of the input) and in Big O notation is signified by O(n). Contrast this with adding two elements at known indices in the array. Because arrays are preallocated and support random access, any index can be read directly, regardless of the total number of elements in the array. As such, all other things being equal, it would take no more time to compute this sum if we were to double the size of the array. This is known as constant time (since the time taken is independent of the size of the input) and is signified by O(1). A rough sketch of both cases follows below.
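Here is a minimal Python sketch of both cases. The function names and the sample array are my own illustration, not anything prescribed by Big O notation itself.

```python
def sum_all(numbers):
    # O(n): every element is touched once, so doubling the
    # input doubles the number of additions performed.
    total = 0
    for value in numbers:
        total += value
    return total

def sum_two(numbers, i, j):
    # O(1): indexing into an array is a direct lookup, so the
    # cost is the same whether it holds 10 items or 10 million.
    return numbers[i] + numbers[j]

data = [4, 8, 15, 16, 23, 42]
print(sum_all(data))        # 108
print(sum_two(data, 0, 5))  # 46
```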

There are of course many other time complexities, such as logarithmic, polynomial, and exponential times, and these follow naturally from the same idea of counting operations as the input grows; one logarithmic example is sketched below.
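As a rough illustration of logarithmic time, here is a sketch of binary search over a sorted list in Python (the function name and sample data are my own, for illustration only). Each comparison halves the remaining search range, so doubling the input adds only one extra step.

```python
def binary_search(sorted_numbers, target):
    # O(log n): each comparison discards half of the remaining
    # range, so even a very large input takes only a handful of steps.
    low, high = 0, len(sorted_numbers) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_numbers[mid] == target:
            return mid
        elif sorted_numbers[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1  # target not present

print(binary_search([2, 3, 5, 7, 11, 13, 17], 11))  # 4
```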
