So today I learned something completely new. One of my classmates, who just happens to be a beast at programming, went to an interview yesterday for a very prominent company here in New York. They asked him a lot of CS questions because this was not for a junior position, and luckily, besides the fact that my friend is just really smart, he has a background in CS. So he smoked some of those questions.

One of the things he thought I should have a grasp on was Big O notation. What follows is me trying to replay the 20-minute conversation I had with my classmate this morning.

O(1) describes an algorithm that will always execute in the same time (or space) regardless of the size of the input data set.
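A quick sketch of what that looks like in Python (the function name is just for illustration): grabbing the first element of a list takes the same amount of time whether the list has 10 items or 10 million.

```python
def get_first(items):
    # Constant time: a single index lookup, regardless of list size
    return items[0]
```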

O(N) describes an algorithm whose performance will grow linearly and in direct proportion to the size of the input data set.
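A simple linear search is the classic example: in the worst case you have to look at every element once, so doubling the input doubles the work. A rough sketch:

```python
def contains(items, target):
    # Linear time: worst case touches every element exactly once
    for item in items:
        if item == target:
            return True
    return False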

O(N²) represents an algorithm whose performance is directly proportional to the square of the size of the input data set.
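Nested loops over the same input are the usual culprit here. This naive duplicate check (a hypothetical example, not from the conversation) compares every element against every other element, so N items mean roughly N × N comparisons:

```python
def has_duplicates(items):
    # Quadratic time: two nested loops over the same data set
    for i in range(len(items)):
        for j in range(len(items)):
            if i != j and items[i] == items[j]:
                return True
    return False
```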

O(2^N) denotes an algorithm whose growth will double with each additional element in the input data set.
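The textbook example of exponential growth is the naive recursive Fibonacci: each call spawns two more calls, so the number of calls roughly doubles every time n goes up by one. A minimal sketch:

```python
def fib(n):
    # Exponential time: each call branches into two more calls,
    # so the work roughly doubles with each additional unit of n
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)
```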