In this first mission of our Algorithm Complexity course, you'll start learning about time complexity: how the running time, or speed, of an algorithm relates to the size of its input.

Moore's law is often summarized as predicting that the speed of computers doubles every two years. Despite this, poorly designed algorithms can take longer and longer to execute as time goes on. The reason for this phenomenon is that the speed of computers is not the only thing increasing over time: the amount of data we process is also growing, and it seems to grow at roughly the same rate as the speed of computers!

Since both speed and data are growing at the same rate, shouldn't the computation time remain the same? After all, every two years we double the amount of data that we want to process, but we also process it on a computer that is twice as fast. Well, that depends on the algorithm, as the sketch below suggests.
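Here is a back-of-the-envelope sketch, assuming that doubling a computer's speed halves the cost c of processing one item of data. If an algorithm's running time is proportional to the input size n, say T(n) = c · n, then after two years the data doubles (n becomes 2n) while the per-item cost halves (c becomes c/2):

    T_new = (c/2) · (2n) = c · n = T_old

The two effects cancel out exactly. But if the running time grows with the square of the input size, say T(n) = c · n², then:

    T_new = (c/2) · (2n)² = 2 · c · n² = 2 · T_old

and the computation takes twice as long every two years, despite the faster hardware.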

For some algorithms, the execution time is proportional to the amount of data they process, which means their execution time stays roughly constant over time. However, not all algorithms exhibit this property. In this mission, you'll start to dive into why that is, and how we can use that knowledge in a data engineering context.
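As a small preview of the techniques listed in the objectives below, here is a minimal sketch in Python. The function `sum_values`, the input sizes, and the value range are illustrative choices rather than code from the mission: it generates random inputs of growing size and measures how long a linear-time algorithm takes on each one.

```python
import random
import time

def sum_values(values):
    # A simple algorithm whose amount of work is proportional to the input size.
    total = 0
    for value in values:
        total += value
    return total

for size in [10_000, 20_000, 40_000, 80_000]:
    # Generate a random input with `size` elements.
    values = [random.randint(0, 1000) for _ in range(size)]

    # Measure the execution time of the algorithm on this input.
    start = time.perf_counter()
    sum_values(values)
    end = time.perf_counter()

    print(f"size = {size:>6}, time = {end - start:.6f} seconds")
```

On most machines, doubling the input size roughly doubles the measured time, which is exactly the proportional behavior described above.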

Objectives

  • How to measure the execution time of an algorithm
  • How to generate random inputs for an algorithm
  • How to model the execution time of an algorithm

Mission Outline

1. Introduction to Time Complexity
2. Measuring the Execution Time
3. Generating Random Inputs
4. Analyzing Execution Times
5. Modeling Execution Times
6. Worst Case Analysis
7. Quadratic Complexity
8. Simplifying Further
9. A Common Misconception
10. Next Steps
11. Takeaways