Asymptotic Algorithm Performance Analysis: Know Which One Is Better


Algorithm analysis enables developers to measure the complexity of their code. However, when you adopt an algorithm analysis model, there are a few things to keep in mind, such as how practical the model is to apply and how reliable and robust its results are. Asymptotic analysis is one of the most effective methods for analysing algorithm performance. In this model, performance is expressed in terms of input size rather than actual running time. In one of their articles, GeeksforGeeks discusses performance analysis and how it can benefit your code.
Significance of Performance Analysis
Focusing on performance analysis gives you precise, reliable data about your existing code. You can measure its speed and predict how it will scale. Understanding how fast your code runs, and why, often reveals behaviour you had not paid attention to before.
How to Choose the Right Performance Analysis
One of the easiest ways to decide which of two algorithms is better is simply to compare them: implement both, run them on the same inputs, and observe which takes less time (a naive version of this comparison is sketched after the list below). However, this method poses several concerns:
- For some inputs, one algorithm may perform better; for other inputs, the other may win.
- It is also possible that one algorithm performs better on one machine and the other on a different machine.
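As a minimal sketch of this naive timing comparison, here is what it might look like in Python, using linear and binary search to match the example that follows; the function names, data, and timing harness are illustrative, not from the original article:

```python
import time
from bisect import bisect_left

def linear_search(data, target):
    # Scan elements one by one until the target is found.
    for i, value in enumerate(data):
        if value == target:
            return i
    return -1

def binary_search(data, target):
    # Requires sorted input; halves the search space each step.
    i = bisect_left(data, target)
    return i if i < len(data) and data[i] == target else -1

data = list(range(1_000_000))
target = 999_999  # worst case for the linear search

for search in (linear_search, binary_search):
    start = time.perf_counter()
    search(data, target)
    elapsed = time.perf_counter() - start
    print(f"{search.__name__}: {elapsed:.6f} s")
```

Note that this measures a single input size on a single machine, which is exactly the weakness the two bullet points above describe.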
Here is how asymptotic analysis works in practice. Suppose you run a linear search on computer A and a binary search on computer B, and pick a machine constant for each computer so that the constant times the number of operations gives the exact search time in seconds. For example, assume the constant for computer A is 0.2 and the constant for computer B is 1000, which means that machine A is 5,000 times more powerful than B.
However, beyond a certain input size you will notice that the binary search takes less time than the linear search, despite running on the far slower machine. The reason is its order of growth: binary search grows logarithmically while linear search grows linearly, so the machine-dependent constants can be ignored once the input is large enough.
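A small sketch of that crossover, using the cost model implied by the constants above (0.2·n seconds for the linear search on A, 1000·log₂ n seconds for the binary search on B):

```python
import math

def linear_time(n):
    # Fast machine A: constant 0.2, linear order of growth.
    return 0.2 * n

def binary_time(n):
    # Slow machine B: constant 1000, logarithmic order of growth.
    return 1000 * math.log2(n)

for n in (10, 1_000, 100_000, 10_000_000, 1_000_000_000):
    winner = "linear" if linear_time(n) < binary_time(n) else "binary"
    print(f"n={n:>13,}: linear={linear_time(n):>13,.0f} s, "
          f"binary={binary_time(n):>6,.0f} s -> {winner} wins")
```

For small inputs the linear search on the fast machine wins, but somewhere between a thousand and a hundred thousand records the lines cross, and from then on binary search wins no matter how fast machine A is.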
How Effective Is the Asymptotic Approach?
The asymptotic approach is undoubtedly one of the best tools for assessing algorithms, but it has drawbacks. For instance, suppose two algorithms take 1000·n·log n and 2·n·log n time on a machine. Both are asymptotically the same, so when you measure them with asymptotic analysis it is difficult to judge which one is better, even though the second is 500 times faster in practice.
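To make that concrete, here is a short sketch; the two cost functions come straight from the example above:

```python
import math

def cost_a(n):
    return 1000 * n * math.log2(n)

def cost_b(n):
    return 2 * n * math.log2(n)

# Both are Theta(n log n), so asymptotic analysis rates them as equal,
# yet the first is a constant 500 times slower at every input size.
for n in (100, 10_000, 1_000_000):
    print(f"n={n:>9,}: ratio = {cost_a(n) / cost_b(n):.0f}")
```

Asymptotic analysis deliberately discards that constant factor, which is usually a strength, but here it hides a 500x difference that would matter a great deal in practice.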
Click on the link to read the original article: https://www.geeksforgeeks.org/analysis-of-algorithms-set-1-asymptotic-analysis/?ref=lbp
Updated in response to a question from one of our Social Media channels:
What’s an ‘Asymptote’, and why would one matter here?
In mathematics, an asymptote of a curve is “a line which is tangent to a curve at the point of infinity” (see Wikipedia for more on that).
In the chart below, the orange line is an asymptote. As our data set size increases, the time taken to process it increases to such a degree that the line plotting data set size against processing time will continue to infinity without ever crossing the asymptote.
[Chart: processing time vs. data set size, with the asymptote shown as an orange line]
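The original chart is not reproduced here, so the following is a hypothetical matplotlib sketch of a curve with that shape: cheap (and even improving) at small sizes, then climbing towards a vertical asymptote it never crosses. The cost formula and the limit of 100 are invented purely for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt

limit = 100  # hypothetical hard limit on data set size
n = np.linspace(1, limit - 1, 500)

# Invented cost curve: fixed overhead amortised away at small n,
# then processing time explodes as n approaches the limit.
t = 50 / n + 200 / (limit - n)

plt.plot(n, t, label="processing time")
plt.axvline(limit, color="orange", label="asymptote")
plt.xlabel("data set size")
plt.ylabel("processing time")
plt.legend()
plt.show()
```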
In the real world, this means:
- The first part of the graph is good. Increases in data set size don’t have a huge time impact, and in fact we even get more efficient in the early stages of data set growth. Maybe there are things to be done before and after the processing that take the same amount of time no matter what size the data set is.
- The middle of the graph is OK. An increase in data set size means an increase in processing time, but it may still be acceptable.
- The last part of the graph is bad. A small increase in data set size means a bigger increase in processing time. This probably means there is an upper limit on the size of the data set which can be processed.
Applying this to the original example:
Taking the example from the original article, two algorithms are being examined: one is the linear search and the other is the binary search, and we know how long each takes, in seconds, to process a given number of records.
[Table: processing time in seconds for each algorithm at increasing record counts]
After graphing this (with logarithmic scales to show the early values better), we can see that the linear algorithm does appear to perform better, but only up to the point where the chart lines cross.
[Chart (logarithmic scales): processing time vs. record count for both algorithms, showing the crossover]
After that, the binary algorithm is the way to go: at the final data volume the linear algorithm takes about 6.3 years, while the binary algorithm takes around 8.3 hours.
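Those headline figures can be checked against the constants from the original example (0.2·n seconds for linear, 1000·log₂ n seconds for binary). The final data volume of one billion records is an assumption on my part, but it is the value that reproduces both quoted times:

```python
import math

n = 1_000_000_000  # assumed final data volume: one billion records

linear_seconds = 0.2 * n
binary_seconds = 1000 * math.log2(n)

print(f"linear: {linear_seconds / (365 * 24 * 3600):.1f} years")  # ~6.3 years
print(f"binary: {binary_seconds / 3600:.1f} hours")               # ~8.3 hours
```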
This data will help select the better algorithm. If the data set will always be small, then the linear algorithm is best; for larger data sets, the binary algorithm should be selected.
If we were only judging by code size, complexity, and readability with a small test data set, the degraded performance with larger data sets might not be discovered until much later, when the code is in production.
Other factors are at play here too. The scenario given in the original article tells us that the binary algorithm is being run on a slower machine.