Are algorithms with high time complexity ever used in the real world for small inputs? [closed]

This does happen in the real world! For example, a famous sorting algorithm is Timsort:

Timsort

Details of the simplified implementation described below:

We take the run size to be 32, and the input array is divided into sub-arrays of that size.

We sort each of these runs with a simple insertion sort. After sorting the individual runs, we merge them one by one with the merge step of merge sort.

We double the size of the merged subarrays after every iteration.

Insertion sort has O(N^2) worst-case complexity but is faster for tiny lists; merge sort has O(N log N) complexity, so it is better for longer lists.
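As a rough illustration, here is a minimal sketch in Python of the simplified scheme above: fixed runs of 32 sorted by insertion sort, then merged in doubling passes. This is not the real Timsort (which also detects natural runs and uses galloping merges), and the names RUN, insertion_sort, merge, and timsort are just illustrative:

    # A rough sketch of the simplified scheme above, not real Timsort:
    # fixed runs of 32, no natural-run detection or galloping merges.
    RUN = 32

    def insertion_sort(arr, left, right):
        # Sort arr[left..right] in place; fast on tiny slices.
        for i in range(left + 1, right + 1):
            key = arr[i]
            j = i - 1
            while j >= left and arr[j] > key:
                arr[j + 1] = arr[j]
                j -= 1
            arr[j + 1] = key

    def merge(arr, left, mid, right):
        # Merge the sorted slices arr[left..mid] and arr[mid+1..right].
        a, b = arr[left:mid + 1], arr[mid + 1:right + 1]
        i = j = 0
        k = left
        while i < len(a) and j < len(b):
            if a[i] <= b[j]:
                arr[k] = a[i]
                i += 1
            else:
                arr[k] = b[j]
                j += 1
            k += 1
        arr[k:right + 1] = a[i:] + b[j:]  # copy whichever tail remains

    def timsort(arr):
        n = len(arr)
        # Step 1: insertion-sort each run of RUN elements.
        for start in range(0, n, RUN):
            insertion_sort(arr, start, min(start + RUN - 1, n - 1))
        # Step 2: merge adjacent runs, doubling the merged size each pass.
        size = RUN
        while size < n:
            for left in range(0, n, 2 * size):
                mid = min(left + size - 1, n - 1)
                right = min(left + 2 * size - 1, n - 1)
                if mid < right:
                    merge(arr, left, mid, right)
            size *= 2
        return arr

The point of the hybrid is visible in the code: the O(N^2) insertion sort only ever sees slices of at most 32 elements, while the O(N log N) merging handles everything larger.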

Introsort

Another example is introsort, the sorting algorithm used by std::sort in common C++ standard library implementations:

Introsort or introspective sort is a hybrid sorting algorithm that
provides both fast average performance and (asymptotically) optimal
worst-case performance. It begins with quicksort, switches to heapsort
when the recursion depth exceeds a level based on (the logarithm of)
the number of elements being sorted, and switches to insertion sort
when the number of elements is below some threshold. This combines the
good parts of the three algorithms, with practical performance
comparable to quicksort on typical data sets and worst-case O(n log n)
runtime due to the heapsort. Since the three algorithms it uses are
comparison sorts, it is also a comparison sort.
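For illustration, here is a minimal sketch of the same idea in Python (the real introsort lives in C++ standard libraries): quicksort with a depth limit of roughly 2*log2(n), a heapsort fallback when the limit is hit, and insertion sort for slices below a small threshold. The constant 16, the Lomuto partition, and all function names are illustrative choices, not part of any standard:

    import math

    SMALL = 16  # illustrative cutoff; real implementations tune this

    def insertion_sort(arr, lo, hi):
        # Same helper as in the Timsort sketch above.
        for i in range(lo + 1, hi + 1):
            key = arr[i]
            j = i - 1
            while j >= lo and arr[j] > key:
                arr[j + 1] = arr[j]
                j -= 1
            arr[j + 1] = key

    def heapsort(arr, lo, hi):
        # In-place heapsort on arr[lo..hi]: the guaranteed O(n log n) fallback.
        def sift_down(root, end):
            while True:
                child = lo + 2 * (root - lo) + 1
                if child > end:
                    return
                if child + 1 <= end and arr[child] < arr[child + 1]:
                    child += 1
                if arr[root] >= arr[child]:
                    return
                arr[root], arr[child] = arr[child], arr[root]
                root = child
        for start in range(lo + (hi - lo - 1) // 2, lo - 1, -1):
            sift_down(start, hi)
        for end in range(hi, lo, -1):
            arr[lo], arr[end] = arr[end], arr[lo]
            sift_down(lo, end - 1)

    def partition(arr, lo, hi):
        # Lomuto partition around the last element (simplest, not fastest).
        pivot = arr[hi]
        i = lo
        for j in range(lo, hi):
            if arr[j] < pivot:
                arr[i], arr[j] = arr[j], arr[i]
                i += 1
        arr[i], arr[hi] = arr[hi], arr[i]
        return i

    def introsort(arr):
        def sort(lo, hi, depth):
            while hi - lo > SMALL:
                if depth == 0:
                    heapsort(arr, lo, hi)  # depth limit hit: switch to heapsort
                    return
                depth -= 1
                p = partition(arr, lo, hi)
                sort(p + 1, hi, depth)  # recurse on one side,
                hi = p - 1              # iterate on the other
            insertion_sort(arr, lo, hi)  # small slice: insertion sort
        if arr:
            sort(0, len(arr) - 1, 2 * int(math.log2(len(arr))))
        return arr

On typical input this behaves like quicksort, but it can never degrade past O(n log n): a pathological input merely trips the depth limit and gets heapsorted instead.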

The downside: more complexity

The downside of combining several algorithms for a single task is, clearly, increased implementation complexity. That cost is worth it if you are writing standard-library code for a programming language, code that will be reused millions or even billions of times. For smaller projects, saving developer time rather than machine time by implementing only one algorithm is often the better choice.
