Get the second largest number in a list in linear time
You could use the heapq module:

>>> el = [20, 67, 3, 2.6, 7, 74, 2.8, 90.8, 52.8, 4, 3, 2, 5, 7]
>>> import heapq
>>> heapq.nlargest(2, el)
[90.8, 74]

And go from there…
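heapq.nlargest(2, el) is linear for a fixed k, but you can also do it with a single pass and two trackers, no heap needed. A minimal sketch (the function name second_largest is my own):

```python
def second_largest(numbers):
    """Return the second largest value in one pass over the list."""
    if len(numbers) < 2:
        raise ValueError("need at least two elements")
    largest = second = float("-inf")
    for x in numbers:
        if x > largest:
            # x displaces the current maximum; old maximum becomes second
            largest, second = x, largest
        elif x > second:
            second = x
    return second

el = [20, 67, 3, 2.6, 7, 74, 2.8, 90.8, 52.8, 4, 3, 2, 5, 7]
print(second_largest(el))  # 74
```

Like heapq.nlargest, this counts duplicates separately, so second_largest([5, 5]) is 5.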
One billion is not a very big number. Any reasonably modern machine should be able to do this in a few seconds at most, if it’s able to do the work with native types. I verified this by writing an equivalent C program, reading the assembly to make sure that it actually was doing addition, …
The performance difference has been irrelevant since at least January 2012, and likely earlier:

Single quotes: 0.061846971511841 seconds
Double quotes: 0.061599016189575 seconds

Earlier versions of PHP may have had a difference – I personally prefer single quotes to double quotes, so it was a convenient difference. The conclusion of the article makes an excellent point: …
It really doesn’t matter in most cases. The large number of questions on StackOverflow about whether this method or that method is faster belies the fact that, in the vast majority of cases, code spends most of its time sitting around waiting for users to do something. If you are really concerned, profile it for …
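In Python, the built-in profiler is the quickest way to see where time actually goes before worrying about micro-optimizations. A sketch, with two made-up toy functions for illustration:

```python
import cProfile
import io
import pstats

def slow_concat(n):
    # repeated string concatenation builds a new string each time
    s = ""
    for i in range(n):
        s += str(i)
    return s

def fast_join(n):
    # str.join builds the result in one pass
    return "".join(str(i) for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
slow_concat(10_000)
fast_join(10_000)
profiler.disable()

# print the five most expensive entries by cumulative time
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())
```

The report usually makes it obvious whether the code you were arguing about even registers next to I/O and waiting.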
The other thread mentioned Marsaglia’s xorshf generator, but no one posted the code.

static unsigned long x=123456789, y=362436069, z=521288629;

unsigned long xorshf96(void) {          // period 2^96-1
    unsigned long t;
    x ^= x << 16;
    x ^= x >> 5;
    x ^= x << 1;

    t = x;
    x = y;
    y = z;
    z = t ^ x ^ y;

    return z;
}
These problems usually boil down to the following: the function you are trying to parallelize doesn’t require enough CPU resources (i.e. CPU time) to justify parallelizing it! Sure, when you parallelize with multiprocessing.Pool(8), you could theoretically (but not practically) get an 8x speed-up. However, keep in mind that this isn’t free – you gain this …
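A sketch of the effect, using an illustrative trivially cheap function (the names cheap, serial, and parallel are mine): because each call does almost no CPU work, the pickling and inter-process communication that Pool.map performs per item typically costs more than the work itself, so the "parallel" version is often slower than the plain loop.

```python
import time
from multiprocessing import Pool

def cheap(x):
    # far too little CPU work per call to amortize pickling/IPC overhead
    return x * x

def serial(data):
    return [cheap(x) for x in data]

def parallel(data, workers=4):
    with Pool(workers) as pool:
        return pool.map(cheap, data)

if __name__ == "__main__":
    data = list(range(100_000))

    t0 = time.perf_counter()
    serial(data)
    t_serial = time.perf_counter() - t0

    t0 = time.perf_counter()
    parallel(data)
    t_parallel = time.perf_counter() - t0

    # on most machines the pool loses badly for work this cheap
    print(f"serial: {t_serial:.4f}s, parallel: {t_parallel:.4f}s")
```

Make the per-call work heavier (or batch items with the chunksize argument to Pool.map) and the balance shifts back toward the pool.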
This effect is caused by Type Profile Pollution. Let me explain with a simplified benchmark:

@State(Scope.Benchmark)
public class Streams {
    @Param({"500", "520"})
    int iterations;

    @Setup
    public void init() {
        for (int i = 0; i < iterations; i++) {
            Stream.empty().reduce((x, y) -> x);
        }
    }

    @Benchmark
    public long loop() {
        return Stream.empty().count();
    }
}

Though …
2019-04: Reached EOL. Suggested alternative: LLVM-MCA
2017-11: Version 3.0 released (latest as of 2019-05-18)
2017-03: Version 2.3 released

What it is: IACA (the Intel Architecture Code Analyzer) is a (2019: end-of-life) freeware, closed-source static-analysis tool made by Intel to analyze how instructions are scheduled when executed by modern Intel processors. This allows it …
sum is quite fast, but sum isn’t the cause of the slowdown. Three primary factors contribute to the slowdown:

1. The use of a generator expression causes overhead for constantly pausing and resuming the generator.
2. Your generator version adds unconditionally instead of only when the digit is even. This is more expensive when the digit is …
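The code being discussed isn’t shown above, so here is a hypothetical reconstruction of the two styles that differ in exactly the ways described (both function names are mine): the generator expression adds a 0 for every odd digit and pays suspend/resume overhead per item, while the plain loop adds only when the digit is even.

```python
def digit_sum_gen(n):
    # generator expression: resumed once per digit,
    # and contributes a 0 even when the digit is odd
    return sum(int(d) if int(d) % 2 == 0 else 0 for d in str(n))

def digit_sum_loop(n):
    # plain loop: adds only when the digit is even
    total = 0
    for ch in str(n):
        d = int(ch)
        if d % 2 == 0:
            total += d
    return total

print(digit_sum_gen(123456))   # 12 (2 + 4 + 6)
print(digit_sum_loop(123456))  # 12
```

Both compute the same result; the difference is purely in how much interpreter machinery runs per digit.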
Can MySQL reasonably perform queries on billions of rows? — MySQL can ‘handle’ billions of rows. “Reasonably” depends on the queries; let’s see them.

Is InnoDB (MySQL 5.5.8) the right choice for multi-billion rows? — 5.7 has some improvements, but 5.5 is pretty good, in spite of being nearly 8 years old, and on …