I've been thinking about this for the last little while, and I wanted to see if it holds up to reality. I'm sure Narf the Mouse would have a natural interest in this, but you're all experts to me.
In short, as described by Moore's Law, technology advances at an exponential rate. This means a slow program (running at 140% delay rather than 100%, to pick arbitrary numbers) will gradually be normalized by increases in processor power, until the difference becomes almost invisible. To illustrate:
Generation 1: Normal program runs at 100% delay, slow program runs at 140% delay.
Generation 2: Normal program runs at 50% delay, slow program runs at 70% delay.
Generation 3: Normal program runs at 25% delay, slow program runs at 35% delay.
Generation 4: Normal program runs at 13% delay, slow program runs at 18% delay.
And so on. So the idea is that no matter how inefficient the code is, both programs will eventually reach near-identical performance. The conclusion is that, over a period of perhaps ten years, a flawless but slow program far exceeds the value of a quick but buggy one. Now the question: is that conclusion actually valid in reality, or are there other factors to keep in mind?
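The generational table above can be sketched in a few lines. This is just a model of the argument as stated, using the same arbitrary 100%/140% figures: each hardware generation halves the delay, so the absolute gap between the two programs shrinks toward zero even though their ratio never changes.

```python
# Model of the argument: each hardware "generation" halves execution delay.
# The absolute gap between a fast and a slow program shrinks toward zero,
# while their ratio stays constant at 1.4.
fast, slow = 100.0, 140.0
for gen in range(1, 5):
    print(f"Gen {gen}: fast {fast:.1f}%, slow {slow:.1f}%, gap {slow - fast:.1f}%")
    fast /= 2
    slow /= 2
```

Note what the model hides: the *relative* slowdown (40%) is permanent; only the absolute difference becomes imperceptible, and only if hardware keeps halving delays.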
Theory: Over time, bloated code becomes fast
— Jay_H
Re: Theory: Over time, bloated code becomes fast — ifkopifko
I'm not sure what you mean by "flawless program" and "buggy program", but if a program is buggy, it is worthless in my eyes; or at least it is not comparable with a functional equivalent that is not buggy.
But it seems to me that you meant two programs with identical inputs and results: one carefully and laboriously tuned for quick execution, the other put together in a fast, careless manner. So after many, many years, the second one would be the better value for the effort. But I fear it would be obsolete by that time anyway.
Re: Theory: Over time, bloated code becomes fast — Jay_H
ifkopifko wrote: But it seems to me that you meant two programs/codes with identical inputs and results. One being carefully/laboriously tuned for quick execution, the other being put together in a fast careless manner.
That's right. By comparison (to use a familiar subject), imagine a 1996 Daggerfall that takes 16 seconds to load a town but has no glitches at all, versus a Daggerfall of the same year that takes 9 seconds to load a town but is very glitchy. This is theoretical, since a well-made program will hopefully be both fast and error-free, but from what I've seen the errors weigh far more in the scale than the speed does.
Re: Theory: Over time, bloated code becomes fast — Nystul
Jay_H wrote: In short, as described by Moore's Law, technology advances at an exponential rate. [...]
Moore's Law is no longer considered valid. Also:
- Inefficient code will remain inefficient code; for some problems (e.g. NP-hard ones), no realistic hardware gain rescues a bad algorithm.
- There are no flawless programs.
- "Flawless" programs can easily become "broken" on newer hardware; even vanilla Daggerfall runs better on a native 486 under DOS than on current systems.
Re: Theory: Over time, bloated code becomes fast
Moore's law is still valid in theory (i.e. the density of transistors in a given area still doubles), although it is expected to stop quite soon (Gordon Moore said 2025, IIRC). However, Moore's law has long been irrelevant in practice for the purpose of running a program twice as fast each year.
In the last 5-6 years, the technological advances brought by Moore's law have been used to increase energy efficiency (making smartphones etc. possible) and to release processors with more cores, but raw speed for non-parallel tasks has remained almost constant. In fact, I work in a field that involves some sequential algorithms, and when I read a paper from 2010 or 2011 that says "we processed this dataset in 4 hours", I know I'll get roughly the same time if I reproduce it; maybe I might do it in 3h 30m, but definitely nowhere near half the time.
So this observation may apply to massively parallel code (e.g. some neural networks), and maybe to code that is I/O bound (apparently much faster disk drives are coming), but for CPU-bound code I'm afraid we can't count on the passage of time to optimize for us...
Re: Theory: Over time, bloated code becomes fast — Jay_H
Interesting. Thanks for the feedback, guys!