Scott Mayers wrote: ↑Sat Jun 08, 2019 7:29 pm
If this is an appeal to philosophers, shouldn't you first define "Computational Complexity" at least?
commonsense wrote: ↑Sun Jun 09, 2019 2:40 pm
Scott M, between Scott A’s essay and a flurry of Wikipedia references, I was able to grasp a working definition of what computational complexity is.
Among all the tasks that are computable, some can be computed more efficiently than others. What do we mean by efficiency? In this case it means the growth rate of the resources consumed by a computation, as a function of the length of its input. That's the key idea.
Let me illustrate with four examples.
* Constant time. Example: Determining whether a number is odd or even.
If I give you a number, like 3425, and ask you if it's even, you just look at the low-order digit, the 5: if that digit is even the entire number is even, and if it's odd the number is odd. If I give you the number 48594859043590438583 you do exactly the same thing. No matter how big the given number is, it always takes the same amount of computing work to determine whether the number is even or odd. So this problem can be done in "constant time," as they say.
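Here's a minimal sketch of that idea in Python (the function name and the string representation of the number are my own choices for illustration):

```python
def is_even(n: str) -> bool:
    """Check parity by inspecting only the last digit.

    We do the same single-digit lookup no matter how long the
    number is, so the running time is constant, O(1).
    """
    return n[-1] in "02468"

print(is_even("3425"))                  # False: ends in 5
print(is_even("48594859043590438583"))  # False: ends in 3
```

The input could be a million digits long and the function would still do exactly one character comparison.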
* Linear time. Example: Counting characters in a string.
To count the characters in "dfsljg" we iterate through each character and increment a counter for each character we see. If the length is n then we have to count n characters. This is "linear time."
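A sketch of that counting loop in Python (ignoring the built-in `len`, which is the point of the exercise):

```python
def count_chars(s: str) -> int:
    """Count characters one at a time.

    One loop iteration per character: n characters means n steps,
    so the running time grows linearly, O(n).
    """
    count = 0
    for _ in s:      # visit each character once
        count += 1
    return count

print(count_chars("dfsljg"))  # 6
```

Double the length of the string and you double the work: that proportionality is what "linear time" means.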
* Polynomial time. Example: Multiplying two n-digit numbers.
We all know the old grade school algorithm. I hear Common Core doesn't teach it that way anymore and uses a lot of bizarre new methods, but I'm assuming most readers learned multiplication the old-fashioned way.
If it's a 3-digit number by a 3-digit number, you have a total of 9 single-digit multiplications plus the addition at the end. If it's 4 digits by 4 digits you have 16 multiplications. In general, multiplying two n-digit numbers takes n^2 multiplications plus a big addition at the end. n^2 is a polynomial, and the addition doesn't add much work, so multiplication is "polynomial time."
Since n is a polynomial of first degree, we see that linear time is a subset of polynomial time.
In general, an algorithm that's polynomial or better is regarded as efficient; and anything worse is inefficient.
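The grade-school multiplication method above can be sketched in Python, with a counter added so you can see the n^2 growth directly (the function name and return format are my own):

```python
def gradeschool_multiply(a: str, b: str) -> tuple[int, int]:
    """Multiply two digit strings the grade-school way.

    Every digit of a is multiplied by every digit of b, so for two
    n-digit numbers we perform n * n = n^2 single-digit
    multiplications. Returns (product, multiplication_count).
    """
    mults = 0
    total = 0
    for i, da in enumerate(reversed(a)):        # i = place value of da
        for j, db in enumerate(reversed(b)):    # j = place value of db
            total += int(da) * int(db) * 10 ** (i + j)
            mults += 1
    return total, mults

print(gradeschool_multiply("123", "456"))  # (56088, 9): 3x3 digits -> 9 multiplications
```

Going from 3-digit to 4-digit inputs raises the count from 9 to 16, exactly the n^2 behavior described above.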
* Exponential time. Example: Brute-force guessing of an n-character password.
Suppose for simplicity a password must consist of n decimal digits. If the password length is 1, then we have to make up to 10 guesses, averaging around 5.
If the password is two digits long, we have to make 100 guesses in the worst case and around 50 on average.
In other words, each additional digit multiplies the amount of guessing work by 10, so the formula for the amount of guesswork is around 10^n. This is called "exponential time," and it is regarded as bad: the resource usage goes up so fast that even though we can solve the problem in theory, we haven't got enough resources to solve it in practice.
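A small Python sketch of the brute-force attack, counting guesses (the function name is mine, and real attacks are more sophisticated, but the 10^n candidate space is the point):

```python
from itertools import product

def brute_force(secret: str) -> int:
    """Try every digit string of the right length, in order.

    For an n-digit secret there are 10**n candidates, so the worst
    case grows exponentially with the password length.
    """
    n = len(secret)
    for guesses, candidate in enumerate(product("0123456789", repeat=n), start=1):
        if "".join(candidate) == secret:
            return guesses
    raise ValueError("secret must consist of decimal digits")

print(brute_force("99"))  # 100: the worst case for a 2-digit password
```

Each extra digit makes the loop run up to 10 times longer; at 20 digits you would need 10^20 guesses, far beyond practical reach.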
That's complexity theory. My examples all involve the time domain, but we can also analyze algorithms in terms of how much space (or memory) they require. Complexity theory builds out all these different complexity classes and figures out which problems are in which class.
Why is computational complexity important?
* One reason is that performing computations on a physical computer consumes resources and takes time. We always want to be able to go faster, so it's important to find more efficient algorithms.
* P = NP. You may have heard of this famous problem, one of the Clay Millennium problems that pays a million dollars to the first person to prove it true or false, as the case may be.
P is the class of problems that can be solved in polynomial time by a deterministic Turing machine; NP is the class that can be solved in polynomial time by a nondeterministic TM. A nondeterministic TM essentially takes all possible computational paths at once. Deterministic and nondeterministic machines can solve exactly the same set of problems overall, but nondeterminism seems to let us solve SOME problems more efficiently. We think. Nobody knows for sure.
* Quantum computers will break Internet security and cryptocurrencies. You may have heard or read this. Much of Internet and cryptocurrency security is based on public key cryptography, and the most widely deployed scheme, RSA, rests on the ancient mathematical problem of factoring integers into primes.
Since factoring on conventional computers is hard (not as bad as exponential, but with no known polynomial-time algorithm), it takes an attacker too much time to decode an encrypted message.
On the other hand, if we use a quantum computer, there's an algorithm called Shor's algorithm that can factor an arbitrary integer in polynomial time. This is truly an amazing fact.
If we eventually develop practical quantum computers (much farther off than the hype would have you believe IMO), then public key cryptography is defeated and all of Internet security is compromised.
* Going beyond all that, Scott Aaronson believes that computational complexity has deep philosophical import. That's what his article's about.