# Computer generated math proof is too large for humans to check

(Phys.org) —A pair of mathematicians, Alexei Lisitsa and Boris Konev of the University of Liverpool, U.K., have raised an interesting question: if a computer produces a proof of a math problem that is too big to study, can it be judged as true anyway? In a paper uploaded to the preprint server arXiv, the two describe how they set a computer program to work on a small part of what is known as the Erdős discrepancy problem. The proof it produced is a data file 13 gigabytes in size, far too large for any human to check, which raises the question of whether it can be accepted as a real proof.

Anyone who has taken a high-level math course can attest that math proofs can sometimes grow long, very long. Some mathematicians have dedicated years to creating them, filling whole volumes in the process. Quite naturally, then, mathematicians have increasingly turned to computers to perform some of the more mundane parts of proof creation. It wasn't long, however, before some began to realize that at some point the proofs spit out by the computers would be too long, too complicated, or both for a human reader to fully comprehend. It appears, with this new effort, that that day might have come.

The Erdős discrepancy problem revolves around finding patterns in an infinite sequence made up of just the two numbers "1" and "-1". Named after Paul Erdős, the problem concerns sums taken along evenly spaced terms of the sequence: pick a stride d and a cutoff k, then add up the terms x_d, x_2d, ..., x_kd. The absolute value of such a sum is called the discrepancy. Lisitsa and Konev attacked the problem (with a discrepancy bound of 2) on a computer running what they describe as state-of-the-art SAT solvers, software that determines whether a given logical formula can be satisfied. The proof the computer came up with shows, the two researchers claim, "that no sequence of length 1161 and discrepancy 2 exists."
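To make the definition concrete, here is a minimal sketch (my own illustration, not the authors' code) that computes the discrepancy of a finite ±1 sequence and brute-forces the long-known C=1 case: discrepancy 1 is achievable up to length 11, but never at length 12.

```python
from itertools import product

def discrepancy(x):
    """Largest |x_d + x_{2d} + ... + x_{kd}| over all strides d and
    cutoffs k (x is 1-indexed in the math, 0-indexed in the tuple)."""
    n = len(x)
    worst = 0
    for d in range(1, n + 1):
        s = 0
        for j in range(d, n + 1, d):  # walk the stride-d subsequence
            s += x[j - 1]
            worst = max(worst, abs(s))
    return worst

# Brute force over all 2^n sign sequences: length 11 still admits
# discrepancy 1, but every length-12 sequence reaches discrepancy 2.
assert min(discrepancy(x) for x in product((1, -1), repeat=11)) == 1
assert min(discrepancy(x) for x in product((1, -1), repeat=12)) == 2
```

For C = 2 the same brute force is hopeless: the critical length turns out to be 1161, and 2^1161 sequences cannot be enumerated, which is why the researchers turned to SAT solvers instead.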

Unfortunately, the file produced was too large for a human to read; for comparison's sake, it is a couple of gigabytes larger than the whole of Wikipedia. This leads to an interesting conundrum for mathematicians going forward. Do we begin accepting proofs that computers create as actual proofs if they are too long or too difficult for our minds to comprehend? If so, we might just be at a crossroads. Do we trust computers to handle things for us that are beyond our abilities, or do we constrain our reach by refusing to allow the creation of things that we cannot ever possibly understand?


More information: A SAT Attack on the Erdos Discrepancy Conjecture, arXiv:1402.2184 [cs.DM] arxiv.org/abs/1402.2184

Abstract
In the 1930s Paul Erdos conjectured that for any positive integer C, in any infinite +1 -1 sequence (x_n) there exists a subsequence x_d, x_{2d}, ..., x_{kd}, for some positive integers k and d, such that |x_d + x_{2d} + ... + x_{kd}| > C. The conjecture has been referred to as one of the major open problems in combinatorial number theory and discrepancy theory. For the particular case of C=1 a human proof of the conjecture exists; for C=2 a bespoke computer program had generated sequences of length 1124 having discrepancy 2, but the status of the conjecture remained open even for such a small bound. We show that by encoding the problem into Boolean satisfiability and applying state-of-the-art SAT solvers, one can obtain a sequence of length 1160 with discrepancy 2 and a proof of the Erdos discrepancy conjecture for C=2, claiming that no sequence of length 1161 and discrepancy 2 exists. We also present our partial results for the case of C=3.
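As a rough illustration of the search involved (a hypothetical sketch with my own naming, not the paper's SAT encoding), a plain backtracking search over the two sign choices, pruning whenever any stride-d partial sum would leave [-C, C], finds short discrepancy-2 sequences easily; pushing the bound all the way to the critical length of 1161 is what required encoding the constraints into Boolean satisfiability.

```python
def find_sequence(n, C=2):
    """Backtracking search for a +/-1 sequence x_1..x_n in which every
    partial sum x_d + x_{2d} + ... + x_{kd} stays within [-C, C].
    Returns such a sequence as a list, or None if none exists."""
    x = [0] * (n + 1)        # 1-indexed; x[0] unused
    sums = [0] * (n + 1)     # sums[d] = running sum along stride d

    def extend(i):
        if i > n:
            return True
        divs = [d for d in range(1, i + 1) if i % d == 0]
        for v in (1, -1):
            # position i only touches the sums whose stride divides i
            if all(abs(sums[d] + v) <= C for d in divs):
                x[i] = v
                for d in divs:
                    sums[d] += v
                if extend(i + 1):
                    return True
                for d in divs:
                    sums[d] -= v    # undo, try the other sign
        return False

    return x[1:] if extend(1) else None

# A modest length succeeds instantly; find_sequence(12, C=1) returns
# None, matching the known C=1 result that length 11 is the maximum.
seq = find_sequence(40)
```

The paper's approach goes further: rather than searching directly, it hands the same constraints to a SAT solver, which can also emit a machine-checkable certificate when the answer is "no sequence exists" — and for length 1161 that certificate is the 13-gigabyte file.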

Journal information: arXiv


Feb 19, 2014
No, because you cannot check that the program did not simply glitch and produce a load of rubbish.

Feb 19, 2014
Doesn't the discrepancy law basically boil down to whether n-d is even?...
I mean, the only way to build to any particular discrepancy is to have that discrepancy leading and then alternating positive and negative nodes that sum to 0. You can't have a sum of 0 with any number of nodes that isn't even. So, 1161-2 = 1159 isn't even, so no sequence of 1161 nodes has a discrepancy of 2... or am I missing something?

Feb 19, 2014
Also, I'd be interested in seeing what a heavy non-mutating compression algorithm would spit out; I presume something much more heavily modulated. Break proofs down into previously established proofs and proposed proofs, and let humans check the inputs and outputs of the modules for veracity. You don't need to solve the whole problem, just understand the steps well enough to prove each step... Otherwise personal computers would be full of catastrophic errors. The tolerable error percentage in any binary operation is less than .0000000001% for continued reliable performance. If the systems used to set it up are understandable, then anything it produces should be considered... I don't see why the nomenclature really matters; what's really important is whether it can be used practically, or even impractically. I see very little realistic use for this proof.

Feb 19, 2014
Well, we could write a program that checks the calculations, perhaps?

This reminds me of "The Matrix", when the Agent Smith says "This is the peak of your society. I say 'your society' because once we began thinking for you, it really became 'our society'."

I think we can trust computer-generated proofs because they are so calculation intensive. If the computation arrives at a result that makes sense, it most likely is correct, since it was done using hardware and software designed specifically for huge calculation tasks.

Feb 19, 2014
I only know just enough about it to ask: Is this outside the scope of Gödel's theorem?

Feb 19, 2014
The particular proof created does not matter. This proof is merely proof that computers can prove something in ways that only other computers can verify.

Feb 19, 2014
A crucial point is involved here, which many seem to overlook.
You can avoid the need for such huge computations by simply devising a theory: a branch of the discipline being studied, built around the concepts you are specifically examining, complete with theorems that describe the entire nature of the subject, even out to infinity. You don't need to look at every prime larger than 2 to know it's odd, because if it were even, it would be divisible by 2 and so not be prime. An early instance of listing all instances was the proof that there are only five Platonic solids, using Euler's formula, which is universal, and checking which cases satisfy it. In the '70s, mathematicians proved the Four Color Map Problem by generating a list of all possible map configurations. Computers can give great latitude of action, but they can cause many to forget the value of deriving universal laws that can answer questions, or replace long conventional proofs, in a snap.

Feb 19, 2014
The devil's advocate retorts with yes, because there was a:
New Short Cut Found For Long Math Proofs

http://www.nytime...wanted=1

Seriously though, can you find research debunking this 1992 claim?

Feb 19, 2014
Perhaps this idea has already occurred to the researchers, but with my little knowledge I would like to mention it anyway. Maybe some other people could devise a similar program and then check the solutions, like the design of Shut Down System #1 (SDS1) and Shut Down System #2 (SDS2) of CANDU reactors. The designers are not allowed to talk to each other, to prevent collaboration and mixing of ideas. CANDU reactors are required to have two shutdown systems completely independent of each other and of almost any mechanism that drives them. This way, two independent yet reliable programs would produce exact solutions that could be checked against each other easily.

Feb 19, 2014
I think if we can trust computers with the calculations behind atomic bombs, we can trust them to provide mathematical proofs.

Feb 19, 2014
A proof is nice, but it's not like we have to worry about it until we start using it for something. Then we can check the output of that work and if it gives us what we need, then we can begin to assign a level of trust or reliability to the proof. Where the output turns up incorrect, then we can start back-stepping and find the errors. This is the case with human- and computer-verified proofs.

Feb 19, 2014
"if a computer produces a proof of a math problem that is too big to study, can it be judged as true anyway?"

Who cares what humans think?
Other machines will accept it and (if need be) verify it.

Feb 19, 2014
Is anybody able to comprehend all of mathematics, I mean all at once? Or even go through every bit of it, not all at the same time?
I think no, and if the answer is "no" then we already rely on partial comprehension: partial proofs done by different people, and breaking complex things into simpler, comprehensible parts. How is this any different?
And if a step-by-step process of proving parts of mathematics works, then we still believe in its robustness and integrity.
Am I wrong?

Feb 19, 2014
The actual question probably should be: is it worth checking by humans, i.e., having, let's say, 1000 people go through all of it?
And maybe for this specific case the answer is "no". If so, then it may be checked by several solvers and ruled "probable" or "computer proven". Not so unthinkable: there are still probable theorems even in basic math that have never actually been properly proven.

Feb 19, 2014
"If so, we might just be at a crossroads. Do we trust computers to handle things for us that are beyond our abilities, or constrain our reach"

Traction control and ABS on cars or how about computer automated quant-trading as a few of a billion possible examples? We are not at any such crossroads. We crossed it long ago.

Feb 20, 2014
Up to which level of data is it still a proof, and not just a complicated way of recording what worked and what didn't?

Feb 20, 2014
> No, because you cannot check that the program did not simply glitch and produce a load of rubbish.

Easy. Just re-run the process. Use different equipment if you wish. If you suspect a flaw in the logic, have others examine your work. There's nothing ineffably different about computers compared with any other tool we use.

Feb 20, 2014
"I only know just enough about it to ask: Is this outside the scope of Gödel's theorem?"

I'd say you understand it perfectly, and that the answer is no.

Feb 20, 2014
The mind-blower is: if we do allow computers to do this heavy lifting, will there come a time when they need more processing power than we can supply for our complex computations, and what problems will we be solving then?

Feb 23, 2014
> No, because you cannot check that the program did not simply glitch and produce a load of rubbish.

I disagree.

If something is true, then it is true whether or not anybody understands it.

Feb 24, 2014
> Traction control and ABS on cars

That's why cars still have dual unassisted hydraulic brake circuits as a fallback.

Besides, these systems don't rely on mountains of data. They're very simple algorithms that can be easily understood by a single person to verify that they're doing what they're supposed to do. Even nuclear bombs were first proven on pen and paper before the numerical simulations.

> If something is true, then it is true whether or not anybody understands it.

The problem is, can you know that it's true?

You'd have to first prove that the proof-producing algorithm itself is absolutely flawless for all possible inputs before you could trust its output, which I believe would be such a massive undertaking that it would require the computer to prove itself... but then do you trust that proof?

And if you do, you still have to check the output for the case that the computer made a random bit error and subsequently produced 13 Gigabytes of nonsense.

Mar 06, 2014
The real question is: why is this "proof" 13 GB in size? I suggest the principle of Occam's razor is relevant here, in that the simplest explanation is often the most correct; ergo, whether the solution is valid or not, the algorithm used to produce it is clearly far from optimum. A computer can only verify what it's been told to verify, and in the way in which it has been instructed, no more. If you want the chair screwed to the ceiling, then the robot will obey; it's neither right nor wrong.

Mar 11, 2014
> the algorithm to produce it is clearly far from optimum

Are you stating this only because the size is inconvenient for you or do you have something concrete to base this on?
