A legend about Carl Friedrich Gauss says that when the great mathematician was eight years old, a teacher asked him to add up all the integers from 1 to 100, expecting it would occupy him for an hour or more. The boy thought for barely a few seconds before writing down that the answer was 5050. He explained to his stunned teacher that he could add the numbers in pairs proceeding inward from the ends: 1+100 = 101, 2+99 = 101, 3+98 = 101, and so forth until he got to 50+51. That meant 50 pairs of numbers, each pair totaling 101, which was the same as 50 times 101, or 5050. We all love these kinds of elegant solutions to complex problems, but some problems resist shortcuts like the one young Gauss discovered. The branch of mathematics that studies this is called complexity theory, and it sorts such problems into two major groups, P and NP, based on how hard they are for computers to solve.
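Gauss's pairing trick translates directly into code. Here is a minimal sketch (my own illustration, not from any historical source) that sums 1 through n by pairing the ends inward, exactly as in the legend:

```python
def gauss_sum(n):
    """Sum 1..n by pairing ends inward: 1+n, 2+(n-1), ... as in the legend."""
    pairs = n // 2             # 50 pairs when n = 100
    total = pairs * (n + 1)    # each pair sums to n + 1
    if n % 2:                  # an odd n leaves the middle number unpaired
        total += (n + 1) // 2
    return total

assert gauss_sum(100) == 5050             # the teacher's problem
assert gauss_sum(100) == sum(range(101))  # matches adding them one by one
```

The shortcut computes the answer in a handful of operations no matter how large n gets, while the one-by-one addition the teacher expected grows in proportion to n.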
P problems are those a computer can solve quickly, using an efficient algorithm like the shortcut Gauss used.
NP problems are those whose proposed solutions a computer can check quickly, but which, as far as anyone knows, a computer can only solve the long way, which to a computer means brute force. (Every P problem is also in NP: anything you can solve quickly, you can certainly check quickly.)
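The gap between checking and solving is easy to demonstrate with subset sum, a classic NP-complete problem (my example, not one from the article): verifying a proposed answer takes one quick addition, but the only known general way to find an answer is to try subsets one after another.

```python
from itertools import combinations

def verify(candidate, target):
    """Checking a proposed answer: one quick addition."""
    return sum(candidate) == target

def solve(numbers, target):
    """Solving from scratch: brute force over all 2^n subsets."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

nums = [3, 34, 4, 12, 5, 2]
answer = solve(nums, 9)    # finds (4, 5) after trying many subsets
assert verify(answer, 9)   # instant to check, slow to find
```

With six numbers the brute force is harmless; with sixty numbers there are over a quintillion subsets, which is the scaling problem the article describes.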
One of the pressing questions of mathematics is whether P = NP. That is, can every solution that is easily checked by a computer also be easily found by a computer? Note that “easy” is relative here, and refers to whether the computer has to resort to brute force. Problems that resist easy solutions can still be solved by brute force, but this won’t scale as instances of the problem grow larger and larger. Even the coming generation of quantum computers is not expected to make these problems easy if the only approach is to try every single option.
So, does P = NP? No one knows; we can only predict. To answer “yes” is to predict that there are one or more shortcuts that, once discovered, will allow computers to take over huge areas of problem solving, from engineering to scheduling to protein folding. To answer “no” is to predict that while some NP problems may yet get elegant solutions, there are entire classes of problems (such as the NP-complete subset of NP) that a computer will never solve without brute force...which may be a good thing, since data encryption is riding on that bet. Many people believe that P ≠ NP, including many of our foremost mathematicians, but none of them has proved it and claimed the $1M prize from the Clay Mathematics Institute, so the matter is far from settled.
The long road to P vs. NP has generated a lot of puzzles and games, as people naturally share puzzles that defy solution until someone solves them or shows they can’t be solved. Whether to get assistance or just to laugh at someone else’s struggle, there’s something about a tough puzzle that demands we pass it along. Even after we know the answer, hard puzzles are an opportunity to instruct and expound on the things that make them hard in the first place, and great mathematicians view the toughest puzzles as their teachers. One classic insoluble puzzle was the Seven Bridges of Königsberg, which stumped puzzlers until Leonhard Euler showed why it was unsolvable. Euler laid the foundations of graph theory in the process, creating a whole new kind of math and a new family of puzzles called Eulerian paths, where you must trace a route among points using each connecting edge only once.
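Euler's argument comes down to counting: a connected graph has an Eulerian path exactly when zero or two of its vertices touch an odd number of edges. A quick sketch, using the standard labeling of Königsberg's two islands and two river banks as A through D:

```python
from collections import Counter

def odd_degree_vertices(edges):
    """Count how many edge-ends touch each vertex; return the odd ones."""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return sorted(v for v, d in degree.items() if d % 2 == 1)

# The seven bridges: island A has two bridges each to banks C and D
# and one to island B; island B has one bridge each to C and D.
bridges = [("A", "C"), ("A", "C"), ("A", "D"), ("A", "D"),
           ("A", "B"), ("B", "C"), ("B", "D")]

odd = odd_degree_vertices(bridges)
print(odd)  # all four land masses have odd degree, so no Eulerian path exists
```

Since all four land masses have odd degree, no walk can cross every bridge exactly once, which is precisely what Euler proved.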
Eulerian paths are still in children’s puzzle books today, but they were also taken up by mathematicians Thomas Kirkman and William Rowan Hamilton, who created the even harder class of Hamiltonian path problems, where each vertex, rather than each edge, may be used only once. In addition to giving rise to the “traveling salesman problem” (one of the most recognizable NP problems ever), this line of work produced the Icosian Game, a pegboard game of Hamilton’s for finding a Hamiltonian cycle, which was the ancestor of several of our games, such as Peg Solitaire, The Last Fighter and Pythagoras Star. Like Eulerian paths, Hamiltonian paths remain a regular fixture of children’s puzzle books.
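To see why the traveling salesman problem invites brute force, here is a minimal sketch (the city names and distances are made up for illustration) that simply tries every ordering of the cities. The factorial blow-up in orderings is exactly the kind of scaling wall described earlier:

```python
from itertools import permutations

def shortest_tour(cities, dist):
    """Brute-force TSP: try every ordering of the cities after the first."""
    start, rest = cities[0], cities[1:]
    best_len, best_tour = float("inf"), None
    for perm in permutations(rest):            # (n-1)! orderings to check
        tour = (start,) + perm + (start,)      # round trip back to the start
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

# Hypothetical symmetric distances between four cities
dist = {
    "P": {"Q": 10, "R": 15, "S": 20},
    "Q": {"P": 10, "R": 35, "S": 25},
    "R": {"P": 15, "Q": 35, "S": 30},
    "S": {"P": 20, "Q": 25, "R": 30},
}
length, tour = shortest_tour(list(dist), dist)
print(length, tour)  # 80, one of the two optimal round trips
```

Four cities mean only six tours to check, but twenty cities mean over 10^17, which is why a salesman with a long route really wants P to equal NP.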
Another interesting detour on the road to P vs. NP comes when we take a closer look at that category of difficult problems called NP-complete. Every NP-complete problem can be reduced to every other, so in a sense they are all the same problem: one that computers can currently check easily but only solve by brute force. This group includes popular puzzles like the Rubik’s Cube (in its generalized form), Kakuro (Cross Sums), Nonograms, and general Sudoku. It also includes a few 2-player games like Battleship and Mastermind, but not Chess, Go, or Checkers; the complexity of those games exceeds even the realm of NP-complete, because the addition of another player’s moves keeps changing the game. In essence, the presence of other players is what distinguishes a puzzle from a game, and it’s easy to see how this arose from the desire to share hard puzzles--the players take turns making the puzzle harder for each other. This explains how a solo puzzle like Peg Solitaire can lead to Chinese Checkers, Dead End, Ludo and more.
While computers have beaten grandmasters in Chess and Go, they do so by calculating faster, and speed--even processing speed--isn’t everything. Even if P = NP and we can delegate these huge problems to computers, there will still be things that humans do better because we process information differently. For example, humans have what psychologists call semantic memory, a web of general knowledge that lets us infer things we were never explicitly taught. No one ever formally taught you that Ludwig van Beethoven didn’t have a telephone number, but you know he didn’t. When I ask Google for Beethoven’s phone number, it pulls up dog movies, and Cleverbot just stares at me in pixelated binary. If you want to sharpen the kinds of cognitive skills where humans beat computers, I’d look at Tangrams. After all, if CAPTCHAs are any indication, computers are a long way from being able to tell a candle from a stop sign.
Thanks for reading, and happy puzzling!
By Matthew Barrett