The Atlantic gets in on the AI discussion with an article about the Turing test, in which a computer tries to imitate a human convincingly enough that real humans can’t tell the difference. I learned about Turing tests in college, where the professor used scenes from Blade Runner to illustrate the idea. (In the film the tests were actually called Voight-Kampff tests, and they measured a physical response, not the cognition underlying verbal interaction.)
It’s interesting to note how much our conception of computers and their relationship to human beings has already changed:
In the early 20th century, before a “computer” was one of the digital processing devices that permeate our 21st-century lives, it was something else: a job description.
From the mid-18th century onward, computers, many of them women, were on the payrolls of corporations, engineering firms, and universities, performing calculations and numerical analysis, sometimes with the use of a rudimentary calculator. These original, human computers were behind the calculations for everything from the first accurate prediction, in 1757, for the return of Halley’s Comet—early proof of Newton’s theory of gravity—to the Manhattan Project at Los Alamos, where the physicist Richard Feynman oversaw a group of human computers.
Our frame of reference is already different:
In the mid-20th century, a piece of cutting-edge mathematical gadgetry was said to be “like a computer.” In the 21st century, it is the human math whiz who is “like a computer.” It’s an odd twist: we’re like the thing that used to be like us. We imitate our old imitators, in one of the strange reversals in the long saga of human uniqueness.