Wonder & Fear

The emergent properties of the latest large language models — their ability to stitch together what seems to pass for a primitive form of knowledge of the workings of our world — are not well understood. In the absence of understanding, the collective reaction to early encounters with this novel technology has been marked by an uneasy blend of wonder and fear.

It is not at all clear — not even to the scientists and programmers who build them — how or why the generative language and image models work. And the most advanced versions of the models have now started to demonstrate what one group of researchers has called “sparks of artificial general intelligence,” or forms of reasoning that appear to approximate the way that humans think.

Alexander Karp, CEO of Palantir Technologies, a company that creates data analysis software and works with the U.S. Department of Defense, writing in The New York Times