@djc i agree it has been surprising! but the question of a capacity for accuracy rather than mere verisimilitude is essential to evaluating apocalypse scenarios. you can’t plot to take over the world as a mere exuberant brainstormer, often indifferent between a superficially plausible model of the world and an accurate one. 1/
@djc the AI community itself suffers from this. it is disproportionately populated by “g” (general intelligence) / IQ enthusiasts (who perceive themselves as blessed in this dimension). 2/
@djc but in terms of accomplishing real things in the real world, many capabilities are necessary, some of which don’t correlate with, or may correlate inversely with, IQ-test proficiencies. MENSA braggarts notoriously don’t cure cancer. only institutions that may include them among others do. 3/
@djc i think the “foom” story starts with a very scalar model of capability. a thing is “smart” enough to develop a smarter thing, recurse, voila the singularity. 4/
@djc but i think that gets you more to synthetic Van Goghs cutting off their silicon ears than, say, the ecosystem of capabilities that makes a WWII-style war effort (even a physics-heavy Manhattan Project) possible. 5/
@djc and “judgment” — being able to calibrate the accuracy of superficially plausible conjectures, to choose in which direction it’s best to err given uncertainties and fallibilities, and to know where you’d start again post-failure — is about the most basic capability. 6/
@djc systems that include LLMs certainly will include this. they already do: the apocalyptic AI is the profit-seeking joint stock firm, as it has been for centuries. i’m not sure how much exuberant LLMs alter that long-running apocalypse. /fin