@wikicliff I don’t think we’re disagreeing very much. I don’t claim they will “bring the receipts” in a useful form. On the contrary. I think the authors of these devices will design side processes that search for elements of their vast training sets that in some sense seem related to the output, and have the outputs bibliographize that. It will be retrospective rationalization, but they might train this second thing to do a pretty good job of selecting things mostly consistent with the 1/
@wikicliff output of the first thing. They won’t be “real” receipts, in the sense of “here is the evidence that persuaded me that should (normatively) persuade you.” The objective will be to persuade you that whatever the LLM spouted was right, not because the LLM learned from these sources and believes them to be solid, but because the LLM doesn’t know or believe anything at all and this other thing just trains on some measure of whether it has convinced people. /fin