@LouisIngenthron @emilymbender It won’t look like that though. They will have built an algorithmic product that sends email or summarizes articles or whatever. No (prompter-side) human will have been in the loop, in products that would be uneconomical if one had to be. The argument will be the usual claim: you’ll break the future if you make us take responsibility for this text we are conveying. 1/

in reply to @LouisIngenthron

@LouisIngenthron @emilymbender It will not seem obvious that the prompter did anything wrong. They won’t have prompted “Defame that Louis!” LLMs are unpredictable! Weird outputs are inevitable, some will slip through despite our best intentions, just like Twitter can’t be perfectly moderated. Surely purveyors of these amazing products, with no ill intent, shouldn’t be held to the impossible standard strict liability would impose. /fin

in reply to self