@glennf They did it twice before, pre-Musk. Twitter is a recidivist encourager of app development they later kill. Musk just adds Bond villain look to what has long been an abysmal record.

in reply to @glennf

@LouisIngenthron @emilymbender It wasn't, at the time. That's retconning, I think. It was discussion forums that prompted 230, to encourage curation of harmful speech. (It's Section 230 of the Communications Decency Act.)

in reply to @LouisIngenthron

@LouisIngenthron @emilymbender According to EFF, Section 230 protects you when you, say, forward an e-mail to a public list. (It's part of their disingenuous it-protects-you-and-me-not-just-big-firms spin.) Mail providers were protected before 230, because they didn't curate. If you retweet a defamatory tweet, you are not liable, even though you affirmatively chose to do so. Section 230 exists to shield discretionary decisions to publish or not; distribution without discretion was already protected. 1/

in reply to @LouisIngenthron

@LouisIngenthron @emilymbender (If 230 didn't exist would mail providers become liable on the theory that spam filtering is editorial discretion? That's an interesting question!) 2/

in reply to self

@LouisIngenthron @emilymbender However counterintuitive, at least under EFF's description, if I privately dish to you by e-mail, and you forward the defamatory speech to a big public mailing list, you are protected but I am liable. Defamation doesn't depend on an intent to publicize. Leaked private defamation is actionable if it's harmful, and in the digital realm all but the original defamer are often shielded. 3/

in reply to self

@LouisIngenthron @emilymbender Now we are saying that people who prompted ChatGPT must guard accurate reports of what it said against any leak, because whatever the eff ChatGPT said, it's as if they said it themselves, from a liability perspective. A bit weird. /fin

in reply to self

@LouisIngenthron @emilymbender These questions of who is the “prompter”, “publisher”, “creator”, “author” get very vague. A friend uses ChatGPT, gets a funny but defamatory response, forwards it to me privately by mail. I then publish it. Section 230 clearly protects me. Is my friend then liable?

in reply to @LouisIngenthron

“So I nailed it, then.” ~ toot.community/@openculture/10

@LouisIngenthron @emilymbender The prompter is the end user! A user writes a question on a help forum, and the firm presents it to an LLM that OpenAI mostly trained but the firm has customized. It replies to the user. Is the support-seeker the author of the reply she receives, for “using the tool” that is the vendor’s support forum? Under your “prompter is author” theory, she would be! Is the firm responsible because it customized the model? If so, why not OpenAI, whose training forms the bulk of it?

in reply to @LouisIngenthron

Just wait until you see Amazon Frown.

@LouisIngenthron @emilymbender It won’t look like that though. They will have built an algorithmic product that sends email or summarizes articles or whatever. No (prompter-side) human will have been in the loop, in products that would be uneconomical if one had to be. The argument will be the usual claim: you’ll break the future if you make us take responsibility for this text we are conveying. 1/

in reply to @LouisIngenthron

@LouisIngenthron @emilymbender It will not seem obvious the prompter did anything wrong. They won’t have prompted “Defame that Louis!” LLMs are unpredictable! Weird outputs are inevitable, some will slip through despite our best intentions, just like Twitter can’t be perfectly moderated. Surely purveyors of these amazing products, with no ill intent, shouldn’t be held to the impossible standard strict liability would impose. /fin

in reply to self

@LouisIngenthron @emilymbender That’s your view! We’ll see if it’s also the courts’, despite clever protestations by all the organizations eagerly plugging LLMs into their products, and likely the vendors selling them.

in reply to @LouisIngenthron

@LouisIngenthron @emilymbender That’s an answer. I don’t think it’s necessarily wrong, but I don’t think it’s as clearly right as you do. We’ll have to collectively decide. It will come up soon, as orgs are publishing and conveying LLM outputs, which are very unpredictable. I’m bringing out the :popcorn: for how those controversies pan out. Both OpenAI and the prompter will argue that Section 230 means no one is accountable, just like anonymous speech.

in reply to @LouisIngenthron

@LouisIngenthron @emilymbender Right. People under the law aren’t necessarily natural people. So is OpenAI (the organization) the right person to attribute authorship to? The prompter? Should it be determined by some complicated analysis specific to each particular case? Should it (like anonymous speech) be for all practical purposes authorless?

in reply to @LouisIngenthron

@LouisIngenthron @emilymbender But it’s not just the prompt of the user! MS has almost no role in determining the content of what’s written in MS Word. OpenAI typically has done much more than the prompter to determine what speech any particular prompt will yield. I prompt, “Say something jiggly” and it spits out a graf. Who more “authored” that, OpenAI or me (or nobody)?

in reply to @LouisIngenthron

@LouisIngenthron @emilymbender If you encounter a person’s speech, even anonymous, that is defamatory, and you pass it along by forwarding emails, usually you’d be protected by Section 230. (This is one of EFF’s disingenuous talking points in favor of Section 230.) What if it turns out the anonymous speaker was ChatGPT? Are you still protected? Do we deem it the speech of the prompter, or OpenAI, or whom?

in reply to self

@LouisIngenthron @emilymbender If a corporate entity has a substack, it’s not a person, and the tools by which the content is created are entirely opaque to Substack, yet Substack still would be protected.

in reply to @LouisIngenthron

@LouisIngenthron @emilymbender But AI language models are a form of human expression. OpenAI is not the model. It’s the human organization that hired Kenyan workers to decide the model would be trained on this speech, not that, with this structure and these parameters, not those. That’s entirely unlike MS Word, which is neutral as to content. If anyone is to be responsible for ChatGPT speech, why shouldn’t it be the people who most determined its character?

in reply to @LouisIngenthron

@LouisIngenthron @emilymbender If a corporation posts something, it’s a 1st party but it’s not a person. Is OpenAI the 1st party? The intent of Section 230 was to encourage a diverse range of internet forums, in terms of participation and moderation. Section 230 shields even when in practical terms there is no 1st party to hold responsible, e.g. anonymous speech. AI tools are arguably an important new participant in online forums. Should they be uniquely perilous?

in reply to @LouisIngenthron

@LouisIngenthron @emilymbender Note that soliciting speech or even being paid to host it doesn’t invalidate Section 230 protection. Substack actively solicits the participation of particular authors, and earns a cut of their revenue, but remains shielded. If a platform solicits an LLM’s speech, why is that any different?

in reply to self

@lauren why is he concealing it?!?

in reply to @lauren

@funnymonkey :openoffice:

in reply to @funnymonkey

airpods gave us a world where i text my wife when we're in the same room.

"[H]egemony is active: it is 'structural' only to the extent that the hegemon hegemons. And if domestic institutions complicate that performance, then the hegemon can’t." @profmusgrave musgrave.substack.com/p/fricti

So often user interfaces are made less informative in the name of "simplifying" them. I dislike this trend.