in terms of user experience, kindle is by far the best of the ebook platforms. but you can never fucking trust them. they arrogate to themselves a role of continuing control of what you think is yours. mastodon.social/@joeross/10996

overall AI will make life

worse: 56.9% (33 votes)
better: 43.1% (25 votes)

Are there precedents for state-imposed blocklists like this in the US? Has anything like this been enforced or adjudicated before? mastodon.lawprofs.org/@blakere

If a firm makes a harmful error, a role for AI/machine-learning tools in the chain of events that lead to the error should be an aggravating rather than mitigating factor, like drunkenness for car accidents.

At first it seems unfair ("I wasn't myself!" or "The AI did it!") but the point is it's your responsibility when you create the circumstances under which inadequate or harmful or insufficiently accountable choices are likely to be made.

one way to address AI risk might be very strict liability early on (already, arguably, we are late to the game) for whoever deploys it.

it's weird how schools now monetize your kids back to you. buy the photo we took of him and the class picture too, buy a mug with this artwork we had him make in class.

when it’s over we’ll call it the singularity.

you don’t, actually, gotta respect the hustle.

i love it when Apple flags the sent mail i cc to myself as junk.

large language models are ghosts of us, all of us, the living and the dead.

Ben Sperry on AI-generated content and Section 230: truthonthemarket.com/2023/03/0

algorithmic feeds are influence ops camouflaged among abdications of responsibility.

The energy devoted to establishing the truth or falsity of conjecture X should grow with the distance between (optimal action conditional on X) and (optimal action conditional on not X).

If you are going to do the same thing whether X or not X, who gives a F about X?

"In fact, artificial intelligence is something of a red herring. It is not intelligence that is dangerous; it is power. AI is risky only inasmuch as it creates new pools of power. We should aim for ways to ameliorate that risk instead." @Meaningness betterwithout.ai/scary-AI ht @rezendi

"As digital platforms, more or less invisibly, use homophily to guide us to people, purchases, destinations, and ideas, they help to produce a social world in which previously held identities and positions are reinforced and concentrated rather than challenged or hybridized." e-flux.com/architecture/are-fr

We talk about structural racism, but maybe prior to and upstream from that is "structural homophily" — the tendency of like to associate with like. That tendency, the degree to which it obtains, is obviously socially contingent. But our networks and algorithms are often designed according to a self-fulfilling assumption that like prefers like. cf e-flux.com/architecture/are-fr

The fascist impulse among the leading political faction in my new home state is cartoonishly evident. wfla.com/news/politics/florida ht @mrbadger42 @morgfair

"platform companies have become knowledge intermediaries, like newspapers or school curriculum boards, while insulating themselves from traditional accountability."

From "The Moral Economy of High-Tech Modernism", an excellent, provocative essay by @henryfarrell crookedtimber.org/2023/03/01/t

My basic contention is that anything Russia or China should be constrained from doing with respect to our domestic affairs, Elon Musk should be constrained from doing as well.
@stephenjudkins @failedLyndonLaRouchite

@failedLyndonLaRouchite one can always draw the line at lawbreaking, and if foreign "speech" (as the Supreme Court has defined it, to include money for influence) is criminalized, then of course foreign ops are uniquely criminal. But on a principled and practical basis, I see no reason to fear Chinese influence ops more than I fear Bezos' or Musk's. They are all covert attempts to undermine a more decentralized democratic consensus-building in ways I perceive as adverse to my interests.