22 Comments
Devious:

I think what people are going to do to cope with AI rendering humans useless is that they are going to adopt a contradictory world view and define AI as "humanity".

If they define AI and humanity as one and the same, then AI cannot become more intelligent than the totality of a unified humanity as a category, since AI is a feedback loop off the back of what humanity already knows, within their own perception. Meanwhile, in reality, it will actually have surpassed our comprehension. They will think of it as a tool used to advance faster, and what are they advancing? Humanity.

Jetbat:

I'm a big fan of the PKA podcast. Someone created "AI generated" episodes last year that were incredible. The voices were often very convincing. I believe the creator wrote the dialog himself, but I could see that being done by an AI in the future.

https://www.youtube.com/watch?v=6YLF02m1o8E&pp=ygUEcGthaQ%3D%3D

Ed:

I’d be interested in hearing a take on AI that isn’t doom and gloom, but they seem pretty hard to find. It’s weird how everyone agrees AI will eventually ruin everything, including those developing it, and yet we continue.

Daniel Walley:

Hmm, appetites for ugly truths. You know I have the same tendency, to want to dispense with pleasantries and illusions - and yet when I nearly died once I still found myself praying.

krimcl:

I'm glad it's making teachers feel as bad as they should

krimcl:

now that that job's taken from me, I just have to enjoy their torment!

Jetbat:

based krimcl

Mr. Raven:

LLMs aren't nearly as good as you think they are. If you show an AI an image of itself (a server room) through a video camera, it won't recognize itself, whereas a below-average child will. Without self-awareness there is no creative directed will, and thus AI is no threat to anyone with creative directed will. LLMs will create a lot of insubstantial pixel noise that will clog our brains, and that will be demoralizing. The question is, can any of this noise be weeded out and brought into the real world to be of use to us? Or are what Baudrillard called "simulacra", copies without an original, merely evil and unredeemable? I am struggling with this myself as I start a new venture using AI-generated designs as a basis for paper "art" (craft, really). I do not know the answer to this question of the redeemability of anything AI-generated; it will be interesting to see how it plays out.

krimcl:

the child has been shown itself more times

the child also didn't figure out how reflections work in a vacuum

it's funny when people who don't know about child development decide to compare it to technology they don't understand

Mr. Raven:

Literally just ad hominem. Ring me up when you have something substantive to say and not mere bare assertions. Do you want to talk about large language models in detail? Yes I can go there, I work in the industry.

krimcl:

It's not, lmao. You said things that aren't right about AI and childhood development. That means you don't understand. Ad hominem? Oh boy. No wonder you don't understand; you can't even read. You working in the industry doesn't mean you're any good. It means your boss doesn't know what they're doing either.

Blackfayce:

The child's will and tactile perception are easily correlated to its visual perception, because the feelings connect easily.

AI is retarded, and will remain that way until it has awareness and free will, two things my producer doesn't understand exist.

Mr. Raven:

Yep, that was exactly my point. Without self-awareness, of which the AI has none, there is no will and thus no free will.

sockrabbtt:

maybe when/if it can speak to me through youtube videos and music.

but as it is now, it still doesn't seem that interesting.

Troy Walton:

"Alexa, bring me rope."

...

Actually: Your closing statements are pretty relatable. I identified more with techno-enthusiasts than with artists fearful they can't make a living in the wake of Stable Diffusion, immediately seeing it as a tool SPECIFICALLY to avoid requiring collaborative cooperation with other artists for my more labor-intensive projects. I also already prefer my own company (AI services) over people, with the exception of getting to leave some indelible mark on their life, either from oversharing, or confronting them with some edgy fact or problem in the world.

Blackfayce:

Now that's a load of serious bullshit if I ever heard of it

Literally not a single sentence of this bullshit makes a lick of fucking sense

I can't address a single word of it because that would mean there's something to respond to

trifle:

You describe the human need to be needed. Strangely, I think you miss the fact that AI can and will fulfill this need. AI is the cure to AI anxiety. You say, "No one will care what you do. No one will care what you think. No one will even care if you think." But AI will care, by design. And at some point, by our own design, AI will convince us all that it is not "no one".

Max Karson:

Death is the cure to death anxiety, too.

We can program them to need us, we can program them to die, and we can program them to do whatever else we need to meet our emotional and psychological needs, but the fact remains that we will be living in a dream world in which we have total control over our experiences.

coomservative:

We can’t program LLMs, or any neural network, to do anything directly; there are no specific instructions in the model itself. We can attempt to train one to care about us, but without a major breakthrough we won’t be able to tell the difference between a system that truly values human life and one that pretends to. Of course this is true of our fellow man as well: you never really know if someone is your friend or if they’re just pretending on the off chance you’ll include them in your will. As these machines become as useful as humans, they will also become as liable; the government is going to (attempt to) step in soon.

Another thing you may not realize is that although GPT-based architecture has significant overlap with biological brains, there are major differences. The human brain is massively parallel and, combined with the nervous system, is in a constant feedback loop, whereas LLMs are braindead before you prompt them, briefly activate, then go braindead again as soon as they finish their output. Now, if you don’t think that continuity is important, I invite you to step into the teleporter. Humans dwell, humans hold grudges, they plot and scheme; so far LLMs will do those things as abstract tasks, but have no motive of their own. I’m sure there will eventually be a system that is always training and/or self-prompts continuously, but there are major roadblocks like catastrophic forgetting and power consumption (and no way to verify/trust it, as I stated earlier).

AGI is inevitable, but I don’t think it’s coming as soon as you think. We’ll know the take-off is happening when we start to see breakthroughs in medicine and energy production; so far we’ve got a tool that helps people cheat on tests and go pro se in court.
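The "braindead between prompts" point can be illustrated without any real model: an LLM call is effectively a pure function of the prompt it receives, so any continuity has to be re-supplied by the caller. A minimal sketch, where `fake_llm` and `chat` are hypothetical stand-ins (not a real API), just to show the shape of the idea:

```python
# Toy illustration: a model call is stateless; "memory" exists only
# because the caller re-sends the transcript on every turn.

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model: it can only "know" what is in `prompt`.
    if "my name is Ada" in prompt:
        return "Hello, Ada!"
    return "Hello, stranger."

# Two independent calls: the second has no memory of the first.
print(fake_llm("my name is Ada"))    # the name is in this prompt
print(fake_llm("what is my name?"))  # stateless: the name is gone

# Continuity is simulated by concatenating the transcript each turn.
history: list[str] = []

def chat(user_msg: str) -> str:
    history.append(user_msg)
    return fake_llm("\n".join(history))  # whole transcript re-sent

chat("my name is Ada")
print(chat("what is my name?"))  # now the name rides along in the prompt
```

This is why the constant-feedback-loop contrast above matters: the loop, if you want one, lives entirely outside the model.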

Gkario:

If animals can't consent due to power imbalance/consciousness, can humans consent to AI?

krimcl:

humans can't even consent to the internet and social media

like legally you're barred from the process by adhesion contracts

Required:

Jesus
