My general dislike of AI writing has had a positive impact on how I read and listen to texts and scripts.
-
The extra fingers don't bother me as much as the lack of coherence. Here is a sentence that really turned me off:
"The balance of salt and fresh water in the body of the cone snail is an essential equilibrium between concentrations of water."
????
I can infer what the training texts that produced this might have said: something interesting about how these snails can survive in a salty environment while land snails are sensitive to salt. But this is saying next to nothing.
@futurebird @orangelantern Seems directly related to the fingers thing, though. Wasn't the main issue causing finger deformation that the model would generate a section based on what was next to it, so the smaller, intricate, repetitive details got futzed up?
That sentence looks kinda similar. It took a bunch of sentences that said the same thing, picked the important bits of each one (balance, essential equilibrium, concentrations), and just... used all of them, as if piling them up made it more important?
-
The extra fingers don't bother me as much as the lack of coherence. Here is a sentence that really turned me off:
"The balance of salt and fresh water in the body of the cone snail is an essential equilibrium between concentrations of water."
????
I can infer what the training texts that produced this might have said: something interesting about how these snails can survive in a salty environment while land snails are sensitive to salt. But this is saying next to nothing.
@futurebird That is because it is basically DadaDodo (https://www.jwz.org/dadadodo/), but instead of Markov chains it uses a neural network. If you claim it really understands anything, I'm going to need stronger proof than "it couldn't answer a question if it didn't understand it."
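For anyone who hasn't run into DadaDodo: the Markov-chain idea it's being compared to boils down to a table of which words have been seen following which, walked at random to produce plausible-sounding nonsense. Here's a minimal, purely illustrative sketch in Python (not DadaDodo's actual code; the corpus and function names are invented for the example):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def babble(chain, start, length=20):
    """Walk the chain, picking each next word by how often it followed the last one."""
    word = start
    output = [word]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)  # frequency-weighted, since duplicates are kept
        output.append(word)
    return " ".join(output)

# Toy corpus echoing the snail sentences above (illustrative only).
corpus = ("the balance of salt and fresh water is essential "
          "the balance of water in the snail is an equilibrium "
          "concentrations of salt in water matter to the snail")

print(babble(build_chain(corpus), "the"))
```

Every word is chosen only from what locally followed the previous word; nothing in the loop models what the text is about, which is why the output can sound like the cone snail sentence while saying nothing. An LLM swaps the lookup table for a neural network conditioned on a much longer context, but the generation loop is still "pick a likely next word."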
-
The extra fingers don't bother me as much as the lack of coherence. Here is a sentence that really turned me off:
"The balance of salt and fresh water in the body of the cone snail is an essential equilibrium between concentrations of water."
????
I can infer what the training texts that produced this might have said: something interesting about how these snails can survive in a salty environment while land snails are sensitive to salt. But this is saying next to nothing.
@futurebird @orangelantern What you said upthread somewhere about missing out on incoherent human writing is painfully relatable. It's bad enough trying to figure out if you've misread something, missed something earlier, or if the author knew what they were talking about but made a mistake writing it, or just didn't know what they were talking about, or some combination thereof…
-
I had to talk with a support chat from Optimum Online about my internet recently. I think they are using AI, but mixed with a real customer service rep in some bizarre way. Every message is so wordy, positive, and sycophantic. "Of course I can help you do that right away!" (can you? can you really?)
But eventually someone made some real changes to my account that I'd HOPE they wouldn't leave to an AI. There were long gaps between each response, and I still had to wait 15 minutes to talk to "someone."
@futurebird Kind of hilarious to think about those long pauses being someone having to read the output of this great, amazing, communicative machine that will replace labor, because it cannot be trusted.

-
The extra fingers don't bother me as much as the lack of coherence. Here is a sentence that really turned me off:
"The balance of salt and fresh water in the body of the cone snail is an essential equilibrium between concentrations of water."
????
I can infer what the training texts that produced this might have said: something interesting about how these snails can survive in a salty environment while land snails are sensitive to salt. But this is saying next to nothing.
This is an essential failure mode of slop text: it's only recombining probable things that might come next, and it's never trying to *say a specific thing* - there's no model of the thing the text is *about*.
So there's no process by which the LLM can express that thing elegantly or tell if it has been expressed at all.
-
This is an essential failure mode of slop text: it's only recombining probable things that might come next, and it's never trying to *say a specific thing* - there's no model of the thing the text is *about*.
So there's no process by which the LLM can express that thing elegantly or tell if it has been expressed at all.
@petealexharris@mastodon.scot @futurebird@sauropods.win Well, no, of course not. Because for that, after all, you need an actual intellect. A mind. The machine can't tell if something is good or even makes sense. All it can do is sort data by context, which, technologically speaking, is still amazing and has lots of useful applications (even in creative work), but that's not enough to create something on its own that is reliably useful or good in the way a mind could.
AI won't replace artists and writers anytime soon. Which doesn't mean idiot bosses who don't see extra fingers or incoherent writing won't use it to kill their jobs, though.
It's already bad with English but infinitely worse with German, let me tell you. Every sentence this thing translates (because it does all generation in English and translates back and forth) is stilted and artificial at best, completely nonsensical at worst.
And it completely falls apart when confronted with dialects. Nevertheless, German television is killing off subtitling jobs to replace them with this tech that barely speaks English, never mind Low German or Bavarian. Madness. And lots of magical thinking. If nothing else, this will be funny once they realize it doesn't work.
While truly useful in many ways, this tech is getting oversold in a way that can only be described as lunacy, complete and utter madness. They know it can't deliver the sales they promised their investors, so they double and triple down, promising real magic their tech absolutely can't do. This could all have been avoided with a little dose of realism instead of AI-generated hallucinations. Never drink your own Kool-Aid, I guess.
-
My general dislike of AI writing has had a positive impact on how I read and listen to texts and scripts.
If I'm listening to a nature video, for example, and a sentence is empty of meaning or just illogical, I turn that video off and avoid whoever made it.
Some of the things I've rejected probably weren't made by AI, but I don't see that as a bad thing.
My main issue with AI texts is I just find them kind of patronizing? You want me to sit and nicely listen but you can be bothered to write?
@futurebird @mayintoronto same.
-
I had to talk with a support chat from Optimum Online about my internet recently. I think they are using AI, but mixed with a real customer service rep in some bizarre way. Every message is so wordy, positive, and sycophantic. "Of course I can help you do that right away!" (can you? can you really?)
But eventually someone made some real changes to my account that I'd HOPE they wouldn't leave to an AI. There were long gaps between each response, and I still had to wait 15 minutes to talk to "someone."
@futurebird I think I’ve encountered something like this, and I just assumed that a human rep was triggering a text macro, which might have been pre-written by an AI (or a human really good at sycophancy).
-
The extra fingers don't bother me as much as the lack of coherence. Here is a sentence that really turned me off:
"The balance of salt and fresh water in the body of the cone snail is an essential equilibrium between concentrations of water."
????
I can infer what the training texts that produced this might have said: something interesting about how these snails can survive in a salty environment while land snails are sensitive to salt. But this is saying next to nothing.
@orangelantern @futurebird I mean, you have to give that an incredibly creative reading to even call it anything resembling reality.
Is there salt water in cone snails? No
Is there fresh water in cone snails? No
It starts at completely wrong and gets worse from there.
-
@orangelantern @futurebird I mean, you have to give that an incredibly creative reading to even call it anything resembling reality.
Is there salt water in cone snails? No
Is there fresh water in cone snails? No
It starts at completely wrong and gets worse from there.
As a person who knows only a smattering of facts about cone snails, this sentence first made me feel like my reading comprehension was getting worse... then just disgusted.
-
@futurebird I think I’ve encountered something like this, and I just assumed that a human rep was triggering a text macro, which might have been pre-written by an AI (or a human really good at sycophancy).
@adamrice @futurebird
The Uber driver support got to be almost maddeningly apologetic when I drove for them. I'd call with something simple like "this order was already picked up" or "the restaurant is closed" and they'd immediately say "I'm so sorry, I know it is very annoying for you to have to deal with this!" It was just a script. I always wanted to tell them to skip it, but that would have probably led to more of it or just confusion.
-
My general dislike of AI writing has had a positive impact on how I read and listen to texts and scripts.
If I'm listening to a nature video, for example, and a sentence is empty of meaning or just illogical, I turn that video off and avoid whoever made it.
Some of the things I've rejected probably weren't made by AI, but I don't see that as a bad thing.
My main issue with AI texts is I just find them kind of patronizing? You want me to sit and nicely listen but you can be bothered to write?
Shouldn't it be "you can't be bothered to write"?