Federation EN Thu 08.05.2025 13:25:50 I got a really strange reply on Bluesky, and I think it's worth quoting, because it's worth refuting: > one of the biggest problems in human communication is that people (without assistance) say things they do not mean. it's one of the primary forms of miscommunication out there. AI assistants can, actually, help ensure people _do_ say what they mean, while also anticipating how others may take it. (cotd)
Federation EN Thu 08.05.2025 13:26:23 Strange reply about AI, continued: > in this sense what people do with LLMs is not meaningfully different from translation, in fact it is all translation between idiolects and sociolects, colloquial dialects and prestige dialects. there's no linguistically sound distinction between these forms of translation. (ok, now to refute it)
Federation EN Thu 08.05.2025 13:27:54 My reply: The AI doesn't have access to your internal thoughts to translate from. If I ask an AI to translate a book, but I keep the book closed, and it can only look at the cover, it can't translate it unless it already knows its contents. It can guess, but that's not the same. https://bsky.app/profile/dustyweb.bsky.social/post/3lonsg4gxps2k
Federation EN Thu 08.05.2025 13:29:44 So, it's true that communication is largely, even primarily, translation between different mediums of information. Even turning your thoughts into the words from your mouth is translation. But it's utterly strange to say that AI generating thoughts "for you" is "translation" from a source material it cannot examine. This is like the weird thing of an "AI representation of a deceased defendant" appearing in court. What absolute, dangerous nonsense. A complete misunderstanding of life, ideas, and communication.
Federation EN Thu 08.05.2025 13:33:39 I removed the links to the post because the original poster felt it was dogpiling for me to do so, and fair enough, I guess. But I am troubled by this line of thinking, and I think it *is* a line of thinking people are going down.
Federation EN Thu 08.05.2025 14:07:44 @cwebber unrelated to the thread but somewhat related to what that person was saying: is there a reason why you boost your own self-replies? i think the overwhelming majority of fedi apps will show self-replies in timelines, so boosting yourself just makes your posts show up twice in a row, which is unnecessary. (i can understand doing it on bluesky, because bluesky collapses reply chains to max 3 or so, but fedi largely doesn't do that...)
Federation EN Thu 08.05.2025 14:09:30 @trwnh I boost my own replies so you see it twice because then it means the reply is double good
Federation EN Thu 08.05.2025 14:25:53 @cwebber @trwnh The lack of an algorithm means mastodon's feeds are dominated by nature's most implacable algorithm: time. idk if I want to make a general case for this, but I like it when some of the people I follow boost or edit their posts because, if they didn't, I wouldn't see the posts. This is especially the case when the people are in different time zones (i.e., most people).
Federation EN Thu 08.05.2025 13:35:35 @cwebber LLMs are the new "there are spirits living in the stones"
Federation EN Thu 08.05.2025 13:37:03 @cwebber I think that BS was just something cooked up by the victim's side for vengeance upon the accused.
Federation EN Thu 08.05.2025 13:36:04 @cwebber Don't point out that the AI can't access your mind! Some of your readers will respond by getting a fucking Neuralink by Elon Musk installed in their brain so the "AI" can "know" their thoughts. /s People who depend on AI grift will tend to double down rather than learn to do difficult things, like breaking that dependency. It's like the Carl Sagan quote: Once you give [an LLMentalist] power over you, you almost never get it back.