This blogpost makes an astoundingly good case about LLMs I hadn't considered before.

27 posts · 19 commenters

#1 cwebber@social.coop

This blogpost makes an astoundingly good case about LLMs I hadn't considered before. The collapse of public forums (like Stack Overflow) for programming answers coincides directly with the rise of programmers asking for answers from chatbots *directly*. Those debugging sessions become part of a training set that now *only private LLM corporations have access to*. This is something that "open models" seemingly can't easily fight. https://michiel.buddingh.eu/enclosure-feedback-loop

#2 vfrmedia@social.tchncs.de (in reply to #1, cwebber@social.coop)

@cwebber Already the forums for VOIP software and embedded stuff (Arduino etc.) are fully enshittified (they were toxic enough pre-AI) and folk have simply stopped contributing there. I think this happened just *before* LLMs became popular, so the quality of whatever does go into the training sets isn't going to be much good.

I suspect another factor is that when people are getting paid for their work *and* depend on their employers upselling SaaS or other commercial services, they are less inclined to share stuff with the competition. (I had to figure out my PJSIP trunk and how to secure it by myself; most of my findings are tooted here on Fedi, as I'm not even sure where else to put them.)

#3 ramin_hal9001@fe.disroot.org (in reply to #1, cwebber@social.coop)

@cwebber@social.coop exactly. AI can only exist because of human-made content from the Internet: humans have been talking with each other over the Internet for roughly 30 years at this point, and most of these conversations have been archived somewhere.

So what would happen if this archive disappears? Maybe not the physical medium of the Internet, but the practice of humans speaking with each other over the Internet and recording their conversations, that can disappear. Less and less often, people ask questions of other humans on forums like Reddit or Stack Overflow; more and more often they ask AI apps, where the conversation cannot be used as more training data, since that would create a destructive feedback loop in the signal the AI is built to predict, which can lead to the AI behaving erratically.

(I originally wrote the above in a blog post last August; I mention this so as not to plagiarize myself.)

It is kind of like the Internet is a forest of natural content. AI companies have now cut down the forest without planting new trees. They have all the raw material and are trying (and failing) to profit from it, but in 5 years that information may be out of date. Will any humans be asking new questions of each other on platforms not being mined for content by AI companies? If not, these LLMs will rapidly become more and more inaccurate over time.
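
(Toy sketch, added for illustration and not part of the post above: that destructive feedback loop can be shown with the simplest possible stand-in for a model. Repeatedly fit a Gaussian to samples drawn from the previous generation's own fit; the estimate drifts and its spread tends toward zero, i.e. the "signal" degrades.)

    # Each "generation" is trained only on the previous generation's output.
    # The "model" here is just a Gaussian fit, so the degradation is easy to see:
    # the estimated spread drifts toward zero over the generations.
    import random
    import statistics

    mu, sigma = 0.0, 1.0                    # generation 0: the human-made signal
    for generation in range(1, 101):
        # the next training set is nothing but the previous model's own output
        samples = [random.gauss(mu, sigma) for _ in range(10)]
        mu = statistics.fmean(samples)      # refit the model on its own samples
        sigma = statistics.stdev(samples)
        if generation % 20 == 0:
            print(f"generation {generation}: mu={mu:+.3f}, sigma={sigma:.3f}")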

#4 copystar@social.coop (in reply to #1, cwebber@social.coop)

@cwebber Thanks for sharing. To my ears, it rhymes with this response to the Microsoft/GitHub Copilot encroachment: https://githubcopilotinvestigation.com/

#5 futzle@old.mermaid.town (in reply to #1, cwebber@social.coop)

@cwebber The number of times someone would DM me on a forum asking for private help, and I would always tell them to ask on the public forum so that everyone else could benefit … and they never did.

The “fuck the community, I’ve got mine” attitude is stronger than ever.

#6 dnavinci@genomic.social (in reply to #4, copystar@social.coop)

@copystar
I've been thinking about adverse methods for reclaiming those debugging loops into offline models.
I think the process of "distilling" models can probably work as a sort of shunt to bring that back to the communities.

It's not cheap, but it's only like $10k. Not the billions they're spending on these models quarterly.

Today you can pay AWS and they'll leave you alone in a room with Claude all day, no questions asked.

That might change in the future.
@cwebber
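
(Illustrative sketch, not from the post above; query_teacher() and the model name are placeholders rather than any real API. The "shunt" described here would amount to sequence-level distillation: collect a hosted model's answers and fine-tune a small open model to imitate them, since hosted teachers return text, not logits.)

    # Hedged sketch of distilling a hosted "teacher" into a small open "student".
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    STUDENT_ID = "some-small-open-model"          # placeholder checkpoint name
    tok = AutoTokenizer.from_pretrained(STUDENT_ID)
    student = AutoModelForCausalLM.from_pretrained(STUDENT_ID)
    opt = torch.optim.AdamW(student.parameters(), lr=1e-5)

    def query_teacher(prompt: str) -> str:
        """Placeholder: call whichever commercial model you are paying for."""
        raise NotImplementedError

    def distill_step(prompt: str) -> float:
        answer = query_teacher(prompt)                         # capture the debugging session...
        ids = tok(prompt + "\n" + answer, return_tensors="pt").input_ids
        loss = student(ids, labels=ids).loss                   # ...and train the student to reproduce it
        loss.backward()
        opt.step()
        opt.zero_grad()
        return loss.item()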

#7 mayintoronto@beige.party (in reply to #1, cwebber@social.coop)

@cwebber Sure didn't help that Stack Overflow was killing itself too.

#8 dave@rascalking.com (in reply to #1, cwebber@social.coop)

@cwebber dear god, I knew SO was taking a beating, but that chart the post linked to was _bleak_.

#9 shom@gts.shom.dev (in reply to #7, mayintoronto@beige.party)

@mayintoronto @cwebber
True, that decline was on its way. But at least the information was publicly available.
This is also why I was ranting about it two years ago, when people were encouraging others to delete their Stack Overflow contributions (as if that meant the AI companies wouldn't have access to them; SO made a deal for direct sharing rather than scraping). You're just denying it to other human beings at this point. Build a better web: post on personal blogs and community forums, and don't lock up knowledge in opaque systems!

#10 r343l@freeradical.zone (in reply to #1, cwebber@social.coop)

@cwebber Extra painful: bigger tech companies can afford to pay for plans that limit use of context data to train future versions of an LLM service's models, so THEIR work is "protected" while their employees consume the commons. But smaller companies and individual users will be giving up their data.

#11 dandean@indieweb.social (in reply to #1, cwebber@social.coop)

@cwebber enclosure strikes again

#12 tseitr@mastodon.sdf.org (in reply to #1, cwebber@social.coop)

@cwebber this article has some good predictions about the knowledge silos LLMs might be able to create, but it does not address the fact that the business model is not profitable. When the price hikes hit to make LLMs generate profits, people will have to balance the price of the subscription against the real value provided... just like any purchase. We might lose a few years of knowledge down the LLM silos before the collapse, but personally I think that is OK; we have plenty of well-documented 1/2

#13 tseitr@mastodon.sdf.org (continuing #12)

@cwebber tech from 2010 to 2023 that will still be fully usable. People choosing to use said tech will decrease costs and might fast-forward the downfall of the LLM-everywhere trend we are currently in.

#14 jens@social.finkhaeuser.de (in reply to #1, cwebber@social.coop)

@cwebber I keep repeating this in such contexts, apologies if I sound like a broken record: AI is a fascist project.

The purpose isn't merely enclosure of the commons. Making public stuff private is more of a means to an end.

There are centuries of historical precedent showing that when a state has natural resources, it needs fewer people to extract wealth from them, and so to pay for what keeps rulers in power.

If a state has fewer resources, it has to rely on a large, healthy population's...

#15 jens@social.finkhaeuser.de (continuing #14)

@cwebber ... labour. A population that needs to be educated and mobile enough to fulfil its tasks. Such a population tends to demand more say in the affairs of state.

So natural resources lead to tyrannies, and lack thereof to democracies.

Privatisation of knowledge is a way of creating artificial resources to extract with fewer labourers. Plus, the more that extraction is automated, the smaller the population a ruler needs - or the more precarious its existence.

That is the goal.

#16 svelmoe@hachyderm.io (in reply to #1, cwebber@social.coop)

@cwebber
One thing I've noticed along the same lines is that I talk through much less code with colleagues now and have far fewer interactions with them for working through problems, which limits my exposure to alternative ways of solving them.

Whenever I want to discuss a problem, it more often than not boils down to some LLM answer, meaning I might as well 'cut out' the middle and ask the LLM itself if all I get are LLM answers anyway.
That truly sucks.
"Have you asked Claude/Copilot/ChatGPT?".....

#17 mahadevank@mastodon.social (in reply to #1, cwebber@social.coop)

@cwebber of course, guys - it was never about the LLM, it was about crowd-sourcing intelligence at an epic scale. Every piece of code a developer writes and fixes becomes training data. Same with every conversation. I'm surprised people don't see the danger in having one single overlord and gatekeeper of all the information in the world. It's crazy.

People seem to have forgotten what democracy and multilateralism really mean.

#18 dianshuo@mstdn.io (in reply to #1, cwebber@social.coop)

@cwebber this isn’t really new. For all the things on Stack Overflow, there was a huge domain of knowledge that was just not on there.

For most of my corporate developer life the knowledge and bug fixes would not be found on public forums but in internal collective knowledge, documentation, or simply knowing a person in the same field. Most of this was not public domain.

The biggest issue now is that those firms commit their group knowledge to LLMs and we will not get it back.

#19 michiel@social.tchncs.de (in reply to #1, cwebber@social.coop)

@cwebber you calling it an 'astoundingly good case' makes me feel insightful in a way no LLM has been able to accomplish. I'm going to be insufferably smug for the rest of the day 🙂

#20 lfzz@mastodon.social (in reply to #2, vfrmedia@social.tchncs.de)

@vfrmedia make a small static-site blog to share your experience? Put something like Anubis in front of it to stop or slow scraping.
