Abspeckgeflüster – Forum für Menschen mit Gewicht(ung)

Free. Ad-free. Human. Your weight-loss forum.


FOUND IT

82 posts · 66 commenters · 0 views

This topic was deleted. Only users with the appropriate permissions can see it.
• wobweger@mstdn.social (#26)

  @lokeloski
  alt-text screenshot of a post by magicmooshka from Jan 7:
  recently my friend's comics professor told her that it's acceptable to use gen AI for script-writing but not for art, since a machine can't generate meaningful artistic work. meanwhile, my sister's screenwriting professor said that they can use gen AI for concept art and visualization, but that it won't be able to generate a script that's any good. and at my job, 1/2

  it seems like each department says that AI can be useful in every field except the one that they know best.

  it's only ever the jobs we're unfamiliar with that we assume can be replaced with automation. The more attuned we are to certain processes, crafts, and occupations, the more we realize that gen AI will never be able to provide a suitable replacement. The case for its existence relies on our ignorance of the work and skill required to do everything we don't. 2/2
• wobweger@mstdn.social (#27)

  Strange conclusions by those professors; in my mind it works differently. When I see #salami output in a field where I'm an expert, judge it to be inferior, and conclude that so-marketed gen AI will not be competition, I should also conclude that this holds for all other fields, and that I, as a dilettante in all those other fields, can be tricked into accepting generated output as valid.
• wobweger@mstdn.social (#28)

  The reason the professors and departments mentioned are willing to accept #salami output in fields other than their own may be years of propaganda on almost every channel claiming that we are living in an AI era and that this is THE new technology to use. We see huge investments and think that no one would burn money on such a flawed, dysfunctional, slop-generating "invention", so it must work.
• wobweger@mstdn.social (#29)

  Stock prices are driven by this wild marketing stunt.

  Seven stocks dominate the S&P 500 index; all seven are in a full hype cycle.

  This bubble has to go.
  This bubble will go.
  Soon.

  #salami aka #AI #AGI #genAI
• drahardja@sfba.social (#30)

  @lokeloski Gell-Mann Amnesia.
• adrianriskin@kolektiva.social

  @lokeloski

  It's the Gell-Mann amnesia effect all over again.

  -----------
  The Gell-Mann amnesia effect is a claimed cognitive bias describing the tendency of individuals to critically assess media reports in a domain they are knowledgeable about, yet continue to trust reporting in other areas despite recognizing similar potential inaccuracies.

  https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect

• joepbc@mastodon.social (#31)

  @AdrianRiskin @lokeloski Or, more generally, the egocentric bias. Veritasium has a nice video on this: https://youtu.be/3LopI4YeC4I?si=ZV6CuklywzLekwHd
• at1st@mstdn.ca (#32)

  @AdrianRiskin @lokeloski It's one of the reasons that, when pointing out that I don't like generative LLMs for the work they output, I emphasize that it's not just *my* programming expertise I feel this for.

  I feel the same way about books: if you wrote it with an LLM, and we can see that because a prompt made it into the printed version, that tells me you did not read what you claimed to have "written" with an LLM. Why should I read it, then, when I know it does the same thing there that it does for math, or coding, or images?
• jantietje@norden.social shared this topic
• geeeero@mastodon.social (#33)

  @lokeloski Very well put. To me, this is similar to the Gell-Mann amnesia effect: for subjects we have deep knowledge about, we see all the flaws in media reports, but we tend to assume that for all other subjects the media reports are basically fine. @davidgerard

  https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect?wprov=sfla1
• gkrnours@mastodon.gamedev.place

  @lokeloski Devs seem to be the only ones who think they can replace their own job with AI and everything will be fine.

• mathieugenois@fediscience.org (#34)

  @gkrnours Some mathematicians are also on this "let's automate our own job" path… @lokeloski
• bovaz@misskey.social (#35)

  @lokeloski@mastodon.social I just shared this at work, with some of the people pushing for AI integration everywhere.
• bartholin@fops.cloud (#36)

  @lokeloski Or programmers asking LLMs to generate code for them, because they cannot code.
• pi_rat@shitposter.world (#37)

  @lokeloski Rediscovering https://en.wikipedia.org/wiki/Michael_Crichton#Gell-Mann_amnesia_effect all over again, are we?
• arnotron@noc.social (#38)

  @lokeloski I call that Mount Stupid.
• davidgerard@circumstances.run (#39)

  @geeeero @lokeloski Important to note: the Gell-Mann effect is made-up trash. It's literally something Crichton said once. So imagine how cognitive psychologists feel about it.
• woo@fosstodon.org (#40)

  @lokeloski Fortunately for AI pushers, most people are ignorant about most things. Optimistically, the inverse 80/20 rule applies.
• xerge@mastodon.nl (#41)

  @lokeloski I've seen this attitude even in some highly skilled people.

  The idea is that what they're doing is obviously complex and requires deep knowledge and skill, while the work others are doing is obviously trivial. Very surprising.

  It's not uncommon for undergraduates to assume some field is easy because the introductory course they took on it was, but for accomplished professors to hold similar ideas about fields outside their expertise? Why? Is there a psychologist in the house?
• beemoh@mastodonapp.uk (#42)

  @lokeloski An extra step to this I saw elsewhere: "People think it can do things, except the things they personally are competent to do. Which is why the C-suite thinks it can do everything."
• jigmedatse@social.openpsychology.net (#43)

  @lokeloski@mastodon.social Why do I always find it at best questionable for any field I look at? Like, "yeah, that kind of feels like it's maybe decent, but I'd have to check whether it's actually stupid..." Ah well, because it's always stupid whenever I have the slightest bit of a clue.
• wolf@helvede.net (#44)

  @lokeloski The strange thing about AI is that it generates great answers about everything I don't know much about, yet in my field of expertise it seems incredibly dumb.
• denofearth@mas.to

  @lokeloski
  I recently went to an opera where the composer was not only present but also performing as one of the soloists, among five other vocalists, along with a men's choir, accompanied by a full orchestra.

  The backdrop to this rich contribution to human musical art was AI visuals projected onto a screen.

• shaulaevans@zirk.us (#45)

  @DenOfEarth @lokeloski 😬

Copyright (c) 2025 abSpecktrum (@abspecklog@fedimonster.de)

Built with insomnia, coffee, broccoli & ♥

Imprint | Privacy Policy | Terms of Use
