uhm, did you know that waypipe with ssh is fast enough to use blender remotely over wi-fi?

Uncategorized · 47 posts · 13 commenters
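
(Context for the title: waypipe is a transparent proxy for Wayland clients, and the setup in question is typically a single command such as `waypipe ssh user@host blender`, which runs blender on the remote machine and forwards its Wayland buffers, compressed, over the ssh connection.)
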
tauon@possum.city

@dotstdy@mastodon.social @uecker@mastodon.social @mntmn@mastodon.social actually then you'd run into the same problem sending the textures over the network too, wouldn't you?

uecker@mastodon.social · #18

@tauon @mntmn @dotstdy Both could work just fine with X in theory. The GLX extension - a long time in the past - could do remote 3D rendering, but pixel shuffling over X could also work fine. X is a very generic and flexible remote buffer handling protocol. The issues with ssh -X are mostly latency-related, because toolkits (and Blender, which uses a builtin toolkit rather than a standard one) use it synchronously instead of asynchronously.
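
A minimal sketch of that synchronous-vs-asynchronous difference, assuming libxcb (Xlib's XInternAtom blocks for a server reply on every call, so N lookups cost N round-trips; xcb splits request and reply):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <xcb/xcb.h>

int main(void) {
    xcb_connection_t *c = xcb_connect(NULL, NULL);
    const char *names[] = { "WM_PROTOCOLS", "WM_DELETE_WINDOW", "_NET_WM_NAME" };
    xcb_intern_atom_cookie_t cookies[3];

    /* fire all requests first: one batched write, no waiting */
    for (int i = 0; i < 3; i++)
        cookies[i] = xcb_intern_atom(c, 0, (uint16_t)strlen(names[i]), names[i]);

    /* collect replies afterwards: the round-trip latency is paid once, not three times */
    for (int i = 0; i < 3; i++) {
        xcb_intern_atom_reply_t *r = xcb_intern_atom_reply(c, cookies[i], NULL);
        if (r) { printf("%s -> atom %u\n", names[i], r->atom); free(r); }
    }
    xcb_disconnect(c);
    return 0;
}
```

Over a high-latency ssh -X link, a toolkit that issues such requests one at a time stalls for a full round-trip on each; batching is what makes the link tolerable.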

dotstdy@mastodon.social · #19

@uecker @tauon @mntmn remote rendering for a program which is heavily reliant on the GPU, like blender, is the exact opposite of why you'd want remoting though. (Plus none of those virtualization things really work so well in the modern day; it's not GL 1.1 anymore, the model just doesn't fit.)

dotstdy@mastodon.social · #20

@uecker @tauon @mntmn it's not really obvious with the default scene, but a 3d program like blender requires a pretty hefty GPU to run the UI (see also any CAD tool, or a game)

uecker@mastodon.social · #21

@dotstdy @tauon @mntmn This depends on where the strong GPU is, but as I said, pixel pushing should also work with X. I use a medical image viewer over X; the image content updates very quickly. What is slow is the GTK part, because it is implemented badly.
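
What "pixel pushing" over plain X looks like in practice, as a rough sketch (assuming Xlib and a 24-bit TrueColor visual; error handling omitted): the client renders into a local buffer and ships the pixels, and no GPU is needed on the displaying side.

```c
#include <X11/Xlib.h>

/* Push a locally rendered frame (4 bytes per pixel) to the X server,
   which may be at the far end of an ssh -X link. `pixels` stays owned
   by the caller. */
void push_frame(Display *dpy, Window win, GC gc,
                char *pixels, int w, int h) {
    XImage *img = XCreateImage(dpy, DefaultVisual(dpy, DefaultScreen(dpy)),
                               24, ZPixmap, 0, pixels, w, h, 32, 0);
    XPutImage(dpy, win, gc, img, 0, 0, 0, 0, w, h); /* async: no reply awaited */
    img->data = NULL;   /* keep XDestroyImage from freeing our buffer */
    XDestroyImage(img);
    XFlush(dpy);
}
```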

uecker@mastodon.social · #22

@dotstdy @tauon @mntmn But I think even for many 3D applications that render locally, a remote rendering protocol is actually the right thing, because for all intents and purposes a discrete GPU is *not* local to the CPU, and whether you stream the commands via PCIe or the network is not so different. In fact, Wayland is also designed for remote rendering in this sense, just in a much more limited way.

dotstdy@mastodon.social · #23

@uecker @tauon @mntmn Unfortunately that's really not how the GPU works at all in the present day. It made more sense back in OpenGL 1.1, when there were pretty straightforward sets of "commands" and limited amounts of data passing between the GPU and the CPU. Nowadays, with things like bindless textures, GPU-driven rendering, and compute, practically every draw call can access practically all the data on the GPU, and the CPU can write arbitrary data directly to GPU VRAM at any time.
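
One concrete form of that "CPU writes directly to VRAM at any time" pattern, sketched with a persistently mapped buffer (core since OpenGL 4.4 via ARB_buffer_storage; assumes a current GL context and a loader such as glad providing the entry points):

```c
#include <glad/glad.h>  /* loader assumed; headers vary by project */

/* Create an immutable buffer the CPU can write through at any time.
   After this, a plain memcpy into the returned pointer moves data to
   the GPU with no GL call marking the transfer -- exactly the traffic
   a command-stream remoting layer cannot see. */
void *map_persistent(GLuint *buf_out, GLsizeiptr size) {
    GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT
                     | GL_MAP_COHERENT_BIT;
    glGenBuffers(1, buf_out);
    glBindBuffer(GL_ARRAY_BUFFER, *buf_out);
    glBufferStorage(GL_ARRAY_BUFFER, size, NULL, flags); /* immutable store */
    return glMapBufferRange(GL_ARRAY_BUFFER, 0, size, flags);
}
```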

dotstdy@mastodon.social · #24

@uecker @tauon @mntmn For very simple GPU programs you can make it work, but more advanced programs just do not work under a model with such restricted bandwidth between the GPU and the CPU. Plus, as was mentioned up-thread, you still need to somehow compress and decompress those textures online, which is itself a complex task. Plus you still need the GPU power on the thin client to render it. It's much easier to render on the host, and then compress and transfer the whole framebuffer.
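
For a sense of why the framebuffer route is tractable, a quick back-of-the-envelope (illustrative numbers only, assuming 1080p at 60 fps and a typical game-streaming bitrate):

```c
#include <stdio.h>

int main(void) {
    double raw = 1920.0 * 1080 * 4 * 8 * 60; /* RGBA8 at 60 fps, in bits/s */
    double enc = 8e6;                        /* ~8 Mbit/s encoded stream   */
    printf("raw:     %.1f Gbit/s\n", raw / 1e9); /* ~4.0  */
    printf("encoded: %.0f Mbit/s\n", enc / 1e6); /* ~8    */
    printf("ratio:   ~%.0fx\n", raw / enc);      /* ~500x */
    return 0;
}
```

Raising the bitrate trades bandwidth for quality and latency; a raw command-plus-data stream never has that option.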

uecker@mastodon.social · #25

@dotstdy @tauon @mntmn I use GPUs for high-performance real-time imaging applications. So I think I know a little bit about how this works.

uecker@mastodon.social · #26

@dotstdy @tauon @mntmn I use GPUs for high-performance computing. So I think I know a little bit about how this works.

dotstdy@mastodon.social · #27

@uecker @tauon @mntmn me too, i make aaa video games 🙂

uecker@mastodon.social · #28

@dotstdy @tauon @mntmn So you do not keep your game data in GPU memory?

dotstdy@mastodon.social · #29

@uecker @tauon @mntmn we keep gigabytes of constantly changing data in GPU memory. so yes, but unless you want to stream 10GB of data before you render your first frame, then no. (obviously blender is less extreme here, but cad applications still deal with tremendous amounts of geometry, to say nothing of the online interactive path tracing and whatnot)
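
To put numbers on that (a rough check, assuming a dedicated gigabit link): 10 GB is 80 Gbit, so shipping it at 1 Gbit/s takes on the order of 80 seconds before the first frame appears, while the same transfer over a PCIe 4.0 x16 link at roughly 32 GB/s is well under a second.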

dotstdy@mastodon.social · #30

@uecker @tauon @mntmn The PCIe bus lets us move hundreds of megabytes of data between VRAM and RAM every frame, and so we do that. Our engine also relies on CPU read-back of the downsampled depth buffer from the previous frame, so that's a non-starter; that's probably not something you'd run into outside of games, though.
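
The read-back in question might look roughly like this in plain OpenGL (a sketch; `downsampled_fbo` is a hypothetical engine-owned depth target, and real engines use fences/PBOs rather than this blocking form):

```c
#include <glad/glad.h>  /* loader assumed; headers vary by project */

/* Pull a small depth buffer back to the CPU for occlusion culling.
   Locally this is a PCIe hop; through a remoting layer it becomes a
   synchronous network round-trip on the frame's critical path. */
void read_back_depth(GLuint downsampled_fbo, float depth[256 * 144]) {
    glBindFramebuffer(GL_READ_FRAMEBUFFER, downsampled_fbo);
    glReadPixels(0, 0, 256, 144, GL_DEPTH_COMPONENT, GL_FLOAT, depth);
}
```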

uecker@mastodon.social · #31

@dotstdy @tauon @mntmn We found it critically important to treat the GPU as "remote" in the sense that we keep all hot data on the GPU, keep the GPU processing pipelines full, and hide the latency of data transfers to the GPU. I am sure it is similar for you. I can see that in gaming you may want to render closer to the CPU than to the screen, but this does not seem to change the fact that the GPU is "remote", no?
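
In compute terms, that "treat the GPU as remote" discipline is the familiar copy/compute overlap, sketched here with the CUDA runtime (`host` is assumed pinned via cudaMallocHost; the kernel launch is left as a comment since it needs nvcc):

```c
#include <stddef.h>
#include <cuda_runtime.h>

/* Upload chunk i on one stream while the other stream still works on
   chunk i-1, hiding the PCIe transfer latency behind computation. */
void process_chunks(const float *host, float *dev,
                    size_t n_chunks, size_t chunk_elems) {
    cudaStream_t s[2];
    cudaStreamCreate(&s[0]);
    cudaStreamCreate(&s[1]);
    for (size_t i = 0; i < n_chunks; i++) {
        cudaStream_t cur = s[i % 2];
        cudaMemcpyAsync(dev + i * chunk_elems, host + i * chunk_elems,
                        chunk_elems * sizeof(float),
                        cudaMemcpyHostToDevice, cur);
        /* process_kernel<<<grid, block, 0, cur>>>(dev + i * chunk_elems); */
    }
    cudaStreamSynchronize(s[0]);
    cudaStreamSynchronize(s[1]);
    cudaStreamDestroy(s[0]);
    cudaStreamDestroy(s[1]);
}
```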

dotstdy@mastodon.social · #32

@uecker @tauon @mntmn But like I hinted at before, there are also issues like applications which just map all the GPU memory into the CPU address space and write it whenever they like (with their own internal synchronization, of course). That's *really* hard to deal with, even for tools which trace GPU commands straight to disk. Doing it transparently over the internet is really, really, really hard.

dotstdy@mastodon.social · #33

@uecker @tauon @mntmn Similar, but likely at a narrower scale of latency tolerance. The issue is just the bandwidth vs. the size of the working set: the GPU is remote (well, unless it's integrated), but PCIe 4 bandwidth is ~300 times greater than what you get with a dedicated gigabit link, and vaguely ~15000 times greater than what you might use to stream a compressed video.
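
Those ratios roughly check out: PCIe 4.0 x16 moves about 32 GB/s, i.e. ~256 Gbit/s, which is ~250-300x a 1 Gbit/s link; against a compressed video stream in the 15-20 Mbit/s range, that is indeed on the order of 15,000x.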

uecker@mastodon.social · #34

@dotstdy @tauon @mntmn Yes, this makes sense and I am not disagreeing with any of it. But my point is merely that a display protocol that treats the GPU as remote is not fundamentally flawed as some people claim, because the GPU *is* remote even when local. And I could imagine that for some applications such as CAD, remote rendering might still make sense. We use a remote GPU for real-time processing of imaging data, and the network adds negligible latency.

dotstdy@mastodon.social · #35

@uecker @tauon @mntmn The reason it's flawed imo is that while it will work fine in restricted situations, it won't work in many others. Comparatively, streaming the output always works (modulo latency and quality), and you have a nice dial to adjust how bandwidth- and CPU-heavy you want to be (and thus latency and quality). If you stream the command stream, you *must* stream all the data before rendering a frame, and you likely need to stream some of it without any lossy compression at all.

uecker@mastodon.social · #36

@dotstdy @tauon @mntmn The command stream is streamed anyway (in some sense). I do not understand your comment about the data; you also want it to be in GPU memory at the time it is accessed. Of course, you do not want to serialize your data through a network protocol, but in X, when rendering locally, this is also not done. The point is that you need a protocol for manipulating remote buffers without involving the CPU. This works with X and Wayland, and is also what we do (manually) in compute.
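
A sketch of what that buffer-handle model looks like on the Wayland side (assuming `surface` and `buffer` were set up through the usual registry/wl_shm or linux-dmabuf dance, omitted here): the protocol messages only name a buffer, and the pixels never pass through the protocol stream itself.

```c
#include <stdint.h>
#include <wayland-client.h>

/* Present one frame: attach a buffer by handle, mark it damaged, commit.
   The compositor then reads the buffer directly (shared memory or dmabuf);
   no pixel data is serialized into the wire protocol. */
void present(struct wl_surface *surface, struct wl_buffer *buffer) {
    wl_surface_attach(surface, buffer, 0, 0);
    wl_surface_damage(surface, 0, 0, INT32_MAX, INT32_MAX);
    wl_surface_commit(surface);
}
```

This seam is essentially what a proxy like waypipe uses: it stands between client and compositor and replicates buffer contents across the network on its own terms.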

tauon@possum.city · #37

@uecker@mastodon.social @dotstdy@mastodon.social @mntmn@mastodon.social

> because the GPU is remote even when local

this is a good point & why i find it so cromulent that plan 9 treats all devices network transparently
