in reply to Re^8: Creating X BitMap (XBM) images with directional gradients
in thread Creating X BitMap (XBM) images with directional gradients

OpenGL::Sandbox already has textures, and more importantly, Buffer Objects that can be memory-mapped to load them with data and then used in shaders. In other words, I'm already at the point where I'd like to have Model data packed in buffers so I can just dump it into the graphics card quickly. Even more awesome would be if I could pre-allocate the memory-mapped buffers for the Model object before loading an STL file, so that the data was already in the graphics card by the time the load finished. *That* would perhaps be too much optimization.
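A rough sketch of what I mean, in the spirit of OpenGL::Sandbox's buffer objects. The constructor and method names below are illustrative rather than the guaranteed OpenGL::Sandbox::Buffer API, and load_stl_into is a hypothetical loader:

```perl
use OpenGL::Sandbox qw( make_context );

my $ctx = make_context;

# Pre-allocate a GPU buffer sized for the STL file before parsing it.
# (Illustrative API: check OpenGL::Sandbox::Buffer for the real methods.)
my $stl_path = 'model.stl';
my $buf = OpenGL::Sandbox::Buffer->new( size => -s $stl_path );

# Map the buffer, let the (hypothetical) STL loader write vertex data
# straight into the mapped region, then unmap so the GPU owns it.
my $map = $buf->mmap;
load_stl_into( $stl_path, $map );
$buf->unmap;
```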

Maybe equally awesome would be an option to tell PDL to use a memory buffer of the caller's choosing as the storage for an ndarray, so that I could back an ndarray with one of those memory maps.
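For what it's worth, the closest existing mechanism I know of is mutating an ndarray's backing data string in place via get_dataref/upd_data. It still goes through PDL-owned storage rather than truly wrapping foreign memory, but it shows the shape of the idea:

```perl
use PDL;

my $nd = zeroes( float, 4 );

# get_dataref returns a ref to the Perl scalar holding the raw bytes;
# after writing into it, upd_data tells PDL the data changed.
my $ref = $nd->get_dataref;
substr( $$ref, 0, 4 ) = pack( 'f', 3.5 );
$nd->upd_data;

print $nd, "\n";   # [3.5 0 0 0]
```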


Re^10: Creating X BitMap (XBM) images with directional gradients
by etj (Priest) on Aug 15, 2024 at 18:49 UTC
    PDL absolutely could acquire such an ability. It would "only" (ha!) require new code in openglq.pd to make extra PDL operations.

    Spitballing a design: they'd take an OtherPars parameter that was a pointer to a Perl/C object, OR an ndarray Par of type indx to capture pointers in a way that would broadcast naturally. I think a good starting point, before cutting code, would be to see what quick wins there are in stealing working code/ideas from OGL::Sandbox into PDL's TriD; textures are an obvious candidate. Are there others?

      steal working code/ideas from OGL:Sandbox into PDL's TriD, and textures is an obvious candidate. Are there others?

      The single most awesome bit of code in OpenGL::Sandbox is $program->set_uniform. The documentation doesn't do it justice, but it performs magic that lets you set the uniform values for a shader in a very convenient manner. There might be an interesting way to tie that into PDL in a passive manner, so that the user specifies everything in terms of PDL arrays and PDL magically broadcasts into the graphics card for the final step, instead of broadcasting across CPUs. (That's basically what uniforms are: global variables broadcast to the N shader units each running a copy of the shader, each with shared access to any of the texture or buffer objects you have loaded.)
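      For anyone following the thread, usage is roughly like this (quoting from memory, so treat the resource and uniform names as examples and check the OpenGL::Sandbox docs for the exact details):

```perl
use OpenGL::Sandbox qw( make_context $res );

my $ctx  = make_context;
my $prog = $res->program('demo');   # load/link a named shader program

# set_uniform figures out the uniform's GLSL type and dispatches to
# the matching glUniform* variant; the names here are made up.
$prog->set_uniform( light_dir => [ 0.0, 1.0, 0.0 ] );
$prog->set_uniform( time      => 1.25 );
```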

      Feel free to steal the code. I feel like it hasn't received its return-on-investment for the number of hours I put into it.

      For textures, you need to weigh the convenience against adding a dependency on PNG and JPEG libraries. But you can always fall back to raw uncompressed RGBA files.
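      The raw fallback really is trivial: a w×h RGBA file is just w*h*4 bytes, already in the layout a texture upload (or a PDL texture interface) wants. A minimal sketch, assuming a file of known dimensions:

```perl
use PDL;

# A raw RGBA texture file is just width * height * 4 bytes of pixels.
my ( $w, $h ) = ( 256, 256 );
open my $fh, '<:raw', 'skin.rgba' or die "skin.rgba: $!";
read $fh, my $bytes, $w * $h * 4;

# Wrapped in an ndarray, this is exactly the "already-unpacked image
# data" a PDL texture interface could consume directly.
my $img = PDL->new_from_specification( byte, 4, $w, $h );
${ $img->get_dataref } = $bytes;
$img->upd_data;
```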

        I don't doubt for a second that it's awesome, though my OpenGL knowledge is still so minimal I have to take your word for it!

        PDL's texture-ish interface would have to be based on already-unpacked image data, which would sidestep the question of overly tying the two together.

        PDL doesn't really have a concept of GPUs at all at this time. That means it doesn't know about shaders, which makes "uniforms" a bit moot for now. It remains a project goal to gain such a concept (see https://github.com/PDLPorters/pdl/issues/349). An approach I think might be valuable is to mimic OpenCV in having a special kind of "Mat" (for us, an ndarray) that lives in GPU memory, together with ways to move data between that and a CPU-side object. It would also need a separated-out way of doing multi-processing, which is currently hardcoded to POSIX threads only (see also https://github.com/PDLPorters/pdl/issues/358).
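        To make the OpenCV analogy concrete, the shape of API I have in mind would be something like the following, which is entirely hypothetical (no PDL::GPU class exists today):

```perl
use PDL;

my $cpu = sequence(1000);

# Hypothetical, modelled on OpenCV's cv::cuda::GpuMat:
my $gpu = PDL::GPU->upload($cpu);   # copy host -> device
my $out = ( $gpu * 2 + 1 )->sqrt;   # ops run as GPU kernels
my $res = $out->download;           # copy device -> host
```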