in reply to Re^6: Creating X BitMap (XBM) images with directional gradients
in thread Creating X BitMap (XBM) images with directional gradients

So, this is exactly what I said I was looking for, but what I was actually thinking of was some format that would be efficient for C code to operate on. Having to reach into individual scalars for the 3D coordinates isn't going to be very fast.

I thought it over a bit, and I think what I actually want is:

Maybe the specification for an object like this could be that each of these attributes may be a packed scalar, or a PDL ndarray?

The end-goal I'm trying to achieve is to pass all the buffers to OpenGL and make a shader that can render the whole mesh.
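
To make that concrete, something roughly like this is what I have in mind (attribute names and layout purely illustrative; each buffer could equally be a packed scalar or a PDL ndarray):

    use PDL;

    my $n_vertices = 3;
    my $mesh = {
        # packed scalar: 3 floats per vertex, contiguous, no per-element SVs
        vertices => pack('f*', 0,0,0,  1,0,0,  0,1,0),
        # or the same kind of data as a PDL ndarray of shape (3, n)
        normals  => zeroes(float, 3, $n_vertices),
        uvs      => zeroes(float, 2, $n_vertices),
        # index buffer as packed 32-bit unsigned ints
        indices  => pack('L*', 0, 1, 2),
    };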

And, I don't know. Maybe this isn't the "perl way" to approach it. Maybe I should start with the inefficient expanded object and have code that packs it however required for the rendering. That adds startup cost though.


Re^8: Creating X BitMap (XBM) images with directional gradients
by etj (Priest) on Aug 15, 2024 at 01:51 UTC
    I think trying to be ultra-fast from day 1 might be premature optimisation? Make something that works correctly (with automated tests) first; it can then be made quick.

    Though getting things "right" using separate ndarrays for each thing named above, and maybe sellotaping them together into objects later, would probably also be processable quite quickly. Yes, I think I'm suggesting prototyping with PDL: it already has code to do OpenGL, including animation and responding to user inputs (e.g. rotating the view around while the molecule demo runs). It doesn't yet have textures, but that shouldn't be too agonising to add.
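
    For instance, a first prototype could be as small as this (an untested sketch, assuming a working PDL::Graphics::TriD install):

        use PDL;
        use PDL::Graphics::TriD;

        # a few points along a helix, one 1-D ndarray per coordinate
        my $t = zeroes(float, 100)->xlinvals(0, 6 * 3.14159);
        my ($x, $y, $z) = (cos($t), sin($t), $t / 10);

        # opens an OpenGL window; the view can be rotated with the mouse
        points3d([$x, $y, $z]);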

      OpenGL::Sandbox already has textures, but more importantly, Buffer Objects which can be memory-mapped to load them with data and then used in shaders. In other words, I'm already at the point where I'd like to have Model data packed in buffers so I can just dump it into the graphics card quickly. Even more awesome would be if I could pre-allocate the memory-mapped buffers for the Model object prior to loading an STL file, so that the data was already in the graphics card by the time it finished loading. *that* would perhaps be too much optimization.
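
      On the PDL side at least, getting at the packed bytes shouldn't need a per-element copy. Something like this sketch (the $buffer->load call is only a placeholder for whatever OpenGL::Sandbox's buffer API actually expects):

          use PDL;

          my $vertices = zeroes(float, 3, 1000);   # (xyz, n_vertices)

          # get_dataref exposes the ndarray's underlying storage as one
          # packed scalar, so the whole block can be handed over in one go
          my $bytes = ${ $vertices->get_dataref };

          # placeholder -- check OpenGL::Sandbox::Buffer's docs for how its
          # buffer objects actually take their data
          # $buffer->load($bytes);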

      Maybe equally awesome would be if there was an option to tell PDL to use a memory buffer of the caller's choosing as the storage for an ndarray, so that I could have it backed by one of the memory-maps.
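
      The nearest existing thing I know of maps a file rather than a buffer of my choosing -- PDL::IO::FastRaw's mapfraw -- so it doesn't quite get me there, but roughly (option names as I remember them):

          use PDL;
          use PDL::IO::FastRaw;

          # back a (3, 1000) float ndarray with a memory-mapped file;
          # writes to $verts go straight to the mapping
          my $verts = mapfraw('vertices.raw',
              { Creat => 1, Datatype => float, Dims => [3, 1000] });

          $verts->slice(':,0') .= pdl(1, 2, 3);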

        PDL absolutely could acquire such an ability. It would "only" (ha!) require new code in openglq.pd to make extra PDL operations.

        Spitballing a design: they'd take an OtherPars parameter that was a pointer to a Perl/C object, OR an ndarray Par that was indx, to capture pointers in a way that would broadcast naturally. I think a good starting point before cutting code would be to see what quick wins there are in stealing working code/ideas from OGL:Sandbox into PDL's TriD, and textures is an obvious candidate. Are there others?
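
        Very roughly, the indx-Par flavour might look something like this in openglq.pd (purely hypothetical and untested, just to show the shape of it):

            pp_def('copy_to_ptr',
                Pars => 'src(n); indx ptr();',   # indx Par so pointers broadcast
                GenericTypes => ['F'],
                Code => q{
                    /* reinterpret the indx value as a destination pointer
                       and copy the n elements of src into it */
                    float *dest = (float *)(intptr_t)$ptr();
                    loop(n) %{ dest[n] = $src(); %}
                },
            );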