in reply to Re^5: Creating X BitMap (XBM) images with directional gradients
in thread Creating X BitMap (XBM) images with directional gradients

I kind of want a "Perl Standard Polygon Mesh Object"

I'd not call it a "standard", because I doubt anyone but me has used it, but <shameless plug>I wrote CAD::Mesh3D to help me write STL files.</shameless plug> (It's based on Math::Vector::Real instead of PDL. Sorry, etj.)

like surface normals per polygon

That's the assumption mine made (because that's what STL wants).

Though mine only has triangular facets, plus a helper function that defines a rectangle by splitting it into triangles. There are no helpers for polygons with more vertices.
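
For illustration, here's a minimal sketch of that idea (not CAD::Mesh3D's actual API) using Math::Vector::Real to split a rectangle into two triangular facets and compute each facet's unit normal:

    use Math::Vector::Real;   # exports the V() constructor

    # four corners of a rectangle, in order
    my @rect = (V(0,0,0), V(2,0,0), V(2,1,0), V(0,1,0));

    # split it into two triangular facets
    my @facets = ([@rect[0,1,2]], [@rect[0,2,3]]);

    # per-facet unit normal: cross product of two edges, then normalise
    for my $f (@facets) {
        my $normal = (($f->[1] - $f->[0]) x ($f->[2] - $f->[0]))->versor;
        print "normal: $normal\n";
    }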

or texture coordinates, texture references etc

Definitely not included in mine.

input/output formats:

Since my focus when writing it was STL, CAD::Mesh3D comes with a formatter to read or write STL. But I tried to make it extensible, so CAD::Mesh3D::ProvideNewFormat should allow you to define another file format, if desired.
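
To give a feel for what the STL side boils down to, here is a rough sketch of an ASCII STL writer in plain Perl (not CAD::Mesh3D's internals; the layout is just the standard ASCII STL format, and the facet structure is made up for the example):

    # Each facet is [ $normal, $v1, $v2, $v3 ], each of those a 3-element arrayref.
    sub write_ascii_stl {
        my ($fname, $solid_name, $facets) = @_;
        open my $fh, '>', $fname or die "open $fname: $!";
        print {$fh} "solid $solid_name\n";
        for my $f (@$facets) {
            my ($n, @v) = @$f;
            printf {$fh} "  facet normal %g %g %g\n", @$n;
            print  {$fh} "    outer loop\n";
            printf {$fh} "      vertex %g %g %g\n", @$_ for @v;
            print  {$fh} "    endloop\n  endfacet\n";
        }
        print {$fh} "endsolid $solid_name\n";
        close $fh or die "close $fname: $!";
    }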

conclusion:

Based on your description, CAD::Mesh3D isn't likely what you need or want, but this seemed a good place for a <shameless plug>...</shameless plug>, just in case you or anyone else wanted to play with it.


Re^7: Creating X BitMap (XBM) images with directional gradients
by etj (Priest) on Aug 11, 2024 at 17:11 UTC
    First of all: it's OK. If nothing else, PDL::IO::STL didn't exist yet in either 2017 (first copyright date) or 2020 (first CPAN release).

    Also, that doesn't (yet) support normals in any useful way; it drops them on input, and writes out 0s on output. Does anything actually rely on them being correct in the file?

    Finally, it seems to me that CAD, and 3D stuff in general, is really the killer app for PDL. The array-programming approach makes things quick (it's fast C code underneath) and also very easy to write. But especially, all the 3D maths (including cross products) is just there already, together with any linear algebra you might need to do. You can write lots of object-y Perl, with methods, or write 3 lines of PDL code. Truly, more than one way to do it ;-)
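
    For instance, per-facet unit normals are roughly 3 lines, assuming PDL::IO::STL's rstl returns a 3 x Nverts vertex ndarray and a 3 x Nfaces index ndarray (check its docs for the exact interface):

        use PDL;
        use PDL::IO::STL;                            # provides rstl/wstl

        my ($verts, $faceidx) = rstl('model.stl');   # 3 x Nverts and 3 x Nfaces (assumed)
        my $tri = $verts->dice_axis(1, $faceidx->flat)->splitdim(1, 3);  # 3 x 3 x Nfaces
        my $normals = norm(crossp($tri->slice(',(1)') - $tri->slice(',(0)'),
                                  $tri->slice(',(2)') - $tri->slice(',(0)')));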

Re^7: Creating X BitMap (XBM) images with directional gradients
by NERDVANA (Priest) on Aug 15, 2024 at 00:21 UTC
    So, this is exactly what I said I was looking for, but what I was actually thinking of was some format that would be efficient for C code to operate on. Having to reach into individual scalars for the 3D coordinates isn't going to be very fast.

    I thought it over a bit, and I think what I'm actually wanting is:

    • A Mesh object, with:
      • a 'vertices' attribute, which contains a buffer of packed floats or doubles
      • a 'polygons' attribute, which contains a buffer of packed integers referencing the vertex numbers of the 3 corners of each polygon
      • a 'normals' attribute, which contains one vector per vertex, so it can be indexed by the same polygon integers
      • a 'textures' attribute, which is an arrayref of hashrefs, each describing a texture
      • a 'polygon_texture' attribute, which contains, for each polygon, a packed integer referencing one of the textures plus (S,T) coordinates for each of the 3 referenced vertices; with 16-bit ints for the texture coordinates that adds up to 16 bytes per polygon
    • A Polygon object that is just a reference to the mesh plus an offset, and loads its attributes on demand.

    Maybe the specification for an object like this could be that each of these attributes may be a packed scalar, or a PDL ndarray?
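
    For example, a rough sketch of that dual representation, using plain pack() buffers plus PDL's documented get_dataref/upd_data mechanism to wrap the same bytes as an ndarray (@vertex_coords, @corner_indices and $i are placeholders here):

        use PDL;

        # Packed buffers as described above: single-precision floats, 32-bit indices.
        my $vertices = pack 'f*', map { @$_ } @vertex_coords;    # x,y,z per vertex
        my $polygons = pack 'L*', map { @$_ } @corner_indices;   # 3 vertex indices per polygon

        # Pull the corners of polygon $i straight out of the packed buffer...
        my @corners = unpack 'L3', substr($polygons, $i * 12, 12);

        # ...or wrap the same bytes as an ndarray without copying value-by-value.
        my $vpdl = zeroes(float, 3, length($vertices) / 12);
        ${ $vpdl->get_dataref } = $vertices;   # replace the ndarray's backing string
        $vpdl->upd_data;                       # tell PDL the data changed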

    The end-goal I'm trying to achieve is to pass all the buffers to OpenGL and make a shader that can render the whole mesh.

    And, I don't know. Maybe this isn't the "perl way" to approach it. Maybe I should start with the inefficient expanded object and have code that packs it however required for the rendering. That adds startup cost though.

      I think trying to be ultra-fast from day 1 might be premature optimisation? Make something that works correctly (with automated tests) first; it can then be made quick.

      Though getting things "right" with separate ndarrays for each thing named above, and maybe sellotaping them together into objects later, would probably also process quite quickly. Yes, I think I'm suggesting prototyping with PDL. It already has code to do OpenGL, including animation and responding to user input (e.g. rotating the view while the molecule demo runs). It doesn't yet have textures, but that shouldn't be too agonising to add.
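
      For a quick look at prototype data, something like this already works, e.g. points3d from PDL::Graphics::TriD scatter-plotting the vertex cloud ($verts here is assumed to be a 3 x N vertex ndarray):

          use PDL;
          use PDL::Graphics::TriD;

          # interactive window; drag with the mouse to rotate the view
          my ($x, $y, $z) = map { $verts->slice("($_)") } 0 .. 2;
          points3d([$x, $y, $z]);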

        OpenGL::Sandbox already has textures, but more importantly, Buffer Objects which can be memory-mapped to load them with data and then used in shaders. In other words, I'm already at the point where I'd like to have Model data packed in buffers so I can just dump it into the graphics card quickly. Even more awesome would be if I could pre-allocate the memory-mapped buffers for the Model object prior to loading an STL file, so that the data was already in the graphics card by the time it finished loading. *that* would perhaps be too much optimization.

        Maybe equally awesome would be if there were an option to tell PDL to use a memory buffer of the caller's choosing as the storage for an ndarray, so that I could have it backed by one of the memory-maps.
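
        Something like PDL::IO::FastRaw's mapfraw gets partway there already: a file-backed mmap as the ndarray's storage, rather than an arbitrary buffer of my choosing. A rough sketch ($nvertices is a placeholder):

            use PDL;
            use PDL::IO::FastRaw;

            # create a file-backed, memory-mapped ndarray: 3 floats per vertex
            my $verts = mapfraw('vertices.raw',
                                { Creat => 1, Datatype => float, Dims => [3, $nvertices] });
            $verts->slice(',(0)') .= pdl(0, 0, 1);   # writes go straight to the mapping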