in reply to Re^2: Dataflow programming on CPU and GPU using AI::MXNet
in thread Dataflow programming on CPU and GPU using AI::MXNet

PDL has natively had forward dataflow for decades (you opt in by calling $pdl->doflow). The next PDL CPAN release will have an updated PDL::Dataflow doc that lays all this out; until then, check out the GitHub version in Basic/Pod.
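
For instance (a minimal sketch, following the long-standing PDL::Dataflow synopsis):

    use PDL;

    my $x = pdl 2, 3, 4;
    $x->doflow;          # opt in: changes to $x now flow forward
    my $y = $x * 2;      # $y is connected to $x
    print "$y\n";        # [4 6 8]
    $x->set(0, 5);       # mutate $x in place...
    print "$y\n";        # [10 6 8] - ...and $y has updated to match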

Lazy evaluation currently happens only with slice-like operations, but there are plans to switch to lazy evaluation more generally. That would enable not only loop fusion (creating custom operations that compute e.g. a*b + c*d with a single round trip from RAM through the CPU and back), but also GPU processing. See https://github.com/PDLPorters/pdl/issues/349 for discussion and/or to participate!
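
In the meantime, you can already get that kind of fusion by hand, by defining one custom operation with PDL::PP; here is a minimal sketch using Inline::Pdlpp (which ships with PDL). The op name muladd is just an illustration, not an existing PDL function:

    use PDL;
    # Perl 5.26+ for the indented heredoc; single quotes so the
    # PP code is not interpolated.
    use Inline Pdlpp => <<~'EOPP';
        pp_def('muladd',
          Pars => 'a(); b(); c(); d(); [o]out()',
          Code => '$out() = $a()*$b() + $c()*$d();',
        );
        EOPP

    my ($x, $y, $z, $w) = map sequence(1000), 1..4;
    my $r = $x->muladd($y, $z, $w);  # a*b + c*d in one pass, no temporaries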

Replies are listed 'Best First'.
Re^4: Dataflow programming on CPU and GPU using AI::MXNet
by etj (Priest) on Aug 04, 2024 at 23:58 UTC
    Update from the future:

    PDL now has better, working dataflow that is easier to use: the inplace-like operation flowing replaces doflow. See the latest PDL's PDL::Dataflow, and also Example of PDL dataflow to implement 3D space calculations.
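
    For example (a sketch based on my reading of the current PDL::Dataflow synopsis; flowing marks the ndarray so that the next operation flows, much as inplace marks it for in-place output):

        use PDL;

        my $x = sequence 5;
        my $y = $x->flowing + 1;   # this operation flows: $y tracks $x
        print "$y\n";              # [1 2 3 4 5]
        $x->set(0, 9);             # mutate $x in place...
        print "$y\n";              # [10 2 3 4 5] - ...and $y updates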

    EDIT: Having read a bit more about "dataflow programming", it's not entirely the same thing I was thinking of. PDL can (at this writing) have automatically-updated ndarrays that depend on other ndarrays. "Dataflow programming" appears to be more stream-orientated, where operations update when all their (smaller, more granular) inputs become valid. PDL can do a little of that (see the molecule/graph-theory bit in demo 3d), but more event-driven operation isn't fantastically well-supported yet.