in reply to Translating python math to Perl

You might take another crack at linear algebra. It shows up fairly often in graphics, and isn't too hard to reason about once you create a mental model around what is happening.

In case it helps, here is my mental model in a nutshell. Suppose you have an (x,y,z) point in a room. Suppose you are standing in that room at point (x2,y2,z2). The direction you are facing and the tilt of your head are represented as a 3x3 matrix (ignore its contents for the moment). Suppose your task is to figure out what the coordinates of that point would be in the coordinate system of your eye. (In other words, imagine a coordinate system where your eye is at (0,0,0), one meter forward along your line of sight is (0,0,1), and so on.) So you could describe any point in the room relative to the room, or describe it relative to your line of sight.

You solve that problem by subtracting your eye position (in room coordinates) from the point (in room coordinates), then multiplying the result by the matrix that represents the direction you are looking. Now you have the point in your eye's coordinate system.
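To make that concrete, here's the subtract-then-rotate step as a tiny Python sketch (the thread started from Python, so I'll stick with it). The point, eye position, and orientation matrix are all made-up values; with the identity matrix the eye just happens to face straight along the room's own axes.

```python
def mat_vec(m, v):
    # multiply a 3x3 matrix (list of rows) by a 3-vector
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

point = [3.0, 1.0, 4.0]   # some point, in room coordinates
eye   = [1.0, 1.0, 2.0]   # where you stand, in room coordinates

# The direction you face: here the identity, i.e. the eye's left/up/forward
# axes line up with the room's axes.
look = [[1.0, 0.0, 0.0],
        [0.0, 1.0, 0.0],
        [0.0, 0.0, 1.0]]

relative = [p - e for p, e in zip(point, eye)]  # subtract the eye first
in_eye   = mat_vec(look, relative)              # then apply the orientation
print(in_eye)  # -> [2.0, 0.0, 2.0]
```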

Now, what is the 3x3 matrix describing the direction you are looking? Well, the top row is the (x,y,z) vector (in room coordinates) pointing sideways (leftward) from your eye. The second row is the (x,y,z) vector (room coordinates) of what your eye considers to be "up". And the third row of the matrix is the (x,y,z) vector (room coordinates) of what your eye considers to be "forward".
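Here's a sketch showing that the matrix really is nothing more than three stacked direction vectors: multiplying by it just takes three dot products, measuring how far the point lies along each of the eye's own axes. The particular left/up/forward vectors below are made up, and the sign of "left" depends on which handedness convention you pick.

```python
def dot(a, b):
    # dot product of two 3-vectors
    return sum(x * y for x, y in zip(a, b))

# Suppose the eye faces along the room's +x axis, with "up" still +y.
left    = [0.0, 0.0, -1.0]   # one plausible choice, depending on handedness
up      = [0.0, 1.0,  0.0]
forward = [1.0, 0.0,  0.0]
look = [left, up, forward]   # the rows of the 3x3 matrix

v = [2.0, 3.0, 5.0]          # a direction in room coordinates
in_eye = [dot(row, v) for row in look]
print(in_eye)  # -> [-5.0, 3.0, 2.0]
```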

So remember that initial subtraction we had to do before multiplying by the matrix? It turns out you can hide that step if you upgrade the matrix to 4x4 and write your point as (x,y,z,1). The same trick works in 2D, by upgrading to a 3x3 matrix with points written as (x,y,1). I think this is what you're seeing in that code above with the funny stuff it does before multiplying the point by the matrix. It wastes a few multiplications, but lets you describe the whole transformation in one operation.
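In 2D that trick looks like this (row-vector-times-matrix convention, with the translation sitting in the bottom row; the numbers are arbitrary):

```python
def vec_mat(v, m):
    # multiply a row 3-vector by a 3x3 matrix (list of rows)
    return [sum(v[i] * m[i][j] for i in range(3)) for j in range(3)]

translate = [[1, 0, 0],
             [0, 1, 0],
             [2, 7, 1]]       # the bottom row shifts by (+2, +7)

p = [4, 5, 1]                 # the 2D point (4, 5) with the extra 1 appended
print(vec_mat(p, translate))  # -> [6, 12, 1], i.e. the point (6, 12)
```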

If you want to map a point back out of a coordinate space, you just multiply by the "transpose" of the matrix (swapping the rows and columns). Strictly speaking, that shortcut only works because the rows of a pure rotation matrix are orthonormal; for a general matrix you'd need its inverse.
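A quick round-trip check of that, assuming a pure rotation (here 90 degrees about the z axis):

```python
import math

def mat_vec(m, v):
    # multiply a 3x3 matrix (list of rows) by a 3-vector
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def transpose(m):
    # swap rows and columns
    return [list(col) for col in zip(*m)]

a = math.radians(90)
rot = [[math.cos(a), -math.sin(a), 0.0],
       [math.sin(a),  math.cos(a), 0.0],
       [0.0,          0.0,         1.0]]

v = [1.0, 0.0, 0.0]
w = mat_vec(rot, v)                 # into the rotated space
back = mat_vec(transpose(rot), w)   # and back out again
print(back)  # -> [1.0, 0.0, 0.0] (up to floating-point rounding)
```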

You can map a coordinate space into or out of another coordinate space! Remember how the matrix is really just three vectors described in the parent coordinate space? Well, if you map those three vectors through some other matrix, you've mapped the whole coordinate space into the other one. This turns out to happen automatically by ordinary matrix multiplication: multiply a matrix by a matrix and you've got a new matrix that represents performing both transformations. Now (if you have a lot of points to remap) you can actually save multiplications by multiplying each point by the one combined matrix instead of a whole chain of matrices.
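For instance (again in Python, row-vector convention, made-up translations), applying two transforms one after the other gives the same answer as applying their product once:

```python
def vec_mat(v, m):
    # row 3-vector times 3x3 matrix
    return [sum(v[i] * m[i][j] for i in range(3)) for j in range(3)]

def mat_mul(a, b):
    # 3x3 matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

shift_a = [[1, 0, 0], [0, 1, 0], [ 2, 0, 1]]   # translate by (2, 0)
shift_b = [[1, 0, 0], [0, 1, 0], [-1, 3, 1]]   # translate by (-1, 3)

combined = mat_mul(shift_a, shift_b)           # one matrix for both steps
p = [4, 5, 1]
step_wise = vec_mat(vec_mat(p, shift_a), shift_b)
one_shot  = vec_mat(p, combined)
print(step_wise, one_shot)  # -> [5, 8, 1] [5, 8, 1]
```

With row vectors the composition order matches the application order: p · A · B is the same as p · (A · B).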

If that all made relative sense, then the rest is just implementation details which you can safely forget until you need them.

Replies are listed 'Best First'.
Re^2: Translating python math to Perl
by cavac (Prior) on Aug 28, 2023 at 07:55 UTC

    You might take another crack at linear algebra.

    And here is the problem: I've never taken a crack at linear algebra. Believe it or not, i left school quite early. I just couldn't deal with the pressures due to my mental state, see also PerlMonks - my haven of calmness and sanity.

    I'm pretty sure your explanation is perfectly reasonable. I'm trying to wrap my head around it, but so far all i managed to do is give myself a headache.

    While i usually don't ask other people to do my work for me, since this is going to be an open source project i don't have that many qualms about it: Could you, uhm, provide a code example of how the python code translates into actual perl code?

    PerlMonks XP is useless? Not anymore: XPD - Do more with your PerlMonks XP

      You've got me thinking about linear algebra, and I happened to doodle this up when I ought to have been doing work:

      package QDMatrix;
      use v5.36;

      # Construct from either an arrayref of row arrayrefs, or a flat list
      # of values plus the row width $n_minor.
      sub new($class, $n_minor, $values) {
          if (ref $values->[0]) {
              $#{$values->[$_]} == $n_minor - 1
                  or die "Irregular column len in matrix: "
                      .scalar(@{$values->[$_]})." != $n_minor"
                  for 0..$#$values;
              $values = [ @$values ];
          }
          else {
              @$values % $n_minor == 0
                  or die "Un-rectangular number of values in data: "
                      .scalar(@$values)." / $n_minor = ".(@$values/$n_minor);
              $values = [
                  map [ @{$values}[$_*$n_minor .. ($_+1)*$n_minor-1] ],
                      0 .. int($#$values/$n_minor)
              ];
          }
          bless $values, $class;
      }

      sub flatten($self)   { map @$_, @$self }
      sub clone($self)     { bless [ map [ @$_ ], @$self ], ref $self }
      sub dims($self)      { scalar @$self, scalar @{$self->[0]} }
      sub major($self, $i) { @{$self->[$i]} }          # one row
      sub minor($self, $i) { map $_->[$i], @$self }    # one column

      sub mul($self, $m2) {
          my ($maj, $min)       = $self->dims;
          my ($m2_maj, $m2_min) = $m2->dims;
          $min == $m2_maj
              or die "Incompatible matrix sizes: ($maj,$min) X ($m2_maj,$m2_min)";
          my @ret;
          for my $i (0 .. $maj-1) {
              for my $j (0 .. $m2_min-1) {
                  my $sum = 0;
                  $sum += $self->[$i][$_] * $m2->[$_][$j] for 0 .. $min-1;
                  $ret[$i][$j] = $sum;
              }
          }
          bless \@ret, ref $self;
      }

      sub transpose($self) {
          bless [ map [ $self->minor($_) ], 0 .. $#{$self->[0]} ], ref $self;
      }

      # Example: translate the homogeneous 2D point (4,5) by (2,0).
      my $identity = QDMatrix->new(3, [ 1,0,0, 0,1,0, 0,0,1 ]);
      my $x = QDMatrix->new(3, [ 4,5,1 ])
          ->mul($identity->mul(QDMatrix->new(3, [ 1,0,0, 0,1,0, 2,0,1 ])));
      use DDP;
      p $x;   # dumps [[6, 5, 1]], i.e. (4,5) shifted by (2,0)
        That's very neat! If you have any tests/examples, would you be willing to put them here? I'd like to put the above in a PDL tutorial section, together with the PDL equivalents. It won't surprise you that those would probably be very concise, but I don't want to spend 5 minutes producing a somewhat unreliable and partly-incorrect version, when with tests that show inputs and outputs I could spend 6 minutes making a sufficiently-correct version :-)

        That would even help those who don't know PDL but might be a little interested in some simple idioms!

      Well, yeah I probably could, although I don't really have time to learn PDL right now so it would just be some messy plain-old-perl. But maybe you like fewer dependencies anyway.

      Could you put together a unit test? Like, maybe run through it once with Python and log the interesting variables at each step, and then I can work toward making the perl generate the same values without having to consult too many implementation details of the Python?

      I'd be very happy to give this a go myself in due course, if you can show the python code with inputs and outputs.

      As shown elsewhere, I remain a bit of a noob at linear algebra (LA) myself. This includes IndexedFaceSet to 3D lines in two lines of PDL and the follow-on work in updating PDL's 3d demo, for which I needed to actually learn some LA. One really helpful resource for this was the YouTube channel 3blue1brown, and in particular his linear algebra series which visualises the geometric stuff that underpins LA: https://www.youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab.