I'm seeking some insight, and maybe some code snippets, on how to mingle lvalue subs and AUTOLOAD. I've reviewed Experimenting with Lvalue Subs and peeked at Want, and a few other things Super Search turned up, but can't seem to find the right bits.

I'm using AUTOLOAD to provide a client proxy wrapper for apartment-threaded objects. All works well for the usual behaviors, and I think I've worked out how to handle proxied closures. I'd also like to handle proxied lvalue subs, but can't quite figure out how to trap the actual assignment event so it can be propagated back to the proxied object. I don't want to use tied objects, since the client proxy objects are usually threads::shared (to make it easy to pass them between threads).

So, assuming the proxied object has

sub proxiedMethod : lvalue {
    my $this = shift;
    $this->{_value};
}
and assuming the proxied object knows how to tell its proxy that proxiedMethod() is lvalue, and the proxy's AUTOLOAD() uses Want's want('LVALUE') to test whether the method is being used as an lvalue, how can the proxy be notified of the eventual assignment, so that it can pass the value back to the proxied object?
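Just to pin down the core mechanics before the thorny part: declaring AUTOLOAD itself :lvalue makes every autoloaded method usable as an assignment target, with the sub's final expression serving as the slot that receives the assignment. This is a minimal sketch (the Proxy package and the `_`-prefixed slot naming are my own invention for illustration); note it only hands the caller a writable slot, and does nothing to tell us *when* (or whether) an assignment actually happens, which is exactly the notification problem above.

```perl
#!/usr/bin/perl
use strict;
use warnings;

package Proxy;
our $AUTOLOAD;

sub new { bless { _value => 0 }, shift }

# AUTOLOAD declared :lvalue: any autoloaded method call can be the
# target of an assignment. The last expression evaluated is the
# lvalue the caller assigns into.
sub AUTOLOAD : lvalue {
    my $this = shift;
    (my $name = $AUTOLOAD) =~ s/.*:://;
    $this->{"_$name"};      # writable slot returned to the caller
}

sub DESTROY { }             # keep DESTROY out of AUTOLOAD

package main;
my $p = Proxy->new;
$p->value = 42;                       # lvalue use via AUTOLOAD
die unless $p->{_value} == 42;
die unless $p->value == 42;           # plain rvalue use works too
```

The catch, of course, is that the assignment happens *after* AUTOLOAD has already returned, so there's no point inside the sub where the proxy can observe the new value and forward it.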

Update:

Given the deafening silence, it appears I've either

  1. stumped the experts, or
  2. committed an egregious faux pas

(maybe both).

However, upon further reflection, this issue is a bit thornier than I realized...

If the proxy object is in fact threads::shared, then its members must likewise be either plain scalars or threads::shared. If the members are threads::shared, then the proxied object can automagically see the lvalue assignment. So perhaps the solution is to use threads::shared refs (including for scalars), so that the proxied object receives the goods without any need for notification.
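That idea can be sketched as follows (Backend is a hypothetical stand-in for the proxied object; requires a threads-enabled perl): the lvalue sub returns an element of a shared hash, so an assignment made through the proxy in one thread is immediately visible in another, with no notification protocol at all.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use threads;
use threads::shared;

package Backend;

sub new {
    my $class = shift;
    # the backing store is shared, so every thread holding a ref
    # to it sees assignments as soon as they happen
    my %self :shared = ( _value => 0 );
    bless \%self, $class;
}

sub value : lvalue { $_[0]->{_value} }

package main;
my $obj = Backend->new;

# assign through the lvalue sub in a different thread...
threads->create(sub { $obj->value = 42 })->join;

# ...and the change is visible here without any callback
die unless $obj->value == 42;
```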

However, since the client proxy can be shared by multiple threads, the lvalue could be concurrently updated or read by several threads at once, so some locking needs to be added...
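And the locking is awkward in exactly the way one might fear: a lock() taken *inside* the lvalue sub is released when the sub returns, i.e. before the caller's assignment executes, so it protects nothing. The lock has to be taken at the call site, around the whole assignment. A sketch (hypothetical Proxy package, threads-enabled perl assumed):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use threads;
use threads::shared;

package Proxy;

sub new {
    my $class = shift;
    my %self :shared = ( _value => 0 );
    bless \%self, $class;
}

sub value : lvalue { $_[0]->{_value} }

package main;
my $p = Proxy->new;

# The caller, not the lvalue sub, must hold the lock across the
# read-modify-write; otherwise concurrent increments would be lost.
my @t = map {
    threads->create(sub {
        for (1 .. 100) {
            lock(%$p);                  # held until end of block
            $p->value = $p->value + 1;
        }
    });
} 1 .. 4;
$_->join for @t;

die unless $p->value == 400;   # no lost updates
```

Pushing that locking burden onto every caller is part of what makes the whole scheme feel shaky.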

All in all, it may be a bad idea...or at least an idea whose time has not yet come. Perhaps Perl6 or Ponie will address this by relaxing the "can't tie a threads::shared variable" restriction.

FWIW: My original purpose was to create a variant of DBIx::Threaded that replaces the non-shared, tied client proxies with threads::shared, untied versions, in order to simplify and speed up the passing of proxy dbh's/sth's between threads. The current version has to do a lot of marshalling/unmarshalling when the proxies are passed around. If the proxies were made threads::shared, passing them around would be faster/simpler.

So if "DBIx::Threaded::Untied" replaced DBI's tied members (e.g., AutoCommit, PrintWarn, RaiseError, etc.) with lvalue subs of the same name, the impact on code would be a bit less painful, e.g.,

$dbh->{AutoCommit} = 1;
becomes
$dbh->AutoCommit = 1;
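For what that might look like, here's a sketch of generating same-named lvalue accessors for a handful of attributes (Handle is a hypothetical stand-in, not real DBI, and the attribute list is just illustrative):

```perl
#!/usr/bin/perl
use strict;
use warnings;

package Handle;

sub new { bless { AutoCommit => 1, PrintWarn => 1, RaiseError => 0 }, shift }

# Install an lvalue accessor for each attribute DBI exposes as a
# tied hash member, so $h->{AutoCommit} = 1 becomes $h->AutoCommit = 1.
{
    no strict 'refs';
    for my $attr (qw(AutoCommit PrintWarn RaiseError)) {
        *{$attr} = sub : lvalue { $_[0]->{$attr} };
    }
}

package main;
my $dbh = Handle->new;
$dbh->AutoCommit = 0;            # instead of $dbh->{AutoCommit} = 0
die unless $dbh->{AutoCommit} == 0;
die unless $dbh->RaiseError == 0;
```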

In reply to Detecting assignment event for AUTOLOAD'd lvalue subs ? by renodino
