Re: RFC: Data::Sync

by mantadin (Beadle)
on Apr 25, 2006 at 14:53 UTC


in reply to RFC: Data::Sync

Hi g0n, this seems to be a very useful module.

As far as I understand it so far, this module helps to copy complete entries from one database to another. Now I wonder if it can be used (or extended) to update a database from the changelogs of another one.

Consider this scenario:
  1. I have two ldap directories with different schemas
  2. and want to sync changes from one to another
  3. the "source ldapd" writes LDIF files containing the changes made to the entries of its DIT
  4. I want to apply the changes (and only the changes) to the target LDAP directory (applying the schema mapping to the entries before writing them to the target)
  5. so that would yield a connector that reads the changelogs with tail -f and, running as a daemon, constantly feeds the mapped changes to the target directory
Before I learned of your module, I had found that Net::LDAP can read LDIF, including the change record format. So currently I create Net::LDAP::Entry objects with Net::LDAP::LDIF, apply my mapping rules to them, and finally write them to the target directory.
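
For illustration, a rough sketch of what I mean, assuming the changelog is in LDIF change record format. The hostname, DNs, credentials and the attribute map below are only placeholders, and the remapping of modify/delete records is glossed over:

#!/usr/bin/perl
# rough sketch: replay LDIF change records against a target directory
use strict;
use warnings;
use Net::LDAP;
use Net::LDAP::LDIF;

my %attrmap = ( sourceAttr => 'targetAttr' );    # placeholder schema map

my $target = Net::LDAP->new('target.example.com') or die "$@";
my $mesg   = $target->bind('cn=admin,dc=example,dc=com', password => 'secret');
die $mesg->error if $mesg->code;

# change records (changetype: add/modify/delete) are returned as
# Net::LDAP::Entry objects with their changetype set
my $ldif = Net::LDAP::LDIF->new('changelog.ldif', 'r', onerror => 'warn');

while (not $ldif->eof()) {
    my $entry = $ldif->read_entry();
    last unless $entry;

    # crude attribute remapping for 'add' records; modify/delete records
    # would need their change lists rewritten in the same way
    if ($entry->changetype eq 'add') {
        for my $from (keys %attrmap) {
            next unless $entry->exists($from);
            $entry->add($attrmap{$from} => [ $entry->get_value($from) ]);
            $entry->delete($from);
        }
    }

    # update() performs the operation recorded in the changetype
    my $result = $entry->update($target);
    warn $entry->dn, ': ', $result->error if $result->code;
}

$ldif->done();
$target->unbind;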

I wonder if your module can be used for something like that? Did you think of such a use case?

Re^2: RFC: Data::Sync
by g0n (Priest) on Apr 25, 2006 at 15:21 UTC
    I didn't implement direct changelog reading because there are multiple approaches to changelogs: OpenLDAP uses a changelog file, whereas SunOne/iPlanet and various others implement 'cn=changelog' objects in the directory root, although their formats vary.

    As far as I know, the recommended approach to change detection in LDAP these days (since no two directory vendors ever managed to agree on a changelog standard) is persistent searching.

    There are two ways of doing this using Data::Sync as it stands:

    • Run a persistent search against your source DSA, detecting those changes you're interested in and flowing them.
    • Run a periodic search of all the objects you are interested in, and hash them for changes within Data::Sync.

    The latter approach has been tested, but incurs a hefty search overhead. The former works in theory, but hasn't been tested (I have only very recently got a fully v3 compliant DS set up for testing).
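
    For the persistent search option, a minimal, untested sketch against the source DSA might look like the following - the server has to support the persistent search control, and the hostname, base DN and handler body are placeholders:

    use strict;
    use warnings;
    use Net::LDAP;
    use Net::LDAP::Control::PersistentSearch;

    my $source = Net::LDAP->new('source.example.com') or die "$@";
    my $mesg   = $source->bind;    # anonymous bind; use credentials as needed
    die $mesg->error if $mesg->code;

    my $persist = Net::LDAP::Control::PersistentSearch->new(
        changeTypes => 15,    # add + delete + modify + modDN
        changesOnly => 1,     # skip the initial full result set
        returnECs   => 1,     # request entry change notification controls
    );

    # search() blocks; the callback fires once per change as it arrives
    $source->search(
        base     => 'ou=people,dc=example,dc=com',
        filter   => '(objectclass=*)',
        control  => [ $persist ],
        callback => \&handle_change,
    );

    sub handle_change {
        my ($message, $entry) = @_;
        return unless defined $entry and $entry->isa('Net::LDAP::Entry');
        # remap the entry and write it to the target DSA here
        print 'changed: ', $entry->dn, "\n";
    }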

    An alternative approach (and the one I would favour for simplicity I think) would be to read LDIF as if it were an LDAP server - the code extensions to do that are fairly simple. That way your code could read the changelog file to pick up changes, perform the remapping, and write them direct to the target DS.

    The code changes for that are fairly straightforward, and would be very useful in any case - thanks for the suggestion.
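
    Purely as a hypothetical illustration of the "read LDIF as if it were an LDAP server" idea (none of this is in the released module, and it assumes - unverified - that a source handle only needs to offer a search() method whose result answers entries() and code(), as Net::LDAP does):

    package LDIFSource;
    use strict;
    use warnings;
    use Net::LDAP::LDIF;

    sub new {
        my ($class, $file) = @_;
        return bless { file => $file }, $class;
    }

    # mimic Net::LDAP::search(): ignore the search arguments and simply
    # return whatever change records are currently in the LDIF file
    sub search {
        my ($self, %args) = @_;
        my $ldif = Net::LDAP::LDIF->new($self->{file}, 'r', onerror => 'warn');
        my @entries;
        while (not $ldif->eof()) {
            my $entry = $ldif->read_entry();
            last unless $entry;
            push @entries, $entry;
        }
        $ldif->done();
        return LDIFSource::Result->new(@entries);
    }

    package LDIFSource::Result;

    sub new     { my $class = shift; bless [ @_ ], $class }
    sub entries { @{ $_[0] } }
    sub code    { 0 }    # pretend the "search" always succeeded

    1;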

    --------------------------------------------------------------

    "If there is such a phenomenon as absolute evil, it consists in treating another human being as a thing."
    John Brunner, "The Shockwave Rider".


      Thank you very much for the quick reply. You wrote "An alternative approach (and the one I would favour for simplicity I think) would be to read LDIF as if it were an LDAP server".

      I will try it that way and post again, as soon as I have some useful results.
