Interesting. I can just see it now: DBD::FTP, DBD::HTTP, ... ;-)
I'm not actually interested in parsing or dealing with the internals of these files. The files would simply be BLOBs in a database. Think of this more as systems management - the task at hand is simply to store one or more files together so that they can be retrieved together. These files could be RPMs or the like (more like tarballs in my case), but the code I'm looking at doesn't care about the contents of those tarballs - it just needs an easy way to copy them into multiple locations.

Think, for a second, of the CPAN archive as an example. We want to add a bunch of tarballs to the repository, and then later pull them out - you want 10, I want 15, and between us a few are the same. There's no point in both of us going to the originators of those few tarballs to get them; we'll both go to the repository instead. This gets even more important the more people there are: imagine being the owner of File::Spec or FindBin or another such module and being bugged by every Perl developer for a copy. The repository takes care of all that work.
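To make that concrete, here's a rough sketch of the storage side - nothing more than an illustration, assuming a DBD::SQLite backend, a made-up "files" table, and a store_file helper I've invented for the example; any database with a BLOB-ish column type would do just as well:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI qw(:sql_types);

    my $dbh = DBI->connect('dbi:SQLite:dbname=repository.db', '', '',
                           { RaiseError => 1, AutoCommit => 1 });

    # Hypothetical schema: one row per archive, contents stored opaquely.
    $dbh->do('CREATE TABLE IF NOT EXISTS files
              (name TEXT PRIMARY KEY, content BLOB)');

    sub store_file {
        my ($dbh, $name, $path) = @_;

        # Slurp the tarball in binary mode - we never look inside it.
        open my $fh, '<:raw', $path or die "Cannot open $path: $!";
        my $blob = do { local $/; <$fh> };
        close $fh;

        my $sth = $dbh->prepare(
            'INSERT OR REPLACE INTO files (name, content) VALUES (?, ?)');
        $sth->bind_param(1, $name);
        $sth->bind_param(2, $blob, SQL_BLOB);   # bind as a BLOB, not text
        $sth->execute;
    }

    store_file($dbh, 'My-Module-1.23.tar.gz', '/build/My-Module-1.23.tar.gz');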
Our current process recreates the tarball each time it's needed. I want to nix that, because we can end up creating the same tarball 20+ times per day. A repository would, of course, help here: we'd create each tarball once, put it in the repository, and then extract it from there each time it's needed.
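The retrieval side would be just as dull, which is the whole point - pull the bytes back out and write them wherever they're needed, instead of rebuilding the tarball. Again, only a sketch against the same hypothetical "files" table and $dbh connection as above, with a fetch_file name I've made up:

    # Fetch a stored tarball by name and write it back to disk unchanged.
    sub fetch_file {
        my ($dbh, $name, $dest) = @_;

        my ($blob) = $dbh->selectrow_array(
            'SELECT content FROM files WHERE name = ?', undef, $name);
        die "No such file in repository: $name" unless defined $blob;

        open my $fh, '>:raw', $dest or die "Cannot write $dest: $!";
        print {$fh} $blob;
        close $fh;
    }

    fetch_file($dbh, 'My-Module-1.23.tar.gz', '/tmp/My-Module-1.23.tar.gz');

With DBD::SQLite the whole BLOB comes back from selectrow_array directly; other drivers may want LongReadLen bumped up, but that's a detail for whatever backend we end up on.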