in reply to How to read 1 td file
Most sensible creators of files that size make them binary data with fixed-size records. That way, individual records can be read without having to process the whole file in one serial progression, or load it in one go.
Then all you need is a) the ability to see the remote file system from the local machine; b) the ability to seek or sysseek to positions > 2GB.
Under Windows it is usually done with something like NET USE x: \\remotemachine\remotedir
For *nix, it will be some variation on the mount command.
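For example, on a Linux box with an NFS export it might look like the following — the host, export and paths are hypothetical and will differ on your site:

```shell
# Mount the remote export, then confirm perl can see the file and
# report its full 64-bit size (remotemachine/huge.dat are placeholders).
sudo mount -t nfs remotemachine:/remotedir /mnt/remote
perl -e 'printf "%d bytes\n", -s "/mnt/remote/huge.dat"'
```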
It is probably quite rare that anyone would build perl without this these days, but to verify it, you can do:
perl -V:uselargefiles
uselargefiles='define';
If it says ='define', you're good to go. If it says ='undef', you need to reinstall or recompile perl on your system.
So yes. Given the right infrastructural access to the remote file, and a suitable build of Perl, you can access that remote, terabyte-sized file using Perl.
Access will be relatively slow compared to local disc, but that's inevitable. Depending upon the nature of the processing involved, it may make sense to pull chunks of the file across to local disc for processing, before writing the results back in place to the master copy, if required.
If accessing such large remote files is likely to become a regular requirement, and your machine is physically close (<100m) to the remote server, then setting up a dedicated 10 gigabit network connection is a possibility; though relatively expensive. A couple of 10GBASE-T cards ($1000), and a few meters of cable, and you can have a dedicated network between you and the remote file that runs fast enough to rival a locally attached disc. Whether the local and remote hardware is capable of exploiting that bandwidth is another question.