Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:

#!/usr/bin/perl
use strict;
use warnings;

my $random = '/dev/prandom';
my $zero   = '/dev/zero';
my %files;
$files{$_} = int(-s $_) foreach (@ARGV);

open(RAND, "<", $random);
open(ZERO, "<", $zero);

foreach my $file (keys %files) {
    print "$file: $files{$file}\n";
    foreach my $num (1..5) {
        open(FILE, ">", $file);
        foreach (0..($files{$file} / 1024 + 1)) {
            read(RAND, my $rand, 1024);
            print FILE $rand;
        }
        close(FILE);
        open(FILE, ">", $file);
        foreach (0..($files{$file} / 1024 + 1)) {
            read(ZERO, my $z, 1024);
            print FILE $z;
        }
        close(FILE);
    }
    unlink($file);
}
close(RAND);
close(ZERO);

Replies are listed 'Best First'.
Re: Does this script securely delete data?
by daxim (Curate) on Jul 14, 2007 at 04:22 UTC
    CAUTION: Note that shred relies on a very important assumption: that the file system overwrites data in place. This is the traditional way to do things, but many modern file system designs do not satisfy this assumption.
    man shred

    If you decide it makes sense to program this anyway, and to reinvent a security-related tool without experience in the field, the manual above has some hints about implementation.
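    Rather than reimplementing shred, a script can simply delegate to it. This is a hedged sketch, not a definitive tool: it assumes GNU coreutils shred is installed and on the PATH, and it inherits shred's own caveat about file systems that do not overwrite data in place.

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hedged sketch: delegate secure deletion to GNU shred instead of
# reimplementing it. Assumes coreutils shred is on the PATH.
#   -n 3  three overwrite passes
#   -z    final pass of zeros to hide the shredding
#   -u    deallocate and remove the file afterwards
sub shred_file {
    my ($file) = @_;
    my $status = system('shred', '-n', '3', '-z', '-u', '--', $file);
    $status == 0 or die "shred failed for $file (status $status)\n";
}

shred_file($_) for @ARGV;
```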


Re: Does this script securely delete data?
by snopal (Pilgrim) on Jul 14, 2007 at 04:18 UTC

    Deleting a file so that it cannot be evaluated for its original content is such a broad topic that there is no way to certify that a file has been obscured to an unreadable state.

    Let's put it this way: depending on the resources of the evaluator, it is theoretically possible to read file data at a near-atomic level to determine the original content.

    At the file system level, one assumes that writes follow the inode path of the stored file. This is likely, but not guaranteed in every situation. Opening the file in "read/write" mode may help keep writes on the original inode path (but maybe not):

    Untested:

    my $fh;
    unless (open $fh, '+<', $file) {
        die "Can't open file for read/write: $file\n$!\n";
    }
    seek $fh, 0, 0 or die "Seek failure";
    # print to file code here
    close $fh;
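    Filling in the "print to file code here" step, the whole pattern might look like the sketch below. This is an illustration of the read/write approach, not a guarantee of secure deletion: it performs a single pass of zeros, and drive caches, journaling, and modern file system layouts can still preserve the old data.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Handle;   # for $fh->flush

# Overwrite a file in place through '+<' so writes stay on the
# existing inode (on traditional in-place file systems, at least).
sub overwrite_in_place {
    my ($file) = @_;
    my $size = -s $file;
    defined $size or die "Can't stat $file: $!\n";

    open my $fh, '+<', $file
        or die "Can't open file for read/write: $file\n$!\n";
    seek $fh, 0, 0 or die "Seek failure: $!\n";
    print {$fh} "\0" x $size;   # one pass of zeros over the old data
    $fh->flush;                 # push Perl's buffer down to the OS
    close $fh or die "Close failure: $!\n";
}
```

    Usage would be along the lines of `overwrite_in_place($_) for @ARGV;`, followed by an `unlink` once the overwrite passes are done.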

    In actuality, any attempt to obscure a file from outside the operating system may founder on details that are not controllable at the user-code level.

Re: Does this script securely delete data?
by roboticus (Chancellor) on Jul 14, 2007 at 04:22 UTC

    It doesn't appear to. The problem is that you're creating a new file with the same name as the original. So the original block of data could still be there. Open the file in '+<' mode first, seek to the beginning and then overwrite it.

    ...roboticus

      See File::Overwrite for an implementation of this. But it still suffers from all the problems already noted elsewhere in the thread. To do the job properly you'll need a filesystem-specific tool, and even then you're at the mercy of how the underlying disk decides to lay data out.
Re: Does this script securely delete data?
by swampyankee (Parson) on Jul 16, 2007 at 03:40 UTC

    Possibly, for sufficiently low values of "secure." If nothing else, one has to watch out for different kinds of buffering, network latency, journaling file systems, and so on. I would think that flush() has no real effect on buffering done at the hardware level, as opposed to the OS level.
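    The layers of buffering can be made explicit in code. The sketch below (an illustration, not a security guarantee) shows the two calls Perl offers: flush() empties Perl's own buffer into the kernel, and IO::Handle's sync() invokes fsync(2) to push the kernel's page cache to the device. Anything the drive's own write cache or a journaling file system does after that is beyond the program's reach.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw(tempfile);   # File::Temp handles are IO::Handle objects

# Demonstrate the two flushing layers a user program can control.
my ($fh, $name) = tempfile(UNLINK => 1);
print {$fh} "\0" x 4096;   # pretend this is one overwrite pass
$fh->flush;    # Perl buffer  -> kernel page cache
$fh->sync;     # kernel cache -> storage device, via fsync(2)
close $fh or die "Close failure: $!\n";
```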

    Try sandblasting the magnetic medium of the disk. It works quite well, although it makes writing to the disk a trifle more difficult.


    emc
