http://qs1969.pair.com?node_id=869848


in reply to Re^4: Problem with ReadKey
in thread Problem with ReadKey

Thanks for the additional suggestions. I've implemented a recursive function with readdir to gather the list of filenames, but decided against loading the contents of the files up front. This script may have to deal with arbitrarily large filesystems with lots and lots of text files, on systems with arbitrarily small amounts of memory. So it reads the filenames up front, then randomly picks a subset of files to read and save a random subset of paragraphs from. I've got it pretty much working now. The slow_print function now looks like this:
sub slow_print {
    my $subscript = shift;
    my $para = $paras[ $subscript ];
    if ( $wrapping ) {
        $para = &rewrap( $para );
    }
    # include the filename/line number index in the paragraph instead
    # of printing it separately, so we don't have to duplicate the
    # sleeping/keypress-handling logic below.
    if ( $print_filenames or $debug ) {
        $para = $indices[ $subscript ] . $para;
    }
    my @lines = split /\n/, $para;
    foreach ( @lines ) {
        print $_ . "\n";
        # If the user presses a key before the pause time for
        # the current line has passed, we don't necessarily skip
        # to the next line with no further pause.
        my $start = time;
        my $remaining_wait = $pause_len * length $_;
        while ( time < ( $start + $remaining_wait ) ) {
            my $key = ReadKey( $remaining_wait );
            if ( defined $key ) {
                &handle_keystroke( $key );
            }
            # the $pause_len might have been changed by user's keypress
            $remaining_wait = ($pause_len * length $_) - (time - $start);
        }
    }
    print "\n\n";
}
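The "randomly picks a subset" step mentioned above isn't shown in the excerpts, so here is a minimal sketch of one way to do it. The function name pick_random_subset is my invention, not part of the script; it uses a partial Fisher-Yates shuffle so it works even when the requested count exceeds the list size:

```perl
# Hypothetical helper: choose up to $n distinct random items
# from a list (e.g. filenames) without reading any file contents.
sub pick_random_subset {
    my ( $n, @items ) = @_;
    # Partial Fisher-Yates shuffle: put a random remaining
    # element into position $i, then keep the first $n.
    for my $i ( 0 .. $n - 1 ) {
        last if $i > $#items;
        my $j = $i + int rand( @items - $i );
        @items[ $i, $j ] = @items[ $j, $i ];
    }
    my $last = $n - 1 > $#items ? $#items : $n - 1;
    return @items[ 0 .. $last ];
}
```

If the file list were huge, reservoir sampling during readdir would avoid holding the full list, but the script as posted keeps all filenames anyway.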
(I'm setting ReadMode 3 near the beginning of the script, and setting ReadMode 0 in an END{} block.) The directory read function, with its helper want_file, looks like this:
# comments refer to benchmark tests using ~/Documents/ and ~/etext/ dirs
sub want_file {
    my $filename = shift;
    if ( $check_type && -T $filename ) {
        # 15061 filenames in 0.692 sec
        return 1;
    } elsif ( $check_extensions ) {
        # 8857 filenames in 0.794 sec with
        # --extensions=txt,pl,html,htm
        if ( ( grep { $filename =~ m(\.$_$) } @file_extensions )
             && -e $filename ) {
            return 1;
        }
    } else {
        # this test finds 5066 files in ~/Documents and ~/etext
        # in 0.218 sec
        if ( $filename =~ m(\.txt$) && -e $filename ) {
            return 1;
        }
    }
    return 0;
}

sub recurse_dir {
    my $dirname = shift;
    local *D;
    opendir D, $dirname;
    my $fname;
    while ( $fname = readdir D ) {
        my $name = $dirname . "/" . $fname;
        if ( -d $name ) {
            # don't recurse on . or .. or dotfiles generally
            if ( $fname !~ /^\./ ) {
                print "$name is a dir\n" if $debug >= 3;
                &recurse_dir( $name );
            }
        } elsif ( &want_file( $name ) ) {
            print "$name is a text file\n" if $debug >= 3;
            push @filenames, $name;
        } else {
            print "skipping $name\n" if $debug >= 3;
        }
        if ( $preload_paras and ((rand 100) < 1) ) {
            print "preload mode so printing something while still gathering"
                . " filenames (" . (scalar @filenames) . " read so far)\n"
                if $debug;
            if ( scalar @filenames ) {
                &add_paras;
                &slow_print( int rand scalar @paras );
            } else {
                print "...but there are no usable files yet\n" if $debug;
            }
        }
    }
    closedir D;
}
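One caveat on the extension test above: interpolating $_ directly into m(\.$_$) would misbehave if an extension ever contained a regex metacharacter. A sketch of a safer (and faster, since it compiles once) variant; has_wanted_extension and $ext_re are my names, not the script's:

```perl
# Hypothetical alternative: build one precompiled regex from the
# extension list, with quotemeta guarding against metacharacters.
my @file_extensions = qw( txt pl html htm );
my $ext_re = do {
    my $alt = join '|', map { quotemeta } @file_extensions;
    qr/\.(?:$alt)$/;
};

sub has_wanted_extension {
    my $filename = shift;
    return $filename =~ $ext_re ? 1 : 0;
}
```

This also avoids running the grep over @file_extensions once per file, which could matter at the 8000-plus-file scale mentioned in the benchmark comments.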
I originally had recurse_dir build a local list of filenames and return it to its caller; but after seeing how long it takes to read my entire home directory this way (vs. the benchmarks mentioned in the comments above, which used just a couple of directories with lots of text files), I added the code near the end there, which required making recurse_dir push files directly onto the global @filenames. This seems to work very well now. Thanks for all your help. The complete script from which the above excerpts are taken is at http://jimhenry.conlang.org/scripts/textual-slideshow.zip.
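For what it's worth, the global-array approach can be kept while still making the function reusable by passing an array reference to push onto. A minimal sketch under that assumption; recurse_dir_ref is a hypothetical variant, not the script's actual code, and it hard-codes the .txt test rather than calling want_file:

```perl
# Hypothetical variant of recurse_dir: the caller supplies an
# array ref, so no global @filenames is needed.
sub recurse_dir_ref {
    my ( $dirname, $found ) = @_;
    opendir my $dh, $dirname or return;
    while ( my $fname = readdir $dh ) {
        next if $fname =~ /^\./;    # skip . and .. and dotfiles
        my $name = "$dirname/$fname";
        if ( -d $name ) {
            recurse_dir_ref( $name, $found );
        }
        elsif ( $name =~ /\.txt$/ ) {
            push @$found, $name;
        }
    }
    closedir $dh;
}
```

Using a lexical directory handle (opendir my $dh, ...) instead of local *D also keeps each recursion level's handle safely scoped.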