PerlMonks  

Re^2: Problem with ReadKey

by jimhenry (Novice)
on Nov 05, 2010 at 00:37 UTC ( #869605 )


in reply to Re: Problem with ReadKey
in thread Problem with ReadKey

I apologize for the partial, unrunnable code in my post; I shouldn't post when sleep-deprived and frustrated. If I had taken the time to figure out a minimal script that reproduces the problem, I could have asked a much more specific question -- which I'll do now. The sample script provided by jethro worked fine on my system. I then tried to figure out what was the salient difference between his script and mine, and quickly remembered that my script was (because of the huge number of text files it might need to handle) reading a long list of filenames from stdin rather than the command line. It's normally run in a pipe from a find command, like so:
find $HOME -name \*.txt | textual-slideshow.pl $*
It slurps the filenames from standard input, then reads random sample paragraphs from some of them, prints random paragraphs a line at a time, and repeats. My simplified version that's partway between jethro's script and mine looks like this:
#! /usr/bin/perl
use Term::ReadKey;
use Time::HiRes qw(time);

my @slurp_stdin = <>;

ReadMode 3;
while (1) {
    my $key;
    my $wait_until = time + 3;
    while ( time < $wait_until ) {
        $key = ReadKey( -1 );
        if ( defined $key ) {
            print STDERR "keystroke $key\t";
        }
    }
    print "Something\n";
}
And it has the same problem as my script, not surprisingly. The problem clearly has to do with running in an environment where stdin is redirected. So I tried to fix that by closing STDIN and reopening it after reading the filenames. That didn't help. I tried adding an explicit STDIN argument to the ReadMode and ReadKey calls; that didn't help either. This is what I've got now, and it also has the same problem as my original script or the above modification of jethro's script:
#! /usr/bin/perl -w
use Term::ReadKey;
use Time::HiRes qw(time);

my @slurp_stdin = <STDIN>;
close STDIN;
open STDIN, "-";

while (1) {
    my $key;
    my $wait_until = time + 3;
    while ( time < $wait_until ) {
        ReadMode 3, STDIN;  # 'noecho';
        $key = ReadKey( -1, STDIN );
        if ( defined $key ) {
            print STDERR "keystroke $key\t";
        }
        ReadMode 0, STDIN;
    }
    print "Something\n";
}
This repeatedly prints "Something"; but when I press keys, I don't get the "keystroke (key value)" message, only the value of the key. My full original script is at http://jimhenry.conlang.org/scripts/textual-slideshow.zip.

Replies are listed 'Best First'.
Re^3: Problem with ReadKey
by jethro (Monsignor) on Nov 05, 2010 at 10:35 UTC

    open STDIN, "-"; just reopens STDIN from the standard input, which is still redirected. I don't think you can get at the terminal input that easily.
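    (For what it's worth, one workaround on Unix-like systems — a sketch of my own, not something from the thread — is to reopen STDIN from the controlling terminal /dev/tty after the piped-in filenames have been slurped. It only works where /dev/tty exists and the script has a controlling terminal:)

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Try to point STDIN back at the controlling terminal after the pipe
# has been fully read.  Returns 1 on success, 0 if there is no usable
# /dev/tty (e.g. when running without a terminal, as in a cron job).
sub reattach_stdin_to_tty {
    if ( open STDIN, '<', '/dev/tty' ) {
        return 1;
    }
    return 0;
}

my $ok = reattach_stdin_to_tty();
print $ok ? "STDIN now reads from the terminal\n"
          : "no controlling terminal available\n";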

    Just remove the redirection and use the backticks operator `find $HOME -name \*.txt` or File::Find inside your perl script to get at the filenames. No redirection, no problem.
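    (A minimal File::Find version of that might look like the sketch below; the sub name and the choice of root directory are mine, not from the OP's script:)

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;

# Return all *.txt files under the given root directory, sorted.
# Roughly equivalent to the original pipeline's `find $HOME -name \*.txt`.
sub find_textfiles {
    my ($root) = @_;
    my @found;
    find(
        sub {
            # Inside the wanted callback, $_ is the basename (with the
            # current directory chdir'ed appropriately) and
            # $File::Find::name is the full path from $root.
            push @found, $File::Find::name if -f && /\.txt\z/;
        },
        $root
    );
    return sort @found;
}

print "$_\n" for find_textfiles('.');
```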

      Hi jimhenry,

      I agree with jethro that the best way to do this is to read the list of files in the program.  You shouldn't have to specify STDIN in your Term::ReadKey calls, either.  Also, consider putting "$| = 1" at the beginning of the program to flush STDOUT.

      Here's a small subroutine which will let you avoid using File::Find and just get the files ending in ".txt":

sub find_all_textfiles {
    my ($dir) = @_;
    my $fh = new IO::File;
    opendir($fh, $dir) or die "Failed to read dir '$dir' ($!)\n";
    my @files = readdir($fh);
    closedir $fh;
    my @found = ( );
    foreach my $fname (@files) {
        next if $fname eq '.' or $fname eq '..';
        my $path = "$dir/$fname";
        if (-f $path and $path =~ /[.]txt$/) {
            push @found, $path;
        }
        elsif (-d $path) {
            push @found, find_all_textfiles($path);
        }
    }
    return @found;
}

      Of course this will require that you read each file individually, and somehow store their contents.

      If it matters which file the text came from, you could save the lines from each file in a separate array reference, and then return a reference to those array references (note my use of Data::Dumper, which is invaluable for visualizing your data!):

use strict;
use warnings;
use Data::Dumper;
use IO::File;
use Term::ReadKey;
use Time::HiRes qw(time);

$| = 1;
my @files = find_all_textfiles(".");
my $a_text = read_files(@files);
printf "Results of reading [@files] => %s\n", Dumper($a_text);

sub read_files {
    my (@files) = @_;
    my $a_text = [ ];
    foreach my $fname (@files) {
        print "Reading '$fname' ...\n";
        my $fh = new IO::File($fname) or die "Can't read '$fname' ($!)\n";
        chomp(my @text = <$fh>);
        push @$a_text, [ @text ];
    }
    return $a_text;
}

      If you don't care which text came from which file, just throw it all in one big array:

use strict;
use warnings;
use Data::Dumper;
use IO::File;
use Term::ReadKey;
use Time::HiRes qw(time);

$| = 1;
my @files = find_all_textfiles(".");
my @text = read_files(@files);
printf "Results of reading [@files] => %s\n", Dumper([ @text ]);

sub read_files {
    my (@files) = @_;
    my @text = ( );
    foreach my $fname (@files) {
        print "Reading '$fname' ...\n";
        my $fh = new IO::File($fname) or die "Can't read '$fname' ($!)\n";
        chomp(my @lines = <$fh>);
        push @text, @lines;
    }
    return @text;
}

      In fact, in the last example, you could even combine the finding of files with the saving of each file's text in a single subroutine.  I'm using a nifty trick here: the array reference $a_text, which aggregates all the text, is passed along in each recursive call to read_textfile_lines; only at the top level is it undefined, in which case it is initialized with  $a_text ||= [ ];:

use strict;
use warnings;
use Data::Dumper;
use IO::File;
use Term::ReadKey;
use Time::HiRes qw(time);

$| = 1;
my $a_text = read_textfile_lines(".");
printf "Text from ALL textfiles in current dir: %s\n", Dumper($a_text);

sub read_textfile_lines {
    my ($dir, $a_text) = @_;
    $a_text ||= [ ];
    my $fh = new IO::File;
    opendir($fh, $dir) or die "Failed to read dir '$dir' ($!)\n";
    my @files = readdir($fh);
    closedir $fh;
    foreach my $fname (@files) {
        next if $fname eq '.' or $fname eq '..';
        my $path = "$dir/$fname";
        if (-f $path and $path =~ /[.]txt$/) {
            # note: open the full $path, not the bare $fname,
            # or files in subdirectories won't be found
            $fh = new IO::File($path) or die "Can't read '$path' ($!)\n";
            chomp(my @lines = <$fh>);
            push @$a_text, @lines;
            close $fh;
        }
        elsif (-d $path) {
            read_textfile_lines($path, $a_text);
        }
    }
    return $a_text;
}

      Now you're ready to call some subroutine process_text (or whatever), passing the ref to the array of all text, $a_text.  It will do something like:

sub process_text {
    my ($a_text) = @_;
    while (1) {
        my $key;
        my $wait_until = time + 3;
        while ( time < $wait_until ) {
            ReadMode 3;
            $key = ReadKey( -1 );
            if ( defined $key ) {
                print "keystroke $key\t";
            }
            ReadMode 0;
        }
        print_some_random_text($a_text);
    }
}

sub print_some_random_text {
    my ($a_text) = @_;
    # Replacing the next line with meaningful code is left as
    # an exercise for the OP.
    ## print "[Debug]\n";
}

      As you see, I've left for you the fun of deciding what to print out in the subroutine print_some_random_text, as well as adding code for trapping relevant keys from the user in the subroutine process_text.  Good luck!


      s''(q.S:$/9=(T1';s;(..)(..);$..=substr+crypt($1,$2),2,3;eg;print$..$/
        Thanks for the additional suggestions. I've implemented a recursive function with readdir to gather the list of filenames, but decided against loading the contents of the files up front. This script may have to deal with arbitrarily large filesystems with lots and lots of text files, on systems with arbitrarily small amounts of memory. So it reads the filenames up front, then randomly picks a subset of files to read and save a random subset of paragraphs from. I've got it pretty much working now. The slow_print function now looks like this:
sub slow_print {
    my $subscript = shift;
    my $para = $paras[ $subscript ];
    if ( $wrapping ) {
        $para = &rewrap( $para );
    }
    # include the filename/line number index in the paragraph instead
    # of printing it separately, so we don't have to duplicate the
    # sleeping/keypress-handling logic below.
    if ( $print_filenames or $debug ) {
        $para = $indices[ $subscript ] . $para;
    }
    my @lines = split /\n/, $para;
    foreach ( @lines ) {
        print $_ . "\n";
        # If the user presses a key before the pause time for
        # the current line has passed, we don't necessarily skip
        # to the next line with no further pause.
        my $start = time;
        my $remaining_wait = $pause_len * length $_;
        while ( time < ( $start + $remaining_wait ) ) {
            my $key = ReadKey( $remaining_wait );
            if ( defined $key ) {
                &handle_keystroke( $key );
            }
            # the $pause_len might have been changed by user's keypress
            $remaining_wait = ($pause_len * length $_) - (time - $start);
        }
    }
    print "\n\n";
}
        (I'm setting ReadMode 3 near the beginning of the script, and setting ReadMode 0 in an END{} block.) The directory read function, with its helper want_file, looks like this:
# comments refer to benchmark tests using ~/Documents/ and ~/etext/ dirs

sub want_file {
    my $filename = shift;
    if ( $check_type && -T $filename ) {
        # 15061 filenames in 0.692 sec
        return 1;
    }
    elsif ( $check_extensions ) {
        # 8857 filenames in 0.794 sec with
        # --extensions=txt,pl,html,htm
        if ( ( grep { $filename =~ m(\.$_$) } @file_extensions ) && -e $filename ) {
            return 1;
        }
    }
    else {
        # this test finds 5066 files in ~/Documents and ~/etext
        # in 0.218 sec
        if ( $filename =~ m(\.txt$) && -e $filename ) {
            return 1;
        }
    }
    return 0;
}

sub recurse_dir {
    my $dirname = shift;
    local *D;
    opendir D, $dirname;
    my $fname;
    while ( $fname = readdir D ) {
        my $name = $dirname . "/" . $fname;
        if ( -d $name ) {
            # don't recurse on . or .. or dotfiles generally
            if ( $fname !~ /^\./ ) {
                print "$name is a dir\n" if $debug >= 3;
                &recurse_dir( $name );
            }
        }
        elsif ( &want_file( $name ) ) {
            print "$name is a text file\n" if $debug >= 3;
            push @filenames, $name;
        }
        else {
            print "skipping $name\n" if $debug >= 3;
        }
        if ( $preload_paras and ((rand 100) < 1) ) {
            print "preload mode so printing something while still gathering filenames ("
                . (scalar @filenames) . " read so far)\n" if $debug;
            if ( scalar @filenames ) {
                &add_paras;
                &slow_print( int rand scalar @paras );
            }
            else {
                print "...but there are no usable files yet\n" if $debug;
            }
        }
    }
    closedir D;
}
        I originally had the recurse_dir making a local list of filenames, which it returns to its caller; but after seeing how long it takes to read my entire home directory with this (vs. the benchmarks mentioned in comments above using just a couple of directories with lots of text files), I added the code near the end there, which required making the recurse_dir function add files directly to the global @filenames. This seems to work very well now. Thanks for all your help. The complete script from which the above excerpts are taken is at http://jimhenry.conlang.org/scripts/textual-slideshow.zip.
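        (An aside on the "random subset without loading everything" idea: that can be done in a single pass over the stream of items with reservoir sampling, which keeps at most k items in memory no matter how large the input is. The sketch below is mine — the sub and variable names are not from the script:)

```perl
use strict;
use warnings;

# Keep a uniform random sample of up to $k items from a list,
# holding only $k items in memory at a time (reservoir sampling).
sub reservoir_sample {
    my ($k, @stream) = @_;
    my @sample;
    my $seen = 0;
    for my $item (@stream) {
        $seen++;
        if ( @sample < $k ) {
            # fill the reservoir until it holds $k items
            push @sample, $item;
        }
        elsif ( rand($seen) < $k ) {
            # afterwards, replace a random slot with probability $k/$seen,
            # which keeps every item equally likely to end up in the sample
            $sample[ int rand $k ] = $item;
        }
    }
    return @sample;
}

# e.g. pick 10 filenames from however many recurse_dir gathered:
# my @subset = reservoir_sample( 10, @filenames );
```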
