Well, now you're still just being too vague. Does this "different" filter program depend critically on reading exactly one word per run? If so, I think you should look for a better implementation of whatever this program does.
If, like most well-written filter programs, it accepts a stream of one or more lines of text, does its work one line at a time, and outputs each line as it finishes, then you can easily set up a process using Perl to feed it and collect its output for further operations. In this case, it's just a matter of making sure you feed it properly.
For example, if it accepts one word per line, does some transformation, and outputs a line of data for each input word, then a Perl snippet like this would be one "easy" way to handle the job for a given data file:
# open a pipeline in which a perl one-liner feeds
# word-tokenized data from $file to "word_filter";
# the FILT file handle is then used to read the output
# of word_filter (one line per word):
open( FILT, "perl -pe 's/^\\s*//; s/\\s+/\\n/g' $file | word_filter |" )
    or die "subshell: $!";
## note the "\\" in the one-liner
while (<FILT>) {
    # $_ holds one line of output from "word_filter"
    # so do something with that...
}
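If you'd rather do the word-splitting in Perl itself instead of the inner one-liner, the core IPC::Open2 module can both feed "word_filter" and read its output from the same script. This is only a sketch under the same assumptions as above (that "word_filter" writes one output line per input word); note also that writing all the input before reading any output can deadlock on large data once the pipe buffers fill, so for big files the one-liner pipeline above is the safer bet:

```perl
use strict;
use warnings;
use IPC::Open2;

my $file = shift @ARGV;

# open2 gives us a write handle to word_filter's stdin and a read
# handle from its stdout; it raises an exception on failure
my $pid = open2( my $out, my $in, 'word_filter' );

# tokenize the data file ourselves and feed one word per line
open( my $data, '<', $file ) or die "$file: $!";
while (<$data>) {
    print $in "$_\n" for split ' ';
}
close $in;    # signal EOF so word_filter can finish

# collect word_filter's output, one line per word
while (<$out>) {
    # do something with $_ ...
}
waitpid $pid, 0;
```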