I have a shell script that runs 26 awk statements to split a file into separate files based on the first character of each record:
awk '{ if ($0 ~ /^a/) print $0 }' "$file" > "${file}_a"
awk '{ if ($0 ~ /^b/) print $0 }' "$file" > "${file}_b"
awk '{ if ($0 ~ /^c/) print $0 }' "$file" > "${file}_c"
...and so forth
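For illustration, the 26 passes could be collapsed into a single awk pass that routes each line to an output file named after its first character. This is a sketch, not the original script: the sample data and file names are made up, and it assumes an awk (such as GNU awk) that does not impose a tight record-length limit.

```shell
#!/bin/sh
# Sketch: split a pipe-delimited file in one pass, one output file per
# leading letter. Input data and file names here are illustrative.
file=input.txt
printf 'a|one\nb|two\na|three\n' > "$file"

awk -v base="$file" '
    /^[a-z]/ {
        out = base "_" substr($0, 1, 1)   # e.g. input.txt_a
        print > out
    }
' "$file"

# show the lines that were routed to the "a" file
cat "${file}_a"
```

Besides avoiding 26 reads of the same input, this keeps each record in memory only once per pass, which matters when records are thousands of characters long.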
Some of these records are extremely large (more than 4,000 characters), and the file is pipe-delimited. When I run the shell script, it breaks on what appear to be the extremely long records:
awk: record 'fdss|90dd|open....' too long
record number: 688
Is there any workaround in awk?
Can I handle this in a simpler way using Perl?
Originally posted as a Categorized Question.