There's some room for improvement here. First, I have to confess that I don't know what "MO" refers to, and I don't think I'll ever have a set of directories called "inboundd", "DN_Dump" and "MO_Dump" with files called "*.dat" that have the particular arrangement of comma-separated values that your code expects. I don't want to discourage you, but in order for people to find this useful, there needs to be more explanation about how and why it might be useful.
As for things that could be improved:
- It looks like your code only handles one file at a time, and moves that one file to some other directory. If your goal is to handle all files in "inboundd", you have to run this program repeatedly, which seems tedious. Why not have it process the whole list of files in a loop?
- Instead of system("cp ..."); unlink ..., you could just use Perl's built-in "rename" function -- it has the same result and takes less time.
- When reading the CHECKBOOK file, you might as well use "slurp" mode -- this will also save time.
- Your use of "chdir" assumes that the CWD contains "inboundd", "DN_Dump" and "MO_Dump" subdirectories, but you never check whether chdir succeeds; instead, you should test whether the CWD has the directories you're expecting, and use relative paths to do the file glob and renaming -- you don't need all those chdir calls.
- (update:) You don't check the return value from your system calls, so if "../DN_Dump" and/or "../MO_Dump" happen to be non-existent, or something else goes wrong with the copying, you'll lose your *.dat files. (Do you have some other backup for "inboundd"?) See the sketch just after this list for one way to guard the unlink.
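If you do keep the system("cp ...") approach instead of rename, at least check the status before unlinking. A minimal sketch (the variable and paths just mirror the ones in your post):

my $status = system( 'cp', $GetFile, '../DN_Dump/' );
if ( $status == 0 ) {
    # only remove the original once the copy actually succeeded
    unlink $GetFile or warn "Can't unlink $GetFile: $!";
}
else {
    warn "cp of $GetFile failed (exit status ", $status >> 8, "); keeping it\n";
}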
Here's an alternative version that shows what I'm talking about (and a few other adjustments):
#!/usr/bin/perl --
use strict;
use warnings;

die "We don't seem to be in the right directory.\n"
    unless ( -d 'inboundd' and -d 'DN_Dump' and -d 'MO_Dump' );

my @datfiles = <inboundd/*.dat>;
die "No inboundd/*.dat files were found. We must be done.\n"
    unless ( @datfiles );

local $/;    # set input record separator to "undef" (slurp mode)

for my $GetFile ( @datfiles )
{
    ( my $PutFile = $GetFile ) =~ s{inboundd/}{};

    # DN files just get moved aside; nothing to parse.
    if ( $PutFile =~ /^DN_/ ) {
        rename $GetFile, "DN_Dump/$PutFile"
            or die "Can't move $GetFile to DN_Dump: $!";
        next;
    }

    open( CHECKBOOK, $GetFile ) || die "$GetFile: $!";
    my $InString = <CHECKBOOK>;    # slurps the whole file in one read
    close CHECKBOOK;

    rename $GetFile, "MO_Dump/$PutFile"
        or die "Can't move $GetFile to MO_Dump: $!";

    # GETTING THE FILE OUT AND FORM MO
    # (index 9 is used below, so we need at least 10 fields)
    if (( my @InArrData = split( /,/, $InString )) >= 10 ) {
        my ( $Mobile, $Msg, $Short, $Vendor, $Inbound ) =
            @InArrData[1,5,6,7,9];
        $Msg =~ s/([0-9a-f]{2})/chr(hex($1))/egi;
        print "mo:$Mobile:$Short:$Inbound:$Vendor:$Msg\n";
    }
}
(updated to add "die" traps on the "rename" calls)
I don't really know what those *.dat files contain. I'm taking your word for it that applying "split /,/" to the file content will always produce exactly the set of values, in the exact order, required by your list of assignments: no unexpected commas within quoted CSV fields, no extra lines of commentary (with commas) before the expected data, no leading or trailing whitespace that would mess up the output string, and so on. (Usually, I'd be less trustful of data file contents.)
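If those files ever do contain real CSV (quoted fields, embedded commas), a parser module would be safer than a bare split. A minimal sketch using Text::CSV from CPAN (assuming it's installed; $InString is the slurped content from above):

use Text::CSV;

my $csv = Text::CSV->new( { binary => 1 } )
    or die "Can't create Text::CSV object: " . Text::CSV->error_diag;

if ( $csv->parse( $InString ) ) {
    my @InArrData = $csv->fields;    # handles quoting and embedded commas
    # ... take the same slices as before
}
else {
    warn "Bad CSV record: ", $csv->error_input, "\n";
}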
Hi,
Some explanation of FileOperation.pl:
The application dispatches one file at a time and extracts information from it in a pre-defined order.
The constructed string is then passed to other applications, which are connected to this application via TCP/IP.
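(For reference, that hand-off would look roughly like the sketch below; the host and port here are placeholders, not the real configuration:)

use IO::Socket::INET;

# hypothetical endpoint -- the real host/port belong to the downstream app
my $sock = IO::Socket::INET->new(
    PeerAddr => 'localhost',
    PeerPort => 5000,
    Proto    => 'tcp',
) or die "Can't connect to downstream application: $!";

print $sock "mo:$Mobile:$Short:$Inbound:$Vendor:$Msg\n";
close $sock;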
Why doesn't the application get all the files in one go?
The system is also doing other operations, so if the application kept looping over file after file, extracting information, the other processes would be delayed.
So the application takes one file, finishes it up, then goes on to the next operation, finishes that, and loops again.
The application loops permanently to serve incoming and outgoing operations, roughly in the shape sketched below.
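(A rough outline of that main loop; process_one_dat_file and do_other_operations are hypothetical names standing in for the real steps:)

# permanent service loop: one file per pass, then the other work
while ( 1 ) {
    my ( $file ) = <inboundd/*.dat>;    # pick up at most one file
    process_one_dat_file( $file ) if defined $file;
    do_other_operations();              # outgoing traffic, etc.
    sleep 1;                            # don't spin when idle
}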