in reply to very slow processing

Looks like you only need to read the file once:

use strict;
use warnings;

my $fInName  = "d:/Log.txt";
my $fOutName = "d:/Output/op.txt";

open my $fIn,  '<', $fInName  or die "Can't open '$fInName': $!\n";
open my $fOut, '>', $fOutName or die "Can't create '$fOutName': $!\n";

my %ids;

while (<$fIn>) {
    my ($date, $id, $keyword) =
        /\[(.+?)\] .* \[(.+?)\] .* \[[^]]+\] \s+ (.*)/x
        or next;
    push @{$ids{$id}}, [$date, $id, $keyword];
}

for my $id (sort keys %ids) {
    for my $entry (@{$ids{$id}}) {
        my ($date, $id, $keyword) = @$entry;

        if ($keyword eq "Orchestration Started") {
            print $fOut "$date,$id,$keyword \n";
            next;
        }
        if ($keyword =~ /^Input Message/) {
            print $fOut "$date,$id," . "Input Message to P5 \n";
            next;
        }
        if ($keyword =~ /^TP Service Request/) {
            print $fOut "$date,$id," . "Service Request \n";
            next;
        }
        if ($keyword =~ /^P5 Move request :/) {
            print $fOut "$date,$id," . "Move request \n";
            next;
        }
        if ($keyword =~ /^ProcessName:/) {
            my $mess = substr $keyword, 12;
            print $fOut "$date,$id,$mess \n";
            next;
        }
        if ($keyword =~ /^Process Message :/) {
            my $mess = substr $keyword, 17;
            print $fOut "$date,$id,$mess \n";
            next;
        }
    }
}

Completely untested because you didn't supply any sample data, but there's a fair chance it'll just work.

Note the use of strictures, lexical file handles, three-parameter open, and informative open-failure diagnostics. Also note that variables are declared where they are first needed, so scope is managed correctly and accidental reuse is more likely to be noticed.
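To see why the three-parameter open and the `$!` diagnostic matter, here's a minimal sketch; the path is hypothetical and deliberately chosen so the open fails, which makes the OS-level reason show up in the message:

```perl
use strict;
use warnings;

# Three-argument open with a lexical handle: the mode ('<') is a
# separate argument, so nothing in the filename can change it, and
# the handle goes out of scope (and closes) when $fIn does.
my $fInName = "d:/definitely/not/there/Log.txt";    # hypothetical path

my $opened = open my $fIn, '<', $fInName;

# $! carries the OS-level reason for the failure,
# e.g. "No such file or directory".
print $opened
    ? "opened '$fInName'\n"
    : "Can't open '$fInName': $!\n";
```

Compare that diagnostic with a bare `open FH, $fInName or die "oops"` — the old two-argument form with a global bareword handle tells you neither which file nor why.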

Oh, and the regex only needs to be given once.
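If you want to make "give the regex once" explicit, you can compile it with `qr//` and reuse the resulting pattern object. The sample log line below is invented (you posted no data), so treat it as an assumption about your format:

```perl
use strict;
use warnings;

# Compile the line-parsing regex once; /x lets it stay readable.
my $lineRe = qr/\[(.+?)\] .* \[(.+?)\] .* \[[^]]+\] \s+ (.*)/x;

# Hypothetical sample line matching the assumed layout:
# [date] ... [id] ... [level]  keyword
my $sample =
    "[2013-01-01 12:00:00] x [ID42] y [INFO]  Orchestration Started";

my ($date, $id, $keyword) = $sample =~ $lineRe;
print "$date,$id,$keyword\n" if defined $keyword;
# prints: 2013-01-01 12:00:00,ID42,Orchestration Started
```

Every place that needs to parse a line then matches against `$lineRe`, so a change to the log layout is a one-line fix.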

Perl is the programming world's equivalent of English