in reply to Out of memory!

If I'm reading your code correctly, you're pulling everything back from both queries and holding it all in memory before writing it out to the files.

Also, it looks like your two queries and associated output files are completely independent of each other. In that case, I'd recommend handling them one at a time: run the first query, write its results out to its file, and let that data go away before you start on the second query and file. This will limit the amount of data in memory and free up memory once you're done with that data.

Re^2: Out of memory!
by Anonymous Monk on Aug 11, 2010 at 12:53 UTC
    Dasgar('dbi:ODBC:MSSQL', 'Url.txt');
    Dasgar('dbi:ODBC:MSSQL', 'Url_Ex.txt');

    sub Dasgar {
        my ($dbstring, $filename) = @_;
        ... DBI->connect($dbstring) ...
        ... open ... $filename ...
    }

      Oh no! I've been reduced to a subroutine! :D

      Although I almost suggested a subroutine solution, I opted for a conceptual answer rather than going the code route. However, since he's doing slightly different SQL queries from the databases, I would recommend that the SQL query be another parameter to be passed into the subroutine.
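      For what it's worth, a minimal sketch of that shape, with the connection string, the SQL, and the output file all passed in, might look like the following. The query text here is only a guess based on the table and column names in his later code, not his actual queries:

          use strict;
          use warnings;
          use DBI;

          sub run_report {
              my ($dbstring, $sql, $filename) = @_;

              my $dbh = DBI->connect($dbstring) or die $DBI::errstr;
              my $sth = $dbh->prepare($sql);
              $sth->execute();

              open my $fh, '>', $filename or die "Can't write $filename: $!";
              while (my ($value) = $sth->fetchrow_array()) {
                  print {$fh} "$value\n";   # one row at a time; nothing accumulates in memory
              }
              close $fh;
              $dbh->disconnect;             # everything created here goes out of scope on return
          }

          run_report('dbi:ODBC:MSSQL', 'select URLName from DI_URL',        'Url.txt');
          run_report('dbi:ODBC:MSSQL', 'select [RULE] from DI_URL_EXCLUDE', 'Url_Ex.txt');

      The exact queries don't matter; the point is that each call builds, uses, and discards its own handles and rows before the next one starts.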

      Otherwise, it looks like you've managed to read my mind. Now if only I could get Perl to do that, I wouldn't have to worry about making mistakes in my code.

Re^2: Out of memory!
by santhosh.yamsani (Initiate) on Aug 16, 2010 at 06:02 UTC

    Thanks dasgar.

    The out of memory issue is resolved for the first step (copying the URLs into two different txt files). The next step compares each URL in the first file against each URL exclusion in the second file and copies the matched URL, along with its exclusion ID, into a new file (matchURL.txt).

    After the second step completes, we can go on to the third step, i.e. updating ID with ExclusionID from matchURL.txt.

    The OUT OF MEMORY error is resolved in the first step, but I am now getting that error in the second step.

    The code for the second and third steps is given below:

    open (URLFHR, 'Url.txt');
    open (URLEXFHR, "Url_ex.txt");
    while ($ee = <URLFHR>)
    {
        while ($ee4 = <URLEXFHR>)
        {
            #print "$ee - $ee4 \n";
            $abc4 = $ee4;
            my $sthID = $dbhID->prepare("select MAX(ID) from DI_URL_EXCLUDE where [RULE] ='$ee4' ");
            $sthID->execute();
            $ID = $sthID->fetchrow_array;
            #print "$ID \n";
            undef( $dbhID );
            undef( $sthID );
            if ($ee4 =~ /^%/)
            {
                $abc4 = $ee4;
                $abc4 =~ s/^%//;  ## first letter
                #print "$abc4 \n";
            }
            if ($ee4 =~ /%$/)
            {
                $abc4 = $ee4;
                $abc4 =~ s/%$//;  ## Last letter
                #print "$abc4 \n";
            }
            $ee   = quotemeta( $ee );   # To avoid error (Unmatched ) in regex; marked by <-- HERE ), to escape the special characters
            $abc4 = quotemeta( $abc4 );
            if( ($ee) =~ (/$abc4/) )
            {
                #print "In comparision of $ee and $ee4,$ID \n";
                open (SIMILARURLFHW, '>>Similar_Url.txt');
                print SIMILARURLFHW "$ee\{\|\|\}$ID \n";
                close(SIMILARURLFHW);
                print "\n3";
            }
        }
    }
    print "\n4";
    my $a;
    while ($a = <sampleFile>)
    {
        my $UrlName = substr $a, 0, index($a, '{||}');
        my $EXID    = substr $a, index($a, '{||}') + 4;
        my $sthUPEXID = $dbhUPEXID->prepare("UPDATE DI_URL SET EXCLUSIONID =$EXID where URLName = '$UrlName' ");  # Updating EXID in emp with ID in emp4 where emp_fname matches in both
        $sthUPEXID->execute();
    }
      I'm not sure I understood what you want, but here is my piece of code, maybe it helps. If the url exclude list is too big, you can just split it into smaller batches and collect the "match id" results (maybe some of them multiple times).
      use re 'eval';

      my @url_list = qw( www.gmaisds.% www.gmai% www.go% www.gir% www.girs% );
      my @list;
      my $retval;
      my $id = 0;
      for my $url (@url_list) {
          $url =~ s/(?=[\/\.\?])/\\/g;
          $url =~ s/%/.*/g;
          $url .= '(?{ use re \'eval\'; $retval = ' . $id++ . '})';
          push @list, $url;
      }
      my $str = '^(?:' . join(')|(?:', @list) . ')$';
      my $re  = qr($str);
      while (<>) {
          chomp;
          if ($_ =~ $re) {
              print "$_ excluded; id: $retval\n";
          }
      }
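      A minimal sketch of the batching idea mentioned above, assuming made-up sample data and a made-up batch size (in the real script the patterns would come from Url_ex.txt and the URLs from Url.txt), might look like this:

      use strict;
      use warnings;

      # Hypothetical data, only for illustration.
      my @url_list = qw( www.gmaisds.% www.gmai% www.go% www.gir% www.girs% );
      my @urls     = qw( www.gmaisds.com www.example.org www.gools.net );

      my $batch_size = 2;   # made-up value; pick whatever keeps each regex a manageable size

      while (my @batch = splice @url_list, 0, $batch_size) {
          # Turn this batch of SQL-style patterns into one alternation regex,
          # the same idea as above but without the (?{ }) id bookkeeping.
          my @alts = map { my $p = quotemeta $_; $p =~ s/\\%/.*/g; $p } @batch;
          my $alt  = join '|', @alts;
          my $re   = qr/^(?:$alt)$/;

          for my $url (@urls) {
              print "$url matches one of: @batch\n" if $url =~ $re;
          }
      }

      Each pass only ever holds one batch's worth of patterns in the compiled regex, so the memory cost stays bounded no matter how long the exclusion list gets.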