in reply to Is there a problem with how large the perl file gets?

As mentioned earlier, running the script under mod_perl is the way to go. Out of interest though, if you are still using plain old CGI then you may get some benefit from splitting your code into modules and 'require'ing them only as needed. The idea is that not all of the code in your 100k file will be executed on each request (depending on the choices made by the users). If you can divide your code into a main script plus modules that each represent a path of execution, then you only need to load the code associated with the current choice. See CGI::Application as an example of splitting up a script by execution path.
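As a small illustration of the deferred loading this buys you (using the core module List::Util as a stand-in for one of your own modules; the $need_sum flag is just a placeholder for a user choice):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# With 'use', List::Util would be compiled at startup even if never needed.
# With 'require', nothing is compiled until this branch actually runs.
my $need_sum = 1;    # stand-in for a choice made by the user

if ($need_sum) {
    require List::Util;           # compiled now, not at startup
    List::Util->import('sum');    # 'require' does not call import for you
    print sum(1 .. 10), "\n";     # prints 55
}

# %INC records which files have been loaded so far.
print "List::Util is loaded\n" if $INC{'List/Util.pm'};
```

With the flag set to 0, List::Util never appears in %INC, which is exactly the saving you are after with a large script.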

In the following example, the user makes a choice and the module associated with that choice is 'require'd. The script works for $choice = 1 since Data::Dumper is almost certainly on your system (it ships with Perl). Change the choice and the script throws an error because it can't find the module.

#!/usr/bin/perl -w
use strict;
use warnings;

my $choice = 1;

if ($choice) {
    if ($choice == 1) {
        # Do action 1
        require Data::Dumper;
        import Data::Dumper;
        print Dumper(\$choice);
    }
    else {
        # Show the user a report
        require Action::Report;
        import Action::Report;
        print report();
    }
}
else {
    # No choice made, show help
    require Action::Help;
    print Action::Help::showHelp();
}
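Action::Report above is hypothetical, so for completeness here is a minimal version that would satisfy that branch (saved as Action/Report.pm somewhere in @INC; the report text is made up):

```perl
# Hypothetical Action/Report.pm for the example above
package Action::Report;
use strict;
use warnings;

use Exporter 'import';
our @EXPORT = ('report');    # export report() when import is called

# Build and return the report text
sub report {
    return "A report for the user\n";
}

1;    # a module must end with a true value
```

Because 'require' does not call import for you, the main script has to do it explicitly (the "import Action::Report;" line) before it can call report() unqualified.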

As a further enhancement, the module files can be pre-compiled into bytecode with B::Bytecode to reduce load time.