Points 1 & 2 are correct. On the old UNIX system, a shell script queried the database and wrote the results into three separate files because of the size of the data returned. On our new W2K system, the Perl script writes everything into a single XLS file, and that is where we get the error mentioned in point 2.
I am wondering whether I can add a piece of code to the Perl script so that, when it is writing the data out and hits, for example, 64K rows, it closes the current file and opens another XLS file to continue with the rest of the extract until it completes.
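That rollover approach should work. As a rough sketch of the logic (shown in Python rather than Perl just to illustrate the counting and file-switching; the function name, file-naming scheme, and plain-text output are my assumptions, and 65,536 is the per-sheet row limit of the pre-2007 XLS format), the idea is to keep a row counter and start a new numbered output file each time the limit is reached:

```python
MAX_ROWS = 65536  # per-sheet row limit of the pre-2007 XLS format

def write_split(rows, basename, max_rows=MAX_ROWS):
    """Write rows across numbered files, starting a new file each
    time the row limit is reached. Returns the filenames written.
    (Illustrative sketch only; names are hypothetical.)"""
    files = []
    out = None
    count = 0
    for row in rows:
        # Open the first file, or roll over to the next one when full.
        if out is None or count >= max_rows:
            if out is not None:
                out.close()
            name = f"{basename}_{len(files) + 1}.txt"
            files.append(name)
            out = open(name, "w")
            count = 0
        out.write(row + "\n")
        count += 1
    if out is not None:
        out.close()
    return files
```

In the actual Perl script, the same counter-and-rollover pattern would apply around whatever call writes each row; if the script uses the CPAN module Spreadsheet::WriteExcel, that would mean closing the current workbook object and creating a new one when the counter hits the limit.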
Hope this makes a little more sense.