Did you try to run code using either of these algorithms, or was your posted code meant as pseudo-code?
- FILE$index generates a syntax error. If you are opening the files sequentially it is also unnecessary: you need to change the name of the file associated with the filehandle, not the name of the filehandle itself. See open for details.
- Did you intend to do integer division when you calculated the chunk size? If so, $data/$partnumber needs to be int($data/$partnumber). As it stands, $chunk is a fraction.
- The second algorithm does not allocate lines the way I think you intended: for data with 3 SS lines, all of the data ends up in a single file. If you go up to 5 SS lines, your first file has 0 objects and each remaining file has only one. With 6 or more SS lines, objects get allocated a bit more evenly, but not necessarily total lines. For example, a run of small objects whose lines sum to $chunk-1, followed by one very large object, could all end up in the same file.
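To illustrate the filehandle point, here is a minimal sketch (filenames and counts are made up, since I don't have your full code) of opening numbered output files with a lexical filehandle, interpolating the index into the file *name* rather than into the handle:

```perl
use strict;
use warnings;

my $partnumber = 3;
for my $index (1 .. $partnumber) {
    # The index goes into the NAME of the file; the handle ($fh)
    # is just a lexical variable that gets reused each pass.
    my $filename = "part$index.txt";
    open(my $fh, '>', $filename)
        or die "Cannot open $filename: $!";
    print $fh "contents of part $index\n";
    close $fh or die "Error closing $filename: $!";
}
```

Reusing a lexical $fh like this also means each file is closed when the handle goes out of scope, though the explicit close lets you catch write errors.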
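On the chunk-size point, a quick sketch with arbitrary numbers: int() truncates toward zero, while POSIX's ceil() rounds up. Rounding up is usually what you want here, since it guarantees the data fits in no more than $partnumber files:

```perl
use strict;
use warnings;
use POSIX qw(ceil);

my $data       = 10;   # total number of lines (hypothetical)
my $partnumber = 3;    # number of output files

my $chunk_floor = int($data / $partnumber);    # truncates: 3
my $chunk_ceil  = ceil($data / $partnumber);   # rounds up: 4

print "$chunk_floor $chunk_ceil\n";   # prints "3 4"
```

With the truncated value, 3 files of 3 lines each leave one line with nowhere to go; the rounded-up value avoids that.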
If it is important to even out the total number of lines in each file as much as possible, then you might want to read up on optimization algorithms, particularly partitioning algorithms. There is no simple way to do this. A similar problem was discussed just a few days ago (see partition of an array). Although that thread discusses partitioning an array in 2, the goal is the same: evening out the sums among N buckets. In your case you are summing lines associated with objects rather than numbers in an array, but the basic problem is the same. As you read through that thread, pay particular attention to the dialog between Limbic~Region and BrowserUK, and also the back and forth between sundialsvc4 and ikegami.
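If a near-even split is good enough, one common heuristic from that literature is greedy "longest processing time" assignment: sort the objects by line count, largest first, and always drop the next one into the file with the fewest lines so far. A toy sketch with made-up object sizes:

```perl
use strict;
use warnings;

# Greedy LPT heuristic (not guaranteed optimal): process objects
# largest-first, always assigning to the currently lightest file.
my @object_lines = (9, 7, 6, 5, 4, 3);   # lines per object (hypothetical)
my $nfiles = 3;

my @totals  = (0) x $nfiles;              # running line count per file
my @buckets = map { [] } 1 .. $nfiles;    # objects assigned to each file

for my $size (sort { $b <=> $a } @object_lines) {
    # index of the file with the fewest lines so far
    my ($min) = sort { $totals[$a] <=> $totals[$b] } 0 .. $nfiles - 1;
    $totals[$min] += $size;
    push @{ $buckets[$min] }, $size;
}

print "totals: @totals\n";   # prints "totals: 12 11 11"
```

This does not guarantee the optimal split (the general partitioning problem is NP-hard), but it avoids the degenerate cases described above, where one file swallows everything.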
Best, beth