That's a good idea. However, most of the time, if the script really hits the point where it sucks up all available RAM, there's probably something to optimize :) In my case, the script indexed several million files from 80 terabytes of storage in one big hash: I had to split the job into parts to use less memory.
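For what it's worth, a minimal sketch of that "split the job into parts" idea (hypothetical names, shown in Python rather than the original script's language): build a small per-batch index, flush or summarize it, and let it be freed before the next batch, instead of keeping one giant hash alive for the whole run.

```python
def index_batch(paths):
    """Build a small index for one batch of paths.

    Stand-in for the real work (here: path -> path length); in practice
    this would be stat() results, checksums, etc.
    """
    return {p: len(p) for p in paths}

def process_in_batches(paths, batch_size=1000):
    """Process paths in fixed-size batches so only one batch's index
    is in memory at a time."""
    results = []
    for i in range(0, len(paths), batch_size):
        batch = paths[i:i + batch_size]
        index = index_batch(batch)   # small hash, garbage-collected after use
        results.append(len(index))   # in a real script: flush to disk instead
    return results

print(process_in_batches([f"file{i}" for i in range(2500)], batch_size=1000))
# → [1000, 1000, 500]
```

Peak memory then scales with the batch size rather than with the total number of files.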