Side thought: This issue has likely been addressed by business-grade disk optimization routines which, in the 80s, had to consider the possibility of a full or nearly-full disk and still function.
That's why I love throwing out these questions to wider audiences.
Many years ago, whilst contracting at IBM, I was taken through the operational steps of what was one of, if not the, first in-situ defrag utilities -- called 'vfrag' from memory -- by its author, in the company refectory. No paper or whiteboards, just knives and forks and (many) salt and pepper pots (much to the annoyance of the other diners).
And you are right, it did have similar problems. The initial, slow but very reliable, version required just one free block (or maybe cluster), and shuffled everything through that one space.
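To make that concrete, here's a toy sketch of the idea -- not vfrag's actual code, just a Perl array standing in for the cluster map, with undef marking the single free cluster and the file/cluster labels invented for illustration:

    use strict;
    use warnings;

    # One free slot (undef); every move copies exactly one cluster into it,
    # so only one cluster's worth of scratch space is ever needed.
    my @disk = ( 'A1', 'B1', 'A2', undef, 'B2', 'A3' );     # fragmented layout
    my @want = ( 'A1', 'A2', 'A3', 'B1', 'B2', undef );     # target layout

    sub move_cluster {                                       # the only primitive allowed
        my ( $from, $to ) = @_;
        die "target not free" if defined $disk[$to];
        $disk[$to]   = $disk[$from];
        $disk[$from] = undef;
    }

    for my $pos ( 0 .. $#want ) {
        next unless defined $want[$pos];                     # the slot that should end up free
        next if defined $disk[$pos] && $disk[$pos] eq $want[$pos];

        # Evict whatever squats at $pos into the free slot, then pull the
        # wanted cluster in: two moves per misplaced cluster, one slot of scratch.
        my ($free) = grep { !defined $disk[$_] } 0 .. $#disk;
        move_cluster( $pos, $free ) if defined $disk[$pos];

        my ($src) = grep { defined $disk[$_] && $disk[$_] eq $want[$pos] } 0 .. $#disk;
        move_cluster( $src, $pos );
    }

    print join( ' ', map { $_ // '__' } @disk ), "\n";       # A1 A2 A3 B1 B2 __

Reliable, because nothing is ever in flight except the one cluster sitting in the free slot; slow, because every misplaced cluster costs two physical moves.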
The second, much faster, version first consolidated the free space: it shuffled the in-use clusters (discontiguous lumps) up towards the front of the disk, working its way down, gradually accumulating the free clusters together and so creating a bigger and bigger buffer into which to move the downstream chunks, until all the free space had been gathered at the end of the disk.
Once done, the process of defragging the individual files became much simpler.
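In the same toy model, that consolidation pass amounts to sliding every in-use cluster down to the lowest free slot, so all the free clusters pool at the end -- again just an illustrative sketch, nothing to do with vfrag's real implementation:

    use strict;
    use warnings;

    # undef == a free cluster; the labels and layout are made up for the example.
    my @disk = ( 'A1', undef, 'B1', undef, undef, 'A2', 'B2', undef, 'C1' );

    my $dest = 0;                                  # lowest slot not yet claimed
    for my $src ( 0 .. $#disk ) {
        next unless defined $disk[$src];           # skip free clusters
        if ( $src != $dest ) {
            $disk[$dest] = $disk[$src];            # slide the in-use cluster down...
            $disk[$src]  = undef;                  # ...leaving its old slot free
        }
        ++$dest;
    }

    # In-use clusters are now packed at the front (not yet in per-file order);
    # all the free space is one contiguous run at the end, ready for the
    # per-file defrag proper.
    print join( ' ', map { $_ // '__' } @disk ), "\n";   # A1 B1 A2 B2 C1 __ __ __ __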
Unfortunately, I'm not sure it helps here as I have no free space within the buffers. Though I can use my small out-of-line buffer to create some, as described elsewhere.
With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.