I played with reloading modules at runtime a few years back. But there are quite a few problems that really can mess things up in subtle ways:
- If you change your variable initialization, for example by adding a variable or changing a default, the saved state becomes invalid (see the reload sketch after this list).
- Even if your saved state doesn't become invalid on reload, that doesn't mean your code will still work after a clean restart. The saved state may be valid, but the initialization function you just tinkered with might produce garbage.
- When doing a larger edit, you'd normally save regularly with unfinished code, just to make sure an editor or system crash doesn't lose you a lot of work.
- Autosave is a thing in some editors.
- The first error in your code can either crash the program or (much worse) mess up the in-memory state without you knowing it.
- If you're working on a forking application (like a webserver or similar), you have to choose between responsiveness to changes and reducing disk I/O. A hundred or more processes hitting the disk every half second can be "not fun" (see the polling sketch after this list). Plus, if you have NOT disabled "file access time" (Linux: noatime), this is a surefire way of reducing your SSD's lifespan.
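
To make the first point concrete, here's a minimal Python sketch of what a module reload does to module-level state. The throwaway module name "hotmod" and its contents are made up for illustration; the mechanism shown is just importlib.reload re-executing the module's top-level code:

```python
import importlib
import os
import pathlib
import sys

sys.path.insert(0, os.getcwd())  # make the throwaway module importable from cwd

# Hypothetical module with a default and some mutable module-level state.
pathlib.Path("hotmod.py").write_text(
    "defaults = {'timeout': 5}\nstate = {'connections': 0}\n"
)

import hotmod

hotmod.state["connections"] += 42   # in-memory state accumulates at runtime
old_state = hotmod.state            # something else keeps a reference to it

# Simulate the edit: a new default gets added to the initialization code.
pathlib.Path("hotmod.py").write_text(
    "defaults = {'timeout': 5, 'retries': 3}\nstate = {'connections': 0}\n"
)

importlib.reload(hotmod)            # re-runs the module's top-level code

print(hotmod.defaults)  # {'timeout': 5, 'retries': 3} -- the new code is in
print(hotmod.state)     # {'connections': 0}           -- accumulated state is gone
print(old_state)        # {'connections': 42}          -- stale reference lives on
```

Whichever way you lean, something bites you: either long-lived references never see the new defaults, or the accumulated state gets silently thrown away on reload.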
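And the disk I/O point, as a rough sketch: a "reload on change" setup in a pre-forked server usually boils down to every worker polling the source files. The file names, interval, and callbacks below are made up; this is only meant to show where the stat() traffic comes from, not any particular framework's implementation.

```python
import os
import time

WATCHED = ["app.py", "config.py"]   # hypothetical source files to watch
POLL_INTERVAL = 0.5                 # the "every half second" from above

def snapshot():
    # One stat() per watched file per call -- multiply by the number of
    # forked workers to get the disk (and atime-update) traffic.
    return {path: os.stat(path).st_mtime for path in WATCHED}

def worker_loop(handle_request, reload_code):
    last = snapshot()
    while True:
        handle_request()
        time.sleep(POLL_INTERVAL)
        current = snapshot()
        if current != last:         # something changed on disk
            reload_code()           # e.g. re-import or re-exec the worker
            last = current
```

Lengthen the interval and your edits take longer to show up; shorten it and a hundred workers are stat()ing every source file twice a second. That's the responsiveness-versus-disk-I/O trade-off in a nutshell.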
I'm not discouraging the use of automatic reloads. I'm just saying that while automatic reloads might be cool and solve some problems, they certainly introduce a set of their own. After a lot of extra gray hair, I have pretty much abandoned the concept of live-reloading code-in-development.
perl -e 'use Crypt::Digest::SHA256 qw[sha256_hex]; print substr(sha256_hex("the Answer To Life, The Universe And Everything"), 6, 2), "\n";'