As you already have a database, I would use several tables in that database to model the whole process:
- Submissions of new users go into the table new_users
- A script is triggered by cron every n minutes and checks whether there are entries in the table new_users. If so, it modifies the main users table according to the actions in new_users (insert, update, delete) and also inserts an entry with the timestamp and ("regenerate","dhcpd.conf",Null) into the table jobs.
- Another script is triggered by cron every minute and checks whether there is a row ("regenerate","dhcpd.conf",Null) in jobs. If so, it regenerates dhcpd.conf, updates that row to ("regenerate","dhcpd.conf",timestamp), and inserts a new row ("transfer","dhcpd.conf",Null) into that table (see the worker sketch after this list).
- A third script, also called by cron every minute, checks whether an open transfer row is available in jobs. If so, it executes the requested transfer and updates the row in the same way.
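
A minimal sketch of the regeneration worker (the second script), assuming SQLite and a jobs table with columns (action, target, done_at); the database path, column names, and the build-dhcpd-conf helper are placeholders, not part of your setup:

    #!/usr/bin/env python3
    # Sketch of the dhcpd.conf regeneration worker, run from cron every minute.
    # Assumes an SQLite database with a table
    #   jobs(action TEXT, target TEXT, done_at TEXT)
    # -- table name, columns, paths, and helper command are placeholders.
    import sqlite3
    import subprocess
    from datetime import datetime

    DB = "/var/lib/provisioning/jobs.db"   # placeholder path

    def regenerate_dhcpd_conf():
        # Placeholder helper: rebuild dhcpd.conf from the users table.
        subprocess.run(["/usr/local/bin/build-dhcpd-conf"], check=True)

    def main():
        conn = sqlite3.connect(DB)
        cur = conn.cursor()
        # Is there a pending "regenerate dhcpd.conf" job?
        cur.execute(
            "SELECT rowid FROM jobs "
            "WHERE action='regenerate' AND target='dhcpd.conf' AND done_at IS NULL"
        )
        row = cur.fetchone()
        if row is None:
            return
        regenerate_dhcpd_conf()
        now = datetime.now().isoformat(timespec="seconds")
        # Mark the job as done and queue the transfer step for the next worker.
        cur.execute("UPDATE jobs SET done_at=? WHERE rowid=?", (now, row[0]))
        cur.execute(
            "INSERT INTO jobs (action, target, done_at) "
            "VALUES ('transfer', 'dhcpd.conf', NULL)"
        )
        conn.commit()

    if __name__ == "__main__":
        main()

The transfer worker would look the same, just matching on action='transfer' and running the copy instead of the rebuild.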
This method has a latency of at least one minute per step involved, which should be small enough for adding new clients to DHCP, but might not be fast enough for other tasks. In that case you could increase the polling frequency, but a hardcoded, synchronous sequence in a shell script would most likely be better (combining all steps after the initial submission into one script; a rough sketch follows).
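
For illustration, a rough sketch of that combined, synchronous sequence, written in Python here rather than shell; all helper commands, paths, and hostnames are made up:

    #!/usr/bin/env python3
    # Sketch of the "one synchronous sequence" alternative: apply the
    # submission, regenerate dhcpd.conf, and push it out in a single run
    # instead of several polled steps. Everything below is a placeholder.
    import subprocess

    def main():
        # 1. Apply pending new_users entries to the users table
        subprocess.run(["/usr/local/bin/apply-new-users"], check=True)
        # 2. Regenerate dhcpd.conf from the users table
        subprocess.run(["/usr/local/bin/build-dhcpd-conf"], check=True)
        # 3. Transfer the new config to the DHCP server and reload it
        subprocess.run(
            ["scp", "/etc/dhcp/dhcpd.conf", "dhcp-server:/etc/dhcp/dhcpd.conf"],
            check=True,
        )
        subprocess.run(["ssh", "dhcp-server", "systemctl reload dhcpd"], check=True)

    if __name__ == "__main__":
        main()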