When writing programs for the net that need to access external data, I generally lean towards storing that data in a database of some sort. There are lots of reasons for this, chiefly security and the fact that a database handles concurrent access for you: there is no need to manually lock files, and so on.
There are a large number of databases that will work on both Linux and NT. MySQL works well; so too, or so I have heard, does PostgreSQL, and both have the added advantage of being free ;) Oracle will happily run on both platforms as well. If all of the above seem too complex for your task, there is always the old comma-separated file, accessed via DBD::CSV.
DBI is my interface of choice, both for its stability and flexibility and because it provides an easy way to standardise calls to databases across networks, operating systems and even database types. Some drivers you could choose are DBD::mysql, DBD::Oracle and DBD::ODBC, although the last will require that you have installed an ODBC driver on Linux.
All calls to the database are then made as the webserver's user and will happily reach the data, providing that user has access rights to the database. This can be made very secure by the use of passwords and by storing the database files themselves in a non-web-accessible location.
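For the curious, a minimal DBI session looks something like this (the DSN, table, user and password are all invented for the example; swap in your own driver and database):

    #!/usr/bin/perl -w
    use strict;
    use DBI;

    # Connect through the MySQL driver; adjust the DSN for your own database.
    my $dbh = DBI->connect('dbi:mysql:webapp', 'webuser', 'secret',
                           { RaiseError => 1 })
        or die "Cannot connect: $DBI::errstr";

    # Placeholders keep user-supplied data out of the SQL string itself.
    my $sth = $dbh->prepare('SELECT name, email FROM users WHERE id = ?');
    $sth->execute(42);
    while (my ($name, $email) = $sth->fetchrow_array) {
        print "$name <$email>\n";
    }
    $dbh->disconnect;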
Hope this helps!
$japh->{'Caillte'} = $me;
As caillte pointed out, using a database is really the only way to go. CSV files, DBM files, or plain-text files of any variety are very easy to hack around with, but they are hard to scale, especially under a deadline. Unless you're doing a five-minute hack that will be used only once or twice, it is best to do it properly the first time, which, once you get used to it, isn't that much harder anyway.
There are DBDs (database drivers) for Perl for nearly every DB that you can think of, and a few you probably wouldn't want to: Informix, Oracle, MySQL, ODBC, Solid, Interbase... the list is pretty large, so you can pick whatever you have access to. MySQL is good because it is Open Source and runs just as well on NT as it does on UNIX. You can even purchase a support contract for it, which is an important factor when trying to convince IT to use it.
Regarding "unprivileged users", this is easy to implement
using a database as a conduit to pass information:
    Web   <--->   CGI   <--->   SQL   <--->   "Privileged"
    User          App           DB             Process
The CGI can handle login verification and basic security. The "privileged" process can handle the actual special work, taking its commands from the configuration in the database. You can add extra security by using SSL, by restricting access to the CGI application itself, and more.
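As a concrete sketch of the CGI side (the database, table and parameter names are invented for illustration), the script only records the request; the privileged process does the real work:

    #!/usr/bin/perl -w
    use strict;
    use CGI;
    use DBI;

    my $q   = CGI->new;
    my $dbh = DBI->connect('dbi:mysql:webapp', 'nobody', 'secret',
                           { RaiseError => 1 });

    # ... verify the user's login here before accepting anything ...

    # Flip a configuration switch; the privileged process will act on it.
    $dbh->do('UPDATE config SET value = ? WHERE name = ?',
             undef, scalar $q->param('value'), scalar $q->param('name'));

    print $q->header('text/plain'), "Request recorded.\n";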
The system user that runs the web server (e.g. 'nobody') just needs read/write access to the SQL database, and you can even limit that further using SQL access controls (e.g. allow 'SELECT' and 'UPDATE' but not 'INSERT').
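With MySQL, for instance, that restriction is a one-off GRANT run by the administrator, and it can even be issued through DBI (database, table and account names are invented here):

    #!/usr/bin/perl -w
    use strict;
    use DBI;

    # Run once as the database administrator, not as the web user.
    my $dbh = DBI->connect('dbi:mysql:mysql', 'root', 'adminpass',
                           { RaiseError => 1 });

    # 'nobody' may read and change existing rows, but not add new ones.
    $dbh->do(q{GRANT SELECT, UPDATE ON webapp.config TO 'nobody'@'localhost'});

    $dbh->disconnect;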
The configuration is managed from a central database, so the process itself can read and re-read its config from this database on demand, or on a regular schedule.
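The privileged side can then be little more than a loop that re-reads those rows. A sketch, with the table name and the actual work stubbed out:

    #!/usr/bin/perl -w
    use strict;
    use DBI;

    # Runs under a privileged account, entirely outside the web server.
    my $dbh = DBI->connect('dbi:mysql:webapp', 'privuser', 'secret',
                           { RaiseError => 1 });

    while (1) {
        # Re-read the configuration the CGI may have changed.
        my $config = $dbh->selectall_hashref(
            'SELECT name, value FROM config', 'name');
        do_special_work($config);
        sleep 60;    # or poll on demand, as suits the job
    }

    sub do_special_work {
        my ($config) = @_;
        # Placeholder: the real privileged task goes here.
        print scalar(keys %$config), " config entries loaded\n";
    }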
For simplicity, if you tie the DB to a hash, merely putting stuff into the hash in one program will make it instantly available to another program using the same table. If you are new to SQL, this is by far the easiest way to get going.
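One CPAN module that does exactly this is Tie::DBI, which ties a hash to a table keyed on one column. A sketch following its documented interface (connection details are invented, and the exact options, particularly the CLOBBER write-permission flag, may vary between versions):

    #!/usr/bin/perl -w
    use strict;
    use Tie::DBI;

    # Tie %record to the 'users' table, keyed on its 'id' column.
    tie my %record, 'Tie::DBI', {
        db       => 'mysql:webapp',
        table    => 'users',
        key      => 'id',
        user     => 'webuser',
        password => 'secret',
        CLOBBER  => 1,    # permit writes through the hash
    };

    # A store here is a row any other program can read immediately.
    $record{42} = { name => 'Arthur', email => 'arthur@example.com' };
    print $record{42}{name}, "\n";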
All my web CGI applications are set up and configured in the browser. Use GDBM (or similar) for storage and put the database path outside of the web path; then it's not accessible from the web, but it is accessible to Perl and your application.
e.g.
    Web path:    "/InetPub/wwwroot/website1/database/"  <--- bad location
    Secure path: "/database/"                           <--- good
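A tie to a GDBM file in such a location is only a few lines (the file name here is invented; only the path comes from the example above):

    #!/usr/bin/perl -w
    use strict;
    use GDBM_File;

    # The file lives outside the web root, so it cannot be fetched by URL,
    # but any script on the server can read and write it.
    my $dbfile = '/database/website1.gdbm';

    tie my %config, 'GDBM_File', $dbfile, &GDBM_WRCREAT, 0640
        or die "Cannot tie $dbfile: $!";

    $config{admin_email} = 'webmaster@example.com';   # set from the admin pages
    print $config{admin_email}, "\n";                 # read back by the site

    untie %config;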
In theory, there is no difference between theory and practice. But in practice, there is.

Jonathan M. Hollin
Digital-Word.com