True. I should have qualified the statement (that Microsoft Access is not a multi-user database) by filling in some details.
My experience with Access -- or, more accurately, accessing the underlying JET engine through ODBC -- is that it didn't stand up under (simulated) heavy load, and that we could fairly reliably corrupt the repository. Under lighter load, things would seem fine for quite a while, then WHAM! Under single-user load, no problem at all. We suspected a concurrency problem, but with no source to read or debug, we were left at the "we suspect" level. Plus, conventional wisdom at the time was that Access/JET was not suited for multi-user applications.
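For what it's worth, the shape of the simulated load looked roughly like the sketch below -- this is not our actual harness, and the DSN name, table, and client/write counts are all made up. Fork a pile of writers, point them all at the same .mdb through ODBC, and watch for errors:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Hypothetical ODBC DSN pointing at the shared .mdb file.
my $dsn     = 'dbi:ODBC:AccessTestDB';
my $clients = 20;     # simulated concurrent users
my $writes  = 500;    # inserts per client

for my $client (1 .. $clients) {
    defined(my $pid = fork) or die "fork failed: $!";
    next if $pid;     # parent keeps spawning children

    # Each child opens its own connection and hammers the table.
    my $dbh = DBI->connect($dsn, '', '', { RaiseError => 1, AutoCommit => 1 });
    my $sth = $dbh->prepare('INSERT INTO orders (client, seq) VALUES (?, ?)');
    for my $i (1 .. $writes) {
        eval { $sth->execute($client, $i) };
        warn "client $client, write $i failed: $@" if $@;  # trouble shows up here
    }
    $dbh->disconnect;
    exit 0;
}
1 while wait != -1;   # reap all the children
```

Under a single client this sort of thing ran clean; crank $clients up and the warnings (and eventually a corrupt .mdb) started appearing.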
This was on NT 4.0 in the SP5 timeframe. Something may have changed since then to fix that (there have been at least two service packs, SP6 and SP6a, in the meantime), plus Win2K (and one? service pack there) and XP. I've since moved on to relying on heavier-duty RDBMSs, and haven't reevaluated Access. Nor do I plan to.
| [reply] |
From memory (and I will stand corrected), doesn't Access load a copy of the entire db onto the local machine (if on a LAN)? Given this, wouldn't there be problems with locking? Hence the scalability issues.
Has the original poster considered Delphi? It's damn easy to use, has a nice RAD environment, heaps of DB connectivity modules, is completely compiled, and doesn't need those silly runtime libraries. Having said that, there is always Kylix for a nicer environment in which to work.. :-)
As I've mentioned in other posts, I like Postgres and Perl, coz they're so damn easy to set up, maintain, and write code for.
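To back that up, a complete Perl/DBI session against Postgres really is only a handful of lines. A minimal sketch -- the database name, user, and table here are made up:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Hypothetical database and user; adjust to taste.
my $dbh = DBI->connect('dbi:Pg:dbname=smedb', 'www', '', { RaiseError => 1 });

# Placeholders keep the quoting headaches away.
my $sth = $dbh->prepare('SELECT name, total FROM invoices WHERE total > ?');
$sth->execute(100);
while (my ($name, $total) = $sth->fetchrow_array) {
    print "$name owes $total\n";
}
$dbh->disconnect;
```

Swap the connect string for a dbi:ODBC: DSN and the rest of the code doesn't change -- which is most of DBI's charm.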
If a web interface is decided on, you have the nice flexibility of a remote-access GUI, and it's easy to build some kind of redundancy into your final solution: i.e. a simple (cheap Intel) two-machine setup, where if one machine fails, you just bring up the other machine with the database and webserver on the one box.
If you wanted to get really tricky, use one machine as primary, one as secondary, both installed with a webserver and database, and have the primary roll its data across to the standby, so in the event of a failure all the data is at most n minutes/seconds old. Easy, cheap, flexible, scalable, and sensible for an SME whose data is their business.
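The dirt-simple version of that rollover is just a cron job on the primary. A sketch only -- the database name, standby host, and paths are all hypothetical, and it assumes ssh trust between the two boxes is already set up:

```perl
#!/usr/bin/perl
# Naive warm-standby sync: dump the primary, ship the dump to the standby.
# Loses up to one cron interval of data; run it every n minutes.
use strict;
use warnings;

my $db      = 'smedb';                 # hypothetical database name
my $standby = 'standby.example.com';   # hypothetical standby host

system("pg_dump $db > /tmp/$db.sql") == 0
    or die "pg_dump failed: $?";
system("scp /tmp/$db.sql $standby:/var/backups/$db.sql") == 0
    or die "scp failed: $?";

# On failover, the standby just restores and starts serving:
#   psql smedb < /var/backups/smedb.sql
```

Crude, but for an SME it buys you "at most n minutes old" for the price of two cheap boxes and a crontab entry.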
ahhhh, the feature creep! the damn feature creep..
| [reply] |
From memory (and I will stand corrected), doesn't Access load a copy of the entire db onto the local machine (if on a LAN)? Given this, wouldn't there be problems with locking? Hence the scalability issues.
In a Windows environment (LAN or otherwise), record locking is handled via "*.ldb" files in the same directory as the "*.mdb" file. The entire db is not copied to the local computer, although occasionally small temporary files may be necessary for the JET engine.
If I remember correctly, the primary scalability issue is poor performance under increased user load (for Windows/LAN environments).
Again, I'm not very knowledgeable about Access/Perl/ODBC DBI performance issues, so I can't comment there, although I would be interested in hearing other monks discuss their successes/failures as dws has. Particularly if they have had any success under heavy usage.
If it were up to me, I would also like to use an Apache/Postgres/Perl DBI environment (even though the learning curve would be steep for me).
--Jim
| [reply] |
ahhhh, is that what those little files are for. It's been quite a while since I've used Access (thankfully).
At a place I was once working, we had about 1 TB of data, and some smart people thought they would use Access as their GUI to it. Unfortunately, what they didn't know was that if they didn't use pass-through queries, Access would bring down whole chunks of the data for processing on the local machine. (Access 97, from memory.)
Needless to say, the local LAN ground to a halt, and queries never finished. The end users were quite unhappy about it and wanted us to change the way Access worked! The concept of pass-through queries was too complicated for them, i.e. they had to code 'raw' SQL rather than use a GUI. Marketers, ROFL.
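For the curious, the same distinction shows up in DBI terms: query a JET linked table and JET may drag the rows across the LAN to evaluate the WHERE clause itself, whereas sending the SQL straight to the backend (the moral equivalent of a pass-through query) makes the server do the work and return one number. A sketch with made-up DSNs and tables:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Through the Access front end: JET may pull large chunks of the linked
# table across the LAN before it can evaluate the WHERE clause locally.
my $jet = DBI->connect('dbi:ODBC:AccessFrontEnd', '', '', { RaiseError => 1 });
my ($slow) = $jet->selectrow_array(
    'SELECT COUNT(*) FROM linked_orders WHERE total > 1000');

# Straight to the backend server: the server evaluates the WHERE clause
# and only a single count comes back over the wire.
my $srv = DBI->connect('dbi:ODBC:BigBackEnd', '', '', { RaiseError => 1 });
my ($fast) = $srv->selectrow_array(
    'SELECT COUNT(*) FROM orders WHERE total > 1000');
```

Same SQL, wildly different network traffic -- which is exactly what ground that LAN to a halt.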
| [reply] |