in reply to Re: Re: Is Perl the right solution...
in thread Is Perl the right solution...

From memory (and I stand to be corrected), doesn't Access load a copy of the entire db onto the local machine (if on a LAN)? Given this, wouldn't there be problems with locking? Hence the scalability issues.

Has the original poster considered Delphi? It's damn easy to use, has a nice RAD environment, heaps of DB connectivity modules, is completely compiled, and doesn't need those silly runtime libraries. Having said that, there is always Kylix for a nicer environment in which to work... :-)

As I've mentioned in other posts, I like Postgres and Perl because they're so damn easy to set up, maintain and write code for.
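
To give an idea of what I mean by easy to write code for, here is a minimal DBI sketch against Postgres (the database, user and table names are all made up, so adjust to taste):

    #!/usr/bin/perl -w
    use strict;
    use DBI;

    # Connect to a local Postgres database. The dbname, user and
    # password below are placeholders -- substitute your own.
    my $dbh = DBI->connect( 'dbi:Pg:dbname=orders', 'someuser', 'somepass',
                            { RaiseError => 1, AutoCommit => 1 } );

    # Pull a few rows from a hypothetical customers table.
    my $sth = $dbh->prepare('SELECT id, name FROM customers WHERE active');
    $sth->execute;
    while ( my ($id, $name) = $sth->fetchrow_array ) {
        print "$id\t$name\n";
    }

    $dbh->disconnect;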

If a web interface is decided on, you get the nice flexibility of a remote-access GUI, and it's easy to build some kind of redundancy into your final solution: e.g. a simple (cheap Intel) two-machine setup, where if one machine fails you just bring up the other machine with the database and webserver on the one box.

If you wanted to get really tricky, use one machine as primary and one as secondary, both installed with a webserver and database, and have the primary roll its data across to the standby, so that in the event of a failure all the data is at most n minutes/seconds old. Easy, cheap, flexible, scalable and sensible for an SME whose data is their business.
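
The roll-across doesn't need to be fancy for a shop that size, either. Something along these lines, run from cron on the primary, would do the trick (the hostname, paths and database name are invented for the sketch):

    #!/usr/bin/perl -w
    use strict;

    # Crude warm-standby sketch, meant to be run from cron on the primary.
    my $db      = 'shopdb';
    my $standby = 'standby.example.com';
    my $dump    = "/var/backups/$db.dump";

    # Dump the whole database in pg_dump's compressed custom format...
    system("pg_dump -Fc -f $dump $db") == 0
        or die "pg_dump failed: $?\n";

    # ...then push it across to the standby box, where a matching cron
    # job can pg_restore it into a waiting database.
    system("scp -q $dump $standby:/var/backups/") == 0
        or die "scp failed: $?\n";

How often cron runs it is your "n minutes old" figure; anything tighter than that and you'd be looking at proper replication.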

Ahhhh, the feature creep! The damn feature creep...

Re: Re: Re: Re: Is Perl the right solution...
by jlongino (Parson) on Dec 14, 2001 at 05:56 UTC
    From memory (and I stand to be corrected), doesn't Access load a copy of the entire db onto the local machine (if on a LAN)? Given this, wouldn't there be problems with locking? Hence the scalability issues.
    In a Windows environment (LAN or otherwise), record locking is handled via "*.ldb" files in the same directory as the "*.mdb" file. The entire db is not copied to the local computer, although occasionally small temporary files may be needed for the Jet engine.

    If I remember correctly, the primary scalability issue is poor performance under increased user load (for Windows/LAN environments).

    Again, I'm not very knowledgeable about Access/Perl/ODBC DBI performance issues, so I can't comment there, although I would be interested in hearing other monks discuss their successes/failures as dws has, particularly if they have had any success under heavy usage.
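
    For the curious, talking to an Access .mdb from Perl via DBI/ODBC looks roughly like this (the DSN and table name are invented; the DSN would be set up beforehand in the Windows ODBC Data Source Administrator). How it behaves under heavy usage is exactly the part I can't vouch for:

        use strict;
        use DBI;

        # 'AccessTest' is a hypothetical System DSN pointing at the .mdb.
        my $dbh = DBI->connect( 'dbi:ODBC:AccessTest', '', '',
                                { RaiseError => 1 } );

        my $sth = $dbh->prepare('SELECT COUNT(*) FROM orders');
        $sth->execute;
        my ($count) = $sth->fetchrow_array;
        print "orders: $count\n";

        $dbh->disconnect;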

    If it were up to me, I would also like to use an Apache/Postgres/Perl DBI environment (even though the learning curve would be steep for me).

    --Jim

      Ahhhh, is that what those little files are for? It's been quite a while since I've used Access (thankfully).

      At a place I was once working, we had about 1 TB of data, and some smart people thought they would use Access as their GUI to it. Unfortunately, what they didn't know was that if they didn't use pass-through queries, Access would bring down whole chunks of the data for processing on the local machine (Access 97, from memory).

      Needless to say, the local LAN ground to a halt and queries never finished. The end users were quite unhappy about it and wanted us to change the way Access worked! The concept of pass-through queries was too complicated for them, i.e. they had to code 'raw' SQL rather than use a GUI. Marketers, ROFL.