Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:

Hi,

sorry if this is a silly question. I do not have much experience in Perl, and all my scripts so far were used only by myself.

Now I wrote a small application. The back-end is a simple database (an SQLite file). The front-end is a user interface written in Tk. The front-end is then packed with PAR::Packer into an executable. This executable, as well as the db file, is supposed to be placed on the server (in a shared directory). The users are supposed to get a link to the executable to start the front-end from their workstations.

I tested this approach with 2-3 users simultaneously and it worked. In real life there would be up to 20, maybe 30, users working with this program, and probably some of them would start the packed Perl/Tk front-end simultaneously.

Are there any pitfalls to this approach? Are there any limits to the number of users who can simultaneously execute a packed Tk script? Thank you!

Re: Simultaneous access to the tk script
by Marshall (Canon) on Jun 02, 2018 at 05:23 UTC
    SQLite will work with multiple DB connections. That is fine. Different instances of the SQLite DB code will cooperate with each other.

    SQLite write performance can be a big issue. In order to do a write (update, append, or whatever), the connection doing the write has to acquire an exclusive lock on the DB! SQLite has its own locking mechanism, and this works fine on Windows or Unix. I would think that 20-30 readers would work fine. Whether the UI is driven by Tk or the exe is packed by PAR shouldn't matter. You might find SQLite V3 Exclusive Locking helpful.
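    To make the locking concrete, here is a little sketch (file and table names are made up) of one connection holding the write lock while a second one tries to write:

        use strict;
        use warnings;
        use DBI;

        # Two handles on the same file stand in for two separate clients.
        my $dsn    = "dbi:SQLite:dbname=inventory.db";
        my $writer = DBI->connect($dsn, "", "", { RaiseError => 1 });
        my $other  = DBI->connect($dsn, "", "", { RaiseError => 1 });

        $writer->do("CREATE TABLE IF NOT EXISTS log (id INTEGER PRIMARY KEY, note TEXT)");

        # The first client opens a transaction and writes, which takes the lock.
        $writer->begin_work;
        $writer->do("INSERT INTO log (note) VALUES (?)", undef, "first note");

        # With a short busy timeout, the second client gives up quickly
        # instead of waiting for the lock to clear.
        $other->sqlite_busy_timeout(100);    # milliseconds
        eval { $other->do("INSERT INTO log (note) VALUES (?)", undef, "second note") }
            or warn "second writer was locked out: $@";

        $writer->commit;    # releases the lock; now the second insert would succeed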

    The big issue with SQLite is going to come down to performance of write operations.

    Some time ago, I was working on the design of a similar project, meaning 20-30 remote clients running a Perl Tk GUI viewing an SQLite DB. The design never progressed to an implementation phase. Some significant DB consistency issues surfaced. For this design, I figured that I'd need a central server to manage the connections; the central server would be the single writer and would push notifications to the other clients.

    Say User 1 updates a row that User 2 is looking at. I would have to know that User 2's data had been updated. User 2's view would need to be updated. What happens when two users try to update the same field at the same time? Who "wins", and how do you tell the guy who "lost" that he lost? When a row was updated, this could potentially mean updating displays on all the attached clients. This was a very weird DB application where we knew that some rows (records) were corrupted. Two different users could come up with their own ideas of what record 1234 should be. Anyway, that means a complicated mess!
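    One standard answer to the "who wins" question is optimistic locking: give each row a version counter and make every UPDATE conditional on the version the client read. A sketch only, with invented table and column names:

        # $dbh as in the sketch above; the client read the row earlier
        # as ($id, $old_version, ...). Values here are invented.
        my ($id, $old_version, $new_value) = (1234, 7, "corrected value");

        my $rows = $dbh->do(
            "UPDATE record SET value = ?, version = version + 1
              WHERE id = ? AND version = ?",
            undef, $new_value, $id, $old_version,
        );
        if ($rows == 0) {
            # Another client got there first: this one "lost" and has to
            # re-read the row and break the news to its user.
        }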

    Hope this helps. The main point is that SQLite write performance is limited.
    Marshall

    Update: No matter what the DB, if the DB changes a record, you have to figure out how to update the clients who are looking at that particular record. This can be complex. There are various "trigger" mechanisms, but your clients will have to be aware of a change in order to refresh their display based upon what other clients did to the central DB. WOW! This can become complicated.

    If all of the clients are "read only", this is easy. If one or more of the clients can modify the DB... the complexity is 10x.
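    If you ever do need writing clients without a central server, a crude but simple fallback is to have every Tk client re-poll the DB on a timer instead of relying on push notifications. A rough sketch, with the widget and query invented:

        use strict;
        use warnings;
        use Tk;
        use DBI;

        my $dbh = DBI->connect("dbi:SQLite:dbname=inventory.db", "", "",
                               { RaiseError => 1 });
        my $mw  = MainWindow->new;
        my $box = $mw->Listbox(-width => 60)->pack;

        # Throw away the displayed rows and re-read them from the DB, so
        # changes made by other clients show up within one timer period.
        sub refresh {
            $box->delete(0, 'end');
            my $rows = $dbh->selectall_arrayref(
                "SELECT note FROM log ORDER BY id DESC LIMIT 50");
            $box->insert('end', $_->[0]) for @$rows;
        }

        refresh();
        $mw->repeat(5000, \&refresh);    # re-read every 5 seconds
        MainLoop;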

      Hi,

      and many thanks for your reply. My application has two tables. In a nutshell: the first table is inventory management and the second table is a manual log. The tables are linked via product_id. The users can write to the second table only (in essence, a current comment from the user's point of view). This means they will never update the same record; they just write to the same table. The likelihood of two users writing at the same moment, down to the millisecond, is probably low. Would you say it is still useful to set $dbh->sqlite_busy_timeout( $ms ) here?
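      Just to be clear, I mean something like this right after connecting (the 5000 ms is a guess on my part):

          my $dbh = DBI->connect("dbi:SQLite:dbname=inventory.db", "", "",
                                 { RaiseError => 1 });
          $dbh->sqlite_busy_timeout(5000);    # wait up to 5 s for a competing writer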

      My concern was the packed Tk front-end as well, since several users would start the same program over the link. Well, probably 20-30 is not too many. I will try to find out where the %temp% is located, as Anonymous Monk proposed here.

      The clients' updates would be less of a concern in my specific case. The users make their notes on the different products. Whenever they want to see the current chain of log records for a certain product, they load the id manually.

        I don't see any "show stoppers" with your application. 20-30 instances of a packed Perl program with Tk sounds reasonable. Of course the program will start faster if you have Perl and the modules it needs installed, because this eliminates the "unpack the .exe" step. However, in my experience users will tolerate a couple of seconds of startup time; above 3 secs and you will get bitching (or at least I do).

        The last time I did any SQLite benchmarking, I was using an old XP laptop as my development machine. I could get maybe a dozen transactions per second, and about 50K inserts per second (as a single transaction). Your server machine is undoubtedly faster than that. In your case, to initialize the inventory table, run all of the inserts as a single transaction. A million-row commit is just fine with SQLite. The write-commit process has a lot of bookkeeping to do, but there isn't much difference between a single-row commit and a 10K-row commit. Each one of your writes to table 2 in operational mode will be a transaction, so the max write rate is probably less than a couple of dozen per second; but even if you just get 10 per second, that is probably fine. It sounds like each user will be doing a fair amount of thinking before making a manual comment.
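        In DBI terms, initializing the inventory table as one transaction would look roughly like this (file, table and column names are placeholders):

            my $dbh = DBI->connect("dbi:SQLite:dbname=inventory.db", "", "",
                                   { RaiseError => 1 });
            my $sth = $dbh->prepare(
                "INSERT INTO inventory (product_id, name) VALUES (?, ?)");

            # One transaction around all the inserts: one commit, one round
            # of write bookkeeping, no matter how many rows.
            $dbh->begin_work;
            $sth->execute(@$_) for @rows;    # @rows = ([id, name], ...)
            $dbh->commit;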

        Sounds like you have a fine SQLite application.

Re: Simultaneous access to the tk script
by afoken (Chancellor) on Jun 01, 2018 at 19:55 UTC

      Hi,

      thank you very much for your reply. I thought about using a "real" database. However, the security standards here make it highly unlikely that I would be allowed to run another server such as PostgreSQL. The main software in the company uses MS SQL Server, which is already there; however, it is equally unlikely that they would let me create a new database, and in any case it would not be write access.

Re: Simultaneous access to the tk script
by huck (Prior) on Jun 04, 2018 at 20:13 UTC

    I would like to add that https://www.sqlite.org/whentouse.html mentions

    "If there are many client programs sending SQL to the same database over a network, then use a client/server database engine instead of SQLite. SQLite will work over a network filesystem, but because of the latency associated with most network filesystems, performance will not be great. Also, file locking logic is buggy in many network filesystem implementations (on both Unix and Windows). If file locking does not work correctly, two or more clients might try to modify the same part of the same database at the same time, resulting in corruption. Because this problem results from bugs in the underlying filesystem implementation, there is nothing SQLite can do to prevent it. A good rule of thumb is to avoid using SQLite in situations where the same database will be accessed directly (without an intervening application server) and simultaneously from many computers over a network."

Re: Simultaneous access to the tk script (details pitfalls)
by Anonymous Monk on Jun 01, 2018 at 20:46 UTC

    Howdy

    I tested this approach with 2-3 users simultaneously and it worked. In real life there would be up to 20, maybe 30, users working with this program, and probably some of them would start the packed Perl/Tk front-end simultaneously.

    Benchmark? That is, simulate 100 users or 1000 users and see what happens; poke holes in your program, find the squeaky wheel in your design

    Are there any pitfalls to this approach?

    kinda -- people forget to test for pitfalls, so what is really just regular behavior surprises them and they call it a pitfall :D

    Are there any limits to the number of users who can simultaneously execute a packed Tk script?

    Yes, everything has limits :)

    If all the users are sharing the server, and all the perl/par data is being written to the server (%temp% is on the server), then you're limited by server cpu/ram and disk ... 20-30 users/instances seems very very very low

    I realize this sounds very generic, but that is the essence/details of the question, sqlite has writeups on performance metrics, Tk is only limited by cpu/ram, par is limited by PAR_TEMP...

    go learn how these apply to your setup

      Hi, and thank you very much for your response. I found the PAR_TEMP variable referenced in the PAR::Environment module. In the end, playing with the who_am_i.txt file here yielded a lot. If I see it right, I should make a copy of my executable without the -gui argument, so that I can see the command line, and add a debug print of $ENV{PAR_TEMP}. Could $ENV{PAR_GLOBAL_TEMP} then be set in the script to change the location? In the above document the variable is set outside the program.
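      Something like this is what I have in mind for the debug copy (packed without -gui so that STDOUT is visible):

          # Printed by the packed executable itself; the PAR loader sets
          # PAR_TEMP during bootstrap, before this line ever runs.
          print "PAR_TEMP = ", ($ENV{PAR_TEMP} // '(not set)'), "\n";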

      Another question: how (in a nutshell) do I benchmark the packed executable while simulating several users? So far I have only benchmarked subroutine calls with the Benchmark module. Should it be system calls to the executable in this case? But then I would just be starting many instances of the program as one user, not as several different users; does that make a difference? Another thing: this is a GUI application, so a window would open each time? Do you mean that just putting exit at the end of the script would be useful?
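      For the DB side at least, what I could imagine is a small driver script that forks off several writers against the same file, something like this (user count and SQL are made up):

          use strict;
          use warnings;
          use DBI;

          my $users = 20;    # pretend users
          my @pids;
          for my $u (1 .. $users) {
              my $pid = fork();
              die "fork failed: $!" unless defined $pid;
              if ($pid == 0) {    # child process: one simulated user
                  my $dbh = DBI->connect("dbi:SQLite:dbname=inventory.db", "", "",
                                         { RaiseError => 1 });
                  $dbh->sqlite_busy_timeout(5000);
                  $dbh->do("INSERT INTO log (note) VALUES (?)", undef,
                           "note from user $u") for 1 .. 10;
                  exit 0;
              }
              push @pids, $pid;
          }
          waitpid($_, 0) for @pids;
          print "all simulated users finished\n";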

      Sorry for so many questions. I put some in the other replies too, since you would not see me reply as Anonymous Monk :-).