in reply to Simultaneous access to the tk script

SQLite will work with multiple DB connections. That is fine. Different instances of the SQLite DB code will cooperate with each other.

SQLite write performance can be a big issue. In order to do a write (update, insert, whatever), the connection doing the write has to acquire an exclusive lock on the DB! SQLite has its own locking mechanism, and it works fine on both Windows and Unix. I would think that 20-30 readers would work fine. Whether the UI is driven by Tk or the exe is packed by PAR shouldn't matter. You might find SQLite V3 Exclusive Locking helpful.
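To see the exclusive write lock in action, here is a small sketch using Python's sqlite3 module. The locking behavior comes from the SQLite library itself, so a Perl DBD::SQLite program would see the same thing; the filenames and table are made up for illustration.

```python
# Two connections to one SQLite file: a writer holds the write lock,
# which blocks any other writer until it commits.
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# timeout=0: fail immediately instead of waiting when the DB is locked.
# isolation_level=None: autocommit mode, so BEGIN/COMMIT are explicit.
w1 = sqlite3.connect(path, timeout=0, isolation_level=None)
w2 = sqlite3.connect(path, timeout=0, isolation_level=None)

w1.execute("CREATE TABLE t (x INTEGER)")

w1.execute("BEGIN IMMEDIATE")            # take the write lock...
w1.execute("INSERT INTO t VALUES (1)")

err = ""
try:
    w2.execute("BEGIN IMMEDIATE")        # ...so a second writer fails
except sqlite3.OperationalError as e:
    err = str(e)
print(err)                               # "database is locked"

w1.execute("COMMIT")                     # release the lock...
w2.execute("BEGIN IMMEDIATE")            # ...and now the second writer wins
w2.execute("INSERT INTO t VALUES (2)")
w2.execute("COMMIT")

count = w1.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)                             # 2
```

Readers, by contrast, do not block each other, which is why 20-30 read-mostly clients are not a problem.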

The big issue with SQLite is going to come down to performance of write operations.

Some time ago, I was working on the design of a similar project, meaning 20-30 remote clients running a Perl Tk GUI viewing an SQLite DB. The design never progressed to an implementation phase because some significant DB consistency issues surfaced. For this design, I figured that I'd need a central server to manage the connections; that central server would be the single writer and would push notifications to the other clients.

Say User 1 updates a row that User 2 is looking at. I would have to know that User 2's data had been updated, and User 2's view would need to be refreshed. What happens when two users try to update the same field at the same time? Who "wins", and how do you tell the guy who "lost" that he lost? When a row was updated, this could potentially mean updating the displays on all the attached clients. This was a very weird DB application where we knew that some rows (records) were corrupted; two different users could come up with their own ideas of what record 1234 should be. Anyway, that makes for a complicated mess!
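For the "who wins" question, one standard technique (my sketch here, not something from the original design) is optimistic locking: each row carries a version counter, and an update only succeeds if the version the client originally read is still current, so the "loser" finds out immediately. Illustrated in Python's sqlite3 for brevity; the table and helper are hypothetical.

```python
# Optimistic-locking sketch: the UPDATE's WHERE clause checks that the
# version the client read is still the current one.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE part (id INTEGER PRIMARY KEY,"
           " qty INTEGER, version INTEGER)")
db.execute("INSERT INTO part VALUES (1234, 10, 1)")

def update_qty(conn, part_id, new_qty, version_read):
    """Return True if the update won; False if the row changed underneath us."""
    cur = conn.execute(
        "UPDATE part SET qty = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_qty, part_id, version_read))
    return cur.rowcount == 1

# User 1 and User 2 both read record 1234 at version 1.
user1_won = update_qty(db, 1234, 8, 1)   # version matches: wins
user2_won = update_qty(db, 1234, 5, 1)   # stale version: told he lost
print(user1_won)    # True
print(user2_won)    # False
```

The losing client can then re-read the row, show the other user's change, and let its user decide what to do, rather than silently overwriting it.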

Hope this helps. The main point is that SQLite write performance is limited.
Marshall

Update: No matter what the DB, if a record changes, you have to figure out how to update the clients who are looking at that particular record. This can be complex. There are various "trigger" mechanisms, but your clients will have to be aware of a change in order to refresh their display based upon what other clients did to the central DB. WOW! This can become complicated.
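As one concrete and cheap option: SQLite (3.16 or later) offers PRAGMA data_version, whose value changes whenever another connection commits a change to the database file. A client can poll it between UI events and only re-query when something actually changed. A minimal Python sqlite3 sketch of the idea (the same pragma is reachable from DBD::SQLite by running the PRAGMA through DBI):

```python
# Cheap "did anything change?" poll using PRAGMA data_version, which
# increments when another connection commits to the database file.
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "poll.db")
reader = sqlite3.connect(path, isolation_level=None)
writer = sqlite3.connect(path, isolation_level=None)
writer.execute("CREATE TABLE log (msg TEXT)")

def data_version(conn):
    return conn.execute("PRAGMA data_version").fetchone()[0]

before = data_version(reader)
writer.execute("INSERT INTO log VALUES ('hello')")   # another connection commits
after = data_version(reader)

changed = after != before
print(changed)   # True: time for the Tk client to refresh its view
```

This tells a client *that* something changed, not *what* changed, so it still has to re-run its query; but that is often enough for a refresh-on-change display.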

If all of the clients are "read only", this is easy. If one or more of the clients can modify the DB, the complexity goes up 10x.


Re^2: Simultaneous access to the tk script
by Anonymous Monk on Jun 03, 2018 at 17:24 UTC

    Hi,

    and many thanks for your reply. My application has two tables. In a nutshell: the first table is inventory management and the second table is a manual log. The tables are linked over product_id. The users can write to the second table only (in essence, a current comment from the user's point of view). This means they will never update the same record; they just write to the same table. The likelihood of two users writing at the same millisecond is probably low. Would you say it is still useful to set $dbh->sqlite_busy_timeout( $ms ) here?

    My concern was the packed Tk front-end as well, since several users would start the same program over the link. Well, 20-30 is probably not too many. I will try to find out where %temp% is located, as Anonymous Monk proposed here.

    Client updates would be less of a concern in my specific case. The users attach their notes to the different products. When they want to see the current chain of log records for a certain product, they load its id manually.

      I don't see any "show stoppers" with your application. 20-30 instances of a packed Perl program with Tk sounds reasonable. Of course the program will start faster if you have Perl and the modules it needs installed, because this eliminates the "unpack the .exe" step. However, in my experience users will tolerate a couple of seconds of startup time; above 3 secs and you will get bitching (or at least I do).

      The last time I did any SQLite benchmarking, I was using an old XP laptop as my development machine. I could get maybe a dozen transactions per second, and about 50K inserts per second (as a single transaction). Your server machine is undoubtedly faster than that. In your case, to initialize the inventory table, run all of the inserts as a single transaction; a million-row commit is just fine with SQLite. The write commit process has a lot of bookkeeping to do, but there isn't much difference between a single-row commit and a 10K-row commit. Each of your writes to table 2 in operational mode will be its own transaction, so the max write rate is probably less than a couple of dozen per second. But even if you get just 10 per second, that is probably fine; it sounds like each user will do a fair amount of thinking before making a manual comment.
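That "one big transaction for the initial load" advice is easy to show. A minimal sketch using Python's sqlite3 module (the SQLite behavior is the same from DBD::SQLite; the table and column names here are made up for illustration):

```python
# Bulk-load an inventory table inside a single transaction: one commit
# for 100K rows instead of 100K per-row commits.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE inventory (product_id INTEGER PRIMARY KEY,"
           " name TEXT)")

rows = ((i, f"product-{i}") for i in range(100_000))
with db:    # the context manager wraps the inserts in one transaction
    db.executemany("INSERT INTO inventory VALUES (?, ?)", rows)

count = db.execute("SELECT COUNT(*) FROM inventory").fetchone()[0]
print(count)   # 100000
```

In Perl the same pattern is $dbh->begin_work, the insert loop, then $dbh->commit; it is the per-transaction commit cost, not the per-row insert cost, that dominates.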

      Sounds like you have a fine SQLite application.

        Thank you very much!