You might want to look into a database engine that supports stored procedures. Have you considered changing to a Model-View-Controller approach and/or a LAMP stack (with the P being Perl), instead of the client-server approach you are using now?
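If you go the stored-procedure route, here is a minimal sketch of calling one from Perl via DBI. This assumes a MySQL-style CALL syntax; the DSN, credentials and the process_job procedure are placeholders, not anything from your setup:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # Push the data-heavy step into a stored procedure and call it
    # from Perl. Adjust DSN, credentials and procedure name to taste.
    my $dbh = DBI->connect( 'dbi:mysql:database=jobs;host=localhost',
                            'user', 'password',
                            { RaiseError => 1, AutoCommit => 1 } );

    my $sth = $dbh->prepare('CALL process_job(?)');
    $sth->execute($ARGV[0]);          # job id passed in by the caller

    # Only relevant if the procedure reports back a status row.
    while ( my $row = $sth->fetchrow_hashref ) {
        print "$row->{status}\n";
    }

    $dbh->disconnect;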
If all the actions are in different scripts and these scripts are "called" by some marshalling application, it seems to me that this will involve numerous and repetitive starts of a Perl interpreter: loading a single script, running it and tearing it all down again, only to start all over with the next script. That is wasteful of CPU cycles and memory.

One refactoring approach could be to gather all these scripts into one module (that shouldn't be too difficult, provided they all run under the strict pragma): put each script's main code into a separate subroutine, and factor out the code the scripts have in common into basic, general "servicing" subroutines. Your Delphi code then only has to start one main script, which performs all the actions of the job within a single invocation of the Perl interpreter. That should be more efficient.
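A rough sketch of what that consolidation might look like; the package, subroutine and step names are invented for illustration, and each sub would hold the main code of one of your former scripts:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # --- The former stand-alone scripts, gathered into one package ---
    package JobSteps;

    sub import_data   { my ($job_id) = @_; print "importing data for job $job_id\n";  }
    sub validate_data { my ($job_id) = @_; print "validating data for job $job_id\n"; }
    sub export_report { my ($job_id) = @_; print "writing report for job $job_id\n";  }

    # --- The single entry point the Delphi code starts ---
    package main;

    my $job_id = shift @ARGV || 'demo';

    # One interpreter start-up covers every step of the job.
    for my $step (qw(import_data validate_data export_report)) {
        JobSteps->can($step)->($job_id);
    }

In a real refactoring you would probably move the JobSteps package into its own .pm file so the subroutines can also be tested in isolation.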
CountZero

"A program should be light and agile, its subroutines connected like a string of pearls. The spirit and intent of the program should be retained throughout. There should be neither too little nor too much, neither needless loops nor useless variables, neither lack of structure nor overwhelming rigidity." - The Tao of Programming, 4.1 - Geoffrey James
"Better" is a question that can only be answered against criteria, and it's unclear what has led you to seek a replacement: perhaps an 'ility' like maintainability, or perhaps scalability or extensibility.
So, to focus this discussion a bit more, could you comment on what 'better' means to your team and company? Often we think of the 'triangle' of Cost/Features/Schedule and the trade-offs one makes to add or subtract one of these 'qualities'.
Could you share a bit more about what led to your decision to replace rather than refactor, what is important to your team, and what the relative costs of hardware and software are for this problem?
Long, big, complex jobs on a huge server farm (think Census data mining) present a much different problem than quick, distributed jobs that are isolated to prevent data corruption.
I'm personally interested in the reasoning behind the 'Temp DB' approach, and what transactions, if any, post back to the 'master' database.
I'm guessing these multiple scripts grew up around a need to be fault tolerant and to ensure consistency in the data model within a database that does not use constraints.
If so, your design should probably begin by clearly identifying your capacity, concurrency and consistency related design constraints, and building your architecture on those requirements within the budget and schedule you have available.
Depending on the degree and type of parallelism involved (if any), there might also be a need to focus on a job-control framework (such as Load Sharing Facility, aka LSF) to simplify the dispatch/setup/execute/return_results/cleanup chain. If a significant portion of your code is focused on the 'control' issues, and the actual 'work' is a much smaller piece, then a batch processing framework could save you from reinventing the wheel.
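For illustration, this is roughly what a hand-rolled control chain tends to look like once the steps are external scripts; the step names and commands are placeholders, not your actual scripts. If your code resembles this plumbing more than it resembles the real work, a framework like LSF (or even a simple job table) is worth a look:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Sketch of a dispatch/execute/cleanup chain -- the kind of plumbing
    # a batch framework would otherwise take over.
    my @jobs = (
        { name => 'extract',   cmd => 'perl extract.pl'   },
        { name => 'transform', cmd => 'perl transform.pl' },
        { name => 'load',      cmd => 'perl load.pl'      },
    );

    for my $job (@jobs) {
        print "dispatching $job->{name}\n";
        my $rc = system($job->{cmd});      # execute; non-zero means failure
        if ( $rc != 0 ) {
            warn "$job->{name} failed (rc=$rc), stopping and cleaning up\n";
            last;
        }
    }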
You tell us!
-- Patrick Hennessey
"Strictly speaking, there are no enlightened people, there is only enlightened activity." -- Shunryu Suzuki