You might want to look at http://www.firstworks.com/sqlrelay.html.
To quote:
SQL Relay is a persistent database connection pooling, proxying and load balancing system for Unix and Linux supporting ODBC, Oracle, MySQL, mSQL, PostgreSQL, Sybase, MS SQL Server, IBM DB2, Interbase, Lago and SQLite with APIs for C, C++, Perl, Perl-DBD, Python, Python-DB, Zope, PHP, Ruby, Ruby-DBD and Java, command line clients, a GUI configuration tool and extensive documentation. The APIs support advanced database operations such as bind variables, multi-row fetches, client side result set caching and suspended transactions. It is ideal for speeding up database-driven web-based applications, accessing databases from unsupported platforms, migrating between databases, distributing access to replicated databases and throttling database access.
Sounds interesting, though I haven't had time to play with it yet.
gav^ | [reply] |
First, connection pools do not avoid multiple connections to a database. What they do is limit the overall number of connections open at any given time and re-use them between parallel threads or processes. They may reduce the number of connections needed at one time, but only if some of the threads aren't doing any database work.
Second, you don't need one. Just fork processes for the number of connections you want, connect separately in each (after the fork) and go. That's the most efficient you can get with this. | [reply] |
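A minimal sketch of that fork-then-connect approach (the DSN and the children's "work" here are stand-ins; the key point is that each child opens its own connection *after* the fork, since a DBI handle inherited across a fork ends up shared and corrupted):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $kids = 3;
for my $n (1 .. $kids) {
    my $pid = fork;
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        # Child: connect only after the fork so this process owns a
        # private handle.
        # my $dbh = DBI->connect('dbi:Oracle:mydb', $user, $pass,
        #                        { RaiseError => 1 });  # hypothetical DSN
        # ... run this child's queries, then $dbh->disconnect ...
        exit $n;    # stand-in for "work done"
    }
}

# Parent reaps the children and collects their exit codes.
my @done;
for (1 .. $kids) {
    wait;
    push @done, $? >> 8;
}
print join(",", sort { $a <=> $b } @done), "\n";    # 1,2,3
```

With a real database you would replace the commented-out DBI calls and give each child its own slice of the queries.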
Are you sure this will help? Unless your database server has multiple processors it is unlikely that running multiple queries in parallel will be substantially faster than running the same queries serially.
If you do have multiple processors on your database server, then my suggestion would be to fork() N processes to each do 1/N of the work (N = number of available processors). Each process can then establish a private database connection, saving you the trouble of dealing with connection pools.
Of course, more complicated schemes are possible but they are unlikely to be worth the effort.
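The fork-N-processes-for-1/N-of-the-work scheme can be sketched like this; the query list is a placeholder, and in real use each child would open its own private DBI connection before running its slice:

```perl
use strict;
use warnings;

# Placeholder work list; each child takes every Nth item.
my @queries = map { "SELECT $_" } 1 .. 10;
my $n_procs = 2;

my @pids;
for my $i (0 .. $n_procs - 1) {
    pipe(my $r, my $w) or die "pipe: $!";
    my $pid = fork;
    die "fork: $!" unless defined $pid;
    if ($pid == 0) {
        close $r;
        # This child's 1/N share of the queries.
        my @mine = grep { $_ % $n_procs == $i } 0 .. $#queries;
        # my $dbh = DBI->connect(...);   # private connection per child
        print {$w} scalar(@mine), "\n";  # report how many it handled
        exit 0;
    }
    close $w;
    push @pids, [ $pid, $r ];
}

# Parent totals up the children's reports.
my $total = 0;
for my $p (@pids) {
    my ($pid, $r) = @$p;
    chomp(my $count = <$r>);
    $total += $count;
    waitpid $pid, 0;
}
print "$total queries handled\n";    # 10
```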
-sam
| [reply] |
Unless your database server has multiple processors it is unlikely that running multiple queries in parallel will be substantially faster than running the same queries serially.
Not necessarily true. Compiling a large project (say, a Linux kernel) on a single-CPU machine completes substantially faster with two processes (make -j2 ...) than it does with a single process, because this lets the CPU keep compiling in one process while the other waits for disk access.

While I haven't tested whether the same holds for databases, I suspect that it is likely to.
| [reply] [d/l] |
What do you mean by running a defined set of SQL statements in parallel? Are you talking about more than one SELECT statement, or about Oracle's parallel query feature? I am an Oracle DBA and would like to help you out if I can. I try to use Perl as much as I can, and this sounds very interesting.
Sam | [reply] |
Why not use PL/SQL and call the stored package from your daemon? Sounds like a job for DBMS_JOB and DBMS_PIPE to me.
rdfield | [reply] |
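One way the DBMS_PIPE side of rdfield's suggestion might look from Perl: the daemon builds an anonymous PL/SQL block that packs a request onto a named pipe, and a scheduled job on the server side services it. The pipe name, payload, and DSN here are all assumptions, and the actual database call is shown commented out since it needs a live Oracle connection:

```perl
use strict;
use warnings;

my $pipe_name = 'work_queue';    # hypothetical pipe name
my $plsql = <<"END_SQL";
DECLARE
  status INTEGER;
BEGIN
  DBMS_PIPE.PACK_MESSAGE(:payload);
  status := DBMS_PIPE.SEND_MESSAGE('$pipe_name');
END;
END_SQL

# With a live Oracle connection this would be, roughly:
# my $dbh = DBI->connect('dbi:Oracle:mydb', $user, $pass);
# my $sth = $dbh->prepare($plsql);
# $sth->bind_param(':payload', 'run batch');
# $sth->execute;
print "block built\n";
```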
Go to:
http://www.theperlreview.com/
and download issue-01 and read:
"Design Patterns: Singletons" by brian d foy...
I think this is exactly what you want... :")
hth
(In the past I wanted to do a similar CGI thing, and I'm kicking myself now that I didn't know about it.) | [reply] |
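The Singleton approach mentioned above, sketched minimally: the class caches one instance and hands it to every caller, so a process opens at most one connection. The class name is made up, and a plain hashref stands in for the real DBI handle:

```perl
use strict;
use warnings;

package DBHandle;

my $instance;    # class-level cache, shared by all callers

sub instance {
    my $class = shift;
    # Create the connection the first time, reuse it afterwards.
    $instance //= $class->_new_connection;
    return $instance;
}

sub _new_connection {
    my $class = shift;
    # In real use: DBI->connect($dsn, $user, $pass) -- hypothetical
    # here, so a plain hashref stands in for the handle.
    return bless { connected_at => time }, $class;
}

package main;

my $h1 = DBHandle->instance;
my $h2 = DBHandle->instance;
print $h1 == $h2 ? "same handle\n" : "different handles\n";    # same handle
```

Note this shares the handle only within one process; across forked children you still want one connection per child, as discussed above.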