in reply to DBD::MySQL timeout during query

There are many possible reasons. net_read_timeout and net_write_timeout were added in 3.23.20. The read timeout (the relevant one, on the presumption your query is a SELECT) defaults to 30 seconds, and setting it in the my.cnf file requires a restart of MySQL to take effect. It seems you know all this. I also assume that you are on Win32, since you refer to a .INI file?
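
If restarting the server is awkward, something along these lines may let you raise the timeouts for just the one session. This is only a sketch: the DSN and credentials are placeholders, and it assumes a server recent enough to allow these variables to be SET at runtime (otherwise they have to go in my.cnf/my.ini and you are back to the restart).

  #!/usr/bin/perl
  use strict;
  use warnings;
  use DBI;

  # placeholder DSN/user/password - substitute your own
  my $dbh = DBI->connect( 'DBI:mysql:database=test;host=localhost',
                          'user', 'password', { RaiseError => 1 } )
      or die "connect failed: $DBI::errstr\n";

  # raise the per-session timeouts before firing off the slow query;
  # if the server refuses, they have to be set in my.cnf/my.ini instead
  $dbh->do('SET SESSION net_read_timeout  = 600');
  $dbh->do('SET SESSION net_write_timeout = 600');

  # ... now run the long query on $dbh ...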

Although your query works from the command line, the CGI environment is different. Web servers will abort connections after a period of inactivity, usually in the range of 30-60 seconds. It is possible that your command line login runs the query with a higher priority. Establishing the exact time it takes to time out may be useful in tracking down the issue. Printing epoch timestamps before/after the query, or the total time in the error message, may well provide a useful hint. Not that it is necessarily relevant, but on *nix TCP has a connect timeout of 75 seconds; exact timeout values often point to the source.
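
Something like this will nail the elapsed time down (again just a sketch; the DSN is a placeholder and 'SELECT 1' stands in for your real query):

  #!/usr/bin/perl
  use strict;
  use warnings;
  use DBI;

  # placeholder connection details
  my $dbh = DBI->connect( 'DBI:mysql:database=test;host=localhost',
                          'user', 'password', { PrintError => 0 } )
      or die "connect failed: $DBI::errstr\n";

  my $start = time();                   # epoch seconds before the query
  my $ok    = $dbh->do('SELECT 1');     # substitute the real query here
  my $took  = time() - $start;          # elapsed seconds, success or failure

  defined $ok or die "Query failed after $took seconds: " . $dbh->errstr . "\n";
  print "Query succeeded after $took seconds\n";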

This is a very long time for any query. Are you using appropriate primary keys/unique keys/indexes? You may benefit from having a look at 'DESCRIBE table_name' and 'EXPLAIN query_goes_here' to see if you can speed it up. This of course will not really solve the underlying timeout issue, but it is worthwhile regardless.
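
If you want to do it from DBI rather than the mysql client, a quick dump like this works (the table name and query here are placeholders):

  #!/usr/bin/perl
  use strict;
  use warnings;
  use DBI;

  # placeholder connection details
  my $dbh = DBI->connect( 'DBI:mysql:database=test;host=localhost',
                          'user', 'password', { RaiseError => 1 } )
      or die "connect failed: $DBI::errstr\n";

  # EXPLAIN shows whether the query can use an index or must scan the whole table
  for my $row ( @{ $dbh->selectall_arrayref('EXPLAIN SELECT COUNT(*) FROM your_table') } ) {
      print join( ' | ', map { defined $_ ? $_ : 'NULL' } @$row ), "\n";
  }

  # DESCRIBE lists the columns, their types and any keys on the table
  for my $row ( @{ $dbh->selectall_arrayref('DESCRIBE your_table') } ) {
      print join( ' | ', map { defined $_ ? $_ : 'NULL' } @$row ), "\n";
  }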

If you post the query you may get some useful optimisation suggestions.

cheers

tachyon

Re^2: DBD::MySQL timeout during query
by rcraig (Initiate) on Sep 09, 2004 at 15:08 UTC
    Thanks. Yes, the query is a select:
    SELECT COUNT(DISTINCT seq) FROM Peptide;
    Peptide contains almost 2 million rows (not that big a deal).
    EXPLAIN and DESCRIBE:

    mysql> explain select count(distinct seq) from Peptide;
    +---------+------+---------------+------+---------+------+---------+-------+
    | table   | type | possible_keys | key  | key_len | ref  | rows    | Extra |
    +---------+------+---------------+------+---------+------+---------+-------+
    | Peptide | ALL  | NULL          | NULL | NULL    | NULL | 1860342 |       |
    +---------+------+---------------+------+---------+------+---------+-------+
    1 row in set (0.02 sec)

    mysql> desc Peptide;
    +--------+---------------------------+------+-----+------------+----------------+
    | Field  | Type                      | Null | Key | Default    | Extra          |
    +--------+---------------------------+------+-----+------------+----------------+
    | pepid  | int(10) unsigned zerofill |      | PRI | NULL       | auto_increment |
    | proid  | int(10) unsigned zerofill |      | MUL | 0000000000 |                |
    | seq    | tinytext                  |      | MUL |            |                |
    | mh     | double                    |      |     | 0          |                |
    | expect | double                    |      |     | 0          |                |
    | start  | int(11)                   |      |     | 0          |                |
    | end    | int(11)                   |      |     | 0          |                |
    | charge | tinyint(1) unsigned       | YES  |     | NULL       |                |
    | delta  | float                     |      |     | 0          |                |
    | dida   | int(11)                   |      |     | 0          |                |
    | didb   | int(11)                   |      |     | 0          |                |
    | didc   | int(11)                   |      |     | 0          |                |
    +--------+---------------------------+------+-----+------------+----------------+
    12 rows in set (0.00 sec)
    The index on seq is 'index iseq (seq(20))'. The average length of seq is 16 and the max is 113; the number with length over 50 is 2400 or so. I set net_read_timeout/net_write_timeout to 60 and restarted the server, but it made no difference. I am also using a .ini with the settings from my-huge.cnf.
    When I say that it times out from the perl script, I am only running it from the command line:
    C:\testing\scripts>perl stat_tester.pl
    Connected
    DBD::mysql::db do failed: Lost connection to MySQL server during query at stat_tester.pl line 54.

    so I don't think that it could be a web server problem. This message comes after about 30 seconds.
    All this leads me to believe that it's an issue with DBD or DBI. Quite possibly the error message is misleading?

    I will try upping the index length on the seq field, but still, I don't think it's unreasonable to have a query that takes longer than 30 seconds at some point in the life of one's database.
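
    Roughly what I have in mind is something like this (iseq is the existing index name and 113 is the max length above; whether a longer prefix actually lets COUNT(DISTINCT seq) use the index is another question, and the ALTER will probably take a while on ~2 million rows):

        mysql> ALTER TABLE Peptide DROP INDEX iseq;
        mysql> ALTER TABLE Peptide ADD INDEX iseq (seq(113));
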
    Thanks for the help so far!

      I doubt that it is a problem with DBI/DBD::mysql, mainly because DBD::mysql is reporting that the server closed the connection. It sounds like some sort of timeout, too large a packet, commands being sent out of order, or something else that causes the server to go away (a segfault is a possibility too).

      You might want to enable tracing on the server side to see why the connection is going away.

      You can also try the DBD::mysql mailing list.

      Here is the MySQL developers section on this http://dev.mysql.com/doc/mysql/en/Gone_away.html

      Anyway, the query will take a long time as it needs a full table scan. You must have a slow server/disks, as it takes me 1.67 seconds to run a similar query on a million-row data set. As noted by astroboy, forking can cause this issue (a sketch of the usual fork guard follows the config below), but I suspect it is something to do with your MySQL configuration. Here is what we use with 2GB of RAM; it might be worth a try. The extended timeouts discussed above are not included (we have never needed them) but should be added in your case. There is some suggestion that the connect timeout you can set here may be important.

      [mysqld]
      datadir=/var/lib/mysql
      socket=/tmp/mysql.sock
      set-variable=wait_timeout=200000
      set-variable=key_buffer=256M
      set-variable=max_allowed_packet=32M
      set-variable=table_cache=512
      set-variable=sort_buffer_size=64M
      set-variable=record_buffer=64M
      set-variable=read_buffer_size=32M
      set-variable=myisam_sort_buffer_size=64M
      set-variable=thread_cache=8
      set-variable=query_cache_size=32M
      set-variable=tmp_table_size=32M
      set-variable=max_connections=900
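
      On the forking note: if stat_tester.pl (or whatever drives it) forks while holding a database handle, the child's exit can close the connection underneath the parent and give exactly this 'Lost connection' symptom. A rough sketch of the usual guard, assuming you do fork somewhere at all (the DSN and credentials are placeholders; fork is emulated on Win32 Perl but the same issue applies):

        use strict;
        use warnings;
        use DBI;

        # placeholder connection details
        my $dbh = DBI->connect( 'DBI:mysql:database=test;host=localhost',
                                'user', 'password', { RaiseError => 1 } )
            or die "connect failed: $DBI::errstr\n";

        my $pid = fork();
        die "fork failed: $!" unless defined $pid;

        if ( $pid == 0 ) {
            # child: never reuse the parent's handle - mark it so its DESTROY
            # does not close the server connection out from under the parent,
            # then make a fresh connection if the child needs the database
            $dbh->{InactiveDestroy} = 1;
            my $kid_dbh = DBI->connect( 'DBI:mysql:database=test;host=localhost',
                                        'user', 'password', { RaiseError => 1 } );
            # ... child work here ...
            exit 0;
        }
        waitpid( $pid, 0 );    # parent carries on using its original $dbh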

      As you are on Win32 it is possible that it is a quirk related to the Win32 ports of Perl/DBI/DBD::mysql/MySQL. If you have a Linux box with MySQL it would be interesting to see if the problem is repeatable on the more native platform.

      cheers

      tachyon

        Thanks tachyon - the new settings have fixed the problem. I'm not sure which one did it, as I haven't played with the settings individually, but I think it may have been query_cache_size, since all queries that are run more than once now run faster, including from the mysql prompt.
        Still, the issue remains for any query that takes longer than the prescribed 30 seconds, but whatever.
        Thanks again for helping.
        RC