in reply to Loosing value and my mind on this one, help!!!

FYI, SQL queries do have a maximum length, and you may crap out your query if you stuff too many tens of thousands of items inside it. It also runs with roughly n-squared efficiency, since the database has to compare every item in the list against every account in each of the tables without using an index. Depending on how many rows are in each of those tables, and how often this needs to run, there may be a much better way to do it.
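One common alternative to a huge IN (...) list is to load the values into an indexed temporary table and JOIN against it, so the database can use the index instead of scanning the list for every row. A minimal sketch in Python with SQLite (the table and column names here are made up for illustration; the same idea works from Perl DBI against MySQL):

```python
import sqlite3

# Hypothetical schema: an "accounts" table we want to filter by a large ID list.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT)")
cur.executemany("INSERT INTO accounts VALUES (?, ?)",
                [(i, "acct%d" % i) for i in range(20000)])

wanted_ids = list(range(0, 20000, 3))  # stand-in for the big value array

# Load the IDs into an indexed temp table, then JOIN instead of IN (...).
cur.execute("CREATE TEMP TABLE wanted (id INTEGER PRIMARY KEY)")
cur.executemany("INSERT INTO wanted VALUES (?)", [(i,) for i in wanted_ids])
cur.execute("SELECT a.id, a.name FROM accounts a JOIN wanted w ON a.id = w.id")
rows = cur.fetchall()
print(len(rows))  # one row per matching ID
```

The query text stays tiny no matter how many values there are, so the maximum-query-length problem disappears along with the full scans.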
  • Comment on Re: Loosing value and my mind on this one, help!!!

Replies are listed 'Best First'.
Re^2: Loosing value and my mind on this one, help!!!
by Anonymous Monk on Dec 02, 2011 at 14:28 UTC
    I know about the max length on SQL queries, but this code was not bad at all, since it only lets 100 values into the IN operator at a time (out of more than 20000 values in the array being used). I couldn't find anything in Perl and SQL that would work better than this, but if anyone could show me some code that does, that would be nice too. Thanks very much!!!
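    For the record, the 100-values-per-IN batching described above is easiest to do safely with bound placeholders rather than interpolated values. A sketch in Python with SQLite (the Perl DBI version is analogous; the table and column names are made up):

    ```python
    import sqlite3

    con = sqlite3.connect(":memory:")
    cur = con.cursor()
    cur.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY)")
    cur.executemany("INSERT INTO accounts VALUES (?)", [(i,) for i in range(20000)])

    values = list(range(0, 20000, 7))  # stand-in for the >20000-value array
    BATCH = 100                        # 100 values per IN (...), as in the post
    matched = []
    for start in range(0, len(values), BATCH):
        chunk = values[start:start + BATCH]
        placeholders = ",".join("?" * len(chunk))  # "?,?,...,?", one per value
        cur.execute("SELECT id FROM accounts WHERE id IN (%s)" % placeholders, chunk)
        matched.extend(row[0] for row in cur.fetchall())

    print(len(matched))
    ```

    Placeholders keep the values out of the SQL text itself, which avoids quoting bugs and SQL injection, and the batch size can be tuned without touching the query logic.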
      Can you describe your table structure? Go into the command line and type SHOW CREATE TABLE tablename, where tablename is each of your tables one at a time. This, incidentally, is an excellent way to get index names if you didn't name your indexes and need to remove one.
        Go into the command line and type SHOW CREATE TABLE tablename...

        That's one of my favorites, too. In fact, after I wrote a perl command line tool to let me run SQL queries directly via the bash shell, I created this little shell function, which has come in very handy:

        sct () { dbget -a $1 -s "show create table $2" | perl -pe 's/^\S+\t//; s/, /,\n/g; s/\( /(\n/; s/\) \)/)\n)/'; }
        My perl script "dbget" has its own hash (actually, the hash is in a separate module) of known database connection credentials for whatever servers, databases and user accounts I need to connect to (with a simple mnemonic string as a hash key to identify each one); "dbget" takes a db/user identifier "-a ...", and an sql select or show statement "-s ...", connects to the chosen database, runs the given query, and spits the result to stdout (as tab-delimited lines).

        But the raw output of "show create table ..." contains the whole list of column definitions, etc, in one "field" (on one line of output), which can be hard to read, so the shell function just pipes that to a perl one-liner to put line breaks in the right places.

        This way, whenever I want to see the output of "show create table table_name" (e.g. to see whether a given column is being indexed), I just do this in a bash shell:

        sct db_user_id table_name
        Update: This just occurred to me: in order to do "proper" tab-delimited output, my "dbget" tool has a default behavior that normalizes all field-internal whitespace strings to single spaces. But it has an option to "keep" field-internal whitespace as-is. So the perl one-liner shown above would only apply if the output from "show create table" were filtered to convert field-internal line-feeds to spaces. My shell function could have been stated more simply as:
        sct () { dbget -a $1 -s "show create table $2" -w keep | cut -f2; }