in reply to DBI strangeness

Quite possibly it is because you are using a timestamp as part of the primary key.

Don't do this, NEVER do this! I have dealt with this kind of database design before. When your program executes fast enough, the timestamps of the two operations will be the same, which causes a failure (duplicate key); when it is not that fast, you get two different timestamps and your program will work. Unpredictable is the word, and you don't want it.
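
To make the failure mode concrete, here is a minimal sketch (the DSN, table, and column names are made up; assume tstamp is the primary key):

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:mysql:testdb', 'user', 'pass',
                           { RaiseError => 1 });

    # Two inserts in the same clock second get the same NOW() value;
    # the second one dies with "Duplicate entry ... for key 'PRIMARY'".
    $dbh->do(q{INSERT INTO stats (tstamp, hits) VALUES (NOW(), ?)}, undef, 10);
    $dbh->do(q{INSERT INTO stats (tstamp, hits) VALUES (NOW(), ?)}, undef, 20);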

Re^2: DBI strangeness
by jimbus (Friar) on Oct 14, 2005 at 16:36 UTC

    Never say NEVER

    I probably should have detailed it better, but posting here is often a matter of striking a balance between providing enough information to get relevant answers and providing so much that you get a million and six tangents questioning every aspect of the code except the question asked. On the other hand, if you had read as far as the code or the traces, you would have seen that your issue was anticipated and that there was more involved.

    The application is not real time; it is processing timestamped log data. The most important information coming from the file is URLs/second. I'm deriving URLs/second both overall and by URL. I build hashes with the per-second counts keyed on epoch seconds and dump the data out to a database. Collisions can only occur if there is a time overlap between incoming logs. The insert is inside an eval block so that if it fails, the original and new records are summed with an update. This is logical, and it has been tested and works.
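
    The insert-inside-eval, sum-on-failure pattern described above might look roughly like this (assuming a connected $dbh with RaiseError enabled; the table and column names are hypothetical):

        # One row per (tstamp, url): [epoch-second timestamp, URL, count]
        my @rows = ( ['2005-10-14 16:36:00', '/index.html', 42] );

        my $insert = $dbh->prepare(
            q{INSERT INTO url_counts (tstamp, url, hits) VALUES (?, ?, ?)});
        my $update = $dbh->prepare(
            q{UPDATE url_counts SET hits = hits + ? WHERE tstamp = ? AND url = ?});

        for my $row (@rows) {
            my ($tstamp, $url, $hits) = @$row;
            eval { $insert->execute($tstamp, $url, $hits) };
            if ($@) {
                # Duplicate key: a row for this second already exists,
                # so sum the new count into the stored one.
                $update->execute($hits, $tstamp, $url);
            }
        }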

    In the end, the app users only see data on an hourly basis, so keeping the data at one-second granularity was expensive in both disk space and query time. In a fit of laziness, I decided to leave the original script alone, run the consolidation query in a new script, and create a new database based on the hourly averages, peaks, and totals. I leveraged the insert-in-eval, else-update logic from the previous script, and it works for the overall table, which has a single-component key, the timestamp.

    One difference between the new and old scripts is the database they point at. The original one uses DATETIME for the tstamp field, but the new one uses TIMESTAMP. Are you implying that there is an issue with the TIMESTAMP data type that makes it unstable? I did notice, when I made it a primary key, that its default is the current time, but if you look at the data I print out and the trace's bind statements, I'm never putting in a null tstamp.

    I googled tstamp and didn't find much. MySQL's reference page pretty much ignores it: it says TIMESTAMP is like DATETIME and that it will cover the differences, but when you follow the link to TIMESTAMP properties as of 4.1, it mostly talks about 5.0. I'll alter the table so that tstamp is a DATETIME and test that, but I still don't see what's wrong with what I am doing.
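
    The test described is a one-line schema change (the table name is hypothetical):

        # Convert the column from TIMESTAMP to DATETIME, then inspect
        # the resulting definition to confirm no auto-update clause remains.
        $dbh->do(q{ALTER TABLE hourly_stats MODIFY tstamp DATETIME NOT NULL});
        my ($table, $ddl) = $dbh->selectrow_array(q{SHOW CREATE TABLE hourly_stats});
        print "$ddl\n";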

    Jimbus

    Never moon a werewolf!
      The UPDATE statement can set a TIMESTAMP field to the current time automatically; maybe this is causing the problem.
      poj
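
      For illustration (table name hypothetical): in MySQL 4.1/5.0 the first TIMESTAMP column in a table is automatically set to the current time by any UPDATE that changes another column, unless the statement assigns that column explicitly:

          # This silently moves tstamp to NOW(), breaking the key:
          $dbh->do(q{UPDATE hourly_stats SET hits = hits + ? WHERE tstamp = ?},
                   undef, $hits, $tstamp);

          # Assigning the column to itself preserves the stored value:
          $dbh->do(q{UPDATE hourly_stats SET hits = hits + ?, tstamp = tstamp
                     WHERE tstamp = ?},
                   undef, $hits, $tstamp);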
Re^2: DBI strangeness
by pajout (Curate) on Oct 14, 2005 at 07:47 UTC
    Yes, primary keys can be numbers generated from a sequence (which guarantees their uniqueness) in 99% of cases. Exceptionally, you can use characters or some composite types.
    Good theme for a little flame, but not perl-related :)
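
    In MySQL terms, that sequence-generated key is an AUTO_INCREMENT column; a sketch (table name hypothetical) with the timestamp demoted to an ordinary indexed column:

        $dbh->do(q{
            CREATE TABLE hourly_stats (
                id     INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
                tstamp DATETIME NOT NULL,
                hits   INT UNSIGNED NOT NULL,
                KEY (tstamp)
            )
        });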

      You don't think time is numeric, sequential, and unique when managed in a discrete manner?

      That's crazy talk

      Jimbus

      Never moon a werewolf!