in reply to Invert a hash... not a FAQ (I hope)

If you care about performance, the following avoids some hash lookups and so should perform better:
    sub invert {
        my $x = shift;
        my $y = {};
        while (my ($key1, $value1) = each %$x) {
            while (my ($key2, $value2) = each %$value1) {
                $y->{$key2}{$key1} = $value2;
            }
        }
        return $y;
    }
I don't think you can get much faster than this in pure Perl.
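In case it helps, here is a tiny example of what it produces (the data is made up for illustration):

    my $h   = { a => { x => 1, y => 2 }, b => { x => 3 } };
    my $inv = invert($h);
    # $inv is now { x => { a => 1, b => 3 }, y => { a => 2 } }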

If you want to speed this up, save memory, etc., then you have to look into why you find the need to invert multi-level hashes regularly. Since you haven't shown an example of a design problem where you find yourself wanting to do that, I can't comment on whether your design is appropriate or whether there is an alternate way to tackle those problems that would work better.

In short, I can think of some situations where inverting a hash is absolutely the right thing to do, and others where there is no real need to do it. Without more context, there is no point in my guessing which applies to you.

Re^2: Invert a hash... not a FAQ (I hope)
by djacobow (Initiate) on Jan 22, 2009 at 01:37 UTC
    Thanks! I'll give the snippet above a go.

    Here is an example of what I'm doing. I'm downloading very detailed information on the wholesale price of electricity. In my market, there is an individual price for each of several thousand nodes in the market, and there is a separate price for each hour, and in the "real-time" market, for every five minutes.

    The data comes in flat CSV files for each day that look like

        node,hour,interval,price
        bob,3,4,45.64
        ...

    I frequently need to generate reports from this data such as the following (the first one is sketched in code just after the list):

    • what is the price of energy at each hour averaged over all nodes (or weighted over nodes more heavily traded)?
    • what is the price of energy at each node, averaged over all hours (or weighted over the hours more heavily traded)?
    • what is the highest price observed in hour 16 over the last 5 weekdays?
    • what day had the greatest hourly variation among the following n nodes in the past three months?
    • various charts, graphs, histograms, maps, and statistical analyses, all generated across different "axes" of space and time.
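
    For the first of these, the kind of calculation I end up writing by hand looks roughly like this (the hash shape and sample data are made up for illustration):

        # Assume a day's prices in a hash shaped like
        # $price{$node}{$hour}{$interval} = price (tiny made-up sample):
        my %price = (
            bob   => { 3 => { 1 => 45.64, 2 => 47.00 } },
            alice => { 3 => { 1 => 44.00 } },
        );

        my %hourly;
        for my $node (keys %price) {
            for my $hour (keys %{ $price{$node} }) {
                for my $p (values %{ $price{$node}{$hour} }) {
                    $hourly{$hour}{sum}   += $p;
                    $hourly{$hour}{count} += 1;
                }
            }
        }
        printf "hour %2d: %.2f\n", $_, $hourly{$_}{sum} / $hourly{$_}{count}
            for sort { $a <=> $b } keys %hourly;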

    In general, what I do is load up the data I'm interested in and plop it into a hash of hashes of hashes ... that is organized most conveniently for the burning management question du jour, but then the next day, a different question would be easier to calculate with a different arrangement. Often, these analyses get accreted into a daily report generated by one script, but I'm trying to minimize the amount of shuffling going on.
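
    For the CSV format above, the loading step is roughly this (the file name is invented; real code needs more error handling):

        use strict;
        use warnings;

        my %price;    # $price{$node}{$hour}{$interval} = price
        open my $fh, '<', 'prices-2009-01-21.csv' or die "open: $!";
        <$fh>;        # skip the node,hour,interval,price header
        while (my $line = <$fh>) {
            chomp $line;
            my ($node, $hour, $interval, $p) = split /,/, $line;
            $price{$node}{$hour}{$interval} = $p;
        }
        close $fh;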

    Run-time is an issue, but size is usually a bigger problem.

    dave

      The alternate design that I would use is to dump the dataset into a database and then query the database multiple times.

      At first that may feel restrictive. But with some experience you'll likely find that the SQL for a report is a lot shorter and less error-prone than attempting to produce that same dataset yourself in code.
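
      For instance, assuming the data sat in a table named prices with columns node, hour, interval and price (names I'm making up here), your "average price per hour across all nodes" report collapses to a single query:

          use DBI;

          my $dbh  = DBI->connect('dbi:SQLite:dbname=prices.db', '', '', { RaiseError => 1 });
          my $rows = $dbh->selectall_arrayref(
              'SELECT hour, AVG(price) FROM prices GROUP BY hour ORDER BY hour'
          );
          printf "hour %2d: %.2f\n", @$_ for @$rows;

      Compare that to the nested loops needed to do the same thing over a hash of hashes.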

        Yeah, a database is an obvious solution that I am resisting out of pure stubbornness (and because I am dreading the learning curve for DBI, SQL, etc.).

        I've also conjectured that the "db that is the filesystem", indexed by date in my case, will be faster than a "real" database, particularly considering that *usually* I can get all the information I need for a given subset of days in memory at once without a problem.

        I'll be sad, though, if after all the trouble DBI ends up being slower.

      When faced with a similar situation, I'd build all the data structures I'd want concurrently as I was reading my input. That was before I knew how to use a database.

      Looking at your situation now, I'd say load any given data set into a database (PostgreSQL, SQLite, MySQL) with DBI and then query it for management's burning questions. If you're not familiar with SQL already, you might not see the advantage of this right away, but databases are already designed to do this kind of work on data too big to fit in memory.
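
      A rough sketch of what that load step might look like with DBD::SQLite (the table, column and file names are assumptions, not tested code):

          use strict;
          use warnings;
          use DBI;

          my $dbh = DBI->connect('dbi:SQLite:dbname=prices.db', '', '',
                                 { RaiseError => 1, AutoCommit => 0 });
          $dbh->do('CREATE TABLE IF NOT EXISTS prices (node TEXT, hour INTEGER, interval INTEGER, price REAL)');

          my $sth = $dbh->prepare('INSERT INTO prices (node, hour, interval, price) VALUES (?, ?, ?, ?)');
          open my $fh, '<', 'prices-2009-01-21.csv' or die "open: $!";
          <$fh>;                                   # skip the header line
          while (my $line = <$fh>) {
              chomp $line;
              $sth->execute(split /,/, $line);
          }
          close $fh;
          $dbh->commit;                            # one transaction keeps the bulk insert fast

      After that, each of the reports above is a SELECT with the appropriate GROUP BY, and the data never has to fit in memory.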