in reply to Re^4: Reducing application footprint: large text files
in thread Reducing application footprint: large text files
Can you explain why that is "not an option"? What is "wrong" with that idea?
The idea of tens of MB of data embedded in a Perl .pm module seems really wrong to me.
That this .pm module offers no accessor methods for the data it contains sounds doubly wrong to me.
It sounds like this should be a binary data file or an SQLite DB instead of a .pm module.
At least in the Perl distribution I use, there are no extra modules to install in order to write SQLite code; use DBI; is sufficient. SQLite can import on the order of 50K+ records per second, although in this situation it sounds like there is nothing to import. An SQLite DB can serve the purpose of this humongous computer-generated .pm file. Why not have Perl generate the SQLite DB as part of the build process? You can then write a new Perl module whose accessor methods are essentially the interface to this SQLite DB.
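For illustration, here is a minimal sketch of that build-then-access pattern. The names (symbols.db, a symbols table, the lookup_address() accessor) are hypothetical; the real schema would come from whatever the generated .pm currently holds:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect( "dbi:SQLite:dbname=symbols.db", "", "",
        { RaiseError => 1, AutoCommit => 0 } );

    $dbh->do( "CREATE TABLE IF NOT EXISTS symbols"
            . " (name TEXT PRIMARY KEY, address INTEGER)" );

    # In the real build, this loop would read the generated data source.
    my $sth = $dbh->prepare(
        "INSERT OR REPLACE INTO symbols (name, address) VALUES (?, ?)" );
    $sth->execute( 'FOO', 0x400000 );
    $sth->execute( 'BAR', 0x400004 );
    $dbh->commit;

    # An accessor as it might appear in the small interface module:
    sub lookup_address {
        my ( $dbh, $name ) = @_;
        my ($addr) = $dbh->selectrow_array(
            "SELECT address FROM symbols WHERE name = ?", undef, $name );
        return $addr;
    }

    printf "FOO is at 0x%X\n", lookup_address( $dbh, 'FOO' );  # 0x400000
    $dbh->disconnect;

Batching many inserts inside one transaction (AutoCommit => 0, then commit at the end) is what gets you import speeds like that; committing one row at a time would be far slower.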
I will point out that it is possible to dynamically expand or contract the amount of memory that SQLite uses for its work (see the sketch below). I doubt that you will need to do that, but be aware that it is possible. When running complex indexing operations on "large" DBs, I have used this feature to speed things up, but that is unusual and a "weird" thing to do.
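A rough sketch of that trick (the table and index names are hypothetical, and -200000, roughly 200 MB since negative cache_size values are in KiB, is only an illustrative figure):

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect( "dbi:SQLite:dbname=symbols.db", "", "",
        { RaiseError => 1 } );

    $dbh->do("PRAGMA cache_size = -200000");   # grow the page cache for the big job
    $dbh->do("CREATE INDEX IF NOT EXISTS idx_address ON symbols (address)");
    $dbh->do("PRAGMA cache_size = -2000");     # shrink back down afterward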
As an update: when a value like 0x400000 arrives as text, say read from a data file, Perl stores it as ASCII text until you use it in a numeric context (a bare numeric literal in Perl source, by contrast, is compiled straight to a number). How Perl stores things can get very complicated; perlguts might open your eyes. I think you have short-changed the idea of SQLite. Keep in mind that every "smart phone" on the planet has an SQLite DB.
Simple Demo:
    #!/usr/bin/perl
    use strict;
    use warnings;

    my $zero = "00000000000000000";
    print "$zero\n";    # 00000000000000000 -- the exact text
    $zero = $zero + 0;  # numeric context converts it
    print "$zero\n";    # 0
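A small follow-up sketch: the core module Devel::Peek shows the internal view that perlguts describes. The first dump shows only the string (PV) slot populated; after the numeric use, Perl caches a numeric value alongside the string:

    use strict;
    use warnings;
    use Devel::Peek;

    my $zero = "00000000000000000";
    Dump($zero);        # FLAGS = (POK,...), PV = "00000000000000000"
    my $n = $zero + 0;  # numeric context numifies $zero
    Dump($zero);        # now an IV = 0 is cached alongside the PV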