Firstly, Data::Dumper doesn't give you persistence directly.
It is the storing and retrieval of data serialised by Data::Dumper
that gives you persistence.
How you loop through depends on whether you use coarse- or
fine-grained persistence.
Coarse-grained will retrieve all the data at the start of the program and dump it
all at the end. This has the advantage of simplicity, and the data can be manipulated as
normal Perl structures during the program.
Fine-grained will store and retrieve each individual object each time it
is accessed. The advantages are a smaller memory footprint and less risk of
losing data if the program crashes. It needs a more complicated
mechanism for handling the storage.
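A fine-grained approach could store one file per object with Storable, something like this sketch (the directory layout and key-to-filename mapping are my own inventions; a real system would also want locking):

```perl
use strict;
use warnings;
use Storable qw(store retrieve);

# Fine-grained: one file per object, written/read on each access.
mkdir 'store' unless -d 'store';

sub obj_file {
    my ($key) = @_;
    $key =~ s/\W+/_/g;            # crude sanitisation of the primary key
    return "store/$key.sto";
}

sub save_cd {
    my ($key, $cd) = @_;
    store($cd, obj_file($key));   # write just this one object
}

sub load_cd {
    my ($key) = @_;
    my $f = obj_file($key);
    return -e $f ? retrieve($f) : undef;
}
```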
Suggested References:
- The Object Oriented Perl book has an excellent tutorial on
persistence, and suggests how the work could be abstracted away into
a class.
- Module DbFramework::Persistent
- Module Tangram
Tangram is fairly heavy-duty and may be overkill, and both modules assume a
database.
-- Brovnik | [reply] |
Typically, you'd make a class representing the "Storage" that your objects go into, with methods allowing you to:
- Store objects
- Pull out objects by primary key
- Check for the existence of a key
- Check if an object is already present in persistent storage or not
- Delete objects, etc.
So wrap those around your storage mechanism and you'll find that the loop will be a little simpler. You might want to check out some of the object data persistence libraries out there; Tangram comes with my recommendation, but there's also Class::DBI, Alzabo, and a few others.
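As a rough sketch of that kind of wrapper (class name, file format, and method names are mine, not from any of the modules mentioned):

```perl
package CDStorage;    # illustrative name, not a real CPAN module
use strict;
use warnings;
use Storable qw(nstore retrieve);

sub new {
    my ($class, $file) = @_;
    my $data = -e $file ? retrieve($file) : {};   # load existing store, if any
    return bless { file => $file, data => $data }, $class;
}

sub store_obj  { my ($s, $key, $obj) = @_; $s->{data}{$key} = $obj }
sub fetch      { my ($s, $key) = @_; return $s->{data}{$key} }
sub exists_key { my ($s, $key) = @_; return exists $s->{data}{$key} }
sub delete_obj { my ($s, $key) = @_; delete $s->{data}{$key} }
sub save       { my ($s) = @_; nstore($s->{data}, $s->{file}) }

package main;

my $db = CDStorage->new('cds.sto');
$db->store_obj('Kind of Blue', { artist => 'Miles Davis' });
$db->save;
```

With something like this, the "is it already there?" loop collapses to a single `unless ($db->exists_key($title)) { ... }`.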
| [reply] |
I might be misunderstanding you, but I don't think you are talking about persistence. You wouldn't want your entire CD music library in a persistent object.
All you need is the CD music library as a regular database; then, if you need a persistent object pointing to a particular CD, you make that one CD object persistent - not the whole library.
I.e., when your program ends, you serialize the object using Data::Dumper, Storable, or similar. Then, when your program next starts, you decode the serialized object (if it exists) and use that object again. The looping you talk about is quite separate from the persistence.
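The serialize-on-exit, decode-on-start idea, sketched with Storable (file name and CD contents are invented for the example):

```perl
use strict;
use warnings;
use Storable qw(nstore retrieve);

my $file = 'current_cd.sto';   # hypothetical file name

# At program end: serialise just the one object you care about.
my $cd = { title => 'Blue Train', artist => 'John Coltrane' };
nstore($cd, $file);            # nstore = network-order, portable across machines

# At next program start: decode it again if it exists.
my $restored = -e $file ? retrieve($file) : undef;
```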
$ perldoc perldoc | [reply] |
Sorry I don't have the book (sure wish I did though) but I'll give
it a swing.
If we are talking about an SQL database, you have
what looks like a row ("record") of separate data fields, with one
row for each CD, and if I
understand your question correctly, a column Title which is
designated the primary key, and a column CdObj which contains
a serialized data structure.
The database does the looping for you, and you are just
concerned with checking to see if there is already a row
which contains a Title field of that name. If your SELECT
query does not return any data, you are safe.
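With DBI that check might look like this (the SQLite DSN and the table/column names `cds`/`Title` are assumptions based on the description above, not anything from the original post):

```perl
use strict;
use warnings;
use DBI;

# Connect to the database; DSN is an assumption for the sketch.
my $dbh = DBI->connect('dbi:SQLite:dbname=cds.db', '', '',
                       { RaiseError => 1 });

# Let the database do the looping: ask whether the Title is taken.
sub title_exists {
    my ($dbh, $title) = @_;
    my ($n) = $dbh->selectrow_array(
        'SELECT COUNT(*) FROM cds WHERE Title = ?', undef, $title);
    return $n > 0;
}
```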
If you are not using a database but just writing to disk,
it is probably a similar problem. You need not load every
object, just your index. Maybe you keep the index in a separate file
from all those big objects so it can be easily loaded into a hash.
If so, you don't need to loop; just check whether your hash
key exists, and if not (so long as they all got read
in correctly), you are safe.
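The separate-index-file idea, as a sketch (index file name and layout are made up; the values here are just where each big object lives on disk):

```perl
use strict;
use warnings;
use Storable qw(nstore retrieve);

my $index_file = 'cd_index.sto';   # hypothetical small index file

# Load only the small index, never the big objects themselves.
my %index = -e $index_file ? %{ retrieve($index_file) } : ();

sub have_cd { my ($title) = @_; return exists $index{$title} }

# When a new CD is registered, record it and re-save the index.
sub register_cd {
    my ($title, $obj_location) = @_;
    $index{$title} = $obj_location;
    nstore(\%index, $index_file);
}
```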
Unpacking objects one at a time and looking inside each one
would be a terribly slow way to search a database. Much
better to let Perl or SQL "look" at a whole column at a time
for you, without all that disk access and computing overhead.
I don't understand why you need to serialize an object at all,
actually; it would seem much more useful to have
a separate database field for every part of the object, so
you could search by singer, track length, genre, etc. Does this
answer the question? If not, please try to explain a bit more,
thanks.
| [reply] |
I would just suggest storing a hash as class data whose keys are the primary keys of all of the "registered" CDs you've got stored. Then you just have to check with exists against the hash to see if you've got it already or not. | [reply] [d/l] |
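That class-data approach might be sketched like this (the class name is mine, purely for illustration):

```perl
package CDRegistry;   # illustrative class name
use strict;
use warnings;

# Class data: one hash shared by the whole class, keyed by primary key.
my %registered;

sub register   { my ($class, $key) = @_; $registered{$key} = 1 }
sub registered { my ($class, $key) = @_; return exists $registered{$key} }

package main;

CDRegistry->register('Abbey Road');
my $have_it = CDRegistry->registered('Abbey Road');   # true
```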