A database will almost certainly be overkill for a project of this scale. The overhead of a database is just not worth it when the dataset is as small as this figures to be; see jwz's rant about mail summary files. I also think a database would add too much complexity to a tutoring project. My ought-two cents, anyway.
Well, my thought is that we use the DBI module and have a global variable $DATABASE that stores the type of database the user has. That way, they just input the name of the database and, if DBI supports it, we're ready to roll. The problem then becomes ensuring that we only implement functions that all databases can use, or developing separate routines for each database. This can be a hassle.
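For what it's worth, here's a rough sketch of that idea. The $DATABASE value, the get_dsn() helper, and the table/column names are all made up for illustration; the point is that the driver name is the only thing the user has to supply, and everything else sticks to the portable subset of DBI:

    use strict;
    use warnings;
    use DBI;

    # Hypothetical global: the user supplies a DBD driver name ("mysql", "CSV", "Pg", ...).
    our $DATABASE = 'CSV';

    # Build a DSN from the driver name. The part after the driver differs per backend,
    # so this helper is where any backend-specific knowledge would live.
    sub get_dsn {
        my ($driver) = @_;
        return "dbi:$driver:";    # placeholder; real DSNs need db name, host, f_dir, etc.
    }

    my $dbh = DBI->connect( get_dsn($DATABASE), $ENV{DB_USER}, $ENV{DB_PASS},
                            { RaiseError => 1, PrintError => 0 } );

    # Plain prepare/execute/fetch works the same on every DBD driver.
    my $sth = $dbh->prepare('SELECT title FROM documents WHERE author = ?');
    $sth->execute('vroom');
    while ( my ($title) = $sth->fetchrow_array ) {
        print "$title\n";
    }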
If the user only has CSV available, we'll have performance issues to deal with if the database grows significantly. We also wouldn't support transactions, as some databases (CSV and MySQL, for example) do not support them. Of course, for a small-scale indexing application, these may not be significant concerns.
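One rough way to cope with the transaction gap, if we go DBI-only, is to probe for support at runtime and fall back to plain autocommit. Just a sketch, assuming a $dbh connected as above and a made-up documents table:

    # begin_work dies if the driver can't turn AutoCommit off (DBD::CSV, for instance),
    # so wrapping it in eval tells us whether real transactions are available.
    my $has_txn = eval { $dbh->begin_work; 1 } ? 1 : 0;

    $dbh->do('INSERT INTO documents (title, author) VALUES (?, ?)',
             undef, 'Perl for Beginners', 'vroom');

    if ($has_txn) {
        $dbh->commit;    # all-or-nothing where the backend supports it
    }
    # Without transactions, each statement simply takes effect as it runs.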
Hmm... I could go on for hours. This will have to be dealt with through whatever means we come up with for communication on this project.
Well, the problem here is also the target market. If we're working on this as a small desktop app, then a significant portion of the users aren't going to have a database installed, regardless of whether they're on Windows or *nix.
The best option might be two separate versions: one that's DB-driven, using a "required" DB like MySQL, and one that's CSV-driven. The CSV version would satisfy the cross-platform requirement (done properly), while the MySQL version offers higher performance.
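If it helps, the two versions might only need to differ in their connect string; everything written against the shared DBI subset should run on either handle. A sketch, with made-up database, user, and directory names, assuming DBD::mysql and DBD::CSV respectively:

    # Higher-performance version: needs a MySQL server and DBD::mysql installed.
    my $mysql_dbh = DBI->connect('dbi:mysql:database=tutor;host=localhost',
                                 'tutor', 'secret', { RaiseError => 1 });

    # Zero-install version: DBD::CSV keeps each table as a flat file under f_dir.
    my $csv_dbh = DBI->connect('dbi:CSV:f_dir=./data', undef, undef,
                               { RaiseError => 1 });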
As for project management and communication - we're working on that one.
- Ozymandias