in reply to RE: Community Teaching Project
in thread Community Teaching Project

Glad you're in for the project.

These are some of the issues we need to decide first. Are we even going to use a database? That is, do we want a finished project that requires MySQL or PostgreSQL to be installed, or would we prefer CSV (Comma Separated Values) files? If we want a cross-platform product (and that's what this would be, if we were a real company), we need to know what users are willing to install to run it.

MySQL is free for Linux but, as I recall, requires a license fee for NT (this may have changed; I can't get to the tcx.se site at the moment).

But in any case, you're definitely right: if we don't get the design correct from the start, it will rapidly come back to haunt us.

--Chris (look Ma! No speeling errors this time)

RE: RE: RE: Community Teaching Project
by jlp (Friar) on Jun 19, 2000 at 03:11 UTC
    A database will almost certainly be overkill for a project of this scale. The overhead of a database is just not worth it when the dataset is as small as this figures to be; see jwz's rant about mail summary files. I also think a database would add too much complexity to a tutoring project. My ought-two cents, anyway.
RE: RE: RE: Community Teaching Project
by Ovid (Cardinal) on Jun 15, 2000 at 23:39 UTC
    Well, my thought is that we use the DBI module and keep a global variable, $DATABASE, that stores the type of database the user has. That way, they just input the name of their database and, if DBI supports it, we're ready to roll. The problem then becomes ensuring that we only implement features that every database can handle, or else developing separate routines for each database. That can be a hassle.
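
    Roughly what I have in mind, just as a sketch (the driver names, DSN attributes, and the 'students' table are placeholders, not a real schema):

        #!/usr/bin/perl -w
        use strict;
        use DBI;

        # Global holding the DBD driver name the user picked,
        # e.g. 'mysql', 'Pg', or 'CSV'.
        use vars qw($DATABASE);
        $DATABASE = 'mysql';

        # The attributes after the second colon differ per driver, so a
        # real version would need a small per-driver lookup like this.
        my %dsn_extra = (
            mysql => 'database=teaching;host=localhost',
            Pg    => 'dbname=teaching',
            CSV   => 'f_dir=./data',
        );

        my $dsn = "dbi:$DATABASE:$dsn_extra{$DATABASE}";
        my $dbh = DBI->connect($dsn, 'user', 'password', { RaiseError => 1 })
            or die "Cannot connect: $DBI::errstr";

        # Everything past this point talks only to $dbh, so it doesn't
        # care which driver is underneath.
        my $sth = $dbh->prepare('SELECT name FROM students');
        $sth->execute;
        while (my ($name) = $sth->fetchrow_array) {
            print "$name\n";
        }
        $dbh->disconnect;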

    If the user only has CSV available, we'll have performance issues to deal with if the data grows significantly. We also wouldn't have transactions, since some backends (CSV and MySQL, for example) don't support them. Of course, for a small-scale indexing application, these may not be significant concerns.
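
    Something like this might let us use transactions where the driver supports them and fall back to plain autocommit where it doesn't (again, only a sketch, assuming a $dbh connected as above):

        # Run a batch of statements, inside a transaction if the driver
        # allows one.
        sub run_batch {
            my ($dbh, @sql) = @_;

            # Try to turn AutoCommit off; drivers without transaction
            # support (DBD::CSV, for instance) will die here, which the
            # eval turns into a plain "no transactions" flag.
            my $have_txn = eval { $dbh->{AutoCommit} = 0; 1 };

            eval {
                $dbh->do($_) for @sql;
                $dbh->commit if $have_txn;
                1;
            } or do {
                my $err = $@ || 'unknown error';
                $dbh->rollback if $have_txn;
                $dbh->{AutoCommit} = 1 if $have_txn;
                die "batch failed: $err";
            };

            $dbh->{AutoCommit} = 1 if $have_txn;
        }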

    Hmm... I could go on for hours. This will have to be worked out through whatever means we come up with for communicating on this project.

      Well, the problem here is also the target market. If we're working on this as a small desktop app, then a significant portion of the users aren't going to have a database installed, whether they're on Windows or *nix.

      The best option might be two separate versions: one that's DB-driven, with a single "required" DB like MySQL, and one that's CSV-driven. The CSV version, done properly, would satisfy the cross-platform requirement, while the MySQL version offers higher performance.

      As for project management and communication - we're working on that one.

      - Ozymandias

        The best option might be two separate versions: one that's DB-driven, with a single "required" DB like MySQL, and one that's CSV-driven. The CSV version, done properly, would satisfy the cross-platform requirement, while the MySQL version offers higher performance.

        I believe there is a DBD::CSV module that would allow us to design against a database interface, and still have the option of pointing that interface at CSV files if needed.
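
        For example, something along these lines ought to work (the file, table, and column names are just made up for illustration):

            use strict;
            use DBI;

            # DBD::CSV maps each table to a plain file under f_dir, so
            # this creates and queries a file in ./data.
            my $dbh = DBI->connect('dbi:CSV:f_dir=./data', undef, undef,
                                   { RaiseError => 1 })
                or die "Cannot connect: $DBI::errstr";

            $dbh->do('CREATE TABLE students (name CHAR(64), level CHAR(16))');
            $dbh->do('INSERT INTO students VALUES (?, ?)', undef,
                     'Alice', 'beginner');

            my $rows = $dbh->selectall_arrayref(
                'SELECT name, level FROM students');
            print "$_->[0] ($_->[1])\n" for @$rows;

            $dbh->disconnect;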