EZDBI
1 direct reply
by Juerd
on Jun 12, 2002 at 19:09
This review was requested by one of the module's authors, belg4mit. A
while ago, we discussed some easier ways of using DBI, and of
course his EZDBI and my DBIx::Simple were mentioned. I
will not discuss DBIx::Simple in this review. If anyone wants to review
it, please do so.
What is EZDBI?
EZDBI (Easy DBI) is a module that provides functions for database
connections, using DBI as its backend. Many people find DBI either too
hard or too much work (why fetchrow_arrayref if you can have
something shorter?), and several modules try to solve that problem. EZDBI
uses no object orientation, so anyone without much programming
experience can install and use the module immediately. This review covers
version 0.1.
Its name
Normally, modules in CPAN have some sort of hierarchy. It isn't always
consistent or well-chosen, but most modules are grouped in top-level
namespaces. DBI extensions should be in the DBIx:: namespace (that's
right, not even DBI::), but this one uses a top-level namespace of its
own. It is hard to find when searching CPAN: a module-name search for
keywords like 'Easy' and 'Simple' does not return EZDBI. Even when you
look for modules with 'DBI' in the name, the meaning of EZDBI is unclear
until you pronounce it letter by letter in English (which may not come
naturally to those who don't speak English natively). There is a module
called EasyDB, one called DBIx::Easy and one called EZDBI. That isn't
very handy.
Connecting to a database
The connecting function is simply called Connect. It's a
straightforward function call that takes either a DBI DSN or named
arguments.
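Based on that description, connecting might look something like this (a hypothetical sketch: the DSN and credentials are made up, so check the EZDBI documentation for the exact argument forms):

```perl
use EZDBI;

# Connect with a DBI-style DSN plus credentials (hypothetical values)
Connect 'mysql:mydb', 'username', 'password';
```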
Querying
Of course, the most important thing you will want to do with a database
module is sending queries to the database and getting results. EZDBI
provides a function for each of the most used SQL commands:
Select, Insert, Update, Delete. If
you have a normal database select like SELECT foo FROM bar, you
would put the Select part outside the quotes and write Select 'foo
From bar';. I changed FROM to From to match the
ucfirsted Select.
The Select function is rather intelligent. By default, it fetches
everything and returns it, but it can also return an object that can be
used to fetch one row at a time. Insert has a nice ??L shortcut
that is turned into (?, ?, ?, ...) with as many question marks
as the number of remaining arguments.
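The idea behind ??L can be sketched in a few lines of plain Perl (a hypothetical illustration of the concept, not EZDBI's actual code):

```perl
use strict;
use warnings;

# Sketch of a ??L-style expansion: replace ??L with a parenthesized
# list containing one '?' per remaining argument.
sub expand_qql {
    my ($sql, @args) = @_;
    my $placeholders = '(' . join(', ', ('?') x @args) . ')';
    $sql =~ s/\?\?L/$placeholders/;
    return $sql;
}

print expand_qql('Insert into users values ??L', 'Alice', 42, 'a@example.com'), "\n";
# prints: Insert into users values (?, ?, ?)
```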
There is Sql for when the given functions cannot express the
SQL you want. Unfortunately, this uses DBI's do, so don't
expect to be able to get information out of it (for example with MySQL's
SHOW TABLES command).
Documentation
Every module needs documentation. Without the documentation, I wouldn't
be able to write this review, as I didn't actually test the module
thoroughly. I did not have to, as the manual provided almost everything
I wanted to know, and the source gave me the rest of the clues. Although
the programming style itself is not the one I like, the manual is very
clear and easy to read. It is written with beginning programmers in
mind, so the EZ is worth its bits. I especially like the vivid examples.
However, I found something in the documentation that bothers me. EZDBI's
manual states that EZDBI takes care of ? placeholders, but this
module only expands its special ??L placeholder (which is in
turn only for Insert). Placeholder substitution is performed by
DBI's execute(), but up to three times, EZDBI takes credit for what DBI
does.
Multiple databases
This is quite a hassle with EZDBI. You can have multiple databases, but
you'll have to use Use, which can be compared to Perl's own
select that selects a filehandle. This way, copying data from one
database to another (which is not uncommon: a lot of people migrate from
MySQL to Postgres, for example) has to be done using temporary variables
and a lot of Use calls. I don't think this is EZ; object
orientation would be so much better for this.
Disconnecting
When the Perl interpreter ends, it will destroy all variables, including
the DBI object that is stored inside of EZDBI. That way, database
connections are properly terminated. If the interpreter never exits
(e.g. when using PPerl or mod_perl), you're stuck with the
database connection even after your script ends. You will have to
explicitly call Disconnect. EZDBI is not object oriented, so
there is no object to destroy automatically when it goes out of scope.
This is potentially very dangerous.
Conclusion
I wouldn't use EZDBI myself, but not because the module is bad. It's a very
good module, but I happen to like object orientation, and I prefer raw SQL to
semi-abstracted SQL. Maybe I'll steal the ??L idea one day, though.
If object orientation is too hard for you, or if you want to do things the
easiest way, EZDBI is perfect for you. Don't forget to Disconnect
explicitly when using EZDBI in mod_perl, because otherwise someone else
might be able to access your database!
This is the very first module review I've ever written. Please
tell me if I did anything wrong.
HTML::FromText
3 direct replies
by FoxtrotUniform
on May 22, 2002 at 23:38
Summary:
HTML::FromText is a clever and handy module,
but it lacks flexibility.
Author:
Gareth D.
Rees.
Description:
HTML::FromText takes ASCII text and marks it up into
web-friendly HTML based on a few fairly well-defined
conventions, including the familiar *bold*
and _underline_ indicators you'll recognize
from Usenet and email. It'll also escape HTML
metacharacters, preserve whitespace where sensible, and
even recognize and mark up tables. This is excellent
news for those of us who absolutely despise writing raw
HTML, but are too macho to use WYSIWYG editors.
Using HTML::FromText couldn't be simpler: there's only
one function you care about, and it has a nice clean
interface:
my $html_code = text2html($text, %args);
For a full description of the available options, I'll
refer you to the excellent HTML::FromText
documentation, but here are some examples of flags you
can set (or clear) in %args:
- paras: treat text as paragraph-oriented
- blockcode: mark up indented paragraphs as code
- bullets: mark up bulleted paragraphs as an unordered
list
- tables: guess/recognize tables in the text and mark
them up in proper HTML
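For instance, converting a plain-text snippet with paragraphs and bullets might look like this (a sketch based on the documented flags; the exact HTML produced will vary):

```perl
use HTML::FromText;

my $text = <<'END';
This is *important* text.

  - first point
  - second point
END

# paras and bullets are two of the flags listed above
my $html = text2html($text, paras => 1, bullets => 1);
print $html;
```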
Better yet, HTML::FromText produces remarkably clean, if
not exactly elegant, HTML.
What could be better? Well....
HTML::FromText isn't particularly flexible. Want to
mark up _foo_ as <i>foo</i>?
Too bad, you're locked into underline tags. Jealous of
Perl Monks' [link] convention? Sorry, you
can't easily build one on top of HTML::FromText -- it'll
only do two balanced delimiters. Want to treat some
indented paragraphs as code and others as quotes in the
same document? No can do, it's a global option.
Since there are only a few different behaviours (mark
up *text*, mark up _text_, mark up indented paragraphs,
mark up lists, mark up tables), I'd have liked to see a
callback-oriented interface, with the existing behaviours
as defaults. That way, I could replace the underscore
callback with my own (to do italicised text), rewrite the
headings and titles callbacks
to use somewhat less exuberant header tags, and hack
together a smarter callback for indented paragraphs that
tries to guess whether the text in question is code or a
quote, and do the right thing.
This wouldn't be so big a deal if you could escape raw
HTML, but you can't. It's either convert all HTML
metacharacters to their corresponding entities, or none.
This is an obnoxious omission: I like to be able to keep
<s and >s in my text without
having to escape them by hand, but if HTML::FromText isn't
going to do it for me, I'd like to be able to italicize
text, too.
And would it really be so hard to recognize a
line of more than, say, ten hyphens as a <hr>?
That would really come in handy.
I've spent quite a bit of text ranting about its
shortcomings, but I really like HTML::FromText. It's a
godsend to those of us who hate HTML, but love our text
editors.
Ups:
- Does an excellent job of translating text to HTML
- Table recognition is intuitive and seamless
- Good documentation
- Clean interface
Downs:
- Doesn't support some fairly obvious markups
- Inflexible
Conclusion:
Excellent for translating simple documents to HTML.
Don't use it to write your website, since it only does
bare-bones markup. On second thought: do use it
to write your website, since you'll end up with far less
bloat. :-)
Update: Added author information. Thanks
Juerd!
Acme::Don't
4 direct replies
by rob_au
on May 09, 2002 at 23:16
Acme::Don't
This module, from the warped mind of Damian Conway, provides only one export, the don't command. This command is used in exactly the same way as the do BLOCK function except that, instead of executing the block it controls, to quote the module POD, "it...well...doesn't". The result of wrapping a code block by the don't command is a no-operation which, regardless of the contents of the block, returns undef.
Why review this module? Well, in contrast to many of the other modules which have found their way into the Acme:: namespace, I can actually see a use for this module in development. How many times have you been working on some code and wanted to comment out a section of it to test specific code components? With Acme::Don't, it is as simple as adding the surrounding don't block! The only caveat with this debugging and development approach is that code within the don't block must be syntactically valid, providing compile-time syntax checks without execution. Additionally, it should be recognised that this statement is not all-encompassing: BEGIN blocks placed within a don't block are still executed. For example:
use Acme::Don't;
don't {
    BEGIN {
        print "foo";
    }
    print "bar!\n";
};
The above code still produces the output of "foo" as a result of the BEGIN block execution.
Other caveats of this interesting module are included in the module documentation, and while this module resides within the Acme:: namespace, it might just be something more than a gimmick, as well as an interesting use of the Perl 4 module pragma :-)
ExtUtils::ModuleMaker
4 direct replies
by simonflk
on Apr 15, 2002 at 12:42
ExtUtils::ModuleMaker vs h2xs
ExtUtils::ModuleMaker is a replacement for h2xs. So what's wrong with h2xs
anyway and how does ExtUtils::ModuleMaker perform any better?
h2xs -AXn Foo::Bar
My annoyances with h2xs (these are purely personal):
- by default: it produces code that isn't backwards compatible [see note] (our instead of use vars, use warnings, and use 5.00?)
- you have extra work if you have more than one module in the distribution
- you have lots of editing to do before you get started, (unless your name is A. U. Thor)
- module code and tests are all dumped into the main directory
On reflection, these seem quite petty -- but I am very lazy.
perl -MExtUtils::ModuleMaker -e "Quick_Module ('Foo::Bar')"
This is more to type, and produces similar results to h2xs. However there are the following improvements:
- module files are neatly stored in a "lib" folder
- test file is created in "t" subfolder
- LICENSE file is included - defaults to perl license (GPL & Artistic)
- lib/Foo/Bar.pm is backwards compatible with perl 5.005
- useful pod sample for documenting subroutines
Advanced use of ExtUtils::ModuleMaker
The Quick_Module() function still leaves A. U. Thor as the author of your work and other defaults
and leaves you with only one module in your distribution. Use Generate_Module_Files() for a more complete solution...
Generate_Module_Files()
- Specify author details, (fills in the pod, Makefile.PL, etc)
- Specify version number to start on
- Specify the license that your module is released under (over 20 licenses included - or use custom)
- Create module and test files for additional modules
Here is my code using ExtUtils::ModuleMaker that allows me to be extra lazy:
#!/usr/bin/perl5.6.1 -w
use strict;
use Getopt::Long;
use ExtUtils::ModuleMaker;
my %author =
(
NAME => 'Simon Flack',
EMAIL => 'simonflk@example.com',
CPANID => 'SIMONFLK',
WEBSITE => 'http://www.simonflack.com',
);
# Set some defaults
my $license = 'perl';
my $version = '0.1';
my $module_name = '';
my $extra_modules = '';
my @extra_modules = ();
GetOptions
(
'name=s' => \$module_name,
'version:f' => \$version,
'license:s' => \$license,
'extra:s'=> \$extra_modules
);
Usage() unless $module_name;
#############################################################################
# Now make the module
#############################################################################
push @extra_modules, {NAME => $_, ABSTRACT => $_}
for split /,/, $extra_modules;
Generate_Module_Files
(
NAME => $module_name,
ABSTRACT => $module_name,
AUTHOR => \%author,
VERSION => $version,
LICENSE => $license,
EXTRA_MODULES => \@extra_modules,
);
sub Usage
{
my ($prog) = $0 =~ /\/([^\/]+)$/;
print <<HELP;
$prog - Simple Module Maker
Usage: $prog <-name ModuleName> [-version=?] [-extra=?,?] [-license=?]
Eg: $prog -name My::Module
$prog -name My::Module -version 0.11
-extra My::Utils,My::Extra -license perl
HELP
}
Now I can write: "newmodule -n Foo::Bar -v 1.0 -l gpl" and I can start coding and
writing tests straight away...
Note: If you use this, don't forget to change the author info.
Problems with ExtUtils::ModuleMaker
There aren't many.
- ExtUtils::ModuleMaker won't be helpful if you are writing XS modules. You should stick to h2xs for this, probably.
- The .pm files it creates encourage inline pod for documenting subroutines. I know a lot of people do this,
but I prefer putting my pod at the bottom.
- The test files are obscurely named, you'll probably want to rename them.
Reference:
See the following docs for more information about writing modules
update: h2xs compatibility
crazyinsomniac pointed out that h2xs has a backwards-compatibility option "-b". I couldn't find this documented and it didn't work when I tried it (my v5.6.0 h2xs is higher up in the PATH than my 5.6.1 h2xs). It seems that it is a new option since perl 5.6.1. I'll leave my original statement in here because it will still apply to some people on older perls. Thanks to crazyinsomniac for pointing out this option.
Net::Telnet
1 direct reply
by Rex(Wrecks)
on Apr 04, 2002 at 15:05
Overview
Net::Telnet is a very easy-to-use module for most tasks requiring telnet. It has an OO-style interface that works very well for what I have used it for, and it crosses platforms easily.
The Meat
I haven't actually gone through the code enough to do a review on the actual code, but what I have seen is well written and easy to follow. There are extensive help pages that are also well written and easy to follow.
I have used this module successfully on both FreeBSD and Win2k with only one glitch (see below). I have used it with great success in child processes on FreeBSD as well as "compiled" it into a freestanding .exe for Win2k. Everything worked great "out of the box" and there was only one wrinkle to iron out.
I've used it connecting to several different devices including Cisco 2511 comm servers, Cisco switches and routers (CatOS and IOS), Foundry switches, Nokia IPXXX and Nokia CCXXX boxes. All worked great with the exception of a few tweaks required for the Cisco comm servers. I have also used it for Telnet access to FreeBSD machines for remote control purposes and it handles this very well.
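A typical session along the lines described above might look like this (a hypothetical sketch; the host name, credentials, and command are made up):

```perl
use Net::Telnet;

# Errmode 'return' makes calls return false instead of dying on error
my $t = Net::Telnet->new(Timeout => 20, Errmode => 'return');
$t->open('router.example.com') or die $t->errmsg;
$t->login('admin', 'secret')   or die $t->errmsg;

# cmd() sends a command and returns the output lines
my @output = $t->cmd('show version');
print @output;
$t->close;
```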
Update: First, thanks to rob_au for the fix below and Second, I updated some of the text. Update 2: There is a new version (3.03) of Net::Telnet on CPAN now and the author has *fixed* the issue above. I'm not sure if I like the fix (local $^W = '';), but it will now work without warnings. I did inform the author of the fixes below, however he may have reasons for discarding them and using his method. I don't know what that would be, but there may be something I'm not seeing.
The bugs Note: These bugs are in relation to the 3.02 version of this module.
I did find a couple things worth mentioning.
- A bug when running a script with -w on the Win2k platform: I traced down the warning to a > comparison, but was not able to fix it for some reason. Seems like it may be an ActiveState bug, but I have not debugged it further...YET! The workaround I used is in this code:
my $telnetObj = new Net::Telnet (Timeout => 20, Prompt => '/[^ <-]\>/', Errmode => "return");
my $tmp = $^W;
local($^W) = 0;
$telnetObj->open(Host => $gw_in_addr); # offending code in here
$^W = $tmp;
- The author was not responsive to this issue at all.
- The other issue was a sync problem with the Cisco comm server; however, I found that feeding it a known bogus command and junking the output would usually sync me back up.
Conclusion
It beats anything else out there. It has a great interface and I find it works much easier than expect for small jobs. The fact that it "compiles" into a freestanding .exe in Win2k, and works in child processes also attests to the stability in my mind.
If you need telnet...Net::Telnet is the only way to do it via perl.
Algorithm::Diff
3 direct replies
by VSarkiss
on Mar 24, 2002 at 19:42
Introduction
You've probably used the Unix diff program, or one of its Win32 descendants, or some program that depends directly on it, such as patch, or the CVS source code maintenance system. All of these require calculating the smallest set of differences between two sequences (or equivalently, figuring out the longest subsequences that are the same). It's an intuitively simple problem, but surprisingly difficult to solve and implement correctly. That's what this module does, and it does it well.
Genealogy
The algorithm implemented in the module was originally published in a May 1977 Communications of the ACM article entitled "A Fast Algorithm for Computing Longest Common Subsequences" by J. W. Hunt and T. G. Szymanski. The Unix diff program was originally written by D. McIlroy and J. W. Hunt, who described their work in Bell Laboratories Technical Report 41 in 1976. This module was originally written by Mark-Jason Dominus, but the existing version is by Ned Konz. It's available on CPAN (alternate).
Details
The difficulty of the algorithm precludes sitting down and just "grinding it out"; when you need it, this module's a life-saver. It provides three separate ways to get at the algorithm -- in other words, three interfaces to the core functionality. The documentation mentions these in order of increasing generality, but they're also slightly different in what they do. They are:
- LCS
- This routine takes two array references, and returns the longest common subsequence. In other words, the returned result is an array (or a reference to one) containing the longest run of elements which are not different.
- diff
- The output of this routine is the smallest set of changes that are required to bring the two input arrays into agreement. It's similar to diff output. The result is a multidimensional array; each element is a so-called "hunk" -- that is, one logical group of differences. Within each hunk are one or more arrays of three elements each: a + or - sign, the index of the item to be inserted or deleted, and the data itself.
This is a lot easier to show than to describe. This example appears in the POD, although I've reformatted it very slightly. Suppose the two input arrays contain:
a b c e h j l m n p
b c d e f j k l m r s t
(spaces added for clarity). Then the routine will return the following result:
[
[ [ '-', 0, 'a' ] ],
[ [ '+', 2, 'd' ] ],
[ [ '-', 4, 'h' ] ,
[ '+', 4, 'f' ] ],
[ [ '+', 6, 'k' ] ],
[ [ '-', 8, 'n' ],
[ '-', 9, 'p' ],
[ '+', 9, 'r' ],
[ '+', 10, 's' ],
[ '+', 11, 't' ],
]
]
- traverse_sequences
- This function works in a callback style: it examines each input array an element at a time, and calls a supplied routine depending on whether the element is an element of the LCS or not. Up to five callbacks may be defined; referring to the input sequences as A and B, they are:
- MATCH
- Called when the element under consideration is a member of the LCS;
- DISCARD_A and DISCARD_B
- Called when the item in the respective array has to be discarded to bring the lists into agreement; and
- A_FINISHED and B_FINISHED
- This is really a special case of the above. When one array "runs out" of elements, this routine is called if it is available (otherwise the DISCARD callback for the other is invoked).
When would you use each one? You might consider using LCS if all you need is the longest match between the arrays. You could use diff to print diff-like output. For example, here's an extremely simple diff program (more for illustration than usefulness):
# Suppose @i1 and @i2 contain the slurped contents
# of two input files.
foreach my $hunk (Algorithm::Diff::diff(\@i1, \@i2))
{
print "---\n";
foreach my $element (@$hunk)
{
printf "line %d %s %s",
$element->[1]+1,
($element->[0] eq '+'? '>' : '<'),
$element->[2];
}
}
Finally, traverse_sequences is very handy when you already know what you need to do in response to a difference (say, delete an item).
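To make the idea concrete, here is a naive pure-Perl LCS on the example arrays from above (a dynamic-programming sketch for illustration only; the module itself implements the much faster Hunt/Szymanski algorithm):

```perl
use strict;
use warnings;

# Naive O(n*m) dynamic-programming LCS: build a table of LCS lengths
# for every pair of prefixes, then walk it backwards to recover one LCS.
sub naive_lcs {
    my ($a, $b) = @_;
    my @len;    # $len[$i][$j] = LCS length of $a->[0..$i] and $b->[0..$j]
    for my $i (0 .. $#$a) {
        for my $j (0 .. $#$b) {
            if ($a->[$i] eq $b->[$j]) {
                $len[$i][$j] = 1 + (($i && $j) ? $len[$i-1][$j-1] : 0);
            } else {
                my $up   = $i ? $len[$i-1][$j] : 0;
                my $left = $j ? $len[$i][$j-1] : 0;
                $len[$i][$j] = $up > $left ? $up : $left;
            }
        }
    }
    # Walk the table backwards to recover the subsequence itself
    my @seq;
    my ($i, $j) = ($#$a, $#$b);
    while ($i >= 0 && $j >= 0) {
        if ($a->[$i] eq $b->[$j]) {
            unshift @seq, $a->[$i];
            $i--; $j--;
        } elsif (($i ? $len[$i-1][$j] : 0) >= ($j ? $len[$i][$j-1] : 0)) {
            $i--;
        } else {
            $j--;
        }
    }
    return @seq;
}

my @lcs = naive_lcs([qw(a b c e h j l m n p)],
                    [qw(b c d e f j k l m r s t)]);
print "@lcs\n";   # b c e j l m
```

Note how the six elements of this LCS are exactly the ones left untouched by the diff hunks shown earlier.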
All of these routines accept a "key generation function" as an optional final argument. This is needed because internally the routine uses numeric comparison (the <=> operator, in principle). But what if the values you have are not numeric? Or if you have a more complicated data structure, such as an array of hashes (such as object instances)? The key generation function should, given a reference to an object in your array, return a numeric value that can be used to meaningfully compare it to another element. That is, it should return some sort of hash or key that represents the element. If the elements are logically the same, it should return the same value (note that the ref operator would almost never be a good choice.)
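A key generation function for an array of hashes might look like this (a hypothetical sketch: the record layout and the idea of keying on an id field are my own illustration):

```perl
use strict;
use warnings;

# Hypothetical key generator: two records count as "the same element"
# if they carry the same id, regardless of being distinct references.
my $key_gen = sub {
    my $item = shift;
    return $item->{id};
};

my $rec_a = { id => 7, name => 'Alice' };
my $rec_b = { id => 7, name => 'Alice (a separate copy)' };

# Logically equal elements yield equal keys, even though the references
# differ -- which is exactly why ref() would be a poor choice of key.
print $key_gen->($rec_a) == $key_gen->($rec_b) ? "same\n" : "different\n";
```

Such a function would then be passed as the optional final argument to LCS, diff, or traverse_sequences.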
Drawbacks
To me, the key generation functions have always seemed unnatural. I'm more used to being able to supply the actual comparison function, in the style of sort or grep. But that may cause a significant performance hit -- I don't know.
Another seeming inflexibility is that the input arrays have to be in memory. It would be nice to be able to process files without having to slurp them, by providing routines which returned the next element for the given array. (But my guess is that in the worst case -- having no elements in common -- that would still end up reading both files completely into memory.)
The above two are minor nit-picks; the only real problem I have with this module is that the documentation is hard to understand. The first time I downloaded it from CPAN, it took a good deal of trial-and-error to figure out what it was really doing. But once I got the hang of it, it's been extremely useful. Hopefully this node will help in that direction.
Summary
Algorithm::Diff is fast and relatively flexible. It implements a difficult algorithm in a nice package. Take the time to read and understand the documentation, and it will serve you well.
Update 2002-Aug-21
Brother tye has written an instructive node discussing why the interfaces for returning data in Algorithm::Diff are difficult to use. There he also shows code for a more simplified interface that may be incorporated into the module in the future. If you're trying to figure out how to use the module, that code may be of great help.
Apache::ASP
3 direct replies
by trs80
on Feb 19, 2002 at 20:29
Author
Joshua Chamas
Version
2.31
Rating
**** 4 out of 5 stars
HTML::Embperl
No replies
by trs80
on Feb 18, 2002 at 18:00
Author Gerald Richter
Version 1.3.4 stable / 2.0.5 beta
Rating ***** 5 out of 5 stars
HTML::Embperl is one of the earlier embedded-Perl-in-HTML modules
available via CPAN, making its debut in 1996 if memory serves me
correctly. HTML::Embperl offers a robust range of features that
can be very helpful in creating complex internet applications
requiring form handling, database interaction and automated
handling of URL (un)escaping.
I was first drawn to HTML::Embperl because it lent itself well to my style of
coding and helped me achieve my goals without having to learn too much of
what I call fluff to get started. In marketing terms: it presented
an intuitive interface. I started using HTML::Embperl in 1997 when it was at
version 0.25b. I owned my own internet company and was creating a
dynamic web site with a MySQL
backend. Since I owned the apache server I was able to
add in mod_perl(1) and all the additional goodies I would need for maximum
speed. The documentation is geared toward
a mod_perl environment and it
was my intent to build additional sites and use the same server to power
them so mod_perl seemed to make sense.
Form Data
The main thing that drew me to HTML::Embperl was how it interacted with
form data. To me, managing the incoming variables seemed to be one of the
most frequent issues with other CGI-style development.
HTML::Embperl uses a glob called fdat to handle interaction with the
incoming parameters. It appears inside of your "page" as a hash %fdat
that you can treat just like a normal hash. It is also well named
because of its interaction with HTML: the module has several HTML-generation
automations for tables, forms, lists and hrefs, to name a few.
There is also an array interface to the list of incoming variables, most
likely in the order they appeared on the page that sent the parameters
which makes it easy to pop or shift the item you need or don't need.
Delimiters
One thing that at the time I didn't really worry about is its
non-standard(2) code delimiters; more on that later in the review. The
delimiters are:
[- -] - execute, but don't send output to the page
[+ +] - execute and send the result to the page
[$ $] - meta commands or loops
[* *] - EXPERIMENTAL; provides different scoping than [- -] tags
[# #] - comment tags
The documentation goes into detail on how each of the tags is processed
and provides some examples.
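As a quick illustration, a made-up page fragment using the tags above might read (hypothetical; the variable names are my own):

```html
[# Greet the visitor named in the form data #]
[- $name = $fdat{name} || 'guest' -]
<p>Hello, [+ $name +]!</p>
[$ if ($name eq 'guest') $]
  <p>Consider logging in.</p>
[$ endif $]
```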
Sessions
The next feature that I like about HTML::Embperl is its session
management. It is as easy to use as the form data. You simply assign
what you want to be in the user session to a hash called %udat (a
"magical" hash like %fdat) and if Session handling is enabled in the
Apache configuration it will handle the session transparently.
Inside of your page you simply write something like:
$udat{user_name} = "fred";
The session id is created for you automatically if the user doesn't
currently have one, and you can add as much as you like to the session.
It allows for session data to be stored in several different methods.
The
session handling can be a bit confusing at first,
but with a thorough read of the documentation and a good understanding
of the session backend module(3) it is manageable.
Configuration
HTML::Embperl is very configurable. Almost every feature it has can be
turned off either globally via a (Perl)SetEnv directive in the
httpd.conf or at a page level via global variables accessible from the
HTML::Embperl module, such as:
[- $optDisableFormData = 1 -]
That will turn off the form (param) handling for a single page. These
options can be set in the httpd.conf using the EMBPERL_OPTIONS
directive. EMBPERL_OPTIONS is a bitmask that controls options for the
execution of HTML::Embperl. To specify multiple options, simply add the
values together. This is covered in the documentation in depth so I
won't repeat it here.
EmbperlObject
A newer feature of HTML::Embperl is HTML::EmbperlObject. This addition
allows for simplification of site building if you have consistent
headers, footers, or page level content (aka Objects). The EmbperlObject
documentation explains it in full, but basically it allows for a single
page to be used as the base for each request. That "page" can compile
in other "pages" by default, which can either wrap or augment the page
actually requested. So if you setup your EMBPERL_OBJECT_BASE to be
base.htm, when you request page1.htm it will read base.htm first and
perform its actions and then place the compiled code from page1.htm in
the location you specify.
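If I recall the EmbperlObject conventions correctly, a base page might look something like this (a hypothetical sketch; the file names are made up):

```html
[# base.htm: wrapper applied to every request #]
<html>
  <body>
    [- Execute ('header.htm') -]
    [- Execute ('*') -]  [# '*' inserts the page actually requested, e.g. page1.htm #]
  </body>
</html>
```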
Summary / Odds and Ends
So who should use HTML::Embperl? My opinion is that if you come from
an HTML background and are creating robust online applications,
HTML::Embperl may be a good match for you. If you are working in a
mixed environment where non-programmers look at the source and
make modifications in various WYSIWYG editors, it can present some
issues in dealing with what I previously called non-standard
delimiters. There are various "plugins" for editors such as VI,
emacs etc. that will do correct highlighting of HTML::Embperl
delimiters if you need them.
I think its ease of use is very high and its use of
sessions and handling of form data make it much less cumbersome, IMHO, to
read and write than Apache::ASP.
It is now a very mature package. The 2nd generation is in beta and,
from my testing, is very stable for use with existing code; it will
allow for interaction with XML and the ability to replace parts of the
"page" processing with your own custom handler.
HTML::Embperl has its own mailing list for support and most questions
are answered quickly and often by the author.
------------------------------------------------------------------
1 - HTML::Embperl does NOT require mod_perl to run however.
2 - standard in the sense of what is now the more accepted approach, but I
don't think there is any standards body for embedded content. The
general convention is <% %>.
3 - Older versions of HTML::Embperl used Apache::Session; the author has
since created a wrapper module called Apache::SessionX which works in
concert with Apache::Session and makes session handling even easier by
prompting for information on installation and automatically setting the
configuration based on this. This method does not affect
httpd.conf.
Spreadsheet::WriteExcel
2 direct replies
by abaxaba
on Jan 21, 2002 at 22:06
This module allows for the creation of Excel binary files: robust cell formatting, cell merges, multiple worksheets, formulae, printer specifications. The author has documented this work well, providing working examples that illustrate many of its features.
Why should you?
If you display a lot of data, and wish to allow users to export it in a readily usable format, this is the way to go.
Why Not?
I have never benchmarked it, but it does a lot of work, which takes a bit of time: about 5 seconds or so to create a 2-worksheet spreadsheet of about 15K. The module is still under development and does not currently support macros. It requires perl 5.6.0, support for IEEE 64-bit floats, and the Text:: and Parse:: packages.
How
This is from the examples that come with the distro:
#!/usr/bin/perl -w
use strict;
use Spreadsheet::WriteExcel;
# Create a new workbook called simple.xls and add
# a worksheet
my $workbook = Spreadsheet::WriteExcel->new("simple.xls");
my $worksheet = $workbook->addworksheet();
# The general syntax is write($row, $column, $token). Note that row and
# column are zero indexed
# Write some text
$worksheet->write(0, 0, "Hi Excel!");
# Write some numbers
$worksheet->write(2, 0, 3); # Writes 3
$worksheet->write(3, 0, 3.00000); # Writes 3
$worksheet->write(4, 0, 3.00001); # Writes 3.00001
$worksheet->write(5, 0, 3.14159); # TeX revision no.?
# Write some formulas
$worksheet->write(7, 0, '=A3 + A6');
$worksheet->write(8, 0, '=IF(A5>3,"Yes", "No")');
# Write a hyperlink
$worksheet->write(10, 0, 'http://www.perl.com/');
Edit Masem 2002-01-23 - Added CODE tags
Edit abaxaba 2003-12-23 - Corrected Type - thanks ff
MIME::Lite - For outgoing mail with attachments
2 direct replies
by trs80
on Jan 18, 2002 at 21:00
I first came across MIME::Lite about 2 years ago when working on an online mail system. We had some in house code that we had been using to create outgoing messages, but it was ugly and brittle.
MIME::Lite in a Nutshell
### Create a new multipart message:
$msg = MIME::Lite->new(
From =>'me@myhost.com',
To =>'you@yourhost.com',
Cc =>'some@other.com, some@more.com',
Subject =>'A message with 2 parts...',
Type =>'multipart/mixed'
);
### Add parts (each "attach" has same arguments as "new"):
$msg->attach(Type =>'TEXT',
Data =>"Here's the GIF file you wanted"
);
$msg->attach(Type =>'image/gif',
Path =>'aaa000123.gif',
Filename =>'logo.gif',
Disposition => 'attachment'
);
Output a message:
### Format as a string:
$str = $msg->as_string;
### Print to a filehandle (say, a "sendmail" stream):
$msg->print(\*SENDMAIL);
Send a message:
### Send in the "best" way (the default is to use "sendmail"):
$msg->send;
MIME::Lite supports several methods to send the message via SMTP.
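For example, the class-level send() configuration from the module's documentation switches delivery from the sendmail binary to SMTP; the relay host name below is a placeholder:

```perl
use MIME::Lite;

# Deliver via SMTP instead of the sendmail binary.
# 'mail.myhost.com' is a placeholder for your own relay.
MIME::Lite->send('smtp', 'mail.myhost.com', Timeout => 60);

# Subsequent $msg->send calls now go over SMTP.
```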
If you need to create a valid email with attachments, MIME::Lite is one of the easiest, most mature modules I have found. Easy, because you don't need to know much about RFC 822 to use it. The public interface it offers is well thought out and provides access to almost every aspect of the class, which is also why I feel it is a very mature module. As of this write-up it is at version 2.117. The author of this module is also the author of MIME-tools. In my opinion, the interface of MIME::Lite is much easier for the end user than that of MIME-tools, which has a steep learning curve if you are new to Perl. Regardless, the author's understanding of the MIME protocol goes much deeper than most people will need, and MIME::Lite provides a simple way to build outgoing mail messages.
MIME-tools Review
MIME::Lite documentation
Proc::ProcessTable
No replies — Read more | Post response
by rob_au
on Nov 02, 2001 at 12:41
Why use Proc::ProcessTable?
This module implements platform-independent methods for obtaining information on process execution on a system from the /proc file system, by filling a procstat structure from the /proc/XXX/stat file relating to a process. This process execution information includes real and effective group and user IDs, parent and group process IDs, process priorities, CPU and memory utilisation, and process TTYs. The process attributes available via this module vary from platform to platform as a result of differences in platform procstat structs; the supported attributes for each platform are described in that platform's README file.
The delivery of this process information by Proc::ProcessTable allows you to do away with ugly external shell calls to utilities such as ps.
How to use Proc::ProcessTable?
The documentation for the Proc::ProcessTable module is excellent, and usage is very simple, with only three principal methods: new (create a new ProcessTable object), fields (returns a list of the field names/attributes supported by the module on the current architecture) and table (reads the process table and returns an array of process information).
Examples of the usage of the Proc::ProcessTable module that I previously submitted on this site can be seen here and here.
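A minimal sketch of those three methods in use, assuming the accessors (pid, fname) are among the attributes your platform supports:

```perl
#!/usr/bin/perl -w
use strict;
use Proc::ProcessTable;

my $t = Proc::ProcessTable->new;

# fields() lists the attributes this platform supports
print "Supported attributes: ", join(', ', $t->fields), "\n";

# table() returns a reference to an array of process objects;
# each attribute from fields() is available as an accessor
foreach my $p ( @{ $t->table } ) {
    printf "%6d  %s\n", $p->pid, $p->fname;
}
```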
BioPerl
2 direct replies — Read more / Contribute
by scain
on Oct 16, 2001 at 20:36
This is a review of the BioPerl modules.
These modules used to be unavailable through CPAN, so you had
to get them from the BioPerl website. That is no longer true:
you can now use the standard CPAN shell to install BioPerl. This is a large set
of modules covering several bioinformatics tasks. This will
be a fairly high level review, as there are 174 modules that
make up this set (the full install is 5.4 M). The most
recent release as of this writing is 0.7.1, and there is
a developers release (that I have not looked at) 0.9.
The prerequisites are nothing out of the ordinary:
LWP, IO::String, XML::Node,
and XML::Writer, though BioPerl does provide interfaces
for several programs and databases, so to work with those,
you will obviously need to have them too. Bundle::BioPerl
will install all of the prerequisites for you, though I installed
them by hand, doing the usual make tango, and the installation
was flawless; a few tests failed out of over 1000,
but that wasn't a big deal.
There are several module groups:
- Bio::AlignIO::*: wrappers for several alignment programs
like clustalw and pfam.
- Bio::Annotation::*: Objects for holding annotations (simple
comments, links to other databases, or literature references).
- Bio::DB::*: Interfaces to several databases, including GenBank,
GenPept, SwissProt and several others.
- Bio::Factory::*: This is a set of objects for instantiating
Bio::SeqAnalysisParserI, which is a generic interface for
sequence analysis parsers. The idea is to give a generic
interface for parsing so that annotation pipelines can be
built, and when a new parser or program comes along, a complete
rewrite is not necessary.
- Bio::Index::*: Methods for indexing several types of
databases.
- Bio::LiveSeq::*: This is a very feature rich DNA sequence
object. Several types of annotations can be added here. It
seems that there is a fair bit of overlap between these modules
and those in Bio::SeqFeature; it is not clear to me when, if
ever, you would want to use one over the other. It may just
be a matter of preference.
- Bio::Location::*: Contains methods for handling location
coordinates on sequences. As the documentation says, this may
seem easy, but it deals with fuzzy or compound (split) locations,
as well as handling rules for locations such as 'always widest range'
or 'always smallest range'.
- Bio::Root::*: Several utility modules that are inherited from
in other modules.
- Bio::Seq::*: Contains extensions for the main object for sequences,
Bio::Seq, including LongSeq for long (genomic) sequences and RichSeq
for annotated sequences. Bio::Seq is the workhorse object,
which holds nucleotide or protein sequences, as well as
annotations. It provides several handy
sequence manipulation methods such as revcom (reverse complement) and
translate.
- Bio::SeqFeature::*: Objects containing feature annotations
of sequences; allows fairly complex relationships to be expressed
between related sequences, as well as detail about individual sequences,
like the locations of exons and transcripts. The list of
possible options is somewhat limited, so more specific features
should probably be created by subclassing the generic class.
- Bio::SeqIO::*: Handles I/O streams for several sequence
database types (like GenBank annotations/features, GCG and SwissProt).
- Bio::Tools::*: Several items here, including result holders
and parsers for several programs. The BLAST parser is worth
its weight in gold.
- Bio::Variation::*: These appear to be modules for working with
SNPs and other mutations.
In all honesty, I have used only a few of these modules. The majority
of them are very specialized, so a "general practitioner" like
me is unlikely to need them often. There are so many modules
here that it is difficult to know if a problem you have might
be addressed by BioPerl, which is why I undertook writing this
review. I hope it has been helpful to you, and if you have
any experience with BioPerl, please add your comments.
Special thanks to the other members of my group, and
especially Ben Faga (not a monk, but still a good Perl
programmer), for their input and insight while writing this
review, as well as Arguile for pointing out that BioPerl is
now available at CPAN.
New note 2002-05-20: I plan on bringing this up to date
for BioPerl v1.0 as soon as possible.
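As a small taste of the Bio::Seq/Bio::SeqIO workhorses described above, here is a sketch that reads sequences from a FASTA file and prints the revcom and translate results mentioned earlier (the input filename is hypothetical, and revcom of course only makes sense for nucleotide sequences):

```perl
#!/usr/bin/perl -w
use strict;
use Bio::SeqIO;

# 'seqs.fasta' is a made-up input file of nucleotide sequences
my $in = Bio::SeqIO->new(-file => 'seqs.fasta', -format => 'fasta');

# next_seq returns one Bio::Seq object per record in the stream
while ( my $seq = $in->next_seq ) {
    print $seq->display_id, "\n";
    print "revcom:    ", $seq->revcom->seq,    "\n";
    print "translate: ", $seq->translate->seq, "\n";
}
```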
HTML::Clean Review
2 direct replies — Read more / Contribute
by sheridan3003
on Oct 15, 2001 at 17:37
I work with a web site where the front-end developers use FrontPage to develop the UI.
I began wondering if there was a way I could improve/shrink down the HTML they were generating.
I found HTML::Clean and began testing it.
This is a very easy way to shrink the static HTML done on a web site.
I used a small test site one of the developers had put together to test with.
I was able to achieve a 10% shrinkage of the total size of the directory. In the scheme of things this is not a big savings, however, as I begin to move on to the larger web sites we have built I believe a 10% savings will begin to show improved download times for our customers.
Thanks to some of the other PM members, I have put together this small script to demonstrate the abilities of HTML::Clean. While I have tested this on my own directories, you may want to ensure that you have backups of your HTML files prior to running this script, since it will overwrite the files in the current directory.
#!/usr/bin/perl -w
use strict;
use HTML::Clean;
clean_file( $_ ) foreach glob "*.html";
sub clean_file {
my ($filename) = shift;
print "$filename is being cleaned!\n";
    my $h = new HTML::Clean($filename,9) or die "Couldn't load $filename: $!\n";
$h->compat();
$h->strip();
my $myref = $h->data();
open(OUTPUT,">$filename") or die "Can't open $filename: $!";
print OUTPUT $$myref;
close(OUTPUT);
}
CDB_File
1 direct reply — Read more / Contribute
by ehdonhon
on Oct 03, 2001 at 01:26
Review: CDB_File
General Summary: Good, but limited applications
The CDB_File module allows you to gain access to databases
that are stored in the CDB format (by Dan Bernstein).
CDB_Files are very efficient when it comes to lookup
speed. They are especially useful when you have a large
data set. The tradeoff is that CDB_Files are read-only.
If you want to update a CDB_File, you must pass a new
hash of data to the CDB_File::create class method and then
re-tie your old tied hash to gain access to the new data.
This means if you are in a situation where you constantly
need to be both accessing and updating your data file, the
CDB_File format is probably not your best choice.
In benchmarking CDB_Files vs. DB_Files, I found that
there is a considerable amount of overhead in the process
of tying a CDB_File vs. tying a regular DB_File, so if you
are forced into a situation where you are tying a data file,
accessing a single value, then un-tying, you may be better
off with a DB_File (especially if you have a small data
set).
If, however, you have a situation where you are tying
to a data file once, and doing multiple lookups with very
few updates, you might want to consider CDB_Files as
a very useful alternative to DB_Files.
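The create-then-retie cycle described above looks roughly like this (the filenames are placeholders; create() writes the database to a temporary file and renames it into place when done):

```perl
use CDB_File;

# Build a new database from a plain hash. The third argument is a
# temporary filename that create() renames over 'data.cdb' on success.
my %data = ( red => 'ff0000', green => '00ff00' );
CDB_File::create(%data, 'data.cdb', "data.cdb.$$")
    or die "couldn't create data.cdb: $!";

# Read-only access through a tied hash
tie my %h, 'CDB_File', 'data.cdb' or die "tie failed: $!";
print $h{red}, "\n";
untie %h;
```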
Parallel::ForkManager
2 direct replies — Read more / Contribute
by Clownburner
on Aug 28, 2001 at 16:07
Description
Parallel::ForkManager automates the sometimes tedious process of running
multiple parallel forks from a script. It also allows you to control how
many forks run at once, and keeps that many running, which simplifies a
rather complex task. It is extremely simple in operation and well
documented.
Who should use it?
Anyone who wants to implement forking in a script. I wrote a fair bit of forking
code before discovering this module, and I must say, I don't think I'll go back.
It makes life much easier, and isn't that what good Perl modules are supposed to
do?
Who should NOT use it?
There are some forking scenarios which are probably too complex for it, although
I haven't found any of those myself yet. It might be difficult to use for
socket server applications, since controlling the timing of the forks could be
tricky. Or fun, depending on your perspective.
Documentation
Excellent - clear examples and concise wording. Even I understood it.
Personal Notes
I've converted almost all of my old fork code to run with this module. It took
just minutes, and performance improved (I was spawning forks N at a time, waiting for them all to die, and then spawning N more. This thing keeps N forks running all the time without Ne+02 lines of code ;-). Highly recommended!!
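The N-forks-at-a-time pattern the review describes comes down to a handful of lines (the worker body here is a placeholder):

```perl
use Parallel::ForkManager;

# Keep at most 5 children running at any one time
my $pm = new Parallel::ForkManager(5);

foreach my $task (1 .. 20) {
    $pm->start and next;      # parent: spawn a child and move on

    # --- child process: do the real work here (placeholder) ---
    sleep 1;

    $pm->finish;              # child exits
}
$pm->wait_all_children;       # block until every child is done
```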