User Questions
Config::IniFiles
6 direct replies — Read more / Contribute
by wardk
on Oct 09, 2000 at 17:38

    Module: Config::IniFiles

    Author: Scott Hutton, currently maintained by Rich Bowen

    Where is it? http://cpan.perl.org/modules/by-module/Config/RBOW/

    Config::IniFiles is a module intended to make life easier for those who rely on flat configuration files, specifically files that use the "Windows style" format.

    Config::IniFiles offers an easy and reliable method of reading and writing configuration files that utilize the Section key=value format. You can place the entire contents into a multi-dimensional hash, keyed on section, or easily pull individual sections into their own hashes.

    I tried this module out as an alternative to what were/are multiple methods of "configuration" being used by my current client. Currently, my client's Perl CGI/batch scripts utilize a combination of database, hard-coding and tab-delimited flat files for setting default/site-specific configurations within the applications. The method used seems to depend on the author/maintainer of the day.

    My intent was to implement a single standard method for storing configuration information. Although we develop and run on HP-UX, the development team responsible for maintenance and new development is more comfortable with Windows, and all of them are familiar with the "Windows style" configuration file format. I am referring to this style as "Windows style"; if a monk can identify this format properly, I will gladly update my use of this designation!

    The style of configuration file is as follows:

    [Section1]
    key1=value1
    key2=value2

    [Section2]
    key1=value1
    key2=value2

    I am sure you've been exposed to this formatting. If not, it is pretty self-explanatory.

    My goal was to create a file similar to the following which would isolate common variables associated with the "current" environment that the Perl was executing in. In this case, dev/test/prod. The single configuration file would look similar to this (simplified for this example):

    [prod]
    ORACLE_HOME=/path/to/oracle/prod
    ORACLE_SID=prod_sid

    [test]
    ORACLE_HOME=/path/to/oracle/test
    ORACLE_SID=test_sid

    [dev]
    ORACLE_HOME=/path/to/oracle/dev
    ORACLE_SID=dev_sid

    The code itself determines which environment it is in based on the hostname.

    So let's proceed to using Config::IniFiles:

    # Make the module available
    use Config::IniFiles;

    # now nab the entire config file contents into the %ini hash
    my $ConfigFile = "/path/to/inifile/inifile";
    tie my %ini, 'Config::IniFiles', ( -file => $ConfigFile );

    # now grab the relevant section into its own hash; we're assuming
    # that it's "prod" for this example
    my %Config = %{ $ini{"prod"} };

    # now we have access to our keys in the hash
    print "Oracle home is $Config{ORACLE_HOME}\n";
    print "Oracle SID is $Config{ORACLE_SID}\n";

    While I haven't used all the features of this neat tool, here are a few that would be really useful if we do in fact adopt such a method.

    - val($section, $parameter) - Returns the value of the specified parameter in section $section.
    - setval($section, $parameter, $value [, $value2, ...]) - Sets the value of parameter $parameter in section $section to $value (or to a set of values).
    - newval($section, $parameter, $value [, $value2, ...]) - Adds a new value to the configuration file.
    - delval($section, $parameter) - Deletes the specified value from the configuration file.
    - Sections - Returns an array containing section names in the configuration file. If the *nocase* option was turned on when the config object was created, the section names will be returned in lowercase.
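    A short sketch of the OO interface using those methods (as opposed to the tied-hash interface above). The file name "app.ini" and its contents are invented for this illustration, and Config::IniFiles must be installed from CPAN.

```perl
use strict;
use warnings;
use Config::IniFiles;

# Write a small ini file first so the sketch is self-contained
# (the file name and keys are made up for illustration)
open my $fh, '>', 'app.ini' or die "can't write app.ini: $!";
print $fh "[prod]\nORACLE_HOME=/path/to/oracle/prod\nORACLE_SID=prod_sid\n";
close $fh;

my $cfg = Config::IniFiles->new( -file => 'app.ini' )
    or die "could not parse app.ini";

print $cfg->val( 'prod', 'ORACLE_HOME' ), "\n";   # read one value

$cfg->setval( 'prod', 'ORACLE_SID',  'new_sid' ); # change a value
$cfg->newval( 'prod', 'ORACLE_USER', 'scott' );   # add a new one
$cfg->RewriteConfig();                            # write the file back out

print join( ', ', $cfg->Sections ), "\n";         # list sections
```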

    While I have yet to receive authorization to make such changes to my client's applications, I have a good tool for making it happen.

    That is all for now...

    Note: Updated typo 8/12/02 per demerphq's head up

    Note: Updated 9/30/02 - modify <cite> tags to italic since cite wasn't

Win32::OLE
2 direct replies — Read more / Contribute
by Rudif
on Sep 28, 2000 at 00:41

    Description

    For Win32 platforms only. The Win32::OLE module gives you Automation access to Windows applications such as Word, Excel, Access, Rational Rose, Lotus Notes, and many others. This means that your Perl scripts can harness these applications' capabilities, data and methods. Other Win32 Perl modules build on this module (for example DBD::ADO, the DBI driver for Access databases).

    The module lets you create Perl objects that act as proxies for the application and its components in your script. In the example below, $word is your Perl proxy connected to a running instance of Word. You can call Word's Automation methods as Perl methods on this object, and access the Word properties as hash elements in your Perl object.

    Automation-friendly Win32 applications expose hierarchies of objects and collections. The module ties these to Perl hashes and arrays for you. Your script just has to navigate the hierarchy and invoke the methods and properties as needed.

    How do you know what methods and properties these objects support? Well, you RTFM that comes with the applications, and you use object browsers that display minimal documentation extracted from the objects themselves (actually from their type libraries). The Win32::OLE module comes with Browser.html, a client-side dynamic HTML page. The Perl code embedded in this page uses Win32::OLE to extract information from type libraries, and displays it in the HTML browser (IE required).

    A short example will illustrate

    #! perl -w
    use strict;
    use Win32::OLE;
    use Win32::OLE::Const 'Microsoft Word';

    ### open Word application and add an empty document
    ### (will die if Word not installed on your machine)
    my $word = Win32::OLE->new('Word.Application', 'Quit') or die;
    $word->{Visible} = 1;
    my $doc = $word->Documents->Add();
    my $range = $doc->{Content};

    ### insert some text into the document
    $range->{Text} = 'Hello World from Monastery.';
    $range->InsertParagraphAfter();
    $range->InsertAfter('Bye for now.');

    ### read text from the document and print to the console
    my $paras = $doc->Paragraphs;
    foreach my $para (in $paras) {
        print ">> " . $para->Range->{Text};
    }

    ### close the document and the application
    $doc->SaveAs(FileName => "c:\\temp\\temp.txt", FileFormat => wdFormatDocument);
    $doc->Close();
    $word->Quit();

    Why use Win32::OLE


    You work on a win32 platform and you want to tap into existing applications from your Perl scripts.
    You work on creating Automation components and you want to use Perl to test them.

    Why not use Win32::OLE


    You don't work on a win32 platform
    You do, but your preferred scripting language is VBScript (just kidding ;-)

    Where is the doc, tuts and code


    The module and its HTML doc are included in the ActiveState Perl installation. Look up the TPJ article by Jan Dubois, the module's co-author, for an extended example. The doc is also found on CPAN mirrors.
    If you want to study the module code, it is on CPAN (6700+ lines in ole.xs and 2500+ lines of Perl in several packages - good reading).
    For introductory tutorials, check the ppt presentation and demos (Word, Excel)
    Not enough? You can find more short tutorials down under ...

Text::Template
3 direct replies — Read more / Contribute
by mirod
on Sep 20, 2000 at 15:00

    Description

    Text::Template lets you store templates in separate files.

    • the templates can be anything, typically pure text (like email message templates) or HTML.
    • the templates can include Perl code, between 2 matching { } (you can change the delimiters if you want to).
    • you can pass variables to the template, usually in a separate package (my variables won't be passed to the template)
    • you can trap errors when filling in the template with a custom handler.
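    A minimal sketch of the points above. The template text and variable names are invented; this assumes Text::Template from CPAN, and uses a string template rather than a file for brevity:

```perl
use strict;
use warnings;
use Text::Template;

# Code between { } runs as Perl when the template is filled in
my $template = Text::Template->new(
    TYPE   => 'STRING',
    SOURCE => 'Dear {$name}, you have {$count} new '
            . '{ $count == 1 ? "message" : "messages" }.',
) or die "Couldn't construct template: $Text::Template::ERROR";

# Pass variables in explicitly instead of leaking them from the caller
my $text = $template->fill_in( HASH => { name => 'mirod', count => 3 } );
print $text, "\n";   # Dear mirod, you have 3 new messages.
```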

    Why use Text::Template

    Warning: Text::Template is the only templating module I use, so I cannot compare it with other similar modules.

    • it works just fine, I never did anything too fancy with it, but I never found a bug in it
    • it is quite powerful: you can change the delimiters, evaluate the template in a safe environment, add custom code (like common subroutines) to existing templates...
    • the documentation is very good

    Why NOT use Text::Template

    • you prefer another module

    Personal Notes

    Text::Template is a no-brainer, it is powerful enough to handle most needs and won't cause you any trouble. Get it now!

File::Find
2 direct replies — Read more / Contribute
by Corion
on Sep 16, 2000 at 08:48
    File::Find is the way if you want to look at all files in one or more directories. File::Find exports one function, find(), which takes two parameters, a hash or a code reference, and a list of directories where the search starts.

    Why use File::Find

    File::Find protects you from a lot of nasty things that happen on filesystems. In its standard configuration it ensures that your code reference is called once for each file encountered, even if several symlinks point to it, and it also prevents nasty loops through symlinked directories.

    Why avoid File::Find

    There is not much reason to avoid File::Find - you might want to avoid it if you want to read files in a single directory, without recursing, and you are explicitly sure that there can be no symlinks in that directory (for example, if the filesystem doesn't allow symlinks). Then your code could load faster. But I'd file that under premature optimization.

    Caveats

    When you first start using File::Find, you have to deal with some idiosyncrasies.

    First of all, File::Find uses some "optimization" by default to speed up searches on certain filesystems under Unix. Unfortunately, this "optimization" fails to work on other filesystems, such as the iso9660 filesystem used for CD-ROMs. ncw tells you below what to do about it - in fact, you should always use the code ncw proposes.

    In the default configuration, the directory is changed to the recursed directory, and all returned filenames are relative to the current directory. Use $File::Find::name to get a fully specified filename.

    If you don't want to recurse below a certain directory, there is the (not-so-well-documented) $File::Find::prune variable, which you can set to 1 in your code reference to stop recursing into the current directory.

    Examples

    By popular demand, here are some examples on how to use the module. The documentation shows off some interesting code, but it's not helpful if you're looking for something to get started.

    A first example, printing the filename and the full path to the file. The code was stolen from a node by nate.

    use strict;
    use File::Find;

    sub eachFile {
        my $filename = $_;
        my $fullpath = $File::Find::name;
        # remember that File::Find changes your CWD,
        # so you can call open with just $_
        if (-e $filename) {
            print "$filename exists!" . " The full name is $fullpath\n";
        }
    }

    find(\&eachFile, "mydir/");
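    A second sketch, showing the $File::Find::prune variable mentioned above. The directory and file names here are invented; the script builds its own little tree so it is self-contained.

```perl
use strict;
use warnings;
use File::Find;
use File::Path qw(make_path remove_tree);

# Build a tiny tree to search (names are made up for illustration)
make_path('mydir/keep', 'mydir/skipme');
open my $fh, '>', 'mydir/keep/a.txt'   or die $!; close $fh;
open $fh,    '>', 'mydir/skipme/b.txt' or die $!; close $fh;

my @found;
find( sub {
    # Setting $File::Find::prune inside the callback stops find()
    # from descending into the current directory
    if ( -d $_ && $_ eq 'skipme' ) {
        $File::Find::prune = 1;
        return;
    }
    push @found, $File::Find::name if -f $_;
}, 'mydir' );

print "$_\n" for @found;   # only mydir/keep/a.txt shows up
remove_tree('mydir');
```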
Date::Manip
No replies — Read more | Post response
by ZZamboni
on Sep 14, 2000 at 01:05

    Date::Manip is often referred to as a large, monolithic and impractical module. And at almost three times the size of Date::Calc (which is itself referred to as large and monolithic), it is certainly a large module. However, it provides some useful high-level functions, including very flexible parsing capabilities.

    The Good

    I have found Date::Manip to be useful primarily for its date parsing capabilities. It can parse many different human-readable date specifications. You can say things like:
    ParseDate('today at midnight')
    ParseDate('in 3 weeks')
    ParseDate('1 year ago')
    ParseDate('3rd monday in october')
    ParseDate('3rd monday in october at 5 pm')
    ParseDate('in 3 hours')
    ParseDate('december 3, 1970')
    and they will all parse correctly. Date::Manip also handles concepts of work days and holidays, date deltas (for example, "in 3 weeks" can be parsed both as a date and as a delta), recurrent events, and named events. From its internal representation, it can output dates in any format you may need.

    The documentation is quite extensive and detailed, although it is somewhat confusing in parts.

    The Bad

    It is a very large module.

    The Bottom Line

    If you need powerful date parsing capabilities, or other high-level functions (like recurrence, work/holiday days, etc.), Date::Manip is for you. For most other things it may be overkill.
CGI::Cookie
1 direct reply — Read more / Contribute
by wardk
on Sep 13, 2000 at 21:02

    revised: 10/22/2000 - unhappy with original "review"

    CGI::Cookie is an easy way of managing cookies for your website.

    CGI::Cookie is authored by Lincoln D. Stein

    There are many documents and examples available to show you how to utilize this module. I am hoping that I can create a review, and not just another coding example.

    Below is an example, however it is nowhere near as complete as the examples that may be found at the following sites:

    I have used this module on the following platforms without issue.

    • Solaris
    • HP-UX
    • Linux (SuSE, Caldera)
    • FreeBSD 3.x
    • Windows (95,98,NT)

    Why use it?

    • Session management
    • User login management and security
    • Impress/scare your users by tracking what they do
    • Saving data to be used later when you lack a database
    • Because it's easier than doing it by hand

    What I've used it for

    • Secure login session management
    • Manage webpage "themeing"

    For those wanting a quickie usage without following the above links...

    This example will use CGI and CGI::Cookie. It will create a cookie, set the cookie via the HTTP header and then retrieve the cookie and display it.

    In my example, this is what gets sent to the browser to create the cookie...obtained via an NT command line. Note that at a command line, you will not see the set cookie in the print loop, as there was no browser to do the storing/retrieving.

    Set-Cookie: CookieName=CookieValue; path=/C:\wardk\cookie.pl
    Date: Wed, 13 Sep 2000 21:00:35 GMT
    Content-Type: text/html
    

    Here is the code that created the above. Happy coding!

    #!/usr/bin/perl
    use CGI;
    use CGI::Cookie;

    $q = new CGI;

    # Create a new cookie
    $cookie = new CGI::Cookie(-name => 'CookieName', -value => 'CookieValue');

    # set the Cookie
    print $q->header(-cookie => $cookie);
    print $q->start_html;

    %cookies = fetch CGI::Cookie;

    # print the cookie
    while (($k, $v) = each %cookies) {
        print "<p>$k = $v";
    }

    print $q->end_html;
    exit;
CGI::Carp
No replies — Read more | Post response
by redcloud
on Sep 13, 2000 at 20:56

    If you're trying to debug a CGI script, CGI::Carp could be your best friend. It makes getting error messages from CGI scripts with a browser practically as easy as from other scripts at the command line.

    My favorite feature is also the easiest to use. Import the special symbol fatalsToBrowser like this:         use CGI::Carp qw(fatalsToBrowser);

    Now, if you have a syntax error in your script that would abort execution due to compilation errors, instead of getting an uninformative server error from your HTTP server, you get a message telling you that there are compilation errors. If I used perl -c more often, I probably wouldn't be so attached to this feature.

    On the downside, information about which line the error is on doesn't show up in your browser. It would be nice if it did. However, you can retrieve that information from your HTTP server's error log.

    Once you get past syntax errors, anything that you want to die() or croak() about will go to the browser, too, if you've imported fatalsToBrowser. If you want warnings to go to the browser as well, import the carpout() function and use it to redirect STDERR.

    You can also use carpout() to redirect STDERR to your own log file, instead of the server's error log. Here's the example from perldoc CGI::Carp:

    BEGIN {
        use CGI::Carp qw(carpout);
        open(LOG, ">>/usr/local/cgi-logs/mycgi-log")
            or die("Unable to open mycgi-log: $!\n");
        carpout(LOG);
    }

    These conveniences make CGI programming a lot easier, and they make CGI::Carp one of my favorite modules.

File::Spec
No replies — Read more | Post response
by tye
on Sep 13, 2000 at 19:35

    File::Spec is the long-awaited standard method for doing common tasks with file names and paths (file specifications or file specs) in a way that is portable between different operating systems.

    The Good

    What it can do:

    • $cpath = File::Spec->canonpath( $path );
    • $dirpath = File::Spec->catdir( $dir1, $dir2, $dir3 );
    • $filepath = File::Spec->catfile( $dir1, $dir2, $dir3, $file );
    • $curdir= File::Spec->curdir();  # "." on Unix
    • $nul= File::Spec->devnull();    # "/dev/null" on Unix
    • $root= File::Spec->rootdir();   # "/" on Unix
    • $tmpdir= File::Spec->tmpdir();  # "/tmp" or $ENV{TMPDIR}, etc.
    • $updir= File::Spec->updir();    # ".." on Unix
    • @list= File::Spec->no_upwards( @list ); # Strips "." and ".."
    • $ignore= File::Spec->case_tolerant();      # Returns false under Unix
    • $abs= File::Spec->file_name_is_absolute($path);
    • @path= File::Spec->path();      # Returns $ENV{PATH} as an array.
    • ($volume,$dirs,$file)= File::Spec->splitpath( $path [, $no_file ] );
    • @dirs= File::Spec->splitdir( $dirs );
    • $path= File::Spec->catpath( $vol, $dirs, $file );
    • $relpath= File::Spec->abs2rel( $path [, $base ] );
    • $abspath= File::Spec->rel2abs( $path [, $base ] );
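    A short sketch tying a few of these together; the directory and file names are invented. On Unix catfile() joins with "/", while on other platforms the native separator is used.

```perl
use strict;
use warnings;
use File::Spec;

# Build a path portably (names invented for illustration)
my $path = File::Spec->catfile( 'reports', '2000', 'sep', 'summary.txt' );
print "$path\n";               # reports/2000/sep/summary.txt on Unix

# splitpath() undoes the join: volume (empty on Unix), directories, file
my ( $vol, $dirs, $file ) = File::Spec->splitpath( $path );
print "file part: $file\n";    # file part: summary.txt

# updir() gives a portable ".."
print File::Spec->catdir( File::Spec->updir, 'logs' ), "\n";   # ../logs on Unix
```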

    The Bad

    The documentation isn't OS-independent so you have to read the documentation for each OS-specific component of File::Spec. Start with perldoc File::Spec then perldoc File::Spec::Unix (since that part of the module is the most complete).

    Not all methods are available on all platforms.

    This module isn't available for even slightly old versions of Perl. Until and unless that changes, you may want to back-port the functionality to older versions of Perl yourself so that your code will still port to different OSes.

    The Ugly

    Getting File::Spec functionality for old versions of Perl in a portable, robust manner. I hope to add more details on this later.

            - tye (but my friends call me "Tye")
Data::Flow
No replies — Read more | Post response
by knight
on Sep 13, 2000 at 19:24

    Description

    The Data::Flow module allows you to specify a hash of "recipes" that fetch or manipulate data. Each recipe has a name (the hash key), which you can use to set the input or fetch the output value of the recipe using the Data::Flow set and get methods. A recipe can depend on other prerequisite recipes, which means that the input of one recipe can be taken from the output of other recipes. You can use this to create a hierarchy (or other structure) of dependent recipes.

    The cool thing is that you then call for a specific type of output, and the data flows from its input recipes through all of the dependent recipes to your output, without you having to code the calling structure directly. This allows for a lot of flexibility, because you can manipulate your calling structure by changing your hash recipes and dependencies, not by having to find all of the places that subroutine X is called in your code.

    Why should you use it?

    The Data::Flow module would be especially appropriate if your data can be processed in a lot of complicated and inter-related ways, and you want a lot of flexibility in how you'll want it processed, now or in the future.

    Why should you NOT use it?

    For straightforward processing of your data, the recipe structure of Data::Flow can be overkill. It's not as intuitively obvious as just hacking the code.

    Related Modules

    The C::Scan module uses the Data::Flow module. (Both were written by Ilya.)

    Example

    The Data::Flow documentation provides an okay example, but looking at the C::Scan module will teach you more about using Data::Flow powerfully.
strict.pm
3 direct replies — Read more / Contribute
by tye
on Sep 13, 2000 at 18:46

    There are (currently) three options for "strictness", though you should usually use all three by placing:

    use strict;
    near the top of each of your Perl files. This will help to make your source code easier to maintain.

    use strict "vars"

    If you use strict "vars", then you will get an error if you ever use a variable without "declaring" it. You can "declare" a variable via:

    • use vars qw( $scalar @array %hash );
    • my( $scalar, @array, %hash );
    • our( $scalar, @array, %hash ); (only for Perl 5.6 and higher).
    • Having the variable imported from a module (not common).
    • Including the package name in the variable name: $main::scalar

    Detection of undeclared variables happens at compile time.
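    A small sketch of this in action (the variable names are invented). The string form of eval compiles its argument, so the compile-time error can be caught and inspected in $@:

```perl
use strict;
use warnings;

# An undeclared variable is a compile-time error under strict "vars"
my $compiled = eval 'use strict "vars"; $undeclared = 1; 1';
my $error    = $compiled ? '' : $@;
print $error;   # Global symbol "$undeclared" requires explicit package name ...

# The same code is fine once the variable is declared with my()
my $ok = eval 'use strict "vars"; my $declared = 1; 1';
print "declared form compiled fine\n" if $ok;
```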

    use strict "subs"

    If you use strict "subs", then you will get an error for most uses of "barewords". Without this strictness, Perl will often interpret an unadorned identifier as a string:

    print " ", Hello, ", ", World::ALL, "!\n"; # prints " Hello, World::ALL!"

    It is good to use this strictness because forgetting to "adorn" an identifier is a common mistake. Perhaps you meant to write:

    print " ", Hello(), ", ", $World::ALL, "!\n";

    There are lots of ways to adorn an identifier so that it won't be a bareword:

    • Put it inside quotes or any of Perl's many other quoting operators.
    • Put $, @, %, or & in front of it.
    • Put parentheses after it (making it a function call).
    • Put : after it (making it a label).

    And there are several ways you are expected to use barewords that will not be complained about even if you use strict "subs":

    • Reserved words (of course): print if "a" le $_ and not -s $_;
    • File handles: open(BAREWORD,"<$file"); print BAREWORD "Text\n";
    • Hash keys: $hash{bareword}
    • Hash keys in front of =>: %hash= ( bareword => "value" );
    • Declared subroutines (where use strict "subs" got its name). You can declare a subroutine several ways:
      • Import from a module:
        use Fcntl qw( LOCK_EX LOCK_NB ); flock FILE, LOCK_EX|LOCK_NB or die "File already locked"
      • Just define the subroutine before you use it:
        sub Log { warn localtime(), ": ", @_, "\n"; } Log "Ready.";
      • Predeclare the subroutine before you use it and then define it elsewhere:
        sub Log; Log "Ready."; sub Log { warn localtime(), ": ", @_, "\n"; }
        Note that you aren't protected from naming your subroutine "log" instead of "Log", which would result in your trying to take the natural logarithm of "Ready." in both of the examples above, because just declaring a subroutine doesn't override built-in routines like log(). So it is a good idea to name subroutines with mixed letter case. [You can also invoke your subroutine named "log" using &log("Ready."), but trying &log "Ready." won't work even if you have predeclared sub log, because using & without parentheses is a special form of function invocation that takes no arguments, reusing the current values in @_ instead.]
    • Package names used to invoke class methods:
      $o= My::Package->new();
      Here My::Package is usually a bareword. Note that this is a bit of an ambiguous case because you could have a subroutine called sub My::Package and the above code would be interpreted as:
      $o= My::Package()->new();
      which is why, for Perl5.6 and later, you may wish to write:
      $o= My::Package::->new();
      to avoid the ambiguity.

    Detection of barewords happens at compile time. This is particularly nice because you can make a policy of declaring many of your subroutines before you use them (especially constants, which are usually imported from modules), then call them as barewords (no parens and no &), and Perl will detect typos in these names at compile time for you.
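    A small sketch of that compile-time checking (the constant name MAX_RETRIES is invented for illustration):

```perl
use strict;
use warnings;

# An undeclared bareword is a compile-time error under strict "subs"
my $ok_bareword  = eval 'use strict "subs"; my $x = MAX_RETRIES; 1';
my $bareword_err = $ok_bareword ? '' : $@;
print $bareword_err;   # Bareword "MAX_RETRIES" not allowed ...

# Once the constant-style sub is declared, the bareword is a checked
# function call, so a typo in its name would also fail at compile time
my $ok_declared = eval
    'use strict "subs"; sub MAX_RETRIES() { 5 } my $x = MAX_RETRIES; 1';
print "declared constant compiled fine\n" if $ok_declared;
```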

    use strict "refs"

    If you use strict "refs", then you will get an error if you try to access a variable by "computing" its name. This is called a symbolic reference. For example:

    use vars qw( $this $that );
    my $varname = @ARGV ? "this" : "that";
    ${$varname} = "In use";   # This line uses a symbolic reference.

    Symbolic references were often used in Perl4. Perl5 has real references (often created with the backslash operator, \) that should be used instead (or perhaps you should be using a hash and computing key names instead of computing variable names). Perl5 also has lexical variables (see the my operator) that can't be accessed via symbolic references. Catching symbolic references is good because a common mistake is having a variable that should contain a real reference but doesn't. Dereferencing such a variable might silently fetch you garbage without this strictness.

    Detection of using symbolic references happens at run time.

    If you have one of those truly rare cases where you need to use symbolic references in Perl5, then you can declare no strict "refs" for just that chunk of code or use eval.
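    A small sketch of all three cases (the variable names are invented): the symbolic reference dies at run time, the hard reference works, and no strict "refs" lifts the check for one block:

```perl
use strict;
use warnings;

our $hits;                    # an ordinary package variable (invented name)
my  $name = "main::hits";     # a computed (symbolic) name for it

# Under strict "refs" the symbolic dereference dies at run time
eval { ${$name} = 1 };
my $err = $@;
print $err;   # Can't use string ("main::hits") as a SCALAR ref ...

# A real (hard) reference created with \ works fine
my $count = 0;
my $ref   = \$count;
$$ref     = 1;
print "via hard ref: $count\n";

# For the truly rare case, turn the check off for just one block
{
    no strict 'refs';
    ${$name} = 2;
}
print "via symbolic ref: $hits\n";
```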

    Updated to cover a few more cases.

            - tye (but my friends call me "Tye")
Bone::Easy
3 direct replies — Read more / Contribute
by Guildenstern
on Sep 13, 2000 at 18:18

    I was advised (jokingly, I'm sure) that I should write a review of this module that's shown up on CPAN recently. I've done my best to bring you the best review I could scrape together.

    Bone::Easy does nothing more than generate random pickup lines. What it produces ranges from the simple to the outright explicit. (Let's just say that I'm glad that nobody was looking over my shoulder when I tested it.)

    In order to properly test the module, I tried some of the tamer lines on coworkers (and some of the not so tame on my fiancee). The module works as advertised - guaranteed to get you slapped.

    One interesting aspect is that the rules portion (Bone::Easy::Rules) is not GPLed - commercial use requires a fee. Just exactly who would use this module (or why) for commercial purposes is beyond me.

    I'm not sure what the intended context of this module is - do you run a few iterations before hitting the club for ideas or do you call over that cute girl in the computer lab to read your screen? Either way, you'll be left with quite an impression - hand shaped, right across the cheek.

Net::FTP
6 direct replies — Read more / Contribute
by vaevictus
on Sep 13, 2000 at 17:25

    Net::FTP

    Description

    Net::FTP, part of the Libnet install, implements RFC959, the FTP protocol.

    Who should use it?

    • Anyone wishing to transfer files to an FTP server
    • Anyone wishing to transfer data from STDIN to an FTP server
    • New Perl Scripters who need to practice with OO interfaces

    What are the drawbacks or problems?

    • It does not allow you to upload scalar data as a file
    • It does not allow you to use streams other than STDIN

    Example

    #!/usr/bin/perl -w
    use Net::FTP;

    my $destserv = "ftp.perlmonks.org";
    my $destuser = "root";
    my $destpass = "joph";
    my $file     = "yourmom.jpg";

    $ftp = Net::FTP->new($destserv) or die "error connecting\n";
    $ftp->login($destuser, $destpass);
    $ftp->binary();
    $ftp->get($file) or die "error downloading\n";
    $ftp->quit();
GD
1 direct reply — Read more / Contribute
by Jouke
on Sep 13, 2000 at 08:37
    I've been using the GD module for a few months for creating dynamic images for our website, and I must say: I couldn't think of anything better

    GD offers a set of functions to draw lines, points, brushes, text and use existing imagefiles, and offers output functions to output the image as JPEG or PNG (GIF support has been removed due to the fileformat patent-thing :( ).

    Some people have also written subclasses like GD::Graph3d (I believe), which offers a nice, clean interface to draw piecharts, linegraphs and bargraphs. I've never really used these though.

    Like the POD documentation says: " GD.pm is a port of Thomas Boutell's gd graphics library.", and I'm happy that Lincoln D. Stein ported it!

    Jouke.
Lingua::Ispell
No replies — Read more | Post response
by jeffa
on Sep 13, 2000 at 04:22

    (module written by JDPORTER)

    This module simply encapsulates the Unix utility ispell, a text file spell checker that was first written by Ralph E. Gorin in 1971.

    The Good

    The great thing about Lingua::Ispell is the ability to incorporate a fully functional spell checker into an application - great for finding spelling suggestions for any CGI search engine. You can specify a dictionary file for the module to use; on my system it defaulted to /usr/dict/words. You can also add words to the dictionary file and save it, all via the module's functions.

    To use Lingua::Ispell, you simply use the module and pass the text you wish to spell check to the spellcheck function. This function will break the text up into words and return a list of hashes containing at least two keys, term and type. Term is the word being checked, and type is one of six types the module uses to identify different possibilities. Out of these six types, I found only two of them to be useful: miss and none. If the type is none, then the term is not found in the dictionary file and no corrections were found. If the type is miss, then an additional array that contains spelling corrections for that term is made available, named misses.

    Example from the perldoc:

    use Lingua::Ispell qw(spellcheck);

    for my $r ( spellcheck( "ys we ave no banans" ) ) {
        if ( $r->{'type'} eq 'miss' ) {
            print "'$r->{'term'}' was not found in the dictionary;\n";
            print "Near misses: @{$r->{'misses'}}\n";
        }
        elsif ( $r->{'type'} eq 'none' ) {
            print "No match for term '$r->{'term'}'\n";
        }
    }

    The Bad

    You have to specify where the ispell executable is located - not really bad, but it does affect the portability of your application - so keep this in mind when you have to move your app to another computer.
    $Lingua::Ispell::path = '/usr/bin/ispell';
    As a matter of fact, you will find that the above example probably will not work on your box without this line.

    The Not So Ugly Code

    Here is an example of Lingua::Ispell in action - a CGI form that allows the user to enter some text for spell checking. Remember to specify the location of ispell on your box.
    #!/usr/bin/perl
    use strict;
    $|++;

    use Lingua::Ispell qw(:all);
    use CGI qw(:standard);

    # override the default path - Your Mileage May Vary
    $Lingua::Ispell::path = '/usr/bin/ispell';

    my $cgi = new CGI;
    print $cgi->header;

    if (my $query = $cgi->param('query')) {
        &print_form;
        &print_corrections($query);
    }
    else {
        &print_form;
    }

    # thanks to Randall for this slick dereferencing trick
    sub print_form {
        print <<_FORM_;
<H1 align=center>Spell Checker</H1>
<HR align=center>
@{[$cgi->startform('POST',$cgi->script_name)]}
<P>
Enter Text To Spell Check: @{[$cgi->textfield('query')]}
@{[$cgi->submit('Go')]}
@{[$cgi->endform]}
<P>
<HR>
_FORM_
    }

    sub print_corrections($) {
        my $query = shift;
        print "Results for <i>'$query'</i> :<p>";
        for my $result (spellcheck($query)) {
            my $term = "<font color=red>$result->{'term'}</font>";
            if ($result->{'type'} eq 'miss') {
                print "'$term' was not found in the dictionary,<br>\n";
                print "<u>Near misses</u>: ";
                print join(',', @{$result->{'misses'}}), "<p>\n";
            }
            elsif ($result->{'type'} eq 'none') {
                print "No match for term '$term'.<p>\n";
            }
        }
    }
SNMP
1 direct reply — Read more / Contribute
by Nooks
on Sep 13, 2000 at 00:52

    I used this module to write a testing suite for an SNMP-enabled application written by my company: it worked well and I was able to do what I wanted with a minimum of fuss.

    One thing that impressed me was that I was able to construct the appropriate objects necessary to do an SNMP get on the command-line, or take the care necessary to do it right, in a script.

    I do seem to remember having trouble with the names of sysDescr values which I had to work around, but that may well have been my own misunderstanding.

    The module also lacked a copying constructor; this hurt when I wanted to make a copy of an SNMP::Varbind object: I had to resort to a simple (i.e. non-deep) copy and hope it worked.

    IIRC, the module also comes packaged with the UC Davis SNMP code, which is pretty handy.