skx has asked for the wisdom of the Perl Monks concerning the following question:
As part of an online forum I'm setting up, I need to display user-submitted content, which is stored in a database.
Because the site uses cookies for authentication, and as a general preventative measure, I wish to strip out dangerous tags, JavaScript, images, etc.
I think I would be safe just leaving a minimal subset of HTML, such as the tags P, B, I, and A (with only a subset of attributes, HREF and TITLE for example).
I realise that a regular-expression approach is unlikely to be workable, so my two choices seem to be HTML::Sanitizer and HTML::Scrubber. Both of these will do the job without too much effort. (I'm still surprised this isn't done here on the home nodes; maybe it's a hard thing to do efficiently? Either that, or it's not yet been considered important enough.)
As they do a real parse of the HTML, they rely upon the various parsing modules: HTML::Tree and HTML::Parser respectively.
Is there another approach I'm missing, with fewer dependencies? Or a simpler system I could use instead?
Whilst I can use either of the two packages above, I'm keen on using something that's less hungry, so that I can keep it up to date on my Debian Stable webhost.
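For what it's worth, the whitelist described above is quite compact to express with HTML::Scrubber. A minimal sketch (the sample input string and the `'*' => 0` catch-all are just illustration; the allow/rules calls follow the module's documented interface):

```perl
use strict;
use warnings;
use HTML::Scrubber;

# Whitelist only p, b, i outright; allow a, but only with href and title.
my $scrubber = HTML::Scrubber->new( allow => [qw( p b i )] );
$scrubber->rules(
    a => {
        href  => 1,    # keep href
        title => 1,    # keep title
        '*'   => 0,    # drop every other attribute (onclick, style, ...)
    },
);

my $dirty = '<p>Hi <b>there</b><script>alert(1)</script> '
          . '<a href="http://example.com/" onclick="steal()">link</a></p>';
my $clean = $scrubber->scrub($dirty);
print $clean, "\n";
```

Any tag not in the whitelist (here, `script`) is stripped, and any attribute not listed in the per-tag rules (here, `onclick`) is dropped, which is exactly the cookie-theft surface you want gone.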
Replies are listed 'Best First'.

Re: Sanitizing HTML
by simon.proctor (Vicar) on Sep 29, 2004 at 13:37 UTC

Re: Sanitizing HTML
by gellyfish (Monsignor) on Sep 29, 2004 at 13:57 UTC

Re: Sanitizing HTML
by dragonchild (Archbishop) on Sep 29, 2004 at 13:31 UTC
    by JediWizard (Deacon) on Sep 29, 2004 at 13:47 UTC
    by skx (Parson) on Sep 29, 2004 at 13:37 UTC

Re: Sanitizing HTML
by pingo (Hermit) on Sep 29, 2004 at 15:54 UTC

Re: Sanitizing HTML
by ccn (Vicar) on Sep 29, 2004 at 13:39 UTC
    by merlyn (Sage) on Sep 29, 2004 at 16:05 UTC
    by helgi (Hermit) on Oct 07, 2004 at 12:49 UTC

Re: Sanitizing HTML
by Anonymous Monk on Sep 30, 2004 at 09:24 UTC