As part of an online forum I'm setting up I need to display user-submitted content, which is stored in a database and then rendered back to visitors.
Because the site uses cookies for authentication, and as a general preventative measure, I wish to strip out dangerous tags, JavaScript, images, etc.
I think I would be safe allowing only a minimal subset of HTML, such as the tags P, B, I and A (with only a subset of attributes, HREF and TITLE for example).
I realise that a regular expression approach is unlikely to be workable, so my two choices seem to be HTML::Sanitizer and HTML::Scrubber. Both of these will do the job without too much effort. (I'm still surprised this isn't done here on the home nodes; maybe it's a hard thing to do efficiently? Either that or it's not yet been considered important enough.)
As they do a real parse of the HTML, they rely upon the parsing modules HTML::Tree and HTML::Parser respectively.
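For what it's worth, this is roughly how the HTML::Scrubber route could look with the whitelist described above. It's only a sketch based on the module's documented interface, and the sample markup is made up:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use HTML::Scrubber;

    # Deny everything by default, then allow only the tags mentioned above.
    my $scrubber = HTML::Scrubber->new(
        allow => [qw( p b i a )],
    );
    $scrubber->default(0);      # strip tags that aren't whitelisted
    $scrubber->comment(0);      # drop HTML comments
    $scrubber->rules(
        a => {
            href  => 1,         # keep href and title on links...
            title => 1,
            '*'   => 0,         # ...but nothing else (onclick, style, etc.)
        },
    );

    # Hypothetical user input, just to show the call.
    my $dirty = q{<p onmouseover="evil()">Hi <script>alert(1)</script><a href="/x" onclick="evil()">link</a></p>};
    print $scrubber->scrub($dirty), "\n";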
Is there another approach I'm missing, with fewer dependencies? Or a simpler system I could use instead?
Whilst I can use either of the two packages above, I'm keen on using something that's less hungry, so that I can keep it up to date on my Debian Stable webhost.
In reply to Sanitizing HTML by skx
| For: | Use: |
|------|------|
| `&` | `&amp;` |
| `<` | `&lt;` |
| `>` | `&gt;` |
| `[` | `&#91;` |
| `]` | `&#93;` |
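If escaping everything is acceptable (i.e. allowing no HTML at all), the table above can be applied with a single substitution and no extra modules. This is just a sketch; escape_html is my own name for the helper:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Minimal escape-everything approach: the character list follows the table above.
    my %escape = (
        '&' => '&amp;',
        '<' => '&lt;',
        '>' => '&gt;',
        '[' => '&#91;',
        ']' => '&#93;',
    );

    sub escape_html {
        my ($text) = @_;
        $text =~ s/([&<>\[\]])/$escape{$1}/g;
        return $text;
    }

    print escape_html(q{<script>alert("xss")</script> [a link]}), "\n";
    # &lt;script&gt;alert("xss")&lt;/script&gt; &#91;a link&#93;

Since a single s///g pass never rescans its own replacement text, the ampersand inserted as part of `&amp;` is not escaped a second time.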