Here's the thing: I've been playing around with some basic website spidering/parsing, looking for interesting things. However, I only want to parse the actual content, and most of the sites I'm looking at (typically news-type sites: Reuters, The Register, that sort of thing) have lots of generic content on every page, such as titles, menus and so forth. So the question is: how do you ensure you get just the actual content and not the layout? I don't think there's any real definitive answer for this sort of thing, which is why I'm posting in Meditations, but perhaps someone will surprise me. The only solution I've come up with so far is to visit multiple pages, format them all the same way, and then diff them to find the common features, but this sounds a tad buggy and hard to do. Anyone have a better idea?
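The "diff multiple pages" idea above can be sketched quite simply: text blocks that repeat across pages from the same site are probably layout (menus, footers), while blocks unique to one page are probably its real content. Here's a minimal, hedged illustration in Python (rather than Perl, but the approach translates directly to something like HTML::TokeParser); the two inline HTML strings stand in for fetched pages, and a real spider would fetch pages over HTTP and probably use a frequency threshold across many pages rather than an exact two-page intersection:

```python
# Sketch only: blocks appearing on more than one page are treated as
# layout; blocks unique to a page are treated as its content.
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text chunks, skipping <script> and <style>."""
    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip = 0  # depth inside script/style tags

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

def text_blocks(html):
    parser = TextExtractor()
    parser.feed(html)
    return parser.chunks

# Stand-ins for two pages spidered from the same site.
page_a = "<html><body><div>Site Menu</div><p>Story about A.</p><div>Footer</div></body></html>"
page_b = "<html><body><div>Site Menu</div><p>Story about B.</p><div>Footer</div></body></html>"

blocks_a = text_blocks(page_a)
common = set(blocks_a) & set(text_blocks(page_b))   # shared layout
content = [b for b in blocks_a if b not in common]  # page-specific text
print(content)  # ['Story about A.']
```

The buggy part the post worries about shows up in practice as near-duplicate blocks (dates in menus, "most read" sidebars that change slightly per page), which is why a fuzzy or frequency-based comparison across many pages tends to work better than an exact diff of two.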