No, the whole idea is that he wants to automatically separate content from material repeated between pages - headers, footers, menus, etc. A proper solution won't care if the design was changed, so long as it has a sufficient number of recent pages to work from.
Here's the node I was talking about:
Imploding URLs
The connection may not be readily apparent, but the problem is essentially the same, only on a much larger scale. You can probably speed comparisons up some by storing all the distinct words in an array and then converting each word to the value of its subscript; you should only need two bytes per word. You can also speed things up by doing detailed comparisons only between pages that haven't had their common material determined yet: if page A and page B have common material x, and page C also contains all of x, then you can be pretty sure C doesn't need a full check. And you can speed things up by starting comparisons with the pages closest in location to the current page - pages that differ only in the query string first, then in the page name, then by a single folder, and so on.
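To make the first and last of those ideas concrete, here's a minimal sketch - in Python rather than Perl, but the idea is language-neutral. The function names and the exact proximity scoring are my own illustration, not anything from the node above: words are interned as small integers (two bytes each while the vocabulary stays under 65536 words), and candidate pages are scored so that query-string-only differences sort before same-folder differences, which sort before anything further away.

```python
# Illustrative sketch (assumed names, not from the original post):
# intern words as small integers, and score URL proximity so the most
# similar pages can be compared first.
from urllib.parse import urlsplit

def make_interner():
    """Return a function mapping each distinct word to a small integer.

    With fewer than 65536 distinct words, every interned word fits in
    two bytes, so page-to-page comparisons become cheap integer
    comparisons instead of string comparisons.
    """
    table = {}

    def intern_word(word):
        if word not in table:
            table[word] = len(table)
        return table[word]

    return intern_word

def proximity(url_a, url_b):
    """Lower score = closer URLs.

    0: same path, differing only in the query string
    1: same folder, different page name
    2+: paths diverge higher up; score grows with the divergence
    """
    a, b = urlsplit(url_a), urlsplit(url_b)
    if a.path == b.path:
        return 0
    dir_a = a.path.rsplit('/', 1)[0]
    dir_b = b.path.rsplit('/', 1)[0]
    if dir_a == dir_b:
        return 1
    # Count how many leading folder segments the two paths share.
    seg_a, seg_b = dir_a.split('/'), dir_b.split('/')
    common = 0
    for x, y in zip(seg_a, seg_b):
        if x != y:
            break
        common += 1
    return 2 + max(len(seg_a), len(seg_b)) - common

intern = make_interner()
words = "the quick brown fox the lazy dog".split()
ids = [intern(w) for w in words]  # repeated words get the same id
```

Sorting the not-yet-determined pages by `proximity(current_url, candidate_url)` gives the comparison order described above; the transitive skip (page C already covered by x) would just be a check against the set of pages whose common material is known before doing the detailed word-level comparison.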