
Re: RSS feeds to most of

by esskar (Deacon)
on Feb 26, 2005 at 20:46 UTC ( #434817=note )

in reply to RSS feeds to most of

Great job, I like it. Is it done on the fly?
How about adding a code tag which includes the code of the item, or a raw tag which includes the item as raw CDATA?

Replies are listed 'Best First'.
Re^2: RSS feeds to most of
by EvdB (Deacon) on Feb 26, 2005 at 21:05 UTC

    It is not done on the fly; there is a daemon which checks for new nodes and adds them to a database. This is the only way to do it without hammering the site. This can mean that the nodes get out of date or end up in the wrong section. Hopefully I'll find a solution to this soon.
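
    The "check for new nodes" step described above could look something like this minimal sketch. Everything here (the `%seen` store standing in for the database, the `new_nodes` helper) is hypothetical, not the actual daemon's code:

    ```perl
    use strict;
    use warnings;

    my %seen;    # node id => 1 once it has been added to the database

    # Given the node ids returned by one poll of the site, return only
    # the unseen ones and remember them, so each poll stores a node at
    # most once.
    sub new_nodes {
        my @ids = @_;
        return grep { !$seen{$_}++ } @ids;
    }
    ```

    A real daemon would run this in a loop with a sleep between polls, replacing `%seen` with the database lookup.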

    As for adding the code etc., I'm keen to keep it all quite simple and to send people back here for the actual nodes. I'm not aiming to replace perlmonks, just to add features that make it more useful.

    Glad you like it.

    --tidiness is the memory loss of environmental mnemonics

      In my ideal world, there would be such a link between the RSS feeds and PM that they'd always be within a few minutes of up-to-date, if not actually always up-to-date. And there'd be a link in the header of the html code which Firefox could use to find out about the RSS feed.

      In my slightly less ideal world, we'd just get the link in the html code to this RSS feed (the one for the current node only, whatever that current node is, although some supernodes may not need it, such as the comment-on node). This would still require a bit of work from the PM developers, although I would hope not much. Something like:

      <link rel="alternate" title="PerlMonks RSS" href="/rss/$nodeid.xml" type="application/rss+xml" />
      would need to be added to the header.

      In my real-as-in-now world, I'd like to express my appreciation of such a service! I just need to figure out how to get this somewhat automated :-)

        It occurs to me that this could best be done with a nodelet. Then the RSS links could be presented for the node, the thread, the author and the section.

        The daemon should never be more than five minutes behind on getting new nodes, although it is very prone to being out of date with respect to the content and possibly the section.

        --tidiness is the memory loss of environmental mnemonics

      Good stuff - well done. I have added this as a live bookmark straight into Firefox.

      You mentioned that your process uses a daemon that checks for new nodes. This means that, firstly, you need to run a daemon and, secondly, you are interrogating PM on a regular basis.

      You could simplify the model by caching the RSS for a particular page and interrogating the cache each time you wanted to serve a page. A cached page could time out after a short period of time (e.g. 10 mins). A cache miss (or timed out page) would initiate a request to the monastery. The result would be cached for next time. This means that when nobody was using the feed, PM wouldn't be hit.
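
      The cache-with-timeout model described above might be sketched like this. The names (`rss_for`, the `$fetch` code ref standing in for the real "ask the monastery" request) are made up for illustration:

      ```perl
      use strict;
      use warnings;

      my $TTL = 10 * 60;    # ten minutes, as suggested above
      my %cache;            # node id => { rss => ..., fetched => epoch seconds }

      # Return the RSS for a node, fetching from the site only on a
      # cache miss or after the cached copy has timed out.
      sub rss_for {
          my ( $id, $fetch, $now ) = @_;
          $now //= time;
          my $hit = $cache{$id};
          if ( !$hit or $now - $hit->{fetched} > $TTL ) {
              $cache{$id} = $hit = { rss => $fetch->($id), fetched => $now };
          }
          return $hit->{rss};
      }
      ```

      With this shape, a feed nobody reads simply ages out of the cache and never touches PM again.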

      Caching can be implemented using a simple file cache with timestamp checking, or something more involved using a database. Either way, you periodically need to clean the cache of expired documents. You would also want to guard against an attack where a malicious user tried to access every node as a feed and therefore used up lots of cache space.
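
      The periodic cleanup plus the guard against cache-filling could be combined into one sweep. This is only a sketch; the directory name, the limits, and `sweep_cache` itself are all hypothetical:

      ```perl
      use strict;
      use warnings;
      use File::Spec;

      my $CACHE_DIR = 'rss_cache';    # hypothetical cache location
      my $TTL       = 10 * 60;        # same ten-minute timeout
      my $MAX_FILES = 1000;           # crude cap against malicious cache filling

      # Delete cache files older than the TTL (by mtime); if the cache
      # is still over the file limit, drop the oldest entries first.
      sub sweep_cache {
          my ($now) = @_;
          $now //= time;
          opendir my $dh, $CACHE_DIR or return;
          my @files = map { File::Spec->catfile( $CACHE_DIR, $_ ) }
              grep { !/^\./ } readdir $dh;
          closedir $dh;

          unlink grep { $now - ( stat $_ )[9] > $TTL } @files;

          my @live = sort { ( stat $a )[9] <=> ( stat $b )[9] }
              grep { -e $_ } @files;
          unlink @live[ 0 .. $#live - $MAX_FILES ] if @live > $MAX_FILES;
      }
      ```

      Run from cron or from the daemon itself, this keeps the cache bounded even if someone walks every node id.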

      This article may be of interest with respect to the database solution.

        I've added a page that describes (quickly) what is going on here. As you can see, I am caching stuff already. It turns out that it is not possible to get the latest nodes from perlmonks purely on demand, as it is not possible to know in advance which nodes are needed for things like generating RSS feeds for threads.

        Thanks for your comments, I'm glad that you like it.

        --tidiness is the memory loss of environmental mnemonics
