nat47 has asked for the wisdom of the Perl Monks concerning the following question:

I'm fairly new to Perl and learning how to use modules. I was able to sign in and scrape some links successfully! However, I haven't figured out how to do a few things, such as scraping from the Top list (all time?) with fetch_links(). The limit for fetch_links() also seems to be capped at 100; I can't get more. Also, I'm not sure whether this module can save time stamps in the hash?

Replies are listed 'Best First'.
Re: Does anyone have experience with Reddit::Client?
by kroach (Pilgrim) on Feb 15, 2015 at 16:33 UTC
    The 100-post limit of fetch_links() is imposed by the Reddit API, so you can't get more in a single request.
      Okay, thanks! So, is there a way to scrape from a given page, so that I can grab the first 100 posts (the first 4 pages) in one run and then make another request to grab pages 5 through 8?

        Yes, there is. One way is to remember the id of the last post in a chunk and pass it as the 'after' parameter to fetch_links(). Here is an example that fetches post titles in chunks of 25 (any size up to 100 will work):

        use strict;
        use warnings;
        use feature 'say';
        use Reddit::Client;

        my $subreddit = '/r/perl';
        my $limit     = 25;

        my $reddit = Reddit::Client->new(user_agent => 'MyApp/1.0');

        my $last_post;
        foreach my $page (1 .. 8) {
            # Pass the fullname of the last post we saw as 'after' so
            # the API continues the listing where the last page ended.
            # On the first iteration 'after' is undef and we get the
            # start of the listing.
            my $links = $reddit->fetch_links(
                subreddit => $subreddit,
                limit     => $limit,
                after     => $last_post,
            );

            # Stop early if the listing has run out of posts.
            last unless @{ $links->{items} };

            foreach my $link (@{ $links->{items} }) {
                say $page, ': ', $link->{title};
            }
            $last_post = $links->{items}->[-1]->{name};
        }
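
        On the timestamp question from the original post: the Reddit API includes a created_utc field (a Unix epoch time, UTC) with every link, and the items returned by fetch_links() expose their API fields the same way as title and name above. A minimal sketch under that assumption (worth verifying against your installed Reddit::Client version), storing each post's timestamp in a hash keyed by its fullname:

        use strict;
        use warnings;
        use feature 'say';
        use POSIX 'strftime';
        use Reddit::Client;

        my $reddit = Reddit::Client->new(user_agent => 'MyApp/1.0');

        my $links = $reddit->fetch_links(subreddit => '/r/perl', limit => 25);

        my %post_times;    # post fullname => epoch timestamp
        foreach my $link (@{ $links->{items} }) {
            # created_utc comes straight from the Reddit API's link data.
            $post_times{ $link->{name} } = $link->{created_utc};
            say strftime('%Y-%m-%d %H:%M:%S', gmtime $link->{created_utc}),
                '  ', $link->{title};
        }

        From there you can page with 'after' exactly as in the example above and accumulate %post_times across requests.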