in reply to Re: Does anyone have experience with Reddit::Client?
in thread Does anyone have experience with Reddit::Client?

Okay, thanks! So, is there a way to scrape from a given page number, so that I can grab 100 posts (the first 4 pages) and then make another request to grab pages 5 through 8?

Re^3: Does anyone have experience with Reddit::Client?
by kroach (Pilgrim) on Feb 15, 2015 at 21:18 UTC

    Yes, there is. One way is to remember the fullname (the 'name' field) of the last post in a chunk and pass it as the 'after' parameter to fetch_links(). Here is an example fetching post titles in chunks of 25 (though any size up to 100 will work):

    use strict;
    use warnings;
    use feature 'say';

    use Reddit::Client;

    my $subreddit = '/r/perl';
    my $limit     = 25;

    # user_agent is required by Reddit's API rules
    my $reddit = Reddit::Client->new(user_agent => 'MyApp/1.0');

    my $last_post;
    foreach my $page (1 .. 8) {
        my $links = $reddit->fetch_links(
            subreddit => $subreddit,
            limit     => $limit,
            after     => $last_post,
        );

        foreach my $link (@{ $links->{items} }) {
            say $page, ': ', $link->{title};
        }

        # remember the fullname of the last post so the next request
        # picks up where this one left off
        $last_post = $links->{items}->[-1]->{name};
    }
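
    One caveat: if a request ever comes back with an empty 'items' list (say, the subreddit runs out of posts before page 8), the $links->{items}->[-1]->{name} at the bottom of the loop will die under strict with "Can't use an undefined value as a HASH reference". A minimal guard, placed right after the fetch_links() call (a sketch, not tested against the live API), would be:

        last unless @{ $links->{items} };   # stop paging once no posts remain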