There are at least two ways to approach this.
The first is to use WWW::Mechanize::Chrome, which drives a real browser, but headless (without the GUI), from inside your script. With it you can dive into the fetched page's DOM and extract anything you like, including those divs that you don't see with view-page-source because they are fetched later via javascript/ajax.
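A minimal sketch of that approach, based on the module's synopsis; the url and the CSS selector are placeholders for whatever your target page actually uses:

    use strict;
    use warnings;
    use Log::Log4perl qw(:easy);
    use WWW::Mechanize::Chrome;

    Log::Log4perl->easy_init($ERROR);   # keep the driver's logging quiet

    my $mech = WWW::Mechanize::Chrome->new(
        headless => 1,                  # run Chrome without a visible window
    );
    $mech->get('https://example.com/page-with-ajax');
    $mech->sleep(2);                    # crude wait for the javascript/ajax to finish

    # content() returns the *rendered* DOM, including nodes added by ajax,
    # unlike what view-page-source shows you
    my $html = $mech->content;

    # or query the live DOM directly; 'div.results' is a made-up selector
    for my $node ( $mech->selector('div.results') ) {
        print $node->get_attribute('outerHTML'), "\n";
    }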
The second is to open the site in your browser and open the developer tools (Firefox has them; other browsers have similar functionality). Go to the Network tab, select XHR and reload the page. You will see all the data fetched via ajax, and you will see where that data comes from: urls just like the one you tried to download. Copy one of those requests as curl (it's on the right-click menu somewhere) and you can see exactly what the url is and what its parameters are. Now, note the url, its parameters, whether it is a POST or a GET, and what request headers it carries. It's easy to translate those into LWP::UserAgent.
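Once you have noted those details, replicating the XHR with LWP::UserAgent looks roughly like this. The url, parameters and headers below are invented for illustration; substitute whatever the network tab showed you:

    use strict;
    use warnings;
    use LWP::UserAgent;
    use JSON::PP qw(decode_json);   # core module, if the response is JSON

    my $ua = LWP::UserAgent->new( agent => 'Mozilla/5.0' );

    # a hypothetical POST endpoint discovered in the XHR list;
    # extra key/value pairs before Content are sent as request headers
    my $res = $ua->post(
        'https://example.com/api/data',
        'X-Requested-With' => 'XMLHttpRequest',
        'Referer'          => 'https://example.com/the-page',
        Content            => { id => 42, format => 'json' },
    );
    die 'request failed: ' . $res->status_line unless $res->is_success;

    my $data = decode_json( $res->decoded_content );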
Edit: converting a beast of a curl command line to LWP::UserAgent can be done easily with Corion's curl2lwp (see http://blogs.perl.org/users/max_maischein/2018/11/curl2lwp---convert-curl-command-line-arguments-to-lwp-mechanize-perl-code.html)
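For example, you feed the copied curl arguments to the curl2lwp script (shipped with the HTTP::Request::FromCurl distribution) and it prints the equivalent Perl code. The command below is a guess at a typical invocation; the exact flags accepted may differ, so check the script's documentation:

    curl2lwp -X POST -H 'X-Requested-With: XMLHttpRequest' \
        'https://example.com/api/data'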