Have you checked on CPAN? There's a bunch of modules to help you read PDFs.
------
We are the carpenters and bricklayers of the Information Age.
Please remember that I'm crufty and crochety. All opinions are purely mine and all code is untested, unless otherwise specified.
| [reply] |
I did check CPAN, but it only has modules to create or manipulate PDFs, not to simply grab the content off the web. To be precise, it's the content I'm bothered with: I need the text each time, as I am working on information retrieval and parallel texts.
cheers!
| [reply] |
Any chance we could have a look at your program? If it is too long, perhaps put it on your scratchpad?
CountZero "If you have four groups working on a compiler, you'll get a 4-pass compiler." - Conway's Law
| [reply] |
Have a look at merlyn's WWW::Mechanize example on CPAN here, under the title 'get-despair, by Randal Schwartz'. Randal's example sucks down all the pictures; you only need a minor modification to suck down HTML and PDFs with the mirror method.
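A minimal sketch of that idea, assuming a hypothetical start page and using WWW::Mechanize's find_all_links plus the mirror method it inherits from LWP::UserAgent (which only re-fetches a file when the server copy is newer than the local one):

```perl
#!/usr/bin/perl
use strict;
use warnings;
use WWW::Mechanize;

my $mech = WWW::Mechanize->new( autocheck => 1 );
$mech->get('http://example.com/papers/');   # hypothetical start page

# Pick out only the PDF links on the page.
for my $link ( $mech->find_all_links( url_regex => qr/\.pdf$/i ) ) {
    my $url = $link->url_abs;
    ( my $file = $url->path ) =~ s{.*/}{};  # last path component as filename

    # mirror() saves the PDF to disk, skipping unchanged files.
    $mech->mirror( $url, $file );
    print "Saved $file\n";
}
```

Swap the url_regex for a looser pattern if you also want HTML and images, as in Randal's original.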
| [reply] |
Do you just want to download the PDFs, or do you want to follow the links in PDF documents as well? | [reply] |
Yes, I want to follow the links in PDFs as well, but the spider does this already. The problem is simply grabbing the PDF page itself, so I also have a hard copy.
With HTML it works fine: given a start link, it scours through the links and then nabs all the pages it gets to. All links already spidered get put into a hash, so it doesn't go back there twice. I will give you guys the code. It is long, so I can probably just show you the bit doing the job for HTML. It's 2:33am right now, and I still haven't got further, so bed beckons!
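The scheme described above (a queue of links plus a hash of already-spidered URLs, saving non-HTML pages such as PDFs to disk) might be sketched like this; the start URL and filename logic are placeholders, not the poster's actual code:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use WWW::Mechanize;

my %seen;                                  # URLs already spidered
my @queue = ('http://example.com/');       # hypothetical start link
my $mech  = WWW::Mechanize->new( autocheck => 0 );

while ( my $url = shift @queue ) {
    next if $seen{$url}++;                 # never visit the same link twice
    my $res = $mech->get($url);
    next unless $res->is_success;

    if ( $mech->is_html ) {
        # HTML page: queue every link found on it for later spidering.
        push @queue, map { $_->url_abs->as_string } $mech->links;
    }
    else {
        # Non-HTML (e.g. a PDF): keep a hard copy on disk.
        ( my $file = $url ) =~ s{.*/}{};
        $mech->save_content( $file || 'index.dat' );
    }
}
```

In a real spider you would also restrict the queue to one host and filter out mailto: and fragment links before pushing them.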
Thanks guys!
| [reply] |