H4z3 has asked for the wisdom of the Perl Monks concerning the following question:

I am trying to open a webpage, grab a link from that webpage (opening that link in the same session), parse the data from it, and post it into the form on the original page. My problem is that the file I am trying to open is random for each session, so every time I try to open it and grab the source it says the page can't be found. If anyone understands what I mean, do you have any ideas of what to do? Thanks in advance

Replies are listed 'Best First'.
Re: Cookies and Sessions
by ww (Archbishop) on Sep 18, 2007 at 19:57 UTC

    Welcome to the Monastery!

    But as to your post, < ...sigh >

    Please...

    1. read How do I post a question effectively?
    2. Show us the RELEVANT code (inside <code> tags)
    3. Update your question so we can tell what you're asking (and include the actual error: is it a 404?)

    And, just as a random thought, which may or may not be helpful, are you sure the page exists where you're looking for it?

Re: Cookies and Sessions
by pemungkah (Priest) on Sep 18, 2007 at 20:17 UTC
    Your statement is pretty vague, I'm afraid, but I'll take a shot.

    You might want to try WWW::Mechanize to get the original page; you can then use follow_link(text=>$link_name) to access the link. If there's a question of cookie management, Mech will handle that for you.

    This guess assumes that the name of the link you want to follow is constant, even though the URL is not (this is a wild guess based on your description).

    Go ahead and post whatever code you have (you can change the link to something else if you don't want to specify what site you're scraping). It'll make it easier for us to see what you're trying to do. (English descriptions of processes are notoriously poor.)
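
    As a starting point, the flow above might look something like this. The URL, the link text, and the form field names here are all made up — substitute your own. Mech keeps one cookie jar for the life of the object, so the randomly named page is requested inside the same session as the first fetch:

    <code>
    #!/usr/bin/perl
    use strict;
    use warnings;
    use WWW::Mechanize;

    # autocheck => 1 makes Mech die on HTTP errors (e.g. a 404),
    # which is handy while you're debugging the "page not found" issue.
    my $mech = WWW::Mechanize->new( autocheck => 1 );

    # Hypothetical starting page -- replace with the real one.
    $mech->get('http://example.com/start');

    # Follow the link by its constant visible text; the URL it
    # points at can change every session without breaking this.
    $mech->follow_link( text => 'Session data' );

    # Grab and parse the linked page's source.
    my $data = $mech->content;
    # ... extract what you need from $data here ...

    # Return to the original page and fill in its form.
    $mech->back;
    $mech->submit_form(
        form_number => 1,                    # first form on the page
        fields      => { answer => $data },  # hypothetical field name
    );
    </code>

    If the link's text changes too but its URL follows a pattern, follow_link also accepts url_regex => qr/.../ instead of text.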

      Sorry guys, I should have read that before; didn't mean to make such a poor entrance. I will take a look at WWW::Mechanize and update my post with some code. Thanks