in reply to Re: An Unreal Perl Hacker!
in thread An Unreal Perl Hacker!

Whew, artist, it'll be a while before I would be ready for that. Let's kick it around a bit.

As I see it, to accomplish my first-level goals (Re^2: An Unreal Perl Hacker!), at least that of making suggestions that get the newbie's code running, we'd have to build a database of code from nodes, including the poster, the XP rating, the node id, an output or error string, and the code itself. When somebody submits a node for consideration, the engine would go to that node and fetch its code samples. If the code doesn't fail the exclusion criteria, the engine would attempt to run it in an eval block. This would result in either some success or a failure with an error message.
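A minimal sketch of that eval step, assuming we run samples in a string eval and record either the result or the error (the sub name try_code and the record fields are my own invention here, not anything that exists):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Run one fetched code sample in a string eval; capture the result on
# success, or the error string on failure, for storage in the database.
sub try_code {
    my ($code) = @_;
    local $@;
    my $result = eval $code;    # as the engine would do with node code
    return { ok => 0, error => $@ } if $@;
    return { ok => 1, result => $result };
}

my $broken = 'my $x = ; 1';     # deliberate syntax error
my $fine   = '2 + 2';

my $r1 = try_code($broken);
my $r2 = try_code($fine);
print $r1->{ok} ? "ok\n" : "error: $r1->{error}";
print "result: $r2->{result}\n";    # prints "result: 4"
```

In a real engine you'd want Safe.pm or a separate process rather than a bare eval, since node code is untrusted; this just shows the capture step.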

The engine's next step would be to fetch a list of other nodes with the same error message. It would then build a structural model of the code surrounding the error location and compare that to other nodes with errors until it found one that had a similar structure.
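Grouping nodes by "the same error message" probably needs a normalization pass first, since Perl embeds file names, line numbers, and variable names in its diagnostics. A rough sketch, with regexes that are illustrative rather than exhaustive:

```perl
use strict;
use warnings;

# Reduce a Perl error message to a comparable key by stripping the
# parts that vary between nodes: file/line info and quoted identifiers.
sub normalize_error {
    my ($err) = @_;
    $err =~ s/\s+at\s+.*?\s+line\s+\d+\.?//g;  # drop "at FILE line N."
    $err =~ s/"[^"]*"/"..."/g;                 # mask quoted names
    $err =~ s/\s+$//;                          # trim trailing whitespace
    return $err;
}

print normalize_error(
    'Global symbol "$foo" requires explicit package name at foo.pl line 3.'
), "\n";
```

With this, the same diagnostic raised in different files, or inside an eval, hashes to one key the engine can look up.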

Once it had a list of nodes with similar errors and structures, it would then seek out follow-on nodes in the same thread, and attempt to modify the problematic code by mapping the Perl structure changes into the newbie's code.
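Matching "similar structure" between the newbie's code and the candidate nodes could start with something much cruder than a real parse (PPI would be the serious tool): strip identifiers and literals so only the shape of the code remains. Purely a sketch of the idea:

```perl
use strict;
use warnings;

# Collapse a snippet to a structural fingerprint: same-shaped code
# should produce the same string regardless of names and values.
sub fingerprint {
    my ($code) = @_;
    $code =~ s/#.*$//mg;                 # drop comments
    $code =~ s/(['"]).*?\1/STR/g;        # string literals -> STR
    $code =~ s/\b\d+(?:\.\d+)?\b/NUM/g;  # numeric literals -> NUM
    $code =~ s/([\$\@\%])\w+/${1}VAR/g;  # variables -> $VAR, @VAR, %VAR
    $code =~ s/\s+/ /g;                  # collapse whitespace
    return $code;
}

# Different names and values, same structure:
print fingerprint('my $x = 42;'), "\n";
print fingerprint('my $count = 7;'), "\n";
```

Two nodes with matching fingerprints around the error location are candidates for lifting the follow-on node's fix into the newbie's code.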

Note that I'm not saying we can assess the _proper_ functioning of the code, only whether it is sane Perl code. Once this works, I can see enhancing the engine to let the newbie insert preferred output into the request form.

This is a non-trivial challenge, I know, but I think it would be useful, both by itself and as a basis for an expanded problem domain and requirement set.

Replies are listed 'Best First'.
Re^3: An Unreal Perl Hacker!
by chanio (Priest) on Oct 16, 2005 at 02:14 UTC
    That could make another kind of PM Search tool, don't you think?

    Imagine presenting several options of correcting the newbie's code. By choosing one of them, she could read more about that code.

    The next step could be to attach names to those alternatives, and to classify them by those names. That might help searches for such words later.

      The next step could be to attach names to those alternatives, and to classify them by those names. That might help searches for such words later.

      Not to be negative, but, why? The reason this is just sifting old nodes, as opposed to me/us building a hardcoded parser, is that it can be expanded with such things as neural nets to match different kinds of patterns... without us having to step in and create useless things like names. It's the association of patterns that matters, and having some kind of name property would only add overhead. Unless I misunderstood what you meant by names and classifying by names?

      That said, I do envision this as a kind of search tool. Besides returning the 'corrected' (we hope) code, it should give a list of answer-nodes which were relevant (in its estimation).
        I share your point of view. I was only referring to the words that newbies might need to search to learn more about the code viewed. A kind of 'keyword nodelet' that could be used to search for more references about the code exposed.

        It would be great to do with words what del.icio.us does. But that is not related to your project; it is about enhancing the search tools.