Whew, artist, it'll be a while before I'd be ready for that. Let's kick it around a bit.
As I see it, to accomplish my first-level goals (Re^2: An Unreal Perl Hacker!), at least the goal of making suggestions that would get the newbie's code to run, we'd have to build a database of code from nodes, recording the poster, the XP rating, the node id, an output or error string, and the code itself. When somebody submits a node for consideration, the engine would go to that node and fetch its code samples. If a sample does not fail the exclusion criteria, the engine would attempt to run it in an eval block. That would result in either some measure of success or a failure and an error message.
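For the run step, I'm picturing something like the sketch below. Capture::Tiny and the record field names are just my guesses at this point, not a settled design, and a real engine would want to sandbox the eval (Safe.pm, or a separate resource-limited process) rather than run posted code directly:

    use strict;
    use warnings;
    use Capture::Tiny qw(capture);

    # Run one code sample and fold the result into a record shaped
    # roughly like the database row described above. The field names
    # are placeholders.
    sub try_sample {
        my (%node) = @_;    # poster, xp, node_id, code
        my $ok;
        my ( $stdout, $stderr ) = capture {
            $ok = eval "$node{code}\n;1";    # traps compile and runtime errors
        };
        return {
            poster  => $node{poster},
            xp      => $node{xp},
            node_id => $node{node_id},
            code    => $node{code},
            output  => $stdout,
            error   => $ok ? '' : ( $@ || $stderr ),
            success => $ok ? 1 : 0,
        };
    }

The exclusion criteria would of course run before anything gets anywhere near that eval.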
The engine's next step would be to fetch a list of other nodes that produced the same error message. It would then build a structural model of the code surrounding the error location and compare it against those nodes until it found one with a similar structure.
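"Same error message" would take a little massaging, since Perl errors carry file names, line numbers, and identifiers that differ from node to node. A first stab at a signature, purely as a sketch:

    # Normalize an error message so two nodes can be said to have
    # "the same" error despite different files, lines, and names.
    # These regexes are only a starting point.
    sub error_signature {
        my ($error) = @_;
        for ($error) {
            s/\(eval \d+\)/EVAL/g;       # eval frame markers
            s/ at \S+ line \d+\.?//g;    # "at FILE line N."
            s/"[^"]*"/"..."/g;           # quoted identifiers and values
            s/\s+/ /g;
        }
        return $error;
    }

    # Candidate nodes are then the rows sharing a signature, e.g.
    #   SELECT node_id FROM samples WHERE error_sig = ? AND success = 0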
Once it had a list of nodes with similar errors and structures, it would seek out follow-up nodes in the same thread and attempt to repair the problematic code by mapping the structural changes from those corrections into the newbie's code.
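That mapping step is the part I can picture least clearly. The roughest version, sketched below, would diff the old broken sample against its corrected follow-up and surface the changed hunks as a suggestion; Text::Diff would do for that much, and anything smarter, actually rewriting the newbie's code, would probably need PPI to line the structures up:

    use strict;
    use warnings;
    use Text::Diff qw(diff);

    # Surface the repair from an earlier thread as a suggestion.
    # $old_broken and $old_fixed come from the matched node and its
    # follow-up; the output format here is just a placeholder.
    sub suggest_fix {
        my ( $old_broken, $old_fixed ) = @_;
        my $hunks = diff( \$old_broken, \$old_fixed, { STYLE => 'Unified' } );
        return "A node with the same error was repaired like this:\n"
             . $hunks
             . "The analogous change may apply to your code.\n";
    }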
Note that I'm not saying we can assess the _proper_ functioning of the code, only whether it is sane Perl code. I can see enhancing the engine, once this works, to let the newbie include their preferred output in the request form.
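At that point the success test becomes a comparison against the preferred output rather than "did it run at all". Something as simple as this might do, with whitespace collapsing as one plausible notion of "matches":

    # Compare captured output against the newbie's preferred output.
    sub matches_preferred {
        my ( $got, $wanted ) = @_;
        for ( $got, $wanted ) {
            s/\s+/ /g;
            s/^ //;
            s/ $//;
        }
        return $got eq $wanted;
    }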
This is a non-trivial challenge, I know, but I think it would be useful, both by itself and as a basis for an expanded problem domain and requirement set.