Anonymous Monk has asked for the wisdom of the Perl Monks concerning the following question:

Hi. A quick question.

I want to embed Perl in a C++ program (win32 service actually). No big deal, got that working.

Next I want to use a single embedded Perl interpreter from all of the threads in my C++ program. ie: I want to externalise the thread creation and management.

My questions are:

  1. Is it safe to call one PerlInterpreter instance from multiple threads?
  2. If so, what hoops do I need to jump through (if any) from the C++ side?
  3. What considerations/limitations does my perl script (running within the embedded interpreter) have to deal with?

Thanks in advance, and apologies if it's a stupid question.

Phil.


Replies are listed 'Best First'.
Re: Externally managed threads using embedded Perl
by BrowserUk (Patriarch) on Jan 08, 2004 at 03:31 UTC

    The answer varies slightly depending upon which version of perl you are using as a base, but the basic answer to question 1 is "No", which also answers the other two questions.

    From 5.8.0 onward, perl itself uses ithreads ( 1 thread == 1 interpreter ) internally to implement its threading support. There are no internal mechanisms to prevent the inevitable internal corruption that will ensue from calling one interpreter concurrently from more than one thread.

    Prior to 5.8.0, back as far as 5.005, perl supported the old 5.005-style threads, where multiple perl-internal threads shared a single copy of the interpreter; but that model was never designed for external threads calling into the interpreter concurrently, and it is most unlikely that you would achieve good results by trying.


    Examine what is said, not who speaks.
    "Efficiency is intelligent laziness." -David Dunham
    "Think for yourself!" - Abigail
    Hooray!

      OK, thanks for that. I am happy to run with Perl 5.8 or later as a requirement.

      So it seems I do this by having a single interpreter that I perl_clone(my_perl, CLONEf_CLONE_HOST) for each thread, throwing away the clone when the thread ends and/or pulling from a pool of pre-existing clones?

      A good analogy for what this is doing is a basic web server (note: this is not a web server - I'm not that stupid). ie: Short-lived requests serviced over a network connection. Is perl_cloning going to be fast, or do I need to do some sort of thread pooling (which I may do anyway) to get it to perform? (or should I stop being so damn lazy and just test the performance myself ;-))

      Or do I bite the bullet, learn Perl, and try to make it all happen with Perl threads? (I didn't write the script we want to execute.)

      Thanks again,

      Phil

        Reading between the lines of what you've told us, you have a pre-written perl script that you want to be able to run on behalf of networked users on a single machine, with concurrent access, but no sharing of data between the instances? And you aren't a perl programmer :)

        It really will depend on how the pre-existing perl script runs, but am I right in assuming that the script returns its results via stdout?

        If this is the case, cloning an interpreter for each request, or building a pool of clones, would probably work OK. I haven't done enough with embedding -- nothing beyond the simple examples in perlembed -- to be able to predict the performance. Pre-cloning a pool and returning a "busy...try again" message if the pool is fully utilised ought to be fast enough, if the loading isn't too extreme.

        Personally, I would probably use a thread-pool design using threads, or maybe a pre-forking design written in perl using perl's win32 pseudo-fork support, as I find perl so much more productive than C/C++.

