in reply to Re^4: Really, really suffering from buffering
in thread Really, really suffering from buffering

This is almost certainly related to the problem I encountered at Inline C: using stderr segfaults?. See the subthread starting at Re: Inline C: using stderr segfaults?, which discovers that stderr is redefined as a complex macro expansion. stdout probably suffers the same fate.

The most expedient and reliable solution I found was to use &_iob[2] for stderr and &_iob[1] for stdout when calling stdio routines from Inline C.


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
"Too many [] have been sedated by an oppressive environment of political correctness and risk aversion."

Replies are listed 'Best First'.
Re^6: Really, really suffering from buffering
by syphilis (Archbishop) on Nov 22, 2007 at 06:06 UTC
    Hi BrowserUK,

    I had missed that final post of yours to the thread Inline C: using stderr segfaults?. I think you've got down to a level that I'd like to stay well clear of :-)

    Incidentally, as regards *your* thread, did it make any difference if you rewrote test2() to use the perl abstraction layer?
    void test2 ( char* text ) { PerlIO_printf( PerlIO_stderr(), "Got:'%s'\n", text ); }
    As you say, the problems that you faced there may well be related to the problems I've been looking at - though I still have this notion that (at least part of) your problem might arise from the involvement of *2* C runtime libraries.

    As for my particular issue, the following (somewhat kludgy) workaround seems to work reliably without any need to turn on $| ... though I haven't yet tried it on anything other than Win32:
    use warnings;
    use strict;
    use Inline C => Config =>
        INC         => '-IC:/_32/C',
        LIBS        => '-LC:/_32/C -lmylib',
        BUILD_NOISY => 1;
    use Inline C => <<'EOC';
    #include <mylib.h>

    void _foo(PerlIO * stream) {
        FILE * stdio_stream = PerlIO_exportFILE(stream, NULL);
        my_puts(stdio_stream);
        fflush(stdio_stream);
        PerlIO_releaseFILE(stream, stdio_stream);
    }

    void _foo2(PerlIO * stream, SV * suffix) {
        FILE * stdio_stream = PerlIO_exportFILE(stream, NULL);
        my_puts(stdio_stream);
        fflush(stdio_stream);
        PerlIO_releaseFILE(stream, stdio_stream);
        PerlIO_printf(stream, "%s", SvPV_nolen(suffix));
        PerlIO_flush(stream);
    }
    EOC

    for(1 .. 2) {
        foo(*stdout, "\nhello from perl\n");
    }

    for(1 .. 2) {
        foo(*stderr);
        print "\nhello from perl\n";
    }

    sub foo {
        if(@_ == 1)    { _foo($_[0]) }
        elsif(@_ == 2) { _foo2($_[0], $_[1]) }
        else           { die "Wrong no. of args to foo" }
    }
    Cheers,
    Rob
      did it make any difference if you rewrote test2() to use the perl abstraction layer?

      I didn't try...nor will I. I will never opt for adding a layer, when I can achieve my aim by removing one. (Or 3 or 4 :)

      I'm afraid this is a prime example of why I avoid doing anything serious with the P5 sources, and a prime example of what is wrong with source-level macros (and backward compatibility) in general.

      As each new generation of maintainers works on a system, they add new layers of macro wrappers to incorporate their latest thinking, whilst maintaining backward-compatible code. The problem is that as each new layer gets added, about 50% of it is added not for architectural or system design reasons, but 'just in case'.

      With each layer added, the programmers involved are further and further removed from understanding the implications of their additions and further and further away from understanding the final expansions. The result is that you end up with systems that require undocumented and undocumentable understanding of what you can and cannot do; what you must and must not do; in order for things to work.

      Anytime new functionality is required, the only option is to start with a similar, existing piece of functionality that is known to work and make incremental changes to move it toward the desired functionality. It becomes almost impossible to generate new code from scratch because the rules and requirements for combining the layers upon layers of macros are simply undocumentable. It's copy&paste programming at its very worst.

      The best P5 example I know of this is the whole OO/vtable architected memory management layer that is wrapped over the malloc/realloc/free et al. in the win32/*.(h|c) sources. There are so many definitions and redefinitions of malloc that indirect through runtime filled dispatch tables (probably a legacy of reusing preexisting C++ sources and converting them for compilation by C compilers), that it is impossible to consider trying to unwind and simplify them because the patch would be so pervasive to the source tree.
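      The pattern being criticised can be caricatured in a few lines. This is a toy sketch, not the actual win32/*.(h|c) code: malloc is redefined to a macro that indirects through a runtime-filled dispatch table, so the call site no longer tells you which allocator actually runs.

      ```c
      #include <assert.h>
      #include <stdio.h>
      #include <stdlib.h>

      /* A dispatch table of allocator function pointers, filled at
       * runtime; imagine several of these, layered and re-pointed. */
      typedef struct {
          void *(*pMalloc)(size_t);
          void  (*pFree)(void *);
      } alloc_vtable;

      static alloc_vtable g_alloc = { malloc, free };

      /* From here on, every 'malloc' in the source is really an
       * indirect call through the table -- invisible at the call site. */
      #define malloc(n)  (g_alloc.pMalloc(n))
      #define free(p)    (g_alloc.pFree(p))

      int main(void)
      {
          int *p = malloc(sizeof *p);   /* really g_alloc.pMalloc(...) */
          assert(p != NULL);
          *p = 42;
          int value = *p;
          free(p);                      /* really g_alloc.pFree(...) */
          printf("%d\n", value);
          return 0;
      }
      ```

      Even in this tiny example, a reader must know the macro exists before they can say what `malloc(sizeof *p)` does; multiply that by many layers and the "undocumentable" complaint above follows.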

      This is the same basic problem I have with OO methodology, particularly MI, the effects of which can be seen quite clearly in a lot of CPAN modules, including some in the core. As each new abstraction is developed, methods are added to interfaces, often for no better reason than 'for completeness'. The result is that functionality which should be, and essentially is, simple and fast ends up being complex and slow, because it goes through so many layers to reach the underlying actual, functional code. And as the layers of Perl code are translated to C, each Perl layer goes through the same C-level layers of macro expansion, over and over.

      To witness this for yourself, track a few Perl-level calls that should result in a single call to the OS, from your perl program into the kernel, using a disassembling debugger, and see the number of times Perl_get_context() is called. It's ludicrous to the point of being frightening.

