in reply to Re^5: OT: Why does malloc always give me 24?
in thread OT: Why does malloc always give me 24? [SOLVED]

See also Reini Urban’s version/fork at https://github.com/rurban/ctl - if you haven’t discovered it yet.

And thank you very much for the hints.

A comparison of the different C libraries that provide generic-container capabilities might be helpful as well.

Update: I noticed in this context that none of the relevant C libraries mentioned are thread-safe, except mlib. And the author, who is otherwise so thorough, suddenly becomes very general and brief on this topic.


Replies are listed 'Best First'.
Re^7: OT: Why does malloc always give me 24?
by NERDVANA (Priest) on Aug 20, 2024 at 23:19 UTC
    If you mean multiple threads calling into the same data structure, I don't blame them. That's a mess of platform complications and potential runtime overhead. I've pulled out enough hair over that model of programming to permanently switch to "each thread gets its own data, and they exchange ownership of the whole structure via passing a pointer through a pipe". This model plays very nicely with Perl, too.

    Thanks for the tips on additional libraries to investigate.

      "…each thread gets its own data"

      That is probably the obvious part.

      "…they exchange ownership of the whole structure via passing a pointer through a pipe"

      Any links to instructive examples are gladly accepted.

      Links for thread victims:

      Update: Perhaps a solution?

      Update 2: …seems to be incomplete…last update 10 years ago or so

        Well, to explain more fully: any time more than one thread touches a data structure, you need to synchronize access to it (unless it's immutable, of course). One way is to have thread-aware containers that lock their own mutex any time you call one of their functions. On some platforms with special CPU features (like all modern x86) there are ways to do this with almost zero overhead, which is great, but when you port the code to other platforms it can become a performance problem. Even on the good platforms, you can end up in deadlock scenarios where one thread grabbed MutexA and then MutexB, while another thread grabbed MutexB and is now waiting on MutexA, so you have to reason about that as you write the code. Often you end up adding your own mutexes to organize whole groups of resources, and then the mutexes built into the containers become redundant.
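        The "thread-aware container" style can be sketched like this — a minimal stack whose every public call takes the container's own mutex. This is my illustration, not code from any of the libraries discussed, and all the names (ts_stack, ts_push, ts_pop) are hypothetical:

```c
/* Minimal sketch of a "thread-aware" container: every public
 * function locks the container's own mutex (pthreads assumed).
 * Names are hypothetical, not from any library in this thread. */
#include <pthread.h>
#include <stdlib.h>

typedef struct ts_stack {
    pthread_mutex_t lock;
    void **items;
    size_t count, cap;
} ts_stack;

void ts_init(ts_stack *s) {
    pthread_mutex_init(&s->lock, NULL);
    s->items = NULL;
    s->count = s->cap = 0;
}

int ts_push(ts_stack *s, void *p) {
    pthread_mutex_lock(&s->lock);   /* every call pays for a lock */
    if (s->count == s->cap) {
        size_t ncap = s->cap ? s->cap * 2 : 8;
        void **n = realloc(s->items, ncap * sizeof *n);
        if (!n) { pthread_mutex_unlock(&s->lock); return -1; }
        s->items = n;
        s->cap = ncap;
    }
    s->items[s->count++] = p;
    pthread_mutex_unlock(&s->lock);
    return 0;
}

void *ts_pop(ts_stack *s) {
    pthread_mutex_lock(&s->lock);
    void *p = s->count ? s->items[--s->count] : NULL;
    pthread_mutex_unlock(&s->lock);
    return p;
}
```

        The redundancy problem described above shows up as soon as a caller wraps several ts_push/ts_pop calls in its own mutex: the per-call locks inside the container then buy nothing.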

        So, against that backdrop, I usually choose not to have any synchronization built into my data structures or objects. Instead, I plan out very specific APIs for the data that truly needs to be shared, and put the mutexes in the API functions that access it. Then, don't share anything that doesn't need to be shared.
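        A sketch of that "mutex in the API, not in the container" idea, under the assumption of pthreads; one mutex guards a whole group of resources, so the plain data structure inside needs no locking of its own (all names here are made up for illustration):

```c
/* Sketch: synchronization lives in the API functions, not in the
 * container.  One mutex guards the whole job list; the plain array
 * behind it has no built-in locking.  Names are hypothetical. */
#include <pthread.h>
#include <string.h>

#define MAX_JOBS 64

static pthread_mutex_t jobs_lock = PTHREAD_MUTEX_INITIALIZER;
static char jobs[MAX_JOBS][32];   /* plain array, no locks of its own */
static int  njobs = 0;

/* The only way other threads touch the job list is via these calls. */
int jobs_add(const char *name) {
    int ok = 0;
    pthread_mutex_lock(&jobs_lock);
    if (njobs < MAX_JOBS && strlen(name) < sizeof jobs[0]) {
        strcpy(jobs[njobs++], name);
        ok = 1;
    }
    pthread_mutex_unlock(&jobs_lock);
    return ok;
}

int jobs_count(void) {
    pthread_mutex_lock(&jobs_lock);
    int n = njobs;
    pthread_mutex_unlock(&jobs_lock);
    return n;
}
```

        Because there is exactly one lock and it is only ever taken inside these functions, the two-mutex deadlock scenario from the previous paragraph can't arise here.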

        Shared data structures like queues (where one thread sends a message to another) can be done using pipes, just as you would between separate processes. It may not be as efficient as dedicated threading primitives, because each read and write involves a syscall, but it plays nicely with the external events you might also be waiting on, like socket data.

        It isn't a simple example, but I did this in callback_dispatch of VideoLAN::LibVLC, where VLC was handing me filled picture buffers on a second thread and I needed to get them to Perl's main thread. I pass the pointer through a pipe, then pass it up to Perl when the user's event library sees the handle is readable. Meanwhile the Perl event library can also be reacting to sockets, timers, and all that.

        Rephrased: the design is that there is a collection of picture objects, and at any one time each is either owned by the main thread, owned by the video thread, or in transit through a pipe. It's a fraction of a millisecond slower than dedicated thread-synchronization primitives, but perfectly fine for real-time video playback.