Your explanation of what you are trying to do is still lacking.
Just for a moment, let's assume that you can arrange for multiple threads to have a copy of the same socket. At that point you have 1 client writing to 1 socket and multiple server threads wanting to read from it. The first thread that reads the socket will get the data, and all the others won't.
You mention "re-broadcasting". How?
Sockets are point to point. In order for any one server thread to be able to re-broadcast data to every other server thread, it would need to have a separate socket connection for each of those other servers and re-transmit the data to all of them.
For 2 server threads, you would need 1 socket + the client: s1<->s2;
For 3 server threads, you would need 3 sockets + the client: s1<->s2; s1<->s3; s2<->s3;
For 4 server threads, you would need 6 sockets + the client: s1<->s2; s1<->s3; s1<->s4; s2<->s3; s2<->s4; s3<->s4;
For 5 server threads, you would need 10 sockets + the client: ...
For 6 server threads, you would need 15 ... I think you can see where this is going.
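That growth is the pairwise ("handshake") count, n*(n-1)/2 inter-thread sockets for n server threads. A quick check, as a sketch:

```python
# One socket per pair of server threads: n*(n-1)/2 in total.
def pairwise_sockets(n):
    return n * (n - 1) // 2

for n in range(2, 7):
    print(n, "threads ->", pairwise_sockets(n), "inter-thread sockets")
# 2 -> 1, 3 -> 3, 4 -> 6, 5 -> 10, 6 -> 15 ... quadratic growth
```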
And, remember we are just assuming that you could successfully share the socket to the client between multiple threads, which you cannot. You would also have to arrange for each thread to "monitor" all of these sockets waiting for input.
Put succinctly, what you are trying to do is not a "limitation of threads", but a limitation of your understanding. It is a bad design that could never work.
In order for us to suggest solutions, you will need to explain what you are trying to achieve, rather than how you are trying to achieve it.
If the idea is that all threads should have access to data read from a client, then reading that data from the socket on one thread only, and then placing it into shared memory such that all the threads have access to it, is possible--but there is a problem.
- When will you know that the data has been seen by all the threads that need access to it? That is to say, when will you know that any given piece of data is finished with?
If you do not have some mechanism for deciding when a piece of data will be discarded, then all inbound data will simply accumulate in memory and you have a memory problem.
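A minimal sketch of that single-reader idea (names are illustrative, not taken from your code) -- note that it deliberately exhibits the problem just described: nothing ever removes old data, so the shared list grows without bound:

```python
import socket
import threading

# ONE thread owns the client socket; everything it reads goes into a
# shared, lock-guarded list that the other threads may inspect.
received = []                     # shared memory -- append-only, so it LEAKS
received_lock = threading.Lock()

def reader(conn):
    """Sole owner of the client socket; copies inbound data to shared memory."""
    while True:
        data = conn.recv(4096)
        if not data:              # peer closed the connection
            break
        with received_lock:
            received.append(data) # no discard policy: memory grows forever
```

Until you answer the "when is this piece of data finished with?" question, that list is an unbounded buffer.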
The classic way of dealing with this problem for some applications is a shared queue. This works well for producer-consumer type problems where each piece of data placed on the queue by a producer is consumed by only one consumer.
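In Python terms, the classic shared-queue version looks like this (processing is a stand-in; the point is that each item is consumed exactly once, which also answers the "when is it finished with?" question):

```python
import queue
import threading

work = queue.Queue()              # shared between all producers and consumers

def worker(results):
    while True:
        item = work.get()
        if item is None:          # sentinel: shut this worker down
            work.task_done()
            break
        results.append(item * 2)  # stand-in for real per-item processing
        work.task_done()          # item is now "finished with"

results = []
threads = [threading.Thread(target=worker, args=(results,)) for _ in range(3)]
for t in threads:
    t.start()
for n in range(5):                # producer side
    work.put(n)
work.join()                       # block until every item has been consumed
for _ in threads:
    work.put(None)                # one sentinel per worker
for t in threads:
    t.join()
print(sorted(results))            # -> [0, 2, 4, 6, 8]; each item seen once
```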
But, from the little you have said, I think you are more likely talking about something like an IRC server or MUD server. For this type of application you are better off using:
- one queue per client thread;
- a central dispatcher or controller thread with its own queue;
- a listener/client-thread factory, which may or may not need its own queue depending upon the details of the application.
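One possible shape for that per-client-queue design, as a hedged sketch (the names, the dict of queues, and the tuple message format are all my assumptions, not a fixed recipe): inbound messages funnel through the dispatcher's queue, and "re-broadcasting" is just the dispatcher copying a message onto every other client's queue--no client thread ever touches another client's socket.

```python
import queue
import threading

dispatch_q = queue.Queue()        # central dispatcher's own inbound queue
client_qs = {}                    # client id -> that client's outbound queue
client_qs_lock = threading.Lock()

def dispatcher():
    """Owns dispatch_q; fans each message out to every other client's queue."""
    while True:
        sender, msg = dispatch_q.get()
        if msg is None:           # sentinel: shut the dispatcher down
            break
        with client_qs_lock:
            targets = [q for cid, q in client_qs.items() if cid != sender]
        for q in targets:         # "re-broadcast" = one copy per client queue
            q.put(msg)
```

Each client thread then does two simple jobs: read its socket and put `(my_id, data)` onto `dispatch_q`; and drain its own queue, writing whatever it finds back to its socket. The listener/factory thread just accepts connections, creates the queue, and registers it in `client_qs` under the lock.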