Re: High Transaction Data Persistence Requirement across Different Process / Programs
by BrowserUk (Patriarch) on Jun 28, 2011 at 06:38 UTC
I think you need to clarify the operating environment and operational requirements of your question.
- Is this 3 long running concurrent processes? Or with one or more transient (web-server-like) processes?
- Is that a single 40-byte string? Or one 40-byte string per id? (If so, how many ids?)
- You mention persistence. Is that persistence between connections? Or persistence across server outages?
- Are the other two processes also Perl? Or C? Other?
- Do all three need read-write access? Do you need coherency?
- Many other questions that would be answered or avoided by a much clearer description of the actual purpose.
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
Hi, let me elaborate a bit more to get better suggestions from all Perl Monks.
Platform: Red Hat Linux 5.4 (64-bit) with Perl 5.8.
1. There are a number of instances of Perl process A running. Each reads from a Linux queue (ID - Amount - Details) and makes a call to FCGI process B, then collects the response (the response gives a Session ID for the ID)... continued at step 3.
2. There is an FCGI process C, which is invoked by external systems to report Success or Failure for a Session ID; that result is read and mapped in the DB against the Session ID.
3. FCGI process B, after getting the response, queries the DB by Session ID and writes the result into a Linux queue.
All three processes currently run synchronously, each dependent on the others. We are trying to make this asynchronous while achieving high throughput, and to do that we need to share data across the three processes and remove the database interaction.
Coming to the questions asked:
1. These are non-stop (long-running) processes, running in multiple instances.
2. The mapping needs to be: ID - Session ID - Amount - Unit - Success/Failure - Number.
3. Persistence means data persistence across these processes on a single server.
4. All three are Perl processes.
5. Yes, in the worst case all three need read-write access.
6. Sorry for not being clear in my first description.
Sorry, I don't think I can help. I've read and re-read your description of the 3 processes, but I cannot make sense of the flow of initiation at all. Hopefully someone with more experience of clustered/cloud-based CC processing will be able to help.
Specifically, I don't understand your nomenclature "process A ... makes a call to FCGI process B". Are you saying that process A makes an HTTP connection to process B?
Nor do I understand "FCGI process C , which is invoked by External systems".
Basically, I'm out of my knowledge zone.
Re: High Transaction Data Persistence Requirement across Different Process / Programs
by Corion (Patriarch) on Jun 28, 2011 at 06:09 UTC
If you don't need the data to be safe, you can look at something like memcached. Perl has modules for accessing memcached.
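A minimal sketch of what that could look like with the Cache::Memcached CPAN module, assuming a memcached daemon listening on localhost:11211; the key scheme (txn:&lt;id&gt;) and the record's field names are made up for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Cache::Memcached;

# Connect to a local memcached daemon (assumed to be on 127.0.0.1:11211).
my $memd = Cache::Memcached->new({
    servers => ['127.0.0.1:11211'],
});

# Process A side: store the mapping under the ID as soon as it is created.
# References are serialized automatically (via Storable).
my %record = (
    session_id => 'S12345',     # hypothetical values for illustration
    amount     => 100,
    unit       => 'USD',
    status     => 'PENDING',
);
$memd->set("txn:42", \%record, 3600);   # expire after an hour

# Process B or C side: look the record up by the same key.
my $found = $memd->get("txn:42");
print "status: $found->{status}\n" if $found;
```

Note that memcached can evict entries under memory pressure, which is exactly the "data is not safe" caveat above.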
If you can tie all three processes onto one machine, maybe it is enough to pipe the data from the first process to the next process and so on? Or does the first process need (write) access to the session ID after it has given the ID to the second process?
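For the single-machine pipeline idea, here is a self-contained sketch using a plain pipe() and fork(), with the parent standing in for the upstream process and the child for the downstream one; the record format and values are hypothetical:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Parent plays "process A", child plays "process B": A writes one
# record per line down the pipe, B reads and processes each line.
pipe(my $reader, my $writer) or die "pipe failed: $!";

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {                     # child: the downstream process
    close $writer;
    while (my $line = <$reader>) {
        chomp $line;
        my ($id, $session, $amount) = split /\|/, $line;
        print "got id=$id session=$session amount=$amount\n";
    }
    exit 0;
}

close $reader;                       # parent: the upstream process
print {$writer} "42|S12345|100\n";   # hypothetical record
close $writer;                       # EOF tells the child to stop
waitpid($pid, 0);
```

This only works if the data flow really is one-directional per link; if the first process needs the session ID back, you need a second pipe or a socket.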
Re: High Transaction Data Persistence Requirement across Different Process / Programs
by zek152 (Pilgrim) on Jun 28, 2011 at 12:44 UTC
I would suggest doing some reading on IPC (interprocess communication) - see perlipc. There are several options that come to mind immediately.
1) Use FIFOs (aka named pipes). These are unidirectional so you would need 4 FIFOs (M->A,A->M,M->B,B->M).
2) Use Unix-domain sockets. This will allow bidirectional communication so you will need 2 connections. (M<>A,M<>B)
3) Use TCP sockets. This is probably overkill unless you plan on the processes being on different computers.
Assuming the 40 bytes is one message, you need about 5000*40 = 200,000 bytes/s, i.e. 200 kB/s. All of the proposed methods should be able to reach that throughput. I have seen benchmarks suggesting that 93 MB/s is obtainable with local TCP sockets, and 93 MB/s was the lowest benchmark I saw.
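A minimal sketch of option 2 using IO::Socket::UNIX, with fork() so a single script can play both sides; the socket path and record format are made up for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::UNIX;

my $path = "/tmp/txn_demo.sock";     # hypothetical socket path
unlink $path;

# "Process M" side: listen on a Unix-domain socket.
my $server = IO::Socket::UNIX->new(
    Type   => SOCK_STREAM(),
    Local  => $path,
    Listen => 5,
) or die "listen failed: $!";

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {                     # "process A" side: connect and send
    my $client = IO::Socket::UNIX->new(
        Type => SOCK_STREAM(),
        Peer => $path,
    ) or die "connect failed: $!";
    print {$client} "42|S12345|100\n";   # one small record
    close $client;
    exit 0;
}

my $conn = $server->accept or die "accept failed: $!";
my $line = <$conn>;
print "received: $line";
waitpid($pid, 0);
unlink $path;
```

Because the socket is bidirectional, the reply (e.g. the session ID) can travel back over the same connection, which is the advantage over a pair of FIFOs.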
Hope this helps.
Yes. From your requirements you have different processes and you want them to communicate with each other. I still believe that interprocess communication is the way to go.
I think that Unix-domain sockets are a good starting point. You should have no trouble meeting the required throughput on the communication side of the system.
Re: High Transaction Data Persistence Requirement across Different Process / Programs
by tokpela (Chaplain) on Jun 29, 2011 at 18:02 UTC
This sounds like a job for a message queue - something like RabbitMQ. It would give you the persistence and let you scale further out if necessary, across multiple servers or additional instances of your processes.
I would also consider using a NoSQL db such as CouchDB to hold your data if passing your data via the message body in the queue is not enough.
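As a sketch of the queue idea, assuming a RabbitMQ broker running on localhost with the default guest credentials, and using the Net::AMQP::RabbitMQ CPAN module (one of several AMQP clients); the queue name and payload format are made up for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Net::AMQP::RabbitMQ;

my $mq = Net::AMQP::RabbitMQ->new();
$mq->connect("localhost", { user => "guest", password => "guest" });
$mq->channel_open(1);

# Declare a durable queue so messages survive a broker restart.
$mq->queue_declare(1, "txn_results", { durable => 1 });

# Producer side (e.g. process C): publish one result record.
$mq->publish(1, "txn_results", "42|S12345|100|USD|Success");

# Consumer side (e.g. process B): pull the next message off the queue.
my $msg = $mq->get(1, "txn_results");
print "got: $msg->{body}\n" if $msg;

$mq->disconnect();
```

The durable queue plus persistent messages is what buys you the "survives an outage" kind of persistence, at the cost of running a broker.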