PerlMonks  

Re: Continuity: Continuation-Based Web Applications

by dragonchild (Archbishop)
on Jan 24, 2005 at 15:06 UTC [id://424599]


in reply to Continuity: Continuation-Based Web Applications

I've always thought that this would be a neat way to program websites, especially after reading Paul Graham's articles. The problem I see is that you're going to start spending more time in each request serializing and deserializing your continuations than actually serving the request.
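To make the overhead concern concrete, here is a minimal sketch of the per-request round-trip a serialize-to-disk scheme pays, using core Perl's Storable. The session hash is purely illustrative; it stands in for whatever state a continuation would capture (this is not Continuity's actual storage code).

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Storable qw(freeze thaw);

# Hypothetical per-session state that a web continuation would capture.
my $state = {
    step    => 3,
    answers => { name => 'Alice', income => 42_000 },
};

# Every request pays this round-trip if continuations live on disk:
my $frozen = freeze($state);    # serialize on the way out
my $copy   = thaw($frozen);     # deserialize on the next hit

print "resumed at step $copy->{step}\n";
```

Note that Storable will not freeze code references by default, which is part of why serializing a *real* continuation (code plus captured lexicals) is harder than freezing plain session data.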

Not to mention that most corporate websites aren't a single continuation - they're groups of vaguely related continuations. How would you handle the situation where someone has done 10 clicks in one area, then clicks on the navbar to go to a completely unrelated area? Say, going from "Reports" to "Messaging" ... do you continue to handle the continuation from "Reports" on every pageview while the user is going through the "Messaging" continuation?

Being right, does not endow the right to be rude; politeness costs nothing.
Being unknowing, is not the same as being stupid.
Expressing a contrary opinion, whether to the individual or the group, is more often a sign of deeper thought than of cantankerous belligerence.
Do not mistake your goals as the only goals; your opinion as the only opinion; your confidence as correctness. Saying you know better is not the same as explaining you know better.

Replies are listed 'Best First'.
Re^2: Continuity: Continuation-Based Web Applications
by awwaiid (Friar) on Jan 24, 2005 at 22:04 UTC

    So in the existing continuation-based systems they "cheat" -- they don't serialize the continuation to disk on every request and get away with that by running their own webserver. The advantage is as you stated -- it has less overhead. But the disadvantage is when it comes time to serve your application off of a cluster, migrating continuations without serializing them is nontrivial :)

    I'm pretty pleased with my solution. I serialize to disk, but I have a backup plan -- PersistentPerl (aka SpeedyCGI) or even mod_perl; then my serialize-to-disk is merely a backup, and I have a live version in memory all the time just like those other folks. Except I didn't have to write my own webserver.
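The live-in-memory idea above can be sketched with plain closures: the "continuation" is a closure held in a process-level hash, and a frozen copy of its captured state is the disk fallback. The names here (`%live`, `%backup`, `start_session`) are illustrative, not Continuity's API.

```perl
use strict;
use warnings;
use Storable qw(freeze);

# A minimal sketch: a resumable computation as a closure kept in memory
# (the fast path under a persistent interpreter), with its captured state
# mirrored to a frozen copy that could be written to disk as a backup.
my %live;      # session id => closure, resumed directly on each request
my %backup;    # session id => frozen state, survives a process restart

sub start_session {
    my ($sid) = @_;
    my $step = 0;                                  # lexical captured below
    $live{$sid}   = sub { return ++$step };        # advances on each call
    $backup{$sid} = freeze({ step => $step });     # stand-in for the disk copy
}

start_session('abc');
$live{abc}->();              # request 1: step becomes 1
my $n = $live{abc}->();      # request 2: step becomes 2
print "step $n\n";
```

Under PersistentPerl or mod_perl the `%live` hash stays warm between requests, so the freeze only matters when the process dies.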

    For your second comment -- yes, websites aren't a single continuation. I have more in mind web-based applications... more like a tax-form assistant for example. Here, when you click on a breadcrumb to go back to the beginning or to another part of the application, you are implicitly saying "cancel what I'm doing, and go to this other place". The way I have things set up I can actually intercept such commands, to do such things as "Are you sure you want to leave this part of the application without committing your changes?"
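    That interception step could look something like the following sketch: before abandoning the current continuation, a dispatcher checks a dirty flag and returns a confirmation prompt instead of switching areas. The names (`dispatch`, `$session`) are hypothetical, not Continuity's actual interface.

```perl
use strict;
use warnings;

# Hedged sketch of intercepting a cross-area jump (e.g. "Reports" to
# "Messaging"): if the current area has uncommitted changes, return a
# confirmation prompt rather than silently dropping the continuation.
my $session = { area => 'Reports', dirty => 1 };

sub dispatch {
    my ($session, $target) = @_;
    if ($session->{area} ne $target && $session->{dirty}) {
        return "Are you sure you want to leave $session->{area} "
             . "without committing your changes?";
    }
    $session->{area} = $target;    # clean: abandon old area, switch
    return "Now in $target";
}

print dispatch($session, 'Messaging'), "\n";
```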

    In any case, I'm currently building a control panel for a nonprofit webhosting place I helped start... so I am experiencing the performance characteristics of a larger application first-hand. That has helped me work out the bugs quite a bit. So far it's working out very well.

    If anyone else is interested in working out some bugs by writing your own application using this stuff... just let me know :)
