Re: The Case for Javascript
by diotalevi (Canon) on Nov 18, 2002 at 15:02 UTC
It's hard to tell, but I think you've completely missed the point. I just want to speak very quickly to your security complaint. The issue comes in two flavors. First, user-submitted javascript can do nasty, evil things to you. You know that, I know that. Lots of people don't know it, and we agree it's the application's responsibility to accept only good data. The other part of this concern is where new programmers try to use JavaScript to validate data prior to submitting something to the web server. It's that sort of usage that promotes a false sense of security, since for many reasons you absolutely, 100% cannot count on JavaScript validation to be sure your data is sane. This is a task that must occur in a trusted context - usually in your server-side program, prior to committing the value somewhere. There are many ways a not-particularly-competent adversary can bypass your JavaScript validation. So... perhaps we're speaking at cross-issues, but it's a valid point you didn't raise, and one that is higher profile, IMHO.

As for CSS, decent browsers and JavaScript - I'll be honest and say I've never been particularly interested in people who turn off their JS and then complain that stuff doesn't work. What does get my goat is web site designers who put things up that my lynx client can't handle. There are two reasons you need to support text-based browsers: accessibility and convenience. I'll just mention that for me it's really convenient to use lynx over ssh in some circumstances, but that's really a corner case and probably only of interest to geeks who would actually do that. The primary issue is that of making your web site and its data inaccessible to people with various handicaps. You could compare designing an inaccessible web site to parking in the handicap spot. It's not only evidence of poor taste (if you're aware of the issue and are just ignoring it); if you're doing something commercial in the US, it also has legal implications.

I'm very inexpert on the legal bits (and I speak only about the US here), but I understand they run from the Americans with Disabilities Act (ADA) to Section 508 of some part of federal law (I've just seen it called Section 508), and then there are the various state and local laws, in addition to the client company's employment policies. Obviously this will vary between companies, but where I work the Business Conduct Policy requires that Imation and its employees not discriminate on the basis of ability (among a host of other things as well, including race, religion, gender, sexual orientation, etc.). For me, designing something that is inaccessible can put me at risk of termination. That'd be highly unlikely, seeing as how most of the people I work with aren't even aware of what our policy requires, but that doesn't mean that we aren't legally liable for our compliance with all the various laws and internal policies.

I guess when it comes down to it, I do use JavaScript, but I make sure the application doesn't require it in order to operate. Yes, that's also known as coding twice, but oh well, there really isn't any other sane choice (unless you really just leave it all up to the server-side application).
by dreadpiratepeter (Priest) on Nov 18, 2002 at 15:49 UTC
1. Client-side validation should be done in addition to server-side validation, not instead of it. The server should never trust any data from the client. The purpose of client-side validation is to report errors closer to their cause, to avoid a round trip to the server with data that is going to fail, and to make the reporting of errors simpler.

2. The lowest-common-denominator principle of browser support makes for tedious browsing on all but that lowest browser. A better approach is to support the best level of browsing on all the browsers you support. It is easy enough to detect the browser's capabilities and use them.

-pete
"Worry is like a rocking chair. It gives you something to do, but it doesn't get you anywhere."
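The validation split described above - a convenience check on the client, with the server never trusting the result - can be sketched roughly as below. This is a minimal illustration, not code from the thread; the field name and function names are made up, and the form wiring is commented out so it degrades to a normal submit when JS is off:

```javascript
// Client-side validation as a convenience layer only. The server must
// repeat these checks; this just saves a round trip for obviously bad
// input and reports the error next to its cause.
function validateYear(value) {
  // Accept exactly four digits, e.g. "2002".
  return /^\d{4}$/.test(value);
}

function validateForm(fields) {
  var errors = [];
  if (!validateYear(fields.year)) {
    errors.push('"year" must be a 4-digit number');
  }
  return errors; // an empty array means the form may be submitted
}

// Browser wiring (hypothetical; the form still submits normally if JS
// is disabled, so the server-side check remains the real gatekeeper):
// document.forms[0].onsubmit = function () {
//   var errors = validateForm({ year: this.year.value });
//   if (errors.length) { alert(errors.join('\n')); return false; }
//   return true;
// };
```

Keeping the checks in a plain function like `validateForm`, separate from the DOM wiring, also makes the same rules easy to mirror on the server.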
by diotalevi (Canon) on Nov 18, 2002 at 16:29 UTC
The Case Against Javascript
by Aristotle (Chancellor) on Nov 18, 2002 at 17:18 UTC
"The number one issue that most people cite when talking about Javascript as a Security Risk, is a so called cross site scripting attack"

No, the number one security risk is activating JavaScript in Internet Explorer and having a malicious or just plain infected site exploit you. Cross-site scripting only holds the second spot.

"It is the Website's responsibility to validate all the data it sends to you"

That holds true for data submitted to a site by third parties, but not for the site itself. If the webmaster himself has nefarious intentions, this assertion is useless.

"Blame javascript for the fact that someone, somewhere can use it maliciously, is like blaming email because people write outlook viruses"

Do you surf with ActiveX enabled?

"Get a fricking decent browser already."

I have. Links lets me surf without all the flashy colours and blinking GIFs, and it's actually very good at producing a close resemblance of the actual layout using a TTY. And a graphical browser is hardly viable if you're connecting via SSH over an ISDN line's 7.6kb/s anyway. Yes, some of us do. And what about the folks who disable Javascript because they don't want to be annoyed with popups, Geo***tties or Tripod overlay ads and the like?

"To Hell With Bad Browsers"

Does that mean "to hell with the people who use PDAs, smartphones or other similar appliances" too?

"client side form validation"

That was the reason Javascript was invented in the first place. I admit that dingus' node's somewhat ambiguous wording misled me. So long as you don't omit checking the data again on the server, using JS for this purpose is fine. In fact, it's the one and only purpose Javascript can and should serve.

"DHTML menus, allowing you display a great deal of information in a small area"

Unfortunately, your DHTML menus won't work for 50% of your audience unless you put in a gargantuan effort to develop for multitudes of browser brands and versions. Even if it works satisfactorily, the dynamically client-generated information is then out of any search engine's spider's reach. Along the same vein, folks with PDAs/smartphones, voice synths, braille readers and so on are out of the game. With purely CSS-based menus such as those shown on css/edge, there's a fighting chance that the menu information can be made available even using uncommon media that aren't a mouse/computer-screen combo - I want to see you try that with Javascript.

Now, if you want me to rant about the inconceivably abysmal CSS compliance in just about every current browser, six years after the standard was finalized and published (and several more after it was first talked about), that I can do. I'm glad Mozilla is getting usefully close - though even it has its issues.

Makeshifts last the longest.
by jryan (Vicar) on Nov 18, 2002 at 21:14 UTC
"I have. Links lets me surf without all the flashy colours and blinking GIFs, and it's actually very good at producing a close resemblance of the actual layout using a TTY. And a graphical browser is hardly viable if you're connecting via SSH over an ISDN line's 7.6kb/s anyway. Yes, some of us do."

No offense, but why should a commercial developer developing for a commercial firm feel the need to design to as low a case as yours? The extra traffic that might be kept by a flashier design will more than make up for the traffic that is lost from a visitor whose browser is too primitive to view the site. By the way, at home I connect via a 56k modem, and I use Mozilla 1.1 as my browser. ISDN must be a luxury ;)

"And what about the folks who disable Javascript because they don't want to be annoyed with popups, Geo***tties or Tripod overlay ads and the like?"

I don't know about you, but my browser specifically allows me to turn off popups.

"Unfortunately, your DHTML menus won't work for 50% of your audience unless you put in a gargantuan effort to develop for multitudes of browser brands and versions."

Incorrect. Netscape lost the browser war several years ago; Internet Explorer 5.0+ is now used by over 92% (and rising) of the internet population. Please see http://www.thecounter.com/stats/2002/October/browser.php for more info.

"Even if it works satisfactorily, the dynamically client-generated information is then out of any search engine's spider's reach."

Why? The DHTML menus that I've seen involve lists of links in divs that are then hidden, shown, and moved by JavaScript. There's no reason that a search engine wouldn't be able to follow that.

"Does that mean "to hell with the people who use PDAs, smartphones or other similar appliances" too?"

No; if you read the article, you'll find that it is mostly about not retro-designing for dead, non-CSS-supporting browsers (NN4), when the future of web browsers promises to be rich with CSS support. A List Apart has always advocated support for wireless browsers; in fact, that article even mentions it.

"Along the same vein, folks with PDAs/smartphones, voice synths, braille readers and so on are out of the game. With purely CSS-based menus such as those shown on css/edge, there's a fighting chance that the menu information can be made available even using uncommon media that aren't a mouse/computer screen combo - I want to see you try that with Javascript."

You're comparing apples to oranges. Javascript is a scripting language; CSS is a descriptive language. Don't forget that DHTML includes CSS as well; it would be impossible to implement DHTML menus without the 'visibility' and 'position' CSS attributes. In fact, going the other direction:

"Now, if you want me to rant about inconceivably abysmal CSS compliance in just about every current browser, six years after the standard was finalized and published (and several more after it was first talked about), that I can do. I'm glad Mozilla is getting usefully close - though even it has its issues."

That's why DHTML menus came to be in the first place. Even a couple of years ago, it was impossible to implement a pure CSS menu like the ones you describe. I'm not saying that there is any excuse nowadays for using a DHTML menu, but there is a reason they exist.
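The DHTML menu pattern jryan describes - ordinary links inside a div, which a spider can follow, with script merely flipping the div's CSS `display` - can be sketched as below. This is a hypothetical illustration, not code from the thread; the element id is made up, and the DOM wiring is commented out so the core logic stands alone:

```javascript
// The toggle logic itself is trivial: compute the new CSS 'display'
// value from the current one.
function toggledDisplay(current) {
  return current === 'none' ? 'block' : 'none';
}

// Browser wiring (hypothetical). Because the links are plain markup,
// a non-JS user agent or search spider still sees and follows them;
// script only adds the show/hide behaviour on top.
// function toggleMenu(id) {
//   var el = document.getElementById(id);
//   el.style.display = toggledDisplay(el.style.display || 'none');
// }
//
// <a href="products.html" onclick="toggleMenu('products-menu'); return false;">Products</a>
// <div id="products-menu" style="display:none">
//   <a href="tapes.html">Tapes</a> <a href="disks.html">Disks</a>
// </div>
```

Note that the fallback `href` on the triggering link is what keeps the menu usable when scripting is unavailable, which is the crux of both sides of this sub-thread.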
by Aristotle (Chancellor) on Nov 18, 2002 at 22:24 UTC
"The extra traffic that might be kept by a flashier design will more than make up for the traffic that is lost by a visitor that is using a browser that is too primitive to view the site."

You're missing the point. If you treat Javascript as nothing more than form-handling gravy for those whose browsers support it, rely on CSS for formatting, and do it right (that is, no tables for layout, H? headers, P and DIV sections formatted using classes, and so on), then, funnily enough (or is it?), low-capability browsers like Lynx suddenly are able to produce a very usable browsing experience. You miss the eyecandy, but you get the content. And that should be a given. I have seen almost no use of Javascript so far that wasn't avoidable.

"Internet Explorer 5.0+ is now used by over 92% (and rising) of the internet population."

Incidentally, IE's CSS support is the most idiosyncratic of all current browsers. I hope they don't keep that sort of market share. Oh, and what about the other 8%? That means 2 in 25 customers - a small, but not insignificant, share. Can you afford to disgruntle them?

"If you read the article, you'll find that it is mostly about not retro-designing for dead, non-CSS-supporting browsers (NN4), when the future of web browsers promises to be rich with CSS support."

I read the article quite a while ago - but BUU was referring to DHTML, not CSS. As far as heavy reliance on CSS and the departure from HTML 3.2 design is concerned, you're preaching to the choir - as the last paragraph of my previous node might have indicated. I hate the fact that we still have to pay attention to fastidious browsers when using CSS, even so many years after the standard was published. It's a huge shame - the web would look better and be more usable at the same time, and also work well for low-capability browsers, if CSS was widely and properly supported. We could have our cake and eat it too. Sigh.

"You're comparing apples to oranges."

I'm not comparing anything. I was saying that DHTML locks out a small, but very important (and growing), part of your audience, which needn't happen when you can do the same thing with a different technique, and better in many ways to boot.

Makeshifts last the longest.
by jryan (Vicar) on Nov 18, 2002 at 22:55 UTC
by Aristotle (Chancellor) on Nov 20, 2002 at 16:22 UTC
Re: The Case for Javascript
by dingus (Friar) on Nov 18, 2002 at 15:59 UTC
I think you are mostly correct. I use a JS front end and a Perl back end, and it's great. As you stated in your reply to that post, letting JS do the user-friendly form validation is cool. But the important thing is that you have to degrade gracefully. What this means is that you must make absolutely no assumptions about your user's environment and still be able to present something legible and useful. Furthermore, if the user turns out to be a robot, you must accept bogus input without barfing. By all means print out a cryptic error message (such as: parameter "year" missing or invalid) for complete garbage, but make sure that if the field is reasonably valid (in the example above, that parameter year is a 4-digit number) you just accept the thing and return the null result (fancy web search result with lots of cruft stating: 0 documents were found published in the year 1234).

Likewise, by all means use DHTML, CSS and Javascript to make a document display nicely on sensible browsers. But for the unfortunates who don't have some or all of that enabled, you should have something that displays OK. I.e., make sure that your content is logically defined (that's the whole point of CSS anyway) with clear sections that show up in a reasonable order, so that someone still using Netscape 1.0 - which doesn't understand the <DIV> tag, let alone CSS and most javascript - gets a page that they can read and follow links from. It is, however, IMO perfectly reasonable to state that if they used a newer browser they'd see something that looked considerably nicer. But unless you have 100% control of your clients (and you don't, no one does*), you must not require that they have certain features! What you should do is design for a good experience on some reasonable lowest spec, probably IE 5.0, an adequate experience on something worse, and a much nicer one on something better such as Mozilla :)

Dingus
Enter any 47-digit prime number to continue.
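The "degrade gracefully" rule above - reject only true garbage, and render a normal empty-results page for well-formed input that simply matches nothing - can be sketched as a small handler. This is a hypothetical sketch of the idea, not code from the thread; the parameter name follows the post's own "year" example:

```javascript
// Respond to a search request per dingus's rule: a cryptic error only
// for complete garbage, a normal (possibly empty) result page for any
// reasonably valid input. 'search' is an injected lookup function.
function respond(params, search) {
  // Complete garbage: the "year" parameter is missing or malformed.
  if (!/^\d{4}$/.test(params.year || '')) {
    return 'Error: parameter "year" missing or invalid';
  }
  // Valid but possibly fruitless input: never barf, just report zero.
  var docs = search(params.year);
  return docs.length + ' documents were found published in the year ' +
         params.year;
}
```

In the thread's terms, a robot posting `year=1234` gets the "0 documents were found" page rather than an error, while `year=bogus` gets the cryptic message.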
Re: The Case for Javascript
by perrin (Chancellor) on Nov 18, 2002 at 16:59 UTC
I have never wasted so much time debugging anything as I have with JavaScript, and it's maddening because the fault lies in different browser implementations rather than in your code. Combine that with the horrible things most people use JavaScript for, and it's hard to see what's so good about it.

One final note about browser upgrades: commercial sites have to support the browsers that their users want to use. You can't just tell your potential customers that they must download the latest gigantic browser release for the privilege of shopping in your online store. Yahoo generally does a nice job of walking the fine line and using JavaScript only where it is useful. Even so, their pages sometimes fail for me in the latest browsers because of JavaScript problems.
by broquaint (Abbot) on Nov 18, 2002 at 17:32 UTC
"I avoid it because JavaScript is a suck ass language."

Don't you mean most browsers' implementation of JS is suck-ass, or, more to the point, the DOM API for different browsers? In my experience, all the browsers implement the language in a similar fashion, and the only inconsistency between browsers might be which version is implemented (i.e. JS 1.2 vs JS 1.5). I can't vouch for the various DOM APIs, as I have mostly avoided them due to a particular distaste for DHTML, but from my brief experiences the DOM API differs from IE to Netscape to $other, and it is quite hellish to get anything working consistently across browsers.

I'd also have to say Javascript as a language is quite wonderful indeed. It's got your OO, first-class functions and lexical scoping, and as of later versions there is regex support and exceptions (1.3 and 1.5 respectively). So it's quite perl-like in a lot of ways, with its polymorphic variables, open-ended OO system and many, many clueless hackers (although the JS 'community' doesn't have anywhere near the same number of clued-up hackers as perl does, unfortunately).
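The language features listed above - first-class functions, lexical scoping, prototype-based OO, regexes and exceptions - can each be shown in a few lines. A minimal tour, with made-up names, in the era-appropriate `var`/`function` style:

```javascript
// Lexical scoping + first-class functions: the returned closure
// keeps access to 'n' after makeCounter returns.
function makeCounter() {
  var n = 0;
  return function () { return ++n; };
}

// Prototype-based OO: methods live on the constructor's prototype.
function Monk(name) { this.name = name; }
Monk.prototype.greet = function () { return 'Hello, ' + this.name; };

// Regex support and exceptions (later JS versions, per the post).
function parseVersion(s) {
  var m = /^JS (\d+)\.(\d+)$/.exec(s);
  if (!m) throw new Error('not a version string');
  return [Number(m[1]), Number(m[2])];
}
```

The closure in `makeCounter` is the same idiom Perl programmers know from returning anonymous subs that capture `my` variables.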
Re: The Case for Javascript
by dreadpiratepeter (Priest) on Nov 18, 2002 at 15:15 UTC
JavaScript adds the flexibility and power to the client side that Perl brings to the server side. And it is an elegant little language. Combining the two technologies (along with DHTML and CSS) forms an incredibly powerful interface tool.

I routinely generate JavaScript on the fly from Perl. I routinely make a hidden channel of communication and use it to update the server from the client, make round-trip calls to the server without navigating off the page, etc. On the client side, I can make menus, collapsible tree widgets, and sortable, searchable, paged tables. I've implemented Perl-only chat rooms (with auto-refresh of just the chat area, unlike the Chatterbox). I routinely build entire pages on the fly in JavaScript based on data from the server.

In one application I have a data-driven survey system that asks multiple kinds of questions (multiple choice, quantity, free-form text and tabular), with multi-way branching depending on previous answers and the ability to back up, all in one page. No round trips to the server after each question. All JavaScript, using data written into JavaScript arrays by my server-side Perl script. At the end, more JavaScript collects the answers and sends them to the server.

As usual, I ramble. Don't dismiss this tool.

-pete
"Worry is like a rocking chair. It gives you something to do, but it doesn't get you anywhere."
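The data-driven survey idea above can be sketched as follows. This is a guess at the shape of such a system, not pete's actual code: the server-side Perl would emit a structure like `survey` into the page, and the client walks it without further round trips. All names and fields here are hypothetical.

```javascript
// Survey data as the server-side script might write it into the page.
// Each question lists its choices and a map from answer to the next
// question's index; an absent entry means the survey ends there.
var survey = [
  { text: 'Do you use Perl?', choices: ['yes', 'no'],
    next: { yes: 1, no: 2 } },
  { text: 'Which module do you use most?', choices: ['CGI', 'DBI'],
    next: {} },
  { text: 'Why not?', choices: ['never tried', 'prefer another language'],
    next: {} }
];

// Multi-way branching: pick the next question index from the current
// question and the chosen answer, or null if the survey is done.
function nextQuestion(survey, currentIndex, answer) {
  var q = survey[currentIndex];
  return q.next.hasOwnProperty(answer) ? q.next[answer] : null;
}
```

Backing up, as the post mentions, would just mean keeping the visited indices on a stack; the point of the sketch is that the whole flow lives in client-side data, so only the final answers need a trip to the server.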
Re: The Case for Javascript
by BUU (Prior) on Nov 18, 2002 at 21:47 UTC
Also, these points are helping to slow down the development of the internet as a whole. If you look at the history of the internet, you can almost trace the progress from a basic text display to a sophisticated interactive application for any number of things. And personally, I would much rather use javascript to do these things instead of, say, Flash or Shockwave. But hey, maybe that's just me; maybe everyone else loves programming GUIs in Flash?
by grantm (Parson) on Nov 19, 2002 at 07:55 UTC
"I'm willing to bet that probably around 99% of people using ns4 that went to some website they wanted to use, such as their bank or something, saw something that said 'please upgrade', with a quick painless link to do so, would upgrade."

For some people (my mother?) there is nothing either quick or painless about downloading and installing a new browser. They have a system that works, and they do not want to break it. I don't see too many people still running Netscape 4.x, but I'd be willing to bet that a fair proportion of those that are still using it are simply too scared - or rather, too cautious - to upgrade.
by Aristotle (Chancellor) on Nov 20, 2002 at 14:58 UTC
"saw something that said 'please upgrade', with a quick painless link to do so, would upgrade."

What about folks who peruse the web for their job and don't have the rights/permission to install new software? I'm not infatuated with Lynx, but why not support it if you can still offer all the eyecandy you want at the same time?

"personally, i would much rather use javascript to do these things, instead of, say, Flash, or Shockwave."

I would much rather use CSS for as much as can be done with it, and Flash for what cannot.

"But hey, maybe thats just me, maybe everyone else loves programming GUIs in flash?"

I write GUIs in GtkPerl and sites in XHTML/CSS.

Makeshifts last the longest.