Re^3: How to implement a fourth protocol
by Fletch (Bishop) on Mar 27, 2007 at 17:34 UTC
And in what way does having a "new" protocol help? At worst you've now got yet another point of ingress (through a new, untested protocol and its implementations, no less) in addition to the existing ones, because in all likelihood you're not going to supplant them and someone will insist on still using them (cf. the installed base of Netscape 4.76 browsers only just now disappearing). At best you've pushed the problem around . . . to a new, untested protocol and its new, untested implementations. You might see some benefit in that not many black hats know enough about it to start jiggling the doorknobs, but security through obscurity isn't security.
I guess one person can make a difference, but most of the time, they probably shouldn't.
– Marge Simpson
Update: Not that you're incorrect that you want to stop someone as far out as you can, but a new protocol's not necessarily the best way to do that. Things like port knocking or running on non-standard ports (or better yet port knocking to connect to a non-standard port :) reduce the visibility of the service, but if the underlying protocol (say) uses rot13 or its analogue to hide user credentials (HTTP Basic authentication, I'm looking at you . . .) it doesn't gain you much.
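To make the HTTP Basic point concrete: Basic credentials are merely base64-encoded, not encrypted, so anyone who can observe the traffic recovers them without any key. A minimal illustration (Python here purely for demonstration; the header value is a made-up example):

```python
import base64

# An HTTP Basic auth header as it appears on the wire
# (hypothetical credentials, for illustration only).
header = "Authorization: Basic dXNlcjpwYXNz"

# base64 is an encoding, not encryption: decoding needs no key at all.
encoded = header.split()[-1]
username, password = base64.b64decode(encoded).decode("ascii").split(":", 1)
print(username, password)  # → user pass
```

Which is why hiding such a service on an odd port gains you so little: the first eavesdropper to find it gets the credentials for free.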
If all you're doing is moving a service using an existing protocol to a different port you haven't gained much. If you're using a new protocol, you're tossing out years of live field testing. Things like SSH and TLS have been gone over by experts (white and black hatted both) and are at this point pretty much algorithmically sound and most of the exploits are against implementation bugs not protocol flaws.
And an aside, one of the laments I remember seeing a while back was the problem that putting everything over HTTP makes it harder to lock down access at the network layer since everything is going through one port rather than separate ports for separate protocols. Coming full circle I guess.
Port knocking is discussed elsewhere in the thread - so you can see the problem with that.
I hope to reduce the risks of "untested" by reusing as much tried and tested material as possible, hence the reference to NetServer::Generic, which I presume IS tested. I might be able to build the protocol on top of another one, for example Telnet, but I left that idea out to give people a chance to suggest it ;) Of course I may be being naive about that idea; I'm not a networking guru, so I didn't want to put it in people's minds too early.
. . . I'm not a networking guru . . .
Don't take this the wrong way, but: Stop now, because you don't know enough and you're probably going to screw something up (as if the mention of Telnet in the context of secure protocols didn't prove that already :). In all likelihood you probably don't even know what you don't know (if I may wax Rumsfeldian).
There's an entire very good book on the subject, which one could probably summarize in one sentence: "Security is hard; doing security correctly is hard even for people who know what they're doing, and even the experts often make mistakes."
Now that I've hopefully at least dulled your hopes, let me say that I'm not saying 100% that you shouldn't do it (more like 99.8% that you shouldn't, lowered to 99.4% once you've read Schneier and understand more of the implications of what you're proposing). But don't undertake this lightly, and make sure you pay attention to prior art and reuse proven, tested components where possible.
And if the desire persists, repeatedly apply the Schneier book to the forehead until the urge passes. :)
Re^3: How to implement a fourth protocol
by gloryhack (Deacon) on Mar 27, 2007 at 18:04 UTC
Yep, and the firewall is at a pretty darn low level already since it's working at the packet level, isn't it? You cannot stop J. Random Cracker from spewing packets at your network unless you unplug from it. Or go to his house and unplug him from the internet :-) Short of that, his packets are coming at you even if they're just bouncing off of ports with no listeners.
I'm assuming that we're talking about a public service here, and not something that can be very easily protected by something akin to VPN. In order to detect a bot, you have to allow it some initial degree of access so you can discern its intent. There's really no other way to determine the intent of a previously unseen client. After something at the server level determines that the client is malicious, then it has to work to defend itself. The lowest level available to us is the TCP/IP layer, where we can decide at the packet level whether to accept, reject, or drop the packets without the overhead of reassembling them into messages. This is the level where the firewall lives, very efficiently sieving bits. So, the easy way to implement a defensive measure is to give the server or some lightweight middleware the smarts to detect malicious activity and the means to communicate to the firewall "I don't want to hear from IP address ww.xx.yy.zz on my port nn any more". Bam, problem solved. From that point forward you don't analyze payloads, and the firewall just sieves bits. The best efficiency comes if you just drop those packets without bothering to tell the client that you don't want them. It's a bit rude, but it's efficient.
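As a rough sketch of that "tell the firewall" step (Python used for illustration; the rule layout and the `dry_run` helper are my own assumptions, not code from the thread):

```python
import subprocess

def block_command(ip, port):
    """Build the iptables invocation that silently drops further packets
    from `ip` to our `port` (DROP, not REJECT: no reply is sent at all)."""
    return ["iptables", "-A", "INPUT",
            "-s", ip, "-p", "tcp", "--dport", str(port),
            "-j", "DROP"]

def ban(ip, port, dry_run=True):
    """Hand the rule to the firewall; dry_run just returns the command."""
    cmd = block_command(ip, port)
    if not dry_run:
        subprocess.run(cmd, check=True)  # needs root in real use
    return cmd

# Once the server decides a client is malicious, one call and the
# firewall sieves its bits from then on, with no further payload analysis.
print(" ".join(ban("203.0.113.9", 8080)))
```

From that point on the application never sees the offender's traffic again; the kernel discards it at the packet level.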
So, again, how would the certain-to-introduce-vulnerabilities new protocol be faster/easier/better than the existing tools which are readily available today?
I think in a way you've answered your own question, and that helps me too! If the HTTP port is closed there is no need to have a firewall listening on it, which is more efficient than sieving those bits, which could grow into a high volume of unwanted traffic. A new protocol would mean no old bots getting at it. I should have said that the protocol for my own purposes isn't intended to replace typical HTTP traffic; it is for traffic between contracting parties rather than anyone-to-anyone. Technically that can be as public as you like, but more bother (and more security!) to hook up to.

Someone is more likely to spin up a bot if it has a perceived chance of success; that is where I can see one sort of gain, given that no bot will ever have succeeded on this protocol, and given that I can use the firewall AS WELL on the open port with the new protocol. Moreover, where FTP requests can spin your readdir, the new protocol I have in mind is intended to be more of a pure traffic session than a file-and-directory-rendering one (there are no directories or files to be got at with it!), and it is less likely you will be hit by a pure packet-volume attack.
You're still sieving bits, even if the port is closed, even if there's no actual firewall in place. Suppose, to stay with your example, there's no firewall in place and there's no HTTP server listening on port 80 but packets destined for port 80 arrive. Your TCP/IP handler necessarily must inspect those packets to determine where they're destined even if there is no route and no endpoint. Bit sieving is unavoidable.
The "perceived chance of success" isn't even a consideration in modern bots. The bots are being run from compromised hosts, primarily windoze machines on broadband connections, so the bot operators don't much give a fiddler's frock about efficiency.
Hmmm... just for the sake of argument, let's do a high-level stream of consciousness build of your system. You've mentioned NetServer::Generic which is a fine choice if you're not handling a ton of traffic and don't need a select based server, but it attaches meaning to line terminator characters. If you can't jam all of your data into one long string, you'll need something else. Just for giggles, though, we'll implement DJB's silly Netstrings idea via Text::Netstring to solve that problem. Now we have the guts of the beastie, and we have to give it a thick hide. Let's wrap our streams in SSL to keep the nosy neighbors out of our business, and plug in some sort of authentication that's invulnerable to replay. Pick one of the trendy ones just for geek points. So far, so good. Okay, now we get to address the original points, probing and DoSing. Let's do some rolling state maintenance, SQLite works well for this. But we don't want our server to have to deal with the load because all we really need to know is that ww.xx.yy.zz shouldn't be talking to us on port nn (deja vu!) so we'll enlist the help of IPTables::IPv4. There, problems all solved. It's a lot of work, but we've done it... never mind that our server isn't going to stand up to the load if we get really popular, for now.
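For what it's worth, the netstring framing that Text::Netstring implements is simple enough to show in a few lines. A sketch in Python rather than Perl, purely to illustrate the wire format:

```python
def to_netstring(data: bytes) -> bytes:
    # A netstring is "<length>:<bytes>," so message boundaries never
    # depend on line terminator characters in the payload.
    return str(len(data)).encode() + b":" + data + b","

def from_netstring(buf: bytes) -> bytes:
    length, _, rest = buf.partition(b":")
    n = int(length)
    if rest[n:n + 1] != b",":
        raise ValueError("malformed netstring")
    return rest[:n]

# Newlines in the payload are no problem: the length prefix frames it.
msg = b"hello, world\nwith newlines"
wire = to_netstring(msg)
assert from_netstring(wire) == msg
print(wire)  # → b'26:hello, world\nwith newlines,'
```

That length-prefix trick is the whole of DJB's "silly" idea, and it neatly sidesteps the line-terminator limitation mentioned above.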
Or you could save a lot of time and make the server side of your wonder widget a web service running Apache on a non-standard, unprivileged port; get SSL and authentication with very little effort; add in Zdziarski's mod_evasive Apache module to avoid much DoS nastiness; take advantage of the scads of CPAN modules already written for extending Apache, and maybe even for manipulating the server's iptables (assuming Linux); and focus most of your energy on the client side. Sounds like a fairly simple approach to me, with few unnecessarily reinvented wheels, leveraging some time-tested code.
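A hedged sketch of what that Apache setup might look like; the directive names are mod_evasive's documented ones, but every value and path here is illustrative, not a recommendation:

```apache
# Hypothetical httpd.conf fragment: the web-service-on-an-odd-port approach.
Listen 8443
<VirtualHost *:8443>
    SSLEngine on
    SSLCertificateFile    /etc/apache2/ssl/server.crt
    SSLCertificateKeyFile /etc/apache2/ssl/server.key

    # mod_evasive: temporarily blacklist clients hammering the server.
    <IfModule mod_evasive20.c>
        DOSPageCount      5    # same page this many times per interval...
        DOSPageInterval   1    # ...within this many seconds
        DOSSiteCount      50   # total requests per site per interval
        DOSSiteInterval   1
        DOSBlockingPeriod 60   # seconds an offender stays blocked
    </IfModule>
</VirtualHost>
```

SSL, request-rate defense, and a non-standard port, all from configuration rather than new protocol code.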
Whaddaythink? Spend lots of time to get something that cannot handle big loads, or spend much less time to get something that can handle big loads and much of which will continue to get better over time without any help at all from you?
Having nothing listening on port 80 and having the firewall block it are basically identical from a network perspective: either way, a bot sends a packet, and gets back either a response indicating the port isn't open, or else no response at all. So unless the bots know in advance that you don't have anything listening on that port, it's not going to save your firewall or your network anything to actually not have anything listening on that port.
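You can watch this happen from the client side: probing a closed port gets an active "connection refused" (a TCP RST from the kernel), whereas a firewall DROP rule gets only silence. A small demonstration in Python (the port is picked at runtime just so nothing is listening on it):

```python
import errno
import socket

# Find a port that is definitely free, then close the listener so the
# port is closed (not firewalled) when we probe it.
probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()

# A closed port is not silent: the kernel answers with a TCP RST, which
# connect_ex() reports as ECONNREFUSED.  Only a firewall DROP rule
# produces true silence (the connect attempt simply times out).
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(2)
result = s.connect_ex(("127.0.0.1", port))
s.close()

print(result == errno.ECONNREFUSED)  # the stack still "sieved" those bits
```

Either way the packet arrived and the TCP/IP stack had to look at it; the only question is what, if anything, goes back.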
Assuming that bots attack across all valid IP address space fairly evenly, you could reduce the impact of these attacks by having fewer IP addresses routed to you. Using nonroutable IP addresses (such as RFC 1918 addresses) might help with this. If you use private addresses on the client and server network and configure routing on both ends to ensure only your real clients can talk to your server, your firewall won't have to deal with packets sent to these private addresses, since in general they can't be sent. You could get a similar effect with a nonroutable protocol (such as NetBEUI), though it would be quite a bit more work. Both of these are most easily done through a VPN, though they can be done with physical network links as well.
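For reference, the RFC 1918 ranges mentioned above (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) can be checked mechanically; Python's ipaddress module, for instance, already knows them (shown purely as illustration):

```python
import ipaddress

# is_private covers the RFC 1918 ranges (plus a few other reserved
# blocks).  Packets for these addresses are, in general, not routed
# across the public internet, so a service bound to one never hears
# from internet bots at all.
for addr in ["10.1.2.3", "172.16.0.1", "192.168.1.1", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    print(addr, ip.is_private)
```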
Still, the network and router resources used by blocking a connection are so small that going to these great lengths won't make much of a difference on the vast majority of networks.
Re^3: How to implement a fourth protocol
by derby (Abbot) on Mar 27, 2007 at 17:33 UTC
Hmmm ... generally you're right, but your layers are wonky ... almost as if you're confusing the network model's layers with what is sometimes called the security onion model. I would agree that stopping them earliest is best, but sometimes you don't really have enough info in the lower layers of the network model to make that determination; that's why we have things like firewalls and DMZs: make the decision as early as possible in the network stack, but also as far away as possible from critical areas.
Yes, my knowledge in this area as a whole is definitely wonky, hence my seeking help.