Re^2: How to implement a fourth protocol
by Moron (Curate) on Mar 27, 2007 at 17:14 UTC
The lower the network-modelling layer at which you defend against a bot, the earlier in its cycle you can stop it in its tracks. The earlier its advances are cut off, the less of a load the measure places on your system resources. The server is one such layer; the protocol (e.g. HTTP) is another; the firewall is another; TCP/IP is yet another.
And in what way does having a "new" protocol help? At worst you've now got yet another point of ingress (through a new, untested protocol and its implementations, no less) in addition to the existing ones, because in all likelihood you're not going to supplant them and someone will insist on still using them (cf. the installed base of Netscape 4.76 browsers just now disappearing). At best you've pushed the problem around . . . to a new, untested protocol and its new, untested implementations. You might see some benefit in that not many black hats know enough about it to start jiggling the doorknobs, but security through obscurity isn't.
I guess one person can make a difference, but most of the time, they probably shouldn't.
– Marge Simpson
Update: Not that you're incorrect that you want to stop someone as far out as you can, but a new protocol's not necessarily the best way to do that. Things like port knocking or running on non-standard ports (or better yet port knocking to connect to a non-standard port :) reduce the visibility of the service, but if the underlying protocol (say) uses rot13 or its analogue to hide user credentials (HTTP Basic authentication, I'm looking at you . . .) it doesn't gain you much.
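To make the rot13 analogy concrete, here's a minimal sketch (the username and password are made up): HTTP Basic authentication merely base64-encodes the credentials, so anyone who can see the header can read them with no key at all:

    use MIME::Base64 qw(decode_base64);

    # An Authorization header as it might be captured off the wire:
    my $header = 'Authorization: Basic YWxpY2U6czNjcjN0';

    # Strip the scheme and decode -- no secret required, just like rot13.
    my ($b64) = $header =~ /Basic\s+(\S+)/;
    print decode_base64($b64), "\n";    # prints "alice:s3cr3t"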
If all you're doing is moving a service using an existing protocol to a different port, you haven't gained much. If you're using a new protocol, you're tossing out years of live field testing. Things like SSH and TLS have been gone over by experts (white and black hatted both) and are at this point pretty much algorithmically sound; most of the exploits are against implementation bugs, not protocol flaws.
And as an aside, one of the laments I remember seeing a while back was that putting everything over HTTP makes it harder to lock down access at the network layer, since everything goes through one port rather than separate ports for separate protocols. Coming full circle, I guess.
Port knocking is discussed elsewhere in the thread, so you can see the problem with that.
I hope to reduce the "untested" risk by seeking out as much tried and tested material as possible, hence the reference to NetServer::Generic, which I presume IS tested. I might be able to build the protocol on top of another one, e.g. Telnet, but I left that idea out to give people a chance to suggest it ;) Of course I may be being naive about that idea; I'm not a networking guru, so I didn't want to put it in people's minds too early.
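For example, a rough sketch along the lines of the NetServer::Generic synopsis (the port and the echo behaviour are placeholders of my own, not part of any real protocol):

    use NetServer::Generic;

    # The callback runs once per connection; STDIN/STDOUT are tied
    # to the client socket for the duration of the session.
    my $cb = sub {
        print STDOUT "hello -- type bye to quit\n";
        while (defined(my $line = <STDIN>)) {
            return if $line =~ /^bye/i;
            print STDOUT "you said: $line";
        }
    };

    my $server = NetServer::Generic->new();
    $server->port(9000);        # arbitrary example port
    $server->callback($cb);
    $server->mode("forking");   # one child process per client
    $server->run();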
Hmmm ... generally you're right, but your layers are wonky ... almost as if you're confusing the network-model layers with what is sometimes called the security onion model. I agree that stopping them earliest is best, but sometimes you don't have enough information in the lower layers of the network model to make that determination; that's why we have things like firewalls and DMZs -- make the decision as early as possible in the network stack, but also as far away as possible from critical areas.
Yes, my knowledge in this area as a whole is definitely wonky, hence my seeking help.
Yep, and the firewall is at a pretty darn low level already since it's working at the packet level, isn't it? You cannot stop J. Random Cracker from spewing packets at your network unless you unplug from it. Or go to his house and unplug him from the internet :-) Short of that, his packets are coming at you even if they're just bouncing off of ports with no listeners.
I'm assuming that we're talking about a public service here, and not something that can easily be protected by something akin to a VPN. In order to detect a bot, you have to allow it some initial degree of access so you can discern its intent; there's really no other way to determine the intent of a previously unseen client. After something at the server level determines that the client is malicious, then it has to work to defend itself.

The lowest level available to us is the TCP/IP layer, where we can decide at the packet level whether to accept, reject, or drop packets without the overhead of reassembling them into messages. This is the level where the firewall lives, very efficiently sieving bits. So the easy way to implement a defensive measure is to give the server, or some lightweight middleware, the smarts to detect malicious activity and the means to tell the firewall "I don't want to hear from IP address ww.xx.yy.zz on my port nn any more". Bam, problem solved. From that point forward you don't analyze payloads, and the firewall just sieves bits. The best efficiency comes from dropping those packets without bothering to tell the client you don't want them. It's a bit rude, but it's efficient.
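A minimal sketch of that hand-off, assuming a Linux box with iptables and leaving the detection logic itself aside (the sub name, IP, and port are made up for illustration):

    use strict;
    use warnings;

    # Once something upstream has flagged a client as malicious, tell
    # the kernel firewall to discard its packets. Assumes the process
    # has sufficient privilege to modify firewall rules.
    sub ban_ip {
        my ($ip, $port) = @_;
        # -I INPUT prepends the rule; -j DROP discards silently (the
        # "rude but efficient" option -- use REJECT to be polite).
        system("iptables", "-I", "INPUT",
               "-s", $ip, "-p", "tcp", "--dport", $port,
               "-j", "DROP") == 0
            or warn "iptables failed for $ip: exit $?\n";
    }

    ban_ip("203.0.113.45", 80);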
So, again, how would a new protocol, which is certain to introduce vulnerabilities, be faster/easier/better than the existing tools which are readily available today?
I think in a way you've answered your own question, and that helps me too! If the HTTP port is closed, there is no need to have a firewall listening on it, which is more efficient than sieving those bits, which could grow to a high volume of unwanted traffic. A new protocol would mean no old bots getting at it.

I should have said that, for my own purposes, the protocol isn't intended to replace typical HTTP traffic; it is for traffic between contracting parties rather than from anyone to anyone. Technically that can be as public as you like, but it's more bother (and more security!) to hook up to. Someone is more likely to spin up a bot if it has a perceived chance of success; that is where I can see one sort of gain, given that no bot will ever have succeeded against this protocol, and given that I can use the firewall AS WELL on the open port with the new protocol.

Moreover, where FTP requests can spin your readdir, the protocol I have in mind is intended to be more of a pure traffic session rather than a file-and-directory-rendering one; there are no directories or files to be got at with it! And it is less likely you will be hit by a pure packet-volume attack.
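For what it's worth, here is a sketch of what I mean by a "pure traffic session" (the verbs are invented placeholders, not a real design): the handler dispatches on a small fixed set of commands, and there is simply no code path that touches files or directories:

    use strict;
    use warnings;

    # Hypothetical verb set: nothing here reads the filesystem, so
    # there is nothing for a crawler to enumerate.
    my %dispatch = (
        HELLO => sub { "OK session open" },
        SEND  => sub { "OK got: $_[0]"   },
        BYE   => sub { "OK closing"      },
    );

    sub handle_line {
        my ($line) = @_;
        my ($verb, $rest) = $line =~ /^(\S+)\s*(.*)$/
            or return "ERR empty line";
        my $handler = $dispatch{ uc $verb }
            or return "ERR unknown verb";    # everything else rejected
        return $handler->($rest);
    }

    print handle_line("HELLO"), "\n";             # OK session open
    print handle_line("GET /etc/passwd"), "\n";   # ERR unknown verb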