CUPS drama over exploitability of an RCE attack

A prime example of how not to react

I love a good drama and CUPS just delivered.

Simone Margaritelli (aka evilsocket) has found a particularly nice RCE exploit in CUPS, which he explains in detail here.

It's nicely explained, and the write-up doubles as a good introduction to the subject, because the attack itself is not rocket science. It's all pretty straightforward.

Since I wasn't personally involved in the communication, there are a few "he said/she said" arguments I can't judge. What is clear, however, is that Simone found a relevant exploit that wasn't handled the way it probably should have been, with some apparent ... miscommunication along the way.

Till Kamppeter from OpenPrinting writes about it here. Akamai reports about the issue here.

The argument that hardly any print servers are directly exposed to the internet is probably correct. MongoDB thought the same. What kind of idiot would put a MongoDB database directly on the internet? Good question. The answer turned out to be some telcos and insurance companies ...

You have to expect stupid. And CUPS is by no means just a service for professionals.

The risk in professional environments is also somewhat higher than assumed, because combined with another vulnerability this becomes exploitable again. You don't need any special privileges, just something on the network that can reach the service.

The argument that restricted permissions (the lp user) would also stand in the way may fall short, because I can imagine attacks in the context of this exploit chain where that is entirely irrelevant.

The potential - and the frequency - of attacks that originate from within your own network is often underestimated. And the "not exposed to the internet" argument works much better if you have 20 employees rather than 2,000.

It's a serious issue that, due to a few lucky circumstances, can only be exploited to a limited extent. But it is a problem that needs to be taken seriously.

There's another fun question that CUPS raises: how long should you keep supporting ancient legacy systems with known - and quite severe - problems that are fundamentally unfixable? In such cases the risk has to be minimized by the absence of those components. Only those who absolutely need them should have them installed.
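One cheap way to act on that advice is to check whether a host even has something bound to UDP port 631, the port cups-browsed listens on and the entry point of the published exploit chain. A minimal sketch in Python; the helper name and messages are my own, and a positive result only hints at cups-browsed, it doesn't prove it:

```python
import errno
import socket


def udp_port_in_use(port: int) -> bool:
    """Return True if something on this host is already bound to the UDP port.

    cups-browsed binds UDP 631 on all interfaces, so a hit on port 631
    is a hint (not proof) that the service is running.
    """
    probe = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        probe.bind(("0.0.0.0", port))
        return False  # bind succeeded: the port was free
    except OSError as exc:
        if exc.errno == errno.EADDRINUSE:
            return True
        raise  # e.g. EACCES: privileged port, run as root to probe
    finally:
        probe.close()


if __name__ == "__main__":
    try:
        if udp_port_in_use(631):
            print("something is bound to UDP 631, check for cups-browsed")
        else:
            print("no listener on UDP 631")
    except OSError:
        print("could not probe UDP 631 (insufficient privileges?)")
```

The actual mitigation recommended in the write-ups is, of course, disabling or removing cups-browsed wherever it isn't strictly needed.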

A common problem in risk assessment is failure of imagination. If you could have imagined the attack, you probably wouldn't have the problem. But whether I can imagine something is entirely irrelevant. The question is whether I can rule it out.

Security should not depend on whether someone can imagine something. That approach will sooner or later bite you in the ass.