Schneier on Security: The Insecurity of Secret IT Systems

Comments: "Schneier on Security: The Insecurity of Secret IT Systems"

URL: https://www.schneier.com/blog/archives/2014/02/the_insecurity_2.html

February 14, 2014

The Insecurity of Secret IT Systems

We now know a lot about the security of the Rapiscan 522 B x-ray system used to scan carry-on baggage in airports worldwide. Billy Rios, director of threat intelligence at Qualys, got himself one and analyzed it. And he presented his results at the Kaspersky Security Analyst Summit this week.

It’s worse than you might have expected:

It runs on the outdated Windows 98 operating system, stores user credentials in plain text, and includes a feature called Threat Image Projection used to train screeners by injecting .bmp images of contraband, such as a gun or knife, into a passenger carry-on in order to test the screener's reaction during training sessions. The weak logins could allow a bad guy to project phony images on the X-ray display.
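
To make "plain text" concrete: what a credential store should hold instead is a salted, iterated hash. Here is a minimal sketch using only Python's standard library -- the Rapiscan code itself is not public, so this is purely illustrative of the general technique:

    # Minimal salted-and-iterated password storage -- a sketch of what
    # "not plain text" looks like, using only the standard library.
    import hashlib
    import hmac
    import os

    ITERATIONS = 600_000  # PBKDF2 work factor; tune to your hardware

    def hash_password(password):
        """Return (salt, digest). Store these; never store the password."""
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, digest

    def verify_password(password, salt, digest):
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return hmac.compare_digest(candidate, digest)  # constant-time compare

    salt, digest = hash_password("hunter2")
    assert verify_password("hunter2", salt, digest)
    assert not verify_password("guess", salt, digest)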

While this is all surprising, it shouldn't be. These are the same sort of problems we saw in proprietary electronic voting machines, or computerized medical equipment, or computers in automobiles. Basically, whenever an IT system is designed and used in secret -- either actually secret or simply kept away from public scrutiny -- the results are pretty awful.

I used to decry secret security systems as "security by obscurity." I now say it more strongly: "obscurity means insecurity."

Security is a process. For software, that process is iterative. It involves defenders trying to build a secure system, attackers -- criminals, hackers, and researchers -- defeating the security, and defenders improving their system. This is how all mass-market software improves its security. It’s the best system we have. And for systems that are kept out of the hands of the public, that process stalls. The result looks like the Rapiscan 522 B x-ray system.

Smart security engineers open their systems to public scrutiny, because that’s how they improve. The truly awful engineers will not only hide their bad designs behind secrecy, but try to belittle any negative security results. Get ready for Rapiscan to claim that the researchers had old software, and the new software has fixed all these problems. Or that they’re only theoretical. Or that the researchers themselves are the problem. We’ve seen it all before.

Tags: air travel, disclosure, economics of security, obscurity, secrecy, security engineering

Posted on February 14, 2014 at 6:50 AM · 28 Comments

Comments

In Europe they use quantum key distribution in their voting systems:

http://www.idquantique.com/news-and-events/...

In Europe, the struggle to create a public realm out of the monarchy's private government extends back to populist movements in the Middle Ages such as the Ranters, the Diggers, and the Brethren of the Free Spirit; they seem more likely to view their government as something that really belongs to them, with the potential to work for them.

In the United States, our struggle to create a public government really begins with the 14th Amendment. Between then and the civil rights era is when we obtained universal suffrage. The franchise was highly exclusive in the Revolutionary era -- so much so that "We, the People" probably represented the will of only 5-7% of the population at the time. Since then, the conservative battle cry has been "smaller government" and "privatization." Notwithstanding that we had private government once before -- when we were owned by Britain -- we have this myth of obtaining self-rule by fighting tyranny when, in fact, the road to self-rule has been a much more complicated struggle. But the myth prevails over history.

The 522B is ancient - at least 5-10 years old. http://aerodetection.com/rapiscan-522b/ says the units it has are 8-10 years old.

I'd be much more interested in the Rapiscan 620DV which appears to be the model used at major airports in Europe.

I still remember the first time I ran smack into security-by-obscurity. My boss at the time put me in charge of the most secure system we had at work. It was kept behind a heavy locked door and I was admitted only after a strong lecture on how important it was to keep it as secure as possible.

Even behind a closed door, he felt it necessary to whisper: "and the password is 'secret', which of course we can't tell anybody and they would never guess..."

The idea of continual improvement driven by the hostile nature of the operating environment seems very similar to evolution in the natural world. I don't mean analogous to; I mean another form of evolution, survival of the fittest. Obscurity, obfuscation, and political lobbying are all attempts to exclude a product from the security evolutionary process, but even at the very highest levels -- government-printed currency, NSA information -- it is impossible to isolate anything from security evolutionary forces.
It is better to embrace this process, continually and iteratively evolving and growing stronger and wiser along the way, than to try to hold the driving forces of evolution at bay. The threats driving security evolution themselves evolve, adapt, and get stronger over time. You can hold them at bay temporarily through obscurity and obfuscation, but it then becomes only a matter of how long -- e.g., the Sony PlayStation 3.
The PlayStation 3 is an interesting example in that it was finally cracked due to a sloppy cryptography implementation (what was meant to be a random number was implemented as a constant). I say interesting because peer/open review would have exposed the obvious flaw and it would have been fixed. In this case it was the obscurity and obfuscation that were ultimately responsible for the security being broken.
https://www.schneier.com/blog/archives/2011/01/sony_ps3_securi.html
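
For the curious, the algebra behind that break is simple enough to sketch. ECDSA needs a fresh random nonce k for every signature; Sony used a constant, so every signature shared the same r value, and two signatures are enough to solve for the private key. A toy sketch -- r is normally the x-coordinate of k*G on the curve, but here all values are illustrative stand-ins, since only the modular arithmetic matters:

    # Toy sketch of the ECDSA nonce-reuse break. Values are illustrative;
    # real ECDSA derives r from a curve point, but the recovery algebra
    # below is exactly what made a reused-nonce signing key extractable.
    import hashlib

    n = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551  # P-256 group order
    inv = lambda a: pow(a, -1, n)  # modular inverse (Python 3.8+)

    d = 0xC0FFEE            # the "secret" signing key (stand-in)
    k = 0xDEADBEEF          # the nonce -- constant, i.e. the fatal mistake
    r = 0x1234567890ABCDEF  # stand-in for the x-coordinate of k*G

    z1 = int.from_bytes(hashlib.sha256(b"firmware image 1").digest(), "big")
    z2 = int.from_bytes(hashlib.sha256(b"firmware image 2").digest(), "big")

    # Two signatures computed with the SAME nonce:
    s1 = inv(k) * (z1 + r * d) % n
    s2 = inv(k) * (z2 + r * d) % n

    # An attacker holding (r, s1, z1) and (r, s2, z2) -- the identical r
    # betrays the shared nonce -- recovers first k, then d:
    k_found = (z1 - z2) * inv(s1 - s2) % n
    d_found = (s1 * k_found - z1) * inv(r) % n
    assert (k_found, d_found) == (k, d)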

OK, so if Kaspersky or Symantec or TrendMicro or Norton doesn't hand over all their source code I shouldn't buy the product... right?

OK, go ahead and post all your passwords and SSNs and your credit card info.

@beatty Well ... yes, although not just because you don't have source code. Virus scanners typically only search for known exploit code (and perhaps a few variations on it). They're fundamentally reactive and in my view not worth the effort.
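
To make that concrete: at its core, signature scanning is just substring matching against a database of known-bad byte patterns. A minimal sketch -- not any vendor's actual engine, and the signatures are stand-ins (a truncated EICAR test prefix and a made-up byte run):

    # Minimal sketch of signature-based scanning -- the "known exploit
    # code" matching described above. Illustrative only.
    KNOWN_SIGNATURES = {
        "EICAR test (prefix)": b"X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR",
        "hypothetical dropper": b"\x90\x90\xeb\xfe",
    }

    def scan(payload):
        """Return the names of all signatures found in the payload."""
        return [name for name, sig in KNOWN_SIGNATURES.items() if sig in payload]

    print(scan(b"junk X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR junk"))  # hit
    print(scan(b"junk X5O!p%@AP[4\\PZX54 junk"))  # miss: one byte changed
    # A one-byte variation evades the match -- which is why this approach
    # is fundamentally reactive.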

@vincent You jest. The difference between secret passwords and secret algorithms has been explained to death already.

@vincent: not really. Those are yours, kept private for your own use and protection. But when somebody offers a security product for public use, it should be available for public scrutiny of potential security weaknesses and invasions of privacy (as Samsung recently agreed to provide an LED indicator on its smart TVs when the camera is on; I hope that indicator is driven by hardware, not software).

The meta-problem here is that many (in my experience most) software "engineers" are incompetent and do not qualify as engineers. The result is that the typical software system sucks badly.

This machine is just a standard example. However, anyone who thought Windows (no matter what version) was suitable as an embedded OS has no business working on this software or the surrounding systems.

@beatty -- you shouldn't buy their products if you don't trust them. I don't, and the systems my family uses haven't had any problems. But they don't hide what they're doing, and they have some pretty vigorous competition.

@vincent -- I hope you don't think there is anything especially "secure" about your SSN. But hiding secret information used to access a system is different from hiding or obscuring information about the system itself, which is what this post is about.

No, this is just another cherry-picked instance of failed product development that is being used for self-serving purposes. It's easy, isn't it, to swoop in after the fact and point out everything that went wrong. Do you even know what the original threat model was? For all you know this failure was the RESULT of anal security engineers who suffocated product development until the project collapsed. You don't know. You pick up on all this pop-news junk and fling it any way you want.

Oh, it gets better. I keep thinking that if you control the software, you can probably control where the X-ray beam is pointed. And observe that TSA agents walk back and forth through the scanner all the time.

So you keep it ON bouncing back and forth scanning at roughly chest level when not actually scanning the full body. When you see something metallic (say, a TSA badge), you immediately drop the emitter down to crotch level for the next 10 seconds...

"For all you know this failure was the RESULT of anal security engineers"

Win98 and plaintext passwords are not the result of anal security engineers. Unless, of course, you mean actual anal security engineers, in which case it is not surprising, as they only know stuff about how to use a variety of rubber corks.

I somewhat disagree that engineers try to hide their bad designs through obscurity. I don't believe they even think about security.

There seems to be this idea of "Well, why would anyone attack that?" that is prevalent far too often. It's why we have empty passwords on internet-facing SCADA stuff, hopelessly outdated operating systems that can't be updated on embedded systems, etc. Nobody thinks like attackers. Closed systems are perfectly acceptable to people who don't think they will ever be a target.

It would be like me inventing my own door lock and, because no one has seen one before, assuming I'm protected. (Of course, it could also be said: since no one has attempted to break it yet, I can only hope I'm protected.)

Rather than the alternative: buying a door lock that has been proven in the real world, and paying attention to security bulletins so that if an exploit is found, I can replace it with a fixed version.

@vincent - Either way, I'm not giving you my key.

@Jason. All you said is valid when you are a random target, where being better protected than the next target is enough, as in a phishing scheme. Just to lighten the mood: two young ladies were in the jungle and spotted a lion. One started running; the other asked if she really thought she could run faster than the lion. "Nope," she replied. "Just faster than you..." If you are NOT a random target, everything depends on the resources available to the actor trying to break your security (physical or informational): local thugs, organized crime, local or state LEAs, federal LEAs, foreign agents, etc.

@vas pup:

Well, now we're just getting deeper into security concepts. I don't think the threat of a targeted attack is a reason to abandon tried and tested methods. I think it's a reason to bring in additional expertise and add layers of protection (instead of just a good door lock: a surveillance system, an alarm system, maybe a stronger door, bars on the windows). The security system you use, no matter what you're protecting, has a cost that must be weighed against the risk, and to be effective it will almost certainly be layered.

@ Bruce,

    The truly awful engineers will not only hide their bad designs behind secrecy, but try to belittle any negative security results

That statement is a little unfair, because when it comes to hardware, the closer you are to the metal, generally the more competent you are as an "engineer".

The problem generally starts and ends with management, because:

1. Like quality, security has to be there, fully functional, from project day 0.

2. Security processes, training, etc. "cost".

You have to be an "old engineer" to remember the days before quality processes were considered part and parcel of the job. And unfortunately, the area where quality processes are least frequently found is "software engineering". Just take any modern software methodology and try to find the bits that are actually about "Quality Assurance"...

The simple answer is that all you will find is an illusion or mirage paying lip service to any real quality process. It's also the reason grizzled old veterans of software coding will tell you that most software development methodologies are at best "make work", and that you will get better results where team members share a common, non-adversarial goal and thus trust each other.

And when you look back at the development of QA systems, it was the teams who bought into it and trusted the others where the most benefit was seen.

The reason QA actually got going was twofold:

1. Management saw the financial benefit before the factory door.

2. Those who saw the benefit used QA as part of their purchase decisions.

Neither of these conditions is currently true for "security", so management treats it as "a non-productive inefficiency" which "management mantra" says should be ruthlessly expunged from the work process "to increase productivity".

The way to get security into the design process as a norm is to make having it the most profitable path to walk; that way, as with QA, the "management mantra" will change.

Until that time, blaming other people for "keeping their jobs" is a little unfair.


I think the article is dead wrong about the threat projection system being a big issue.

The purpose of this system is to keep the screener alert. In a normal airport, a contraband item like a gun might occur at most once a day; rare contraband like a bomb, probably less than once in a lifetime. Hence it would be natural for a screener to simply 'pass' all luggage, even if they are being diligent. Adding these "false positives" gives the screener something to do, and increases security by "impedance matching" the task at hand to the psychology of the operator.

It's true that an attacker could have the system inject innocuous items, or perhaps have it inject items at a very high rate. I suspect that either of these new behaviors would be quickly noticed.

Actually, in any airport, contraband like this would occur at most 24hrs/airport-lockdown-time per day.

Looks like I was wrong. The other article gives more details about the system, and it is pretty crappy.

It's one thing to superimpose false images that are removed after the operator alarms on them. It's another thing entirely to let some other party choose when the false image is shown, and to replace the real image rather than modify it.

Hacking is illegal. Selling poorly secured soft-/hardware isn't (wearing my black-and-white glasses now).

"Upon seeing a weapon on the screen, operators are supposed to push a button to notify supervisors of the find. But if the image is a fake one that was superimposed, a message appears onscreen telling them so and advising them to search the bag anyway to be sure. If a fake image of a clean bag is superimposed on screen instead, the operator would never press the button, and therefore never be instructed to hand-search the bag."

If the training software assumes that the .bmp images have simulated contraband, one would think that the training software would do something if the operator doesn't press the button when a .bmp is displayed. Or does the attacker who introduces a "clean" .bmp file also modify the software?
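
A hypothetical sketch of the asymmetry being described -- the names and control flow here are guesses, not Rapiscan's actual code -- showing why an injected "clean" overlay never triggers the training feedback:

    # Hypothetical sketch of the Threat Image Projection (TIP) flow as
    # the article describes it. Names and logic are guesses.
    def operator_presses_button(displayed_image):
        # Stand-in for the human screener: alarm iff a threat is visible.
        return "gun" in displayed_image

    def screen_bag(real_image, injected_overlay=None):
        displayed = injected_overlay if injected_overlay is not None else real_image
        if operator_presses_button(displayed):
            if injected_overlay is not None:
                # Feedback fires only on a button press: "that was a
                # test; hand-search the bag anyway."
                return "hand-search (overlay revealed)"
            return "hand-search"
        # The hole: an injected *clean* image raises no alarm, so the
        # "this was superimposed" message never appears and the real
        # bag passes unexamined.
        return "pass"

    assert screen_bag(real_image="bag with gun") == "hand-search"
    assert screen_bag(real_image="bag with gun",
                      injected_overlay="clean bag") == "pass"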

Thank you, Bruce... "Obscurity means insecurity" is exactly what I've always meant, when I said "closed source by definition is insecure".... only open source can be secure (which doesn't guarantee that it is, only that it's at least possible).

What really worries me is that we haven't really learned a lot.

The 1983 movie WarGames could happen today. Maybe not in the US (although I doubt that), but there are more countries in the world. The problem with security by obscurity is that you just don't know whether there is a WOPR out there with a backdoor login of "Joshua".

How secure are these nuclear platform systems? Just look at the Stuxnet virus. Is "the West" capable of protecting itself against this kind of thing? I don't think so (looking at this news item).

And is the JSF/F-35 capable of dropping a nuke? It also contains 20 million lines of C++ code.

I think this is way more worrying than any "terrorist attack".

I don't know. Maybe it's just BS that I am talking about. I am not a security expert. But I do know that you can't trust computers. Not yesterday, today or tomorrow.

Bruce,

Saw you at SAS, thanks for speaking!

Did you notice on the way out that all the machines in the Punta Cana airport were the make and model Billy and Terry evaluated?

Mike

Your comment about engineers (smart vs. awful) was unfortunate. Well-established companies such as Diebold produce software with a workforce that is salaried and university-educated. The software produced usually conforms to management's priorities. If QA isn't part of the software process, the software produced will tell the tale. The company with good management and a weak engineering staff is a rare beast. Unicorn rare. If Diebold has crappy software, then Diebold is to blame, not some mythical bumbler.

To say nothing of the procurement process.

True of Diebold and Rapiscan!
