
Ted Demopoulos’ securITy
___________________________________________________________

Subscribe to securITy, our free e-newsletter on Information Technology, Security, and their intersection with Business.

Subscribing will not result in more spam! We guarantee it!
(Note: we respect your privacy. We do not rent, sell, or share email addresses.)

Please forward this newsletter to anyone you know who might enjoy it!

  • A couple of new “Ted Videos” will be released soon by WatchIT.com: “Application Security Principles” and “Understanding Blogs and their Business Uses.” Both have been filmed and are in post-production, and I’ll have a few free DVDs available. Check www.demop.com soon; first come, first served!
     

  • I gave a few media interviews recently: my comments on the privacy of email and Instant Messaging appear in a United Press International article, and my comments on the lack of cooperation among InfoSec professionals appear in Information Security Magazine.
     

  • New articles on www.demop.com include:

RSS and Atom Feeds Explained
Who Should Write the Corporate Blog?
The Patron Saint of Sauerkraut (there’s nothing unprofessional about humor!)
Executive Coaching – not required for every Executive

  • Some quick answers to questions many of you have been asking.

Q: How’s Tyler?

A: My dog Tyler is fine! He has recovered from his car accident. I’ve received over 50 emails and phone calls asking about him. Thanks for your concern!

Q: What’s with all these quotes in the press?

A: I’ve always had good relations with the press, probably because I speak my mind and don’t mind being quoted. Lately, a new client, Dan Janal of PR Leads, has been helping get some additional press, and I’ve been having fun talking to reporters.

Q: Ted, your articles and blog posts are straying somewhat from technology. What’s up?

A: I’m a technology guy, and will always be fascinated by technology and especially IT security! That said, you can’t spend 20+ years in high tech, reviewing dozens of business plans and surviving a few startups along the way, plus 15 years in consulting, without developing some business skills and insights.

Some clients are starting to tap my business knowledge and I’ve stopped resisting.

As Yogi Berra once said, “If you come to a fork in the road, take it.” Most of our business is and will remain techno-centric, but we are also helping some clients with business-centric issues, including business blogging, IT entrepreneurship, and surveys.

Worst Practices in Developing Secure Software, Part II

As I’ve said before, the “Best Practices Mantra” annoys me.

A major component of success is avoiding major mistakes. Instead of focusing exclusively on implementing “Best Practices,” I suggest avoiding “Worst Practices.” You can do almost everything perfectly, but get one thing horribly wrong and you can negate it all. A soldier greatly increases his chances in a firefight by doing things right, but one serious mistake and his odds of surviving plummet. Fatal flaws and mistakes are exactly that – FATAL!

Assuming that only “important” software needs to be secure.

“Hey Ted, wasn’t this also in Part I?”

Yes, but it’s worth repeating!

All programs and services need to be secure. Even a simple game or utility could be subverted, contain a Trojan or otherwise harbor malicious code, and lead to your entire network being compromised. This includes prototype and test code as well.

Not planning for failure

Complex systems can and do fail, and both partial and complete failures need to be planned for. Software should always fail to a secure mode and, while in failure mode, deny access by default. If the entire system fails, any protected data should remain unavailable!
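
As a small sketch of deny-by-default in Python (the policy lookup here is a hypothetical placeholder, deliberately failing to stand in for a partial outage), any error in the decision path results in denial, never in access:

    def check_policy(user, resource):
        """Hypothetical policy lookup; may raise during a partial failure."""
        raise ConnectionError("policy database unreachable")

    def is_access_allowed(user, resource):
        try:
            return check_policy(user, resource)
        except Exception:
            # Fail to a secure mode: on ANY failure, deny access by default.
            return False

    print(is_access_allowed("alice", "/payroll"))  # False: a secure failure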

When a failure occurs, no data should be disclosed that wouldn’t normally be available, and as little information as possible should be revealed.

For example, if a login fails, it is far preferable to report simply that the login failed than to specify “invalid password” or “no such account.” A failed login should reveal no information beyond the fact of failure (if even that).

I worked on one system where the results of a successful and unsuccessful login were visually the same – the user didn’t even know their login failed until they tried to do something.

In contrast, I was recently authorized by a client to log in to their Blogger account to add Google Ads and make a few other changes to their blog. They gave me an incorrect account name and password, and when I tried to log in I got a message that said “non-existent account.” I now knew the account name was wrong, so I tried a couple of “obvious” account names, such as the company name. My second guess was correct, and I got a different error message: “incorrect password.” The password was easy to guess too – it was my client’s dog’s name!

I was authorized to access his account, but even if I hadn’t been, I could have “guessed my way in.” If Blogger didn’t differentiate between incorrect passwords and non-existent accounts, it would be more secure and I probably would have given up quickly and waited for my client to give me the correct login information.
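
As a minimal sketch of the principle in Python (the user store and function names are invented for illustration; a real system would use a slow password hash such as PBKDF2 or bcrypt), the same code path and the same generic message cover both a missing account and a wrong password:

    import hashlib
    import hmac

    # Hypothetical user store: account name -> (salt, password hash).
    USERS = {
        "alice": ("s4lt", hashlib.sha256(b"s4lt" + b"correct-password").hexdigest()),
    }

    def login(username, password):
        """Return True on success; reveal nothing about WHY a login failed."""
        # Use a dummy entry when the account doesn't exist, so the
        # "no such account" and "bad password" cases are indistinguishable.
        salt, stored = USERS.get(username, ("dummy", "0" * 64))
        candidate = hashlib.sha256(salt.encode() + password.encode()).hexdigest()
        return hmac.compare_digest(candidate, stored)  # constant-time compare

    if not login("alice", "a bad guess"):
        print("Login failed.")  # one generic message, nothing more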

Should a significant failure occur in a critical system, e.g. a defacement of the organization’s web server or an e-commerce server’s inability to authorize credit card purchases, there *should* be a security policy in place that specifies contingency plans. For example, should the server be taken offline? Should it report an “unavailable – try again” message? Should it be left live and fixed as quickly as possible?

Counting on “Security through Obscurity”

Security through Obscurity is the notion that hidden vulnerabilities will not be discovered. It can be used as part of a Defense in Depth strategy, but it should never be depended on alone. Secrets are hard to keep!

For example, not releasing source does not guarantee that any secrets in the binaries will remain secret! Binary code can be reverse engineered, disassembled, or decompiled.

Although full disclosure of cryptographic code is somewhat controversial, most IT professionals believe it leads to better security, because more people can easily examine the code for vulnerabilities.

I actually LOVE security through obscurity as part of a defense in depth strategy! There is nothing wrong with keeping secrets, and there is no reason to make life easy for hackers. For example, why advertise the OS, version number, and patch level you run? Why let anyone know anything about your firewall? Why make it public knowledge whether you run Apache, IIS, or some other web server? There is typically no reason to publicize any of these – just don’t count on them remaining secret!
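
For instance, the information a web server volunteers is trivial to read. A quick sketch using Python’s standard library (the host name here is just a placeholder):

    import http.client

    # Placeholder host; try it against your own server.
    conn = http.client.HTTPConnection("www.example.com", 80, timeout=5)
    conn.request("HEAD", "/")
    response = conn.getresponse()
    # Many servers announce software and version outright, e.g.
    # "Apache/2.0.52 (Unix)" -- exactly what you needn't advertise.
    print("Server banner:", response.getheader("Server"))
    conn.close()

Most web servers can be configured to suppress or minimize this banner; just don’t mistake that for real protection.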

Disallowing bad input instead of only allowing good input.

Many known vulnerabilities are exploited through malicious input.

Input should always be validated. For example, if the program expects a 20-digit number, confirm that a 20-digit number was entered; if an address is expected, make sure the input is in the form of an address. Always check the size of the input!

Rather than rejecting known-bad input, only known-good input should be allowed. There are too many possible forms of invalid input, and something will be missed otherwise. For example, input may be encoded in hex, Unicode, or some other unexpected format.

Applications should have a trust boundary defined around them, with a small number of entry points through that boundary. All input should pass through, and be validated at, one of those entry points. This includes not only input from users and other applications, but also config files, environment variables, etc.
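
A sketch of this allowlist approach in Python (the patterns and size limit are illustrative): one validation routine sits at the entry point, checks the size first, and accepts only input matching a strict definition of “good.”

    import re

    # Allowlist patterns: define exactly what IS valid; nothing else passes.
    ACCOUNT_NUMBER = re.compile(r"\d{20}")         # exactly 20 digits
    US_ZIP_CODE = re.compile(r"\d{5}(-\d{4})?")    # simple US ZIP code

    MAX_INPUT_LENGTH = 256

    def validate(value, pattern):
        """Single entry point for input validation: size check, then allowlist."""
        if len(value) > MAX_INPUT_LENGTH:   # always check the size of the input
            return False
        return pattern.fullmatch(value) is not None

    print(validate("12345678901234567890", ACCOUNT_NUMBER))   # True
    print(validate("1234'; DROP TABLE x--", ACCOUNT_NUMBER))  # False
    print(validate("%32%30%31%32", ACCOUNT_NUMBER))           # False: encoded input fails too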

Software that is not secure by default

By default, a system should be secure when installed. All resources should have adequate protections by default, and rarely used features shouldn’t be installed by default, as they increase the attack surface.

This is all easy to say, but more difficult to achieve. What does “reasonably secure” mean?

In the past, the trend was for default installations to have minimal security configured – in the Windows and Unix/Linux operating systems, for example. The more secure a system is, the more difficult it is for users and (inexperienced) administrators to get work done: the system may be harder to get up and running initially, new users may need more training, and the help desk may field more calls.

What “reasonably secure” means depends on the system and environment. However, it is typically MUCH easier to loosen security later than to tighten it.
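
As a sketch, secure-by-default can be as simple as how a configuration object is written (these setting names are invented for illustration): every risky feature is off until someone deliberately opts in.

    from dataclasses import dataclass

    @dataclass
    class ServerConfig:
        # Hypothetical settings: the secure choice is always the default.
        require_authentication: bool = True
        allow_anonymous_uploads: bool = False  # rarely used feature, off by default
        remote_admin_enabled: bool = False     # must be enabled explicitly
        verbose_error_messages: bool = False   # don't leak internals on failure

    config = ServerConfig()
    config.remote_admin_enabled = True  # loosening is a conscious, explicit act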

Rolling your own cryptography

Cryptography is its own very complex and difficult discipline. With VERY few exceptions, programmers are NOT cryptographers and should not be developing OR implementing cryptographic algorithms!

Something proven and commercial-strength should be used. For example, Microsoft Windows (like other operating systems) includes cryptography services that applications can use.
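
For example, encrypting data with a vetted library instead of homegrown code takes only a few lines. A sketch using the widely used third-party Python “cryptography” package (any proven library fills the same role):

    # Requires: pip install cryptography
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # proven key generation
    f = Fernet(key)

    token = f.encrypt(b"sensitive data")          # authenticated encryption
    assert f.decrypt(token) == b"sensitive data"  # raises if tampered with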

The classic bad example of “rolling your own cryptography” is the DVD Content Scrambling System (CSS).

It was certainly NOT designed by cryptographers and a number of weaknesses exist.

DeCSS, decryption code for CSS, was written quite quickly and posted on the Internet by Norwegian teenager Jon Johansen.

Proven commercial strength encryption should have been used! Then again, maybe the programmers thought it was a stupid idea and wanted it to fail?
_________________________________________________________

The free newsletter of Demopoulos Associates, www.demop.com

This newsletter is Copyright © 2004 by Demopoulos Associates, Durham, New Hampshire, USA.  All rights are reserved, except that it may be freely redistributed if unmodified.

Sharing securITy is encouraged if the copyright and attribution are included.
