The Three-Step Approach to Security

As we discussed earlier, security is too often treated as black or white; it is considered to be either on or off. The security group, if asked at all, all too often defaults to saying no to just about everything the business tries to do. Consequently, the security group has become irrelevant.

Instead, consider what "defense in depth" really means. It should mean that there are complementary security measures, not necessarily redundant ones. One of the authors was once brought in to evaluate a network design. When he asked what the ingress points to the data center looked like, he was informed that there were two firewalls, something we usually refer to as a "dual-screened subnet." Normally that means there is a firewall in front of the data center (facing the Internet) and another firewall behind the data center (facing the back end, or corporate network). It was not until later in the review process that he was informed that what this particular design group considered a "dual-screened subnet" was a data center screened by two firewalls, one right behind the other, in front of the data center. When he probed for more information on how these firewalls were configured, he was informed that they were from two different manufacturers, but that the configuration on them was exactly the same, by design. In fact, the configuration file formats were essentially equivalent, so the operators would just copy one file to both firewalls. The back end of the data center, meanwhile, was essentially wide open to the corporate network. This was considered defense in depth because if one firewall had a vulnerability, the other, one hopes, would not. In reality, the organization had spent a lot of effort implementing tools that were not substantially contributing to security, ignoring real problems in the process. They could not even estimate the likelihood that one or the other of the firewalls would have a vulnerability, nor produce anything else that quantified the value of having two firewalls. Meanwhile, the risk of leaving a data center wide open to a corporate network should be relatively clear.

A complementary approach to security in this situation would have been to have a port-filtering firewall in front, and an application-filtering firewall behind it. Those two would complement each other while still providing a measure of redundancy. In the event that the front firewall had a problem so bad that it did not fail securely, the rear firewall would still be capable of catching the attack, even though its main job was more granular filtering. This type of design is the first of our three-step approach to security.

Keep 'em Off the Box

In the best of all worlds, the attack never gets to the box, the computer, in the first place. Something stops it from even getting there. One of the main ways to do that on a client OS, such as Windows Vista, is to use the principles of domain and server isolation. If clients listen only to requests coming from a small set of hosts, uninvited attacks, such as common worms exploiting a vulnerability in some default service, can come from only a very small number of places.

Another way to keep attacks off the box is to prevent things from installing. By leveraging the low-rights processes that User Account Control (UAC) enables, many fewer problems can arrive through inadvertent clicks in Internet Explorer (IE). Over time, more low-rights applications, such as instant messaging applications, e-mail clients, and so on can be expected. This limits the potential for delivery of attacks through some of the historically more vulnerable channels.

There is nothing, however, that will have a bigger impact on keeping bad things off the box than user education. In the end, a user who wants to will always ensure that the naked dancing pigs get on the box. Only by partnering with the users can we make headway here. We are already hearing rumblings about organizations that will disable UAC. You would lose the low-rights IE (LoRIE) that way. However, if you at least make users run as standard users, which is considerably easier to do in Windows Vista than in Windows XP, you retain the benefit of some of the plumbing that went into UAC.

Keep 'em from Running

If malicious code should happen to get on the system, the next layer of protection must be to stop it from running, or at least doing any damage. The traditional way to do this was to use antivirus programs. Today the term "anti-malware" is much more appropriate because it is not just about viruses any more. Those programs will detect known bad things and keep those from running. Windows Vista ships with a phishing filter in IE, as well as an anti-spyware technology. Adding an antivirus solution to that is generally considered a best practice.

Another potential contender for use in keeping malicious code from running is Software Restriction Policies (SRP). SRP is a technology that you can use to specify executables that can and cannot execute on a computer. It has been around since Windows XP, and in that time has proven notoriously difficult to manage. It also turns out that SRP evaluation happens in the user's context, meaning that malicious code could actually disable the evaluation. Nevertheless, SRP has been used successfully on servers, where it is considerably more predictable what needs to run and where malicious code would have fewer interfaces to modify the evaluation available.
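The core idea behind SRP-style allowlisting can be sketched independently of Windows: compute a cryptographic hash of a binary and permit execution only if the hash appears on an approved list. The following is a minimal Python illustration of the concept, not SRP itself; the allowlist entry and file paths are invented for the example.

```python
import hashlib


def file_sha256(path):
    """Hash a file in chunks so large binaries don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def is_execution_allowed(path, allowlist):
    """Deny by default: run only binaries whose hash is explicitly approved."""
    return file_sha256(path) in allowlist


# Hypothetical allowlist; this particular entry is the SHA-256 of an empty file,
# included only so the sketch is self-contained and verifiable.
approved = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}
```

Note that this sketch, like SRP's hash rules, must be re-approved whenever a legitimate binary is patched, which is a large part of why hash-based rules proved hard to manage.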

Another way to keep code from running is to simply not permit users to run it. For instance, if auto-run is disabled on removable media, users inserting that shiny USB flash drive they found in the parking lot, the one that was dropped there by attackers under contract to your worst competitor, will not auto-run a simple exploit when it is inserted into a USB port on the computer. Keep in mind, however, that USB devices, Bluetooth devices, and many other things as well can not only change their device type ID, making the computer think they are something different; they can also be Direct Memory Access (DMA) devices. This means that they have direct access to system memory and can read and write to any memory segment on the computer, without auto-running a thing. If you really want to keep exploits on removable media from running, you need to keep the media off the box in the first place. Device Control can achieve that, but the far easier, and more foolproof, method is to apply the contents of a tube of epoxy to every orifice you find on the back, front, sides, top, and bottom of the computer. Your risk posture will guide whether this is an acceptable way to mitigate this particular problem.

Keep 'em from Communicating

If you can't keep 'em from getting on the box, and you can't keep 'em from running, maybe you can at least keep 'em from communicating. This can turn a compromise of an entire network into a compromise of a single system. If a client manages to pick up some exploit, and the exploit spreads via the network, but none of the other clients will listen to the infected one, you have kept the bad code from communicating. If an exploit spreads via e-mail, but the mail server finds and blocks the attachment, you have kept the exploit from communicating.

The first inclination of security people is generally to put a host-based firewall on the computer and configure its outbound filtering. Host-based outbound filtering is useful in a few isolated scenarios in Windows Vista, primarily because of Service Hardening. However, you cannot keep a process that runs as an end user from communicating unless you are willing to keep the end user from communicating. You should configure outbound filters for processes that you can isolate, and push the vendors you buy software from to do so automatically upon installation. That means the vendor has to write its software for least privilege, which is a new experience for many.

Thinking Differently about Security

We, the two authors of this book, usually agree on how to manage security, but sometimes we have slightly different views on how an organization should react to threats. In some cases, the difference stems from one of us being wrong (at least that is what the other one said). More often, we just have different ideas about what is sufficient, or we have a difference in approach, focus, and background. For the remainder of the chapter, we decided to each give our analysis of a common scenario. It may be that neither is right, and it may be that both are, but in different instances. In either case, we thought this dialectical approach to security would help you synthesize your own opinion, be it based on one or both of ours or something entirely different. The objective in the remaining pages is not necessarily to be correct, but rather to stimulate innovative thought. If you come away from this thinking more deeply and questioning conventional wisdom, our goal has been achieved.

The Top 2 (± 1, or so) Client Security Hacks

To start out, we decided to analyze our top client protection tools/technologies/processes. A few years ago, Microsoft launched the "Protect Your PC" marketing campaign, in response to an outcry resulting from the Blaster worm. That campaign carried the message of "patch, firewall, and anti-virus." All three of those are important, but does it still hold that those are the top three, or that they ever were?

Jesper's Thoughts

Assuming that you have already figured out patch management, if you start only two projects based on what we have covered in this book, they should be one that investigates how to protect clients from other clients, and one that helps you implement the principle of least privilege for your end users. Let us briefly talk about each.

Implementing Server and Domain Isolation is largely a matter of determining permitted communications patterns in your network. Most of your project will be to map out what you think clients should do on the network, and on which networks. The permitted channels are almost certainly different on the corporate network and the wild Internet. The average organizational network today is semi-hostile at best. Do not design under the assumption that you are safe because you are on a shielded network. Design as if the network was hostile. Most likely, it is.
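The output of that mapping exercise is essentially a deny-by-default matrix of who may talk to whom, on which port. A small sketch of the data structure involved follows; the roles and ports here are invented for illustration, and a real matrix would come from your own inventory work.

```python
# Deny-by-default communications matrix: (source role, destination role, port).
# These entries are hypothetical examples, not recommendations.
PERMITTED = {
    ("client", "domain-controller", 88),   # Kerberos authentication
    ("client", "mail-server", 443),        # webmail over TLS
    ("mgmt-station", "client", 3389),      # remote assistance from IT only
}


def is_permitted(src_role, dst_role, port):
    """Anything not explicitly listed is denied, including client-to-client traffic."""
    return (src_role, dst_role, port) in PERMITTED
```

The notable property is what is absent: no (client, client, anything) entry exists, which is exactly the "clients do not listen to other clients" posture that server and domain isolation enforces.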

Windows Vista already limits that damage somewhat by making even administrators much less privileged by default. However, the protection offered by admin-approval mode is flawed for two reasons.

First, there almost certainly will be elevation-of-privilege attacks that permit an application running under admin-approval mode, or even as a standard user, to escalate to a full admin. Obviously, this attack vector was considered during the implementation of the feature, but it is highly unlikely that all possible attacks were considered and closed, and in any case, logic would have it that we can never know for sure either way, until a successful attack is discovered.

Second, admin-approval mode works only if the user consistently makes intelligent security decisions. Think back to all the emergencies you have had to deal with throughout your career. What would you say the probability is that you can get all users to consistently make intelligent security decisions, all the time? We didn't think so. You can certainly improve the success rate with appropriate training, but unless you are extremely successful in removing security dependencies across systems, you will still have a situation in which one mistake by one user can have far-reaching consequences. Such mistakes become much less likely when users are not administrators of their systems.

Making users run as standard users requires a fair bit of forethought. What are the things they need to do that require admin privileges? Can you find out? Can you make those things not require admin privileges? Very often you can, by judiciously loosening an Access Control List (ACL) on a certain binary, directory, or registry key. Sometimes you can fix the problem by giving users an additional privilege or adding them to a group. For example, if roaming laptop users need to be able to release and renew their IP address leases, you can grant them that ability by making them members of the Network Configuration Operators group.

Making users run as standard users also will require some planning to resolve issues that you would not otherwise have. When the VP of Marketing decides that it is absolutely critical to his job that he have iTunes on his laptop, how are you going to respond? You may be able to address that problem by making the program available as a package using Active Directory Software Installation or an Enterprise Management System (EMS). This will not always work. Apple, for example, does not currently permit redistribution of iTunes, even inside an organization using an EMS.

These two items, stopping client systems from listening to each other and making users run as standard users, have the greatest potential benefit of any technology in Windows Vista. With other tools, such as tweaking Group Policy settings, modifying how IE acts, using anti-malware, and so on, you can stop specific attacks. Only least privilege and treating client computers as black holes, however, carry the potential to stop entire classes of attacks. Beyond these technologies, you are left with users as the primary line of defense.

Roger's Thoughts

The number one way to prevent current malicious attacks is not to be logged in with an Administrator account when it isn't needed. Because client-side attacks are so prevalent right now versus truly remote exploits, Vista's UAC will do a lot to stop current client-side attacks. But it's also my personal opinion that there is enough wiggle room in a standard user's permissions that malware writers will be able to code around UAC's protections. Some malware writers already do it. They make their malicious little creation write to HKEY_CURRENT_USER instead of HKEY_LOCAL_MACHINE. They place their executable in \Windows\Temp or the Temporary Internet Files folder instead of System32. Windows doesn't care where it executes malware from. If the end user can be tricked into executing malicious executables (or data), it's always game over. Patching won't save you. Firewalls won't save you, and neither will antivirus programs.

Patching does have its benefits, but only if you patch everything: the OS and beyond. Many of today's administrators and users are lucky if they've got the major OS patches applied. According to many surveys, administrators and users don't do a good job of patching major common applications, such as Java, Macromedia Flash, and Adobe Acrobat. Right now, if I wanted to exploit somebody, all I would need to do is send a common, malformed data file with an embedded backdoor Trojan hidden inside.

Although firewalls help, they are almost a non-issue. Firewalls stop inbound attacks relatively well. But because most of today's maliciousness results from client-side attacks that connect outbound, the rogue programs can easily bypass the firewall's filtering rules. Nearly every network allows port 80 outbound. A smarter hacker will tell his program to piggyback on some approved process and go out on port 443 using heavy encryption. That way, the outgoing data stream is more than likely left alone and unmonitored. In fact, if a malicious outbound program doesn't use port 443, it is wasting valuable malicious computing cycles; only a lazy hacker would choose a different port. Once the user has executed an untrusted executable, the host-based firewall can be bypassed, and it's game over, no matter what the firewall is and no matter where it is placed.

Antivirus programs are doing a horrible job of detecting previously unknown malware. I've been an antivirus consultant for 20 years now, and never have detection rates, across the board, been so bad. Blame it on polymorphic packers, blame it on sheer numbers, but an antivirus product today is more than likely not going to catch the malicious executable trying to launch on your PC.

So, all three of the major popular recommendations listed (patching, firewalls, and antivirus programs) really won't help improve your security in the long run. How do I know these three items aren't helping that much? Because most administrators and users are already doing those three things, and we haven't sent malware or malicious hackers packing yet. In fact, it's worse than ever.

If you want to protect your computer and your environment as best as you can, don't place your faith in a myriad of protection products and techniques. Spend your efforts on preventing users from executing and installing unauthorized executables. I don't care how you do it, just do it. Everything else you have to do in your security defense plan results because you don't want to do the one thing that will have the biggest, long-lasting impact on your overall security risk.

As Jesper mentioned, with Active Directory Group Policy and XP Pro (or later), you can use Software Restriction Policies. Group Policy even has a similar, slightly more limited, mechanism for preventing programs from executing on Windows 2000 domain computers. Don't let your users be logged in as Administrators. Use appropriate NTFS permissions. Use third-party tools if you have to, but stop your users from executing and installing everything that they want. If you leave the security decision up to the end user, he or she will make the wrong decision. And that wrong decision compromises your network.

You might wonder whether my single recommendation is too Draconian. I get letters all the time to my weekly InfoWorld column telling me that if the reader were to implement such a software lockdown, their end users would revolt and their company's business would come to a standstill. Hogwash!

Today, much of the Fortune 1000 already does this. They may have other names for it, such as "gold image," "base image," or something like that, but they do it. And it significantly reduces their risk of attack. The Fortune 1000 is somehow making record profits, despite the "Draconian" lockdown. It's much like a company-provided car. The company doesn't mind if you use it for occasional personal business, but they don't want you modifying it in such a way that it breaks all the time.

If I can spam your company with a malicious file attachment and get one of your employees to run it, leading to a complete network compromise in mere minutes, shouldn't you be doing something different? It's all right if you don't or can't implement software restriction policies. I've been at many companies where this is the case. It's just that everything else is second best and, in the long run, requires more work and effort.

Last, I'm a big believer in honeypots as early-warning detection systems. No environment will be perfect. Mistakes happen, and malicious code eventually gets executed. Take a machine you're getting ready to throw away because it's too slow and make it a honeypot. Just place it somewhere in your network infrastructure where it's likely to get visited if a malicious insider or outsider starts exploring. Because it's a fake asset, nobody should ever try to log into it or port scan it. You will, of course, have to fine-tune the honeypot to filter out legitimate management pings and connections. But after that, if something touches it, have it send an alert to an incident response team. If you can't prevent an attack, the next best thing is early warning to reduce the damage.
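The fine-tuning described above, ignoring known management traffic and alerting on everything else, reduces to a very small piece of logic. A sketch follows; the management addresses are placeholders, and a real deployment would feed this from actual connection logs.

```python
# Placeholder addresses for the monitoring hosts that legitimately touch the honeypot.
MANAGEMENT_SOURCES = {"10.0.0.5", "10.0.0.6"}


def should_alert(source_ip, dest_is_honeypot=True):
    """A honeypot holds no real assets, so any unexpected contact is inherently suspicious."""
    if not dest_is_honeypot:
        return False
    return source_ip not in MANAGEMENT_SOURCES
```

This is the whole appeal of honeypots as detectors: because legitimate traffic to them is nearly zero, the false-positive filtering is trivial compared to monitoring a production host.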

Anti-Malware Is Not a Panacea

Anti-malware software is seen as such an obvious need that few question it. But, is that right? Is it possible that anti-malware is not the panacea that many of the anti-malware vendors seem to claim it is?

Jesper's Thoughts

Anti-malware, long seen as the mainstay of client security, is able to contain exploits it is aware of. By its very nature, anti-malware technology is a reactive approach to security. Reactive approaches are very often important, but they can never truly be a panacea unless they become proactive. For over 20 years, security has been about bar raising. The bad guys discover where our bar is set and figure out a way to jump over it. We, or in the case of anti-malware, the vendors, respond with a new definition file, and the bar is raised. At some point, the bad guys jump over that bar, and so it is raised again. The cycle continues. How high can that bar be raised? Anti-malware vendors have talked for at least ten years about "behavior blocking" and "predictive analysis." They have even released products that claim to be not just an antivirus program but an "Intrusion Prevention Technology," because that sounds more advanced. The fact remains that behavior blocking is about detecting known bad stuff. If the bad guy can figure out something that is bad but not known, there is no protection.

Reliance on bar raising leads to an escalation of conflict. As one party, you, gets better at detecting known bad stuff, the other party, them, gets better at circumventing it. Given that there is an inherent time lag between when the exploit is produced and when the anti-malware vendor has an update available that blocks it, the attacker has a window of opportunity. Simply producing a new definitions file that can be used to detect the newest bad thing can take as long as three days. Add to that the lag in getting it deployed to all the clients that need it, and the window of opportunity can be a week or more. As long as the bad guy can make exploits that are different enough to circumvent the detection mechanism in a shorter amount of time, the detection mechanism will fail completely. If the exploit can mutate all on its own, there is no detection mechanism relying solely on knowing bad stuff that can protect against it.

A fundamentally more secure base, through the two methods I discussed earlier-host isolation and least privilege-allows the other technologies to be far more effective. By supplementing anti-malware, which, after all, can be quite useful, with other technologies that are more adaptive and/or more restrictive you can get true defense-in-depth: technologies complementing each other, instead of being redundant. By taking a minimalist approach and limiting the potential avenues for exploitation, you reduce your reliance on a single protection technology. Both least privilege and host isolation also permit you to tailor your security approach to the unique needs of your environment, whereas other technologies, notably anti-malware, are a one-size-fits-all approach that cannot be optimal in any environment.

That being said, there is no substitute for "wetware" (people). I personally have not had a virus get caught by an anti-virus scanner since 1991, and that one was on an Apple Macintosh II SE. Users who practice safe computing and do not engage in dangerous and promiscuous computing behavior are far less likely to have their computers infected with any malware. Most of the malware, after all, relies on social engineering to entice users to run them. If users were able to make better security decisions, it is possible that there might come a day when anti-malware is not needed. For that to happen, however, we need two things. First, we need software that helps people make good decisions. Let's repeat some earlier screenshots: How is a user to make an intelligent security decision based on the information given in Figures 1 and 2?

Figure 1: How many users can make an intelligent security decision based on this dialog box?

Figure 2: Or based on this one?

Software makers have always built security functionality on the assumption that users are stupid and cannot make decisions. This needs to change. Windows Vista provides us with an opportunity to start educating our users on how security affects them and how to react to many of the new dialog boxes they will see. However, nothing will truly change until software developers start figuring out how to give users better information, and that will not happen for as long as we buy their software without demanding that.

Roger's Thoughts

Although there are topics where Jesper and I disagree, this is one area where I couldn't agree more. I wouldn't change a sentence. The only thing I can add is that a far better goal than developers giving users better information to make decisions is to make the decision for them. Users shouldn't have to be security experts to run their PCs. They just want to run their legitimate applications, surf the Web, and maybe play a few games, read a few jokes, and watch some videos.

I believe developers should strive harder to make the appropriate decisions for users behind the scenes, so that the user never has to make a decision at all. It's like anti-lock brakes on a car. Car manufacturers finally realized that too many drivers made the big mistake of completely stomping down on the brake pedal during a perceived imminent collision, which removes directional control and actually lets the vehicle slide farther than if the driver hadn't pushed down all the way. Anti-lock brakes, when they sense the driver depressing the brake pedal too far too fast, don't let the tires stop moving completely. Instead, they apply micro stops and starts, which slow the car down as quickly as possible while letting enough forward speed remain to give the driver control over the car.

An example of that in the computer world involves outbound-blocking, host-based firewalls. Traditional host firewalls will prompt the user with messages similar to those shown previously in Figures 1 and 2. The more intelligent firewalls identify the requesting executable, look it up in their own internal database, and automatically make the decision for the user (when possible). Several host-based firewalls already have this feature, although it is often a premium option.
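That look-it-up-and-decide-automatically behavior can be sketched as a simple policy lookup. The database entries below are invented for illustration; commercial products maintain far larger, signed reputation databases and typically verify the binary, not just its name.

```python
# Hypothetical reputation database mapping executables to verdicts.
# Real products key on file hashes or signatures, not just file names.
KNOWN_APPS = {
    "outlook.exe": "allow",
    "iexplore.exe": "allow",
    "badware.exe": "block",
}


def outbound_decision(executable):
    """Decide automatically when the executable is known; prompt only as a last resort."""
    return KNOWN_APPS.get(executable, "prompt-user")
```

The design point is the fallback: the user is asked only for the residual unknown cases, rather than for every outbound connection.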

Tweaking It

The favorite tool of security people has always been the tweaks: the changes you make to a software product to make it "more secure." The assumption is often that the software can be tweaked to better suit your security needs, and that often turns out to be the case. Unfortunately, it is also often the case that the lists of tweaks, commonly peddled under the moniker of "Security Guides," do not take your security needs into account. Indeed, many of them fail to take any security needs into account at all. They are just lists of tweaks. A few are good. The Windows Server 2003 Security Guide is very good. The Windows Vista Security Guide is not quite as good, but still far better than many others. Which tweaks matter, however? Does Windows Vista really need many?

Security Tweaks You Should Make

In this section we each give a list of tweaks that we believe you should make to Windows Vista. You may be surprised to see how few there really are.

Jesper's List

My list for what tweaks to make is shorter than it has been for previous versions of Windows. The reason is that many of the changes are now enabled by default. Nevertheless, there is a short list of changes I make.

Password Strength

One of the most important things you can do is to change your password policy. It is well known that users, left to their own devices, use very weak passwords. Yet we know perfectly well how to improve the strength of a password. The best option is probably not to use passwords at all. Using smart cards or one-time-password tokens provides far better security than passwords because it removes the fundamental weakness, human beings, from the solution. Nevertheless, many organizations cannot implement these technologies. For them, it is imperative to help (force?) users to use better passwords. The default minimum password length in Windows Vista is seven characters. If you increase that to nine, you have protected yourself against virtually all password-guessing attacks, as long as the passwords are not dictionary words. Only those attacks that use a dictionary with a near approximation of the actual password will have any chance of succeeding. Even if you go to only eight characters, you have stopped the vast majority of attacks.
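The arithmetic behind that claim is simple: each extra character multiplies the keyspace by the alphabet size, so two extra characters multiply the attacker's work by the alphabet size squared. Using a 26-letter lowercase alphabet purely as an example:

```python
def keyspace(alphabet_size, length):
    """Number of possible passwords of exactly this length."""
    return alphabet_size ** length


seven = keyspace(26, 7)  # roughly 8 billion possibilities
nine = keyspace(26, 9)   # roughly 5.4 trillion possibilities
print(nine // seven)     # 676: two extra characters mean 676 times more guesses
```

The same multiplier applies to any alphabet; the point is that length compounds, which is why going from seven to nine characters moves the needle so much.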


Audit Policy

As with its predecessor, auditing is turned off by default in Windows Vista. Most environments should enable at least some level of auditing. Doing so does not make sense unless you also have a process for monitoring the events, so you should put one of those in place as well. The Windows Vista Security Guide gives reasonable advice on auditing; using the settings from the relevant configuration in it would be prudent.

Note that if you use the audit settings from the Windows Vista Security Guide you must also configure the "Audit: Force audit policy subcategory settings (Windows Vista Only) to override audit policy category settings." Otherwise the granular audit policies configured by the guide will not apply.

Remove Logon Privileges from Service Accounts

Service accounts are the accounts used as process identities for services. For instance, you may have an EMS or a security program that uses a domain account as its logon identity. These accounts should not be used interactively, and can be restricted from local logons by changing who has the "Allow logon locally" user right.

Network Security: LAN Manager Authentication Level

This is the infamous LMCompatibilityLevel setting. It is set to 3 by default on Windows Vista, which means the computer will send only NTLMv2 authentication outbound, but will accept inbound LM and NTLM authentication. Of course, the generation and storage of LM hashes is turned off by default, which means that the effective setting is actually 4 (send NTLMv2 response only, refuse LM); there is no LM hash to authenticate against. However, there is scant reason not to change this to 5 and refuse inbound NTLM as well. No currently supported version of Windows requires the use of NTLM. The change will affect only inbound connections from Windows NT 4.0 and other down-level systems. It could be considered good security hygiene not to permit inbound authentication from a 12-year-old, unsupported operating system.
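For quick reference, the six documented LMCompatibilityLevel values can be laid out as a lookup table; the descriptions below paraphrase the setting's documented behavior.

```python
# LMCompatibilityLevel semantics, paraphrased. Levels 0-2 govern what the
# client sends; levels 4-5 additionally govern what a server will accept.
LM_LEVELS = {
    0: "Send LM and NTLM responses",
    1: "Send LM and NTLM; use NTLMv2 session security if negotiated",
    2: "Send NTLM response only",
    3: "Send NTLMv2 response only",
    4: "Send NTLMv2 response only; server refuses LM",
    5: "Send NTLMv2 response only; server refuses LM and NTLM",
}
```

Reading the table makes the argument in the text concrete: moving from the default 3 to 5 changes nothing about what the computer sends, only what it will accept inbound.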

Keep in mind in the context of NTLMv2 authentication, however, that setting LMCompatibilityLevel to 4 or 5 does nothing to remove LM and NTLM from the network. The client may still send them. I once was sent to discuss security with a large government customer who had been charged with removing LM and NTLM from his network. After a bit of discussion I discovered that he was responsible only for the servers and DCs, not the clients. I was forced to inform him that he would never be able to succeed in this goal because the clients send whatever they want. The goal of removing LM authentication from the network must rest with whoever controls the clients.

Server and Domain Isolation and Firewall Rules

As discussed, one of the most valuable things you can do to protect your network is to limit who can say what to whom. Every organization should review the default firewall settings and tweak them as necessary. For instance, outbound communications rules should be created for all services that are added to the system. Limiting communications is one of the highest bang-for-the-buck areas in security.

Full Hard Disk Encryption

There is very little excuse for most organizations not to encrypt all hard drives, at least on all laptops, and preferably on all computers. Windows Vista ships with the BitLocker feature to do this. However, it can be relatively expensive, at around $100 per seat, to upgrade from Business Edition, which does not have BitLocker, to Enterprise Edition, which does. Enterprise Edition is also not available to all customers, leaving many forced to get Ultimate Edition if they want BitLocker. For this reason, every organization should also evaluate third-party products, such as Utimaco SafeGuard Easy, Mobile Armor DataArmor, and PGP. In the end, it is simply prudent practice these days for every organization to use full hard disk encryption everywhere. It provides a level of safeguarding against accidental data disclosure that is well worth the cost and overhead. In addition, I highly recommend that every mobile computer use full hard disk encryption with a second authentication factor, one that is kept separate from the computer itself.

Roger's List

I agree with Jesper's list. I'd like to add the following suggestions.

Strong Passwords-Go Long

When considering strong passwords, go longer, and focus less on complexity. Although adding character sets seems to make password cracking exponentially harder, the sad reality is that there is no way to enforce true complexity. The vast majority of users, when told to make a complex password, create much the same passwords, or types of passwords. They choose a word root that can be found in a dictionary and add some complexity. If told to add a number, they add a 1 or a 2, and it's located at the end. If told to uppercase a letter, it's at the beginning of the password, and more than likely is followed by a lowercase vowel. Forty to fifty percent of all passwords contain two of the three vowels o, a, or e. Password study after password study has shown this to be true.

So when given a chance to increase password strength, go long instead of increasing complexity. In the real world, a nine- or ten-character password is significantly stronger than a seven- or eight-character "complex" password.
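The arithmetic behind this claim is easy to check. The habit pattern modeled below (a capital first letter, lowercase letters in the middle, a trailing 1 or 2) is an illustrative assumption drawn from the studies just mentioned, not a measured model:

```python
# Compare the effective keyspace of an 8-character "complex" password that
# follows common user habits against a longer, all-lowercase password.

LOWER = 26          # a-z
UPPER = 26          # A-Z

# Illustrative heuristic for a typical "complex" 8-character password:
# one uppercase first letter, six lowercase letters, then a 1 or a 2.
heuristic_complex_8 = UPPER * LOWER**6 * 2

# A truly random 10-character password drawn only from lowercase letters.
random_lower_10 = LOWER**10

print(f"heuristic 8-char 'complex': {heuristic_complex_8:,}")
print(f"random 10-char lowercase:   {random_lower_10:,}")
print(f"ratio: {random_lower_10 // heuristic_complex_8:,}x")
```

Under this assumption, the structure users actually choose collapses the "complex" password's keyspace so far that the longer, simpler password is several thousand times stronger.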

Convert All Incoming E-mail to Plain Text

E-mail continues to be a major avenue for malicious attacks. Convert all inbound e-mail to plain-text by default. The user can convert back to HTML if they like, but in taking the less risky default, you will have significantly reduced the risk of attack. Conversion to plain-text removes embedded active content, reveals spoofed URLs, and removes malicious scripts.
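As a sketch of why the plain-text default helps, the following fragment uses only Python's standard-library html.parser (the message content is, of course, made up) to extract the visible text and print each link's real target next to its display text, making a spoofed URL obvious:

```python
from html.parser import HTMLParser

class MailToText(HTMLParser):
    """Minimal HTML-to-text converter: drops script/style content, keeps
    visible text, and prints each link's real target next to its display
    text so that spoofed URLs become obvious."""
    def __init__(self):
        super().__init__()
        self.out = []
        self._skip = 0          # nesting depth inside <script>/<style>
        self._href = None       # current link target, if any

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
        elif tag == "a":
            self._href = dict(attrs).get("href")

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip -= 1
        elif tag == "a" and self._href:
            self.out.append(f" [{self._href}]")   # reveal the real target
            self._href = None

    def handle_data(self, data):
        if not self._skip:
            self.out.append(data)

    def text(self):
        return "".join(self.out)

html_mail = '<p>Go to <a href="http://evil.example">www.yourbank.com</a>'
html_mail += '<script>steal()</script></p>'
p = MailToText()
p.feed(html_mail)
print(p.text())
```

The script content disappears and the anchor's display text is followed by the real destination, which is exactly the disclosure that defeats a phishing link.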

Run Services on Non-Default Ports

Security by obscurity works. When running a popularly attacked service that listens on a well-known port, run it on another, non-default port if the general public doesn't need access to it. For example, if your environment runs Microsoft SQL servers, don't run them on ports 1433 or 1434. Instead, run them on something like 34133 and 34134. Why?

Because 99.99 percent of all remote attacks use automated malware that looks for the default ports only. By simply moving your services-be they SSH, RDP, SQL, or HTTP-to a high, non-default port, you significantly reduce your risk of attack. Yes, hackers can port scan your environment, but most don't. Of the ones that do, most run the default Nmap portscan, which scans only well-known ports. Running your services on a non-default, high port is one of the best security defenses you can have.
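A quick way to verify that a relocated service answers where you put it is a simple TCP connect test. This sketch stands up a throwaway listener on an arbitrary high port to play the role of the moved service; the helper function is an illustration, not part of any product:

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connect() to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo: stand in for a relocated service by listening on a high port.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))        # the OS picks a free high port
listener.listen(1)
high_port = listener.getsockname()[1]

print(is_port_open("127.0.0.1", high_port))   # the service answers only
                                              # if you know where to look
listener.close()
```

Pointing the same check at port 1433 on a server whose SQL instance has been moved comes back closed, which is exactly what an automated worm probing default ports sees.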

Don't Use Public, Shared Computers

I'm a freak about not using public, shared computers when using services that require you to input logon credential information. But at conferences and hotels, I often see people picking up their e-mail and logging on to their corporate network. Public computers are often compromised, especially at computer security conferences, and should never be trusted for anything more than picking up the latest headlines from the news Web sites.

Turn on DEP for Internet Explorer

Internet Explorer's main executable, Iexplore.exe, is not protected by Data Execution Prevention (DEP) by default. Microsoft intentionally disabled it because a few popular add-ons, as well as Sun's Java virtual machine, could not execute with DEP enabled. The popular add-ons have since been updated. If you are not running Sun's JVM, enable DEP for Internet Explorer.

Security Tweaks You Shouldn't Make

There are a few tweaks that are just bad. First, there is the obvious that you should not disable default security settings. Then there is the danger of turning on a setting with severe side effects. This list is not exhaustive but includes some of the tweaks we have seen cause large problems in many organizations.

Jesper's List

My list of "don'ts" focuses on things that will destroy your computer, make it unsupportable, or make no difference. Several of these are a combination of the three. That's the beauty of bad tweaks: They are usually bad on so many levels.

Wholesale ACL Changes

I have seen many organizations completely destroy their networks by making one simple change: modifying the default ACL on built-in objects in Windows. One favorite practice is to tweak the ACL on the root of the system drive. The best advice is that while it is OK to change ACLs on individual binaries, it is extremely dangerous to change ACLs on entire directory hierarchies. This practice should be avoided at all costs. Few threats justify that type of modification to the OS.

Some organizations even use wholesale ACL changes to permit users to do things they could not otherwise do. For example, I have seen organizations change ACLs on %ProgramFiles% to permit standard users to run a particular program. This is highly inadvisable. A better approach would be to spend the effort determining which files and/or folders the program needs access to and loosen the ACL only on those. An even better approach is to return the program to the vendor that wrote it and demand either a program that works properly as a standard user or a full refund.

Many User Rights Changes

User rights and privileges are often extremely powerful. Many organizations will grant users certain rights because some program or other requires it. Some of the privileges they assign effectively turn those users into administrators. Examples include the "Act as part of the operating system," "Debug programs," "Back up files and directories," "Load and unload device drivers," and "Modify an object label" privileges.

Disabling MSV1_0

Only once have I seen someone try something as absurd as attempting to disable MSV1_0. MSV1_0 is the authentication package that handles authentication to the local system, as well as passing authentication on to Kerberos for domain authentication. One organization I saw had apparently heard that MSV1_0 implemented NTLM, which is true, and that NTLM is bad, which is not quite true. From this, the IT security personnel at the organization drew the absolutely absurd conclusion that MSV1_0 should be disabled entirely, by removing its reference from the Registry as a logon package. This broke all local authentication. Domain authentication still worked. However, the organization also required resetting machine passwords every seven days. (Machine passwords are probably going to resist attacks for a few billion years.) In other words, if a computer was ever turned off for a couple of weeks, the only fix was to re-image the system. Don't do that.

System Cryptography: Use FIPS-Compliant Algorithms for Encryption, Signing, and Hashing

FIPS refers to the U.S. Federal Information Processing Standard 140, the standard that defines which encryption algorithms are approved for use in Federal Government systems. Generally speaking, FIPS 140–1 (the old standard, which this setting enforces) advocates good algorithms, and using solid crypto is not bad. However, this particular setting is problematic because the Secure Sockets Layer (SSL) protocol used on virtually every secure Web site on the Internet is not FIPS 140-x compliant. In other words, if you enable this setting, you disable the use of secure Web sites. Not only that, but in current versions of Windows (Windows XP Pro SP2 and later), enabling FIPS will actually reduce the cryptographic strength of the ciphers used by many features in Windows (EFS, hashing, and so on), because those features already use FIPS 140–2-compliant ciphers.

Roger's List

People make a lot of tweaks, and some of these tweaks are fraught with peril. Some mistakes are naïve. Others are the result of over-enthusiastic experimentation. And some mistakes are clearly worse than others. While I could go on about these mistakes all day, two in particular are real pet peeves of mine.

Disabling UAC

Many Web sites and computer stores recommend disabling UAC because it inconveniences the casual user too much. I've been using UAC for over a year now, and I can tell you that Microsoft has it pretty well fine-tuned. The average user, after a few weeks of use, will barely notice it. When it pops up during installs and control panel activities, the user is forced to make a single approval mouse click (if already logged on with an administrator account). It isn't much effort, it's almost not noticed, and it prevents a significant number of current attacks.

During a recent Web-surfing session, I went looking for the Web Attacker Trojan. It is responsible for two-thirds of the client-side exploits that result from a user visiting a compromised Web page with an unpatched browser. Although I was intentionally looking for it, I was surprised when Vista popped up a UAC prompt when I visited a compromised Web site. No warning, just an admin approval prompt. I, of course, declined. Later, I visited the same Web site on my fully patched Windows XP Pro machine. The Web Attacker Trojan silently installed on the older machine without even a blip or flash. UAC will stop many drive-by downloads, and for that reason it should not be disabled.

Disabling Process Tracking Auditing

Many guides recommend disabling (or choosing No Auditing) for the Process Tracking audit policy category. Conventional wisdom says that the category causes too much unneeded "noise" as compared to things you need to be legitimately worried about. And this is true for most computer types, such as workstations and file servers.

But on dedicated domain controllers, it should be enabled. With it enabled, if a hacker is ever able to break into the domain controller, Process Tracking will log the programs he starts and stops. Once a dedicated domain controller is up and functioning, it doesn't stop and start a lot of processes and programs. Administrators and end users shouldn't be starting Microsoft Office and Web surfing from their domain controllers. On a dedicated domain controller, auditing Process Tracking is high value and low noise.

Agreeing to Disagree

How to best secure a computer, or a network of computers, is not a black and white issue. Each security professional will develop a slightly different approach based on his or her understanding of the situation, the threats, the needs of the organization, and the technology. Those who present the situation as a simple case of "if you don't do all these things you are not doing the right thing" are not only over-simplifying the problem; they are demonstrating a fundamental lack of understanding of how security works. The authors of the book do not always agree on how to manage security either. As we were discussing our approach to one particular issue for the book, we realized that not only were we not going to agree, but that the various arguments would be extremely valuable for the book as an example of how to analyze a situation and come up with individual solutions that handle the problem according to our individual needs (or prejudices). For that reason, we will present a rather more detailed analysis of one particular setting: account lockout.

There is, and has been for many years, a raging debate about the wisdom of account lockout. Account lockout is the feature that disables accounts when an attacker, or the legitimate user, guesses the password wrong more than a certain number of times. Account lockout is meant to provide some protection against attacks on weak passwords.

Account lockout has a number of parameters associated with it, as shown in Figure 3.

  • Account lockout duration: The time to keep the account locked out in minutes. If this is set to 0, the account remains locked out until an administrator unlocks it.

  • Account lockout threshold: The number of invalid logon attempts before the account is locked out. For instance, if this is set to 3, the account is locked out after the password has been entered incorrectly three times.

  • Reset account lockout counter after: This parameter controls the counter that keeps track of invalid passwords. For instance, it is generally considered prudent to lock out the account only if the password is entered incorrectly n times within a span of a certain number of minutes. If this setting is configured at 30 minutes, the invalid logon attempt counter is reset to 0 after 30 minutes.

Figure 3: Account Lockout settings
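The interaction of these three settings can be modeled in a few lines of code. The toy class below is an illustration of the semantics just described, not an implementation of the actual Windows logic; all times are in minutes:

```python
class LockoutPolicy:
    """Toy model of the three account-lockout settings (times in minutes).
    A duration of 0 means 'locked until an administrator unlocks it'."""
    def __init__(self, threshold=3, duration=30, reset_after=30):
        self.threshold = threshold      # Account lockout threshold
        self.duration = duration        # Account lockout duration
        self.reset_after = reset_after  # Reset account lockout counter after
        self.bad = []                   # timestamps of recent bad attempts
        self.locked_at = None

    def is_locked(self, now):
        if self.locked_at is None:
            return False
        if self.duration == 0:
            return True                 # stays locked until admin unlocks
        if now - self.locked_at >= self.duration:
            self.locked_at = None       # auto-unlock after the duration
            self.bad = []
            return False
        return True

    def bad_attempt(self, now):
        """Record a bad password; return whether the account is now locked."""
        if self.is_locked(now):
            return True
        # 'Reset account lockout counter after': forget stale attempts.
        self.bad = [t for t in self.bad if now - t < self.reset_after]
        self.bad.append(now)
        if len(self.bad) >= self.threshold:
            self.locked_at = now
        return self.is_locked(now)

acct = LockoutPolicy(threshold=3, duration=30, reset_after=30)
print(acct.bad_attempt(0))   # first bad password: not locked
print(acct.bad_attempt(1))   # second: not locked
print(acct.bad_attempt(2))   # third within the window: locked
print(acct.is_locked(32))    # auto-unlocked 30 minutes later
```

Setting duration to 0 in this model, as in Windows, keeps the account locked until an administrator intervenes.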

The facts surrounding account lockout are as follows:

  • Fact: No single factor has as large an impact on password strength as length.

  • Fact: The more characters in the character set the password was composed from, the stronger the password.

  • Fact: Passwords that can be attacked with heuristics are much weaker than passwords that cannot. However, even if heuristics are possible, length is still the all-important factor. If you have a passphrase composed of 5 words, chosen at random from a dictionary of 2,500 words, we know everything there is to know about the password. Yet, there are 2,500^5 = 97,656,250,000,000,000 possible passwords, far exceeding the security requirements of all but a few organizations. If you know, however, that the first two words both start with E, and that there are only 200 words in the dictionary that start with E, you have (2,500^3)*(200^2) = 625,000,000,000,000 possibilities. That passphrase has a strength of 0.6 percent of the original. Now, if you offset that with another word, making the total length 6 words, you are back to a strength of 1.5625*10^18, which is 16 times stronger than the original. Adding just one word in length makes the password 16 times stronger, even after one accounts for the fact that it can be attacked with heuristics.

  • Fact: Account lockout works only on the authentication server, the computer that hosts the account. In a domain, this is a domain controller. Therefore, account lockout does not operate on domain accounts while the computer is not connected to the domain.

  • Fact: For several versions now Windows has included a delay feature that makes password guessing at the interactive logon prompt infeasible. If the password is entered incorrectly more than five times, each successive attempt takes 1 minute. In other words, the attacker can guess only one password per minute maximum at the interactive logon prompt.

  • Fact: Enabling account lockout creates a denial of service attack threat against the computer/network. The attacker can disable all operations on the network by repeatedly guessing known bad passwords against one or more accounts. This attack can, and has in the past, been used to lock out all accounts on the network, disabling all work until the attack stops.

  • Fact: Account lockout operates either for all or no accounts on the logon server. There is no way to exempt particular accounts. That means that services can also be disabled by locking out the accounts associated with them. However, administrative accounts on the logon server-in the case of a DC, domain administrative accounts-can always log on to the logon server interactively regardless of their lockout state. This is to ensure that someone can unlock the accounts.
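The passphrase arithmetic in the facts above is easy to reproduce:

```python
DICT_WORDS = 2_500    # words in the dictionary
E_WORDS = 200         # words in the dictionary starting with E

five_random = DICT_WORDS**5                       # no heuristics available
five_heuristic = (DICT_WORDS**3) * (E_WORDS**2)   # first two words start with E
six_heuristic = (DICT_WORDS**4) * (E_WORDS**2)    # one extra random word

print(f"5 random words:        {five_random:,}")
print(f"5 words, heuristic:    {five_heuristic:,}")
print(f"fraction of original:  {five_heuristic / five_random:.1%}")
print(f"6 words vs. original:  {six_heuristic // five_random}x")
```

One added word multiplies the heuristic-weakened keyspace by the full 2,500-word dictionary, which is why the six-word passphrase comes out 16 times stronger than the original despite the known prefix.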

Jesper's Position

As mentioned earlier, the purpose of account lockout is to protect weak passwords from being guessed. Therefore, it stands to reason that you can add one fact to the preceding material:


Additional fact: If passwords are not weak, they won't be guessed, and therefore do not require protection from that particular attack.

The decision as to whether to use account lockout, and what to set it to, can therefore be reduced to an assessment of how many accounts will be protected by it. In fact, you can extend this statement to say that account lockout provides no additional protection that you cannot get simply by having better passwords. Of course, that leaves you with two issues: what is a "better password" or more simply, what is a strong password; and how do you get users to use them? We cannot really answer the latter question before we answer the first, so let us address that one.

The mathematics of password strength is quite fascinating. The only truly meaningful measure of password strength is the length of time until the password will be successfully attacked, and that measure is based entirely on just a few factors:

  • The length of the password

  • The character set used for each character in the password

  • Whether heuristics can be used to attack the password

The first two of the three factors that contribute to password strength combine to make all calculations exponential. For example, if a password has 8 characters, all randomly chosen from a character set of 62 characters, there would be 62^8 = 218,340,105,584,896 possible passwords. The basic assumption of randomness is, of course, frequently violated, and this is often stated as one of the many reasons why passwords are a bad way to authenticate. However, one must realize that true randomness in passwords is not particularly important. What is important is seeming randomness. The password does not have to truly be random; it must just seem random from the point of view of an attacker. In other words, the attacker must not be able to use heuristics to attack the password. This assumption is also often violated, but it is a much lower bar than true randomness.

The exponential effect where the strength of a password is increased by a factor equivalent to the size of the character set for each character added to the password is crucial. It gives passwords several interesting mathematical properties:

  • No single factor is more important in password strength than length. For instance, in the 8-character password discussed previously, let us say you add 14 characters to the character set by including symbols in the password. This would give you 76^8 = 1,113,034,787,454,976 possible passwords, or about five times as many. That sounds great, but what if you instead added a single character in length, going to 9 characters? That would give you 62^9 = 13,537,086,546,263,552 possible passwords instead, or 62 times as many! Clearly, nothing contributes more to password strength than length.

  • There is very little difference in structure between an extremely weak and an extremely strong password. For instance, a 6-character password, composed from a character set consisting of the 32 most common characters, will be guessed in roughly 10 days (using a brute-force method and assuming no account lockout). Adding one character from the same set, making the password 7 characters, means it will be resilient for about a year. Adding another, making the password 8 characters, makes it resilient for 29 years. Go to 9 characters, and the password will last almost a millennium, and with 10 characters it will be resilient for almost 30,000 years, roughly as long as Homo sapiens has walked the earth. (Note that none of these calculations take Moore's Law, which says that computing horsepower will double roughly every 18 months, into account. Historically, the industry has outperformed that prediction most of the time.)
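These durations can be reproduced with a few lines of code, assuming a rate of 600 guesses per second (the figure used elsewhere in this analysis) and taking the average attack time to be half the keyspace:

```python
GUESSES_PER_SEC = 600         # assumed guessing rate, no account lockout
CHARSET = 32                  # the 32 most common characters

def days_to_crack(length, charset=CHARSET, rate=GUESSES_PER_SEC):
    """Average brute-force time in days, taking the expected number of
    guesses to be half the keyspace."""
    return charset**length / 2 / rate / 86_400

for n in range(6, 11):
    print(f"{n} characters: about {days_to_crack(n):,.0f} days")
```

Run as-is, this prints roughly 10 days for 6 characters, about a year for 7, 29 years for 8, nearly a millennium for 9, and almost 30,000 years for 10, matching the figures above.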

Account lockout will be beneficial only for a certain category of passwords, and for a certain category of organizations. The passwords that benefit are those that without account lockout could be cracked within the validity period of the password, but which, with account lockout, will resist guessing attacks for as long as they are valid. For example, a password that takes a year to guess does not need account lockout protection if it is only valid for 90 days. A password that takes 30 seconds to guess without account lockout and 2 days with account lockout receives no benefit if it is valid for 90 days. However, a password that takes 2 days to guess without account lockout, and 120 days with, is protected by account lockout. The question is: what does such a password look like?

By analyzing the parameters of password strength, we can actually tell what such a password looks like. Table 1 shows an analysis, character by character, of how strong a password is. The data in this table assumes that an attacker can guess 600 passwords per second without account lockout. The account lockout figures use a lockout counter of 3 and a timer of one minute, resulting in only two guesses per minute. Strengths of passwords of lengths one to eight characters are calculated based on heuristics from real passwords. Research has told us that passwords often have a capital letter in the first character (26 possibilities), a lower-case letter, typically an a, e, or o, in the second position, then a mix of lowercase and uppercase, and finally either a 1 or a 2 in the last position (eighth in our case). Using these figures, you can tell that account lockout would protect five- and six-character passwords but that any password longer than seven characters would be strong enough without account lockout.
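The comparison can be approximated in code. The heuristic keyspace below (uppercase first letter, lowercase middle, trailing 1 or 2) is a simplification of the pattern just described, not the exact model behind Table 1:

```python
SEC_PER_DAY = 86_400
FAST = 600            # guesses per second without account lockout
SLOW = 2 / 60         # two guesses per minute with account lockout
                      # (counter of 3, one-minute timer)

def heuristic_space(n):
    """Simplified heuristic keyspace for an n-character password:
    uppercase first letter, lowercase middle, trailing 1 or 2
    (an illustrative assumption, not Table 1's exact model)."""
    return 26 * 26**(n - 2) * 2

def days(space, rate):
    return space / 2 / rate / SEC_PER_DAY   # average = half the keyspace

for n in range(5, 9):
    s = heuristic_space(n)
    print(f"{n} characters: {days(s, FAST):9,.2f} days plain, "
          f"{days(s, SLOW):12,.0f} days with lockout")
```

Under these assumptions, five-, six-, and seven-character passwords fall within a typical 90-day validity period without lockout but outlast it with lockout, while an eight-character password lasts roughly 155 days even without lockout, consistent with the conclusion above.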

Table 1: Strength of Passwords with Various Parameters
The calculations in Table 1 are not particularly complicated and can be replicated for a number of scenarios. The results are relatively consistent: Account lockout can protect short passwords with small character sets. Increasing the password length by one or a few characters typically makes the password strong enough on its own to not need account lockout protection. Even if passwords used only uppercase letters, an eight-character password would last 2,014 days. Account lockout would protect five-, six-, and seven-character passwords based on only uppercase letters.

Given the mathematics of passwords, and the analysis shown in Table 1, it is my firm belief that account lockout does not belong in the modern enterprise. Passwords should be at least eight characters long, which is easily enforceable via Group Policy. If analysis shows that users use extraordinarily weak passwords (i.e., with an even smaller character set), length can be increased to nine or ten characters, which would protect the vast majority of passwords against guessing. Based on these figures, account lockout is not needed any longer.

In addition, an organization that has a very large number of accounts is unlikely to benefit from account lockout. If an attacker is interested in only one account out of hundreds of thousands, the attack can be modified to defeat account lockout. Instead of trying an entire dictionary against one account and then moving on to the next account, the attacker would try one word against all accounts in turn. By the time he starts trying the second word against the first account, the lockout timer has typically expired and he can try another word. Statistically, it is equally likely to find one account in a large pool that has a particular password as it is to find one password from a large pool that maps to a certain account.
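A small simulation shows why this "horizontal" strategy defeats the lockout counter. The policy parameters (threshold of 3, 30-minute reset window) and the one-guess-per-second attack rate are illustrative assumptions:

```python
RESET_AFTER = 30 * 60      # reset counter after 30 minutes (in seconds)
THRESHOLD = 3              # bad attempts before lockout
GUESS_INTERVAL = 1         # attacker tries one password per second

def lockouts(accounts, words):
    """Simulate 'horizontal' guessing: one word against every account,
    then the next word. Returns how many accounts ever lock out."""
    recent_bad = {a: [] for a in range(accounts)}
    locked = set()
    t = 0
    for _ in range(words):
        for a in range(accounts):
            t += GUESS_INTERVAL
            # keep only attempts still inside the reset window
            attempts = [x for x in recent_bad[a] if t - x < RESET_AFTER]
            attempts.append(t)
            recent_bad[a] = attempts
            if len(attempts) >= THRESHOLD:
                locked.add(a)
    return len(locked)

# Large pool: each account is revisited long after its counter resets.
print(lockouts(accounts=10_000, words=5))   # 0 lockouts
# Small pool: each account is hit again within the window and locks out.
print(lockouts(accounts=100, words=5))      # 100 lockouts
```

With enough accounts, the time between visits to any one account exceeds the reset window, so the counter never reaches the threshold and the guessing proceeds at full speed.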

Account lockout may have been needed 25 years ago when passwords could be no longer than four characters and complexity was not supported. Today, it merely increases risk, by enabling an otherwise impossible attack. That risk, in my judgment, is unacceptably high for every environment, and greatly outweighs the extremely minor protection offered by the feature.

In addition, account lockout causes the organization to lose focus on the more important issue, which is improving password strength. It obscures the real problem: weak passwords, with a fix that does not actually solve that problem in the majority of cases. By simply enforcing slightly longer passwords and educating users on how to pick better passwords, the organization could make far more headway in security. Instead, organizations turn on account lockout, declare success and move on, leaving a gaping hole in their security. This is particularly true of organizations with a large number of users, as noted earlier.

Of course, if an organization absolutely cannot enforce better passwords, it should get rid of passwords altogether and use other means of authentication, such as smart cards or one-time PIN-type devices.

Roger's Position

Without programming your own password filter .DLL, or installing a third-party program, true password complexity cannot be guaranteed. In Windows today, passwords such as Password1, Password2, and P@ssw0rd, are considered complex. I can set my automated password-guessing tool to guess as many passwords as Windows will accept, fire it off against OWA or some other Internet-accessible interactive logon prompt, and let it guess for days to months. I can get around Windows' "auto-delay" feature by guessing in multiple sessions at the same time, using a varied list of names against multiple logon computers. The popular password-guessing program Hydra makes this all easily possible. Most "complex" passwords can be broken in a few days, when using well-known password logic (for example, the first letter is probably capitalized, a number is at the end and it is a 1 or a 2, the password contains two vowels, and so on). If passwords cannot be guaranteed to be complex on a Windows computer, then you need account lockout.

Enable account lockout with any setting. I personally like to tell Windows to lock out the account for 1 minute after three bad guesses in 1 minute, with the account auto-unlocking. This further delays any password-guessing program, probably to the point of keeping it from being successful. But the best reason of all, and my strongest counter to Jesper's recommendation, is that it generates an Account Locked Out event in the security log. While most administrators might ignore a "failed logon attempt" message, which is relatively common, Account Locked Out messages are rarer, and easy to notice and monitor.

If an Internet worm or attacker tries to use the account lockout feature to mount a denial-of-service attack against a network, clean up the problem, stop the attack, and all users are let back in 1 minute later, automatically. If a malicious program or hacker is attacking my network, I'm not sure how comfortable I am letting my users log on during the active attack. Clean up the attack, and everyone is allowed back in 60 seconds later. Seems like a good tradeoff to me.
