
Microsoft .NET : Design Principles and Patterns - Applying Requirements by Design (part 2) - Security


2. Security

Located at Carnegie Mellon University in Pittsburgh, Pennsylvania, the CERT Coordination Center (CERT/CC) analyzes the current state of Internet security. It regularly receives reports of vulnerabilities and researches their underlying causes. The center’s purpose is to help with the development of secure coding practices.

Figure 1 shows the number of vulnerabilities identified over the past ten years. As you can see, the trend is impressive, especially considering that the data for 2008 covers only the first two quarters.

Figure 1. Identified security vulnerabilities in the past ten years

It is broadly accepted that these numbers have a common root—they refer to software created through methodologies not specifically oriented to security. On the other hand, the problem of security is tightly related to the explosion in the popularity of the Internet. Only ten years ago, the big bubble was just a tiny balloon.

In sharp contrast with the ISO/IEC 9126 standard, all current methodologies for software development (agile, waterfall, MSF, and the like) hardly mention the word security. Additionally, the use of these methodologies has not resulted (yet?) in a measurable reduction of security bugs. To accomplish this, you need more than these methodologies offer.

Security as a (Strict) Requirement

We can’t really say whether this is a real story or an urban legend, but it is said that a few years ago, in the early days of the .NET Framework, a consultant went to some CIA office for a training gig. When introducing Code Access Security—the .NET Framework mechanism that restricts what operations code is allowed to perform—the consultant asked students the following question: "Are you really serious about security here?"

Can you guess the answer? It was sort of like this: "Not only yes, but HELL YES. And you’ll experience that yourself when you attempt to get out of the building."

Being serious about (software) security, though, is a subtle concept that goes far beyond even your best intentions. As Microsoft’s senior security program manager Michael Howard points out:

If your engineers know nothing about the basic security tenets, common security defect types, basic secure design, or security testing, there really is no reasonable chance they could produce secure software. I say this because, on the average, software engineers don’t pay enough attention to security. They may know quite a lot about security features, but they need to have a better understanding of what it takes to build and deliver secure features.

Security must be taken care of from the beginning. A secure design starts with the architecture; it can’t be something you bolt on at a later time. Security is by design. To address security properly, you need a methodology developed with security in mind that leads you to design your system with security in mind. This is just what the Security Development Lifecycle (SDL) is all about.

Security Development Lifecycle

SDL is a software development process that Microsoft uses internally to improve software security by reducing security bugs. SDL is not just an internal methodology. Based on the impressive results obtained internally, Microsoft is now pushing SDL out to any development team that wants to be really serious about security.

SDL is essentially an iterative process that focuses on security aspects of developing software. SDL doesn’t mandate a particular software development process and doesn’t preclude any. It is agnostic to the methodology in use in the project—be it waterfall, agile, spiral, or whatever else.

SDL is the incarnation of the SD3+C principle, which is a shortcut for "Secure by Design, Secure by Default, Secure in Deployment, plus Communication." Secure by Design refers to identifying potential security risks starting with the design phase. Secure by Default refers to reducing the attack surface of each component and making it run with the least possible number of privileges. Secure in Deployment refers to making security requirements clear during deployment. Communication refers to sharing information about findings to apply a fix in a timely manner.

Foundations of SDL: Layering

The foundations of SDL are essentially three: layering, componentization, and roles.

Decomposing the architecture into layers is important because of the resulting separation of concerns. Having functionality organized in distinct layers makes it easier to map functions to physical tiers as appropriate. This is beneficial at various levels.

Take the data server, for example.

You can isolate the data server at will, and even access it through a separate network. In this case, the data server is much less sensitive to denial of service (DoS) attacks because of the firewalls scattered along the way that can recognize and neutralize DoS packets.

You move all security checks to the business layer running on the application server and end up with a single account accessing the database: the data layer itself. Among other things, this results in a bit less work for the database and a pinch of additional scalability for the system.

Layers are beneficial for the application server, too.

You use Code Access Security (CAS) on the business components to stop untrusted code from executing privileged actions. You use CAS imperatively through xxxPermission classes to decide what to do based on actual permissions. You use CAS declaratively on classes or assemblies through xxxPermission attributes to prevent unauthorized use of sensitive components. If you have services, the contract helps to delimit what gets in and what gets out of the service.
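As a minimal sketch of both styles (the folder path, class, and method names are hypothetical), an imperative check demands a permission at run time, while a declarative attribute lets the runtime enforce the same demand before the method executes:

    using System.Security;
    using System.Security.Permissions;

    public class SettingsReader
    {
        // Imperative CAS: demand read access to a folder before touching it.
        // Demand() throws a SecurityException if the calling code lacks the permission.
        public void LoadSettingsImperative()
        {
            var permission = new FileIOPermission(FileIOPermissionAccess.Read, @"C:\AppData");
            permission.Demand();
            // ... read configuration files from C:\AppData
        }

        // Declarative CAS: the runtime enforces the demand before the method runs.
        [FileIOPermission(SecurityAction.Demand, Read = @"C:\AppData")]
        public void LoadSettingsDeclarative()
        {
            // ... read configuration files from C:\AppData
        }
    }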

Finally, if layering is coupled with thin clients, you have fewer upgrades (which are always a risk for the stability of the application) and less logic running on the client. Security-wise, this means that a dump of the client process would reveal much less information, which makes it more likely that the client application can run in partial-trust mode.

Foundations of SDL: Componentization

Each layer is decomposed into components. Components are organized by function and by the security privileges they require. Note that performance considerations might lead you to group or further factor components in successive iterations.

Componentization here means identifying the components to secure and not merely breaking down the logical architecture into a group of assemblies.

For each component, you define the public contract and get to know exactly what data is expected to come in and out of the component. The decomposition can be hierarchical. From a security point of view, at this stage you are interested only in components within a layer that provide a service. You are not interested, for example, in the object model (that is, the domain model, typed DataSets, custom DTOs) because it is shared by multiple layers and represents only data and behavior on the data.

For each component, you identify the smallest set of privileges it needs to run. From a security perspective, this means that in case of a successful attack, attackers gain the minimum possible set of privileges.

Components going to different processes run in total isolation and each has its own access control list (ACL) and Windows privileges set. Other components, conversely, might require their own AppDomain within the same .NET process. An AppDomain is like a virtual process within a .NET application that the Common Language Runtime (CLR) uses to isolate code within a secure boundary. (Note, however, that an AppDomain doesn’t represent a security barrier for applications running in full-trust mode.) An AppDomain can be sandboxed to have a limited set of permissions that, for example, limit disk access, socket access, and the like.
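As a rough sketch of such a sandbox (the paths and names are hypothetical), you build a minimal permission set and pass it to AppDomain.CreateDomain; assemblies loaded into that domain then run with only the permissions you granted:

    using System;
    using System.Security;
    using System.Security.Permissions;

    // Minimal grant set: permission to execute, nothing else (no file or socket access).
    var permissions = new PermissionSet(PermissionState.None);
    permissions.AddPermission(new SecurityPermission(SecurityPermissionFlag.Execution));

    var setup = new AppDomainSetup { ApplicationBase = @"C:\MyApp\Untrusted" };

    // Code loaded into this AppDomain is confined to the grant set above.
    AppDomain sandbox = AppDomain.CreateDomain("Sandbox", null, setup, permissions);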

Foundations of SDL: Roles

Every application has its own assets. In general, an asset is anything attackers might aim at, be it sensitive data or a highly privileged component. Users access assets through the routes specified by use cases. From a security perspective, you should associate use cases with the categories of users authorized to manage the related assets.

A role is just a logical attribute assigned to a user; it refers to the logical role the user plays in the context of the application. In terms of configuration, each user can be assigned one or more roles. This information is attached to the .NET principal object that wraps the user’s identity, and the application code can check it before executing critical operations. For example, an application might define two roles—Admin and Guest, each representing a set of application-specific permissions. Users belonging to the Admin role can perform tasks that other users are prohibited from performing.

Assigning roles to a user account doesn’t add any security restrictions by itself. It is the responsibility of the application—typically, the business layer—to ensure that users perform only operations compatible with their role.
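A minimal sketch of such a check in the business layer might look like this (the class, method, and operation names are hypothetical); the imperative form queries the current principal directly, while the declarative form lets the runtime demand the role:

    using System.Security;
    using System.Security.Permissions;
    using System.Threading;

    public class OrderService
    {
        // Imperative check against the principal attached to the current thread.
        public void DeleteOrder(int orderId)
        {
            if (!Thread.CurrentPrincipal.IsInRole("Admin"))
                throw new SecurityException("Only Admin users can delete orders.");
            // ... perform the privileged operation
        }

        // Declarative alternative: the runtime demands the Admin role before the method runs.
        [PrincipalPermission(SecurityAction.Demand, Role = "Admin")]
        public void CloseAccount(int accountId)
        {
            // ...
        }
    }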

With roles, you employ a single model for authorization, unifying heterogeneous security models such as LDAP, the NTFS file system, and the database. Testing also becomes easier: by impersonating a role, you can test access on any layer.
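For example, a unit test can impersonate a role simply by installing a principal built for the test on the current thread (the user name and role below are just placeholders):

    using System.Security.Principal;
    using System.Threading;

    // Impersonate a Guest user for the duration of the test.
    var identity = new GenericIdentity("test-user");
    Thread.CurrentPrincipal = new GenericPrincipal(identity, new[] { "Guest" });

    // Any role-based check in the layer under test now sees a Guest user,
    // so you can verify that Admin-only operations are correctly rejected.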

In a role-based security model, the risks related to the use of impersonation and delegation are mitigated. Impersonation allows a process to run using the security credentials of the impersonated user but, unlike delegation, it doesn’t allow access to remote resources on behalf of that user. In both cases, the original caller’s security context can be used to cross machine boundaries from the user interface to the middle tier and then all the way down to the database. This is a risk in a security model in which permissions are restricted by object. In a role-based security model, however, the ability to execute a method that accesses specific resources is determined by role membership, not credentials, and the user’s own credentials might not be sufficient to operate on the application and data servers.

Authorization Manager (AzMan) is a separate Windows download that enables you to group individual operations together to form tasks. You can then authorize roles to perform specific tasks, individual operations, or both. AzMan offers a centralized console (an MMC snap-in) to define and manage roles, operations, and users.

Note

AzMan is a COM-based component that has very little to share with the .NET Framework. The .NET-based successor to AzMan is still in the works somewhere in Redmond. The developer community expects something soon, especially now that Microsoft has unveiled a new claims-based identity model that essentially factors authentication out of applications, so that each request brings its own set of claims, including user name, e-mail address, user role, and even more specific information.

Threat Model

Layering, componentization, and roles presuppose that, as an architect, you know the assets (such as sensitive data and highly privileged components) you want to protect from attackers. They also presuppose that you understand the threats to the system you’re building and the vulnerabilities it might be exposed to once implemented. Designing for security means that you develop a threat model, understand vulnerabilities, and do something to mitigate the risks.

Ideally, you should not stop at designing this into your software, but look ahead to threats and vulnerabilities in the deployment environment and to those resulting from interaction with other products or systems. To this end, understanding the threats and developing a threat model is a must. For threats found at the design level, applying countermeasures is easy. Once the application has been developed, applying countermeasures is much harder. If an application is deployed, it’s nearly impossible to apply internal countermeasures—you have to rely on external security practices and devices. Therefore, it’s better to architect systems with built-in security features.

Threat modeling essentially consists of examining components for different types of threats. STRIDE is a threat modeling practice that lists the following six types of threats:

  • Spoofing of user identity. Refers to using false identities to get into the system. This threat is mitigated by strong authentication and by filtering out invalid IP addresses.

  • Tampering. Refers to intercepting/modifying data during a module’s conversation. This threat is mitigated by protecting the communication channel (for example, SSL or IPSec).

  • Repudiation. Refers to the execution of operations that can’t be traced back to the author. This threat is mitigated by strong auditing policies.

  • Information disclosure. Refers to unveiling private and sensitive information to unauthorized users. This threat is mitigated by enhanced authorization rules.

  • Denial of service. Refers to overloading a system to the point of blocking it. This threat is mitigated by filtering requests and by carefully monitoring the use of bandwidth.

  • Elevation of privilege. Refers to executing operations that require a higher privilege than the one currently assigned. This threat is mitigated by assigning each component the least possible privilege.

After you have the complete list of threats that might apply to your application, you prioritize them based on the risks you associate with each. Realistically, you cannot address every threat you find. Security doesn’t come for free, and you should balance costs with effectiveness. As a result, threats that you regard as unlikely or not particularly harmful can be given a lower priority or not covered at all.

How do you associate a risk with a threat? You use the DREAD model. It rates the risk as the probability of the attack multiplied by the impact it might have on the system. You should focus on the following aspects:

  • Discoverability. Refers to how high the likelihood is that an attacker discovers the vulnerability. It is a probability attribute.

  • Reproducibility. Refers to how easy it could be to replicate the attack. It is a probability attribute.

  • Exploitability. Refers to how easy it could be to perpetrate the attack. It is a probability attribute.

  • Affected users. Refers to the number of users affected by the attack. It is an impact attribute.

  • Damage potential. Refers to the quantity of damage the attack might produce. It is an impact attribute.

You typically use a simple High, Medium, or Low scale to determine the priority of the threats and decide which to address and when.
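As an illustration only (the 1-to-3 scale and the exact formula below are assumptions for the sketch, not part of the DREAD definition), you might compute a numeric score by summing the probability attributes, summing the impact attributes, and multiplying the two:

    public static class ThreatScoring
    {
        // Illustrative scoring: each DREAD attribute rated from 1 (low) to 3 (high).
        public static int DreadRisk(int discoverability, int reproducibility, int exploitability,
                                    int affectedUsers, int damagePotential)
        {
            int probability = discoverability + reproducibility + exploitability;  // 3..9
            int impact = affectedUsers + damagePotential;                           // 2..6
            return probability * impact;   // higher score = higher-priority threat
        }
    }

    // Example: an easy-to-find, easy-to-exploit flaw affecting most users.
    // (3 + 2 + 3) * (3 + 3) = 48, which you would map to High on your scale.
    int risk = ThreatScoring.DreadRisk(3, 2, 3, 3, 3);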

Note

STRIDE and DREAD form the classic analysis model promoted by the Security Development Lifecycle (SDL) team and are based on the attacker’s viewpoint. The model works great in an enterprise scenario, but it requires a security specialist because the resulting threat model is large and complex. Another, simplified model is emerging: the CIA/PI model, which stands for Confidentiality Integrity Availability/Probability Impact and focuses on the defender’s point of view.

Security and the Architect

An inherently secure design, a good threat model, and a precise analysis of the risk might mean very little if you then pair them with a weak and insecure implementation. As an architect, you should intervene at three levels: development, code review, and testing.

As far as development is concerned, the use of strong typing should be enforced because, by itself, it cuts off a good share of possible bugs. Likewise, knowledge of common security patterns (for example, the "all input is evil" pattern), application of a good idiomatic design, and static code analysis (for example, using FxCop) are all practices to apply regularly and rigorously.
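A minimal sketch of the "all input is evil" mindset (the customer-code format below is a made-up example) is to validate input against a whitelist and reject anything that doesn’t match:

    using System;
    using System.Text.RegularExpressions;

    public static class InputGuard
    {
        // Accept only what you expect: 3 to 10 uppercase letters or digits.
        public static string ValidateCustomerCode(string input)
        {
            if (input == null)
                throw new ArgumentNullException("input");

            if (!Regex.IsMatch(input, @"^[A-Z0-9]{3,10}$"))
                throw new ArgumentException("Invalid customer code.", "input");

            return input;
        }
    }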

Sessions of code review should be dedicated to a careful examination of the actual configuration and implementation of security through CAS, and to spotting the portions of code prone to attacks such as cross-site scripting, SQL injection, buffer overflows, and similar exploits.
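During such reviews, a typical fix for SQL injection is to replace string concatenation with parameterized commands; a minimal sketch follows (connectionString, userSuppliedCode, and the table name are assumed for the example):

    using System.Data.SqlClient;

    // User input travels as a parameter value, never as part of the SQL text.
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(
        "SELECT Id, Name FROM Customers WHERE CustomerCode = @code", connection))
    {
        command.Parameters.AddWithValue("@code", userSuppliedCode);
        connection.Open();
        using (var reader = command.ExecuteReader())
        {
            // ... read the results
        }
    }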

Unit testing for security is also important if your system receives files and sequences of bytes. You might want to consider a technique known as fuzzing. Fuzzing is a software testing technique through which you pass random data to a component as input. The code might throw an appropriate exception or degrade gracefully. However, it might also crash or fail some expected assertions. This technique can reveal some otherwise hidden bugs.
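A naive fuzzing loop might look like the following sketch, where ParseMessage stands in for the hypothetical component under test and FormatException for its documented failure mode:

    using System;

    var random = new Random();
    for (int i = 0; i < 10000; i++)
    {
        // Feed random byte sequences of random length to the component.
        var buffer = new byte[random.Next(1, 4096)];
        random.NextBytes(buffer);
        try
        {
            ParseMessage(buffer);          // hypothetical component under test
        }
        catch (FormatException)
        {
            // Expected, graceful failure: malformed input was rejected cleanly.
        }
        catch (Exception ex)
        {
            // Anything else (crashes, failed assertions) is a bug worth investigating.
            Console.WriteLine("Unexpected failure: " + ex);
        }
    }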

Final Security Push

Although security should be planned for from the outset, you can hardly run serious security tests until the feature set is complete and the product is close to its beta stage. It goes without saying that any anomalies found during security tests lead the team to reconsider the design and implementation of the application, and even the threat model.

The final security push before shipping to the customer is a delicate examination and should be delegated to someone outside the team, preferably an independent figure.

Releasing to production doesn’t mean the end of the security life cycle. As long as a system is up and running, it is exposed to possible attacks. You should always find time for penetration testing, which might lead to finding new vulnerabilities. So the team then starts the cycle again with the analysis of the design, implementation, and threat model. Over and over again, in an endless loop.

Performance Considerations

You might wonder why we’re including a sidebar on performance rather than a full "Design for Performance" section. Performance is something that results from the actual behavior of the system, not something you can put in. If you’re creating a standalone, small disconnected program, you can optimize it almost at will. It is radically different when we move up in scope to consider an enterprise-class system.

Performance is not something absolute.

What is performance? Is it the response time the end user perceives? Is it resource utilization that might or might not penalize the middle tier? Is it network latency or database I/O latency? Is it related to caching or smarter algorithms? Is it a matter of bad design? Is it merely horsepower?

Too often, a design decision involves a tradeoff between performance and scalability: you give up some performance-oriented improvement to achieve better scalability—that is, a better (read, faster) response when the workload grows. Performance is never something absolute.

In an enterprise-class system, efficiency and performance are certainly requirements to take into account, but they are not fundamental requirements.

In our opinion, a bad design influences performance, but there’s no special suggestion we can share to help you to come up with a high-performance design. The design is either good or bad; if it’s good, it sets the groundwork for good performance.

As Donald Knuth used to say, "Premature optimization is the root of all evil." So optimizing is fine and necessary, but you should care about it only when you have evidence of poor performance. And only when you know what is doing poorly and that it can be improved. Optimization is timely—it is never premature, never late.

Performance is hardly something that works (or doesn’t work) in theory. You can hardly tell from a design or, worse yet, from a specification whether the resulting system will perform poorly. You build the system in the best and simplest way you can, adhere to OOD principles, and write the best code you can. Then you test the system.

If it works, but it doesn’t work as fast as it should, you profile the system and figure out what can be improved—be it a stored procedure, an intermediate cache, or a dynamic proxy injection. If the design is flexible enough and leaves room for changes, you shouldn’t have a hard time applying the necessary optimization.
