Wednesday, August 4, 2010

Why Perform Authenticated Web Application Security Assessments?

The main difference between our Basic and Standard web application security assessment services is that for Basic assessments, we only perform unauthenticated testing, unless of course we gain authenticated access through exploitation of some vulnerability. Our Standard application security assessments test the application from both the unauthenticated perspective and the authenticated perspectives of the user roles in scope. Let's briefly describe what I mean by unauthenticated and authenticated perspectives, as well as authenticated roles.

  • Unauthenticated: Application content (pages) available to a user without credentials, e.g., /index.aspx, /products.jsp, etc.
  • Authenticated: Content only available to a user with credentials, e.g., /admin.php, /change_password.aspx, etc.
  • Authenticated Role: A group of users with access to a particular subset of the application's entire authenticated content. For example, a "helpdesk" role might only have access to create and close tickets but a "helpdesk_manager" role might also have access to ticket reporting functionality.
While there are many reasons an organization may wish to identify vulnerabilities that exist from an unauthenticated perspective, there are many more reasons organizations should consider testing from both unauthenticated and authenticated perspectives.

One of the most compelling reasons to test from an authenticated user's perspective is that some vulnerabilities are exploitable without credentials but are only discoverable with credentials. Furthermore, a certain vulnerability may reside in a page only accessible by a certain authenticated role. Consider the following example scenarios:
  • SQL injection is possible within an application but only on a page that requires authentication. Without actually authenticating, an assessor would be unable to identify this major vulnerability.
  • A web application assigns predictable session identifier values but only sets them after successful authentication. Without authenticated testing, a serious external weakness that would allow user impersonation would go unnoticed. Furthermore, since it is a best practice to only provide session IDs after successful authentication, the application would appear perfectly secure to the unauthenticated assessor.
  • A web application redirects users to pages that are not linked from an unauthenticated perspective. Authorization is not enforced, so anyone who forcefully browses to these pages gains authenticated access. Without credentials, and depending on the complexity of the pages' directory structure and file names, unauthenticated testing may very well leave this serious external weakness undiscovered.
  • A web application is vulnerable to cross-site request forgery (CSRF) and allows an attacker to force users to order checks to an address of the attacker's choice. Without authenticated testing, the check-ordering functionality would never be touched by the assessor, even though CSRF is an externally initiated attack that relies on the users' own session cookies.
  • A web application allows a normal authenticated user to obtain administrative user privileges based on the presence of an "admin" parameter or cookie. Without testing from both the user and administrator perspectives, this flaw may not be discovered (a minimal check along these lines is sketched below).
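To make that last scenario more concrete, here is a minimal Python sketch of the kind of check an assessor might perform once valid low-privilege credentials are in hand. The URL, cookie names, and "Admin Console" marker are placeholders invented purely for illustration; a real test would compare the full responses manually rather than rely on a single string.

    # Hypothetical check for a client-controlled "admin" flag.
    # All names below (URL, cookie names, content marker) are placeholders.
    import requests

    TARGET = "https://app.example.com/account/settings.aspx"
    USER_COOKIES = {"SESSIONID": "valid-low-privilege-session-id"}

    # Baseline request as a normal authenticated user.
    baseline = requests.get(TARGET, cookies=USER_COOKIES)

    # Same request with a speculative "admin" cookie added.
    escalated = requests.get(TARGET, cookies={**USER_COOKIES, "admin": "true"})

    # Crude comparison: if administrative content appears only when the extra
    # cookie is present, the application may trust a client-supplied flag.
    marker = "Admin Console"
    if marker in escalated.text and marker not in baseline.text:
        print("Possible privilege escalation via client-controlled 'admin' cookie")
    else:
        print("No obvious difference; both responses still warrant manual review")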
Obviously a vulnerability has to be discovered before it can be exploited, and for many applications the authenticated attack surface is effectively exposed to anyone because gaining an account is trivial. Authenticated testing may not add the same value for applications that have a small number of users, do not have sensitive authenticated functionality, have strict out-of-band user registration, or have a high level of trust for their users. However, for those applications that do have sensitive authenticated functionality, many users, or a low level of trust for their users, authenticated testing is usually recommended.

Our Standard web application security assessment service categorizes findings based on severity and whether they are discoverable and exploitable from authenticated or unauthenticated perspectives. Authenticated testing is performed for a variety of security roles if requested, which allows us to rigorously assess applications' authorization, authentication, and session management mechanisms.

Another interesting, and sometimes confusing, subject that should influence testing methodology is regulation. For example, the Payment Card Industry (PCI) Data Security Standard (DSS) mandates in requirement 6.5 that all (internal and external) web applications be developed based on secure coding guidelines so as to prevent the ten types of vulnerabilities outlined in the Open Web Application Security Project (OWASP) Top Ten list. DSS 6.6 goes further and mandates that all publicly accessible web applications either be assessed for the presence of OWASP Top Ten vulnerabilities or be protected by a web application firewall that can prevent exploitation of the OWASP Top Ten vulnerabilities. An entire separate document clarifying what is meant by "Application Reviews" and "Web Application Firewalls" attempts to shed light on what testing is required for an "Application Review". It mentions that manual or automated testing may suffice, but it does not spell out that unless authenticated testing is performed from multiple security roles along with unauthenticated testing, OWASP Top Ten vulnerabilities can go undetected.

I personally think authenticated security assessments from multiple security roles are an implicit requirement of the PCI DSS (unless you deploy a WAF), given that it specifies identifying OWASP Top Ten vulnerabilities. The fact is, you can't say you tested thoroughly for vulnerabilities if you failed to test half of the application's content. If I'm right, this means that companies that have purchased unauthenticated web application security assessments, knowingly or inadvertently, have not fulfilled the DSS 6.6 requirement.

Assessing the Multiple Security Postures of Targets

The majority of our assessment clients choose a full-disclosure approach to security assessments. They realize that this helps us maximize results in terms of vulnerabilities discovered, thus providing the most value for a given cost. Other times assessment clients are interested in zero-knowledge assessments that simulate an attack from an outside threat with minimal knowledge of a target. Although both scenarios are valid, we don't think they have to be mutually exclusive. Why not have your cake and eat it too? With that in mind, we now offer to test primary and secondary security controls separately in order to gain knowledge of a target's multiple security postures.

In the realm of web application security, for example, a primary security control might be parameterized database access to prevent SQL injection attacks, or strict output encoding to prevent cross-site scripting (XSS) attacks. A secondary security control might be a web application firewall (WAF) that also attempts to prevent such attacks before they reach the web application. Under these circumstances, a web application security assessment might be more of an assessment of the WAF's security posture than of the web application's security posture. Only those attacks that are not blocked by the WAF would actually reach the web application. The results of such an assessment would likely miss flaws that reside in the actual assessment target, the web application.
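As a small illustration of what a primary control looks like in code, here is a minimal Python sketch using the standard-library sqlite3 module; the schema, data, and input value are invented purely for this example. It contrasts string-built SQL with a parameterized query, the kind of in-application defense a WAF would otherwise have to compensate for.

    # Minimal illustration of a primary control: parameterized database access.
    # The table, columns, and values below are invented for this example.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (username TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'user')")

    user_input = "nobody' OR '1'='1"  # attacker-controlled value

    # Vulnerable: the input is concatenated into the statement, so the injected
    # quote and OR clause change the query's meaning and return every row.
    vulnerable = conn.execute(
        "SELECT role FROM users WHERE username = '" + user_input + "'"
    ).fetchall()

    # Primary control: a parameterized query treats the input strictly as data.
    safe = conn.execute(
        "SELECT role FROM users WHERE username = ?", (user_input,)
    ).fetchall()

    print(vulnerable)  # [('user',)] despite the bogus username
    print(safe)        # [] as it should be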

“So what? That’s what the WAF is for right?” you might ask. Well…
  • Can your web application firewall policies become disabled inadvertently?
  • Could future WAF policy changes negate some WAF protection?
  • Could the hidden flaws in your web application be replicated by your development staff in other applications, possibly applications not protected by a WAF?
  • Do you have public staging and test environments also protected with the same WAF policies?
  • What about simple due diligence to make sure that your application defends itself?
  • Could new methods of attack obfuscation bypass WAF policies in the future?
I’m not saying that it isn’t important to measure the effectiveness of secondary security controls like WAFs. To the contrary, I’ve seen too many WAFs deployed in such a way that they are all but worthless. I’ve seen security personnel who were quite certain their WAF would stop my attacks during an assessment only to find out that the WAF was still in “learning mode,” the way it was left when the WAF vendor did the initial installation. The dead-bolt doesn’t help you much if you leave the key stuck in it.

What I am saying is, if it doesn’t cost any extra, why not test both primary and secondary security controls? Assess the application’s “naked” security posture without WAF protection. Then enable the WAF protection and see which vulnerabilities are still vulnerabilities. Since it doesn’t take much time to re-validate what’s already been identified, we offer this additional service to those clients who desire it for no additional charge.
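As a rough illustration of that re-validation step, the sketch below replays previously confirmed findings against the now WAF-protected application and notes which ones appear to be blocked. The URLs, parameters, payloads, and the block-detection heuristic are all placeholders; an actual re-test would reuse the exact requests that demonstrated each finding in the first place.

    # Hypothetical re-validation pass against the WAF-protected application.
    # The findings, URLs, and payloads below are placeholders, not real results.
    import requests

    FINDINGS = [
        ("SQL injection", "https://app.example.com/products.jsp",
         {"id": "1' OR '1'='1"}),
        ("Reflected XSS", "https://app.example.com/search.aspx",
         {"q": "<script>alert(1)</script>"}),
    ]

    for name, url, params in FINDINGS:
        resp = requests.get(url, params=params)
        # Simplistic heuristic: many WAFs respond with a 403 or a block page.
        blocked = resp.status_code in (403, 406) or "blocked" in resp.text.lower()
        status = "mitigated by WAF" if blocked else "still reachable - verify manually"
        print(f"{name}: {status}")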

This way you get a clear picture of two different application security postures, the bare application, and the application with WAF protection. You also get to measure the true effectiveness of the secondary security controls you paid so dearly for, which might actually surprise you. Finally, you can remediate vulnerabilities at their source, in your application, so that if your WAF fails or becomes disabled, your more sensitive database content stays on your side of the firewall. These flaws are then known by your development staff and best practices to prevent them are integrated into future development efforts.

This concept goes beyond application security assessments and WAFs. Consider the following scenarios:
  1. A network vulnerability or penetration assessment will test any Intrusion Prevention System (IPS) protecting the assessment targets. Based on the client's needs, Depth Security might start the assessment with the testing sources exempted from preventative IPS policies to get an actual assessment of the security posture of the "naked" target hosts. Then, once the "worst case scenario" results are documented, the IPS exemptions could be removed to determine which vulnerabilities are exploitable and which are mitigated by the IPS. Again, the point is to assess the "pure" security posture of the target and the "real world" security posture of the target plus any secondary security controls. For those clients with managed IPS services, this process really identifies the value (or more typically, lack thereof) that their vendor is providing. In the event that the IPS failed open or was improperly configured, the client would also have a solid idea of their network security posture without IPS protection.
  2. A social engineering scenario is requested that targets help-desk members with phishing attacks in an attempt to obtain their credentials or other sensitive information. The first control this assessment would test would be any anti-spam functionality. If the anti-spam system stops the phishing emails from being delivered, the next step would be to add anti-phishing exemptions so that the help-desk members' susceptibility to any phishing emails that make it through anti-spam controls can actually be tested. The first test verifies a secondary control, anti-spam; the second test targets a primary control, help-desk members' common sense.
Thinking in terms of multiple security postures, or security "states," fosters a better understanding of a target's total vulnerabilities and the effectiveness of secondary security controls. Furthermore, this way of thinking leads to a better idea of what to expect in the event of a failure or improper configuration of these secondary security controls.