Software security vulnerabilities are a serious threat to software vendors and their customers: they can result in both monetary loss and loss of reputation. Thus, implementing a rigorous secure software development life-cycle (SDLC) is a competitive advantage for a software vendor. Security testing is an important part of any SDLC. Moreover, it is commonly accepted that security testing should be applied as early as possible in software development.
Interested in applying security testing during development? We will offer a one-day continuing professional development (CPD) training on the 13th of September at The University of Sheffield.
In this course, you will learn different security testing approaches (e.g., SAST, DAST), their specific strengths and weaknesses, how to evaluate tools, and how to select the best “blend” of tools for your own software development. Moreover, the participants will learn how these tools can be integrated into various software development methods (ranging from traditional waterfall-like processes to agile processes supporting continuous delivery).
This course on security testing is only one of our “compact” offerings for people working in industry. Similarly, we are also offering courses on secure programming or an introduction to secure software engineering.
For more information, please visit the website of The University of Sheffield or contact Achim Brucker. We also offer these courses as in-house courses, adapted to your needs and wishes.
Cross-platform frameworks, such as Apache Cordova, are becoming increasingly popular. They promote the development of hybrid apps that combine native, i.e., system-specific, code and system-independent code, e.g., HTML5/JavaScript. Combining native with platform-independent code opens Pandora’s box: all the security risks of native development are multiplied with the security risks of web applications.
As part of the Mobile / BYOD Security Track of the BrightTalks’ Identity, Data Protection and Securing the Modern Business Summit, we will give a webinar explaining the risk of hybrid apps and how to avoid them by applying secure software development best practices.
The recording of the webinar will be available online.
Cross-platform frameworks, such as Apache Cordova, are becoming increasingly popular. They promote the development of hybrid apps that combine native, i.e., system-specific, code and system-independent code, e.g., HTML5/JavaScript. Combining native with platform-independent code opens Pandora’s box: all the security risks of native development are multiplied with the security risks of web applications.
If you want to learn more, attend our talk at the OWASP AppSecEU in Belfast. Update: you can also watch the recording of our talk!
In the first half of our talk, we start with a short introduction into hybrid app development and present specific attacks, followed by a report on how Android developers are using Apache Cordova. In the second half of the talk, we will focus on developing secure hybrid apps: both with hands-on guidelines for defensive programming and with recommendations for hybrid-app-specific security testing strategies.
The recording of the webinar on the benefits of applying security testing as early as possible in software development is now available online.
Security testing is an important part of any secure software development life-cycle (SDLC). Still, security testing is often understood as an activity done by security testers in the time between “end of development” and “offering the product to customers”.
Learning from traditional testing that fixing bugs becomes more costly the later it is done in development, we believe that security testing should be integrated into the daily development activities.
Based on the SDLC of a large software vendor, we will present the benefits of early security testing and discuss what is necessary to achieve a “security testing as development activity” approach.
The webinar was hosted by Checkmarx.
In the application security testing domain, the debate over whether static application security testing (SAST) is better than dynamic application security testing (DAST) or interactive application security testing (IAST) is heating up. But is this really the right question to ask?
I think it is not. Static approaches (e.g., SAST) and dynamic approaches (e.g., DAST or IAST) to application security testing have fundamentally different properties. Thus, the important question is: how can we combine SAST and DAST/IAST to make an application security program as effective and efficient as possible?
The image shows an abstract architecture of a modern, multi-tiered application that comprises several tiers.
Such an application is not executed in isolation. On the contrary, such applications are executed in a complex environment that may include other defensive and active security technologies such as certain operating system configurations, network firewalls, or web application firewalls. Thus, the overall security risk of operating such an application depends on all aspects of this complex environment.
Static Application Security Testing (SAST) is a technique that analyzes the source code or byte code of your software without actually executing it (as SAST analyzes the internal details of a program, we call this a white-box test). Thus, it can achieve (nearly) full coverage of a piece of code. Usually, SAST tools are not able to analyze flows across several tiers of a multi-tiered architecture: in fact, the extent to which the environment (e.g., libraries, network interfaces, execution environments such as the web browser or a runtime container) is specified influences the quality of the SAST results significantly. Thus, SAST tools are well suited to cover the different tiers of our example application individually (“horizontal coverage”).
Moreover, as SAST tools analyze the source or byte code, they usually only support a limited subset of programming languages (or binary architectures). Thus, to cover as many tiers and programming languages as possible, several SAST tools may need to be combined.
The strengths of SAST include the ability to use it really early in the software development process (and, thus, to fix vulnerabilities before they reach your quality assurance department, or worse, your customers) as well as the ability to provide very detailed instructions for fixing a vulnerability.
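To make the source-to-sink analysis that SAST tools perform concrete, here is a minimal, illustrative Python example (the function names are made up for this sketch): a taint-tracking SAST tool would flag the first variant, where user input flows unsanitized into a SQL statement, and accept the second, parameterized variant.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # String concatenation lets attacker-controlled input become part of
    # the SQL text: a SAST tool tracking flows from sources (user input)
    # to sinks (conn.execute) would flag this line as SQL injection.
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + username + "'"
    ).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the input never becomes part of the SQL text,
    # so the source-to-sink flow is sanitized and no finding is raised.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()
```

Because the flaw is visible in the code itself, a SAST finding can point the developer to the exact line and suggest the parameterized form as the fix.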
Dynamic Application Security Testing (DAST) and Interactive Application Security Testing (IAST) share one property: they analyze a running application. Thus, in contrast to SAST, they analyze an application in the context of its actual runtime environment, from the client application down to the backend systems. Thus, to stay in our picture, they can achieve a “vertical coverage”, i.e., also detect vulnerabilities that only occur in the interplay of different tiers or in the interplay of our application with its environment (e.g., the underlying operating system, backend systems, or a WAF).
As DAST and IAST observe a running application, they can only detect vulnerabilities that cause observable changes in the behavior of the application: their horizontal coverage depends on the number of different execution traces and, thus, is usually much lower compared to SAST.
As IAST is not yet a widely used technology, let’s briefly compare DAST and IAST.
Dynamic Application Security Testing (DAST) treats the application under test as a black-box, i.e., it only injects input into external interfaces and observes the behavior of the application by, again, only observing the external outputs. Thus, DAST tools can only point to vulnerabilities but, in contrast to SAST, are usually not able to provide information to developers on how to fix a detected issue.
Moreover, as modern applications usually use rather complex client protocols, it is challenging to support the necessary interfaces. Finally, DAST tools might inject test data that, while not immediately recognizable, might harm backend systems (e.g., by storing invalid data in a database).
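The black-box principle behind DAST can be sketched in a few lines of Python. Everything here is illustrative (the probe strings, the marker, and all function names are assumptions, not a real scanner's API): a marker payload is injected through the external interface, and only the externally observable output is inspected.

```python
# Minimal sketch of the black-box principle behind DAST: inject a
# distinctive marker through an external interface and observe only the
# output. All names here are illustrative, not a real tool's API.

XSS_PROBES = [
    '<script>alert("{m}")</script>',
    '"><img src=x onerror=alert("{m}")>',
]

def build_probes(marker="dast-7f3a"):
    """Instantiate each probe template with a recognizable marker."""
    return [p.format(m=marker) for p in XSS_PROBES]

def looks_vulnerable(response_body, probe):
    """A response echoing the probe unencoded suggests reflected XSS.
    A real scanner would parse the HTML context; this is a crude check."""
    return probe in response_body

def scan(send_request, marker="dast-7f3a"):
    """send_request: a callable standing in for the external interface
    (payload -> response body). Returns the probes that were reflected."""
    return [p for p in build_probes(marker)
            if looks_vulnerable(send_request(p), p)]
```

Note that the sketch can only say *that* the probe was reflected, not *where* in the code the missing encoding lives, which illustrates why DAST findings are harder to turn into fix instructions than SAST findings.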
Interactive Application Security Testing (IAST) tries to transfer some of the advantages of SAST to the world of dynamic testing. IAST is based on the idea of instrumenting the application that should be tested (technically, it might also be sufficient, e.g., for Java, to execute the application in a modified runtime environment). As an IAST tool knows some internal details of the application under test, this approach is classified as gray-box testing.
First, the instrumentation makes it possible to observe the behavior of the application under test in much more detail and, e.g., to block bad inputs from entering back-end systems or to detect vulnerabilities that pure DAST would not be able to detect. Second, the detailed knowledge of the executed (and not-yet-executed) parts of the application allows for generating test inputs that stimulate not-yet-executed parts of the application and, thus, increase the “horizontal coverage”.
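As a toy illustration of this instrumentation idea (all names here are hypothetical, and real IAST agents instrument byte code or the runtime, not via a decorator), wrapping a sink function lets an agent see values inside the running application, block dangerous input before it reaches a backend, and record which code was exercised:

```python
# Toy illustration of IAST-style instrumentation: wrapping a sink lets
# the "agent" observe values inside the running application, block
# suspicious input before it reaches a backend system, and record
# coverage information that could steer further test generation.

observed_calls = []   # coverage information an IAST agent could report

def instrument_sink(func):
    def wrapper(query):
        observed_calls.append(func.__name__)    # track executed code
        if "'" in query or "--" in query:       # crude taint heuristic
            raise ValueError("blocked suspicious input: %r" % query)
        return func(query)
    return wrapper

@instrument_sink
def run_query(query):
    # Stand-in for a database call in the application under test.
    return "rows for: " + query
```

The recorded `observed_calls` list stands in for the coverage data that lets an IAST tool know which parts of the application a test input actually reached.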
Clearly, the goal of a comprehensive security testing strategy is to maximize both the horizontal coverage and the vertical coverage. To achieve this, combining static and dynamic security testing approaches is necessary. Thus, while SAST can be considered a workhorse in secure application development (see, e.g., [1] for an experience report on using SAST at SAP), a holistic security testing strategy needs to integrate both approaches [2].
Besides achieving a better overall coverage (both in terms of the coverage of the application under test and in terms of the coverage of security risks/vulnerabilities), SAST can help to optimize the use of IAST (or DAST) tools and vice versa. For example:
If you want to implement a holistic software security program, you should definitely look at both static and dynamic approaches. Their properties are so different that the right choice depends on your most important security risks, the technologies that you use for developing your applications and, last but not least, your organizational approach (e.g., who is using the tools). Choosing the right tool for your needs is not an easy task; you might recall our earlier post on “A User-Centered Classification of Security Testing Tools” that discusses a few of those challenges.
Still, one thing is true for all tools: you need to work with them and adapt them to your needs. None of the more powerful application security testing tools that I am aware of is an “off-the-shelf” solution that works effectively and efficiently without customization. Also, your needs and priorities will change over time. Thus, as a general rule of thumb: start with one tool (or a small number of tools), gain your own experience, constantly monitor the use to learn the weaknesses of your current setting, and improve.
Finally, there is more to security testing than only SAST, DAST, and IAST; see, e.g., [3] for an (academic) overview of different security testing approaches. You might ask yourself why I am only mentioning penetration tests in the very last paragraph. That’s a story for a future blog post.
Many vendors of application security testing tools classify their tools based on the testing techniques used, e.g., static application security testing (SAST), dynamic application security testing (DAST) or, more recently, interactive application security testing (IAST). Is this really the information that users of security testing tools actually need?
Most likely not. Customers are much more interested in getting the following questions answered:
Answering these questions gives a nice, three-dimensional, framework for categorizing security testing tools:
In more detail, the three axes are:
Actually, if we merge the “vulnerability scope” and “technology scope” axes, we end up with a nice magic quadrant for security testing tools.
The magic quadrant has the two axes:
These two axes capture, from the perspective of a central software security team of a software vendor, a very important insight:
Tools in the upper right quadrant (the “generalists for developers”) are candidates for being purchased centrally (and centrally supported), as they serve a large developer base and require a comparatively low security skill level. Examples of tools in this quadrant are Checkmarx’s SAST solution, Coverity, or HP Fortify. Dynamic tools might also be put in this category, depending on the maturity and the focus of a software security program (as they usually do not provide detailed fix recommendations, we discuss them in the next category). Besides the mentioned “on-premise” tools, there are also cloud offerings available that fit into this category, e.g., from Veracode.
Tools in the upper left quadrant (the “generalists for security experts”) are candidates for being purchased by a central security team: they cover a wide range of technologies and security aspects but require higher security expertise than the developer-friendly tools in the upper right quadrant. This might include traditional vulnerability scanners (such as Metasploit) but, depending on the maturity of the security training for developers, might also include dynamic tools such as Coverity Seeker or HP WebInspect.
Tools in the lower right quadrant (the “specialists for developers”) are usually specialist tools for developers: they might require a high level of development expertise but only a low to medium level of security expertise (e.g., they only cover one CWE that is specific to the use case of the developed product). One could, for example, argue that BDD-Security is a candidate for this category.
Tools in the lower left quadrant (the “specialists for security experts”) are specialist tools that serve individual experts and team. Thus, they are candidates for a local purchase by the actual development team using them. Examples for tools in this quadrant are Burp Suite, DOM Inspector, OWASP ZAP, or sslyze.
Of course, such a classification is never “black-and-white-only” and depends not only on the actual properties of the tool but also on the type of application security program in your organization as well as the level of maturity. In general, if you are just starting to introduce application security testing tools to your developers, focus on developer-friendly tools that support a wide range of vulnerabilities and technologies (i.e., the right-upper quadrant).
Concluding, if you are talking to application security testing tool vendors, prioritize your needs (the type of vulnerabilities you want to detect, the type of technologies you need to support, etc.) over the actual technology used for finding them. Of course, the different application security testing techniques differ in many more aspects: this article is not a comprehensive buyer’s guide - it should only give some ideas on how to decide whether a tool is more suitable to be bought by a local development team or by a central organization supporting a large group of developers.
And last but not least, the same gap between what users need and what vendors (respectively, their marketing departments) offer also exists for other software security technologies such as Runtime Application Self-Protection (RASP).
Finding and fixing software vulnerabilities has become a major struggle for most software-development companies. While generally without alternative, such fixing efforts are a major cost factor, which is why companies have a vital interest in focusing their secure software development activities such that they obtain an optimal return on this investment.
Thus, investigating which factors have the largest impact on the actual fix time is an important research area. To shed some light on this area, we analyzed the times for fixing security vulnerabilities at SAP. The results of our study have been published in the Journal on Data Science and Engineering (DSEJ) [1].
The question whether FLOSS (Free/Libre and Open-Source Software) is more or less secure than proprietary software is often not the right question to ask. The much more important question is: how do you integrate FLOSS components securely into a Secure Software Development Process? Moreover, if you think about it, the potential challenges in the secure integration of FLOSS components are also challenges in integrating other types of third-party components. As a software vendor, you are ultimately responsible for the security of the overall product, regardless of which technologies and components were used in building it (you can either read more, or watch the video of our AppSecEU presentation).
Ideally, third-party components should, security-wise, be treated as your own code and, thus, they impact all aspects of the Secure Software Development Lifecycle.
Before we continue, let’s quickly review the three most important types of third-party components:
Freeware is ubiquitous, i.e., easily available to developers without triggering formal processes. Thus, it is the most problematic type, as its use is usually hard to track, and it is usually hard to get fixes or updates in a timely manner (or any maintenance guarantees). FLOSS is also easily available – but it does not have the maintenance problem, as you could fix it yourself, and there are also plenty of companies offering support for FLOSS components. Thus, when you are tracking the use of FLOSS (as well as the use of proprietary third-party components) in your organization, proprietary and FLOSS components differ mainly in one aspect: FLOSS, by definition, provides you with the possibility to fix issues yourself (or to ask an arbitrary third party to do it for you).
Let’s face the truth: any third party component (as any self-developed code) can contain vulnerabilities that need to be fixed during the lifecycle of the consuming applications. Thus, instead of asking which type of components is more secure (answer: neither, there is bad and good software in both camps), it is more important to control/plan the risk and effort associated with consuming third party components.
Thus, FLOSS just provides you with one additional opportunity: fixing the issues yourself. Moreover, when doing research in software security, FLOSS has the additional advantage that data about software versions, vulnerabilities, and fixes is available that can be used for validating research ideas. For example, we are researching methods
We have already published preliminary results [1] and we are expecting much more to come in the (near) future.
Of course, one would also like to precisely predict the risk (or the likelihood that vulnerabilities are detected in a specific third-party component during the maintenance period of the consuming applications). Sadly, our research shows that this is not (easily) possible and, again, it is the wrong question to ask.
Let’s get back to some pragmatic recommendations if you are using third-party components in general and FLOSS components in particular as part of your software development. As we cannot predict future vulnerabilities easily, we focus on strategies for controlling the risk and effort – which should be, anyway, the main focus of a good project manager.
To control (minimize) the risk of third party components we recommend integrating the management of third-party components in your Secure Software Development Lifecycle right from the start and to obtain them from trustworthy sources (and, if you are in the lucky situation to be able to select a component from various components providing the necessary functionalities, we have some tips as well):
Prefer projects with private bug trackers: Being able to report security issues to a FLOSS project privately allows you to discuss potential fixes with the community without putting your customers or the other users of the FLOSS component at risk (e.g., by inadvertently publishing a 0-day).
Prefer projects with a mature (healthy) Secure Development Lifecycle: As nobody is immune from security vulnerabilities, it is important to select projects that take security seriously. A good indicator is the maturity level of the Secure Software Development Lifecycle, e.g., assessed by answering questions such as
To control (minimize) the effort of third party components, again, the Secure Software Development Lifecycle is the most important part to look at – followed by the project selection.
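The bookkeeping implied by "integrating the management of third-party components into your Secure Software Development Lifecycle" can be sketched very simply: compare the components your product ships against a list of known advisories. The component names, versions, and advisory data below are entirely hypothetical; a real implementation would pull data from a vulnerability feed rather than hard-code it.

```python
# Sketch of third-party component tracking: match a product's bill of
# materials against known advisories. All data here is illustrative.

KNOWN_ADVISORIES = {
    # component name -> versions known to be affected (hypothetical)
    "examplelib": {"1.0", "1.1"},
    "webwidget": {"2.3"},
}

def affected_components(bill_of_materials):
    """bill_of_materials: list of (name, version) pairs for one product.
    Returns the shipped components that match a known advisory."""
    return [
        (name, version)
        for name, version in bill_of_materials
        if version in KNOWN_ADVISORIES.get(name, set())
    ]
```

Even this crude matching only works if the bill of materials is complete, which is exactly why hard-to-track freeware is the most problematic component type.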
The DevOps model promises to allow software companies to ship updates to their customers significantly faster (i.e., more frequently). A key requirement for this is a high degree of test automation: this does not only apply to functional testing, it is at least as important for all security testing activities – which are still often done manually or only semi-automated.
In a traditional Secure Development Lifecycle (SDL), as, e.g., used by Microsoft or SAP, security testing is usually applied in two phases:
This separation is usually not adequate for the fast release cycles of the DevOps strategy. Thus, we need to evaluate additional security testing strategies as well as investigate how tools that have been shown to be successful in more traditional development models can be adapted to fit the needs of a DevOps workflow.
A successful integration of security testing tools and strategies (see, e.g., [2] and [3] for an overview of security testing techniques and strategies) into a DevOps workflow (to achieve SecDevOps or DevSecOps) requires that the security experts listen to the requirements of the development and operations experts and adapt the security testing tools to their needs. This affects various phases of an agile SDLC:
Development: To avoid vulnerabilities right from the beginning, developers should have access (and use) tools that support them to avoid insecure programming patterns. Such (usually static) tools should be integrated into the development environment (IDE) and provide instant feedback – similar to the spelling and grammar corrections in modern word processors.
Build: On the build (and commit/repository) servers, two strategies should be implemented:
Milestone Releases: The previously discussed test strategies are targeted towards developers. Thus, the focus is on tools that are easy to use and require no or only low security expertise. While such tools usually help to detect the vast majority of vulnerabilities early in the development, they might miss rare but severe issues that are hard to detect (or to analyze). Thus, the regular but not daily (e.g., every three months) milestone releases should be used for applying more thorough security tests (e.g., penetration tests) that require a high level of security expertise. Issues found during these tests should be fixed (and, thus, effort needs to be planned) during the following milestone release cycle.
Operations: The fact that one team is developing and operating an application enables the seamless extension of security testing, monitoring, and enforcement across the whole application lifecycle:
Of course, not all security tests are useful or necessary for all products. A risk-based assessment (including the legal and regulatory requirements) that weighs the operational risks against the development and operational costs and the expected revenue should form the basis of an application-specific security test plan.
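The "instant feedback" static checks recommended for the development phase can be sketched with Python's standard `ast` module: flag calls to `eval()`, a common insecure programming pattern, at their exact source position. Real IDE-integrated checkers are far more sophisticated; this only shows the principle of cheap, fast checks with precise locations.

```python
import ast

# Minimal sketch of an IDE-style static check: report the source
# position of every eval() call in a piece of code, so the editor can
# underline it immediately - like a spelling correction.

def find_eval_calls(source):
    """Return (line, column) positions of eval() calls in a code string."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append((node.lineno, node.col_offset))
    return findings
```

Because such a check runs in milliseconds and produces essentially no false positives, it fits the tight runtime and false-positive budgets of the development phase in the table above.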
| | Development | Commit | Nightly Build | Milestone Release | Operations |
|---|---|---|---|---|---|
| automation | high | high | high | low | high |
| acceptable max. runtime | < 500ms | < 10s | < 12h | several days | < 1ms |
| acceptable false positives | very low | very low | low | high | medium |
| acceptable false negatives | high | high | medium | low | medium |
| integration | IDE | code repository | build server | not necessary | runtime (application server) |
| required security expertise | low | low | medium | high | medium |
| user group | developer | developer | developer (and security experts) | security experts | developer, sysadmins |
| example | Find Security Bugs | BDD | Checkmarx, Coverity, HP Fortify, HP WebInspect, Seeker (potentially using a focused test scope/configuration to ensure a low false positive rate) | Checkmarx, Coverity, HP Fortify, HP WebInspect, Seeker (full/large test scope), manual penetration tests (and tools like OWASP ZAP) | HPE Application Defender, Qualys |
The table above summarizes the different requirements for security test tools and methods as part of a (Sec)DevOps strategy. Depending on the phase of the development and operations lifecycle in which security testing is applied, different configurations/combinations of security test tools are required. In particular in the early phases of development, tools need to be easy to use for developers who are security aware but do not have deep security expertise (thus, the tools need a low false positive rate, fast turn-around times, etc.). Similarly, tools used during operations should not require deep security expertise during normal operations (thus, they share many of their requirements with the tools targeting developers). In contrast, the security tests for milestone releases should be done by security experts and, thus, can draw on the full range of security testing tools and methods (ranging from manual design reviews, SAST/DAST/IAST, vulnerability scanners, and security proxies to human expert knowledge).
Of course, in addition to the requirements targeting DevOps teams, the well-known requirements for selecting security test tools should be considered. Finally, secure development and operations covers all parts of the development, operations, and maintenance lifecycle, i.e., it starts already with the training (and product design) and also includes a comprehensive response and patch & update process.
DevOps and secure software development are not contradictory. Still, the introduction of security testing tools needs more planning (and more adaptation of the tools) than introducing the same or similar tools in traditional development strategies. Following the “Customer First” strategy, security experts should focus on the needs of the software developers: the goal of security testing tools is to support developers in developing secure software; they should not be seen as an obstacle.
Apache Cordova is a widely used framework for writing mobile apps that follows the “hybrid app” paradigm. A hybrid app is a mobile app that is partly implemented in platform-neutral HTML5/JavaScript and partly in platform specific languages (e.g., Java or Objective C).
As part of our work on developing static analysis techniques for Cordova apps [1], we analyzed Cordova apps from Google Play: we took the Top 1000 apps (as ranked by Google in spring 2015) from Google Play and checked if these apps contain a `config.xml` file that belongs to the Cordova framework. Using this criterion, we could identify 50 Cordova apps. Thus, according to our analysis, only 5% of the Top 1000 apps are using Cordova.
How these apps use Cordova actually differs significantly. Many apps do, in fact, use Cordova as intended: The app is written in JavaScript, the Java part is unmodified and simply loads the entry-point HTML file which is set in the Cordova configuration file. Some apps, however, significantly change the Java part. The most extreme apps do not ship any HTML or JavaScript code in the APK and simply specify one hard-coded URL in Java to be loaded, which is often just the mobile version of their website, hosted in a remote location.
Some apps chose a middle ground: They may first load Activities like regular Android apps, and may embed HTML and JavaScript code only into some parts of the app, where Cordova Plugins may be used to communicate back and forth. Such irregular Cordova apps are the exception and are significantly harder to statically analyze, as they change the way Cordova is integrated into the app.
Many plugins take callback functions and pass them through to their `exec` call. Especially for plugins which do not simply yield a result that can be passed to the success callback, e.g., when the plugin is just supposed to execute a command, there is often no fail callback provided, either. Some of these actions could indeed fail, which would then not get propagated to the app code itself, because no fail callback has been passed.
Plugins generally have the character of libraries, where the JavaScript part rarely does more than encapsulate the `exec` calls. There are also no other mechanisms used to conduct cross-language calls. The official Cordova plugins adhere to these guidelines. Our work is intended for this kind of plugin.
Anyone can write Cordova plugins, and not all developers adhere to these guidelines. One plugin we found, apparently written just for this specific app, does not contain any JavaScript code; instead, the `exec` calls are done right in the app code itself. Other plugins represent the other extreme and implement quite a bit of the plugin logic on the JavaScript side, which could just as well have been written in Java. Yet other plugins do not even use `exec` to communicate with their Java side, but use methods which are otherwise used internally in the Cordova framework. The reason for these unnecessary workarounds remains unclear.
One plugin found in those Cordova apps is special in a different way: Combining Java and JavaScript was apparently not enough, as the APK contained some native libraries accessed via JNI to do some basic arithmetic calculations. As JSON strings get passed from the JavaScript part via Java to the C part, the attack surface gets even larger.
I am looking forward to my first OWASP meeting in Sheffield (it is actually the second meeting of the Sheffield OWASP Chapter). I will give a talk on my experiences in introducing and implementing a security testing strategy within a large (more than 25000 developers) and international software development team. There will be even more interesting talks (as well as free beer and pizza).
For example,
Looking forward to a great OWASP meeting in Sheffield (and I am sure it will not be the last one)!
Looking forward to a great week in Trento attending the SECENTIS PhD Winter School on Security and Trust of Next Generation Enterprise Information Systems. It will be a week full of interesting lectures on building security and privacy-aware enterprise systems.
The topic and speakers are:
I will give a talk entitled Static Analysis - The Workhorse Of A End-To-End Security Testing Strategy in which I provide a broad overview of static program analysis and also report on the experiences in using static analysis at SAP.
Got lost in the overwhelmingly large amount of security testing research? Do not worry, there is help.
We are happy to announce that our survey on security testing has been published. We do not only provide an overview of the current state of the art in security testing research, we also explain the role of security testing in a secure software development process and discuss the various security testing approaches in the context of a multi-tiered web application.
In particular, we discuss the following security testing techniques:
If you ever tried to enforce a network policy in a large data center, i.e., needed to configure the different firewalls and routers, you will agree that this is a tedious and error-prone task. This is even more true if you need to maintain and change those policies over a long period of time. Understanding the actual policy enforced in a non-trivial network setup (e.g., a data center with multiple fall-back connections) is even harder.
One way of ensuring that important security (access control) properties of a network hold and are not changed during reconfiguration is testing. We developed a specification-based (model-based) testing approach for network policies that makes it possible to represent network policies in a high-level language, to optimize the policies, and to generate test cases that can be executed directly in a real-world network.
Our approach supports complex policies that require network translation (NAT) or port forwarding as well as stateful and stateless protocols (thus, we capture the concept of widely used network filters such as iptables). For the following example, let’s limit ourselves to simple packet filters and a very simple network policy that is illustrated by the following table:
| source   | destination | protocol | port | action |
|----------|-------------|----------|------|--------|
| internet | dmz         | udp      | 25   | allow  |
| internet | dmz         | tcp      | 80   | allow  |
| dmz      | intranet    | tcp      | 25   | allow  |
| intranet | dmz         | tcp      | 993  | allow  |
| intranet | internet    | udp      | 80   | allow  |
| any      | any         | any      | any  | deny   |
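As a rough illustration (and not the paper’s Isabelle/HOL formalisation), the policy table above can be read as a first-match rule list. The following Python sketch encodes it that way; the `Rule` type, the `ANY` wildcard, and the `evaluate` helper are illustrative assumptions, not part of our formalism:

```python
from dataclasses import dataclass

ANY = "any"  # wildcard matching any value

@dataclass(frozen=True)
class Rule:
    source: str       # source sub-net or ANY
    destination: str  # destination sub-net or ANY
    protocol: str     # "tcp", "udp", or ANY
    port: object      # port number or ANY
    action: str       # "allow" or "deny"

# The policy table from above, in order; the final rule is the
# catch-all default deny (D_U in the formal notation).
POLICY = [
    Rule("internet", "dmz", "udp", 25, "allow"),
    Rule("internet", "dmz", "tcp", 80, "allow"),
    Rule("dmz", "intranet", "tcp", 25, "allow"),
    Rule("intranet", "dmz", "tcp", 993, "allow"),
    Rule("intranet", "internet", "udp", 80, "allow"),
    Rule(ANY, ANY, ANY, ANY, "deny"),
]

def matches(field, value):
    """A rule field matches a packet value if it is the wildcard or equal."""
    return field == ANY or field == value

def evaluate(policy, src, dst, proto, port):
    """Return the action of the first matching rule (first-match semantics)."""
    for r in policy:
        if (matches(r.source, src) and matches(r.destination, dst)
                and matches(r.protocol, proto) and matches(r.port, port)):
            return r.action
    return "deny"  # unreachable here, thanks to the catch-all rule
```

For example, `evaluate(POLICY, "internet", "dmz", "tcp", 80)` yields `"allow"`, while any packet not covered by the first five rules falls through to the default deny.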
In our example, we have three sub-nets (internet, intranet, and dmz) and the intranet (i.e., the internal network) should be protected from the outside network (i.e., the internet). This representation is rather similar to the usual textbook representation of network policies. In our formalism (or DSL), we can express this policy as follows: \[ \newcommand{\cmd}[1]{\operatorname{\color{blue}{#1}}} \newcommand{\fw}[1]{\operatorname{#1}} \begin{align} \cmd{definition}~\mathrm{TestPolicy} \cmd{where}\\ TestPolicy &= \fw{allow\_port} \fw{udp} 25 \fw{internet} \fw{dmz} \\ &\oplus \fw{allow\_port} \fw{tcp} 80 \fw{internet} \fw{dmz} \\ &\oplus \fw{allow\_port} \fw{tcp} 25 \fw{dmz} \fw{intranet} \\ &\oplus \fw{allow\_port} \fw{tcp} 993 \fw{intranet} \fw{dmz} \\ &\oplus \fw{allow\_port} \fw{udp} 80 \fw{intranet} \fw{internet} \\ &\oplus D_U \end{align} \] where \(D_U\) is the policy that denies all traffic.
Using this policy definition, we can state our test specification as follows: \[ \begin{align} \cmd{test\_spec} \fw{test:}~~ & P~x \Longrightarrow FUT~x = \mathrm{TestPolicy}~x \end{align} \] This specifies that, for all packets \(x\) satisfying the predicate \(P\), the firewall implementation under test (\(FUT\)) should behave as the specification. Thus, \(P\) allows us to specify additional test constraints, for example, not to generate test cases for traffic within a single sub-net (which, most likely, cannot be monitored easily).
Using our automated test case generation approach, which is implemented on top of HOL-TestGen, we obtain test cases that check both that packets that should be accepted (i.e., pass through the network) are actually accepted and that all other packets are denied. A (simplified) test case for a denied packet looks as follows: \[ FUT(1,((8,13,12,10),6,\fw{tcp}),((172,168,2,1),80,\fw{tcp}),data)= \lfloor\fw{deny}()\rfloor \]
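The idea behind the test specification can be sketched in plain Python as well (purely an illustration; `subnet_of`, `spec`, the mock `fut`, and the host addresses below are assumptions, not part of HOL-TestGen): enumerate candidate packets, use \(P\) to keep only cross-subnet traffic, and check that the firewall under test agrees with the policy specification on every generated test case:

```python
import itertools

def subnet_of(host):
    # Toy mapping from host addresses to sub-nets; purely illustrative.
    return {"8.8.8.8": "internet",
            "172.16.0.2": "dmz",
            "10.0.0.5": "intranet"}[host]

def spec(src, dst, proto, port):
    """The policy specification (TestPolicy): allow-list plus default deny."""
    allowed = {("internet", "dmz", "udp", 25),
               ("internet", "dmz", "tcp", 80),
               ("dmz", "intranet", "tcp", 25),
               ("intranet", "dmz", "tcp", 993),
               ("intranet", "internet", "udp", 80)}
    key = (subnet_of(src), subnet_of(dst), proto, port)
    return "allow" if key in allowed else "deny"

def P(src, dst):
    # Test constraint: only consider packets crossing sub-net boundaries,
    # since intra-subnet traffic is hard to monitor.
    return subnet_of(src) != subnet_of(dst)

def fut(src, dst, proto, port):
    # Stand-in for the real firewall under test; here it simply mirrors
    # the specification, so the conformance check below passes.
    return spec(src, dst, proto, port)

# Generate test cases: all cross-subnet (src, dst, proto, port) combinations.
hosts = ["8.8.8.8", "172.16.0.2", "10.0.0.5"]
test_cases = [(s, d, p, n)
              for s, d in itertools.permutations(hosts, 2)
              for p in ("tcp", "udp")
              for n in (25, 80, 993)
              if P(s, d)]

# Conformance check: FUT must agree with the specification on every case.
for tc in test_cases:
    assert fut(*tc) == spec(*tc)
```

In contrast to this naive enumeration, HOL-TestGen derives the test cases symbolically from the formal policy, which scales to realistic rule sets and yields provably exhaustive case splits.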
This approach is described in detail in our journal paper [1], and the implementation is available as part of the HOL-TestGen distribution. Our approach has inspired colleagues at Microsoft Research to develop a similar approach using Z3 that is now used for testing the configuration of the Azure data centers.
[1] Brucker, A.D., Brügger, L., & Wolff, B. (2015). Formal firewall conformance testing: an application of test and proof techniques. Software Testing, Verification and Reliability, 25(1), 34-71. DOI: 10.1002/stvr.1544
Everybody developing software should, in fact, accept the challenge to develop secure software. This is not an easy challenge: it requires an end-to-end security development life-cycle (SDLC) that nicely integrates with your software development processes.
Security testing is an important part of any security development life-cycle (SDLC) and, thus, should be a part of any software development life-cycle. Still, security testing is often understood as an activity done by security testers in the time between “end of development” and “offering the product to customers”. Fixing bugs that late in the development process is not only expensive, it also conflicts with agile development in general and the DevOps model in particular.
SAP’s Security Testing Strategy enables developers to find security vulnerabilities early by applying a variety of different security testing methods and tools. When you want to integrate security testing into your (agile) software development, most people emphasize how important a security awareness program for both developers and managers is. While security awareness is important, our experience is that developer awareness is even more important! Listen to your developers and help them. Recall, building secure systems is much more difficult than finding a successful attack.
Do not expect your developers to become security experts (or penetration testers) – expect them to become security aware and help them with developer-friendly tools that spot security vulnerabilities early during development and that are nicely integrated into the tools and workflows used by the developers. And, finally, make the process of fixing issues as easy and painless as possible. The effort of fixing an issue should not be the main reason for not fixing it. If you want to learn more about SAP’s Security Testing Strategy, you can watch my presentation at OWASP AppSec 2014 on YouTube (slides are also available).