Dynamic Testing of Software for Aviation
Software for aircraft systems, from navigation to the entertainment system, must be proven free of unwanted reactions to every possible input, whether predicted by the designers or not. Safe operation of an aircraft depends on every component working correctly not only when it receives expected data, but also when the unexpected happens.
Static testing is widely used to determine that aircraft systems and components can operate in unusual situations. Given the tightly controlled environment of aircraft systems, this has historically been sufficient: if each and every component in a theoretically enclosed system is well designed and manufactured, then there is no expectation of out-of-bounds output and no high-level requirement that other devices on that network be tested for resilience should they receive extraordinary, unwanted or corrupted data.
Static testing does extend into demonstrating resistance to unwanted input, but it covers only the range of unwanted input that can be imagined as possible or likely. A very good test designer may extend this range to cover additional input types, but is limited in the time and resources available for documenting test processes, and simply must prioritize so that test budgets and timelines are met.
System and component resistance to unexpected data input or, heaven forbid, attack is best tested and ideally certified through dynamic testing. However, dynamic testing can also fail to find and document software vulnerabilities for the same budget and time limit reasons. It is at the point that the component is otherwise ready to ship that the most in-depth dynamic testing should occur, and yet products most often reach this point of development at or after their ship date.
Additionally, most dynamic testing tools depend upon a known range of specific tests that have been developed by experience with a particular protocol.
Adding stress to this situation, the trend toward highly integrated avionics systems has made it at least theoretically possible for aircraft control systems to be accessed through external communications or even in-flight entertainment systems. The tools most often used to discover any such vulnerabilities are a type of dynamic testing tool known as a fuzzer. Fuzzers have been in use in the hacking community for well over ten years and are the tools most often responsible for discovering zero-day vulnerabilities in software.
In this article we would like to make the case that avionics designers should use an enterprise version of the same kind of tools that will be used against their systems after they are released into 'the wild'.
Developers produce applications that, to a greater or lesser degree, exchange information by adhering to a protocol as closely as possible. QA then tests application functionality against that protocol in the perfect world of the testing laboratory. Given the numerous ways programmers can make mistakes, looking for security vulnerabilities in a piece of software should be an integral part of the development process. Strangely, that is not always the case. Why? Because testing the security of a particular product can be an expensive proposition, and developers often weigh that expense against the other costs of releasing the product to its customers. Because of this, even software developed in an environment stringently cognizant of security risks is most likely released without full testing.
Naturally when the application is released, hackers will bash away at it with every possible corrupted form of the protocol to create an error in the application. By pushing at the edges of the envelope of the protocol they may find a way to trip up the application and create a buffer overflow, the most frequently leveraged design error.
How are hackers finding buffer overflow opportunities missed during development and standard pre-release QA? A wide range of tools has been developed by the hacker community to enable the rank and file to find new exploits. These tools, fuzzers, work by creating and feeding an application a wide range of unexpected or corrupted inputs, looking for a combination that will break it. The production of these tools has become a small industry of its own. The QA world has attempted to adapt these rough-and-ready hacker tools into their test processes with some success, but also with many headaches. Most of these hacker-developed fuzzers are focused on a single type of code weakness, a single protocol, or even a single application.
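The core idea behind such tools can be sketched in a few lines. The following is a minimal mutation fuzzer, not any particular tool's implementation; `parse_message` stands in for whatever parser or service is under test:

```python
import random

def mutate(data: bytes, flips: int = 4) -> bytes:
    """Corrupt a known-good sample by overwriting a few random bytes."""
    buf = bytearray(data)
    for _ in range(flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def fuzz(parse_message, seed: bytes, iterations: int = 1000):
    """Feed corrupted variants of a valid input to the target,
    recording every input that provokes an unhandled error."""
    crashes = []
    for _ in range(iterations):
        sample = mutate(seed)
        try:
            parse_message(sample)
        except Exception as exc:   # an unhandled error: a potential bug
            crashes.append((sample, exc))
    return crashes
```

A real fuzzer drives a running service or binary and watches for crashes externally, but the loop is the same: mutate, deliver, observe.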
Translation of requirements during application development is the root cause of most programming errors. During the development of a typical avionics application, a project manager translates the requirements into tasks for the programming team; team members translate those into individual programming assignments; each programmer translates an assignment into the proper syntax of a programming language written by someone else; and the language's compiler or interpreter translates that into the corresponding machine code. Every one of those translations is a source of potential programming errors during the design stage.
Off-by-one errors, programming language use errors and integer overflows are all examples of errors generated by a programmer while translating a concept into the proper algorithm. For example, to hold 'n' items that are each 'm' bytes long, the programmer may tell the program to allocate n*m bytes. If n*m is larger than the biggest number that can be represented, less memory will be allocated than intended, which may lead to a buffer overflow. In another instance, if a programmer assumes that a variable contains only positive integers, but the integer in question is actually a signed integer, arithmetic operations can overwrite the leftmost bit and make the result a negative number, possibly leading to exploitable behavior.
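Both failure modes above are easy to demonstrate. Python integers do not wrap, so this sketch masks results to 32 bits to mimic the fixed-width arithmetic a C program would perform:

```python
MASK32 = 0xFFFFFFFF

def alloc_size(n: int, m: int) -> int:
    """n * m as a program using a 32-bit unsigned size would compute it."""
    return (n * m) & MASK32

def to_int32(x: int) -> int:
    """Reinterpret a 32-bit pattern as a signed (two's complement) value."""
    x &= MASK32
    return x - 0x100000000 if x & 0x80000000 else x

# 65536 items of 65537 bytes each should need over 4 GB...
true_size = 0x10000 * 0x10001        # 4295032832 bytes
wrapped   = alloc_size(0x10000, 0x10001)   # wraps to just 65536 bytes

# ...and adding 1 to the largest positive signed value flips the sign bit.
flipped = to_int32(0x7FFFFFFF + 1)   # -2147483648
```

The allocation receives 65536 bytes while the copy loop believes it has over four gigabytes of room: the classic setup for a buffer overflow.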
Buffer overflows, vulnerabilities caused by an application not checking space availability before copying untrusted data into pre-allocated space in system memory, end up overwriting the contents of memory outside the buffer. As a result, the next time the program looks at that memory it sees data from the overflow instead of the original data. If the program tries to use values from that area, it will most likely not see what it expects; the consequences can range from a crash of the program to more dangerous outcomes such as denial of service (DoS) or, worse, execution of malicious code planted by someone else. A stack-based buffer overflow can allow attackers to execute code on the victim's computer, as it overwrites memory addresses that will be used later, while a 'stack overflow' typically results in a DoS, as it tries to write to memory that isn't available.
Multi-protocol, environment-variable 'Smart' fuzzers like beSTORM are vital for finding buffer overflow weaknesses not only because they automate and document the process of delivering corrupted input, but also because they closely watch for unexpected responses from the application. For example, beSTORM will try packets with malformed headers, manipulating packet content and providing the kind of data that the application may be looking for: &, <, >, full stops and commas for email applications, or typical URL symbols for HTTP servers. Doing this manually is simply unviable.
Device protocol APIs allow applications to talk to a device over industry-standard protocols. Developers simply identify the device and then open a communication channel to it. Opening a channel prompts for access authorization, a critical step that helps prevent programs from accidentally or maliciously communicating with one or more devices without the user's awareness. Once access is granted, the program can communicate with the device, including starting long data transfers.
Hackers achieve better results by attacking the actual implementation of the protocol: generating attack vectors that target basic coding errors such as mishandled input validation or boundary checking (defensive programming errors), design flaws (logical, design specification and similar), and the implementation of the protocol itself. The main problem is that the number of possible attack combinations increases by a significant factor, making the time-to-result (a crash or similar) impractical in some cases. This problem is solved by using advanced algorithms that attempt the attacks most likely to cause an error first, and then proceed to cover the entire combination space.
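The likely-first ordering can be sketched simply. The scoring heuristic below is invented for illustration (long inputs and embedded NUL bytes have historically exposed boundary and string-handling bugs); a real engine would use far richer models:

```python
def score(vector: bytes) -> int:
    """Toy likelihood heuristic: rank long inputs and inputs containing
    NUL bytes first, since those often trigger boundary errors."""
    return len(vector) + 100 * vector.count(b"\x00")

def prioritized(vectors):
    """Run the most promising attack vectors first, then the remainder,
    so the full combination space is still eventually covered."""
    return sorted(vectors, key=score, reverse=True)
```

Nothing is discarded: the ordering only moves the probable crashers to the front of the queue, so results arrive early while exhaustive coverage is preserved.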
Still, these algorithms are not easy to develop. On some occasions, trying to exploit a large buffer or sending input of the wrong data type makes for easy catches, but the further the fuzzer advances in its search, the more the efficiency of its algorithms matters.
The use of more advanced manipulations based on the two basic types (value manipulation and protocol manipulation) also considerably impacts the capability of today's fuzzers to provide results. For example, trying to exploit a logical flaw in the program by sending a login request twice, and then adding that to the melting pot of attacks, increases both the number of combinations required and the success rate. Being able to work with more advanced protocols that require the fuzzer to wait for a response before sending the next request (basically establishing sessions with the attacked application: session-based fuzzing) is another step in fuzzing.
Some more advanced manipulation techniques based on these basic sets further increase bottom-line results and the success of the fuzzing. One such technique is logic manipulation: based on protocol manipulation, the fuzzer attempts to find logical programming errors that result in a potential vulnerability. Another is session manipulation (also based on protocol manipulation), which manipulates the actual session. For example, sending a request for a key to be issued and then, when it is received, using a different one or proceeding without it can provoke other types of potential errors.
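A session-manipulation pass might look like the following sketch. The wire commands (`REQUEST_KEY`, `LOGIN`, `FATAL`) and the `transport` object with `send()`/`recv()` methods are hypothetical stand-ins for whatever protocol is under test:

```python
def session_fuzz(transport, mutate_key):
    """Session manipulation sketch: request a session key, then deliberately
    continue with a wrong key and then with no key at all, watching how the
    server reacts to each broken session."""
    transport.send(b"REQUEST_KEY")
    key = transport.recv()                    # wait for the response first
    for bad_key in (mutate_key(key), b""):    # wrong key, then missing key
        transport.send(b"LOGIN " + bad_key)
        reply = transport.recv()
        if reply.startswith(b"FATAL"):
            yield bad_key, reply              # record session-breaking input
```

Because each request depends on the previous response, this is exactly the session-based fuzzing described above: the fuzzer must hold a live conversation with the target rather than fire independent packets.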
The main challenge faced by second-generation fuzzers when they employ these new techniques is the time required to cover the combination space exhaustively. Some exotic vulnerabilities in a product can be located at the very end of the combination space. Developing the technology to try to find the most likely attack vectors to trigger likely vulnerabilities in the shortest time possible is the solution.
beSTORM, perhaps the only multi-protocol, environment-variable 'Smart' fuzzer, addresses protocol breaches by testing over 50 protocols and 'auto-learning' new or proprietary protocols, while providing automated binary and textual analysis, advanced debugging and stack tracing.
Black box testing, commonly referred to as 'fault injection', works by automatically feeding a program multiple input iterations that are specially constructed to trigger an internal error indicative of a bug, and potentially crash it. It can be applied to a network service as readily as to a CPU, a cell phone, program parameters, an API, a Web browser, or a file type.
Testers with no knowledge of the internal workings of the application being tested go about their work by first sniffing traffic for the target protocols, generating input iterations from what they observe, and then driving 'random' or 'garbled' messages, or 'legal' mouse and keyboard events, at the application until vulnerabilities emerge. Application monitoring can vary from a watchdog that checks whether the program is still running, to a remote check that the service is still available and responding, all the way to more advanced techniques such as watching with a debugger for an anticipated exception. Through value manipulation, only a specific data set is tested for specific value changes; through protocol manipulation, the entire protocol structure implementation can be tested. Or both.
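The simplest of those monitors, the remote liveness check, fits in a few lines. This sketch assumes the target is a TCP service; the host and port are whatever the fuzzing harness is pointed at:

```python
import socket

def service_alive(host: str, port: int, timeout: float = 2.0) -> bool:
    """Remote liveness check: after each batch of fuzzed input, confirm the
    service still accepts connections. A refusal suggests the last batch
    crashed it, so those inputs should be preserved for analysis."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A harness typically interleaves this check with delivery, bisecting the last batch of inputs whenever the check starts failing.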
The solution is to catch application flaws during development using the best fuzzer you can get - when correction is relatively easy and far less expensive.
By applying automated protocol-based fuzzing techniques, beSTORM, a powerful automated black box auditing tool, tries virtually every attack combination, intelligently starting with the most likely scenarios, and detects application anomalies that indicate a successful attack. This way, security holes can be found in the application far faster, without brute-force testing and almost without any user intervention. Easily scalable, beSTORM can use multiple processors or multiple machines to parallelize the audit and substantially reduce the testing duration.
Beyond Security's beSTORM is an exhaustive fuzzer. A powerful black box auditing tool, it is designed to find security weaknesses in protocol implementations and uses formal RFC definitions to create an attack language, which in turn is used to identify vulnerabilities in the tested application. Although it supports testing for predetermined test cases and tries to exploit the more likely vulnerabilities before continuing with the full test, its main objective is to allow for the most complete testing possible, covering as much of the protocol space as it can. With beSTORM it is also possible to write custom modules for proprietary protocols using XML.