Security Testing the Internet of Things (IoT)
The Internet of Things (IoT) encompasses products that are connected to the internet or to each other. Any product that requires a connection to a home, car or office network to deliver its complete set of features falls under this broad term. In fact, cars themselves are now a component of the IoT, as they exchange data with the manufacturer routinely if not continuously.
IoT devices collect data during use and often share that information with their manufacturers without users being aware that it is being collected. In many cases product functions depend on a connection to the internet and may be controlled to a great degree by the manufacturer. This concept of making all components of our increasingly complicated lives communicate with each other, with us, and with internal and external software applications is what IoT is all about.
Manufacturers of every kind of electronic or electrical device are rushing to add features that require a connection to the internet. In their rush to market, these companies, many of which have no prior experience with networked devices, are bound to overlook the complications of hardware and software security design and construction in their haste to get the newest, coolest function working at the lowest cost.
It is nearly a rule that the makers of products testing these new frontiers apply the same guidelines to their selection of processing hardware as they do to any other components they purchase. The oldest chips, whose designs were paid off long ago and are now dirt cheap, are attractive building blocks for devices that need only limited capabilities or capacities.
Testing of the software written for a household appliance or child's toy has only one goal: confirm that it works and will be easy to set up (with lots of default selections, even passwords). Security is an afterthought at best.
The hardware (chipset) used in most new products is very old and often has multiple known vulnerabilities. The software included with IoT devices, which rarely gets any in-depth security testing, almost always has its own set of security issues. The result is that tens of thousands, and soon hundreds of millions, of appliances, devices and toys being installed on home and business networks are ripe for hacking. Once a vulnerability is discovered in a widely distributed product line, thousands or potentially hundreds of thousands of homes and businesses will be open to having their IoT devices hijacked, potentially exposing their entire network to view and attack.
Businesses also see the importance of integrating IoT-connected devices to deliver new function, reduce cost and improve efficiency.
IoT generates and shares large amounts of data, and the individual devices are susceptible to malicious attacks, data misuse and forced data breaches. This makes a strong case for dynamic testing and for code, logic and vulnerability assessment at the product development phase itself.
According to Gartner, the number of Internet-connected devices is expected to reach 50 billion by 2020. While IoT is going to improve life for many, the number of security risks that consumers and businesses face will increase exponentially.
Stakeholders in the IoT domain face privacy issues, often without being aware of the situation. As such, IoT devices have come under increasing scrutiny in recent months over poor security controls and numerous vulnerabilities.
As IoT-connected devices become an integral part of our daily lives, it is crucial that they undergo thorough testing and establish a minimum baseline for security.
If any testing is done at all, static testing is the most frequently implemented process. But static testing is not designed to find vulnerabilities that exist in the 'off the shelf' components, such as the processors and memory on which the application will run.
Dynamic testing, on the other hand, can expose both code weaknesses and any underlying defects or vulnerabilities introduced by hardware that may not be visible to static analysis. Dynamic testing is often a more pragmatic way of testing IoT devices and plays a pivotal role in finding vulnerabilities created when new code runs on old processors. Manufacturers who purchase hardware and software from others must therefore do dynamic testing to ensure the combined product is secure.
Developers produce applications that, to a greater or lesser degree, exchange information by adhering to a protocol as closely as possible. QA then tests application functionality against that protocol in the perfect world of the testing laboratory. Given the numerous ways programmers can make mistakes, looking for security vulnerabilities in a piece of software should be an integral part of the development process. Strangely, that is not always the case: testing the security of a particular product can be expensive, and developers often weigh that expense against the other costs involved in releasing the product to its customers. Because of this, even software developed in an environment stringently cognizant of security risks is most likely released without full testing.
Naturally, when the application is released, hackers will bash away at it with every possible corrupted form of the protocol to create an error in the application. By pushing at the edges of the protocol's envelope, they may find a way to trip up the application and create a buffer overflow, the most frequently leveraged design error.
How are hackers finding buffer overflow opportunities missed during development and standard pre-release QA? A wide range of tools have been developed by the hacker community to enable the rank and file to find new exploits. These tools, known as fuzzers, work by creating and feeding a wide range of unexpected or corrupted inputs, looking for a combination that will break the application. The production of these tools has become a small industry of its own. The QA world has attempted to adapt these rough and ready hacker tools into their test processes with some success, but also with many headaches. Most of these hacker-developed fuzzers focus on a single type of code weakness, a single protocol or even a single application.
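The core loop such tools share can be sketched in a few lines of Python. The mutation strategies and the toy target below are illustrative assumptions, not the design of any particular fuzzer:

```python
import random

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Return a corrupted copy of a seed input: flip a byte, truncate, or extend."""
    data = bytearray(seed)
    choice = rng.randrange(3)
    if choice == 0 and data:                  # flip one random byte
        i = rng.randrange(len(data))
        data[i] ^= rng.randrange(1, 256)
    elif choice == 1 and data:                # truncate at a random point
        del data[rng.randrange(len(data)):]
    else:                                     # append oversized junk
        data += bytes(rng.randrange(256) for _ in range(rng.randrange(1, 1024)))
    return bytes(data)

def fuzz(target, seed: bytes, iterations: int = 1000) -> list:
    """Feed mutated inputs to `target`, collecting any input that raises."""
    rng = random.Random(0)                    # fixed seed: reproducible runs
    crashes = []
    for _ in range(iterations):
        case = mutate(seed, rng)
        try:
            target(case)
        except Exception as exc:              # a crash-like failure worth triaging
            crashes.append((case, repr(exc)))
    return crashes

# Hypothetical target: "crashes" on any input longer than its 64-byte buffer.
def toy_parse(buf: bytes) -> None:
    if len(buf) > 64:
        raise ValueError("buffer overflow")

found = fuzz(toy_parse, b"GET / HTTP/1.0\r\n")
```

Even this naive mutator turns up the length bug within a few hundred iterations; real fuzzers differ mainly in how much protocol knowledge guides the mutations.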
In the case of IoT-connected devices, it is important for enterprises to identify traffic patterns and differentiate between legitimate and malicious ones. For instance, an employee may download an apparently genuine app onto a smartphone issued by his employer, without knowing that the app contains malware. Organisations must be prepared with the right processes to respond to such cases promptly.
Most IoT devices ship with default credentials, meaning known administrator IDs and passwords. Some devices also come with a built-in Web server that lets admins log in and manage the device remotely. This massive vulnerability makes it easy for hackers to misuse available confidential data. To avoid data leakage, a strict release process must be developed in which the initial settings of the device are tested and verified for vulnerabilities, any validated flaws are closed, and a "good-to-go" certification is issued by the compliance team before the device is brought to market.
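Checking a device inventory against a list of known vendor defaults is one concrete step in such a process. The credential list and inventory format below are illustrative assumptions; real default-password lists used by scanners contain thousands of entries:

```python
# A small sample of known vendor default credentials (illustrative only).
KNOWN_DEFAULTS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "12345"),
}

def flag_default_credentials(devices: list) -> list:
    """Return the devices still configured with a known default ID/password pair."""
    return [d for d in devices if (d["user"], d["password"]) in KNOWN_DEFAULTS]

inventory = [
    {"host": "10.0.0.12", "user": "admin", "password": "admin"},
    {"host": "10.0.0.15", "user": "ops",   "password": "S3cure!pass"},
]
risky = flag_default_credentials(inventory)   # only 10.0.0.12 is flagged
```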
Even after all this QA testing is done, buffer overflow tests, protocol breach tests and black-box testing should be performed to further reduce the chance of shipping vulnerabilities in the devices.
Translation of requirements during application development is the first cause of most programming errors. For instance, during the development of a smart fridge application, a project manager translates the requirements from the desired end result to the programming team, whose members translate them into individual programming assignments. The programmers then translate each assignment into the proper syntax of a programming language (itself written by someone else), which an interpreter or compiler translates into the corresponding machine code. All of these translations are potential sources of programming errors.
Off-by-one errors, programming language misuse and integer overflows are all examples of errors generated while translating a concept into a proper algorithm. For example, to hold 'n' items that are each 'm' bytes long, the programmer may tell the program to allocate n*m bytes. If n*m is larger than the biggest number that can be represented, less memory will be allocated than intended, which may lead to a buffer overflow. In another instance, if a programmer assumes a variable contains only positive values but the variable is actually a signed integer, arithmetic operations can overwrite the leftmost (sign) bit and make the result negative, possibly leading to exploitable behavior.
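Both failure modes can be demonstrated numerically. Python integers never overflow, so the sketch below simulates 32-bit arithmetic by masking, mimicking what a C program with 32-bit types would compute:

```python
UINT32_MAX = 2**32 - 1

def alloc_size_u32(n: int, m: int) -> int:
    """Compute n*m the way a C program using a 32-bit unsigned size would."""
    return (n * m) & UINT32_MAX

# 16,777,217 items of 256 bytes each: the true total is 2**32 + 256 bytes,
# but 32-bit arithmetic wraps around to just 256 -- far too small a buffer.
n, m = 16_777_217, 256
wrapped = alloc_size_u32(n, m)    # 256
true_size = n * m                 # 4_294_967_552

def as_int32(x: int) -> int:
    """Reinterpret a value as a signed 32-bit integer (sign bit = leftmost bit)."""
    x &= UINT32_MAX
    return x - 2**32 if x >= 2**31 else x

# Adding 1 to INT32_MAX sets the sign bit: the "positive" counter goes negative.
overflowed = as_int32(2**31 - 1 + 1)   # -2147483648
```

The first case allocates 256 bytes where over 4 GB were needed; the second turns a length or index check on a "positive" value into one on a large negative number.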
However, not all programming errors are created equal. Some allow attackers to gain something or obtain an ability they did not already have: they may be able to deny other users access to the program by crashing it, read information they should not be able to, or, in some cases, cause the program to execute any command they give it. These errors are vulnerabilities. Other errors, while they may have the same causes, give attackers no access they did not already have. So the first task for a vulnerability researcher is to determine whether a programming error is merely a bug or can lead to exploitation. If a bug can lead to exploitation, either by itself or in concert with other bugs, it is indeed a vulnerability.
Buffer overflows are vulnerabilities caused by an application failing to check space availability before copying untrusted data into a pre-allocated space in memory, overwriting contents of memory outside the buffer. The next time the program looks at that memory, it sees data from the overflow instead of the original data. If the program tries to use values from that area, it will most likely not see what it expects, with consequences ranging from a crash to more dangerous outcomes such as a DoS or, worse, execution of malicious code planted by the attacker. A stack-based buffer overflow can allow attackers to execute code on the victim's computer, as it overwrites memory addresses that will be used later, while a "stack overflow" typically results in a DoS, as the program tries to write to memory that isn't available.
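The overwrite mechanism can be simulated safely in Python by treating a bytearray as flat memory; the layout is a deliberately simplified illustration, not how any real runtime arranges its stack:

```python
# Simulated flat memory: a 16-byte buffer immediately followed by a 4-byte
# slot standing in for an adjacent value (e.g. a saved return address).
memory = bytearray(b"\x00" * 16 + b"\xde\xad\xbe\xef")

def unsafe_copy(mem: bytearray, data: bytes) -> None:
    """Copy without checking the 16-byte buffer bound -- the classic mistake."""
    mem[0:len(data)] = data          # no length check before the copy

unsafe_copy(memory, b"A" * 20)       # 4 bytes spill past the buffer
adjacent = bytes(memory[16:20])      # now b"AAAA", not the original value
```

Whatever the program later reads from the adjacent slot is attacker-controlled, which is exactly how a stack-based overflow redirects execution.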
Multi-protocol, environment-variable 'Smart' fuzzers like beSTORM are vital for finding buffer overflow weaknesses, not only because they automate and document the process of delivering corrupted input, but also because they closely watch for unexpected responses from the application. For example, beSTORM will try packets with malformed headers, manipulating packet content and providing the kind of data the application may be looking for: &, <, >, full stops and commas within email applications, or typical URL symbols for HTTP servers. This is simply unviable if done manually.
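A value-manipulation pass of this kind can be sketched as follows. The token sets and the variant strategies are illustrative assumptions, not beSTORM's actual generation logic:

```python
# Token sets a value-manipulation pass might inject (illustrative choices).
EMAIL_TOKENS = ["&", "<", ">", ".", ","]
URL_TOKENS = ["%00", "../", "#", "//", "?", "A" * 8192]

def field_variants(name: str, value: str, tokens: list) -> list:
    """Build malformed copies of one protocol field from a list of tokens."""
    cases = []
    for tok in tokens:
        cases.append(f"{name}: {tok}")           # replace the legal value
        cases.append(f"{name}: {value}{tok}")    # append to the legal value
        cases.append(f"{name}{tok}: {value}")    # corrupt the field name too
    return cases

cases = field_variants("Host", "example.com", URL_TOKENS)   # 18 variants
```

Multiplied across every field of every message in a protocol, the case count quickly reaches numbers no manual tester could cover.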
Device protocol APIs allow applications to talk to a device over industry-standard protocols. Developers simply identify the device and then open a communication channel to it. Opening a channel prompts for access authorization, a critical step that helps prevent programs from accidentally or maliciously communicating with one or more devices without the user's awareness. Once access is granted, the program can communicate with the device, including starting long data transfers.
Hackers achieve better results by attacking the actual implementation of the protocol: generating attack vectors that target basic coding errors such as mishandled input validation or boundary checking (defensive programming errors), design flaws (logical, design specification and similar), and the protocol implementation itself. The main problem is that the number of possible attack combinations increases by a significant factor, making the time-to-result (a crash or similar) impractical in some cases. This problem is solved by using advanced algorithms that attempt the attacks most likely to cause an error first, and then proceed to cover the entire combination space.
Still, these algorithms are not easy to develop. On some occasions, trying to exploit a large buffer or sending input of the wrong data type makes for easy catches, but the further the fuzzer advances in its search, the more the efficiency of its algorithms matters.
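The likely-first-then-exhaustive strategy reduces, at its simplest, to ordering the case queue by a scoring heuristic while still guaranteeing full coverage. The heuristic below (longer inputs first) is a toy assumption:

```python
import heapq

def prioritized(cases: list, score) -> list:
    """Order every case highest score first, so likely crashers run early
    but the whole combination space is still covered eventually."""
    heap = [(-score(c), i, c) for i, c in enumerate(cases)]
    heapq.heapify(heap)
    out = []
    while heap:
        _, _, c = heapq.heappop(heap)
        out.append(c)
    return out

# Toy heuristic: longer inputs are likelier to overflow a buffer.
ordered = prioritized([b"a", b"a" * 100, b"ab"], score=len)
# -> [b"a" * 100, b"ab", b"a"]
```

Real fuzzers replace `len` with much richer scoring (field type, observed responses, past crash data), but the queue-ordering principle is the same.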
The use of more advanced manipulations based on the two basic types (value manipulation and protocol manipulation) also considerably impacts the capability of today's fuzzers to provide results. For example, trying to exploit a logical flaw by sending a login request twice, and then combining that into the melting pot of attacks, increases both the number of combinations required and the success rate. Working with more advanced protocols that require the fuzzer to wait for a response before sending the next request (essentially establishing sessions with the attacked application, known as session-based fuzzing) is another step in fuzzing.
Some more advanced manipulation techniques based on the basic sets further increase bottom-line results and the success of the fuzzing. One such advanced technique is logic manipulation: based on protocol manipulation, the fuzzer attempts to find logical programming errors that result in a potential vulnerability. Another is session manipulation (also based on protocol manipulation), which manipulates the actual session. For example, requesting that a key be issued and then, when it is received, using a different one or proceeding without it can trigger other types of potential errors.
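Generating session-manipulation cases amounts to permuting a known-legal session script. The protocol commands below are hypothetical, and real tools apply many more transformations than the three shown:

```python
# A hypothetical legal session script for a key-issuing protocol.
LEGAL_SESSION = ["HELLO", "LOGIN user pass", "GETKEY", "USEKEY", "QUIT"]

def session_manipulations(script: list) -> list:
    """Generate session-manipulation cases: duplicate a step (e.g. a second
    login), drop a step (e.g. proceed without the key), or swap neighbours."""
    cases = []
    for i in range(len(script)):
        cases.append(script[:i + 1] + [script[i]] + script[i + 1:])  # duplicate
        cases.append(script[:i] + script[i + 1:])                    # drop
    for i in range(len(script) - 1):
        swapped = list(script)
        swapped[i], swapped[i + 1] = swapped[i + 1], swapped[i]
        cases.append(swapped)                                        # swap
    return cases

cases = session_manipulations(LEGAL_SESSION)   # 14 manipulated sessions
```

Among the generated cases are the double login and the "proceed without the key" session described above.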
The main challenge faced by second-generation fuzzers employing these new techniques is the time required to cover the combination space exhaustively. Some exotic vulnerabilities in a product may sit at the very end of that space. The solution is technology that tries the attack vectors most likely to trigger vulnerabilities in the shortest time possible.
beSTORM, perhaps the only multi-protocol, environment-variable 'Smart' fuzzer, addresses protocol breaches by testing over 50 protocols and 'auto-learning' new or proprietary protocols, while providing automated binary and textual analysis, advanced debugging and stack tracing.
Black-box testing is a technique that works by automatically feeding a program multiple input iterations, specially constructed to trigger internal errors indicative of bugs and potentially crash it. Commonly referred to as "fault injection", black-box testing can be applied to a network service as readily as to a CPU, a cell phone, program parameters, an API, a Web browser or a file type.
Testers with no knowledge of the internal workings of the application being tested start by sniffing traffic on the target protocols, generate input iterations from what they observe, and then drive 'Random' or 'Garbled' messages or 'Legal' mouse and keyboard events at the application until vulnerabilities emerge. Application monitoring can range from a watchdog that checks whether the program is still running, to a remote check that the service is still available and responding, all the way to more advanced techniques such as watching with a debugger for an anticipated exception. With value manipulation, only a specific data set is tested for specific value changes; with protocol manipulation, the entire protocol structure implementation is tested; the two can also be combined.
The solution is to catch application flaws during development using the best fuzzer you can get, when correction is relatively easy and far less expensive.
By applying automated protocol-based fuzzing techniques, beSTORM, a powerful automated black-box auditing tool, tries virtually every attack combination, intelligently starting with the most likely scenarios, and detects application anomalies that indicate a successful attack. This way, security holes can be found in the application far faster, without brute-force testing and almost without user intervention. Easily scalable, beSTORM can use multiple processors or multiple machines to parallelize the audit and substantially reduce the testing duration.
Beyond Security’s beSTORM is an exhaustive fuzzer. A powerful black-box auditing tool, it is designed to find security weaknesses in protocol implementations and uses formal RFC definitions to create an attack language, which in turn is used to identify vulnerabilities in the tested application. Although it supports predetermined test cases and tries to exploit the more likely vulnerabilities before continuing with the full test, its main objective is to allow for the most complete testing possible, covering as much of the protocol space as it can. With beSTORM, it is also possible to write custom modules for proprietary protocols using XML.