2010-03-24

Started a Fuzzy Voting System

If you look at the right-hand side of the "Crash-At-A-Time" blog, you should find a poll comparing different styles of fuzzers. Let me know if you think some category is missing.

2010-02-02

Fuzzing Coverage

The most important metric for comparing fuzzing approaches is the number of flaws the method finds. A simple approach, like a random fuzzer, requires virtually no investment, but it can only find about 10% of the vulnerabilities hiding in the software. A mutation-based fuzzer can find around 50% of the flaws. Once again the tool investment is minimal, but the cost of using the tools and integrating them into the development process is much greater. A fully model-based fuzzer can find as many as 80-90% of the flaws, but it can be the most expensive method to build and maintain.

The choice of tool is often based on integration capabilities and challenges, not coverage. If the protocol is standards-based, a model-based fuzzer is often the right choice. But especially with emerging technologies and agile development processes, the specifications needed to create model-based tests are not always available: there may be no consensus on the protocol specification, the specification may change rapidly, or in some special cases the specifications are proprietary and not available to testers. In such cases traffic captures can be used to create mutation-based fuzzers. When the number of interfaces is vast, random fuzzing might be a budgetary choice to get at least some fuzz test coverage before more advanced capabilities are introduced to the development process.
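The difference between the two cheap approaches above can be sketched in a few lines. This is a minimal illustration, not any particular tool: a random fuzzer generates bytes with no protocol knowledge at all, while a mutation-based fuzzer starts from a valid sample (such as a captured message) and perturbs only a small fraction of it, so most of the protocol structure survives and the input gets deeper into the parser. All names and parameters here are illustrative.

```python
import random

def random_fuzz(length, seed=None):
    """Purely random fuzzing: generate an input with no protocol knowledge."""
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(length))

def mutation_fuzz(sample, ratio=0.05, seed=None):
    """Mutation-based fuzzing: start from a valid sample and randomly
    replace a small fraction of its bytes, leaving the rest intact."""
    rng = random.Random(seed)
    data = bytearray(sample)
    for i in range(len(data)):
        if rng.random() < ratio:
            data[i] = rng.randrange(256)
    return bytes(data)

# A captured, well-formed message keeps most of its structure after
# mutation, so it exercises far more of the target than random noise.
sample = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"
fuzzed = mutation_fuzz(sample, seed=1)
```

The gap between 10% and 50% flaw coverage comes precisely from that retained structure: the mutated message passes the target's early sanity checks, while random bytes are usually rejected immediately.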

For more information on this topic, check our whitepaper here: http://www.codenomicon.com/products/coverage.shtml

2009-12-08

General Purpose Fuzzing

No matter how limited the coverage of mutation-based fuzzing is, there are practical use cases for it. Look at all the hacker tools available and you will notice that most fuzzers just grab files (jpeg, mpeg, doc...), traffic captures (pcap, pdml, ...) or XML-like schemas (schema, dfd, ...) and generate tests from these "specifications".

So why are traffic capture fuzzers so powerful? The reason is speed! I selected a wireless access point and listened to it with a network analyzer while it booted up. During the boot process I caught a number of packet traces, which I could then export directly into our brand new traffic capture fuzzer (or, in the case of XML protocols, into our XML fuzzer). Just five minutes later I had all the tests running, and fewer than 100 test cases later I had the wireless access point rebooting for the first time.
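The workflow above can be sketched in a few lines of Python. This is a hedged illustration of the general traffic-capture technique, not of any Codenomicon tool: each payload exported from the packet trace becomes a template, and each test case is that template with a handful of byte-level mutations applied before replay. The stand-in payloads and function names are mine.

```python
import random

def test_cases_from_capture(payloads, cases_per_payload=10, seed=0):
    """Sketch of a traffic-capture fuzzer: every payload exported from a
    packet trace is a template, and each test case is the template with a
    few byte-level mutations applied."""
    rng = random.Random(seed)
    for payload in payloads:
        for _ in range(cases_per_payload):
            data = bytearray(payload)
            for _ in range(rng.randint(1, 4)):      # a few mutations per case
                pos = rng.randrange(len(data))
                data[pos] ^= rng.randrange(1, 256)  # flip bits at a random position
            yield bytes(data)

# In practice the payloads would come from the analyzer's export (e.g. a
# pcap); these short stand-ins are purely illustrative.
captures = [b"\x01\x00\x5e\x00\x00\xfb", b"DHCPDISCOVER"]
cases = list(test_cases_from_capture(captures))
# 2 payloads x 10 cases each = 20 test cases, ready to replay at the target
```

The speed comes from the fact that no model is written at all: the capture is the model, and the whole pipeline from trace to running tests is automated.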

General purpose fuzzers expedite the process of creating fuzzers as they automate the entire test generation process. Test coverage is minimal compared to full model-based fuzzers, yet it more than suffices to find a few bugs in modern communication products. Highly recommended!

2009-08-12

The Fuzzer That Does Not Fuzz

From: http://www.codenomicon.com/news/newsletter/archive/2009-08.shtml

"The fuzzer that does not fuzz", was how Codenomicon test tools were described at Black Hat USA 2009. Without necessarily knowing it, the speaker made the biggest compliment to our tools anyone has given for years, if ever.

Before 1998, all the fuzzers that I, at least, knew about were entirely stateless and purely random. You should not really even consider this approach any more.

After 1998, in the PROTOS project, we described an approach that involved no randomness at all. We called it Robustness Testing, based on definitions we had heard used for such an approach by e.g. ETSI. Other names for similar approaches are grammar testing (used e.g. by Wurldtech) and syntax testing (used by testing specialists everywhere).

In PROTOS we noticed that if a protocol was modelled using dynamic and thorough state machines and message descriptions, there was no need for randomness any more. In fact, the incremental benefit of adding random tests to the systematically built tests was so insignificant that eventually we left them out entirely. Everything was carefully optimized. Test execution times were extremely fast (from minutes to a few hours), and test coverage was much better than with other techniques, even those in use today.

After almost ten years, block-based fuzzers were invented. They are a kind of cross-breed between the purely random, non-protocol-aware fuzzers of the early 90s and robustness testing tools that are entirely based on protocol models and systematic test generation. A block-based fuzzer adds enough protocol awareness to its minimalistic model and state diagrams to somewhat limit the number of random or semi-random changes it makes. Why did the inventors include any randomness at all? Because a fuzzer is supposed to do random testing - or perhaps that is just what people thought.
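The "enough protocol awareness" idea can be made concrete with a minimal sketch, assuming a toy type-length-value frame of my own invention (no real protocol or tool is implied). The fuzzer describes the message as blocks, injects an anomaly into the payload block, and recomputes the length block so the frame still parses; a purely random fuzzer would corrupt the length too and be rejected at the target's first sanity check.

```python
import struct

def block_based_case(payload_anomaly, msg_type=0x01):
    """Sketch of block-based test generation for a toy TLV frame:
    type (1 byte) | length (2 bytes, big-endian) | payload.
    The payload block carries the anomaly; the length block is
    recomputed so the outer structure stays valid."""
    return struct.pack(">BH", msg_type, len(payload_anomaly)) + payload_anomaly

# A few classic anomalies: empty, overlong, format string, NUL flood.
anomalies = [b"", b"A" * 1024, b"%n%n%n%n", b"\x00" * 64]
cases = [block_based_case(a) for a in anomalies]
```

Keeping the framing consistent while fuzzing the content is exactly the protocol awareness that separates a block-based fuzzer from the purely random tools of the early 90s; a full robustness-testing model extends the same idea to the entire message grammar and state machine.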

So when someone calls us a "fuzzer that does not fuzz", they are finally understanding the difference between a fuzzer and a Robustness Testing tool. Even though in 2008 we finally decided to also call our tools fuzzers, there really isn't anything fuzzy about them. And we are proud of that!

XML Fuzzing

Information regarding the XML flaws is on the Codenomicon pages here: http://www.codenomicon.com/labs/xml/

2009-03-17

Finally, a market study on fuzzing

Everyone has heard rumors of this or that company using fuzzing. Finally real data is available: Codenomicon has invited Forrester and Cigital to share their findings.

Check out the webinar http://www.codenomicon.com/resources/webcasts/20090331.shtml

2009-01-19

Fuzzing In The Media

So what do people think about fuzzing? If you are a security specialist, you might think that fuzzing is a cool tool in the hacker toolbox. Or if you are in quality assurance, you might think it is just another protocol modelling tool, but with the specific purpose of finding security flaws. But the rest? The answer is simple: they have no idea what fuzzing is.

I was speaking at a press conference for Infosec London, and my personal experience there confirmed my earlier thinking. Only five out of the twenty-plus journalists had ever heard of fuzzing. And these people write about security topics in their publications! It is a long road to change this, and we need everyone's help. That is actually the only reason why I wrote a book on fuzzing, and why I have one too many blogs on the topic. Please join me in evangelizing fuzzing, to change the world for the better!

2008-11-13

Quality Assurance Moves Towards Fuzzing

I have been reading a number of QA papers and books recently to catch up after a busy period. If you have time, have a look at some QA topics through your favorite search engine:

  • Test generation
  • Random testing, Adaptive random testing
  • Hypercuboids
  • Statecharts
  • Model based testing
  • Modified Condition/Decision Coverage (MC/DC)

For example, Jayaram & Mathur from Purdue describe interesting measurements of using statecharts as the basis for generating message sequences for complex protocols such as TLS. Sounds pretty similar to fuzzing, at least to me, although at this phase the research is nowhere near the same domain. Today most block-based fuzzers (although some of them call themselves model-based) use extremely limited message sequence coverage, with the worst of them only taking a capture of traffic and then mutating it. The drawback is that you will only do message structure fuzzing, the most basic form of fuzzing.

Then if you look at the work of e.g. Gotlieb and Petit from INRIA, you can get a glimpse of what the QA people are looking at in the area of test generation. Any individual field in a protocol message can (potentially) generate its own set of data automatically, based on very basic assumptions, and those sets can then be optimized to finally do intelligent permutations of multi-anomaly fuzzing. Long gone are the static libraries of anomalies (again, very few real fuzzers use them today). The result is fewer test cases and better test coverage.
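The field-driven idea can be sketched as follows. This is my own toy illustration, not the INRIA work or any tool's API: each field derives its anomaly set from its declared type (boundary integers, inconsistent lengths, overflow strings), and multi-anomaly cases are permutations across fields rather than draws from one static library.

```python
from itertools import product

def anomalies_for_field(kind, declared_max=None):
    """Sketch of type-driven anomaly generation: each field contributes
    its own optimized set, derived from its declared type. Field kinds
    and values here are illustrative, not from any particular tool."""
    if kind == "uint16":
        return [0, 1, 0x7FFF, 0x8000, 0xFFFF]           # boundary values
    if kind == "length":
        # lengths that disagree with the actual payload size
        return [0, declared_max, (declared_max or 0) + 1, 0xFFFFFFFF]
    if kind == "string":
        return [b"", b"A" * 65536, b"%s%s%s", b"\x00"]  # overflow / format / NUL
    return []

# Multi-anomaly fuzzing: permute anomalies across two fields at once.
cases = list(product(anomalies_for_field("uint16"),
                     anomalies_for_field("string")))
```

Because each set is small and targeted, the cross product stays manageable (here 5 x 4 = 20 combinations) while still hitting the interesting boundaries of every field, which is exactly the "fewer test cases, better coverage" trade-off described above.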

It is interesting to see where fuzzing will go in the future, and whether companies with a QA background and companies with a security background will end up heading in the same direction, or in very different ones.

2008-09-14

Win free copy of The Fuzzing Book

We are giving out 11 copies of the fuzzing book. 10 of them are fuzzed (randomness involved), and one is given for a robust answer to one very simple but still difficult question: "why should you get the book?"

Surf this way to get your copy: http://www.codenomicon.com/fuzzing-book/

2008-09-08

Why Two Blogs?

Check out http://www.fuzz-test.com/
Why two blogs? Well, here I will not represent anyone but myself. Any opinions here are not related to the other authors (of the VoIP book, or the Fuzzing book), nor to Codenomicon.

2008-09-03

Fuzzing Is Not Random

Yet another mention that fuzzing is random. Could not resist commenting. http://testingdocs.blogspot.com/2008/09/fuzz-testing.html

Yet Another Fuzzing Blog

I will (try to) re-write, copy, or link entries here from the other fuzzing-related blogs I write to. Maybe even an occasional link to other blogs...