2010-03-24
Started a Fuzzy Voting System
2010-02-02
Fuzzing Coverage
For more information on this topic, check our whitepaper here: http://www.codenomicon.com/products/coverage.shtml
2009-12-08
General Purpose Fuzzing
So why are traffic capture fuzzers so powerful? The reason is speed! I selected a wireless access point and listened to it with a network analyzer while it booted up. During the boot process, I caught a number of packet traces, which I could then export directly into our brand new traffic capture fuzzer (or, in the case of XML protocols, into our XML fuzzer). Just five minutes later, I had all tests running, and less than 100 test cases in, the wireless router rebooted for the first time.
General purpose fuzzers expedite the process of creating fuzzers as they automate the entire test generation process. Test coverage is minimal compared to full model-based fuzzers, yet it more than suffices to find a few bugs in modern communication products. Highly recommended!
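To make the idea concrete, here is a minimal sketch of what a capture-based fuzzer does under the hood: take bytes recorded off the wire and derive many slightly corrupted test cases from them. The packet contents and mutation strategy below are purely illustrative, not how any particular product works.

```python
import random

def mutate(packet: bytes, seed: int, n_flips: int = 3) -> bytes:
    """Derive one test case from a captured packet by flipping a few bytes.

    Seeding the RNG per test case keeps the run reproducible: seed N
    always yields the same mutation of the same capture.
    """
    rng = random.Random(seed)
    data = bytearray(packet)
    for _ in range(n_flips):
        data[rng.randrange(len(data))] = rng.randrange(256)
    return bytes(data)

# A single captured "packet" (hypothetical payload) becomes a whole test run.
capture = b"\x01\x01\x06\x00\x12\x34\x56\x78hostname\x00"
test_cases = [mutate(capture, seed) for seed in range(100)]
```

Each test case would then be replayed against the target; since the capture itself provides all the protocol knowledge, the whole setup takes minutes, which is exactly the speed advantage described above.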
2009-08-12
The Fuzzer That Does Not Fuzz
From: http://www.codenomicon.com/news/newsletter/archive/2009-08.shtml
"The fuzzer that does not fuzz" was how Codenomicon test tools were described at Black Hat USA 2009. Without necessarily knowing it, the speaker paid our tools the biggest compliment anyone has given them in years, if ever.
Before 1998, all the fuzzers that I know of, at least, were entirely stateless and purely random. You should not really even consider this approach any more.
After 1998, in the PROTOS project, we described an approach in which no randomness was involved at all. We called it Robustness Testing, based on definitions we had heard e.g. ETSI use for such a testing approach. Other names for a similar approach are grammar testing (used e.g. by Wurldtech) and syntax testing (used by testing specialists everywhere).
In PROTOS we noticed that if a protocol was modelled with dynamic and thorough state machines and message descriptions, there was no need for randomness any more. In fact, the incremental benefit of adding random tests to the systematically built tests was so insignificant that eventually we left them out entirely. Everything was carefully optimized. Test execution times were extremely fast (from minutes to a few hours), and test coverage was much better than with other techniques, even those in use today.
Almost ten years later, block-based fuzzers were invented. They are a kind of cross-breed between the purely random, non-protocol-aware fuzzers of the early 90s and robustness testing tools, which are entirely based on protocol models and systematic test generation. A block-based fuzzer adds just enough protocol awareness to its minimalistic model and state diagrams to somewhat limit the amount of random or semi-random changes it makes. Why did the inventors include any randomness at all? Because a fuzzer is supposed to do random testing - or perhaps that is just what people thought.
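The contrast can be sketched in a few lines. In the robustness-testing style, every test case comes from a fixed, enumerable library of anomalies applied to modelled fields, so no randomness is involved and the suite is byte-identical on every run. The field layout and anomaly values below are hypothetical examples, not any real tool's library.

```python
# Hypothetical message model: a 1-byte length field followed by a string.
LENGTH_ANOMALIES = [0, 1, 0x7F, 0x80, 0xFF]                  # boundary values
STRING_ANOMALIES = [b"", b"A" * 1024, b"%s%s%s%s", b"\x00"]  # classic bad inputs

def robustness_tests(valid_string: bytes):
    """Enumerate every anomaly exactly once, in a fixed order - no RNG."""
    for length in LENGTH_ANOMALIES:   # fuzz the length, keep the string valid
        yield bytes([length]) + valid_string
    for s in STRING_ANOMALIES:        # fuzz the string, keep the length valid
        yield bytes([min(len(s), 0xFF)]) + s

suite = list(robustness_tests(b"admin"))
# Deterministic: two runs generate identical suites.
assert suite == list(robustness_tests(b"admin"))
```

Because the anomaly library is enumerated rather than sampled, the coverage of the suite is known in advance, which is what makes the "fuzzer that does not fuzz" description apt.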
So when someone calls ours a "fuzzer that does not fuzz", they have finally understood the difference between a fuzzer and a Robustness Testing tool. Even though we finally decided in 2008 to call our tools fuzzers as well, there really isn't anything fuzzy about them. And we are proud of that!
2009-07-20
XML Fuzzing
2009-03-17
Finally, a market study on fuzzing
Check out the webinar http://www.codenomicon.com/resources/webcasts/20090331.shtml
2009-01-19
Fuzzing In The Media
I was speaking at a press conference for Infosec London, and my experience there confirmed what I had long suspected. Only five out of the twenty-plus journalists had ever heard of fuzzing. And these people write about security topics in their publications! It is a long road to change this, and we need everyone's help. That is actually the only reason why I wrote a book on fuzzing, and why I have one too many blogs on the topic. Please join me in evangelizing fuzzing, to change the world for the better!
2008-11-13
Quality Assurance Moves Towards Fuzzing
I have been reading a number of QA papers and books recently to catch up from past busy times. If you have time, have a look at some QA topics through your favorite search engine:
- Test generation
- Random testing, Adaptive random testing
- Hypercuboids
- Statecharts
- Model based testing
- Modified Condition/Decision Coverage (MC/DC)
For example, Jayaram & Mathur from Purdue present interesting measurements on using statecharts as the basis for generating message sequences for complex protocols such as TLS. Sounds pretty similar to fuzzing, at least to me, although the research at this stage is nowhere near the same domain. Today most block-based fuzzers (although some of them call themselves model-based) use extremely limited message sequence coverage, with the worst of them simply taking a capture of traffic and then mutating it. The drawback is that you will only do message structure fuzzing, the most basic form of fuzzing.
Then if you look at the work of e.g. Gotlieb and Petit from INRIA, you can get a glimpse of what the QA people are looking at in the area of test generation. Any individual field in a protocol message can (potentially) generate its own set of test data automatically, based on very basic assumptions, and those sets can then be optimized to finally do some intelligent permutations of multi-anomaly fuzzing. Long gone are the static libraries of anomalies (again, very few real fuzzers use them today). The result is fewer test cases and better test coverage.
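As a rough sketch of that idea, a per-field generator can derive anomalies from the field's type alone, and multi-anomaly cases then fall out as permutations over pairs of fields. Everything here (field names, types, anomaly values) is invented for illustration, not taken from the INRIA work or any real tool.

```python
from itertools import combinations, product

# Hypothetical anomaly generators keyed by field type; a real tool would
# derive these from the protocol model rather than a lookup table.
ANOMALIES = {
    "uint8":  [0, 1, 127, 128, 255],
    "string": [b"", b"A" * 64, b"%n"],
}
VALID = {"uint8": 1, "string": b"ok"}

# Hypothetical three-field message model.
MODEL = [("code", "uint8"), ("name", "string"), ("ttl", "uint8")]

def two_field_cases(model):
    """Fuzz every pair of fields at once, keeping the other fields valid."""
    for i, j in combinations(range(len(model)), 2):
        for vi, vj in product(ANOMALIES[model[i][1]], ANOMALIES[model[j][1]]):
            case = [VALID[t] for _, t in model]
            case[i], case[j] = vi, vj
            yield tuple(case)

cases = list(two_field_cases(MODEL))  # 5*3 + 5*5 + 3*5 = 55 cases
```

Generating anomalies from the model like this is what lets a tool bound and optimize the combination space, instead of exploding it with a static anomaly library.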
It will be interesting to see where fuzzing goes in the future, and whether companies with a QA background and companies with a security background will end up heading in the same direction, or in very different ones.
2008-10-07
2008-09-14
Win free copy of The Fuzzing Book
Surf this way to get your copy: http://www.codenomicon.com/fuzzing-book/
2008-09-08
Why Two Blogs?
Why two blogs? Well, here I represent no one but myself. The opinions here are not those of my co-authors (of the VoIP book or the Fuzzing book), nor of Codenomicon.