2008-11-13

Quality Assurance Moves Towards Fuzzing

I have been reading a number of QA papers and books recently, catching up after a busy stretch. If you have time, have a look at some QA topics through your favorite search engine:

  • Test generation
  • Random testing, Adaptive random testing
  • Hypercuboids
  • Statecharts
  • Model based testing
  • Modified Condition/Decision Coverage (MC/DC)

For example, Jayaram & Mathur from Purdue report interesting measurements from using statecharts as the basis for generating message sequences for complex protocols such as TLS. That sounds pretty similar to fuzzing, at least to me, although at this stage the research is nowhere near the same domain. Today most block-based fuzzers (even though some of them call themselves model-based) have extremely limited message sequence coverage; the worst of them only take a capture of traffic and then mutate it. The drawback is that you will only do message structure fuzzing, the most basic form of fuzzing.
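To make the contrast concrete, here is a minimal sketch in Python. The message names and the toy statechart are my own illustrations, not taken from the Purdue work: a capture-and-mutate fuzzer only flips bytes inside recorded messages, while a statechart-driven generator walks a protocol model and can also vary which messages are sent and in what order.

    import random

    # --- Capture-and-mutate: "message structure fuzzing" only ---
    def mutate_capture(captured_msg: bytes, flips: int = 3) -> bytes:
        """Flip a few random bytes in a recorded message; the
        message sequence itself is never varied."""
        data = bytearray(captured_msg)
        for _ in range(flips):
            i = random.randrange(len(data))
            data[i] ^= random.randrange(1, 256)
        return bytes(data)

    # --- Statechart-driven: vary the message *sequence* as well ---
    # A toy handshake model; states and messages are illustrative only.
    STATECHART = {
        "START":      [("ClientHello", "HELLO_SENT")],
        "HELLO_SENT": [("KeyExchange", "KEYS_SENT"),
                       ("ClientHello", "HELLO_SENT")],   # unexpected repeat
        "KEYS_SENT":  [("Finished", "DONE"),
                       ("ClientHello", "HELLO_SENT")],   # out-of-order restart
        "DONE":       [],
    }

    def walk_sequences(state="START", path=(), max_len=4):
        """Enumerate message sequences up to max_len by walking the model,
        including transitions a plain traffic capture would never exercise."""
        yield path
        if len(path) >= max_len:
            return
        for msg, nxt in STATECHART[state]:
            yield from walk_sequences(nxt, path + (msg,), max_len)

    if __name__ == "__main__":
        print(mutate_capture(b"\x16\x03\x01\x00\x2f...ClientHello..."))
        for seq in walk_sequences():
            print(" -> ".join(seq) or "(empty)")

The point of the sketch is simply that the second generator exercises sequences (repeated or out-of-order messages) that no amount of byte mutation on a single capture will ever produce.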

Then, if you look at the work of e.g. Gotlieb and Petit from INRIA, you get a glimpse of what QA people are looking at in the area of test generation. Any individual field in a protocol message can (potentially) generate its own set of test data automatically, based on very basic assumptions, and those sets can then be optimized into intelligent permutations for multi-anomaly fuzzing. Long gone are the static libraries of anomalies (again, very few real fuzzers use them today). The result is fewer test cases and better test coverage.
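As a rough illustration of that idea (a sketch only; the field model and the anomaly heuristics below are mine, not taken from the INRIA work), each field can derive candidate anomalies from its own declared type and bounds, and a small combinator can then pick which fields to perturb together for multi-anomaly test cases:

    from itertools import combinations

    def field_anomalies(name, ftype, max_len=None):
        """Derive anomaly candidates from the field's own description
        instead of pulling them from a static anomaly library."""
        if ftype == "uint":
            return [0, 1, 2**31 - 1, 2**32 - 1]           # boundary values
        if ftype == "string":
            return ["", "A" * (max_len or 16) * 4,        # overflow-length
                    "%s%n", "\x00embedded-null"]          # format / NUL probes
        return [None]

    # Hypothetical message model: (field name, type, optional max length)
    MESSAGE_MODEL = [
        ("version",  "uint",   None),
        ("username", "string", 32),
        ("length",   "uint",   None),
    ]

    def multi_anomaly_cases(model, max_fields=2):
        """Combine anomalies across up to max_fields fields at once,
        yielding {field: anomalous value} assignments."""
        for k in range(1, max_fields + 1):
            for chosen in combinations(model, k):
                # take the first anomaly of each chosen field; a real
                # generator would search or optimize over these sets
                yield {name: field_anomalies(name, ftype, ml)[0]
                       for name, ftype, ml in chosen}

    if __name__ == "__main__":
        for case in multi_anomaly_cases(MESSAGE_MODEL):
            print(case)

Because the anomalies are derived from each field's own constraints, the combinator can stay small and still cover the interesting boundaries, which is where the "fewer test cases, better coverage" claim comes from.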

It will be interesting to see where fuzzing goes in the future, and whether companies with a QA background and companies with a security background end up heading in the same direction or in very different ones.
