In association with heise online


On the whole, Skipfish's results when scanning the IIS 7.5 and ScrewTurn combination weren't too impressive. However, it is worth pointing out that the scanner's brute-force dictionary approach detected the /wiki/ URI by itself, which allowed the URI to be examined without any manual intervention.

When examining the Apache server and CGI scripts, and with the dictionary function restricted to learning only, Skipfish scored slightly better. There were only 47 medium risk results, 6 low risk results, 1 warning and 65 informational entries, which only required 83 MB. Interestingly, Skipfish isn't capable of handling classic Perl-generated HTML forms. As a result, it doesn't enter any learned keywords into form fields for testing. However, the scanner correctly detects that the forms have no protection against Cross-Site Request Forgery (CSRF) attacks.
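The protection the scanner found missing can be illustrated with a minimal sketch (in Python rather than Perl, and purely illustrative — the names and the session-handling are assumptions, not the test application's actual code): a form handler embeds a random per-session token as a hidden field and rejects any submission that fails to return it.

```python
import hmac
import secrets

# Hypothetical per-session secret; in a real application this would live
# in the server-side session store, not in a module-level variable.
session_secret = secrets.token_hex(16)

def issue_csrf_token():
    """Token to embed as a hidden input when the form is rendered."""
    return hmac.new(session_secret.encode(), b"form-id", "sha256").hexdigest()

def is_valid_submission(submitted_token):
    """Reject any POST whose token does not match this session's token."""
    return hmac.compare_digest(submitted_token, issue_csrf_token())

# A form without such a check -- like the Perl-generated forms in the test --
# will accept a forged cross-site request without complaint.
```

Skipfish flags forms that lack any hidden anti-forgery field of this kind, which is why the Perl CGI forms were correctly reported as CSRF-vulnerable even though the scanner could not fill them in.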

Both the "Interesting File" category and the "Incorrect or missing MIME type" category list a URL at which flaws in the server configuration actually display the source code of a CGI script rather than launch the script itself. This result was achieved by combining file names and extensions (in this case an empty extension).
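The combination that exposed the script source can be sketched as a simple cross product of known names and extensions, with the empty extension included. The word lists below are illustrative stand-ins, not Skipfish's actual dictionaries, which are far larger and also grow from keywords learned during the scan.

```python
from itertools import product

# Illustrative keyword and extension lists (assumed, not Skipfish's own).
names = ["index", "search", "admin"]
extensions = ["cgi", "pl", "bak", ""]  # the empty extension is the interesting case

def candidate_urls(base):
    for name, ext in product(names, extensions):
        # With ext == "" this yields e.g. ".../search." -- a trailing dot and
        # empty extension, exactly the kind of URL that made the misconfigured
        # server return the CGI source instead of executing the script.
        yield f"{base}/{name}.{ext}"

urls = list(candidate_urls("http://example.com/cgi-bin"))
```

Every name is tried with every extension, so the request count grows multiplicatively with both lists — which also hints at why the data volumes discussed below get so large.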

Incidentally, the printer scan didn't produce any results because we had to abort scanning after 18 hours with no end in sight. In such situations, it's an inconvenience that Skipfish only writes the report about its findings to the directory once a scan has been completed, or been aborted via Control-C. There is no way of finding out what the tool has already detected while the scan is in progress.


Skipfish excels at discovering forgotten data and provoking unexpected server behaviour. Its permutation of keywords, file extensions and learned elements closely resembles fuzzing, and produces similarly large volumes of data.

In large computer centres, such as Google's, which harbour a very heterogeneous landscape of web applications and have access to almost unlimited processing resources and network bandwidth, using Skipfish in its present form is already bound to help discover unexpected functions and content.

However, using Skipfish makes less sense if there is direct access to a web server's file system – especially when scanning via an internet connection. The amount of data transmitted by the scanner is almost equivalent to a complete file system image, which makes it preferable to analyse an application's scripts and files on site and with well-established tools in one's own time.

While Skipfish's report is very clear, the problems listed require considerable interpretation.
Skipfish in its current form has only very limited professional uses. Investigating the test results, which is always required, soon becomes a mammoth task due to the flood of reports. Furthermore, results such as "Incorrect or missing charset", "Incorrect caching directives" or "Incorrect or missing MIME type" are very abstract and far from unambiguous. Even experienced pen testers will often need to do further research to find the actual cause of a problem. Consequently, Skipfish is completely unsuitable for non-professional users or "quick check-ups" and will probably prompt unjustified panic rather than provide reassurance.

In many cases, using the dictionary function is out of the question simply because of the amount of data produced and the resulting load on the target system. A fully mature Java-based business web application would probably not stand up to a Skipfish scan even with a minimal dictionary, because the scanner's CPU and main memory demands would paralyse the system before the scan completed. Firewalls with complex inspection modules and IDS/IPS systems would also drown the recipients of their log messages in extreme amounts of data, or simply stop working.

