Typical problems
Here are the most typical problems and the solutions or workarounds for most of them. If you cannot find your problem here, please open an issue (or drop me an email).
This is a known little problem: generic_fuzzer.py typically crashes right after a crash is detected, with a message similar to the following one:
$ python generic_fuzzer.py generic.cfg Something
(...)
[Tue Oct 28 03:56:45 2014 X:Y] We have a crash, moving to test-crash queue...
[Tue Oct 28 03:56:45 2014 X:Y] $PC 0xWHATEVER Signal SIGSEGV Exploitable Unknown
[Tue Oct 28 03:56:45 2014 X:Y] 0xWHATEVER: MOV [RAX], R12D
[Tue Oct 28 03:56:45 2014 X:Y] Launching debugger with command whatever /tmp/tmpwhatever
[Tue Oct 28 03:56:45 2014 X:Y] Exception: ERROR - Must be attached to a process
Error: ERROR - Must be attached to a process
The workaround, until the bug (in the VTrace interface) is fixed, is to set the environment variable NIGHTMARE_PROCESSES to at least 1, as in the following example:
$ NIGHTMARE_PROCESSES=1 python generic_fuzzer.py generic.cfg Something
It makes sure that there is always at least 1 process running and re-spawns the crashed fuzzers. Naturally, it can also be used to launch a number of fuzzing processes running in parallel.
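For example, to keep 4 fuzzing processes running in parallel (the value 4 is only an illustration, pick whatever fits your machine):
$ NIGHTMARE_PROCESSES=4 python generic_fuzzer.py generic.cfg Something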
This problem typically appears when running the generic_fuzzer.py tool:
[Tue Nov 4 23:28:23 2014 X:Y] Exception: [Errno 24] Too many open files: '/proc/<PID>/task'
Error: [Errno 24] Too many open files: '/proc/<PID>/task'
The default limit on open file descriptors is big enough for the most typical setups, but the fuzzer can exhaust it, so we recommend checking (and, if needed, raising) the limit before running the fuzzer:
$ ulimit -n
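The command above only prints the current limit. To raise it for the current shell session, pass a new value (65535 below is just an example value, adjust it to your setup):
$ ulimit -n 65535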
The generic fuzzer stops after a while with the following error:
Section X does not exists in the given configuration file
The reason for this is the same: too many open file descriptors. However, this is not obvious because the ConfigParser.read() method swallows exceptions when the given configuration file can't be opened. So, when reading fails due to the lack of FDs, the dictionary of available sections will simply be empty, which triggers the above message instead of giving useful information about the real problem.
The workaround is the same as above.
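For reference, here is a minimal Python sketch (not Nightmare code; Python 2 style, as used by the project) showing why the error is so misleading: ConfigParser.read() silently skips any file it cannot open and leaves the parser without sections.
# Minimal illustration (not Nightmare code): ConfigParser.read() silently
# skips files it cannot open instead of raising an exception.
from ConfigParser import ConfigParser

parser = ConfigParser()
parser.read("/path/that/cannot/be/opened.cfg")  # no exception raised here
print(parser.sections())                        # prints [] -> "Section X does not exists..."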
This problem happens when using any of the supplied generic debuggers and shows a message like the following one:
[Fri Oct 31 06:30:40 2014 X:Y] Launching debugger with command command /tmp/xxx.fil
[Fri Oct 31 06:30:40 2014 X:Y] Exception: can't start new thread
Error: can't start new thread
Warning! tracer del w/o release()!
The reason, often, is that the fuzzer is running in a virtual machine with very few resources (maybe a 1GB RAM VM?). I recommend running the fuzzers with at least 2GB of memory (4GB recommended). Overall, it's better to run a lot of fuzzers in parallel in one big machine than one fuzzer in each of many small VM instances.
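To quickly check how much memory is actually available inside the VM before launching the fuzzers (a standard Linux command, not part of Nightmare):
$ free -m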
When running nfp_engine.py, the beanstalkd server may raise errors saying that the job is too big. You need to adjust the maximum job size supported by beanstalkd. I recommend adding the following command line option to your beanstalkd daemon's command line: -z 55000000.
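For example, a complete beanstalkd invocation could look like this (the listen address and port shown are just the usual defaults, adjust them to your setup):
$ beanstalkd -l 127.0.0.1 -p 11300 -z 55000000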
The SQLite database can be anywhere as long as the file $NIGHTMARE_DIR/runtime/config.cfg correctly points to it. To create this database, the following command can be executed:
$ sqlite3 your_database.sqlite < $NIGHTMARE_DIR/doc/sql/nightmare_sqlite.sql
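To verify that the database was created correctly, you can list its tables with the sqlite3 command line tool (the table names you see will depend on the schema shipped in nightmare_sqlite.sql):
$ sqlite3 your_database.sqlite ".tables"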