Once you have your server up and running and most of the code working correctly, you may still encounter errors generated by your code at runtime. Some possible errors are discussed in this section.
Under mod_perl, you may receive a warning or an error in the error_log file that specifies /dev/null as the source file and line 0 as the line number where the printing of the message was triggered. This is quite normal if the code is executed from within a handler, because there is no actual file associated with the handler. Therefore, $0 is set to /dev/null, and that's what you see.
If some processes have segmentation faults when using XML::Parser, you should use the following flag during Apache configuration:
--disable-rule=EXPAT
This should be necessary only with mod_perl Version 1.22 and lower. Starting with mod_perl Version 1.23, the EXPAT option is disabled by default.
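For example, if you build Apache 1.3 with its own configure script, the flag is passed alongside whatever options you normally use (the installation prefix shown here is just a placeholder):

panic% ./configure --prefix=/usr/local/apache --disable-rule=EXPAT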
If you build mod_perl and mod_php in the same binary, you might get a segmentation fault followed by this error:
exit signal Segmentation fault (11)
The solution is not to rely on PHP's built-in MySQL support; instead, build mod_php with your local MySQL support files by adding --with-mysql=/path/to/mysql when running PHP's ./configure.
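For example, the PHP part of the build might be configured along these lines (the MySQL path is a placeholder, and any other options you normally pass remain unchanged):

panic% ./configure --with-mysql=/path/to/mysql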
If the CGI program is not actually executed but is just returned as plain text, it means the server doesn't recognize it as a CGI script. Check your configuration files and make sure that the ExecCGI option is turned on. For example, your configuration section for Apache::Registry scripts should look like this:
<Location /perl>
    SetHandler perl-script
    PerlHandler Apache::Registry
    Options +ExecCGI
</Location>
An error message of the form "rwrite returned -1" is logged when the client breaks the connection while your script is trying to write to it. With Apache 1.3.x, you should see these rwrite messages only if LogLevel is set to debug. (Prior to mod_perl 1.19_01, there was a bug that reported this debug message regardless of the value of the LogLevel directive.)
Generally LogLevel is either debug or info. debug logs everything, and info is the next level, which doesn't include debug messages. You shouldn't use debug mode on a production server. At the moment there is no way to prevent users from aborting connections.
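For example, while chasing such a problem you might temporarily set the following in httpd.conf, switching back to a quieter level (such as warn) before returning the server to production:

LogLevel debug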
This error message is printed when an undeclared variable is used in code running under the strict pragma. For example, consider the short script below, which contains a use strict; pragma and then shamelessly violates it:
#!/usr/bin/perl -w
use strict;
print "Content-type: text/html\n\n";
print "Hello $username";
Since use strict insists that all variables be declared before they are used, the program will not run, and Perl will print the error:
Global symbol "$username" requires explicit package name at /home/httpd/perl/tmp.pl line 4.
Moreover, in certain situations (e.g., when $SIG{__DIE__} is set to Carp::confess( )) the entire script is printed to the error_log file as code that the server has tried to evaluate, so if this script is run repeatedly, the error_log file will grow very fast and you may run out of disk space.
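For reference, such a handler is typically installed like this:

use Carp ( );
$SIG{__DIE__} = \&Carp::confess;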
This problem can easily be avoided by always declaring variables before using them. Here is the fixed version of our example:
#!/usr/bin/perl -w
use strict;
my $username = '';
print "Content-type: text/html\n\n";
print "Hello $username";
If you see a "Use of uninitialized value" warning, your code has used a variable as if it had already been defined and initialized. For example:
my $param = $q->param('test');
print $param;
You can fix this fairly painlessly by just specifying a default value:
my $param = $q->param('test') || '';
print $param;
In the second snippet, $param will always be defined: it holds either $q->param('test')'s return value or the default value (the empty string, '', in our example).
An "Undefined subroutine ... called" error usually happens when two scripts or handlers (Apache::Registry scripts in this case) call a function defined in a library that has no package declaration, or when the two scripts use two libraries with different content but an identical name (as passed to require( )).
Chapter 6 provides in-depth coverage of this conundrum and numerous solutions.
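Here is a minimal sketch of the first scenario (the file and function names are hypothetical): a library file with no package declaration is loaded with require( ) by two Apache::Registry scripts.

# mylib.pl: a library file with no package declaration
sub print_header {
    print "Content-type: text/html\n\n";
}
1;

# script_a.pl and script_b.pl both contain:
require "mylib.pl";
print_header();

Because require( ) loads mylib.pl only once per server process, print_header( ) is compiled into the namespace of whichever script happens to run first, and the other script then fails with an "Undefined subroutine ... called" error.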
"Callback called exit" is just a generic message when Perl encounters an unrecoverable error during perl_call_sv( ). mod_perl uses perl_call_sv( ) to invoke all handler subroutines. Such problems seem to occur far less often with Perl Version 5.005_03 than 5.004. It shouldn't appear with Perl Version 5.6.1 and higher.
Sometimes you discover that your server is not responding and its error_log file has filled up the remaining space on the filesystem. When you finally get to see the contents of the error_log file, it includes millions of lines like this:
Callback called exit at -e line 33, <HTML> chunk 1.
This is because Perl can get very confused inside an infinite loop in your code. It doesn't necessarily mean that your code called exit( ). It's possible that Perl's malloc( ) went haywire and called croak( ), but no memory was left to properly report the error, so Perl gets stuck in a loop writing that same message to STDERR.
Perl Version 5.005 and higher is recommended for its improved malloc.c, and also for other features that improve the performance of mod_perl and are turned on by default.
See also the next section.
If something goes really wrong with your code, Perl may die with an "Out of memory!" and/or "Callback called exit" message. Common causes of this are infinite loops, deep recursion, or calling an undefined subroutine.
If -DPERL_EMERGENCY_SBRK is defined, running out of memory need not be a fatal error: a memory pool can be allocated by using the special variable $^M. See the perlvar manpage for more details.
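For example, the perlvar manpage suggests pre-allocating the emergency buffer like this (the 64KB size is arbitrary):

$^M = 'a' x (1 << 16);   # reserve a 64KB emergency pool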
If you compile with that option and add use Apache::Debug level => 4; to your Perl code, it will allocate the $^M emergency pool and the $SIG{_ _DIE_ _} handler will call Carp::confess( ), giving you a stack trace that should reveal where the problem is. See the Apache::Resource module for the prevention of spinning httpds.
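For example, Apache::Resource is usually enabled from httpd.conf along these lines (the 32MB soft and 48MB hard data-size limits here are arbitrary; see the module's documentation for the full set of PERL_RLIMIT_* settings):

PerlSetEnv PERL_RLIMIT_DATA 32:48
PerlChildInitHandler Apache::Resource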
Note that Perl 5.005 and later have PERL_EMERGENCY_SBRK turned on by default.
Another trick is to have a startup script initialize Carp::confess( ), like this:
use Carp ( );
eval { Carp::confess("init") };
This way, when the real problem happens, Carp::confess doesn't eat memory in the emergency pool ($^M).
If you see an error of this kind:
syntax error at /dev/null line 1, near "line arguments:"
Execution of /dev/null aborted due to compilation errors.
parse: Undefined error: 0
there is a chance that your /dev/null device is broken. You can test it with:
panic% echo > /dev/null
It should silently complete the command. If it doesn't, /dev/null is broken. Refer to your OS's manpages to learn how to restore this device. On most Unix flavors, this is how it's done:
panic# rm /dev/null
panic# mknod /dev/null c 1 3
panic# chmod a+rw /dev/null
You need to create a special file using mknod, for which you need to know the device type and its major and minor device numbers. In our case, c stands for a character device, 1 is the major number, and 3 is the minor number. The file should be readable and writable by everybody, hence the permission mode settings (a+rw).