The language definition states that for each pointer type, there is a special value -- the "null pointer" -- which is distinguishable from all other pointer values and which is not the address of any object or function. That is, the address-of operator & will never yield a null pointer, nor will a successful call to malloc. (malloc returns a null pointer when it fails, and this is a typical use of null pointers: as a "special" pointer value with some other meaning, usually "not allocated" or "not pointing anywhere yet.")
A null pointer is conceptually different from an uninitialized pointer. A null pointer is known not to point to any object; an uninitialized pointer might point anywhere. See also questions 3.1, 3.13, and 17.1.
As mentioned in the definition above, there is a null pointer for each pointer type, and the internal values of null pointers for different types may be different. Although programmers need not know the internal values, the compiler must always be informed which type of null pointer is required, so it can make the distinction if necessary (see below).
References: K&R I Sec. 5.4 pp. 97-8; K&R II Sec. 5.4 p. 102; H&S Sec. 5.3 p. 91; ANSI Sec. 3.2.2.3 p. 38.
According to the language definition, a constant 0 in a pointer context is converted into a null pointer at compile time. That is, in an initialization, assignment, or comparison when one side is a variable or expression of pointer type, the compiler can tell that a constant 0 on the other side requests a null pointer, and generate the correctly-typed null pointer value. Therefore, the following fragments are perfectly legal:
    char *p = 0;

    if(p != 0)
However, an argument being passed to a function is not necessarily recognizable as a pointer context, and the compiler may not be able to tell that an unadorned 0 "means" a null pointer. For instance, the Unix system call "execl" takes a variable-length, null-pointer-terminated list of character pointer arguments. To generate a null pointer in a function call context, an explicit cast is typically required, to force the 0 to be in a pointer context:
    execl("/bin/sh", "sh", "-c", "ls", (char *)0);
If the (char *) cast were omitted, the compiler would not know to pass a null pointer, and would pass an integer 0 instead. (Note that many Unix manuals get this example wrong.)
When function prototypes are in scope, argument passing becomes an "assignment context," and most casts may safely be omitted, since the prototype tells the compiler that a pointer is required, and of which type, enabling it to correctly convert unadorned 0's. Function prototypes cannot provide the types for variable arguments in variable-length argument lists, however, so explicit casts are still required for those arguments. It is safest always to cast null pointer function arguments, to guard against varargs functions or those without prototypes, to allow interim use of non-ANSI compilers, and to demonstrate that you know what you are doing. (Incidentally, it's also a simpler rule to remember.)
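For instance, a small sketch (the function names setptr and gather are made up for illustration) contrasting a prototyped fixed argument with a variable argument:

    #include <stdarg.h>

    static void setptr(char *p)             /* prototyped, fixed argument */
    {
        (void)p;
    }

    static void gather(int n, ...)          /* variable-length argument list */
    {
        va_list ap;
        va_start(ap, n);
        /* ... */
        va_end(ap);
    }

    int main(void)
    {
        setptr(0);              /* okay: the prototype converts 0 to (char *)0 */
        gather(1, (char *)0);   /* cast required: the prototype says nothing
                                   about the types of the variable arguments */
        return 0;
    }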
Summary:

    Unadorned 0 okay:          initialization;
                               assignment;
                               comparison;
                               function call, prototype in scope,
                                   fixed argument

    Explicit cast required:    function call, no prototype in scope;
                               variable argument in varargs function call
References: K&R I Sec. A7.7 p. 190, Sec. A7.14 p. 192; K&R II Sec. A7.10 p. 207, Sec. A7.17 p. 209; H&S Sec. 4.6.3 p. 72; ANSI Sec. 3.2.2.3 .
What is NULL and how is it #defined?
As a matter of style, many people prefer not to have unadorned 0's scattered throughout their programs. For this reason, the preprocessor macro NULL is #defined (by <stdio.h> or <stddef.h>), with value 0 (or (void *)0, about which more later). A programmer who wishes to make explicit the distinction between 0 the integer and 0 the null pointer can then use NULL whenever a null pointer is required. This is a stylistic convention only; the preprocessor turns NULL back to 0, which is then recognized by the compiler (in pointer contexts) as before. In particular, a cast may still be necessary before NULL (as before 0) in a function call argument. (The table under question 1.2 above applies for NULL as well as 0.)

NULL should only be used for pointers; see question 1.8.
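A short sketch of the convention in use (the function name havep is made up for illustration):

    #include <stddef.h>             /* or <stdio.h>; either one #defines NULL */

    char *p = NULL;                 /* equivalent to   char *p = 0; */

    int havep(void)
    {
        return p != NULL;           /* equivalent to   p != 0 */
    }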
References: K&R I Sec. 5.4 pp. 97-8; K&R II Sec. 5.4 p. 102; H&S Sec. 13.1 p. 283; ANSI Sec. 4.1.5 p. 99, Sec. 3.2.2.3 p. 38, Rationale Sec. 4.1.5 p. 74.
How should NULL be #defined on a machine which uses a nonzero bit pattern as the internal representation of a null pointer?
Programmers should never need to know the internal representation(s) of null pointers, because they are normally taken care of by the compiler. If a machine uses a nonzero bit pattern for null pointers, it is the compiler's responsibility to generate it when the programmer requests, by writing "0" or "NULL," a null pointer. Therefore, #defining NULL as 0 on a machine for which internal null pointers are nonzero is as valid as on any other, because the compiler must (and can) still generate the machine's correct null pointers in response to unadorned 0's seen in pointer contexts.
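By way of contrast, a small sketch, imagining a machine whose internal null pointers are nonzero: the assignment below is guaranteed to produce that machine's null pointer, while setting the pointer's bytes to zero is not:

    #include <string.h>

    char *p, *q;

    void example(void)
    {
        p = 0;                      /* the compiler generates the machine's
                                       real null pointer, whatever its
                                       internal bit pattern */

        memset(&q, 0, sizeof q);    /* sets q to all-bits-zero, which on such
                                       a machine is NOT guaranteed to be a
                                       null pointer (compare question 3.13) */
    }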
If NULL were defined as follows:

    #define NULL ((char *)0)

wouldn't that make function calls which pass an uncast NULL work?
Not in general. The problem is that there are machines which use different internal representations for pointers to different types of data. The suggested #definition would make uncast NULL arguments to functions expecting pointers to characters work correctly, but pointer arguments of other types would still be problematical, and legal constructions such as

    FILE *fp = NULL;

could fail.
Nevertheless, ANSI C allows the alternate
    #define NULL ((void *)0)
definition for NULL. Besides helping incorrect programs to work (but only on machines with homogeneous pointers, thus questionably valid assistance), this definition may catch programs which use NULL incorrectly (e.g. when the ASCII NUL character was really intended; see question 1.8).
References: ANSI Rationale Sec. 4.1.5 p. 74.
    #define Nullptr(type) (type *)0
This trick, though popular in some circles, does not buy much. It is not needed in assignments and comparisons; see question 1.2. It does not even save keystrokes. Its use suggests to the reader that the author is shaky on the subject of null pointers, and requires the reader to check the #definition of the macro, its invocations, and all other pointer usages much more carefully. See also question 8.1.
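For comparison, a small sketch (repeating the macro above) showing that the macro form saves nothing over the plain spellings:

    #define Nullptr(type) (type *)0

    char *p1 = Nullptr(char);       /* with the macro */
    char *p2 = (char *)0;           /* explicit cast: no longer than the macro */
    char *p3 = 0;                   /* no cast needed at all in an assignment */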
Is the abbreviated pointer comparison "if(p)" to test for non-null pointers valid? What if the internal representation for null pointers is nonzero?
When C requires the boolean value of an expression (in the if, while, for, and do statements, and with the &&, ||, !, and ?: operators), a false value is produced when the expression compares equal to zero, and a true value otherwise. That is, whenever one writes
    if(expr)
where "expr" is any expression at all, the compiler essentially acts as if it had been written as
    if(expr != 0)
Substituting the trivial pointer expression "p" for "expr," we have
    if(p)    is equivalent to    if(p != 0)
and this is a comparison context, so the compiler can tell that the (implicit) 0 is a null pointer, and use the correct value. There is no trickery involved here; compilers do work this way, and generate identical code for both statements. The internal representation of a pointer does not matter.
The boolean negation operator, !, can be described as follows:
    !expr    is essentially equivalent to    expr?0:1
It is left as an exercise for the reader to show that
    if(!p)    is equivalent to    if(p == 0)
"Abbreviations" such as if(p)
, though perfectly legal, are
considered by some to be bad style.
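A small sketch of the equivalences described above (the function names are made up for illustration); a compiler generates the same code for the abbreviated and the explicit forms:

    int validp(char *p)
    {
        if(p)                       /* abbreviated test... */
            return 1;
        if(p != 0)                  /* ...treated exactly like this
                                       explicit comparison */
            return 1;
        return 0;
    }

    int nullp(char *p)
    {
        return !p;                  /* same as p == 0: "p is a null pointer" */
    }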
See also question 8.2.
References: K&R II Sec. A7.4.7 p. 204; H&S Sec. 5.3 p. 91; ANSI Secs. 3.3.3.3, 3.3.9, 3.3.13, 3.3.14, 3.3.15, 3.6.4.1, and 3.6.5 .
If "NULL" and "0" are equivalent, which should I use?
Many programmers believe that "NULL" should be used in all pointer contexts, as a reminder that the value is to be thought of as a pointer. Others feel that the confusion surrounding "NULL" and "0" is only compounded by hiding "0" behind a #definition, and prefer to use unadorned "0" instead. There is no one right answer. C programmers must understand that "NULL" and "0" are interchangeable and that an uncast "0" is perfectly acceptable in initialization, assignment, and comparison contexts. Any usage of "NULL" (as opposed to "0") should be considered a gentle reminder that a pointer is involved; programmers should not depend on it (either for their own understanding or the compiler's) for distinguishing pointer 0's from integer 0's.
NULL should not be used when another kind of 0 is required, even though it might work, because doing so sends the wrong stylistic message. (ANSI allows the #definition of NULL to be (void *)0, which will not work in non-pointer contexts.) In particular, do not use NULL when the ASCII null character (NUL) is desired. Provide your own definition
    #define NUL '\0'
if you must.
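For instance, a short sketch (the function name chop is made up for illustration) of terminating a string with the null character rather than the NULL macro:

    #include <stddef.h>

    #define NUL '\0'

    void chop(char *s, size_t n)    /* truncate s after n characters */
    {
        s[n] = NUL;                 /* right: the null *character* */
        /* s[n] = NULL;                wrong in spirit, and an error if NULL
                                       is defined as (void *)0 */
    }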
References: K&R II Sec. 5.4 p. 102.
But wouldn't it be better to use NULL (rather than 0), in case the value of NULL changes, perhaps on a machine with nonzero null pointers?
No. Although symbolic constants are often used in place of numbers because the numbers might change, this is not the reason that NULL is used in place of 0. Once again, the language guarantees that source-code 0's (in pointer contexts) generate null pointers. NULL is used only as a stylistic convention.
I'm confused. NULL is guaranteed to be 0, but the null pointer is not?
When the term "null" or "NULL" is casually used, one of several things may be meant:

1.  The conceptual null pointer, the abstract language concept defined in question 1.1. It is implemented with...

2.  The internal (or run-time) representation of a null pointer, which may or may not be all-bits-0 and which may be different for different pointer types. The internal representation should be of concern only to compiler writers. Authors of C programs never see them, since they use...

3.  The source code syntax for null pointers, which is the single character "0". It is often hidden behind...

4.  The NULL macro, which is #defined to be "0" or "(void *)0". Finally, as red herrings, we have...

5.  The ASCII null character (NUL), which does have all bits zero, but has no necessary relation to the null pointer except in name; and...

6.  The "null string," which is another name for the empty string (""). The term can be confusing in C, because an empty string involves a null ('\0') character, but not a null pointer, which brings us full circle...

This article always uses the phrase "null pointer" (in lower case) for sense 1, the character "0" or the phrase "null pointer constant" for sense 3, and the capitalized word "NULL" for sense 4.
C programmers traditionally like to know more than they need to about the underlying machine implementation. The fact that null pointers are represented both in source code, and internally to most machines, as zero invites unwarranted assumptions. The use of a preprocessor macro (NULL) suggests that the value might change later, or on some weird machine. The construct "if(p == 0)" is easily misread as calling for conversion of p to an integral type, rather than 0 to a pointer type, before the comparison. Finally, the distinction between the several uses of the term "null" (listed above) is often overlooked.
One good way to wade out of the confusion is to imagine that C had a keyword (perhaps "nil", like Pascal) with which null pointers were requested. The compiler could either turn "nil" into the correct type of null pointer, when it could determine the type from the source code, or complain when it could not. Now, in fact, in C the keyword for a null pointer is not "nil" but "0", which works almost as well, except that an uncast "0" in a non-pointer context generates an integer zero instead of an error message, and if that uncast 0 was supposed to be a null pointer, the code may not work.
Follow these two simple rules:

1.  When you want a null pointer constant in source code, use "0" or "NULL".

2.  If the usage of "0" or "NULL" is an argument in a function call, cast it to the pointer type expected by the function being called.
The rest of the discussion has to do with other people's misunderstandings, or with the internal representation of null pointers (which you shouldn't need to know), or with ANSI C refinements. Understand questions 1.1, 1.2, and 1.3, and consider 1.8 and 1.11, and you'll do fine.
If for no other reason, requiring null pointers to be represented internally by zeroes would be ill-advised because it would unnecessarily constrain implementations which would otherwise naturally represent null pointers by special, nonzero bit patterns, particularly when those values would trigger automatic hardware traps for invalid accesses.
Besides, what would this requirement really accomplish? Proper understanding of null pointers does not require knowledge of the internal representation, whether zero or nonzero. Assuming that null pointers are internally zero does not make any code easier to write (except for a certain ill-advised usage of calloc; see question 3.13). Known-zero internal pointers would not obviate casts in function calls, because the size of the pointer might still be different from that of an int. (If "nil" were used to request null pointers rather than "0," as mentioned in question 1.11, the urge to assume an internal zero representation would not even arise.)
A number of real machines have indeed used nonzero null pointers, or different pointer representations for different types.

The Prime 50 series used segment 07777, offset 0 for the null pointer, at least for PL/I. Later models used segment 0, offset 0 for null pointers in C, necessitating new instructions such as TCNP (Test C Null Pointer), evidently as a sop to all the extant poorly-written C code which made incorrect assumptions. Older, word-addressed Prime machines were also notorious for requiring larger byte pointers (char *'s) than word pointers (int *'s).
The Eclipse MV series from Data General has three architecturally supported pointer formats (word, byte, and bit pointers), two of which are used by C compilers: byte pointers for char * and void *, and word pointers for everything else.
Some Honeywell-Bull mainframes use the bit pattern 06000 for (internal) null pointers.
The CDC Cyber 180 Series has 48-bit pointers consisting of a ring, segment, and offset. Most users (in ring 11) have null pointers of 0xB00000000000.
The Symbolics Lisp Machine, a tagged architecture, does not even have conventional numeric pointers; it uses the pair <NIL, 0> (basically a nonexistent <object, offset> handle) as a C null pointer.
Depending on the "memory model" in use, 80*86 processors (PC's) may use 16 bit data pointers and 32 bit function pointers, or vice versa.
The old HP 3000 series computers use a different addressing scheme for byte addresses than for word addresses; void and char pointers therefore have a different representation than an int (structure, etc.) pointer to the same address would have.
The run-time "null pointer assignment" error message, which occurs only under MS-DOS (see, therefore, section 16), means that you've written, via an uninitialized and/or null pointer, to location zero.
A debugger will usually let you set a data breakpoint on location 0. Alternatively, you could write a bit of code to copy 20 or so bytes from location 0 into another buffer, and periodically check that it hasn't changed.
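Here is a minimal sketch of that second technique, assuming a real-mode MS-DOS compiler on which location 0 can be read (this is, strictly speaking, outside the bounds of portable C); the 20-byte count follows the suggestion above, and the function names are made up:

    #include <stdio.h>
    #include <string.h>

    #define CHECKBYTES 20           /* "20 or so bytes," as suggested above */

    static char zerocopy[CHECKBYTES];

    void checkinit(void)            /* call once, early in main() */
    {
        memcpy(zerocopy, (char *)0, CHECKBYTES);
    }

    void checkzero(void)            /* call periodically thereafter */
    {
        if(memcmp(zerocopy, (char *)0, CHECKBYTES) != 0)
            fprintf(stderr, "location 0 has been overwritten\n");
    }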