static members, etc. are illustrated in this chapter. The
examples roughly follow the organization of earlier chapters.
As an additional topic, not merely providing examples of C++, the subjects of scanner and parser generators are covered: we show how these tools may be used in C++ programs. These additional examples assume a certain familiarity with the concepts underlying these tools, like grammars, parse-trees and parse-tree decoration. Once the input for a program exceeds a certain level of complexity, it's attractive to use scanner and parser generators to create the code doing the actual input processing. One of the examples in this chapter describes the usage of these tools in a C++ environment.
This section uses std::streambuf as the starting point
for constructing classes interfacing devices that are accessed through file descriptors.
Below we'll construct classes that can be used to write to a device given its file descriptor. The devices may be files, but they could also be pipes or sockets. Section 26.1.2 covers reading from such devices; section 26.2.3 reconsiders redirection, discussed earlier in section 6.6.2.
Using the streambuf class as a base class it is relatively easy to
design classes for output operations. The only member function that must
be overridden is the (virtual) member
int streambuf::overflow(int c). This member's responsibility is to
write characters to the device. If fd is an output file descriptor and if
output should not be buffered then the member overflow() can simply be
implemented as:
class UnbufferedFD: public std::streambuf
{
public:
int overflow(int c) override;
...
};
int UnbufferedFD::overflow(int c)
{
if (c != EOF)
{
if (write(d_fd, &c, 1) != 1)
return EOF;
}
return c;
}
The argument received by overflow is either written to the file
descriptor (and returned from overflow), or EOF is returned.
This simple function does not use output buffering. For various reasons, using a buffer is usually a good idea (see also the next section).
When output buffering is used, the overflow member is a bit more
complex as it is only called when the buffer is full. Once the buffer is full,
we first have to flush the buffer. Flushing the buffer is the
responsibility of the (virtual) function streambuf::sync. Since
sync is a virtual function, classes derived from streambuf may
redefine sync to flush a buffer streambuf itself doesn't know about.
Overriding sync and using it in overflow is not all that has to be
done. When the object of the class defining the buffer reaches the end of its
lifetime the buffer may be only partially full. In that situation the buffer
must also be flushed. This is easily done by simply calling sync from the
class's destructor.
Now that we've considered the consequences of using an output buffer, we're almost ready to design our derived class. Several more features are added as well, though:
The class OFdnStreambuf has the following characteristics:
- To use it, the <unistd.h> header file must have been read by the compiler before its member functions can be compiled.
- It is derived from std::streambuf.
class OFdnStreambuf: public std::streambuf
{
int d_fd = -1;
size_t d_bufsize = 0;
char *d_buffer = 0;
public:
OFdnStreambuf() = default;
OFdnStreambuf(int fd, size_t bufsize = 1);
~OFdnStreambuf() override;
void open(int fd, size_t bufsize = 1);
private:
int sync() override;
int overflow(int c) override;
};
The second constructor passes its arguments to the open
member (see below). Here is its implementation:
inline OFdnStreambuf::OFdnStreambuf(int fd, size_t bufsize)
{
open(fd, bufsize);
}
The destructor calls sync, flushing any characters still stored in the
output buffer to the device. In implementations not using a buffer the
destructor could be given a default implementation; here it must flush and delete the buffer:
inline OFdnStreambuf::~OFdnStreambuf()
{
if (d_buffer)
{
sync();
delete[] d_buffer;
}
}
This implementation does not close the device. It is left as an exercise
to the reader to change this class in such a way that the device is optionally
closed (or optionally remains open). This approach was adopted by, e.g., the
Bobcat library.
See also section
26.1.2.2.
The open member initializes the buffer. Using
streambuf::setp, the begin and end points of the buffer are
defined. This is used by the streambuf base class to initialize
streambuf::pbase, streambuf::pptr, and
streambuf::epptr:
inline void OFdnStreambuf::open(int fd, size_t bufsize)
{
d_fd = fd;
d_bufsize = bufsize == 0 ? 1 : bufsize;
delete[] d_buffer;
d_buffer = new char[d_bufsize];
setp(d_buffer, d_buffer + d_bufsize);
}
The member sync flushes the as yet unflushed content of the
buffer to the device. After the flush the buffer is reinitialized using
setp. After successfully flushing the buffer sync returns 0:
inline int OFdnStreambuf::sync()
{
if (pptr() > pbase())
{
write(d_fd, d_buffer, pptr() - pbase());
setp(d_buffer, d_buffer + d_bufsize);
}
return 0;
}
The member streambuf::overflow is also
overridden. Since this member is called from the streambuf base class when
the buffer is full it should first call sync to flush the buffer to the
device. Next it should write the character c to the (now empty)
buffer. The character c should be written using pptr and
streambuf::pbump. Entering a character into the buffer should be
implemented using available streambuf member functions, rather than `by
hand' as doing so might invalidate streambuf's internal bookkeeping. Here
is overflow's implementation:
inline int OFdnStreambuf::overflow(int c)
{
sync();
if (c != EOF)
{
*pptr() = c;
pbump(1);
}
return c;
}
The following program uses the OFdnStreambuf class to copy its standard
input to file descriptor STDOUT_FILENO, which is the symbolic name of the
file descriptor used for the standard output:
#include <string>
#include <iostream>
#include <istream>
#include "fdout.h"
using namespace std;
int main(int argc, char **argv)
{
OFdnStreambuf fds(STDOUT_FILENO, 500);
ostream os(&fds);
switch (argc)
{
case 1:
for (string s; getline(cin, s); )
os << s << '\n';
os << "COPIED cin LINE BY LINE\n";
break;
case 2:
cin >> os.rdbuf(); // Alternatively, use: cin >> &fds;
os << "COPIED cin BY EXTRACTING TO os.rdbuf()\n";
break;
case 3:
os << cin.rdbuf();
os << "COPIED cin BY INSERTING cin.rdbuf() into os\n";
break;
}
}
When classes designed for reading from devices are derived from
std::streambuf, they should be provided with an input buffer
of at least one character. The one-character input buffer allows for the use
of the member functions istream::putback or istream::unget. Strictly
speaking it is not necessary to implement a buffer in classes derived from
streambuf. But using buffers in these classes is strongly advised: their
implementation is simple and straightforward, and the applicability of
such classes is greatly improved. Therefore, all our classes derived from the
class streambuf define a buffer of at least one character.
When deriving a class (here called IFdStreambuf) from streambuf using a
buffer of one character, at least its member
streambuf::underflow should be overridden, as this member eventually
receives all requests for input. The member
streambuf::setg is used to inform the streambuf base class of the
size and location of the input buffer, so that it is able to set up its input
buffer pointers accordingly. This ensures that
streambuf::eback, streambuf::gptr, and
streambuf::egptr return correct values.
The class IFdStreambuf is designed like this:
- To use it, the <unistd.h> header file must have been read by the compiler before its member functions can be compiled.
- It is derived from std::streambuf as well.
- Its data members are protected, so that derived classes (e.g., see section 26.1.2.3) can access them.
Here is the full class interface:
class IFdStreambuf: public std::streambuf
{
protected:
int d_fd;
char d_buffer[1];
public:
IFdStreambuf(int fd);
private:
int underflow() override;
};
The constructor initializes the input buffer so that gptr's return value
equals egptr's return value. This implies that the buffer is empty,
so underflow is immediately called to fill the buffer:
inline IFdStreambuf::IFdStreambuf(int fd)
:
d_fd(fd)
{
setg(d_buffer, d_buffer + 1, d_buffer + 1);
}
The member underflow is overridden. The buffer is refilled by
reading from the file descriptor. If this fails (for whatever reason),
EOF is returned. More sophisticated implementations could act more
intelligently here, of course. If the buffer could be refilled, setg is
called to set up streambuf's buffer pointers correctly:
inline int IFdStreambuf::underflow()
{
if (read(d_fd, d_buffer, 1) <= 0)
return EOF;
setg(d_buffer, d_buffer, d_buffer + 1);
return static_cast<unsigned char>(*gptr());
}
The following main function shows how IFdStreambuf can be used:
int main()
{
IFdStreambuf fds(STDIN_FILENO);
istream is(&fds);
cout << is.rdbuf();
}
The class IFdNStreambuf refines IFdStreambuf, developed in the previous
section. To make things a bit more interesting, in the class IFdNStreambuf
developed here the member
streambuf::xsgetn is also overridden, to optimize reading a
series of characters. Also, a default constructor is provided that can be used
in combination with the open member to construct an istream object
before the file descriptor becomes available. In that case, once the
descriptor becomes available, the open member can be used to initialize
the object's buffer. Later, in section 26.2, we'll encounter such a
situation.
To save some space, the success of various calls was not checked. In `real
life' implementations, these checks should of course not be omitted. The
class IFdNStreambuf has the following characteristics:
- To use it, the <unistd.h> header file must have been read by the compiler before its member functions can be compiled.
- It is derived from std::streambuf.
- As with IFdStreambuf (section 26.1.2.1), its data members are protected. Since the buffer's size is configurable, this size is kept in a dedicated data member, d_bufsize:
class IFdNStreambuf: public std::streambuf
{
protected:
int d_fd = -1;
size_t d_bufsize = 0;
char* d_buffer = 0;
public:
IFdNStreambuf() = default;
IFdNStreambuf(int fd, size_t bufsize = 1);
~IFdNStreambuf() override;
void open(int fd, size_t bufsize = 1);
private:
int underflow() override;
std::streamsize xsgetn(char *dest, std::streamsize n) override;
};
The second constructor passes its arguments to open, which then
initializes the object so that it can actually be used:
inline IFdNStreambuf::IFdNStreambuf(int fd, size_t bufsize)
{
open(fd, bufsize);
}
Once the object has been initialized by open, its destructor both
deletes the object's buffer and uses the file descriptor to close the
device:
IFdNStreambuf::~IFdNStreambuf()
{
if (d_bufsize)
{
close(d_fd);
delete[] d_buffer;
}
}
Even though the device is closed in the above implementation this may not
always be desirable. In cases where the open file descriptor is already
available the intention may be to use that descriptor repeatedly, each time
using a newly constructed IFdNStreambuf object. It is left as an exercise
to the reader to change this class in such a way that the device may
optionally be closed. This approach was followed in, e.g., the
Bobcat library.
The open member simply allocates the object's buffer. It is
assumed that the calling program has already opened the device. Once the
buffer has been allocated, the base class member setg is used to ensure
that streambuf::eback, streambuf::gptr, and
streambuf::egptr return correct values:
void IFdNStreambuf::open(int fd, size_t bufsize)
{
d_fd = fd;
d_bufsize = bufsize == 0 ? 1 : bufsize;
delete[] d_buffer;
d_buffer = new char[d_bufsize];
setg(d_buffer, d_buffer + d_bufsize, d_buffer + d_bufsize);
}
The member underflow is implemented almost
identically to IFdStreambuf's (section 26.1.2.1) member. The only
difference is that the current class supports buffers of larger
sizes. Therefore, more characters (up to d_bufsize) may be read from the
device at once:
int IFdNStreambuf::underflow()
{
if (gptr() < egptr())
return static_cast<unsigned char>(*gptr());
int nread = read(d_fd, d_buffer, d_bufsize);
if (nread <= 0)
return EOF;
setg(d_buffer, d_buffer, d_buffer + nread);
return static_cast<unsigned char>(*gptr());
}
The member xsgetn is overridden. In a loop, n is reduced until it reaches
0, at which point the function terminates. Alternatively, the member returns
once underflow fails to obtain more characters. This member optimizes the
reading of series of characters: instead of calling
streambuf::sbumpc n times, a block of avail characters is copied
to the destination, after which streambuf::gbump consumes avail
characters from the buffer using one function call:
std::streamsize IFdNStreambuf::xsgetn(char *dest, std::streamsize n)
{
int nread = 0;
while (n)
{
if (!in_avail())
{
if (underflow() == EOF)
break;
}
int avail = in_avail();
if (avail > n)
avail = n;
memcpy(dest + nread, gptr(), avail);
gbump(avail);
nread += avail;
n -= avail;
}
return nread;
}
The member xsgetn is called by streambuf::sgetn,
which is a streambuf member. Here is an example illustrating the use of
this member function with an IFdNStreambuf object:
#include <unistd.h>
#include <iostream>
#include <istream>
#include "ifdnbuf.h"
using namespace std;
int main()
{
// internally: 30 char buffer
IFdNStreambuf fds(STDIN_FILENO, 30);
char buf[80]; // main() reads blocks of 80
// chars
while (true)
{
size_t n = fds.sgetn(buf, 80);
if (n == 0)
break;
cout.write(buf, n);
}
}
Classes derived from std::streambuf that support seeking should override
the members streambuf::seekoff and streambuf::seekpos. The class
IFdSeek, developed in this section, can be used to read information from
devices supporting seek operations. The class IFdSeek is derived from
IFdStreambuf, so it uses a character buffer of just one character. The
facilities to perform seek operations, which are added to our new class
IFdSeek, ensure that the input buffer is reset when a seek operation is
requested. The class could also have been derived from
IFdNStreambuf; in that case the arguments of the setg call resetting
the input buffer must be adapted so that its second and third arguments point
just beyond the available input. Let's have a look at the characteristics of
IFdSeek:
IFdSeek is derived from IFdStreambuf. Like the
latter class, IFdSeek's member functions use facilities declared in
unistd.h. So, the header file <unistd.h> must have been read by the
compiler before it can compile the class's member functions. To reduce the
amount of typing when specifying types and constants from streambuf and
std::ios, several using-declarations are defined by the class.
These using-declarations refer to types that are defined in the header file
<ios>, which must therefore also be included before the compiler can
compile IFdSeek's class interface:
class IFdSeek: public IFdStreambuf
{
using pos_type = std::streambuf::pos_type;
using off_type = std::streambuf::off_type;
using seekdir = std::ios::seekdir;
using openmode = std::ios::openmode;
public:
IFdSeek(int fd);
private:
pos_type seekoff(off_type offset, seekdir dir, openmode) override;
pos_type seekpos(pos_type offset, openmode mode) override;
};
inline IFdSeek::IFdSeek(int fd)
:
IFdStreambuf(fd)
{}
The member seekoff is responsible for performing the actual
seek operations. It calls lseek to seek a new position in a device whose
file descriptor is known. If seeking succeeds, setg is called to define
an empty buffer, so that the base class's underflow member
refills the buffer at the next input request:
IFdSeek::pos_type IFdSeek::seekoff(off_type off, seekdir dir, openmode)
{
pos_type pos =
lseek
(
d_fd, off,
(dir == std::ios::beg) ? SEEK_SET :
(dir == std::ios::cur) ? SEEK_CUR :
SEEK_END
);
if (pos < 0)
return -1;
setg(d_buffer, d_buffer + 1, d_buffer + 1);
return pos;
}
The member seekpos is overridden as well:
it is actually defined as a call to seekoff:
inline IFdSeek::pos_type IFdSeek::seekpos(pos_type off, openmode mode)
{
return seekoff(off, std::ios::beg, mode);
}
The following program uses IFdSeek. If
this program is given its own source file using input redirection then
seeking is supported (and, with the exception of the first line, every
line is shown twice):
#include "fdinseek.h"
#include <string>
#include <iostream>
#include <istream>
#include <iomanip>
using namespace std;
int main()
{
IFdSeek fds(0);
istream is(&fds);
string s;
while (true)
{
if (!getline(is, s))
break;
streampos pos = is.tellg();
cout << setw(5) << pos << ": `" << s << "'\n";
if (!getline(is, s))
break;
streampos pos2 = is.tellg();
cout << setw(5) << pos2 << ": `" << s << "'\n";
if (!is.seekg(pos))
{
cout << "Seek failed\n";
break;
}
}
}
Streambuf classes and classes derived from streambuf should support
at least ungetting the last read character. Special care must be taken
when series of unget calls must be supported. In this section the
construction of a class supporting a configurable number of istream::unget
or istream::putback calls is discussed.
Support for multiple (say `n') unget calls is implemented by
reserving an initial section of the input buffer, which is gradually filled up
to contain the last n characters read. The class is implemented as
follows:
The class FdUnget is derived from std::streambuf. It
defines several data members, allowing the class to perform the bookkeeping
required to maintain an unget-buffer of configurable size:
class FdUnget: public std::streambuf
{
int d_fd;
size_t d_bufsize;
size_t d_reserved;
char *d_buffer;
char *d_base;
public:
FdUnget(int fd, size_t bufsz, size_t unget);
~FdUnget() override;
private:
int underflow() override;
};
The class's constructor performs the following tasks:
- The unget-buffer occupies the first d_reserved bytes of the class's input buffer.
- The input buffer is at least one byte larger than d_reserved. So, a certain number of bytes may be read. Once d_reserved bytes have been read, at most d_reserved bytes can be ungot.
- The constructor initializes d_base, pointing to a location d_reserved bytes beyond the location represented by d_buffer. This is always the location where buffer refills start.
- Finally, the constructor initializes streambuf's buffer pointers using setg. As no characters have been read yet, all pointers are set to point to d_base. If unget is called at this point, no characters are available, and unget (correctly) fails.
FdUnget::FdUnget(int fd, size_t bufsz, size_t unget)
:
d_fd(fd),
d_reserved(unget)
{
size_t allocate =
bufsz > d_reserved ?
bufsz
:
d_reserved + 1;
d_buffer = new char[allocate];
d_base = d_buffer + d_reserved;
setg(d_base, d_base, d_base);
d_bufsize = allocate - d_reserved;
}
inline FdUnget::~FdUnget()
{
delete[] d_buffer;
}
The member underflow is overridden as follows:
- First, underflow determines the number of characters that could potentially be ungot. If that many characters were ungot, the input buffer would be exhausted. So this value may be anything between 0 (the initial state) and the input buffer's size (when the reserved area has been filled up completely, and all current characters in the remaining section of the buffer have also been read).
- Next, the number of characters to move into the reserved area is computed. Normally this is d_reserved, but it is set equal to the actual number of characters that can be ungot if that value is smaller.
- Then that number of characters is copied into the area just before d_base.
- The buffer is then refilled. Reading starts from d_base, not from d_buffer.
- Finally, streambuf's read buffer pointers are set up:
- eback is set to move locations before d_base, thus defining the guaranteed unget-area;
- gptr is set to d_base, since that's the location of the first character read after a refill;
- egptr is set just beyond the location of the last character read into the buffer.
Here is underflow's implementation:
int FdUnget::underflow()
{
size_t ungetsize = gptr() - eback();
size_t move = std::min(ungetsize, d_reserved);
memcpy(d_base - move, egptr() - move, move);
int nread = read(d_fd, d_base, d_bufsize);
if (nread <= 0) // none read -> return EOF
return EOF;
setg(d_base - move, d_base, d_base + nread);
return static_cast<unsigned char>(*gptr());
}
An example using FdUnget
The next example program illustrates the use of the class FdUnget. It
reads at most 10 characters from the standard input, stopping at
EOF. A guaranteed unget-buffer of 2 characters is defined in a buffer
holding 3 characters. Just before reading a character, the program tries to
unget at most 6 characters. This is, of course, not possible; but the program
nicely ungets as many characters as possible, considering the actual
number of characters read:
#include "fdunget.h"
#include <string>
#include <iostream>
#include <istream>
using namespace std;
int main()
{
FdUnget fds(0, 3, 2);
istream is(&fds);
char c;
for (int idx = 0; idx < 10; ++idx)
{
cout << "after reading " << idx << " characters:\n";
for (int ug = 0; ug <= 6; ++ug)
{
if (!is.unget())
{
cout
<< "\tunget failed at attempt " << (ug + 1) << "\n"
<< "\trereading: '";
is.clear();
while (ug--)
{
is.get(c);
cout << c;
}
cout << "'\n";
break;
}
}
if (!is.get(c))
{
cout << " reached\n";
break;
}
cout << "Next character: " << c << '\n';
}
}
/*
Generated output after 'echo abcde | program':
after reading 0 characters:
unget failed at attempt 1
rereading: ''
Next character: a
after reading 1 characters:
unget failed at attempt 2
rereading: 'a'
Next character: b
after reading 2 characters:
unget failed at attempt 3
rereading: 'ab'
Next character: c
after reading 3 characters:
unget failed at attempt 4
rereading: 'abc'
Next character: d
after reading 4 characters:
unget failed at attempt 4
rereading: 'bcd'
Next character: e
after reading 5 characters:
unget failed at attempt 4
rereading: 'cde'
Next character:
after reading 6 characters:
unget failed at attempt 4
rereading: 'de
'
reached
*/
When extracting information from istream objects, operator>>, the
standard extraction operator, is perfectly suited for the task, as in most
cases the extracted fields are white-space (or otherwise clearly) separated
from each other. But this does not hold true in all situations. For example,
when a web-form is posted to some processing script or program, the receiving
program may receive the form field's values as url-encoded
characters: letters and digits are sent unaltered, blanks are sent as +
characters, and all other characters are sent as % followed by the
character's ascii-value represented as a two digit hexadecimal value.
When decoding url-encoded information, simple hexadecimal extraction won't
work, as that extracts as many hexadecimal characters as available,
instead of just two. Since the letters a-f and the digits 0-9 are legal
hexadecimal characters, a text like My name is `Ed', url-encoded as
My+name+is+%60Ed%27
results in the extraction of the hexadecimal values 60ed and 27,
instead of 60 and 27. The name Ed disappears from view, which is
clearly not what we want.
In this case, having seen the %, we could extract 2 characters, put
them in an istringstream object, and extract the hexadecimal value from
the istringstream object. A bit cumbersome, but doable. Other approaches
are possible as well.
The class Fistream (for fixed-sized field istream) defines
an istream class supporting both fixed-sized field extractions and
blank-delimited extractions (as well as unformatted read calls). The
class may be initialized as a wrapper around an existing istream, or
it can be initialized using the name of an existing file. The class is derived
from istream, allowing all extractions and operations supported by
istreams in general. Fistream defines the following data members:
- d_filebuf: a filebuffer used when Fistream reads its information from a named (existing) file. Since the filebuffer is only needed in that case, and since it must be allocated dynamically, it is defined as a unique_ptr<filebuf> object.
- d_streambuf: a pointer to Fistream's streambuf. It points to d_filebuf when Fistream opens a file by name. When an existing istream is used to construct an Fistream, it points to the existing istream's streambuf.
- d_iss: an istringstream object used for the fixed-sized field extractions.
- d_width: a size_t indicating the width of the field to extract. If 0, no fixed-sized field extraction is used; instead, information is extracted from the istream base class object using standard extractions.
Fistream's class interface starts as follows:
class Fistream: public std::istream
{
std::unique_ptr<std::filebuf> d_filebuf;
std::streambuf *d_streambuf;
std::istringstream d_iss;
size_t d_width;
As stated, Fistream objects can be constructed from either a
filename or an existing istream object. The class interface therefore
declares two constructors:
Fistream(std::istream &stream);
Fistream(char const *name,
std::ios::openmode mode = std::ios::in);
When an Fistream object is constructed using an existing istream
object, the Fistream's istream part simply uses the stream's
streambuf object:
Fistream::Fistream(istream &stream)
:
istream(stream.rdbuf()),
d_streambuf(rdbuf()),
d_width(0)
{}
When an Fistream object is constructed using a filename, the
istream base initializer receives a new filebuf object to use as
its streambuf. Since a class's data members are not initialized before
its base class has been constructed, d_filebuf can only be
initialized thereafter. By then, the filebuf is only available through
rdbuf, returning a streambuf *. However, as it actually points to a
filebuf, a static_cast is used to cast the streambuf pointer
returned by rdbuf to a filebuf *, so d_filebuf can be initialized:
Fistream::Fistream(char const *name, ios::openmode mode)
:
istream(new filebuf()),
d_filebuf(static_cast<filebuf *>(rdbuf())),
d_streambuf(d_filebuf.get()),
d_width(0)
{
d_filebuf->open(name, mode);
}
The member setField(field const &) defines the size of the next field to
extract. Its parameter is a reference to a field class, a manipulator
class defining the width of the next field.
Since a field & is mentioned in Fistream's interface, field
must be declared before Fistream's interface starts. The class field
itself is simple and declares Fistream as its friend. It has two data
members: d_width, specifying the width of the next field, and d_newWidth,
which is set to true if d_width's value should actually be used. If
d_newWidth is false, Fistream returns to its standard extraction
mode. The class field has two constructors: a default
constructor, setting d_newWidth to false, and a second constructor
expecting the width of the next field to extract as its value. Here is the
class field:
class field
{
friend class Fistream;
size_t d_width;
bool d_newWidth;
public:
field(size_t width);
field();
};
inline field::field(size_t width)
:
d_width(width),
d_newWidth(true)
{}
inline field::field()
:
d_newWidth(false)
{}
Since field declares Fistream as its friend, setField may
inspect field's members directly.
Time to return to setField. This function expects a reference to a
field object, initialized in one of three different ways:
- field(): when setField's argument is a field object constructed by its default constructor, the next extraction uses the same field width as the previous extraction.
- field(0): when this field object is used as setField's argument, fixed-sized field extraction stops, and the Fistream acts like any standard istream object again.
- field(x): when the field object itself is initialized by a non-zero size_t value x, the next field is x characters wide. The preparation of such a field is left to setBuffer, Fistream's only private member.
Here is setField's implementation:
std::istream &Fistream::setField(field const &params)
{
if (params.d_newWidth) // new field size requested
d_width = params.d_width; // set new width
if (!d_width) // no width?
rdbuf(d_streambuf); // return to the old buffer
else
setBuffer(); // define the extraction buffer
return *this;
}
The private member setBuffer defines a buffer of d_width + 1
characters and uses read to fill the buffer with d_width
characters, terminated by a 0-byte: the buffer is an NTBS. This buffer is
used to initialize the d_iss member. Finally, Fistream's rdbuf member
is used to switch to d_iss's buffer, so that subsequent extractions use
the fixed-sized buffer:
void Fistream::setBuffer()
{
char *buffer = new char[d_width + 1];
rdbuf(d_streambuf); // use istream's buffer to
buffer[read(buffer, d_width).gcount()] = 0; // read d_width chars,
// terminated by a 0-byte
d_iss.str(buffer);
delete[] buffer;
rdbuf(d_iss.rdbuf()); // switch buffers
}
Although setField could be used to configure Fistream to use or
not to use fixed-sized field extraction, using manipulators is probably
preferable. To allow field objects to be used as manipulators, an
overloaded extraction operator was defined, accepting an
istream & and a field const & object. Using this extraction
operator, statements like
fis >> field(2) >> x >> field(0);
are possible (assuming fis is an Fistream object). Here is the
overloaded operator>>, as well as its declaration:
istream &std::operator>>(istream &str, field const &params)
{
return static_cast<Fistream *>(&str)->setField(params);
}
Declaration:
namespace std
{
istream &operator>>(istream &str, FBB::field const &params);
}
Finally, an example. The following program uses an Fistream object to
url-decode url-encoded information appearing at its standard input:
int main()
{
Fistream fis(cin);
fis >> hex;
while (true)
{
size_t x;
switch (x = fis.get())
{
case '\n':
cout << '\n';
break;
case '+':
cout << ' ';
break;
case '%':
fis >> field(2) >> x >> field(0);
// FALLING THROUGH
default:
cout << static_cast<char>(x);
break;
case EOF:
return 0;
}
}
}
/*
Generated output after:
echo My+name+is+%60Ed%27 | a.out
My name is `Ed'
*/
The fork system call is well
known. When a program needs to start a new process, system can be used,
but that function requires the program to wait for the
child process
to terminate. The more general way to spawn subprocesses is to use
fork.
In this section we investigate how C++ can be used to wrap classes around
a complex system call like fork. Much of what follows in this section
directly applies to the Unix operating system, and the discussion therefore
focuses on that operating system. Other systems usually provide
comparable facilities. What follows is closely related to the
Template Design Pattern (cf. Gamma et al. (1995),
Design Patterns, Addison-Wesley).
When fork is called, the current program is duplicated in memory, thus
creating a new process. Following this duplication both processes continue
their execution just below the fork system call. The two processes may
inspect fork's return value: the return value in the
original process (called the parent process) differs from the return
value in the newly created process (called the child process):
- In the parent process, fork returns the process ID of the (child) process that was created by the fork system call. This is a positive integer value.
- In the child process, fork returns 0.
- If fork fails, -1 is returned.
A Fork class should hide all bookkeeping details of a system
call like fork from its users. The class Fork developed here does
just that. The class itself only ensures the proper execution of the fork
system call. Normally, fork is called to start a child process, usually
boiling down to the execution of a separate process. This child process may
expect input at its standard input stream and/or may generate output to its
standard output and/or standard error streams. Fork does not know all
this, and does not have to know what the child process will do. Fork
objects should be able to start their child processes.
Fork's constructor cannot know what actions its child
process should perform. Similarly, it cannot know what actions the parent
process should perform. For these kind of situations, the
template method design pattern
was developed. According to Gamma et al., the template method design
pattern
``Define(s) the skeleton of an algorithm in an operation, deferring some steps to subclasses. [The] Template Method (design pattern) lets subclasses redefine certain steps of an algorithm, without changing the algorithm's structure.''
This design pattern allows us to define an abstract base class
already providing the essential steps related to the fork system call,
deferring the implementation of other parts of the fork system call to
subclasses.
The Fork abstract base class has the following characteristics:
- It has a data member d_pid. In the parent process this data member contains the child's process id and in the child process it has the value 0.
- Its public interface declares only two members:
- the fork member function, responsible for the actual forking (i.e., it creates the (new) child process);
- a virtual destructor ~Fork (having an empty body).
Here is Fork's interface:
class Fork
{
int d_pid;
public:
virtual ~Fork();
void fork();
protected:
int pid() const;
int waitForChild(); // returns the status
private:
virtual void childRedirections();
virtual void parentRedirections();
virtual void childProcess() = 0; // pure virtual members
virtual void parentProcess() = 0;
};
The remaining members are declared in the class's
protected section and can thus only be used by derived classes. They
are:
- pid(): the member function pid allows derived classes to
access the system fork's return value:
inline int Fork::pid() const
{
return d_pid;
}
waitForChild(): The member int waitForChild can be called by
parent processes to wait for the completion of their child
processes (as discussed below). This member is declared in the
class interface. Its implementation is:
#include "fork.ih"
int Fork::waitForChild()
{
int status;
waitpid(d_pid, &status, 0);
return WEXITSTATUS(status);
}
This simple implementation returns the child's exit status to
the parent. The called system function waitpid blocks
until the child terminates.
When fork system calls are used, parent processes
and child processes
must always be distinguished. The main distinction between these
processes is that d_pid becomes the child's process-id in the
parent process, while d_pid becomes 0 in the child process
itself. Since these two processes must always be distinguished (and
present), their implementation by classes derived from Fork is
enforced by Fork's interface: the members childProcess,
defining the child process' actions and parentProcess, defining
the parent process' actions were defined as pure virtual functions.
childRedirections(): this member should be overridden by derived
classes if any standard stream (cin, cout, or cerr) must
be redirected in the child process (cf. section
26.2.3). By default it has an empty implementation;
parentRedirections(): this member should be overridden by derived
classes if any standard stream (cin, cout, or cerr) must
be redirected in the parent process. By default it has an
empty implementation.
void Fork::childRedirections()
{}
void Fork::parentRedirections()
{}
The member fork calls the system function fork
(Caution: since the system function fork is called by a member
function having the same name, the :: scope resolution operator must be
used to prevent a recursive call of the member function itself).
The function ::fork's return value determines whether parentProcess
or childProcess is called. Since redirection may be
required, Fork::fork's implementation calls childRedirections
just before calling childProcess, and parentRedirections just
before calling parentProcess:
#include "fork.ih"
void Fork::fork()
{
if ((d_pid = ::fork()) < 0)
throw "Fork::fork() failed";
if (d_pid == 0) // child process has pid == 0
{
childRedirections();
childProcess();
exit(1); // we shouldn't come here:
} // childProcess() should exit
parentRedirections();
parentProcess();
}
In fork.cc the class's internal header file
fork.ih is included. This header file takes care of the inclusion of the
necessary system header files, as well as the inclusion of fork.h
itself. Its content is:
#include "fork.h"
#include <cstdlib>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
Child processes should not return: once they have completed their tasks,
they should terminate. This happens automatically when the child process
performs a call to a member of the exec... family, but if the child
itself remains active, then it must make sure that it terminates properly. A
child process normally uses exit to terminate itself, but note that
exit prevents the activation of destructors of objects
defined at the same or more superficial nesting levels than the level at
which exit is called. Destructors of globally defined objects are
activated when exit is used. When using exit to terminate
childProcess, that member should either itself call a support member function
defining all the nested objects it needs, or it should define all its objects in a
compound statement (e.g., a try block), calling exit beyond
the compound statement.
Parent processes should normally wait for their children to complete. Terminating child processes inform their parents that they are about to terminate by sending a signal, which should be caught by their parents. If child processes terminate while their parents do not catch those signals, then such child processes remain visible as so-called zombie processes.
If parent processes must wait for their children to complete, they may
call the member waitForChild. This member returns the exit status of a
child process to its parent.
There exists a situation where the child process continues to
live, but the parent dies. This is a fairly natural event: parents tend to
die before their children do. A program in which this happens is called a
daemon program. In a daemon the parent process dies and the child process
continues to run as a child of the basic init process. Again, when the
child eventually dies a signal is sent to its `step-parent' init. This
does not create a zombie as init catches the termination signals of all
its (step-) children. The construction of a daemon process is very simple,
given the availability of the class Fork (cf. section 26.2.4).
Redirection within a C++ program can be implemented using the
ios::rdbuf member function. By assigning the
streambuf of a stream to another stream, both stream objects access the
same streambuf, thus implementing redirection at the level of the
programming language itself.
This may be fine within the context of a C++ program, but once we
leave that context the redirection terminates. The operating system does not
know about streambuf objects. This situation is encountered, e.g., when a
program uses a system call to start a subprogram. The example program at
the end of this section uses C++ redirection to redirect the information
inserted into cout to a file, and then calls
system("echo hello world")
to echo a well-known line of text. Since echo writes its information
to the standard output, this would be the program's redirected file if the
operating system recognized C++'s redirection.
But redirection doesn't happen. Instead, hello world still appears at
the program's standard output and the redirected file is left untouched. To
write hello world to the redirected file redirection must be realized at
the operating system level. Some operating systems (e.g., Unix and
friends) provide system calls like dup and dup2 to accomplish
this. Examples of the use of these system calls are given in section
26.2.5.
Here is the example of the failing redirection at the system level
following C++ redirection using streambuf redirection:
#include <iostream>
#include <fstream>
#include <cstdlib>
using namespace std;
int main()
{
ofstream of("outfile");
streambuf *buf = cout.rdbuf(of.rdbuf());
cout << "To the of stream\n";
system("echo hello world");
cout << "To the of stream\n";
cout.rdbuf(buf);
}
/*
Generated output: on the file `outfile'
To the of stream
To the of stream
On standard output:
hello world
*/
One frequently occurring application of fork is to start a
child process. The parent process terminates immediately after spawning the
child process. If this happens, the child process continues to run as a child
process of init, the always running first process on Unix systems. Such
a process is often called a daemon, running as a background process.
Although the next example can easily be constructed as a plain C
program, it was included in the C++ Annotations because it is so closely
related to the current discussion of the Fork class. I thought about
adding a daemon member to that class, but eventually decided against it
because the construction of a daemon program is very simple and requires no
features other than those currently offered by the class Fork. Here is an
example illustrating the construction of such a daemon program. Its child
process doesn't call exit but throws 0, which is caught by the catch
clause of the child's main function. Doing this ensures that any objects
defined by the child process are properly destroyed:
#include <iostream>
#include <unistd.h>
#include "fork.h"
class Daemon: public Fork
{
void parentProcess() override // the parent does nothing.
{}
void childProcess() override // actions by the child
{
sleep(3);
// just a message...
std::cout << "Hello from the child process\n";
throw 0; // The child process ends
}
};
int main()
try
{
Daemon{}.fork();
}
catch(...)
{}
/*
Generated output:
The next command prompt, then after 3 seconds:
Hello from the child process
*/
Pipes are created using the pipe system
call, providing file descriptors for their two ends. When two processes want
to communicate using such file descriptors, the
following happens:
First, a pipe is created using the pipe system call. One of the file descriptors is used for writing, the
other file descriptor is used for reading.
Next, the process forks (i.e., the fork function is called),
duplicating the file descriptors. Now we have four file descriptors as
the child process and the parent process both have their own copies of the two
file descriptors created by pipe.
This scheme is implemented in the Pipe class
developed here. Let's have a look at its characteristics (before using
functions like pipe and dup the compiler must have read the
<unistd.h> header file):
The pipe system call expects a pointer to two int values,
representing, respectively, the file descriptor used for reading and the file
descriptor used for writing. To avoid confusion, the class Pipe defines an
enum having values associating the indices of the array of 2-ints with
symbolic constants. The two file descriptors themselves are stored in a data
member d_fd. Here is the initial section of the class's interface:
class Pipe
{
enum RW { READ, WRITE };
int d_fd[2];
Pipe's constructor calls pipe to create a set of associated file descriptors used for
accessing both ends of a pipe:
Pipe::Pipe()
{
if (pipe(d_fd))
throw "Pipe::Pipe(): pipe() failed";
}
The members readOnly and readFrom are used to configure the
pipe's reading end. The latter function is used when redirection is required. It is
provided with an alternate file descriptor to be used for reading from the
pipe. Usually this alternate file descriptor is STDIN_FILENO, allowing
cin to extract information from the pipe. The former function merely
configures the reading end of the pipe. It closes the matching writing
end and returns a file descriptor that can be used to read from the pipe:
int Pipe::readOnly()
{
close(d_fd[WRITE]);
return d_fd[READ];
}
void Pipe::readFrom(int fd)
{
readOnly();
redirect(d_fd[READ], fd);
close(d_fd[READ]);
}
The member writeOnly and two writtenBy members are available to
configure the writing end of a pipe. The former function is only used to
configure the writing end of the pipe. It closes the reading end, and
returns a file descriptor that can be used for writing to the pipe:
int Pipe::writeOnly()
{
close(d_fd[READ]);
return d_fd[WRITE];
}
void Pipe::writtenBy(int fd)
{
writtenBy(&fd, 1);
}
void Pipe::writtenBy(int const *fd, size_t n)
{
writeOnly();
for (size_t idx = 0; idx < n; idx++)
redirect(d_fd[WRITE], fd[idx]);
close(d_fd[WRITE]);
}
For the latter member two overloaded versions are available:
writtenBy(int fd) is used to configure single
redirection, so that a specific file descriptor (usually STDOUT_FILENO
or STDERR_FILENO) can be used to write to the pipe;
writtenBy(int const *fd, size_t n) may be used
to configure multiple redirection, providing an array argument containing
file descriptors. Information written to any of these file descriptors is
actually written to the pipe.
The class's only private member is redirect, used to set up
redirection through the dup2 system call. This function expects two file
descriptors. The first file descriptor represents a file descriptor that can
be used to access the device's information; the second file descriptor is an
alternate file descriptor that may also be used to access the device's
information. Here is redirect's implementation:
void Pipe::redirect(int fd, int alternateFd)
{
if (dup2(fd, alternateFd) < 0)
throw "Pipe: redirection failed";
}
Having constructed the classes Fork and Pipe,
we'll now use them in various example programs.
The class ParentSlurp, derived from Fork, starts a child process
executing a stand-alone program (like /bin/ls). The (standard) output of
the executed program is not shown on the screen but is read by the parent
process.
For demonstration purposes the parent process writes the lines it
receives to its standard output stream, prepending line numbers to the
lines. It is attractive to redirect the parent's standard input stream to
allow the parent to read the output from the child process using its
std::cin input stream. Therefore, the only pipe in the program is used
as an input pipe for the parent, and an output pipe for the child.
The class ParentSlurp has the following characteristics:
It is derived from Fork. Before ParentSlurp's class
interface can be processed, the compiler must have read fork.h and pipe.h. The class
only uses one data member, a Pipe object d_pipe.
Since Pipe's constructor already creates a pipe and d_pipe
is automatically initialized by ParentSlurp's default constructor, which
is implicitly provided, all additional members only exist for
ParentSlurp's own benefit and can be defined in the class's (implicit)
private section. Here is the class's interface:
class ParentSlurp: public Fork
{
Pipe d_pipe;
void childRedirections() override;
void parentRedirections() override;
void childProcess() override;
void parentProcess() override;
};
The childRedirections member configures the writing end of the
pipe. So, all information written to the child's standard output stream ends
up in the pipe. The big advantage of this is that no additional streams are
needed to write to a file descriptor:
inline void ParentSlurp::childRedirections()
{
d_pipe.writtenBy(STDOUT_FILENO);
}
The parentRedirections member configures the reading end of
the pipe. It does so by connecting the reading end of the pipe to the parent's
standard input file descriptor (STDIN_FILENO). This allows the parent to
perform extractions from cin, not requiring any additional streams for
reading.
inline void ParentSlurp::parentRedirections()
{
d_pipe.readFrom(STDIN_FILENO);
}
The childProcess member only needs to concentrate on its own
actions. As it merely executes a program (writing information to its
standard output), the member can consist of one single statement:
inline void ParentSlurp::childProcess()
{
execl("/bin/ls", "/bin/ls", static_cast<char *>(nullptr));
}
The parentProcess member simply `slurps' the information
appearing at its standard input. Doing so, it actually reads the child's
output. It copies the received lines to its standard output stream prefixing
line numbers to them:
void ParentSlurp::parentProcess()
{
std::string line;
size_t nr = 1;
while (getline(std::cin, line))
std::cout << nr++ << ": " << line << '\n';
waitForChild();
}
The program's main function simply creates a ParentSlurp object, and
calls its fork() member. Its output consists of a numbered list of files
in the directory where the program is started. Note that the program also
needs the fork.o, pipe.o and waitforchild.o object files (see
earlier sources):
int main()
{
ParentSlurp{}.fork();
}
/*
Generated Output (example only, actually obtained output may differ):
1: a.out
2: bitand.h
3: bitfunctional
4: bitnot.h
5: daemon.cc
6: fdinseek.cc
7: fdinseek.h
...
*/
The monitor program accepts the following commands:
start: this starts a new child process. The parent returns the
child's ID (a number) to the user. The ID is thereupon used to identify a
particular child process;
<nr> text sends ``text'' to the child process having ID
<nr>;
stop <nr> terminates the child process having ID <nr>;
exit terminates the parent as well as all its child processes.
A problem with programs like our monitor is that they allow
asynchronous input from multiple sources. Input may appear at the
standard input as well as at the input-sides of pipes. Also, multiple output
channels are used. To handle situations like these, the select system
call was developed.
The select system call was developed to handle asynchronous
I/O multiplexing, e.g., input appearing
simultaneously at a set of file descriptors.
The select function is rather complex, and its full discussion is
beyond the C++ Annotations' scope. By encapsulating select in a class
Selector, hiding its details and offering an intuitively attractive
interface, its use is simplified. The Selector class has these
features:
Since Selector's members are very small,
most members can be implemented inline. The class requires quite a few data
members. Most of these data members belong to types that require some system
headers to be included first:
#include <limits.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/types.h>
The fd_set type is designed to be used by
select; variables of this type contain the
set of file descriptors on which select may sense some
activity. Furthermore, select allows us to fire an
asynchronous alarm. To set the alarm time, the class Selector
defines a timeval data member. Other members are used for internal
bookkeeping purposes. Here is the class Selector's interface:
class Selector
{
fd_set d_read;
fd_set d_write;
fd_set d_except;
fd_set d_ret_read;
fd_set d_ret_write;
fd_set d_ret_except;
timeval d_alarm;
int d_max;
int d_ret;
int d_readidx;
int d_writeidx;
int d_exceptidx;
public:
Selector();
int exceptFd();
int nReady();
int readFd();
int wait();
int writeFd();
void addExceptFd(int fd);
void addReadFd(int fd);
void addWriteFd(int fd);
void noAlarm();
void rmExceptFd(int fd);
void rmReadFd(int fd);
void rmWriteFd(int fd);
void setAlarm(int sec, int usec = 0);
private:
int checkSet(int *index, fd_set &set);
void addFd(fd_set *set, int fd);
};
Selector(): the (default) constructor. It
clears the read, write, and exception fd_set variables, and switches off the
alarm. Except for d_max, the remaining data members do not require
specific initializations:
Selector::Selector()
{
FD_ZERO(&d_read);
FD_ZERO(&d_write);
FD_ZERO(&d_except);
noAlarm();
d_max = 0;
}
int wait(): this member blocks until the alarm times
out or until activity is sensed at any of the file descriptors monitored by
the Selector object. It throws an exception when the select system
call itself fails:
int Selector::wait()
{
timeval t = d_alarm;
d_ret_read = d_read;
d_ret_write = d_write;
d_ret_except = d_except;
d_readidx = 0;
d_writeidx = 0;
d_exceptidx = 0;
d_ret = select(d_max, &d_ret_read, &d_ret_write, &d_ret_except,
t.tv_sec == -1 && t.tv_usec == -1 ? 0 : &t);
if (d_ret < 0)
throw "Selector::wait()/select() failed";
return d_ret;
}
int nReady(): this member function's return value is only
defined when wait has returned. In that case it returns 0 for an
alarm-timeout, -1 if select failed, and otherwise the number of file
descriptors on which activity was sensed:
inline int Selector::nReady()
{
return d_ret;
}
int readFd(): this member function's return
value is also only defined after wait has returned. Its return value is
-1 if no (more) input file descriptors are available. Otherwise the next file
descriptor available for reading is returned:
inline int Selector::readFd()
{
return checkSet(&d_readidx, d_ret_read);
}
int writeFd(): operating analogously to readFd, it
returns the next file descriptor to which output can be written. It uses
d_writeidx and d_ret_write and is implemented analogously to
readFd;
int exceptFd(): operating analogously to readFd, it
returns the next exception file descriptor on which activity was sensed. It
uses d_exceptidx and d_ret_except and is implemented analogously to
readFd;
void setAlarm(int sec, int usec = 0): this member
activates Selector's alarm facility. At least the number of seconds to wait
for the alarm to go off must be specified. It simply assigns values to
d_alarm's fields. At the next Selector::wait call, the alarm fires
(i.e., wait returns with return value 0) once the configured
alarm-interval has passed:
inline void Selector::setAlarm(int sec, int usec)
{
d_alarm.tv_sec = sec;
d_alarm.tv_usec = usec;
}
void noAlarm(): this member switches off the alarm by
setting the alarm fields to -1, which wait passes to select as a null
timeout pointer, making select block indefinitely:
inline void Selector::noAlarm()
{
setAlarm(-1, -1);
}
void addReadFd(int fd): this member adds a
file descriptor to the set of input file descriptors monitored by the
Selector object. The member function wait returns once input is
available at the indicated file descriptor:
inline void Selector::addReadFd(int fd)
{
addFd(&d_read, fd);
}
void addWriteFd(int fd): this member adds a file
descriptor to the set of output file descriptors monitored by the Selector
object. The member function wait returns once output can be written to
the indicated file descriptor. Using d_write, it is implemented
analogously to addReadFd;
void addExceptFd(int fd): this member adds a file
descriptor to the set of exception file descriptors to be monitored by the
Selector object. The member function wait returns once activity
is sensed at the indicated file descriptor. Using d_except, it is
implemented analogously to addReadFd;
void rmReadFd(int fd): this member removes a file
descriptor from the set of input file descriptors monitored by the
Selector object:
inline void Selector::rmReadFd(int fd)
{
FD_CLR(fd, &d_read);
}
void rmWriteFd(int fd): this member removes a file
descriptor from the set of output file descriptors monitored by the
Selector object. Using d_write, it is implemented analogously to
rmReadFd;
void rmExceptFd(int fd): this member removes a file
descriptor from the set of exception file descriptors to be monitored by the
Selector object. Using d_except, it is implemented analogously to
rmReadFd;
Finally, two members are defined in the class's private section:
addFd adds a file descriptor to an fd_set:
void Selector::addFd(fd_set *set, int fd)
{
FD_SET(fd, set);
if (fd >= d_max)
d_max = fd + 1;
}
checkSet tests whether a file descriptor (*index)
is found in an fd_set:
int Selector::checkSet(int *index, fd_set &set)
{
int &idx = *index;
while (idx < d_max && !FD_ISSET(idx, &set))
++idx;
return idx == d_max ? -1 : idx++;
}
The monitor program uses a Monitor object doing most of the
work. The class Monitor's public interface only offers a default
constructor and one member, run, to perform its tasks. All other member
functions are located in the class's private section.
Monitor defines the private enum Commands, symbolically
listing the various commands its input language supports, as well as several
data members. Among the data members are a Selector object and a map
using child order numbers as its keys and pointers to Child objects (see
section 26.2.7.7) as its values. Furthermore, Monitor has a static array
member s_handler[], storing pointers to member functions handling user
commands.
A destructor should be implemented as well, but its implementation is left
as an exercise to the reader. Here is Monitor's interface, including the
interface of the nested class Find that is used to create a function
object:
class Monitor
{
enum Commands
{
UNKNOWN,
START,
EXIT,
STOP,
TEXT,
sizeofCommands
};
using MapIntChild = std::map<int, std::shared_ptr<Child>>;
friend class Find;
class Find
{
int d_nr;
public:
Find(int nr);
bool operator()(MapIntChild::value_type &vt) const;
};
Selector d_selector;
int d_nr;
MapIntChild d_child;
static void (Monitor::*s_handler[])(int, std::string const &);
static int s_initialize;
public:
enum Done
{};
Monitor();
void run();
private:
static void killChild(MapIntChild::value_type it);
static int initialize();
Commands next(int *value, std::string *line);
void processInput();
void processChild(int fd);
void createNewChild(int, std::string const &);
void exiting(int = 0, std::string const &msg = std::string{});
void sendChild(int value, std::string const &line);
void stopChild(int value, std::string const &);
void unknown(int, std::string const &);
};
Since there's only one non-class type data member, the class's constructor is a very simple function that can be implemented inline:
inline Monitor::Monitor()
:
d_nr(0)
{}
The array s_handler, storing pointers to member functions, needs to be
initialized as well. This can be accomplished in several ways:
Since the Commands enumeration only specifies a fairly limited
set of commands, compile-time initialization could be considered:
void (Monitor::*Monitor::s_handler[])(int, string const &) =
{
&Monitor::unknown, // order follows enum Command's
&Monitor::createNewChild, // elements
&Monitor::exiting,
&Monitor::stopChild,
&Monitor::sendChild,
};
The advantage of this is that it's simple, not requiring any run-time
effort. The disadvantage is of course relatively complex maintenance. If for
some reason Commands is modified, s_handler must be modified as
well. In cases like these, compile-time initialization often is
asking for trouble. There is a simple alternative though.
In Monitor's interface we see a static data member
s_initialize and a static member function initialize. The static
member function handles the initialization of the s_handler array. It
explicitly assigns the array's elements and any modification in ordering of
enum Commands' values is automatically accounted for by recompiling
initialize:
void (Monitor::*Monitor::s_handler[sizeofCommands])(int, string const &);
int Monitor::initialize()
{
s_handler[UNKNOWN] = &Monitor::unknown;
s_handler[START] = &Monitor::createNewChild;
s_handler[EXIT] = &Monitor::exiting;
s_handler[STOP] = &Monitor::stopChild;
s_handler[TEXT] = &Monitor::sendChild;
return 0;
}
The member initialize is a static member and so it can be
called to initialize s_initialize, a static int variable. The
initialization is enforced by placing the initialization statement in the
source file of a function that is known to be executed. It could be main,
but if we're Monitor's maintainers and only have control over the library
containing Monitor's code then that's not an option. In those cases the
source file containing the destructor is a very good candidate. If a class
has only one constructor and it's not defined inline then the
constructor's source file is a good candidate as well. In Monitor's
current implementation the initialization statement is put in run's source
file, reasoning that s_handler is only needed when run is used.
Monitor's core activities are performed by run. It
performs the following tasks:
Initially, the Monitor object only monitors its standard
input: the set of input file descriptors to which d_selector listens
is initialized to STDIN_FILENO.
Then, in a continuous loop, d_selector's wait function is called.
If input on cin is available, it is processed by processInput.
Otherwise, the input has arrived from a child process. Information sent by
children is processed by processChild.
As noted by Ben Simons (ben at mrxfx dot com) Monitor must not catch
the termination signals. Instead, the process spawning child processes has
that responsibility (the underlying principle being that a parent process is
responsible for its child processes; a child process, in turn, is responsible
for its own child processes).
run's source file also defines and initializes
s_initialize to ensure the proper initialization of the s_handler
array.
Here are run's implementation and s_initialize's definition:
#include "monitor.ih"
int Monitor::s_initialize = Monitor::initialize();
void Monitor::run()
{
d_selector.addReadFd(STDIN_FILENO);
while (true)
{
cout << "? " << flush;
try
{
d_selector.wait();
int fd;
while ((fd = d_selector.readFd()) != -1)
{
if (fd == STDIN_FILENO)
processInput();
else
processChild(fd);
}
cout << "NEXT ...\n";
}
catch (char const *msg)
{
exiting(1, msg);
}
}
}
The member function processInput reads the commands entered by the
user using the program's standard input stream. The member itself is rather
simple. It calls next to obtain the next command entered by the user, and
then calls the corresponding function using the matching element of the
s_handler[] array. Here are the members processInput and next:
void Monitor::processInput()
{
string line;
int value;
Commands cmd = next(&value, &line);
(this->*s_handler[cmd])(value, line);
}
Monitor::Commands Monitor::next(int *value, string *line)
{
if (!getline(cin, *line))
exiting(1, "Monitor::next(): reading cin failed");
if (*line == "start")
return START;
if (*line == "exit" || *line == "quit")
{
*value = 0;
return EXIT;
}
if (line->find("stop") == 0)
{
istringstream istr(line->substr(4));
istr >> *value;
return !istr ? UNKNOWN : STOP;
}
istringstream istr(line->c_str());
istr >> *value;
if (istr)
{
getline(istr, *line);
return TEXT;
}
return UNKNOWN;
}
All other input sensed by d_selector is created by child
processes. Because d_selector's readFd member returns the corresponding
input file descriptor, this descriptor can be passed to
processChild. Using an IFdStreambuf (see section 26.1.2.1), its
information is read from an input stream. The communication protocol used here
is rather basic. For every line of input sent to a child, the child replies by
sending back exactly one line of text. This line is then read by
processChild:
void Monitor::processChild(int fd)
{
IFdStreambuf ifdbuf(fd);
istream istr(&ifdbuf);
string line;
getline(istr, line);
cout << d_child[fd]->pid() << ": " << line << '\n';
}
The construction d_child[fd]->pid() used in the above source deserves
some special attention. Monitor defines the data member map<int,
shared_ptr<Child>> d_child. This map contains the child's order number
as its key, and a (shared) pointer to the Child object as its value. A
shared pointer is used here, rather than a Child object, since we want to
use the facilities offered by the map, but don't want to copy a Child
object time and again.
Now that run's implementation has been covered, we'll concentrate on
the various commands users might enter:
When the start command is issued, a new child process is started.
A new element is added to d_child by the member createNewChild. Next,
the Child object should start its activities, but the Monitor object
cannot wait for the child process to complete its activities, as there is no
well-defined endpoint in the near future, and the user probably wants to be
able to enter more commands. Therefore, the Child process must run as a
daemon. So the forked process terminates immediately, but its own child
process continues to run (in the background). Consequently,
createNewChild calls the child's fork member. Although it is the
child's fork function that is called, it is still the monitor program
wherein that fork function is called. So, the monitor program is
duplicated by fork. Execution then continues:
Child's parentProcess in its parent process;
Child's childProcess in its child process
As Child's parentProcess is an empty function, returning
immediately, the Child's parent process effectively continues immediately
below createNewChild's cp->fork() statement. As the child process
never returns (see section 26.2.7.7), the code below cp->fork() is never
executed by the Child's child process. This is exactly as it should be.
In the parent process, createNewChild's remaining code simply
adds the file descriptor that's available for reading information from the
child to the set of input file descriptors monitored by d_selector, and
uses d_child to establish the association between that
file descriptor and the Child object's address:
void Monitor::createNewChild(int, string const &)
{
Child *cp = new Child{ ++d_nr };
cp->fork();
int fd = cp->readFd();
d_selector.addReadFd(fd);
d_child[fd].reset(cp);
cerr << "Child " << d_nr << " started\n";
}
Next, consider the stop <nr>
and <nr> text commands. The former command terminates child process
<nr> by calling stopChild. This function locates the child process
having that order number using an anonymous object of the class Find,
nested inside Monitor. The class Find simply compares the
provided nr with the children's order number returned by their nr
members:
inline Monitor::Find::Find(int nr)
:
d_nr(nr)
{}
inline bool Monitor::Find::operator()(MapIntChild::value_type &vt) const
{
return d_nr == vt.second->nr();
}
If the child process having order number nr was found, its file
descriptor is removed from d_selector's set of input file
descriptors. Then the child process itself is terminated by the static member
killChild. The member killChild is declared as a static member
function, as it is used as function argument of the for_each generic
algorithm by exiting (see below). Here is killChild's
implementation:
void Monitor::killChild(MapIntChild::value_type it)
{
if (kill(it.second->pid(), SIGTERM))
cerr << "Couldn't kill process " << it.second->pid() << '\n';
// reap defunct child process
int status = 0;
while (waitpid(it.second->pid(), &status, WNOHANG) > -1)
;
}
Having terminated the specified child process, the corresponding Child
object is destroyed and its pointer is removed from d_child:
void Monitor::stopChild(int nr, string const &)
{
auto it = find_if(d_child.begin(), d_child.end(), Find{ nr });
if (it == d_child.end())
cerr << "No child number " << nr << '\n';
else
{
d_selector.rmReadFd(it->second->readFd());
d_child.erase(it);
}
}
The command <nr> text sends text to child process nr
using the member function sendChild. This function also uses a Find
object to locate the child process having order number nr, and simply
inserts the text into the writing end of the pipe connected to that child
process:
void Monitor::sendChild(int nr, string const &line)
{
auto it = find_if(d_child.begin(), d_child.end(), Find(nr));
if (it == d_child.end())
cerr << "No child number " << nr << '\n';
else
{
OFdnStreambuf ofdn{ it->second->writeFd() };
ostream out(&ofdn);
out << line << '\n';
}
}
When the user enters exit or quit the member exiting is
called. It terminates all child processes, using the for_each generic
algorithm (see section 19.1.18) to visit all elements of
d_child. Then the program itself ends:
void Monitor::exiting(int value, string const &msg)
{
for_each(d_child.begin(), d_child.end(), killChild);
if (msg.length())
cerr << msg << '\n';
throw value;
}
The main function is simple and needs no further comment:
int main()
try
{
Monitor{}.run();
}
catch (int exitValue)
{
return exitValue;
}
When a Monitor object starts a child process, it creates an object
of the class Child. The Child class is derived from the class
Fork, allowing it to operate as a daemon (as discussed in the
previous section). Since Child is a daemon class, its parentProcess member
is defined as an empty function, while its childProcess member
has a non-empty implementation. Here are the characteristics of the class
Child:
The Child class has two Pipe data members, handling the
communications between its own child and parent processes. As these pipes are
used by the Child's child process, their names refer to the child
process: the child process reads from d_in, and writes to d_out. Here
is the interface of the class Child:
class Child: public Fork
{
Pipe d_in;
Pipe d_out;
int d_parentReadFd;
int d_parentWriteFd;
int d_nr;
public:
Child(int nr);
~Child() override;
int readFd() const;
int writeFd() const;
int pid() const;
int nr() const;
private:
void childRedirections() override;
void parentRedirections() override;
void childProcess() override;
void parentProcess() override;
};
Child's constructor simply stores its argument, a
child-process order number, in its own d_nr data member:
inline Child::Child(int nr)
:
d_nr(nr)
{}
Child's child process obtains commands from its standard
input stream and writes its output to its standard output stream. Since the
actual communication channels are pipes, redirections must be used. The
childRedirections member looks like this:
void Child::childRedirections()
{
d_in.readFrom(STDIN_FILENO);
d_out.writtenBy(STDOUT_FILENO);
}
The parent process uses the complementary pipe-ends: it writes to d_in and
reads from d_out. Here is parentRedirections:
void Child::parentRedirections()
{
d_parentReadFd = d_out.readOnly();
d_parentWriteFd = d_in.writeOnly();
}
A Child object exists until it is destroyed by the
Monitor's stopChild member. By allowing its creator, the Monitor
object, to access the parent-side ends of the pipes, the Monitor object
can communicate with the Child's child process via those pipe-ends. The
members readFd and writeFd allow the Monitor object to access
these pipe-ends:
inline int Child::readFd() const
{
return d_parentReadFd;
}
inline int Child::writeFd() const
{
return d_parentWriteFd;
}
A Child object's child process performs two tasks: it periodically reports
that it is still standing by, and it echoes the lines sent to it by its
parent. To perform these tasks childProcess defines a local
Selector object, adding STDIN_FILENO to its set of monitored input
file descriptors.
Then, in an endless loop, childProcess waits for selector.wait()
to return. When the alarm goes off it sends a message to its standard output
(hence, into the writing pipe). Otherwise, it echoes the messages appearing
at its standard input to its standard output. Here is the childProcess
member:
void Child::childProcess()
{
Selector selector;
size_t message = 0;
selector.addReadFd(STDIN_FILENO);
selector.setAlarm(5);
while (true)
{
try
{
if (!selector.wait()) // timeout
cout << "Child " << d_nr << ": standing by\n";
else
{
string line;
getline(cin, line);
cout << "Child " << d_nr << ":" << ++message << ": " <<
line << '\n';
}
}
catch (...)
{
cout << "Child " << d_nr << ":" << ++message << ": " <<
"select() failed" << '\n';
}
}
exit(0);
}
Two accessors allow the Monitor object to obtain
the Child's process ID and its order number:
inline int Child::pid() const
{
return Fork::pid();
}
inline int Child::nr() const
{
return d_nr;
}
A Child process terminates when the user enters a stop
command. When an existing child process number was entered, the corresponding
Child object is removed from Monitor's d_child map. As a result,
its destructor is called. Child's destructor calls kill to terminate
its child, and then waits for the child to terminate. Once its child has
terminated, the destructor has completed its work and returns, thus completing
the erasure from d_child. The current implementation fails if the child
process doesn't react to the SIGTERM signal. In this demonstration program
this does not happen. In `real life' more elaborate killing-procedures may be
required (e.g., using SIGKILL in addition to SIGTERM). As discussed in
section 10.12 it is important to ensure the proper destruction of such
objects. Here is the Child's destructor:
Child::~Child()
{
if (pid())
{
cout << "Killing process " << pid() << "\n";
kill(pid(), SIGTERM);
int status;
wait(&status);
}
}
Binary operators accepting const & arguments can be implemented using a
member performing the actual operation, offering only the basic exception
guarantee. This latter function can in turn be implemented using the compound
assignment member. The following examples illustrate this approach for a
fictitious class Binary:
class Binary
{
public:
Binary();
Binary(int value);
// copy and move constructors are available by default, or
// they can be explicitly declared and implemented.
Binary &operator+=(Binary const &other) &; // see the text
Binary &&operator+=(Binary const &other) &&;
private:
void add(Binary const &rhs);
friend Binary operator+(Binary const &lhs, Binary const &rhs);
friend Binary operator+(Binary &&lhs, Binary const &rhs);
};
Eventually, the implementation of binary operators depends on the availability
of the member implementing the basic binary operation, modifying the object
calling that member (i.e., void Binary::add(Binary const &) in the
example).
Since template functions are only instantiated when they are actually used, a template function may call functions that do not exist: if such a template function is never instantiated, nothing happens; if it is (accidentally) instantiated, the compiler generates an error message, complaining about the missing function.
This allows us to implement all binary operators, movable and non-movable, as
templates. In the following subsections we develop the class template
Binops, providing binary operators. A complete implementation of a class
Derived illustrating how addition and insertion operators can be added to
a class is provided in the file
annotations/yo/concrete/examples/binopclasses.cc in the C++ Annotations'
source archive.
The binary operators developed so far use the member add. This is less
attractive when developing function templates, as add is a private member,
requiring us to provide friend declarations for all function templates so they
may access the private add member.
At the end of section 11.7 we saw that add's implementation
can be provided by operator+=(Class const &rhs) &&. This operator may
thereupon be used when implementing the remaining addition operators:
inline Binary &Binary::operator+=(Binary const &rhs) &
{
return *this = Binary{*this} += rhs;
}
Binary operator+(Binary &&lhs, Binary const &rhs)
{
return std::move(lhs) += rhs;
}
Binary operator+(Binary const &lhs, Binary const &rhs)
{
return Binary{lhs} += rhs;
}
In this implementation add is no longer required. The plain binary
operators are free functions, which supposedly can easily be converted to
function templates. E.g.,
template <typename Binary>
Binary operator+(Binary const &lhs, Binary const &rhs)
{
return Binary{lhs} += rhs;
}
When converting operator+(Binary const &lhs, Binary const &rhs) to a
function template, however, we may encounter a subtle and unexpected
complication. Consider the following program. When run, it displays the value
12, rather than 1:
enum Values
{
ZERO,
ONE
};
template <typename Tp>
Tp operator+(Tp const &lhs, Tp const &rhs)
{
return static_cast<Tp>(12);
}
int main()
{
cout << (ZERO + ONE); // shows 12
}
This complication can be avoided by defining the operators in their own namespace, but then all classes using the binary operator also have to be defined in that namespace, which is not a very attractive restriction. Fortunately, there is a better alternative: using the CRTP (cf. section 22.12).
Binops, using the CRTP the
operators are defined for arguments of the class Binops<Derived>: a base
class receiving the derived class as its template argument.
Thus the class Binops as well as the additional operators are defined,
expecting Binops<Derived> type of arguments:
template <class Derived>
struct Binops
{
Derived &operator+=(Derived const &rhs) &;
};
template <typename Derived>
Derived operator+(Binops<Derived> const &lhs, Derived const &rhs)
{
return Derived{static_cast<Derived const &>(lhs) } += rhs;
}
// analogous implementation for Binops<Derived> &&lhs
This way, a class that derives from Binops, and that provides an
operator+= member which is bound to an rvalue reference object, suddenly
also provides all other binary addition operators:
class Derived: public Binops<Derived>
{
...
public:
...
Derived &&operator+=(Derived const &rhs) &&;
};
All but one...
The operator that's not available is the compound addition operator,
bound to an lvalue reference. As its function name is identical to the one in
the class Derived, it is not automatically visible at the user level.
Although this problem can simply be solved by providing the class Derived
with a using Binops<Derived>::operator+= declaration, it is not a very
attractive solution, as separate using declarations have to be provided for
each binary operator that is implemented in the class Derived.
But a much more attractive solution exists. A beautiful out-of-the-box
solution, completely avoiding the hidden base class operator, was proposed
by Wiebe-Marten Wijnja. Wiebe-Marten conjectured that operator+=, bound
to an lvalue reference, could also very well be defined as a free
function. In that case no inheritance is used and therefore no function hiding
occurs. Consequently, the using declaration can be avoided.
The implementation of this free operator+= function looks like this:
template <class Derived>
Derived &operator+=(Binops<Derived> &lhs, Derived const &rhs)
{
Derived tmp{ Derived{ static_cast<Derived &>(lhs) } += rhs };
tmp.swap(static_cast<Derived &>(lhs));
return static_cast<Derived &>(lhs);
}
The flexibility of this design can be further augmented once we realize that
the right-hand side operand doesn't have to be a Derived class
object. Consider operator<<: oftentimes shifts are bit-shifts, using a
size_t to specify the number of bits to shift. In fact, the type of the
right-hand side operand can completely be generalized by defining a second
template type parameter, which is used to specify the right-hand side's
operand type. It's up to the Derived class to specify the argument type of
its operator+= (or any other binary compound operator), whereafter the
compiler will deduce the types of the right-hand side operands for the
remaining binary operators. Here is the final implementation of the free
operator+= function:
template <class Derived, typename Rhs>
Derived &operator+=(Binops<Derived> &lhs, Rhs const &rhs)
{
Derived tmp{ Derived{ static_cast<Derived &>(lhs) } += rhs };
tmp.swap(static_cast<Derived &>(lhs));
return static_cast<Derived &>(lhs);
}
The class Derived must provide the members void
insert(std::ostream &out) const, inserting an object into an ostream, and
void extract(std::istream &in), extracting an object from an
istream. As these functions are only used by, respectively, the insertion
and extraction operators, they can be declared in the Derived class's
private interface. Instead of declaring the insertion and extraction operators
friends of the class Derived a single friend Binops<Derived> is
specified. This allows Binops<Derived> to define private, inline iWrap
and eWrap members, merely calling, respectively, Derived's insert and
extract members:
template <typename Derived>
inline void Binops<Derived>::iWrap(std::ostream &out) const
{
static_cast<Derived const &>(*this).insert(out);
}
Binops<Derived> then declares the insertion and extraction operators as
its friends, allowing these operators to call, respectively, iWrap and
eWrap. Note that the software engineer designing the class Derived
only has to provide a friend Binops<Derived> declaration. Here is the
implementation of the overloaded insertion operator:
template <typename Derived>
std::ostream &operator<<(std::ostream &out, Binops<Derived> const &obj)
{
obj.iWrap(out);
return out;
}
This completes the coverage of the essentials of a class template Binops
potentially offering binary operators and insertion/extraction operators for
any class derived from Binops. Finally, as noted at the beginning of this
section, a complete implementation of a class offering addition and insertion
operators is provided in the file
annotations/yo/concrete/examples/binopclasses.cc in the C++ Annotations'
source archive.
A drawback of operator[] is that it can't distinguish between its
use as an lvalue and its use as an rvalue. It is a common misconception to
think that
Type const &operator[](size_t index) const
is used as rvalue (as the object isn't modified), and that
Type &operator[](size_t index)
is used as lvalue (as the returned value can be modified).
The compiler, however, distinguishes between the two operators only by the
const-status of the objects for which operator[] is called: with
const objects the former operator is called, with non-const objects
the latter is always used, irrespective of whether it is used as lvalue or
rvalue.
Being able to distinguish between lvalues and rvalues can be very
useful. Consider the situation where a class supporting operator[] stores
data of a type that is very hard to copy. With data like that reference
counting (e.g., using shared_ptrs) is probably used to prevent needless
copying.
As long as operator[] is used as rvalue there's no need to copy the data,
but the information must be copied if it is used as lvalue.
The Proxy Design Pattern (cf. Gamma et al. (1995)) can be used to distinguish between lvalues and rvalues. With the Proxy Design Pattern an object of another class (the Proxy class) is used to act as a stand in for the `real thing'. The proxy class offers functionality that cannot be offered by the data themselves, like distinguishing between its use as lvalue or rvalue. A proxy class can be used in many situations where access to the real data cannot or should not be directly provided. In this regard iterator types are examples of proxy classes as they create a layer between the real data and the software using the data. Proxy classes could also dereference pointers in a class storing its data by pointers.
In this section we concentrate on the distinction between using operator[]
as lvalue and rvalue. Let's assume we have a class Lines storing lines
read from a file. Its constructor expects a reference to the stream from which
the lines are read, and it offers a non-const operator[] that can be used
as lvalue or rvalue (the const version of operator[] is omitted, as it
causes no confusion: it is always used as rvalue):
class Lines
{
std::vector<std::string> d_line;
public:
Lines(std::istream &in);
std::string &operator[](size_t idx);
};
To distinguish between lvalues and rvalues we must find distinguishing
characteristics of lvalues and rvalues that we can exploit. Such
distinguishing characteristics are operator= (which is always used as
lvalue) and the conversion operator (which is always used as rvalue). Rather
than having operator[] return a string & we can let it return a
Proxy object that is able to distinguish between its use as lvalue
and rvalue.
The class Proxy thus needs operator=(string const &other) (acting as
lvalue) and operator std::string const &() const (acting as rvalue). Do we
need more operators? The std::string class also offers operator+=, so
we should probably implement that operator as well. Plain characters can also
be assigned to string objects (even using their numeric values). As
string objects cannot be constructed from plain characters,
promotion cannot be used with operator=(string const &other) if the
right-hand side argument is a character. Implementing operator=(char
value) could therefore also be considered. These additional operators are
left out of the current implementation but `real life' proxy classes should
consider implementing these additional operators as well. Another subtlety is
that Proxy's operator std::string const &() const is not used
when using ostream's insertion operator or istream's extraction
operator as these operators are implemented as templates not recognizing our
Proxy class type. So when stream insertion and extraction is required (it
probably is) then Proxy must be given its own overloaded insertion and
extraction operator. Here is an implementation of the overloaded insertion
operator inserting the object for which Proxy is a stand-in:
inline std::ostream &operator<<(std::ostream &out, Lines::Proxy const &proxy)
{
return out << static_cast<std::string const &>(proxy);
}
There's no need for any code (except Lines) to create or copy Proxy
objects. Proxy's constructor should therefore be made private, and
Proxy can declare Lines to be its friend. In fact, Proxy is
intimately related to Lines and can be defined as a nested class. In the
revised Lines class operator[] no longer returns a string but
instead a Proxy is returned. Here is the revised Lines class,
including its nested Proxy class:
class Lines
{
std::vector<std::string> d_line;
public:
class Proxy;
Proxy operator[](size_t idx);
class Proxy
{
friend Proxy Lines::operator[](size_t idx);
std::string &d_str;
Proxy(std::string &str);
public:
std::string &operator=(std::string const &rhs);
operator std::string const &() const;
};
Lines(std::istream &in);
};
Proxy's members are very lightweight and can usually be implemented
inline:
inline Lines::Proxy::Proxy(std::string &str)
:
d_str(str)
{}
inline std::string &Lines::Proxy::operator=(std::string const &rhs)
{
return d_str = rhs;
}
inline Lines::Proxy::operator std::string const &() const
{
return d_str;
}
The member Lines::operator[] can also be implemented inline: it merely
returns a Proxy object initialized with the string associated with
index idx.
Now that the class Proxy has been developed it can be used in a
program. Here is an example using the Proxy object as lvalue or rvalue. On
the surface Lines objects won't behave differently from Lines objects
using the original implementation, but by adding an identifying cout
statement to Proxy's members it can be shown that operator[] behaves
differently when used as lvalue or as rvalue:
int main()
{
ifstream in("lines.cc");
Lines lines(in);
string s = lines[0]; // rvalue use
lines[0] = s; // lvalue use
cout << lines[0] << '\n'; // rvalue use
lines[0] = "hello world"; // lvalue use
cout << lines[0] << '\n'; // rvalue use
}
In section 22.14.1 a nested iterator class was developed whose objects handle the dereferencing of the pointers stored in a vector. This allowed us to sort the strings pointed to by the vector's elements rather than the pointers.
A drawback of that approach is that the class implementing the iterator is closely tied to the derived class, as the iterator class was implemented as a nested class. What if we would like to provide any class derived from a container class storing pointers with an iterator handling pointer-dereferencing?
In this section a variant of the earlier (nested class) approach is discussed. Here the iterator class is defined as a class template, not only parameterizing the data type to which the container's elements point but also the container's iterator type itself. Once again, we concentrate on developing a RandomIterator as it is the most complex iterator type.
Our class is named RandomPtrIterator, indicating that it is a random
iterator operating on pointer values. The class template defines three
template type parameters:
The first parameter is the type of the class derived from the container
(Class). Like before, RandomPtrIterator's
constructor is private. Therefore friend declarations are needed to
allow client classes to construct RandomPtrIterators. However, a
friend class Class declaration cannot be used, as template parameter types
cannot be used in friend class ... declarations. This is but a minor
problem, as not every member of the client class needs to construct
iterators. In fact, only Class's begin and end members must construct
iterators. Using the template's first parameter, friend declarations can be
specified for the client's begin and end members.
The second parameter is the container's own iterator type (BaseIterator);
The third parameter is the data type to which the container's elements point
(Type).
RandomPtrIterator has one private data member, a
BaseIterator. Here is the class interface and the constructor's
implementation:
#include <iterator>
#include <compare>
template <typename Class, typename BaseIterator, typename Type>
struct RandomPtrIterator;
#define PtrIterator RandomPtrIterator<Class, BaseIterator, Type>
#define PtrIteratorValue RandomPtrIterator<Class, BaseIterator, value_type>
template <typename Class, typename BaseIterator, typename Type>
bool operator==(PtrIterator const &lhs, PtrIterator const &rhs);
template <typename Class, typename BaseIterator, typename Type>
auto operator<=>(PtrIterator const &lhs, PtrIterator const &rhs);
template <typename Class, typename BaseIterator, typename Type>
int operator-(PtrIterator const &lhs, PtrIterator const &rhs);
template <typename Class, typename BaseIterator, typename Type>
struct RandomPtrIterator
{
using iterator_category = std::random_access_iterator_tag;
using difference_type = std::ptrdiff_t;
using value_type = Type;
using pointer = value_type *;
using reference = value_type &;
friend PtrIterator Class::begin();
friend PtrIterator Class::end();
friend bool operator==<>(RandomPtrIterator const &lhs,
RandomPtrIterator const &rhs);
friend auto operator<=><>(RandomPtrIterator const &lhs,
RandomPtrIterator const &rhs);
friend int operator-<>(RandomPtrIterator const &lhs,
RandomPtrIterator const &rhs);
private:
BaseIterator d_current;
public:
int operator-(RandomPtrIterator const &rhs) const;
RandomPtrIterator operator+(int step) const;
value_type &operator*() const;
RandomPtrIterator &operator--();
RandomPtrIterator operator--(int);
RandomPtrIterator &operator++();
RandomPtrIterator operator++(int);
RandomPtrIterator operator-(int step) const;
RandomPtrIterator &operator-=(int step);
RandomPtrIterator &operator+=(int step);
value_type *operator->() const;
private:
RandomPtrIterator(BaseIterator const &current);
};
template <typename Class, typename BaseIterator, typename value_type>
PtrIteratorValue::RandomPtrIterator(BaseIterator const &current)
:
d_current(current)
{}
Looking at its friend declarations, we see that the members begin
and end of a class Class, returning a RandomPtrIterator object for
the types Class, BaseIterator and Type, are granted access to
RandomPtrIterator's private constructor. That is exactly what we
want: Class's begin and end members are declared as bound
friends.
All RandomPtrIterator's remaining members are public. Since
RandomPtrIterator is just a generalization of the nested class
iterator developed in section 22.14.1, re-implementing the required
member functions is easy and only requires us to change iterator into
RandomPtrIterator and to change std::string into Type. For
example, the ordering comparison, defined in the class iterator as
operator<, is now implemented as:
template <typename Class, typename BaseIterator, typename Type>
inline auto operator<=>(PtrIterator const &lhs, PtrIterator const &rhs)
{
return **lhs.d_current <=> **rhs.d_current;
}
Some additional examples: operator*, defined in the class
iterator as
inline std::string &StringPtr::iterator::operator*() const
{
return **d_current;
}
is now implemented as:
template <typename Class, typename BaseIterator, typename value_type>
value_type &PtrIteratorValue::operator*() const
{
return **d_current;
}
The pre- and postfix increment operators are now implemented as:
template <typename Class, typename BaseIterator, typename value_type>
PtrIteratorValue &PtrIteratorValue::operator++()
{
++d_current;
return *this;
}
template <typename Class, typename BaseIterator, typename value_type>
PtrIteratorValue PtrIteratorValue::operator++(int)
{
return RandomPtrIterator(d_current++);
}
Remaining members can be implemented accordingly, their actual
implementations are left as exercises to the reader (or can be obtained from
the cplusplus.yo.zip archive, of course).
Re-implementing the class StringPtr developed in section 22.14.1
is not difficult either. Apart from including the header file defining the
class template RandomPtrIterator, it only requires a single modification.
Its iterator using-declaration must now be associated with a
RandomPtrIterator. Here is the full class interface and the class's inline
member definitions:
#ifndef INCLUDED_STRINGPTR_H_
#define INCLUDED_STRINGPTR_H_
#include <vector>
#include <string>
#include "iterator.h"
class StringPtr: public std::vector<std::string *>
{
public:
using iterator =
RandomPtrIterator
<
StringPtr,
std::vector<std::string *>::iterator,
std::string
>;
using reverse_iterator = std::reverse_iterator<iterator>;
iterator begin();
iterator end();
reverse_iterator rbegin();
reverse_iterator rend();
};
inline StringPtr::iterator StringPtr::begin()
{
return iterator(this->std::vector<std::string *>::begin() );
}
inline StringPtr::iterator StringPtr::end()
{
return iterator(this->std::vector<std::string *>::end());
}
inline StringPtr::reverse_iterator StringPtr::rbegin()
{
return reverse_iterator(end());
}
inline StringPtr::reverse_iterator StringPtr::rend()
{
return reverse_iterator(begin());
}
#endif
Including StringPtr's modified header file into the program given in
section 22.14.2 results in a program behaving identically to its
earlier version. In this case StringPtr::begin and StringPtr::end
return iterator objects constructed from a template definition.
The examples in this and subsequent sections assume that the reader knows how
to use the
scanner generator flex and the parser generator bison. Both
bison and flex are well documented elsewhere. Their original
predecessors, called yacc and lex, are
described in several books, e.g. in
O'Reilly's book `lex & yacc'.
Scanner- and parser generators are also available as free software. Both
bison and flex are usually part of software distributions or they can
be obtained from
ftp://prep.ai.mit.edu/pub/non-gnu. Flex creates a C++ class
when %option c++ is specified.
For parser generators the program bison is available. In the early 90's
Alain Coetmeur (coetmeur@icdc.fr) created a
C++ variant (bison++) creating a parser class. Although the
bison++ program produces code that can be used in C++ programs it also
shows many characteristics that are more suggestive of a C context than a
C++ context. In January 2005 I rewrote parts of Alain's bison++
program, resulting in the original version of the program
bisonc++. Then, in May 2005 a complete rewrite of the bisonc++
parser generator was completed (version number 0.98). Current versions of
bisonc++ can be downloaded from
https://fbb-git.gitlab.io/bisoncpp/. Binary versions for various
architectures are available as, e.g., a Debian
package (including bisonc++'s documentation).
Bisonc++ creates a cleaner parser class than bison++. In particular,
it derives the parser class from a base-class, containing the parser's token-
and type-definitions as well as all member functions which should not be
(re)defined by the programmer. As a result of this approach, the generated
parser class is very small, declaring only members that are actually defined
by the programmer (as well as some other members, generated by bisonc++
itself, implementing the parser's parse() member). One member that is
not implemented by default is lex, producing the next lexical
token. When the directive %scanner (see section 26.6.2.1) is used,
bisonc++ produces a standard implementation for this member; otherwise it
must be implemented by the programmer.
In early 2012 the program
flexc++ (http://flexcpp.org/) reached its initial release. Like
bisonc++, it is part of the Debian Linux
distribution.
Jean-Paul van Oosten (jp@jpvanoosten.nl) and Richard Berendsen
(richardberendsen@xs4all.nl) started the flexc++ project in 2008
and the final program was completed by Jean-Paul and me between 2010 and 2012.
These sections of the C++ Annotations focus on bisonc++ as our
parser generator and flexc++ as our lexical scanner
generator. Previous releases of the C++ Annotations used flex as the
scanner generator.
Flexc++ and bisonc++ generate class-based scanners and parsers. The
advantage of this approach is that the interfaces to the scanner
and the parser tend to become cleaner than when no class interfaces are
used. Furthermore, classes allow us to get rid of most, if not all, global
variables, making it easy to use multiple parsers in one program.
Below two example programs are developed. The first example only uses
flexc++. The generated scanner monitors the production of a file from
several parts. That example focuses on the lexical scanner and on switching
files while churning through the information. The second example uses both
flexc++ and bisonc++ to generate a scanner and a parser transforming
standard arithmetic expressions to their postfix notations, commonly used in
code generated by compilers and in HP-calculators. In the second example
the emphasis is mainly on bisonc++ and on composing a scanner object
inside a generated parser.
The input may contain #include directives, followed by a text
string specifying the file (path) which should be included at the location of
the #include directive.
In order to avoid complexities irrelevant to the current example, the format
of the #include statement is restricted to the form #include
<filepath>. The file specified between the angle brackets should be
available at the location indicated by filepath. If the file is not
available, the program terminates after issuing an error message.
The program is started with one or two filename arguments. If the program is
started with just one filename argument, the output is written to the
standard output stream cout. Otherwise, the output is written to
the stream whose name is given as the program's second argument.
The program defines a maximum nesting depth. Once this maximum is exceeded, the program terminates after issuing an error message. In that case, the filename stack indicating where which file was included is printed.
An additional feature of the program is that (standard C++) comment-lines are ignored. Include-directives in comment-lines are also ignored.
The program is created in several major steps:
First, the scanner's specification file lexer is constructed, containing the
input-language specifications.
From lexer the requirements for the
class Scanner evolve. The Scanner class derives from the base class
ScannerBase generated by flexc++.
Finally main is constructed: a Scanner object is created,
inspecting the command-line arguments. If successful, the scanner's member
lex is called to produce the program's output.
The scanner's matching function (lex) is a
member of the class Scanner. Since Scanner is derived from
ScannerBase, it has access to all of ScannerBase's protected members
that execute the lexical scanner's regular expression matching algorithm.
Looking at the regular expressions themselves, notice that we need rules
to recognize comment, #include directives, and all remaining characters.
This all is fairly standard practice. When an #include directive is
sensed, the directive is parsed by the scanner. This too is common
practice. Our lexical scanner performs the following tasks:
Lexical scanners are traditionally generated by flex in C contexts. However, in C++ contexts,
flexc++ creates a class Scanner, rather than just a scanner function.
Flexc++'s specification file consists of two sections:
The first section is flexc++'s
symbol area, used to define symbols, like a mini scanner, or
options. The following options are suggested:
%debug: includes debugging
code into the code generated by flexc++. Calling the member
function
setDebug(true) activates this debugging code at run-time. When
activated, information about the matching process is written to the
standard output stream. The execution of debug code is suppressed after
calling the member function setDebug(false).
%filenames: defines the base name of the files
generated by flexc++. By default the class name (itself
Scanner by default) is used.
%filenames scanner
%debug
%max-depth 3
%x comment
%x include
The scanner's default task is to copy information from its standard input
stream (std::cin) to the
standard output stream (std::cout). For this the scanner's predefined
member function echo can be used. Here are the rules:
%%
// The comment-rules: comment is ignored.
"//".* // ignore eoln comment
"/*" begin(StartCondition__::comment);
<comment>{
.|\n // ignore all characters in std C comment
"*/" begin(StartCondition__::INITIAL);
}
// File switching: #include <filepath>
#include[ \t]+"<" begin(StartCondition__::include);
<include>{
[^ \t>]+ d_nextSource = matched();
">"[ \t]*\n switchSource();
.|\n throw runtime_error("Invalid include statement");
}
// The default rule: echo anything else to std::cout
.|\n echo();
The class Scanner is generated once by flexc++. This class has
access to several members defined by its base class ScannerBase. Some of
these members have public access rights and can be used by code external to
the class Scanner. These members are extensively documented in the
flexc++(1) man-page, and the reader is referred to this man-page for
further information.
Our scanner performs the following tasks:
The #include statements in the input allow the scanner to distill the
name of the file where the scanning process must continue. This file name is
stored in the data member d_nextSource, and the member switchSource
handles the switch to the next source. Nothing else is required. Pushing and
popping input files is handled by the scanner's members pushStream and
popStream, provided by flexc++. Scanner's interface, therefore,
only needs one additional function declaration: switchSource.
Switching streams is handled as follows: once the scanner has extracted a
filename from an #include directive, a switch to another file is realized
by switchSource. This member calls pushStream, defined by
flexc++, to stack the current input stream and to switch to the stream
whose name is stored in d_nextSource. This also ends the include
mini-scanner, so to return the scanner to its default scanning mode
begin(StartCondition__::INITIAL) is called. Here is its source:
#include "scanner.ih"
void Scanner::switchSource()
{
pushStream(d_nextSource);
begin(StartCondition__::INITIAL);
}
The member pushStream, defined by flexc++, handles all necessary
checks, throwing an exception if the file could not be opened or if too many
files are stacked.
The member performing the lexical scan is defined by flexc++ in
Scanner::lex, and this member can be called by code to process the tokens
returned by the scanner.
The program using the class Scanner is very simple. It expects a filename
indicating where to start the scanning process.
The program first checks the number of arguments. If at least one argument was
given, then that argument is passed to Scanner's constructor, together
with a second argument "-", indicating that the output should go to the
standard output stream.
If the program receives more than one argument, debug output extensively documenting the lexical scanner's actions is written to the standard output stream as well.
Next the Scanner's lex member is called. If anything fails, a
std::exception is thrown, which is caught by main's try-block's catch
clause. Here is the program's source:
#include "lexer.ih"
int main(int argc, char **argv)
try
{
if (argc == 1)
{
cerr << "Filename argument required\n";
return 1;
}
Scanner scanner(argv[1], "-");
scanner.setDebug(argc > 2);
return scanner.lex();
}
catch (exception const &exc)
{
cerr << exc.what() << '\n';
return 1;
}
The program is constructed as follows (assuming flexc++ and the GNU C++
compiler g++ have been installed):
First, the scanner's sources are generated from the specification file by
flexc++. For this the following command can be given:
    flexc++ lexer
Next, the program's sources are compiled and linked:
    g++ -Wall *.cc
Flexc++ can be downloaded from
https://fbb-git.gitlab.io/flexcpp/,
and requires the bobcat library, which can be downloaded from
http://fbb-git.gitlab.io/bobcat/.
The starting point when developing programs that use both parsers and scanners is the grammar. The grammar defines a set of tokens that can be returned by the lexical scanner (called the scanner below).
Finally, auxiliary code is provided to `fill in the blanks': the actions performed by the parser and by the scanner are not normally specified literally in the grammar rules or lexical regular expressions, but are implemented in member functions, called from the parser's rules or associated with the scanner's regular expressions.
In the previous section we've seen an example of a C++ class generated by
flexc++. In the current section we concentrate on the parser. The parser
can be generated from a grammar specification file, processed by the program
bisonc++. The grammar specification file required by bisonc++ is
similar to the file processed by bison (or bison++, bisonc++'s
predecessor, written in the early nineties by
Alain Coetmeur).
In this section a program is developed converting infix expressions,
where binary operators are written between their operands, to postfix
expressions, where operators are written behind their operands. Also, the
unary operator - is converted from its prefix notation to a postfix form.
The unary + operator is ignored as it requires no further actions. In
essence our little calculator is a micro compiler, transforming numeric
expressions into assembly-like instructions.
Our calculator recognizes a rather basic set of operators: multiplication, addition, parentheses, and the unary minus. We'll distinguish real numbers from integers, to illustrate a subtlety in bison-like grammar specifications. That's all. The purpose of this section is, after all, to illustrate the construction of a C++ program that uses both a parser and a lexical scanner, rather than to construct a full-fledged calculator.
In the coming sections we'll develop the grammar specification for
bisonc++. Then, the regular expressions for the scanner are
specified. Following that, the final program is constructed.
The grammar specification file required by bisonc++ is comparable to
the specification file required by bison. Differences are related to the
class nature of the resulting parser. Our calculator distinguishes real
numbers from integers, and supports a basic set of arithmetic operators.
Bisonc++ should be used as follows:
First, the grammar is defined. With bisonc++ this is no
different, and bisonc++ grammar definitions are for all practical
purposes identical to bison's grammar definitions.
From the grammar specification bisonc++ generates files defining the parser class and
the implementation of the member function parse.
All other members required by the parser class (apart from parse) must be
separately implemented. Of course, they should also be
declared in the parser class's header. At the very least the member
lex must be implemented. This member is called by parse to
obtain the next available token. However, bisonc++ offers a
facility providing a standard implementation of the function
lex. The member function
error(char const *msg)
is given a simple default implementation that may be modified by the
programmer. The member function error is called when parse
detects (syntactic) errors.
A minimal main function using the parser could then look like this:
int main()
{
Parser parser;
return parser.parse();
}
The bisonc++ specification file has two
sections:
The first section contains the declarations. In addition to bison's declarations, bisonc++
also supports several new declarations. These new declarations are
important and are discussed below.
Several members of the generated parser class are comparable to facilities offered by bison,
albeit that some members that were available in bison and
bison++ are obsolete in bisonc++, while other members can be
used in a wider context. For example, ACCEPT and ABORT can be
called from any member called from the parser's action blocks to
terminate the parsing process.
Readers familiar with bison may note that there is no
header section anymore. Header sections are used by bison to provide
for the necessary declarations allowing the compiler to compile the C
function generated by bison. In C++ declarations are part of or
already used by class definitions. Therefore, a parser generator generating a
C++ class and some of its member functions does not require a header
section anymore.
Only some of the declarations supported by bisonc++ are discussed here. The
reader is referred to bisonc++'s man-page for a full description.
%baseclass-preinclude header: uses header as the pathname to the file pre-included in the
parser's base-class header. This declaration is useful in
situations where the base class header file refers to types which
might not yet be known. E.g., with %union a std::string *
field might be used. Since the class std::string might not yet
be known to the compiler once it processes the base class header
file we need a way to inform the compiler about these classes and
types. The suggested procedure is to use a pre-include header file
declaring the required types. By default header is
surrounded by double quotes (using, e.g., #include "header").
When the argument is surrounded by angle brackets #include
<header> is included. In the latter case, quotes might be
required to escape interpretation by the shell (e.g., using -H
'<header>').
%scanner header: uses header as the pathname to the file pre-included in the
parser's class header. This file should define a class
Scanner, offering a member int lex() producing the next
token from the input stream to be analyzed by the parser generated
by bisonc++. When this option is used the parser's member
int lex() is predefined as (assuming the default parser class
name Parser is used):
inline int Parser::lex()
{
return d_scanner.lex();
}
and an object Scanner d_scanner is composed into the
parser. The d_scanner object is constructed by its
default constructor. If another constructor is required, the
parser class may be provided with an appropriate (overloaded)
parser constructor after having constructed the default parser
class header file using bisonc++. By default header is
surrounded by double quotes (using, e.g., #include
"header"). When the argument is surrounded by angle brackets
#include <header> is included.
%stype typename: typename should be the name of an unstructured type (e.g.,
size_t). By default it is int. See YYSTYPE in
bison. It should not be used if a %union specification is
used. Within the parser class, this type may be used as
STYPE.
%union union-definition: acts identically to the bison declaration. As with bison
this generates a union for the parser's semantic type. The union
type is named STYPE. If no %union is declared, a simple
stack-type may be defined using the %stype declaration. If no
%stype declaration is used, the default stacktype (int) is
used.
An example of a %union declaration is:
%union
{
int i;
double d;
};
In pre-C++11 code a union cannot contain objects as its fields, as
constructors cannot be called when a union is created. This means that
a string cannot be a member of the
union. A string *, however, is a possible union member. It might also
be possible to use unrestricted unions (cf. section 9.9), having
class type objects as fields.
As an aside: the scanner does not have to know about such a union. It
can simply pass its scanned text to the parser through its matched member
function. For example, using a statement like
$$.i = A2x(d_scanner.matched());
the matched text is converted to a value of an appropriate type.
Tokens and non-terminals can be associated with union fields. This is
strongly advised, as it prevents type mismatches, since the compiler may then
check for type correctness. At the same time, the bison specific
variables $$, $1, $2, etc. may be used, rather than the full field
specification (like $$.i). A non-terminal or a token may be associated
with a union field using the
<fieldname> specification. E.g.,
%token <i> INT // token association (deprecated, see below)
<d> DOUBLE
%type <i> intExpr // non-terminal association
In the example developed here, both the tokens and the non-terminals can
be associated with a union field. However, as noted before, the scanner does
not have to know about all this. In our opinion, it is cleaner to let the
scanner do just one thing: scan texts. The parser, knowing what the input
is all about, may then convert strings like "123" to an integer
value. Consequently, the association of a union field and a token is
discouraged. Below, while describing the grammar's rules, this is further
illustrated.
In the %union discussion the %token and %type specifications
should be noted. They are used to specify the tokens (terminal symbols) that
can be returned by the scanner, and to specify the return types of
non-terminals. Apart from %token the token declarators %left,
%right, and %nonassoc can be used to specify the associativity of
operators. The tokens mentioned at these indicators are interpreted as tokens
indicating operators, associating in the indicated direction. The precedence
of operators is defined by their order: the first specification has the lowest
priority. To overrule a certain precedence in a certain context %prec can
be used. As all this is standard bisonc++ practice, it isn't further
elaborated here. The documentation provided with bisonc++'s distribution
should be consulted for further reference.
Here is the specification of the calculator's declaration section:
%filenames parser
%scanner ../scanner/scanner.h
%union {
int i;
double d;
};
%token INT DOUBLE
%type <i> intExpr
%type <d> doubleExpr
%left '+'
%left '*'
%right UnaryMinus
In the declaration section %type specifiers are used, associating the
intExpr rule's value (see the next section) to the i-field of the
semantic-value union, and associating doubleExpr's value to the
d-field. This approach, admittedly, is rather complex, as expression rules
must be included for each of the supported union types. Alternatives are
definitely possible, and involve the use of polymorphic semantic
values, covered in detail in the
Bisonc++ user guide.
The grammar's production rules are fairly standard bisonc++ fare. In particular, note
that no action block requires more than a single line of code. This keeps the
grammar simple, and therefore enhances its readability and
understandability. Even the rule defining the parser's proper termination (the
empty line in the line rule) uses a single member function called
done. The implementation of that function is simple, but it is worthwhile
noting that it calls Parser::ACCEPT, showing that ACCEPT can be called
indirectly from a production rule's action block. Here are the grammar's
production rules:
lines:
lines
line
|
line
;
line:
intExpr
'\n'
{
display($1);
}
|
doubleExpr
'\n'
{
display($1);
}
|
'\n'
{
done();
}
|
error
'\n'
{
reset();
}
;
intExpr:
intExpr '*' intExpr
{
$$ = exec('*', $1, $3);
}
|
intExpr '+' intExpr
{
$$ = exec('+', $1, $3);
}
|
'(' intExpr ')'
{
$$ = $2;
}
|
'-' intExpr %prec UnaryMinus
{
$$ = neg($2);
}
|
INT
{
$$ = convert<int>();
}
;
doubleExpr:
doubleExpr '*' doubleExpr
{
$$ = exec('*', $1, $3);
}
|
doubleExpr '*' intExpr
{
$$ = exec('*', $1, d($3));
}
|
intExpr '*' doubleExpr
{
$$ = exec('*', d($1), $3);
}
|
doubleExpr '+' doubleExpr
{
$$ = exec('+', $1, $3);
}
|
doubleExpr '+' intExpr
{
$$ = exec('+', $1, d($3));
}
|
intExpr '+' doubleExpr
{
$$ = exec('+', d($1), $3);
}
|
'(' doubleExpr ')'
{
$$ = $2;
}
|
'-' doubleExpr %prec UnaryMinus
{
$$ = neg($2);
}
|
DOUBLE
{
$$ = convert<double>();
}
;
This grammar is used to implement a simple calculator in which integer and
real values can be negated, added, and multiplied and in which standard
priority rules can be overruled by parentheses. The grammar shows the use of
typed nonterminal symbols: doubleExpr is linked to real (double) values,
intExpr is linked to integer values. Precedence and type association is
defined in the parser's definition section.
Bisonc++ generates multiple files, among which the file
defining the parser's class. Functions called from the production rule's
action blocks are usually member functions of the parser. These member
functions must be declared and defined. Once bisonc++ has generated the
header file defining the parser's class, that header file isn't automatically
rewritten, allowing the programmer to add new members to the parser class
whenever required. Here is `parser.h' as used in our little calculator:
#ifndef Parser_h_included
#define Parser_h_included
#include <iostream>
#include <sstream>
#include <bobcat/a2x>
#include "parserbase.h"
#include "../scanner/scanner.h"
#undef Parser
class Parser: public ParserBase
{
std::ostringstream d_rpn;
// $insert scannerobject
Scanner d_scanner;
public:
int parse();
private:
template <typename Type>
Type exec(char c, Type left, Type right);
template <typename Type>
Type neg(Type op);
template <typename Type>
Type convert();
void display(int x);
void display(double x);
void done() const;
void reset();
void error(char const *msg);
int lex();
void print();
static double d(int i);
// support functions for parse():
void executeAction(int d_ruleNr);
void errorRecovery();
int lookup(bool recovery);
void nextToken();
void print__();
};
inline double Parser::d(int i)
{
return i;
}
template <typename Type>
Type Parser::exec(char c, Type left, Type right)
{
d_rpn << " " << c << " ";
return c == '*' ? left * right : left + right;
}
template <typename Type>
Type Parser::neg(Type op)
{
d_rpn << " n ";
return -op;
}
template <typename Type>
Type Parser::convert()
{
Type ret = FBB::A2x(d_scanner.matched());
d_rpn << " " << ret << " ";
return ret;
}
inline void Parser::error(char const *msg)
{
std::cerr << msg << '\n';
}
inline int Parser::lex()
{
return d_scanner.lex();
}
inline void Parser::print()
{}
#endif
The scanner must return Parser::INT or Parser::DOUBLE tokens when it matches,
respectively, integer or floating point numbers.
The flexc++ directive %interactive is provided since the
calculator is a program actively interacting with its human user.
Here is the complete flexc++ specification file:
%interactive
%filenames scanner

%%

[ \t]                       // ignored

[0-9]+                      return Parser::INT;

"."[0-9]+       |
[0-9]+"."[0-9]*             return Parser::DOUBLE;

.|\n                        return matched()[0];
What remains is combining the code generated by bisonc++ and flexc++. Here is the
implementation of the calculator's main function:
#include "parser/parser.h"
using namespace std;
int main()
{
Parser parser;
cout << "Enter (nested) expressions containing ints, doubles, *, + and "
"unary -\n"
"operators. Enter an empty line to stop.\n";
return parser.parse();
}
The parser's files parse.cc and parserbase.h are generated by the
command:
bisonc++ grammar
The file parser.h is created only once, to allow the developer to add
members to the Parser class once the need for them arises.
The program flexc++ is used to create a lexical scanner:
flexc++ lexer
On Unix systems a command like
g++ -Wall -o calc *.cc -lbobcat -s
can be used to compile and link the source of the main program and the
sources produced by the scanner and parser generators. The example uses the
A2x class, which is part of the bobcat library (cf. section
26.6.1.5) (the bobcat library is available on systems offering either
bisonc++ or flexc++). Bisonc++ can be downloaded from
http://fbb-git.gitlab.io/bisoncpp/.