std::priority_queue

The C++ STL priority_queue, declared in the standard <queue> header, is not a standalone container but an adapter that can be based on any container that provides access to its elements through random access iterators and supports front(), push_back(), and pop_back(). The default implementation is based on std::vector, but std::deque could be used instead.

The random access iterator requirement comes from the need to keep an internal heap structure, which is how this class keeps its elements organized as required.

Three template parameters specify the underlying type, container, and comparator for a priority queue. The last two default to std::vector and std::less. The comparator determines the order in the queue; by default the first element is guaranteed to be no less than any other element in the queue.

There are a couple of constructors. One takes a compare object and a container, both defaulted to the default ctor of the class specified in the template; the other is in turn a function template on InputIterator: it expects a begin-end pair of iterators delimiting a sequence used to initialize our queue, followed by an optional compare object and container.

Given a priority queue, we can just push and pop an element, get the top, get the size, or check if it is empty.

Calling top() or pop() on an empty priority queue results in undefined behavior, so be careful not to do that.

Let's have some fun writing some example code.

All our queues will be integer based, and will be initialized from this bare array:
int values[] = { 42, 12, 99, 42, 72 };
const int vSize = sizeof(values) / sizeof(int);

First test: we create a plain priority queue for integers. After filling it on construction, we'll dump its data, extracting one element after the other from its top:
#include <queue>
#include <iostream>
// ...

typedef std::priority_queue<int> MyPriQue;
MyPriQue pq(values, values + vSize);

std::cout << "Plain int priority queue: ";
while (pq.empty() == false)
{
    std::cout << pq.top() << ' ';
    pq.pop();
}
std::cout << std::endl;
The expected output is:
Plain int priority queue: 99 72 42 42 12
If we want the queue organized the other way round, we change the comparator to greater:
typedef std::priority_queue<int, std::vector<int>, std::greater<int> > MyPriQue;
MyPriQue pq(values, values + vSize);

std::cout << "The smallest first: ";
while (pq.empty() == false)
{
    std::cout << pq.top() << ' ';
    pq.pop();
}
std::cout << std::endl;
As output I would expect to see:
The smallest first: 12 42 42 72 99
It is easy to provide a custom functor as comparator, and that gives us an extra degree of flexibility when we make it parametrized:
class MyComp
{
    bool regular;
public:
    MyComp(bool regular = true) : regular(regular) {}

    bool operator()(int lhs, int rhs) const
    {
        return regular ? lhs < rhs : lhs > rhs;
    }
};
On creation, we can pass this comparator a boolean, defaulted to true, that determines how it compares the elements passed to it. In this way we can use the same priority queue definition for both increasing and decreasing ordering; what changes is just the constructor call in the code:
typedef std::priority_queue<int, std::vector<int>, MyComp > MyPriQue;

MyPriQue pq(values, values + vSize); // 1
// MyPriQue pq(values, values + vSize, MyComp(false)); // 2

while (pq.empty() == false)
{
    std::cout << pq.top() << ' ';
    pq.pop();
}
std::cout << std::endl;
1. Here we are telling the compiler to use the default ctor for the comparator (and for the container). Actually, the MyComp ctor expects one parameter in input, but it is defaulted to true.
2. If you comment the previous line and uncomment this one, the queue elements are printed in increasing order.

Go to the full post

ZeroMQ 3.1 multithreading reviewed

I have already written a post on how to approach multithreading with ØMQ. In that case I used 0MQ version 2.1 and the C++ wrapper API, here I am using the 3.1 release, still in beta, and the C standard interface.

But I guess this is not the most interesting point of this post: I found it more challenging to think of a way to gracefully shut down the worker threads managed by the server. The idea is to take advantage of how polling reacts when the connected socket closes. But let's start from the beginning.

We want to write something like the REQ-REP DEALER-ROUTER broker we have just seen, but here the broker does not connect the client to services each running in a different process, but to workers running in other threads of the broker process itself.

The client code changes only because I changed the requirements, to add a bit of fun. Now the client sends integers; the server feeds the workers with these values, which are used for a mysterious job - actually, just sleeping. From the client point of view, the change is limited to the message buffer:
void* context = zmq_init(1);
void* socket = zmq_socket(context, ZMQ_REQ);
zmq_connect(socket, "tcp://localhost:5559");

for(int i = 0; i != 10; ++i)
{
    std::cout << "Sending " << i << std::endl;
    zmq_send(socket, &i, sizeof(int), 0); // 1

    int buffer;
    int len = zmq_recv(socket, &buffer, sizeof(int), 0);
    if(len != static_cast<int>(sizeof(int))) // 2
        std::cout << "Unexpected answer (" << len << ") discarded" << std::endl;
    else
        std::cout << "Received " << buffer << std::endl;
}
zmq_send(socket, NULL, 0, 0); // 3

zmq_close(socket);
zmq_term(context);
1. The message now is an int, and its size is, well, the size of an int.
2. Very rough error handling: if the size of the message is not the expected one (or zmq_recv() returned an error flag), we just print an error message.
3. I'm going on with the convention of sending an empty message as a command to shut down the server.

The server needs to change a bit more: it should create a thread for each service we want to use in this run of the application. The number of threads is passed to the function as a parameter.
void mtServer(int nt)
{
    boost::thread_group threads; // 1

    void* context = zmq_init(1);
    void* frontend = zmq_socket(context, ZMQ_ROUTER); // 2
    zmq_bind(frontend, "tcp://*:5559");

    void* backend = zmq_socket(context, ZMQ_DEALER); // 3
    zmq_bind(backend, "inproc://workers"); // 4

    for(int i = 0; i < nt; ++i)
        threads.create_thread(std::bind(&doWork, context)); // 5

    const int NR_ITEMS = 2; // 6
    zmq_pollitem_t items[NR_ITEMS] = 
    {
        { frontend, 0, ZMQ_POLLIN, 0 },
        { backend, 0, ZMQ_POLLIN, 0 }
    };

    dump("Server is ready");
    while(true)
    {
        zmq_poll(items, NR_ITEMS, -1); // 7

        if(items[0].revents & ZMQ_POLLIN) // 8
            if(receiveAndSend(frontend, backend))
                break;
        if(items[1].revents & ZMQ_POLLIN) // 9
            receiveAndSend(backend, frontend);
    }

    dump("Shutting down");
    zmq_close(frontend);
    zmq_close(backend);
    zmq_term(context);

    threads.join_all(); // 10
}
1. We need to manage a group of threads; this Boost utility class is just right.
2. This router socket connects the server to the client request socket.
3. This dealer socket connects the server to all the worker sockets.
4. The protocol used for thread-to-thread communication is "inproc".
5. A number of threads is created; each of them runs the function doWork(), shown below, and gets in input the ZeroMQ context, which is thread-safe.
6. The broker pattern requires an array of poll items, each of them specifying the socket to be polled and the way the polling should act.
7. Polling on the items, waiting indefinitely for a message coming from either the frontend or the backend.
8. The frontend sent a message: receive from it and send to the backend. If receiveAndSend() returns true, we received a terminator from the client, and it is time to exit the while loop.
9. Other way round, a message from the backend should be sent back to the frontend.
10. Wait for all the worker threads to terminate before closing the application.

Writing a multithreaded application with ZeroMQ requires designing the code differently from the standard techniques. Synchronization among threads is ruled by message exchange, so we usually don't use mutexes and locks. Here I break this advice, and I'd say it makes sense: we have a shared resource, the output console, and we need to rule the access to it:
boost::mutex mio;

void dump(const char* header, int value)
{
    boost::lock_guard<boost::mutex> lock(mio);
    std::cout << boost::this_thread::get_id() << ' ' << header << ": " << value << std::endl;
}

void dump(const char* mss)
{
    boost::lock_guard<boost::mutex> lock(mio);
    std::cout << boost::this_thread::get_id() << ": " << mss << std::endl;
}
The receiveAndSend() function has not changed much. Even if the messages are now ints, we won't make any assumptions about the messages passing through here. In any case, we should remember that we have to manage the multipart messages required by the router pattern:
const int MSG_SIZE = 64;
size_t sockOptSize = sizeof(int); // 1

bool receiveAndSend(void* skFrom, void* skTo)
{
    int more;
    do {
        char message[MSG_SIZE];
        int len = zmq_recv(skFrom, message, MSG_SIZE, 0);
        zmq_getsockopt(skFrom, ZMQ_RCVMORE, &more, &sockOptSize);

        if(more == 0 && len == 0)
        {
            dump("Terminator!");
            return true;
        }
        zmq_send(skTo, message, len, more ? ZMQ_SNDMORE : 0);
    } while(more);

    return false;
}
1. The variable storing the sizeof int can't be const, because zmq_getsockopt() could change it.

Finally, the most interesting piece of code, the worker:
void doWork(void* context) // 1
{
    void* socket = zmq_socket(context, ZMQ_REP); // 2
    zmq_connect(socket, "inproc://workers");

    zmq_pollitem_t items[1] = { { socket, 0, ZMQ_POLLIN, 0 } }; // 3

    while(true)
    {
        if(zmq_poll(items, 1, -1) < 1) // 4
        {
            dump("Terminating worker");
            break;
        }

        int buffer;
        int size = zmq_recv(socket, &buffer, sizeof(int), 0); // 5
        if(size < 1 || size > static_cast<int>(sizeof(int)))
        {
            dump("Unexpected termination!");
            break;
        }

        dump("Received", buffer);
        zmq_send(socket, &buffer, size, 0);

        boost::this_thread::sleep(boost::posix_time::seconds(buffer));
    }
    zmq_close(socket);
}
1. The 0MQ context is thread-safe, so we can safely pass it around the threads.
2. Each working thread has its own ZeroMQ reply socket connected to the backend socket in the main thread of the server by inproc protocol.
3. It may look a bit strange to poll on just one item, but it is exactly what we need here.
4. Polling indefinitely on the socket. If it returns an error (a return value of zero is not expected here), we can safely assume that the connected socket has been closed, and we can stop waiting for a message.
5. We know a message is waiting to be received; we check that it has the size of an int, and then we send back the same message before sleeping for the number of seconds passed by the client.

Go to the full post

Boost 1.49 available

Release 1.49 of the Boost C++ Libraries is now available, yuppie!

Have a look at the specific release page on boost.org for details.

Download the zipped package from sourceforge.net.

I am currently in the process of generating the lib and dll files for my (Windows) development environment. I had a yyacc error message (no such file) in the bootstrap phase, but I guess it is not an issue, since b2 is running fine. If you don't get what I am talking about, you should probably have a look at Boost Getting Started, or at my post written when I installed the beta of this same Boost version.

Happy programming!

Go to the full post

ØMQ 3.1 REQ-REP DEALER-ROUTER broker

A ZeroMQ broker is meant to expand a simple REQ-REP message pattern into a much more flexible structure. While a REQ-REP connection is synchronous, the broker uses a DEALER-ROUTER pair of sockets that allows it to manage asynchronously the message exchanges that go through it. We add a layer between the client and the service that lets us hide one side from the other. The benefit is that we can easily change the system's configuration by modifying just the broker, while the other components are unaware of what is going on.

I have already written a post on the same matter, but using the C++ interface for 0MQ version 2.1.x; here the code is based on the still-beta 3.1 version and the standard C API. Besides, you could get more details from the Z-Guide, but be aware that it is currently still based on ZMQ version 2.1.x.

The client code has just a tiny change: instead of sending its request directly to the reply service, it now goes to the broker. So I reused the code seen for the Hello REQ-REP client shown previously, changing the port in the connection:
zmq_connect(socket, "tcp://localhost:5559");
The REP side of the story changes more dramatically. It is not acting as a server anymore; it is just a client of the broker, providing a service to it:
void* context = zmq_init(1);
void* socket = zmq_socket(context, ZMQ_REP);
zmq_connect(socket,"tcp://localhost:5560"); // 1
while(true)
{
    char buffer[MSG_SIZE];
    int size = zmq_recv(socket, buffer, MSG_SIZE, 0); // 2
    if(size < 1 || size > MSG_SIZE) // 3
    {
        std::cout << "Terminating (" << size << ")" << std::endl;
        break;
    }

    dump("Received", buffer, size); // 4
    boost::this_thread::sleep(boost::posix_time::seconds(1)); // 5
    zmq_send(socket, buffer, size, 0); // 6
}
zmq_close(socket);
zmq_term(context);
1. Not a server anymore!
2. Receive a message from the broker. Remember that ZeroMQ version 3 uses raw byte array for storing the message.
3. Quite crude error handling: any error, and even messages bigger than expected (the user-defined constant MSG_SIZE), is treated like a message of zero length, here conventionally seen as a command to shut down the system.
4. We'll see below the simple dump() utility function that dumps the 0MQ message to the standard output console with the passed header.
5. Some time expensive job emulated with a Boost sleep.
6. And finally the same message we got in input is sent back to the caller.

A 0MQ message is nothing more than a sequence of bytes; no null terminator is expected, so it can't be managed as a C-string. A simple way to print a sequence of bytes is this:
void dump(const char* header, const char* buffer, size_t size)
{
    std::cout << header << ": ";
    std::for_each(buffer, buffer + size, [](char c){ std::cout << c;}); // 1
    std::cout << std::endl;
}
1. for_each() is an STL algorithm, and its third argument is a C++11 lambda function. Much more on this topic in other posts in this same blog.

The most interesting component in this messaging pattern is the broker itself:
void* context = zmq_init(1);

void* frontend = zmq_socket(context, ZMQ_ROUTER); // 1
zmq_bind(frontend, "tcp://*:5559");

void* backend = zmq_socket(context, ZMQ_DEALER); // 2
zmq_bind(backend, "tcp://*:5560");

const int NR_ITEMS = 2;
zmq_pollitem_t items[NR_ITEMS] =
{
    { frontend, 0, ZMQ_POLLIN, 0 },
    { backend, 0, ZMQ_POLLIN, 0 }
};

while(true)
{
    zmq_poll(items, NR_ITEMS, -1); // 3

    if(items[0].revents & ZMQ_POLLIN) // 4
        if(receiveAndSend(frontend, backend))
            break; // terminator!
    if(items[1].revents & ZMQ_POLLIN) // 5
        receiveAndSend(backend, frontend);
}
zmq_close(frontend);
zmq_close(backend);
zmq_term(context);
1. The router accepts connection from the REQ clients.
2. The dealer is in the server role for connections from REP clients.
3. The broker polls on its sockets to check for incoming messages. If it is not clear to you how the polling mechanism works, you could have a look at a couple of other posts: POLLIN on PUB-SUB should work as an introduction, while Killing the workers is meant as a next example.
4. The frontend has sent a message; the broker should receive from it and send to the backend. The REQ client has the chance to shut down the entire system by sending an empty message. If the utility function receiveAndSend(), shown below, detects this condition, it returns true, and the broker run is terminated.
5. The backend has sent a message. Same as (4), but swapping the terms.

The receiveAndSend() function could turn out to be more interesting than one would have expected:
namespace
{
    size_t sockOptSize = sizeof(int); // 1

    bool receiveAndSend(void* skFrom, void* skTo)
    {
        bool terminator = false;
        int more; // 2
        do {
            char message[MSG_SIZE];
            int len = zmq_recv(skFrom, message, MSG_SIZE, 0);
            zmq_getsockopt(skFrom, ZMQ_RCVMORE, &more, &sockOptSize);

            std::cout << "(" << more << ") ";
            if(more == 0) // 3
            {
                if((terminator = (len == 0))) // 4
                    std::cout << "Terminator!" << std::endl;
                else
                    dump("Received", message, len);
            }
            else
                std::cout << std::endl;

            zmq_send(skTo, message, len, more ? ZMQ_SNDMORE : 0);
        } while(more);

        return terminator;
    }
}
1. In ZeroMQ version 3 the socket options are stored in a plain int value, no longer in a fixed non-standard 64-bit integer type.
2. This application works with simple messages, but the broker manages multipart messages not to be prepared for future extensions, but because it has to. The broker must somehow know which requester is associated with a reply when it comes back, and the ROUTER-DEALER pattern does this by adding a prologue to the message. We don't care at all what is in it, but we should know that it has been added to our original message.
3. No "more" means this is the actual message (warning: this assumes that the REQ client sends single-part messages).
4. If the length of the message is zero, we have received a shutdown request. We are still passing this last message to the REP component, so that it will shut down too.

Go to the full post

ØMQ 3.1 PUB-SUB Proxy

It is quite straightforward to implement the proxy pattern with ZeroMQ, and the changes from version 2.1 to 3.1 are minimal. So there is not much more to remark here than a couple of points.

The minimal configuration to see a proxy at work requires a PUB server and a SUB client. While the PUB runs without even noticing that a proxy is among its clients, the SUB has to know the proxy's address, so it can connect to it.

The proxy should be multipart-message-aware. Even if our PUB currently supports only single-part messages, it is a good idea to support the extended messaging protocol: it doesn't add much complexity to the proxy code, and it makes it much more robust.

More details on the matter in the ZGuide.

Here is a possible implementation for the proxy:
void* context = zmq_init(1);
void* frontend = zmq_socket(context, ZMQ_SUB);
zmq_connect(frontend, "tcp://localhost:50014"); // 1

void* backend = zmq_socket(context, ZMQ_PUB); // 2
zmq_bind(backend, "tcp://*:8100");
zmq_setsockopt(frontend, ZMQ_SUBSCRIBE, NULL, 0); // 3

bool terminator = false;
while(!terminator)
{
    int more;
    do { // 4
        size_t size = sizeof(int);

        char message[MSG_SIZE];
        int len = zmq_recv(frontend, message, MSG_SIZE, 0);
        if(len == 0)
        {
            std::cout << "The broker detected a terminator!" << std::endl;
            terminator = true; // 5
        }
        zmq_getsockopt(frontend, ZMQ_RCVMORE, &more, &size);

        std::cout << "resending message" << std::endl;

        zmq_send(backend, message, len < MSG_SIZE ? len : MSG_SIZE, more ? ZMQ_SNDMORE: 0); // 6
    } while(more);
}

zmq_close(frontend);
zmq_close(backend);
zmq_term(context);
1. The proxy connects as a subscriber to the publisher we have already seen.
2. A subscriber could now connect directly to the original publisher, or to the proxy, specifying its address.
3. No filtering, as one would expect from a well behaving proxy.
4. Even if the server does not currently send any multipart message, the proxy is ready to manage them correctly.
5. The proxy also follows the convention of terminating when it receives an empty message. But before terminating, it forwards the terminator too.
6. The message length is limited by the maximum buffer size; the SNDMORE flag is replicated when required.

As I said, there is no change at all in the publisher; the subscriber changes in only one line, where the address the socket has to use is specified:
void* socket = zmq_socket(context, ZMQ_SUB);
zmq_connect(socket, "tcp://localhost:8100");
zmq_setsockopt(socket, ZMQ_SUBSCRIBE, NULL, 0);

Go to the full post

ZMQ_SNDMORE-ZMQ_RCVMORE in ZeroMQ 3.1

Migrating from ØMQ 2.1 to 3.1, there is a change in the multipart message mechanism that could potentially be a source of big headaches. The message options are now stored in a plain int, no longer in a fixed-size 64-bit integer type. Actually, if in your environment an int is a 64-bit type, this is not a change at all. Otherwise, it could happen that code that was working fine now shows unexpected behavior.

The example is a simple 0MQ PUB-SUB application, where the server sends a multipart message in four parts to the client:
void* context = zmq_init(1);
void* socket = zmq_socket(context, ZMQ_PUB);
zmq_bind(socket, "tcp://*:50014");

readyToSend();
std::stringstream ss;
for(int i = 0; i < 3; ++i)
{
    ss.str("");
    ss << "multi part message: " << i;
    std::string s = ss.str();

    std::cout << "Sending " << s << std::endl;
    zmq_send(socket, s.c_str(), s.length(), ZMQ_SNDMORE); // 1
}
const char* buffer = "That's all";
zmq_send(socket, buffer, strlen(buffer), 0); // 2

zmq_close(socket);
zmq_term(context);
Not much to add on top of what we have already seen when talking about PUB servers. Just keep your eye on the last parameter of zmq_send():
1. All the messages in the loop have the option ZMQ_SNDMORE specified. This means that they are considered parts of a single multipart message that is completed by the first subsequent message with no option (0) specified.
2. And this is it: a "normal" message, but since it is the first one sent after a few ZMQ_SNDMORE ones, it is considered "special". It is the last part of the multipart message.

The SUB client is not much different from a normal ZeroMQ subscriber. But here we are expecting a single message from the server, possibly a multipart one:
void* context = zmq_init(1);
void* socket = zmq_socket(context, ZMQ_SUB);
zmq_connect(socket, "tcp://localhost:50014");
zmq_setsockopt(socket, ZMQ_SUBSCRIBE, NULL, 0);

while(true)
{
    char buffer[MSG_SIZE];
    int len = zmq_recv(socket, buffer, MSG_SIZE, 0);
    if(len < 0)
        len = 0;
    else if(len >= MSG_SIZE)
        len = MSG_SIZE - 1;
    buffer[len] = '\0';

    std::cout << buffer << std::endl;

    int more;
    size_t size = sizeof(int); // 1
    zmq_getsockopt(socket, ZMQ_RCVMORE, &more, &size); // 2
    if(more)
        std::cout << "Reading ... " << std::endl;
    else
    {
        std::cout << "Done" << std::endl;
        break;
    }
}

zmq_close(socket);
zmq_term(context);
1. This is the change in ZeroMQ version 3, a plain int is specified.
2. The variable more is set to 1 if the current message is part of a multipart series (and it is not the tail one), otherwise is set to 0.

If you have legacy code, you are compiling your 0MQ code for a 32-bit (or smaller) platform, and you didn't adjust the size of "more", ZeroMQ will set only half (or less) of your "more" variable. And since "more" here is not initialized, we should expect it to almost always carry a true (in the sense of non-zero) value.

Go to the full post

Killing the Divide and Conquer workers

Don't worry (or: be bored), the title is much more truculent than the real content of this post.

We have seen how to rewrite a Divide and Conquer application for ØMQ 3.1 in its three components: ventilator, worker, sink. We complained about the workers helplessly waiting for new messages from the ventilator at the end of the job, requiring an interrupt to end their agony, while we wanted them to shut down gracefully. From the previous post we know how to let a piece of code poll on more than one input socket, and now we can use that technique to overcome the worker hanging issue.

This is a porting of the solution I wrote in the past for the C++ interface to 0MQ 2.1. The common source of inspiration is the official ZGuide.

We want to change both the sink and the worker implementations. They will be connected by a PUB-SUB relation, where the sink is going to be a PUB server sending an (empty) message to all the client SUB workers to signal when the job is done.

The change in the sink is tiny. After the local ZeroMQ context is initialized, beside the PULL socket delegated to rule the flow of messages coming from the workers, we add the PUB socket:
// ...
void* context = zmq_init(1);
// ...
void* terminator = zmq_socket(context, ZMQ_PUB);
zmq_bind(terminator, "tcp://*:5559");
// ...
Then all goes as shown in the original sink code, but after all the messages have been received, just before cleaning up the environment (in cauda venenum, as the Romans said), we send an empty message:
// ...

zmq_send(terminator, NULL, 0, 0);
zmq_close(terminator);

zmq_close(socket);
zmq_term(context);
The worker requires a more substantial redesign. The first step is adding a SUB 0MQ socket that connects to the PUB one in the sink:
void* context = zmq_init(1);

// ...

void* skController = zmq_socket(context, ZMQ_SUB);
zmq_connect(skController, "tcp://localhost:5559");
zmq_setsockopt(skController, ZMQ_SUBSCRIBE, NULL, 0);

zmq_pollitem_t items [] = {
    { skPull, 0, ZMQ_POLLIN, 0 }, { skController, 0, ZMQ_POLLIN, 0 }
};
// ...
The array of zmq_pollitem_t objects is used for polling. The worker has two input sockets, and we want to poll over both of them. We do that in the infinite loop. In the previous post we used a timed poll; here we set the timeout to -1, meaning that ZeroMQ blocks indefinitely on poll, waiting for a message:
while(true)
{
    if(zmq_poll(items, 2, -1) < 1) // 1
    {
        std::cout << "Unexpected polling termination" << std::endl;
        break;
    }

    if(items[0].revents & ZMQ_POLLIN) // 2
    {
        // receiving the message on skPull ...
        // ...
    }
    if(items [1].revents & ZMQ_POLLIN) // 3
    {
        std::cout << " Kill!" << std::endl;
        break;
    }
}
zmq_close(skController); // 4
// ...
1. Given that we are waiting forever on zmq_poll() for a message to arrive, a return value of 0 is unexpected, and -1 means something bad happened. In both cases, we just give feedback to the user and break the loop.
2. The zeroth element in the zmq_pollitem_t array is the PULL socket connected to the ventilator, so if a message is pending on it, we go on running the code as was written in the original implementation.
3. If we receive a message on the ZMQ_SUB socket, it is time to exit the loop. It is not even worth reading the pending message: we already know it is empty, and nothing more will follow.
4. But we should remember to close all the sockets, including the new one we just introduced.

Go to the full post

POLLIN: a client SUB polling on two PUBs

We have already seen a ØMQ client connecting to more than one server: the worker in the Divide and Conquer example connects as PULL to the ventilator that sends messages, and as PUSH to the sink that waits for messages. A bit trickier is the case where a client gets messages from two different servers. The point is that it has to poll for input on all the connected servers.

I have redesigned for the C API of ZeroMQ 3.1 the example I originally wrote in C++ for 0MQ 2.1, trying to make it a bit clearer. Now there are two PUB-SUB connections, where the servers are processes running this code:
void serverPoll(char* message, bool flag) // 1
{
    const char* address = flag ? "tcp://*:60020" : "tcp://*:60021";

    void* context = zmq_init(1);
    void* socket = zmq_socket(context, ZMQ_PUB); // 2
    zmq_bind(socket, address);

    for(int i = 0; i < 20; ++i)
    {
        std::cout << message << ' ';
        zmq_send(socket, message, strlen(message), 0); // 3
        boost::this_thread::sleep(boost::posix_time::seconds(1));
    }

    zmq_close(socket);
    zmq_term(context);
}
1. The message parameter is whatever you want to be sent to the client. One process should call this function with the flag parameter set to true, the other to false.
2. We are going to have two PUB servers.
3. The input message is sent to the client 20 times, at a rate of one per second.

The client should look more interesting:
// ...
const int MSG_SIZE = 64;
const char* addresses[] = {"tcp://localhost:60020", "tcp://localhost:60021"};

// ...

void* context = zmq_init(1);
void* sockets[2];
for(int i = 0; i < 2; ++i)
{
    sockets[i] = zmq_socket(context, ZMQ_SUB); // 1
    zmq_connect(sockets[i], addresses[i]);
    zmq_setsockopt(sockets[i], ZMQ_SUBSCRIBE, NULL, 0); // 2
}

zmq_pollitem_t items [] = {
    { sockets[0], 0, ZMQ_POLLIN, 0 }, { sockets[1], 0, ZMQ_POLLIN, 0 } // 3
};

while(zmq_poll(items, 2, 5000) > 0) // 4
{
    char buffer[MSG_SIZE];
    for(int i = 0; i < 2; ++i)
    {
        if(items[i].revents & ZMQ_POLLIN) // 5
        {
            int size = zmq_recv(sockets[i], buffer, MSG_SIZE, 0);
            if(size < 0) // discard error results
                size = 0;
            buffer[size < MSG_SIZE ? size : MSG_SIZE - 1] = '\0';
            std::cout << buffer << ' ';
        }
    }
    std::cout << std::endl;
}

std::cout << "Terminating";
for(int i = 0; i < 2; ++i)
    zmq_close(sockets[i]);
zmq_term(context);
1. Both sockets are SUB, connected as clients to the PUB sockets defined in the servers. By the way, have you ever wondered what happens if you declare a socket as SUB and then give it a server address, with a star instead of the machine IP address? I hadn't. But a wild copy and paste showed me: I got a perplexing unhandled exception in _callthreadstartex() - I am developing on MSVC - and a message on the console saying that there was an "Invalid argument (..\..\..\src\tcp_connecter.cpp:63)". At that line there is an assert on errno after a call to the member function set_address() in the zmq::tcp_connecter_t ctor. Once you think about it for a while, it looks clear: I passed an invalid argument to the address-setting function in the tcp connecter constructor! So, put localhost there, not a star.
2. Remember that a SUB socket needs a filter to be set; here we are using a "no filter", so anything is read.
3. This is the tricky part. To poll we need to specify an array of zmq_pollitem_t, each of them specifying a socket to poll on and how to poll on it. POLLIN means polling for input.
4. The ZeroMQ 3.1 version of zmq_poll() is very flexible. Here we are saying that we poll on the first two elements in the passed array of zmq_pollitem_t, waiting up to 5 seconds for anything to come. If nothing happens in that time, it returns 0. We could ask it to hang forever by passing -1, or not to wait at all, passing 0. It returns -1 in case of error, or the number of sockets with a pending message.
5. If the revents field in the specified zmq_pollitem_t has the ZMQ_POLLIN bit set, a message is waiting to be read, so we can call zmq_recv() without risking an indefinite wait for a message that is not coming.

Go to the full post

PUSH-PULL with ZeroMQ 3.1: sink

We have a ventilator that pushes messages to one or (usually) more workers. We have the worker, which pulls messages from the ventilator, does some job, and then pushes the result to the sink. Now we are going to write the code for the latter part of this ØMQ 3.1 Divide and Conquer pattern.

The sink is a very simple piece of code. It acts as a PULL server, and runs till it receives all the expected messages, then it prints some statistics before terminating:
void* context = zmq_init(1);
void* socket = zmq_socket(context, ZMQ_PULL); // 1
zmq_bind(socket, "tcp://*:5558");

if(zmq_recv(socket, NULL, 0, 0) == 0) // 2
    std::cout << "Ventilator has started to generate jobs" << std::endl;
else
    std::cout << '!';

boost::posix_time::ptime start = boost::posix_time::microsec_clock::local_time(); // 3

for(int i = 0; i < 100; ++i) // 4
{
    if(zmq_recv(socket, NULL, 0, 0) != 0) // 5
        std::cout << '!';
    else
        std::cout << (i%10 == 9 ? ':' : '.');
}
boost::posix_time::ptime end = boost::posix_time::microsec_clock::local_time(); // 6
std::cout << std::endl << "Delta is: " << end - start << std::endl;

zmq_close(socket);
zmq_term(context);
1. The socket is used as the puller in a PUSH-PULL ZeroMQ messaging pattern, and the call to zmq_bind() specifies that it has the server role (on the tcp protocol).
2. This could look a bit weird at first sight, but it is exactly what we need here. We are asking to get a message, passing no buffer where to store it, since we are expecting a zero-sized message that doesn't actually need any place to be stored. If the received message was bigger than expected, or if we get an error receiving it, an exclamation mark is shown to the user (quite poor error handling, but it suffices here); otherwise we signal that the first dummy message has been received.
3. The job of the sink is just collecting the time taken by the workers to run. So we store the current time now.
4. This is not very nice. We have this "one hundred" written in stone, matching the number of tasks the ventilator generates by splitting the job. It would be better to establish a ventilator-sink connection so that the two could synchronize.
5. Again, messages coming from the workers don't carry any information beyond the fact that they were issued, so we can avoid any buffer to store them.
6. At the end of the loop, we calculate and report the duration of the batch.

This application is meant to be run on a modern machine, with at least a couple of cores available. On a single-processor, single-core machine, we'd see how all our work of splitting and parallelizing the execution is just a waste. Otherwise you can have some fun checking the execution time as the number of workers varies.


PUSH-PULL with ZeroMQ 3.1: worker

I am rewriting the divide and conquer example using this time the raw C interface to ØMQ 3.1; we have already seen the ventilator, now it is the turn of the worker. This component is a PULL-PUSH bridge between the ventilator, that starts the execution stream splitting the original (huge) task in many (tiny) tasks, and the sink, that receives the results from the workers.

The worker is a client of both the ventilator and the sink, and in this first simple implementation it has a flaw: no one tells it when the job is done. It just hangs on the ventilator, waiting for more messages that sadly are not coming. For the moment we will kill it with an interrupt; in the near future we'll see a more elegant way of dealing with this issue.
void* context = zmq_init(1);
void* skPull = zmq_socket(context, ZMQ_PULL); // 1
zmq_connect(skPull, "tcp://localhost:5557");

void* skPush = zmq_socket(context, ZMQ_PUSH); // 2
zmq_connect(skPush, "tcp://localhost:5558");

while(true) // 3
{
    int message;
    if(zmq_recv(skPull, &message, sizeof(int), 0) == sizeof(int)) // 4
    {
        std::cout << '[' << message << ']';
        boost::this_thread::sleep(boost::posix_time::milliseconds(message)); // 5
    }
    else
    {
        std::cout << "[-]";
    }

    zmq_send(skPush, NULL, 0, 0); // 6
}
zmq_close(skPull);
zmq_close(skPush);
zmq_term(context);
1. This socket is used here to connect in PULL mode to the PUSH server socket defined in the ventilator.
2. A second socket used to PUSH messages to the sink, which we are going to provide with a PULL server socket.
3. This forever loop is the soft spot of this worker implementation. There is currently no way to break it other than sending an interrupt to the process running this code. Let's keep it this way for the time being.
4. We expect the ventilator to send messages at most int-sized; the only zero-sized message should be the first one, the flag that signals the ventilator is about to start its real job. zmq_recv() makes space available just for an int; if a bigger message is detected, it is truncated to fit, and this code simply discards it.
5. The job done by the worker is emulated by this sleep().
6. In this implementation the sink doesn't expect any meaningful information from the worker, just an acknowledgment that the job is done, so an empty message is enough.

Even if the application is not complete (we are still missing the sink), we can already run the ventilator and a worker. We won't get any result, but we can check whether everything goes fine.


PUSH-PULL with ZeroMQ 3.1: push ventilator

Now it is time to refresh the divide and conquer example that I originally wrote in C++ for ZeroMQ 2.x (based on the Z Guide example, written in C, and currently still referring to 2.x). Here it is rewritten to use the raw C ØMQ 3.1 interface.

First step is creating a ventilator, the process that in a divide and conquer pattern has to split the (usually huge) original task in many (relatively tiny) tasks that are going to be executed by the workers.

The ventilator will push the messages to all the connected pulling workers. Since the ventilator is going to push all of them very fast, it is a good idea to start all the workers before giving the ventilator the green light to run.

The greenLight() function could be very simple:
void greenLight()
{
    std::cout << "Press Enter when all the workers are ready";
    std::string input;
    std::getline(std::cin, input);
}
The tasks to be executed are meaningless. The workers will get an int representing the number of milliseconds they have to sleep (simulating a job of variable length). To add a bit of interest to this step, I used the Boost random generator, with the commonly used Mersenne twister for a uniform distribution:
class VentiRand
{
private:
    boost::random::mt19937 generator_;
    boost::random::uniform_int_distribution<> random_;
public:
    VentiRand(int low, int hi) : random_(low, hi) {}

    int getValue() { return random_(generator_); }
};
This is the ventilator code:
void* context = zmq_init(1);
void* socket = zmq_socket(context, ZMQ_PUSH); // 1
zmq_bind(socket, "tcp://*:5557"); // 2

greenLight();
zmq_send(socket, NULL, 0, 0); // 3

VentiRand vr(1, 100);
int total = 0; // 4
for(int i = 0; i < 100; ++i)
{
    int workload = vr.getValue(); // 5
    total += workload;

    std::cout << workload << '.';
    if(zmq_send(socket, &workload, sizeof(int), 0) == -1) // 6
        std::cout << '!';
}

std::cout << "Total expected cost: " << total << " msec" << std::endl;
boost::this_thread::sleep(boost::posix_time::seconds(1));

zmq_close(socket);
zmq_term(context);
1. This socket pushes the messages to all the connected pullers.
2. This is the server in the PUSH-PULL pattern.
3. Let's send a first empty message, to signal that the ventilator is about to send the real messages.
4. Total expected cost in msecs for all the jobs generated.
5. The random workload is in the range from 1 to 100msecs, as specified by the VentiRand ctor.
6. In ZeroMQ 3.1, zmq_send() returns the number of bytes sent; minus one means something went wrong. Here the error handling is limited to dumping an exclamation mark to the console. Notice also that zmq_send() happily accepts the address of an int as a message; it is our responsibility to avoid inconsistencies.

The code shouldn't look complicated, if you have already written some 0MQ code. The problem is that you can't see it at work alone: you need to implement the worker first. Let's see it in the next post.


Splitting a string with boost::tokenizer

As promised, let's see a way to get rid of sscanf in favor of a safer C++ Boost way to split a string. The case in question being very easy, it is not worth using the powerful Spirit library; better to use tokenizer, expressly designed for cases like this.

Here is the offending original function that we want to refactor:
int getValue(char* buffer)
{
    int code, value, index;
    sscanf(buffer, "%d:%d:%d",&code, &value, &index);
    std::cout << code << " [" << index << "]: " << value << std::endl;

    return value;
}
It could be unsafe, but it is simple to write and understand. The Boost tokenizer is much more flexible and safe. And a bit more complicated:
int getValue(char* raw)
{
    std::string buffer(raw);
    boost::char_separator<char> separator(":"); // 1

    typedef boost::tokenizer<boost::char_separator<char> > MyTokenizer; // 2
    MyTokenizer tok(buffer, separator);
    std::vector<std::string> tokens; // 3
    std::copy(tok.begin(), tok.end(), std::back_inserter<std::vector<std::string> >(tokens)); // 4
    if(tokens.size() != 3) // 5
        return 0;

    std::cout << tokens[0] << " [" << tokens[2] << "]: " << tokens[1] << std::endl;
    return atoi(tokens[1].c_str()); // 6
}
1. The separators are passed to the tokenizer in a string, each character passed is considered as a valid separator. In this case we need only colon.
2. A typedef makes the rest of the code more readable. boost::tokenizer is a template class that could be used "as is"; here we specify the separator type, so that we can pass a separator instance to the actual tokenizer we are going to use.
3. A vector is used to keep the tokens resulting from the split.
4. Remember that you have to go through a back inserter, so that room is made for each new element in the vector.
5. Usually, when something unexpected happens, a good idea is to throw an exception. Here I assume returning zero is enough.
6. Only the "value" token is converted to an int, just before returning it to the caller.

The resulting code is longish, but mainly because I aimed to make it as readable as I could.


PUB-SUB with ZeroMQ 3.1: client

We have seen the publisher, now it is time to write the subscriber to complete the minimal structure of a PUB-SUB application implementing that ØMQ messaging pattern.

One point to stress on the SUB side is that we have to remember to set a filter on the socket, to specify which subset of the messages sent by the PUB we are interested in. A bit counterintuitively, if we do not set one, ZeroMQ assumes we don't want to get any message at all.

This client reads all the messages sent by the server till it receives an empty message. It extracts a value from each message, does some elaboration on it, and dumps a result to the user:
void* context = zmq_init(1);
void* socket = zmq_socket(context, ZMQ_SUB); // 1
zmq_connect(socket, "tcp://localhost:50014");

zmq_setsockopt(socket, ZMQ_SUBSCRIBE, NULL, 0); // 2

long total = 0;
int counter = 0;
while(true)
{
    char buffer[MSG_SIZE]; // 3
    int size = zmq_recv(socket, buffer, MSG_SIZE, 0);
    if(size > 0) // 4
    {
        buffer[size < MSG_SIZE ? size : MSG_SIZE - 1] = '\0'; // 5

        total += getValue(buffer); // 6
        counter++;
    }
    else
    {
        std::cout << "Terminating" << std::endl;
        break;
    }
}
std::cout << "Average temperature: " << total / counter << std::endl; // 7
zmq_close(socket);
zmq_term(context);
1. We say to 0MQ that this socket is going to be used by the subscriber in a PUB-SUB pattern.
2. As we said, we need to set a filter on the subscriber. Here we want to get everything, so this is a "no filter". This is how we would get only the messages starting with the character '1':
const char* filter = "1";
zmq_setsockopt(socket, ZMQ_SUBSCRIBE, filter, strlen(filter));
But in that case the client would not receive the empty message, so we should rewrite the code to break the while loop in some other way.
3. In ZeroMQ we use a raw byte buffer to exchange messages, so we need to explicitly allocate a chunk of memory. If MSG_SIZE is smaller than the biggest message we get, the message is truncated. So choose its value carefully.
4. Only if we get a "good" message (no error, not empty) do we do something with it.
5. The raw byte array is converted to a C-string by putting a terminator at its end.
6. The getValue() function, described below, extracts the value we are interested in from the buffer.
7. Assuming that the information stored in the messages was a temperature, we give the user feedback on its average value.

The implementation that I show here for the getValue() function is based on the infamous unsafe C string scan. It has just one quality: it is very simple. I guess I'll write another post to show a safer way to do the same, but for the time being let's live with it:
int getValue(char* buffer)
{
    int code, value, index;
    sscanf(buffer, "%d:%d:%d",&code, &value, &index);
    std::cout << code << " [" << index << "]: " << value << std::endl;

    return value;
}


PUB-SUB with ZeroMQ 3.1: server

Publish-subscribe is the second messaging pattern directly supported by ØMQ that I am going to show at work with an easy example. If you want a simple description of the PUB-SUB pattern as implemented by ZeroMQ, you could jump to a previous post of mine, written for version 2.1 but at such a high level that it is not affected by the 0MQ changes in version 3.1; or you could read the Z-Guide, currently still referring to 2.1, but much more complete (and fun) than my posts.

Let's start with the server. I have rewritten the 2.1 ZMQ PUB code, this time using the bare C interface instead of the native C++ wrapper, with some tiny changes done just for the fun of it.

The publisher is going to send a hundred messages in this format:
[0,1]:n:i
That means: 0 or 1, colon, a number, colon, the message index, in (0..99). Then it sends an empty message, as an end of transmission signal.

If you have already seen the ZeroMQ reply server for the REQ-REP 0MQ pattern, you should find it straightforward:
void* context = zmq_init(1);
void* socket = zmq_socket(context, ZMQ_PUB); // 1
zmq_bind(socket, "tcp://*:50014");

readyToSend(); // 2
std::stringstream ss;
for(int i = 0; i < 100; ++i)
{
    ss.str("");
    ss << i%2 << ':' << i*42 << ':' << i; // 3

    std::string s = ss.str();
    std::cout << "Sending " << s << std::endl; // 4
    zmq_send(socket, s.c_str(), s.length(), 0); // 5
}

std::cout << "Sending an empty message, as terminator" << std::endl;
zmq_send(socket, NULL, 0, 0);

zmq_close(socket);
zmq_term(context);
1. We say to ZeroMQ that this socket is used as a publisher in the PUB-SUB pattern.
2. A tiny procedure to give us the time to start the clients before the server sends all its messages. An alternative could be to slow down the server by calling sleep().
3. Set the message as defined above, values are quite meaningless, but who cares, it is just a silly example.
4. Some feedback.
5. And finally we send the message. Remember that in ØMQ we use raw byte arrays, so we explicitly specify the buffer and its size.

The function to hang the server on till the clients are ready could be something simple like this:
void readyToSend()
{
    std::cout << "Enter when ready" << std::endl;
    std::string input;
    std::getline(std::cin, input);
}
Next step will be to write the client. We'll see it in a minute.


REQ-REP with ZeroMQ 3.1: client

Given a ØMQ 3.1 server (REP, the guy who replies), as written in the previous post, it is quite easy to write a client (REQ, the guy who sends a request).

The code is symmetrical. Where the server waits on a receive for a client and then sends back an acknowledging message, the client sends a message and then waits for the server reply. To add a bit of fun to the code, the client sends ten "real" messages, and then a "fake" (empty) one, which by the convention set in this application is a request to the server to shut down:
void* context = zmq_init(1);
void* socket = zmq_socket(context, ZMQ_REQ);
zmq_connect(socket, "tcp://localhost:50013"); // 1

const char* buffer = "Hello";
int size = strlen(buffer);
for(int i = 0; i < 10; ++i)
{
    std::cout << "Sending " << buffer << " [" << i << "]" << std::endl;
    zmq_send(socket, buffer, size, 0); // 2

    if(!receive(socket)) // 3
        break;
}

const char* terminator = "";
zmq_send(socket, terminator, 0, 0); // 4

zmq_close(socket);
zmq_term(context);
1. We are connecting in client mode, so we have to provide the IP address of the machine where the server is. Here we have both parts of the pattern on the same machine, so localhost is used. As for the server, the protocol is "tcp" and the port number matches.
2. Remember that in ZeroMQ 3.1 zmq_send() and zmq_recv() have an extra parameter: the size of the buffer holding the message.
3. The receive() function is shown in the server post. It executes zmq_recv() and returns false if the message is empty or there is an error in receiving.
4. An empty message is sent to the server, before shutting down the client.


REQ-REP with ZeroMQ 3.1: server

To see what has changed from ØMQ 2.x to 3.1, I have rewritten my old simple request-reply example for the new library. And I found a few little surprises.

This time I won't use the C++ wrapper provided by ZMQ, but the naked C interface. The resulting code is more verbose, and the use of void pointers is alarming for someone used to higher level programming, but we get a better chance to see the process in more detail.

This server is designed to wait synchronously for a client to connect, print whatever it receives (assuming it is a string of readable characters), and send back to the client an acknowledgment message. When the client sends an empty message, or when any error happens, the receiving loop is interrupted and the server closes.

As we will see, sending a message is pretty straightforward, while receiving is a bit more complex. So I created a receive() function, called by the main loop:
void* context = zmq_init(1); // 1
void* socket = zmq_socket(context, ZMQ_REP); // 2
zmq_bind(socket, "tcp://*:50013"); // 3
while(true)
{
    if(!receive(socket)) // 4
        break;

    boost::this_thread::sleep(boost::posix_time::millisec(250)); // 5
    std::cout << "Sending acknowledgment to the client" << std::endl;
    const char* buffer = "Ack";
    zmq_send(socket, buffer, strlen(buffer), 0); // 6
}

zmq_close(socket);
zmq_term(context);
1. Nothing has changed here. Before using any 0MQ functionality we have to initialize a 0MQ context. The parameter passed is the ØMQ thread pool size; a value of zero makes sense only for inproc use.
2. This is the server, the guy who sits waiting to reply to the client, so here we need a socket type ZMQ_REP.
3. The binding specifies the protocol, tcp, a star to say that we accept connections from anywhere, and the port associated to this service.
4. See below: if the receive on the socket fails, or if the client explicitly asks the server to shut down by sending an empty message, receive() returns false, and so the loop is interrupted.
5. Let's slow down the execution - I used the Boost sleep() to keep the code as portable as possible. If you don't have the Boost libraries in your development environment, you could remove this line or, better, add them now.
6. First surprise: the zmq_send() syntax has changed. It no longer expects a pointer to a zmq_msg_t structure, but a raw pointer to void and the number of bytes we want to send starting from that address. The first parameter is still a pointer to the current socket, and the last one carries flags for sending options. For the moment no special option is used, so a 0 suffices.

Let's see now how to receive:
const int MSG_SIZE = 64;

// ...

bool receive(void* socket)
{
    char buffer[MSG_SIZE]; // 1
    int size = zmq_recv(socket, buffer, MSG_SIZE, 0); // 2
    if(size < 0) // 3
    {
        std::cout << "Error code " << errno <<  ". Terminating." << std::endl;
        return false;
    }
    if(!size) // 4
    {
        std::cout << "Empty message received. Terminating." << std::endl;
        return false;
    }
    std::cout << "Received: ";
    std::for_each(buffer, buffer + size, [](char c){std::cout << c;}); // 5
    std::cout << std::endl;
    return true;
}
1. In ZeroMQ 3.1 we can use a plain char buffer to manage our messages. The nuisance of this solution is that we have to explicitly allocate an array with a known size.
2. Same as for zmq_send(), we have to pass the pointer to the memory where the message should be stored and the maximum size allowed. What if the incoming message is bigger? It is truncated. The returned value is the size of the message as received, so it can be bigger than the actual size of the buffer, and this lets us know that the message has been truncated.
3. If the returned value is less than zero an error occurred. The code for the error is stored in the errno global variable.
4. As designed for this simple example application, an empty message means a request from the client to shut down the server.
5. I thought it was kind of cool to dump the data received using a combination of STL for_each() and a lambda function, even though this is not the major point of the post.

Once you get how the server works, the client looks easy. But I am out of time; we'll see it in the next post.


Boost 1.49 beta and MSVC

I have just installed the latest Boost libraries, 1.49, currently available in beta version (the definitive 1.49 version is now out), to use them with my MSVC 2010 development environment.

The setup procedure is always the same, but maybe a refresher could be useful.

First thing, a few useful links.

The sourceforge download page for the 1.49 beta 1 version.

The latest Boost libraries release documentation root page.

The Boost getting started for Windows page.

Once you have downloaded the libraries from SourceForge, you have to build Boost for MSVC (if this is the compiler you plan to use; please refer to the above Boost documentation for other compilers). You can do it in a few different ways; I prefer this procedure:

1) Open Visual Studio Command Prompt
2) Go to your Boost root directory
3) Run
bootstrap
4) If no error message is raised, run
.\b2
Wait for a (longish) while; at the end you should get this message:
The Boost C++ Libraries were successfully built!

The following directory should be added to compiler include paths:

    D:/dev/boost_1_49_0_beta1

The following directory should be added to linker library paths:

    D:\dev\boost_1_49_0_beta1\stage\lib
Well, your directory names will probably be different, but that is the spirit.

The Boost libraries are now ready to be used in a project; we just have to tell the compiler where to find the include files, and the linker where to find the .lib files.

Include files

Open the project's properties, go to the Configuration Properties, C/C++, General tab, and insert in the Additional Include Directories your root Boost directory.

Libraries

If you don't do this step, and try to compile anyway, you get an error like this:
LINK : fatal error LNK1104: cannot open file 'libboost_thread-vc100-mt-gd-1_49.lib'
Theoretically speaking, you should provide the full name of each library your code refers to. We can save a big part of this tedious job thanks to the "auto-linking support" feature, which requires us to provide just the directory where the linker has to look for the .lib files.

Again in the project's properties, we go this time to Configuration Properties, Linker, General tab, and add the Boost lib directory, as reported by b2, to the Additional Library Directories.

That's it. Have fun with Boost!


IF ELSE in a Windows shell

When in Rome, do as the Romans do, they say. Same when in Windows. This tiny post is about the quirky syntax of the IF ELSE statement in a batch file for that environment.

But at least a minimal background is required.

The file extension

In the old DOS times, a batch file was identified by the .BAT extension; nowadays it is more common to see a .CMD extension. If your script is designed to run on a modern system, you can treat these two extensions as synonyms.

But if you want to stay on the safe side, and remark that your shell file is not designed to run on DOS, you should prefer the latter.

Environment variable

If the IF ELSE command is quirky, the behavior of SET, the statement that takes care of environment variable management, is a good simple way to get a headache. Just a fast introduction to it here.

You just call SET to print the complete list of all the environment variables currently available. To create, or reset, an environment variable, you call something like this:
SET USER=Tom
In this case we are saying that USER (all uppercase) should be set to Tom. The SET command is very sensitive to blanks, and allows you to create variables named in weird ways, with blanks and upper and lower case letters (this is actually a feature that was not supported by the old DOS); even blanks in the associated value could be quite a surprise, if you put them at its beginning. So remember: no blanks near the equal sign in a SET statement, at least if you would like to have a quiet life.

Passing parameters

This is a far from perfect mechanism but, for what matters here, it can be explained very easily. Parameters are seen as variables ranging from %0 (the name used to call the script itself), through %1 (first parameter), to %9 (ninth parameter). What if our script needs more than 9 parameters? (Hint: we can use SHIFT.)

IF ELSE

The major nuisance in the IF ELSE construct for Windows batch files is due to the interpreter's nature: it reads and executes one line after the other. That means any line should be executable atomically, and only minimal support for complex instruction structures is provided.

So, we have just two alternatives when writing an IF ELSE: writing the complete statement on a single line, or using a very strict multi-line structure.

I guess an example is the best way of showing what I mean.
@ECHO OFF

IF "%1" == "" (ECHO Welcome, stranger!) ELSE ( ECHO Hello, %1! )

IF NOT DEFINED USER (
   ECHO no local user was defined
   SET USER=%1
) ELSE (
   ECHO local user was %USER%
   set USER=
)
This is a little silly batch application, but it has something interesting to say.

The first IF ELSE is a one-liner. It checks whether the first parameter is actually something and, if so, greets the caller using it. Notice the double quotes around %1 in the test clause, which let us compare its value against the empty string. There are other, more sophisticated ways to get the same result, but here this suffices.

The second IF ELSE is more complicated, so complicated that we use the multi-line syntax, a real pain. You must keep this exact format, otherwise you get mystifying syntax errors.

It checks whether the environment variable USER is undefined; in that case we print a message and set it to the parameter passed by the caller. Naturally, if the user didn't pass anything, the result is to remove USER from the environment variables.

The ELSE block prints the current value of USER (notice that it is decorated with percent signs, to resolve the ambiguity between the plain word USER and the variable name USER), and then removes it from the environment.
