Hot questions for using ZeroMQ on Linux

Question:

The Python standard library's socket.create_connection() method has a source_address option for controlling which source IP a connection uses.

How do I do the same thing with a Python ZeroMQ socket, given a machine that has multiple addresses?

In this case, I've been using Linux's iproute2 ip addr add to create the addresses and the ZeroMQ PUB/SUB socket-archetypes.


Answer:

Well, ZeroMQ is a bit tricky to read as a socket-"counterparty" ( it's not one )

Why?

A classical socket is a free-to-harness resource.

ZeroMQ is a rather complex hierarchy of ideas and principles of behaviours ( better: distributed behaviours ) that helps design smart distributed computing systems without touching the low-level details ( well abstracted away by ZeroMQ ) that control the actual flow of events under the storms of harsh conditions all distributed computing systems are open to face. Those details still have to be handled at the low level if the high-level abstractions ZeroMQ "promises" to keep are to be fulfilled, so that the designer's mind is free to focus on his/her core application part instead of re-designing wheels ( with all the trials and errors ) for pulling strings on O/S resources and shaking system services just to collect a few low-hanging types of fruit.


For these reasons it is better to straight away forget about ZeroMQ being "something-like-a-socket"


ZeroMQ hierarchy in less than five seconds

1: ZeroMQ promises an easy re-use of a few trivial Scalable Formal Communication Pattern archetypes offering a particular distributed behaviour { PUB/SUB | PUSH/PULL | PAIR/PAIR | XPUB/XSUB | ... | REQ/REP }.

2: Except for the case of exclusively using the device-less inproc:// transport-class, ZeroMQ needs one or more instances of a tunable "engine": a Context( nIOthreads = N ), N >= 1.

3: Having this, any ( future socket ) Access Point can get instantiated, bearing a behavioural archetype from the very moment of its birth:

aSubscribeCHANNEL = aLocalCONTEXT.socket( zmq.SUB )      # this is NOT a <SOCKET>
#                                 ^^^^^^__________________ even though it is typed in as one

4: Having an "Access Point" instance ready "inside" the local "engine", one can lock in its materialisation in the external reality, using one or more ( yes, more ... WOW! meaning more incoming pulling-strings into / whistles blowing out from a single Access Point "behaviour-node" ) calls to either of these methods: .bind( <transport-class>://<a-class-specific-address> ) or .connect( <transport-class>://<a-class-specific-address> )

5: If and only if a .bind()-RTO-ready Access Point A "gets visited" by a first live .connect()-RTO-ready Access Point B with a matching behaviour pairing does the ZeroMQ messaging/signalling archetype go live ( calling it also a socket was probably kept for historical reasons, to ease the explanations of those times )

( PUB/PUB will never fit, for obvious reasons, whereas PUB/SUB and many other behaviour-archetype pairs will and do match nicely, forming the mutually "compatible" behaviours that finally go live and stay so; a minimal sketch follows )
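A minimal pyzmq sketch of steps 2-5 inside a single process ( the endpoint, names and payload are illustrative, not part of the original answer ):

import time
import zmq

aLocalCONTEXT = zmq.Context( 1 )                       # 2: the local "engine", 1 I/O-thread

aPublishCHANNEL = aLocalCONTEXT.socket( zmq.PUB )      # 3: Access Point, PUB behaviour archetype
aPublishCHANNEL.bind( "tcp://127.0.0.1:5557" )         # 4: materialise it via .bind()

aSubscribeCHANNEL = aLocalCONTEXT.socket( zmq.SUB )    # 3: Access Point, SUB behaviour archetype
aSubscribeCHANNEL.setsockopt( zmq.SUBSCRIBE, b"" )     #    subscribe to everything
aSubscribeCHANNEL.connect( "tcp://127.0.0.1:5557" )    # 4: "visit" the .bind()-ready peer

time.sleep( 0.5 )                                      #    give the slow-joining SUB time to attach
aPublishCHANNEL.send( b"hello" )                       # 5: the matched PUB/SUB pair is live
print( aSubscribeCHANNEL.recv() )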


So, how do I do the same thing with a Python ZeroMQ socket, given a machine that has multiple addresses?

Simply use the fully qualified specification in a call to the .bind( "{ tcp | pgm | epgm }://<ip>:<port#>" ) method and you are done.
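For the PUB/SUB archetypes from the question, that could look like the following sketch ( assuming this host really owns 100.100.100.100, e.g. added beforehand via ip addr add ):

import zmq

ctx = zmq.Context()

pub = ctx.socket( zmq.PUB )
pub.bind( "tcp://100.100.100.100:5555" )     # publish only via this particular local address

sub = ctx.socket( zmq.SUB )
sub.setsockopt( zmq.SUBSCRIBE, b"" )
sub.connect( "tcp://100.100.100.100:5555" )  # peers reach the publisher through that address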

That easy.

Cool, isn't it?

Many further pleasant surprises wait under the hood: performance tuning, latency shaving and security tweaking.

Question:

This is my first time working with IPC, and I have written this script:

#!/usr/bin/python

import zmq

context = zmq.Context()
socket = context.socket(zmq.PAIR)
socket.setsockopt(zmq.RCVTIMEO, 2000)
socket.connect ("ipc:///tmp/something")
socket.send(b"123")
try:
    message = socket.recv()
except:
    print("DEBUG!")
    message = None

When my server script is running ( it simply sends an answer ) everything is working fine.

But when the .recv() call times out ( e.g. because there is no server running ), the script won't terminate after the "DEBUG!" print and I have to stop it manually using Ctrl+C.

I tried disconnecting and closing the socket, but it doesn't change anything.

When I put the whole script into a function and call it, I get the following error on KeyboardInterrupt:

^CException ignored in: <bound method Context.__del__ of <zmq.sugar.context.Context object at 0x7f16a36d5128>>
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/zmq/sugar/context.py", line 46, in __del__
    self.term()
  File "zmq/backend/cython/context.pyx", line 136, in zmq.backend.cython.context.Context.term (zmq/backend/cython/context.c:2339)
  File "zmq/backend/cython/checkrc.pxd", line 12, in zmq.backend.cython.checkrc._check_rc (zmq/backend/cython/context.c:3207)
KeyboardInterrupt

I'm running Python 3.6.1 and version 16.0.2 of the PyZMQ module on Arch Linux.


Answer:

You may adopt this as a standard ZeroMQ infrastructure setup policy:

The default value of the LINGER attribute forces the socket instance to wait upon an attempt to .close(). Set it to 0 right upon instantiation to avoid this behaviour, so the process does not finally hang upon termination.

import zmq
nIOthreads = 2                           # ____POLICY: set 2+: { 0: non-blocking, 1: blocking, 2: ...,  }
context = zmq.Context( nIOthreads )      # ____POLICY: set several IO-datapumps

socket  = context.socket( zmq.PAIR )
socket.setsockopt( zmq.LINGER,      0 )  # ____POLICY: set upon instantiations
socket.setsockopt( zmq.AFFINITY,    1 )  # ____POLICY: map upon IO-type thread
socket.setsockopt( zmq.RCVTIMEO, 2000 )

socket.connect( "ipc:///tmp/something" )
socket.send( b"123" )
try:
    message = socket.recv()
except:
    print( "DEBUG!" )
    message = None
finally:
    socket.close()                       # ____POLICY: graceful termination
    context.term()                       # ____POLICY: graceful termination

Question:

I've got a program which receives information from about 10 other (sensor reading) programs (all controlled by myself). I now want to make them communicate using ZeroMQ. For most of the queues the important thing is that the central receiving program always has the latest sensor data, all older messages are not important anymore. If a couple messages get lost I don't care. So for all of them I started out with a separate PUB/SUB socket; one for each program. But I'm not sure if that is the right way to do it. As far as I understand I have two options:

  1. Make a separate socket for every program and read them out in a loop. That way I know by the socket what the information is I'm receiving (I'm often just sending an int).
  2. Make one socket to which all the programs connect, and with every message I send a string which tells the receiving end what the message is about.

All connections are on a PUB/SUB basis, so creating one socket would work out well. I'm just not sure if that is the most efficient way to do it.

All tips are welcome!


Answer:

- PUB/SUB is fine and allows an easy conversion from N-sensors:1-logger into N-sensors:2+-loggers. One might also benefit from a conceptual separation of a socket from an access-port, where more than one socket may get connected.

How to always get JUST THE ACTUAL ( LAST ) SENSOR READOUT:

If not bound, due to system-integration constraints, to some early ZeroMQ API, there is a lovely feature exactly for this, via the .setsockopt( ZMQ_CONFLATE, True ) method:

ZMQ_CONFLATE: Keep only last message. If set, a socket shall keep only one message in its inbound/outbound queue, this message being the last message received / the last message to be sent. Ignores ZMQ_RCVHWM and ZMQ_SNDHWM options. Does not support multi-part messages; in particular, only one part of it is kept in the socket's internal queue.
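In pyzmq this is a single option, set before .connect() / .bind(); a minimal sketch ( the endpoint is illustrative ):

import zmq

ctx = zmq.Context()

sub = ctx.socket( zmq.SUB )
sub.setsockopt( zmq.CONFLATE, 1 )        # must be set before .connect()/.bind()
sub.setsockopt( zmq.SUBSCRIBE, b"" )
sub.connect( "tcp://127.0.0.1:5555" )

latest = sub.recv()                      # delivers the most recent retained message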


On design dilemma:

Unless your real-time control stability introduces some hard real-time limit, the PUB side freely decides how often a new value is instructed to .send() to the SUB(-s). No magic is needed here, the less so with the ZMQ_CONFLATE option set on the internally managed outgoing queue.

The SUB-side receiver(s) will also benefit from the ZMQ_CONFLATE option set on the internally managed incoming queue. Given that a set of individual .bind()-s instantiates separate landing ports for the delivery of the different individual sensor readouts, your "last" values will consistently remain the "last" readouts. If all readouts went into one common landing pad, your receiving process would lose all readouts except the one that just happened to be the "last" right before .recv() took place, which would not help much, would it?
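A sketch of that layout, combining ZMQ_CONFLATE with one landing port per sensor and a Poller on the logger side ( sensor names and port numbers are illustrative ):

import zmq

ctx     = zmq.Context()
poller  = zmq.Poller()
sensors = {}

for name, port in ( ( "temperature", 5556 ), ( "pressure", 5557 ) ):
    sub = ctx.socket( zmq.SUB )
    sub.setsockopt( zmq.CONFLATE, 1 )        # keep only the freshest readout per queue
    sub.setsockopt( zmq.SUBSCRIBE, b"" )
    sub.bind( "tcp://*:%d" % port )          # the matching sensor-side PUB .connect()-s here
    poller.register( sub, zmq.POLLIN )
    sensors[sub] = name

while True:
    for sub, _ in poller.poll( timeout=100 ):
        print( sensors[sub], sub.recv() )    # the latest value from that sensor only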

If some I/O-performance related tweaking becomes necessary, the .Context( n_IO_threads ) + ZMQ_AFFINITY-mapping options may increase and prioritise the resources the ioDataPump may harness for increased I/O-performance.

Question:

I'm attempting to use the ZMQ draft specs ZMQ_RADIO and ZMQ_DISH. I built libzmq and cppzmq with CMake ExternalProject and the flag ENABLE_DRAFTS=ON, and verified it was built with drafts using the zmq_has() function. I modified the standard hello world example to use radio and dish and cannot get them to talk. I also get compilation errors that ZMQ_RADIO and ZMQ_DISH are undefined. I defined them manually and it compiles, but I never get an actual connection, so it seems like something else is wrong.

Here's my code:

CMakeLists.txt

cmake_minimum_required(VERSION 2.8.11)
project(zmq_udp)

include(ExternalProject)

ExternalProject_Add(libzmq
    GIT_REPOSITORY https://github.com/zeromq/libzmq
    GIT_TAG master
    CMAKE_ARGS 
      -DENABLE_DRAFTS=ON
      -DWITH_PERF_TOOL=OFF 
      -DZMQ_BUILD_TESTS=OFF 
      -DENABLE_CPACK=OFF
      -DCMAKE_INSTALL_PREFIX=${CMAKE_BINARY_DIR}/zmq
      -DCMAKE_LIBRARY_OUTPUT_DIRECTORY=${CMAKE_BINARY_DIR}/zmq/lib
      -DCMAKE_C_FLAGS=${CMAKE_C_FLAGS}
      -DCMAKE_CXX_FLAGS=${CMAKE_CXX_FLAGS}
      -DCMAKE_SHARED_LINKER_FLAGS=${CMAKE_SHARED_LINKER_FLAGS}
)

ExternalProject_Add(cppzmq
    GIT_REPOSITORY https://github.com/zeromq/cppzmq
    GIT_TAG master
    CONFIGURE_COMMAND ""
    BUILD_COMMAND ""
    INSTALL_COMMAND ${CMAKE_COMMAND} -E copy <SOURCE_DIR>/zmq.hpp ${CMAKE_BINARY_DIR}/zmq/include/zmq.hpp
    TEST_COMMAND ""
)

add_dependencies(cppzmq libzmq)

set(ZEROMQ_LIBNAME "libzmq.so")
set(ZEROMQ_INCLUDE_DIRS ${CMAKE_BINARY_DIR}/zmq/include)
set(ZEROMQ_LIBRARIES ${CMAKE_BINARY_DIR}/zmq/lib/${ZEROMQ_LIBNAME})

include_directories(${ZEROMQ_INCLUDE_DIRS})

add_executable(server server.cpp)
add_executable(client client.cpp)
add_dependencies(server cppzmq)
add_dependencies(client cppzmq)
target_link_libraries(server ${ZEROMQ_LIBRARIES})
target_link_libraries(client ${ZEROMQ_LIBRARIES})

server.cpp

#include <zmq.hpp>
#include <string>
#include <iostream>

#define ZMQ_DISH 15

int main ()
{
    std::cout << zmq_has("draft") << std::endl;

    zmq::context_t context (1);
    zmq::socket_t socket (context, ZMQ_DISH);
    socket.bind ("udp://127.0.0.1:5555");

    while (true)
    {
        zmq::message_t request;

        socket.recv (&request);
        std::cout << "Received Hello" << std::endl;
    }

    return 0;
}

client.cpp

#include <zmq.hpp>
#include <string>
#include <iostream>
#include <unistd.h>

#define ZMQ_RADIO 14

int main ()
{
    zmq::context_t context (1);
    zmq::socket_t socket (context, ZMQ_RADIO);

    std::cout << "Connecting to hello world server…" << std::endl;
    socket.connect ("udp://127.0.0.1:5555");

    for (int request_nbr = 0; request_nbr != 10; request_nbr++)
    {
        zmq::message_t request (5);
        memcpy (request.data (), "Hello", 5);
        std::cout << "Sending Hello " << request_nbr << "…" << std::endl;
        socket.send (request);

        sleep(1);
    }

    return 0;
}

The server outputs a 1 as expected for the zmq_has() function, which should verify libzmq was built with the draft API mode on.

What do I need to do to get RADIO/DISH to work properly?

I'd like to use ZMQ on a project as a UDP receiver to receive some UDP packets from a non-ZMQ application.


Answer:

RADIO and DISH are in a draft state and not available in stable builds. If you need to access the DRAFT API, build zmq from this link.

The following is part of zmq.hpp

// These functions are DRAFT and disabled in stable releases, and subject to 
// change at ANY time until declared stable.                                 
#ifdef ZMQ_BUILD_DRAFT_API

//DRAFT Socket types.
#define ZMQ_SERVER 12
#define ZMQ_CLIENT 13
#define ZMQ_RADIO 14
#define ZMQ_DISH 15
#define ZMQ_GATHER 16
#define ZMQ_SCATTER 17
#define ZMQ_DGRAM 18
#endif

Question:

Why can't I get the Identity of the socket created via zsock_new_stream? zmq_getsockopt returns -1.

zsock_t *socket = zsock_new_stream("tcp://127.0.0.1:5555");

uint8_t id [256];
size_t id_size = 256;

int rc = zmq_getsockopt (socket, ZMQ_IDENTITY, id, &id_size);
assert(rc == 0);

Using the old deprecated zsocket works fine, see below:

zctx_t *ctx = zctx_new();
void *sock = zsocket_new(ctx, ZMQ_STREAM);
int rc = zsocket_connect(sock, "tcp://127.0.0.1:5555");

uint8_t id [256];
size_t id_size = 256;

int rc = zmq_getsockopt (socket, ZMQ_IDENTITY, id, &id_size);
assert (rc == 0);

Does an example exist that uses zsock_new_stream that works?


Answer:

No, there is no working example, because the identity property is ignored for STREAM sockets. The czmq implementation follows the ZMTP v3 protocol.

Quote, regarding the identity property:

"A REQ, DEALER, or ROUTER peer connecting to a ROUTER MAY announce its identity, which is used as an addressing mechanism by the ROUTER socket. For all other socket types, the Identity property shall be ignored."

But you can always send an id from a client peer to a server through multipart messages, where your id is in the first frame of the message. On the other side, getting the id is just a matter of reading the first frame of the received message.
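A minimal sketch of that idea, shown in pyzmq for brevity ( the socket archetypes, endpoint and id value are illustrative, not tied to the czmq STREAM case above ):

import zmq

ctx = zmq.Context()

# server side: expects [ id-frame, payload-frame ]
server = ctx.socket( zmq.PULL )
server.bind( "tcp://127.0.0.1:5555" )

# client side: prepends its own application-level id as the first frame
client = ctx.socket( zmq.PUSH )
client.connect( "tcp://127.0.0.1:5555" )
client.send_multipart( [ b"client-42", b"hello" ] )

peer_id, payload = server.recv_multipart()   # reading the id is just reading frame 0
print( peer_id, payload )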

Question:

FPC 3.0.0, Lazarus 1.6.0

I have two threads on a Linux machine (I will check on Windows later, this code is supposed to be cross-platform), the main thread and the client thread. The purpose of the client thread is to provide non-blocking request-reply network functionality.

The pascal client (requester) communicates with a server, written in python (replier).

I am using ZeroMQ REQ/REP for the low-level stuff; versions are at the end.

In the high level, creating and starting the client thread goes like this:

//from the main thread
FClientThread.Create('tcp://localhost:5020');
FClientThread.Start;

To send a request:

//from the main thread
FClientThread.SendRequest('im_requesting_stuff',-1,'give_it_to_me');
Sleep(1000);
FClientThread.SendRequest('requesting',-1,'gimme_more'); 

//the TClientThread descends from TThread.
procedure TClientThread.SendRequest(ACode: string; ATrialIndex: integer;
  ARequest: string);
begin
  FRequest := ARequest;
  FCode := ACode;
  FTrialIndex := IntToStr(ATrialIndex);
  RTLeventSetEvent(FRTLEvent);
end;

procedure TClientThread.Execute;
var
  AMessage : UTF8String;
begin
  while not Terminated do
    begin
      AMessage := '';
      RTLeventWaitFor(FRTLEvent);

      FRequester.send( FRequest );
      FRequester.recv( AMessage ); // blocking code,
                                   // that is why it is inside the thread

      // *****************************************************************************
      // BUG: These vars are being filled with the last values of
      //            ( 'FTrialIndex',               'AMessage',               'FCode' )
      FMsg := #40#39 + FTrialIndex + #39#44#32#39 + AMessage + #39#44#32#39 + FCode + #39#41;
      // 
      // *****************************************************************************

      Synchronize( @Showstatus );
    end;
end;  

The server side works in a non-blocking fashion, single thread:

while True:
   msg = self.socket.recv(flags=zmq.NOBLOCK)
   # do stuff
   self.socket.send(response)

The commented bug ( repetition of the last call ) occurs ONLY if .SendRequest() is called multiple times in very short intervals (without the sleep(), for example).

I was expecting some losses, but I was not expecting repetition since I am explicitly waiting for the main thread to return.

Any suggestions?

Best Regards


Edit 1:

ZMQ versions

Python:

# zmq
print zmq.zmq_version()
# 4.1.2

# pyzmq
print zmq.__version__
# 15.1.0

Pascal:

For the delphizmq version, see commit 50a28b4b72c531536452ee5d79e19f5960c9f7c7.

// zmq
ZMQVersion(minor,major,patch)
WriteLn(minor,major,patch)
// 3.2.5

Edit 2:

Logs

Legend:

LogTickCount : ('REQTickCount', 'TestCode:DeltaTime')

Before copying the F-variables to local variables:

4476.028775652 : [debug] TClockThread.Execute:Start 140736007759616
4476.087363892 : ('4476.08701411', 'S:7.316086894')
4478.567764596 : ('4478.56744415', '*R:9.828251263')
4478.923743380 : ('4478.92343718', 'R:10.192552661')
4479.134910639 : [debug] C: expected next
4479.166311115 : ('4479.16600296', 'R:10.424417498') // BUG
4479.201287606 : ('4479.20099538', 'R:10.424417498') 
4483.037278107 : ('4483.03664644', '2b:14.322538449')
4483.868644929 : ('4483.86663063', '*R:15.153748820')
4484.278650579 : ('4484.27829493', 'R:15.562894551')
4484.552199446 : [debug] C: expected next
// data lost
4484.594517381 : ('4484.59364657', 'R:15.841710775')
4490.088956444 : ('4490.08862455', '1b:21.318677098')
4490.465529156 : ('4490.46522659', '*R:21.738692793')
4490.744151772 : ('4490.74376093', 'R:22.027325175')
4491.047253601 : [debug] C: expected next
4491.064188821 : ('4491.0638315', 'R:22.336764091') // BUG
4491.100142009 : ('4491.09978804', 'R:22.336764091')
4496.084726117 : ('4496.08441014', '2a:27.320218402')
4496.494107080 : ('4496.49378719', '*R:27.762184396')
4496.816623032 : ('4496.81630259', 'R:28.100685410')
4497.088603481 : [debug] C: expected next
// data lost
4497.132445971 : ('4497.13214788', 'R:28.378107825')

After copying the F-variables to local variables (note: data was not lost out of pure luck):

8183.233467050 : [debug] TClockThread.Execute:Start 140736477513472
8183.299773049 : ('8183.29946881', 'S:10.678368191')
8183.751976738 : ('8183.7516616', '*R:11.182574383')
8184.262027198 : ('8184.26170452', 'R:11.690292332')
8184.501553063 : [debug] C: is expected next
8184.517362396 : ('8184.51705341', 'C:11.952652441')
8184.552388891 : ('8184.55206258', 'R:11.950063743')
8189.291033037 : ('8189.29070225', '2b:16.683417064')
8189.880999859 : ('8189.88067746', '*R:17.299351333')
8190.257476710 : ('8190.25714788', 'R:17.699402216')
8190.515557895 : [debug] C: is expected next
8190.553234830 : ('8190.55290992', 'C:17.966527084')
8190.588187291 : ('8190.5878607', 'R:17.964049297')
8196.243242766 : ('8196.24264239', '1b:23.683231358')
8196.758162472 : ('8196.75784342', '*R:24.200630247')
8196.964202600 : ('8196.96383131', 'R:24.397590095')
8197.119091206 : [debug] C: is expected next
8197.142731450 : ('8197.14240346', 'C:24.570427216')
8197.177078782 : ('8197.17673878', 'R:24.567590990')
8208.243614030 : ('8208.24296169', '2a:35.685189479')
8208.659747197 : ('8208.65940133', '*R:36.092803071')
8208.794614317 : ('8208.79431818', 'R:36.230488258')
8208.909275842 : [debug] C: is expected next
8208.935924407 : ('8208.93559782', 'C:36.360198934')
8208.970681025 : ('8208.9703712', 'R:36.357781493')

Turns out that using critical sections was not necessary for my specific implementation.


Edit 3.:

Please, note that the present question IS NOT about data loss (queuing requests). It is about thread synchronization (variable sharing). Regarding the data loss, using Sleep() would be merely a hack (you should use queues).

I have tested the code with Sleep(10) and it reduces the chances of data loss. But you should note that it is a luck-based hack.


Answer:

You need some locking mechanism to read and write the values from different threads.

To keep the locking times short, make a copy of the data.

TClientThread = class( TThread )
private
  FCriticalSection: TCriticalSection;
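  // note: create FCriticalSection ( unit SyncObjs ) in the constructor and free it in the destructor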
  ...
end;

//the TClientThread descends from TThread.
procedure TClientThread.SendRequest(ACode: string; ATrialIndex: integer;
  ARequest: string);
begin
  FCriticalSection.Enter;
  try
    FRequest := ARequest;
    FCode := ACode;
    FTrialIndex := IntToStr(ATrialIndex);
  finally
    FCriticalSection.Leave;
  end;
  RTLeventSetEvent(FRTLEvent);
end;

procedure TClientThread.Execute;
var
  lRequest: string;
  AMessage : UTF8String;
begin
  while not Terminated do
    begin
      AMessage := '';
      RTLeventWaitFor(FRTLEvent);

      FCriticalSection.Enter;
      try
        // copy data to local vars
        lRequest := FRequest;
      finally
        FCriticalSection.Leave;
      end;

      FRequester.send( lRequest );
      ...

Question:

On the official ZMQ website, there are references to Windows/Linux sources and a Windows installer (that contains binary .dll/.lib files).

I am able to use the Windows DLL, but cannot find the option to download binary files (.so) for Linux.

Where are the binaries available (without compiling on a Linux machine)? I have a C++ project that works well with the additional dependencies libzmq-v120-mt-4_0_4.dll and libzmq-v120-mt-4_0_4.lib (it compiles and runs), and I am looking to download the same binaries to compile and run it on a Linux machine.


Answer:

There are so many different Linux OS variants and architectures available that it might not be a good idea to just download a .so built by someone else.

Actually, you can easily build your own according to http://zeromq.org/build:sparc-linux. Personally, I successfully built one before with the following steps (you can change the ZeroMQ version accordingly):

cd /tmp/
git clone https://github.com/zeromq/zeromq4-1.git
cd zeromq4-1/
./autogen.sh
./configure
make
make install

Question:

I have a Linux machine configured with additional IPs on the loopback interface:

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet 100.100.100.100/24 scope global lo
       valid_lft forever preferred_lft forever
    inet 100.100.100.101/24 scope global secondary lo
       valid_lft forever preferred_lft forever
    ...

I'm using ZeroMQ's PUB/SUB pattern to connect to remotes using each of these source ips.

socket = zmq.Context().socket(zmq.SUB)
socket.connect('tcp://100.100.100.100:5555;192.168.1.1:5555')

The corresponding server listens on all interfaces.

socket = zmq.Context().socket(zmq.PUB)
socket.bind('tcp://*:5555')

Sending using these addresses works for remote hosts given a properly-configured network, so I know this works. (It's complicated.) Now I want to unit-test this setup, which means checking that setting the source ip works without needing a remote host for the server. I run the server in the same configuration and then try to connect using something like:

socket = zmq.Context().socket(zmq.SUB)
socket.connect('tcp://100.100.100.100:5555;localhost:5555')

The client never makes the connection with the server, but if I remove the source endpoint, it works. Neither localhost nor 127.0.0.1 works as the destination address in the .connect() call. However, if I call netcat, using the same source IP,

nc -s 100.100.100.100 -v -z -w 5 localhost 5555

This succeeds, and the server I connected to properly receives the connection as coming from 100.100.100.100. I looked at tshark's output and in the ZeroMQ client case, I don't see any traffic from 100.100.100.100 over the loopback interface, while when I use nc to establish a TCP connection to the ZeroMQ server, I do.

What's going on here? Does ZeroMQ do something special for this kind of hairpin connection, and if so, is there a way to disable it? Is there a good way to test that I'm invoking the source IP functionality of ZeroMQ correctly without using a remote host?

This may be viewed as a follow-up to my previous question.


Answer:

You can't send and receive on the same port on the same interface. The best solution is to allow ZeroMQ to pick the port to send on:

socket = zmq.Context().socket(zmq.SUB)
socket.connect('tcp://100.100.100.100:*;192.168.1.1:5555')

Unfortunately, the 4.2.2 release of ZeroMQ doesn't support this, though an upcoming release should. For now, the only solution is to hardcode a different port for the sending and receiving address:

socket = zmq.Context().socket(zmq.SUB)
socket.connect('tcp://100.100.100.100:6666;192.168.1.1:5555')

Question:

I have this call to the czmq api:

int rc = zsock_connect(updates, ("inproc://" + uuidStr).c_str());
(Note: uuidStr is of type std::string and zsock_connect expects a const char* as its second argument)

Which gives compiler error:

error: format not a string literal and no format arguments [-Werror=format-security]
int rc = zsock_connect(updates, ("inproc://" + uuidStr).c_str());
                                                               ^                                                                                                    

I've tried:

const char* connectTo = ("inproc://" + uuidStr).c_str();
int rc = zsock_connect(updates, connectTo);

and also

int rc = zsock_connect(updates, (const char*)("inproc://" + 
uuidStr).c_str());

But the error persists.

How do I correct this?

Context: I'm trying to compile this code as a Python extension on Linux using pip install. On Windows it compiles with pip install and runs just fine; presumably that compiler is more permissive.


Answer:

This function acts like printf() and friends, right? If so, you have the same problem that exists with printf(some_var): if the string you're passing has format sequences in it, you get undefined behaviour and bad things happen, because the arguments that the format string tells the function to expect are not present. The fix is to do something like:

int rc = zsock_connect(updates, "inproc://%s", uuidStr.c_str());

Basically, give it a format that takes your string as an argument.

Question:

I am trying to run an example C++ ZMQ client. The code compiles fine with g++, but I cannot run the generated executable because of the following error:

./pairserver.out: /opt/Xilinx/Vivado/2016.1/lib/lnx64.o/libstdc++.so.6: version `CXXABI_1.3.9' not found (required by ./pairserver.out)
./pairserver.out: /opt/Xilinx/Vivado/2016.1/lib/lnx64.o/libstdc++.so.6: version `GLIBCXX_3.4.20' not found (required by /usr/local/lib/libzmq.so.5)
./pairserver.out: /opt/Xilinx/Vivado/2016.1/lib/lnx64.o/libstdc++.so.6: version `CXXABI_1.3.8' not found (required by /usr/local/lib/libzmq.so.5)
./pairserver.out: /opt/Xilinx/Vivado/2016.1/lib/lnx64.o/libstdc++.so.6: version `CXXABI_1.3.9' not found (required by /usr/local/lib/libzmq.so.5)
./pairserver.out: /opt/Xilinx/Vivado/2016.1/lib/lnx64.o/libstdc++.so.6: version `GLIBCXX_3.4.21' not found (required by /usr/local/lib/libzmq.so.5)

I use the following command to compile the code:

g++ pairserver.cpp -o pairserver.out -lzmq

And here is the source code:

//  file: main.cpp
//  Hello World client in C++
//  Connects REQ socket to tcp://localhost:5555
//  Sends "Hello" to server, expects "World" back
//
#include <zmq.hpp>
#include <string>
#include <iostream>

int main ()
{
    //  Prepare our context and socket
    zmq::context_t context (1);
    zmq::socket_t socket (context, ZMQ_REQ);

    std::cout << "Connecting to hello world server…" << std::endl;
    socket.connect ("tcp://localhost:5555");

    //  Do 10 requests, waiting each time for a response
    for (int request_nbr = 0; request_nbr != 10; request_nbr++) {
        zmq::message_t request (5);
        memcpy (request.data (), "Hello", 5);
        std::cout << "Sending Hello " << request_nbr << "…" << std::endl;
        socket.send (request);

        //  Get the reply.
        zmq::message_t reply;
        socket.recv (&reply);
        std::cout << "Received World " << request_nbr << std::endl;
    }
    return 0;
}

I guess there is some conflict with the shared libraries of the installed Vivado 2016.


Answer:

After some digging, I found that my libstdc++ library was somehow linked to a third-party libstdc++ library (from Vivado). I used the ldd command to find the linked libraries, with the following results:

linux-vdso.so.1 (0x00007ffda7997000)
libzmq.so.5 => /usr/local/lib/libzmq.so.5 (0x00007f0d9679b000)
libstdc++.so.6 => /opt/Xilinx/Vivado/2016.1/lib/lnx64.o/libstdc++.so.6 (0x00007f0d96499000)
libgcc_s.so.1 => /opt/Xilinx/Vivado/2016.1/lib/lnx64.o/libgcc_s.so.1 (0x00007f0d96283000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f0d95e92000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f0d95c8a000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f0d95a6b000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f0d956cd000)
/lib64/ld-linux-x86-64.so.2 (0x00007f0d96c23000)

Googling the problem, I ended up using the following command:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/Xilinx/Vivado/2016.1/lib/lnx64.o:/usr/lib/x86_64-linux-gnu

However, it didn't work. I was no longer willing to deal with the problem in a gentle manner, so I renamed /opt/Xilinx/Vivado/2016.1/lib/lnx64.o to something else and my code ran perfectly fine. Vivado caused the problem and ruined half of my day.

Question:

I am simulating the following scenario:

There are two routers, A and B, both with Internet access and also connected to each other via an internal private network (an Ethernet cable between them, basically). They each serve N clients (each router has its own client-side network). The routers send each other keepalive-like messages through a ZeroMQ publisher-subscriber scheme over the private network.

Moreover, when A is congested, it must send its clients' traffic to B (and vice versa), which will forward that traffic to the Internet and thus "help" the congested router (temporarily).

Taking into account that both routers are Linux, I suppose changing A's default gateway to B's private network interface IP would be enough to steer the traffic into B (through the common internal private network).

However, when B is receiving the traffic from A's clients, it must be careful not to forward to the Internet (external network) the packets containing the messages exchanged between the ZeroMQ applications.

My question is: how can B know and differentiate, in the received packets, the ZeroMQ messages from the client packets (from A)?

Capturing with iptables/nfqueue and then analysing the packet? If so, what would identify a packet destined to the ZeroMQ app?

This is all considering that B will forward to the Internet (up) all packets received in the interface connected to the private network.

Note: I don't know if this is relevant for the question, but in the subscriber application, a filter is applied to the messages received. Every message beginning with "network_zmq" is captured by the subscriber.

Edit: I also exchange ICMP packets (ping) between the A and B (it's a requirement in my scenario). This means that ICMP requests from A to B must also not be forwarded to the Internet.


Answer:

Q : How to differentiate ZeroMQ packets from normal traffic?

A simple question that is not so easy to answer. ZeroMQ is not about just sending some packets.

The ZeroMQ packets may use ( if going across an L3+ network infrastructure ) several different transport-classes { tcp:// | pgm:// | epgm:// | vmci:// }.

If such a need arises, one may introduce an app-side weak labelling of such packet traffic, by configuring the socket to set the ToS label as an active declaration of such a marker: .setsockopt( ZMQ_TOS, aToS_VALUE )

Sets the ToS fields ( Differentiated Services (DS) and Explicit Congestion Notification (ECN) field ) of the IP header.
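In pyzmq, such app-side labelling might look like this sketch ( the ToS value and endpoint are illustrative ):

import zmq

ctx = zmq.Context()

keepalive_pub = ctx.socket( zmq.PUB )
keepalive_pub.setsockopt( zmq.TOS, 40 )   # mark outgoing IP packets of this socket's connections
keepalive_pub.bind( "tcp://*:5555" )      # the inter-router keepalive channel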


Q : how can B know and differentiate, in the received packets, the ZeroMQ messages from the client packets (from A)?

This part is harder. A weak, ToS-based detection is possible as noted above; the rest of the packet processing depends on the L3+ router-software capabilities, not on ZeroMQ as-is.

In case some client packets from A carried the same ToS label as the one intended to mark the ZeroMQ-originated traffic, a typical L3+ router software has no chance to distinguish between such cases. It may build some heuristics and "guesstimate", yet IMHO L3+ router software has always been primarily performance focused ( moving packets as fast as possible among the interfaces ), not a "fully-programmable" User-Defined-sniffer platform nor an adaptive Policy-Enforcement platform.

Building a ZeroMQ proxy ( a Man-In-The-Middle node ) may help you in cases where no legally binding obligations are violated by injecting the MITM node.
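A sketch of such a proxy node in pyzmq, relaying the PUB/SUB keepalive traffic through one well-known hop ( ports are illustrative ):

import zmq

ctx = zmq.Context()

xsub = ctx.socket( zmq.XSUB )    # faces the publishing routers
xsub.bind( "tcp://*:5556" )

xpub = ctx.socket( zmq.XPUB )    # faces the subscribing routers
xpub.bind( "tcp://*:5557" )

zmq.proxy( xsub, xpub )          # blocks, forwarding messages and subscriptions both ways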