Hot questions for Using ZeroMQ in opencv

Question:

I've got a simple webcam which I read out using OpenCV and I'm now trying to send this video footage to a different (Python) program using ZeroMQ. So I've got the following simple script to read out the webcam and send it using a ZeroMQ socket:

import cv2
import os
import zmq
import base64

context = zmq.Context()
footage_socket = context.socket(zmq.PUB)
footage_socket.connect('tcp://localhost:5555')

# init the camera
camera = cv2.VideoCapture(0)

while True:
    try:
        (grabbed, frame) = camera.read()            # grab the current frame
        frame = cv2.resize(frame, (640, 480))       # resize the frame
        footage_socket.send_string(base64.b64encode(frame))

        # Show the video in a window
        cv2.imshow("Frame", frame)                  # show the frame to our screen
        cv2.waitKey(1)                              # Display it at least one ms
        #                                           # before going to the next frame

    except KeyboardInterrupt:
        camera.release()
        cv2.destroyAllWindows()
        print "\n\nBye bye\n"
        break

This works well in that it shows the video and doesn't give any errors.

I commented out the two lines which show the image (cv2.imshow() and cv2.waitKey(1)). I then started the script below in parallel. This second script should receive the video footage and show it.

import cv2
import zmq
import base64
import numpy as np

context = zmq.Context()
footage_socket = context.socket(zmq.SUB)
footage_socket.bind('tcp://*:5555')
footage_socket.setsockopt_string(zmq.SUBSCRIBE, unicode(''))

# camera = cv2.VideoCapture("output.avi")

while True:
    try:
        frame = footage_socket.recv_string()
        frame = np.fromstring(base64.b64decode(frame), dtype=np.uint8)
        cv2.imshow("Frame", frame)                  # show the frame to our screen
        cv2.waitKey(1)                              # Display it at least one ms
        #                                           # before going to the next frame
    except KeyboardInterrupt:
        cv2.destroyAllWindows()
        break

print "\n\nBye bye\n"

Unfortunately, this freezes on cv2.waitKey(1).

Does anybody know what I'm doing wrong here? Do I need to decode the footage differently? All tips are welcome!


Answer:

In the end I solved the problem by taking intermediate steps: I first wrote individual images to disk, and then read those images back in. That showed me that I needed to encode the frame as an image (I opted for jpg), and with the methods cv2.imencode('.jpg', frame) and cv2.imdecode(npimg, 1) I could make it work. I pasted the full working code below.

This first script reads out the webcam and sends the footage over a zeromq socket:

import cv2
import zmq
import base64

context = zmq.Context()
footage_socket = context.socket(zmq.PUB)
footage_socket.connect('tcp://localhost:5555')

camera = cv2.VideoCapture(0)  # init the camera

while True:
    try:
        (grabbed, frame) = camera.read()  # grab the current frame
        frame = cv2.resize(frame, (640, 480))  # resize the frame
        encoded, buffer = cv2.imencode('.jpg', frame)
        footage_socket.send_string(base64.b64encode(buffer))

    except KeyboardInterrupt:
        camera.release()
        cv2.destroyAllWindows()
        print "\n\nBye bye\n"
        break

and this second script receives the frame images and displays them:

import cv2
import zmq
import base64
import numpy as np

context = zmq.Context()
footage_socket = context.socket(zmq.SUB)
footage_socket.bind('tcp://*:5555')
footage_socket.setsockopt_string(zmq.SUBSCRIBE, unicode(''))

while True:
    try:
        frame = footage_socket.recv_string()
        img = base64.b64decode(frame)
        npimg = np.fromstring(img, dtype=np.uint8)
        source = cv2.imdecode(npimg, 1)
        cv2.imshow("image", source)
        cv2.waitKey(1)

    except KeyboardInterrupt:
        cv2.destroyAllWindows()
        print "\n\nBye bye\n"
        break

In any case I wish you a beautiful day!

Question:

I know how to send a string message from c++ to python via zeromq.

Here's the code for sending a string message I know :

C++ sender code :

void *context = zmq_ctx_new();
void *publisher = zmq_socket(context, ZMQ_PUB);
int bind = zmq_bind(publisher, "tcp://localhost:5563");
std::string message = "Hello from sender";
const char *message_char = message.c_str();
zmq_send(publisher, message_char, strlen(message_char), ZMQ_NOBLOCK);

Python receiver code :

context = zmq.Context()
receiver = context.socket(zmq.SUB)
receiver.connect("tcp://*:5563")
receiver.setsockopt_string(zmq.SUBSCRIBE, "")
message = receiver.recv_string()

What I want is to send an image from c++ zeromq publisher to python receiver.


Answer:

Disclaimer : Answering my own question, so that others won't get stuck where I did.

So let's get started.

What is ZeroMQ?

ZeroMQ is a high-performance asynchronous messaging library, aimed at use in distributed or concurrent applications. It provides a message queue, but unlike message-oriented middleware, a ZeroMQ system can run without a dedicated message broker.

Before we get started, here are the basics :

Protocol/Library used : ZeroMQ

Publisher : C++ oriented

Subscriber : Python oriented


Sending String/char array message via ZeroMQ :

C++ Publisher :-

// Setting up ZMQ context & socket variables
void *context = zmq_ctx_new();
void *publisher = zmq_socket(context, ZMQ_PUB); 
int bind = zmq_bind(publisher, "tcp://*:9000");
std::string message = "Hello from sender";
const char *message_char = message.c_str(); // Converting c++ string to char array
// Sending char array via ZMQ
zmq_send(publisher, message_char, strlen(message_char), ZMQ_NOBLOCK);

Python Subscriber :-

# Setting up the ZMQ context & socket variables
context = zmq.Context()
receiver = context.socket(zmq.SUB)
receiver.connect("tcp://localhost:9000")
# Subscribing with an empty prefix to start receiving all messages
receiver.setsockopt_string(zmq.SUBSCRIBE, "")
message = receiver.recv_string()

Sending Image/ndarray messages via ZeroMQ :

For handling images, opencv is an awesome library. Simple, easy to code & lightning fast.

C++ Publisher :-

void *context = zmq_ctx_new();
void *publisher = zmq_socket(context, ZMQ_PUB);
int bind = zmq_bind(publisher, "tcp://*:9000");

// Reading the image through opencv package
cv::Mat image = cv::imread("C:/Users/rohit/Desktop/sample.bmp", CV_LOAD_IMAGE_GRAYSCALE );
int height = image.rows;
int width = image.cols;
zmq_send(publisher, image.data, (height*width*sizeof(UINT8)), ZMQ_NOBLOCK);

In the above code the image is read as grayscale; you can read a 3-channel (RGB) image as well by passing the appropriate flag to opencv's imread method.

Also remember to modify the size (the 3rd parameter in the zmq_send call) accordingly.
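The size arithmetic can be sketched with numpy standing in for cv::Mat (the 4096 dimensions are just the example values used below):

```python
import numpy as np

height, width = 4096, 4096  # example dimensions only

# Grayscale: one byte per pixel, so height*width bytes go on the wire.
gray = np.zeros((height, width), dtype=np.uint8)
assert gray.nbytes == height * width

# 3-channel: three bytes per pixel, so the 3rd zmq_send argument
# (and the receiver's reshape) must grow by the same factor.
bgr = np.zeros((height, width, 3), dtype=np.uint8)
assert bgr.nbytes == height * width * 3
restored = np.frombuffer(bgr.tobytes(), dtype=np.uint8).reshape((height, width, 3))
```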

Python Subscriber :-

context = zmq.Context()
receiver = context.socket(zmq.SUB)
receiver.connect("tcp://localhost:9000")
receiver.setsockopt_string(zmq.SUBSCRIBE, "")
# Receiving the image as raw bytes
image_bytes = receiver.recv()
width = 4096   # my image width
height = 4096  # my image height
# Converting the bytes to an ndarray
image = numpy.frombuffer(image_bytes, dtype=numpy.uint8).reshape((height, width))

TO DO / IMPROVEMENT : You can also pass the image size from the C++ publisher along with the image data, so that the image can be reshaped accordingly on the Python side.

The ZMQ_SNDMORE flag comes in handy here.

Just add the extra zmq_send calls on the C++ side, converting the dimensions to strings first:

std::string img_height = std::to_string(height);
std::string img_width = std::to_string(width);
zmq_send(publisher, img_height.c_str(), img_height.size(), ZMQ_SNDMORE);
zmq_send(publisher, img_width.c_str(), img_width.size(), ZMQ_SNDMORE);
zmq_send(publisher, image.data, (height*width*sizeof(UINT8)), ZMQ_NOBLOCK);

Similarly, add the corresponding receiving statements at the Python end (the parts of a multipart message arrive with successive recv calls):

height = int(receiver.recv_string())
width = int(receiver.recv_string())
image_bytes = receiver.recv()
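A self-contained pyzmq sketch of the multipart pattern (PAIR sockets over a hypothetical inproc endpoint named frames are used here only to keep the example runnable in one process; the same calls apply to the PUB/SUB pair over TCP above):

```python
import numpy as np
import zmq

ctx = zmq.Context.instance()
sender = ctx.socket(zmq.PAIR)
receiver = ctx.socket(zmq.PAIR)
sender.bind("inproc://frames")      # inproc requires bind before connect
receiver.connect("inproc://frames")

frame = np.arange(12, dtype=np.uint8).reshape((3, 4))

# Publisher side: height and width as their own message parts, then the pixels.
sender.send_string(str(frame.shape[0]), zmq.SNDMORE)
sender.send_string(str(frame.shape[1]), zmq.SNDMORE)
sender.send(frame.tobytes())

# Subscriber side: receive the parts in order and rebuild the array.
height = int(receiver.recv_string())
width = int(receiver.recv_string())
restored = np.frombuffer(receiver.recv(), dtype=np.uint8).reshape((height, width))
assert (restored == frame).all()
```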

Another Improvement

Thanks @Mark Setchell for pointing out an improvement.

Sending a large opencv matrix directly over the network can be costly. A better approach is to encode the image before sending it over the network.

C++ Publisher :-

void *context = zmq_ctx_new();
void *publisher = zmq_socket(context, ZMQ_PUB);
int bind = zmq_bind(publisher, "tcp://*:9000");

// Reading the image through opencv package
cv::Mat image = cv::imread("C:/Users/rohit/Desktop/sample.bmp", CV_LOAD_IMAGE_GRAYSCALE );
int height = image.rows;
int width = image.cols;
std::vector<uchar> buffer;
cv::imencode(".jpg", image, buffer);
zmq_send(publisher, buffer.data(), buffer.size(), ZMQ_NOBLOCK);

Python Subscriber :-

context = zmq.Context()
receiver = context.socket(zmq.SUB)
receiver.connect("tcp://localhost:9000")
receiver.setsockopt_string(zmq.SUBSCRIBE, "")
# Receiving the image in bytes
image_bytes = receiver.recv()
# Decoding the image -- Python's PIL.Image library is used for decoding
image = numpy.array(Image.open(io.BytesIO(image_bytes)))

Question:

Hello, I am trying to send a Mat object to another computer using zeromq and boost.

This is my Serialization.h file:

#include <iostream>
#include <fstream>
#include <boost/archive/binary_oarchive.hpp>
#include <boost/archive/binary_iarchive.hpp>
#include <boost/serialization/split_free.hpp>
#include <boost/serialization/vector.hpp>

BOOST_SERIALIZATION_SPLIT_FREE(cv::Mat)
namespace boost {
    namespace serialization {

        /*** Mat ***/
        template<class Archive>
        void save(Archive & ar, const cv::Mat& m, const unsigned int version)
        {
            size_t elemSize = m.elemSize(), elemType = m.type();

            ar & m.cols;
            ar & m.rows;
            ar & elemSize;
            ar & elemType; // element type.
            size_t dataSize = m.cols * m.rows * m.elemSize();


            for (size_t dc = 0; dc < dataSize; ++dc) {
                ar & m.data[dc];
            }
        }

        template<class Archive>
        void load(Archive & ar, cv::Mat& m, const unsigned int version)
        {
            int cols, rows;
            size_t elemSize, elemType;

            ar & cols;
            ar & rows;
            ar & elemSize;
            ar & elemType;

            m.create(rows, cols, elemType);
            size_t dataSize = m.cols * m.rows * elemSize;

            //cout << "reading matrix data rows, cols, elemSize, type, datasize: (" << m.rows << "," << m.cols << "," << m.elemSize() << "," << m.type() << "," << dataSize << ")" << endl;

            for (size_t dc = 0; dc < dataSize; ++dc) {
                ar & m.data[dc];
            }
        }

    }
}

This is how I serialize my Mat object and send it:

#include <zmq.hpp>
#include <string>
#include <iostream>
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <boost/archive/binary_oarchive.hpp>
#include <boost/archive/binary_iarchive.hpp>
#include <fstream>
#include "Serialization.h"
#include <boost/archive/text_oarchive.hpp>
#include <boost/archive/text_iarchive.hpp>

using namespace std;
using namespace cv;


std::string save( const cv::Mat & mat )
{
    std::ostringstream oss;
    boost::archive::text_oarchive toa( oss );
    toa << mat;

    return oss.str();
}



int main () {
    Mat img = imread("/Users/Rodrane/Downloads/barbara.pgm", 0);   // Read the file


    std::string serialized = save(img);


    //  Prepare our context and socket
    zmq::context_t context (1);
    zmq::socket_t socket (context, ZMQ_REQ);

    std::cout << "Connecting to hello world server…" << std::endl;
    socket.connect ("tcp://localhost:5555");

    //  Do 10 requests, waiting each time for a response
    for (int request_nbr = 0; request_nbr != 10; request_nbr++) {
        zmq::message_t request (sizeof(serialized));
        memcpy (request.data (), &serialized, sizeof(serialized));
        std::cout << "Sending Hello " << request_nbr << "…" << std::endl;
        socket.send (request);

        //  Get the reply.
        zmq::message_t reply;
        socket.recv (&reply);
        std::cout << "Received World " << request_nbr << std::endl;
    }
    return 0;



}

And this is how I receive my serialized object and try to show it:

#include <zmq.hpp>
#include <string>
#include <iostream>
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>
#include <boost/archive/binary_oarchive.hpp>
#include <boost/archive/binary_iarchive.hpp>
#include <fstream>
#include "Serialization.h"
#include <boost/archive/text_oarchive.hpp>
#include <boost/archive/text_iarchive.hpp>

using namespace std;

void load( cv::Mat & mat, const char * data_str )
{
    std::stringstream ss;
    ss << data_str;

    boost::archive::text_iarchive tia( ss );
    tia >> mat;
}

int main () {
    zmq::context_t context (1);
    zmq::socket_t socket (context, ZMQ_REP);
    socket.bind ("tcp://*:5555");
    cv::Mat object;

    while (true) {
        zmq::message_t recivedData;
        socket.recv (&recivedData);
        std::string rpl = std::string(static_cast<char*>(recivedData.data()), recivedData.size());
        const char *cstr = rpl.c_str();
        load(object,cstr);
        imshow("asdasd",object);
        //  Send reply back to client
        zmq::message_t reply (8);
        memcpy (reply.data (), "Recieved", 8);
        socket.send (reply);
  }



}

When I run these two projects I get the following error:

libc++abi.dylib: terminating with uncaught exception of type zmq::error_t: Interrupted system call

Since it happens the moment I run my client project, I assume the server receives the data but either it is corrupted or there is a problem deserializing it.


Answer:

I didn't go through all your code, but this is 100% wrong:

std::string serialized = save(img);
//  ...
zmq::message_t request (sizeof(serialized));
memcpy (request.data (), &serialized, sizeof(serialized));

sizeof(std::string) has nothing to do with the size of the string held, and you can't memcpy from a std::string object like that. What you want to do is:

zmq::message_t request (serialized.length());
memcpy (request.data (), serialized.c_str(), serialized.length());

Might be other errors as well, but this just sticks out.

Question:

I have been working on a simple video-over-IP program, partly for use in a project and partly to teach myself some basics of networking using high-level interfaces. The trouble is that I can send the data from a cv::Mat over the network just fine, but once I attempt to decode the data, it appears to be missing much of the color data. The code is in this gist, which contains all the files necessary to build and run the project under Linux. Can anyone shed some light on this?

If you need any more information, let me know. You'll need a webcam to test, I'm afraid.


Answer:

When you copy your data with memcpy(m.data(), frame.data, frame.rows * frame.cols);, you're only copying a third of the total data since your image is a 3-channel one.

Try changing it to memcpy(m.data(), frame.data, 3 * frame.rows * frame.cols); (and allocate enough space beforehand).
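The arithmetic can be sketched with numpy standing in for the cv::Mat (the 640x480 shape is just an example); the channel-agnostic byte count in C++ is frame.total() * frame.elemSize():

```python
import numpy as np

rows, cols = 480, 640  # example dimensions
frame = np.zeros((rows, cols, 3), dtype=np.uint8)  # 3-channel, like CV_8UC3

# rows * cols counts pixels, not bytes; each pixel here is 3 bytes.
assert frame.nbytes == 3 * rows * cols

# The channel-agnostic form (cv::Mat::total() * elemSize() in C++):
assert frame.nbytes == frame.size * frame.itemsize
```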