Hot questions for Using ZeroMQ in data distribution service
I read the following:
- DDS vs AMQP vs ZeroMQ
And it seems that there is no benefit to using DDS instead of ZeroMQ:
- The latency of ZeroMQ is better.
- It seems to me that the API of ZeroMQ is clearer and simpler.
- I can use ZeroMQ to transfer data between threads / processes / stations.
- When is it better to use DDS?
- Does DDS offer better performance than ZeroMQ?
- Is there a clear reason to use DDS (and not ZeroMQ)?
In my (admittedly biased) experience as a DDS implementor/vendor, many applications find significant benefits in using DDS over other middleware technologies, including ZeroMQ. In fact, I see many more "critical applications" using DDS than ZeroMQ...
First a couple of clarifications:
(1) DDS and the RTPS protocol it uses underneath are standards, not specific products. There are many implementations of these standards. To my knowledge there are at least 9 different companies that have independent products and codebases implementing these standards. It does not make sense to talk about the "performance" of DDS vs ZeroMQ; you can only talk about the performance of a specific implementation. I will address the issue of performance later, but from that point of view alone your statement "the latency of zmq is better" is plainly wrong. The opposite statement would be just as wrong, of course.
(2) I did not find much objective information in the first reference you provided. The main point there was that DDS seemed the best fit for that application, and there was a concern about how broadly it was used, which the reply from AC clarified. But the arguments seemed a bit subjective. There was a negative reply to AC's posting based on somebody's subjective examination of the codebase of a specific product. Even assuming the person who posted the negative comments had a valid point, the comments would apply only to one specific DDS implementation/product, not to DDS in general. Personally I would not give much credence to that comment; its tone seems too hostile to be based on just the stated facts.
(3) Regarding the clarity/simplicity of APIs: are your comments based on the benchmark example you provide in the second reference? That code is not using the standard DDS APIs. I am not sure why OCI (the company that wrote that article) did it like that--perhaps they adapted some other prior code.
A good place to look at API examples that are conformant with the DDS specification are these:
- Modern C++: http://blogs.rti.com/2014/08/01/create-a-p2p-distributed-application-in-under-35-lines-of-c11-code/
- Classic C++: http://www.twinoakscomputing.com/coredx/examples/hello_cpp
In any case as I mention later the layer of abstraction provided by DDS and ZeroMQ is quite different so the APIs are not directly comparable...
In answer to your specific questions:
1. When is it better to use DDS?
It is difficult to provide a short/objective answer to a question as broad as this. I am sure ZeroMQ is a good product and many people are happy. The same can be said about DDS.
I think the best thing to do is point at some of the differences and let people decide what is important to them.
DDS and ZeroMQ are different in terms of the governance, ecosystem, capabilities, and even layer of abstraction.
Some important differences:
1.1 Governance, Standards & Ecosystem
DDS and RTPS are open international standards from the Object Management Group (OMG). ZeroMQ is a "loose structure controlled by its contributors".
This means there is open governance and clear OMG processes that control the specification and its evolution, as well as the IPR rules.
ZeroMQ IPR is less clear IMO. From their web page (http://zeromq.org/docs:features) they state "ZeroMQ's libzmq core is owned by its contributors" and "The ZeroMQ organization is a loose confederation without a clear power structure, that mostly lives on GitHub. The organization wiki page explains how anyone can join the Owners' team simply by bringing in some interesting work."
This "loose structure" may be more problematic to users that care about things like IPR pedigree, Warranty and indemnification, etc.
Related to that: if I understood correctly, there is only one core ZeroMQ implementation (the one on GitHub), and only one company that stands behind it (iMatix). From there it seems just 4 committers are doing most of the development work in the core (libzmq). If iMatix were to be acquired, or decided to change its business model, or the main committers lost interest, the users would have little recourse beyond supporting the codebase themselves.
Of course there are many successful projects/technologies based on common ownership of the code. On the other hand having an ecosystem of companies competing with independent products, codebases, and business models provides good assurance to the future of the technology... It all depends how big the communities and ecosystems are and how risk-averse the user is.
1.2 Features & Layer of Abstraction
Both DDS and ZeroMQ support patterns like publish-subscribe and Request-Reply (the latter is a new addition to DDS, the so-called DDS-RPC). But generally speaking the layer of abstraction of DDS is higher, meaning the middleware does more "automatically" for the application. Specifically:
DDS provides for automatic discovery
In DDS you just publish/subscribe to topic names. You never have to provide IP addresses, computer names, or ports. It is all handled by the builtin discovery. And it does it automatically without additional services. This means that applications can be re-deployed and integrated without recompilation or reconfiguration.
ZeroMQ is lower level. You must specify ports, IP addresses, etc.
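To make the contrast concrete, here is a toy sketch in plain Python (no real middleware involved; all names are invented for illustration) of the two addressing models: ZeroMQ-style explicit endpoints versus DDS-style matching purely by topic name.

```python
# Toy illustration only -- neither real ZeroMQ nor real DDS.

# ZeroMQ-style: the subscriber must know the publisher's endpoint.
zmq_style_subscription = {"connect_to": "tcp://192.168.1.10:5556",
                          "filter": b"positions"}

# DDS-style: participants only name a topic; a discovery table
# (built automatically by the middleware) matches them up.
class DiscoveryTable:
    def __init__(self):
        self.readers = {}  # topic name -> list of reader callbacks

    def subscribe(self, topic, callback):
        self.readers.setdefault(topic, []).append(callback)

    def publish(self, topic, sample):
        # No addresses anywhere: matching is purely by topic name.
        for cb in self.readers.get(topic, []):
            cb(sample)

bus = DiscoveryTable()
received = []
bus.subscribe("AirplanePositions", received.append)
bus.publish("AirplanePositions", {"id": "N123", "alt": 30000})
```

The point of the sketch is only the shape of the API: the DDS-style participants never mention an IP address or port, so they can be redeployed without reconfiguration.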
DDS pub-sub is data-centric.
An application can publish to a Topic, but the associated data can represent updates to multiple data objects, each identified by key attributes. For example, when publishing airplane positions, each update can identify the "airplane ID", and the middleware can provide history, enforce QoS, update rates, etc. for each airplane separately. The middleware understands and communicates when new airplanes appear or disappear from the system.
Related to the above DDS can keep a cache of relevant data for the application, which it can query (by key or content) as it sees fit, e.g. read the last 5 positions of an airplane. The application is notified of changes but it is not forced to consume them immediately. This also can help reduce the amount of code the application-developer needs to write.
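As a rough illustration of that keyed cache (plain Python, names invented for the example; a real DDS implementation manages this inside the middleware via the HISTORY QoS policy):

```python
from collections import defaultdict, deque

class KeyedHistoryCache:
    """Keeps the last `depth` samples per key, roughly like a DDS
    reader cache with HISTORY depth set to `depth`."""
    def __init__(self, depth=5):
        self.depth = depth
        self.samples = defaultdict(lambda: deque(maxlen=depth))

    def on_sample(self, key, sample):
        self.samples[key].append(sample)

    def read(self, key):
        # Query the cache without consuming the data.
        return list(self.samples[key])

cache = KeyedHistoryCache(depth=5)
for alt in range(28000, 31500, 500):     # 7 updates for one airplane
    cache.on_sample("N123", {"alt": alt})

last_five = cache.read("N123")           # only the last 5 are retained
```

The application reads from the cache whenever it sees fit; the middleware, not the application, is responsible for trimming history per key.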
DDS provides more support for "application" QoS
DDS supports over 22 message and data-delivery QoS policies, such as Reliability, Endpoint Liveliness, Message Persistence and delivery to late-joiners, Message expiration, Failover, monitoring of periodic updates, time-based filtering, ordering, etc. These are all configured via simple QoS-policy settings. The application uses the same read/write API and all the extra work is done underneath.
ZeroMQ approaches this problem by providing building blocks and patterns. It is quite flexible, but the application has to program, assemble, and orchestrate the different patterns to get the higher-level behavior. For example, getting reliable pub-sub requires combining multiple patterns, as described in http://zguide.zeromq.org/page:all#toc119.
DDS supports additional capabilities like content-filtering, time-filtering, partitions, domains, ...
These are not available in ZeroMQ. They would have to be built at the application layer.
DDS provides a type system and supports type extensibility and mutability
You have to combine ZeroMQ with other packages like google protocol buffers to get similar functionality.
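For instance, with ZeroMQ the application typically supplies its own serialization layer. A minimal sketch using Python's standard `struct` module (the wire format here is invented for illustration; protocol buffers would play the same role, with the added benefit of schema evolution):

```python
import struct

# Hypothetical wire format for an airplane-position message:
# 8-byte little-endian double (altitude) followed by a UTF-8 airplane ID.
def encode(airplane_id: str, altitude: float) -> bytes:
    return struct.pack("<d", altitude) + airplane_id.encode("utf-8")

def decode(payload: bytes):
    (altitude,) = struct.unpack_from("<d", payload)
    return payload[8:].decode("utf-8"), altitude

msg = encode("N123", 30000.0)
assert decode(msg) == ("N123", 30000.0)
```

With DDS, by contrast, the type system and its evolution rules are part of the middleware itself.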
There is a DDS-Security specification that provides fine-grained (Topic-level) security including authentication, encryption, signing, key distribution, secure multicast, etc.
2. Does DDS offer better performance than ZeroMQ?
Note that the benchmarks you refer to are for Object Computing Inc's "OpenDDS" implementation. As far as I know this is not one of the fastest DDS implementations. I would recommend you take a look at some of the others, like RTI Connext DDS (our implementation), PrismTech's OpenSplice DDS, or TwinOaks' CoreDX DDS. Of course results are highly dependent on the actual test, network, and computers used, but typical latency for the faster DDS implementations using C++ is on the order of 50 microseconds, not 180 microseconds. See https://www.rti.com/products/dds/benchmarks.html#CPPLATENCY
Middleware layers like DDS or ZeroMQ run on top of transports like UDP or TCP, so I would expect them to be bound by what the underlying network can do; for simple cases they are likely not too different, and they will of course be worse than the raw transport.
Differences also come from what services they provide. So you should compare what you can get for the same level of service: for example, publishing reliably while scaling to many consumers, prioritizing information, sending multiple flows and large data over UDP (to avoid TCP's head-of-line blocking), etc.
Based on the OpenDDS study you reference, and the relative performance of different DDS implementations (http://www.dre.vanderbilt.edu/DDS/) I would expect that in an apples-to-apples test the better-performing implementations of DDS would match or exceed ZeroMQ's.
That said, people rarely select the middleware that gives them the "best performance". Otherwise no one would use Web Services or HTTP. The selection is based on many factors; performance just needs to be as good as required to meet the needs of the application. Robustness, scalability, support, risk, maintainability, fitness of the programming model to the domain, total cost of ownership, etc. are typically more important to the decision.
3. Is there a clear reason to use DDS (and not ZeroMQ)?
Well, yes... in many applications it provides the best tradeoff in terms of performance, scalability, features, application simplicity, robustness, risk-reduction, and total cost of ownership. In the last few years thousands of projects came to that conclusion :)
I was assigned to update an existing system for gathering data coming from points of sale and inserting it into a central database. The current one is based on FTP/SFTP transmission, where the information is sent once a day, usually at night. Unfortunately, because of unstable connection links (low-quality 2G/3G modems), some of the files turn out to be broken. With just a few shops connected that way everything worked smoothly, but as the number of shops increased, errors became more frequent. What is worse, the time needed to insert data into the central database is up to 12-14 hours (including waiting for the data to be downloaded from all of the shops), and that cannot happen during the working day as it would block the process of creating sales reports and other activities with the database - so we are really tight on processing time here.
The idea my manager suggested is to send the data continuously, during the day. Data packages would be significantly smaller, so their transmission and insertion would be much faster, the central server would contain current (almost real-time) data, and the night could be used for long-running database activities like creating backups, rebuilding indexes, etc.
After going through many websites, I found that:
- using ASMX web service is now obsolete and WCF should be used instead
- WCF with MSMQ or System Messaging could be used to safely transmit data, where I don't have to care that much about acknowledging delivery of data, consistency, nodes going offline etc.
- according to http://blogs.msdn.com/b/motleyqueue/archive/2007/09/22/system-messaging-versus-wcf-queuing.aspx WCF queuing is better
- there are also other technologies for implementing message queue, like RabbitMQ, ZeroMQ etc.
And that is where I become confused. With so many options, do you have any pros and cons for these technologies? We were using .NET with Windows Forms and SQL Server, but if necessary we could change to something more suitable. I am also a bit afraid of server efficiency. After some calculations, the server would be receiving about 15 packages of data per second (peak). Is that much? I know there are many websites without serious server infrastructure that handle hundreds of visitors online and still run smoothly, but a website mainly uploads data to the client, and here we would download it from the client.
I also found somewhat similar SO question: Middleware to build data-gathering and monitoring for a distributed system where DDS was mentioned. What do you think about introducing some middleware servers that would cope with low quality links to points of sale, so the main server would not be clogged with 1KB/s transmission?
I'd be grateful with all your help. Thank you in advance!
RabbitMQ can easily cope with thousands of 1 KB messages per second.
As your use case is not about processing real-time data, I'd say you should combine a few messages and send them as a batch. That would be good enough to spread the load over the day.
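That batching idea can be sketched in a few lines (plain Python, names invented for the example; the `send` callback stands in for whatever transport you pick):

```python
class Batcher:
    """Accumulates records and hands them off as one batch."""
    def __init__(self, batch_size, send):
        self.batch_size = batch_size
        self.send = send          # callback that transmits one batch
        self.pending = []

    def add(self, record):
        self.pending.append(record)
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.pending:
            self.send(list(self.pending))
            self.pending.clear()

sent = []
b = Batcher(batch_size=3, send=sent.append)
for i in range(7):
    b.add({"sale_id": i})
b.flush()  # flush the remainder at end of day / shutdown
```

A real version would likely also flush on a timer, so a slow shop doesn't hold a partial batch all day.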
Since the motivation here is not to process the data in real time, almost any transport layer would do the job - even FTP/SFTP. RabbitMQ will work fine here, but this is not the typical use case for it.
As you mentioned that one of your concerns is the slow/unreliable network, I'd suggest compressing the files before sending them and, on the receiving end, immediately verifying their integrity. Rsync or similar will probably do a great job at that.
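The compress-then-verify step is straightforward with the standard library; a minimal sketch (function names are mine, not from any particular tool):

```python
import hashlib
import zlib

def pack(data: bytes):
    """Compress the payload and compute a checksum for the receiver."""
    compressed = zlib.compress(data, level=9)
    return compressed, hashlib.sha256(data).hexdigest()

def unpack(compressed: bytes, expected_digest: str) -> bytes:
    """Decompress and verify; a corrupted transfer raises immediately."""
    data = zlib.decompress(compressed)
    if hashlib.sha256(data).hexdigest() != expected_digest:
        raise ValueError("integrity check failed -- request a resend")
    return data

original = b"shop_id;item;qty\n42;widget;3\n" * 1000
blob, digest = pack(original)
assert unpack(blob, digest) == original
assert len(blob) < len(original)  # repetitive sales data compresses well
```

Detecting a bad transfer at the receiving end means you can re-request just that one package instead of discovering broken files during the nightly import.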
I need a broker-less pattern for reading and writing messages between nodes without removing any message from the queues until some monitoring system approves the removal.
Can I do this with ZeroMQ? In ZMQ, if a publisher node dies, are the messages it queued on the network gone too? How can I preserve this queue on the network?
(I want to send a message with a publisher and have subscribers read the message, but not delete it from the queue until my QoS monitor removes it from the array. And if my publisher dies, the message queue it created should not be deleted.
Can I implement this functionality with the current patterns in ZMQ?)
You'll have to build that level of redundancy/reliability into your app, rather than rely on ZMQ to provide it.
What this means is that you'll have to keep track of all your messages at the publisher node, and then a subscriber node should be able to communicate back that it has received the message, allowing the publisher node to delete its cached copy. This means multiple sockets, most likely, unless you really want to try to get XPUB/XSUB to communicate in this way, but that seems unlikely to be the ideal choice.
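A rough sketch of that application-level tracking (plain Python, no sockets; in a real system the publish and ack calls would travel over separate ZeroMQ sockets, and the class name here is invented):

```python
import itertools

class TrackingPublisher:
    """Keeps every message until an external monitor acknowledges it."""
    def __init__(self):
        self._ids = itertools.count()
        self.unacked = {}                 # msg_id -> message

    def publish(self, message):
        msg_id = next(self._ids)
        self.unacked[msg_id] = message    # cache before sending
        return msg_id                     # sent alongside the payload

    def ack(self, msg_id):
        # Called when the QoS monitor approves removal.
        self.unacked.pop(msg_id, None)

pub = TrackingPublisher()
first = pub.publish("update-1")
second = pub.publish("update-2")
pub.ack(first)                            # only "update-2" remains cached
```

Surviving a publisher crash would additionally require persisting `unacked` to disk (or replicating it to another node) so the cache outlives the process.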
If you need something more directly supported in your communication library, then ZMQ isn't going to cut it for you... but I doubt you'll find anything else that will, either.