I've been enjoying playing around with ZeroMQ lately, and exploring some of the ways it changes the way you approach system architecture.
One of the revelations for me has been how powerful the pub-sub (Publish-Subscribe) pattern is. An architecture that makes it straightforward for multiple consumers to process a given piece of data promotes lots of small simple consumers, each performing a single task, instead of a complex monolithic processor.
This is both simpler and more complex: you end up with more pieces, but each piece is radically simpler. It's also more flexible and more scalable, since you can move and scale components individually, and it allows greater language and library freedom, since you can write individual components in completely different languages.
What's also interesting is that the benefits of this pattern don't necessarily require an advanced toolkit like ZeroMQ, particularly for low-volume applications. Here's a sketch of a low-tech pub-sub pattern that uses files as the pub-sub inflection point, and incron, the 'inotify cron' daemon, as our dispatcher.
Install incron to monitor our data directory for changes. On RHEL/CentOS it's available from the rpmforge or EPEL repositories:

```shell
yum install incron
```
Capture data to files in our data directory in some useful format, e.g. JSON, YAML, plain text, whatever.
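As a sketch, a producer just writes each event to a uniquely named file in the monitored directory. This is an illustrative example, not from the original setup: the directory path, file-naming scheme, and JSON format are all assumptions.

```python
import json
import os
import time

# Assumed to match the directory monitored in the incrontab; illustrative only.
DATA_DIR = "/data/directory"

def publish(record, data_dir=DATA_DIR):
    """Write a record as a uniquely named JSON file into the data directory.

    incron fires an event for the new file and dispatches its path to
    every registered consumer.
    """
    name = "event-%d-%d.json" % (int(time.time() * 1000), os.getpid())
    path = os.path.join(data_dir, name)
    with open(path, "w") as f:
        json.dump(record, f)
    return path
```

The timestamp-plus-PID name is just one cheap way to avoid collisions between concurrent producers.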
Add an incrontab entry for each consumer, monitoring CREATE operations on our data directory, e.g.

```
/data/directory IN_CREATE /path/to/consumer1 $@/$#
/data/directory IN_CREATE /path/to/consumer2 $@/$#
/data/directory IN_CREATE /path/to/consumer3 $@/$#
```

The `$@/$#` magic passes the full path of the new file to your consumer - see `man 5 incrontab` for details and further options.
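A consumer is then just a script that takes that path as its first argument. A minimal sketch, assuming JSON payloads as above (the `consume` function and its behaviour are illustrative, not from the original post):

```python
#!/usr/bin/env python
import json
import sys

def consume(path):
    """Handle one published file; incron passes its full path via $@/$#."""
    with open(path) as f:
        record = json.load(f)
    # A real consumer would do its one small task here
    # (index, alert, archive, ...); we just return the record.
    return record

if __name__ == "__main__":
    print(consume(sys.argv[1]))
```

Each consumer stays oblivious to the others: adding a fourth is one more incrontab line, with no changes to producers or existing consumers.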
Done. Working pub-sub with minimal moving parts.