Asynchronous concurrency with vert.x – Part 1
Event-Driven Concurrency
At synyx, we are looking at vert.x for an upcoming project where we are building a system that will need to scale under load. The tag-line of vert.x is effortless asynchronous application development for the modern web and enterprise, which fits the bill, so I decided to play around with it a little bit.
The advantage of event-driven concurrency compared to traditional thread-based concurrency is the reduced risk of deadlocks, livelocks and race conditions. Getting mutexes and semaphores right is extremely difficult, and mistakes lead to very subtle bugs that are hard to reproduce. The downside is that information can only be shared by passing messages.
Anybody who has used jQuery’s $.ajax should have some idea of what event-driven concurrency means: an event loop triggers predefined callbacks after a certain event happens. In that case, the browser retrieves the data in the background while your JavaScript program can do something else in the meantime, like respond to user events. Once the data has arrived, the callback method is triggered and the data is passed as a function argument – no other callback function can run simultaneously. The same is true for setTimeout, which is used extensively for animations: adjust the properties of an element a little bit each call, then return to the event loop.
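A minimal sketch of both patterns – the URL, the element and the timing values are made up for illustration:

// fetch data in the background; the success callback runs later,
// once the response has arrived and the event loop is idle
$.ajax({
  url: '/some/data.json',              // example URL
  success: function (data) {
    console.log('data arrived: ' + data.length + ' entries');
  }
});
console.log('request sent, doing something else in the meantime');

// a setTimeout-driven animation: adjust the element a little bit per call,
// then return to the event loop so the page stays responsive
var left = 0;
function step() {
  left += 2;
  $('#box').css('left', left + 'px');  // example element
  if (left < 200) {
    setTimeout(step, 20);              // schedule the next step
  }
}
step();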
This is the reason why there is no sleep() function in JavaScript – the browser would freeze and the user couldn’t interact with the web page. Each callback method must be short-running.
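Instead of blocking, a delay or a long-running piece of work is rescheduled via setTimeout – a rough sketch, where handle() stands in for whatever per-item work needs to be done:

// instead of sleep(1000): schedule the continuation and return immediately
setTimeout(function () {
  console.log('one second later');
}, 1000);

// instead of one long loop: process the work in small chunks and
// return to the event loop between chunks
function processChunk(items, index) {
  var end = Math.min(index + 100, items.length);
  for (var i = index; i < end; i++) {
    handle(items[i]);                  // hypothetical per-item work
  }
  if (end < items.length) {
    setTimeout(function () { processChunk(items, end); }, 0);
  }
}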
With Web Workers, you can now also perform client-side computation without blocking the main event loop, putting your multi-core CPU to use. Communication between the background task and the main task works the same way as with asynchronous IO – callbacks and message passing.
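A minimal sketch, assuming a worker script called worker.js that performs some CPU-heavy computation:

// main.js – runs on the page's event loop
var worker = new Worker('worker.js');  // file name is just an example

worker.onmessage = function (event) {
  // callback on the main event loop once the worker posts its result
  console.log('result from worker: ' + event.data);
};

worker.postMessage(40);                // hand the input to the background task

// worker.js – runs in a separate thread, no access to the DOM
self.onmessage = function (event) {
  self.postMessage(fib(event.data));   // send the result back as a message
};

function fib(n) {                      // stand-in for some expensive work
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}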
vert.x
Vert.x brings this concept to the server side on top of the JVM. It allows writing applications using an event-driven concurrency model. There are bindings for basically every language that runs on the JVM: Java, Ruby, Groovy, Python and JavaScript. The distributed event bus provides seamless scaling over multiple cores or hosts: you perform a computation in one process and send the data via the event bus to another process, where a callback method is executed.
Event bus
A short example in JavaScript that uses the event bus, adapted from the vert.x GitHub repository – define a message handler that simply prints messages sent to the event bus address example.address:
// file handler.js
load('vertx.js')

var eb = vertx.eventBus;
var address = 'example.address'

var handler = function(message) {
  stdout.println('Received message ' + message)
}

eb.registerHandler(address, handler);

function vertxStop() {
  eb.unregisterHandler(address, handler);
}
We put the program that sends messages in a different file to achieve process isolation:
// file sender.js
load('vertx.js')

var eb = vertx.eventBus;
var address = 'example.address'

vertx.setPeriodic(2000, sendMessage)

var count = 0

function sendMessage() {
  var msg = "some-message-" + count++;
  eb.send(address, msg);
  stdout.println("sent message " + msg)
}
Both programs are started separately using the vertx runtime and can then communicate on the event bus via the network:
# vertx run handler.js -cluster -cluster-port 10001 &
# vertx run sender.js -cluster -cluster-port 10002
Repliers
This system can be extended with repliers, which can be used to start a dialog between a message handler and a sender. The sender and the replier both live in the same file this time:
var vertx = require('vertx'); // alternative import
var address = "example.address";

var handler = function (message, replier) {
  stdout.println("sender sent " + message);
  replier("pong 1", function (message, replier) {
    // and so on
  });
}

vertx.eventBus.registerHandler(address, handler);

vertx.eventBus.send(address, "ping 1", function (message, replier) {
  stdout.println("handler sent " + message);
  replier("ping 2", function(message, replier) {
    // and so on
  });
});
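Since the sender and the handler live in the same verticle this time, the file can be run on its own, without clustering – assuming we saved it as pingpong.js:

# vertx run pingpong.js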
Every sent message can be acknowledged with a reply by the other side and vice versa. This concurrency model is very easy to grasp and very powerful. We will use it in the next part of this series, where we tackle the Sleeping barber problem – stay tuned!