All posts by Yudhanjaya Wijeratne

WSO2Con EU 2016: London, and What We Did There

Every so often, London gets to deal with us.

And by us, we mean WSO2 and all of our fantastic clients from all over Europe: people implementing solutions large and small at the cutting edge of enterprise tech and government.

Thus, WSO2Con EU, 2016:  three days of pure WSO2 at the Park Plaza Riverbank Hotel, London. We’d set up a pretty fancy stage and made sure we had a good 1:1 ratio between wine-glasses and presentations. Our sponsors from all across Europe – Yenlo, Chakray, Emoxa, Profesia, RealDolmen, Avintis, iEOLLODEV and Redpill – had joined us at their booths, and all was good…

The tutorials

It’s tradition that the first day houses all of our tutorials. Park Plaza’s curiously labelled Floor -4 (there’s no -3) became home to four color-coded tracks conducted by our experts, with a little bit of mad science happening in the background.


At our last conference (Asia), we saw quite a lot of interest in microservices and IoT: this time it was integration and analytics that pulled the majority of the crowd – perhaps this had a lot to do with the fact that we’re making some pretty big changes and product reveals in these spaces.

The keynotes

Our conferences typically begin with keynotes and shift to a series of topical tracks, including one workshop especially for our partners.

But you need a proper opening act to start off with – and this time it was Box 9 Drumline; the energetic UK-based drummers provided the perfect beat to get everyone awake and ready for WSO2’s founder, CEO and Chief Architect, Dr Sanjiva Weerawarana.

Sanjiva’s keynote touched not just on how WSO2 really works, but also everything new: our new approach to product delivery, for instance, where product, tooling and analytics are equally important; our plans for Carbon 5, the next generation of the underlying foundation on which all our products are built; where we stand on the microservices hype; and how everything we’re doing helps our customers adapt and keep evolving in a digital world.


There is, he explained, a whole spectrum of digital readiness within organizations. Some companies are born and bred in digital; others are moving up the cycle in stages – starting with problems like ‘I’d like this application connected to this data store’ and moving to ‘how do we give this functionality to our customers?’

And whatever level of readiness a company finds itself at, he pointed out while introducing the newest changes to our platform, WSO2 is ready to work with it.

The second keynote was by Nigel Fenwick, a VP and Principal Analyst of Forrester. Forrester’s grown immensely since their first report in 1983, and Nigel’s speech, which outlined how companies actually deal with the challenges of going digital, was a clear illustration of the staggering amount of insight they’ve gained in this industry. Less than 20% of companies, he explained, have the right tech to go digital proper in the first place, a problem exacerbated by tacky bolt-on approaches to the problem. It’s a problem of leadership as much as it is of tech. Leaders who truly do understand digital are able to transform into Digital Predators, leading the curve and disrupting how their businesses deliver value to customers.

The third for the day, before we broke off for our multi-track speeches, was Isabelle Mauny, WSO2’s VP of Product Strategy. Isabelle touched on a critical point of today’s business world: customers are no longer satisfied with generic services. Decades of personalized search and the likes of social media have led us to expect services that are automatically tailored to our needs. That’s where Big Data and analytics really come in. With WSO2’s analytics platform, not only are we now capable of building services at scale: we’re becoming better and better at identifying what people actually want and adapting – also at scale.

We wrapped up day three with two more keynotes and two panel discussions. Venura Mendis, CTO of WSO2.Telco, a joint venture between WSO2 and Axiata, explained WSO2.Telco’s vision for telecommunications companies fighting in a digital world; Asanka Abeysinghe, WSO2’s VP of Solutions Architecture, explained how one goes about building a digital enterprise – including what being digital and being connected means in the context of each industry – drawing from the wealth of solutions we’ve built with customers to discuss everything from transport to wearables to architecture.

To those of you who missed out on the keynotes, fear not: they’re recorded and available at

The sessions

And was that it? Hardly.

Our conference played out over two levels of the Park Plaza Riverbank, London. In between the keynotes, we had four tracks running constantly, ranging from Integration to Analytics to Strategy to Governance, where our experts talked about the old and the new of the WSO2 platform and our customers and partners shared their stories and insight with what they’ve done. Yenlo even wrote a blogpost from within the conference itself.


What does Carbon 5 mean for customers? The new IoT server: how do you leverage that? Where is Data Analytics Server headed? What’s the best way to approach microservices? All of this and more were discussed within these halls.

And of course, we had epic music to wind down to …


We socialized …


And of course, we had food.


Because food is important and you can’t integrate stuff on an empty stomach. Our partners agreed with us: Yenlo, who were among our sponsors, brought along a whopping 80 kilograms of candy to distribute to all and sundry. Yes. Free.

All in all, it was three days of pure, undiluted WSO2 in front of the Thames. 55 glorious sessions, 56 speakers, 5 keynotes and 2 parties to boot: experts from all over Europe converging into one single location to share stories you’d never find elsewhere.

Don’t worry – we’ve not let it go unrecorded. Over the next couple of weeks, we’ll be uploading all the videos to our conference videos list and our YouTube channel, and putting up the slides from the presentations on Slideshare.

Trimble, WSO2, and The Internet of Dirty Things

“It’s probably a simplification to say that you have to have muddy work boots to be a Trimble customer, but if you have muddy work boots, you know who we are.”

– Gregory Best, Senior Technologist, Trimble, speaking at WSO2Con US 2015

Trimble, founded in 1978, is a company where the Internet of Things is not just a catchphrase. For some reason, Trimble’s Wikipedia page doesn’t do it justice; ‘makes GPS positioning devices, laser rangefinders and UAVs’ barely scratches the surface of what Trimble does.

Consider: In 1990, a climber named Wally Berg led an expedition up Mount Everest. He carried with him a Trimble GPS device, which he planted on Everest at roughly the cruising altitude of a Boeing 747. The purpose was to try and figure out the real height of the tallest mountain in the world.

Take Disneyland.


Disneyland has some 100 million dollars’ worth of extravagant and complex costumes. Tracking all of those was once a 180 person-hour job – 15 to 20 people, says Gregory, would work 8 to 10 hours a day to go through and hand-count everything. One Trimble division changed all that: by attaching lowly RFID tags to every costume, they managed to set up a system where one person pushes a cart up and down the aisle and all the costumes check in – a device roll-call done via radio.

That’s 180 person-hours cut down to 2.

As Gregory says, if you can do it in one place, you can do it in another. If you can tag clothes, you can tag other things. Trimble, working with Ford and DeWalt, created a system where tagged tools are networked to a computer sitting in a dashboard. When the contractor has a specific job, the system is able to highlight what he needs. When he’s done, the system is able to check whether he’s returned everything and is free to go.


“And if you can do that inside the truck, you can do that outside: so we can put tags on equipment and materials out on a storage yard, but the RFID tags on the outside of the truck can now add a GPS receiver. As the truck goes through the yard it can inventory everything and associate that with their GPS positions; now I know where everything I need to know is.”

This is IoT. Stripped to actual moving parts, IoT becomes a buzzword wrapped around transmitters, receivers, sensors and clever software.

It doesn’t stop there. Trimble’s applications of this technology take us into fleet management – where every truck is not just a vehicle, but a rolling mass of information on wheels, spewing out numbers for everything from speed to engine faults to fuel consumption. That veers into routing, where it’s never the shortest distance, but the most fuel-efficient journey that matters, with driving regulations that change from state to state – where you’re able to tell if a truck is going too fast, and if its weight is causing it to handle differently at those speeds.

That leads to being able to collect data from all sorts of different sources, analyze it, tell truckers that gas is cheaper here than in the next state, and use all of these things to figure out the best possible route for any truck to take.

“But we can do better than that,” says Gregory, who seems to have made this his catchphrase. While Google has been building self-driving cars, Trimble’s been gunning for the big game: they’ve used Trimble positioning to automate massive CAT haul trucks. They pick up loads at very specific points, drop them off at very specific points, stop when they need to refuel – and do all of it in a very efficient, very safe way.

A robot driving something this large is almost scary, when you think about it. And Trimble hasn’t stopped: they’re extending this to farming vehicles, and pairing that with survey data to control how much water, fertilizer and effluence is laid down on the field. Everything is optimized for the best harvest.

All of this inevitably demands some incredibly powerful software, and that’s what Trimble Connect is: a robust Platform-as-a-Service that provides the core components for any application and lets Trimble’s rather diversified businesses maintain a set of services on top of it. It’s accessible to Trimble’s network of partners and dealers and also provides a cloud container that can host any Trimble service. It’s built using four multi-tenant, cloud-enabled WSO2 middleware products: WSO2 Enterprise Service Bus, WSO2 API Manager, WSO2 Application Server, and WSO2 Identity Server.

This is crucial because, as Gregory explains, Trimble’s businesses are run separately and there’s not a lot of coordination between all of them; after all, it’s a huge leap from measuring the tops of mountains to automating giant machines that look like they came out of Mad Max. But because of this platform, Trimble is able to share technology and capability across all of these – if agriculture wants a geofencing capability and construction has one, they can just go take that capability. Thanks to WSO2 and a lot of hard work, Trimble can keep climbing those mountains and stalking giant fleets of IoT-enabled trucks.

For more insight into Trimble and how they do things, watch Gregory Best’s talk at WSO2Con here. For more information on WSO2 and how our platform works, visit

Zeomega: Building on WSO2 for a Comprehensive Healthcare Solution

The typical health management platform is a complex mechanism. This is, after all, an industry with zero tolerance for faults: even the slightest mistake could mean a life in danger.

Building healthcare solutions is what Zeomega specializes in. The Texas-based firm delivers integrated informatics and business process management solutions. Zeomega’s clients collectively service more than 30 million individuals across the United States.


WSO2 is a part of their success: key to Zeomega is Jiva, Zeomega’s population health management platform. Delivering analytics, workflow, content and patient engagement capabilities, Jiva uses key WSO2 products and provides a deployable PHM infrastructure that both healthcare providers and clients can use. A strong track record of integration and acquisitions keep both Zeomega and Jiva on top of what they do.

Attending WSO2Con Asia 2016 to explain all of this were Praveen Doddamani and Harshavardhan Gadham Mohanraj, Technical Leads at Zeomega. Their speech, titled Building on WSO2 for a Comprehensive Healthcare Solution, detailed how Jiva works and why. Let’s dig in.

The State of the Art

Jiva has the capability to integrate with various data repositories and management systems. During the initial days of integration, they built an ETL tool and a framework – using Python – to integrate data into Jiva, generally in the form of a CSV. It could also export data.

As their customer base expanded, this integration challenge became even more critical; their requirements changed to needing to load millions of records. To pull this off, Zeomega used the Pyramid framework to build a RESTful web service that would do the job. They ended up building a SOAP system as well to better interface with their clients, and using these three tools, they could address batch integrations effectively.

In a deployment with multiple servers, however, having these multiple systems turned out to be a burden, especially when clients needed a single API to be able to manipulate data; multiple systems with different tech stacks became roadblocks to both support and development.

The Fix

“We don’t want to rewrite our existing logic; we want to leverage the existing business logic and provide a healthcare solution to external applications as well as third-party vendors,” said Harshavardhan Mohanraj, who was co-presenting with Doddamani.

At this point, they started evaluating WSO2 for a solution to this problem. WSO2 Enterprise Service Bus and WSO2 API Manager are built for this purpose. The WSO2 ESB would allow them to retain their legacy business platform and still connect whatever they needed to. WSO2 API Manager would handle the complete API management lifecycle, allowing them to push out secure APIs for their real-time web services.

To do this, said Mohanraj, they created a Jiva API framework. The core Jiva platform is exposed through RabbitMQ. Data is sent to and received from this core platform through a module built with the WSO2 ESB; this handles the integration and data transformation, turning flat files (CSV/XML) or anything else into the JSON actually processed by Jiva.
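As a rough illustration of that transformation step, a flat CSV batch can be turned into a JSON array of records in a few lines of Python (the field names and schema here are entirely hypothetical – Jiva’s actual formats aren’t public):

```python
import csv
import io
import json

def flat_file_to_json(csv_text):
    """Turn a flat CSV batch into a JSON array of records.
    Field names below are illustrative, not Jiva's real schema."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return json.dumps(list(reader))

# A hypothetical two-record batch, as it might arrive from a client system.
sample = "member_id,visit_date,code\nM001,2016-01-15,A12\nM002,2016-02-03,B07\n"
print(flat_file_to_json(sample))
```

In a real deployment the ESB’s built-in mediators would do this work; the sketch just shows the shape of the flat-file-to-JSON mapping.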

This functionality is exposed via WSO2 API Manager, which enables Zeomega to publish, deploy and manage the necessary SOAP and REST APIs.

In the future, said Mohanraj, they intend to shift Jiva from a monolithic structure to a less tightly coupled SOA model, with reusable components and better standards support. And to do this, they intend to use WSO2 – not just WSO2 ESB and WSO2 API Manager, but also WSO2 Identity Server and WSO2 Governance Registry.

“WSO2 products provide us with high performance, high availability, and better configurability,” said Mohanraj. “We want SOA governance, DevOps and flexibility. As a whole, we’re able to achieve a robust solution by integrating WSO2 products. We’re now moving away from spending more of our efforts on business infrastructure and we’re able to speed up agility by creating healthcare solutions.”

To learn more about Jiva and the WSO2 collaboration, watch the Zeomega talk at WSO2Con Asia 2016 here.


WSO2Con Insights: How NYU used WSO2 to become a more agile organization

New York University is one of the largest private American non-profits for higher education; it’s long since expanded beyond New York, and now spans more than twenty schools, colleges, and institutes – including 12 major branches across the world. They’ve produced thirty-six Nobel Prize winners and more Oscar winners than any other university in existence. It’s safe to say that it’s a pretty big organization.


Washington Square Park in Greenwich Village – the original home of NYU

Underneath all of the education and the alumni achievements lies a deeper, more technical problem. This scale means enormous amounts of data and rather complex services required to keep everything together. Peter Morales, PhD, leads NYU’s Educational Technology Innovation efforts. Speaking at WSO2Con USA 2015, he described his task: to find out how to move away from their New York-centric data center model.

The solution? WSO2’s Enterprise Service Bus.

Swapping out the engines

NYU has a lot of existing processes. The key word there is existing. To innovate, they would have to avoid touching everything else and breaking it.

This wasn’t just code, but people. Bringing in an ESB wasn’t simply bringing in technology. “You have lots of layers and lots of roles and people who are going to be affected, and you really have to be mindful about that, or the whole strategy unwinds,” explains Peter, who likens this to changing the engines of an airplane while the airplane is in flight.

The task of implementing the ESB wasn’t simply a technological addition: it was a way of bringing in organizational change. Peter outlined several ‘Agility Accelerators’: agile processes, lean investments, cloud services, unified architecture – things that make it easier for NYU to move forward.

WSO2 comes in on a technical level. NYU uses WSO2 to decouple services at three levels – at the UI level, at the middleware level and at the data level. “If you don’t decouple it at those three layers, you’re always going to end up with some degree of coupling that’s going to impede your ability to change,” said Peter.

The decoupling gives them the ability to build a model where existing systems can interoperate with newer services. This, in turn, solves the original problem: they can now add and extend functionality without disrupting the old code, rolling out incremental improvements in a way they simply could not do before.

The bus in the cloud

At the heart of this implementation lies what Peter calls an “ESB in the cloud”: an architecture that runs on Amazon Web Services and allows them to build applications. These applications function as cohesive units, but are actually composed of lots of swappable services running in the background – services ranging from identity to ones that detect and write captions for videos. Various WSO2 ESB clusters host these services, which are then delivered through Amazon CloudFront.

This, as it turns out, is a powerful combination that allows them to run everything at low latencies. It also gives them some interesting capabilities: the ability to orchestrate functionality, and the ability to roll forward services and roll them back in real time.

One of the biggest hurdles they encountered, says Peter, was adding a process for innovating – especially when it comes to introducing new technologies. There were a lot of misconceptions about what was needed.

“A lot of us, coming out of the financial services world, had been involved in enterprise service bus implementations which traditionally were kinda heavy – the TIBCOs, the JBosses – and this is where WSO2 is very different,” he says. “And the other argument we heard was ‘why not do microservices, without an ESB?’ And the big one is ‘Is this service bus going to become another point of failure?’ We have a lot of software that needs to run at 100% uptime, all the time.”

It’s safe to say that WSO2’s lightweight, high performance ESB overcame all those concerns, because NYU now runs the WSO2 ESB without a hitch. And now, says Peter, they’re looking at building an enterprise service fabric – multiple instances of an ESB on the background, synchronizing data in such a way that you get the same data regardless of where you are in the world or what your latency is supposed to be.

That’s a lot of boundaries to push – organizational, technical, you name it. But whatever NYU does, we’re proud to be there, pushing those boundaries with them.

For more information on how NYU jump-started middleware services, watch Peter’s presentation at WSO2con US 2015.

WSO2Con Insights: Why West Interactive built an app-based cloud platform with WSO2

West Corporation is a spider in a web. Andrew Bird, Senior Vice President at West, speaking at WSO2Con USA 2015, described it as a 2.5-billion dollar giant situated right at the heart of America’s telecommunications. Close to a third of the world’s conference calls run through the West network. To give you some perspective, Google+ and Cisco run calls on West networks – as does the 911 call system.

According to Andrew (who runs product management, development and innovation there) depending on where you are in America, 60% of the time, any call you place would go through the West network.

However, networks aren’t all that West does. West has a division called West Interactive Services which builds IVR systems for customers that need complex customer interaction networks. Here’s what Andrew had to say about how West Interactive used integrated, modular WSO2 middleware to drastically speed the delivery of service and enhance these systems – for both the customers and for themselves.

The challenge: customer interaction

Building IVR systems involves providing customer interaction platforms, application design services, and multi-channel communication systems, and often goes beyond building solutions for Fortune 100 companies. The services involved are often complex – context identification, notifications, chat, call, data collection, routing, message delivery, provisioning, identity – plus the ability to communicate across Web, IVR, mobile and social platforms.

To represent its work, Andrew played a demo in which a customer dials into a call center from an iPhone. The automated system on the other end recognizes the customer, recognizes the fact that he is on a mobile device, and addresses him by name. It then proceeds to interact with the customer via text and speech – all of this without needing an app.

Context is key here: Andrew Bird – and West – believes that customers should not have to repeatedly tell systems who they are. They should not have to waste time identifying themselves, their devices and the context in which they’re calling. Systems should be able to figure out that Mr Smith is calling from such and such a location and that’s probably because of this reason. West’s systems are designed to understand this kind of context, and they’re very good at it.

The solution: a middleware platform for West

But of course, building is not enough: scaling these kinds of systems is the challenge.

At some point, West apparently realized that while they were the best at scale, running “a couple of complex event processing engines, a couple of business rules managing engines, a couple of databases” – was neither sustainable nor particularly supportable. For one customer, for instance, they were managing 43 APIs, all of which were completely different. They needed everything on common standards, able to work with each other instead of in little silos of their own.

West’s solution was to build a cloud-enabled middleware platform that sits between West’s proprietary services and the applications running across different channels. West’s managed services are exposed through the platform via APIs.

This is where WSO2 came in. The WSO2 ESB serves as the SOA backbone, providing mediation and transformation between West’s different applications; WSO2 Governance Registry provides run-time SOA governance, and WSO2 Analytics platform monitors SOA metrics.

Other, more specific functionality is provided by the likes of WSO2 Complex Event Processor, Application Server, Data Services Server and Machine Learner. The multi-channel access services – those that face the world – rely on WSO2 Identity Server and WSO2 API Manager, providing a way to expose APIs to internal or external applications that may integrate with the platform.

Context is everything

For West to rely on WSO2 for the backbone of their middleware platform is, for us, an indicator of the amount of faith they have in our products. West, after all, is a company that supports some of the biggest organizations in the world. They cannot afford to fail.

But perhaps the best statement was Andrew’s recollection of how much their customers trust WSO2. “I was once meeting with a customer, talking about our vision,” he says, “and they were like ‘so what are you using for an ESB?’ I said, “WSO2”. No more questions. Done. They were using the same thing as well. I needed something like that – something where if I go talk to a customer who I’m trying to take care of, I don’t need to spend my time justifying myself.”

If you’re interested in knowing more, check out Andrew’s complete keynote talk at WSO2Con USA here. For more details on the deployment, read our case study on West Interactive here.


The Microservices Discussion: Didn’t We Do This Before?

Microservices is a trending term right now. The enterprise programmers’ corner of the Internet seems to be stuffed to the brim with talk of microservices – Mark Russinovich, CTO of Microsoft Azure, even wrote a blogpost calling it a revolution. Chris Hart wrote at length about it, linking it to the Unix philosophy. By all accounts, microservices seem to be changing the world…

Or are they? On the 24th of March, we (WSO2) hosted a meetup in Colombo on microservices. In it, Kasun Indrasiri and Afkham Azeez tackled what we think is a pressing question: what are microservices, and how are they different from what we’re already doing?

By now, everyone knows what the monolith is – the dreaded single-unit architecture that ends up becoming a nightmare to deploy, build on and scale. It is self-contained and is, in essence, a silo unto itself.

Enter microservices: the philosophy that a single application should be composed of multiple fine-grained, loosely coupled services that are built and deployed independently of each other.

Kasun explored this concept in minute detail in an earlier blogpost, pointing out that microservices need to follow the Single Responsibility Principle: each microservice handles a limited and focused business operation, and as such should have very few operations and a simple message format. That’s so you don’t end up just building miniature monoliths. Harries Blog contains a diagram that illustrates this well.
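To make the principle concrete, here’s a minimal sketch (all names and data here are hypothetical, not from the talk) of what a single-responsibility service looks like: one focused business operation, very few operations, one simple message format:

```python
import json

# Hypothetical stock levels; a real service would query its own datastore.
STOCK = {"widget": 12, "gadget": 0}

def check_stock(message):
    """The service's single operation: given {"item": name},
    answer {"item": name, "in_stock": true/false}. Nothing else."""
    item = json.loads(message)["item"]
    return json.dumps({"item": item, "in_stock": STOCK.get(item, 0) > 0})

print(check_stock('{"item": "widget"}'))
```

If you find yourself bolting ordering, billing and shipping operations onto this one handler, you’re back to building a miniature monolith.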

However, in all of this, the software industry seems to be forgetting that such a development style has already existed for a while now. Some time ago one of the hottest buzzwords was Service Oriented Architecture, or SOA: essentially, unassociated, self-contained units of functionality communicating with each other to get a job done, usually with some kind of interface in between.

Sound familiar?

While the definitions of microservices seem less vague, microservices, Kasun pointed out, are actually little more than SOA done right. Indeed, as Azeez noted, the software industry likes to reinvent old things by slapping a new name onto them. It’s not a new paradigm, nor is it a panacea; there are instances where it’s not the optimal route to take.

It’s possible that ‘microservices’ started trending because we now have better and easier tools for facilitating this kind of development. Docker and Kubernetes have practically hammered these basic concepts into a lot of developers’ heads. All it needed was a name. For a more nuanced understanding of microservices, read “Scope Versus Size: a Pragmatic Approach to Microservices Architecture” by Asanka Abeysinghe, VP of Solutions Architecture at WSO2.

Either way, here’s a toast to microservices – for keeping the spirit of SOA alive and kicking.


Deep Huge: AI Predicts Donald Trump Becoming the Next President


Predicting the Presidential Election is practically a national sport. However, traditional predictors – especially the talkshow hosts on Fox News – have historically been terrible at calling the next set of numbers. It took Nate Silver’s exceptional statistical skill to show us that with public data, you could accurately predict the election down to the last winning percentage – if the mind doing the calculations was good enough.

Artificial Intelligence has evolved exponentially over the years. We’ve gone from Deep Blue beating Garry Kasparov to DeepMind mastering Go. A Japanese AI just wrote a novel that almost won a literary prize. We may not have another Nate Silver, but the world is in a position to create his machine analogue.

Which is why we at WSO2 have constructed a system designed for the sole purpose of election math. While Google and Microsoft have been happy to use their gifts to play board games and embarrass themselves on Twitter, ours, powered by WSO2 Machine Learner, has been set the task of picking the next POTUS.

Deep Huge, as we’ve called the system (a nod to Deep Blue) predicts that Donald Trump will almost certainly win. That is, if he picks the former Governor of California, Arnold Schwarzenegger, as his Vice President.

To state it in numbers: there is a 52.3% chance that Donald Trump will win by himself, regardless of his choice of VP; with Schwarzenegger, there is a 99.4% chance that Trump will defeat all others and become the next POTUS.

How it works

Like Nate Silver’s FiveThirtyEight itself, Deep Huge’s predictions are probabilistic. We use poll data from the Associated Press, historical records from earlier elections, news articles, secret NSA surveys and Twitter for exploring sentiment and secondary-issue mapping. This data is then fed into WSO2 Machine Learner, which computes the prediction model.

Since sources other than polls are not representative, we paid more attention to trends than to absolute numbers, extrapolating the poll predictions while using other sources for calibration. The current model analyzes the win probability of presidential candidates and then runs this against an array of potential vice presidents.

At the start of the elections, the probability matrix was far too diffuse for any prediction to be useful. However, as candidates dropped out and campaign tactics solidified, the predictions became more accurate. Deep Huge has successfully modeled key pitfalls such as shifts in public opinion and the problems with running your own email server.

Every model shows that the choice of a VP is critical, as the second most powerful player in the game brings their own voter base with them.


In this case, the former Terminator not only solidifies Trump’s position in California, but Schwarzenegger offsets concerns—particularly among men—about Trump’s small hands.

The two men also share strong similarities, including a desire for closing the Mexican border. Schwarzenegger also has a track record of what one might consider Trump-like, politically incorrect statements, such as in 2007, when he urged Hispanic journalists to “Turn off the Spanish television set” and “Learn English”.

Other potential Republican VP candidates provided small gains or even losses to Trump’s odds of winning the 2016 US presidential election. Notably the probability of Trump winning the general election was 53.4% with US Senator Ted Cruz, 46.2% with former Alaska Governor Sarah Palin, and 65.1% with Fox journalist Megyn Kelly.

“According to the latest research, in today’s connected world there are just three and a half degrees of separation between an also-ran and America’s next president,” said Dr. Sanjiva Weerawarana, WSO2 founder, CEO and chief architect. “Our Deep Huge project demonstrates the power of combining streaming, batch and predictive analytics to take a pulse on American voters’ sentiments and provide insights into the winning combination of presidential and vice presidential candidates in 2016.”

Deriving secondary insights

We noted while digging into the model that, in the GOP, Trump’s and Cruz’s campaign positions on key issues are semantically close, while in the Democratic party the semantic divide is much larger. This makes it harder for the party to rally voters who are divided in the primaries. Further analysis revealed that this kind of divide proved to be a major turning point in earlier election outcomes.

To train itself to this level of accuracy, Deep Huge has to date run 11,302 simulations on available prediction data from previous years, comparing them against the actual results to dynamically build a prediction model using Random Forest Regression.

While it may bear a passing resemblance to 538’s model, it has not been taught the concepts of weighted polling averages and state fundamentals. Its prediction model has been learned and built by the neural network itself, using features drawn from social media sentiment, news articles and poll numbers on campaign issues to compute a constantly evolving prediction model.

In Conclusion

In the process of building Deep Huge, we’ve gained valuable insight into the uncertainty inherent to elections. While we’re thrilled to have created the machine analogue of Nate Silver, we hope that one day we will be able to scale Deep Huge to predict any election around the globe – one bot to predict them all.

We’re also heartened by the fact that after hearing of this prediction, Donald Trump has reversed his stance on outsourcing and decided to have his campaign planning computed by WSO2 Machine Learner running in Sri Lanka.

And by the way… April Fool.

Deep Huge isn’t real, but WSO2 does keep an eye on politics via our Election Monitor project. This offers a real-time window on the US Election unfolding across Twitter and mainstream media – mapping influence, sentiment, popular opinion and much more.

Big Data and Politics: How the Internet sees the US Election

Nothing is a hotter topic than the US Election, especially if you’re a statistician at heart. Legions of us have been mesmerised by the idea of predicting who gets to be the most powerful President on the planet.

This year, however, it’s far more fun to kick back and watch the Internet collectively explode over each and every one of the candidates in the limelight. What with Clinton’s emailgate, Bernie’s economics, Ted Cruz’s household issues and Donald Trump’s existence …

WSO2 is a technology company. We looked around and realized that we had the tools to observe this theater on an unprecedented scale. We’d like you to join us.

Which is why we present to you the WSO2 Election Monitor.

At its heart, the Election Monitor is the WSO2 Enterprise Service Bus (ESB), Data Analytics Server (DAS) and Complex Event Processor (CEP). The ESB scans Twitter, pulling conversations about the US Election every second. DAS and CEP go to work on these tweets.

 The first thing we’ve done is build this (real-time) counter of the number of unique Twitter accounts talking about each camp. In a 24-hour time window, as of the time of writing, the Republicans seem to be dominating the Twittersphere.
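Conceptually, that counter is a sliding-window distinct count over the account names streaming in from the ESB. Here’s a toy sketch in plain Java – our own illustration, not the production logic, which runs as CEP window queries:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.HashMap;
import java.util.Map;

public class UniqueAccountCounter {
    private final Duration window;
    private final Map<String, Instant> lastSeen = new HashMap<>();

    public UniqueAccountCounter(Duration window) { this.window = window; }

    // Remember the latest time each account tweeted.
    public void record(String account, Instant when) {
        lastSeen.merge(account, when, (a, b) -> a.isAfter(b) ? a : b);
    }

    // Distinct accounts seen within the window ending at `now`.
    public long count(Instant now) {
        Instant cutoff = now.minus(window);
        return lastSeen.values().stream().filter(t -> !t.isBefore(cutoff)).count();
    }

    public static void main(String[] args) {
        UniqueAccountCounter counter = new UniqueAccountCounter(Duration.ofHours(24));
        Instant now = Instant.now();
        counter.record("@somePundit", now.minus(Duration.ofHours(2)));
        counter.record("@somePundit", now.minus(Duration.ofHours(1)));  // same account, still one
        counter.record("@aSupporter", now.minus(Duration.ofHours(30))); // outside the window
        System.out.println(counter.count(now)); // prints 1
    }
}
```

The real system also has to evict stale entries and shard across candidates, but the windowed distinct count is the core idea.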


That’s a huge margin, isn’t it? Let’s find out why as we go along.


This is, first, a live feed of what we’re getting from Twitter. The gray columns are the interesting ones: they display the most popular recent tweets – recent being not more than 12 hours ago. Donald Trump often dominates both fields. Occasionally, Bernie seems to break through. As of the time of writing, in the “Popular from candidates” column, Donald Trump has three tweets, one of them about a reporter touching him. The others are one tweet from Clinton (“Enough is enough”) and one from Bernie talking about deficits.


This is consistent with what we’ve seen so far; ever since the site went live, Trump’s snazzy one-liners have consistently gotten more retweets and favourites than Bernie and Clinton’s policy-centric tweets. It would appear that one man from the Republican party is more popular than every other candidate put together… are we really surprised that there are more people talking about the Republicans than the Democrats?

But what about their followers? Using candidates’ hashtags, we can peek into the conversation by sifting through tweets and finding the most-used terms in that space.
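The sifting itself boils down to a term-frequency count over tweets that carry a candidate’s hashtag. A minimal sketch, with a made-up stopword list (the production pipeline is considerably smarter about tokenization and filtering):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

public class HashtagTerms {
    // A toy stopword list; the real system would use a much larger one.
    static final Set<String> STOP = Set.of("the", "a", "to", "and", "of", "is", "in", "rt");

    // Count how often each term appears across a set of tweets,
    // ignoring stopwords and the candidate hashtag itself.
    static Map<String, Long> termFrequencies(List<String> tweets, String hashtag) {
        return tweets.stream()
                .flatMap(t -> Arrays.stream(t.toLowerCase().split("[^a-z0-9#]+")))
                .filter(w -> !w.isEmpty() && !STOP.contains(w) && !w.equals(hashtag))
                .collect(Collectors.groupingBy(w -> w, Collectors.counting()));
    }

    public static void main(String[] args) {
        List<String> tweets = List.of(
                "#Trump2016 build the wall on the border",
                "the border debate heats up #Trump2016");
        System.out.println(termFrequencies(tweets, "#trump2016"));
    }
}
```

Rank the resulting counts and you have the makings of a wordcloud.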

Trump’s people are talking about the border. No surprise there. They’re also talking about New York. That corresponds with the fact that Hillary Clinton just took aim at Trump in a N.Y. ad. It shows a white Trump supporter sucker-punching an African American protester.  

Clinton? The email scandal hasn’t let go of her. There’s talk of war, probably because Clinton tweeted about defeating ISIS recently. There is a LOT of discussion regarding an upcoming debate with Bernie.

Bernie’s community, too, is talking about the debate. There are few other clues in his wordcloud at the moment.

Ted Cruz’s community is talking about his wife. That’s because he’s mired in a bit of controversy right now: the family man is being dodgy about questions regarding his marriage. There are a lot of questions about his principles.

There’s one man missing from this: John Kasich. As of the time of writing, he’s got 143 delegates. Cruz has 463. Trump has 736. They all need to hit 1,237 for the nomination.

Kasich’s chances look remote in the polls, and he barely exists on Twitter. For now, we must exclude him.


Step three of the site is the community graph – or, as we call it, the attention graph. Here we map out the most popular accounts talking about the US election. The larger an account’s bubble is, the more popular it is.

What do we see? Donald Trump has gathered more attention to himself than any other tweep. It’s not even a small margin. Dan Scavino comes in at a distant second. Everyone else is minuscule, like little asteroids orbiting Planet Trump. And yet even those tiny accounts get over 2,000 likes and retweets. These are the people who are essentially driving opinion on Twitter.
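Sizing those bubbles is essentially an aggregation of engagement per account. A toy version of that computation might look like this (the field names and figures here are our own, purely for illustration):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class AttentionGraph {
    // A tweet with the engagement counts that drive bubble size.
    record Tweet(String account, int likes, int retweets) {}

    // Total likes + retweets per account; bigger totals mean bigger bubbles.
    static Map<String, Integer> popularity(List<Tweet> tweets) {
        return tweets.stream().collect(Collectors.toMap(
                Tweet::account, t -> t.likes() + t.retweets(), Integer::sum));
    }

    public static void main(String[] args) {
        List<Tweet> sample = List.of(
                new Tweet("realDonaldTrump", 5000, 3000),
                new Tweet("realDonaldTrump", 2000, 1000),
                new Tweet("DanScavino", 400, 100));
        System.out.println(popularity(sample)); // Trump's bubble dwarfs Scavino's
    }
}
```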

The fourth and final part is how the media’s opinion of a candidate changes over time. By analyzing news articles published online, we can determine shifts as campaigns unfold.

Consider how attitudes have changed towards Hillary. Here’s her standing on the 15th of March:


Here’s her standing on the 17th:


Opinion has swung her way. Examine the titles of the news articles on those days. On the 15th of March: “Was Hillary Clinton Bribed For Her Iraq War Vote?” And “The Cure to Hillary Clinton’s Problem With Millennials? Donald Trump.” Not that good.

On the 17th? “How Hillary Clinton Triumphed on Tuesday” and “Hillary Clinton Becomes Kween of Broad City”. Hot on the heels of a victory comes better press.

It’s fascinating to see how the American media react to candidates as they take on world events. Opinion on Trump, for example, hit rock bottom over his views on China and implications that supporters could go haywire.
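At its crudest, this kind of media scoring can be approximated with a sentiment lexicon: sum up positive and negative words in each headline. A deliberately tiny sketch follows – the lexicon is ours and the real analysis assumes far richer NLP than this:

```java
import java.util.Map;

public class HeadlineSentiment {
    // A toy lexicon keyed on words from the headlines quoted above.
    static final Map<String, Integer> LEXICON = Map.of(
            "triumphed", 1, "victory", 1, "cure", 1,
            "bribed", -1, "problem", -1, "scandal", -1);

    // Score a headline as the sum of lexicon hits; positive means favourable press.
    static int score(String headline) {
        int s = 0;
        for (String w : headline.toLowerCase().split("[^a-z]+"))
            s += LEXICON.getOrDefault(w, 0);
        return s;
    }

    public static void main(String[] args) {
        System.out.println(score("Was Hillary Clinton Bribed For Her Iraq War Vote?")); // negative
        System.out.println(score("How Hillary Clinton Triumphed on Tuesday"));          // positive
    }
}
```

Average the scores of a day’s headlines and plot them over time, and you get a curve like the ones above.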

Our collection of insights has just gotten started, of course. As the election unfolds, all of this will keep running. While we can’t say the Internet is going to predict who wins, we think it’s a pretty interesting gauge of what the people and the press of America are thinking.

The project has since been deprecated, but we’ve preserved a snapshot of the data so you can see what it was like.

It’s football season and we kick off with WSO2 BigDataGame

Few professional sports generate more data than American football – especially when the Big Game beckons. Things heat up. Tables are drawn, graphs are computed and analysts take to the predictions game like moths to a flame.

WSO2 Machine Learner is the latest addition to our product portfolio. It was built as a high-performance, open source predictive analytics platform that takes enterprise data, uses machine learning to analyze patterns, and generates models that can be used to make accurate business predictions – but there’s no reason why you can’t use it for sports analysis too.

This is exactly what our team set out to do a few weeks ago. In a fit of experimentation, we connected WSO2 Machine Learner with the data it needs to try and predict the Big Game.

Setting up for the Big Game

American football basically has three seasons: preseason, regular season and playoffs. After a bit of searching, we came across a site that had data on all the teams going back years, and collected the historical data for 2012, 2013 and 2014.

A few rules were established:

  1. Pre-season data should not be considered because some of the best players don’t play in them.
  2. Injuries are very common and really skew the data, especially if it’s a quarterback who gets hurt.
  3. Teams that have won the Big Game have usually had a great defense.
  4. Some teams start off the season slow and then begin playing better to make the playoffs.

Taking all this into consideration, we paired Random Forest regression with stacked autoencoders.
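To make the Random Forest half of that pairing concrete, here’s a deliberately tiny sketch of the idea: train many weak learners (depth-one “stumps”) on bootstrap resamples and average their predictions. This is our own illustration, far simpler than what WSO2 Machine Learner actually runs:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class ToyForest {
    // A depth-1 regression tree ("stump"): one feature, one threshold, two leaf means.
    record Stump(int feature, double threshold, double leftMean, double rightMean) {
        double predict(double[] x) { return x[feature] <= threshold ? leftMean : rightMean; }
    }

    // Fit one stump on a bootstrap resample, brute-forcing every (feature, threshold) split.
    static Stump fitStump(double[][] X, double[] y, Random rnd) {
        int n = y.length, d = X[0].length;
        int[] idx = new int[n];
        for (int i = 0; i < n; i++) idx[i] = rnd.nextInt(n); // sample with replacement
        Stump best = null;
        double bestErr = Double.MAX_VALUE;
        for (int f = 0; f < d; f++) {
            for (int i : idx) {
                double t = X[i][f];
                double leftSum = 0, rightSum = 0;
                int leftCount = 0, rightCount = 0;
                for (int j : idx) {
                    if (X[j][f] <= t) { leftSum += y[j]; leftCount++; }
                    else { rightSum += y[j]; rightCount++; }
                }
                double lm = leftCount == 0 ? 0 : leftSum / leftCount;
                double rm = rightCount == 0 ? 0 : rightSum / rightCount;
                double err = 0;
                for (int j : idx) {
                    double p = X[j][f] <= t ? lm : rm;
                    err += (p - y[j]) * (p - y[j]);
                }
                if (err < bestErr) { bestErr = err; best = new Stump(f, t, lm, rm); }
            }
        }
        return best;
    }

    // A "forest" is just many bootstrapped stumps; prediction averages their outputs.
    static double predict(List<Stump> forest, double[] x) {
        return forest.stream().mapToDouble(s -> s.predict(x)).average().orElse(0);
    }

    public static void main(String[] args) {
        // Toy data: one made-up "team strength difference" feature; label 1 = a win.
        double[][] X = {{0.1}, {0.2}, {0.8}, {0.9}};
        double[] y = {0, 0, 1, 1};
        Random rnd = new Random(42);
        List<Stump> forest = new ArrayList<>();
        for (int i = 0; i < 50; i++) forest.add(fitStump(X, y, rnd));
        System.out.println(predict(forest, new double[]{0.85})); // should land near 1
    }
}
```

Real trees go deeper than stumps and the feature vectors are far richer, but bagging plus averaging is the heart of the technique.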

And it works! We did a little more calculation and arrived at a 76.5% accuracy rate, which was confirmed by our first set of predictions for the four games held the weekend the Bengals played the Steelers.

We quickly built out a site so that anyone can test it out for themselves.


You’re free to run any two teams you want against each other and see which one stands the best chance of winning.

Do note that we’re still in the process of tweaking it. Right now, we’re basically predicting probabilities of success – and while we have faith in our product, there’s a whole lot of things that are impossible to account for, injuries in particular. There’s also no predicting the effects of morale on a team; that stuff is sorcery.

However (while it will take more data to confirm this) we’re confident that, as of the time of writing, BigDataGame is one of the most accurate solutions on the web.

For more on how we did it, and to pair your own favourite teams against each other, head on over to the BigDataGame site.

WSO2 Microservices Server: Microservices in Ten Minutes!

Before we begin, I’d like to start by saying that this is not an introduction to microservices. Much has been written about the microservices paradigm and monolithic hell. If what you’re after is a good introduction, look no further than NGINX’s introduction to microservices or Microservices: a definition of this new architectural term by Martin Fowler and James Lewis.

Instead, we’re going to talk about the WSO2 Microservices Server. At WSO2, we’ve put together a lightweight, high-performance microservices runtime with a simple programming model for developing Java-based microservices. It’s new, starts in under 300 milliseconds, and, of course, integrates with our Data Analytics Server for additional insight. Let’s get started.

Hello, World

Let’s run through the process of getting the most basic sample running. This is literally a Hello World.

Firstly, make sure you have Java 8 and Maven installed. Then grab the zip from GitHub.

This folder is basically <MSS_HOME>. Import the pom.xml into your Java IDE of choice. Then navigate to <MSS_HOME>/samples/hello_world. If you haven’t fired up your command prompt already, do so and type mvn package.

This’ll take a short while, and when it’s done you’ll have a helloworld*.jar in the target directory. Run it with java -jar.

Congratulations! You’ll see a Netty listener starting up. You’ll also see that the Microservices Server has started – in a matter of milliseconds.

It’s alive!

Of course, you need to test it. Use the curl command:

curl -v http://localhost:8080/hello/wso2 (depending on where you’ve put it)

If you’re on Windows, or simply prefer it, you can forgo curl entirely and use HTTPie.

You should get a response similar to the following:

* Adding handle: conn: 0x7fc9d3803a00
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7fc9d3803a00) send_pipe: 1, recv_pipe: 0
* About to connect() to localhost port 8080 (#0)
*   Trying ::1...
* Connected to localhost (::1) port 8080 (#0)
> GET /hello/wso2 HTTP/1.1
> User-Agent: curl/7.30.0
> Host: localhost:8080
> Accept: */*
< HTTP/1.1 200 OK
< Content-Type: */*
< Content-Length: 10
< Connection: keep-alive
* Connection #0 to host localhost left intact
Hello wso2

What happens here?

Open up the HelloService class.


import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;

That’s JAX-RS, which comes in handy when creating RESTful web services.


@Path("/hello")
public class HelloService {

    @GET
    @Path("/{name}")
    public String hello(@PathParam("name") String name) {
        return "Hello " + name;
    }
}

Here we have a simple class acting as a REST endpoint. This is what says hello.
How does this get started? By something like this:

public class Application {
    public static void main(String[] args) {
        new MicroservicesRunner(8080)
            .deploy(new HelloService())
            .start();
    }
}
If you want to serve services through different ports, all you need to do is modify the arguments passed to MicroservicesRunner(): MicroservicesRunner(7888, 8888) will start up two Netty listeners, on ports 7888 and 8888 respectively.

Now for a second exercise: stockquotes.

In the same manner as before, enter the <MSS_HOME> folder, navigate to samples, and enter the stockquotes-services folder. Technically, this sample is meant to demonstrate the use of @Produces and @Consumes annotations for bean conversion; we’re going to use it to look at a simple way of sending a POST message to our service.
Use mvn package and run the resulting jar as before, then query the service:

curl -v http://localhost:8080/stockquote/GOOG

You should get an output along these lines: {"symbol":"GOOG","name":"Alphabet Inc.","last":652.3,"low":657.81,"high":643.15}
We’ve just pulled data that’s in our stockquote service.
What if we wanted to add data, say, the stock price of another company? We can add it via a POST message using the same format.

curl -v -X POST -H "Content-Type:application/json" -d '{"symbol":"BVMF","name":"Bovespa","last":149.62,"low":150.78,"high":149.18,"createdByHost":"localhost"}' http://localhost:8080/stockquote

This command will save a new symbol into the Stock Quote Service.
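If you’d rather drive this from Java than from curl, the same POST can be prepared with the JDK’s HttpURLConnection. This is a hypothetical client-side sketch; it assumes the service is up on localhost:8080:

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class StockQuotePost {
    // The same payload the curl command sends.
    static final String JSON =
            "{\"symbol\":\"BVMF\",\"name\":\"Bovespa\",\"last\":149.62,"
            + "\"low\":150.78,\"high\":149.18,\"createdByHost\":\"localhost\"}";

    // Configure (but do not send) the POST; nothing touches the network
    // until the connection's streams are actually used.
    static HttpURLConnection prepare(String endpoint) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        return conn;
    }

    public static void main(String[] args) throws Exception {
        HttpURLConnection conn = prepare("http://localhost:8080/stockquote");
        System.out.println(conn.getRequestMethod() + " " + conn.getURL());
        // With the server running, write JSON to conn.getOutputStream()
        // and read conn.getResponseCode() to complete the call.
    }
}
```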

Congratulations! You’ve successfully used Java 8 and Maven to set up the WSO2 Microservices Server and build its services and samples, executed microservices with a basic java -jar approach, and seen how easy it is to deploy a REST service onto our Microservices Server.
For more exploration, drop by our documentation page for more samples and check out our more detailed slideshow.