All posts by Yudhanjaya Wijeratne

Retailers optimize multichannel IT implementations

E-commerce has evolved greatly since Michael Aldrich introduced it in the late 1970s with a simple setup involving a TV and a domestic telephone line. Today it spans far more innovative ways to shop online, from mobile devices to connected stores.

In order to keep pace with the growing demands of today’s customers and partners, retail businesses need to deliver connected and personalized experiences across stores, the web, mobile and social channels. Becoming a connected enterprise helps to offer these experiences to consumers.

For the enterprise, a connected retail business helps to increase the reach of the business, discover new business opportunities, and increase revenue. But that’s easier said than done. The complexity of the IT landscape, which consists of multiple disparate systems linked together and exposed through several interfaces and channels, poses many challenges.

Kasun Indrasiri, an architect at WSO2, authored the white paper “Connected Retail Reference Architecture,” which discusses the importance of creating a connected retail system today and explains how a complete middleware platform can help address these challenges and meet multichannel retail IT requirements.

Here are some insights from his white paper.

One of the key hurdles an enterprise needs to overcome to become connected is developing transparent, collaborative, real-time supply chains that interact seamlessly with all underlying systems to optimize inventory. Managing the many channels through which data and sales are handled has also become extremely difficult because of their sheer scale.

To this end, a retail enterprise can adopt a comprehensive solution that connects the dots and eventually facilitates a fully functional ecosystem. This ecosystem must contain several layers: an integration layer that allows merchandising, order management, supply chain, and distribution systems to communicate with each other; an API management layer that exposes functionality directly to customers and external users; and an analytics layer whose business analytics gather insights that are key and relevant to the business.


A successful connected retail enterprise will seamlessly connect, manage and control its service layers, underlying web services, and all other business services.

An architecture such as this can help create a rich customer experience through fast delivery and checkout procedures, manage the multiple channels through which data and sales are handled, and seamlessly roll out price updates so that they propagate to all parts of the retail ecosystem.

To learn more about how products within the comprehensive, open source WSO2 enterprise middleware platform can be used to meet a retail enterprise’s IT requirements, download Kasun’s white paper at http://wso2.com/whitepapers/connected-retail-reference-architecture.

 

How we ended up hatching startups

One of the first things anyone working at WSO2 learns is the email system. As an open source company, we stick to the Apache way of doing things, which means that all important communication is done across email – and these emails are open for anyone in the company to read and reply to, regardless of where they work and what they’re working on.

You’d think that this means a LOT of email on a daily basis, and you’d be right. On the other hand, it means that questions like “Is there anything we can do for startups?” get picked up faster than a hot Stack Overflow thread.

In fact, that exact scenario played out a couple of months ago. Over the course of a week-long brainstorm across email, we realized that a lot of startups (at least here in Sri Lanka) were sorely in need of technical expertise. Often, business-minded founders would invest capital and hire a third party to build their application for them – and, lacking the technical expertise to make architectural calls, they’d struggle with frameworks, iterations, system requirements and everything else thereafter.

We also realized that we could help fix that. For free.

And thus the Hatchery was born.


WSO2 Hatchery (note the cute dragon) is our free CTO-as-a-service. The idea is simple: if you’re a startup, you pitch your problem to a panel of our best solutions architects. These folks have worked with everything from Fortune 500 companies to startups; you pair off with them – and get to consult them as your CTO, free, for a period of three months. They provide high-level technical understanding – the kind that non-technical founders often need.  

The caveat is simple: the problem has to be something we can solve with any of our open source products.

It’s a simple way of making sure it plays within our direct field of expertise. Given the vast range of problems that WSO2 products can solve – everything from securely logging into a website to connecting two enterprise systems together – it gives us a lot of flexibility to play with.

After much back and forth across email, we went ahead with our first Hatchery event in Colombo, Sri Lanka. Our Media Partner for the event, Readme.lk, did an excellent recap of the event. To summarize, we enrolled sixteen startups in the program – and over the next three months, our team will be working with them to ensure that their dreams become reality.


What’s next? The Hatchery is not meant to be a solely Sri Lankan project. This first event is a learning experience for us, too. Once we have the data in place – participation, ideas for improvement, the kind of metrics we need to track – we hope to be able to scale this, and take it global. With our offices in the US and the UK, we have the right tools to bring everyone to the Hatchery. It’s just a matter of figuring out the tiny details.

Curious to see what it looked like? You can check out the event’s photo album on Facebook here.

Better Transport for a better London: How We Won TfL’s Data in Motion Hackathon

Transport for London (TfL) is a fascinating organization. The iconic red circle is practically part and parcel of everyday life in London, where the TfL network handles some 1.3 billion passenger journeys a year.

As part of its mandate, TfL is constantly on the lookout for ways to better manage traffic, train capacity and maintenance, and even to account for air quality during commutes. These are some very interesting challenges, so when TfL, Amazon Web Services and Geovation hosted a public hackathon, we at WSO2 decided to come up with our own answers to some of these problems.

Framing the problem

TfL’s Chief Technical Architect, Gordon Watson, catches up with the WSO2 team. Photo by TfL.

TfL pushes out a lot of data regarding the many factors that affect public transport within Greater London; a lot of this is easily accessible via the TfL Unified API from https://api.tfl.gov.uk/. In addition to volumes of historical data, TfL also controls a network of SCOOT traffic sensors deployed across London. Given a two-day timeframe, we narrowed our focus down to three main areas:

  1. To use historical data regarding the number of passengers at stations to predict how many people would be on a selected train or inside a selected station
  2. To combine Google Maps with data from TfL’s sensors across the city and predict traffic five to ten minutes into the future, so that commuters could pick the best routes from point A to B
  3. To pair air quality data from any given region and suggest safer walking and cycling routes for the denizens of Greater London

Using WSO2 Complex Event Processor (which holds our Siddhi CEP engine) with Apache Spark and Lucene (courtesy of WSO2 Data Analytics Server), we were able to use TfL’s data to build a demo app that provided a solution for these three scenarios.


For starters, here’s how we addressed the first problem. With data analysis, it’s not just possible to estimate how many people are inside a station; we can break this down to understand traffic from the entrance to a platform, from a platform to the exit, and between platforms. This makes it possible to predict incoming and outgoing crowd numbers, and a map-based user interface lets us represent this analysis.
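To make that concrete, here’s a minimal sketch – in Python, and emphatically not the hackathon code itself – of how gate-tap-style events could be rolled up into per-station flows and a running occupancy estimate. The event fields and station name are assumptions made for the example.

```python
# Minimal sketch (not TfL's or WSO2's actual code): estimating station
# occupancy and internal flows from hypothetical gate/platform tap events.
from collections import defaultdict

occupancy = defaultdict(int)   # station -> current crowd estimate
flows = defaultdict(int)       # (station, from_zone, to_zone) -> event count

def on_tap_event(event):
    """event: {'station': str, 'from': 'entrance'|'platform-N'|..., 'to': ...}"""
    station, src, dst = event["station"], event["from"], event["to"]
    flows[(station, src, dst)] += 1
    if src == "entrance":      # someone entered the station
        occupancy[station] += 1
    if dst == "exit":          # someone left the station
        occupancy[station] -= 1

# Example: a passenger enters Oxford Circus and walks to platform 1
on_tap_event({"station": "Oxford Circus", "from": "entrance", "to": "platform-1"})
print(occupancy["Oxford Circus"])   # -> 1
```

In the demo itself this kind of aggregation ran as streaming queries on WSO2 CEP rather than in application code; the sketch only shows the shape of the analysis.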

The second solution makes use of the sensor network we spoke of earlier. Here’s how TfL sees traffic.


The red dots are junctions; yellow dots are sensors; dashed lines indicate traffic flow. The redder the dashed lines, the denser the traffic in that area. We can overlay the map with reported incidents and ongoing roadworks.

Once this picture is complete, we have the data needed to account for road and traffic conditions while finding optimal routes.

Google Maps, for comparison, suggests its own route between the same points.


We can push the data we have to WSO2 CEP, which runs streaming queries to perform flow, traffic, and density analytics. Random Forest classification enables us to use this data to build a machine learning model for predicting traffic – a model which, even with relatively little data, was 88% accurate in our tests.  Combining all of this gives us a richer traffic analysis picture altogether.
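As a rough illustration of the classification step – not the model built at the hackathon, which used WSO2’s analytics stack – a Random Forest traffic classifier in scikit-learn might look like the following. The feature set and the synthetic training data are assumptions for the example.

```python
# Illustrative Random Forest sketch with made-up data; it only demonstrates
# the technique named above, not the actual traffic model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Hypothetical features per (sensor, 5-minute window):
# [hour_of_day, day_of_week, flow_now, flow_5_min_ago, incident_nearby]
X = rng.random((1000, 5))
y = rng.integers(0, 3, size=1000)   # 0 = light, 1 = moderate, 2 = heavy

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Classify the expected traffic level for a new window of sensor readings
print(model.predict(X_test[:1]), model.score(X_test, y_test))
```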


For the third problem – the question of presenting safer walking and cycling routes using air quality – our app pulled air pollution data from TfL’s Unified API.

This helps us to map walking routes; since we know where the bike stations are, it also lets us map safer cycling routes. It also allows us to push weather forecasts and air quality updates to commuters.
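The route-selection idea can be sketched in a few lines: score each candidate route by travel time and the average pollution along it, then pick the best. The routes, NO2 readings and weighting below are invented purely for illustration; in the demo the pollution data came from TfL’s Unified API.

```python
# Toy route scorer: trade off travel time against average NO2 exposure.
routes = {
    "via-main-road": {"minutes": 18, "no2_by_segment": [62, 70, 65]},
    "via-park":      {"minutes": 22, "no2_by_segment": [28, 25, 30]},
}

def route_score(route, pollution_weight=0.5):
    """Lower is better."""
    avg_no2 = sum(route["no2_by_segment"]) / len(route["no2_by_segment"])
    return route["minutes"] + pollution_weight * avg_no2

best = min(routes, key=lambda name: route_score(routes[name]))
print(best)   # -> 'via-park': slightly slower, but much cleaner air
```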

A better understanding of London traffic

In each scenario, we were also able to pinpoint ways of expanding on, or improving, what we hacked together. Take the station analytics: we can better understand traffic inside train stations, both for TfL and for commuters. We can use image processing and WiFi connections to better gauge the number of people inside each compartment; we can show occupancy numbers in real time on screens in each station and in apps, and help passengers find the best platform to catch a less crowded compartment.

We can even feed Oyster Card tap data into WSO2 Data Analytics Server, apply machine learning to build a predictive model, and use WSO2 CEP to predict source to destination travel times. Depending on screen real estate, both air quality and noise level measures could be integrated to keep commuters better informed of their travelling conditions.

How can we improve on traffic prediction? By examining historical data, making a traffic prediction, and then comparing that with actual traffic levels, we could potentially detect traffic incidents that our sensors might have missed. We could also add location-based alerts pushed out to commuters – and congestion warnings and time-to-target countdowns on public buses.

We have to say that there were a number of other companies hacking away on excellent solutions of their own; it was rather gratifying to be picked as the winners of the hackathon. For more information, and to learn about the solutions that we competed against, please read TfL’s blog post on the hackathon.

BLS: using WSO2 to make Switzerland’s railways work better

BLS is Switzerland’s second-largest railway company. It employs about 3000 people and runs both passenger transport trains in Switzerland and freight trains across the Alps. It owns or operates on seven major lines and also operates the standard gauge railway network of the S-Bahn Bern, which spans about 500 kilometers.


The story starts in the 1990s, when the European Commission separated railway infrastructure operators from train operating companies in order to create a more efficient railway network and more competition. As a result, a train operating company such as BLS now has to request a train path from an infrastructure operator and pay for that path.

In 2007, the main Swiss railway infrastructure operator had to replace its 25-year-old timetable planning system. The system had interfaces to about 50 other systems from different railway companies. Unfortunately, there was a long delay – some ten years – and costs tripled. But by 2015, the project was back on track, with BLS determined to finish it.

In an architectural sense, BLS realized that their product teams often might not build the best fit for a problem. There are many reasons for this, including a team being unfamiliar with the most suitable integration patterns, or a preference for one particular middleware stack simply because they understand it better. BLS therefore first devised a set of non-functional properties relevant for describing integration problems. They then devised a decision matrix that returns a number of candidate integration patterns for a given problem. Based on this, they drew up a set of integration guidelines, covering how each pattern should be implemented and what middleware was available for the purpose. A toy sketch of how such a matrix might work appears below.
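As a purely hypothetical illustration of the mechanism – the properties and patterns below are invented, not BLS’s actual guidelines – such a decision matrix could be as simple as a lookup from required non-functional properties to candidate patterns.

```python
# Toy decision matrix: map non-functional properties of an integration
# problem to candidate integration patterns. All entries are invented.
DECISION_MATRIX = {
    ("async", "guaranteed_delivery"): ["message queue", "publish-subscribe"],
    ("sync", "low_latency"):          ["request-reply via ESB proxy"],
    ("bulk", "scheduled"):            ["file transfer", "batch ETL"],
}

def candidate_patterns(properties):
    """Return every pattern whose required properties are all present."""
    matches = []
    for required, patterns in DECISION_MATRIX.items():
        if all(p in properties for p in required):
            matches.extend(patterns)
    return matches

print(candidate_patterns({"async", "guaranteed_delivery", "high_volume"}))
# -> ['message queue', 'publish-subscribe']
```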


They were then able to get on with the problem of integration. In the data flow structure below, BLS needed to introduce a mediation component, with traceability, routing, data validation, data transformation and protocol changes as its key functionality.


For this they selected WSO2 Enterprise Service Bus; with it they were able to separate transaction data from master data. The interfaces between the train operating company and the infrastructure manager carry train paths as well as data about the network and junctions. Transaction data was sent as a push; using WSO2 Data Services Server, they implemented a data pull to store a copy of this data in the system.

This project commenced in 2013, when BLS started evaluating products for the task. By December 2014, BLS had four products on their list: after a cost-benefit evaluation, they were down to two by January 2015, and after a successful proof-of-concept build they had selected WSO2 by April 2015.

In their talk at WSO2Con EU 2015, the BLS executives described themselves as satisfied with WSO2 on many fronts: the company’s product release schedules and financial growth; the availability of partners in Switzerland; the architecture and cost effectiveness of the product; and the availability of the source code. Using WSO2’s Quick Start Program, they were able to rapidly prototype cost-effective solutions for their integration.

At WSO2, we’re proud to be a part of BLS’s success. Our open source products are used by enterprises around the world – ranging from companies like BLS to governments. If your organization has a need for world-class middleware, talk to us. We’ll be glad to help.

CREATE-NET Discusses WSO2 and the Future of IoT

Charalampos Doukas is a researcher at CREATE-NET (the Center for Research And Telecommunication Experimentation for NETworked communities), a non-profit research center headquartered in Italy. Charalampos spoke at WSO2Con EU 2015 about his research into the world of open source in IoT and where WSO2 stands in this context.

In his 28-minute presentation, Charalampos started off by pointing out that despite the strange lack of discussion about open source in IoT conferences, to him the whole thing started with the open source community “with people connecting their Arduinos to the Internet and sharing their sensor data.” In fact, Pebble and SmartThings (the smart home platform maker acquired by Samsung) both used Arduinos for their 2012 proofs of concept; open source has always been closely tied to IoT platforms as we know them.

From a developer’s perspective, an IoT platform must be able to connect devices to each other and to users, and to allow services to consume the data and control these devices, delivering interesting use cases. The main features, then, are to communicate with and actuate devices, to collect and manage data from them, and to allow user interaction. A “spaghetti” of standardization bodies pushes a wide variety of protocols and standards for doing all this.

As Charalampos explained, there are over 40 IoT platforms that fulfill these requirements. Some of them, like ThingSpeak and Nimbits, are open-source; Nimbits, one of the oldest, runs on Google App Engine and even integrates with Wolfram Alpha (leading to some interesting use cases). Then there are the likes of SiteWhere, which embeds WSO2 Siddhi for Complex Event Processing and connects to WSO2 Identity Server.

“So, WSO2,” he said in his talk. “This picture is quite clear and illustrates the different layers that you need to build an IoT application and where WSO2 starts. You have the devices, you have the enterprise service bus, and message broker that enable the messaging; you can do the processing and analytics, and on top of that you can have things like a dashboard or web portal for managing data and devices. The new things that are coming – and hopefully will be more and more improved and used – are the device manager and identity server.”

Charalampos quickly sketched out what he sees as the core components of the WSO2 IoT platform: the WSO2 Message Broker, Enterprise Service Bus, Identity Server, Enterprise Mobility Manager, User Engagement Server, API Manager, Business Activity Monitor and Complex Event Processor. Yes, it’s a handful to enunciate – but the way we’ve built our platform, each component is built on the Carbon framework and provides functionality that you can add and subtract as needed. This makes it easy to not just maintain the lightweight stack that an IoT solution typically needs, but also to integrate with other software that provides similar functionality.

One of our biggest changes since then has been to create an all-new product, the upcoming WSO2 IoT Server, which brings together the best of the WSO2 platform’s many capabilities into a more out-of-the-box, enterprise-grade, server-side IoT device management architecture. Once integrated with WSO2 Data Analytics Server (which contains the functionality of WSO2 CEP and WSO2 BAM), it offers advanced IoT device analytics, including policy-based edge analytics and predictive analytics using machine learning. And true to the roots of IoT, it remains open source.

To explore this future addition to our IoT platform for free, visit wso2.com/products/iot-server/. To watch the full video of Charalampos Doukas’ analysis of the IoT sphere, click here.

 

Eurecat: using iBeacons, WSO2 and IoT for a better shopping experience

Eurecat, based in Catalonia, Spain, is in the business of providing technology. Its multinational team of researchers and technologists spreads its efforts across technology services, consulting, and R&D in sectors ranging from agriculture to textiles to aerospace. Naturally, this requires them to work in the space of big data, cloud services, mobile and the Internet of Things.

One of their projects happened to involve iBeacons in a store. In addition to transmitting messages, these low-energy, cross-platform Bluetooth (BLE) sensors can detect the distance between a potential user and themselves – and transmit this information as ‘frames’. Using this functionality, a customer walking outside the store can be detected and contacted via an automated message.


Upon arriving at the entrance to the store, the customer would be detected by beacons at the front of the shop (near) and at the back of the shop (far). This event itself would be a trigger for the system – perhaps a notification for a store clerk to attend to the customer who just walked in. The possibilities aren’t limited to these use cases: with the combination of different positions and detection patterns, many other events can be triggered or messages pushed.
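A minimal sketch of that trigger logic might look like the following; the distance thresholds, beacon IDs and event names are assumptions made for the example, not Eurecat’s implementation.

```python
# Toy proximity classifier: turn raw beacon frames into store events.
NEAR_THRESHOLD_M = 2.0    # roughly "right next to this beacon"
FAR_THRESHOLD_M = 10.0    # still in range, but some distance away

def classify_frame(frame):
    """frame: {'beacon_id': str, 'distance_m': float} -> proximity label."""
    d = frame["distance_m"]
    if d <= NEAR_THRESHOLD_M:
        return "near"
    if d <= FAR_THRESHOLD_M:
        return "far"
    return "out_of_range"

def on_frame(frame, notify):
    proximity = classify_frame(frame)
    if proximity == "near" and frame["beacon_id"] == "entrance-beacon":
        notify("customer_entered_store")   # e.g. alert a store clerk
    elif proximity == "far" and frame["beacon_id"] == "street-beacon":
        notify("customer_passing_by")      # e.g. send an automated message

on_frame({"beacon_id": "entrance-beacon", "distance_m": 1.2}, print)
# prints: customer_entered_store
```

In the real system, combinations of near/far readings from several beacons fed a complex event processing engine that decided which event to raise; the sketch compresses that into a single rule.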

To implement this, Eurecat architected the system as follows.


The process is set in motion by the iBeacon, which keeps broadcasting frames. These are picked up by the smartphone, which contacts the business services. Complex event processing occurs here to sort through all these low-level events in real time. The bus then funnels this data to where it needs to go – notification services, third parties, interfaces and databases.

The WSO2 Complex Event Processor (CEP) and the WSO2 Enterprise Service Bus (ESB) fit in readily, with the ESB collecting the events and passing them on to the processing layer.


Jordi Roda of Eurecat, speaking at WSO2Con EU 2015, detailed why they chose to go with WSO2: the real-time processing capabilities of CEP, the array of protocols and data formats it can handle, and the Siddhi language, which enabled them to easily construct the queries that would sift through the events. The ESB, said Jordi, they selected for the performance, security and connectivity it offered.

At the time of speaking, Eurecat had improvements in the pipeline: data analytics, a WiFi-based location service, and better security and scalability.


At WSO2, we’re delighted to be a part of Eurecat’s success – and if your project leads you along similar paths, we’d like to hear from you. Contact us. If you’d like to try us out before you talk to us, our products are 100% free and open source – click here to explore the WSO2 Enterprise Service Bus or here to visit the WSO2 Complex Event Processor.

Toon by Quby: Smarter Home Devices With WSO2 API Manager

Quby is hardly a household name. Nevertheless, many Dutch people will have come across a smart thermostat called the Toon, sold by Eneco, one of the largest suppliers of gas, electricity and heat in the Netherlands; about 120,000 of these devices were installed there as of 2015.

This thermostat is a Quby product: the Dutch startup designs the hardware, runs the software and basically does everything regarding this tablet-like device.


The Toon is a little bit more than a thermostat: it also displays energy and gas usage. The device hooks into the home WiFi network and integrates with the central heating system, the electricity meter and/or solar panels (so that you can check your yield). It also connects with Philips smart LED lighting and smart plugs, and exposes all of its functionality via mobile apps for phones and tablets. The apps let users do everything from analyzing a visual graph of energy usage to turning off the central heating from outside the home.

Speaking for Quby, Michiel Fokke drilled down into the workings of these mobile apps. The app connects over a secure connection to the Mobile Backend – essentially a reverse proxy; this in turn integrates with their asset management system to figure out the device login credentials, after which the app is given a live connection to the display in your home.

However, said Michiel at WSO2Con EU, they had a few problems with this architecture.

One, Quby had no information on alternative uses of the API behind that connection. Someone made a Windows Mobile app for their platform, and someone else reverse-engineered the API to write a Python framework: Quby had no measurements on any of these uses, and so could neither account for capacity nor predict it.

Two, there was a proprietary login involved, which meant storing encrypted credentials on the device itself.

Three, their API was undocumented and unsupported, making it difficult for them to open up their platform to third parties even if they wanted to.

When they started looking for an off-the-shelf solution to fit their needs, they had a wishlist: it had to support OAuth2; it had to be open source with affordable support – because they wanted to look under the hood – and it had to be extensible enough that a small team like Quby’s could innovate with it. WSO2 API Manager was an ideal fit.

According to Michiel, they went with WSO2 over MuleSoft – which was the other candidate considered – because of the open source nature of the product, the ease of using it – “You can download a zip file and unpack it and just run the start script; basically you’re ready to go!” – and also because of the clearly defined pricing model for support.

The new architecture has WSO2 API Manager sitting between the connection, the Mobile Backend, the asset management system and the authentication service. The proprietary login has now been moved into the API Manager, so that both the device app and the backend can be standards-based. A couple of additions were needed: a plugin to interface with Eneco’s authentication web service, and JSON Web Tokens to pass user claims.
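As a rough sketch of the JWT piece – using the PyJWT library, with invented claim names and a shared secret standing in for whatever Quby and the gateway actually use – passing user claims as a signed token looks like this:

```python
# Illustrative only: issue and verify a JSON Web Token carrying user claims.
import time
import jwt  # pip install PyJWT

SECRET = "shared-secret-between-gateway-and-backend"   # assumption

claims = {
    "sub": "user-1234",               # the authenticated end user
    "displayId": "toon-device-5678",  # hypothetical device claim
    "iat": int(time.time()),
    "exp": int(time.time()) + 300,    # short-lived token
}

token = jwt.encode(claims, SECRET, algorithm="HS256")

# The backend verifies the signature and reads the claims:
decoded = jwt.decode(token, SECRET, algorithms=["HS256"])
print(decoded["sub"], decoded["displayId"])
```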

“We did a pilot implementation and organized a hackathon at the beginning of March: I have to say that was quite successful – over the weekend we had 14 working apps: we had complete Windows Mobile app; we had a working prototype of Apple Watch app; and a couple of students had figured out how to use a Smart Plug to measure the energy usage of a single individual. They had created a complete portal and a web app over the weekend.”

Quite a success, we’d say.

What’s next for Quby? Michiel, at WSO2Con EU 2015, outlined plans to migrate their own apps to the new API and to start using WSO2 API Manager to provide internal APIs. The WSO2 API Manager, he said, is perfectly fit for that use case, too.

For more detail, watch Michiel’s talk at WSO2Con EU here.

At WSO2, we’re constantly working on improving our products – so to see what’s new with WSO2 API Manager, drop by here.  

WSO2 and MedVision360: Delivering Healthcare across Europe

Jan-Marc Verlinden is the founder and CEO of MedVision360. MedRecord, their flagship project, is an eHR (electronic health record) system: everything from patient data to digital health apps, devices, wearables, companies and hospital systems is wrapped together, providing a single platform on which healthcare providers can build medical applications and expose data and services via APIs. Currently, the MedVision platform is used in more than eight large EU projects, including hospitals from Hannover to Rome to Southampton.

At WSO2Con EU 2015, Jan-Marc took the stage to explain how MedVision360 achieved all this: using WSO2 products at the heart of their platform, with expertise from our partner, Yenlo.

Inside the medicine cabinet

MedRecord was born of a desire to do better. Europe, says Jan-Marc, is aging; by 2020, there will be three working people for every elderly person. In China, this problem is bound to be even more serious. This is a huge challenge for healthcare.

Given the severity of this situation, one would imagine this problem would have been tackled ages ago. Not so.

According to Jan-Marc, a few major problems stand in the way of change through effective use of ICT: cost; concerns – or technical ignorance – about privacy and data security; a lack of communication between ICT systems; and the human capital that data entry costs.

There’s also a lack of financial incentives to do better. There’s no real incentive for doctors to change the way they work and reduce those long queues to something as simple as a mobile app, especially given the costs involved.

MedVision360 built two stacks: the first, which they subsequently open sourced, uses XML for storing data – and a second, enterprise version, with better performance using PostgreSQL. Both are based on the CEN/ISO EN 13606 standard, which requires the platform to use a dual-model architecture that maintains a clear separation between information and knowledge.

To convey the depth of modelling involved in this, Jan-Marc used the example of blood pressure, one of the many measurements involved in the process of treating a patient.

This blood pressure archetype is the type of semantic model template developed by the NHS (the UK’s National Health Service). The idea is that it delivers both the data needed and the context in which a medical specialist would need to frame that data. As the system consumes these archetypes, it becomes instantly proficient.

However, due to different workflows and standards, a doctor in a country other than the UK might require a different version of the template – and not just a 1-to-1 translation of terms, either.

Working with Yenlo, MedVision360 used the open source WSO2 API Manager, WSO2 App Manager, and WSO2 Identity Server to solve this issue. WSO2 API Manager is a complete solution for designing, publishing and managing APIs throughout their lifecycle. MedRecord’s architecture uses API Manager to expose the data in the MedRecord platform from the PaaS layer while managing access rights. WSO2 Identity Server enables login through third-party identity providers (like Google and the Netherlands’ UZI pass), handles role-based access control, and provides an audit trail.

Everything else – applications, websites – is hosted on this layer. Swagger and JSON make it easier to build validated apps; paired with a drag-and-drop HTML5 tooling interface, they let developers easily build applications by accessing functionality from APIs with a few clicks. Hooks to portals like Drupal and Liferay allow better, device-independent presentation of content.

This opens up possibilities even for integration with Google Fit or Apple HealthKit. Google Fit, for example, collects data on how much the patient walks; while that raw data isn’t directly relevant for a doctor – who is more concerned when a patient isn’t walking – parsing and analyzing it would allow medical professionals to keep an eye on their patients’ health.

Healthcare is a very serious business, and at WSO2, we’re glad that providers like MedVision360 – and their clients – have chosen to trust our platform with the lives of others. To watch the full video of Jan-Marc Verlinden’s talk, click here.

At WSO2, we’re committed to making our platform better. To check out the components that MedVision used so successfully, visit the WSO2 API Manager, Identity Server and App Manager pages.

Capgemini, WSO2 and the new UN ecosystem

Ibrahim Khalili is a system integration analyst at Capgemini, a multinational that’s one of the world’s foremost providers of management consulting, technology and outsourcing services. Headquartered in Paris, Capgemini has been running since 1967 and now makes over EUR 11 billion in revenue.


Capgemini and WSO2 have a history of working together. One of Capgemini’s recent projects was for the United Nations – to build a new reference architecture for UN agencies to function across a connected technology platform. Khalili, speaking at WSO2Con Asia 2016 in Colombo, Sri Lanka, outlined the three major goals of the new platform.

Whatever they designed had to allow beneficiaries, donors, citizens and the UN’s increasingly mobile workforce to access the functionality and information of agencies “anywhere, anytime, on any device”; it had to handle information, people and devices in a much smarter and more cost-efficient way than the UN was doing already; and it had to break out the data and bring the UN’s agencies into the world of an API ecosystem.

To put this into finer context, we’re talking about a system that can handle assets, finances, information and people across a diverse array of agencies – including the nitty-gritty of fundraising, running initiatives, and reporting that is key to most UN operations. What they required was what Khalili calls a “platform-enabled agency” – more or less a complete update to operational infrastructure, with APIs exposing services, information and functionality across the board.

Their solution starts with an integration layer that connects to all legacy systems, providing a view of all the data that can be managed. On top of that goes a process layer, which contains the functionality, and on top of that an API layer exposing the platform’s services and data.


Once the logical framework was done, Capgemini started filling it in. At the very bottom go IaaS services like VMWare. On top of that comes an ERP universe of sorts – functionality from SugarCRM, Talend, WSO2 Application Server, WSO2 Complex Event Processor, and others, connected by the WSO2 ESB. WSO2 Enterprise Mobility Manager, WSO2 API Manager, and WSO2 User Engagement Server face outwards, allowing this functionality to be used. WSO2 Identity Server wraps around the entire platform, handling ID and authentication.

 

That gives Capgemini – and the UN – not only a cleaner, layered architecture, but one that brings better scalability as well as a DevOps approach. Above all, says Khalili, the chief advantage is that it’s open source. With WSO2 products, Capgemini has complete freedom to customize, take apart or rebuild whatever’s required to make a better platform. There’s no stopping innovation.

Capgemini’s not the only one who can leverage our technology. All WSO2 products are free and open source.

Go to http://wso2.com/products/ to download and use any part of our middleware platform. For more information on Capgemini’s solution for the UN, watch Ibrahim Khalili’s full presentation at WSO2Con here.  

 

WSO2 named as Cool Vendor by Gartner!

Gartner has just named WSO2 a Cool Vendor in its Internet of Things Analytics, 2016 report.

What does this mean?

Gartner’s IoT analytics report examines what vendors are doing in the IoT analytics space. ‘Cool Vendor’ is its designation for vendors that are particularly innovative. “WSO2 is one of the few open-source IoT analytics vendors with an end-to-end IoT platform, extensive application integration capabilities and state-of-the-art analytics features,” reads the report.

We’re grateful (and humbled) to be named here. We were named a Cool Vendor eight years ago for our Mashup Server product, but this one is all about IoT analytics. Let’s drill down into what we’re being recognized for.

Our overarching analytics platform is WSO2 Data Analytics Server (DAS), with WSO2 Machine Learner and WSO2 Complex Event Processor available for those who need only a specific subset of DAS’s full functionality. WSO2 DAS can handle all of the needs of IoT analytics – from batch, streaming and predictive analytics to visualization and alerts. These offerings are available as downloads to run on your own servers, can run in the cloud on a PaaS or in virtual machines, and we can even host and manage the service for you.

Our strength lies in how well these three integrate with our other products to form an IoT platform that can adapt to your needs. As Gartner notes, “the IoT platform uses traditional WSO2 application integration capabilities, including the WSO2 Enterprise Service Bus, adapters to a wide range of platforms and applications, the WSO2 API Manager and other capabilities.” WSO2 IoT Server, which spans all IoT-related capabilities, is also on the way: it handles device management among much else and folds into the rest of our platform – and of course, everything is open source.

Of course, you needn’t take our word for it. Gartner’s report is available at https://www.gartner.com/doc/3314217/cool-vendors-internet-things-analytics. Do pay them a visit and see exactly why they chose us as a Cool Vendor. To learn more about analytics “on the edge”, as it were, visit http://wso2.com/analytics and http://wso2.com/iot to see what we can do for you.