Category Archives: Featured

Cashing in on APIs – Leveraging Technology to Boost Your Business

Even if you’re not an especially tech-savvy individual, you’ve most likely used a mobile app over an internet connection, checked Gmail, scrolled through Twitter or Facebook, or purchased something online. If so, you’re already reaping the benefits of application programming interfaces (APIs). The use of APIs is becoming even more popular today as service providers scramble to embrace the Internet of Things. With the availability of new tracking devices, smart homes, smart vehicles, mobile phones, and tablets, consumers now have more options for how they consume applications.

Let’s take a step back and try to understand what this all means. An API is a well-defined interface for accessing certain resources – in other words, a service made available to an end-user. If you haven’t worked with web APIs before, you can think of one as a service exposed over the Internet to perform certain operations. APIs are the foundation of today’s software engineering industry, and enterprises are jumping on the bandwagon, using them to integrate and automate so that their online services become more appealing and user-friendly to end-users. Well-designed APIs enable your business to expose content or services to internal and external audiences in a versatile manner. Today, most organizations use APIs to build their solutions internally and expose these services to the world at large. APIs benefit both service development teams and service consumers.

A good yet simple example that illustrates this well is a weather application on your mobile device. An application that runs only on the device cannot provide the weather forecast for a specific area without connecting to an external service. However, it can read the GPS sensor on your phone, or ask you to enter the location coordinates of the area you want a forecast for. Once your geographical location is known, the mobile application simply calls a weather service API and requests the required information. What’s important to note here is that you don’t need to perform any complicated tasks, calculations, or analysis on the mobile device. You simply push the relevant parameters to an API and obtain the results you want.
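To make this concrete, here is a minimal sketch of that interaction in Python. The endpoint, parameter names, and response fields are hypothetical stand-ins for whatever weather service API you actually use.

```python
import requests  # widely used HTTP client library

# Hypothetical weather service endpoint; substitute the real API's URL,
# query parameters, and API key.
WEATHER_API = "https://api.example-weather.com/v1/forecast"

def get_forecast(latitude, longitude, api_key):
    """Push coordinates to the weather API and return the parsed result."""
    response = requests.get(
        WEATHER_API,
        params={"lat": latitude, "lon": longitude, "apikey": api_key},
        timeout=10,
    )
    response.raise_for_status()  # surface HTTP errors instead of bad data
    return response.json()       # e.g. {"summary": "Sunny", "temp_c": 27}

# The mobile app would call this with coordinates read from the GPS sensor:
# forecast = get_forecast(6.9271, 79.8612, "my-api-key")
```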

If you view this same example from one level up, you’ll see that there’s a client application and a service, and the two are connected by an API. That’s essentially what an API does: it integrates your services, data, content, and processes with external parties in a very effective and efficient manner. So, what’s the difference between services and APIs? Essentially, the functions of both are the same, but a slight differentiator is that an API generally has a well-defined interface to its services. That said, there’s a notable difference between managed and plain web APIs/services. Managed APIs are enriched with additional features on top of a standard API or service. These are referred to as quality of service (QoS) features. Common QoS features include security, access control, throttling, and usage monitoring. Security forms the foundation of any API infrastructure across the entire digital value chain; malicious users can access your systems just as legitimate users would, so it’s important to enable security at all points of engagement. Usage monitoring helps enterprises improve their APIs, attract the right app developers, troubleshoot problems and, ultimately, translate these insights into better business decisions.
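To illustrate one of these QoS features, here is a minimal throttling sketch: a fixed-window counter that rejects calls once a consumer exceeds its quota. Real API management platforms implement far richer policies (per-tier limits, distributed counters), so treat this only as an illustration of the idea.

```python
import time
from collections import defaultdict

class FixedWindowThrottle:
    """Allow at most `limit` requests per consumer in each `window` seconds."""

    def __init__(self, limit=100, window=60):
        self.limit = limit
        self.window = window
        self.counters = defaultdict(lambda: [0, 0.0])  # consumer -> [count, window_start]

    def allow(self, consumer_id):
        count, start = self.counters[consumer_id]
        now = time.time()
        if now - start >= self.window:          # new window: reset the counter
            self.counters[consumer_id] = [1, now]
            return True
        if count < self.limit:                  # still within quota
            self.counters[consumer_id][0] += 1
            return True
        return False                            # quota exceeded: throttle the call

# throttle = FixedWindowThrottle(limit=5, window=1)
# throttle.allow("weather-app")  # True until the 6th call in the same second
```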

Boosting efficiency to become more competitive

Enterprises too are seeing the potential of APIs to propel business growth, irrespective of the size and nature of the business and the industry they operate in. The key is to get started now in order to maintain a competitive edge. A typical example is the extensive use of APIs in the hospitality industry; for instance, the owner of a restaurant or a small hotel might start with a simple website and a few internal services. But at some point, as the business grows, they can no longer rely on the same internal systems alone and must work with external parties. At this stage, business owners need to think about consuming external services and exposing their own services to the outside world. And that’s when APIs and API management solutions come into play.

Large, global companies in the financial, transportation, logistics, and consumer sectors have already started to expose their systems and services to the outside world as APIs. The real benefit lies in being able to seamlessly integrate internal systems with external ones, and in creating properly structured services that are shared within the company – for example, the human resources department exposing non-sensitive employee data to other departments that need this information. A typical example is an online retail business that needs a payment solution to integrate with its system. Such a solution doesn’t need to be implemented from scratch; instead, the business can consume the APIs already exposed by payment solution providers like Stripe, Zuora, or PayPal.

To explain this further, let’s consider a restaurant owner who exposes menus and ordering services via APIs. This enables external developers to consume these APIs from their apps and incorporate the restaurant’s menus and services into the travel applications they’re building. When exposing APIs, the restaurant owner needs to consider throttling – the process of regulating the rate at which requests are processed – as well as the security aspects of exposing these APIs. On top of these, a service provider may need some insight into the usage of these APIs – for instance, details about service consumers (which apps invoke the API most), usage patterns (the most popular food types), traffic patterns (peak order times), and so on – in order to make business decisions and make the service more efficient. For this, you need some sort of analytics and usage monitoring capability as part of your overall API management solution, as sketched below.
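A very simple way to picture the usage-monitoring side is an in-memory aggregator that counts invocations per consuming app and per menu item. An API management platform would persist and visualize this, but the sketch below (with made-up event fields) shows the kind of data involved.

```python
from collections import Counter
from datetime import datetime

class UsageMonitor:
    """Collects rudimentary usage statistics for an exposed API."""

    def __init__(self):
        self.calls_per_app = Counter()     # which apps invoke the API most
        self.orders_per_item = Counter()   # most popular food types
        self.calls_per_hour = Counter()    # peak order times

    def record(self, app_id, item, timestamp=None):
        ts = timestamp or datetime.utcnow()
        self.calls_per_app[app_id] += 1
        self.orders_per_item[item] += 1
        self.calls_per_hour[ts.hour] += 1

    def report(self, top=3):
        return {
            "top_apps": self.calls_per_app.most_common(top),
            "top_items": self.orders_per_item.most_common(top),
            "busiest_hours": self.calls_per_hour.most_common(top),
        }

# monitor = UsageMonitor()
# monitor.record("travel-app-1", "pizza")
# monitor.report()  # a dict the restaurant owner can base decisions on
```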

How Internal Services Can Expose Services to the External World Via APIs


Ultimately, what you achieve in terms of business benefits is brand awareness by becoming a smart business. Moreover, in addition to the value gained from direct API consumption, API publishers can earn additional revenue by charging users for API/service usage. This concept is known as API monetization, and most API management solutions already have this feature built in as an extension, enabling creative users to turn cool ideas into revenue-generating APIs within minutes. Open source products have proved especially useful for meeting API management requirements, as they’re cost effective and easy to deploy.

Turning a Software Product Company Into a Cloud Company

From 2011 to 2015, Software as a Service (SaaS) adoption in enterprises grew more than fivefold, from 13% to 74%. The trend continues, with public cloud services worldwide growing by 18% in 2017. With this growth, the pressure to become a cloud company in order to remain competitive is increasing.

We at WSO2 have already gone through this transition, and in this blog I’d like to share a few experiences and give you some pointers on becoming a cloud company – on going from being an on-premise business to adopting a cloud, as-a-service model. First, let’s explore why you need to make the move. Being a cloud company brings many benefits for both you and your customers.

Here are some of the customer benefits that we identified:

  • Customers don’t have to pay a lot of money upfront, so the cost of entry becomes low.
  • With the pay-as-you-go model customers don’t invest a lot of money unnecessarily.
  • Everything is already set up by the vendors so customers can go to market faster.
  • Customers don’t need to maintain infrastructure and can now outsource their operations including uptime, upgrades, and security.
  • Most cloud vendors care about having APIs and integration points so customers can typically integrate their system with other solutions.
  • Customers can easily scale up or down as required.
  • Web user interfaces are the norm, so customers can work from anywhere.
  • Since these are shared deployments customers have an entire community around them that will help find bugs and fixes before they even notice them.

Also, there are quite a few vendor benefits that you can reap:

  • Its cost-effective delivery model lets you address new markets with lower expenses.
  • By enabling a self-service model for your customers you can cater to lower levels of the market as well as to larger geographies.
  • You receive faster feedback on your products because customers will notice any faults and let you know immediately.
  • There is less shelfware because people start using your products much faster and the chances of them buying a license and not using the product at all are low.
  • Because of this you gain recurring revenue and adopting a subscription model rather than a booking model allows you to predict next month’s revenue much better.

Now that you know why you should become a cloud company, ask yourself how this would affect your organization. Moving to an as-a-service model affects every single part of your organization including research and development, operations, security, sales, presales, support, and finance among others.

Research and Development (R&D)

In the waterfall model teams typically work on one big release every year or so and follow that up with a wave of upgrades for enterprise customers. The iterative cloud-first model is much faster. For example, if a product manager identifies a new market segment your team will be able to easily get the new features out in weeks or even days. The feedback they receive will also be faster since people will start using the features as soon as they’re released. This can be a very gratifying experience for developers but if something doesn’t work, they can’t make excuses and blame the customer for not configuring it correctly.

This also impacts testing, upgrading, and troubleshooting. Testing is key. There is lower tolerance if something is not working because it affects everyone using it, not just the client who happens to deploy it first. You need to pay a lot more attention to automated tests, acceptance tests, staging environments and more. Since it’s a shared deployment, teams get access to shared files, environments and servers that allow you to troubleshoot and fix issues faster.

You need to make sure your products are ready for the cloud before you launch them. They need to be able to scale for growing numbers of customers. When I first joined the company, the products were able to run in multi-tenant mode, but when we scaled to thousands of customers we started having issues that we needed to fix.

Usability is another aspect that customers have high expectations for. Cloud users expect a seamless experience that makes it easy for them to understand, configure and use the products themselves.


Operations

In typical software companies, operations amounts to an internal information services team that maintains email, WiFi, and so on. Apart from this, there’s a team that goes to customer sites, when the need arises, to help them deploy and fix things.

When you become a cloud business, operations becomes a key function. You need a team that is dedicated to updating, installing, and monitoring your services to make sure they are up and running all the time. You need to hire or grow a team with a different mentality from traditional development – pick some engineers who may be in development but have the ops way of thinking. On one hand, it’s very gratifying to know that the systems are up and running and the customers are happy because of you. On the other hand, it’s very different from normal development work where you just write the code and people use it. It’s also a 24/7 role, because we now live in an era of globalization where either your customers or your customers’ customers have clients all over the world.

Cloud also increases the visibility of failures. Your customers will quickly notice if something is wrong so you need to introduce new processes for security, postmortems, shifts, and rotation models and implement an alerting system that lets your customers know if something is broken. Monitoring is also key so that you get early warnings and end up preventing a fire rather than putting it out.
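As a toy illustration of that kind of early-warning monitoring, the sketch below polls a health endpoint and triggers an alert callback when checks start failing. The URL and the notification hook are placeholders for whatever monitoring and alerting stack you actually run.

```python
import time
import requests

HEALTH_URL = "https://cloud.example.com/health"   # placeholder endpoint

def notify(message):
    # Placeholder: in practice this would page the on-call engineer
    # and update the customer-facing status page.
    print(f"ALERT: {message}")

def watch(url=HEALTH_URL, interval=30, failure_threshold=3):
    """Poll a health endpoint and alert after consecutive failures."""
    failures = 0
    while True:
        try:
            ok = requests.get(url, timeout=5).status_code == 200
        except requests.RequestException:
            ok = False
        failures = 0 if ok else failures + 1
        if failures == failure_threshold:   # an early warning, not a single blip
            notify(f"{url} failed {failures} consecutive health checks")
        time.sleep(interval)
```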


Transparency

That’s why you need transparency. When we first launched our cloud, we were not very transparent. When things went wrong, we worked on fixing them, but a lot of the time customers were confused about whether it was something they were doing wrong or something wrong with the service. We have since implemented an uptime dashboard so that all our paying customers can check whether the services are up or down. We have also implemented a notification system that sends customers an email alert when there is an outage and again when the problem is fixed. They also receive postmortem reports for further insight. And when the uptime guarantees in our formal SLAs are not met, we give our customers credit.

The most important thing is to communicate. Cloud is a services business so you need to be very transparent and let your customers know what’s happening. They need to trust in you and your service, understand how the system works and know exactly what they are getting from it.

Security

Culturally, cloud and SaaS are accepted in most industries today. But security is a key factor for a lot of customers when choosing a cloud vendor. There are compliance requirements that need to be in place; for example, if you handle payments in the cloud then PCI compliance is a must. You need to conduct audits, have an internal security team, and use external security services. You need to use encryption wherever you can.

In general, make sure you document all your procedures – the way you work with your software, run the servers, and so on. We ourselves have a fairly long security processes document that we share with all our customers, which shows them that we treat security as an extremely important factor.

Source: http://www.zdnet.com/article/industry-cloud-research-security-and-data-protection-is-still-the-most-important-feature-for/

Sales

Currently, you have an existing sales team and existing products that you sell. When cloud comes into the picture, it will have an impact on your sales. You need to consider a few factors with regards to this:

  • Decide whether to let your team sell both the enterprise and cloud products or the enterprise product first and then the cloud as a service.
  • Decide on what the pricing levels should be if your service needs to address lower tiers of the market.
  • Figure out how to protect your larger enterprise sales from being cannibalized.
  • Make sure you offset the old revenue with your new revenue.
  • Give a clear message to your current and future customers to decrease the confusion caused by introducing these new services.
  • Distinguish between the customers who can take advantage of self-service and those who will need more help.

At WSO2, we try to align our pricing for cloud so that even people with lower budgets can use it. Our sales team actively promotes our cloud services to those customers that fit the model best. We get a smaller revenue from these customers but at the same time, we don’t spend as much time and effort to enroll them and customize their solution because of the self-service feature. It’s a win-win because our account managers can focus more on our bigger customers who need more assistance.


Pricing

You will have to experiment with pricing – we’ve been doing the same. There are three main pricing models: freemium, trial, and commercial. Some vendors offer their solutions for free at certain tiers. In our case, we have a free trial because we found that optimal for the nature of our solutions. Overall, try to make the pricing predictable and easy to understand for your customers. Charge in terms that make sense to your customer rather than based on the resources you spend, but also do your math and make sure you don’t lose money.

Presales and services

How do you go about hand-holding? Is it okay for customers to work in a self-service mode and understand how to use everything on their own, or do they still need help with customizations? You need to be able to distinguish between smaller issues that customers can deal with on their own and bigger projects like customization.

Then, you need to figure out how to serve customers across geographies. What can you automate and what requires human presence? For example, you can embed tutorials and run automated nurturing campaigns during the trial period so that customers can easily learn how to use the service efficiently. You also need a way for your customers to request help, either through a ticket-based model where customers ask for help as and when they need it, or a project-based model where, for example, you work with them to create a proof-of-concept.

Support

You need to create a support model that works for you. Will you give a certain amount of community support through user forums? Would you prefer ticket-based support? Will the product team handle support or will you have a dedicated team? These are the questions you’ll need to ask yourself. At WSO2 we have a rotation model for support. The engineers who actually work on the products work in the support team on rotation, so they know exactly what the customers want, what issues they might be facing and how to quickly solve them.


Finance

Typically for enterprise software, finances are calculated from a bookings perspective: you record the revenue as soon as you close the deal. Cloud follows a subscription model with recurring revenue. With bookings, you can’t really predict the actual amount of revenue you will get, whereas looking at your monthly recurring revenue (MRR) is a good way of predicting next month’s revenue and how much you are growing.

Average revenue per customer (ARPC) is another important factor to consider. When that figure grows, it means you are earning more from each customer, so you can spend more to attract new customers.

Churn rate is also very important. The lower your churn rate (meaning customers are happy and stay with you longer) and the higher your average revenue per customer, the higher your customer lifetime value (LTV). If your LTV is higher than your customer acquisition cost (CAC), you can afford to spend more on acquiring customers and still make money from them.
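As a back-of-the-envelope illustration of how these metrics fit together (the numbers below are invented), you can estimate lifetime value as average monthly revenue per customer divided by the monthly churn rate and compare it with acquisition cost:

```python
def lifetime_value(arpc_per_month, monthly_churn_rate):
    """Rough LTV estimate: average monthly revenue / monthly churn rate."""
    return arpc_per_month / monthly_churn_rate

arpc = 500.0    # average revenue per customer per month (invented)
churn = 0.03    # 3% of customers leave each month (invented)
cac = 4000.0    # cost to acquire one customer (invented)

ltv = lifetime_value(arpc, churn)   # 500 / 0.03 is roughly 16,667
print(f"LTV = {ltv:,.0f}, LTV/CAC = {ltv / cac:.1f}")
# An LTV comfortably above CAC suggests you can afford to spend more on acquisition.
```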

Becoming a cloud company has a cultural impact throughout your organization. The areas discussed above map to departments and teams in your company, and all of them need to change the way they think and work. You can either get there incrementally – creating smaller teams that follow the new model and work alongside those on the older model, gradually shifting to an as-a-service model – or with a big bang, where all your teams transition to the new model at once. I would recommend starting with a few projects and dedicated teams, demonstrating their success, and then expanding. This way you don’t disrupt the existing products and teams, but coexist with them during the transition.

I hope this blog has helped you understand what it takes to move to the new cloud and as-a-service model. For more information you can watch my webinar on this topic. Good luck!

Nutanix: How WSO2’s Identity Server Enhanced Customer Experience

Nutanix is a leader in hyperconverged systems, with a mission to make infrastructure invisible by delivering an enterprise cloud platform that lets you focus on the applications and services that power your business. At WSO2Con USA 2017, Manoj Thirutheri, director of SaaS and tools engineering at Nutanix, explored how WSO2 Identity Server helped them enhance their customer experience and stay competitive against large vendors like HP, Microsoft, and Cisco.

Nutanix provides over 4,450 customers across the globe with a hyperconvergence appliance that combines storage, virtualization, and network components overlaid by an intelligent software layer to minimize the need for infrastructure. “Customer experience is the last mile of digital transformation,” Manoj said, stressing the importance of creating an integrated ecosystem of customers and partners. They currently maintain multiple web portals for customer support, partner support, and the community, and one of their top priorities is to make customer experiences as simple and seamless as possible. To maintain growth, they needed to create a more seamless sign-on experience for their portals and mobile apps.

Because of the speed at which Nutanix was growing, many identity silos existed, which meant the same customer was identified in multiple ways. They had non-standard and insecure authentication and authorization mechanisms in place, which made them vulnerable and hindered the user experience. Furthermore, their ability to be agile and innovate fast was held back by the proprietary technology they used, which was neither open nor extensible. “The bottom line is, we didn’t know what our customers or partners were doing. We were lost,” notes Manoj. A 360-degree view of their customers’ activities, and the ability to track them across the different portals, were key requirements for a solution to these challenges.

As shown in the diagram below, Nutanix used WSO2 Identity Server to overcome their major identity and access management challenges. Manoj then explained the architecture from the bottom up. The highly available WSO2 Identity Server cluster is load balanced across multiple regions for high redundancy. Next, they built an intelligent API layer, which exposed all the APIs including user management, tenant management, service provider and identity provider APIs. By doing so they avoided vendor lock-in and didn’t couple their functionality to any technology, be it open source or proprietary. The third layer consisted of their own entitlement system called My Nutanix where customers and partners register and access the service providers. The green boxes at the top depict the service providers including the following:

  • The customer portal enables customers to access the services offered in My Nutanix.
  • The partner portal allows partners to perform deal registrations among other things.
  • The community portal is open source and can be used by anyone. Here, they use WSO2 Identity Server to authenticate the users through basic OAuth over Transport Layer Security (TLS), which allows them to track the users and gain new customer prospects.
  • They also have the educational and training portal in addition to many other service providers that are still in development.

Nutanix currently uses many industry standards for authentication including OAuth 2.0, OpenID Connect, and SAML 2.0, which are all supported out-of-the-box by WSO2 Identity Server. They also use WSO2 Identity Server for Just-in-Time (JIT) provisioning of users. Nutanix performs SMS-based multi-factor authentication (MFA) by using WSO2 Identity Server connectors to integrate with Twilio, which allows you to programmatically send and receive text messages using its web service APIs. In addition, they integrate with their partners through the Active Directory Federation Services (ADFS) provided by WSO2 Identity Server.
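To give a rough idea of what one of those standard flows looks like on the wire, here is an OAuth 2.0 client credentials grant written in Python. The host, client ID, and secret are placeholders; the endpoint path follows the common WSO2 Identity Server convention, but check your own deployment’s configuration.

```python
import requests

# Placeholder values; a real deployment supplies its own host and credentials.
TOKEN_URL = "https://identity.example.com:9443/oauth2/token"
CLIENT_ID = "my-client-id"
CLIENT_SECRET = "my-client-secret"

def fetch_access_token():
    """OAuth 2.0 client credentials grant: exchange app credentials for a token."""
    response = requests.post(
        TOKEN_URL,
        data={"grant_type": "client_credentials"},
        auth=(CLIENT_ID, CLIENT_SECRET),   # HTTP Basic auth with the client credentials
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["access_token"]

# token = fetch_access_token()
# The token is then sent as "Authorization: Bearer <token>" on API calls.
```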

Apart from these implemented features, Nutanix is working on leveraging more capabilities of WSO2 Identity Server. They will soon bring in multi-tenancy, because every customer has their own tenant with its own isolated roles. They will also experiment with service-based authentication, a fairly new concept to them, which uses certificates to authenticate the caller and creates service accounts within WSO2 Identity Server. As Manoj states, “Two services, no human interaction.”

Having a product that is open source, supports multiple security protocols, and can scale was key, and WSO2 Identity Server met all these requirements. It helped create a seamless single sign-on experience for their customers, partners, and prospects, while keeping track of all their actions. A key advantage that helped sustain Nutanix’s rapid growth was WSO2 Identity Server’s high scalability and availability, and its ability to support a rapid increase in the number of users from 1,000 to 100,000 in just two years. It met all of Nutanix’s requirements, including out-of-the-box support for many standard protocols, multi-factor authentication (both SMS-based and Google Authenticator), identity federation, multi-tenancy, and tenant management. Furthermore, Nutanix also used WSO2 Managed Cloud, which provides excellent support.

“We now have a bunch of happy customers and partners. We ourselves are also very happy with WSO2 Identity Server,” Manoj added. To learn more about how Nutanix leveraged WSO2, watch Manoj’s talk at WSO2Con USA 2017.

Verifone: Using WSO2 Technology to Provide a Unique Payment Terminal that Increases Customer Engagement

In Honolulu, Hawaii, one man’s vision for the future of commerce grew into what is now one of the world’s largest point-of-sale (POS) terminal vendors and a leading provider of payment and commerce solutions. Verifone still upholds this vision and keeps innovating for the future. At WSO2Con USA 2017, Ulrich Herberg, a senior Java architect at Verifone, joined us via Skype to speak about how they leveraged WSO2 technology when creating Verifone Carbon – a powerful device that combines elegant design with an integrated POS solution.

Verifone Carbon is a payment terminal that sets a new standard for a valuable and engaging consumer experience. It consists of two parts: a larger Android tablet facing the merchant and a smaller terminal offering different kinds of payment functionality, such as Apple Pay and credit card payments. These two devices sit on a mobile base, which is used for charging the devices, printing receipts, and connecting to Ethernet.

What makes Verifone Carbon unique is that it’s embedded in an ecosystem called the Verifone Commerce Platform, which consists of a number of additional systems that provide more than what a typical payment terminal offers, explained Ulrich.

  • The developer portal allows third-party developers to create their own customer- and merchant-facing applications by downloading software development kits (SDKs) and using Verifone’s APIs to trigger payments, get information on successful or failed payments, and more.
  • The app marketplace provides an interface similar to the Google Play Store or the Apple App Store where these apps can be placed and purchased.
  • The estate owner portal is used by large corporations that directly deal with the merchants to
    • Manage the estate (all the devices)
    • Get an overview of the devices
    • Manage, create, remove and update merchants
    • Purchase apps for the merchants
  • The merchant portal provides a smaller scope for the merchants only, which allows them to see their devices and purchase apps for their devices

With Verifone Carbon, merchants can now reward their best customers with loyalty points, display promotional media and coupons, leverage beacons for store analytics and invite customers to redeem personalized offers in real-time among other things.

Ulrich explained that for all of this to happen, they needed a solution that allowed them to manage and monitor all the Carbon devices. They started by evaluating commercial products, but these worked on a pay-per-device model, which would have become costly as they scaled up. Often, they didn’t have all the features required and didn’t provide the flexibility to create customized features.

The fully open source WSO2 Enterprise Mobility Manager (WSO2 EMM which is now significantly enhanced to provide enterprise IoT solutions as well as mobile device and app management in a single download via WSO2 IoT Server) overcame all of these challenges. “We were able to create a solution that fit our exact needs by either modifying the product on our own or getting WSO2 support services to help modify it,” said Ulrich. They avoided vendor lock-in and are independent of anyone else because they have control over the source code. They were also able to easily integrate WSO2 EMM with their existing terminal management infrastructure.

Ulrich then went on to discuss three major use cases of WSO2 EMM in Verifone Carbon.

Use case 1: Blank Android devices are shipped to the merchants so that they all have the same operating system image. WSO2 EMM uses individual device certificates to identify, authorize and authenticate these devices using mutual Transport Layer Security (TLS).
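A minimal sketch of what mutual TLS looks like from the device side, assuming a per-device certificate and key are already installed (the paths and URL here are hypothetical): the client presents its certificate while also verifying the server’s.

```python
import requests

# Hypothetical paths and endpoint; each device would have its own certificate.
DEVICE_CERT = "/etc/device/device-123.crt"
DEVICE_KEY = "/etc/device/device-123.key"
SERVER_CA = "/etc/device/server-ca.pem"
CHECKIN_URL = "https://emm.example.com/api/device/checkin"

def check_in(payload):
    """Contact the management server over mutual TLS.

    The client certificate identifies the device; the CA bundle verifies
    that we are talking to the genuine server.
    """
    response = requests.post(
        CHECKIN_URL,
        json=payload,
        cert=(DEVICE_CERT, DEVICE_KEY),   # client-side certificate + private key
        verify=SERVER_CA,                 # pin the server's CA
        timeout=15,
    )
    response.raise_for_status()
    return response.json()

# check_in({"device_id": "device-123", "status": "online"})
```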

Use case 2: Verifone already has a legacy terminal management system which runs on a different operating system that can’t directly connect with and use Android features. So they used WSO2 EMM to communicate with the tablet.

Use case 3: Verifone doesn’t use the interface provided by WSO2 EMM, so they had to figure out how to use WSO2 EMM as a black box. They call it from their terminal management system, send commands, and monitor all the devices through it without having to know how it works internally. They did this by working closely with WSO2 to create a thorough list of RESTful APIs that were documented in Swagger.

Ulrich went on to list a few more WSO2 EMM features they currently use, including:

  • Getting device information including location data
  • Over-the-air (OTA) updates that allow you to update the OS remotely
  • APK installation/update/removal in the background
  • Remotely locking, rebooting or factory resetting the devices
  • Debugging and sending Android logs to the server
  • Sending pop-up notifications to the tablet

He concluded by explaining in detail how they plan on scaling WSO2 EMM as the number of devices becomes larger.

To learn more about how Verifone used WSO2 technology to increase customer engagement through a unique payment terminal, watch Ulrich’s talk at WSO2Con USA 2017.

Motorola Mobility: Using WSO2 Integration Platform to Increase Business Agility

Companies all over the globe are realizing the power of lean technology on the cloud, and Motorola Mobility is one of those taking action to wield that power. In February 2017, Sri Harsha Pulleti, an integration architect at Motorola Mobility, and Richard Striedl, an advisory IT architect at Motorola Mobility, spoke at WSO2Con USA 2017 about their move to a hybrid cloud and container architecture with zero-touch automation.

A few years ago, on the day after Thanksgiving, Motorola’s website crashed, resulting in the loss of many transactions from buyers who were flooding in to get their discounts. That’s when they started questioning how it happened, why it happened, and what they could do about it, explained Sri. All their web services were running through heavy-weight enterprise service buses (ESBs) in their data centers that offered little additional technical capability. They needed to move away from this to a lightweight platform in the cloud.

After evaluating many vendors they found WSO2 and its lightweight ESB – just what they had been looking for. Sri explained that they could quickly spin up instances of it and even set auto-healing and auto-scaling capabilities. WSO2 ESB (now extended as WSO2 Enterprise Integrator, which includes all the other key products and technologies from the WSO2 Integration Platform) also supports Amazon Web Services (AWS), which was their first option for cloud computing services. After choosing their vendor, Motorola began to make the necessary changes in their environment by re-architecting the system, setting up multiple ESBs and moving to a micro-platform architecture.

A year later, Thanksgiving came along and this time everything went smoothly. “It was perfect, there were no issues and everything was absolutely fine,” explained Sri. However, a few months later, they realized that this was costly. Sri was given the challenge of finding something with the same capabilities as AWS, but at a lower cost. That’s when they started looking at OpenStack, an open source platform for creating private and public clouds. It gave them an environment with capabilities similar to AWS and allowed them to set up their own data centers. After further discussion, they decided to run both environments (AWS and OpenStack) in parallel and scale them up or down as needed.

This time, they decided to use containers, which allowed them to package their software into standardized units for development, shipment, and deployment. But why? They’re lightweight, flexible, and easy to scale. Sri then went on to discuss the importance of emphasizing collaboration and communication between developers and IT through DevOps: “It’s something everybody wants to achieve.” Instead of having just a DevOps team to achieve this, they built a zero-touch automation DevOps platform. This homegrown application, called Debug 360 and built on open source products, allows their developers to focus on developing code and checking it into a repository while end-to-end automation takes care of the rest. It now takes less than a week to complete any new development in an integration model.

Motorola now has WSO2 ESB on AWS and OpenStack, one without containers and one with. The next step will be to integrate these instances to achieve their ultimate goal of spinning up instances in both environments, Sri noted.

Motorola Mobility advisory IT architect Richard Striedl further explained the concept of cloud elasticity. He stated that they have learned a lot, especially in terms of enhancing DevOps, while working with WSO2 over the last few years. The requirements for cloud elasticity included having the same DevOps procedures, cloud capabilities, and application code, as well as auto-scaling.

“We’re evaluating WSO2 API Manager,” said Richard while explaining their need for APIs to manage the environment, build the framework and have more control over it. At present, they have 35 applications with 90% of traffic going through OpenStack and 10% going through AWS. Richard concluded by exploring their future plans of dockerizing with data services and message brokering capabilities available in the new WSO2 Enterprise Integrator. “We might even take that step towards Ballerina as we all learned today,” he added.

To learn more about how Motorola Mobility is moving to the cloud through zero-touch automation, listen to Sri’s and Richard’s talk at WSO2Con USA 2017.

West Interactive: Using WSO2 Identity Server to Enhance Customer Experience

Headquartered in Omaha, West Corporation is all about telecommunications – be it conferencing solutions, safety services, interactive voice response solutions, or speech application automation. Pranav Patel, vice president of systems development at West Interactive, recently spoke at WSO2Con USA 2017 about the unique customer experience they offer through their multi-tenanted, role-based identity and access management solution built using WSO2 Identity Server.

An increasing number of users today are turning to various channels – the web, mobile devices, and social media – to interact with vendors. Pranav explained that knowing the customer, and making sure they can access West Interactive’s services from whichever channel they prefer, is a key requirement for them.

West has been in the telecommunications industry for the last 30 years and, as is common, has many solutions that are siloed and distributed. Connecting all these solutions was a major challenge they needed to overcome in order to provide a holistic experience to their customers, explained Pranav. This meant dealing with and managing many different identities that belonged to many different customer portals. They needed a solution that centralizes user identities in a single user portal and provides efficient identity and access management.

Pranav then examined the requirements they needed to meet in order to achieve operational efficiency, easily manage accounts, save costs, and provide a great customer experience. Beyond the evident single sign-on and federation requirements, multitenancy with hierarchical tenant management was an important feature that enabled them to serve all their tenants (a client of West, represented as a domain in the system) and users (individuals who require access to the portal and are grouped at the tenant level) through their portal. The system also needed to enforce role-based access control that grants access to certain products (web applications that need to be integrated) depending on who the user is. In addition, they had corporate password policy requirements: they needed to maintain password history and enforce a password expiry date that prompts users to change their password regularly. Audit logging and bulk user imports were some other requirements.
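To picture the password-policy requirements (expiry and history), here is a small sketch of the logic involved, not how WSO2 Identity Server implements it internally: a change is rejected if the proposed password was used recently, and a login is flagged if the current password has expired.

```python
from datetime import datetime, timedelta
from hashlib import sha256

HISTORY_SIZE = 5                 # how many previous passwords to remember
MAX_AGE = timedelta(days=90)     # illustrative expiry period

def _digest(password):
    # Illustration only; production systems use salted, slow hashes (bcrypt, etc.)
    return sha256(password.encode()).hexdigest()

def password_expired(last_changed):
    """True if the current password is older than the allowed maximum age."""
    return datetime.utcnow() - last_changed > MAX_AGE

def can_reuse(new_password, history):
    """Reject the change if the new password appears in the recent history."""
    return _digest(new_password) not in history[-HISTORY_SIZE:]

# history = [_digest(p) for p in ("winter2016", "spring2017")]
# can_reuse("spring2017", history)  # False: the user must pick something new
```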

“WSO2 fulfilled several of our requirements out-of-the-box, especially support for various protocols and heterogeneous multiple user stores,” observed Pranav. He went on to explain that they could easily extend the product and customize it for any features that it didn’t already have, making it the perfect solution for West.

WSO2 Identity Server is used for

  • Introducing a relationship hierarchy between the parent tenant and child subtenant and allowing multi-tenancy
  • Asking for and storing answers to five security questions per user
  • Defining permissions or roles for products (web applications) and users
  • Providing single sign-on and federation for users
  • Allowing employees to impersonate a user and see how that user perceives the user portal
  • Enforcing password policies set by tenants

Pranav expressed how WSO2 Identity Server meets all their current requirements and how they would like to introduce customizable login pages (by tenant), two-factor and multi-factor authentication, automated user provisioning and self-registration among other features in the future. He concluded by saying they were looking forward to adding WSO2 Data Analytics Server to the mix in order to monitor what’s really going on in the system.

To learn more about West Interactive’s story listen to Pranav’s talk at WSO2Con USA 2017.

WSO2Con USA 2017 – ballerinas, blockchain, oxygen bars and more!

San Francisco met us with bitter, cold winds, but we didn’t let that stop us from hosting the best user conference ever! With a fully restructured agenda, major product and roadmap reveals and phenomenal entertainment, this year’s WSO2Con USA was bigger and better than ever.

This year, we even had an actual oxygen bar inside the WSO2 Oxygen Bar: a place where attendees were able to meet with WSO2’s solutions architecture and engineering teams and get answers to all their questions on integration, API management, analytics, identity and access management, and the Internet of Things.

The conference started off with a bang, literally. The Taiko drummers marched on to stage and gave a warm welcome to everyone in the crowd!

WSO2 Founder, CEO and Chief Architect Dr. Sanjiva Weerawarana then presented the repositioning of WSO2’s product strategy to focus on providing a platform that enables digital transformation through integration, API management, identity and access management, smart analytics, and the Internet of Things.

Thomas Squeo, the senior vice president of digital transformation and enterprise architecture at West Corporation, followed this with another keynote. He explored how to digitally disrupt from within your enterprise by empowering your employees who will in turn engage with customers, transform your products and optimize operations.

Next, cue the ballerinas…

…which led to WSO2’s big reveal: Ballerinalang

Sanjiva examined Ballerina in detail: a general-purpose, concurrent, and strongly typed programming language with both textual and graphical syntaxes, optimized for integration. Following this, Sameera Jayasoma, the associate director, architect, and lead choreographer of Ballerina, showcased a few demos of the language in use.

The first day then broke off into the individual tracks – integration, analytics, and strategy – with introductory, advanced, and hands-on sessions as well as customer talks from Motorola Mobility and the State of Arizona.

The end of day one was met with the smooth jazz sounds of The San Francisco Metro Combo at our networking event, where attendees got to mingle with their peers and WSO2 experts.

The second day commenced with a keynote by State of Arizona Chief Technology Officer Jason Simpson, who examined their cloud-first strategy for becoming a digital government. He spoke about the challenges of moving their legacy technology and systems to the cloud on low budgets to meet the increasing demands of their users, and went on to explain how the State of Arizona overcame them.

This was followed by an insightful customer panel on bridging IT and business in digital transformation moderated by our very own Vice President of Solutions Architecture Asanka Abeysinghe. The panel consisted of Jason Simpson, the CTO of State of Arizona, Sri Harsha Pulleti, an integration architect at Motorola Mobility and Naresh Sikha, the chief architect at StubHub.

The second day’s tracks consisted of technical sessions on integration, analytics, API management, IoT, and security by WSO2 experts, along with many customer talks and panels.

Yet another day of learning and exchanging ideas came to an end. But that wasn’t it for day two! Right after the sessions we went into the carnival-themed conference party, where attendees got to mingle, play old-school arcade games like Street Fighter, take funky pictures at the photo booth, and dance the night away. DJ Nikkie Matteo scratched some killer beats, but the band, Pacific Soul, stole the show and made everyone cut loose, footloose, and kick off their Sunday shoes!

Day three went straight into the tracks, which included technical sessions in the areas of IoT, security, and DevOps. It also had a track specially for partners and one delivered by partners. The customer and partner talks on day three included:

  1. IoT in Airline Operations, Suresh Subasinghe, Principal Architect, United Airlines
  2. Multi-tenanted, Role-based Identity & Access Management Solution at West, Pranav Patel, VP, Systems Development, West Interactive
  3. 0-60 with WSO2: API Management and User Authentication and Authorization Automation, Ismail Seyfi, Lead Software Architect, iJET International and Matt Barnes, Automation and Software Engineer, iJET International
  4. Enhancing Customer Experience with WSO2 Identity Server, Manoj Thirutheri, Director, SaaS and Tools Engineering, Nutanix
  5. Providing a Pathway from Stovepipe Systems to a Secure SOA Enterprise, Neil Custer, Senior Enterprise Systems Engineer, Eagle TG
  6. Rise to the Challenge with WSO2 Identity Server and WSO2 API Manager, Stefan Smeets, Enterprise Architect & Unit Manager, RealDolmen
  7. Journey of Migration from Legacy ESB to Modern WSO2 ESB Platform, Michael Enos, Senior Director, Techsoup and Ratnavel Sundaramurthi, Integration Architect, Aspire Systems
  8. Integrating Systems for University of Exeter Using Zero and the WSO2 Platform, Jack A. Rider, CTO, Chakray

We even had a session through Skype on Managing Verifone’s New Payment Device “Carbon” with WSO2’s EMM by Ulrich Herberg, a senior Java architect at Verifone, who couldn’t be physically present at the venue!

For the unconference sessions, Sameera had to get back on stage to do more Ballerina demos for the eager crowd. They just couldn’t get enough of it!

The attendees gathered in the main hall once again to listen to the last few keynotes of the conference.

Catheryn Nicholson – an engineer, entrepreneur, mother, former U.S. Naval Officer, and founder of BlockCypher – gave the first engrossing keynote on blockchain’s digital disruption and why developers, startups, corporations, academic institutions, and governments are all examining what blockchain technology can solve.

After exploring the past and present of blockchain technology and how it may influence your business, Catheryn made quite an exciting reveal about the future of blockchain and cryptocurrency. A group of open source developers with Harry Potter pseudonyms is currently developing a protocol called Mimblewimble, which is still largely theoretical but has a lot of potential to solve a number of the clunkiness issues that Bitcoin has. She predicts that the project will come out this year, so make sure to keep an eye out for that!

WSO2 Vice President of Solutions Architecture Asanka Abeysinghe gave the closing keynote on a pragmatic approach to digital transformation through iterative architecture. He spoke of his experience as a consultant and evangelist of digital transformation and examined how to overcome technical and non-technical barriers to your vision by thinking big and acting small.

Before the final adieu, we made sure to recognize customers who have been with us for the past ten (of our eleven) years. Sanjiva presented Ron Murphy from eBay, Jey Bala from Kaiser Permanente, Prakash Iyer from Trimble, and Concur (who wasn’t able to attend the conference) with a small token of appreciation for taking our first steps with us, and helping us get to where we are today.

Stay tuned for news on our next conference, and hope to see you there soon!

Introducing WSO2 Enterprise Integrator 6.0

WSO2 started out as a middleware company. Since then, we’ve realized – and championed – the fact that our products don’t just provide technological infrastructure; they radically change how a company works.

All over the world, enterprises use our products to maximize revenue, create entirely new customer experiences and products, and interact with their employees in radically different ways. We call this digital transformation – the evolution of a company from one age to another – and our role in it has become more that of a technology partner than a simple software provider.

With this realization, we’ve announced WSO2 Enterprise Integrator (EI) 6.0. Enterprise Integrator brings together all of the products and technologies WSO2 has created for the enterprise integration domain – a single package of digital transformation tools closely connected for ease of use.

When less is more

Those of you who are familiar with WSO2 products will know that we had more than 20 products across the entire middleware stack.

The rationale behind having such a wide array of products was to enable systems architects and developers to pick and choose the relevant bits that are required to build their solution architecture. These products were categorized into several broad areas such as integration, analytics, Internet of Things (IoT) and so on.

We realized that it was overwhelming for architects and developers to figure out which products should be chosen. We also realized that digital transformation requires these products to be used in certain common patterns that mirror five fields: Enterprise Integration, API Management, Internet of Things, Security, and Smart Analytics.

In order to make things easier for everyone, we decided to match our offerings to how they’re used best. In integration, this means we’ve combined the functionality of the WSO2 Enterprise Service Bus, Message Broker, Data Services Server, and others; now, rather than downloading and setting up many products to implement an enterprise integration solution, you can simply download and run Enterprise Integrator 6.0 (EI 6.0).

What’s it got?

EI 6.0 contains service integration (service bus) functionality. It also provides data integration, service and app hosting, messaging, business processes, analytics, and tooling, as well as connectors that enable you to connect to external services and systems.



The package contains the following runtimes:

  1. Service Bus

Includes functionality from WSO2 ESB, WSO2 Data Services Server (DSS) and WSO2 App Server (AS).

  2. Business Processes

Includes functionality of WSO2 Business Process Server (BPS).

  3. Message Broker

Includes the functionality of WSO2 Message Broker (MB). However, this is not intended for purely message brokering solutions; this runtime is there for guaranteed delivery integration scenarios and Enterprise Integration Patterns (EIPs).

  4. Analytics

The analytics runtime for EI 6.0, useful for tracking performance, tracing mediation flows and more.

In order to provide a unified user experience, we’ve made some changes to the directory structure. This is what it looks like now:

The main runtime is the integrator or service bus runtime and all directories relevant to that runtime are at the top level.

This is very similar to the directory structure we use for other WSO2 products; the main difference is the WSO2 directory, under which the other runtimes are available.

Under the other runtimes, you find the same directory structure as the older releases of those products, as shown below.

One might ask why we’ve included multiple runtimes instead of putting everything in a single runtime. The reason is separation of concerns: short-running, stateless integrations are executed on the service bus runtime, while long-running and possibly stateful integrations are executed on the BPS runtime. We also have optional runtimes – message broker and analytics – which are required only for certain integration scenarios and when analytics are needed, respectively.

By leaving out unnecessary stuff, we can reduce the memory footprint and ensure that only what is required is loaded. In addition, when it comes to configuration files, only files related to a particular runtime will be available under the relevant runtime’s directory.

On the Management Console

There’s also been a change to the port that the management console uses. The 9443 servlet transport port is no longer accessible; we now use the 8243 HTTPS port. Integration services, web apps, data services and the management console are all accessible only on the passthrough transport port, which defaults to 8243.

Tooling

Eclipse-based tooling is available for the main integration and business process runtimes. For data integration, we recommend using the management console itself from the main integration runtime.


Why 6.0?

As the name implies, EI is an integration product. The most widely used WSO2 product in the integration domain is the WSO2 Enterprise Service Bus (ESB), which is known in the industry to run billions of transactions per day. EI is in effect the evolution of WSO2 ESB 5.0, adding features from other products. Thus, it’s natural to dub this product 6.0 – the heart of it is still the same.

However, we’ve ensured that the user experience is largely similar to that of the previous generation of products in terms of features. The Carbon platform that underlies all of our products made it easy to achieve that goal.

Migration to EI 6.0

The migration cost from the older ESB, BPS, DSS, and other related products to EI 6.0 is minimal. The same Synapse and Data Services languages, specifications, and standards are followed in EI 6.0. Minimal changes would be required for deployment automation scripts such as Puppet scripts – the directory structures are still very similar, and the configuration files haven’t changed.

Up Next: Enterprise Integrator 7.0

EI 6.0 is based on several languages – Synapse for mediation, BPMN and BPEL for business processes, and the DSS language for data integration.

A user who wants to implement an integration scenario involving mediation, business processes, and data integration has to learn several languages with different tooling. While it’s effective, we believe we can do better.

At WSO2Con 2017, we unveiled Ballerina, an entirely new language for integration. EI 7.0 will be completely based on Ballerina – a single language and tooling experience. The integration developer can then concentrate on the scenario and implement it using a single language and tool, with first-class support for visual tooling that uses a sequence diagram paradigm to define integration scenarios.

However, 7.0 will come with a higher migration cost. Customers who are already using WSO2 products in the integration domain can transition to EI 6.0 – which we’ll be fully supporting – while planning their 7.0 migration effort for the longer term; the team will be working on tooling that will allow the bulk of the code to be migrated to Ballerina.

WSO2 will continue to develop EI 6 and EI 7 in parallel. This means new features and fixes will be released as WUM updates and newer releases of the EI 6.0 family will be available over the next few years so that existing users are not forced to migrate to EI 7.0. This is analogous to how Tomcat continues to release 5.x, 6.x, 7.x and so on.


EI 6.0 is available for download at wso2.com/integration and on github.com/wso2/product-ei/releases. Try it out and let us know what you think – it’s entirely open source, so you can take a look under the hood if that takes your fancy. To report issues and make suggestions, head over to https://github.com/wso2/product-ei/issues.

Need more information? Looking to deploy WSO2 in an enterprise production environment? Contact us and we’ll get in touch with you.

 

How we handle security at WSO2

A Proactive Strategy for Security Management

Any decent software development organization generally has a well-defined set of policies and procedures for security management.

At WSO2, we – as in, the Platform Security Team – constantly collaborate with other product teams, customers, and external security researchers to manage the overall security of all WSO2 products. In this post, we’d like to talk about how we do this.


Part One: in the realm of code


I: Designing for security

The first stage of software design is the gathering of requirements. In open source software, we tend to use third-party code quite a bit – it’s how open source works: we stand on the shoulders of giants. However, we can’t simply use whatever code we think is suitable.

The first check comes here. At WSO2, if we identify any third-party code we want to use, it first needs to be approved by the Engineering Management group, an internal group of seasoned architects who function at a directorial level. For us, security comes as a first priority, not as an afterthought.

The next set of checks come in the design phase. What are the communication protocols being used? How secure are they? Where is the data stored, and how? What endpoints are we exposing to the public? We go through a series of use cases to identify where this design can be broken, and work with the product design team to integrate our security concerns from the start.

II: Review, rinse, repeat

The next part is obvious: every developer is responsible for writing clean code [1, 2, 3].

Code written by each developer goes through a code quality review overseen by members of the relevant product team and the Platform Security Team. When submitting code for review, the developer has to include static code analysis reports generated using tools like FindSecBugs [4]. This is a mandatory security check in the review process. Only once all issues spotted in this first pass are fixed is the code merged into the repository.

III: Testing with the automated grindhouse

At WSO2, we use Jenkins quite a lot for automating the build process. It builds individual components; it packages components together; it constantly builds and re-builds.

A large part of our security testing is integrated right into this process. Jenkins first performs the OWASP Dependency Check [5, 6], which analyzes the project dependencies and produces vulnerability reports. Even after the selection process in the first stage is complete, there can be some vulnerabilities that we haven’t spotted – especially if they’ve only been discovered extremely recently.

Next, Jenkins uses FindSecBugs as a plugin; during each automated build cycle, it checks individual components and generates vulnerability reports, which are in turn submitted to the security team for review.

Jenkins also uses the OWASP Zed Attack Proxy (ZAP) for dynamic analysis [7, 8]. During the dynamic security scan, the entire URL tree of the product is crawled and well-known attacks and exploits are performed automatically; the results are reported. These reports, too, are investigated by the respective product team as well as the Platform Security Team.
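To make the ZAP step more concrete, here's a rough sketch of driving a scan over ZAP's HTTP API from Java. The daemon address, API key, and endpoint paths are assumptions (check them against your ZAP version), and a real pipeline would poll the scan status between steps rather than firing the requests back to back.

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

public class ZapScanSketch {

    private static final String ZAP = "http://localhost:8080";          // assumed ZAP daemon address
    private static final String TARGET = "https://product.example.com"; // hypothetical deployment under test
    private static final String API_KEY = "changeme";                   // assumed API key

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Crawl the target's URL tree, then run the active scan against it.
        call(client, "/JSON/spider/action/scan/?apikey=" + API_KEY + "&url=" + enc(TARGET));
        call(client, "/JSON/ascan/action/scan/?apikey=" + API_KEY + "&url=" + enc(TARGET));

        // Fetch the alerts the scan produced, which would go to the security team for review.
        String alerts = call(client, "/JSON/core/view/alerts/?apikey=" + API_KEY + "&baseurl=" + enc(TARGET));
        System.out.println(alerts);
    }

    private static String call(HttpClient client, String path) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(ZAP + path)).GET().build();
        return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }

    private static String enc(String value) {
        return URLEncoder.encode(value, StandardCharsets.UTF_8);
    }
}
```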

Once the testing is complete and a product is ready to be released, the respective product team has to receive security clearance from the Platform Security Team. If any known vulnerabilities are still listed in the reports, the product team has to justify to us the existence of the reported vulnerability – a pretty hard job.

We find that developers may write code following all the best security practices, yet when that code is merged with everything else, the result can still open up a vulnerability because of how the pieces integrate.


 Part Two: when humans happen


I: Preparing for the real world

There’s a saying: no battle plan survives contact with the customer. Even when security standards and processes are followed to the letter, our products still have to run in the real world.

One of the most important things is building awareness. We put together a set of deployment patterns, security recommendations, and best practices to be followed when deploying our products; we also conduct public webinars to raise awareness of security-related topics among WSO2 users, which are available at wso2.com/library/webinars.

II: Building internal Champions

Sometimes there is a gap between a product team and the security team, since members of the security team might not be specialists in that particular product.

To bridge this gap, we have someone we call the ‘Security Champion’ in each product team. The Security Champion is responsible for maintaining the security of the product and conducting vulnerability assessments.

All Security Champions (from the different product teams) work directly with the Platform Security Team and share knowledge and experiences with one another. They also bring the Platform Security Team’s knowledge back to the members of their own product teams.

III: Patching up 

When a vulnerability is detected in a product, patches are created for all the versions that the issue exists in. If the severity of the vulnerability is catastrophic, these patches will be released to all customers immediately. If the severity is not catastrophic, we aggregate all patches developed during the month and release the lot at the end of the month as a security bulletin.

When a patch is ready, it’s sent out through WSO2 Update Manager (WUM), added to wso2.com/security-patch-releases, and publicly announced. Every version of a product supported by WUM receives the patches automatically. Note that for products not supported by WUM, security patches are publicly released only for the very latest version.

Moving forward, we’ve started recording these advisories in our documentation at docs.wso2.com/display/Security/Security+Advisories so that more patch information is preserved. This effort is still recent, but it will add up over time.

IV: Responding to Vulnerability Reports

Technology changes every day, and new vulnerabilities and exploits are constantly being discovered. We welcome contributions from our user community, developers, and security researchers to reinforce our product security. Over the years, a great many people – both customers and members of the wider community – have helped us make our products the best they can be.

When someone reports a vulnerability, we try to verify the issue and respond to the reporter. If the vulnerability is a true positive, the patching process begins.

Generally, we do ask that the reporter refrains from publicly disclosing the vulnerability until we’ve patched it – this is to prevent anyone who might be vulnerable from being targeted.

We’re always looking for ways to make this easier. For example, we’ve set up wso2.com/security to serve as an easy, central point for our community to report issues.


 

References

[1] OWASP Secure Coding Practices https://www.owasp.org/index.php/OWASP_Secure_Coding_Practices_-_Quick_Reference_Guide

[2] Oracle Secure Coding Guidelines for Java http://www.oracle.com/technetwork/java/seccodeguide-139067.html

[3] SANS Secure Coding Guidelines https://www.sans.org/course/secure-coding-java-jee-developing-defensible-applications

[4] Static Code Analysis for Java using FindBugs Plugin and Identifying Security Bugs with FindSecurityBugs Plugin
http://tharindue.blogspot.com/2016/06/static-code-analysis-for-java-using.html

[5] OWASP Dependency Check CLI – Analyzing Vulnerabilities in 3rd Party Libraries http://tharindue.blogspot.com/2016/10/owasp-dependency-check-cli-analyzing.html

[6] Checking vulnerabilities in 3rd party dependencies using OWASP Dependency-Check Plugin in Jenkins https://medium.com/@PrakhashS/checking-vulnerabilities-in-3rd-party-dependencies-using-owasp-dependency-check-plugin-in-jenkins-bedfe8de6ba8#.ipu0b8u4o

[7] Dynamic Scanning with OWASP ZAP for Identifying Security Threats https://medium.com/@PrakhashS/dynamic-scanning-with-owasp-zap-for-identifying-security-threats-complete-guide-52b3643eee04#.nyy1fwiok

[8] Automating the boring stuff in development using ZAP and Jenkins : Continuous Integration
https://medium.com/@PrakhashS/automating-the-boring-stuffs-using-zap-and-jenkins-continues-integration-d4461a6ace1a#.jtknrzajt

Better Transport for a better London: How We Won TfL’s Data in Motion Hackathon

Transport for London (TfL) is a fascinating organization. The iconic red circle is practically part and parcel of everyday life in London, where the TfL network handles some 1.3 billion passenger journeys a year.

As part of its mandate, TfL is constantly searching for ways to better manage traffic, train capacity, and maintenance, and even to account for air quality during commutes. These are some very interesting challenges, so when TfL, Amazon Web Services, and Geovation hosted a public hackathon, we at WSO2 decided to come up with our own answers to some of these problems.

Framing the problem

TfL’s Chief Technical Architect, Gordon Watson, catches up with the WSO2 team. Photo by TfL.

TfL pushes out a lot of data regarding the many factors that affect public transport within Greater London; a lot of this is easily accessible via the TfL Unified API from https://api.tfl.gov.uk/. In addition to volumes of historical data, TfL also controls a network of SCOOT traffic sensors deployed across London. Given a two-day timeframe, we narrowed our focus down to three main areas:

  1. To use historical data regarding the number of passengers at stations to predict how many people would be on a selected train or inside a selected station
  2. To combine Google Maps with data from TfL’s sensor network across the city to pick the best routes from point A to B, predicting traffic five to ten minutes into the future so that commuters can plan ahead
  3. To use air quality data for any given region to suggest safer walking and cycling routes for the denizens of Greater London

Using WSO2 Complex Event Processor (which holds our Siddhi CEP engine) with Apache Spark and Lucene (courtesy of WSO2 Data Analytics Server), we were able to use TfL’s data to build a demo app that provided a solution for these three scenarios.

[Screenshot 1]

For starters, here’s how we addressed the first problem. With data analysis, it’s not just possible to estimate how many people are inside a station; we can break this down to understand traffic from entrance to a platform, from a platform to the exit, and between platforms. This makes it possible to predict incoming and outgoing crowd numbers. The map-based user interface that you see above allows us to represent this analysis.
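As a back-of-the-envelope sketch of the idea – with hypothetical station codes and event fields, not TfL’s actual feed – the core of the occupancy estimate is a running tally of entries and exits per station; the streaming analysis then layers windowing and prediction on top of this.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class StationOccupancy {

    // Running occupancy estimate per station, fed by gate tap events.
    private final Map<String, Integer> occupancy = new ConcurrentHashMap<>();

    public void onGateEvent(String stationId, boolean entering) {
        occupancy.merge(stationId, entering ? 1 : -1, Integer::sum);
    }

    public int estimate(String stationId) {
        return Math.max(0, occupancy.getOrDefault(stationId, 0));
    }

    public static void main(String[] args) {
        StationOccupancy tracker = new StationOccupancy();
        tracker.onGateEvent("STN-001", true);   // entry at a hypothetical station
        tracker.onGateEvent("STN-001", true);   // entry
        tracker.onGateEvent("STN-001", false);  // exit
        System.out.println(tracker.estimate("STN-001")); // prints 1
    }
}
```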

The second solution makes use of the sensor network we spoke of earlier. Here’s how TfL sees traffic.

[Screenshot 2]

The red dots are junctions; yellow dots are sensors; dashed lines indicate traffic flow. The redder the dashed lines are, the denser the traffic at that area. We can overlay the map with reported incidents and ongoing roadworks, as seen in the screenshot below:

[Screenshot 3]

Once this picture is complete, we have the data needed to account for road and traffic conditions while finding optimal routes.

This is what Google suggests:

[Screenshot 4]

We can push the data we have to WSO2 CEP, which runs streaming queries to perform flow, traffic, and density analytics. Random Forest classification enables us to use this data to build a machine learning model for predicting traffic – a model which, even with relatively little data, was 88% accurate in our tests. Combining all of this gives us a much richer picture of traffic altogether.

[Screenshot 5]
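Our model was built on WSO2’s analytics stack; as a minimal, self-contained sketch of the same idea – using Weka purely for illustration, with a hypothetical ARFF file of historical sensor windows – training and querying a Random Forest looks roughly like this:

```java
import weka.classifiers.trees.RandomForest;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class TrafficModelSketch {

    public static void main(String[] args) throws Exception {
        // Hypothetical dataset: one row per sensor reading window, with features such as
        // time of day, junction id, and recent flow counts, plus a congested/clear class label.
        Instances data = DataSource.read("scoot-history.arff");
        data.setClassIndex(data.numAttributes() - 1);

        // Train a Random Forest on the historical windows.
        RandomForest forest = new RandomForest();
        forest.buildClassifier(data);

        // Classify the most recent window and print the predicted label.
        double predicted = forest.classifyInstance(data.instance(data.numInstances() - 1));
        System.out.println("Predicted: " + data.classAttribute().value((int) predicted));
    }
}
```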

For the third problem – the question of presenting safer walking and cycling routes using air quality – our app pulled air pollution data from TfL’s Unified API.

This helps us to map walking routes; since we know where the bike stations are, it also lets us map safer cycling routes. It also allows us to push weather forecasts and air quality updates to commuters.
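For reference, pulling that feed is a single call to the Unified API. A minimal sketch is below; as far as we recall, the /AirQuality endpoint returns a London-wide JSON forecast, but treat the exact path, parameters, and response shape as assumptions and check TfL’s API documentation before relying on it.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AirQualityFetch {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Request the current air quality forecast; registered apps can append
        // their API key parameters for higher rate limits.
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://api.tfl.gov.uk/AirQuality"))
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body()); // raw JSON forecast, to be parsed and mapped onto routes
    }
}
```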

A better understanding of London traffic

In each scenario, we were also able to pinpoint ways of expanding on, or improving, what we hacked together. Essentially, we can better understand traffic inside train stations, both for TfL and for commuters. We can use image processing and WiFi connections to better gauge the number of people inside each compartment; we can show occupancy numbers in real time on screens in each station and in apps, and assist passengers in finding the best platform to catch a less crowded compartment.

We can even feed Oyster Card tap data into WSO2 Data Analytics Server, apply machine learning to build a predictive model, and use WSO2 CEP to predict source to destination travel times. Depending on screen real estate, both air quality and noise level measures could be integrated to keep commuters better informed of their travelling conditions.

How can we improve traffic prediction? By examining historical data, making a prediction, and then comparing it with actual traffic levels, we could potentially flag traffic incidents that our sensors might have missed. We could also add location-based alerts pushed out to commuters – along with congestion warnings and time-to-target countdowns on public buses.

We have to say that there were a number of other companies hacking away on excellent solutions of their own; it was rather gratifying to be picked as the winners of the hackathon. For more information, and to learn about the solutions that we competed against, please read TfL’s blog post on the hackathon.