Retailers optimize multichannel IT implementations

E-commerce has evolved greatly since Michael Aldrich introduced it in the late 1970s with a simple setup involving a TV and a domestic telephone line. Today it spans more innovative ways to shop online, including mobile devices and connected stores.

In order to keep pace with the growing demands of today’s customers and partners, retail businesses need to deliver connected and personalized experiences across stores, the web, mobile and social channels. Becoming a connected enterprise helps them offer these experiences to consumers.

For the enterprise, a connected retail business helps extend the reach of the business, uncover new business opportunities, and increase revenue. But that’s easier said than done. The complexity of the IT landscape, which consists of multiple disparate systems linked together and exposed through several interfaces and channels, poses many challenges.

Kasun Indrasiri, an architect at WSO2, authored the white paper “Connected Retail Reference Architecture,” which discusses the importance of creating a connected retail system today and explains how a complete middleware platform can help address these challenges and meet the demands of multichannel retail IT requirements.

Here are some insights from his white paper.

One of the key hurdles an enterprise must overcome to become connected is developing transparent, collaborative, and real-time supply chains through seamless interaction with all systems, in order to optimize the underlying inventory stores. Managing the multiple channels through which data and sales management are performed has also become extremely difficult because of their scale.

To this end, a retail enterprise can adopt a comprehensive solution that connects the dots and eventually facilitates the creation of a fully functional ecosystem. This ecosystem must contain several layers: an integration layer that allows merchandising, order management, supply chain, and distribution systems to communicate with each other; an API management layer that exposes functionality directly to customers and external users; and an analytics layer whose business analytics gather insights relevant to the business.


A successful connected retail enterprise will seamlessly connect, manage and control its service layers, underlying web services, and all other business services.

An architecture such as this can help create a rich customer experience through fast delivery and checkout procedures, manage multiple channels through which data and sales management are performed, and seamlessly upload price updates so that they propagate to all parts of the retail ecosystem.

To learn more about how products within the comprehensive, open source WSO2 enterprise middleware platform can be used to meet a retail enterprise’s IT requirements, you can download Kasun’s white paper.


What is new in WSO2 Identity Server 5.3.0?

Since its launch in 2007, WSO2 Identity Server (WSO2 IS) has become an industry-leading product in the open source, on-premise IAM space. It’s trusted by both government and private sectors for large-scale deployments ranging up to millions of users.

Apart from its open standards support, WSO2 IS has a solid architecture for building a strong identity ecosystem around it. More than 40 connectors are now available for you to download from the WSO2 Connector Store – including SMS OTP, Email OTP, TOTP (Google Authenticator), Duo Security, mePIN, RSA, and FIDO U2F, among many others. All these connectors are released under the same open source Apache 2.0 license as the product itself.

The focus of WSO2 Identity Server 5.3.0 is to build and enhance features around Identity/Account Administration and Access Governance. Here are the new features introduced in WSO2 Identity Server 5.3.0:

  • Identify and suspend user accounts that have been idle for a pre-configured amount of time. Prior to account suspension, the administrator can set up the notification system to notify the user with a warning that the account will be suspended.
    For instance, if a user has not logged in to the account for 90 days, the user is notified that the account will be suspended within the next 7 days if the inactivity continues, after which the account is suspended.
  • A new REST API was introduced to recover a lost/forgotten password, via email notifications or secret questions. It is also possible to recover a forgotten username. This extends the functionality of the SOAP service WSO2 IS had before 5.3.0.
  • The administrator can trigger a password reset for a given user. This may be required if the user forgets the credentials and asks the administrator to reset the password, or if the credentials are exposed to outsiders, in which case the administrator can lock the account and enforce a password reset.
  • Support for Google reCAPTCHA as a way of brute-force mitigation. The administrator can configure Google reCAPTCHA in the login, password/account recovery and sign up flows.
  • Maintain the history of the user’s passwords according to a pre-configured count. This prevents a user from using a password he/she has used in the recent past. For example, if you configure a count of 5, the user will be prevented from reusing his/her last 5 passwords as the current password.
  • The administrator can monitor all the login sessions — and terminate them selectively.
  • Enforce policies to control outbound user provisioning operations. For example, you can provision users who have the salesteam role to Salesforce, and users whose email address carries a given domain name to Google Apps.
  • Partition users by service provider. WSO2 IS has supported multiple user stores since version 4.5.0. With this new feature, the administrator can specify, per service provider, which user store users should authenticate against. For example, only the users in the foo user store will be able to access the foo service provider.
  • Enforce policies during the authentication flow. The administrator can, for example, enforce a policy which states only the users having the salesteam role can access Salesforce, and only during a weekday from 8 AM to 4 PM.
  • Improvements for the JIT provisioning flow. The administrator can now specify mandatory attribute requirements for JIT provisioning and if any of those are missing, WSO2 IS will prompt the user to enter the values for the missing attributes.
  • Improvements for identity analytics. With WSO2 IS 5.3.0 the identity administrator can get alerts for abnormal and suspicious login sessions.
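As an illustration of one of the features above, the password history policy can be sketched in a few lines of shell. This is a toy model only: the hashing and storage here are purely illustrative and are not how the product implements the check.

```shell
# Keep the last N password hashes per user; reject any password whose
# hash is already inside that window (hashing via cksum is illustrative).
HISTORY_COUNT=5
HISTORY_FILE=$(mktemp)

set_password() {
  hash=$(printf '%s' "$1" | cksum | cut -d' ' -f1)
  if grep -qx "$hash" "$HISTORY_FILE"; then
    echo "rejected: password used recently"
    return 1
  fi
  echo "$hash" >> "$HISTORY_FILE"
  # Trim the history to the configured count.
  tail -n "$HISTORY_COUNT" "$HISTORY_FILE" > "$HISTORY_FILE.tmp"
  mv "$HISTORY_FILE.tmp" "$HISTORY_FILE"
  echo "accepted"
}

set_password "summer2016"            # accepted
set_password "summer2016" || true    # rejected: password used recently
set_password "winter2016"            # accepted: a genuinely new password
```

With a history count of 5, a user could only return to "summer2016" after setting five other passwords in between.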

In addition to the above set of features, WSO2 IS 5.3.0 also introduced a set of enhancements for its existing open standards.

  • SAML 2.0 Metadata Profile
  • SAML 2.0 Assertion Query/Request Profile
  • OpenID Connect Dynamic Client Registration
  • OAuth 2.0 Token Introspection
  • OpenID Connect Discovery
  • JSON/REST profile of XACML

WSO2 IS 5.3.0 is now the best it’s ever been. We hope you will find it useful for addressing your enterprise identity management requirements, and we’re more than happy to hear your feedback and suggestions.

Implementing an Effective Deployment Process for WSO2 Middleware


At WSO2, we provide middleware solutions for Integration, API Management, Identity Management, IoT and Analytics. Running our products on a local machine is quite straightforward: one just needs to install Java, download the required WSO2 distribution, extract the zip file and run the executable.

This provides a middleware testbed for the user in no time. If the solution needs multiple WSO2 products, they can be run on the same machine by changing their port offsets and configuring the integrations accordingly.
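As a rough sketch of what this looks like in practice (the product names and versions below are just examples), Carbon-based products accept a port offset so a second instance does not clash with the first:

```shell
# First instance keeps the default ports:
#   sh wso2esb-4.9.0/bin/wso2server.sh
# Second instance shifts every port by the offset:
#   sh wso2is-5.3.0/bin/wso2server.sh -DportOffset=1
# With the default HTTPS port of 9443, an offset of 1 means:
OFFSET=1
echo "second instance management console: https://localhost:$((9443 + OFFSET))/carbon"
```

The same offset can alternatively be set in the product's carbon.xml configuration instead of on the command line.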

This works very well for trying out product features and implementing quick PoCs. However, once the preliminary implementation of the project is done, a proper deployment process is needed for moving the system to production. 

Any software project needs at least three environments for managing development, testing, and live deployments. More importantly, a software governance model is needed for delivering new features, improvements, and bug fixes, and for managing the overall development process.

This becomes crucial when the project implements the system on top of a middleware solution. Both middleware and application changes will need to be delivered. There might be considerable amounts of prerequisites, artifacts and configurations. Without having a well-defined process, it would be difficult to manage such projects efficiently.

A High-Level Examination

The following points need to be considered when implementing an effective deployment process:

  • Infrastructure

WSO2 middleware can be deployed on physical machines, virtual machines and on containers. Up to now most deployments have been done on virtual machines.

Around 2015, WSO2 users started moving towards container-based deployments using Docker, Kubernetes and Mesos DC/OS. As containers do not need a dedicated operating system instance, this cuts down resource requirements for running an application – in contrast to a VM. In addition, the container ecosystem makes the deployment process much easier using lightweight container images and container image registries.

We provide Puppet Modules, Dockerfiles, Docker Compose, Kubernetes and Mesos (DC/OS) artifacts for automating such deployments.

  • Configuration Management

The configuration for any WSO2 product can be found inside the relevant repository/conf folder. This folder contains a collection of configuration files corresponding to the features that the product provides.

The simplest solution is to maintain these files in a version control system (VCS) such as Git. If the deployment has multiple environments and a collection of products, it might be better to consider using a configuration management system such as Ansible, Puppet, Chef or Salt Stack for reducing configuration value duplication.
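A minimal sketch of the Git approach, with one branch per environment so that every configuration change is an auditable commit (the paths, branch names and file contents are illustrative):

```shell
# Track a product's conf folder in Git.
mkdir -p product/repository/conf && cd product/repository/conf
echo "<Offset>0</Offset>" > carbon.xml
git init -q .
git add carbon.xml
git -c user.email=ops@example.com -c user.name=ops commit -qm "dev baseline config"
# Each environment gets its own branch for environment-specific values.
git branch test
git branch production
git branch --list
```

From here, promoting a configuration change is a merge from the development branch into test, and later into production.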

We ship Puppet modules for all WSO2 products for this purpose.

  • Extension Management

WSO2 middleware provides extension points in all WSO2 products for plugging in required features.

For example, in WSO2 Identity Server a custom user store manager can be implemented for connecting to external user stores. In the WSO2 Integration products, handlers or class mediators can be implemented for executing custom mediation logic.  Almost all of these extensions are written in Java and deployed as JAR files. These files will simply need to be copied to the repository/components/lib folder or the repository/components/dropins folder if they are OSGi compliant.
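In file-system terms, deploying such an extension looks like this (the JAR names are hypothetical; paths are relative to the product home):

```shell
# The two drop locations for extension JARs:
mkdir -p repository/components/lib repository/components/dropins
touch custom-user-store-manager-1.0.jar osgi-custom-mediator-1.0.jar
# Plain JARs go to lib; the server converts them to OSGi bundles at startup.
cp custom-user-store-manager-1.0.jar repository/components/lib/
# JARs that are already OSGi bundles go straight to dropins.
cp osgi-custom-mediator-1.0.jar repository/components/dropins/
ls repository/components/lib repository/components/dropins
```

A server restart is generally needed for the new extension to be picked up.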

  • Deployable Artifact Management

Artifacts that can be deployed in the repository/deployment/server folder fall under this category. For example, in the ESB, proxy services, REST APIs, inbound endpoints, sequences, and security policies can be deployed at runtime via the above folder.

We recommend that you create these artifacts in WSO2 Developer Studio (DevStudio) and package them into Carbon Archive (CAR) files for deploying them as collections. WSO2 DevStudio provides a collection of project templates for managing deployable files of all WSO2 products. These files can be effectively maintained using a VCS.

  • Applying Patches/Updates

Previously, patches were applied to a WSO2 product by copying the patch<number> folder, found inside the patch zip file, to the repository/deployment/patches/ folder.

We recently introduced a new way of applying patches for WSO2 products with WSO2 Update Manager (WUM). The main difference from the previous patch model is that fixes and improvements cannot be applied selectively: WUM applies, via a CLI, all the fixes issued up to a given point. This is by design.
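The WUM flow, sketched below with the CLI verbs as documented for WUM; the product name is just an example:

```shell
# Register/download the product distribution with WUM:
#   wum add wso2is-5.3.0
# Apply *all* fixes issued up to now (no cherry-picking of individual fixes):
#   wum update wso2is-5.3.0
PRODUCT=wso2is-5.3.0
echo "updating ${PRODUCT} to the latest update level"
```

The updated distribution is written alongside the original, so the original stays untouched and can be kept for rollback.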

  • Lifecycle Management

In any software project it is important to have at least three environments – one for managing development, one for testing and one for production deployments. New features, bug fixes or improvements need to be implemented first in the development environment and then moved to the testing environment for verification. Once the functionality and performance are verified, the changes can be applied in production (as explained in the “Rolling Out Changes” section).

The performance verification step might need to have resources identical to the production environment for executing load tests. This is vital for deployments where performance is critical.

With our products, changes can be moved from one environment to the other as a delivery.  Deliveries can be numbered and managed via tags in Git.
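A minimal sketch of numbering deliveries with annotated Git tags, so that any environment can check out, or roll back to, an exact delivery (repository and tag names are illustrative):

```shell
# A repository holding delivery content (configs, artifacts, extensions).
git init -q delivery-repo && cd delivery-repo
git -c user.email=ops@example.com -c user.name=ops \
    commit -q --allow-empty -m "delivery 1.0 content"
git -c user.email=ops@example.com -c user.name=ops \
    tag -a v1.0 -m "delivery 1.0: verified in testing"
git -c user.email=ops@example.com -c user.name=ops \
    commit -q --allow-empty -m "delivery 1.1 content"
git -c user.email=ops@example.com -c user.name=ops \
    tag -a v1.1 -m "delivery 1.1: bug fixes"
git tag --list
# Roll production back to the previous delivery:
git checkout -q v1.0
```

Annotated tags carry a message and a tagger, which gives each delivery a small audit trail for free.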

The key advantage of using this approach is the ability to track, apply and roll back updates at any given time.

  • Rolling Out Changes

Changes to the existing solution can be rolled out in two main ways:

1. Incremental Deployment (also known as Canary Release).

The idea of this approach is to incrementally apply changes to the existing solution without having to completely switch the entire deployment to the new solution version. This gives the ability to verify the delivery in the production environment using a small portion of the users before propagating it to everyone.

2. Blue-Green Deployment

In the blue-green deployment method, the deployment is switched to the newer version of the solution all at once. It needs an identical set of resources for running the newer version in parallel with the existing deployment until the newer version is verified. In case of failure, the system can be switched back to the previous version at the router. Taking such an approach may require far more thorough testing than the first approach.
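The switch-over mechanics of blue-green can be illustrated with a symlink standing in for the router (everything here is a toy model; in a real deployment the "router" would be a load balancer):

```shell
# Two identical environments, each running a version of the solution.
mkdir -p blue green
echo "v1" > blue/version
echo "v2" > green/version
# "live" is the router: it points at whichever environment serves traffic.
ln -sfn blue live
cat live/version      # v1 (blue is serving traffic)
# green is provisioned and verified in parallel, then the cut-over happens:
ln -sfn green live
cat live/version      # v2
# On failure, rollback is just re-pointing the router:
ln -sfn blue live
cat live/version      # v1 again
```

The incremental (canary) approach differs only in that the router sends a small share of traffic to green first, instead of all of it at once.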

Deployment Process Approach 1

This is the simplest form of executing a WSO2 deployment effectively.

In this model the configuration files, deployable artifacts and extension source code are maintained in a version control system. WSO2 product distributions are maintained separately in a file server. Patches/updates are directly applied to the product distributions and new distributions are created. The separation of distributions and artifacts allows product distributions to be updated without losing any project content.

As shown by the green box in the middle, a deployable product distribution is created by combining the latest product distribution, configuration files, deployable artifacts and extensions. Deployable distributions can be extracted and run on physical machines, virtual machines or containers. Depending on the selected deployment pattern, multiple deployable distributions may need to be created for a product.

In a containerized deployment, each deployable product distribution will have a container image. Depending on the containerized platform, a set of orchestration and load balancing artifacts might also be used.
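An illustrative build step for this approach: overlay version-controlled configuration, CAR files and extension JARs onto a vanilla product distribution, then package the result as the deployable distribution. All paths, product names and file names below are made up for the example.

```shell
# Inputs: a vanilla distribution plus version-controlled project content.
mkdir -p vanilla/repository/conf vcs/conf vcs/artifacts vcs/lib
echo "<Offset>0</Offset>" > vcs/conf/carbon.xml
touch vcs/artifacts/orders.car vcs/lib/custom-handler.jar

# Overlay: copy the distribution, then layer the project content on top.
cp -r vanilla deployable
cp vcs/conf/carbon.xml deployable/repository/conf/
mkdir -p deployable/repository/deployment/server/carbonapps \
         deployable/repository/components/lib
cp vcs/artifacts/*.car deployable/repository/deployment/server/carbonapps/
cp vcs/lib/*.jar deployable/repository/components/lib/

# Package the result as the deployable distribution.
tar czf deployable-dist.tar.gz deployable
ls deployable-dist.tar.gz
```

Because the vanilla distribution is never modified in place, a patched/updated distribution can be swapped in and the overlay re-run without losing any project content.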

Deployment Process Approach 2

In the second approach, a configuration management system is used to reduce the duplication of configuration data and to automate the installation process.

Similar to the first approach, deployable artifacts, configuration data and extension source code are managed in a version control system. Configuration data needs to be stored in a format that is supported by the configuration management system.

For example, in Puppet, configuration data is stored either in manifest files or in Hiera YAML files. Deployable WSO2 product distributions are not created in advance; instead, that process is executed by the configuration management system on the physical machine, on the virtual machine, or in the container at image build time.

In conclusion

Any of the deployment approaches described above can be followed on any infrastructure. If a configuration management system is used, it can install and configure the solution on virtual machines as well as on containers. The main difference with containers is that the configuration management agent is only triggered at container image build time; it does not run while the container is running.


At the end of the day, a proper deployment process is essential. For more information and support, please reach out to us. We’d be happy to help.

How we handle security at WSO2

A Proactive Strategy for Security Management

Any decent software development organization generally has a well-defined set of policies and procedures for security management.

At WSO2, we – as in, the Platform Security Team – constantly collaborate with other product teams, customers and external security researchers to manage the overall security of all WSO2 products. In this post, we’d like to talk about how we do this.

Part One: in the realm of code


I: Designing for security

The first stage of software design is the gathering of requirements. In open source software, we tend to use third-party code quite a bit – it’s how open source works: we stand on the shoulders of giants.
However, we can’t simply use what code we think is suitable.

The first check comes here. At WSO2, if we identify any kind of third-party code to be used, it first needs to be approved by the Engineering Management group, an internal group of seasoned architects who operate at a directorial level. For us, security is a first priority, not an afterthought.

The next set of checks come in the design phase. What are the communication protocols being used? How secure are they? Where is the data stored, and how? What endpoints are we exposing to the public? We go through a series of use cases to identify where this design can be broken, and work with the product design team to integrate our security concerns from the start.

II: Review, rinse, repeat

The next part is obvious: every developer is responsible for writing clean code [1, 2, 3].

Code written by each developer goes through a code quality review overseen by members of the relevant product team and the Platform Security Team. When submitting code for review, the developer has to submit the static code analysis reports – generated using tools like FindSecBugs [4]. This is a mandatory security check in the reviewing process. Only upon fixing all issues spotted in the first pass is the code merged into the repository.

III: Testing with the automated grindhouse

At WSO2, we use Jenkins quite a lot for automating the build process. It builds individual components; it packages components together; it constantly builds and re-builds.

A large part of our security testing is integrated right into this process. Jenkins first performs the OWASP Dependency Check [5, 6], which analyzes the project dependencies and produces vulnerability reports. Even after the selection process in the first stage is complete, there can be some vulnerabilities that we haven’t spotted – especially if they’ve only been discovered extremely recently.

Next, Jenkins uses FindSecBugs as a plugin; during each automated build cycle, it checks individual components and generates vulnerability reports, which are in turn submitted to the security team for review.

Jenkins also uses the OWASP Zed Attack Proxy for dynamic code analysis [7, 8]. During the dynamic security analysis, the entire URL tree of the product is scanned and well-known attacks and exploits are automatically performed; the results are reported. These reports, too, are investigated by the respective product team as well as the Platform Security Team.

Once the testing is complete and a product is ready to be released, the respective product team has to receive security clearance from the Platform Security Team. If any known vulnerabilities are still listed in the reports, the product team has to justify to us the existence of the reported vulnerability – a pretty hard job.

We find that developers may write code following all the best security practices, but when the code is merged together, it might still open up a vulnerability because of how everything integrates together.

Part Two: when humans happen


I: Preparing for the real world

There’s a saying: no battle plan survives contact with the customer. Even when security standards and processes are followed to the letter, our products have to run in the real world.

One of the most important things is building awareness. We put together a set of deployment patterns, security recommendations, and best practices to be followed when deploying our products; we also conduct public webinars to build awareness of security-related topics among WSO2 users.

II: Building internal Champions

Sometimes there is a gap between the product team and the security team, since the members of the security team might not be specialists in the product.

In order to bridge this gap, we have someone we call the ‘Security Champion’ in each product team. The Security Champion of the product team is responsible for maintaining the safety of the product and conducting vulnerability assessments.

All Security Champions (from different product teams) directly work with the Platform Security Team and share knowledge and experiences with each other. They also share the knowledge of the Platform Security Team back with the members of the product teams.

III: Patching up 

When a vulnerability is detected in a product, patches are created for all the versions that the issue exists in. If the severity of the vulnerability is catastrophic, these patches will be released to all customers immediately. If the severity is not catastrophic, we aggregate all patches developed during the month and release the lot at the end of the month as a security bulletin.

When a patch is ready, it’s sent out through WSO2 Update Manager (WUM) and publicly announced. Every version of any given product supported by WUM will receive the patches automatically. Note that unless the product is supported by WUM, security patches are publicly released only for the very latest version of the product.

Moving forward, we’ve started recording this in our documentation for the sake of preserving more patch information. This effort is still recent but will add up over time.

IV: Responding to Vulnerability Reports

Technology gets updated every day and there are always new vulnerabilities and exploits being discovered. We welcome contributions from our user community, developers, and security researchers to reinforce our product security. Over the years, a great many people – both customers and community members – have helped us make our products the best they can be.

When someone reports a vulnerability, we try to verify the issue and respond to the reporter. If the vulnerability is a true positive, the patching process begins.

Generally, we do ask that the reporter refrains from publicly disclosing the vulnerability until we’ve patched it – this is to prevent anyone who might be vulnerable from being targeted.

We’re always looking for ways to make this easier. For example, we’ve set up an easy, central point for our community to report issues.



[1] OWASP Secure Coding Practices

[2] Oracle Secure Coding Guidelines for Java

[3] SANS Secure Coding Guidelines

[4] Static Code Analysis for Java using FindBugs Plugin and Identifying Security Bugs with FindSecurityBugs Plugin

[5] OWASP Dependency Check CLI – Analyzing Vulnerabilities in 3rd Party Libraries

[6] Checking vulnerabilities in 3rd party dependencies using OWASP Dependency-Check Plugin in Jenkins

[7] Dynamic Scanning with OWASP ZAP for Identifying Security Threats

[8] Automating the boring stuff in development using ZAP and Jenkins : Continuous Integration