25 Oct, 2016

[Article] Scalable Traffic Manager Deployment Patterns for WSO2 API Manager - Part 1

  • Sanjeewa Malalgoda
  • Director - Engineering | Architect at WSO2


Introduction

A typical WSO2 API Manager deployment requires an additional instance to process data about every API request and make throttling decisions based on the applicable throttle policies. The new throttling implementation released with API Manager 2.0.0 overcomes many of the limitations of the earlier implementation. Throttling decisions are made by the Siddhi runtime and published to a JMS topic; since each and every API gateway is subscribed to this JMS topic, the gateways are notified of throttling decisions instantly.



The importance of Traffic Manager

A Traffic Manager instance in any distributed deployment is dedicated to throttle policy evaluation through the Siddhi runtime. When an API request reaches a gateway, the gateway sends event data to the Traffic Manager node, which then makes the throttling decisions.

In a nutshell, the Traffic Manager executes the throttle policies against the data coming with every event and makes decisions based on the applicability of each throttle policy available in the system. If a particular request should be throttled, the Traffic Manager sends those details to a JMS topic. Every gateway node is subscribed to this JMS topic, so the gateway nodes are notified of throttle decisions through JMS messages.
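To make this more concrete, the Siddhi execution plan for a subscription-level policy looks roughly like the following. This is only a sketch modeled on the execution plans API Manager 2.0.0 generates; the tier name, limit and time window are illustrative:

@Plan:name('Silver')

FROM RequestStream
SELECT messageID, (subscriptionTier == 'Silver') AS isEligible,
       subscriptionKey AS throttleKey
INSERT INTO EligibilityStream;

FROM EligibilityStream[isEligible==true]#throttler:timeBatch(1 min)
SELECT throttleKey, (count(messageID) >= 2000) AS isThrottled, expiryTimeStamp
GROUP BY throttleKey
INSERT ALL EVENTS INTO ResultStream;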

The throttle decision message that reaches the gateway contains the throttle key, the throttle window expiry time and the throttle state. These properties are used by the throttle handler to make decisions about incoming messages to the gateway.
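For illustration, the decision event arrives as a JMS map message with entries along these lines (the field names are based on the throttleData topic events in API Manager 2.0.0; the values shown are hypothetical):

throttleKey      = 1:/pizzashack/1.0.0      (identifies the throttled entity)
isThrottled      = true                     (throttle state)
expiryTimeStamp  = 1477366200000            (end of the throttle window, epoch ms)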

The data required by the Traffic Manager needs to be published from each gateway node, for each and every request that comes to the gateway. Binary or Thrift transport can be used to publish this data from the gateway to the Traffic Manager; data publishing is implemented asynchronously, in order to keep its overhead off the API request path.

This ensures that API requests are unaffected by the data publishing overhead: even if the Traffic Manager instance goes down, the API gateway will continue to serve API requests - without throttling, of course.



How to deploy an external Traffic Manager cluster

In this post we will mainly discuss how a Traffic Manager server can scale and be used in different deployment patterns. First we will create the required databases and share them across the Traffic Manager instances. You may use the same databases for the store/publisher/key manager as well, because all of these usually reside within the militarized zone (MZ).



Configure databases and connect them

This is not a mandatory step for setting up the instances, but in production deployments we generally use a central relational database to store data rather than local file-system-based databases. In this example, therefore, we will use MySQL databases to set up the deployment. However, if you need to, you can use any other database type, or the local H2 databases.

Create the databases and source the corresponding scripts:
mysql> create database 200tmregistry;
Query OK, 1 row affected (0.02 sec)
mysql> use 200tmregistry;
Database changed
mysql> source /home/sanjeewa/work/packs/scaled-traffic/wso2am-2.0.0-TM1/dbscripts/mysql.sql

mysql> create database 200tmusermanager;
Query OK, 1 row affected (0.00 sec)
mysql> use 200tmusermanager;
Database changed
mysql> source /home/sanjeewa/work/packs/scaled-traffic/wso2am-2.0.0-TM1/dbscripts/mysql.sql

mysql> create database 200tmamdb;
Query OK, 1 row affected (0.00 sec)
mysql> use 200tmamdb;
Database changed
mysql> source /home/sanjeewa/work/packs/scaled-traffic/wso2am-2.0.0/dbscripts/apimgt/mysql.sql
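The examples above connect as root. If you prefer a dedicated database user for the data sources, something like the following works (the user name and password are illustrative):

mysql> CREATE USER 'apimuser'@'%' IDENTIFIED BY 'apimpassword';
mysql> GRANT ALL PRIVILEGES ON 200tmregistry.* TO 'apimuser'@'%';
mysql> GRANT ALL PRIVILEGES ON 200tmusermanager.* TO 'apimuser'@'%';
mysql> GRANT ALL PRIVILEGES ON 200tmamdb.* TO 'apimuser'@'%';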

Next, add the following configurations to use the data sources:

  1. Add the following to <PRODUCT_HOME>/repository/conf/datasources/master-datasources.xml
    <datasource>
            <name>WSO2AM_DB</name>
            <description>The datasource used for the API Manager database</description>
            <jndiConfig>
                <name>jdbc/WSO2AM_DB</name>
            </jndiConfig>
            <definition type="RDBMS">
                <configuration>
                    <url>jdbc:mysql://apim_rdbms:3306/200tmamdb?autoReconnect=true</url>
                    <username>root</username>
                    <password>root</password>
                    <defaultAutoCommit>false</defaultAutoCommit>
                    <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                    <maxActive>50</maxActive>
                    <maxWait>60000</maxWait>
                    <testOnBorrow>true</testOnBorrow>
                    <validationQuery>SELECT 1</validationQuery>
                    <validationInterval>30000</validationInterval>
                </configuration>
            </definition>
    </datasource>
    
    <datasource>
            <name>WSO2UM_DB</name>
            <description>The datasource used by user manager</description>
            <jndiConfig>
                <name>jdbc/WSO2UM_DB</name>
            </jndiConfig>
            <definition type="RDBMS">
                <configuration>
                    <url>jdbc:mysql://apim_rdbms:3306/200tmusermanager?autoReconnect=true</url>
                    <username>root</username>
                    <password>root</password>
                    <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                    <maxActive>50</maxActive>
                    <maxWait>60000</maxWait>
                    <testOnBorrow>true</testOnBorrow>
                    <validationQuery>SELECT 1</validationQuery>
                    <validationInterval>30000</validationInterval>
                </configuration>
            </definition>
    </datasource>
    
    <datasource>
            <name>WSO2REG_DB</name>
            <description>The datasource used by the registry</description>
            <jndiConfig>
                <name>jdbc/WSO2REG_DB</name>
            </jndiConfig>
            <definition type="RDBMS">
                <configuration>
                    <url>jdbc:mysql://apim_rdbms:3306/200tmregistry?autoReconnect=true</url>
                    <username>root</username>
                    <password>root</password>
                    <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                    <maxActive>50</maxActive>
                    <maxWait>60000</maxWait>
                    <testOnBorrow>true</testOnBorrow>
                    <validationQuery>SELECT 1</validationQuery>
                    <validationInterval>30000</validationInterval>
               </configuration>
            </definition>
    </datasource>
    
  2. Add the following to <PRODUCT_HOME>/repository/conf/registry.xml (if you are starting the server with the Traffic Manager profile, rename registry_TM.xml as registry.xml and perform the configurations there):
    <dbConfig name="govregistry">
        <dataSource>jdbc/WSO2REG_DB</dataSource>
    </dbConfig>
    <remoteInstance url="https://publisher">
        <id>gov</id>
        <cacheId>root@jdbc:mysql://apim_rdbms:3306/200tmregistry</cacheId>
        <dbConfig>govregistry</dbConfig>
        <readOnly>false</readOnly>
        <enableCache>true</enableCache>
        <registryRoot>/</registryRoot>
    </remoteInstance>
    <mount path="/_system/governance" overwrite="true">
        <instanceId>gov</instanceId>
        <targetPath>/_system/governance</targetPath>
    </mount>
    <mount path="/_system/config" overwrite="true">
        <instanceId>gov</instanceId>
        <targetPath>/_system/config</targetPath>
    </mount>
    
  3. Add the following to <PRODUCT_HOME>/repository/conf/user-mgt.xml
    <Property name="isCascadeDeleteEnabled">true</Property>
    <Property name="dataSource">jdbc/WSO2UM_DB</Property>
    
  4. Copy the MySQL JDBC driver JAR to the <PRODUCT_HOME>/repository/components/lib directory.
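    For example (the connector file name and version here are just illustrative):

    cp mysql-connector-java-5.1.39-bin.jar <PRODUCT_HOME>/repository/components/lib/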



Configure Traffic Manager cluster

Configurations in Traffic Manager 01

For this test setup we will run two Traffic Manager instances on the same machine, so we need to run the servers with port offsets. For this setup the port offsets are as follows.

Traffic Manager 01: port offset 1
Traffic Manager 02: port offset 2

Edit the carbon.xml file in Traffic Manager node 01 and add the following:

<Offset>1</Offset>

Remove the existing webapps and jaggeryapps from the <PRODUCT_HOME>/repository/deployment/server folder.

Do the clustering configurations in the <PRODUCT_HOME>/repository/conf/axis2/axis2.xml file of both nodes. If you're running only the Traffic Manager profile, you can rename axis2_TM.xml to axis2.xml and do the configurations there. In each node we need to add the other Traffic Manager node as a well-known member. See the following complete configuration for more information:

<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
    <parameter name="membershipScheme">wka</parameter>
    <parameter name="domain">wso2.carbon.domain</parameter>
    <parameter name="localMemberHost">172.17.0.1</parameter>
    <parameter name="localMemberPort">4000</parameter>
    <members>
        <member>
            <hostName>172.17.0.1</hostName>
            <port>4001</port>
        </member>
    </members>
</clustering>

Traffic Manager node 01 needs to push events to the topic in Traffic Manager node 02 as well, because Traffic Manager node 02 can become the active node at any time. With the following configuration, each node will perform throttle calculations, add events to the topics deployed on the current node, and then publish the same events to the other node.

Update the following in /wso2am-2.0.0-TM1/repository/conf/event-processor.xml. Please refer to this article for more information about CEP clustering.

  <mode name="HA" enable="true">
      <nodeType>
          <worker enable="true"/>
          <presenter enable="false"/>
      </nodeType>
      <checkMemberUpdateInterval>10000</checkMemberUpdateInterval>
      <eventSync>
          <hostName>172.17.0.1</hostName>
      </eventSync>
      <management>
          <hostName>172.17.0.1</hostName>
          <port>10005</port>
      </management>
      <presentation>
          <hostName>172.17.0.1</hostName>
          <port>11000</port>
      </presentation>
      <persistence enable="true">
          <persistenceIntervalInMinutes>15</persistenceIntervalInMinutes>
          <persisterSchedulerPoolSize>10</persisterSchedulerPoolSize>
          <persister class="org.wso2.carbon.event.processor.core.internal.persistence.FileSystemPersistenceStore">
              <property key="persistenceLocation">cep_persistence</property>
          </persister>
      </persistence>
  </mode>

Add the following configuration to /wso2am-2.0.0-TM1/repository/deployment/server/eventpublishers/jmsEventPublisher2.xml:

<?xml version="1.0" encoding="UTF-8"?>
<eventPublisher name="jmsEventPublisher2" statistics="disable"
  trace="disable" xmlns="https://wso2.org/carbon/eventpublisher">
  <from streamName="org.wso2.throttle.globalThrottle.stream" version="1.0.0"/>
  <mapping customMapping="disable" type="map"/>
  <to eventAdapterType="jms">
	<property name="java.naming.factory.initial">org.wso2.andes.jndi.PropertiesFileInitialContextFactory</property>
	<property name="java.naming.provider.url">repository/conf/jndi2.properties</property>
	<property name="transport.jms.DestinationType">topic</property>
	<property name="transport.jms.Destination">throttleData</property>
	<property name="transport.jms.ConcurrentPublishers">allow</property>
	<property name="transport.jms.ConnectionFactoryJNDIName">TopicConnectionFactory</property>
  </to>
</eventPublisher>

Then add /home/sanjeewa/work/packs/scaled-traffic/wso2am-2.0.0-TM1/repository/conf/jndi2.properties - a new file holding connection details for the node 02 topics - with the following content (5674 is node 02's AMQP port: default port 5672 + offset 2):

connectionfactory.TopicConnectionFactory = amqp://admin:admin@clientid/carbon?brokerlist='tcp://localhost:5674'
connectionfactory.QueueConnectionFactory = amqp://admin:admin@clientID/test?brokerlist='tcp://localhost:5674'
topic.throttleData = throttleData

Also update /home/sanjeewa/work/packs/scaled-traffic/wso2am-2.0.0-TM1/repository/conf/jndi.properties by adding the following content (5673 is node 01's own AMQP port: default port 5672 + offset 1):

connectionfactory.TopicConnectionFactory = amqp://admin:admin@clientid/carbon?brokerlist='tcp://localhost:5673'
connectionfactory.QueueConnectionFactory = amqp://admin:admin@clientID/test?brokerlist='tcp://localhost:5673'

Then we start the server with the Traffic Manager profile. Use the following command to start Traffic Manager 01:

sh wso2server.sh -Dprofile=traffic-manager


Configurations in Traffic Manager 02

Edit the carbon.xml file in Traffic Manager node 02 and add the following:

<Offset>2</Offset>

Do the clustering configuration in the <PRODUCT_HOME>/repository/conf/axis2/axis2.xml file, in the same way we did for Traffic Manager 01, this time pointing back to node 01 as the well-known member:

<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
    <parameter name="membershipScheme">wka</parameter>
    <parameter name="domain">wso2.carbon.domain</parameter>
    <parameter name="localMemberHost">172.17.0.1</parameter>
    <parameter name="localMemberPort">4001</parameter>
    <members>
        <member>
            <hostName>172.17.0.1</hostName>
            <port>4000</port>
        </member>
    </members>
</clustering>

Update the following in /wso2am-2.0.0-TM2/repository/conf/event-processor.xml. Please refer to this article for more information about CEP clustering.

  <mode name="HA" enable="true">
      <nodeType>
          <worker enable="true"/>
          <presenter enable="false"/>
      </nodeType>
      <checkMemberUpdateInterval>10000</checkMemberUpdateInterval>
      <eventSync>
          <hostName>172.17.0.1</hostName>
      </eventSync>
      <management>
          <hostName>172.17.0.1</hostName>
          <port>10005</port>
      </management>
      <presentation>
          <hostName>172.17.0.1</hostName>
          <port>11000</port>
      </presentation>
      <persistence enable="true">
          <persistenceIntervalInMinutes>15</persistenceIntervalInMinutes>
          <persisterSchedulerPoolSize>10</persisterSchedulerPoolSize>
          <persister class="org.wso2.carbon.event.processor.core.internal.persistence.FileSystemPersistenceStore">
              <property key="persistenceLocation">cep_persistence</property>
          </persister>
      </persistence>
  </mode>

Traffic Manager node 02 needs to push events to the topic in Traffic Manager node 01 as well, because Traffic Manager node 01 can become the active node at any time. With the following configuration, each node performs throttle calculations, adds events to the topics deployed on the current node, and then publishes the same events to the other node. Add the following to /wso2am-2.0.0-TM2/repository/deployment/server/eventpublishers/jmsEventPublisher1.xml:

<?xml version="1.0" encoding="UTF-8"?>
<eventPublisher name="jmsEventPublisher1" statistics="disable"
  trace="disable" xmlns="https://wso2.org/carbon/eventpublisher">
  <from streamName="org.wso2.throttle.globalThrottle.stream" version="1.0.0"/>
  <mapping customMapping="disable" type="map"/>
  <to eventAdapterType="jms">
	<property name="java.naming.factory.initial">org.wso2.andes.jndi.PropertiesFileInitialContextFactory</property>
	<property name="java.naming.provider.url">repository/conf/jndi1.properties</property>
	<property name="transport.jms.DestinationType">topic</property>
	<property name="transport.jms.Destination">throttleData</property>
	<property name="transport.jms.ConcurrentPublishers">allow</property>
	<property name="transport.jms.ConnectionFactoryJNDIName">TopicConnectionFactory</property>
  </to>
</eventPublisher>

Now add /home/sanjeewa/work/packs/scaled-traffic/wso2am-2.0.0-TM2/repository/conf/jndi1.properties - a new file holding connection details for the node 01 topics - with the following content (5673 is node 01's AMQP port: default port 5672 + offset 1):

connectionfactory.TopicConnectionFactory = amqp://admin:admin@clientid/carbon?brokerlist='tcp://localhost:5673'
connectionfactory.QueueConnectionFactory = amqp://admin:admin@clientID/test?brokerlist='tcp://localhost:5673'
topic.throttleData = throttleData

Also update /home/sanjeewa/work/packs/scaled-traffic/wso2am-2.0.0-TM2/repository/conf/jndi.properties by adding the following content (5674 is node 02's own AMQP port: default port 5672 + offset 2). This configuration lets the node connect to its own broker, which runs with port offset 2:

connectionfactory.TopicConnectionFactory = amqp://admin:admin@clientid/carbon?brokerlist='tcp://localhost:5674'
connectionfactory.QueueConnectionFactory = amqp://admin:admin@clientID/test?brokerlist='tcp://localhost:5674'
topic.throttleData = throttleData

Then start the server with the following command:

sh wso2server.sh -Dprofile=traffic-manager

Since we have enabled clustering and HA for both Traffic Manager nodes, we will see a member join message once this server starts up. Check the carbon logs for log lines like the following; they confirm that both nodes have joined the cluster and can communicate with each other properly. If you have more nodes, you'll need to do the same for them as well.

[2016-08-25 11:30:14,950]  INFO - WKABasedMembershipScheme Member joined [af53fb1c-26c0-4756-a6c6-53901561163d]: /172.17.0.1:4000



Configuring API Publisher

Next, we need to configure the API Manager store, publisher, gateway and key manager for this deployment. If we do not have enough resources, we can run all of the above in one server to test the Traffic Manager deployment; but for the sake of completeness I will list the steps needed to configure each server type.

Here I will use the API publisher node as the admin node (which is used to create policies and manage the server). I have listed only the Traffic Manager related configurations: if you need the full details of connecting the gateway manager, store and key manager to each other, please refer to the API Manager clustering guideline document.

api-manager.xml can be changed as shown below. Here we enable advanced throttling in the admin/publisher node.

  <EnableAdvanceThrottling>true</EnableAdvanceThrottling>
  <DataPublisher>
      <Enabled>false</Enabled>
 …………………….
  </DataPublisher>
  <PolicyDeployer>
      <ServiceURL>https://<Traffic-Manager-LBURL>:443/services/</ServiceURL>
      <Username>admin</Username>
      <Password>admin</Password>
  </PolicyDeployer>
  <BlockCondition>
      <Enabled>false</Enabled>
…………………...
  </BlockCondition>
  <JMSConnectionDetails>
      <Enabled>false</Enabled>
   …………………….
  </JMSConnectionDetails>
  <JMSEventPublisherParameters>

As you can see above, we need to point this node to the Traffic Manager nodes. If we have only one node, we can point to it directly; if we have multiple nodes, we need to point to the load balancer URL that fronts the Traffic Manager cluster.

When we create or update a throttle policy from the admin dashboard, a database entry is created for the update. The dashboard also creates a Siddhi execution plan and deploys it in the Traffic Manager. The PolicyDeployer configuration is what allows the admin dashboard server to connect to the Traffic Manager and deploy policies. When we have multiple Traffic Managers, we need a deployment synchronization mechanism to keep the execution plans synchronized across all of them (if an execution plan is deployed to one node, it needs to be copied to the other nodes and deployed there as well), as sketched below.
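For example, since both Traffic Manager instances in this setup run on the same machine, a simple file-level synchronization of the execution plan deployment directory could look like the following. This is only a sketch - rsync is an assumption here, and any deployment synchronization mechanism (scheduled rsync, SVN-based dep-sync, etc.) will do:

rsync -av wso2am-2.0.0-TM1/repository/deployment/server/executionplans/ \
          wso2am-2.0.0-TM2/repository/deployment/server/executionplans/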



API Gateway configurations

In this deployment I will have only one gateway node: it will act as both manager and worker. When we create an API from the publisher, it will be deployed in this node. If we have high availability requirements, we need to deploy the gateway worker cluster and manager in an active/passive pattern. However, I will not go into all possible configurations: we will mainly focus on the configurations related to the traffic management feature.



Connect Gateway to one Traffic Manager

First we will configure the API gateway to connect to only one Traffic Manager. All throttling events will be published to that Traffic Manager instance, and the gateway will receive throttle decisions from the same instance.

Configure the throttling configuration in api-manager.xml as follows to connect to the Traffic Manager 01 node, which runs with port offset 1 (note that the receiver, auth, policy deployer and JMS ports below all include that offset):

<ThrottlingConfigurations>
       <EnableAdvanceThrottling>true</EnableAdvanceThrottling>
       <DataPublisher>
           <Enabled>true</Enabled>
           <Type>Binary</Type>
           <ReceiverUrlGroup>tcp://<Traffic-Manager-host>:9613</ReceiverUrlGroup>
           <AuthUrlGroup>ssl://<Traffic-Manager-host>:9713</AuthUrlGroup>
…………………….
       </DataPublisher>
       <PolicyDeployer>
           <ServiceURL>https://<Traffic-Manager-host>:9444/services/</ServiceURL>
………………..
       </PolicyDeployer>
……………….
       <JMSConnectionDetails>
           <Enabled>true</Enabled>
           <ServiceURL>tcp://<Traffic-Manager-host>:5673</ServiceURL>
 ………….
   </ThrottlingConfigurations>

If you have configured everything else, the gateway will communicate with Traffic Manager 01 and receive updates from it. Figure 1 depicts this simple deployment:

Figure 1



Connect Gateway to multiple Traffic Managers

A failover throttle data publisher pattern for API Gateway

In this case, the gateway publishes events to two different Traffic Manager instances. We will configure each API gateway worker to communicate with multiple Traffic Managers and push throttle events to them, with failover handling.

    <ThrottlingConfigurations>
        <EnableAdvanceThrottling>true</EnableAdvanceThrottling>
        <DataPublisher>
            <Enabled>true</Enabled>
            <Type>Binary</Type>
            <ReceiverUrlGroup>{tcp://127.0.0.1:9613, tcp://127.0.0.1:9614}</ReceiverUrlGroup>
            <!--ReceiverUrlGroup>tcp://${carbon.local.ip}:9612</ReceiverUrlGroup-->
            <AuthUrlGroup>{ssl://127.0.0.1:9713, ssl://127.0.0.1:9714}</AuthUrlGroup>
            <!--AuthUrlGroup>ssl://${carbon.local.ip}:9712</AuthUrlGroup-->
            <Username>${admin.username}</Username>
            <Password>${admin.password}</Password>
            <DataPublisherPool>
                <MaxIdle>1000</MaxIdle>
                <InitIdleCapacity>200</InitIdleCapacity>
            </DataPublisherPool>
            <DataPublisherThreadPool>
                <CorePoolSize>200</CorePoolSize>
                <MaxmimumPoolSize>1000</MaxmimumPoolSize>
                <KeepAliveTime>200</KeepAliveTime>
            </DataPublisherThreadPool>
        </DataPublisher>

With the above configuration, the API gateway will publish events to both Traffic Managers in a round robin fashion. It will also detect failures and then send messages only to the active nodes. In this case event duplication does not happen, as a single event always goes to a single server.
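If you want pure failover publishing within the group instead of round robin load balancing, the data agent URL group syntax uses a | separator rather than a comma. A sketch, keeping the ports used above:

<ReceiverUrlGroup>{tcp://127.0.0.1:9613 | tcp://127.0.0.1:9614}</ReceiverUrlGroup>
<AuthUrlGroup>{ssl://127.0.0.1:9713 | ssl://127.0.0.1:9714}</AuthUrlGroup>

With | separators, all events go to the first URL, and the publisher switches to the second only when the first becomes unreachable.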


A failover throttle data receiver pattern for API Gateway

In this pattern we connect the gateway workers to two Traffic Managers: if one goes down, the other can act as the Traffic Manager for the API gateway.

Thus we need to configure the API gateway to push throttle events to both Traffic Managers (see the diagram below to understand how this deployment works). As you can see, the gateway node pushes events to both Traffic Manager node 01 and node 02, and it receives throttle decision updates from both Traffic Managers using the failover data receiver pattern.

The Traffic Managers are fronted with a load balancer, as shown in the diagram. The admin dashboard/publisher server communicates with the Traffic Managers through the load balancer. When a user creates a new policy from the admin dashboard, that policy is stored in the database and published to a Traffic Manager node through the load balancer.

Since we have a deployment synchronization mechanism for the Traffic Managers, one Traffic Manager can update the other with the latest changes. Thus, it is sufficient to publish the throttle policy to one node in an active/passive pattern (if one node is active, keep sending requests to it; if it is not available, send to the other node). If we are planning to use an SVN-based deployment, newly created throttle policies should always be published to the manager node (a Traffic Manager instance), and the workers need to synchronize with it.

Figure 2

However, if the server storing and forwarding messages goes down, the entire system will go down - no matter what other servers and functions are involved. Thus, in order to make a robust messaging system it is mandatory to have a failover mechanism.

When we have a few Traffic Manager instances up and running in the system, each of these servers generally has a broker. If one broker goes down, the gateway automatically switches to another broker and continues receiving throttle messages. If that one also fails, it tries the next, and so on. Thus the risk of downtime in the whole system is greatly reduced.

In order to achieve this kind of high availability for data reception, we need to configure JMSConnectionParameters to connect to the brokers running within each Traffic Manager. For that, we add the following configuration to each gateway:

<JMSConnectionParameters>
    <transport.jms.ConnectionFactoryJNDIName>TopicConnectionFactory</transport.jms.ConnectionFactoryJNDIName>
    <transport.jms.DestinationType>topic</transport.jms.DestinationType>
    <java.naming.factory.initial>org.wso2.andes.jndi.PropertiesFileInitialContextFactory</java.naming.factory.initial>
    <connectionfactory.TopicConnectionFactory>amqp://admin:admin@clientID/carbon?failover='roundrobin'%26cyclecount='2'%26brokerlist='tcp://127.0.0.1:5673?retries='5'%26connectdelay='50';tcp://127.0.0.1:5674?retries='5'%26connectdelay='50''</connectionfactory.TopicConnectionFactory>
</JMSConnectionParameters>

If a single gateway has to communicate with multiple Traffic Managers, this is the easiest way to configure it to do so.



Conclusion

In this post we discussed the Traffic Manager and its basic usage in an API Manager distributed deployment. We also looked at how to configure an external Traffic Manager cluster with a WSO2 API Manager cluster. We don't always need to scale the Traffic Manager with load, as a single instance can handle multiple gateways at a time; usually you need to add an additional Traffic Manager node to the cluster once you add five or more gateways to the system. When a new Traffic Manager is added to the system, it can communicate with the other nodes and take updates from them. This enables us to spawn any number of Traffic Manager nodes as the gateway cluster grows dynamically. In summary, an API Manager distributed deployment with the Traffic Manager is well suited for high-volume traffic scenarios. In the next article we will discuss data publishing methodologies in more detail.

 
