WSO2 Venus

Dimuthu De Lanerolle

Java Tips .....

To get directory names inside a particular directory ....

private String[] getDirectoryNames(String path) {

        File fileName = new File(path);
        String[] directoryNamesArr = fileName.list(new FilenameFilter() {
            @Override
            public boolean accept(File current, String name) {
                return new File(current, name).isDirectory();
            }
        });
        log.info("Directories inside " + path + " are " + Arrays.toString(directoryNamesArr));
        return directoryNamesArr;
    }
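
If you are on Java 7 or later, the same listing can be done with NIO (java.nio.file); a minimal sketch under that assumption (the method name is mine):

private String[] getDirectoryNamesNio(String path) throws IOException {
    List<String> names = new ArrayList<String>();
    // the DirectoryStream is closed automatically by try-with-resources
    try (DirectoryStream<Path> stream = Files.newDirectoryStream(Paths.get(path))) {
        for (Path entry : stream) {
            if (Files.isDirectory(entry)) {
                names.add(entry.getFileName().toString());
            }
        }
    }
    return names.toArray(new String[names.size()]);
}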



To retrieve links on a web page ......

 private List<String> getLinks(String url) throws ParserException {
        Parser htmlParser = new Parser(url);
        List<String> links = new LinkedList<String>();

        NodeList tagNodeList = htmlParser.extractAllNodesThatMatch(new NodeClassFilter(LinkTag.class));
        for (int x = 0; x < tagNodeList.size(); x++) {
            LinkTag loopLinks = (LinkTag) tagNodeList.elementAt(x);
            String linkName = loopLinks.getLink();
            links.add(linkName);
        }
        return links;
    }
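
These classes come from the HTML Parser (org.htmlparser) library; the imports the snippet assumes are:

import org.htmlparser.Parser;
import org.htmlparser.filters.NodeClassFilter;
import org.htmlparser.tags.LinkTag;
import org.htmlparser.util.NodeList;
import org.htmlparser.util.ParserException;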


To search recursively for all files in a directory by file extension ......

private File[] getFilesWithSpecificExtensions(String dirPath) {

        // extension list - Do not specify "."
        List<File> files = (List<File>) FileUtils.listFiles(new File(dirPath),
                new String[]{"txt"}, true);

        File[] extensionFiles = new File[files.size()];

        Iterator<File> itFileList = files.iterator();
        int count = 0;

        while (itFileList.hasNext()) {
            File file = itFileList.next();
            extensionFiles[count] = file;
            count++;
        }
        return extensionFiles;
    }



Reading files in a zip

     public static void main(String[] args) throws IOException {
        final ZipFile file = new ZipFile("Your zip file path goes here");
        try
        {
            final Enumeration<? extends ZipEntry> entries = file.entries();
            while (entries.hasMoreElements())
            {
                final ZipEntry entry = entries.nextElement();
                System.out.println( "Entry "+ entry.getName() );
                readInputStream( file.getInputStream( entry ) );
            }
        }
        finally
        {
            file.close();
        }
    }
    private static int readInputStream(final InputStream is) throws IOException {
        final byte[] buf = new byte[8192];
        int read = 0;
        int cntRead;
        // count the bytes in the entry; read() returns -1 at end of stream
        while ((cntRead = is.read(buf, 0, buf.length)) >= 0) {
            read += cntRead;
        }
        return read;
    }
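
On Java 7 and later, ZipFile is Closeable, so try-with-resources can replace the explicit finally block; a minimal variant:

    public static void main(String[] args) throws IOException {
        // the ZipFile is closed automatically when the try block exits
        try (ZipFile file = new ZipFile("Your zip file path goes here")) {
            final Enumeration<? extends ZipEntry> entries = file.entries();
            while (entries.hasMoreElements()) {
                final ZipEntry entry = entries.nextElement();
                System.out.println("Entry " + entry.getName());
                readInputStream(file.getInputStream(entry));
            }
        }
    }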



Converting Object A to Long[]

 long[] primitiveArray = (long[]) oo;   // oo is the Object wrapping a long[]
        Long[] myLongArray = new Long[primitiveArray.length];
        int i = 0;

        for (long temp : primitiveArray) {
            myLongArray[i++] = temp;
        }
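
On Java 8 the same boxing is a one-liner with streams; a minimal sketch:

        Long[] myLongArray = Arrays.stream((long[]) oo).boxed().toArray(Long[]::new);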


Getting cookie details on HTTP clients

import org.apache.http.impl.client.DefaultHttpClient;

HttpClient httpClient = new DefaultHttpClient();

((DefaultHttpClient) httpClient).getCookieStore().getCookies(); 
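
To print the details, iterate the returned list (org.apache.http.cookie.Cookie); a short sketch:

List<Cookie> cookies = ((DefaultHttpClient) httpClient).getCookieStore().getCookies();
for (Cookie cookie : cookies) {
    System.out.println(cookie.getName() + " = " + cookie.getValue()
            + " (domain: " + cookie.getDomain() + ")");
}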

 HttpPost post = new HttpPost(URL);
        post.setHeader("User-Agent", USER_AGENT);
        post.addHeader("Referer",URL );
        List<NameValuePair> urlParameters = new ArrayList<NameValuePair>();
        urlParameters.add(new BasicNameValuePair("username", "admin"));
        urlParameters.add(new BasicNameValuePair("password", "admin"));
        urlParameters.add(new BasicNameValuePair("sessionDataKey", sessionKey));
        post.setEntity(new UrlEncodedFormEntity(urlParameters));
        return httpClient.execute(post);



Ubuntu Commands

1. Getting the process listening on a given port (e.g. port 9000)

sudo netstat -tapen | grep ":9000 "


Running a bash script from a python script

shell.py
-----------

import os

def main():
    os.system("sh hello.sh")

if __name__ == "__main__":
    main()


hello.sh
-----------
#Linux shell Script


echo "Hello Python from Shell";

Running the python script from Java ......

public void scriptExecutor() throws IOException {

    log.info("Start executing the script to trigger the docker build ... ");

    Process p = Runtime.getRuntime().exec(
            "python /home/dimuthu/Desktop/Python/shell.py");
    BufferedReader in = new BufferedReader(new InputStreamReader(
            p.getInputStream()));
    String line;
    while ((line = in.readLine()) != null) {   // log the full script output
        log.info(line);
    }

    log.info("Finished executing the script to trigger the docker build ... ");

}

Chandana Napagoda: Lifecycle Management with WSO2 Governance Registry

SOA lifecycle management is one of the core requirements of an enterprise governance suite. WSO2 Governance Registry 5.2.0 supports lifecycle management out of the box, and it also gives asset authors the opportunity to extend the out-of-the-box lifecycle functionality with their own extensions, based on organizational requirements. Further, the WSO2 Governance Registry supports multiple points of extensibility: handlers, lifecycles and customized asset UIs (RXT based) are the key types of extensions available.

Lifecycle: 

A lifecycle is defined as an SCXML-based XML element that contains:
  • A name
  • One or more states
  • A list of check items with role-based access control
  • One or more actions that are made available based on the check items that are satisfied

Adding a Lifecycle
To add a new lifecycle aspect, click on the Lifecycles menu item under the Govern section of the extensions tab in the admin console. It will show you a user interface where you can add your SCXML based lifecycle configuration. A sample configuration will be available for your reference at the point of creation.

Adding Lifecycle to Asset Type
The default lifecycle for a given asset type is picked up from the RXT definition. When an asset is created, the lifecycle is automatically attached to the asset instance. The lifecycle attribute should be defined in the RXT definition under the artifactType element as below.

<lifecycle>ServiceLifeCycle</lifecycle>

Multiple Lifecycle Support

There can be instances where a given asset goes through more than one lifecycle. For example, a given service can have a development lifecycle as well as a deployment lifecycle. Such state changes cannot be visualized through a single lifecycle, and the current lifecycle state depends on the context (development or deployment) you are looking at.

Adding Multiple Lifecycle to Asset Type
Adding multiple lifecycles to an asset type can be done in two primary ways.

Through Asset Definition (available with G-Reg 5.3.0): Here, you can define multiple lifecycle names in a comma-separated manner. The lifecycle name defined first is considered the default/primary lifecycle. The multiple lifecycles specified in the asset definition (RXT configuration) are attached to the asset when it is created. An example of a multiple lifecycle configuration is as below,
<lifecycle>ServiceLifeCycle,SampleLifeCycle</lifecycle>

Using Lifecycle Executor
Using custom executor Java code, you can assign another lifecycle to the asset. Executors are one of the facilitators that help extend WSO2 G-Reg functionality, and executors are associated with a Governance Registry lifecycle. The custom lifecycle executor class needs to implement the Execution interface provided by WSO2 G-Reg; a minimal sketch follows. You can find more details in the article below [Lifecycles and Aspects].
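
A minimal executor sketch, assuming the standard Execution interface from the governance registry extensions component (the class name and transition logic are mine):

import java.util.Map;
import org.wso2.carbon.governance.registry.extensions.interfaces.Execution;
import org.wso2.carbon.registry.core.jdbc.handlers.RequestContext;

public class SampleLifecycleExecutor implements Execution {

    public void init(Map parameterMap) {
        // read any executor parameters defined in the lifecycle configuration
    }

    public boolean execute(RequestContext context, String currentState, String targetState) {
        // custom logic to run on the state transition, e.g. attaching a
        // second lifecycle to the resource; returning false aborts the transition
        return true;
    }
}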

Isuru Perera: Benchmarking Java Locks with Counters

These days I am analyzing some Java Flight Recordings taken from WSO2 API Manager performance tests, and I found that the main processing threads were in the "BLOCKED" state in some situations.

The threads were mainly blocked due to "synchronized" methods in Java. Synchronizing methods in a critical section of request processing causes bottlenecks, and it has an impact on throughput and overall latency.

Then I wondered whether we could avoid synchronizing the whole method. The main problem with synchronized is that only one thread can run the critical section. When it comes to consumer/producer scenarios, we may need to give several threads read access to data while one thread gets exclusive write access to edit it. Java provides ReadWriteLock for these kinds of scenarios.

Java 8 provides another kind of lock named StampedLock. The StampedLock provides an alternative to the standard ReadWriteLock and also supports optimistic reads. I'm not going to compare the features and functionality of each lock type in this blog post; you may read StampedLock Idioms by Dr. Heinz M. Kabutz.

I'm more interested in finding out which lock is faster when it is accessed by multiple threads. Let's write a benchmark!


The code for benchmarks


There is an article on "Java 8 StampedLocks vs. ReadWriteLocks and Synchronized" by Tal Weiss, who is the CEO of Takipi. In that article, there is a benchmark for Java locks with different counter implementations. I'm using that counters benchmark as the basis for my benchmark. 

I also found another fork of the same benchmark and it has added the Optimistic Stamped version and Fair mode of ReentrantReadWriteLock. I found out about that from the slides on "Java 8 - Stamped Lock" by Haim Yadid after I got my benchmark results.

I also looked at the article "Java Synchronization (Mutual Exclusion) Benchmark" by Baptiste Wicht.

I'm using the popular JMH library for my benchmark. JMH has now become the standard way to write Java microbenchmarks. The benchmarks done by Tal Weiss do not use JMH.

See JMH Resources by Nitsan Wakart for an introduction to JMH and related links to get more information about JMH.

I used the thread grouping feature in JMH, with group-level states, for benchmarking the different counter implementations; a minimal sketch of this setup appears after the list of implementations below.

This is my first attempt at writing a proper microbenchmark, so if there are any problems with the code, please let me know. When we talk about benchmarks, it's important to know that you should not expect the same results in a real-life application; the code may behave differently at runtime.

There are 11 counter implementations. I also benchmarked the fair and non-fair modes of ReentrantLock, ReentrantReadWriteLock and Semaphore.

Class Diagram for Counter implementations

There are altogether 14 benchmark methods!

  1. Adder - Using LongAdder, introduced in Java 8
  2. Atomic - Using AtomicLong
  3. Dirty - Not using any mechanism to control concurrent access
  4. Lock Fair Mode - Using ReentrantLock
  5. Lock Non-Fair Mode - Using ReentrantLock
  6. Read Write Lock Fair Mode - Using ReentrantReadWriteLock
  7. Read Write Lock Non-Fair Mode - Using ReentrantReadWriteLock
  8. Semaphore Fair Mode - Using Semaphore
  9. Semaphore Non-Fair Mode - Using Semaphore
  10. Stamped - Using StampedLock
  11. Optimistic Stamped - Using StampedLock with tryOptimisticRead(); if it fails, the read lock is used, with no further tryOptimisticRead() attempts
  12. Synchronized - Using a synchronized block with an object
  13. Synchronized Method - Using the synchronized keyword on methods
  14. Volatile - Using the volatile keyword for the counter variable

The code is available at https://github.com/chrishantha/microbenchmarks/tree/v0.0.1-initial-counter-impl
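
To make the setup concrete, here is a minimal sketch of one counter benchmark using JMH thread groups (class and group names are mine; the full benchmark in the repository covers all 14 variants):

import java.util.concurrent.locks.StampedLock;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Group;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@State(Scope.Group)
public class OptimisticStampedCounterBenchmark {

    private final StampedLock lock = new StampedLock();
    private long counter;

    @Benchmark
    @Group("optimisticStamped")
    public long get() {
        // try an optimistic read first; fall back to a full read lock if a
        // write happened in between (no further optimistic attempts)
        long stamp = lock.tryOptimisticRead();
        long value = counter;
        if (!lock.validate(stamp)) {
            stamp = lock.readLock();
            try {
                value = counter;
            } finally {
                lock.unlockRead(stamp);
            }
        }
        return value;
    }

    @Benchmark
    @Group("optimisticStamped")
    public void increment() {
        long stamp = lock.writeLock();
        try {
            counter++;
        } finally {
            lock.unlockWrite(stamp);
        }
    }
}

Running this with, for example, -tg 4,4 asks JMH to assign four threads to the first method in the group and four to the second, which is the reader/writer distribution described below.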

Benchmark Results


As I mentioned, I used the thread grouping feature in JMH and ran the benchmarks for different thread group distributions. There were 10 iterations after 5 warm-up iterations. I measured only the throughput; measuring latency would be very difficult (the minimum throughput values were already around six digits).

The thread group distribution was passed to JMH via the "-tg" argument: the first number was used for "get" (read) operations and the second number for "increment" (write) operations.

There are many combinations we can use to run the benchmark tests. I used 12 combinations for thread group distribution and those are specified in the benchmark script.

These 12 combinations include the scenarios tested by Tal Weiss and Baptiste Wicht.

The benchmark was run on my Thinkpad T530 laptop.

$ hwinfo --cpu --short
cpu:
Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz, 3394 MHz
Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz, 3333 MHz
Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz, 3305 MHz
Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz, 3333 MHz
$ free -m
total used free shared buff/cache available
Mem: 15866 4424 7761 129 3680 11204
Swap: 18185 0 18185

Note: I added the "Dirty" counter only to compare the results, but I omitted it from the benchmark as no one wants to keep a dirty counter in their code. :)

I have committed all results to the GitHub repository, and I used gnuplot for the graphs.

It's very important to note that the graphs show the throughput for both reader and writer threads. If you need to look at individual reader and writer throughput, you can refer to the results at https://github.com/chrishantha/microbenchmarks/tree/v0.0.1-initial-counter-impl/counters/results

Let's see the results!

1 Reader, 1 Writer

2 Readers, 2 Writers

4 Readers, 4 Writers

5 Readers, 5 Writers

10 Readers, 10 Writers

16 Readers, 16 Writers

64 Readers, 64 Writers

128 Readers, 128 Writers

1 Reader, 19 Writers

19 Readers, 1 Writer

4 Readers, 16 Writers

16 Readers, 4 Writers



Conclusion


Following are some conclusions we can draw from the above results:

  1. The Optimistic Stamped counter has much better throughput when there is high contention.
  2. The fair modes of the locks are very slow.
  3. The Adder counter has better throughput than the Atomic counter when there are more writers.
  4. When there are fewer threads, the Synchronized and Synchronized Method counters have better throughput than a Read Write Lock (in non-fair mode, which is the default).
  5. The Lock counter also has better throughput than the Read Write Lock when there are fewer threads.

The Adder, Atomic and Volatile counter examples do not provide mutual exclusion, but they are thread-safe ways to keep a count. You may refer to the benchmark results for the other counters with Java locks if you want mutual exclusion around some logic in your code.

In this benchmark, the read write lock performed poorly. The reason could be that writers are continuously trying to acquire the write lock. In many situations a write lock is required far less frequently, so this benchmark is probably not a good way to evaluate performance for read write locks.

Please make sure that you run the benchmarks for your scenarios before making a decision based on these results. Even my benchmarks give slightly different results for each run. So, it's not a good idea to rely entirely on benchmarks and you must test the performance of the overall application.


If there are any questions or comments on the results or regarding benchmark code, please let me know.

Prabath Siriwardena: Building Microservices ~ Designing Fine-grained Systems

The book Building Microservices by Sam Newman is one of the very first on the subject. It's a great book that anyone who talks about, designs, or builds microservices must read; I strongly recommend buying it! This article reviews the book while highlighting the key takeaways from each chapter.

Jayanga Dissanayake: Deploying artifacts to WSO2 Servers using Admin Services

In this post I am going to show you how to deploy artifacts on WSO2 Enterprise Service Bus [1] and WSO2 Business Process Server [2] using Admin Services [3].

The usual practice for WSO2 artifact deployment is to enable DepSync [4] (Deployment Synchronization) and upload the artifacts via the management console of the master node, which then uploads the artifacts to the configured SVN repository and notifies the worker nodes about the new artifact via a cluster message. The worker nodes then download the new artifacts from the SVN repository and apply them.

In this approach you have to log in to the management console and deploy the artifacts manually.

With the increasing use of continuous integration tools, people are looking into the possibility of automating this task. There is a simple solution in which you configure a remote file copy into the relevant directory inside [WSO2_SERVER_HOME]/repository/deployment/server, but this is a very low-level solution.

Following is how to use Admin Services to do the same in a much easier and more manageable manner; a rough Java sketch of calling these services through generated stubs follows the step lists below.

NOTE: Usually all WSO2 servers accept deployables as .car files, but WSO2 BPS prefers .zip for deploying BPELs.

For ESB,
  1. Call 'deleteApplication' in the ApplicationAdmin service and delete the
    existing application
  2. Wait for 1 min.
  3. Call 'uploadApp' in the CarbonAppUploader service
  4. Wait for 1 min.
  5. Call 'getAppData' in ApplicationAdmin; if it returns application data,
    continue, else break
 For BPS,
  1. Call 'listDeployedPackagesPaginated' in
    BPELPackageManagementService with page=0 and
    packageSearchString="Name_"
  2. Save the information
    <ns1:version>
    <ns1:name>HelloWorld2‐1</ns1:name>
    <ns1:isLatest>true</ns1:isLatest>
    <ns1:processes/>
    </ns1:version>
  3. Use 'uploadService' in BPELUploader to upload the new BPEL zip
    file
  4. Call 'listDeployedPackagesPaginated' in
    BPELPackageManagementService again at 15-second intervals for 3 mins.
  5. If the name changes (due to a version upgrade, e.g.
    HelloWorld2‐4), continue (deployment succeeded)
  6. If the name doesn't change for 3 mins, break; the deployment has
    issues and needs human intervention
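
A rough sketch of driving these admin services from Java, assuming the stubs generated from the corresponding admin service WSDLs (stub class and method names follow the usual WSO2 conventions here, so verify them against your product version):

import org.apache.axis2.transport.http.HTTPConstants;
import org.wso2.carbon.application.mgt.stub.ApplicationAdminStub;
import org.wso2.carbon.authenticator.stub.AuthenticationAdminStub;

public class AdminServiceClient {

    public static void main(String[] args) throws Exception {
        String services = "https://localhost:9443/services/";

        // authenticate once and capture the session cookie
        AuthenticationAdminStub authStub =
                new AuthenticationAdminStub(services + "AuthenticationAdmin");
        authStub.login("admin", "admin", "localhost");
        String sessionCookie = (String) authStub._getServiceClient()
                .getLastOperationContext().getServiceContext()
                .getProperty(HTTPConstants.COOKIE_STRING);

        // reuse the cookie on subsequent admin service calls
        ApplicationAdminStub appStub =
                new ApplicationAdminStub(services + "ApplicationAdmin");
        appStub._getServiceClient().getOptions()
                .setProperty(HTTPConstants.COOKIE_STRING, sessionCookie);
        appStub.deleteApplication("MyCApp");   // step 1 for ESB
    }
}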

[1] http://wso2.com/products/enterprise-service-bus/
[2] http://wso2.com/products/business-process-server/
[3] https://docs.wso2.com/display/BPS320/Calling+Admin+Services+from+Apps
[4] https://docs.wso2.com/display/CLUSTER420/SVN-Based+Deployment+Synchronizer

Chathurika Erandi De Silva: Simple HTTP Inbound endpoint Sample: How to

What is an Inbound endpoint?

As per my understanding, an inbound endpoint is an entry point. Using this entry point, a message can be mediated directly from the transport layer to the mediation layer. Read more...

Following is a very simple demonstration on Inbound Endpoints using WSO2 ESB

1. Create a sequence


2. Save in Registry



3. Create an Inbound HTTP endpoint using the above sequence



Now it's time to see how to send requests. As I explained at the start of this post, the inbound endpoint is an entry point for a message. If the third step above is inspected, you can see that a port is given for the inbound endpoint. When incoming traffic is directed to that port, the inbound endpoint receives it and passes it straight to the sequence defined with it. Here the axis2 layer is skipped.

In the above scenario the request should be directed to http://localhost:8085/ as given below


Then the request is directed to the inbound endpoint and directly to the sequence.

Shashika Ubhayaratne: How to resolve "File Upload Failure" when importing a schema with a dependency in WSO2 GREG


Schema is one of the main asset models used in WSO2 GREG; you can find more information at https://docs.wso2.com/display/Governance520/Adding+a+Schema.

There can be situations where you want to import a schema to GREG which imports another schema (i.e. it has a dependency):

1. Let's say you have a schema file, for example original.xsd:
<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" targetNamespace="urn:listing1">
    <xsd:complexType name="Phone1">
        <xsd:sequence>
            <xsd:element name="areaCode1" type="xsd:int"/>
            <xsd:element name="exchange1" type="xsd:int"/>
            <xsd:element name="number1" type="xsd:int"/>
        </xsd:sequence>
    </xsd:complexType>
</xsd:schema>

2. Import the above schema on the publisher as per the instructions given at https://docs.wso2.com/display/Governance520/Adding+a+Schema.

3. Now, you need to import another schema which imports/has a reference to the previous schema, for example link.xsd:
<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" targetNamespace="urn:listing">
    <xsd:import namespace="urn:listing1"
                schemaLocation="original.xsd"/>
    <xsd:complexType name="Phone">
        <xsd:sequence>
            <xsd:element name="areaCode" type="xsd:int"/>
            <xsd:element name="exchange" type="xsd:int"/>
            <xsd:element name="number" type="xsd:int"/>
        </xsd:sequence>
    </xsd:complexType>
</xsd:schema>

Issue: You may encounter an error similar to the following:
ERROR {org.wso2.carbon.registry.extensions.handlers.utils.SchemaProcessor} - Could not read the XML Schema Definition file. this.schema.needs  
org.apache.ws.commons.schema.XmlSchemaException: Could not evaluate Schema Definition. This Schema contains Schema Includes that were not resolved
at org.apache.ws.commons.schema.SchemaBuilder.handleInclude(SchemaBuilder.java:1676)
at org.apache.ws.commons.schema.SchemaBuilder.handleXmlSchemaElement(SchemaBuilder.java:221)
at org.apache.ws.commons.schema.SchemaBuilder.build(SchemaBuilder.java:121)
at org.apache.ws.commons.schema.XmlSchemaCollection.read(XmlSchemaCollection.java:512)
at org.apache.ws.commons.schema.XmlSchemaCollection.read(XmlSchemaCollection.java:385)
at org.apache.ws.commons.schema.XmlSchemaCollection.read(XmlSchemaCollection.java:425)
....................
Caused by: org.wso2.carbon.registry.core.exceptions.RegistryException: Could not read the XML Schema Definition file. this.schema.needs
at org.wso2.carbon.registry.extensions.handlers.utils.SchemaProcessor.putSchemaToRegistry(SchemaProcessor.java:137)
at org.wso2.carbon.registry.extensions.handlers.XSDMediaTypeHandler.processSchemaUpload(XSDMediaTypeHandler.java:263)
at org.wso2.carbon.registry.extensions.handlers.XSDMediaTypeHandler.put(XSDMediaTypeHandler.java:186)
at org.wso2.carbon.registry.core.jdbc.handlers.HandlerManager.put(HandlerManager.java:2503)
at org.wso2.carbon.registry.core.jdbc.handlers.HandlerLifecycleManager.put(HandlerLifecycleManager.java:957)
at org.wso2.carbon.registry.core.jdbc.EmbeddedRegistry.put(EmbeddedRegistry.java:697)
at org.wso2.carbon.registry.core.caching.CacheBackedRegistry.put(CacheBackedRegistry.java:550)
at org.wso2.carbon.registry.core.session.UserRegistry.putInternal(UserRegistry.java:827)
at org.wso2.carbon.registry.core.session.UserRegistry.access$1000(UserRegistry.java:60)
at org.wso2.carbon.registry.core.session.UserRegistry$11.run(UserRegistry.java:803)
at org.wso2.carbon.registry.core.session.UserRegistry$11.run(UserRegistry.java:800)
at java.security.AccessController.doPrivileged(Native Method)
at org.wso2.carbon.registry.core.session.UserRegistry.put(UserRegistry.java:800)
at org.wso2.carbon.registry.resource.services.utils.AddResourceUtil.addResource(AddResourceUtil.java:88)

Solution 1:
Zip all the schemas together and upload the archive.

Solution 2:
Specify the absolute URL of the dependent schema file, for example:
 <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" targetNamespace="urn:listing">  
<xsd:import namespace="urn:listing1"
schemaLocation="http://www.example.com/schema/original.xsd"/>





sanjeewa malalgoda: How to disable throttling completely or partially for a given API - WSO2 API Manager 1.10 and below versions

Sometimes a particular requirement (allowing any number of unauthenticated requests) applies to only a few APIs in your deployment. In that case we usually know those APIs at the time we design the system, so one option is to remove the throttling handler from the handler list of the given API. Then requests dispatched to that API will not perform any throttling-related operations. To do that, edit the synapse API definition manually and remove the handler from there.

We usually do not recommend this, because if you update the API again from the publisher, the handler may be added back (each update from the publisher UI replaces the current synapse configuration). But if you have only one or two APIs related to this use case and they are not updated frequently, this approach can be used.

Another approach is to update the velocity template so that it does not add the throttling handler for a few predefined APIs. In that case, even if you update the API from the publisher, the deployer will still remove the throttling handler from the synapse configuration. To do this we need to know the list of APIs that do not require throttling. Note that throttling is then disabled for all resources in those APIs.

Sometimes you may wonder what the impact of a very large max request count for the unauthenticated tier is.
Throttling itself does not add a huge delay to a request; considered alone, it takes less than 10% of the complete gateway processing time. So we can confirm that a large max request count on an unauthenticated tier will not cause a major performance issue. If you don't need to disable throttling for the entire API and only need to allow any number of unauthenticated requests at the tier level, that is the only option available now.


Please consider the above facts and see what the best solution for your use case is. If you need further assistance or any clarification, please let us know; we would be happy to discuss further and help you find the best possible solution for your use case.

Chathurika Erandi De Silva: Encoded context to URI using REST_URL_POSTFIX with query parameters

WSO2 ESB provides a property called REST_URL_POSTFIX that can be used to append context to the target endpoint when invoking a REST endpoint.


With the upcoming ESB 5.0.0 release, the value of REST_URL_POSTFIX can contain non-standard special characters such as spaces, and these will be encoded when sending to the backend. This provides versatility because we can't expect every resource path to be free of non-standard special characters.

In order to demonstrate this, I have a REST service with the following context path

user/users address/address new/2016.05

You can see this contains standard as well as non-standard characters.

Furthermore, I am sending the values needed for service execution as query parameters, and while appending the above context to the target endpoint, I need to send the query parameters as well.

The request is as follows

http://<ip>:8280/testapi?id=1&name=jane&address=wso2

In order to achieve my requirement I have created the following sequence (a minimal class-mediator equivalent is sketched below)
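
In Java terms, the sequence sets the REST_URL_POSTFIX property in the axis2 scope; a minimal class-mediator sketch (the class name is mine; in an actual sequence you would normally use the Property mediator with scope="axis2" instead):

import org.apache.synapse.MessageContext;
import org.apache.synapse.core.axis2.Axis2MessageContext;
import org.apache.synapse.mediators.AbstractMediator;

public class RestUrlPostfixMediator extends AbstractMediator {

    public boolean mediate(MessageContext synCtx) {
        org.apache.axis2.context.MessageContext axis2Ctx =
                ((Axis2MessageContext) synCtx).getAxis2MessageContext();
        // REST_URL_POSTFIX (axis2 scope) is appended to the target endpoint URL
        axis2Ctx.setProperty("REST_URL_POSTFIX",
                "/user/users address/address new/2016.05");
        return true;
    }
}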


Afterwards I have created a simple API in WSO2 ESB and used the above sequence as below


When invoked, the following log entry is visible in the console (wire logs should be enabled), indicating the accomplishment of the mission:

[2016-05-20 15:13:42,549] DEBUG - wire HTTP-Sender I/O dispatcher-1 << "GET /SampleRestService/restservice/TestUserService/user/users%20address/address%20new/2016.05?id=1&name=jane&address=wso2 HTTP/1.1[\r][\n]"


Evanthika Amarasiri: [WSO2 Governance Registry] - How to analyse the history of registry resources

Assume that you are working on a setup where you need to analyse the history of registry resources: one might want to know what type of operations have been done to a resource throughout its lifetime. This is possible with a simple DB query.

select * from REG_LOG where REG_PATH='resource_path';

i.e. select * from REG_LOG where REG_PATH='/_system/governance/apimgt/statistics/ga-config.xml';


As an example, assume I want to find out the actions taken on the resource ga-config.xml. When I query the REG_LOG table, below is the result I receive.




When you look at the above result set, you will notice that the REG_ACTION column shows different values in each row. The actions these values represent are defined in the class Activity.java. For example, REG_ACTION=10 means that the resource has been moved from its current location, and REG_ACTION=7 means that it has been deleted from the system. Likewise, by going through [1], you can find the rest of the actions that can be taken on registry resources.

Therefore, as explained above, by going through the REG_LOG table of the registry database, you can audit the actions taken on each and every resource.

[1] - https://github.com/wso2/carbon-kernel/blob/4.4.x/core/org.wso2.carbon.registry.api/src/main/java/org/wso2/carbon/registry/api/Activity.java

Chandana Napagoda: G-Reg and ESB integration scenarios for Governance


WSO2 Enterprise Service Bus (ESB) employs the WSO2 Governance Registry for storing configuration elements and resources such as WSDLs, policies, service metadata, etc. By default, WSO2 ESB ships with an embedded registry, which is entirely based on the WSO2 Governance Registry (G-Reg). Based on your requirements, you can also connect to a remotely running WSO2 Governance Registry over a remote JDBC connection, known as a 'JDBC registry mount'.

Other than the registry/repository aspect of WSO2 G-Reg, its primary use cases are design-time governance and runtime governance with seamless lifecycle management; this is known as the governance aspect of WSO2 G-Reg. With this governance aspect, more flexibility is provided for integration with WSO2 ESB.

When integrating WSO2 ESB with WSO2 G-Reg in the governance aspect, there are three options available. They are:

1). Share Registry space with both ESB and G-Reg
2). Use G-Reg to push artifacts into ESB node
3). ESB pulls artifacts from the G-Reg when needed

Let’s go through the advantages and disadvantages of each option. Here we consider a scenario where metadata corresponding to ESB artifacts such as endpoints is stored in G-Reg as asset types. Each asset type has its own lifecycle (e.g. the ESB Endpoint RXT has its own lifecycle). With a G-Reg lifecycle transition, synapse configurations (e.g. endpoints) are created; those become the runtime configurations of ESB.


Share Registry space with both ESB and G-Reg

The embedded registry of every WSO2 product consists of three partitions: local, config and governance.

Local Partition : Used to store configuration and runtime data that is local to the server.
Configuration Partition : Used to store product-specific configurations. This partition can be shared across multiple instances of the same product.
Governance Partition : Used to store configuration and data that are shared across the whole platform. This partition typically includes services, service descriptions, endpoints and data sources.

How the integration should work:
When sharing registry space between ESB and G-Reg, only the governance partition is shared, using JDBC. When a G-Reg lifecycle transition happens on the ESB endpoint RXT, it creates the ESB synapse endpoint configuration and copies it into the relevant registry location using a Copy Executor. Then ESB can retrieve that endpoint synapse configuration from the shared registry when required.

Advantages:
  • Easy to configure
  • Reduced amount of custom code implementation
Disadvantages:
  • If servers are deployed across data centers, JDBC connections will be created between data centers (possibly over WAN or public networks).
  • With the number of environments, there will be many database mounts.
  • The ESB registry space will be exposed via G-Reg.

Use G-Reg to push artifacts into ESB node
How the integration should work:
In this pattern, G-Reg creates the synapse endpoints and pushes them into the relevant ESB setup (e.g. Dev/QA/Prod) using Remote Registry operations. After G-Reg pushes the appropriate synapse configuration into ESB, APIs or services can be consumed.

Advantages:
  • Provides more flexibility from the G-Reg side to manage ESB assets
  • Multiple ESB environments can be plugged in on the go
  • ESB API/Service invocation can be restricted until the G-Reg lifecycle operation is completed

ESB pulls artifacts from G-Reg

How the integration should work:


In this pattern, when a lifecycle transition happens, G-Reg creates synapse-level endpoints in the relevant registry location.

When an API or service invocation happens, ESB first looks up the endpoint in its own registry. If it is not available, ESB pulls the endpoint from G-Reg using Remote Registry operations. Here the ESB-side endpoint lookup has to be implemented as a custom implementation.


Advantages:
  • Users can deploy the ESB API/Service before the G-Reg lifecycle transition happens.
Disadvantages:
  • The first API/Service call is delayed until the remote API call completes.
  • The first API/Service call fails if the G-Reg lifecycle transition is not completed.
  • Less control compared to options 1 and 2.

Chanaka Fernando: WSO2 ESB 5.0.0 Beta Released

The WSO2 team is happy to announce the beta release of WSO2 ESB 5.0.0. This version of the ESB has major improvements to its usability in real production deployments as well as in development environments. Here are the main features of the ESB 5.0.0 version.

The mediation debugger provides the capability to debug mediation flows from the WSO2 Developer Studio tooling platform. It allows users to view/edit/delete properties and the payload of the messages passing through each and every mediator.
You can find more information about this feature in the post below.
Analytics for WSO2 ESB 5.0.0 (Beta) — https://github.com/wso2/analytics-esb/releases/tag/v1.0.0-beta

Malintha Adikari: K-Means clustering with Scikit-learn


K-Means clustering is a popular unsupervised learning algorithm. In simple terms, we have an unlabeled dataset: a dataset without any clue about how to categorize each row. Below are a few example rows from an unlabeled dataset about crime data in the USA. There is one row for each state, with a set of features related to crime information. We have this dataset, but we don't know what to do with it. One thing we can do is find similarities between the states; in other words, we can prepare a few buckets and put states into those buckets based on the similarities in their crime information.


State        Murder    Assault    UrbanPop    Rape
Alabama      13.2      236        58          21.2
Alaska       10        263        48          44.5
Arizona      8.1       294        80          31
Arkansas     8.8       190        50          19.5
California   9         276        91          40.6


Now let's discuss how we can implement K-Means clustering for our dataset with Scikit-learn. You can download the USA crime dataset from my GitHub location.


Import KMeans from Scikit-learn.


from sklearn.cluster import KMeans


Load your data file into a Pandas dataframe.


df = Utils.get_dataframe("crime_data.csv")


Create a KMeans model providing the required number of clusters. Here I have set the number of clusters to 5.


KMeans_model = KMeans(n_clusters=5, random_state=1)


Refine your data by removing non-numeric data, unimportant features, etc.


df.drop(['crime$cluster'], inplace=True, axis=1)
df.rename(columns={df.columns[0]: 'State'}, inplace=True)


Select only the numeric data in your dataset.


numeric_columns = df._get_numeric_data()


Train the KMeans clustering model.


KMeans_model.fit(numeric_columns)


Now you can see the label of each row in your training dataset.


labels = KMeans_model.labels_
print(labels)


Predict a new state's crime cluster as follows


print(KMeans_model.predict([[15, 236, 58, 21.2]]))


Malintha Adikari: Visualization in Machine Learning

Scatter Plots

We can visualize the correlations (relationships between two variables) between features, or between features and the classes, using scatter plots. In a scatter plot we can use an n-dimensional space to visualize correlations between n variables. We plot the data points and can then use the output to determine correlations between the variables. Following is a sample 3-D scatter plot (from http://rgraphgallery.blogspot.com/2013/04/rg-3d-scatter-plots-with-vertical-lines.html)


Chathurika Erandi De Silva: Statistics and ESB -> ESB Analytics Server: Message Tracing

This is the second post on the ESB Analytics server; I hope you have read the previous one.

When ESB receives a request, it is taken in as a message. This message consists of a header and a body. The Analytics server provides a comprehensive way of viewing the message that the ESB works with throughout the cycle; this is called tracing.

Normally ESB takes in a request, mediates it through some logic and then sends it to the backend. The response from the backend is again mediated through some logic and returned to the client. The analytics server graphically illustrates this flow, so the message flow can be easily viewed and understood.

Sample Message Flow



Further, it provides a graphical view of message tracing by providing details on the message passed through the ESB. Transport properties and message context properties are illustrated with respect to the mediators in the flow.

Sample Mediator Properties



Basically, the capability of viewing the message flow and tracing it graphically is provided, which is user friendly and simple.

sanjeewa malalgoda: How to add a SOAP and WSDL based API to WSO2 API Manager via the REST API

If you are using the old jaggery API, you can add an API the same way the jaggery application does. To do that we need to follow the steps below. Since these exact 3 steps (design > implement > manage) are only used by the jaggery applications, they are not listed in the API documents, so I have listed them here for your reference. One thing to note is that we cannot add a SOAP endpoint with swagger content (SOAP APIs cannot be defined with swagger content).

Steps to create soap API with WSDL.
============================

Log in and obtain a session.
curl -X POST -c cookies http://localhost:9763/publisher/site/blocks/user/login/ajax/login.jag -d 'action=login&username=admin&password=admin'

Design API
curl -F name="test-api" -F version="1.0" -F provider="admin" -F context="/test-apicontext" -F visibility="public" -F roles="" -F wsdl="https://svn.apache.org/repos/asf/airavata/sandbox/xbaya-web/test/Calculator.wsdl" -F apiThumb="" -F description="" -F tags="testtag" -F action="design" -F swagger='{"apiVersion":"1.0","swaggerVersion":"1.2","authorizations":{"oauth2":{"scopes":[],"type":"oauth2"}},"apis":[{"index":0,"file":{"apiVersion":"1.0","basePath":"http://10.100.5.112:8280/test-apicontext/1.0","swaggerVersion":"1.2","resourcePath":"/test","apis":[{"index":0,"path":"/test","operations":[{"nickname":"get_test","auth_type":"Application & Application User","throttling_tier":"Unlimited","method":"GET","parameters":[
{"dataType":"String","description":"AccessToken","name":"Authorization","allowMultiple":false,"required":true,"paramType":"header"}
,
{"description":"RequestBody","name":"body","allowMultiple":false,"required":true,"type":"string","paramType":"body"}
]},{"nickname":"options_test","auth_type":"None","throttling_tier":"Unlimited","method":"OPTIONS","parameters":[
{"dataType":"String","description":"AccessToken","name":"Authorization","allowMultiple":false,"required":true,"paramType":"header"}
,
{"description":"RequestBody","name":"body","allowMultiple":false,"required":true,"type":"string","paramType":"body"}
]}]}]},"description":"","path":"/test"}],"info":{"title":"test-api","termsOfServiceUrl":"","description":"","license":"","contact":"","licenseUrl":""}}' -k -X POST -b cookies https://localhost:9443/publisher/site/blocks/item-design/ajax/add.jag

Implement API
curl -F implementation_methods="endpoint" -F endpoint_type="http" -F endpoint_config='{"production_endpoints":
{"url":"http://appserver/resource/ycrurlprod","config":null}
,"endpoint_type":"http"}' -F production_endpoints="http://appserver/resource/ycrurlprod" -F sandbox_endpoints="" -F endpointType="nonsecured" -F epUsername="" -F epPassword="" -F wsdl="https://svn.apache.org/repos/asf/airavata/sandbox/xbaya-web/test/Calculator.wsdl" -F wadl="" -F name="test-api" -F version="1.0" -F provider="admin" -F action="implement" -F swagger='{"apiVersion":"1.0","swaggerVersion":"1.2","authorizations":{"oauth2":{"scopes":[],"type":"oauth2"}},"apis":[{"index":0,"file":{"apiVersion":"1.0","basePath":"http://10.100.5.112:8280/test-apicontext/1.0","swaggerVersion":"1.2","resourcePath":"/test","apis":[{"index":0,"path":"/test","operations":[{"nickname":"get_test","auth_type":"Application & ApplicationUser","throttling_tier":"Unlimited","method":"GET","parameters":[
{"dataType":"String","description":"AccessToken","name":"Authorization","allowMultiple":false,"required":true,"paramType":"header"}
,
{"description":"RequestBody","name":"body","allowMultiple":false,"required":true,"type":"string","paramType":"body"}
]},{"nickname":"options_test","auth_type":"None","throttling_tier":"Unlimited","method":"OPTIONS","parameters":[
{"dataType":"String","description":"AccessToken","name":"Authorization","allowMultiple":false,"required":true,"paramType":"header"}
,
{"description":"RequestBody","name":"body","allowMultiple":false,"required":true,"type":"string","paramType":"body"}
]}]}]},"description":"","path":"/test"}],"info":{"title":"test-api","termsOfServiceUrl":"","description":"","license":"","contact":"","licenseUrl":""}}' -k -X POST -b cookies https://localhost:9443/publisher/site/blocks/item-design/ajax/add.jag

Manage API.
curl -F default_version_checked="" -F tier="Unlimited" -F transport_http="http" -F transport_https="https" -F inSequence="none" -F outSequence="none" -F faultSequence="none" -F responseCache="disabled" -F cacheTimeout="300" -F subscriptions="current_tenant" -F tenants="" -F bizOwner="" -F bizOwnerMail="" -F techOwner="" -F techOwnerMail="" -F name="test-api" -F version="1.0" -F provider="admin" -F action="manage" -F swagger='{"paths":{"/*":{"post":{"responses":{"201":{"description":"Created"}},"x-auth-type":"Application & Application
User","x-throttling-tier":"Unlimited"},"put":{"responses":{"200":{"description":"OK"}},"x-auth-type"
:"Application & Application User","x-throttling-tier":"Unlimited"},"get":{"responses":{"200":{"description"
:"OK"}},"x-auth-type":"Application & Application User","x-throttling-tier":"Unlimited"},"delete":{"responses"
:{"200":{"description":"OK"}},"x-auth-type":"Application & Application User","x-throttling-tier":"Unlimited"
}}},"swagger":"2.0","info":{"title":"testAPI","version":"1.0.0"}}' -F outSeq="" -F faultSeq="json_fault" -F tiersCollection="Unlimited" -k -X POST -b cookies https://localhost:9443/publisher/site/blocks/item-design/ajax/add.jag

sanjeewa malalgoda: WSO2 API Manager - how to change a resource stored in the registry on each tenant load

As you all know, in API Manager we store tiers and a lot of other data in the registry. In some scenarios we may need to modify and update them before tenant users use them. In such cases we can write a tenant service creator listener and do what we need. In this article we will see how we can change the tiers.xml file before the tenant is loaded into the system. Please note that with this change we cannot change tier values from the UI, as this code replaces them on each tenant load.

Java code.

CustomTenantServiceCreator.java

package org.wso2.custom.observer.registry;
import org.apache.axis2.context.ConfigurationContext;
import org.apache.commons.io.IOUtils;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.wso2.carbon.context.PrivilegedCarbonContext;
import org.wso2.carbon.registry.core.exceptions.RegistryException;
import org.wso2.carbon.registry.core.session.UserRegistry;
import org.wso2.carbon.utils.AbstractAxis2ConfigurationContextObserver;
import org.wso2.carbon.registry.core.Resource;
import org.wso2.carbon.apimgt.impl.internal.ServiceReferenceHolder;
import org.wso2.carbon.apimgt.impl.APIConstants;


import java.io.IOException;
import java.io.InputStream;
import java.util.Iterator;
public class CustomTenantServiceCreator extends AbstractAxis2ConfigurationContextObserver {

    private static final Log log = LogFactory.getLog(CustomTenantServiceCreator.class);

    @Override
    public void createdConfigurationContext(ConfigurationContext configurationContext) {
        try {
            int tenantId = PrivilegedCarbonContext.getThreadLocalCarbonContext().getTenantId();
            UserRegistry registry = ServiceReferenceHolder.getInstance().getRegistryService()
                    .getGovernanceSystemRegistry(tenantId);
            // read the tiers.xml bundled with this component and overwrite the
            // registry copy for the tenant being loaded
            InputStream inputStream =
                    CustomTenantServiceCreator.class.getResourceAsStream("/tiers.xml");
            byte[] data = IOUtils.toByteArray(inputStream);
            Resource resource = registry.newResource();
            resource.setContent(data);
            registry.put(APIConstants.API_TIER_LOCATION, resource);
        } catch (RegistryException e) {
            log.error("Error while updating tiers.xml for tenant", e);
        } catch (IOException e) {
            log.error("Error while reading tiers.xml", e);
        }
    }
}




CustomObserverRegistryComponent.java

package org.wso2.custom.observer.registry;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.osgi.framework.BundleContext;
import org.osgi.service.component.ComponentContext;
import org.wso2.carbon.utils.Axis2ConfigurationContextObserver;
import org.wso2.carbon.utils.multitenancy.MultitenantConstants;
import org.wso2.carbon.apimgt.impl.APIManagerConfigurationService;
import org.wso2.carbon.apimgt.impl.APIManagerConfiguration;
/**
 * @scr.component name="org.wso2.custom.observer.services" immediate="true"
 * @scr.reference name="api.manager.config.service"
 *                interface=
 *                "org.wso2.carbon.apimgt.impl.APIManagerConfigurationService"
 *                cardinality="1..1"
 *                policy="dynamic" bind="setAPIManagerConfigurationService"
 *                unbind="unsetAPIManagerConfigurationService"
 */
public class CustomObserverRegistryComponent {
    private static final Log log = LogFactory.getLog(CustomObserverRegistryComponent.class);
    public static final String TOPICS_ROOT = "forumtopics";
    private static APIManagerConfiguration configuration = null;
    protected void activate(ComponentContext componentContext) throws Exception {
        if (log.isDebugEnabled()) {
            log.debug("Forum Registry Component Activated");
        }
        try{
            CustomTenantServiceCreator tenantServiceCreator = new CustomTenantServiceCreator();
            BundleContext bundleContext = componentContext.getBundleContext();
            bundleContext.registerService(Axis2ConfigurationContextObserver.class.getName(), tenantServiceCreator, null);
         
        }catch(Exception e){
            log.error("Could not activate Forum Registry Component " + e.getMessage());
            throw e;
        }
    }
 
 
    protected void setAPIManagerConfigurationService(APIManagerConfigurationService amcService) {
        log.debug("API manager configuration service bound to the API host objects");
        configuration = amcService.getAPIManagerConfiguration();
    }

    protected void unsetAPIManagerConfigurationService(APIManagerConfigurationService amcService) {
        log.debug("API manager configuration service unbound from the API host objects");
        configuration = null;
    }
}



Complete source code for the project:
https://drive.google.com/file/d/0B3OmQJfm2Ft8b3cxU3QwU0MwdWM/view?usp=sharing

Once the tenant is loaded you will see the updated values as follows.



Prabath Siriwardena: Enabling FIDO U2F Multi-Factor Authentication for the AWS Management Console with the WSO2 Identity Server

This tutorial on Medium explains how to enable authentication for the AWS Management Console against the corporate LDAP server and then enable multi-factor authentication (MFA) with FIDO. FIDO is soon becoming the de facto standard for MFA, backed by the top players in the industry including Google, Paypal, Microsoft, Alibaba, Mozilla, eBay and many more.


Malintha Adikari: Model Evaluation with Cross Validation


We can use cross validation to evaluate the prediction accuracy of a model. We keep a subset of our dataset aside, without using it for training, so that data is new or unknown to the model once we train it with the rest of the data. Then we use that unused subset to evaluate the accuracy of the trained model. That is, we first partition the data into a test dataset and a training dataset, train the model with the training dataset, and finally evaluate the model with the test dataset. This process is called "cross validation".

In this blog post I demonstrate how we can cross validate a decision tree classification model built using scikit-learn + Pandas. Please visit the decision-tree-classification-using-scikit-learn post if you haven't created your classification model yet. As a recap, at this point we have a decision tree model which predicts whether a given person on the Titanic is going to survive the tragedy or die in the cold, dark sea :(.

In the previous blog post we used the entire Titanic dataset for training the model. Let's see how we can use only 80% of the data for training and the remaining 20% for evaluation.

# separating 80% data for training
train = df.sample(frac=0.8, random_state=1)

# rest 20% data for evaluation purpose
test = df.loc[~df.index.isin(train.index)]

Then we train the model normally, but using only the training dataset:

dt = DecisionTreeClassifier(min_samples_split=20, random_state=9)
dt.fit(train[features], train["Survived"])

Then we predict the result for the remaining 20% of the data:

predictions = dt.predict(test[features])


Then we can calculate the mean squared error of the predictions vs. the actual values as a measurement of the prediction accuracy of the trained model:

MSE = (1/n) * Σ (predicted_i - actual_i)²

We can use scikit-learn's built-in mean squared error function for this. First, import it into the current module:

from sklearn.metrics import mean_squared_error

Then we can do the calculation as follows

mse = mean_squared_error(predictions, test["Survived"])
print(mse)

You can play with the data partition ratio and the features, and observe how the mean squared error varies with those parameters.


sanjeewa malalgoda: How to avoid an issue with default APIs in WSO2 API Manager 1.10

In API Manager 1.10 you may see an issue in mapping resources when you create another version of an API and make it the default version. In this post let's see how we can overcome that issue.

Let's say we have a resource with a path parameter like this:
 /resource/{resourceId}

Then we create another API version and make it the default.
As you can see from the XML generated in the synapse config corresponding to the API, the resource is created correctly in admin--resource_v1.0.xml:

<resource methods="GET" uri-template="/resource/{resourceId} " faultSequence="fault">

But if you check the newly created default version, you will see the following:

<resource methods="GET"
             uri-template="$util.escapeXml($resource.getUriTemplate())"
             faultSequence="fault">

Therefore, we cannot call the resource of the API via the gateway with the API's default version.
Assume we have an API named testAPI with 3 versions: 1.0.0, 2.0.0 and 3.0.0.
By defining a default API, what we do is create a proxy for the default version. So we create a default proxy which can accept any URL pattern and deploy it.
For that we recommend the /* pattern; it simply mediates requests to the correct version. Say the default version is 2.0.0: the default version API
will forward requests to that version. So you can have all your resources in version 2.0.0 and they will be processed there, with any complex URL pattern.

So for the default API, a resource definition which matches any request is sufficient. Here is the configuration to match it.

  <resource methods="POST PATCH GET DELETE HEAD PUT"
             url-mapping="/*"
             faultSequence="fault">

To confirm this, you can look at the content of the default API. You will see it points to the actual API with the given version, so all your resources remain in the versioned API as they are.

Here is the complete velocity template file for the default API.
Please copy it and replace the wso2am-1.10.0/repository/resources/api_templates/default_api_template.xml file.

<api xmlns="http://ws.apache.org/ns/synapse"  name="$!apiName" context="$!apiContext" transports="$!transport">
   <resource methods="POST PATCH GET DELETE HEAD PUT"
             uri-template="/*"
             faultSequence="fault">
    <inSequence>
        <property name="isDefault" expression="$trp:WSO2_AM_API_DEFAULT_VERSION"/>
        <filter source="$ctx:isDefault" regex="true">
            <then>
                <log level="custom">
                    <property name="STATUS" value="Faulty invoking through default API.Dropping message to avoid recursion.."/>
                </log>
                <payloadFactory media-type="xml">
                    <format>
                        <am:fault xmlns:am="http://wso2.org/apimanager">
                            <am:code>500</am:code>
                            <am:type>Status report</am:type>
                            <am:message>Internal Server Error</am:message>
                            <am:description>Faulty invoking through default API</am:description>
                        </am:fault>
                    </format>
                    <args/>
                </payloadFactory>
                <property name="HTTP_SC" value="500" scope="axis2"/>
                <property name="RESPONSE" value="true"/>
                <header name="To" action="remove"/>
                <property name="NO_ENTITY_BODY" scope="axis2" action="remove"/>
                <property name="ContentType" scope="axis2" action="remove"/>
                <property name="Authorization" scope="transport" action="remove"/>
                <property name="Host" scope="transport" action="remove"/>
                <property name="Accept" scope="transport" action="remove"/>
                <send/>
            </then>
            <else>
                <header name="WSO2_AM_API_DEFAULT_VERSION" scope="transport" value="true"/>
                #if( $transport == "https" )
                <property name="uri.var.portnum" expression="get-property('https.nio.port')"/>
                #else
                <property name="uri.var.portnum" expression="get-property('http.nio.port')"/>
                #end

            <send>
                <endpoint>
                #if( $transport == "https" )
                <http uri-template="https://localhost:{uri.var.portnum}/$!{fwdApiContext}">
                #else
                <http uri-template="http://localhost:{uri.var.portnum}/$!{fwdApiContext}">
                #end
                        <timeout>
                            <duration>60000</duration>
                            <responseAction>fault</responseAction>
                        </timeout>
                        <suspendOnFailure>
                             <progressionFactor>1.0</progressionFactor>
                        </suspendOnFailure>
                        <markForSuspension>
                            <retriesBeforeSuspension>0</retriesBeforeSuspension>
                            <retryDelay>0</retryDelay>
                        </markForSuspension>
                    </http>
                </endpoint>
            </send>
            </else>
        </filter>
        </inSequence>
        <outSequence>
        <send/>
        </outSequence>
    </resource>
        <handlers>
            <handler class="org.wso2.carbon.apimgt.gateway.handlers.common.SynapsePropertiesHandler"/>
        </handlers>
</api>


Evanthika Amarasiri: How to solve the famous token regeneration issue in an API-M cluster

In an API Manager clustered environment (in my case, a publisher, a store, two gateway nodes and two key manager nodes fronted by a WSO2 ELB 2.1.1), if you come across an error saying Error in getting new accessToken while regenerating tokens, with an exception like the one below at the Key Manager node, then this is due to a configuration issue.

TID: [0] [AM] [2014-09-19 05:41:28,321]  INFO {org.wso2.carbon.core.services.util.CarbonAuthenticationUtil} -  'Administrator@carbon.super [-1234]' logged in at [2014-09-19 05:41:28,321-0400] {org.wso2.carbon.core.services.util.CarbonAuthenticationUtil}
TID: [0] [AM] [2014-09-19 05:41:28,537] ERROR {org.wso2.carbon.apimgt.keymgt.service.APIKeyMgtSubscriberService} -  Error in getting new accessToken {org.wso2.carbon.apimgt.keymgt.service.APIKeyMgtSubscriberService}
TID: [0] [AM] [2014-09-19 05:41:28,538] ERROR {org.apache.axis2.rpc.receivers.RPCMessageReceiver} -  Error in getting new accessToken {org.apache.axis2.rpc.receivers.RPCMessageReceiver}
java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
    at java.lang.reflect.Method.invoke(Method.java:619)
    at org.apache.axis2.rpc.receivers.RPCUtil.invokeServiceClass(RPCUtil.java:212)
    at org.apache.axis2.rpc.receivers.RPCMessageReceiver.invokeBusinessLogic(RPCMessageReceiver.java:117)
    at org.apache.axis2.receivers.AbstractInOutMessageReceiver.invokeBusinessLogic(AbstractInOutMessageReceiver.java:40)
    at org.apache.axis2.receivers.AbstractMessageReceiver.receive(AbstractMessageReceiver.java:110)
    at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
    at org.apache.axis2.transport.http.HTTPTransportUtils.processHTTPPostRequest(HTTPTransportUtils.java:172)
    at org.apache.axis2.transport.http.AxisServlet.doPost(AxisServlet.java:146)
    at org.wso2.carbon.core.transports.CarbonServlet.doPost(CarbonServlet.java:231)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:755)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
    at org.eclipse.equinox.http.servlet.internal.ServletRegistration.service(ServletRegistration.java:61)
    at org.eclipse.equinox.http.servlet.internal.ProxyServlet.processAlias(ProxyServlet.java:128)
    at org.eclipse.equinox.http.servlet.internal.ProxyServlet.service(ProxyServlet.java:68)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
    at org.wso2.carbon.tomcat.ext.servlet.DelegationServlet.service(DelegationServlet.java:68)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:305)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
    at org.wso2.carbon.tomcat.ext.filter.CharacterSetFilter.doFilter(CharacterSetFilter.java:61)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)
    at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:472)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
    at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:178)
    at org.wso2.carbon.tomcat.ext.valves.CarbonTomcatValve$1.invoke(CarbonTomcatValve.java:47)
    at org.wso2.carbon.webapp.mgt.TenantLazyLoaderValve.invoke(TenantLazyLoaderValve.java:56)
    at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:47)
    at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:141)
    at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:156)
    at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:936)
    at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:52)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
    at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1004)
    at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589)
    at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1653)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1176)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
    at java.lang.Thread.run(Thread.java:853)
Caused by:
org.wso2.carbon.apimgt.keymgt.APIKeyMgtException: Error in getting new accessToken
    at org.wso2.carbon.apimgt.keymgt.service.APIKeyMgtSubscriberService.renewAccessToken(APIKeyMgtSubscriberService.java:281)
    ... 45 more
Caused by:
java.lang.RuntimeException: Token revoke failed : HTTP error code : 404
    at org.wso2.carbon.apimgt.keymgt.service.APIKeyMgtSubscriberService.renewAccessToken(APIKeyMgtSubscriberService.java:252)
    ... 45 more


This is what you have to do to solve this issue.

1. In your Gateway nodes, change the host and port values of the following APIs, which reside under $APIM_HOME/repository/deployment/server/synapse-configs/default/api:
_TokenAPI_.xml
_AuthorizeAPI_.xml
_RevokeAPI_.xml
2. If you get an HTTP 302 error on the Key Manager side while regenerating the token, check the RevokeURL in the api-manager.xml of the Key Manager node and make sure it points to the NIO port of the Gateway node.
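For reference, the relevant entry in the Key Manager's api-manager.xml looks something like the following; the element name and default value are quoted from memory for APIM 1.x, so verify them against your version:

<!-- Must point to the Gateway node's NIO port (8243 by default for HTTPS), not the servlet port -->
<RevokeAPIURL>https://${carbon.local.ip}:8243/revoke</RevokeAPIURL>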

sanjeewa malalgodaHow to change API Manager's authentication failure message to match the message's default content type.


The error response sent from the WSO2 gateway is usually in XML format. If needed, we can change this behavior, because there is an extension point to customize error message generation. For authentication failures and throttling failures, dedicated handlers generate these messages.

For auth failures, the following sequence is used:
/repository/deployment/server/synapse-configs/default/sequences/_auth_failure_handler.xml

The sequence can be updated to look up the Content-Type dynamically and return the response in the matching format. Once changed, it will work for both XML and JSON calls.

Change the file to add a dynamic lookup of the message's Content-Type, i.e.,
From

<sequence name="auth_failure_handler" xmlns="http://ws.apache.org/ns/synapse">

<property name="error_message_type" value="application/json"/>
<sequence key="cors_request_handler"/>
</sequence>

To
<sequence name="auth_failure_handler" xmlns="http://ws.apache.org/ns/synapse">

<property name="error_message_type" value="get-property('transport', 'Content-Type')"/>
<sequence key="cors_request_handler"/>
</sequence>
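To verify the change, you can call a secured API with an invalid token and different Content-Type headers; a quick sketch with curl, where the gateway host, port and API path are placeholders:

curl -k -H "Authorization: Bearer invalid_token" -H "Content-Type: application/json" https://localhost:8243/myapi/1.0/resource
curl -k -H "Authorization: Bearer invalid_token" -H "Content-Type: application/xml" https://localhost:8243/myapi/1.0/resource

With the dynamic lookup in place, the first call should return the authentication failure as JSON and the second as XML.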


Malintha AdikariDecision Tree Classification using scikit-learn



Please visit the "Preparing Machine Learning developing environment" blog post if you haven't prepared your development environment yet.

First we have to load data from a dataset. We can use a dataset we already have at hand, or an online dataset. Use the following Python method to load the titanic.csv data file into a Pandas[1] dataframe.

Here I have used my downloaded csv file. You can download that file from https://github.com/caesar0301/awesome-public-datasets/tree/master/Datasets or Google it.


def load_data():
  df = pand.read_csv("/home/malintha/projects/ML/datasets/titanic.csv")
  return df

Before you use Pandas functions, you have to import the module.

import pandas as pand

Now we can print the first few rows of the dataframe using

print(df.head(), end = "\n\n")

And it will output

PassengerId  Survived  Pclass  Name                                                 Sex     Age  SibSp  Parch  Ticket            Fare     Cabin  Embarked
1            0         3       Braund, Mr. Owen Harris                              male    22   1      0      A/5 21171         7.25            S
2            1         1       Cumings, Mrs. John Bradley (Florence Briggs Thayer)  female  38   1      0      PC 17599          71.2833  C85    C
3            1         3       Heikkinen, Miss. Laina                               female  26   0      0      STON/O2. 3101282  7.925           S
4            1         1       Futrelle, Mrs. Jacques Heath (Lily May Peel)         female  35   1      0      113803            53.1     C123   S



We can remove the “Name”, “Ticket” and “PassengerId” features from the dataset, as they are less important than the other features. We can use Pandas' 'drop' facility to remove columns from a dataframe.

df.drop(['Name','Ticket','PassengerId'], inplace=True, axis=1)

The next task is mapping nominal data into integers in order to create the model in scikit-learn.

Here we have 3 nominal features in our dataset:
  1. Sex
  2. Cabin
  3. Embarked

We can replace the original values with integers using the following code segment.


def map_nominal_to_integers(df):
  df_refined = df.copy()
  sex_types = df_refined['Sex'].unique()
  cabin_types = df_refined['Cabin'].unique()
  embarked_types = df_refined["Embarked"].unique()
  sex_types_to_int = {name: n for n, name in enumerate(sex_types)}
  cabin_types_to_int = {name: n for n, name in enumerate(cabin_types)}
  embarked_types_to_int = {name: n for n, name in enumerate(embarked_types)}
  df_refined["Sex"] = df_refined["Sex"].replace(sex_types_to_int)
  df_refined["Cabin"] = df_refined["Cabin"].replace(cabin_types_to_int)
  df_refined["Embarked"] = df_refined["Embarked"].replace(embarked_types_to_int)
  return df_refined

We have one more step to shape up our dataset. If you look at the refined dataset carefully, you can see “NaN” values for some of the age values. We should replace these NaN values with an appropriate integer value; Pandas provides a built-in function for this. I will use 0 as the replacement for NaN.

df["Age"].fillna(0, inplace=True)

Now we are all set to build the decision tree from our refined dataset. We have to choose the features and the target for the decision tree.

features = ['Pclass','Sex','Age','SibSp','Parch','Fare','Cabin','Embarked']
X = df[features]
Y = df["Survived"]

Here, X is the feature set and Y is the target set. Now we build the decision tree. For this you should import the scikit-learn decision tree classifier into your Python module.
from sklearn.tree import DecisionTreeClassifier

And build the decision tree with our feature set and the target set:

dt = DecisionTreeClassifier(min_samples_split=20, random_state=9)
dt.fit(X,Y)

Now it is time to do a prediction with our trained decision tree. We can take a sample feature value set and predict the target for it. Note that predict expects a 2D array (a list of samples), so we wrap the single sample in a list:

Z = [1, 1, 22.0, 1, 0, 7.25, 0, 0]
print(dt.predict([Z]))
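Beyond a single prediction, it helps to know how well the tree generalizes. Below is a minimal sketch under this post's assumptions (the refined dataframe, X and Y from above; note that train_test_split lives in sklearn.cross_validation in the scikit-learn version used in this series, but moved to sklearn.model_selection in later versions):

from sklearn.cross_validation import train_test_split  # sklearn.model_selection in newer versions

# Hold out 30% of the rows for testing, train on the remaining 70%
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.3, random_state=9)

dt = DecisionTreeClassifier(min_samples_split=20, random_state=9)
dt.fit(X_train, Y_train)

# Mean accuracy on the held-out passengers
print(dt.score(X_test, Y_test))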


sanjeewa malalgodaNew API Manager Throttling implementation for API Manager 2.0.0

As you know, at the moment we are working on a completely new throttling implementation for the next API Manager release. In this article I will briefly summarize what we are going to do in that release. Please note that these facts are based on the discussions happening at architecture@wso2.com and the developer mailing lists, and this content may be subject to change before the release.

Existing API Manager Throttling Design
  • Based on a Hazelcast IAtomicLong distributed counter.
  • High performance with accuracy.
  • A bit difficult to design complex policies.
  • Cannot define policies specific to a given API.
  • Can throttle based on request count only.

Advantages of the New Design
  • Based on a Central Policy Server.
  • Extensible and flexible: advanced rules can be defined based on API properties such as headers, users, etc.
  • Efficient Siddhi (https://github.com/wso2/siddhi) based implementation of the throttle core.
  • Efficient DB lookups with the Bloom Filter based implementation of Siddhi.
  • Throttling policies can be designed based on both request count and bandwidth.

New architecture and message flow.

Screenshot from 2016-05-09 21-37-23.png

Message Flow and How It Works
  • The API Gateway is associated with a new throttle handler.
  • The throttle handler extracts all the relevant properties from the message context and generates throttle keys.
  • For API level throttling, the API level key is the context:version combination; for resources it is context:version:resource_path:http_method (see the example after this list).
  • The throttle handler does a map lookup for throttled events when a new request comes to the API gateway.
  • Once the throttling process is completed, the handler hands the message context to the agent.
  • The throttle data process and publisher agent asynchronously processes the message and pushes events to the Central Policy Server.
  • The Central Policy Server evaluates complex rules based on the events and updates the topic accordingly.
  • All gateway workers fetch throttled events from the database from time to time in an asynchronous manner.
  • Two layers of cache are used to store throttling decisions; local decisions are based on map lookups.
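For example, for a hypothetical API with context /pizzashack and version 1.0.0 (names invented here purely for illustration), the keys described above would look like:

API level key:      /pizzashack:1.0.0
Resource level key: /pizzashack:1.0.0:/order:POST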

So far we have identified the following throttle/rate limit conditions:
  • Number of requests per unit time (what we have now). This can be associated with the tier.
  • Data amount transferred through the gateway per unit time. This can also be associated with the tier.
  • Dynamic rules (such as blocking some IP or API). These should be applied globally.
  • Rate limiting (this should be applied at node level, as replicating counters would cause performance issues). Ex: requests in flight at a given time is 500 for an API.

Content Based Throttling
We have identified some parameters available in the message context which we can use as throttling parameters.
We may use one or more of them to design a policy.
  • IP Address.
  • IP Address Range.
  • Query Parameters.
  • Transport Headers.
  • Http Verb.
  • Resource path.
Policy Design Scenarios
You may design new policies based on request count per unit time interval, or on bandwidth for a given time period.
Policies can be designed per API; therefore this facilitates the current resource level throttling implementation.
Example: For an API named “testAPI”, resource “people”, HTTP GET allows 5 requests per minute and POST allows 3 requests per minute.
If our API supports only mobile devices, then we can add a policy at API level to check the user agent and throttle.

System administrators can define a set of policies which apply across all APIs.
Example: If user bob is identified as a fraudulent user, then an admin can set a policy to block bob.
The same way, we can block a given IP address, user agent, token, etc.

Policies can be applied at multiple levels such as:
  • API Level
  • Application Level
  • Global Level(custom policy and blocking conditions)
  • Subscription Level
We can create a new policy using the admin dashboard user interface.
It will then create a policy file and send it to the central policy server.
The central policy server will deploy it.
Here I have attached some images of the admin dashboard related to throttling policy design.

How to create API/Resource level policy with multiple conditions

policyEditor1.png



policyEditor2.png


How to block certain requests based on API, Application, IP address and User.
blockEntity.png


How to add and use custom policy to handle custom throttling scenarios based on requirements.
customPolicy.png


Key Advantages
  • Ability to design complex throttle policies.
  • Advanced policy designer user interface.
  • Users can design policies with multiple attributes present in request.
    • Ex: transport headers, body content, HTTP verb etc. 
  • A tier can be designed by combining multiple policies
    • Ex: For a given IP range, a given HTTP verb and a given header, limit access.
  • If the client is a mobile device, throttle based on the user agent header.
  • Can design API specific policies.

Bhathiya Jayasekara[WSO2 APIM] Setting up API Manager Distributed Setup with Puppet Scripts





In this post we are going to use Puppet to set up a 4-node API Manager distributed setup. You can find the puppet scripts I used in this git repo.

NOTE: This blog post can be useful to troubleshoot any issues you get while working with puppet.

My puppet scripts use the below IPs for the nodes. You have to replace them with yours.

Puppet Master/MySQL :   192.168.57.92
Publisher:   192.168.57.93
Store:   192.168.57.94
Key Manager:   192.168.57.96
Gateway:   192.168.57.97

That's just some information. Now let's start setting up each node, one by one.

1) Configure Puppet Master/ MySQL Node 

1. Install NTP, Puppet Master and MySQL.

> sudo su
> ntpdate pool.ntp.org ; apt-get update && sudo apt-get -y install ntp ; service ntp restart
> cd /tmp
> wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb
> dpkg -i puppetlabs-release-trusty.deb
> apt-get update
> apt-get install puppetmaster
> apt-get install mysql-server


2. Change hostname in /etc/hostname to puppet (This might need a reboot)

3. Update /etc/hosts with below entry. 

127.0.0.1 puppet


4. Download and copy https://github.com/bhathiya/apim-puppet-scripts/tree/master/puppet directory to /etc/puppet

5. Replace IPs in copied puppet scripts. 

6. Before restarting the puppet master, clean all certificates, including the puppet master's certificate, which has its old DNS alt names.

> puppet cert clean --all


7. Restart puppet master

> service puppetmaster restart

8. Download and copy jdk-7u79-linux-x64.tar.gz to /etc/puppet/environments/production/modules/wso2base/files/jdk-7u79-linux-x64.tar.gz

9. Download and copy wso2am-2.0.0-SNAPSHOT.zip to 
/etc/puppet/environments/production/modules/wso2am/files/wso2am-2.0.0-SNAPSHOT.zip

10. Download and copy https://github.com/bhathiya/apim-puppet-scripts/tree/master/db_scripts directory to /opt/db_scripts

11. Download and copy https://github.com/bhathiya/apim-puppet-scripts/tree/master/run_puppet.sh file to /opt/run_puppet.sh (Copy required private keys as well, to ssh to puppet agent nodes)

12. Open and update run_puppet.sh script as required, and set read/execution rights.

> chmod 755 run_puppet.sh


2) Configure Puppet Agents 

Repeat these steps in each agent node.

1. Install Puppet.

> sudo su
> apt-get update
> apt-get install puppet


2. Change hostname in /etc/hostname to apim-node-1 (This might need a reboot)

3. Update /etc/hosts with puppet master's host entry.

192.168.57.92 puppet

4. Download and copy https://github.com/bhathiya/apim-puppet-scripts/tree/master/puppet-agents/setup.sh file to /opt/setup.sh

5. Set execution rights.

> chmod 755 setup.sh


6. Download and copy https://github.com/bhathiya/apim-puppet-scripts/tree/master/puppet-agents/deployment.conf file to /opt/deployment.conf (Edit this as required. For example, product_profile should be one of api-store, api-publisher, api-key-manager and gateway-manager)


3) Execute Database and Puppet Scripts

Go to /opt in puppet master and run ./run_puppet.sh (or you can run setup.sh in each agent node.)
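If you prefer to trigger an agent run manually on a particular node instead of using the wrapper scripts, the standard Puppet command for a one-off run is shown below (assuming the agent was installed as above):

> puppet agent --test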

If you have any questions, please post below.


References:
[1] https://github.com/wso2/puppet-modules/wiki/Use-WSO2-Puppet-Modules-in-puppet-master-agent-Environment
[2] https://github.com/wso2/puppet-modules/

sanjeewa malalgodaFix WSO2 API Manager Token generation issue due to no matching grant type(Error occurred while calling token endpoint: HTTP error code : 400)


If you have migrated an API Manager setup, then sometimes you may see this error due to missing entries in tables:
"Error occurred while calling token endpoint: HTTP error code : 400"

If we don't have a grant_type in the IDN_OAUTH_CONSUMER_APPS table, that may cause this error.
The grant_type may be empty for the Default Application in the IDN_OAUTH_CONSUMER_APPS table. Also, in the IDN_OAUTH2_ACCESS_TOKEN table, grant_type may be NULL.

When you try to generate tokens for that application, you may see an error like below:
"Error occurred while calling token endpoint: HTTP error code : 400"
This happens because the token regeneration process tries to match the grant_types of IDN_OAUTH2_ACCESS_TOKEN with the grant_types of IDN_OAUTH_CONSUMER_APPS.

To fix that, we can update the grant_type of the IDN_OAUTH2_ACCESS_TOKEN table to 'client_credentials', and the grant_type of IDN_OAUTH_CONSUMER_APPS to 'urn:ietf:params:oauth:grant-type:saml2-bearer iwa:ntlm implicit refresh_token client_credentials authorization_code password'.
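A minimal SQL sketch of the updates described above; the column names (GRANT_TYPE / GRANT_TYPES) and the WHERE clauses are assumptions, so verify them against your IDN schema and scope the updates to the affected applications before running:

-- Assumed schema: verify column names and scope before running.
UPDATE IDN_OAUTH2_ACCESS_TOKEN
   SET GRANT_TYPE = 'client_credentials'
 WHERE GRANT_TYPE IS NULL;

UPDATE IDN_OAUTH_CONSUMER_APPS
   SET GRANT_TYPES = 'urn:ietf:params:oauth:grant-type:saml2-bearer iwa:ntlm implicit refresh_token client_credentials authorization_code password'
 WHERE GRANT_TYPES IS NULL OR GRANT_TYPES = '';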

If this affects multiple places, do the same for all applications, then restart the servers.
Now you should be able to generate tokens.

Chathurika Erandi De SilvaStatistics redefined -> ESB Analytics Server - First peek


I am writing this blog post as an introduction to the upcoming WSO2 ESB Analytics server, since I am working with it these days. It will be released with ESB 500 and, once released, can be downloaded from here

The Analytics server provides a comprehensive graphical view of the requests received by a given proxy service, sequence, API, inbound endpoint or endpoint. These requests can be viewed on an hourly, daily, monthly or yearly basis. If you need more granularity, a custom defined time frame can be used as well. Based on the time frame you select, a diagrammatic representation of the overall requests received by that particular entity is given, incorporating both the count and percentage of successes and failures.


Diagrammatic view of overall request count per proxy





Furthermore, it provides graphical representations of the message count and message latency against time. These can be used in a production environment to view and understand important aspects such as peaks.

Message Count and Message Latency Graphs





The messages that pass back and forth in the ESB for each request are listed, a view provided by the tracing capability.



By clicking on a particular message, the user can view the entire message flow as well as the properties of the message in detail, which will be discussed in a subsequent post.



 

Sameera JayasomaResolving Startup Order of Carbon Components in WSO2 Carbon 5.0.0

In my previous post https://medium.com/@sameera.jayasoma/startup-order-resolving-mechanisms-in-osgi-48aecde06389, I explained the startup…

Dimuthu De Lanerolle

Troubleshooting Wso2 TAF
=====================


This is a series of important clues for overcoming bugs we may encounter while working with WSO2 TAF - Automation Framework ....


1. Error

When building WSO2 TAF, if you get something like this on the console .......

diamond operator is not supported in -source 1.6
  (use -source 7 or higher to enable diamond operator)
  
  Solution
Add the maven compiler plugin to the pom.xml file, raising the source and target levels:


            <plugin>
                   <artifactId>maven-compiler-plugin</artifactId>
                   <version>2.3.1</version>
                   <inherited>true</inherited>
                       <configuration>
                               <source>1.8</source>
                               <target>1.8</target>
                       </configuration>
            </plugin>

Danushka FernandoCreate an application in WSO2 App Cloud using Maven Plugins

In the application development life cycle, continuous integration is an important factor: how easily can something built on a build server get deployed? You can simply use the maven exec plugin to run curl commands that call REST APIs.

Following is an example. Before calling the create application API, we need to call the login API to establish a logged-in session. To do that, we call the login API with -c cookies (to save the session cookie) and the create application API with -b cookies (to send it back).

       <plugin>  
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<version>1.2</version>
<executions>
<execution>
<id>login</id>
<phase>deploy</phase>
<goals>
<goal>exec</goal>
</goals>
<configuration>
<executable>curl</executable>
<arguments>
<argument>-v</argument>
<argument>-k</argument>
<argument>-c</argument>
<argument>cookies</argument>
<argument>-X</argument>
<argument>POST</argument>
<argument>-F</argument>
<argument>action=login</argument>
<argument>-F</argument>
<argument>userName=<email @ replaced with .>@<tenant domain></argument>
<argument>-F</argument>
<argument>password=<password></argument>
<argument>https://newapps.cloud.wso2.com/appmgt/site/blocks/user/login/ajax/login.jag</argument>
</arguments>
</configuration>
</execution>
<execution>
<id>create application</id>
<phase>deploy</phase>
<goals>
<goal>exec</goal>
</goals>
<configuration>
<executable>curl</executable>
<arguments>
<argument>-v</argument>
<argument>-k</argument>
<argument>-b</argument>
<argument>cookies</argument>
<argument>-X</argument>
<argument>POST</argument>
<argument>https://newapps.cloud.wso2.com/appmgt/site/blocks/application/application.jag</argument>
<argument>-F</argument>
<argument>action=createApplication</argument>
<argument>-F</argument>
<argument>applicationName=Buzzwords&#x20;Backend</argument>
<argument>-F</argument>
<argument>applicationDescription=API&#x20;Producer&#x20;application&#x20;for&#x20;buzzword&#x20;sample</argument>
<argument>-F</argument>
<argument>conSpecMemory=512</argument>
<argument>-F</argument>
<argument>conSpecCpu=300</argument>
<argument>-F</argument>
<argument>runtime=2</argument>
<argument>-F</argument>
<argument>appTypeName=mss</argument>
<argument>-F</argument>
<argument>applicationRevision=${parsedVersion.majorVersion}.${parsedVersion.minorVersion}.${parsedVersion.nextIncrementalVersion}</argument>
<argument>-F</argument>
<argument>uploadedFileName=${artifactId}-${version}.jar</argument>
<argument>-F</argument>
<argument>runtimeProperties=runtimeProperties=[{"key":"k1","value":"e1"}]</argument>
<argument>-F</argument>
<argument>tags=[{"key":"k1","value":"t1"}]</argument>
<argument>-F</argument>
<argument>fileupload=@${project.build.directory}/${artifactId}-${version}.jar</argument>
<argument>-F</argument>
<argument>isFileAttached=true</argument>
<argument>-F</argument>
<argument>isNewVersion=true</argument>
</arguments>
</configuration>
</execution>
</executions>
</plugin>


You don't have to deploy it each time you build, so you can bind the executions to the deploy phase as above. But then Maven might try to deploy the artifact to Nexus; to stop that, you can skip the default deploy by adding the following.

       <plugin>  
<artifactId>maven-deploy-plugin</artifactId>
<version>2.7</version>
<configuration>
<skip>true</skip>
</configuration>
</plugin>

In App Cloud, to deploy changes we need to create a new version, so we always need to increase the version in the create request. You can use the build-helper plugin and the replacer plugin in combination. With the following configuration I parse a version property and replace it on each deploy with the next patch version number.

       <plugin>  
<groupId>org.codehaus.mojo</groupId>
<artifactId>build-helper-maven-plugin</artifactId>
<version>1.10</version>
<executions>
<execution>
<phase>deploy</phase>
<id>parse-version</id>
<goals>
<goal>parse-version</goal>
</goals>
<configuration>
<versionString>${appcloud.version}</versionString>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>com.google.code.maven-replacer-plugin</groupId>
<artifactId>replacer</artifactId>
<version>1.5.3</version>
<executions>
<execution>
<phase>deploy</phase>
<goals>
<goal>replace</goal>
</goals>
</execution>
</executions>
<configuration>
<file>pom.xml</file>
<replacements>
<replacement>
<token>${appcloud.version}</token>
<value>${parsedVersion.majorVersion}.${parsedVersion.minorVersion}.${parsedVersion.nextIncrementalVersion}</value>
</replacement>
</replacements>
</configuration>
</plugin>


And you need to have a property like below as well.

   <properties>  
<appcloud.version>1.0.7</appcloud.version>
</properties>

The rest of the details of the APIs can be found in [1]. Following is the full build tag and the properties tag in the pom.xml. If you run mvn clean install, this will not get triggered; it will only trigger when you run mvn deploy.


 <build>  
<plugins>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>build-helper-maven-plugin</artifactId>
<version>1.10</version>
<executions>
<execution>
<phase>deploy</phase>
<id>parse-version</id>
<goals>
<goal>parse-version</goal>
</goals>
<configuration>
<versionString>${appcloud.version}</versionString>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>com.google.code.maven-replacer-plugin</groupId>
<artifactId>replacer</artifactId>
<version>1.5.3</version>
<executions>
<execution>
<phase>deploy</phase>
<goals>
<goal>replace</goal>
</goals>
</execution>
</executions>
<configuration>
<file>pom.xml</file>
<replacements>
<replacement>
<token>${appcloud.version}</token>
<value>${parsedVersion.majorVersion}.${parsedVersion.minorVersion}.${parsedVersion.nextIncrementalVersion}</value>
</replacement>
</replacements>
</configuration>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<version>1.2</version>
<executions>
<execution>
<id>login</id>
<phase>deploy</phase>
<goals>
<goal>exec</goal>
</goals>
<configuration>
<executable>curl</executable>
<arguments>
<argument>-v</argument>
<argument>-k</argument>
<argument>-c</argument>
<argument>cookies</argument>
<argument>-X</argument>
<argument>POST</argument>
<argument>-F</argument>
<argument>action=login</argument>
<argument>-F</argument>
<argument>userName=<email @ replaced with .>@<tenant domain></argument>
<argument>-F</argument>
<argument>password=<password></argument>
<argument>https://newapps.cloud.wso2.com/appmgt/site/blocks/user/login/ajax/login.jag</argument>
</arguments>
</configuration>
</execution>
<execution>
<id>create application</id>
<phase>deploy</phase>
<goals>
<goal>exec</goal>
</goals>
<configuration>
<executable>curl</executable>
<arguments>
<argument>-v</argument>
<argument>-k</argument>
<argument>-b</argument>
<argument>cookies</argument>
<argument>-X</argument>
<argument>POST</argument>
<argument>https://newapps.cloud.wso2.com/appmgt/site/blocks/application/application.jag</argument>
<argument>-F</argument>
<argument>action=createApplication</argument>
<argument>-F</argument>
<argument>applicationName=Buzzwords&#x20;Backend</argument>
<argument>-F</argument>
<argument>applicationDescription=API&#x20;Producer&#x20;application&#x20;for&#x20;buzzword&#x20;sample</argument>
<argument>-F</argument>
<argument>conSpecMemory=512</argument>
<argument>-F</argument>
<argument>conSpecCpu=300</argument>
<argument>-F</argument>
<argument>runtime=2</argument>
<argument>-F</argument>
<argument>appTypeName=mss</argument>
<argument>-F</argument>
<argument>applicationRevision=${parsedVersion.majorVersion}.${parsedVersion.minorVersion}.${parsedVersion.nextIncrementalVersion}</argument>
<argument>-F</argument>
<argument>uploadedFileName=${artifactId}-${version}.jar</argument>
<argument>-F</argument>
<argument>runtimeProperties=runtimeProperties=[{"key":"k1","value":"e1"}]</argument>
<argument>-F</argument>
<argument>tags=[{"key":"k1","value":"t1"}]</argument>
<argument>-F</argument>
<argument>fileupload=@${project.build.directory}/${artifactId}-${version}.jar</argument>
<argument>-F</argument>
<argument>isFileAttached=true</argument>
<argument>-F</argument>
<argument>isNewVersion=true</argument>
</arguments>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<artifactId>maven-deploy-plugin</artifactId>
<version>2.7</version>
<configuration>
<skip>true</skip>
</configuration>
</plugin>
</plugins>
</build>
<properties>
<microservice.mainClass>org.wso2.carbon.mss.sample.Application</microservice.mainClass>
<appcloud.version>1.0.7</appcloud.version>
</properties>


[1] https://docs.wso2.com/display/AppCloud/Published+APIs

Prabath SiriwardenaHow Netflix secures Microservices with short-lived certificates?

Today we had our 6th Silicon Valley IAM meetup at the WSO2 office Mountain View. We are glad to have Bryan Payne from Netflix to talk on the topic — ‘PKI at Scale Using Short-Lived Certificates’. Bryan leads the Platform Security team at Netflix and prior to Netflix, he was the Director, Security Research at Nebula.

 This post on medium is written based on Bryan’s talk at the meetup and other related resources.

Malintha AdikariPreparing Machine Learning developing environment



1. Installing pycharm

Download and install pycharm from https://www.jetbrains.com/pycharm/download/#section=linux

2. Installing PIP

PIP is an easy installer for python packages

$ sudo apt-get install python-pip python-dev build-essential 
$ sudo pip install --upgrade pip
$ sudo pip install --upgrade virtualenv

3. Installing required python packages

$ pip install numpy
$ pip install scipy
$ pip install scikit-learn
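To quickly verify that the packages installed correctly, a simple sanity check from the shell (the printed versions will vary with your installation):

$ python -c "import numpy, scipy, sklearn; print(numpy.__version__, scipy.__version__, sklearn.__version__)"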

4. Setting up pycharm 


1. Go to the pycharm download folder and extract the pycharm-{}.tar.gz file into a preferred location.

2. Execute the pycharm.sh file.

3. Click File -> New Project and create a project, providing a project name.

4. Right click on your project -> New -> Python File and create a new python file, providing a file name.

5. Add the following import lines to your python file.


HelloWorld.py

import csv
import numpy
import scipy
from sklearn import preprocessing
from sklearn import neighbors
from sklearn.cross_validation import train_test_split
from sklearn import metrics
from sklearn.naive_bayes import GaussianNB
from sklearn import cross_validation
from sklearn.grid_search import GridSearchCV





Note: If you are getting any import errors, click

  • File -> Default Settings -> Project Interpreter

  • Select python 2.7.x version

  • Click Apply -> Ok


Happy coding...!!!!




 




Lalaji SureshikaSharing applications and subscriptions across multiple application developers through WSO2 API Store



In WSO2 APIM versions before 1.9.0, only the application developer who logs into the API Store could view/manage his applications and subscriptions. But a requirement arose, mainly due to the following two reasons:

-- What if a group of employees in an organization work as developers for an application: how could all of those users get access to the same subscription/application?

-- What if the logged-in API Store developer left the organization, and the organization wants to manage his subscriptions in order to manage the developed applications under the organization's name, while prohibiting the departed developer from accessing them?

Since the above two requirements are valid from an app development organization's perspective, we have introduced the feature of sharing applications and subscriptions across user groups from APIM 1.9.0 onwards. The API Manager provides the facility for users of a specific logical group to view each other's applications and subscriptions.

We have written this feature with the capability to extend it depending on an organization's requirements, as the attribute that defines the logical user group will vary between organizations. For example:

1) In one organization, sharing applications and subscriptions needs to be controlled based on user roles.

2) In another scenario, an API Store can be run as a common API Store across multiple organizations' users; there, user grouping has to be done based on an organization attribute.

Because of the above facts, the flow of sharing apps/subscriptions is as below.


  1. An app developer of an organization tries to log in to the API Store.
  2. The underlying APIM code checks whether that API Store server's api-manager.xml has the <GroupingExtractor> config enabled, with a custom Java class implementation defined inside it.
  3. If so, that Java class implementation runs and a group ID is set for the logged-in user.
  4. Once the app developer has logged in and tries to access the 'My Applications' and 'My Subscriptions' pages, the underlying code returns all the database-saved applications & subscriptions based on the user's 'Group ID'.
With the above approach, applications and subscriptions are shared based on the 'Group ID' derived by the custom implementation defined in <GroupingExtractor> of api-manager.xml.
By default, we ship a sample Java implementation, “org.wso2.carbon.apimgt.impl.DefaultGroupIDExtractorImpl”, which uses the organization name a user gives when signing up to the API Store as the group ID. The implementation extracts the claim http://wso2.org/claims/organization of the user who tries to log in and uses the value of that claim as the group ID. This way, all users who specify the same organization name belong to the same group and can therefore view each other's subscriptions and applications.
For more information on the default implementation for sharing subscriptions and applications, please refer to https://docs.wso2.com/display/AM190/Sharing+Applications+and+Subscriptions
In a real organization, the requirement can be a bit different. The API Manager also provides the flexibility to change this default group ID extraction implementation.
In this blog post, I'll explain how to write a group ID extracting extension based on the below use case.

Requirement
An organization wants to share subscriptions & applications based on the user roles of the organization. They have disabled the ‘signup’ option for the API Store, and their administrator grants users the rights to access it. Basically, the application developers of that organization can be categorized into two role levels:
  1. Application developers with the ‘manager’ role: these developers control the subscriptions of production-deployed mobile applications through the API Store.
  2. Application developers with the ‘dev’ role: these developers control the subscriptions of testing-deployed mobile applications through the API Store.
The requirement is to share the applications and subscriptions across these two roles separately.

Solution
The above can be achieved by writing a custom Grouping Extractor class that sets the ‘Group ID’ based on user roles.
1. First, write a Java class implementing the org.wso2.carbon.apimgt.api.LoginPostExecutor interface and make it a Maven module.
2. Then implement the logic for the ‘getGroupingIdentifiers()’ method of the interface.
This method has to extract two separate ‘Group ID’s: one for users having the ‘manager’ role and one for users having the ‘dev’ role. Below is sample logic implementing this method for a similar requirement. You can find the complete code here.

    public String getGroupingIdentifiers(String loginResponse) {
        JSONObject obj;
        String username = null;
        String groupId = null;
        try {
            obj = new JSONObject(loginResponse);
            // Extract the username from the login response
            username = (String) obj.get("user");
            loadConfiguration();
            /* Create a client for RemoteUserStoreManagerService and perform user management operations */
            RoleBasedGroupingExtractor extractor = new RoleBasedGroupingExtractor(true);
            // Create the web service client for userStoreManager
            extractor.createRemoteUserStoreManager();
            // Get the roles of the user
            String[] roles = extractor.getRolesOfUser(username);
            if (roles != null) { // If the user has roles
                // Match the roles to check whether he/she has the manager/dev role
                for (String role : roles) {
                    if (Constants.MANAGER_ROLE.equals(role)) {
                        // Set the group id as the role name
                        groupId = Constants.MANAGER_GROUP;
                    } else if (Constants.ADMIN_ROLE.equals(role)) {
                        // Set the group id as the role name
                        groupId = Constants.ADMIN_GROUP;
                    }
                }
            }
        } catch (JSONException e) {
            log.error("Exception occurred while trying to get group Identifier from login response");
        } catch (org.wso2.carbon.user.api.UserStoreException e) {
            log.error("Error while checking user existence for " + username);
        } catch (IOException e) {
            log.error("IO Exception occurred while trying to get group Identifier from login response");
        } catch (Exception e) {
            log.error("Exception occurred while trying to get group Identifier from login response");
        }
        // Return the group id
        return groupId;
    }
3. Build the Java Maven module and copy the jar into the AM_Home/repository/components/lib folder.
4. Then open the API Store server's api-manager.xml located at {AM_Home}/repository/conf, uncomment the <GroupingExtractor> config inside the <APIStore> config, and add your custom Java class name in it.
For eg: <GroupingExtractor>org.wso2.sample.gropuid.impl.RoleBasedGroupingExtractor</GroupingExtractor>
5. Then restart the APIM server.
6. Then try accessing the API Store as different users with the same ‘Group ID’ value. For example, log in to the API Store as a developer having the ‘manager’ role and make a subscription. Then log in as another user who also has the ‘manager’ role and check his ‘My Applications’ and ‘My Subscriptions’ views in the API Store. The second user will be able to see the application and subscription created by the first user, as below.
Then try to log in as an app developer with the ‘dev’ role as well. He will not be able to see the subscriptions/applications of users with the ‘manager’ role.
  

  



Kalpa WelivitigodaWSO2 Application Server 6.0.0-M2 Released !

Welcome to WSO2 Application Server 6.0.0, the successor of WSO2 Carbon based Application Server. WSO2 Application Server 6.0.0 is a complete revamp and is based on vanilla Apache Tomcat. WSO2 provides a number of features by means of extensions to Tomcat to add/enhance the functionality. It provides first class support for generic web applications and JAX-RS/JAX-WS web applications. The performance of the server and individual application can be monitored by integrating WSO2 Application Server with WSO2 Data Analytics Server. WSO2 Application Server is an open source project and it is available under the Apache Software License (v2.0).

Read more at https://medium.com/@callkalpa/wso2-application-server-6-0-0-m2-released-97cdc4da1987#.udebn5roi

sanjeewa malalgodaHow to fix file upload issue due to header dropping in WSO2 API Manager 1.10

In the last ESB runtime release we introduced a new property (http.headers.preserve) to preserve headers. As a result, sometimes the Content-Type (or any other header) may not be passed to the backend, which can cause this type of issue.

To fix it, add http.headers.preserve = Content-Type to the following file in the product distribution and restart the server:
repository/conf/passthru-http.properties
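In other words, the pass-through transport properties file ends up with a line like the one below. If several headers need preserving, the value is expected to be a comma-separated list (verify this against your ESB/APIM version):

# repository/conf/passthru-http.properties
http.headers.preserve = Content-Type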

Hope this solution works for you. It also fixes issues caused by a missing media type (charset) at the Pass Through Transport level.

sanjeewa malalgodaTuning WSO2 API Manager gateway and key manager in distributed deployment.

I have discussed tuning WSO2 API Manager in a previous post as well. In this article I will list some configurations related to a distributed deployment with gateways and key managers. Please try adding the below configurations and see how they help to improve performance.

We may tune the synapse configuration by editing the /repository/conf/synapse.properties file:
synapse.threads.core=100
synapse.threads.max=250
synapse.threads.keepalive=5
synapse.threads.qlen=1000

The validation interval can be increased to avoid frequent connection validations. As of now it is set to 30000 ms.
<testOnBorrow>true</testOnBorrow>

<validationQuery>SELECT 1</validationQuery>
<validationInterval>120000</validationInterval>

Also consider the following database tuning parameters, as per your database administrator's recommendation (I have listed sample values we use for performance tests).
<maxWait>60000</maxWait>

<initialSize>20</initialSize>
<maxActive>150</maxActive>
<maxIdle>60</maxIdle>
<minIdle>40</minIdle>


Add the following parameters to enable the gateway resource and key caches.
<EnableGatewayKeyCache>true</EnableGatewayKeyCache>

<EnableGatewayResourceCache>true</EnableGatewayResourceCache>


For the key manager, the following entry is enough. Since the gateway cache is enabled, we can disable the key manager cache.
But if you have a JWT use case, please enable the following:
<EnableJWTCache>true</EnableJWTCache>



We need HTTP access logs to track incoming and outgoing messages.
But in this deployment, if we assume the key managers are running in a DMZ, there is no need to track HTTP access, so we may disable HTTP access logs for the key manager.
Consider this parameter case by case; if you don't use HTTP access logs, you can consider this option.
Here I assume we are using a web service based key validation call from the gateway to the key manager (not the Thrift client).

To do that, add the following entry to the /repository/conf/log4j.properties file:
log4j.logger.org.apache.synapse.transport.http.access=OFF

sanjeewa malalgodaHow to avoid swagger console issue in API Manager 1.9.1 due to "Can't read from server. It may not have the appropriate access-control-origin settings." error

Sometimes when you use the API Manager swagger console on the store side, you may see this error: "Can't read from server. It may not have the appropriate access-control-origin settings.".

There is a simple workaround for this issue that you can use in your deployment. If double quotes are used inside labels in the swagger document, it can cause this type of issue.

If you can live with the provided workaround for 1.9, the issue will not be there when you upgrade to the next version (1.10), as mentioned earlier.

So here I have attached one sample with the error and one with the fix.

Problematic swagger definition

{
  "paths": {
    "/*": {
      "put": {
        "x-auth-type": "Application & Application User",
        "x-throttling-tier": "Unlimited",
        "parameters": [
          {
            "schema": {
              "type": "object"
            },
            "description": "Request Body",
            "name": "Payload",
            "required": false,
            "in": "body"
          }
        ],
        "responses": {
          "200": {}
        }
      },
      "post": {
        "x-auth-type": "Application & Application User",
        "x-throttling-tier": "Unlimited",
        "parameters": [
          {
            "schema": {
              "type": "object"
            },
            "description": "Request Body",
            "name": "Payload",
            "required": false,
            "in": "body"
          }
        ],
        "responses": {
          "200": {}
        }
      },
      "get": {
        "x-auth-type": "Application & Application User",
        "x-throttling-tier": "Unlimited",
        "responses": {
          "200": {}
        }
      },
      "delete": {
        "x-auth-type": "Application & Application User",
        "x-throttling-tier": "Unlimited",
        "responses": {
          "200": {}
        }
      },
      "head": {
        "x-auth-type": "Application & Application User",
        "x-throttling-tier": "Unlimited",
        "responses": {
          "200": {}
        }
      }
    }
  },
  "definitions": {
    "Structure test \"test\" ssssss": {
      "properties": {
        "horaireCotation": {
          "description": "Horaire de cotation",
          "type": "string"
        },
        "statut": {
          "type": "string"
        },
        "distanceBarriere": {
          "format": "double",
          "type": "number"
        },
        "premium": {
          "format": "double",
          "type": "number"
        },
        "delta": {
          "format": "double",
          "type": "number"
        },
        "pointMort": {
          "format": "double",
          "type": "number"
        },
        "elasticite": {
          "format": "double",
          "type": "number"
        }
      }
    }
  },
  "swagger": "2.0",
  "info": {
    "title": "hello",
    "version": "1.0"
  }
}

Corrected Swagger definition

{
    'paths': {
        '/*': {
            'put': {
                'x-auth-type': 'Application & Application User',
                'x-throttling-tier': 'Unlimited',
                'parameters': [
                    {
                        'schema': {
                            'type': 'object'
                        },
                        'description': 'Request Body',
                        'name': 'Payload',
                        'required': false,
                        'in': 'body'
                    }
                ],
                'responses': {
                    '200': {}
                }
            },
            'post': {
                'x-auth-type': 'Application & Application User',
                'x-throttling-tier': 'Unlimited',
                'parameters': [
                    {
                        'schema': {
                            'type': 'object'
                        },
                        'description': 'Request Body',
                        'name': 'Payload',
                        'required': false,
                        'in': 'body'
                    }
                ],
                'responses': {
                    '200': {}
                }
            },
            'get': {
                'x-auth-type': 'Application & Application User',
                'x-throttling-tier': 'Unlimited',
                'responses': {
                    '200': {}
                }
            },
            'delete': {
                'x-auth-type': 'Application & Application User',
                'x-throttling-tier': 'Unlimited',
                'responses': {
                    '200': {}
                }
            },
            'head': {
                'x-auth-type': 'Application & Application User',
                'x-throttling-tier': 'Unlimited',
                'responses': {
                    '200': {}
                }
            }
        }
    },
    'definitions': {
        'Structure test \"test\" ssssss': {
            'properties': {
                'horaireCotation': {
                    'description': 'Horaire de cotation',
                    'type': 'string'
                },
                'statut': {
                    'type': 'string'
                },
                'distanceBarriere': {
                    'format': 'double',
                    'type': 'number'
                },
                'premium': {
                    'format': 'double',
                    'type': 'number'
                },
                'delta': {
                    'format': 'double',
                    'type': 'number'
                },
                'pointMort': {
                    'format': 'double',
                    'type': 'number'
                },
                'elasticite': {
                    'format': 'double',
                    'type': 'number'
                }
            }
        }
    },
    'swagger': '2.0',
    'info': {
        'title': 'hello',
        'version': '1.0'
    }
}


Also, if you would like to fix this issue by editing the jaggery file, that is possible too. Please find the instructions below.

Edit the store/site/blocks/api-doc/ajax/get.jag file and add the following instead of just print(jsonObj):
print(JSON.stringify(jsonObj));

Prabath SiriwardenaJSON Message Signing Alternatives

In this post we explore the following alternatives available to sign a JSON message, and then build a comparison between them.
  • JSON Web Signature (JWS) 
  • JSON Cleartext Signature (JCS) 
  • Concise Binary Object Representation (CBOR) Object Signing 

Chathurika Erandi De SilvaTesting Dynamic Timeout with Property Mediator


Must READ


ESB 500 is the upcoming release of WSO2 Enterprise Service Bus. Since I am working on it these days, I thought of writing on "Dynamic Timeout for Endpoints".

The following sample is currently tested in ESB 500 Alpha.

Read this if you want an explanation of Dynamic Timeout for Endpoints.
Testing Dynamic Timeout for Endpoints with the Property mediator

To exercise the dynamic behaviour, query parameters are used to send the timeout value with the request (for testing purposes only).

Sample sequence configuration

<sequence name="dyn_seq_2" xmlns="http://ws.apache.org/ns/synapse">
   <property expression="$url:a" name="timeout" scope="default"
       type="INTEGER" xmlns:ns="http://org.apache.synapse/xsd"/>
   <send>
       <endpoint>
           <address uri="http://<ip>:8080/erandi">
               <timeout>
                   <duration>{get-property('timeout')}</duration>
                   <responseAction>discard</responseAction>
               </timeout>
           </address>
       </endpoint>
   </send>
</sequence>

As illustrated here, using the XPath expression {get-property('timeout')} inside the “duration” element enables the dynamic behaviour: when the XPath expression is evaluated, the referenced value is read and used. The Property mediator reads the query parameter “a” and obtains the value passed with it.
Testing the sample

For testing purposes, I have set up a mock REST service using SoapUI with a response delay. Next we need to invoke the above sequence (you can use either an API or an Inbound Endpoint for this purpose) using query parameters.

Sample Request
http://<ip>:8280/testapi?a=5000

When this service is invoked through the ESB, the following log will be printed, indicating the timeout configured dynamically.

[2016-05-04 16:07:44,571]  WARN - TimeoutHandler Expiring message ID : urn:uuid:8de1fdf9-8e0e-43a9-9a4b-2ab086fd019e; dropping message after timeout of : 5 seconds

Chathurika Erandi De SilvaHow Do I integrate WSO2 ESB and WSO2 DSS: A basic demonstration


Sample Scenario


Hotel GreatHeights needs to access an externally hosted service to evaluate its customers' loyalty towards the hotel. The external service takes in the customer id and the customer name, and assesses the customer's loyalty towards the hotel based on previous expenditure and so on. Hotel GreatHeights has a system that sends out the guest identification in the form of an ID/passport number and the guest name. Obviously, these input parameters and the expectations of the external system do not match. Furthermore, Hotel GreatHeights has a full database consisting of guest identification, customer id, customer name, etc., and it has shared only the needed columns with the external system to maintain privacy. In addition, the hotel does not want to change its system interface.

The above scenario brings out a very simple, basic integration of two legacy systems. In plainer wording, legacy systems are hard to change, and the integration should facilitate the communication between the two.

There are two aspects here as follows

  1. The client legacy system (Hotel GreatHeights' system) sends in the guest personal identification and name, whereas the end service expects the customer id and customer name.
  2. The guest personal identification should be mapped against the hotel database to find the customer id. The hotel does not share the guest personal identification number with the external system.
To address the above aspects, the integrator should query the database using the guest personal identification to find the customer id, and thereafter transform the request to match the expectations of the backend service.

Sample Implementation using WSO2 ESB and WSO2 DSS

In order to achieve the objective, the implementation can be categorized as below

  1. Querying the database: A dataservice will be hosted in WSO2 DSS that communicates with the database to obtain necessary values
  2. Transformation: WSO2 ESB mediators will be used to obtain the above queried value and transform the request.

Sample Data Service in WSO2 DSS 




The above data service returns the customer id in the response.


Sample Sequence in WSO2 ESB




The above sequence has the PayloadFactory mediator and the Call mediator. The first occurrence of the PayloadFactory mediator extracts the guest NIC from the request and uses it in the transformed request to the data service. The transformed request is sent to the data service using the Call mediator, which provides blocking invocation: when the Call mediator is executed in blocking mode, it waits for the response without proceeding to the next mediator. The second occurrence of the PayloadFactory mediator transforms the message as needed by the backend service, using the customer id obtained through the data service.
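Since the screenshots are not reproduced here, below is a minimal sketch of such a sequence; the data service endpoint URL, element names and namespaces are all invented for illustration:

<sequence name="loyalty_seq" xmlns="http://ws.apache.org/ns/synapse">
    <!-- Keep the guest name for later; the payload is about to be replaced -->
    <property name="guestName" expression="//guestName"/>
    <!-- First PayloadFactory: build the data service request from the guest NIC -->
    <payloadFactory media-type="xml">
        <format>
            <p:getCustomerId xmlns:p="http://ws.wso2.org/dataservice">
                <p:nic>$1</p:nic>
            </p:getCustomerId>
        </format>
        <args>
            <arg evaluator="xml" expression="//guestNic"/>
        </args>
    </payloadFactory>
    <!-- Blocking call to the DSS data service; the sequence waits for the response -->
    <call blocking="true">
        <endpoint>
            <address uri="http://localhost:9763/services/CustomerDataService"/>
        </endpoint>
    </call>
    <!-- Second PayloadFactory: build the backend request with the returned customer id -->
    <payloadFactory media-type="xml">
        <format>
            <b:assessLoyalty xmlns:b="http://backend.example.org">
                <b:customerId>$1</b:customerId>
                <b:customerName>$2</b:customerName>
            </b:assessLoyalty>
        </format>
        <args>
            <arg evaluator="xml" expression="//customerId"/>
            <arg evaluator="xml" expression="get-property('guestName')"/>
        </args>
    </payloadFactory>
    <send>
        <endpoint>
            <address uri="http://backend.example.org/loyalty"/>
        </endpoint>
    </send>
</sequence>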

Chathurika Erandi De SilvaDynamic Timeout for Endpoints: WSO2 ESB


Must READ


ESB 500 is the upcoming release of WSO2 Enterprise Service Bus. Since I am working on it these days, I thought of writing on "Dynamic Timeout for Endpoints", which is introduced in ESB 500.

The following sample is currently tested in ESB 500 Alpha. 


Introduction
Before we step into the details, let's see why we need dynamic timeout values in the first place. Before ESB 500, the timeout value was static: we could define a timeout value but we couldn't change it dynamically. To gain more flexibility, we can now provide the timeout value dynamically. This gives us the opportunity to read an incoming request and set the timeout value from it, as well as to define a value outside of the endpoint configuration and use a reference to it in the endpoint configuration. With this approach we can change timeout values without changing the endpoint configuration itself.

Hoping I have given you an insight into dynamic timeouts, let's see how to achieve this with WSO2 ESB using a Property mediator, which defines the timeout value outside of the endpoint configuration.

Sample sequence configuration



As illustrated here, using the XPath expression {get-property('timeout')} inside the “duration” element enables the dynamic behaviour. When the XPath expression is evaluated, the referenced value is read and used. A minimal sketch of such a configuration follows.
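This sketch assumes a placeholder endpoint URI; the property value is in milliseconds.

<sequence name="dynamicTimeoutSequence">
    <!-- Define the time-out value (in milliseconds) outside the endpoint configuration -->
    <property name="timeout" value="20000"/>
    <send>
        <endpoint>
            <address uri="http://localhost:8090/mockService">
                <timeout>
                    <!-- Evaluated at runtime, so the value can change per message -->
                    <duration>{get-property('timeout')}</duration>
                    <responseAction>discard</responseAction>
                </timeout>
            </address>
        </endpoint>
    </send>
</sequence>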

Testing the sample

For testing purposes, I have set up a mock service using SoapUI with a response delay. When this service is invoked through the ESB, the following logs will be printed, indicating the timeout as configured dynamically.

[2016-05-04 16:07:44,571]  WARN - TimeoutHandler Expiring message ID : urn:uuid:8de1fdf9-8e0e-43a9-9a4b-2ab086fd019e; dropping message after timeout of : 20 seconds


Chamara Silva

How to enable HTTP wirelog for non synapse products (WSO2)

As we already know, in WSO2 ESB and APIM we can enable wire logs to trace the synapse messages in various situations. But if you want to see the messages going inside non-synapse-based products such as Governance Registry or Application Server, the following wire log properties can be added to the log4j.properties file:

log4j.logger.httpclient.wire.content=DEBUG
log4j.logger.httpclient.wire.header=DEBUG

Thilini Ishaka

Part 1: Developing a WS-BPEL Process using WSO2 Developer Studio


In this post I am going to discuss the following list of items.

What's a BPEL Process?
A BPEL Process is a container where you can declare relationships to external partners, declarations for process data, handlers for various purposes and most importantly, the activities to be executed. 

Let's start with designing a simple workflow.

Create a BPEL process that returns the addition of two integers. For the addition operation, the BPEL process invokes an existing web service and gets the result from it. This web service takes two integers and returns the addition of the two integer values. The web service will be hosted on WSO2 AppServer (or else you can use any other app server).

Figure 1, shows the axis2 service that we are going to invoke via the BPEL process. 
Create the axis2 archive (.aar) [Right click on the AdderService Project --> Export Project as a deployable archive and save it].

Now Start WSO2 AppServer (Goto AppServer_HOME/bin --> sh wso2server.sh)

Deploy AdderService.aar on AppServer.
1. Copy aar file to AppServer_HOME/repository/deployment/server/axis2services directory
      OR
2. Using the AppServer Management Console (Add --> AAR Service --> upload the service archive) 
We need to keep the WSDL file for the AdderService, as it is required later when developing the BPEL process.

wget http://localhost:9763/services/AdderService?wsdl
Save the AdderService.wsdl to your local file system.

Figure 1

Let's start with designing the BPEL workflow.
Open eclipse which has WSO2 Developer Studio installed. 
Goto Dashboard (Developer Studio --> Open Dashboard menu and click on BPEL Workflow under Business Process Server category)
Figure 2: Create New BPEL Project


Give a project Name, Namespace and select the Template type. As we are going to create a short running bpel process, select the template type as Synchronous.

Synchronous interaction - Suppose a BPEL process invokes a partner service. The BPEL process then waits for the partner service's operation to be completed and responded to. After receiving this completion response from the partner service, the BPEL process continues its execution flow. This does not apply to the In-Only operations defined in the WSDL of the partner service.

Usually we'll use asynchronous services for long-lasting operations and synchronous services for operations that return a result in a relatively short time.
Figure 3 

Figure 4

Here you can see the template for our business process. The BPEL editor automatically generates the receiveInput and replyOutput activities (Figure 5). It will also generate the partnerLink and the variables used in these two activities.

Note: It will automatically generate AdderProcessArtifacts.wsdl and AdderProcess.bpel. If we look at the folder structure of the BPEL process, we can easily figure out these two files.

Figure 5

In our BPEL process we need to invoke an external service, which is AdderService. To invoke this service we need to assign the input variables to the external service's input, and again the reply from the external service to our BPEL process output. So here we need two assign activities and one invoke activity; a sketch of the resulting activities is shown below.
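For reference, the activities configured in the steps below will end up in the .bpel source roughly as in this sketch; the partner link, variable and element names are illustrative assumptions, not the exact names the editor generates.

<!-- copy the process inputs into the AdderService request -->
<bpel:assign name="AssignInputVars">
    <bpel:copy>
        <bpel:from>$input.payload/tns:a</bpel:from>
        <bpel:to>$AdderServiceRequest.parameters/ns:x</bpel:to>
    </bpel:copy>
    <bpel:copy>
        <bpel:from>$input.payload/tns:b</bpel:from>
        <bpel:to>$AdderServiceRequest.parameters/ns:y</bpel:to>
    </bpel:copy>
</bpel:assign>
<!-- invoke the external AdderService through its partner link -->
<bpel:invoke name="InvokeAdderService" partnerLink="AdderServicePL" operation="add"
             inputVariable="AdderServiceRequest" outputVariable="AdderServiceResponse"/>
<!-- copy the AdderService result into the process output -->
<bpel:assign name="AssignOutputVars">
    <bpel:copy>
        <bpel:from>$AdderServiceResponse.parameters/ns:result</bpel:from>
        <bpel:to>$output.payload/tns:result</bpel:to>
    </bpel:copy>
</bpel:assign>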

Let’s add an assign activity in between the receiveInput and replyOutput activities. To add an assign activity, drag it from the Action section of the Palette.

Figure 6 : AdderProcess workflow

Before filling in the invoke activity, you need to import the AdderService.wsdl to your workflow project.
Figure 7

Now start implementing the business logic.  
Goto Properties of invoke activity.
Goto 'Details' tab and from the 'Operation' drop down list, select 'Create Global Partner Link'
Figure 8

Give a Partner Link Name and click OK.
Figure 9


Now you'll be prompted with the window shown in Figure 10. Click on 'Add WSDL'.
Figure 10

Select the WSDL file which you have already imported to the workflow project. Click OK.
Figure 11

In the Partner Link Type window, you should select the correct PortType, then click OK.
Figure 12

Give a name to the Partner Link Type and click Next.
Figure 13


Give a name to the partner role and then select the correct PortType. Now click Finish. We have only one role for this invoke activity. If we have multiple roles (partner roles and my roles), we need to click on Next and create the next role.
Figure 14

Now you need to pick the 'add' operation from the Quick Pick box. For that, double-click on it.
Figure 15


Now you are done with implementing the invoke activity. The next step is to implement the two assign activities. Before doing that, you need to identify the inputs and outputs of your process. We have two integer values as the request parameters and a resulting integer as the response.

Open the AdderProcessArtifacts.wsdl and find the Service, PortType and the binding there. Click on the arrow next to AdderProcessRequest.
Figure 16


Add two integer elements as shown in Figure 17. [To add an element, RightClick --> Add Element] Select the element type as int from the drop down list.
Figure 17


Configure the AdderProcessResponse part similarly to the above step.
There you need to click on the arrow next to AdderProcessResponse.
Figure 18

For the response, you have only one integer element as the output.
Now save the wsdl file and close it.
Figure 19

Go back to the bpel file and start implementing the first assign activity, that is 'AssignInputVars'.
Goto 'Details' tab and click on 'New'.
Do the mapping as shown in Figure 20.
Figure 20


It will automatically prompt for the initialization. Click on 'Yes'.
Figure 21


Figure 22

Now you are done with configuring the First Assign Activity.
Figure 23


Configure the second assign activity, that is 'AssignoutputVars'. 
Figure 24

Allow for the automatic variable initialization for response.
Figure 25

Now you are done with the bpel process flow design. Now open the deploy.xml (Deployment Descriptor).

Here you can specify the process state after deployment (whether it is activated, deactivated or retired), set the process to execute only in memory, and configure the Inbound Interfaces (Services) and Outbound Interfaces (Invokes), etc.

Figure 26

Now make the BPEL process as a deployable archive (Right click on the AdderProcess workflow --> Export Project as a deployable archive).

Set the port offset to 1 (change the offset to 1 in BPS_HOME/repository/conf/carbon.xml, as in the sketch below).
Then start WSO2 Business Process Server (BPS_HOME/bin --> sh wso2server.sh)
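A sketch of the relevant carbon.xml section (only the Offset value changes):

<Ports>
    <!-- All ports in the server are shifted by this offset, e.g. 9443 becomes 9444 -->
    <Offset>1</Offset>
    <!-- other port settings unchanged -->
</Ports>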

Deploy AdderProcess.zip on WSO2 BPS.
1. Copy zip file to BPS_HOME/repository/deployment/server/bpel directory
      OR
2. Using the BPS Management Console (Processes --> Add --> BPEL Archive(zip) --> upload) 

Figure 27

To test the process, use the TryIt wizard or any other tool (e.g. SOAP UI).
Figure 28 : Click on TryIt


Figure 29 : SOAP Request

Figure 30 : SOAP Request/Response 

Here, we get the integer (a+b) as the response in the XML output.

Asanka Dissanayake

Resizing images in one line in Linux

Re-sizing images in one line

I hope you all have had this problem when uploading high-quality pics to FB or some other social network.
You can use the following command to resize images by 50%. You can change the ratio: just replace “50%” with the value you desire.

First, you need to have ImageMagick.

Install ImageMagick with the following command

sudo apt-get install imagemagick

Then go to the directory that has the photos to be resized and run the following command

mkdir resize; for f in *; do echo "converting $f"; convert "$f" -resize 50% "resize/${f}"; done

Then you will see the re-sized files in the resize directory.

Hope this will save someone’s time .. Enjoy !!!

 


Thilina Piyasundara

Running your WordPress blog on WSO2 App Cloud

WSO2 App Cloud now supports Docker-based PHP applications. In this blog post I will describe how to install a WordPress blog in this environment. In order to set up a WordPress environment we need two things;

  1. Web server with PHP support
  2. MySQL database

If we have both of these we can start setting up WordPress. In WSO2 App Cloud we can use the PHP application type as the WordPress hosting web server, which is a PHP-enabled Apache web server Docker image. It also provides a database service where you can easily create and manage MySQL databases via the App Cloud user interface (UI).

Note:- 
For the moment WSO2 App Cloud is in beta; therefore these Docker images will have only 12h of lifetime, with no data persistence at the file storage level. Data in MySQL databases will be safe unless you override it. If you need more help don't hesitate to contact Cloud support.

Creating PHP application

Sign up or sign in to WSO2 App Cloud via http://wso2.com/cloud. Then click on the "App Cloud beta" section.
It will redirect you to the App Cloud user interface. Click on the 'Add New Application' button in the left hand corner.
This will present several available application types. Select the 'PHP Web Application' box and continue.
It will then prompt you with a wizard. In that, give a proper name and a version to your application. The name and version will be used to generate the domain name for your application.

There are several options that you can use to upload PHP content to this application. For the moment I will download the wordpress-X.X.X.zip file from the wordpress site and upload it to the application.
In the below sections of the UI you can set the runtime and container specification. Give the highest Apache version as the runtime and use the minimal container spec, as WordPress does not require much processing power and memory.
If everything is set and the file upload is complete, click on the 'Create' button. You will get a status pop-up when you click the create button, and it will redirect you to the application when it's complete.
In the application UI, note the URL. Now you can click on the 'Launch App' button so that it will redirect you to your PHP application.
The newly installed WordPress site will look like this.
Now we need to provide database details to it. Therefore, we need to create a database and a user.

Creating database

Go back to the Application UI and click on 'Back to listing' button.
In that UI you can see a button in the top left hand corner called 'Create database'. Click on that.
In the create database UI give a database name, database user name and a password. The password needs to pass the password policy, so you can click on 'Generate password' to generate a secure password easily. If you use the generate password option, make sure you copy the generated password before you proceed with database creation. Otherwise you may need to reset the password.

Also note that the tenant domain and a random string will be appended to the end of the database name and database user name respectively. Therefore, those fields accept only a few input characters.
If all is set, click on the 'Create database' button to proceed. After successfully creating the database it will redirect you to a database management user interface like the following.
Now you can use those details to log in to the newly created MySQL database as follows;
$ mysql -h mysql.storage.cloud.wso2.com -p'<password>' -u <username> <database>
eg :-
$ mysql -h mysql.storage.cloud.wso2.com -p'XXXXXXXXX' -u admin_LeWvxS3l wpdb_thilina 
Configuring WordPress

If the database creation is successful and you can login to it without any issue we can use those details to configure WordPress.

Go back to the WordPress UI and click on the 'let's go' button. It will prompt you with a database configuration wizard. Fill those fields with the details that we got from the previous section.
If the WordPress application can successfully establish a connection with the database using your inputs, it will prompt you with a UI as follows.
On that, click on 'Run the install'. Then WordPress will start populating database tables and inserting initial data into the given database.

When it's complete it will ask for some basic configurations like the site title, admin user name and password.
Click on 'Install WordPress' after filling in that information. Then it will redirect you to the WordPress admin console login page. Log in to that using the username and password given in the previous section.
So now WordPress is ready to use. But the existing URL is not very attractive. If you have a domain you can use it as the base URL of this application.

Setting custom domain (Optional)

In the application UI click on the top left three-lines button shown in the following image.
It will show some advanced configurations that we can use. In that list select the last one, the 'Custom URL' option.
It will prompt you with the following user interface. Enter the domain name that you are willing to use.
But before you validate, make sure you add a DNS CNAME to that domain pointing to your application launch URL.

The following is the wizard that I got when adding the CNAME via Godaddy. This user interface and the options for adding a CNAME will be different for your DNS provider.
You can validate the CNAME by running the 'dig' command in Linux or nslookup in Windows.
If the CNAME is working, click on 'Update'.
If that is successful you will get the above notification, and if you access that domain name it will show your newly created WordPress blog.

Hasitha Aravinda

[Sample] Order Processing Process

This sample illustrates usage of WS-BPEL 2.0, WS-HumanTask 1.1 and Rule capabilities in WSO2 Business Process Server and WSO2 Business Rule Server.


Order Processing Flow
alt text
  • The client places an order by providing the client ID, item IDs, quantity, shipping address and shipping city.
  • The process then submits the order information to the invoicing web service, which generates the order ID and calculates the total order value.
  • If the total order value is greater than USD 500, the process requires a human interaction to proceed. When an order requires a human interaction, the process creates a Review HumanTask for regional clerks. If the review task is rejected by one of the regional clerk users, the workflow terminates after notifying the client.
  • Once a regional clerk approves the review task, the workflow invokes the Warehouse Locater rule service to calculate the nearest warehouse.
  • Upon receiving the nearest warehouse, the process invokes the Place Order web service to finalize the order.
  • Finally, the user is notified with the estimated delivery date.

This sample contains

Please check out this sample from GitHub.

Sameera Jayasoma

Carbon JNDI

WSO2 Carbon provides an in-memory JNDI InitialContext implementation. This is available from WSO2 Carbon 5.0.0. This module also…

Chathurika Erandi De Silva

Rafting and Middleware QA: are they the same?

Rafting the River Twinkle

Mary is opening a rafting entertainment business based on the river Twinkle. She has a major challenge: her team has to have the best knowledge of the river so that they can give the best experience to the customers.

So what did they do? They decided to raft the river first by themselves, because they needed to identify the loopholes and dangers before they take any customers on it.


Rafting and QA?

Don't QA folks do the same thing as Mary and the team did? They do various activities to identify what works and what is not working. This is crucial because this information is much needed by the customers who will be using a specific product.

QA and Middleware

Designing tests for middleware is not an easy task. It's not the same as designing tests for a simple web app. Middleware testing can be compared to the rafting experience itself, while assuring the quality of a web app is like boating on a large lake.

Assuring the quality of a product such as WSO2 ESB is a challenging task, but I have found the following golden rules of thumb that can be applied to any middleware product.

My Golden Rules of Thumb 

Know where to start

It's important to know where you start designing tests. In order to achieve this, a greater understanding of the functionality, as well as how it's to be implemented, is needed. So obviously, the QA has to be technically competent as well as thorough in knowledge of the respective domain. Reading helps a lot, as does trying things out by yourself, so that knowledge can be gained from both.

Have a proper design

A graphical design lets you evaluate your knowledge as well as your competency in the area. QAs in middleware cannot just stick to black box testing; they have to go for white box testing as well, as they have to ensure the quality of the code itself. So a graphical representation is very valuable in designing the tests and deciding what to test.

Have room for improvement

It's an advantage to be self-driven; finding out about what you are doing and understanding what you are doing is very important to achieve good middleware testing.

With all of the above, it's easy to put ourselves in the customer's shoes, because in middleware there can be various and unlimited customer demands. If we follow the above rules of thumb, I guess any QA can be a better one, and more suitable for a middleware platform that changes rapidly.

I'll be discussing more on this, this is just a start...







Prabath Siriwardena

JWT, JWS and JWE for Not So Dummies!

JSON Web Token (JWT) defines a container to transport data between interested parties. It became an IETF standard in May 2015 with RFC 7519. There are multiple applications of JWT. OpenID Connect is one of them. In OpenID Connect the id_token is represented as a JWT. Both in securing APIs and microservices, JWT is used as a way to propagate and verify end-user identity.


This article on medium explains in detail JWT, JWS and JWE with their applications.

Dinusha Senanayaka

How to use App Manager Business Owner functionality?

The WSO2 App Manager new release (1.2.0) has introduced the capability to define a business owner for each application. (AppM 1.2.0 is yet to be released at the time this blog post is being written; you can download a nightly build from here and try it out until the release is done.)

1. How to define business owners ?

Log in as an admin user to the admin-dashboard by accessing the following URL.
https://localhost:9443/admin-dashboard

This will give you a UI similar to the one below, where you can define new business owners.


Click on "Add Business Owner" option to add new business owners.


All created business owners are listed in the UI as follows, which allows you to edit or delete them from the list.




2. How to associate business owner to application ?

You can log in to the Publisher by accessing the following URL and create a new app.
https://localhost:9443/publisher 

In the add new web app UI, you should be able to see a page similar to the following, where you can type and select the business owner for the app.



Once the required data is filled in and the app is ready to publish to the store, change the app life-cycle state to 'Published' to publish the app into the app store.



Once the app is published, users can access the app through the App Store by accessing the following URL.
https://localhost:9443/store

App users can find the business owner details on the App Overview page as shown below.





If you are using the REST APIs to create and publish the apps, the following sample commands would help.

Create new policy group
curl -X POST -b cookies -H 'Content-Type: application/x-www-form-urlencoded' http://localhost:9763/publisher/api/entitlement/policy/partial/policyGroup/save  -d "policyGroupName=PG1&throttlingTier=Unlimited&userRoles&anonymousAccessToUrlPattern=false&objPartialMappings=[]&policyGroupDesc='Policy group1'"
{"success" : true, "response" : {"id" : 2}}


Create App
curl -X POST -b cookies -H 'Content-Type: application/x-www-form-urlencoded' http://localhost:9763/publisher/asset/webapp -d 'overview_provider=admin&overview_name=HelloApp1&overview_displayName=HelloApp1&overview_context=%2Fhello1&overview_version=1.0.0&optradio=on&overview_transports=http&overview_webAppUrl=http%3A%2F%2Flocalhost%3A8080%2Fhelloapplication&overview_tier=Unlimited&overview_allowAnonymous=false&overview_skipGateway=false&uritemplate_policyGroupIds=%5B2%5D&uritemplate_javaPolicyIds=[5]&uritemplate_urlPattern0=%2F*&uritemplate_httpVerb0=GET&uritemplate_policyGroupId0=2&autoConfig=on&providers=wso2is-5.0.0&sso_ssoProvider=wso2is-5.0.0&sso_singleSignOn=Enabled&webapp=webapp&overview_treatAsASite=false&overview_businessOwner=Henrry+Alex'


Change app lifecycle state to 'Published'
curl -X PUT -b cookies http://localhost:9763/publisher/api/lifecycle/Submit%20for%20Review/webapp/3d970fa3-1d82-4e64-9b05-777c05de3088
curl -X PUT -b cookies http://localhost:9763/publisher/api/lifecycle/Approve/webapp/3d970fa3-1d82-4e64-9b05-777c05de3088
curl -X PUT -b cookies http://localhost:9763/publisher/api/lifecycle/Publish/webapp/3d970fa3-1d82-4e64-9b05-777c05de3088

Afkham Azeez

AWS Clustering Mode for WSO2 Products



WSO2 Clustering is based on Hazelcast. When WSO2 products are deployed in clustered mode on Amazon EC2, it is recommended to use the AWS clustering mode. As a best practice, add all nodes in a single cluster to the same AWS security group.

To enable AWS clustering mode, you simply have to edit the clustering section in the CARBON_HOME/repository/conf/axis2/axis2.xml file as follows:

Step 1: Enable clustering


<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent"
enable="true">

Step 2: Change membershipScheme to aws


<parameter name="membershipScheme">aws</parameter>

Step 3: Set localMemberPort to 5701

Any value between 5701 & 5800 is acceptable
<parameter name="localMemberPort">5701</parameter>


Step 4: Define AWS specific parameters

Here you need to define the AWS access key, secret key & security group. The region, tagKey & tagValue are optional, and the region defaults to us-east-1.

<parameter name="accessKey">xxxxxxxxxx</parameter>
<parameter name="secretKey">yyyyyyyyyy</parameter>
<parameter name="securityGroup">a_group_name</parameter>
<parameter name="region">us-east-1</parameter>
<parameter name="tagKey">a_tag_key</parameter>
<parameter name="tagValue">a_tag_value</parameter>

Provide the AWS credentials & the security group you created as values of the above configuration items.  Please note that the user account used for operating AWS clustering needs to have the ec2:DescribeAvailabilityZones & ec2:DescribeInstances permissions.

Step 5: Start the server

If everything went well, you should not see any errors when the server starts up, and also see the following log message:

[2015-06-23 09:26:41,674]  INFO - HazelcastClusteringAgent Using aws based membership management scheme

and when new members join the cluster, you should see messages such as the following:
[2015-06-23 09:27:08,044]  INFO - AWSBasedMembershipScheme Member joined [5327e2f9-8260-4612-9083-5e5c5d8ad567]: /10.0.0.172:5701

and when members leave the cluster, you should see messages such as the following:
[2015-06-23 09:28:34,364]  INFO - AWSBasedMembershipScheme Member left [b2a30083-1cf1-46e1-87d3-19c472bb2007]: /10.0.0.245:5701


The complete clustering section in the axis2.xml file is given below:
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent"
enable="true">
<parameter name="AvoidInitiation">true</parameter>
<parameter name="membershipScheme">aws</parameter>
<parameter name="domain">wso2.carbon.domain</parameter>

<parameter name="localMemberPort">5701</parameter>
<parameter name="accessKey">xxxxxxxxxxxx</parameter>
<parameter name="secretKey">yyyyyyyyyyyy</parameter>
<parameter name="securityGroup">a_group_name</parameter>
<parameter name="region">us-east-1</parameter>
<parameter name="tagKey">a_tag_key</parameter>
<parameter name="tagValue">a_tag_value</parameter>

<parameter name="properties">
<property name="backendServerURL" value="https://${hostName}:${httpsPort}/services/"/>
<property name="mgtConsoleURL" value="https://${hostName}:${httpsPort}/"/>
<property name="subDomain" value="worker"/>
</parameter>
</clustering>

Afkham Azeez

How AWS Clustering Mode in WSO2 Products Works

In a previous blog post, I explained how to configure WSO2 product clusters to work on Amazon Web Services infrastructure. In this post I will explain how it works.

 WSO2 Clustering is based on Hazelcast.

All nodes having the same set of cluster configuration parameters will belong to the same cluster. What Hazelcast does is call the AWS APIs and then get the set of nodes that satisfy the specified parameters (region, securityGroup, tagKey, tagValue).

When the Carbon server starts up, it creates a Hazelcast cluster. At that point, it calls EC2 APIs & gets the list of potential members in the cluster. To call the EC2 APIs, it needs the AWS credentials. This is the only time these credentials are used. AWS APIs are only used on startup to learn about other potential members in the cluster.

Once the EC2 instances are retrieved, a Hazelcast node will try to connect to potential members that are running on the same port as its localMember port. By default this port is 5701. If that port is open, it will try to do a Hazelcast handshake and add that member if it belongs to the same cluster domain (group). The new member will repeat the process of trying to connect to the next port (i.e. 5702 by default) in increments of 1, until the next port is not reachable.

Here is the pseudocode;

for each EC2 instance e
    port = localMemberPort
    while (canConnect(e, port))
        addMemberIfPossible(e, port)    // a Hazelcast member is running & in the same domain
        port = port + 1

Subsequently, the connections established between members are point-to-point TCP connections. Member failures are detected through a TCP ping. So once the member discovery is done, the rest of the interactions in the cluster are the same as when the multicast & WKA (Well Known Address) modes are used.

With that facility, you don't have to provide any member IP addresses or hostnames, which may be impossible on an IaaS such as EC2.

NOTE: This scheme of trying to establish connections with open Hazelcast ports from one EC2 instance to another does not violate any AWS security policies because the connection establishment attempts are made from nodes within the same security group to ports which are allowed within that security group.

Prabath Siriwardena

GSMA Mobile Connect vs OpenID Connect

Mobile Connect is an initiative by GSMA. The GSMA represents the interests of mobile operators worldwide, uniting nearly 800 operators with more than 250 companies in the broader mobile ecosystem, including handset and device makers, software companies, equipment providers and internet companies, as well as organizations in adjacent industry sectors. The Mobile Connect initiative by GSMA focuses on building a standard for user authentication and identity services between mobile network operators (MNO) and service providers.


This article on medium explains the GSMA Mobile Connect API and shows how it differs from the OpenID Connect core specification.

Sameera Jayasoma

Startup Order Resolving Mechanisms in OSGi

There are a few mechanisms in OSGi to deal with the bundle startup order. The most obvious approach is to use “start levels”. The other approach…

Dhananjaya Jayasinghe

Applying security for ESB proxy services...


Security is a major factor we consider when it comes to each and every deployment. WSO2 Enterprise Service Bus is also capable of securing services.

WSO2 ESB 4.8 and previous versions had the capability of applying security to a proxy service from the Admin Console, as in [1].

However, from ESB 4.9.0, we can no longer apply security to a proxy service from the Admin Console of the ESB. We need to use WSO2 Developer Studio version 3.8 for this requirement in ESB 4.9.0.


You can find the documentation on applying security to an ESB 4.9.0 based proxy service here [2]. However, I would like to add a small modification to the doc in [2] at the end.

After securing the proxy according to the document, we need to create the Composite Application Project and export the CAR file. When exporting the CAR file, by default the server role of the Registry project is selected as GovernanceRegistry, as in the below image.




When we deploy that CAR file in the ESB, we get the following exception [3] due to the above server role.

In order to fix the problem, we need to change the server role to ESB as below, since we are going to deploy it in the ESB.






[1] https://docs.wso2.com/display/ESB481/Securing+Proxy+Services
[2] https://docs.wso2.com/display/ESB490/Applying+Security+to+a+Proxy+Service
[3]

 [2016-04-12 14:34:48,658] INFO - ApplicationManager Deploying Carbon Application : MySecondCarProject1_1.0.1.car...  
[2016-04-12 14:34:48,669] INFO - EndpointDeployer Endpoint named 'SimpleStockQuote' has been deployed from file : /Users/shammi/wso2/Support-Issues/MOTOROLAMOBPROD-44/wso2esb-4.9.0/tmp/carbonapps/-1234/1460496888659MySecondCarProject1_1.0.1.car/SimpleStockQuote_1.0.0/SimpleStockQuote-1.0.0.xml
[2016-04-12 14:34:48,670] INFO - ProxyService Building Axis service for Proxy service : myTestProxy
[2016-04-12 14:34:48,671] WARN - SynapseConfigUtils Cannot convert null to a StreamSource
[2016-04-12 14:34:48,671] ERROR - ProxyServiceDeployer ProxyService Deployment from the file : /Users/shammi/wso2/Support-Issues/MOTOROLAMOBPROD-44/wso2esb-4.9.0/tmp/carbonapps/-1234/1460496888659MySecondCarProject1_1.0.1.car/myTestProxy_1.0.0/myTestProxy-1.0.0.xml : Failed.
org.apache.synapse.SynapseException: Cannot convert null to a StreamSource
at org.apache.synapse.config.SynapseConfigUtils.handleException(SynapseConfigUtils.java:578)
at org.apache.synapse.config.SynapseConfigUtils.getStreamSource(SynapseConfigUtils.java:79)
at org.apache.synapse.core.axis2.ProxyService.getPolicyFromKey(ProxyService.java:822)
at org.apache.synapse.core.axis2.ProxyService.buildAxisService(ProxyService.java:608)
at org.apache.synapse.deployers.ProxyServiceDeployer.deploySynapseArtifact(ProxyServiceDeployer.java:80)
at org.wso2.carbon.proxyadmin.ProxyServiceDeployer.deploySynapseArtifact(ProxyServiceDeployer.java:46)
at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.deploy(AbstractSynapseArtifactDeployer.java:194)
at org.wso2.carbon.application.deployer.synapse.SynapseAppDeployer.deployArtifacts(SynapseAppDeployer.java:130)
at org.wso2.carbon.application.deployer.internal.ApplicationManager.deployCarbonApp(ApplicationManager.java:263)
at org.wso2.carbon.application.deployer.CappAxis2Deployer.deploy(CappAxis2Deployer.java:72)
at org.apache.axis2.deployment.repository.util.DeploymentFileData.deploy(DeploymentFileData.java:136)
at org.apache.axis2.deployment.DeploymentEngine.doDeploy(DeploymentEngine.java:807)
at org.apache.axis2.deployment.repository.util.WSInfoList.update(WSInfoList.java:144)
at org.apache.axis2.deployment.RepositoryListener.update(RepositoryListener.java:377)
at org.apache.axis2.deployment.RepositoryListener.checkServices(RepositoryListener.java:254)
at org.apache.axis2.deployment.RepositoryListener.startListener(RepositoryListener.java:371)
at org.apache.axis2.deployment.scheduler.SchedulerTask.checkRepository(SchedulerTask.java:59)
at org.apache.axis2.deployment.scheduler.SchedulerTask.run(SchedulerTask.java:67)
at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.runAxisDeployment(CarbonDeploymentSchedulerTask.java:93)
at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.run(CarbonDeploymentSchedulerTask.java:138)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
[2016-04-12 14:34:48,672] ERROR - AbstractSynapseArtifactDeployer Deployment of the Synapse Artifact from file : /Users/shammi/wso2/Support-Issues/MOTOROLAMOBPROD-44/wso2esb-4.9.0/tmp/carbonapps/-1234/1460496888659MySecondCarProject1_1.0.1.car/myTestProxy_1.0.0/myTestProxy-1.0.0.xml : Failed!
org.apache.synapse.deployers.SynapseArtifactDeploymentException: ProxyService Deployment from the file : /Users/shammi/wso2/Support-Issues/MOTOROLAMOBPROD-44/wso2esb-4.9.0/tmp/carbonapps/-1234/1460496888659MySecondCarProject1_1.0.1.car/myTestProxy_1.0.0/myTestProxy-1.0.0.xml : Failed.
at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.handleSynapseArtifactDeploymentError(AbstractSynapseArtifactDeployer.java:475)
at org.apache.synapse.deployers.ProxyServiceDeployer.deploySynapseArtifact(ProxyServiceDeployer.java:112)
at org.wso2.carbon.proxyadmin.ProxyServiceDeployer.deploySynapseArtifact(ProxyServiceDeployer.java:46)
at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.deploy(AbstractSynapseArtifactDeployer.java:194)
at org.wso2.carbon.application.deployer.synapse.SynapseAppDeployer.deployArtifacts(SynapseAppDeployer.java:130)
at org.wso2.carbon.application.deployer.internal.ApplicationManager.deployCarbonApp(ApplicationManager.java:263)
at org.wso2.carbon.application.deployer.CappAxis2Deployer.deploy(CappAxis2Deployer.java:72)
at org.apache.axis2.deployment.repository.util.DeploymentFileData.deploy(DeploymentFileData.java:136)
at org.apache.axis2.deployment.DeploymentEngine.doDeploy(DeploymentEngine.java:807)
at org.apache.axis2.deployment.repository.util.WSInfoList.update(WSInfoList.java:144)
at org.apache.axis2.deployment.RepositoryListener.update(RepositoryListener.java:377)
at org.apache.axis2.deployment.RepositoryListener.checkServices(RepositoryListener.java:254)
at org.apache.axis2.deployment.RepositoryListener.startListener(RepositoryListener.java:371)
at org.apache.axis2.deployment.scheduler.SchedulerTask.checkRepository(SchedulerTask.java:59)
at org.apache.axis2.deployment.scheduler.SchedulerTask.run(SchedulerTask.java:67)
at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.runAxisDeployment(CarbonDeploymentSchedulerTask.java:93)
at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.run(CarbonDeploymentSchedulerTask.java:138)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.synapse.SynapseException: Cannot convert null to a StreamSource
at org.apache.synapse.config.SynapseConfigUtils.handleException(SynapseConfigUtils.java:578)
at org.apache.synapse.config.SynapseConfigUtils.getStreamSource(SynapseConfigUtils.java:79)
at org.apache.synapse.core.axis2.ProxyService.getPolicyFromKey(ProxyService.java:822)
at org.apache.synapse.core.axis2.ProxyService.buildAxisService(ProxyService.java:608)
at org.apache.synapse.deployers.ProxyServiceDeployer.deploySynapseArtifact(ProxyServiceDeployer.java:80)
... 22 more
[2016-04-12 14:34:48,673] INFO - AbstractSynapseArtifactDeployer The file has been backed up into : NO_BACKUP_ON_WORKER.INFO
[2016-04-12 14:34:48,673] ERROR - AbstractSynapseArtifactDeployer Deployment of synapse artifact failed. Error reading /Users/shammi/wso2/Support-Issues/MOTOROLAMOBPROD-44/wso2esb-4.9.0/tmp/carbonapps/-1234/1460496888659MySecondCarProject1_1.0.1.car/myTestProxy_1.0.0/myTestProxy-1.0.0.xml : ProxyService Deployment from the file : /Users/shammi/wso2/Support-Issues/MOTOROLAMOBPROD-44/wso2esb-4.9.0/tmp/carbonapps/-1234/1460496888659MySecondCarProject1_1.0.1.car/myTestProxy_1.0.0/myTestProxy-1.0.0.xml : Failed.
org.apache.axis2.deployment.DeploymentException: ProxyService Deployment from the file : /Users/shammi/wso2/Support-Issues/MOTOROLAMOBPROD-44/wso2esb-4.9.0/tmp/carbonapps/-1234/1460496888659MySecondCarProject1_1.0.1.car/myTestProxy_1.0.0/myTestProxy-1.0.0.xml : Failed.
at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.deploy(AbstractSynapseArtifactDeployer.java:201)
at org.wso2.carbon.application.deployer.synapse.SynapseAppDeployer.deployArtifacts(SynapseAppDeployer.java:130)
at org.wso2.carbon.application.deployer.internal.ApplicationManager.deployCarbonApp(ApplicationManager.java:263)
at org.wso2.carbon.application.deployer.CappAxis2Deployer.deploy(CappAxis2Deployer.java:72)
at org.apache.axis2.deployment.repository.util.DeploymentFileData.deploy(DeploymentFileData.java:136)
at org.apache.axis2.deployment.DeploymentEngine.doDeploy(DeploymentEngine.java:807)
at org.apache.axis2.deployment.repository.util.WSInfoList.update(WSInfoList.java:144)
at org.apache.axis2.deployment.RepositoryListener.update(RepositoryListener.java:377)
at org.apache.axis2.deployment.RepositoryListener.checkServices(RepositoryListener.java:254)
at org.apache.axis2.deployment.RepositoryListener.startListener(RepositoryListener.java:371)
at org.apache.axis2.deployment.scheduler.SchedulerTask.checkRepository(SchedulerTask.java:59)
at org.apache.axis2.deployment.scheduler.SchedulerTask.run(SchedulerTask.java:67)
at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.runAxisDeployment(CarbonDeploymentSchedulerTask.java:93)
at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.run(CarbonDeploymentSchedulerTask.java:138)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.synapse.deployers.SynapseArtifactDeploymentException: ProxyService Deployment from the file : /Users/shammi/wso2/Support-Issues/MOTOROLAMOBPROD-44/wso2esb-4.9.0/tmp/carbonapps/-1234/1460496888659MySecondCarProject1_1.0.1.car/myTestProxy_1.0.0/myTestProxy-1.0.0.xml : Failed.
at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.handleSynapseArtifactDeploymentError(AbstractSynapseArtifactDeployer.java:475)
at org.apache.synapse.deployers.ProxyServiceDeployer.deploySynapseArtifact(ProxyServiceDeployer.java:112)
at org.wso2.carbon.proxyadmin.ProxyServiceDeployer.deploySynapseArtifact(ProxyServiceDeployer.java:46)
at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.deploy(AbstractSynapseArtifactDeployer.java:194)
... 20 more
Caused by: org.apache.synapse.SynapseException: Cannot convert null to a StreamSource
at org.apache.synapse.config.SynapseConfigUtils.handleException(SynapseConfigUtils.java:578)
at org.apache.synapse.config.SynapseConfigUtils.getStreamSource(SynapseConfigUtils.java:79)
at org.apache.synapse.core.axis2.ProxyService.getPolicyFromKey(ProxyService.java:822)
at org.apache.synapse.core.axis2.ProxyService.buildAxisService(ProxyService.java:608)
at org.apache.synapse.deployers.ProxyServiceDeployer.deploySynapseArtifact(ProxyServiceDeployer.java:80)
... 22 more
[2016-04-12 14:34:48,674] ERROR - ApplicationManager Error occurred while deploying Carbon Application
org.apache.axis2.deployment.DeploymentException: ProxyService Deployment from the file : /Users/shammi/wso2/Support-Issues/MOTOROLAMOBPROD-44/wso2esb-4.9.0/tmp/carbonapps/-1234/1460496888659MySecondCarProject1_1.0.1.car/myTestProxy_1.0.0/myTestProxy-1.0.0.xml : Failed.
at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.deploy(AbstractSynapseArtifactDeployer.java:213)
at org.wso2.carbon.application.deployer.synapse.SynapseAppDeployer.deployArtifacts(SynapseAppDeployer.java:130)
at org.wso2.carbon.application.deployer.internal.ApplicationManager.deployCarbonApp(ApplicationManager.java:263)
at org.wso2.carbon.application.deployer.CappAxis2Deployer.deploy(CappAxis2Deployer.java:72)
at org.apache.axis2.deployment.repository.util.DeploymentFileData.deploy(DeploymentFileData.java:136)
at org.apache.axis2.deployment.DeploymentEngine.doDeploy(DeploymentEngine.java:807)
at org.apache.axis2.deployment.repository.util.WSInfoList.update(WSInfoList.java:144)
at org.apache.axis2.deployment.RepositoryListener.update(RepositoryListener.java:377)
at org.apache.axis2.deployment.RepositoryListener.checkServices(RepositoryListener.java:254)
at org.apache.axis2.deployment.RepositoryListener.startListener(RepositoryListener.java:371)
at org.apache.axis2.deployment.scheduler.SchedulerTask.checkRepository(SchedulerTask.java:59)
at org.apache.axis2.deployment.scheduler.SchedulerTask.run(SchedulerTask.java:67)
at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.runAxisDeployment(CarbonDeploymentSchedulerTask.java:93)
at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.run(CarbonDeploymentSchedulerTask.java:138)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.axis2.deployment.DeploymentException: ProxyService Deployment from the file : /Users/shammi/wso2/Support-Issues/MOTOROLAMOBPROD-44/wso2esb-4.9.0/tmp/carbonapps/-1234/1460496888659MySecondCarProject1_1.0.1.car/myTestProxy_1.0.0/myTestProxy-1.0.0.xml : Failed.
at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.deploy(AbstractSynapseArtifactDeployer.java:201)
... 20 more
Caused by: org.apache.synapse.deployers.SynapseArtifactDeploymentException: ProxyService Deployment from the file : /Users/shammi/wso2/Support-Issues/MOTOROLAMOBPROD-44/wso2esb-4.9.0/tmp/carbonapps/-1234/1460496888659MySecondCarProject1_1.0.1.car/myTestProxy_1.0.0/myTestProxy-1.0.0.xml : Failed.
at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.handleSynapseArtifactDeploymentError(AbstractSynapseArtifactDeployer.java:475)
at org.apache.synapse.deployers.ProxyServiceDeployer.deploySynapseArtifact(ProxyServiceDeployer.java:112)
at org.wso2.carbon.proxyadmin.ProxyServiceDeployer.deploySynapseArtifact(ProxyServiceDeployer.java:46)
at org.apache.synapse.deployers.AbstractSynapseArtifactDeployer.deploy(AbstractSynapseArtifactDeployer.java:194)
... 20 more
Caused by: org.apache.synapse.SynapseException: Cannot convert null to a StreamSource
at org.apache.synapse.config.SynapseConfigUtils.handleException(SynapseConfigUtils.java:578)
at org.apache.synapse.config.SynapseConfigUtils.getStreamSource(SynapseConfigUtils.java:79)
at org.apache.synapse.core.axis2.ProxyService.getPolicyFromKey(ProxyService.java:822)
at org.apache.synapse.core.axis2.ProxyService.buildAxisService(ProxyService.java:608)
at org.apache.synapse.deployers.ProxyServiceDeployer.deploySynapseArtifact(ProxyServiceDeployer.java:80)
... 22 more



Amila Maharachchi

Tomcat returns 400 for requests with long headers

We noticed this while troubleshooting an issue which popped up in WSO2 Cloud. We have configured SSO for the API Publisher and Store at WSO2 Identity Server. SSO was working fine except for one scenario. We checked the SSO configuration and couldn't find anything wrong.

Then we checked the load balancer logs. They revealed that the LB was passing the request to the server, i.e. the Identity Server, but was getting a 400 from it. Then we looked at the Identity Server logs, to find nothing printed there. But there were entries in the access log of the Identity Server which told us it was getting the request, but not letting it go through. Instead it was dropping it as a bad request and returning a 400 response.

We did some searching on the internet and found out that this kind of rejection can occur if the header values are too long. In the SAML SSO scenario, there is a referrer header which sends a lengthy value that was about 4000 characters long. On further searching, we found the property maxHttpHeaderSize in the Tomcat configs, where we can configure the maximum HTTP header size allowed in bytes. You can read about this config from here.
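As a sketch, assuming the HTTPS connector in a WSO2 product's repository/conf/tomcat/catalina-server.xml (the value is in bytes; Tomcat's default is 8192), the attribute is set on the Connector element:

<!-- only maxHttpHeaderSize is added; the remaining connector attributes are left as they were -->
<Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
           port="9443"
           maxHttpHeaderSize="16384"/>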

Once we increased that value, everything started working fine. So, I thought of blogging this down for the benefit of people using Tomcat, and also WSO2 products, since WSO2 products have Tomcat embedded in them.

Dinusha Senanayaka

Exposing a SOAP service as a REST API using WSO2 API Manager

This post explains how we can publish an existing SOAP service as a  REST API using WSO2 API Manager.

We will be using a sample data service called "OrderSvc" as the SOAP service, which can be deployed in WSO2 Data Services Server. But this could be any SOAP service.

1. Service Description of ‘OrderSvc’ SOAP Backend Service

This “orderSvc” service provides a WSDL with 3 operations (“submitOrder”, “cancelOrder”, “getOrderStatus”).


submitOrder operation takes ProductCode and Quantity as parameters.
Sample request :
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:dat="http://ws.wso2.org/dataservice">
  <soapenv:Header/>
  <soapenv:Body>
     <dat:submitOrder>
        <dat:ProductCode>AA_1</dat:ProductCode>
        <dat:Quantity>4</dat:Quantity>
     </dat:submitOrder>
  </soapenv:Body>

</soapenv:Envelope>

cancelOrder operation takes OrderId as parameter and does an immediate cancellation and returns a confirmation code.
Sample request:
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:dat="http://ws.wso2.org/dataservice">
  <soapenv:Header/>
  <soapenv:Body>
     <dat:cancelOrder>
        <dat:OrderId>16</dat:OrderId>
     </dat:cancelOrder>
  </soapenv:Body>
</soapenv:Envelope>

The orderStatus operation takes the orderId as a parameter and returns the order status as the response.
Sample request :
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:dat="http://ws.wso2.org/dataservice">
  <soapenv:Header/>
  <soapenv:Body>
     <dat:orderStatus>
        <dat:OrderId>16</dat:OrderId>
     </dat:orderStatus>
  </soapenv:Body>
</soapenv:Envelope>


We need to expose this "OrderSvc" SOAP service as a REST API using API Manager. Once we expose this as a REST API, the “submitOrder”, “cancelOrder”, “getOrderStatus” operations should map to REST resources as below, which take the user parameters as part of the request URL.

“/submitOrder” (POST) => request does not contain order id or date; response is the full order payload.

“/cancelOrder/{id}” (GET) => does an immediate cancellation and returns a confirmation code.

“/orderStatus/{id}” (GET) => response is the order header (i.e., payload excluding line items).


Deploying the Data-Service :

1. Log in to MySQL and create a database called “demo_service_db”. (This database name can be anything; we need to update the data service (.dbs file) accordingly.)

mysql> create database demo_service_db;
mysql> use demo_service_db;

2. Execute the dbscript given here on the above created database. This will create two tables, ‘CustomerOrder’ and ‘OrderStatus’, and one stored procedure, ‘submitOrder’. It will also insert some sample data into the two tables.

3. Include the MySQL JDBC driver in the DSS_HOME/repository/components/lib directory.


4. Download the data service file given here. Before deploying this .dbs file, we need to modify the data source section defined in it, i.e. in the downloaded orderSvc.dbs file, change the following properties by providing the correct JDBC URL (it needs to point to the database that you created in step 1), and change the username/password of the MySQL connection if they are different from the ones defined here.


<config id="ds1">
     <property name="driverClassName">com.mysql.jdbc.Driver</property>
     <property name="url">jdbc:mysql://localhost:3306/demo_service_db</property>
     <property name="username">root</property>
     <property name="password">root</property>
  </config>


5. Deploy the orderSvc.dbs file in the Data Services Server by copying this file into the “wso2dss-3.2.1/repository/deployment/server/dataservices” directory. Start the server.


6. Before exposing it through API Manager, check whether all three operations work as expected using the try-it tool or SOAP-UI.


submitOrder
Sample request :
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:dat="http://ws.wso2.org/dataservice">
  <soapenv:Header/>
  <soapenv:Body>
     <dat:submitOrder>
        <dat:ProductCode>AA_1</dat:ProductCode>
        <dat:Quantity>4</dat:Quantity>
     </dat:submitOrder>
  </soapenv:Body>
</soapenv:Envelope>


Response :
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
     <submitOrderResponse xmlns="http://ws.wso2.org/dataservice">
        <OrderId>16</OrderId>
        <ProductCode>AA_1</ProductCode>
        <Quantity>4</Quantity>
     </submitOrderResponse>
  </soapenv:Body>
</soapenv:Envelope>


cancelOrder
Sample request:
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:dat="http://ws.wso2.org/dataservice">
  <soapenv:Header/>
  <soapenv:Body>
     <dat:cancelOrder>
        <dat:OrderId>16</dat:OrderId>
     </dat:cancelOrder>
  </soapenv:Body>
</soapenv:Envelope>


Response:
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
     <axis2ns1:REQUEST_STATUS xmlns:axis2ns1="http://ws.wso2.org/dataservice">SUCCESSFUL</axis2ns1:REQUEST_STATUS>
  </soapenv:Body>
</soapenv:Envelope>


orderStatus
Sample request :
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:dat="http://ws.wso2.org/dataservice">
  <soapenv:Header/>
  <soapenv:Body>
     <dat:orderStatus>
        <dat:OrderId>16</dat:OrderId>
     </dat:orderStatus>
  </soapenv:Body>
</soapenv:Envelope>


Response:
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
     <OrderStatus xmlns="http://ws.wso2.org/dataservice">
        <OrderStatus>CANCELED</OrderStatus>
     </OrderStatus>
  </soapenv:Body>

</soapenv:Envelope>



2. Configuring API Manager

1. Download the custom sequence given here and save it to the APIM registry location “/_system/governance/apimgt/customsequences/in/”. This can be done by logging in to the API Manager carbon management console.

In the left menu section, expand Resources -> Browse -> go to "/_system/governance/apimgt/customsequences/in" -> click on "Add Resource" -> browse the file system and upload the "orderSvc_supporting_sequence.xml" sequence that you downloaded above. Then click "Add". This step will save the downloaded sequence into the registry.


4. Create orderSvc API by wrapping orderSvc SOAP service.

Log in to the API Publisher and create an API with the following info.

Name: orderSvc
Context: ordersvc
Version: v1

Resource definition1
URL Pattern: submitOrder
Method: POST

Resource definition2
URL Pattern: cancelOrder/{id}
Method: GET

Resource definition3
URL Pattern: orderStatus/{id}
Method: GET


Endpoint Type: Select the endpoint type as Address endpoint. Then go to the “Advanced Options” and select the message format as “SOAP 1.1”.
Production Endpoint: https://localhost:9446/services/orderSvc/ (give the OrderSvc service endpoint)

Tier Availability : Unlimited

Sequences: Tick the Sequences checkbox and select the previously saved custom sequence under “In Flow”.


Publish the API into gateway.

We are done with the API creation.

Functionality of the custom sequence "orderSvc_supporting_sequence.xml"

The OrderSvc backend service expects a SOAP request, while the user invokes the API by sending parameters in the request URL (i.e. cancelOrder/{id}, orderStatus/{id}).

This custom sequence takes care of building the SOAP payload required for the cancelOrder and orderStatus operations by looking at the incoming request URI and the parameters.

Using a switch mediator, it reads the request path, i.e.

<switch xmlns:soapenv="http://www.w3.org/2003/05/soap-envelope" xmlns:ns3="http://org.apache.synapse/xsd" source="get-property('REST_SUB_REQUEST_PATH')">

Then it checks the value of the request path using a regular expression and constructs the payload for either cancelOrder or orderStatus according to the matched resource, i.e.

<case regex="/cancelOrder.*">
<payloadFactory media-type="xml">
<format>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
<soapenv:Header/>
<soapenv:Body xmlns:dat="http://ws.wso2.org/dataservice">
<dat:cancelOrder>
<dat:OrderId>$1</dat:OrderId>
</dat:cancelOrder>
</soapenv:Body>
</soapenv:Envelope>
</format>
<args>
<arg evaluator="xml" expression="get-property('uri.var.id')"/>
</args>
</payloadFactory>
<header name="Action" scope="default" value="urn:cancelOrder"/>
</case>

<case regex="/orderStatus.*">
<payloadFactory media-type="xml">
<format>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
<soapenv:Header/>
<soapenv:Body xmlns:dat="http://ws.wso2.org/dataservice">
<dat:orderStatus>
<dat:OrderId>$1</dat:OrderId>
</dat:orderStatus>
</soapenv:Body>
</soapenv:Envelope>
</format>
<args>
<arg evaluator="xml" expression="get-property('uri.var.id')"/>
</args>
</payloadFactory>
<header name="Action" scope="default" value="urn:orderStatus"/>
</case>


Test OrderSvc API published in API Manager

Log in to the API Store, subscribe to the OrderSvc API and generate an access token. Invoke the orderStatus resource as given below. This will call the OrderSvc SOAP service and give you the response.

curl -v -H "Authorization: Bearer  _smfAGO3U6mhzFLro4bXVEl71Gga" http://localhost:8280/order/v1/orderStatus/3

Chandana Napagoda

Configure External Solr server with Governance Registry

In WSO2 Governance Registry 5.0.0, we have upgraded the Apache Solr version to the 5.2 release. With that, you can connect WSO2 Governance Registry to an external Solr server or Solr cluster. External Solr integration provides comprehensive administration interfaces, high scalability and fault tolerance, easy monitoring, and many more Solr capabilities.

Let me explain how you can connect WSO2 Governance Registry server with an external Apache Solr server.

1). First, you have to download Apache Solr 5.x.x from the below location.
http://lucene.apache.org/solr/mirrors-solr-latest-redir.html
Please note that we have verified with Solr 5.2.0 and 5.2.1 versions only.

2). Then unzip the Solr zip file. Once unzipped, its contents will look like the below.



The bin folder contains the scripts to start and stop the server. Before starting the Solr server, you have to make sure the JAVA_HOME variable is set properly. Apache Solr ships with an inbuilt Jetty server.

3). You can start the Solr server by issuing the "solr start" command from the bin directory. Once the Solr server has started properly, the following message will be displayed in the console: "Started Solr server on port 8983 (pid=5061). Happy searching!"

By default, server starts on port "8983" and you can access the Solr admin console by navigating to "http://localhost:8983/solr/".

4). To create a new Solr Core, you have to copy the Solr configuration directory (registry-indexing) found in G-REG_HOME/repository/conf/solr to the SOLR_HOME/server/solr/ directory. Please note that only the "registry-indexing" directory needs to be copied from the G-Reg pack. This will create a new Solr Core named "registry-indexing".

5). After creating "registry-indexing" Solr core, you can see it from the Solr admin console as below.



6). To integrate the newly created Solr core with WSO2 Governance Registry, you have to modify the registry.xml file located in the <greg_home>/repository/conf directory. There you have to add "solrServerUrl" under indexingConfiguration as follows, and comment out the "IndexingHandler".

    <!-- This defines the index configuration which is used in the meta data search feature of the registry -->

<indexingConfiguration>
<solrServerUrl>http://localhost:8983/solr/registry-indexing</solrServerUrl>
<startingDelayInSeconds>35</startingDelayInSeconds>
<indexingFrequencyInSeconds>3</indexingFrequencyInSeconds>
<!--number of resources submit for given indexing thread -->
<batchSize>50</batchSize>
<!--number of worker threads for indexing -->
<indexerPoolSize>50</indexerPoolSize>
.................................
</indexingConfiguration>


7). After completing the external Solr configuration as above, you have to start the WSO2 Governance Registry server. If you have configured the external Solr integration properly, you will notice the below log message in the Governance Registry server startup logs (wso2carbon log).

[2015-07-11 12:50:22,306] INFO {org.wso2.carbon.registry.indexing.solr.SolrClient} - Http Sorl server initiated at: http://localhost:8983/solr/registry-indexing

Further, you can view indexed data by querying via Solr admin console as well.

Happy Indexing and Searching...!!!

Note (added in March 2016): If you are moving from an older G-Reg version to the latest one, you have to replace the existing Solr Core (registry-indexing) with the latest one available in the G-Reg pack.

Chamila Wijayarathna

Extending WSO2 Identity Server to Engage Workflows with non User Store Operations

In my previous blog, I described adding more control to a user store operation using workflows. By default, Identity Server only supports engaging workflows with user store operations. But is this limited to user store operations? No; you can engage any operation with a workflow, as long as there is an interceptor from which we can start a workflow when the event occurs.

Before seeing how to achieve this, let's try out a simple example. Here, I am going to demonstrate controlling the 'adding service provider' operation using workflows. For this I am going to use the sample workflow event handler available at [1].

Let's first clone the source code of this sample handler and build it. Then we should put the jar created in the target folder of the handler source into the repository/components/dropins folder of your Identity Server. Now start the Identity Server.

Now as usual, first you have to create the roles and users required for the approval process and then create a workflow with desired approval steps as I described in my previous blog [2].

If you have followed my previous blog [2], the steps up to this point should be very comfortable for you. You know that after creating the workflow with approval steps, the next task is engaging the operation with the workflow. Here we are planning to engage the 'add service provider' operation, which is a non user store operation, with this workflow.

In the 'add workflow engagement' page, by default, it will only show user store operations as the operations that can be engaged with a workflow. But now, since we have added the new service-provider workflow handler, it will show service provider related operations in that UI as well.



Now we can fill in the rest of the 'add workflow engagement' form in the usual way.

Now we have engaged the 'add service provider' operation with an approval process. If we add a new service provider, it will not be directly added until it is accepted in the approval process. Only after it is approved will it be shown in the UI and be usable.

So now we know that not only user store operations but other operations can also be engaged with workflows. The most challenging part here is writing the custom event handler. I'm not going to describe that part here, even though it's the most important part, because it's already covered in the WSO2 docs at [3].

[1]. https://github.com/wso2/product-is/tree/master/modules/samples/workflow/handler/service-provider
[2]. http://cdwijayarathna.blogspot.com/2016/04/making-use-of-wso2-identity-servers.html
[3]. https://docs.wso2.com/display/IS510/Writing+a+Custom+Event+Handler

Prabath Siriwardena

Thirty Solution Patterns with the WSO2 Identity Server

WSO2 offers a comprehensive open source product stack to cater to all needs of a connected business. With the single code base structure, WSO2 products are weaved together to solve many enterprise-level complex identity management and security problems. By believing in open standards and supporting most of the industry-leading protocols, the WSO2 Identity Server is capable of providing seamless integration with a wide array of vendors in the identity management domain. The WSO2 Identity Server is one of the most powerful open source Identity and Entitlement Management servers, released under the business-friendly Apache 2.0 license.


This article on medium explains thirty solution patterns, built with the WSO2 Identity Server and other WSO2 products to solve enterprise-level security and identity management related problems.

Chamara Silva

How to generate random strings or numbers from JMeter

While testing SOAP services, we often need JMeter scripts to generate random strings or numbers as service parameters. I had a SOAP service that needed to be sent a name (string value) and an age (integer value) continuously, where the values should not repeat and needed to be random. I used the __Random and __RandomString functions to generate these values. The following JMeter script may help.
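
As a minimal sketch (the service and element names are illustrative assumptions, not the original script), the functions can be placed directly inside the SOAP sampler's request body. JMeter evaluates them per sample, producing a fresh random name and age on every request; note that randomness alone does not strictly guarantee uniqueness:

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ser="http://service.sample.org">
   <soapenv:Body>
      <ser:addUser>
         <!-- an 8-character random lowercase name -->
         <ser:name>${__RandomString(8,abcdefghijklmnopqrstuvwxyz)}</ser:name>
         <!-- a random integer between 18 and 80 -->
         <ser:age>${__Random(18,80)}</ser:age>
      </ser:addUser>
   </soapenv:Body>
</soapenv:Envelope>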

Dhananjaya Jayasinghe

How to get the Client's IP Address in WSO2 API Manager / WSO2 ESB

Middleware solutions are designed to communicate with multiple parties, and most deployments are integrations. While integrating different systems, it is necessary to validate the requests and collect statistics. When it comes to collecting statistics, the client's / request originator's IP address plays a vital role.

In order to publish the client's IP to the stat collector, we need to extract the client's IP from the request received by the server.

When the deployment contains WSO2 API Manager or WSO2 Enterprise Service Bus, we can obtain the client's IP address using a property mediator in the InSequence.

If the deployment has a load balancer in front of the ESB / API Manager, we can use the X-Forwarded-For header, as explained in Firzhan's blog post.
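
For completeness, a minimal sketch of reading that header with a property mediator (assuming the load balancer sets the standard X-Forwarded-For header; its value may be a comma-separated list in which the first entry is the original client):

<property name="CLIENT_IP_BEHIND_LB"
expression="get-property('transport', 'X-Forwarded-For')"/>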

In a deployment which does not have a load balancer in front of WSO2 ESB / API Manager, we can use REMOTE_ADDR to obtain the client's IP address.

We can extract it as follows using a property mediator.


 <property name="api.ut.REMOTE_ADDR"
expression="get-property('axis2','REMOTE_ADDR')"/>

Then we can use it in the sequence. As an example, if we extract the IP address as above and log it, the synapse configuration will look like below.


<property name="api.ut.REMOTE_ADDR"
expression="get-property('axis2','REMOTE_ADDR')"/>
<log level="full">
<property name="Actual Remote Address"
expression="get-property('api.ut.REMOTE_ADDR')"/>
</log>

You can use this in the InSequence of ESB or API Manager to obtain the client's IP Address.

Chathurika Erandi De Silva

Why Message Enriching? - A Simple Answer

 What is Message Enriching?


Message enriching normally happens when the incoming request does not contain all the information the backend is expecting. The message can be enriched by inserting data into the request midway, as needed.

Graphically, it can be illustrated as below.

[Figure: Message Enriching]
Golden Rule of Message Enriching (my version)

Of course, there are a lot of use cases where enriching can be used, but ultimately they can be narrowed down to the following three, to keep things simple:

1. The message is enriched through a calculation using the existing values
2. The message is enriched using values from the environment
3. The message is enriched using values from external systems, databases, etc...

WSO2 ESB into the equation

Now we have to see where WSO2 ESB fits into the picture. The Enrich mediator can be used to achieve message enriching. The following samples are basic demonstrations designed to cover the above-mentioned "Golden Rules".

The message is enriched through a calculation using the existing values

For demonstration, I have created a sample sequence with an Enrich mediator in it. This sequence takes in the request, matches a parameter in the request against a predefined one, and enriches the message when the condition is true.

Sample Sequence





In the above, when a request reaches the ESB with customerType as 1, 2, 3 or 4, a reference value is assigned to customerType, because the backend expects customerType to come in as either gold, platinum, silver or bronze.
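
Since the original image is not shown here, below is a minimal sketch of such a sequence, assuming a customerType element in the request body; the element names and values are illustrative. A switch mediator matches the numeric code and an enrich mediator replaces the element with the reference value the backend expects:

<sequence xmlns="http://ws.apache.org/ns/synapse" name="enrich_customer_type_seq">
   <switch source="//customerType">
      <case regex="1">
         <enrich>
            <source type="inline" clone="true">
               <customerType>gold</customerType>
            </source>
            <target action="replace" xpath="//customerType"/>
         </enrich>
      </case>
      <case regex="2">
         <enrich>
            <source type="inline" clone="true">
               <customerType>platinum</customerType>
            </source>
            <target action="replace" xpath="//customerType"/>
         </enrich>
      </case>
      <!-- cases 3 (silver) and 4 (bronze) follow the same pattern -->
   </switch>
</sequence>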

Now let's look at the Golden Rule #2

The message is enriched using values from the environment

This rule is relatively simple. If the request is missing a certain value, and that value can be obtained from the environment, then it is injected into the request.

Sample Sequence




In the above, a SystemDate element is inserted into the request, and its value is later populated through the enrich mediator.
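
Since the image is not available here either, the following is a minimal sketch of such a sequence, assuming an Order element in the request body; the element and property names are illustrative. The current date is read from the environment with get-property('SYSTEM_DATE'), an empty SystemDate element is added, and it is then populated from the property:

<sequence xmlns="http://ws.apache.org/ns/synapse" name="enrich_system_date_seq">
   <!-- capture the current date from the environment -->
   <property name="systemDate" expression="get-property('SYSTEM_DATE')"/>
   <!-- add an empty SystemDate element to the payload -->
   <enrich>
      <source type="inline" clone="true">
         <SystemDate/>
      </source>
      <target action="child" xpath="//Order"/>
   </enrich>
   <!-- populate the new element from the property -->
   <enrich>
      <source type="property" clone="true" property="systemDate"/>
      <target action="child" xpath="//Order/SystemDate"/>
   </enrich>
</sequence>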

The final Golden Rule, Rule # 3


The message is enriched using values from external systems, databases, etc...

This is the simplest. Put in simple words, this rule says: if you don't have it, ask someone who does, and include it in the request.

Sample Sequence



In the above, the request doesn't have the customer id; it is inserted and populated through the enrich mediator. The customer id is obtained from the database using the DbLookup mediator.
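
Again, as the image is unavailable, here is a minimal sketch of such a sequence; the database URL, credentials, SQL and element names are illustrative assumptions. A DbLookup mediator fetches the id into a property, and enrich mediators insert and populate a customerId element:

<sequence xmlns="http://ws.apache.org/ns/synapse" name="enrich_customer_id_seq">
   <!-- look up the customer id from an external database -->
   <dblookup>
      <connection>
         <pool>
            <driver>com.mysql.jdbc.Driver</driver>
            <url>jdbc:mysql://localhost:3306/customers</url>
            <user>esb</user>
            <password>esb</password>
         </pool>
      </connection>
      <statement>
         <sql>SELECT id FROM customer WHERE name = ?</sql>
         <parameter expression="//Order/customerName" type="VARCHAR"/>
         <result name="customerId" column="id"/>
      </statement>
   </dblookup>
   <!-- insert an empty customerId element and populate it from the property -->
   <enrich>
      <source type="inline" clone="true">
         <customerId/>
      </source>
      <target action="child" xpath="//Order"/>
   </enrich>
   <enrich>
      <source type="property" clone="true" property="customerId"/>
      <target action="child" xpath="//Order/customerId"/>
   </enrich>
</sequence>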

Winding up: the Golden Rules above are purely based on my understanding, and of course anyone who reads further can come up with a better set of Golden Rules.




Asanka Dissanayake

Validate a URL with Filter Mediator in WSO2 ESB

In day-to-day development life, you may have come across this requirement a lot of times. When you are getting a URL as a field in the request, you may need to validate it: whether the URL has the correct structure, or whether it contains any disallowed characters.

This can be achieved using the filter mediator in WSO2 ESB.

The matter is figuring out the correct regular expression. The code structure would be as follows.

<filter source="//url" regex="REGEX">
	<then>
		<log level="custom">
			<property name="propStatus" value="url is valid" />
		</log>
	</then>
	<else>
		<log level="custom">
			<property name="propStatus" value="!!!!url is not valid!!!!" />
		</log>
	</else>
</filter>

Refer to the following table to figure out the regular expression for each use case.

Use case: http or https with host name/domain name and optional port
Regex: http(s)?:\/\/((\w+\.+){1,})+(\w+){1}\w*(\w)*([: \/][0-9]+)*
Sample: http://www.asanka.com:2131

Use case: URL with query parameters; special characters other than ?, & and = not allowed
Regex: http(s)?:\/\/((\w+\.+){1,})+(\w+){1}\w*(\w)*([: \/][0-9]+)*[\/\w\?=&]*
Sample: https://www.asanka.com:2131/user/info?id=2&role=admin

Use case: URL without query parameters
Regex: http(s)?:\/\/((\w+\.+){1,})+(\w+){1}\w*(\w)*([: \/][0-9]+)*[\/\w]*
Sample: https://www.asanka.com:2131/user/info

Use case: URL with query parameters and special characters
Regex: http(s)?:\/\/((\w+\.+){1,})+(\w+){1}\w*(\w)*([: \/][0-9]+)*[\/\w\W]*
Sample: https://www.asanka.com:2131/user/info?id=2&role=admin
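
For instance, plugging the first regex from the table into the skeleton above gives a filter that accepts a host name / domain name with an optional port:

<filter source="//url" regex="http(s)?:\/\/((\w+\.+){1,})+(\w+){1}\w*(\w)*([: \/][0-9]+)*">
	<then>
		<log level="custom">
			<property name="propStatus" value="url is valid" />
		</log>
	</then>
	<else>
		<log level="custom">
			<property name="propStatus" value="!!!!url is not valid!!!!" />
		</log>
	</else>
</filter>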

You can play around with this using the following API.

https://github.com/asanka88/BlogSamples/blob/master/ESB/urlvalidate.xml

Payload:

<data>
<url>https://www.asanka.com/asdasd?a=b</url>
</data>

API URL:

http://localhost:8280/url/validate

 

Try changing the regex and url values from the above table.

 

Happy Coding !!!:)


Asanka Dissanayake

Check for existence of element/property with Filter Mediator in WSO2 ESB

In day-to-day development, sometimes you will need to check for the existence of an element or a property; in other words, you will need to check whether something is null. This can easily be done in WSO2 ESB using the filter mediator.

Learning by example is the best way.

Let’s take a simple example. Suppose there is an incoming payload like the below.

<user>
 <name>asanka</name>
 <role>admin</role>
</user>

Suppose you need to read the role field into a property, and suppose <role/> is an optional element. In that case, what are you going to do?

The expected behavior is: if the role does not come with the payload, the default role “generic_user” is used.

So the following filter mediator code segment will do it for you.

<filter xpath="//role">
   <then>
      <log level="custom">
         <property name="propStatus" value="role is available" />
      </log>
      <property name="userRole" expression="//role" />
   </then>
   <else>
      <log level="custom">
         <property name="propStatus" value="!!!!role is not available!!!!" />
      </log>
      <property name="userRole" value="generic_user" />
   </else>
</filter>

The “xpath” attribute in the filter element provides the XPath expression to be evaluated.
If the XPath expression evaluates to “true”, the synapse code in the “then” block will be executed.
Otherwise, the code in the “else” block will be executed.

If the evaluation of the XPath returns something that is not null, it is considered true; if it returns null, it is considered false.

If you want to play with this, create filter.xml with the following content and copy it to

$CARBON_HOME/repository/deployment/server/synapse-configs/default/api

https://github.com/asanka88/BlogSamples/blob/master/ESB/nullcheck.xml

and make an HTTP POST to http://localhost:8280/user/rolecheck with each of the following payloads.

<user>
 <name>asanka</name>
 <role>admin</role>
</user>

Check the log file and you will see the following output.

[2016-04-11 22:49:38,041] INFO - LogMediator propStatus = role is available
[2016-04-11 22:49:38,042] INFO - LogMediator status = ====Final user Role====, USER_ROLE = admin

<user>
 <name>asanka</name>
</user>

Check the log file and you will see the following output.

[2016-04-11 22:49:43,083] INFO - LogMediator propStatus = !!!!role is not available!!!!
[2016-04-11 22:49:43,084] INFO - LogMediator status = ====Final user Role====, USER_ROLE = generic_user


Hope this helps someone :) Happy coding…


Prabath Siriwardena

Securing Microservices with OAuth 2.0, JWT and XACML

Microservices is one of the most trending buzzwords, along with the Internet of Things (IoT). Everyone talks about microservices, and everyone wants to have microservices implemented. The term ‘microservice’ was first discussed at a software architects' workshop in Venice, in May 2011, where it was used to describe a common architectural style the participants had been witnessing for some time. With the granularity of the services and the frequent interactions between them, securing microservices is challenging. This post, which I published on Medium, presents a security model based on OAuth 2.0, JWT and XACML to overcome such challenges.

Ushani Balasooriya

How to hide credentials used in mediation configuration using Secure Vault in WSO2 ESB

Even though we use secure vault to encrypt passwords, it is not possible to use secure vault directly in the mediation configuration. As an example, imagine you need to hide a password given in a proxy.

All you have to do is use the Secure Vault Password Management screen in WSO2 ESB.


1. Run sh ciphertool.sh -Dconfigure and enable secure vault
2. Start the WSO2 ESB with
3. Go to  Manage -> Secure Vault Tool and then click Manage Passwords
4. You will see the below screen.
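
Once an alias has been stored through the Manage Passwords screen, it can be referenced from the mediation configuration with the vault-lookup XPath extension. A minimal sketch, assuming a hypothetical alias name proxy.admin.password:

<property name="adminPassword"
expression="wso2:vault-lookup('proxy.admin.password')"/>

The property then carries the decrypted value at runtime, so the plain-text password never appears in the proxy configuration.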

Srinath Perera

Understanding Causality and Big Data: Complexities, Challenges, and Tradeoffs

image credit: Wikipedia, Amitchell125

“Does smoking cause cancer?”

We have heard that a lot of smokers have lung cancer. However, can we mathematically tell that smoking causes cancer?

We can look at cancer patients and check how many of them smoke. We can look at smokers and check whether they develop cancer. Let’s assume that the answers come up 100%. That is, hypothetically, we see a 1–1 relationship between smoking and cancer.

OK great, can we claim that smoking causes cancer? Apparently it is not easy to make that claim. Let’s assume that there is a gene that causes cancer and also makes people like to smoke. If that is the case, we will see the 1–1 relationship between cancer and smoking. In this scenario, cancer is caused by the gene. That means there may be an innocent explanation for the 1–1 relationship we saw between cancer and smoking.

This example shows two interesting concepts: correlation and causality from statistics, which play a key role in Data Science and Big Data. Correlation means that we will see two readings behave together (e.g. smoking and cancer) while causality means one is the cause of the other. The key point is that if there is a causality, removing the first will change or remove the second. That is not the case with correlation.

Correlation does not mean Causation!

This difference is critical when deciding how to react to an observation. If there is causality between A and B, then A is responsible. We might decide to punish A in some way, or we might decide to control A. However, correlation does not warrant such actions.

For example, as described in the post The Blagojevich Upside, the state of Illinois found that having books at home is highly correlated with better test scores, even if the kids have not read them. So they decided to distribute books. In retrospect, we can easily find a common cause: having books at home could be an indicator of how studious the parents are, which will help with better scores. Sending books home, however, is unlikely to change anything.

You see correlation without causality when there is a common cause that drives both readings. This is a common theme of the discussion. You can find a detailed discussion on causality in the talk “Challenges in Causality” by Isabelle Guyon.

Can we prove Causality?

Great, how can I show causality? Causality is measured through randomized experiments (a.k.a. randomized trials or A/B tests). A randomized experiment selects samples and randomly breaks them into two groups called the control and the variation. Then we apply the cause (e.g. send a book home) to the variation group and measure the effects (e.g. test scores). Finally, we measure the causality by comparing the effect in the control and variation groups. This is how medications are tested.

To be precise, if the error bars of the two groups do not overlap, then there is causality. Check https://www.optimizely.com/ab-testing/ for more details.
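
As a rough sketch of what “error bars” means here (a standard approximation, not from the original post): for each group, one typically plots the mean ± 1.96 × s/√n, where s is the group's sample standard deviation and n its size; non-overlapping intervals suggest the difference between control and variation is unlikely to be chance.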

However, that is not always practical. For example, if you want to prove that smoking causes cancer, you need to first select a population, place them randomly into two groups, make half of them smoke, and make sure the other half does not smoke. Then wait for about 50 years and compare.

Did you see the catch? It is not good enough to compare smokers and non-smokers, as there may be a common cause, like the gene, that makes them do so. To prove causality, you need to randomly pick people and ask some of them to smoke. Well, that is not ethical. So this experiment can never be done. Actually, this argument has been used before (e.g. https://en.wikipedia.org/wiki/A_Frank_Statement).

This can get funnier. If you want to prove that greenhouse gasses cause global warming, you need to find another copy of Earth, apply greenhouse gasses to one, and wait a few hundred years!

To summarize, causality sometimes might be very hard to prove, and you really need to differentiate between correlation and causality.

Following are examples of cases where causality is needed.

  • Before punishing someone
  • Diagnosing a patient
  • Measure effectiveness of a new drug
  • Evaluate the effect of a new policy (e.g. new Tax)
  • To change a behavior

Big Data and Causality

Most big data datasets are observational data collected from the real world; hence, there is no control group. Therefore, most of the time, all you can show is correlation, and it is very hard to prove causality.

There are two reactions to this problem.

First, “Big data guys do not understand what they are doing. It is stupid to try to draw conclusions without a randomized experiment.”

I find this view lazy.

Obviously, there is a lot of interesting knowledge in observational data. If we can find a way to use it, that will let us apply these techniques in many more applications. We need to figure out a way to use it and stop complaining. If current statistics does not know how to do it, we need to find a way.

Second is “forget causality! correlation is enough”.

I find this view blind.

Playing ostrich does not make the problem go away. This kind of crude generalization makes people do stupid things and can limit the adoption of Big Data technologies.

We need to find the middle ground!

When do we need Causality?

The answer depends on what we are going to do with the data. For example, if we are just going to recommend a product based on the data, chances are that correlation is enough. However, if we are making a life-changing decision or a major policy decision, we might need causality.

Let us investigate both types of cases.

Correlation is enough when stakes are low, or we can later verify our decision. Following are few examples.

  1. When stakes are low ( e.g. marketing, recommendations) — when showing an advertisement or recommending a product to buy, one has more freedom to make an error.
  2. As a starting point for an investigation — correlation is never enough to prove someone is guilty, however, it can show us useful places to start digging.
  3. Sometimes, it is hard to know what things are connected, but easy to verify the quality of a given choice. For example, if you are trying to match candidates to a job or decide good dating pairs, correlation might be enough. In both these cases, given a pair, there are good ways to verify the fit.

There are other cases where causality is crucial. Following are few examples.

  1. Find a cause for disease
  2. Policy decisions (would a $15 minimum wage be better? would free health care be better?)
  3. When stakes are too high ( Shutting down a company, passing a verdict in court, sending a book to each kid in the state)
  4. When we are acting on the decision ( firing an employee)

Even in these cases, correlation might be useful to find good experiments that you want to run. You can find factors that are correlated and design experiments to test causality, which will reduce the number of experiments you need to do. In the book example, the state could have run an experiment by selecting a population, sending the book to half of them, and looking at the outcome.

In some cases, you can build your system to inherently run experiments that let you measure causality. Google is famous for A/B testing every small thing, down to the placement of a button and the shade of a color. When they roll out a new feature, they select a population, roll out the feature for only part of that population, and compare the two.

So in any of these cases, correlation is pretty useful. However, the key is to make sure that the decision makers understand the difference when they act on the results.

Closing Remarks

Causality can be a pretty hard thing to prove. Since most big data is observational data, often we can only show correlation, but not causality. If we mix up the two, we can end up doing stupid things.

The most important thing is having a clear understanding at the point when we act on the decisions. Sometimes, when stakes are low, correlation might be enough. In other cases, it is best to run an experiment to verify our claims. Finally, some systems might warrant building experiments into the system itself, letting you draw strong causality results. Choose wisely!

Original Post from my Medium account: https://medium.com/@srinathperera/understanding-causality-and-big-data-complexities-challenges-and-tradeoffs-db6755e8e220#.ca4j2smy3


Chamila Wijayarathna

Making Use of WSO2 Identity Server's Workflow Feature

WSO2 IS 5.1.0, which was released at the end of 2015, contains workflow support, which can be used to add more control to the operations that can be done through Identity Server. By default, WSO2 IS 5.1.0 supports controlling user store operations by engaging them in an approval process, where one or more privileged users need to approve the operation before it takes effect. Even though only this is supported by default, Identity Server can be extended with custom templates and custom handlers to do much more advanced tasks using the workflow framework.

Since I was one of the developers involved in developing this feature, I thought of writing a blog to describe how to make use of it. In this blog I will write about implementing a simple use case using the workflow feature. In future blogs I will write about more advanced use cases with custom event handlers and custom templates.

Following are some use cases that can be implemented using this.

  1. When a user registers with IS using self sign-up, get approval from an admin user before he can log in
  2. Get approval from an admin user before locking / unlocking user accounts due to invalid login attempts
  3. When a user updates his user account (e.g. updates the profile picture), check whether it is appropriate and get approval from an admin
  4. Get approval from a privileged user before increasing the privileges of a user

So let's see how to implement one of these use cases. I will describe how to implement a scenario where, when a new user is added to the Identity Server with admin privileges, that operation needs to be accepted by a 'senior manager' and the company CEO, respectively. This is a common use case that we come across in most enterprises.

WSO2 IS contains WSO2 BPS features embedded within it, which can be used to manage the approval process of this scenario. Alternatively, you can use a separate WSO2 BPS for this purpose. First, let's see how we can achieve this with the BPS feature embedded in IS, without using a separate BPS.

You can download the latest version of WSO2 IS from here. Extract it and start it.

Following are the users and roles we are going to have in our setup.



The user 'ceo' and the role 'senior_manager', which we are going to use in the approval process, need to have at least the following permissions.

  • Login
  • Human Tasks > View Human Tasks
  • Workflow Management > BPS Profiles > View
So now we have to define the approval process by defining a new workflow. To do this, we have to log into the Identity Server and select Workflow Definitions -> Add from the Main menu.


Then you'll be directed to the 'Add Workflow Definition' wizard. In the first step, you have to define a name to identify the workflow and a small description of it.


Now in the next step, you can define the approval process of this workflow. As I mentioned earlier, here we want the operation to be accepted by a senior_manager and then by the CEO. We can define this process as follows.






Now we have added step 1 of the approval process. We have to add the next step on the same page. This can be done by following the steps below.








Now we can proceed to the next step, where we need to select the BPS profile to use for this approval process. For now, let's use the BPS embedded in the Identity Server. We can use an external BPS in this process as well.


By clicking the finish button, we have created the approval process. Our next task is to engage the 'add admin user' operation with this approval process. This can be done by going to Workflow Engagements -> Add in the main menu.


Now you can engage the 'add admin user' event with the created approval process by adding a workflow engagement as follows.

Now that we have finished the setup, we can test how this works. We can go to the management console and add a user, assigning the 'admin' role to the user. When we do this, you'll observe that the user is not directly added. Even though the user is shown in the user list, he will be shown as a user in the pending state.

The user account will only be activated once both a manager and the CEO have accepted the operation. In the first step, a manager needs to approve the user addition. A manager can do this by logging into the user portal of the Identity Server. When a manager logs into the user portal and accesses the 'Approvals' gadget there, he will see the list of operations which require his approval.



If the CEO logs into the dashboard and accesses the 'Approvals' gadget, he won't see the 'newAdmin' user addition there. He will only see it once a manager approves it.

So the manager can now accept or reject the operation from here.


If the manager approves the operation, the CEO can then approve or reject it. The user account will only be activated if it is approved at both steps. If it is rejected at any stage, the user account will be deleted, as if it had never existed.

In the same manner, we can engage any user store operation with this kind of multi-step approval process in the Identity Server. These functionalities are available in the Identity Server by default; you don't have to do any customization to make use of them. With customization we can do a lot more, and I will write about a few such cases in my next few blogs.

Here we used the BPS embedded in the Identity Server for the implementation. We can use an external BPS for this as well. You can add an external BPS via 'Workflow Engine Profiles -> Add' in the configure menu.


When we add a new profile, it will also be shown in the drop-down menu in the third step of the add workflow wizard. To do this, we have to share the user store and identity database of the Identity Server with the BPS.

Dhananjaya Jayasinghe

Customize HTTP Server Response Header in WSO2 API Manager / WSO2 ESB


You may have noticed that in the response headers from WSO2 ESB or WSO2 API Manager invocations, you get a "Server" header as below.

HTTP/1.1 200 OK
Access-Control-Allow-Headers: authorization,Access-Control-Allow-Origin,Content-Type,SOAPAction
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: POST,GET,DELETE,PUT,HEAD
Content-Type: application/json
Access-Control-Allow-Credentials: true
Date: Sat, 09 Apr 2016 20:02:58 GMT
Server: WSO2-PassThrough-HTTP
Transfer-Encoding: chunked
Connection: Keep-Alive

{
"origin": "50.185.34.119"
}


You can see that the Server header contains WSO2, as below.

Server: WSO2-PassThrough-HTTP

Sometimes there are situations where you need to customize this header.

E.g., if we need to customize it as below:

Server: PassThrough-HTTP

What we need to do is add the http.origin-server property, with the customized value, to the passthru-http.properties file located in the ESB_HOME/repository/conf/ directory, as below.


http.origin-server=PassThrough-HTTP

Once you restart the server, the above response will be changed as below.

HTTP/1.1 200 OK
Access-Control-Allow-Headers: authorization,Access-Control-Allow-Origin,Content-Type,SOAPAction
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: POST,GET,DELETE,PUT,HEAD
Content-Type: application/json
Access-Control-Allow-Credentials: true
Date: Sat, 09 Apr 2016 20:11:47 GMT
Server: PassThrough-HTTP
Transfer-Encoding: chunked
Connection: Keep-Alive

{
"origin": "50.185.34.119"
}

Dhananjaya Jayasinghe

ActiveMQ - WSO2 ESB - Redelivery Delay - MaximumRedeliveries Configuration


There are use cases in which we need to configure the redelivery delay and the maximum redeliveries for ActiveMQ message consumers.

When consuming an ActiveMQ queue, we can configure these parameters as mentioned in [1].

WSO2 ESB can also act as a message consumer and a message producer. Information on configuring that can be found in the ESB documentation [2], and the consumer / producer configurations in [3].

In an ESB JMS proxy, we can configure the redelivery delay and the maximum redeliveries using the following parameters.

E.g.: By default, the redelivery delay in ActiveMQ is one second and the maximum redelivery count is 6. If you need to change them as below, you can do so with the following parameters in the proxy service.

Redelivery delay - 3 seconds
Maximum redelivery count - 2


   <parameter name="redeliveryPolicy.maximumRedeliveries">2</parameter>
<parameter name="transport.jms.SessionTransacted">true</parameter>
<parameter name="redeliveryPolicy.redeliveryDelay">3000</parameter>
<parameter name="transport.jms.CacheLevel">consumer</parameter>

Other than enabling the default configurations for the JMS transport receiver, you don't need to add any other parameters to the axis2.xml to achieve this.

Here is a sample proxy service to which I have added the above parameters.

Tested Versions : ESB 4.9.0 / Apache ActiveMQ 5.10.0


<proxy xmlns="http://ws.apache.org/ns/synapse"
name="JMStoHTTPStockQuoteProxy"
transports="jms"
statistics="disable"
trace="disable"
startOnLoad="true">
<target>
<inSequence>
<log level="full">
<property name="Status" value=" Consuming the message"/>
<property name="transactionID" expression="get-property('MessageID')"/>
</log>
<property name="SET_ROLLBACK_ONLY" value="true" scope="axis2"/>
<drop/>
</inSequence>
</target>
<parameter name="redeliveryPolicy.maximumRedeliveries">2</parameter>
<parameter name="transport.jms.DestinationType">queue</parameter>
<parameter name="transport.jms.SessionTransacted">true</parameter>
<parameter name="transport.jms.Destination">JMStoHTTPStockQuoteProxy</parameter>
<parameter name="redeliveryPolicy.redeliveryDelay">3000</parameter>
<parameter name="transport.jms.CacheLevel">consumer</parameter>
<description/>
</proxy>


When we configure these redelivery parameters, we need to make sure that we have enabled transactions for the proxy. We have done it using following parameter.

<parameter name="transport.jms.SessionTransacted">true</parameter>

Once we enable transactions, no redelivery happens if the transaction is successful. So, in order to test the redelivery functionality, we need to roll back the transaction after consuming the message. To do that, we add the following property inside the InSequence of the proxy service.


<property name="SET_ROLLBACK_ONLY" value="true" scope="axis2"/>

With the above property, we notify the server that the transaction has been rolled back.

All these properties are passed to the server when the ESB makes the connection to the message broker, so they need to be specified at that point.


[1] http://activemq.apache.org/redelivery-policy.html
[2] https://docs.wso2.com/display/ESB490/Configure+with+ActiveMQ
[3] https://docs.wso2.com/display/ESB490/ESB+as+a+JMS+Consumer



Chathurika Erandi De Silva

WS-Addressing: A simple demonstration with WSO2 ESB

WS-Addressing as I understand

WS-Addressing, or Web Services Addressing, is a mechanism used with web services so that we can invoke services regardless of the transport. We include message routing data in the SOAP headers, so that the request can be routed in a transport-neutral manner.

More on WS-Addressing

BUT this post is not meant to explain WS-Addressing; then what?

I have been working with WSO2 ESB this week, and thought of sharing the below for anyone looking for an entry point to WS-Addressing related tasks.


In this post, I am discussing the below

1. Enabling WS-Addressing for an endpoint
2. Enabling WS-Addressing for the whole proxy service
3. Invoking a WS-Addressing enabled proxy through SOAP-UI

If you are a beginner with WSO2 ESB, spend a little time to read...

Enabling WS-Addressing for an endpoint

Using the below configuration, I have enabled WS-Addressing for the endpoint.


<endpoint>
    <address uri="http://1.1.1.1:9793/services/ValueGetter/">
        <enableAddressing/>
    </address>
</endpoint>


Enabling WS-Addressing for the whole proxy service

Using the below parameter, I have enabled WS-Addressing for the proxy service.

<parameter name="enforceWSAddressing">true</parameter>


Sample

<proxy
    xmlns="http://ws.apache.org/ns/synapse"
       name="WSAddressing"
       transports="http,https"
       statistics="disable"
       trace="disable"
       startOnLoad="true">
    <target>
        <inSequence>
            <send>
                <endpoint>
                    <address uri="http://1.1.1.1:9793/services/ValueGetter/">
                        <enableAddressing/>
                    </address>
                </endpoint>
            </send>
            <log level="full"/>
        </inSequence>
    </target>
    <publishWSDL uri="http://10.100.5.63:9793/services/ValueGetter?wsdl"/>
    <parameter name="enforceWSAddressing">true</parameter>
    <description/>
</proxy>


Invoking a WS-Addressing enabled proxy through SOAP-UI

Follow the below steps to invoke WS-Addressing enabled web service using SOAP-UI.

1. Give the relevant wsdl of the proxy service and open the SOAP UI project

2. Open the request.

3. In the left hand side Request Properties panel, enable WS-Addressing



Fig 1. - Request Properties - SOAP UI.
4. In the Request window click on WS-Addressing to include WS-Addressing related headers.


Fig 2. - Request - Enabling WS-Addressing headers.
5. Click "Add default wsa-action" and "Add default wsa-To"


When the request is sent, the below can be seen if wire logs are enabled in WSO2 ESB.

[2016-04-08 16:56:07,113] DEBUG - wire >> "
<soapenv:Header
xmlns:wsa="http://www.w3.org/2005/08/addressing">
<wsa:Action>urn:getValues</wsa:Action>
<wsa:To>http://erandi-Latitude-E6540:8280/services/WSAddressing.WSAddressingHttpSoap11Endpoint</wsa:To>
</soapenv:Header>[\n]"


The response will look as below

<soapenv:Envelope
xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
<soapenv:Header
xmlns:wsa="http://www.w3.org/2005/08/addressing">
<wsa:Action>urn:getValuesResponse</wsa:Action>
<wsa:RelatesTo>urn:uuid:f89791e8-9bab-4a03-a615-a26b0287802f</wsa:RelatesTo>
</soapenv:Header>
<soapenv:Body>
<ns:getValuesResponse
xmlns:ns="http://sample.wso2.org">
<ns:return xsi:type="ax2435:ValueSetter"
xmlns:ax2435="http://sample.wso2.org/xsd"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<ax2435:myValue>1</ax2435:myValue>
<ax2435:myValue>2</ax2435:myValue>
<ax2435:myValue>3</ax2435:myValue>
<ax2435:myValue>0</ax2435:myValue>
<ax2435:myValue>0</ax2435:myValue>
</ns:return>
</ns:getValuesResponse>
</soapenv:Body>
</soapenv:Envelope>


As illustrated, the response too carries WS-Addressing headers.

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
User administrators by the user store

Problem:
  • Define user administrators by user store. For example, a user belonging to the role foo-admin will be able to perform user admin operations on the foo user store, while he/she won't be able to perform user admin operations on the bar user store.
Solution:
  • Deploy the WSO2 Identity Server as an identity provider over multiple user stores. 
  • Define a XACML policy, which specifies who should be able to do which operation on which user stores. 
  • Create a user store operation listener and talk to the XACML PDP during user admin operations. 
  • Create roles by user store and assign user administrators to the appropriate roles. Also, make sure each user administrator has the user admin permissions from the permission tree. 
  • Products: WSO2 Identity Server 4.6.0+ 
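
As an illustration of the XACML step, a minimal XACML 3.0 sketch is given below. The attribute ids, role name and user store name are assumptions for illustration only; a real policy would use the attributes actually published by the PEP:

<Policy xmlns="urn:oasis:names:tc:xacml:3.0:core:schema:wd-17"
        PolicyId="user-store-admin-policy"
        RuleCombiningAlgId="urn:oasis:names:tc:xacml:3.0:rule-combining-algorithm:deny-unless-permit"
        Version="1.0">
    <Target/>
    <!-- permit users in role foo-admin to act on the foo user store only -->
    <Rule RuleId="permit-foo-admin-on-foo-store" Effect="Permit">
        <Condition>
            <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:and">
                <!-- subject must carry the foo-admin role -->
                <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:string-is-in">
                    <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">foo-admin</AttributeValue>
                    <AttributeDesignator Category="urn:oasis:names:tc:xacml:1.0:subject-category:access-subject"
                                         AttributeId="http://wso2.org/claims/role"
                                         DataType="http://www.w3.org/2001/XMLSchema#string"
                                         MustBePresent="false"/>
                </Apply>
                <!-- resource (the user store domain) must be foo -->
                <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
                    <Apply FunctionId="urn:oasis:names:tc:xacml:1.0:function:string-one-and-only">
                        <AttributeDesignator Category="urn:oasis:names:tc:xacml:3.0:attribute-category:resource"
                                             AttributeId="urn:oasis:names:tc:xacml:1.0:resource:resource-id"
                                             DataType="http://www.w3.org/2001/XMLSchema#string"
                                             MustBePresent="false"/>
                    </Apply>
                    <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">foo</AttributeValue>
                </Apply>
            </Apply>
        </Condition>
    </Rule>
</Policy>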

Nuwan Bandara

Microservices gateway pattern

In the microservices outer architecture, the gateway pattern is quite popular; it is also elaborately explained on the nginx blog. In summary, linking your microservices directly to the client applications is almost always considered a bad idea.

You need to keep updating and upgrading your microservices, and you should be able to do so transparently. In a larger services-based ecosystem, microservices won't always be HTTP bound; they will probably be using JMS, MQTT or maybe Thrift for their transports. In such scenarios, having a gateway to deal with those complexities is always a good idea.


To prove the concept, I created a couple of microservices (a ticket listing/catalog service, a ticket purchase service and a validate service), which get deployed in their respective containers. WSO2 Gateway acts as the microservice gateway in this PoC, and the routes are defined in it. The gateway also deploys in a container of its own.

To build the microservices I am using MSF4J, the popular microservices framework, and the ticket data is stored in a Redis store.

The PoC is committed to GitHub with setup instructions; do try it out and leave a comment.


Kalpa Welivitigoda

WSO2 Application Server 6.0.0-M1 Released

Welcome to WSO2 Application Server, the successor of the WSO2 Carbon based Application Server. WSO2 Application Server 6.0.0 is a complete revamp based on vanilla Apache Tomcat. WSO2 provides a number of features by means of extensions to Tomcat that add or enhance functionality. It provides first class support for generic web applications and JAX-RS/JAX-WS web applications. The performance of the server and of individual applications can be monitored by integrating WSO2 Application Server with WSO2 Data Analytics Server. WSO2 Application Server is an open source project and is available under the Apache Software License (v2.0).

Download WSO2 Application Server 6.0.0-M1 from here.

Key Features

  • HTTP Statistics Monitoring
  • Webapp Classloading Runtimes

Fixed Issues

Known Issues

Reporting Issues

Issues, documentation errors and feature requests regarding WSO2 Application Server can be reported through the public issue tracking system: https://wso2.org/jira/browse/WSAS.

Contact us

WSO2 Application Server developers can be contacted via the Development and Architecture mailing lists.
Alternatively, questions can also be raised in the Stack Overflow forum: http://stackoverflow.com/questions/tagged/wso2

Thank you for your interest in WSO2 Application Server.

-The WSO2 Application Server Development Team -

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Service provider-specific user stores

Problem:
  • The business users need to access multiple service providers supporting multiple heterogeneous identity federation protocols. 
  • When the user gets redirected to the identity provider, only the users belonging to the user stores specified by the corresponding service provider should be able to log in or get an authentication assertion. 
  • In other words, each service provider should be able to specify from which user store it accepts users.
Solution:
  • Deploy the WSO2 Identity Server as an identity provider over multiple user stores and register all the service providers. 
  • Extend the pattern 18.0 Fine-grained access control for service providers to enforce user store domain requirement in the corresponding XACML policy. 
  • Use a regular expression to match allowed user store domain names with the authenticated user’s user store domain name. 
  • Products: WSO2 Identity Server 5.0.0+ 

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Home realm discovery

Problem:
  • The business users need to login to multiple service providers via multiple identity providers. 
  • Rather than providing a multi-login option page with all the available identity providers, once the user is redirected from the service provider, the system should find out which identity provider corresponds to the user and directly redirect the user there.
Solution:
  • Deploy WSO2 Identity Server as an identity provider and register all the service providers and identity providers. 
  • For each identity provider, specify a home realm identifier. 
  • The service provider prior to redirecting the user to the WSO2 Identity Server must find out the home realm identifier corresponding to the user and send it as a query parameter. 
  • Looking at the home realm identifier in the request, the WSO2 Identity Server redirects the user to the corresponding identity provider. 
  • In this case, there is a direct one-to-one mapping between the home realm identifier in the request and the home realm identifier value set under the identity provider configuration. This pattern can be extended by writing a custom home realm discovery connector, which knows how to relate and find the corresponding identity provider by looking at the home realm identifier in the request, without maintaining a direct one-to-one mapping. 
  • Products: WSO2 Identity Server 5.0.0+ 

Nuwan Bandara

Containerized API Manager


While continuing my quest to dockerize all my demos, I containerized WSO2 API Manager this week. This is twofold: one setup is a simple API Manager deployment with integrated analytics (WSO2 DAS); the other is a fully distributed API Manager with analytics.

This is making things easier, and the demos are becoming more and more reusable. You can find the instructions to run it in the GitHub repo.

Docker ! Docker ! Docker !😀


Evanthika Amarasiri

Common SVN related issues faced with WSO2 products and how they can be solved

Issue 1

TID: [0] [ESB] [2015-07-21 14:49:55,145] ERROR {org.wso2.carbon.deployment.synchronizer.subversion.SVNBasedArtifactRepository} -  Error while attempting to create the directory: http://xx.xx.xx.xx/svn/wso2/-1234 {org.wso2.carbon.deployment.synchronizer.subversion.SVNBasedArtifactRepository}
org.tigris.subversion.svnclientadapter.SVNClientException: org.tigris.subversion.javahl.ClientException: svn: authentication cancelled
    at org.tigris.subversion.svnclientadapter.javahl.AbstractJhlClientAdapter.mkdir(AbstractJhlClientAdapter.java:2524)
    at org.wso2.carbon.deployment.synchronizer.subversion.SVNBasedArtifactRepository.checkRemoteDirectory(SVNBasedArtifactRepository.java:240)


Reason: The user is not authenticated to write to the provided SVN location, i.e. http://xx.xx.xx.xx/svn/wso2/. When you see this type of error, verify the credentials you have given under the SVN configuration in the carbon.xml.

    <DeploymentSynchronizer>
        <Enabled>true</Enabled>
        <AutoCommit>false</AutoCommit>
        <AutoCheckout>true</AutoCheckout>
        <RepositoryType>svn</RepositoryType>
        <SvnUrl>http://svnrepo.example.com/repos/</SvnUrl>
        <SvnUser>username</SvnUser>
        <SvnPassword>password</SvnPassword>
        <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
    </DeploymentSynchronizer>


Issue 2

TID: [0] [ESB] [2015-07-21 14:56:49,089] ERROR {org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask} -  Deployment synchronization commit for tenant -1234 failed {org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask}
java.lang.RuntimeException: org.wso2.carbon.deployment.synchronizer.DeploymentSynchronizerException: A repository synchronizer has not been engaged for the file path: /home/wso2/products/wso2esb-4.9.0/repository/deployment/server/
    at org.wso2.carbon.deployment.synchronizer.internal.DeploymentSynchronizerServiceImpl.commit(DeploymentSynchronizerServiceImpl.java:116)
    at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.deploymentSyncCommit(CarbonDeploymentSchedulerTask.java:207)
    at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.run(CarbonDeploymentSchedulerTask.java:128)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)


Reasons:

    (I) SVN version mismatch between the local server and the SVN server (Carbon 4.2.0 products support SVN 1.6 only; see https://docs.wso2.com/display/CLUSTER420/SVN-based+Deployment+Synchronizer).

    Solution - Use the SVN kit 1.6 jar in the Carbon server.

    (II) If you have configured your server against a different SVN version than the SVN server's, the issue will not get resolved even if you drop in the correct svnkit jar on the Carbon server side later.

    Solution - Remove all the .svn files under the $CARBON_HOME/repository/deployment/server folder.

    (III) A similar issue can be observed when the SVN server is not reachable.

Issue 3

[2015-08-28 11:22:27,406] ERROR {org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask} - Deployment synchronization update for tenant -1234 failed
java.lang.RuntimeException: org.wso2.carbon.deployment.synchronizer.DeploymentSynchronizerException: No Repository found for type svn
    at org.wso2.carbon.deployment.synchronizer.internal.DeploymentSynchronizerServiceImpl.update(DeploymentSynchronizerServiceImpl.java:98)
    at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.deploymentSyncUpdate(CarbonDeploymentSchedulerTask.java:179)
    at org.wso2.carbon.core.deployment.CarbonDeploymentSchedulerTask.run(CarbonDeploymentSchedulerTask.java:137)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.wso2.carbon.deployment.synchronizer.DeploymentSynchronizerException: No Repository found for type svn
    at org.wso2.carbon.deployment.synchronizer.repository.CarbonRepositoryUtils.getDeploymentSyncConfigurationFromConf(CarbonRepositoryUtils.java:167)
    at org.wso2.carbon.deployment.synchronizer.repository.CarbonRepositoryUtils.getActiveSynchronizerConfiguration(CarbonRepositoryUtils.java:97)
    at org.wso2.carbon.deployment.synchronizer.internal.DeploymentSynchronizerServiceImpl.update(DeploymentSynchronizerServiceImpl.java:66)
    ... 9 more

Reason:

You will notice this issue when the svnkit jar (for the latest versions of Carbon, i.e. 4.4.x, the jar would be svnkit-all-1.8.7.wso2v1.jar) is not available in the $CARBON_HOME/repository/components/dropins folder.

Sometimes dropping in the svnkit-all-1.8.7.wso2v1.jar would not solve the problem. In such situations, verify whether the trilead-ssh2-1.0.0-build215.jar is also available under the $CARBON_HOME/repository/components/lib folder.

Thilini Ishaka

[NEW] OData support in WSO2 Data Services Server

OData (Open Data Protocol) is an OASIS standard that defines the best practice for building and consuming RESTful APIs. OData helps you build RESTful APIs, and it provides an extension facility to fulfill any custom needs of your RESTful APIs.

OData RESTful APIs are easy to consume. The OData metadata, a machine-readable description of the data model of the APIs, enables the creation of powerful generic client proxies and tools. Some of them can help you interact with OData even without knowing anything about the protocol.

From WSO2 DSS 3.5.0 onwards, we support the OASIS OData protocol version 4.0.0, so you can now easily expose your databases as OData services.
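
As a minimal sketch (the data service name, datasource properties and credentials are illustrative assumptions), OData is enabled per datasource in the data service (.dbs) configuration, after which the datasource is served over OData without writing any queries:

<data name="CustomerDataService" transports="http https">
   <!-- enableOData exposes this datasource as an OData v4 service -->
   <config enableOData="true" id="default">
      <property name="driverClassName">com.mysql.jdbc.Driver</property>
      <property name="url">jdbc:mysql://localhost:3306/customers</property>
      <property name="username">user</property>
      <property name="password">pass</property>
   </config>
</data>

The service is then typically reachable under the server's /odata context, keyed by the data service name and config id (exact URL shape may vary by version).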

Chathurika Erandi De Silva

Filter request on content: WSO2 ESB - Part 1

Use Case

There is a requirement to route a message to different endpoints based on the request itself. This can be achieved by various methods:

1. Reading a message context property such as Action / To
2. Reading the request itself

This post explains how to read a property in the message context and route the request accordingly.

For the above to be achieved, a filtering mechanism is needed to filter the message based on the message context property. WSO2 ESB provides the Filter Mediator to achieve this kind of requirement.

Here I am using the message context property 'Action'. By comparing the value set in this property, I will filter the messages.

Sample Sequence Configuration

<?xml version="1.0" encoding="UTF-8"?>
<sequence xmlns="http://ws.apache.org/ns/synapse" name="filter_1">
   <log level="full"/>
   <filter xmlns:ns="http://org.apache.synapse/xsd"
           source="get-property('Action')"
           regex=".*getMacMenu">
      <then>
         <send>
            <endpoint>
               <address uri="http://<ip>:9793/services/MenuProvider/getMacMenu"/>
            </endpoint>
         </send>
         <log level="custom">
            <property name="macMenu" value="INSIDE MAC MENU"/>
         </log>
      </then>
      <else>
         <send>
            <endpoint>
               <address uri="http:/<ip>:9793/services/MenuProvider/getOtherMenu"/>
            </endpoint>
         </send>
         <log level="custom">
            <property name="otherMenu" value="INSIDE OTHER MENU"/>
         </log>
      </else>
   </filter>
   <log/>
</sequence>



Above, we are matching the value returned by get-property('Action') against the regex expression. In this particular scenario, if the message context's Action property contains "getMacMenu", or in other words if the request contains the getMacMenu operation, then the request is directed to the corresponding endpoint. If it doesn't, the else part of the filter mediator is executed.

Sample Request

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:sam="http://sample.wso2.org">
   <soapenv:Header/>
   <soapenv:Body>
      <sam:getMacMenu/>
   </soapenv:Body>
</soapenv:Envelope>




Chathurika Erandi De Silva

Filter request on content: WSO2 ESB - Part 2


In the previous post we discussed how to use message context properties and filter based on them.

Use Case

If the request contains a certain operation, it should be routed to a certain endpoint. Requests that do not contain the above operation should be routed to another endpoint.

In order to achieve the above requirement, I have used the XPath expression of the Filter Mediator.

Sample Sequence Configuration

<?xml version="1.0" encoding="UTF-8"?>
<sequence xmlns="http://ws.apache.org/ns/synapse" name="filter_seq_2">
   <filter xmlns:ns="http://org.apache.synapse/xsd"
           xmlns:sam="http://sample.wso2.org"
           xpath="boolean(//sam:getMacMenu)">
      <then>
         <log level="custom">
            <property name="getMacMenu" value="INSIDE MAC MENU"/>
         </log>
         <send>
            <endpoint>
               <address uri="http://<ip>:9793/services/MenuProvider/getMacMenu"/>
            </endpoint>
         </send>
      </then>
      <else>
         <log level="custom">
            <property name="getOtherMenu" value="INSIDE OTHER MENU"/>
         </log>
         <send>
            <endpoint>
               <address uri="http://<ip>:9793/services/MenuProvider/getOtherMenu"/>
            </endpoint>
         </send>
      </else>
   </filter>
</sequence>


Above, I have used the XPath expression to check whether the incoming request contains the given element. If it does, the request is routed to a certain endpoint, while requests for which the XPath expression returns false are routed to the else part of the mediator.

When a request like the following is sent, the elements are read by the XPath expression and evaluated.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:sam="http://sample.wso2.org">
   <soapenv:Header/>
   <soapenv:Body>
      <sam:getMacMenu/>
   </soapenv:Body>
</soapenv:Envelope>


The above request contains the getMacMenu element, so the Filter Mediator XPath expression evaluates to true.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:sam="http://sample.wso2.org">
   <soapenv:Header/>
   <soapenv:Body>
      <sam:getOtherMenu/>
   </soapenv:Body>
</soapenv:Envelope>


The above request contains the getOtherMenu element, so the Filter Mediator XPath expression evaluates to false.

Chathurika Erandi De Silva

Filter request on content: WSO2 ESB - Part 3

In previous posts we discussed the Filter Mediator condition using XPath, and message context properties using regex.

Use Case

If the incoming request contains a certain key / word, then the request should be routed to a certain endpoint, whereas requests that do not contain the specific key should be routed to another endpoint.

Sample Sequence Configuration

<sequence xmlns="http://ws.apache.org/ns/synapse" name="filter_seq_4">
   <filter xmlns:ns="http://org.apache.synapse/xsd"
           xmlns:sam="http://sample.wso2.org"
           source="//sam:Order/sam:menu/sam:type"
           regex="MAC">
      <then>
         <log level="custom">
            <property name="getMacMenu" value="INSIDE MAC MENU"/>
         </log>
         <send>
            <endpoint key="conf:/send_mac"/>
         </send>
      </then>
      <else>
         <log level="custom">
            <property name="getOtherMenu" value="INSIDE OTHER MENU"/>
         </log>
         <send>
            <endpoint key="conf:/send_other"/>
         </send>
      </else>
   </filter>
</sequence>


The above Filter Mediator uses the source expression to isolate a relevant element in the incoming request; the value of that element is then matched against the provided regex.

When the following request is sent to the ESB, since the sam:type element contains MAC, it is evaluated as true by the Filter Mediator.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:sam="http://sample.wso2.org">
   <soapenv:Header/>
   <soapenv:Body>
       <sam:Order>
           <sam:menu>
               <sam:type>MAC</sam:type>
           </sam:menu>
       </sam:Order>
   </soapenv:Body>
</soapenv:Envelope>


If the following request is sent to the ESB, since the sam:type element contains OTHER, it is evaluated as false by the Filter Mediator. Thus the else section is executed.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:sam="http://sample.wso2.org">
   <soapenv:Header/>
   <soapenv:Body>
       <sam:Order>
           <sam:menu>
               <sam:type>OTHER</sam:type>
           </sam:menu>
       </sam:Order>
   </soapenv:Body>
</soapenv:Envelope>


Chathurika Erandi De Silva

User Stories: How do I formulate them?

I have been working with user stories this week to derive test scenarios, so I thought of writing about them a bit.

The first question that came to my mind when thinking about user stories is: although there are a lot of very good definitions, what is the easiest and most concrete way to understand what a user story is?

After reading a lot and thinking about it, I figured the easiest way is to ask "why is this particular product used by this specific person?"

This way we really put ourselves in the user's shoes and think from the user's perspective.

There can be many answers to this question, or just one.

If there are multiple answers, then each of those answers becomes a story related to that particular user. And of course, if there is one answer, then that becomes the only story with relevance to that user.

Of course, when putting the story into words, the keywords "who", "what" and "why" should be addressed, and it's always good to keep the user story short, but we should make sure the story carries the expected business value.
 
As an example, let's take the following story

As a user I want to log in to the system so that I can do some profile tasks

Is there any business value in the above story with respect to the user? If we were to go ahead and implement this kind of story, would we see any value in implementing it at all? Does the "why" part of the above story carry a business value?

It's important to define the "why" part so that it incorporates the business value of what the user wants to do.




As a personal user I want to log in to the system so that I can change my profile picture

In the above we can see a straightforward business value.

It's essential to write user stories so that they bring out the business value of implementing them.


Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Authenticate the users against one user store but fetch user attributes from multiple other sources

Problem:
  • User credentials are maintained in one user store while user attributes are maintained in multiple sources. 
  • When the user logs into the system via any SSO protocol (SAML 2.0, OIDC, WS-Federation), build the response with user attributes coming from multiple sources.
Solution:
  • Mount the credential store and all the attribute stores as user stores to the WSO2 Identity Server. Follow a naming convention when naming the user stores, so that the attribute stores can be differentiated from the credential store just by looking at the user store domain name. 
  • Build a custom user store manager (extending the current user store manager corresponding to the type of the primary user store), which is aware of all the attribute stores in the system, and override the method that returns user attributes. The overridden method will iterate through the attribute stores, find the user's attributes, and return the aggregated result (see the sketch after this list). 
  • Set the custom user store manager from the previous step as the user store manager corresponding to the primary user store. 
  • Products: WSO2 Identity Server 4.6.0+ 
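
The following is a minimal sketch of that idea, assuming a JDBC primary user store. The overridden method, the claim-fetching call, and the attributeStores registry (resolving the mounted stores by the agreed domain-name convention) are illustrative assumptions, not the exact extension points of every user store type.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.wso2.carbon.user.core.UserStoreException;
import org.wso2.carbon.user.core.UserStoreManager;
import org.wso2.carbon.user.core.jdbc.JDBCUserStoreManager;

public class AggregatingUserStoreManager extends JDBCUserStoreManager {

    // Hypothetical registry of the mounted attribute stores, resolved elsewhere
    // by the domain-name convention (e.g. domains prefixed with "ATTR_").
    private final List<UserStoreManager> attributeStores = new ArrayList<>();

    @Override
    public Map<String, String> getUserPropertyValues(String userName, String[] propertyNames,
                                                     String profileName) throws UserStoreException {
        // Start with the attributes held in the credential store this manager fronts.
        Map<String, String> aggregated =
                new HashMap<>(super.getUserPropertyValues(userName, propertyNames, profileName));
        // Merge in the attributes from every mounted attribute store. Passing the
        // property names as claims is a simplification; real code would map them.
        for (UserStoreManager attributeStore : attributeStores) {
            try {
                aggregated.putAll(attributeStore.getUserClaimValues(userName, propertyNames, profileName));
            } catch (org.wso2.carbon.user.api.UserStoreException e) {
                throw new UserStoreException("Failed to fetch attributes for " + userName, e);
            }
        }
        return aggregated;
    }
}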

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
User administration operations from a third-party web app

Problem:
  • A third party web app needs to perform all user management operations such as all CRUD operations on users and roles, user/role assignments and password recovery, without having to deal directly with underlying user stores (LDAP, AD, JDBC).
Solution:
  • Deploy the WSO2 Identity Server over the required set of user stores. 
  • The WSO2 Identity Server exposes a set of REST endpoints as well as SOAP-based services for user management; the web app just needs to talk to these endpoints, without having to deal directly with the underlying user stores (LDAP, AD, JDBC). A minimal sketch of calling one such endpoint follows this list. 
  • Products: WSO2 Identity Server 4.0.0+ 
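
As a concrete illustration, here is a minimal sketch of creating a user over the Identity Server's SCIM REST endpoint. The endpoint path (/wso2/scim/Users), the host, the admin credentials, and the SCIM 1.1 payload shape are assumptions that vary by deployment and product version.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ScimUserCreation {
    public static void main(String[] args) throws Exception {
        // Assumed SCIM endpoint and admin credentials; adjust for your deployment.
        // In a real run the server's TLS certificate must be trusted by this JVM.
        URL url = new URL("https://localhost:9443/wso2/scim/Users");
        String auth = Base64.getEncoder()
                .encodeToString("admin:admin".getBytes(StandardCharsets.UTF_8));

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "Basic " + auth);
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);

        // Minimal SCIM 1.1 user payload.
        String payload = "{\"schemas\":[\"urn:scim:schemas:core:1.0\"],"
                + "\"userName\":\"alice\",\"password\":\"Passw0rd!\"}";
        try (OutputStream out = conn.getOutputStream()) {
            out.write(payload.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}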

Thilina PiyasundaraAdd Let's Encrypt free SSL certificates to WSO2 API Cloud

Let's Encrypt is a free and open certificate authority run for the public benefit. The service is provided by the Internet Security Research Group, and lots of companies are working with them to make the Internet secure. Anyone who has a domain name can get a free SSL certificate for their websites using this service, valid for three months. If you need it for more than three months, you can renew the certificate, also for free. The best thing is that this certificate is accepted by most new web browsers and systems by default, so you don't need to add CA certs to your browsers any more.

In this article I will explain how we can use that service to get a free SSL certificate and add it to WSO2 API Cloud, so that you can have your own API store like:

https://store.thilina.piyasundara.org

In order to do that you need to have the following things in hand.
  • Domain name.
  • Rights to add/delete/modify DNS A records and CNAMEs.
  • Publicly accessible webserver with root access or a home router with port forwarding capabilities. 

Step 1

If you have a publicly accessible webserver you can skip this step. If you don't, you can make your home PC/laptop a temporary webserver, provided you can do port forwarding/NAT on your home router. I will show how I did that with my ADSL router. You can get help with port forwarding by referring to http://portforward.com.

a. Add a port forwarding rule in your home router.

Get your local (laptop) IP (by running ifconfig/ip addr) and set it as the backend server in your router. Set the WAN port to 80 and the LAN port to 80.


After adding the rule it will look like this.

b. Start a webserver on your laptop. We can use the simple Python HTTP server for this. Make sure to check the IPTables/firewall rules.

mkdir /tmp/www
cd /tmp/www/
echo 'This is my home PC :)' > index.html
sudo python3 -m http.server 80

c. Get the public IP of your router. Go to this link: http://checkip.dyndns.org and it will give the public IP address. This IP changes from time to time, so no worries.


d. Try to access that IP from a browser.
If it gives the expected output, you have a publicly accessible webserver.


Step 2

Now we need to update the DNS entries. My expectation is to have a single SSL certificate for both domains, 'store.thilina.piyasundara.org' and 'api.thilina.piyasundara.org'.

a. Go to your DNS provider's console and add an A record for each domain name pointing to the public IP of your webserver (or the IP we got in the previous step).


b. Try to access both via a browser; if they give the expected output you can proceed to the next step.


Step 3

I'm following the instructions in the Let's Encrypt guide. As I'm using the Python server, I need to use the 'certonly' option when running the command to generate the certs.

a. Get the git clone of the letsencrypt project.

git clone https://github.com/letsencrypt/letsencrypt
cd letsencrypt

b. Run the cert generation command (this requires root/sudo access).

./letsencrypt-auto certonly --webroot -w /tmp/www/ -d store.thilina.piyasundara.org -d api.thilina.piyasundara.org

If this succeeds you can find the SSL keys and certs in the '/etc/letsencrypt/live/store.thilina.piyasundara.org' directory.

Step 4

Check the content of the certs. (Become root before you try to 'ls' that directory.)

openssl x509 -in cert.pem -text -noout

Step 5

Create an API in WSO2 API Cloud if you don't have one. Then start adding a custom domain to your tenant.

a. Remove both A records and add CNAME records for those two domains. Both should point to the domain 'customdns.api.cloud.wso2.com'.


b. Now click on the 'Configure' option in the top options bar and select the 'Custom URL' option.


c. Get your SSL certs ready. Copy 'cert.pem', 'chain.pem' and 'privkey.pem' to your home directory.

d. Modify the API store domain. Click on the modify button, add the domain name and click verify. It will take a few seconds. If it succeeds, you have correctly configured the CNAME to point to WSO2 Cloud.

e. Add the cert files to the API Cloud. The order should be the certificate (cert.pem), the private key (privkey.pem) and the CA chain file (chain.pem). Again, it will take some time to verify the uploaded details.


f. Update the gateway domain in the same way.

Now if you go to the API Store it will show something like this.



g. In the same way, you can use the gateway domain when you need to invoke APIs.

curl -X GET --header 'Accept: application/json' --header 'Authorization: Bearer ' 'https://gateway.api.cloud.wso2.com:8243/t/thilina/gituser/1.0.0/thilinapiy'

Now you don't need the '-k' option. If it still fails, make sure your operating system's CA list is up to date.

Step 6

Make sure to remove the port forwarding rule in your home router if you used one, and revert any other changes you made while obtaining the SSL certificates.

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Fine-grained access control for SOAP services

Problem:
  • Access to the business services must be done in a fine-grained manner. 
  • Only users belonging to the business-admin role should be able to access the foo and bar SOAP services on weekdays from 8 AM to 5 PM.
Solution:
  • Deploy WSO2 Identity Server as a XACML PDP (Policy Decision Point). 
  • Define XACML policies via the XACML PAP (Policy Administration Point) of the WSO2 Identity Server. 
  • Front the SOAP services with WSO2 ESB and represent each service as a proxy service in the ESB. 
  • Engage the Entitlement mediator to the in-sequence of the proxy service, which needs to be protected. The Entitlement mediator will point to the WSO2 Identity Server’s XACML PDP. 
  • All the requests to the SOAP service will be intercepted by the Entitlement mediator, which will talk to the WSO2 Identity Server's XACML PDP to check whether the user is authorized to access the service. 
  • Authentication to the SOAP service should happen at the edge of the WSO2 ESB, prior to Entitlement mediator. 
  • If the request to the SOAP service brings certain attributes in the SOAP message itself, the Entitlement mediator can extract them from the SOAP message and add to the XACML request. 
  • Products: WSO2 Identity Server 4.0.0+, WSO2 ESB, Governance Registry 

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Render menu items in a web app based on the logged-in user’s fine-grained permissions

Problem:
  • When a business user logs into a web app, the menu items in the web app should be rendered dynamically based on the user’s permissions. 
  • The same user logging in at 9 AM and again at 9 PM could see different menu items, as the permissions can be time sensitive. 
  • The same user logging in from China and then from Canada could see different menu items, as the permissions can also be location sensitive.
Solution:
  • Deploy WSO2 Identity Server as a XACML PDP (Policy Decision Point). 
  • Define XACML policies via the XACML PAP (Policy Administration Point) of the WSO2 Identity Server. 
  • When a user logs into the web app, the web app will talk to the WSO2 Identity Server’s XACML PDP endpoint with a XACML request using XACML multiple decision profile and XACML multiple resource profile. 
  • After evaluating the XACML policies against the provided request, the WSO2 Identity Server returns the XACML response, which includes the permissions the user has on each resource under the parent resource specified in the initial XACML request. Each menu item is represented as a resource in the XACML policy. 
  • The web app caches the decision to avoid further calls to the XACML PDP. 
  • Whenever some event happens at the XACML PDP side, which requires expiring the cache, the WSO2 Identity Server will notify a registered endpoint of the web app. 
  • Products: WSO2 Identity Server 4.0.0+ 

Afkham Azeez

Microservices Circuit Breaker Implementation



Circuit breaker


Introduction

Circuit breaker is a pattern used for fault tolerance and the term was first introduced by Michael Nygard in his famous book titled "Release It!". The idea is, rather than wasting valuable resources trying to invoke an operation that keeps failing, the system backs off for a period of time, and later tries to see whether the operation that was originally failing works.

A good example would be, a service receiving a request, which in turn leads to a database call. At some point in time, the connectivity to the database could fail. After a series of failed calls, the circuit trips, and there will be no further attempts to connect to the database for a period of time. We call this the "open" state of the circuit breaker. During this period, the callers of the service will be served from a cache. After this period has elapsed, the next call to the service will result in a call to the database. This stage of the circuit breaker is called the "half-open" stage. If this call succeeds, then the circuit breaker goes back to the closed stage and all subsequent calls will result in calls to the database. However, if the database call during the half-open state fails, the circuit breaker goes back to the open state and will remain there for a period of time, before transitioning to the half-open state again.

Other typical examples of the circuit breaker pattern being useful would be a service making a call to another service, and a client making a call to a service. In both cases, the calls could fail, and instead of indefinitely trying to call the relevant service, the circuit breaker would introduce some back-off period, before attempting to call the service which was failing.

Implementation with WSO2 MSF4J

I will demonstrate how a circuit breaker can be implemented using the WSO2 Microservices Framework for Java (MSF4J) and Netflix Hystrix. We take the stockquote service sample and enable the circuit breaker. Assume that the stock quotes are loaded from a database. We wrap the calls to this database in a Hystrix command. If database calls fail, the circuit trips and stock quotes are served from a cache.

The complete code is available at https://github.com/afkham/msf4j-circuit-breaker

NOTE: To keep things simple and focus on the implementation of the circuit breaker pattern, rather than make actual database calls, we have a class called org.example.service.StockQuoteDatabase whose getStock method can result in timeouts or failures. To see an MSF4J example of how to make actual database calls, see https://github.com/sagara-gunathunga/msf4j-intro-webinar-samples/tree/master/HelloWorld-JPA.

The complete call sequence is shown below. StockQuoteService is an MSF4J microservice.



Configuring the Circuit Breaker

 The circuit breaker is configured as shown below.

We are enabling the circuit breaker and timeouts, setting the failure threshold that triggers circuit tripping to 50, and setting the timeout to 10 ms, so any database call that takes more than 10 ms will also be registered as a failure. For other configuration parameters, please see https://github.com/Netflix/Hystrix/wiki/Configuration
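
A minimal sketch of such a Hystrix command is below. The command name, the mapping of the threshold to the error-threshold-percentage property, and the database/cache calls are illustrative assumptions rather than the exact classes in the sample repository.

import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;
import com.netflix.hystrix.HystrixCommandProperties;

public class GetStockQuoteCommand extends HystrixCommand<String> {

    private final String symbol;

    public GetStockQuoteCommand(String symbol) {
        super(Setter.withGroupKey(HystrixCommandGroupKey.Factory.asKey("StockQuoteGroup"))
                .andCommandPropertiesDefaults(HystrixCommandProperties.Setter()
                        .withCircuitBreakerEnabled(true)                 // enable the circuit breaker
                        .withExecutionTimeoutEnabled(true)               // enable timeouts
                        .withCircuitBreakerErrorThresholdPercentage(50)  // trip the circuit at the 50 threshold
                        .withExecutionTimeoutInMilliseconds(10)));       // calls over 10 ms count as failures
        this.symbol = symbol;
    }

    @Override
    protected String run() throws Exception {
        // The call that may fail or time out; in the sample this role is
        // played by StockQuoteDatabase#getStock.
        return queryDatabase(symbol);
    }

    @Override
    protected String getFallback() {
        // Served on failure, timeout, or while the circuit is open.
        return lookupCache(symbol);
    }

    // Illustrative stand-ins for the real database and cache.
    private String queryDatabase(String symbol) { return symbol + ":100.00"; }
    private String lookupCache(String symbol) { return symbol + ":cached"; }
}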

Building and Running the Sample

Checkout the code from https://github.com/afkham/msf4j-circuit-breaker & use Maven to build the sample.

mvn clean package

Next run the MSF4J service.

java -jar target/stockquote-0.1-SNAPSHOT.jar 

Now let's use cURL to repeatedly invoke the service. Run the following command:

while true; do curl -v http://localhost:8080/stockquote/IBM ; done

The above command will keep invoking the service. Observe the output of the service in the terminal. You will see that some of the calls will fail on the service side and you will be able to see the circuit breaker fallback in action and also the circuit breaker tripping, then going into the half-open state, and then closing.








Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Single Sign On between a legacy web app, which cannot change its user interface, and service providers that support standard SSO protocols.

Problem:
  • The business users need to access a service provider whose UI cannot be changed. The users need to provide their user credentials to the current login form of the service provider. 
  • Once the user logs into the above service provider, and then clicks on a link to another service (which follows a standard SSO protocol), the user should be automatically logged in. The vice-versa is not true.
Solution:
  • Deploy WSO2 Identity Server as the Identity Provider and register all the service providers with standard inbound authenticators (including the legacy app). 
  • For the legacy web app, which does not want to change the UI of the login form, enable basic auth request path authenticator, under the Local and Outbound Authentication configuration. 
  • Once the legacy app accepts the user credentials from its login form, post them along with the SSO request (SAML 2.0/OIDC) to the WSO2 Identity Server. 
  • The WSO2 Identity Server will validate the credentials embedded in the SSO request and if valid, will issue an SSO response and the user will be redirected back to the legacy application. The complete redirection process will be almost transparent to the user. 
  • When the same user tries to log in to another service provider, the user will be automatically authenticated, as the previous step created a web session for the logged in user, under the WSO2 Identity Server domain. 
  • Products: WSO2 Identity Server 5.0.0+ 

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Access a microservice from a web app protected with SAML 2.0 or OIDC

Problem:
  • The business users need to access multiple service providers, supporting SAML 2.0 and OIDC-based authentication. 
  • Once the user logs into the web app, it needs to access a microservice on behalf of the logged in user.
Solution:
  • Deploy WSO2 Identity Server as the Identity Provider and register all the service providers with OIDC or SAML 2.0 as the inbound authenticator. 
  • Enable JWT-based access token generator in the WSO2 Identity Server. 
  • Develop and deploy all the microservices with WSO2 MSF4J. 
  • If the service provider supports SAML 2.0 based authentication, once the user logs into the web app, exchange the SAML token to an OAuth access token by talking to the /token endpoint of the WSO2 Identity Server, following the SAML 2.0 grant type for OAuth 2.0 profile. This access token itself is a self-contained JWT. 
  • If the service provider supports OIDC based authentication, once the user logs into the web app, exchange the ID token to an OAuth access token by talking to the /token endpoint of the WSO2 Identity Server, following the JWT grant type for OAuth 2.0 profile. This access token itself is a self-contained JWT. 
  • To access the microservice, pass the JWT (or the access token) in the HTTP Authorization Bearer header over TLS (see the sketch after this list). 
  • MSF4J will validate the access token (or the JWT), and the token will be passed across all the downstream microservices. 
  • More about microservices security: https://medium.com/@prabath/securing-microservices-with-oauth-2-0-jwt-and-xacml-d03770a9a838 
  • Products: WSO2 Identity Server 5.1.0+, WSO2 MSF4J 
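
A minimal sketch of that last-mile call, assuming the token has already been obtained from the /token endpoint; the service URL is an illustrative MSF4J endpoint, not one defined by this pattern.

import java.net.HttpURLConnection;
import java.net.URL;

public class MicroserviceClient {

    public static void main(String[] args) throws Exception {
        // Assumed: the self-contained JWT access token was already obtained
        // via the SAML 2.0 or JWT grant type described above.
        String jwt = "<self-contained-jwt-access-token>";
        // Illustrative MSF4J service URL; in a real run the server's TLS
        // certificate must be trusted by this JVM.
        URL serviceUrl = new URL("https://localhost:8443/stockquote/IBM");

        HttpURLConnection conn = (HttpURLConnection) serviceUrl.openConnection();
        // Pass the token in the Authorization Bearer header over TLS.
        conn.setRequestProperty("Authorization", "Bearer " + jwt);
        System.out.println("HTTP " + conn.getResponseCode());
    }
}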

Chandana Napagoda

How to disable Registry indexing


Sometimes people complain that they have seen background DB queries executed by some WSO2 products (e.g., the WSO2 API Manager Gateway profile). These query executions are not harmful; they correspond to the registry indexing task that runs in the background.

The indexing task is not required for APIM 1.10.0-based Gateway or Key Manager nodes, so you can disable it by setting the "startIndexing" parameter to false. This parameter should be configured in the registry.xml file under the "indexingConfiguration" section.

Ex:
<indexingConfiguration>
   <startIndexing>false</startIndexing>
   ......
</indexingConfiguration>

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Enforce users to provide missing required attributes while getting JIT provisioned to the local system

Problem:
  • The business users need to access multiple service providers via federated identity providers (e.g., Facebook, Yahoo, Google). 
  • Need to JIT provision all the users coming from federated identity providers with a predefined set of attributes. 
  • If any required attributes are missing in the authentication response from the federated identity provider, the system should present a UI to the user to provide those.
Solution:
  • Deploy WSO2 Identity Server as the Identity Provider and register all the service providers and federated identity providers. 
  • Enable JIT provisioning for each federated identity provider. 
  • Build a connector to validate the attributes in the authentication response and compare them against the required set of attributes. The required set of attributes can be defined via a claim dialect. If there is a mismatch between the attributes from the authentication response and the required set of attributes, this connector will redirect the user to a web page (deployed under the authenticationendpoints web app) to accept the missing attributes from the user. 
  • Engage the attribute checker connector from the previous step in an authentication step after the one that includes the federated authenticator. 
  • Products: WSO2 Identity Server 5.0.0+ 

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Accessing a SOAP service secured with WS-Trust from a web app on behalf of the logged-in user (SAML 2.0)

Problem:
  • The business users need to access multiple service providers supporting SAML 2.0 web SSO-based authentication. 
  • Once the user logs into the web app, the web app needs to access a SOAP service secured with WS-Trust on behalf of the logged in user.
Solution:
  • Deploy WSO2 Identity Server as an identity provider, and register all the service providers (with SAML 2.0 as the inbound authenticator). Further, it will also act as a Security Token Service(STS) based on WS-Trust. 
  • Deploy the SOAP service in WSO2 App Server and secure it with WS-Security Policy to accept a SAML token as a supporting token. 
  • Deploy the web app in the WSO2 App Server. 
  • Write a filter and deploy it in the WSO2 App Server, which will accept a SAML token coming from Web SSO flow and build a SOAP message embedding that SAML token. 
  • Since we are using SAML bearer tokens here, all the communication channels that carry the SAML tokens must be over TLS. 
  • Once the web app gets the SAML token, it will build a SOAP message with the security headers out of it (embedding the SAML token inside the ActAs element of the RST) and talk to the WSO2 Identity Server's STS endpoint to get a new SAML token to act as the logged in user when talking to the secured SOAP service. 
  • WSO2 App Server will validate the security of the SOAP message. It has to trust the WSO2 Identity Server, who is the token issuer. 
  • Products: WSO2 Identity Server 3.0.0+, WSO2 Application Server 

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Self-signup during the authentication flow with service provider specific claim dialects

Problem:
  • The business users need to access multiple service providers supporting multiple heterogeneous identity federation protocols. 
  • When the user gets redirected to the identity provider for authentication, the identity provider should provide a page with the login options and also an option to sign up. 
  • If the user picks the sign-up option, the required set of fields for the user registration must be specific to the service provider who redirected the user to the identity provider. 
  • Upon user registration, the user must be in the locked status, and a confirmation mail has to be sent to the user's registered email address. 
  • Upon email confirmation, the user should be prompted for authentication again and should be redirected back to the initial service provider.
Solution:
  • Deploy WSO2 Identity Server as the Identity Provider and register all the service providers. 
  • Customize the login web app (authenticationendpoints) deployed inside WSO2 Identity Server to give a user sign-up option in addition to the login options. 
  • Follow a convention and define a claim dialect for each service provider, with the required set of user attributes it needs during the registration. The service provider name can be used as the dialect name as the convention. 
  • Build a custom /signup API, which retrieves required attributes for user registration, by passing the service provider name. 
  • Upon registration, the /signup API will use the email confirmation feature in the WSO2 Identity Server to send the confirmation mail; in addition, the /signup API also maintains the login status of the user, so upon email confirmation the user can be redirected back to the initial service provider. 
  • Products: WSO2 Identity Server 5.0.0+ 

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Fine-grained access control for service providers

Problem:
  • The business users need to access multiple service providers supporting multiple heterogeneous identity federation protocols. 
  • Each service provider needs to define an authorization policy at the identity provider, to decide whether a given user is eligible to log into the corresponding service provider. 
  • For example, one service provider may have a requirement that only the admin users should be able to log into the system after 6 PM. 
  • Another service provider may have a requirement that only the users from North America should be able to log into the system.
Solution:
  • Deploy WSO2 Identity Server as the Identity Provider and register all the service providers. 
  • Build a connector, which connects to the WSO2 Identity Server’s XACML engine to perform authorization. 
  • For each service provider that needs to enforce access control during the login flow, engage the XACML connector in the 2nd authentication step, under the Local and Outbound Authentication configuration. 
  • Each service provider that needs to enforce access control during the login flow creates its own XACML policies in the WSO2 Identity Server PAP (Policy Administration Point). 
  • To optimize the XACML policy evaluation, follow a convention to define a target element under each XACML policy, that can uniquely identify the corresponding service provider. 
  • Products: WSO2 Identity Server 5.0.0+ 

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Single Page Application (SPA) proxy

Problem:
  • Authenticate users to a single page application in a secure manner, via OAuth 2.0. 
  • When the SPA accesses an OAuth-secured API, the access token must be made invisible to the end user. 
  • When the SPA accesses an OAuth-secured API, the client (or the SPA) must be authenticated in a legitimate manner.
Solution:
  • There are multiple ways to secure an SPA and this presentation covers some options: http://www.slideshare.net/prabathsiriwardena/securing-singlepage-applications-with-oauth-20 
  • This explains the SPA proxy pattern, where a proxy is introduced, and the calls from the SPA will be routed through the proxy. 
  • Build an SPA proxy and deploy it in WSO2 Identity Server. A sample proxy app is available at https://github.com/facilelogin/aratuwa/tree/master/oauth2.0-apps. 
  • The SPA proxy must be registered in the WSO2 Identity Server as a service provider, having OAuth inbound authenticator. 
  • To make the SPA proxy stateless, the access_token and the id_token obtained from the WSO2 Identity Server (after the OAuth flow) are encrypted and set as a cookie. 
  • Products: WSO2 Identity Server 5.0.0+ 

Nuwan Bandara

Dockerizing a proof of concept

A few weeks back I was working on a proof of concept to demonstrate a long-running, workflow-based orchestration scenario. More about the architecture behind the PoC can be found on the WSO2 solutions architecture blog. But this post is not about the architecture; it is simply about delivering the proof of concept in a completely contained environment.

What inspired me to do this: as a day-to-day job I happen to show how enterprise solution architectures work in the real world. I cook up a use case on my machine, often with a couple of WSO2 products (like the ESB/DSS/DAS/API-M) and some other non-WSO2 ones, then demonstrate the setup to whoever the interested party is. I always thought it would be cool if the audience could run this themselves after the demo without any hassle (they can run it even now with a bit of work 😉 but that's time someone can easily save). The other motivation is to save my own time by re-using the demos I've built.

Docker ! Docker ! Docker !

I’ve been playing with docker on and off, thought its a cool technology and I found that creating and destroying containers in a matter of milliseconds is kind of fun😀 okey jokes aside I was looking for a way to do something useful with Docker, and finally found inspiration and the time.

I took the orchestration PoC (a bulk-ordering workflow for book publishers) as the base model that I am going to Dockerize.

[Architecture diagram]

I made sure that I covered my bases first by making everything completely remotely deployable. If I am to build a completely automated deployment and start-up process, I shouldn't configure any of the products from the management UI.

The artifacts:

https://github.com/nuwanbando/bookshop-sample/tree/master/artifacts

All the ESB and DSS artifacts went into a .car file {bookshop-app_1.0.0.car} with ESB and DSS profiles respectively. The long-running orchestration was developed as a BPMN workflow and exported to a .bar file {BookOrderApprovalProcess.bar}.

That's pretty much all I had to do. After that it's more or less a bit of dev-ops work and automation. Some of the decisions I took along the way were:

[1] Not to create Docker images (and maybe push them to Docker Hub) from WSO2 product bundles + artifacts. Why: mainly because the images would get too heavy (~700MB).

[2] Use docker-compose instead of something like Vagrant. Why: I was a bit lazy to explore something new and also wanted to stick to one tool for simplicity. And docker-compose served the purpose.

With decision #1, I wrote a Dockerfile for each of the products so anyone can build an image with a simple command. The bookshop PoC touches WSO2 ESB, DSS and BPS, and additionally, to store the book orders, I created a database in an external MySQL server. That's four Dockerfiles altogether.

ESB Dockerfile gist:

Once that's done, docker-compose does the wiring.

Docker compose definitions gist:

The composition will build all the images, expose the ports to the host machine and start up all the containers in an isolated Docker network.

# build and start the containers
$ docker-compose up -d

# Stop and kill all the containers with their images
$ docker-compose down --rmi all

That's about it. The project can be found on GitHub, and you can find instructions to run the PoC in the readme file.

I intend to build all my PoC demos in the above fashion, so unless I get really lazy 😀 I should be publishing docker compositions more often.


Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Mobile identity provider proxy

Problem:
  • A company builds a set of native mobile apps and deploys them onto a company-owned set of devices, which are handed over to its employees. 
  • When a user logs into one native mobile app, he/she should automatically log into all the other native apps, without further requests to provide his/her credentials. 
  • No system browser in the device.
Solution:
  • Build a native mobile app, which is the identity provider (IdP) proxy and deploy it in each device along with all the other native apps. 
  • This IdP proxy must be registered with the WSO2 Identity Server, as a service provider, having OAuth 2.0 as the inbound authenticator. 
  • Under the IdP proxy service provider configuration in WSO2 Identity Server, make sure to enable only the resource owner password grant type. 
  • Each of the native app must be registered with the WSO2 Identity Server as a service provider, having OAuth 2.0 as the inbound authenticator and make sure only the implicit grant type is enabled. 
  • Under the native app service provider configuration in WSO2 Identity Server, make sure to have oauth-bearer as a request-path authenticator, configured under Local and Outbound Authentication configuration. 
  • The IdP proxy app has to provide a native API for all the other native apps. 
  • When a user wants to login into an app, the app has to talk to the login API of the IdP proxy app passing its OAuth 2.0 client_id. 
  • The IdP proxy app should first check whether it has a master access token; if not, it should prompt the user to enter a username/password and then, using the password grant type, talk to the WSO2 Identity Server's /token API to get the master access token (a minimal sketch of this call follows the list). The IdP proxy must securely store the master access token, which is per user. If the master access token is already there, the user does not need to authenticate again. 
  • Now, using the master access token (as the Authorization Bearer header), the IdP proxy app should talk (HTTP POST) to the /authorize endpoint of the WSO2 Identity Server, following the implicit grant type with the client_id provided by the native app. Also, use openid as the scope. 
  • Once the access token and the ID token are returned from the WSO2 Identity Server, the IdP proxy will return them back to the native app, who did the login API call first. 
  • Products: WSO2 Identity Server 5.2.0+ 
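
For illustration, here is a minimal sketch of the password-grant call the IdP proxy would make to obtain the master access token. The /oauth2/token path, the proxy's client credentials, and the user credentials are assumptions to verify against your Identity Server setup.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class PasswordGrantClient {

    public static void main(String[] args) throws Exception {
        // Assumed token endpoint and the IdP proxy app's client credentials.
        URL tokenEndpoint = new URL("https://localhost:9443/oauth2/token");
        String clientAuth = Base64.getEncoder()
                .encodeToString("proxyClientId:proxyClientSecret".getBytes(StandardCharsets.UTF_8));

        HttpURLConnection conn = (HttpURLConnection) tokenEndpoint.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "Basic " + clientAuth);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        conn.setDoOutput(true);

        // Resource owner password grant with the user's credentials (illustrative).
        String body = "grant_type=password&username=alice&password=alicePassword&scope=openid";
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        // The JSON response carries the master access token, which the proxy
        // must store securely, per user.
        System.out.println("HTTP " + conn.getResponseCode());
    }
}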

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Federation Proxy

Problem:
  • All the inbound requests for all the service providers inside the corporate domain must be intercepted centrally and enforce authentication via an Identity Hub. 
  • Users can authenticate to the hub, via different identity providers. 
  • All the users, who authenticate via the hub must be provisioned locally. 
  • One user can have multiple accounts with multiple identity providers connected to the hub, and when provisioned into the local system, the user should be given the option to map or link all his/her accounts and then pick under which account he/she logs into the service provider.
Solution:
  • Deploy WSO2 App Manager to front all the service providers inside the corporate domain. 
  • Configure WSO2 Identity Server as the trusted Identity Provider of the WSO2 App Manager. We call the combined Identity Server + App Manager setup the federation proxy. 
  • Introduce the identity provider running at the hub (it can be another WSO2 Identity Server as well) as a trusted identity provider to the WSO2 Identity Server running as the proxy. 
  • Configure JIT provisioning against the hub identity provider, configured in WSO2 Identity Server. 
  • For all the service providers, the initial authentication will happen via the hub identity provider, and once that is done, configure a connector in the 2nd step to do the account linking. 
  • Products: WSO2 Identity Server 5.0.0+, WSO2 App Manager 

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Enforce password reset for expired passwords during the authentication flow

Problem:
  • During the authentication flow, check whether the end-user's password has expired and, if so, prompt the user to change the password.
Solution:
  • Configure multi-step authentication for the corresponding service provider. 
  • Engage basic authenticator for the first step, which accepts username/password from the end-user. 
  • Write a handler (a local authenticator) and engage it in the second step, which will check the validity of the user’s password and if it is expired then prompt the user to reset the password. 
  • Sample implementation: http://blog.facilelogin.com/2016/02/enforce-password-reset-for-expired.html 
  • Products: WSO2 Identity Server 5.0.0+ 

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Fine-grained access control for APIs

Problem:
  • Access to the business APIs must be done in a fine-grained manner. 
  • Only users belonging to the business-admin role should be able to access the foo and bar APIs on weekdays from 8 AM to 5 PM.
Solution:
  • Setup the WSO2 Identity Server as the key manager of the WSO2 API Manager. 
  • Write a scope handler and deploy it in the WSO2 Identity Server to talk to its XACML engine during the token validation phase. 
  • Create XACML policies using the WSO2 Identity Server’s XACML policy wizard to address the business needs. 
  • Products: WSO2 Identity Server 5.0.0+, API Manager, Governance Registry

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Claim Mapper

Problem:
  • The claim dialect used by the service provider is not compatible with the default claim dialect used by the WSO2 Identity Server. 
  • The claim dialect used by the federated (external) identity provider is not compatible with the default claim dialect used by the WSO2 Identity Server.
Solution:
  • Represent all the service providers in the WSO2 Identity Server and configure the corresponding inbound authenticators (SAML, OpenID, OIDC, WS-Federation). 
  • For each service provider define custom claims and map them to the WSO2 default claim dialect. 
  • Represent all the identity providers in the WSO2 Identity Server and configure corresponding federated authenticators (SAML, OpenID, OIDC, WS-Federation). 
  • For each identity provider define custom claims and map them to the WSO2 default claim dialect. 
  • Products: WSO2 Identity Server 5.0.0+ 

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Identity federation between service providers and identity providers with incompatible identity federation protocols

Problem:
  • The business users need to login into a SAML service provider with an assertion coming from an OpenID Connect identity provider. 
  • In other words, the user is authenticated against an identity provider, which only supports OpenID Connect, but the user needs to login into a service provider, which only supports SAML 2.0.
Solution:
  • Represent all the service providers in the WSO2 Identity Server and configure the corresponding inbound authenticators (SAML, OpenID, OIDC, WS-Federation). 
  • Represent all the identity providers in the WSO2 Identity Server and configure corresponding federated authenticators (SAML, OpenID, OIDC, WS-Federation). 
  • Associate identity providers with service providers, under the Service Provider configuration, under the Local and Outbound Authentication configuration, irrespective of the protocols they support. 
  • Products: WSO2 Identity Server 5.0.0+

Chandana Napagoda

WSO2 Governance Registry: Support for Notification

With the WSO2 Governance Registry 5.x releases, you can now send rich email messages when an email notification is triggered, using the email templating support we have added. In the default implementation, an administrator or any privileged user can store email templates in the "/_system/governance/repository/components/org.wso2.carbon.governance/templates" collection, and the template name must be the lower case of the event name.

For example, if you want to customize the "PublisherResourceUpdated" event, the template file should be: "/_system/governance/repository/components/org.wso2.carbon.governance/templates/publisherresourceupdated.html".

If you do not want to define event specific email templates, then you can add a template called “default.html”.

By default, the $$message$$ section in an email template will be replaced with the message generated by the event.

FAQ:
How can I plug my own template mechanism and modify the message?

You can override the default implementation by adding a new custom implementation. First, you have to create a Java project. Then you have to implement the "NotificationTemplate" interface and override the "populateEmailMessage" method, where you can write your own implementation. A hedged sketch is shown below.
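
A minimal sketch of such an implementation, assuming the interface shape below; the exact package and method signature of NotificationTemplate vary by Governance Registry version, so verify both against the product source before compiling.

// Hypothetical stand-in for the real interface; take the exact package and
// signature from the Governance Registry source.
interface NotificationTemplate {
    String populateEmailMessage(String emailTemplate, String eventMessage);
}

public class BoldMessageTemplate implements NotificationTemplate {

    @Override
    public String populateEmailMessage(String emailTemplate, String eventMessage) {
        // Replace the $$message$$ placeholder with a custom rendering of the
        // message generated by the event.
        return emailTemplate.replace("$$message$$", "<b>" + eventMessage + "</b>");
    }
}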

After that, you have to add the compiled JAR file to the WSO2 Governance Registry. If it's an OSGi bundle, please add it to the <GREG_HOME>/repository/components/dropins/ folder; otherwise the JAR needs to be added to the <GREG_HOME>/repository/components/lib/ folder.

Finally, you have to add the following configuration to registry.xml file.

<notificationConfiguration>
   <class>complete class name with package</class>
</notificationConfiguration>

What are the notification types available in the Store, Publisher and Admin Console?

Store: StoreLifeCycleStateChanged, StoreResourceUpdated
Publisher: PublisherResourceUpdated, PublisherLifeCycleStateChanged, PublisherCheckListItemUnchecked, PublisherCheckListItemChecked

Admin Console: Please refer to the documentation (Adding a Subscription).

Do I need to enable worklist for console subscriptions?

Yes, you have to enable the Worklist configuration (Configuration for Work List).

Are notifications visible in each application?

If you have login access to the Publisher, Store and Admin Console, then you can view notifications from each of those applications. However, some notifications may have been customized to fit the context of the relevant application.






Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Single Sign On with delegated access control

Problem:
  • The business users need to log into multiple service providers with single sign on via an identity provider. 
  • Some service providers may need to access backend APIs on behalf of the logged in user. For example, a user logs into the Cute-Cup-Cake-Factory service provider via SAML 2.0 web SSO, and then the service provider (Cute-Cup-Cake-Factory) needs to access the user's Google Calendar API on behalf of the user to schedule the order pickup.
Solution:
  • Represent all the service providers in the WSO2 Identity Server as Service Providers and configure inbound authentication appropriately, either with SAML 2.0 or OpenID Connect. 
  • For each service provider that needs to access backend APIs, configure OAuth 2.0 as an inbound authenticator, in addition to the SSO protocol (SSO protocol can be SAML 2.0 or OpenID Connect). 
  • Once a user logs into the service provider, either via SAML 2.0 or OpenID Connect, use the appropriate grant type (SAML grant type for OAuth 2.0 or JWT grant type for OAuth 2.0) to exchange the SAML or the JWT token for an access token, by talking to the token endpoint of the WSO2 Identity Server. 
  • Products: WSO2 Identity Server 5.0.0+, WSO2 API Manager, WSO2 Application Server 

Aruna Sujith Karunarathna

How to Enable Asynchronous Logging with C5

In this post we are going to explore how to enable asynchronous logging on C5-based servers. More on asynchronous logging can be found here.

1. Copy the disruptor dependency to the /osgi/plugins folder. You can get the disruptor OSGi bundle from here.
2. Edit launch.properties in the /bin/bootstrap/org.wso2.carbon.launcher-5.1.0.jar and add the disruptor jar to the initial bundles list.

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
User management upon multi-layer approval

Problem:
  • All the user management operations must be approved by multiple administrators in the enterprise in a hierarchical manner. 
  • When an employee joins the company, it has to be approved by a set of administrators while, when the same employee is assigned to the sales team, must be approved by another set of administrators.
Solution:
  • Create a workflow with multiple steps. In each step specify who should provide the approval. 
  • Define a workflow engagement for user management operations and associate the above workflow with it. 
  • When defining the workflow, define the criteria for its execution. 
  • Products: WSO2 Identity Server 5.1.0+ 

Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Rule-based user provisioning

Problem:
  • The identity admin needs to provision all the employees to Google Apps at the time they join the company. 
  • Provision only the employees belong to the sales-team to Salesforce.
Solution:
  • Represent Salesforce and Google Apps as provisioning identity providers in the WSO2 Identity Server. 
  • Under Salesforce Provisioning Identity Provider Configuration, under the Role Configuration, set sales-team as the role for outbound provisioning. 
  • Under the Resident Service Provider configuration, set both Salesforce and Google Apps as provisioning identity providers for outbound provisioning. 
  • Products: WSO2 Identity Server 5.0.0+ 

Nuwan Bandara

Debugging & troubleshooting WSO2 ESB

I am asked this question almost every time I do an ESB demonstration, hence I thought of documenting the answer for a wider audience.

WSO2 ESB is a mediation and orchestration engine for enterprise integrations; you can read more about the product in the WSO2 docs.

Building a mediation or an orchestration with multiple external services can sometimes become a tedious task. You will have to transform, clone and create messages to send to multiple external endpoints. You will have to handle the responses, and sometimes handle the communications reliably with patterns like store-and-forward. In such scenarios, being able to debug the message flow and understand the messages going out from and coming into the ESB runtime comes in very handy.

There are a couple of out-of-the-box capabilities exposed by the ESB to help the developer. The LogMediator is the simplest; you can also use the TCPMonitor to understand the messages on the wire, and if the communication is over SSL you can use the ESB's wire log dump capability.

With the log mediator you can inspect the message at each mediation stage, much like we used to debug PHP scripts back in the day with a lot of <?php echo "{statement}"; ?> statements.

The wire logging capability that's built into the ESB provides you with all the information about messages coming into the runtime and going out from the runtime.

You can enable wire logs by editing the log4j.properties file (in repository/conf), typically by setting log4j.logger.org.apache.synapse.transport.http.wire=DEBUG, or through the ESB Management Console.

More information about wire logs can be found at following post – http://mytecheye.blogspot.com/2013/09/wso2-esb-all-about-wire-logs.html

Finally, if you want to put breakpoints and understand what really happens to the message, you can debug with the ESB source. For ESB 4.9.0 it's as follows; for any later or upcoming releases the source links will change.

[1] Download the mediation engine source from

[synapse-mediators] https://github.com/wso2/wso2-synapse/tree/release-2.1.3-wso2v11
[wso2 specific mediators] https://github.com/wso2/carbon-mediation/tree/release-4.4.10/components/mediators

[2] Build the source
[3] Open it in Eclipse as a maven project
[4] Setup Eclipse with remote debug

[5] Start the ESB in debug mode

sh wso2esb-4.9.0/bin/wso2server.sh -debug 8000

[6] Put a breakpoint in one of the mediators you have in the sequence (for me it's the log mediator, just to test, as follows)

[7] Deploy the sequence you are trying out and send a message; that should hit the breakpoint in Eclipse

[NEWS] We are also working on a graphical ESB debugging tool for WSO2 ESB 5. So folks, the future is bright, stay tuned :)

Happy debugging !!!


Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Login to multiple service providers with the current Windows login session

Problem:
  • The business users need to login to multiple service providers supporting multiple heterogeneous identity federation protocols. 
  • Some service providers are on-premise while others are in the cloud. 
  • A user logs into his Windows machine and should be able to access any service provider without further authentication.
Solution:
  • Deploy WSO2 Identity Server over the enterprise active directory as the user store. 
  • Represent all the service providers in the WSO2 Identity Server and configure the corresponding inbound authenticator (SAML, OpenID, OIDC, WS-Federation). 
  • For each service provider, under local and outbound authentication configuration, enable IWA local authenticator. 
  • In each service provider, configure the WSO2 Identity Server as the trusted identity provider. For example, if Salesforce is a service provider, in Salesforce, add WSO2 Identity Server as a trusted identity provider. 
  • Products: WSO2 Identity Server 5.0.0+ 

Pushpalanka Jayawardhana

User Store Count with WSO2 Identity Server 5.2.0

This post provides details on one of the new functionalities introduced with WSO2 Identity Server 5.2.0, to be released soon. This feature comes with a service to count the number of users based on user name patterns and claims, and also to count the number of roles matching a role name pattern in the user store. By default this supports JDBC user store implementations only, and it provides the freedom to extend the functionality to LDAP user stores or any other type as well.

How to Use?

A new property named 'CountRetrieverClass' is introduced in the user store manager configuration, where we can configure the class name that carries the count implementation for a particular user store domain.

Using Service

The functionality is exposed via a service named 'UserStoreCountService', which provides the relevant operations as below.

Separate operations are provided to get the counts for a particular user store, or for the whole user store chain, for the following functionalities.
  • Count users matching a filter for user name
  • Count roles matching a filter for role name
  • Count users matching a filter for a claim value
  • Count users matching filters for a set of claim values (e.g., the count of users whose email address ends with 'wso2.com' and whose mobile number starts with '033')

Extending

In order to extend the functionality, the interface 'org.wso2.carbon.identity.user.store.count.UserStoreCountRetriever' should be implemented, packaged into an OSGi bundle, and dropped into the dropins folder within WSO2 Identity Server.


Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Provision federated users to a tenant

Problem:
  • The business users need to login to multiple service providers via multiple identity providers. For example login to Drupal via Facebook or Yahoo! credentials. 
  • Irrespective of the service provider, the federated users need to be provisioned to a single tenant (let's say, the individual tenant).
Solution:
  • Define a user store with CarbonRemoteUserStoreManager in the WSO2 Identity Server pointing to the individual tenant. 
  • Represent each federated identity provider in Identity Server. For example, represent Facebook as an identity provider in Identity Server. 
  • Enable JIT provisioning for each identity provider, and pick the user store domain (CarbonRemoteUserStoreManager) to provision users. 
  • Products: WSO2 Identity Server 5.0.0+ 

Sriskandarajah Suhothayan

Sensing the world with Data of Things


Henry Ford once said, "Any customer can have a car painted any colour that he wants so long as it is black!" Those days are now long gone. In the current context, people seek personalized treatment. Imagine calling customer service: every time you call, you have to go through all the standard questions, and they don't have a clue why you might be calling, or whether you have called before. In the case of shopping, even if you are a regular customer and have a platinum or gold membership card, you will not get any special treatment at the store; maybe presenting the card at the cashier can get you a discount.

What’s missing here? They don’t know anything about the customer to give a better service. Hence the simple remedy for the above issue is building customer profiles, this can be done with the historical data you might have about the customer, next you need to understand and react to the context the customer evolves such as whether he is in an urgency, has he contacted you before, etc, and finally you have to  react in real time to give the best customer satisfaction. Therefore to provide the best customer satisfaction identifying the context is a key element, and in the present world the best way of identifying the customer context is via the devices your customer has and via the sensors that’s around him which indeed the Internet of Things (IoT)


IoT is not a new thing; we have had lots of M2M systems that monitored and controlled devices in the past, but with IoT we have more devices having sensors, and a single device having more sensors. IoT is an ecosystem where IoT devices are manufactured, apps for those devices are developed (e.g., apps for phones), users use those devices, and finally the devices are monitored and managed. WSO2's IoT Platform plays a key role in managing and providing analytics for the IoT devices in the ecosystem.




Data Types in IoT Analytics


Data from IoT devices is time bound, because these devices do continuous monitoring and reporting. With this we can do time series processing, such as energy consumption over time. OpenTSDB is a specialised DB implemented for time-based processing.


Further, since IoT devices are deployed in various geographical locations, and since some of those devices move, location also becomes an important data type for IoT devices. IoT devices are usually tracked with GPS, and currently iBeacons are used when the devices are within a building. Location-based data enables geospatial processing, such as traffic planning and better route suggestions for vehicles. Geospatially optimised processing engines such as GeoTrellis have been developed especially for these types of use cases.


IoT is Distributed


Since IoT is distributed by nature, components of the IoT network constantly get added and removed. Further, IoT devices connect to the IoT network through all types of communication networks, from weak 3G networks to ad-hoc peer-to-peer networks, and they also use various communication protocols such as Message Queuing Telemetry Transport (MQTT), the Constrained Application Protocol (CoAP), ZigBee and Bluetooth Low Energy (BLE). Due to this, the data flow of the IoT network continuously gets modified and repurposed. As the data load varies dynamically in the IoT network, an on-premise deployment will not be suitable, and hence we have to move towards public or hybrid cloud based deployments. IoT has an event-driven architecture to accommodate its distributed nature, where its sensors report data as continuous event streams working in an asynchronous manner.


Analytics for IoT


IoT usually produces perishable data whose value drastically degrades over time. This underlines the importance of Realtime Analytics in IoT. With Realtime Analytics, temporal patterns, logical patterns, KPIs and thresholds can be detected and immediately alerted to the respective stakeholders, such as alarming when a temperature sensor hits a limit and notifying via the car dashboard if the tire pressure is low. Systems such as Apache Storm, Google Cloud Dataflow & WSO2 CEP are built for implementing such Realtime Analytics use cases.


Realtime alone is not enough! We should be able to understand how the current situation deviates from the usual behaviour, and to do so we have to process historical data. With Batch Analytics, periodic summarisations can be computed over historical data, against which we can compare in realtime. The average temperature in a room last month and the total power usage of a factory last year are example summarisations that can be produced using systems like Apache Hadoop and Apache Spark on data stored in scalable databases such as Apache Cassandra and Apache HBase.
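
As a sketch of such a summarisation, assuming Spark's Java API and a tiny made-up in-memory set of (room, temperature) readings standing in for data loaded from Cassandra or HBase, a batch job computing the average temperature per room could look like this:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

import java.util.Arrays;

public class AvgTemperature {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("avg-temp").setMaster("local[2]");
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Sample readings; in reality these would come from a scalable store
        JavaPairRDD<String, Double> readings = sc.parallelizePairs(Arrays.asList(
                new Tuple2<>("room-1", 22.0),
                new Tuple2<>("room-1", 24.0),
                new Tuple2<>("room-2", 19.5)));

        readings.mapValues(t -> new Tuple2<>(t, 1))                                  // (sum, count) seed
                .reduceByKey((a, b) -> new Tuple2<>(a._1() + b._1(), a._2() + b._2())) // add sums and counts
                .mapValues(sumCount -> sumCount._1() / sumCount._2())                  // average per room
                .collect()
                .forEach(r -> System.out.println(r._1() + " -> " + r._2()));

        sc.stop();
    }
}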


OK, with Batch Analytics we defined the thresholds, and with Realtime Analytics we detected and alerted on threshold violations. Notifying violations may prevent disasters, but it does not stop similar issues from arising again. To do that we need to investigate the historical data, identify the root cause of the issue and eliminate it. This can be done through Interactive Analytics: with its ad-hoc queries, it lets us search the data set for how the system and all related entities behaved before the alert was raised. Apache Drill, Apache Lucene and indexed storage systems such as Couchbase are some systems that provide Interactive Analytics.
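
As a minimal sketch of such ad-hoc searching, here is an example against the Apache Lucene API (assuming Lucene 5.x; the field name and event text are made up):

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;

public class AdHocSearch {
    public static void main(String[] args) throws Exception {
        StandardAnalyzer analyzer = new StandardAnalyzer();
        Directory index = new RAMDirectory(); // in-memory index, just for the sketch

        try (IndexWriter writer = new IndexWriter(index, new IndexWriterConfig(analyzer))) {
            Document doc = new Document();
            doc.add(new TextField("event", "temperature spike detected in room-1", Field.Store.YES));
            writer.addDocument(doc);
        }

        // Ad-hoc query over the indexed events
        IndexSearcher searcher = new IndexSearcher(DirectoryReader.open(index));
        ScoreDoc[] hits = searcher.search(
                new QueryParser("event", analyzer).parse("temperature AND spike"), 10).scoreDocs;
        for (ScoreDoc hit : hits) {
            System.out.println(searcher.doc(hit.doc).get("event"));
        }
    }
}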


Rather than being reactive, staying a step ahead by predicting issues and opportunities brings great value. This can be achieved through Predictive Analytics, which helps in scenarios such as proactive maintenance, fraud detection and health warnings. Apache Mahout, Apache Spark MLlib, Microsoft Azure Machine Learning, WSO2 ML and Skytree are systems that can help us build Predictive Analytics models.
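
As a small sketch, assuming Spark 1.x MLlib and a made-up wear-versus-operating-hours data set, a simple regression model for proactive-maintenance style predictions could be trained like this:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.mllib.linalg.Vectors;
import org.apache.spark.mllib.regression.LabeledPoint;
import org.apache.spark.mllib.regression.LinearRegressionModel;
import org.apache.spark.mllib.regression.LinearRegressionWithSGD;

import java.util.Arrays;

public class WearPrediction {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext(
                new SparkConf().setAppName("wear-prediction").setMaster("local[2]"));

        // label = observed wear, feature = operating hours in hundreds (made-up sample)
        JavaRDD<LabeledPoint> data = sc.parallelize(Arrays.asList(
                new LabeledPoint(1.0, Vectors.dense(1.0)),
                new LabeledPoint(2.1, Vectors.dense(2.0)),
                new LabeledPoint(3.0, Vectors.dense(3.0))));

        // 100 SGD iterations with a small step size for stability
        LinearRegressionModel model = LinearRegressionWithSGD.train(data.rdd(), 100, 0.1);
        System.out.println("predicted wear at 400h: " + model.predict(Vectors.dense(4.0)));

        sc.stop();
    }
}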


An Integrated Solution for IoT Analytics


From the above technologies, by selecting WSO2 Siddhi, Apache Storm, Apache Spark, Apache Lucene, Apache HBase and Apache Spark MLlib, together with many other open source software projects, WSO2 has built an integrated data analytics solution supporting Realtime, Batch, Interactive and Predictive analytics: the WSO2 Data Analytics Server.




Issues in IoT Analytics



Extreme Load


Compared with the scale of the data produced by sensors, distributed centralised analytics platforms cannot scale, and even when they can, it will not be cost effective. Hence we should ask whether we really need to process and/or store all the data the sensors produce. In most cases we only need aggregations over time, trends that exceed thresholds, outliers, events matching a rare condition, and data from when the system is unstable or changing. For example, a temperature sensor only needs to send a reading when the temperature changes; there is no point in periodically sending the same value. This directs us to optimise sensors, or data collection points, to perform local optimisations before publishing data. It enables quick detection of issues, since part of the data is already processed locally, and instant notifications, since decisions are also taken at the edge. Taking decisions at the edge can be implemented with the help of complex event processing libraries such as WSO2 Siddhi and Esper, and a simple form of it is sketched below.
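
A minimal sketch of such an edge optimisation: a send-on-change filter that suppresses readings which have not changed beyond a threshold (the delta value and sample readings are made up):

public class SendOnChangeFilter {
    private final double delta;   // minimum change worth publishing
    private Double lastPublished; // null until the first reading

    SendOnChangeFilter(double delta) {
        this.delta = delta;
    }

    // Returns true when the reading should be published upstream
    boolean onReading(double value) {
        if (lastPublished == null || Math.abs(value - lastPublished) >= delta) {
            lastPublished = value;
            return true;
        }
        return false; // suppressed: within delta of the last published value
    }

    public static void main(String[] args) {
        SendOnChangeFilter filter = new SendOnChangeFilter(0.5);
        for (double reading : new double[]{22.0, 22.1, 22.2, 23.0, 23.1}) {
            System.out.println(reading + " -> " + (filter.onReading(reading) ? "publish" : "drop"));
        }
    }
}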


Uncertainty


Due to the distributed nature of IoT, the data produced can be duplicated, arrive out of order, go missing, or even be wrong.

Redundant sensors and network latency can introduce duplicate events and out-of-order arrival. This makes temporal event processing, such as Time Windows and Pattern Matching, difficult. These are very useful for use cases such as fraud detection and Realtime Soccer Analytics (based on the DEBS 2013 dataset, https://goo.gl/c2gPrQ), where we built a system that monitored the soccer players and the ball and identified ball kicks, ball possession, shots on goal and offsides. Algorithms based on K-Slack can help order events before processing them in realtime, as sketched below.
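
A simplified sketch of the K-Slack idea (the slack value and event shape are made up): buffer incoming events, and release them in timestamp order once they are older than the maximum timestamp seen minus K, at which point no earlier event is expected to still arrive:

import java.util.PriorityQueue;

public class KSlackReorderer {
    private final long k; // maximum expected disorder, in the timestamp's time unit
    private long maxTimestampSeen = Long.MIN_VALUE;
    private final PriorityQueue<long[]> buffer = // entries are [timestamp, value]
            new PriorityQueue<>((a, b) -> Long.compare(a[0], b[0]));

    KSlackReorderer(long k) { this.k = k; }

    // Accepts a possibly out-of-order event and emits all events that are now safe
    void onEvent(long timestamp, long value) {
        buffer.add(new long[]{timestamp, value});
        maxTimestampSeen = Math.max(maxTimestampSeen, timestamp);
        while (!buffer.isEmpty() && buffer.peek()[0] <= maxTimestampSeen - k) {
            long[] e = buffer.poll();
            System.out.println("emit t=" + e[0] + " v=" + e[1]);
        }
    }

    public static void main(String[] args) {
        KSlackReorderer reorderer = new KSlackReorderer(2);
        reorderer.onEvent(1, 10);
        reorderer.onEvent(3, 30);
        reorderer.onEvent(2, 20); // late, but still within the slack of 2
        reorderer.onEvent(6, 60); // advances maxSeen, flushing t=2 and t=3 in order
    }
}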

Due to network outages, data produced by IoT sensors can go missing. In these situations complementary sensor readings are very important, where one of those sensor values is some sort of aggregation done at the edge, which helps us approximate the missing values. For example, when monitoring electricity we can publish both Load and Work readings; if some events are missed, a later Work reading lets us approximate the Load that should have arrived during the outage. The other alternative is to use fault-tolerant data streams such as Google MillWheel.
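
A tiny sketch of that approximation (the reading names and figures are made up): since Work is the accumulation of Load over time, the Work delta across the gap divided by its duration gives the average Load during the outage:

public class LoadApproximation {
    // Work is cumulative energy (e.g. watt-seconds); Load is instantaneous power (watts)
    static double averageLoadDuringOutage(double workBefore, long timeBefore,
                                          double workAfter, long timeAfter) {
        return (workAfter - workBefore) / (timeAfter - timeBefore);
    }

    public static void main(String[] args) {
        // Load events between t=100s and t=160s were lost, but Work kept accumulating
        System.out.println(averageLoadDuringOutage(5000.0, 100, 11000.0, 160) + " W");
        // -> 100.0 W average load during the 60-second outage
    }
}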

Further, at times the sensor readings won't be correct. This can be due to various reasons such as sensor quality and environmental noise. In such situations we can use Kalman filtering to smooth consecutive sensor readings for a better approximation. These types of issues are quite common when we use iBeacons for location sensing.
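
A minimal one-dimensional Kalman filter sketch of the kind used to smooth noisy readings; the noise parameters and sample values here are made up:

public class KalmanFilter1D {
    private double estimate;        // current smoothed value
    private double errorCovariance; // uncertainty of the estimate
    private final double processNoise;
    private final double measurementNoise;

    KalmanFilter1D(double initialEstimate, double processNoise, double measurementNoise) {
        this.estimate = initialEstimate;
        this.errorCovariance = 1.0;
        this.processNoise = processNoise;
        this.measurementNoise = measurementNoise;
    }

    double update(double measurement) {
        // Predict: value is assumed constant, uncertainty grows by the process noise
        errorCovariance += processNoise;
        // Correct: blend prediction and measurement weighted by the Kalman gain
        double gain = errorCovariance / (errorCovariance + measurementNoise);
        estimate += gain * (measurement - estimate);
        errorCovariance *= (1 - gain);
        return estimate;
    }

    public static void main(String[] args) {
        KalmanFilter1D filter = new KalmanFilter1D(0.0, 0.01, 4.0);
        for (double noisy : new double[]{10.2, 9.7, 10.5, 9.9, 10.1}) {
            System.out.printf("raw %.1f -> smoothed %.2f%n", noisy, filter.update(noisy));
        }
    }
}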


Visualisation of IoT data


Visualisation is one of the most important aspects of effective analytics, and with Big Data and IoT, visualisation becomes even more complicated. Per-device and summarisation views are essential, and beyond that users should be able to visualise device groups based on various categories such as device type, location, device owner type, deployed zone and many more. Since these categories are dynamic, and each person monitoring the system has their own preferences, a composable and customisable dashboard is essential. Further, charts and graphs should be able to visualise the huge stored data sets, where sampling and indexing techniques can be used for better responsiveness.


Communicating with devices


In IoT, sending a command or alert to a device is complicated; to do so we have to use client-side polling based techniques. Here we store the data that needs to be pushed to the client in a database or queue and expose it via secured APIs (through systems like WSO2 API Manager).
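
A minimal sketch of this queue-and-poll pattern (the class and method names are made up; a real deployment would back it with a database or message queue and a secured API rather than an in-memory map):

import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

public class CommandStore {
    // Per-device command queues; stands in for the database/queue behind the API
    private final Map<String, Queue<String>> pending = new ConcurrentHashMap<>();

    // Server side: enqueue a command destined for a device
    void push(String deviceId, String command) {
        pending.computeIfAbsent(deviceId, id -> new ConcurrentLinkedQueue<>()).add(command);
    }

    // Device side: called periodically over the secured API; null when nothing is pending
    String poll(String deviceId) {
        Queue<String> queue = pending.get(deviceId);
        return queue == null ? null : queue.poll();
    }

    public static void main(String[] args) {
        CommandStore store = new CommandStore();
        store.push("bulb-7", "TURN_OFF");
        System.out.println(store.poll("bulb-7")); // device picks up TURN_OFF on its next poll
        System.out.println(store.poll("bulb-7")); // nothing pending -> null
    }
}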


Reference Architecture for IoT Analytics




Here data is collected through a message broker over a protocol such as MQTT and immediately written to disk by the WSO2 Data Analytics Server (DAS). In the meantime the collected data is cleaned in realtime, the cleaned data is also persisted, and in parallel it is fed into realtime event processing, which sends alerts and provides realtime visualisations. The stored clean data is used by the WSO2 Machine Learner (ML) to build machine learning models, which are deployed on WSO2 DAS for realtime predictions. Further, the stored clean data is also used by Spark to run batch analytics, producing summarisations which are then visualised in dashboards.



It was a pleasure to present “Sensing the world with Data of Things” at Structure Data 2016, San Francisco. Please find the slides below.




Prabath Siriwardena

Identity Patterns with the WSO2 Identity Server
Multi-factor authentication for WSO2 Identity Server management console

Problem:
  • Enable MFA for the WSO2 Identity Server Management Console. 
  • In other words, the Identity Server’s Management Console itself must be protected with MFA.
Solution:
  • Introduce WSO2 Identity Server as a service provider to itself. 
  • Under the service provider configuration, configure multi-step authentication having authenticators, which support MFA in each step. 
  • Enable SAML SSO carbon authenticator through the corresponding configuration file. 
  • How-to: http://blog.facilelogin.com/2016/03/enabling-mult-factor-authentication-for.html 
  • Products: WSO2 Identity Server 5.0.0+ 

Chanaka Fernando

WSO2 ESB Passthrough Transport in a nutshell

If you have ever used WSO2 ESB, you might already know that it is one of the highest performing open source ESB solutions in the integration space. The secret behind its performance is the so-called Pass Through Transport (PTT) implementation, which handles the HTTP requests. If you are interested in learning about PTT from scratch, you can refer to the following article series written by Kasun Indrasiri.








If you read through the above mentioned posts, you can get a good understanding of the concepts and the implementation. But one thing that is harder is keeping all the diagrams in your memory. It is not impossible, but it is a little hard for a person with an average memory. I have tried to draw a single picture that captures all the required information related to the PTT. Here is my drawing of the WSO2 ESB PTT.

WSO2 ESB Passthrough Transport


If you look at the picture above, it captures 3 main features of the PTT.
  • The green boxes at the edges of the middle box contain the different methods invoked by the http-core library on the ESB's Source handler whenever a new event occurs.
  • The orange boxes represent the internal state transitions of the PTT, starting from REQUEST_READY up until RESPONSE_DONE.
  • The light blue boxes depict the objects created within the lifecycle of a single message execution flow, how those objects interact, and when they get created.
In addition to the above 3 main features, the axis2 engine and the synapse engine are also depicted, with purple and shiny purple boxes. These components are drawn as black boxes, without considering the actual operations that happen within them.

Footnotes