WSO2 Venus

Evanthika AmarasiriEnabling SSL Tunneling through a Squid Proxy Server

This post describes how we can proxy outgoing requests through the ESB using a Squid Proxy Server. For more information on the scenario, you can refer to the WSO2 ESB documentation.

Step 1 - Setting up Squid Proxy Server

To setup a Squid Proxy Server locally, you can follow the instructions available here.

Step 2 - Configuring Squid Proxy Server - updating the squid.conf file

Add the following line under the acl section

acl squid.proxy.server src appserver.wso2.com


The following should be added before the http_access TAG

http_access allow squid.proxy.server


Note: We will be referring to this proxy server instance by the name squid.proxy.server. Hence, you need to add this entry to the /etc/hosts file of your local instance as well as of the instance where the Squid server is running.
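For example, assuming the Squid server runs at 192.168.53.100 (a hypothetical address) and the backend at 192.168.53.176 (the address that appears in the access log later in this post), the /etc/hosts entries would look like this:

192.168.53.100   squid.proxy.server
192.168.53.176   appserver.wso2.com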

Add the following port information before the https_port TAG section

http_port 8888


Once the above is added to the squid.conf file, restart the Squid server

sudo service squid3 restart
 
Step 3 - Enabling the proxy configuration in WSO2 ESB

To do this, add the configuration below to axis2.xml, under both the PassThroughHttpSender and PassThroughHttpSSLSender configurations.

<parameter name="http.proxyHost" locked="false">squid.proxy.server</parameter>
<parameter name="http.proxyPort" locked="false">8888</parameter>
 
Step 4 - Creating a Proxy Service

Once the above configurations are done and the WSO2 ESB server is restarted, you can create a simple pass-through proxy service to test the scenario.
Note that as the endpoint, I am using a backend referred to by the hostname appserver.wso2.com. This is the hostname we added to the squid.conf file above, under the acl section.

<proxy name="SSLTunnelingProxy"
          transports="https http"
          startOnLoad="true"
          trace="disable">
      <description/>
      <target>
         <inSequence>
            <send>
               <endpoint>
                  <address uri="https://appserver.wso2.com/services/SimpleStockQuoteService"/>
               </endpoint>
            </send>
         </inSequence>
         <outSequence>
            <send/>
         </outSequence>
      </target>
   </proxy>


Step 5 - Invoking the Proxy Service

You can test the scenario using a client of your preference. For example, here is a minimal curl invocation (a sketch assuming the ESB runs locally on its default HTTP port 8280, that a SOAP payload is saved in request.xml, and that the backend expects the SimpleStockQuoteService getQuote action):
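curl -v -X POST \
     -H "Content-Type: text/xml;charset=UTF-8" \
     -H "SOAPAction: urn:getQuote" \
     -d @request.xml \
     http://localhost:8280/services/SSLTunnelingProxy

If the message is sent through the proxy server, you should see entries like the following in the /var/log/squid/access.log file.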

1493112155.126  49234 127.0.0.1 TCP_MISS/200 2335 CONNECT appserver.wso2.com:443 - HIER_DIRECT/192.168.53.176 -
1493112888.241      0 10.100.7.144 TCP_DENIED_REPLY/403 3429 CONNECT appserver.wso2.com:443 - HIER_NONE/- text/html


Evanthika AmarasiriReason for "PasswordInvalidAsk Password Feature is disabled" error when adding users through RemoteUserStoreManager

When trying to add users through the RemoteUserStoreManagerService, it returned the following SOAP fault.


<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
       <soapenv:Body>
        <soapenv:Fault>
            <faultcode>soapenv:Server</faultcode>
            <faultstring>PasswordInvalidAsk Password Feature is disabled</faultstring>
            <detail/>
        </soapenv:Fault>
    </soapenv:Body>
</soapenv:Envelope>


The reason for this issue was that I had forgotten to add the password (credential) element to the SOAP message. Once this element was added, I was able to successfully create the user.
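For reference, a working addUser request looks something like the following (a sketch: the ser namespace shown is the one commonly used by the RemoteUserStoreManagerService admin service, the credential element carries the password, and the user name and password values are placeholders).

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:ser="http://service.ws.um.carbon.wso2.org">
   <soapenv:Body>
      <ser:addUser>
         <ser:userName>testuser</ser:userName>
         <ser:credential>Password1@</ser:credential>
         <ser:requirePasswordChange>false</ser:requirePasswordChange>
      </ser:addUser>
   </soapenv:Body>
</soapenv:Envelope>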

Yasassri RatnayakeBlocking non existing server_name in NginX



Server names are defined using the server_name directive and determine which server block is used for a given request. See also “How nginx processes a request”. They may be defined using exact names, wildcard names, or regular expressions:

Nginx first decides which server should process the request. Let's start with a simple configuration where all three virtual servers listen on port *:80:
server {
    listen 80;
    server_name www.yasassri.org;
    ...
}

server {
    listen 80;
    server_name www.yasassri.net;
    ...
}

server {
    listen 80;
    server_name www.yasassri.com;
    ...
}
In the above configuration, Nginx checks only the request's "Host" header field to determine which server the request should be routed to. If its value does not match any server name, or if the request does not contain this header field at all, Nginx will route the request to the default server for this port.

Let me elaborate on this with an example. If a client sends a request to www.yasassri.org, www.yasassri.net or www.yasassri.com, Nginx will route the message to the corresponding server block (provided the Host header contains the hostname). But what if a client sends a message with the Host header www.abcd.com? This message doesn't match any of the server names, so it shouldn't be routed anywhere, right? No, that's not what really happens: the default behavior of Nginx is to route the message to the default server. In the configuration above, the default server is the first one, which is Nginx's standard default behavior. You can also set explicitly which server should be the default, with the default_server parameter in the listen directive:
server {
    listen 80 default_server;
    server_name example.net www.yasassri.net;
    ...
}

So what if you want to block all calls that don't match the defined server names? Nginx doesn't provide a configuration option for this, but as a workaround you can simply add the following server block, which becomes your default server block.


server {
    listen 80 default_server;
    return 404;
}

So whenever the server name doesn't match, the request will be routed to the above server block, and a 404 is sent to the client.
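You can verify this with curl by sending an arbitrary Host header (replace the address with your server's):

curl -i -H "Host: www.abcd.com" http://<your-server-ip>/

The response should be HTTP/1.1 404 Not Found.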

So that's it, please drop a comment if you have more queries.

Yashothara ShanmugarajahInstall Moses in Ubuntu

In this blog, we will mainly focus on installing Moses and its data processing tools on the Ubuntu operating system. We need to install some other packages before installing Moses; we will cover those here as well.

Before we start, make sure the following packages are installed.

g++
git
subversion
automake
libtool
zlib1g-dev
libboost-all-dev
libbz2-dev
liblzma-dev
python-dev
libtcmalloc-minimal4


If you have not installed the above packages, you can install them using the command below.

sudo apt-get install <package name>
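For example, all of the packages listed above can be installed in one go:

sudo apt-get install g++ git subversion automake libtool zlib1g-dev \
    libboost-all-dev libbz2-dev liblzma-dev python-dev libtcmalloc-minimal4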

g++ and Boost are needed to compile Moses. Boost could already be installed by the command above (via libboost-all-dev), but below we will see how to install Boost from source.

Installing Boost

For that, we need to download Boost. You can use the wget command to download it. If you have any trouble downloading it, you can download the latest version of Boost directly from https://sourceforge.net/projects/boost/files/boost/. After you download boost<version>.tar.gz, you can extract it using the following command.

tar zxvf boost<version>.tar.gz

Then go inside the Boost folder, and you need to start the build script:

cd boost<version>/ 
./bootstrap.sh 
./b2 -j5 --prefix=$PWD --libdir=$PWD/lib64 --layout=tagged link=static threading=multi,single install || echo FAILURE

This creates the library files in the lib64 directory, NOT in the system directory.

Note: In the last command, "-j5" tells the build to run 5 jobs in parallel. Adjust this number to match the number of cores on your machine.

Installing Moses

For installing Moses, you need to clone it from GitHub. That is why we installed git on our system.

You can clone Moses from GitHub (https://github.com/moses-smt/mosesdecoder) using the commands below.

git clone https://github.com/moses-smt/mosesdecoder.git
cd mosesdecoder/
Then you can compile Moses using

make -f contrib/Makefiles/install-dependencies.gmake 
./compile.sh

Installing Word Alignment tool

Moses requires a word alignment tool, such as GIZA++, MGIZA, or fast_align. Here I am going to cover installing GIZA++ and MGIZA. You can select the one you want to use for word alignment, so installing one of them is enough.

  • Installing GIZA++
You can clone GIZA++ from https://github.com/moses-smt/giza-pp (or download and extract the archive into the folder where you wish to install GIZA++). Then build it:

cd giza-pp
make

If you copy the GIZA++ binaries into the mosesdecoder tools directory, it makes things easier when you are training the system afterward.

cd ~/mosesdecoder
mkdir tools
cp ~/giza-pp/GIZA++-v2/GIZA++ ~/giza-pp/GIZA++-v2/snt2cooc.out \
   ~/giza-pp/mkcls-v2/mkcls tools

  •  Installing MGIZA
You can clone MGIZA from https://github.com/moses-smt/mgiza.

Clone or extract the package into the folder where you wish to install MGIZA, then build it:

cd mgiza/mgizapp
cmake .
make
make install

It will take some time to install, so you can take a rest in the meantime.

Installing IRSTLM

You can create a language model using IRSTLM. Language model toolkits perform two main tasks: training and querying. You can train a language model with any of them, produce an ARPA file, and query it with a different one. To train a model, just call the relevant script.
If you want to use SRILM or IRSTLM to query the language model, they need to be linked with Moses.

You need to download IRSTLM from http://sourceforge.net/projects/irstlm/

tar zxvf irstlm-<version>.tgz
cd irstlm-<version>
./regenerate-makefiles.sh
./configure --prefix=$HOME/irstlm-<version>
make install
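To link Moses against the IRSTLM build above, you can recompile Moses pointing bjam at the IRSTLM installation prefix (a sketch; adjust the paths and the job count to your machine):

cd ~/mosesdecoder
./bjam --with-irstlm=$HOME/irstlm-<version> -j5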



Fine, now we have installed Moses and the related tools, and we are ready to build a baseline system. In the next blog, we will see how to build a baseline system for Tamil-to-Sinhala translation.
 

Farasath AhamedJWT Bearer Grant - OAuth2


Previously I wrote a post on my first step towards understanding OAuth. This post builds on that. OAuth has different types of flows targeting various scenarios or use cases. The main feature that differentiates each of these flows is the grant type.

What exactly is an OAuth grant type?

An OAuth grant is something that a client application can exchange for an access token from an Authorization Server. An access token typically represents a user's permission for the client application to access resources on their behalf.

OAuth Grant Types

The OAuth 2.0 core specification defines four types of grants,
  • Authorization code grant
  • Implicit grant
  • Resource owner credentials grant
  • Client credentials grant
In addition to these, the core specification also defines a refresh token grant type.

There are a few new additions to these as well:
  • Message authentication code (MAC) tokens
  • SAML 2.0 Bearer Assertion Profiles
  • JSON Web Token grant

I would like to focus on the JSON Web Token grant in this post. I hope to write up my findings about the rest of the grant types in another post.


What is a JWT Bearer Grant?

A JSON Web Token bearer grant is simply a JSON string containing claim values that are evaluated and validated by the JWT grant handler at the Authorization Server end before an access token is issued. The anatomy of the JWT grant and its validation process are clearly described in the JSON Web Token (JWT) Profile for OAuth 2.0.

A sample JWT Bearer grant payload would look like this:

{
           "iss":"https://jwt-idp.example.com",
           "sub":"mailto:mike@example.com",
           "aud":"https://jwt-rp.example.net",
           "nbf":1300815780,
           "exp":1300819380,
           "http://claims.example.com/member":true
}

A JWT also has a header declaring the algorithm used to sign the JWT, so that it can be verified by the token endpoint (grant handler).

{
            "alg":"ES256"
}

The JWT is signed and base64url encoded before being sent in a POST request to the token endpoint, to be exchanged for an access token as shown below.

POST /token.oauth2 HTTP/1.1
Host: authz.example.net
Content-Type: application/x-www-form-urlencoded

grant_type=urn%3Aietf%3Aparams%3Aoauth%3Agrant-type%3Ajwt-bearer &assertion=eyJhbGciOiJSUzI1NiJ9.
eyJleHAiOjE0MjczOTUwODMsInN1YiI6ImZhcmF6YXRoIiwibmJmIjoxNDI3Mzk0NDgzLCJhdWQiOlsiaHR0cHM6XC9cL2xvY2FsaG9zdDo5NDQzXC9vYXV0aDJcL3Rva2VuIiwid3NvMi1JUyJdLCJpc3MiOiJMT0NBTCIsImp0aSI6IlRva2VuNTY3NTYiLCJpYXQiOjE0MjczOTQ0ODN9.
MIXN0t9suRHBnwzG0FZXfetcs1iYcFHax-OLbxF2Vn13-NfzzFYwhKqngFE8BksH1r_2hY0X2XaIU2FlTaUxi1F4pyR59tU55qYnWwYxwhnprOBTqVJormuaBi0olDsyeD8veG_D59Oyp98C8KIGrbjDVblrwoqPCiO3u3W5rrU


The assertion value contains three parts separated by dots:


  • base64url-encoded JWT header
  • base64url-encoded JWT payload
  • Signature (calculated by concatenating the base64url-encoded header and the base64url-encoded payload with a dot, and signing the result); simply:

    Signature = sign(base64URLEncode(header) + '.' + base64URLEncode(payload))

You can easily decode this assertion using an online JWT debugger tool. Simply copy and paste the assertion part and you will be enlightened :P

The decoded JWT for the above assertion would be:

{
  "alg": "RS256"
}
{
  "exp": 1427395083,
  "sub": "farazath",
  "nbf": 1427394483,
  "aud": [
    "https://localhost:9443/oauth2/token",
    "wso2-IS"
  ],
  "iss": "LOCAL",
  "jti": "Token56756",
  "iat": 1427394483
}


Please note that in the POST request body, the grant_type parameter takes the value "urn:ietf:params:oauth:grant-type:jwt-bearer", and the assertion parameter value is the signed, base64url-encoded JWT. The value of the assertion parameter MUST contain a single JWT.
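For example, the same token request can be made with curl (a sketch; the endpoint and the placeholder assertion value are illustrative):

curl -X POST https://authz.example.net/token.oauth2 \
     -d "grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer" \
     --data-urlencode "assertion=<signed-base64url-encoded-JWT>"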

The claim values and how they are validated will be discussed in the latter part of this post, along with the validation rules.



How does JWT Bearer Grant Work?

By looking at the diagram above, you should be able to get a basic idea. Let me go through the steps.


  • A client application authenticates with an identity provider, requesting a JSON Web Token (JWT) containing the claim values discussed in the latter part of this post.
  • The identity provider issues the JWT after successfully authenticating the client.
  • The issued JWT can be used by the client to obtain access tokens from authorization servers that trust the identity provider that issued the JWT. Simply put, the identity provider vouches for your identity, and the authorization server accepts it because it trusts the identity provider.
  • After validating the identity provider's signature on the received assertion and validating the claim values in the issued JWT, the authorization server issues an access token.



The above diagram should put it all together for you.

Validating a JSON Web Token Bearer Grant

The validation rules specified for the JWT Bearer grant type are quite simple and have some similarities to the SAML Bearer grant type as well.

Before jumping into the validation rules, let's look at the claim values/parameters that come with a valid JWT. There are two types of these claims: one set is mandatory, and the other is, yeah you guessed it right, "OPTIONAL".

  • Mandatory Values
    • iss (issuer)
    • sub (subject)
    • aud (audience)
    • exp (expiration time)
  • Optional Values
    • nbf (not before)
    • iat (issued at)
    • jti (json web token Id)
    • other custom claims


Rules for validating a JWT bearer grant

  • The JWT MUST contain an iss (issuer) claim that contains a unique identifier for the entity that issued the JWT. The issuer (iss) value is a string that uniquely identifies the Identity Provider or the entity that issued the JWT.
  • The JWT MUST contain a sub (subject) claim identifying the principal that is the subject of the JWT. The subject (sub) value identifies the entity that the identity provider (or the entity that issued the JWT) vouches for.
  • The JWT MUST contain an aud (audience) claim containing a value that identifies the authorization server as an intended audience. The token endpoint URL of the authorization server MAY be used as a value for an aud element to identify the authorization server as an intended audience of the JWT. The audience (aud) value or values are the intended recipients of the JWT, denoted by the Identity Provider or the JWT-issuing entity.
  • The JWT MUST contain an exp (expiration) claim that limits the time window during which the JWT can be used. The authorization server MUST reject any JWT with an expiration time that has passed, subject to allowable clock skew between systems. Note that the authorization server may reject JWTs with an exp claim value that is unreasonably far in the future. The expiration time (exp) value limits the usage of the JWT beyond the specified point in time.
  • The JWT MAY contain an nbf (not before) claim that identifies the time before which the token MUST NOT be accepted for processing. The not-before time (nbf) value forces a JWT to be used only after a specified time.
  • The JWT MAY contain an iat (issued at) claim that identifies the time at which the JWT was issued. Note that the authorization server may reject JWTs with an iat claim value that is unreasonably far in the past.
  • The JWT MAY contain a jti (JWT ID) claim that provides a unique identifier for the token. The authorization server MAY ensure that JWTs are not replayed by maintaining the set of used jti values for the length of time for which the JWT would be considered valid based on the applicable exp instant. The JWT ID (jti) value is an identifier for the issued JWT. It can be used by the validating entity to prevent used JWTs from being replayed. This value need not always be unique, which means that a JWT with an already-validated jti could be reused after a certain threshold of time, determined by the validating entity and the application context.
  • The JWT MAY contain other claims. This really is the extension point of the JWT specification. Custom claims can be made mandatory or optional, and their validation logic will depend on the application context.
  • The JWT MUST be digitally signed or have a Message Authentication Code applied by the issuer. The authorization server MUST reject JWTs with an invalid signature or Message Authentication Code. The digital signature or MAC value ensures the integrity of the JWT exchanged between the issuing and validating entities. The algorithm used for this comes with an inherent vulnerability, discussed here.
  • The authorization server MUST reject a JWT that is not valid in all other respects per JSON Web Token (JWT) [JWT].

    Amalka SubasingheAdd multiple database users with different privileges for the same database

    Currently, the WSO2 Integration Cloud supports adding multiple database users for the same database, but does not support changing user privileges.

    Let's say someone has a requirement to use the same database via two different users: one user has full access, while the other user should have READ_ONLY access. How do we do this in the Integration Cloud?
    We are planning to add a feature to change user permissions, but until then you can do it as described below.

    Steps:

    1. Log in and create a database with a user


    2. Once you create a database you can see it as below, and you can add another user by clicking on the All Users icon


    3. There you can create a new user, or you can attach an existing user to the same database


    I added two users, u_mb_2NNq0tjT and test_2NNq0tjT, to the database wso2mb_esbtenant1.
    My requirement is to give full access to the u_mb_2NNq0tjT user and remove the INSERT permission from the test_2NNq0tjT user.

    4. Log in to mysql.storage.cloud.wso2.com via the mysql client as user u_mb_2NNq0tjT and revoke the INSERT permission of test_2NNq0tjT

    First, log in as test_2NNq0tjT and check the grants:
    mysql -u  test_2NNq0tjT -pXXXXX -h mysql.storage.cloud.wso2.com

    show grants
    +-----------------------------------------------------------------------------------------+
    | Grants for test_2NNq0tjT@%                                                             |
    +-----------------------------------------------------------------------------------------+
    | GRANT USAGE ON *.* TO 'test_2NNq0tjT'@'%' IDENTIFIED BY PASSWORD <secret>              |
    | GRANT ALL PRIVILEGES ON `wso2mb_esbtenant1`.* TO 'test_2NNq0tjT'@'%' WITH GRANT OPTION |
    +-----------------------------------------------------------------------------------------+


    Then log in as u_mb_2NNq0tjT and revoke the INSERT permission:
    mysql -u  u_mb_2NNq0tjT -pXXXXX -h mysql.storage.cloud.wso2.com

    REVOKE INSERT ON wso2mb_esbtenant1.* FROM 'test_2NNq0tjT'@'%';

    Log in again as test_2NNq0tjT and check the grants:
    mysql -u  test_2NNq0tjT -pXXXXX -h mysql.storage.cloud.wso2.com

    show grants

    +---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    | Grants for test_2NNq0tjT@%                                                                                                                                                                                                                                   |
    +---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    | GRANT USAGE ON *.* TO 'test_2NNq0tjT'@'%' IDENTIFIED BY PASSWORD <secret>                                                                                                                                                                                    |
    | GRANT SELECT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, EVENT, TRIGGER ON `wso2mb_esbtenant1`.* TO 'test_2NNq0tjT'@'%' WITH GRANT OPTION |
    +---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
    2 rows in set (0.24 sec)


    With this approach, we can change the permissions of another user who is attached to the same database.

    To make a read-only user, you need to revoke the permissions as follows:
    REVOKE INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, CREATE VIEW, CREATE ROUTINE, ALTER ROUTINE, EVENT, TRIGGER on `wso2mb_esbtenant1`.*  from 'test_2NNq0tjT'@'%'; 
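    You can then verify the change with SHOW GRANTS again:

    SHOW GRANTS FOR 'test_2NNq0tjT'@'%';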

    Please note: after you change the user's privileges, do not detach/attach the test_2NNq0tjT user to the same or a different database, as that will automatically reset all privileges.

    Yashothara ShanmugarajahIntroduction to Moses

    Before coming to Moses, we need a brief introduction to Natural Language Processing and language translation. Then we can understand Moses easily.

    Natural Language Processing

    Natural Language Processing (NLP) is an Artificial Intelligence method used to communicate with intelligent systems, such as computers, using a natural language such as English, Tamil or Sinhala. By utilizing NLP, developers can organize and structure knowledge to perform tasks such as automatic summarization, translation, named entity recognition, relationship extraction, sentiment analysis, speech recognition, and topic segmentation. NLP considers the hierarchical structure of language: several words make a phrase, several phrases make a sentence. NLP is commonly used for text mining, machine translation, and automated question answering.

    We can use NLP to translate from one language to another. Instead of hand-coding large sets of rules, NLP can rely on machine learning to automatically learn these rules by analyzing a set of examples (i.e. a large corpus) and making statistical inferences.

    Moses

    Moses is a statistical machine translation system which allows you to translate from one language to another by training translation models. For training the model you need a collection of translated texts in both languages (a parallel corpus). Once you have a trained model, an efficient search algorithm quickly finds the highest-probability translation among the exponential number of choices. It is a data-driven machine translation approach. The Moses system is based on Bayes' theorem.

    To explain this from a translation point of view: the probability of a translation from language f to language e depends on the probability of a translation from e to f and on the probability of e itself.
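    In equation form, this is the standard noisy-channel formulation, where f is the source-language sentence and e the target-language sentence:

    \hat{e} = \arg\max_{e} p(e \mid f) = \arg\max_{e} p(f \mid e) \, p(e)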

    Further, the system can be drilled down into log-linear models with the following feature weights:


    • Weight-t Translation
    • Weight-l Language model
    • Weight-d distortion (reordering)
    • Weight-w word penalty

    Moses was developed in C++ for efficiency and follows a modular, object-oriented design. The toolkit is a complete out-of-the-box translation system for academic research. It consists of all the components needed to preprocess data and train the language models and the translation models. It also contains tools for tuning these models using minimum error rate training, and for evaluating the resulting translations using the BLEU score.
     

     Moses requires two main things:
    • Parallel text- Collection of sentences in two different languages, which is sentence-aligned, each sentence in one language is matched with its corresponding translated sentence in the other language.
    • Monolingual target set- A statistical model built using monolingual data in the target language and used by the decoder to try to ensure the fluency of the output.
    There are two main components in Moses:
    • Training Pipeline- Take the raw data and turn it into a machine translation model
    • Decoder- Translate the source sentence into the target language
    Decoder Modules 
    • Input: This can be a plain sentence, or it can be annotated with xml-like elements to guide the translation process, or it can be a more complex structure like a lattice or confusion network.
    • Translation model: This can use phrase-phrase rules, or hierarchical (perhaps syntactic) rules.
    • Decoding algorithm: Decoding is a huge search problem.
    • Language model: Moses supports several different language model toolkits (SRILM, KenLM, IRSTLM, RandLM)
     The process of the Moses system is described in the picture given below.



     We need to install Moses to train the system and get output, so in the next blog we will see how to install Moses on the Ubuntu operating system.

    Rajjaz MohammedWhat to do with PermGen space when login WSO2EI 6.1.0

    After successfully starting WSO2 Enterprise Integrator (EI) 6.1.0, I tried to log in to the management console and encountered the error java.lang.OutOfMemoryError: PermGen space. Please follow the steps below to solve the issue.

    Error

    java.lang.OutOfMemoryError: PermGen space
    Dumping heap to /home/rajjaz/Documents/support/wso2ei-6.1.0/repository/logs/heap-dump.hprof ...
    Heap dump file created [
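    The post is truncated here. For reference, a common remedy for PermGen exhaustion on Java 7 (not necessarily the exact steps from the original post) is to enlarge the PermGen space in the JVM options of the EI startup script, e.g.:

    -XX:PermSize=256m -XX:MaxPermSize=512m

    Note that these flags were removed in Java 8, where the PermGen space itself no longer exists.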

    Nadeeshaan GunasingheBallerina Composer — Tutorial (Part III — Services)

    In most integration scenarios, you have the requirement of exposing certain functionality to the outside world as a service. You can achieve this requirement quite easily with Ballerina service definitions: it is as simple as dragging and dropping a service from the Composer tool palette to start your service logic.
    A Ballerina service encloses the resources which contain the integration logic. You can add multiple resources to expose various functionalities, each handling a different type of request.
    A service can consist of the following artifacts:
    1. Resources
    2. Connectors — Accessible by all the resources (Global to service scope)
    3. Variables — Accessible by all the resources (Global to service scope)

    Composing your first Service


    Nadeeshaan GunasingheBallerina Composer — Tutorial (Part II — Function Definitions)

    The first part of this tutorial series gave you a basic idea about the Ballerina Composer and how to use it for writing your integration flows. As you may remember, we wrote a simple echo service which is similar to the echo service sample provided with the Ballerina distribution. I'll provide a detailed description of Ballerina services in our next tutorial.
    In this tutorial let's have a look at one of the top-level Ballerina language constructs: functions. We will look at what a Ballerina function is and how we can use one in a real-world integration scenario to bring more flexibility to it.

    What is a Ballerina Function


    Nadeeshaan GunasingheBallerina Composer — Flexible, powerful and Smartest ever graphical tool for composing your Ballerina Programs

    Ballerina Programming Language

    Ballerina is a next-generation general-purpose programming language which is concurrent, strongly typed, and equipped with both textual and graphical syntax. Ballerina is highly optimized for writing programs that integrate with data sources, services, and network-connected APIs of all kinds. To get a more detailed insight into Ballerina (the language for the future of integration :) ), refer to ballerinalang.org. You can download it and give it a whirl, and you'll feel how cool it is.

    Ballerina Composer

    Now it's time to get your dance choreographed with some style. That's where the Ballerina Composer comes into action. The Ballerina Composer gives you great flexibility in writing your Ballerina programs. It can be introduced as the best and strongest graphical representation of a programming language, and it covers each and every corner of the Ballerina language.

    Read More...

    Rajjaz MohammedWSO2 ESB 4.8.1 support for FileConnector V2

    WSO2 ESB File Connector version 2 introduces atomic operations related to the file system and allows you to easily manipulate files based on your requirements. The file streaming functionality, implemented using Apache Commons I/O, lets you copy large files and reduces the file transfer time between two file systems, resulting in a significant improvement in performance that can be utilised in file

    Lasindu CharithWSO2 Data Analytics Server - Delete and Create local indexes


    WSO2 Data Analytics Server uses local indexing to speed up data searches. WSO2 DAS has a distributed indexing engine built on top of Apache Lucene. Data is indexed and saved in the Data Access Layer of DAS, in a store referred to as the Analytics File System.

    More information on indexing can be found in [1]. By default, the indexed data is stored in <DAS_HOME>/repository/data on each node. To reindex all the data from scratch, follow the steps below (please note that these steps are for a single-node Analytics setup; a shell sketch of the same steps is given after the list).

    - Stop the Analytics Server
    - Delete the content in <DAS_HOME>/repository/data
    - In <DAS_HOME>/repository/conf/analytics/local-shard-allocation-config.conf, change all entries from NORMAL to INIT (this reindexes all shards; there are 6 shards by default)
    - Start the Analytics Server
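    As a shell sketch of the same steps (assuming a Linux installation and that the server has already been stopped; wso2server.sh is the standard Carbon start script):

    cd <DAS_HOME>
    rm -rf repository/data/*
    sed -i 's/NORMAL/INIT/g' repository/conf/analytics/local-shard-allocation-config.conf
    sh bin/wso2server.sh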

    Refer to [2] for more information on shard indexes. Re-indexing can be particularly important if you have corrupt indexes, have done a database migration, etc., so that new indexes are created from the currently available data.

    References

    [1] https://docs.wso2.com/display/DAS310/Configuring+Indexes
    [2] https://docs.wso2.com/display/DAS310/Storing+Index+Data

    Chathura DilanHow to change the Raspberry Pi Resolution for Remote Login

    If you have enabled VNC on your Raspberry Pi and, say, you are accessing it with a remote VNC viewer, you might be wondering how to change its resolution.

    Here is the simple trick to change it.

    Login to your raspberry pi

    Open the /boot/config.txt file, uncomment the following lines, and set the values as below:

    hdmi_group=2
    hdmi_mode=58
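    (For reference, per the Raspberry Pi video configuration documentation, hdmi_group=2 selects DMT monitor timings, and hdmi_mode=58 in that group corresponds to 1680x1050 at 60 Hz; choose a different hdmi_mode value for a different resolution.)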

    Reboot your Raspberry Pi, and that's it.

    Ayyoob HamzaFog Computing — Is it a hype or a requirement ?

    The computation and storage capability that usually took up an entire building years ago can now fit into our laptops, phones and wearables. This is because of the continuous development of the computation and storage capabilities of hardware devices. The number of devices connected to the Internet has surpassed the human population, and 90% of the data found on the Internet was accumulated in the past 2 years. Cisco predicts that there will be more than 10 billion devices connected to the Internet by 2020, and approximately 30.6 exabytes of mobile traffic by 2020 [1]. The growth in computation power and storage is accelerating at a much faster pace than that of bandwidth. This suggests that there will be enormous data flows and network congestion on the Internet in the near future. One of the major factors leading to this issue is the continuous push towards achieving the vision of the Internet of Things (IoT). Therefore, to solve the issue of network congestion, we have to delve into what IoT expects from the cloud [2]. This expectation can be brought down to the following 3 requirements:

    • Should work in a low-bandwidth environment (e.g. an oil platform)
    • Low latency (e.g. control systems)
    • Network reliability (e.g. a smart home)

    However, from a cloud perspective, there is a large data flow due to the current collect-and-act (batch processing) paradigm. The cloud mostly focuses on collecting data, then processing it, and finally acting based on the outcome. In addition, the cloud is located within the Internet, a complex heterogeneous network, which leads to high latency and increased cost when moving such large data sets. Furthermore, resilience/denial of service is an issue in the cloud, which causes network unreliability. Therefore, to solve these problems, a new paradigm was introduced to extend the existing cloud to the edge: Fog Computing. The fog consists of heterogeneous nodes, ranging from high-end servers, routers and vehicles to hand-held mobile phones.

    The main difference between the cloud and the fog is that fog computing works in a distributed approach, whereas cloud computing happens in a centralized approach. Fog is not a new paradigm shift brought in to replace the cloud, but rather an extension to support the current cloud approach.

    References

    [1] Cisco. Cisco visual networking index: Global mobile data traffic forecast update, 2015-2020 white paper. http://www.cisco.com/c/en/us/solutions/collateral/service-provider/visual-networking-index-vni/mobile-white-paper-c11-520862.pdf

    [2] Mung Chiang. Fog networking: An overview on research opportunities. arXiv preprint arXiv:1601.00835, 2016.

    Dhananjaya jayasingheWSO2 Server Startup Taking a lot of time on Mac ??? Solved...

    With macOS Sierra, I was experiencing a huge delay in server startup for the latest WSO2 versions. The startup times were as follows.


    Server        Version   Java Version   Startup Time
    WSO2 ESB      4.8.1     1.7.0_80       15 seconds
    WSO2 ESB      5.0.0     1.7.0_80       90 seconds
    WSO2 ESB      5.0.0     1.8.0_101      89 seconds
    API Manager   1.7.0     1.7.0_80       17 seconds
    API Manager   2.0.0     1.7.0_80       166 seconds
    API Manager   2.0.0     1.8.0_101      167 seconds


    My processor details were as below.



    I was really in doubt as to why it took so much time to start the server. While researching, I came across the following discussion [1]. It is really interesting; you can go through it and understand the cause.

    The solution, as in the blog post above, was to add a mapping from my MacBook's hostname to the canonical 127.0.0.1 address in my /etc/hosts file, as below.
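    The entries look like the following (assuming your hostname is mymac.local, a hypothetical name; check yours with the hostname command):

    127.0.0.1   localhost mymac.local
    ::1         localhost mymac.local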



    Once I had done that, my ESB 5.0.0 server startup took 13 seconds. So it went from 90 seconds down to 13 seconds... Amazing, huh... :D

    [1] https://thoeni.io/post/macos-sierra-java/

    Tharindu EdirisingheRetrieving User Resources from Facebook over the OAuth 2.0 Authorization Code Grant Type

    In a previous article, I wrote about how to retrieve user profile information from LinkedIn over the OAuth 2.0 Authorization Code grant type. This article explains the same flow for retrieving user resources from Facebook. (At the time of this writing, the latest Facebook API version is 2.8, and the steps given below are tested on that.)

    So let's get started! The following diagram shows all the steps involved in this flow.


    Step 1 - Registering the Client App in Facebook Developer Website

    The first step is to create an application in your developer account on Facebook. Visit https://developers.facebook.com/ and add a new application.


    Provide a display name for your application and your contact email and create the application.

    Once your app is created, associate “Facebook Login” with it.



    Under the Settings of the "Facebook Login", you need to provide the Redirection Endpoint URL. This URL should be within your client web application; Facebook will send all responses to this URL. However, for trying out this flow, you don't need to have a working URL available; you can simply provide a dummy URL for the moment. The same URL you add here should be sent along with the requests in the next steps.


    In the Dashboard, you can see the App ID and the App Secret for your app. In OAuth terminology, we call these the Client ID and Client Secret, or the Consumer Key and Consumer Secret.


    Now we have successfully registered our app in Facebook and configured it. Note down the App ID and App Secret generated for your app, and also the Redirection Endpoint URL you defined; we will use these three values in the next steps when making requests to Facebook for retrieving user resources.

    Step 2 - Obtaining the Authorization Code

    In order to obtain the authorization code from Facebook, we need to send an HTTP GET request to the Authorize Endpoint of Facebook, which is https://www.facebook.com/dialog/oauth. Along with the request, you need to send several parameters, which are described below.

    response_type: code
    client_id: the App ID value of your application
    redirect_uri: the Redirection Endpoint URL which you defined in the "Facebook Login" settings. This value should be URL encoded when sent with the request.
    scope: the scopes (permissions to resources) which your app needs to access. When you have multiple scopes, separate them with spaces; the resulting string should be URL encoded when sent with the request. Refer to https://developers.facebook.com/docs/facebook-login/permissions to learn more about Facebook scopes.

    These are the sample values I use. When trying this out, you need to use your own App ID and App Secret (and maybe the redirect_uri too). As the scopes (permissions for the app to access Facebook user resources), I define a few here, but you should pick the scopes you need depending on which of the user's resources your client app needs to access.

    response_type: code (URL encoded: code)
    client_id: 183994178774345 (URL encoded: 183994178774345)
    redirect_uri: http://localhost:8080/facebookapp/callback (URL encoded: http%3A%2F%2Flocalhost%3A8080%2Ffacebookapp%2Fcallback)
    scope: public_profile user_posts user_friends user_photos (URL encoded: public_profile%20user_posts%20user_friends%20user_photos)

    Based on the above values, I prepared the following URL; since this is an HTTP GET request, all parameters are added as query parameters in the URL. You need to put your own values in this URL. Then I can simply open this URL in the browser.

    Sample Request :
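    (Reconstructed from the parameter values listed above:)

    https://www.facebook.com/dialog/oauth?response_type=code&client_id=183994178774345&redirect_uri=http%3A%2F%2Flocalhost%3A8080%2Ffacebookapp%2Fcallback&scope=public_profile%20user_posts%20user_friends%20user_photos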


    If you are not logged into facebook in the browser, first it will ask you to login.


    Once you log in, it will show the following popup. We call this the "User Consent Page" in OAuth terminology. It shows which resources from your user account this external app will be able to access on your behalf.


    Once you continue, Facebook redirects the browser to the Redirection Endpoint URL you defined in the app settings, and along with the URL it sends the query parameter code, which is the authorization code. If you had a client website, you could extract this parameter value from the webapp itself.

    At the moment, I don't have a client web application but have defined a dummy URL as the Redirection Endpoint URL, so I will not see a web page in the browser. But I can simply extract the authorization code value received in the URL manually.


    Following is the URL I got in the browser.

    http://localhost:8080/facebookapp/callback?code=AQDfw1CLKYt-TuoGq1m8oChT8LHbWxz01zWgmkdxRRodgJua5TbEI_HMYHaL-64LzpL56KCfNz12Yt3WXlIeep4t0Mc9VCQ9-i7SPEIk7gPSmzy4m3fpNawmQCvtw5FEU6pM0ON8EMDv-6Vp1-ty907V4Cnu5sp__QTuJ2c9wz9Co1GIrOO3qEF2Vu9ruaKkMhZDSNAa0fgbd-5PLiivkN75nr7nsFCHlJEkadBfkIVddJTqd4AH7zc8KFXWta87KA3Kt3Taz7h0lTJff3wQuciWRqhvytOpE90snQPyNJkitpaQeX3VSLHeLd77QOKMNUGw2TnMr6B9d-Y6AZx1M-Of6MeQmeogsyhE0QzihAI6eQ#_=_

    So from the above URL, the authorization code value is the value of the code parameter (highlighted).

    Step 3 - Obtaining the Access Token

    Now that we have the authorization code, the next step is to request an OAuth access token from Facebook, which can be used to access user resources (as permitted by the scopes we requested). For that, the client web application has to send an HTTP POST request to the Token Endpoint of Facebook, sending the authorization code received in the previous step. The Token Endpoint of Facebook is https://graph.facebook.com/oauth/access_token.

    Since we don’t have a client web application running, we can manually do this and obtain the access token.

    We need to send the following parameters in the body of the HTTP POST request.

    grant_type: authorization_code
    client_id: the App ID value of your application
    redirect_uri: the Redirection Endpoint URL which you defined in the "Facebook Login" settings. This value should be URL encoded when sent with the request.
    code: the authorization code you received in the previous step.

    In addition to that, we need to send the credentials of the Facebook application (App ID and App Secret) in an HTTP header. Here, we need to combine the App ID and App Secret, separated by a colon (:), and encode the value in Base64.

    Authorization: Basic <Base64encode(AppID:AppSecret)>

    These are the sample values I use. You need to use your own parameter values when sending the request.

    grant_type: authorization_code (URL encoded: authorization_code)
    client_id: 183994178774345 (URL encoded: 183994178774345)
    redirect_uri: http://localhost:8080/facebookapp/callback (URL encoded: http%3A%2F%2Flocalhost%3A8080%2Ffacebookapp%2Fcallback)
    code: AQDfw1CLKYt-TuoGq1m8oChT8LHbWxz01zWgmkdxRRodgJua5TbEI_HMYHaL-64LzpL56KCfNz12Yt3WXlIeep4t0Mc9VCQ9-i7SPEIk7gPSmzy4m3fpNawmQCvtw5FEU6pM0ON8EMDv-6Vp1-ty907V4Cnu5sp__QTuJ2c9wz9Co1GIrOO3qEF2Vu9ruaKkMhZDSNAa0fgbd-5PLiivkN75nr7nsFCHlJEkadBfkIVddJTqd4AH7zc8KFXWta87KA3Kt3Taz7h0lTJff3wQuciWRqhvytOpE90snQPyNJkitpaQeX3VSLHeLd77QOKMNUGw2TnMr6B9d-Y6AZx1M-Of6MeQmeogsyhE0QzihAI6eQ#_=_ (the URL encoded value is identical)

    In the HTTP headers, I need to add the Authorization header with the app credentials. I can prepare the value like this:

    App ID = 183994178774345
    App Secret = dc321ebea29283cd4092b6b476ccadbd

    AppID:AppSecret = 183994178774345:dc321ebea29283cd4092b6b476ccadbd
    Base64(AppID:AppSecret) = MTgzOTk0MTc4Nzc0MzQ1OmRjMzIxZWJlYTI5MjgzY2Q0MDkyYjZiNDc2Y2NhZGJk

    So, I can add the header as following.

    Authorization: Basic MTgzOTk0MTc4Nzc0MzQ1OmRjMzIxZWJlYTI5MjgzY2Q0MDkyYjZiNDc2Y2NhZGJk

    When sending the request, I can use an HTTP client browser plugin like RESTClient.


    In the Response, we receive the Access Token.


    Alternatively, you can do the same in the terminal with the curl command. Replace the highlighted values with your own.
    curl -X POST --header "Authorization: Basic MTgzOTk0MTc4Nzc0MzQ1OmRjMzIxZWJlYTI5MjgzY2Q0MDkyYjZiNDc2Y2NhZGJk" --data "grant_type=authorization_code&redirect_uri=http%3A%2F%2Flocalhost%3A8080%2Ffacebookapp%2Fcallback&client_id=183994178774345&code=AQDfw1CLKYt-TuoGq1m8oChT8LHbWxz01zWgmkdxRRodgJua5TbEI_HMYHaL-64LzpL56KCfNz12Yt3WXlIeep4t0Mc9VCQ9-i7SPEIk7gPSmzy4m3fpNawmQCvtw5FEU6pM0ON8EMDv-6Vp1-ty907V4Cnu5sp__QTuJ2c9wz9Co1GIrOO3qEF2Vu9ruaKkMhZDSNAa0fgbd-5PLiivkN75nr7nsFCHlJEkadBfkIVddJTqd4AH7zc8KFXWta87KA3Kt3Taz7h0lTJff3wQuciWRqhvytOpE90snQPyNJkitpaQeX3VSLHeLd77QOKMNUGw2TnMr6B9d-Y6AZx1M-Of6MeQmeogsyhE0QzihAI6eQ#_=_"  https://graph.facebook.com/oauth/access_token


    In the response, the access token is received.

    Step 4 - Retrieving User Resources from Facebook, providing the Access Token

    You can refer to the Facebook Graph API Reference [3] to learn how to retrieve user resources. However, I am listing a few sample requests below for you to try out.

    Now that we have received the OAuth access token from Facebook, we need to include it as an HTTP header in every request we make to the Facebook API.

    Authorization: Bearer <access token value>

    Retrieving user’s timeline posts

    Send an HTTP GET request to https://graph.facebook.com/v2.8/me/feed?limit=25 and in response you will get the user's timeline posts as JSON. You can limit the number of results using the limit query parameter.
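    For example, with curl:

    curl -H "Authorization: Bearer <access token value>" "https://graph.facebook.com/v2.8/me/feed?limit=25"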


    Get User’s Facebook ID

    For invoking many operations, like retrieving the user's photos, albums etc., we need to know the user's Facebook ID. For that, we can send an HTTP GET request to the URL https://graph.facebook.com/v2.8/me?fields=id, which returns the ID in a JSON response.
    {
      "id": "1021167613XXXXXXX"
    }

    Get User’s Friend List

    For retrieving the friend list of the user, you need to send an HTTP GET request as follows.

    https://graph.facebook.com/v2.8/<FB User ID>/taggable_friends

    For this, you need to know the user's Facebook ID, which we obtained in the previous step.


    For more information, you can refer the documentation [4].

    Retrieving User’s Photo Album Details

    For this, you need to send an HTTP GET request to https://graph.facebook.com/v2.8/me/albums, which returns a JSON response with the photo album details.


    Retrieving Photos of an Album

    From the previous step, we know the album IDs, and we can use them to get the details of the photos in a particular album. For that, you need to send an HTTP GET request to the following URL. In the response, you receive the IDs of all photos in the album as JSON.
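    Following the Graph API convention used elsewhere in this post, the URL takes the form:

    https://graph.facebook.com/v2.8/<album ID>/photos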



    Retrieving a Photo by ID

    Now that we know the ID of a photo from the previous step, we can get its details by sending an HTTP GET request as follows.

    https://graph.facebook.com/<photo ID>/picture

    In the response, we receive the image.

    These are only a few operations, but you can do many more things by referring to the Facebook API reference.

    References


    Tharindu Edirisinghe (a.k.a thariyarox)
    Independent Security Researcher

    Malintha AdikariRecommender systems: When you have no idea

    Yesterday, I went to dinner with a group of my friends. I was an invitee, so I had no choice but to accept their invitation and go to the restaurant they had already reserved. I was completely new to that restaurant and had no idea what the best options were for me. I went through the menu and found a few familiar dishes I had eaten before in other places. We had 10 people sitting around the table, and 7 of them had already decided what to get for dinner, as they were members of the restaurant. 3 of us were completely lost :). I kept switching between dishes as I had no idea. What did I do?

     
     

    I decided to ask my friends for help in deciding on the dishes. I selected 3 friends who have tastes similar to mine; we have known each other since childhood. I asked those 3 folks for the best options. 2 of them recommended fried noodles with chicken, and the other one suggested pasta instead of noodles. Thanks to my friends, I could reduce the list to 2 dishes. Then, I decided to make the final decision considering my past experience. I like noodles more than pasta, so I took the fried noodles with chicken. It was a great choice, and their recommendation worked for me. I would like to recommend that dish to you if you go to that restaurant one day. Just let me know :)
    Sometimes we face these kinds of situations where we need suggestions, advice or recommendations from experts on specific subjects. It may be a movie to watch, a book to read, a dish from a restaurant, whom to vote for, a doctor to visit, a song to listen to, a hotel to stay at, and much more. By nature, we would like to ask a friend or a similar person who has expertise in the topic or previous experience. We also consider our own experience and preferences in such situations. In real life, we sometimes have to rely on others' recommendations, on our previous experiences/choices, or on both. This is the foundation of Recommender Systems.
    Recommender systems are tools and technologies which recommend items/concepts/services/actions or solutions to users. Today, recommender systems are heavily and widely used in social networks, e-commerce websites, and recommendation software which recommends movies, books and songs. Let's look at the technical concepts and methods under the hood.
    We can use several different methods to generate recommendations.

    We can recommend the most popular items to users. If we look at our restaurant example, the restaurant can recommend its most popular dishes to new (or existing) customers. It can collect data from purchasing history and identify a list of popular items which can be recommended later. This method is called Popularity Based Recommendations. Do you see any problem with this method? While it is very easy to implement, we are missing personalization in the generated recommendations. Personalization is the process of tailoring items/solutions to individual users' characteristics or preferences. Systems based on popularity based recommendations provide a fixed set of items for any user in a given time interval.

    Can't we think of the problem as a classification problem? Yes, we can. We can consider an item and predict whether the user is going to like it or not. Here we look at the attributes of the item and take decisions based on those attributes. We can consider user attributes too. Framing the recommendation problem as a classification problem is called the Classification Based Recommendations method. Normally, recommendation problems deal with a very large number of users and items, so we face a problem here: if we go forward with the classification approach, we have to work with a very large number of features (from the user's perspective, items are features, and vice versa).

    Can't we find a solution to the problem of generating recommendations based on our real-life experiences? Can't we use our neighbors' experiences or our own experiences to solve this? Yes, we can. The Neighborhood Based Recommendations method addresses the problem using our neighborhood, and the Content Based Recommendations method uses the user's past experiences to generate recommendations. Let's discuss these two methods in detail.



    There are two commonly used methods in recommender systems:
    1. Collaborative filtering
    2. Content-based filtering

     

    Collaborative Filtering(CF)

    Collaborative Filtering (CF) is the most popular and widely adopted method of the two. CF relies on the past ratings of the active user and of other users. It generates recommendations based on the concept that 'similar users prefer similar items'. So, based on the similarity of users, CF generates recommendations for the active user. A minimal code sketch is given after the next paragraph.
    If we peep into our restaurant experience through the Collaborative Filtering window: first, I selected people who have tastes similar to mine; then, they suggested a few dishes according to their ratings, and I accepted their choices considering the similarity between us.
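    Here is a minimal sketch of user-based collaborative filtering in Python (all names, dishes and ratings are made up; it scores items the active user has not rated by the similarity-weighted ratings of the other users):

import math

# Hypothetical ratings: user -> {item: rating}
ratings = {
    "me":    {"noodles": 5, "pasta": 3, "rice": 4},
    "amal":  {"noodles": 5, "pasta": 2, "rice": 4, "soup": 5},
    "kamal": {"noodles": 1, "pasta": 5, "soup": 2},
}

def cosine(u, v):
    # Cosine similarity over the items both users have rated.
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    nu = math.sqrt(sum(u[i] ** 2 for i in common))
    nv = math.sqrt(sum(v[i] ** 2 for i in common))
    return dot / (nu * nv)

def recommend(active, k=1):
    # Score unseen items by similarity-weighted ratings of the other users.
    sims = {u: cosine(ratings[active], r) for u, r in ratings.items() if u != active}
    scores = {}
    for u, sim in sims.items():
        for item, rating in ratings[u].items():
            if item not in ratings[active]:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("me"))  # -> ['soup']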


    Content-based filtering

    CBF generates recommendations based on the user's past behavior or preferences. This method retrieves the list of items the user has used/rated previously and tries to find new items which have features/attributes similar to those past items. Here, we walk through the user profile, considering the attributes of the user, to generate recommendations.
    This can be related to my past experience of having dinner at a new restaurant: at the moment of deciding on dishes, I considered my preferences and past experience, looking for dishes similar to my favorite choices.



    *Hybrid method

    Another blooming research area is using a hybrid method combining the collaborative filtering and content-based filtering methods. The goal is to mitigate the weaknesses of the individual methods while taking advantage of the strengths of both.
    When I made my final decision on the dish, I used the concepts of the hybrid method: I got suggestions from my friends and then used my preferences/past experience to select one.

     
     
    From what we have discussed so far, it is clear that the science of Recommender Systems was developed by imitating the human rational decision-making process.
    Let me wind up our recommendation story with a true story about the books Touching the Void and Into Thin Air.
    In 1988, a British mountain climber, Joe Simpson, wrote a book called Touching the Void, a harrowing account of near-death in the Peruvian Andes. It got good reviews but was only a modest success, and it was soon forgotten. Then, a decade later, a strange thing happened: Jon Krakauer wrote Into Thin Air, another book about a mountain-climbing tragedy, which became a publishing sensation. Suddenly, Touching the Void started to sell again. Amazon's recommendation system noticed a few people who bought both books and started recommending Touching the Void to people who bought, or were considering, Into Thin Air. Had there been no online bookseller, Touching the Void might never have been seen by potential buyers; but in the online world, Touching the Void eventually became very popular in its own right, in fact more so than Into Thin Air.

    Let's meet again.

    Ayesha DissanayakaWSO2GREG-5.3.0 - Enable visibility of assets in Store only for assets in particular lifecycle state

    There can be use cases where an organization needs to make artifacts visible in the Store only when they reach a particular state of their lifecycle. This requirement can be catered for via the extension model [1] of WSO2 GREG.

    As an example, let's take soapservice, and assume that we need to make soapservices visible in the Store only when they reach the "Published" state in their lifecycle.
    Follow the steps below for that.

    Open [Home]/repository/deployment/server/jaggeryapps/store/extensions/assets/soapservice/asset.js file.

    Inside asset.manager, look for the 'search' function, which overrides the default Store behavior [2].

    Enable the line below inside the 'search' function:

    query = buildPublishedQuery(query);
    // originally commented out, in order to let assets in any state be visible in the Store

    Let's decide which state should be allowed in the Store. For that, add the meta section below in the asset.configure section of the same file [3].

    lifecycle: {
        publishedStates: ['Production']
    }

    Now the asset.configure section should look something like below.
    asset.configure = function() {
        return {
            meta: {
                lifecycle: {
                    publishedStates: ['Production']
                },
                ui: {
                    icon: 'fw fw-soap',
                    iconColor: 'orange'
                },
                isDependencyShown: true,
                isDiffViewShown: false
            }
        };
    };

    Restart the server. Now, only soapservices in a published state (here, 'Production') will be visible in the Store.

    If you need the same behavior for other asset types as well, follow the same steps for each asset type's extension by editing the [Home]/repository/deployment/server/jaggeryapps/store/extensions/assets/[ASSET-TYPE]/asset.js file.

    [2] https://github.com/wso2/product-greg/blob/v5.3.0/modules/es-extensions/store/asset/soapservice/asset.js#L119
    [3] https://github.com/wso2/product-greg/blob/v5.3.0/modules/es-extensions/store/asset/soapservice/asset.js#L147

    Nipun SuwandaratnaDevice Grouping - WSO2 IoT Server Mobile Device Management (MDM) Features

    The WSO2 IoT server is an extensible version of the product previously known as the WSO2 Enterprise Mobility Manager. The IoT server inherits all Mobile Device Management (MDM) features as well as Mobile Application Management (MAM) features from the WSO2 EMM and supports Android, iOS and Windows mobile platforms. The significant additions that come with the IoT server are out-of-the-box support for well-known development boards such as Arduino UNO and Raspberry Pi and the ability to be extended to support any type of device through device agent implementations.


    In this post we'll take a look at the Device Grouping functionality in IoT server 3.0.0. The WSO2 EMM product already provided a means to group users based on roles (this is because WSO2 products use role-based authentication). For example, all managers could be assigned the Manager role. It was then possible to apply policies and provision applications (enterprise app installation) based on user roles. The IoT server goes one step further by providing Device Grouping. With device grouping you can group a set of devices regardless of the users or user roles of the device owners. You can then apply policies to a specific device group.

    Log in to the device management console and go to Group Management.
    You will see a list of existing groups. By default, the system would have created a group for all BYOD devices.

    Click on Add Group to create a new group


     Enter a Group Name and Description and click Add


    Now the group has been created. You can add devices by either going to the device management console or by clicking Assign from My Devices in the group summary (this will also direct you to the device management console).



    Click on the Select button, then click on the devices you want to add to select them, and then click Add To Group. You will be prompted with a drop-down to select the group to add the devices to.



    Once you've added the devices they will show in the Device Group summary.

    The device group summary also has options to search devices based on device name, owner, active status, platform and ownership (BYOD or COPE). Furthermore, the Advance Search option allows you to search by device location and by advanced search parameters such as device model, vendor, OS version, internal memory, SSID, CPU usage etc., and allows AND & OR operators in the search query.



    Chathura DilanArtificial Intelligence Terms

    Deterministic vs Stochastic

    Deterministic

    The same set of initial values produces the same output. For example, in a chess game, if we take the movement of a pawn, we know exactly how it can move on the next move. So the movement can be determined.

    Stochastic

    The same set of initial values can produce different outputs, because there is some inherent randomness. For example, when rolling a die, we cannot determine in advance what the result will be.

    Discrete vs Continuous

    Discrete

    Discrete is something you would represent with a whole number, like 0, 1, 2, or with a category, like male or female.

    Continuous

    Continuous is not restricted to defined, separate values. Temperature, for example, can vary continuously: 26, 26.1, 27.004, and so on.

     

    Evanthika AmarasiriWhy doesn't my WSO2 server restart as expected when the Windows instance restarts when WSO2 server is installed as a service

    When the Startup Type of a Carbon server that is configured as a Windows service is set to Manual, and the instance that the WSO2 server is running on restarts, the WSO2 server will not be restarted along with it.

    For the WSO2 server to be restarted when the Windows instance restarts, you need to set the Startup Type to Automatic instead.






    To change this setting, go to Control Panel\All Control Panel Items\Administrative Tools\Services and change the type from Manual to Automatic.

    Supun SethungaMocking Services with Ballerina in a Minute

    Suppose you are writing an integration scenario (in Ballerina or any technology), and you need to test the end-to-end scenario you just wrote. A way to achieve this is to mock a back-end service and test your integration flow, without having to hassle with the actual back-end servers. With Ballerina, mocking a service is easier than ever: all it takes is one minute to mock your service and get it up and running. Let's look at how we can achieve this.


    Prerequisites:


    Mock the Service:


    Let's consider a scenario where we need to test sending a payload to a back-end server and receiving a different payload from it. Let's also assume the following are the two payloads we are sending to the back-end and receiving from it, respectively.


    Sending Payload:

    <Order>
        <orderId>order100</orderId>
        <items>
            <item>
                <itemId>101</itemId>
                <price>2</price>
                <quantity>1</quantity>
            </item>
            <item>
                <itemId>106</itemId>
                <price>7</price>
                <quantity>2</quantity>
            </item>
        </items>
    </Order>



    Receiving Payload:

    {
          "orderId":"order100",
          "status":"accepted"
    }

    Let's create the mocked service to accept the above "sending payload" and send back the "receiving payload" as the response.

    import ballerina.lang.xmls;
    import ballerina.lang.messages;
    import ballerina.lang.system;
    import ballerina.net.http;

    @http:BasePath{value:"/pizzaService"}
    service PizzaService {

        @http:POST{}
        @http:BasePath{value:"/placeOrder"}
        resource placeOrder(message m) {

            // Get the order Id
            xml request = messages:getXmlPayload(m);
            string orderId = xmls:getString(request, "/Order/orderId/text()");

            // generate the json payload
        json responseJson = `{"orderId":${orderId}, "status":"accepted"}`;

            // generate the response  
            message response = {};
            messages:setJsonPayload(response, responseJson);
            reply response;
        }
    }

    Don't we need to set the content type?

    No, we don't need to. Ballerina will automatically set the content type to application/json, when we set a json as the payload!
    Let's save this service in a pizzaService.bal file.


    Run the Mocked Service


    To run our service, execute the following:
    <ballerina_home>/bin/ballerina run service <path/to/bal/file>pizzaService.bal

    Now the mocked service is up, and you can test your integration scenario using this service as the back-end.

    Supun SethungaAnnotations in Ballerina

    What are Annotations?


    An annotation is a Ballerina code snippet that can be attached to some Ballerina code, and will hold some metadata related to that attached code. Annotations are not executable, but can be used to alter the behavior of some executable code.

    Sample:
    @http:POST{}
    @http:BasePath{value:"/pizzaService"}
    service MyService {
       ...
       ...
    }


    Where Can I attach an Annotation?

    Annotations can be attached to any one of the following:

    • Services
    • Resources
    • Functions
    • Connectors
    • Actions (of Connectors)
    • TypeMappers
    • Structs
    • Constants
    • Annotations

    However, this differs for different types of annotations. For example, http:BasePath{} can only be attached to a Service, whereas http:POST{} can only be attached to a Resource. This is specified in the definition of each annotation.


    What are the inbuilt ones?

    Currently (in Ballerina v0.85), there are inbuilt annotations in four packages, for HTTP, JMS, WebSockets, and documentation. You can find the definitions of these annotations in [1], [2], [3], and [4] respectively.


    Can I define my own Annotation?

    Absolutely! You can define your own annotation and apply it anywhere you want. A definition of an annotation takes the following form:
    package foo.bar;
    annotation MyAnnotation attach service, resource, function {
        string status;
        int code;
    }

    Here the leading 'annotation' is a keyword indicating that this is an annotation definition, followed by the name of your annotation. Then, after the keyword 'attach', you can specify where this annotation can be applied. In the above sample, 'MyAnnotation' is allowed to be attached to any service, resource, or function.

    You can apply the above annotation in a function like below:
    @bar:MyAnnotation{status:"working", code:1}
    function myFunction(string str) {
        ...
    }

    Note that, if the function is in the same package as the annotation, then you need to omit the preceding package name in the annotation attachment.
    e.g.:
    @MyAnnotation{status:"working", code:1}
    function myFunction(string str) {
        ...
    }

    What can be the types of attributes?

    The type of an annotation attribute can be one of the following:
    • A value type (string, int, float, boolean).
    • An Annotation.
    • Array of any of above.


    Default Values for Annotation Attributes?

    Indeed. Attributes of an annotation can have default values, but only attributes of value types (string, int, float, boolean).
    e.g:

    annotation MyAnnotation attach service, resource, function {
        string status = "failed";
        int code = 0;
    }

    This means that if you have an attribute of an array type (e.g. int[] a) or an attribute of another annotation type (e.g. MyAnnotation a;), those attributes cannot have default values.


    Annotate an Annotation by itself?

    You can also annotate an annotation using the same annotation itself. For example, say we write an annotation called 'Description' to add a description to any Ballerina construct we write. And suppose we need to add a description to that 'Description' annotation itself. We can do it as follows:

    @Description{value:"This is an annotation that can be used to describe a construct"}
    annotation Description attach service, annotation, function {
        string value;
    }

    Will the above work? Absolutely!



    References

    [1] https://github.com/ballerinalang/ballerina/blob/master/modules/ballerina-native/src/main/ballerina/ballerina/net/http/annotation.bal
    [2] https://github.com/ballerinalang/ballerina/blob/master/modules/ballerina-native/src/main/ballerina/ballerina/net/jms/annotations.bal
    [3] https://github.com/ballerinalang/ballerina/blob/master/modules/ballerina-native/src/main/ballerina/ballerina/net/ws/annotation.bal
    [4] https://github.com/ballerinalang/ballerina/blob/master/modules/ballerina-native/src/main/ballerina/ballerina/doc/annotation.bal

    Supun SethungaGetting Started with Ballerina

    Download:


    You can download the Ballerina runtime and tooling from http://ballerinalang.org/downloads/. The Ballerina runtime contains the runtime environment (bre) needed to run Ballerina main programs and services. The Ballerina tooling distribution contains the runtime environment (bre), the Composer (visual editor), Docerina (for API document generation) and Testerina (the test framework for Ballerina).

    However, for our first Ballerina program, the runtime environment (bre) alone is sufficient.


    First Main Program

    To get things started, let's write a very simple Ballerina main program that prints some text to the console. It will look as follows:

    import ballerina.lang.system;

    function main(string[] args) {
        system:println("My first ballerina main program!");
    }

    Now let's save this program as myFirstMainProg.bal.


    Run the main Program:

    To run our main program, execute the following

    <ballerina_home>/bin/ballerina run main <path/to/bal/file>myFirstMainProg.bal

    In the console, you would see the below output.

    My first ballerina main program!


    First Ballerina Service

    Now that we have written and executed a main program, it's time to write our first service with Ballerina. Let's write a simple service that prints similar text to our main program. I will be writing my service to be invoked with an HTTP GET request.

    import ballerina.lang.system;
    import ballerina.net.http;

    @http:BasePath{value:"/myService"}
    service echoService {

        @http:GET{}
        @http:BasePath{value:"/echo"}
        resource echoResource (message m) {
            system:println("My first ballerina service!");
            reply m;
        }
    }

    Lets save this service as myFirstService.bal


    Run the Service:

    To run our service, execute the following:

    <ballerina_home>/bin/ballerina run service <path/to/bal/file>myFirstService.bal

    In the console, you would see the below output.

    ballerina: deploying service(s) in 'myFirstService.bal'
    ballerina: started server connector http-9090

    Unlike the main program, the server will not exit, and will keep listening on port 9090. Now, to invoke our service, let's run the following curl command, which sends a GET request to the service. Note that, in our service, "myService" is the base path, followed by the resource path "echo".

    curl http://localhost:9090/myService/echo

    Once the request is sent to our service, following will be printed in the console of the service.

    My first ballerina service!

    Thus, as you can see, whether it is a main program or an HTTP service, it's pretty easy to write and execute with Ballerina!

    Udara LiyanageIntegrating New Relic with WSO2 Carbon products

    New Relic is a popular performance monitoring system which provides real-time analytics such as performance, memory usage, CPU usage, threads, web page response time etc. You can even profile applications remotely using the New Relic dashboard.

    This article explains how to integrate New Relic performance monitoring Java agent with WSO2 Carbon products.

    Tested platform: Java 8, WSO2 ESB 5.0.0, Mac OS Sierra 10.12.3

    1) Sign up on the New Relic website.
    You will get a license key once you subscribe.

    2) Download and extract the New Relic agent zip file as below. It contains:
    i) the New Relic agent jar file (newrelic.jar)
    ii) the newrelic.yml configuration file

    wget -N https://download.newrelic.com/newrelic/java-agent/newrelic-agent/current/newrelic-java.zip
    unzip -q newrelic-java.zip
    

    3) Copy newrelic.jar and newrelic.yml into a new directory inside $CARBON_HOME

    mkdir $CARBON_HOME/newrelicAgent
    cp newrelic.jar $CARBON_HOME/newrelicAgent
    cp newrelic.yml $CARBON_HOME/newrelicAgent
    

    4) Set the New Relic license key in newrelic.yml.
    Locate the line license_key: '<%= license_key %>' and replace the placeholder with the license key you received at Step 1.

    license_key: 'e5620kj287aee4ou7613c2ku7d56k12387bd5jyb'

    5) Add the Java agent to $CARBON_HOME/bin/wso2server.sh as below

    -javaagent:$CARBON_HOME/newrelicAgent/newrelic.jar \

    Sample section looks like this

    while [ "$status" = "$START_EXIT_STATUS" ]
    do
        $JAVACMD \
        -Xbootclasspath/a:"$CARBON_XBOOTCLASSPATH" \
        $JVM_MEM_OPTS \
        -XX:+HeapDumpOnOutOfMemoryError \
        -XX:HeapDumpPath="$CARBON_HOME/repository/logs/heap-dump.hprof" \
        $JAVA_OPTS \
        -javaagent:$CARBON_HOME/newrelicAgent/newrelic.jar \

    6) sh $ESB_HOME/bin/wso2server.sh

    At startup you will see the below logs in the carbon log file

    Mar 26, 2017 13:08:58 +0800 [12884 1] com.newrelic INFO: New Relic Agent: Loading configuration file "/Users/udara/projects/testings/relic/wso2esb-5.0.0-BETA2/newrelicAgent/./newrelic.yml"
    Mar 26, 2017 13:08:59 +0800 [12884 1] com.newrelic INFO: New Relic Agent: Writing to log file: /Users/udara/projects/testings/relic/wso2esb-5.0.0-BETA2/newrelic/logs/newrelic_agent.log

    7) Do some operations such as accessing the management console, invoking APIs etc. Then log in to the New Relic dashboard, where you will find statistics about your Carbon product.


    Beware of the error below

    When I tried the same with WSO2 API Manager 2.1.0, I encountered the below error at server startup. Post [2] suggested that it is due to an issue with the temp directory. The root cause is that the WSO2 startup script deletes TMP_DIR at startup, which leaves New Relic unable to write to the temp directory. The fix is to delete the content of TMP_DIR instead of deleting the whole directory. So you will have to change $CARBON_HOME/bin/wso2server.sh as below: comment out the TMP_DIR folder deletion and modify it to remove only the folder's content.

    TMP_DIR="$CARBON_HOME"/tmp
    #if [ -d "$TMP_DIR" ]; then
    #rm -rf "$TMP_DIR"
    #fi

    if [ -d "$TMP_DIR" ]; then
    rm -rf "$TMP_DIR"/*
    fi
    Error bootstrapping New Relic agent: java.lang.RuntimeException: java.io.IOException: No such file or directory
    java.lang.RuntimeException: java.io.IOException: No such file or directory
        at com.newrelic.bootstrap.BootstrapLoader.load(BootstrapLoader.java:122)
        at com.newrelic.bootstrap.BootstrapAgent.startAgent(BootstrapAgent.java:110)
        at com.newrelic.bootstrap.BootstrapAgent.premain(BootstrapAgent.java:79)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at sun.instrument.InstrumentationImpl.loadClassAndStartAgent(InstrumentationImpl.java:386)
        at sun.instrument.InstrumentationImpl.loadClassAndCallPremain(InstrumentationImpl.java:401)
    Caused by: java.io.IOException: No such file or directory

    References

    [1] http://lasanthatechlog.blogspot.com/2015/06/integrating-wso2-products-with-new-relic.html

    [2] https://discuss.newrelic.com/t/error-bootstrapping-new-relic-agent-in-hadoop-mapreduce-job/23763


    Charini NanayakkaraJMX monitoring with remote WSO2 Server

    Add the following line to the wso2server.sh file (under $JAVACMD) of the WSO2 product:

    -Djava.rmi.server.hostname=<IP of wso2 server> \

    Now you can perform remote JMX monitoring on the WSO2 product.

    Lakshman UdayakanthaJDBC drivers and connection strings

    Recently I was fixing a bug in gadget creation in WSO2 DAS 3.1.0, in which gadget creation throws errors on some database types. So I had to check gadget creation against the major database types, and I came up with the following database drivers and connection strings, plus a little more information about their JDBC drivers.

    MySQL
    Driver class : com.mysql.jdbc.Driver
    Connection string : jdbc:mysql://localhost:3306/databaseName

    You can download JDBC driver from their official site.

    MSSQL
    Driver class : com.microsoft.sqlserver.jdbc.SQLServerDriver
    Connection string :jdbc:sqlserver://hostName:1433;database=databaseName

    You can download the MSSQL driver from the Microsoft site. Depending on the JRE, it comes in several flavours, as below.

    • Sqljdbc41.jar requires a JRE of 7 and supports the JDBC 4.1 API
    • Sqljdbc42.jar requires a JRE of 8 and supports the JDBC 4.2 API

    Apart from official MSSQL driver there are other supported drivers like jtds as well. You can find more information about them by referring this stackoverflow question.

    PostgreSQL
    Driver class : org.postgresql.Driver
    Connection string : jdbc:postgresql://localhost:5432/databaseName

    You can download the PostgreSQL driver from their official site; it also comes in different flavours depending on the Java version. It is very easy to work with PostgreSQL if you are using Postgres.app. Mac users, note that you need to uninstall all previous PostgreSQL versions for Postgres.app to work.

    DB2
    Driver class : com.ibm.db2.jcc.DB2Driver
    Connection string : jdbc:db2://myhost:5021/mydb

    You can download db2 JDBC driver from their official site.

    Oracle
    Driver class : oracle.jdbc.OracleDriver
    Connection string : jdbc:oracle:thin:@hostName:1521/wso2qa11g

    You can download Oracle JDBC driver from their official site.

    Maneesha WijesekaraSetup WSO2 API Manager Analytics with WSO2 API Manager 2.0 using RDBMS

    In this blog post I'll explain on how to configure RDBMS to publish APIM analytics using APIM analytics 2.0.0.

    The purpose of the RDBMS is to store the summarized data after the analysis process. API Manager fetches this data and displays it on the APIM side using dashboards.

    Since APIM 2.0.0, RDBMS is the recommended way to publish statistics for API Manager. Hence, in this blog post I will explain the step-by-step configuration with RDBMS in order to view statistics in the Publisher and Store.

    Steps to configure,

    1. First download the WSO2 API Manager Analytics 2.0.0 release pack and unzip it.

    2. Go to carbon.xml ([APIM_ANALYTICS_HOME]/repository/conf/carbon.xml) and set the port offset to 1 (the default offset for APIM Analytics).

    <Ports>
    <!-- Ports offset. This entry will set the value of the ports defined below to
    the define value + Offset.
    e.g. Offset=2 and HTTPS port=9443 will set the effective HTTPS port to 9445
    -->
    <Offset>1</Offset>

    Note - This is only necessary if both the API Manager 2.0.0 and APIM Analytics servers run on the same machine.

    3. Now add the data source for the statistics DB in stats-datasources.xml ([APIM_ANALYTICS_HOME]/repository/conf/datasources/stats-datasources.xml) according to the preferred RDBMS. You can use any RDBMS such as H2, MySQL, Oracle, Postgres etc.; I choose MySQL for this blog post.


    <datasource>
        <name>WSO2AM_STATS_DB</name>
        <description>The datasource used for setting statistics to API Manager</description>
        <jndiConfig>
            <name>jdbc/WSO2AM_STATS_DB</name>
        </jndiConfig>
        <definition type="RDBMS">
            <configuration>
                <url>jdbc:mysql://localhost:3306/statdb?autoReconnect=true&amp;relaxAutoCommit=true</url>
                <username>maneesha</username>
                <password>password</password>
                <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                <maxActive>50</maxActive>
                <maxWait>60000</maxWait>
                <testOnBorrow>true</testOnBorrow>
                <validationQuery>SELECT 1</validationQuery>
                <validationInterval>30000</validationInterval>
            </configuration>
        </definition>
    </datasource>

    Give the correct hostname and database name in <url> (in this case, localhost and statdb respectively), the username and password for the database, and the driver class name.

    4. The WSO2 Analytics server automatically creates the table structure for the statistics database at server startup when started with '-Dsetup'.

    5. Copy the related database driver into <APIM_ANALYTICS_HOME>/repository/components/lib directory.

    If you use mysql - Download
    If you use oracle 12c - Download
    If you use Mssql - Download

    6. Start the Analytics server

    7. Download the WSO2 API Manager 2.0.0 pack and unzip it ( Download )

    8. Open api-manager.xml ([APIM_HOME]/repository/conf/api-manager.xml) and enable Analytics. The configuration should look like this (by default the value is set to false):

    <Analytics>
    <!-- Enable Analytics for API Manager -->
    <Enabled>true</Enabled>

    9. Then configure the server URL of the Analytics server used to collect statistics. The defined format is 'protocol://hostname:port/'. Admin credentials to log in to the remote DAS server also have to be configured, as below.

    <DASServerURL>{tcp://localhost:7612}</DASServerURL>
    <DASUsername>admin</DASUsername>
    <DASPassword>admin</DASPassword>

    Assuming the Analytics server is on the same machine as API Manager 2.0, the hostname I used here is 'localhost'. Change it to the hostname of the remote location if the Analytics server runs on a different instance.

    By default, the server port is adjusted with offset '1'. If the Analytics server has a different port offset ( check {APIM_ANALYTICS_HOME}/repository/conf/carbon.xml for the offset ), change the port in <DASServerURL> accordingly. As an example if the Analytics server has the port offset of 3, <DASServerURL> should be {tcp://localhost:7614}.

    10. For your information, API Manager 2.0 enables the RDBMS configuration for statistics by default. To enable publishing using RDBMS, <StatsProviderImpl> should be uncommented (by default it is not commented out, so this step can be omitted):

    <!-- For APIM implemented Statistic client for DAS REST API -->
    <!--StatsProviderImpl>org.wso2.carbon.apimgt.usage.client.impl.APIUsageStatisticsRestClientImpl</StatsProviderImpl-->
    <!-- For APIM implemented Statistic client for RDBMS -->
    <StatsProviderImpl>org.wso2.carbon.apimgt.usage.client.impl.APIUsageStatisticsRdbmsClientImpl</StatsProviderImpl>

    11. The next step is to configure the statistics database on the API Manager side. Add the same statistics DB data source that was configured on the Analytics side to master-datasources.xml ([APIM_HOME]/repository/conf/datasources/master-datasources.xml):


    <datasource>
        <name>WSO2AM_STATS_DB</name>
        <description>The datasource used for setting statistics to API Manager</description>
        <jndiConfig>
            <name>jdbc/WSO2AM_STATS_DB</name>
        </jndiConfig>
        <definition type="RDBMS">
            <configuration>
                <url>jdbc:mysql://localhost:3306/statdb?autoReconnect=true&amp;relaxAutoCommit=true</url>
                <username>maneesha</username>
                <password>password</password>
                <driverClassName>com.mysql.jdbc.Driver</driverClassName>
                <maxActive>50</maxActive>
                <maxWait>60000</maxWait>
                <testOnBorrow>true</testOnBorrow>
                <validationQuery>SELECT 1</validationQuery>
                <validationInterval>30000</validationInterval>
            </configuration>
        </definition>
    </datasource>

    12. Copy the related database driver into <APIM_HOME>/repository/components/lib directory as well.

    13. Start the API Manager server.

    Go to the statistics in the Publisher; the screen should look like this, with the message 'Data Publishing Enabled. Generate some traffic to see statistics.'


    To view statistics, you have to create at least one API and invoke it in order to get some traffic to display in graphs.


    Amalka SubasingheHow to run a Jenkins in WSO2 Integration Cloud

    This blog post guides you on how to run Jenkins in WSO2 Integration Cloud and configure it to build a GitHub project. Currently, WSO2 Integration Cloud does not support Jenkins as an app type, but we can use the Custom Docker app type with a Jenkins Docker image.


    First we need to find a proper Jenkins Docker image which we can use for this, or we have to build one from scratch.

    If you go to https://hub.docker.com/_/jenkins/ you can find the official Jenkins images on Docker Hub, but we can't use these images as is, for several reasons. So I'm going to create a fork of https://github.com/jenkinsci/docker and make some changes to the Dockerfile.

    I use the https://github.com/amalkasubasinghe/docker/tree/alpine branch here.

    A. You will see it has a VOLUME mount. At the moment, WSO2 Integration Cloud does not allow you to upload an image which has a VOLUME mount, so we need to comment it out:

    #VOLUME /var/jenkins_home

    B. My plan is to build a GitHub project, so I need to enable the GitHub integration plugins. I add the following line at the end of the file:

    RUN install-plugins.sh docker-slaves github-branch-source

    C. I want to build projects using Maven, so I add the following segment to the Dockerfile to install and configure Maven.

    ARG MAVEN_VERSION=3.3.9

    RUN mkdir -p /usr/share/maven /usr/share/maven/ref/ \
      && curl -fsSL http://apache.osuosl.org/maven/maven-3/$MAVEN_VERSION/binaries/apache-maven-$MAVEN_VERSION-bin.tar.gz \
        | tar -xzC /usr/share/maven --strip-components=1 \
      && ln -s /usr/share/maven/bin/mvn /usr/bin/mvn

    ENV MAVEN_HOME /usr/share/maven
    COPY settings-docker.xml /usr/share/maven/ref/
    RUN chown -R ${user} "$MAVEN_HOME"


    D. I don't want to expose slave agent port 50000 to the outside. Just comment it out.

    #EXPOSE 50000

    E. I want to configure the Jenkins job to build the https://github.com/amalkasubasinghe/HelloWebApp/ project periodically, so I need to copy the required configuration into the Jenkins image and set the correct permissions.

    Note: You can first run a Jenkins on your local machine, configure the job and get the config.xml file.
    I configured the Jenkins job to poll the GitHub project every 2 minutes and build. (You can configure the interval as you wish.)

    Here's the Jenkins configurations https://github.com/amalkasubasinghe/docker/blob/jenkins-alpine-hellowebapp/HelloWebApp/config.xml

    <?xml version='1.0' encoding='UTF-8'?>
    <project>
      <description></description>
      <keepDependencies>false</keepDependencies>
      <properties>
        <com.coravy.hudson.plugins.github.GithubProjectProperty plugin="github@1.26.1">
          <projectUrl>https://github.com/amalkasubasinghe/HelloWebApp/</projectUrl>
          <displayName></displayName>
        </com.coravy.hudson.plugins.github.GithubProjectProperty>
      </properties>
      <scm class="hudson.plugins.git.GitSCM" plugin="git@3.1.0">
        <configVersion>2</configVersion>
        <userRemoteConfigs>
          <hudson.plugins.git.UserRemoteConfig>
            <url>https://github.com/amalkasubasinghe/HelloWebApp</url>
          </hudson.plugins.git.UserRemoteConfig>
        </userRemoteConfigs>
        <branches>
          <hudson.plugins.git.BranchSpec>
            <name>*/master</name>
          </hudson.plugins.git.BranchSpec>
        </branches>
        <doGenerateSubmoduleConfigurations>false</doGenerateSubmoduleConfigurations>
        <submoduleCfg class="list"/>
        <extensions/>
      </scm>
      <canRoam>true</canRoam>
      <disabled>false</disabled>
      <blockBuildWhenDownstreamBuilding>false</blockBuildWhenDownstreamBuilding>
      <blockBuildWhenUpstreamBuilding>false</blockBuildWhenUpstreamBuilding>
      <triggers>
        <hudson.triggers.SCMTrigger>
          <spec>H/2 * * * *</spec>
          <ignorePostCommitHooks>false</ignorePostCommitHooks>
        </hudson.triggers.SCMTrigger>
      </triggers>
      <concurrentBuild>false</concurrentBuild>
      <builders>
        <hudson.tasks.Maven>
          <targets>clean install</targets>
          <usePrivateRepository>false</usePrivateRepository>
          <settings class="jenkins.mvn.DefaultSettingsProvider"/>
          <globalSettings class="jenkins.mvn.DefaultGlobalSettingsProvider"/>
          <injectBuildVariables>false</injectBuildVariables>
        </hudson.tasks.Maven>
      </builders>
      <publishers/>
      <buildWrappers/>
    </project>

    To configure a job, we need to create the following content in the JENKINS_HOME/jobs folder:

    JENKINS_HOME
     --> jobs
             ├── HelloWebApp
             │   └── config.xml

    Add the following to the Dockerfile.

    RUN mkdir -p $JENKINS_HOME/jobs/HelloWebApp
    COPY HelloWebApp $JENKINS_HOME/jobs/HelloWebApp

    RUN chmod +x $JENKINS_HOME/jobs/HelloWebApp \
      && chown -R ${user} $JENKINS_HOME/jobs/HelloWebApp


    So let's build the Jenkins image and test it locally.
    Go to the folder where the Dockerfile exists and execute:

    docker build -t jenkins-alpine .

    Run the Jenkins

    docker run -p 80:8080 jenkins-alpine

    You will see the Jenkins logs in the command line

    You can access Jenkins via http://localhost/ and see build jobs running every 2 minutes when it detects any changes in the GitHub project.

    If you click on the HelloWebApp and go to configure, then you will see the Jenkins job configurations.



    So now the image is ready; let's push it to Docker Hub and deploy it in WSO2 Integration Cloud.

    docker images

    REPOSITORY          TAG       IMAGE ID       CREATED          SIZE
    jenkins-alpine      latest    d7dc03cec1df   51 minutes ago   257.4 MB

    docker tag d7dc03cec1df amalkasubasinghe/jenkins-alpine:hellowebapp

    docker login

    docker push amalkasubasinghe/jenkins-alpine:hellowebapp

    When you log in to Docker Hub you can see the image you pushed.



    Let's login to the WSO2 Integration Cloud -> Create Application -> and select Custom Docker Image


    Add the image by providing the image URL.


    Wait until the security scanning finishes, and then create the Jenkins application by selecting the scanned image.



    Here I select the Custom Docker http-8080 and https-8443 runtime, as Jenkins runs on port 8080.


    Wait until the Jenkins instance is fully up and running. Check the logs.


    Now you can access the Jenkins UI via http://esbtenant1-jenkinshellowebapp-1-0-0.wso2apps.com/

    That's all :). Now, every 2 minutes, our Jenkins job will poll the GitHub project, and if there are any changes it will pull them and build.

    This is how you can setup and configure Jenkins in WSO2 Integration Cloud.

    You can see the Dockerfile here: https://github.com/amalkasubasinghe/docker/blob/jenkins-alpine-hellowebapp/Dockerfile






    Prabath AriyarathnaSelect different backend pools based on the HTTP Headers in Nginx


    In some scenarios we need to select different backend pools based on some attribute of the request.
    Nginx has the capability to select different backend pools based on a request header value.





    To accomplish this in Nginx you can use the following code in your configuration.


    upstream gold {
        server 127.0.0.1:8080;
        server 127.0.0.1:8081;
        server 127.0.0.1:8082;
    }

    upstream platinum {
        server 127.0.0.1:8083;
        server 127.0.0.1:8084;
        server 127.0.0.1:8085;
    }

    upstream silver {
        server 127.0.0.1:8086;
        server 127.0.0.1:8087;
    }

    # map to different upstream backends based on the customer_type header
    # (request headers are exposed to map as $http_<header_name>)
    map $http_customer_type $pool {
        default  "gold";
        platinum "platinum";
        silver   "silver";
    }

    server {
        listen 80;
        server_name localhost;

        location / {
            proxy_pass http://$pool;

            # standard proxy settings
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_redirect off;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-NginX-Proxy true;
            proxy_connect_timeout 600;
            proxy_send_timeout 600;
            proxy_read_timeout 600;
            send_timeout 600;
        }
    }

    You can test this setup by sending a request with the custom HTTP header customer_type.

    Ex:- customer_type = gold

    This will load balance only among ports 8080, 8081 and 8082.

    Samisa AbeysingheBallerina Lang. Why New? Why Now?

    Last month, WSO2 announced the new language it designed for enabling integration: the flexible, powerful and beautiful Ballerina language.



    ESB is Dead?

    Why a whole new language for integration? What is wrong with all the tools that we already have?
    The tools that we have, including the proven, stable and powerful WSO2 ESB, are configuration driven, using mostly XML or something similar for configuration. Though we call it configuration, for complex integration scenarios it can get really complex. Configuration over code does not scale.
    In addition, every ESB is based on a data flow architecture, and that does not scale either. The model is not good when it comes to complex scenarios.

    So we need a language, because a language scales better for complex problems.
    Scripting languages such as JavaScript are great.
    Even Java and C# have lots of formidable alternatives and options. And why not use those?
    In fact, people do use them. With the advent of micro services, and with the existence of easy-to-use, container-friendly programming frameworks for Java and sometimes C#, people have already implemented loads of integration services replacing traditional ESBs.

    In the micro services world, there is not much room for the ESB pattern; the ESB is dying and often frowned upon.
    The programming model is inherently micro, so there is little reason to worry about a third proxy layer. You are implementing a thin micro service anyway, and a programming model would do the job. That is the simple thinking, yet it can be over-simplified at times.

    Micro Integrations in a Micro Services World 

    While the philosophy of micro services is absolutely right and the design principle is here to stay, there needs to be more thought about what we are actually doing. What is being done today is actually mostly integration. Why is that?

    Everything that you do today requires stuff from others.

    The primary reason is that no matter what the business units do, they cannot live in the enterprise in a silo today. Either they have to re-use their own services, or they have to connect to services from other business units.
    Even if they do not want to re-use or connect to other business units at all, most useful IT assets are on the cloud today. The need to connect to the cloud and re-use those is inevitable.
    So re-use of existing IT assets in the form of services, and/or the use of cloud services, is a must for today's software. If you are not doing that in your software, it is probably an undergraduate assignment and not an enterprise application that you are talking about.

    Ballerina is a programming language optimized for micro integration.

    If you are already using existing programming languages for your micro services and wonder why you need a new programming language, the simple 80/20 rule applies. That is, if you are talking to other services 80% of the time, then use Ballerina. In fact, if you take a step back and analyze what you are actually doing in your micro services, you will realize that the bulk of your micro services are in this category.

    Ballerina looks at the micro services world from this view and enables micro integrations. If you are to integrate, the existing programming options and frameworks give you almost nothing as a programmer other than the usual programming constructs.
    So either you fall back to an ESB, and then your logic is in configuration, or you convert everything that was in ESB configuration into Java, C# or whatever programming language you use. You are drinking the Kool-Aid of micro services, but you are just moving logic across layers and not doing micro services right.
    With Ballerina, designed to do the job, you can do micro services with micro integrations in fewer lines of code and, more importantly, with the right design in place.

    action tweet(Twitter t, string msg) (message) {
        message request = {};
        string oauthHeader = constructOAuthHeader(consumerKey, consumerSecret, accessToken, accessTokenSecret, msg);
        string tweetPath = "/1.1/statuses/update.json?status=" + uri:encode(msg);
        messages:setHeader(request, "Authorization", oauthHeader);
        message response = http:ClientConnector.post(tweeterEP, tweetPath, request);
        return response;
    }


    The language also comes with visual tooling that uses sequence diagrams to help model the design of the integration. A sequence diagram based model is perfect for describing the parallel, coordinated activities of many parties.




    It is a new language, so what is the effort to learn it? Not much! If you are familiar with any major programming language, you can learn it quickly.

    In addition to being micro services ready and enabling micro integrations, Ballerina truly lives up to the needs of a micro services architecture in that it is container friendly. It starts up in seconds and runs with a small footprint, which are key requirements to make it natively micro services friendly.

    Thought Leadership

    When WSO2 started more than a decade ago, it was the new kid on the block in the Web Services world. It took a novel path to solve the enterprise integration problem. It had leaders who knew the space, but as a company it did not have much industry experience. After more than a decade, seasoned with delivering services and support for a diverse range of large scale customers and hardened by that experience, WSO2 designed Ballerina with a much more practical view of the world. It is a mature moonshot for the next decade of integration solutions, one that could revolutionize the space.

    However, it should also be noted that while WSO2 is coming up with Ballerina as an experienced, decade-old company, there is no leftover technical debt dragged over to Ballerina. This is a fresh new design and perspective on the new world of integration. No backward compatibility worries were brought to the table when the designs were done, to drag the innovators away from their freedom of thought.




    sanjeewa malalgodaHow to expose sensitive services to outside as APIs

    APIM 2.0.0 supports OAuth 2.0 based security for APIs (with JWT support) out of the box, and we can utilize that to secure services. Let me explain how we can use it. Let's consider how a mobile client application can use those secured APIs.
    • The user logs into the system (using multi-step authentication including OTP etc.). If we are using SAML SSO, then we need browser-based redirection from the native application.
    • Then, once the user is authenticated, we can use the same SAML assertion and obtain an OAuth 2.0 access token on behalf of the logged-in user and application (which authenticates both the user and the client application).
    • Then we can use this token for all subsequent calls (service calls).
    • When requests come to the API gateway, we fetch user information from the token and send it to the back end.
    • Also, at the gateway we can do resource permission validation.
    • If we need extended permission validation, we can do that as well before the request is routed to core services.
    • So the internal service can be invoked only if the user is authenticated and authorized to invoke that particular API.
    This complete flow can be implemented using WSO2 API Manager and WSO2 Identity Server.

    sanjeewa malalgodaJWT token and its usage in WSO2 API Manager

    JSON Web Token (JWT) represents claims to be transferred between two parties. The claims in a JWT are encoded as a JavaScript Object Notation (JSON) object that is used as the payload of a JSON Web Signature (JWS) structure or as the plain text of a JSON Web Encryption (JWE) structure, enabling the claims to be digitally signed. A JWT is self-contained, so when we create one, it will have all the necessary pieces inside it.

    To authenticate end users, the API Manager passes attributes of the API invoker to the back-end API implementation. JWT is used to represent claims that are transferred between the end user and the back end. A claim is an attribute of the user that is mapped to the underlying user store. A set of claims is called a dialect (e.g. http://wso2.org/claims). The general format of a JWT is {token info}.{claims list}.{signature}. The API implementation uses information stored in this token, such as for logging, content filtering, and authentication/authorization. The token is Base64-encoded and sent to the API implementation in an HTTP header. The JWT consists of three main components.

    What are those pieces? The JWT token string can be divided into three parts.
    A header
    A payload
    A signature

    We will discuss about these parts later and understand them.

    First, let's consider a very simple use case to understand the usage of JWT in API Manager.
    Let's say we have a shopping cart web application, and a user is invoking some web services associated with that application through API Manager. In such cases we may need to pass user details to the backend service. So, as discussed above, we may use JWT to pass that information to the backend.

    Let's say we have 2 users named Bob and Frank.
    • Bob is 25 years old and he enjoys watching cricket. He is based in Colombo, Sri Lanka.
    • Frank is 52 years old and he enjoys watching football. He lives in Frankfurt, Germany.


    They both realized the coming weekend is free for them and decided to go out to watch a game.
    Both Bob and Frank installed a 'find tickets for games' mobile application.
    They both will call the find-tickets-for-games API from their mobile devices. Each time they invoke the API, we cannot ask them to send their details; they will only send an access token, which is mandatory for security.
    When a request comes to API Manager, it will first validate the token. Then, if the user is authenticated to call the service, we follow up with the next steps.
    From the access token we can get the username of the user. Then we can get the user's attributes and create a JSON payload with them.
    Then the API Management server will send the user details as a JSON payload in a transport header.
    So when the request reaches the backend server, it can get the user details from the JWT message, and the backend server can generate a customized response for each user.
    Bob will get a response with cricket events happening around the Colombo area, while Frank gets a response with football events happening in Germany.

    This is one of the most common use cases of JWT.









    In most production deployments, service calls go through the API manager or a proxy service. If we enable JWT generation in WSO2 API Manager, then each API request will carry a JWT to the back-end service. When the request goes through the API manager, we can append the JWT as a transport header to the outgoing message. So, the back-end service can fetch JWT and retrieve required information about the user, application, or token. There are two kinds of access tokens we use to invoke APIs in WSO2 API Manager.

    Application access token: generated as the application owner; there is no associated end user for this token (the actual user will be the application owner). In this case, all information will be fetched from the application owner, so the JWT will not have real meaning/usage when we use the application access token.
    User access token: user access tokens are always bound to the user who generated the token. So, when you access the API with a user access token, the JWT will contain the user's details, and on the back-end server side we can use it to fetch those details.
    Sample JWT message

    {
        "iss":"wso2.org/products/am",
        "exp":1407285607264,
        "http://wso2.org/claims/subscriber":"xxx@xxx.xxx",
        "http://wso2.org/claims/applicationid":"2",
        "http://wso2.org/claims/applicationname":"DefaultApplication",
        "http://wso2.org/claims/applicationtier":"Unlimited",
        "http://wso2.org/claims/apicontext":"/t/xxx.xxx/aaa/bbb",
        "http://wso2.org/claims/version":"1.0.0",
        "http://wso2.org/claims/tier":"Unlimited",
        "http://wso2.org/claims/keytype":"PRODUCTION",
        "http://wso2.org/claims/usertype":"APPLICATION",
        "http://wso2.org/claims/enduser":"anonymous",
        "http://wso2.org/claims/enduserTenantId":"1",
        "http://wso2.org/claims/emailaddress":"sanjeewa@wso2.com",
        "http://wso2.org/claims/fullname":"xxx",
        "http://wso2.org/claims/givenname":"xxx",
        "http://wso2.org/claims/lastname":"xxx",
        "http://wso2.org/claims/role":"admin,subscriber,Internal/everyone"
    }
    As you can see, the JWT contains claims associated with:
    • User (end user, tenant ID, full name, given name, last name, role, email address, user type)
    • API (API context, version, tier)
    • Subscription (key type, subscriber)
    • Application (application ID, application name, application tier)

    However, in some production deployments, we might need to generate custom attributes and embed them in the JWT.

    Rukshan PremathungaRevoke OAuth application In APIM 2.1.0

    Revoke OAuth application In APIM 2.1.0

    1. Introduction

       In APIM, when a subscriber creates an application and generates keys, the identity component creates a corresponding OAuth application. The OAuth application contains a consumer key and a consumer secret. These values are also shown in the Store application, and they are used later to generate or renew tokens using the Store UI or the token endpoint.

       But these application credentials are constant for the entire life cycle of the application, and they can be destroyed only if the application is deleted. That means there was no way to change the consumer secret of an application.

       Changing the consumer secret is useful when, for example, an organization needs to invalidate the current tokens and regenerate tokens for an application. A possible solution is changing just the consumer secret. Up to APIM 2.0.0 this was not possible, but in the latest APIM version (2.1.0) this feature is available.

    2. Revoke consumer secret

       Admin users can change the consumer secret of any OAuth application by logging in to the management console of a server where the OAuth components are available (APIM or IS). Once the consumer secret is revoked, all the associated tokens are invalidated and the caches are also cleared. This prevents API invocation with those access tokens, and also prevents token re-generation for that application. Once a consumer secret is revoked, the OAuth application also becomes invalid and inactive. However, this does not affect API subscriptions; it is still possible to subscribe to APIs in the APIM Store. Also, if an OAuth application is revoked, it is impossible to regenerate tokens using the Store UI or the token endpoint. Even though the consumer secret is revoked, it is not removed from the OAuth application, and the Store will keep showing the same value.

      • Log in to the management console and select the appropriate service provider for the application.
      • Edit the service provider and expand "OAuth/OpenID Connect Configuration".
      • The OAuth application will then be listed.
      • Click the Revoke button to revoke the consumer secret.
    3. Regenerate consumer secret

      • Log in to the management console and go to the OAuth application.
      • Next to the Revoke button, a "Regenerate secret" button will appear.
      • Click it to re-generate the consumer secret.
      • The Store will then load the new consumer secret.
    4. References

      1. https://docs.wso2.com/display/IS530/Configuring+OAuth2-OpenID+Connect+Single-Sign-On

    Tanya MadurapperumaReceiving HL7 Messages with WSO2 EI

    Before getting EI involved in this story, let's get to know these HL7 messages. What are they? Why do we need such messages? Where do these messages come from?

    What is HL7 ?

    HL7 refers to a set of international standards defined for the exchange, integration, sharing, and retrieval of clinical and administrative data between software applications used by various healthcare providers. In simple words, HL7 is the common language that is used by different types of health care instruments to talk to each other.

    Use case

    Now let's see how these HL7 messages can be received by WSO2 EI, using a simple use case.

    In order to simulate messages emitted from health instruments, we will be using the Hapi Test Panel. If you are not familiar with the Hapi Test Panel, you can go through the previous blog post about it.

    Messages sent from the Hapi Test Panel will be captured by WSO2 EI's HL7 inbound endpoint, and the mediated messages will be saved in a MySQL database, as shown in the below architecture diagram.



    Approach

    Note that we are building the above use case starting from the right side of the above diagram.

       1.  Create a MySQL table and a data service for storing messages
       2.  Write the mediation logic for extracting data from HL7 messages and calling the data service
       3.  Write an HL7 inbound endpoint to receive HL7 messages
       4.  Send HL7 messages to WSO2 EI from the Hapi Test Panel

       
    Note: As the purpose of this blog post is to demonstrate the HL7 capabilities of EI, and not to be deployed in any production environment as is, we will be creating the synapse configurations using the management console of EI.



    Creating data service in WSO2 EI

    Let's first create the MySQL table for storing the mediated messages. Given below is the sample table that we created in the MySQL database.



       1.  Copy the MySQL driver into the EI_HOME/lib folder and start EI by running the integrator.sh script at EI_HOME/bin
       2.  Log into the EI management console and Go to Configure --> Datasources and click Add Datasource
       3.  Fill in the details as per your created table in MySQL and Save


       4.  Then go to Main --> Generate under Data Source
       5.  Go through the wizard and generate the data service.

    Note the data service name that we have used in this use case is "patient_data_DataService"

       6.  Go to Main --> Endpoints and click on Add endpoint
       7.  Choose Address Endpoint from the list of endpoints and fill in the data as given below

    Since our data service name is "patient_data_DataService", our endpoint address is "http://localhost:8280/services/patient_data_DataService"



    Writing mediation for the HL7 messages in WSO2 EI

       1.  Select Main --> Sequences and click on Add Sequence
       2.  Switch to the source view and paste the below given sequence



    Note that we have used a payload factory mediator to extract data from the HL7 message, and at the end of the sequence we call the data service with the newly built payload.

    Creating HL7 Inbound endpoint in WSO2 EI

       1.  Go to Main --> Inbound Endpoints
       2.  Then click on Add Inbound Endpoint
       3.  Give an endpoint name and select Type as HL7
       4.  Fill in the details as shown in the below image


    Note that the inbound HL7 port is 20000

    Sending HL7 messages from Hapi Test Panel

    Send below HL7 message of type ORU^R01 and version 2.4


    Now go to your MySQL table and verify whether the following entry is inserted.

    Note that the payload factory mediator is written to accept only messages of type ORU^R01 and version 2.4; in a real use case we can write the mediation logic in a more generic way to accept different types of HL7 messages.

    Tanya MadurapperumaSending HL7 messages using Hapi Test Panel

    The purpose of this blog post is to describe how to install the Hapi Test Panel in an Ubuntu environment and send HL7 messages using it.

    What is Hapi Test Panel ?

    The HAPI Test Panel is a tool that can be used to send, receive and edit HL7 messages.



    How to install in Ubuntu ?

    There are multiple ways to install the Hapi Test Panel, and you can find more information here. The approach that I followed was:
       1.  Download hapi-testpanel-2.0.1-linux.tar.bz2 from the download page
       2.  Extract the download to your preferred location
       3.  Run the testpanel.sh file, which is at the home of the Hapi Test Panel extraction

    How to send HL7 messages using Test Panel ?

       1.  Click on the Test menu and then select Populate Test Panel with Sample Message and Connections
       2.  You can send the newly created message by clicking the Send button at the top of the middle panel

    If you need any specific version or type of message, you can click on File Menu and then select New Message. You can choose your preferred message version and type from the pop up window.


    Enjoy sending HL7 messages with Hapi Test Panel !!!

    Yashothara ShanmugarajahEnterprise Application Integration and Enterprise Service Bus

    Enterprise Application Integration
    • Integrating systems and applications together
    • Get software systems to work in perfect synchronism with each other
    • Not limited to integration within an organization
      • Integrating with customer applications
      • Integrating with supplier/partner applications
      • Integrating with public services
    With EAI, we get another problem: how can we talk to different services that are developed on different technologies, languages, protocols, platforms and message formats, and with different QoS requirements (security, reliability)? The ESB comes to the rescue for this problem.

    Now we will see how we can use an ESB to resolve this problem. Think of a real scenario: a Chinese girl who does not know English joins your classroom. Suppose you know only English and no Chinese. How can you communicate with her? In this scenario, you can use a friend who knows both Chinese and English; through that friend, you can easily communicate with the girl. This is cost- and time-effective, as you don't need to study Chinese.

    Now we can apply this solution to the software industry. Let's assume you are developing a business application for buying a dress. There you need to talk to a sales system, a customer system, and an inventory system. In this example, let's assume the sales system is built on the SOAP protocol (exposing SOAP services), the customer system uses XML-based REST services, and the inventory system uses JSON-based REST services. Now you need to deal with multiple protocols. Here we can use an ESB as the rescue.

    What is ESB?

    A normal bus is used to transfer things from one place to another. With an ESB, you pass your message to the ESB, and the ESB passes your message to the destination. If the destination sends a response, the ESB takes that response and delivers it back to you. In the previous example, the sales system sends a SOAP message to the ESB, and the ESB converts it to an XML-based REST message for the customer system. You may connect to multiple applications through the ESB, but you only need to make one simple connection, which calls the ESB only; the ESB talks to the rest of the applications.
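    As a minimal sketch in WSO2 ESB's Synapse configuration (the proxy name and backend URL here are hypothetical), a proxy that accepts SOAP from the sales system and forwards JSON to the customer system's REST service could look like this:

    <proxy xmlns="http://ws.apache.org/ns/synapse" name="CustomerProxy"
           transports="http https" startOnLoad="true">
       <target>
          <inSequence>
             <!-- Serialize the incoming XML payload as JSON for the REST backend -->
             <property name="messageType" value="application/json" scope="axis2"/>
             <send>
                <endpoint>
                   <address uri="http://localhost:8080/customerservice" format="rest"/>
                </endpoint>
             </send>
          </inSequence>
          <outSequence>
             <!-- Convert the JSON response back to XML/SOAP for the caller -->
             <property name="messageType" value="text/xml" scope="axis2"/>
             <send/>
          </outSequence>
       </target>
    </proxy>

    The messageType property drives the ESB's message formatters, so the same payload is serialized as JSON on the way to the backend and as SOAP/XML on the way back to the caller.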

    Nifras IsmailEnable autocomplete suggestions in your terminal

    Typing the same things in the terminal again and again is the worst. After a long search, I found a simple solution to enable autocomplete options in the terminal.

    Normally, the Linux Bash shell uses Readline for its auto-completion. Add the following lines to enhance auto-completion in your terminal.

    **** Caution ****

    The following file contains important configuration. Be careful when working with it, and don't touch the other lines.

    Open /etc/inputrc in your favourite editor. I am using nano:

    sudo nano /etc/inputrc

    Then add the following lines:

    # mappings for making up and down arrow searching through history:
    "\e[A": history-search-backward
    "\e[B": history-search-forward
    "\e[C": forward-char
    "\e[D": backward-char

    # Use [Tab] and [Shift]+[Tab] to cycle through all the possible completions:
    "\t": menu-complete
    "\e[Z": menu-complete-backward

    That's it. Press Ctrl+X to save and exit, then close the terminal and reopen it.

    Yeah! Auto-completion is enabled.

    Alternative option: you can also use the reverse history search with Ctrl+R, then use the arrow keys to move through the matching entries.


    Rukshan PremathungaNext WSO2 APIM (3.0.0) powered by WSO2 Ballerina


    Introduction

    Ballerina, the new WSO2 gateway language, was released recently, and APIM also targets Ballerina for its Gateway. Ballerina is used to integrate various services and lets you implement new logic around them, and it has more advanced features than the Apache Synapse we used earlier. Its built-in connectors allow connecting to the world via many protocols, and it is a graphical-modeling programming language, which makes programs easy to implement using graphical components.
    Why Ballerina for APIM

    Ballerina is an awesome programming language that is easy to use for connecting services. The following features help make API management easy.

      • Connectors that let you connect to services
      • Built-in utility functions (json, string, etc.)
      • The Ballerina Composer, which helps implement logic graphically
      • The Composer's Swagger-to-Ballerina (and vice versa) code generation support
      • Swagger specification support

      The above features, available in Ballerina, can be used in APIM to make it easy for users to come up with customizable APIs. Unlike in previous releases, we encourage users to update the Ballerina source to introduce new logic and even new resources. The Composer lets users implement API resources and mediation logic easily, and it is more reliable and less error-prone. Ballerina also supports most of the Swagger specification, which means we can write a Ballerina service equivalent to a Swagger API; APIM uses this to generate an API directly by importing a Swagger definition. The Composer can also design a Ballerina or Swagger API and generate the equivalent Ballerina or Swagger source.

    Play with Ballerina

    To get an idea about Ballerina, visit the official Ballerina website and try it out. There is a tryout editor where you can run your own code and see the result, along with various resources about Ballerina. You can also visit the Ballerina blog (https://medium.com/ballerinalang), which has a list of posts.
    Try with APIM 3.0.0 M1

    APIM 3.0.0 M1 was released recently, and it uses Ballerina as the Gateway. Please refer to the official APIM 3.0.0 documentation from here.

      1. Follow the steps below to configure the Gateway.
        • For the API Manager 3.0.0 M1 release, Ballerina runtime is used as the Gateway.
        • Download Ballerina v0.8.1 runtime from here and extract it.
        • Both the Ballerina and WSO2 API Manager runtime servers need to be hosted on the same node for the moment.
        • Since both runtimes are using the same node, offset the Ballerina port by doing the following,
          • Open the <gwHome>/bre/conf/netty-transports.yml file.
          • Change the default port from 9090 to 9091.
          • Set the environment variable gwHome by pointing to the Ballerina home directory.
      2. Start the Ballerina runtime server.
        • If gwHome is configured, the Ballerina source for created APIs is generated in the /deployment/org/wso2/apim/ directory.
        • Open the terminal and change the directory to gwHome.
        • Start the Ballerina runtime by giving the relative path to the ballerina sources.

          $ cd $gwHome
          $ bin/ballerina run service deployment/org/wso2/apim/
      3. Follow the steps below to invoke an API.
        • Before invoking an API, create and publish an API in the API Publisher.
        • Once a new API is published, the Ballerina server needs to be restarted for the APIs to be deployed. See step 2 above.
        • Subscribe to the API by creating a new application.
        • Make sure you generate an access token for the application.
        • Invoke the API using the following cURL command,

          $ curl -H 'Authorization: Bearer e9352afd-a19d-3d40-9db3-b60e963ae91c' 'http://localhost:9091/hello/'
          Hello World!

    sanjeewa malalgodaSMS OTP Two Factor Authentication for WSO2 API Manager publisher


    In this post, I will explain how to use the SMS OTP multi-factor authenticator through WSO2 Identity Server. We will be using the Twilio SMS provider, which is used to send the OTP code via SMS when authentication happens. For this solution we will use WSO2 API Manager (2.1.0) and Identity Server (5.3.0). Please note that we need to set the port offset to 1 in the carbon.xml configuration file of the Identity Server, so it will be running on https port 9444.

    First, go to https://www.twilio.com and create an account there. Then provide your mobile number and register it.

    Then generate a mobile number from Twilio; it will give you a new number to use.

    curl -X POST 'https://api.twilio.com/2010-04-01/Accounts/{Account-Sid}/SMS/Messages.json'  --data-urlencode 'To={Sender SMS}' --data-urlencode 'From={generated MobileNumber from Twilio}' --data-urlencode 'Body=enter this code'  -H 'Authorization: Basic {base64Encoded(Account-Sid:Auth Token)}'

    Please see the sample below.

    curl -X POST 'https://api.twilio.com/2010-04-01/Accounts/fsdfdfdsfdfdsfdfsdfdsfdfdfdf/SMS/Messages.json'  --data-urlencode 'To=+94745345779' --data-urlencode 'From=+1454543535' --data-urlencode 'Body=enter this code'  -H 'Authorization: Basic LKJADJAD:ADJLJDJ:LAJDJDJJJJJJL::LMMIYGBNJN=='

    Now it should send a message to the mobile number you provided.

    Now log in to the Identity Server and create a new identity provider, providing the following inputs in the user interface.

    Then go to the federated authenticators section and add the SMS OTP configuration as follows. Provide basic authentication credentials like we did for the cURL request.

    Please fill in the following fields as below.

    SMS URL: https://api.twilio.com/2010-04-01/Accounts/ACd5ac7ff9sdsad3232432414b/SMS/Messages.json
    HTTP Method: POST
    HTTP Headers: Authorization: Basic QUNkNWFjN2ZmOTNTNmNjMzOWMwMjBkZQ==
    HTTP Payload: Body=$ctx.msg&To=$ctx.num&From=+125145435333434



    Then let's add a service provider. Provide the service provider name as follows.



    Then go to the SAML2 Web SSO configuration and provide the following configuration. Then save it and move forward.



    Now save this service provider and come back to the Identity Server UI. Then we need to configure the local and outbound authentication steps. We need to use the advanced authentication configuration, as we are going to configure multi-step authentication.


    Then add username/password-based local authentication as the first step and our SMS OTP as the second authentication step. Now we can save this configuration and move forward.


    Also, add the following configuration to the wso2is-5.3.0/repository/conf/identity/application-authentication.xml file and restart the server.


    <AuthenticatorConfig name="SMSOTP" enabled="true">
                 <Parameter name="SMSOTPAuthenticationEndpointURL">https://localhost:9444/smsotpauthenticationendpoint/smsotp.jsp</Parameter>
                 <Parameter name="SMSOTPAuthenticationEndpointErrorPage">https://localhost:9444/smsotpauthenticationendpoint/smsotpError.jsp</Parameter>
                 <Parameter name="RetryEnable">true</Parameter>
                 <Parameter name="ResendEnable">true</Parameter>
                 <Parameter name="BackupCode">false</Parameter>
                 <Parameter name="EnableByUserClaim">false</Parameter>
                 <Parameter name="MobileClaim">true</Parameter>
                 <Parameter name="enableSecondStep">true</Parameter>
                 <Parameter name="SMSOTPMandatory">true</Parameter>
                 <Parameter name="usecase">association</Parameter>
                 <Parameter name="secondaryUserstore">primary</Parameter>
                 <Parameter name="screenUserAttribute">http://wso2.org/claims/mobile</Parameter>
                 <Parameter name="noOfDigits">4</Parameter>
                 <Parameter name="order">backward</Parameter>
           </AuthenticatorConfig>



    Now it's time to configure the API Publisher so that it can work with the Identity Server and authenticate based on SSO. Add the following configuration to the /wso2am-2.1.0/repository/deployment/server/jaggeryapps/publisher/site/conf/site.js file.

     "ssoConfiguration" : {
           "enabled" : "true",
           "issuer" : "apipublisher",
           "identityProviderURL" : "https://localhost:9444/samlsso",
           "keyStorePassword" : "",
           "identityAlias" : "",
           "verifyAssertionValidityPeriod":"true",
           "timestampSkewInSeconds":"300",
           "audienceRestrictionsEnabled":"true",
           "responseSigningEnabled":"false",
           "assertionSigningEnabled":"true",
           "keyStoreName" :"",
           "signRequests" : "true",
           "assertionEncryptionEnabled" : "false",
           "idpInit" : "false",
        }

    Now go to the API Publisher URL and you will be directed to the Identity Server login page. There you will see the following window, where you have to enter your username and password.


    Once you have completed it, you will get an SMS to the number you have mentioned in your user profile. If you haven't already added a mobile number to your user profile, please add it by logging in to the Identity Server: go to the Users and Roles window, select the user profile, and edit it as follows.


    Then enter the OTP you obtained in the next window.


    Vinod KavindaInstalling new features to WSO2 EI

    In WSO2 Enterprise Integrator, you cannot install new features via the management console; that option has been removed. So in order to install a feature, we must use the POM-based feature installation. This is explained in the WSO2 docs [1]. There are a few changes you need to make in order for this pom.xml to work.

    • The "destination" element value should be changed to, "wso2ei-6.0.0/wso2/components".
    • Value of the "dir" attribute in "replace" element should be, "wso2ei-6.0.0/wso2/components/default/configuration/org.eclipse.equinox.simpleconfigurator".
    Optionally, Other than downloading p2 repo (which is over 2GB), the URL to P2 repo "http://product-dist.wso2.com/p2/carbon/releases/wilkes/" can be set as "metadataRepository" and "artifactRepository".

    Following is a sample pom.xml that can be used to install the HL7 feature in EI.
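    A minimal sketch of such a pom.xml is given below, assuming the WSO2 carbon-p2-plugin and its p2-profile-gen goal; the plugin version and the HL7 feature id/version are assumptions that should be verified against the docs [1].

    <project xmlns="http://maven.apache.org/POM/4.0.0">
        <modelVersion>4.0.0</modelVersion>
        <groupId>org.wso2.ei.samples</groupId>
        <artifactId>hl7-feature-installation</artifactId>
        <version>1.0.0</version>
        <packaging>pom</packaging>

        <build>
            <plugins>
                <plugin>
                    <groupId>org.wso2.maven</groupId>
                    <artifactId>carbon-p2-plugin</artifactId>
                    <version>1.5.4</version>
                    <executions>
                        <execution>
                            <id>p2-profile-generation</id>
                            <phase>package</phase>
                            <goals>
                                <goal>p2-profile-gen</goal>
                            </goals>
                            <configuration>
                                <!-- Hosted P2 repo instead of the 2 GB download -->
                                <metadataRepository>http://product-dist.wso2.com/p2/carbon/releases/wilkes/</metadataRepository>
                                <artifactRepository>http://product-dist.wso2.com/p2/carbon/releases/wilkes/</artifactRepository>
                                <!-- Components directory of the EI distribution -->
                                <destination>wso2ei-6.0.0/wso2/components</destination>
                                <profile>default</profile>
                                <features>
                                    <feature>
                                        <id>org.wso2.carbon.business.messaging.hl7.feature.group</id>
                                        <version>4.6.10</version>
                                    </feature>
                                </features>
                            </configuration>
                        </execution>
                    </executions>
                </plugin>
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-antrun-plugin</artifactId>
                    <version>1.8</version>
                    <executions>
                        <execution>
                            <phase>package</phase>
                            <goals>
                                <goal>run</goal>
                            </goals>
                            <configuration>
                                <target>
                                    <!-- Mark the installed bundles as started in bundles.info -->
                                    <replace token="false,false" value="true,true"
                                             dir="wso2ei-6.0.0/wso2/components/default/configuration/org.eclipse.equinox.simpleconfigurator">
                                        <include name="bundles.info"/>
                                    </replace>
                                </target>
                            </configuration>
                        </execution>
                    </executions>
                </plugin>
            </plugins>
        </build>
    </project>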


    [1] - https://docs.wso2.com/display/Carbon440/Installing+Features+using+pom+Files

    Gobinath LoganathanWSO2 CEP - Publish Events Using Java Client

    The last article, WSO2 CEP - Hello World!, explained how to set up WSO2 CEP with a simple event processor. This article shows you the way to send events to the CEP using a Java client. Actually it is nothing more than an HTTP client which sends the event to the CEP through an HTTP request. Step 0: Follow the previous article and set up the CEP engine. This article uses the same event processor

    Gobinath LoganathanComplex Event Processing - An Introduction

    Today almost all the big brothers in the software industry are behind big data and data analytics. Not only the large companies; even small-scale companies need data processing in order to track and lead their business. Complex Event Processing (CEP) is one of the techniques being used to analyse streams of events for events or patterns of interest. This article explains the big picture of

    Gobinath LoganathanNever Duplicate A Window Again - WSO2 Siddhi Event Window

    A new feature known as Event Window was introduced in WSO2 Siddhi 3.1.1, which is quite similar to the named window of Esper CEP in some aspects. This article presents the application and the architecture of the Event Window using a simple example. As of Siddhi version 3.1.0, a window can be defined on a stream inside a query and the output can be used in the same query itself. For

    Gobinath LoganathanWSO2 CEP - Output Mapping Using Registry Resource

    Publishing the output is an important requirement of CEP. WSO2 CEP allows converting an event to TEXT, XML or JSON, which is known as output mapping. This article explains how a registry resource can be used for custom event mapping in WSO2 CEP 4.2.0. Step 1: Start the WSO2 CEP and log in to the management console. Step 2: Navigate to Home → Manage → Events → Streams → Add Event Stream.

    Gobinath LoganathanSiddhi 4.0.0 Early Access

    Siddhi 4.0.0 is being developed using Java 8 with major core-level changes and features. One fundamental change to note is that some features of Siddhi have been moved out as external extensions to Siddhi and WSO2 Complex Event Processor. This tutorial shows you how to migrate a complex event project developed using Siddhi 3.x to Siddhi 4.x. Take the sample project developed in "Complex

    Gobinath LoganathanWSO2 DAS - Hello World!

    WSO2 Data Analytics Server is a smart analytics platform for both real-time and batch analytics. Real-time analytics is provided through their powerful open source Complex Event Processing engine, Siddhi. This article focuses on the complex event processing capability of the DAS server and provides a quick-start guide on how to set up an event stream and process events generated by an

    Gobinath LoganathanIs WSO2 CEP Dead? No! Here’s Why…

    During WSO2Con US 2017, a major business decision was announced: WSO2 now promotes the Data Analytics Server (DAS) (they may change this name very soon) over the Complex Event Processor. For those who haven't heard about DAS even though it has been there for a long period, it is another product of WSO2 which contains the Complex Event Processor for real-time

    Gobinath LoganathanApache Thrift Client for WSO2 CEP

    In the series of WSO2 CEP tutorials, this article explains how to create an Apache Thrift publisher and receiver for a CEP server in Python. Even though this is javahelps.com, I use Python since the publisher and receiver in Java are already available in the WSO2 CEP samples. One of the major advantages of Apache Thrift is its support for various platforms, so this tutorial can be simply adapted

    Vinod KavindaResolving SSL related issue in WSO2 products for MySql 5.7 upward

    If you try to start a WSO2 product with MySQL 5.7, it will give the following warning and the product will not work.

    Wed Dec 09 22:46:52 CET 2015 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.


    This can be avoided for development purposes by not using SSL. For this, the JDBC URL for the database should be appended with "useSSL=false". But it cannot be appended with a bare & sign like in a normal URL, because the URL lives in an XML configuration file; if you do, it may give XML parsing errors. Use the following format.
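    For example, in the datasource configuration (the database name here is a placeholder):

    <url>jdbc:mysql://localhost:3306/wso2_carbon_db?useSSL=false</url>

    If the URL already carries other parameters, the ampersand between them must be XML-escaped as &amp;:

    <url>jdbc:mysql://localhost:3306/wso2_carbon_db?autoReconnect=true&amp;useSSL=false</url>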


    Then it will work as usual.

    Lakshman UdayakanthaWSO2 APP Manager(APPM) and WSO2 Enterprise Mobility Manager (EMM) integration

    There are two separate cases for APPM and EMM integration

    1. APPM and EMM on a single JVM. ex : EMM standalone pack.
    2. APPM and EMM on separate JVMs. ex : clustered scenario

    For the first case, the EMM standalone vanilla pack should work without changing any configuration.

    For the second case, there are some configurations which should be done. Follow the steps below to configure APPM and EMM on separate JVMs.

    1. If you run APPM and EMM on the same machine, change the port offset of one pack. Let's change the port offset of the APPM pack.

    i) Change the port offset to 10 in carbon.xml, which is in the <APPM_HOME>/repository/conf directory (see the snippet after these steps).
    ii) Since the APPM default authentication mechanism is SAML SSO, change the port of the IdentityProviderUrl in app-manager.xml as well.

     ex : Change the ports as shown below

    <SSOConfiguration>

            <!-- URL of the IDP use for SSO -->
            <IdentityProviderUrl>https://localhost:9453/samlsso</IdentityProviderUrl>

            <Configurators>
                <Configurator>
                    <name>wso2is</name>
                    <version>5.0.0</version>
                    <providerClass>org.wso2.carbon.appmgt.impl.idp.sso.configurator.IS500SAMLSSOConfigurator</providerClass>
                    <parameters>
                        <providerURL>https://localhost:9453</providerURL>
                        <username>admin</username>
                        <password>admin</password>
                    </parameters>
                </Configurator>
            </Configurators>

        </SSOConfiguration>

    iii) Change the port to 9453 for all the ports found in sso-idp-config.xml, which is located in the <APP_HOME>/repository/conf/identity directory.

    Now the port offset configuration is done.
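    For reference, step 1 i) amounts to the following change in <APPM_HOME>/repository/conf/carbon.xml (other child elements of Ports are omitted here):

    <Ports>
        <!-- All default ports are shifted by this value -->
        <Offset>10</Offset>
    </Ports>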

    2. Now create a mobile app by going to the App Manager Publisher. Publish it, and it will be available in the APPM store.
    3. Create an OAuth application in EMM by following article How to map existing oauth apps in wso2.
    4. Open app-manager.xml in APPM and find the configuration called MobileAppsConfiguration. Change the ActiveMDM property to WSO2MDM.

    ex: <Config name="ActiveMDM">WSO2MDM</Config>

    Change the MDM properties named WSO2MDM as follows. Change the ports of ServerURL and TokenApiURL to the EMM port. Here, the client key and client secret are the ones returned from EMM when the OAuth application was created.

    <MDM name="WSO2MDM" bundle="org.wso2.carbon.appmgt.mdm.restconnector">
                    <Property name="ImageURL">/store/extensions/assets/mobileapp/resources/models/%s.png</Property>
                    <Property name="ServerURL">https://localhost:9453/mdm-admin</Property>
                    <Property name="TokenApiURL">https://localhost:9453/oauth2/token</Property>
                    <Property name="ClientKey">veQtMV1aH1iX0AFWQckJLiooTxUa</Property>
                    <Property name="ClientSecret">cFGPUbV11yf9WgsL18d1Oga6JR0a</Property>
                    <Property name="AuthUser">admin</Property>
                    <Property name="AuthPass">admin</Property>
                </MDM>

    5. Enrol your device in MDM.
    6. Now you can install apps from the App Manager store on devices enrolled in EMM.



    Samisa AbeysingheThoughts on Life – 2


    I called this post a second, as I already had a previous one on life.

    People are good with stereotyping. They often think I am a Buddhist. I often wonder what that means to be called a Buddhist.

    If that means that I am born to a couple of Buddhist parents, you are wrong, I was born a Catholic.
    Being a Catholic child, I wanted to learn and understand. So as a kid I started reading the Bible. That was what a good Catholic was supposed to do. But actually, not many did, even in those days, 25 to 30 years ago.

    So many years ago, when I started reading, I first read the preface of the book. It said, this book, that is the bible, would help you understand who you are and why you are here on this earth. To this day, I can still remember those words very clearly.

    So, that is what I am still doing. I seek to understand who I am and why I am here.
    I do not go to church much or pray much, so the Catholics do not consider me to be a good one of them. However, in my understanding, the moral of the story of prayer and the Bible and worship is not about God but about us. Yes, we think it is about us, and we go and ask for so many things from God in our prayers.

    But it is about us understanding us. It is appreciating all that we have got around us for free: the air that we breathe, the eyes that we see with, the light that surrounds us, the water that rains and runs around us. And living life in a grateful manner.

    I have worked with a blind man in my life, between my A/Ls and before I went to university. I was his guide while he sold his envelopes; we walked along the way to offices. The moral of the story is that you must do something like that to appreciate the value of the sight you have. Then the beauty of the things you see, the colors, the nature, the people and so on. He could not see any of them; I could see all of them. You take it for granted, but when you do not have sight, the ability to see things, you miss it. There are many things like that, so many little things in life that you have that you take for granted. Where did they come from? How did they come to you? How come you are here to enjoy these gifts of life? Where is your gratitude? Should you be grateful or should you not?


    Life is an interesting journey. Do not let it just pass. See if there is something in it. Even if there is nothing in it, even the experience and curiosity and excitement of looking for some meaning in life is rewarding enough for us as intellectual creatures. 

    Chanaka FernandoGetting started with Ballerina in 10 minutes

    Ballerina is the latest revelation in programming languages. It has been built with writing network services and functions in mind. In this post I'm going to describe how to write network services and functions in a 10-minute tutorial.


    First things first: go to the ballerinalang website and download the latest Ballerina tools distribution, which has the runtime and all the tools required for writing Ballerina programs. After downloading, extract the archive into a directory (let's say BALLERINA_HOME) and set the PATH environment variable to the bin directory of the BALLERINA_HOME directory into which you extracted the downloaded tools distribution. In Linux, you can achieve this as shown below.
    export PATH=$PATH:/BALLERINA_HOME/bin
    e.g. export PATH=$PATH:/Users/chanaka-mac/ballerinalang/Testing/ballerina-tools-0.8.3/bin
    Now you have set up Ballerina in your system. It is time to run the first example of all: the Hello World example. Ballerina can be used to write two types of programs.
    • Network services
    • Main functions
      Here, network services are long-running services which keep running after they are started, until the process is killed or stopped by an external party. Main functions are programs which execute a given task and exit by themselves.
      Let's run the more familiar main-program-style Hello World example. The only thing you have to do is run the ballerina command pointing to the hello world sample. Change your directory to the samples directory within the Ballerina tools distribution ($BALLERINA_HOME/samples) and run the following command from your terminal.
      $ ballerina run main helloWorld/helloWorld.bal
      Hello, World!
      Once you run the above command, you will see the output “Hello, World!” and you are all set (voila!).
      Let's go to the file and see what a Ballerina hello world program looks like.
      import ballerina.lang.system;

      function main(string[] args) {
          system:println("Hello, World!");
      }
      This small program has several key concepts covered.
      • Signature of the main function is similar to other programming languages like C, Java
      • You need to import native utilities before using them (no auto-import)
      • How to run the program using ballerina run command


        Now the basics are covered. Let's move on to the next step, which is running a service that says “Hello, World!” and keeps running.
        All you have to do is execute the below command in your terminal.
        $ ballerina run service helloWorldService/helloWorldService.bal
        ballerina: deploying service(s) in 'helloWorldService/helloWorldService.bal'
        ballerina: started server connector http-9090
        Now things are getting a little bit interesting. You can see two lines which describe what happened with the above command: it has deployed the service described in the mentioned file, and a port (9090) has been opened for HTTP communication. Now this service is started and listening on port 9090. We need to send a request to get a response out of this service. If you browse to the README.txt within the helloWorldService sample directory, you can find the curl command below, which can be used to invoke this service. Let's run this command from another command window.
        $ curl -v http://localhost:9090/hello
        > GET /hello HTTP/1.1
        > Host: localhost:9090
        > User-Agent: curl/7.51.0
        > Accept: */*
        >
        < HTTP/1.1 200 OK
        < Content-Type: text/plain
        < Content-Length: 13
        <
        * Curl_http_done: called premature == 0
        * Connection #0 to host localhost left intact
        Hello, World!
        You can see that we got a response message from the service saying “Hello, World!”. Let's crack open the program which does this. Go to the Ballerina file at helloWorldService/helloWorldService.bal.
        import ballerina.lang.messages;
        import ballerina.net.http;

        @http:BasePath ("/hello")
        service helloWorld {

            @http:GET
            resource sayHello (message m) {
                message response = {};
                messages:setStringPayload(response, "Hello, World!");
                reply response;
            }

        }
        This program covers several important aspects of a Ballerina program.
        • annotations are used to define the service related entities. In this sample, “/hello” is the context of the service and “GET” is the HTTP method accepted by this service
        • message is the data carrier coming from the client. Users can do whatever they want with a message; they can create new messages and many other things.
        • “reply” statement is used to send a reply back to the service client.
          In the above example, we created a new message called “response”, set the payload to “Hello, World!”, and then replied back to the client. The way you executed this service was with the curl command above, where we specified the port (9090) on which the service was started and the context (/hello) we defined in the code.


          We have a few minutes left, so let's go for another sample which is a bit more advanced and completes the set.
          Execute the following command in your terminal.
          ballerina run service passthroughService/passthroughService.bsz
          ballerina: deploying service(s) in 'passthroughService/passthroughService.bsz'
          ballerina: started server connector http-9090
          Here, we have run a file with a different extension (.bsz), but the result was similar to the previous section: the file has been deployed and the port is opened. Let's quickly invoke this service with the following command, as mentioned in the README.txt file.
          curl -v http://localhost:9090/passthrough
          > GET /passthrough HTTP/1.1
          > Host: localhost:9090
          > User-Agent: curl/7.51.0
          > Accept: */*
          >
          < HTTP/1.1 200 OK
          < Content-Type: application/json
          < Content-Length: 49
          <
          * Curl_http_done: called premature == 0
          * Connection #0 to host localhost left intact
          {"exchange":"nyse","name":"IBM","value":"127.50"}
          Now we got an interesting response. Let's go inside the source and see what we have just executed. This sample is a bit advanced, and hence it covers several other important features not mentioned in the previous sections.
          • Ballerina programs can be run as a self-contained archive. In this sample, we have run a service archive file (.bsz) which contains all the artifacts required to run this service.
          • Ballerina programs can have packages, and the package structure follows the directory structure. In this sample, we have a package called “passthroughservice.samples”, and the directory structure is correspondingly passthroughservice/samples.
            Here are the contents of this sample.
            passthroughService.bal

            package passthroughservice.samples;

            import ballerina.net.http;

            @http:BasePath ("/passthrough")
            service passthrough {

                @http:GET
                resource passthrough (message m) {
                    http:ClientConnector nyseEP = create http:ClientConnector("http://localhost:9090");
                    message response = http:ClientConnector.get(nyseEP, "/nyseStock", m);
                    reply response;
                }
            }
            nyseStockService.bal

            package passthroughservice.samples;

            import ballerina.lang.messages;
            import ballerina.net.http;

            @http:BasePath ("/nyseStock")
            service nyseStockQuote {

                @http:GET
                resource stocks (message m) {
                    json payload = `{"exchange":"nyse", "name":"IBM", "value":"127.50"}`;
                    message response = {};
                    messages:setJsonPayload(response, payload);
                    reply response;
                }
            }
            In this sample, we have written a simple integration by connecting to another service which is also written in Ballerina and running on the same runtime. “passthroughService.bal” contains the main Ballerina service logic, in which we:
            • Create a client connector to the backend service
            • Send a GET request to a given path with the incoming message
            • Reply back the response from the backend service
              In this sample, we have written the back-end service in Ballerina as well. In that service, “nyseStockService.bal”, we:
              • Create a json message with the content
              • Set that message as the payload of a new message
              • Reply back to the client (which is the passthroughService)
                It’s Done! Now you can run the remainder of the sample or write your own programs using Ballerina.
                Happy Dancing !

    Chanaka FernandoBallerina — Why it is different from other programming languages?

    In this post, we’re going to talk about special features of the Ballerina language which are unique to itself. These features are specifically designed to address the requirements of the technology domain we are targeting with this new language.

    XML , JSON and datatable are native data types

    Communication is all about messages and data. XML and JSON are the most common and heavily used data types in any kind of integration ecosystem. In addition to those two types, interaction with databases (SQL, NoSQL) is the other most common use case. We have covered all three scenarios with native data types.
    You can define xml and json data types inline and manipulate them easily with utility methods in jsons and messages packages.
    json j = `{"company":{"name":"wso2", "country":"USA"}}`;
    messages:setJsonPayload(m, j);
    With the above 2 lines, you can define your own json message and replace the current message with your message. You can do the same thing for XML messages as well.
    If you need to extract some data from a message which is of type application/json, you can easily do that with following lines of code.
    json newJson = jsons:getJson(messages:getJsonPayload(m), "$.company");
    The above code will set the following json message to the newJson variable.
    {"name":"wso2","country":"USA"}
    Another cool feature of this inline representation is the variable access within these template expressions. You can access any variable when you define your XML/JSON message like below.
    string name = "WSO2";
    xml x = `<name>{$name}</name>`;
    The above 2 lines create an xml message with following data in it.
    <name>WSO2</name>
    You can do the same thing for JSON messages in a similar fashion.
    A datatable is a representation of a pointer to a result set returned from a database query. It works in a streaming manner: the data is consumed as it is used in the program. Here is sample code for reading data within a Ballerina program using the datatable type.
    string s;
    datatable dt = sql:ClientConnector.select(testDB,
        "SELECT int_type, long_type, float_type, double_type, boolean_type, string_type from DataTable LIMIT 1",
        parameters);
    while (datatables:next(dt)) {
        s = datatables:getString(dt, "string_type");
        // do something with s
    }
    You can find the complete set of functions in Ballerina API documentation.

    Parallel processing is as easy as it can get

    The term “parallel processing” scares even experienced programmers, but with Ballerina you can do parallel processing as you would any other action. The name “Ballerina” stems from ballet, where many different dancers synchronize with each other during the act by sending messages to each other; the technical term for this process is “choreography”. The Ballerina language brings this concept to programmers with the following two features.

    Parallel processing with worker

    A worker is an execution flow. Execution is carried out by the “default worker”. If the Ballerina programmer wants to delegate work to another worker running in parallel to the default worker, he can create a worker and send a message to it with the following syntax.
    worker friend(message m) {
        // Do some work here
        reply m';
    }
    msg -> friend;
    // Do my own work
    replyMsg <- friend;
    There are a few special things about this task delegation.
    • The worker (friend) runs in parallel to the default worker.
    • The default worker can continue its work independently.
    • When the default worker wants the result from the friend worker, it calls the friend worker and blocks there until it gets the result message, or times out after 1 minute.

    Parallel processing with fork-join (multiple workers)

    Sometimes users need to send the same message to multiple workers at the same time and process the results in different ways. That is where fork-join comes to the rescue. The Ballerina programmer can define workers and their actions within the fork-join statement and then decide what to do once the workers are done with their work. Below is a sample fork-join.
    fork(msg) {
        worker chanaka(message m1) {
            // Do some work here
            reply m1';
        }
        worker sameera(message m2) {
            // Do something else
            reply m2';
        }
        worker isuru(message m3) {
            // Do another thing
            reply m3';
        }
    } join (all)(message[] results) {
        // Do something with the results message array
    } timeout (60)(message[] resultsBeforeTimeout) {
        // Do something after the timeout
    }
    The above sample is a powerful program which would be really hard to implement in some other programming languages (some cannot do this at all). But with Ballerina, you get all the power with simplicity. Here is an explanation of the above program.
    • Workers “chanaka”, “sameera” and “isuru” are executed in parallel to the main default worker.
    • The join condition specifies how the user wants to gather the results of the started workers. In this sample, it waits for “all” workers. It is possible to join the workers with one of the following options:
    — join all of the 3 workers
    — join all of a set of named workers
    — join any 1 of all 3 workers
    — join any 1 of a set of named workers
    • The timeout condition is coupled with the join block. The user can specify the timeout value in seconds to wait until the join condition is satisfied. If the join condition is not satisfied within the given duration, the timeout block executes with whatever results were returned from the completed workers.
    • Once the fork-join statement is executing, the default worker waits until the join block or the timeout block completes; it stays idle during that time (some rest).
    In addition to the above features, workers can invoke any function declared within the same package or any other package. One limitation of the current worker/fork-join implementation is that workers cannot communicate with any worker other than the default worker.

    Comprehensive set of developer tools to make your development experience as easy as it can get

    Ballerina is not just the language and the runtime. It comes with a complete set of developer tools which help you start your Ballerina experience as quickly and easily as possible.

    Composer

    The Composer is the main tool for writing Ballerina programs. Here’s some of what it can do:
    • Source, Design and Swagger view of the same implementation and ability to edit through any interface
    • Run/Debug Ballerina programs directly from the editor
    • Drag/Drop program elements and compose your program

    Testerina

    This is the unit testing framework for Ballerina programs. Users can write unit tests to test their Ballerina source code with this framework. It allows users to mock Ballerina components and emulate actual Ballerina programs within a unit testing environment. You can find details in this Medium post.

    Connectors

    These are the client connectors which are written to connect with different cloud APIs and systems. This is one of the extension points Ballerina has; users can write their own connectors in the Ballerina language and use them within any other Ballerina program.

    Editor plugins

    Another important set of tools coming with the Ballerina tooling distribution is the set of editor plugins for popular source code editors like IntelliJ IDEA, Atom, VSCode, and Vim. This makes sure that if you are a hardcore script-editing person who is not interested in IDEs, you are also given the power of Ballerina language capabilities in your favourite editor.
    I am only half done with the cool new features of Ballerina, but this is enough for a single post. You can try out these cool features and let us know your experience and thoughts through our Google user group, Twitter, Facebook, Medium, or any other channel, or by putting a comment on this post.

    Chanaka FernandoBallerina, the programming language for geeks, architects, marketers and the rest

    We @ WSO2 are thrilled to announce our latest innovation at WSO2Con USA 2017. It is a programming language for all: for geeks who like to write scripts for everything they do, for architects who barely speak without diagrams, for marketing folks who have no idea what programming is, and for so-called programmers who crack any kind of programming language you throw at them. Simply put, it is a programming language with both visual and textual representations. You can try out live samples at the ballerinalang web site.
    Programming language inventions are not something we see very often. The reason is that when people are happy with a language and get used to it, they are reluctant to move from that ecosystem. Unless something is super awesome and they can't live without it, they prefer holding their position. This is even harder for general-purpose programming languages than for Domain Specific Languages (DSLs).
    Integration of systems has been a tedious task from the beginning, and nothing much has changed even today. While working with our customers, we identified that there is a gap in the integration space where programmers and architects speak different languages, and sometimes this results in huge losses of time and money. Integration has a lot to do with diagrams. Top-level people always prefer diagrams to code, but programmers do the opposite. We thought of filling this gap with a more modernized programming language. That was our starting point.
    Once we started development, and while designing this programming language, we identified that there are many cool features spread across different programming languages, but no one language with all of them. So we made design changes to make Ballerina a more general-purpose language than a DSL.
    Today, we are happy to announce the “Flexible, Powerful, Beautiful” programming language “Ballerina”. Here are the main features of the language in a short list.
    • Textual, Visual and Swagger representation of your code
    • Parallel programming made easier with workers and fork-join
    • XML, JSON and DataTable as built in data types for easier data handling
    • Packaging and module system to write, share, distribute code in elegant fashion
    • Composer (editor) makes it easier to write programs in a more visual manner
    • Built in debugger and test framework (testerina) makes it easier to develop and test
    Try out Ballerina and let us know your thoughts on Medium, Twitter, Facebook, Slack, Google, and many other channels. We are happy to hear from you. Make integration great again.

    Anupama PathirageWSO2 DSS - Exposing Excel Data in Non Query Mode

    If query mode is disabled for the spreadsheet, you cannot use SQL statements to query data in the excel sheet. Note that in non-query mode, you can only get data from the sheet and you cannot insert, update or modify any data.

    The sample below uses DSS 3.5.1 with the DSSTest.xls Excel file. Download DSSTest.xls from [1] and update the file system location in the URL field.

    Data Service

    <data name="ExcelTestService" transports="http https local">
       <config enableOData="false" id="ExcelDS">
          <property name="excel_datasource">/home/anupama/DSSTest.xls</property>
       </config>
       <query id="SelectData" useConfig="ExcelDS">
          <excel>
             <workbookname>Alerts</workbookname>
             <hasheader>true</hasheader>
             <startingrow>2</startingrow>
             <maxrowcount>-1</maxrowcount>
             <headerrow>1</headerrow>
          </excel>
          <result element="AlertDetails" rowName="Alert">
             <element column="AlertID" name="Alert_ID" xsdType="string"/>
             <element column="Owner" name="OwnerName" xsdType="string"/>
             <element column="AlertType" name="Alert_Type" xsdType="string"/>
          </result>
       </query>
       <operation name="getdata">
          <call-query href="SelectData"/>
       </operation>
    </data>


    Request

    http://localhost:9763/services/ExcelTestService.SOAP11Endpoint/

    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:dat="http://ws.wso2.org/dataservice">
       <soapenv:Header/>
       <soapenv:Body>
          <dat:getdata/>
       </soapenv:Body>
    </soapenv:Envelope>



    Response

    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
       <soapenv:Body>
          <AlertDetails xmlns="http://ws.wso2.org/dataservice">
             <Alert>
                <Alert_ID>1</Alert_ID>
                <OwnerName>Peter</OwnerName>
                <Alert_Type>3</Alert_Type>
             </Alert>
             <Alert>
                <Alert_ID>2</Alert_ID>
                <OwnerName>James</OwnerName>
                <Alert_Type>4</Alert_Type>
             </Alert>
             <Alert>
                <Alert_ID>3</Alert_ID>
                <OwnerName>Anne</OwnerName>
                <Alert_Type>1</Alert_Type>
             </Alert>
             <Alert>
                <Alert_ID>4</Alert_ID>
                <OwnerName>Jane</OwnerName>
                <Alert_Type>11</Alert_Type>
             </Alert>
             <Alert>
                <Alert_ID>30</Alert_ID>
                <OwnerName>Smith</OwnerName>
                <Alert_Type>1</Alert_Type>
             </Alert>
          </AlertDetails>
       </soapenv:Body>
    </soapenv:Envelope>




    References:


    [1] https://github.com/anupama-pathirage/DemoFiles/raw/master/Blog/Excel/DSSTest.xls



    Anupama PathirageWSO2 DSS - Exposing Excel Data in Query Mode

    In query mode you can query data in the spreadsheet using SQL statements. The query mode supports only basic SELECT, INSERT, UPDATE and DELETE queries. The org.wso2.carbon.dataservices.sql.driver.TDriver class is used internally as the SQL driver. It is a JDBC driver implementation used with tabular data models such as Google spreadsheets, Excel sheets, etc. Internally it uses Apache POI, the Java API for Microsoft documents, to read and modify documents [1].

    The sample below uses DSS 3.5.1 with the DSSTest.xls Excel file. Download DSSTest.xls from [2] and update the file system location in the URL field.

    Data Service


    <data name="ExcelTest" transports="http https local">
       <config enableOData="false" id="ExcelDS">
          <property name="driverClassName">org.wso2.carbon.dataservices.sql.driver.TDriver</property>
          <property name="url">jdbc:wso2:excel:filePath=/home/Anupama/DSSTest.xls</property>
       </config>
       <query id="QueryData" useConfig="ExcelDS">
          <sql>Select AlertID, Owner, AlertType from Alerts where AlertType &gt; 3</sql>
          <result element="Entries" rowName="Entry">
             <element column="AlertID" name="AlertID" xsdType="string"/>
             <element column="Owner" name="Owner" xsdType="string"/>
             <element column="AlertType" name="AlertType" xsdType="string"/>
          </result>
       </query>
       <query id="InsertData" useConfig="ExcelDS">
          <sql>Insert into Alerts(AlertID, Owner, AlertType) values (?,?,?)</sql>
          <param name="ID" sqlType="INTEGER"/>
          <param name="Owner" sqlType="STRING"/>
          <param name="Type" sqlType="INTEGER"/>
       </query>
       <operation name="GetData">
          <call-query href="QueryData"/>
       </operation>
       <operation name="InsertData" returnRequestStatus="true">
          <call-query href="InsertData">
             <with-param name="ID" query-param="ID"/>
             <with-param name="Owner" query-param="Owner"/>
             <with-param name="Type" query-param="Type"/>
          </call-query>
       </operation>
    </data>



    Request and Response

    http://localhost:9763/services/ExcelTest.SOAP11Endpoint/

    For Get Data

    Request :


    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:dat="http://ws.wso2.org/dataservice">
       <soapenv:Header/>
       <soapenv:Body>
          <dat:GetData/>
       </soapenv:Body>
    </soapenv:Envelope>



    Response


    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
       <soapenv:Body>
          <Entries xmlns="http://ws.wso2.org/dataservice">
             <Entry>
                <AlertID>2.0</AlertID>
                <Owner>James</Owner>
                <AlertType>4.0</AlertType>
             </Entry>
             <Entry>
                <AlertID>4.0</AlertID>
                <Owner>Jane</Owner>
                <AlertType>11.0</AlertType>
             </Entry>
          </Entries>
       </soapenv:Body>
    </soapenv:Envelope>


    For Insert Data:

    Request:


    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:dat="http://ws.wso2.org/dataservice">
       <soapenv:Header/>
       <soapenv:Body>
          <dat:InsertData>
             <dat:ID>30</dat:ID>
             <dat:Owner>Smith</dat:Owner>
             <dat:Type>1</dat:Type>
          </dat:InsertData>
       </soapenv:Body>
    </soapenv:Envelope>



    Response :


    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
       <soapenv:Body>
          <axis2ns3:REQUEST_STATUS xmlns:axis2ns3="http://ws.wso2.org/dataservice">SUCCESSFUL</axis2ns3:REQUEST_STATUS>
       </soapenv:Body>
    </soapenv:Envelope>


    References : 

    [1] https://poi.apache.org/spreadsheet/index.html
    [2] https://github.com/anupama-pathirage/DemoFiles/raw/master/Blog/Excel/DSSTest.xls

    Anupama PathirageWSO2 DSS - Calling Stored Procedures with IN and OUT Parameters

    This article explains how to call a stored procedure using WSO2 Data Services Server (WSO2 DSS). It also includes details on how stored procedures with IN and OUT parameters work with DSS. This example uses a MySQL DB with DSS 3.5.1.

    SQL Script for table and procedure

    CREATE TABLE ALERT_DETAILS (ALERT_ID integer,OWNER VARCHAR(50),ALERT_TYPE integer);
    INSERT INTO ALERT_DETAILS(ALERT_ID,OWNER,ALERT_TYPE) values (1, 'Peter',2);
    INSERT INTO ALERT_DETAILS(ALERT_ID,OWNER,ALERT_TYPE) values (2, 'James',0);

    CREATE PROCEDURE GET_ALERT_DETAILS (IN VIN_ALERT_ID INT, OUT VOUT_ALERT_TYPE INT,OUT VOUT_OWNER VARCHAR(50))
    BEGIN
    SELECT ALERT_TYPE,OWNER INTO VOUT_ALERT_TYPE, VOUT_OWNER FROM ALERT_DETAILS WHERE ALERT_ID = VIN_ALERT_ID ;
    END




    DSS Service 

    <data name="ProcedureTest" transports="http https local">
       <config enableOData="false" id="TestMySQL">
          <property name="driverClassName">com.mysql.jdbc.Driver</property>
          <property name="url">jdbc:mysql://localhost:3306/ActivitiEmployee</property>
          <property name="username">root</property>
          <property name="password">root</property>
       </config>
       <query id="getAlertIds" useConfig="TestMySQL">
          <sql>call GET_ALERT_DETAILS(?,?,?)</sql>
          <result element="AlertDetails" rowName="Alerts">
             <element column="QPARAM_ALERT_TYPE" name="TYPE" xsdType="integer"/>
             <element column="QPARAM_OWNER" name="ALERTOWNER" xsdType="string"/>
          </result>
          <param name="QPARAM_ALERT_ID" sqlType="INTEGER"/>
          <param name="QPARAM_ALERT_TYPE" sqlType="INTEGER" type="OUT"/>
          <param name="QPARAM_OWNER" sqlType="STRING" type="OUT"/>
       </query>
       <operation name="getAlertOp">
          <call-query href="getAlertIds">
             <with-param name="QPARAM_ALERT_ID" query-param="SEARCH_ALERT_ID"/>
          </call-query>
       </operation>
    </data>



    Request and Response

    Request 

    http://localhost:9763/services/ProcedureTest.SOAP11Endpoint/

    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:dat="http://ws.wso2.org/dataservice">
       <soapenv:Header/>
       <soapenv:Body>
          <dat:getAlertOp>
             <dat:SEARCH_ALERT_ID>1</dat:SEARCH_ALERT_ID>
          </dat:getAlertOp>
       </soapenv:Body>
    </soapenv:Envelope>



    Response

    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
       <soapenv:Body>
          <AlertDetails xmlns="http://ws.wso2.org/dataservice">
             <Alerts>
                <TYPE>2</TYPE>
                <ALERTOWNER>Peter</ALERTOWNER>
             </Alerts>
          </AlertDetails>
       </soapenv:Body>
    </soapenv:Envelope>


    Hariprasath ThanarajahStep-by-step guide to creating a third-party web API client connector for Ballerina and invoking its action from a Ballerina main function


    First, we need to understand what Ballerina is and what “third party” means here. Here is an explanation of both.

    What is Ballerina: Ballerina is a new programming language for integration, built on a sequence diagram metaphor. Ballerina is:
    • Simple
    • Intuitive
    • Visual
    • Powerful
    • Lightweight
    • Cloud Native
    • Container Native
    • Fun
    The conceptual model of Ballerina is that of a sequence diagram. Each participant in the integration gets its own lifeline, and Ballerina defines a complete syntax and semantics for how the sequence diagram works and executes the desired integration.
    Ballerina is not designed to be a general-purpose language. Instead, you should use Ballerina if you need to integrate a collection of network connected systems such as HTTP endpoints, Web APIs, JMS services, and databases. The result of the integration can either be just that - the integration that runs once or repeatedly on a schedule, or a reusable HTTP service that others can run.

    What are third-party Ballerina connectors: A connector allows you to interact with a third-party product's functionality and data, enabling you to connect to and interact with the APIs of services such as Twitter, Gmail, and Facebook.

    Requirements

    You need to build ballerina, docerina, and the plugin-maven, in that order.

    Now we move on to how to write this connector. Here we create a connector for Gmail with the getUserProfile operation.

    How to write a ballerina connector

    First, create a maven project with the groupId org.ballerinalang.connectors and the artifactId gmail.

    Add the following parent to the pom:

        <parent>
           <groupId>org.wso2</groupId>
           <artifactId>wso2</artifactId>
           <version>5</version>
        </parent>

    Add the following dependencies to the pom:

    <dependencies>
           <dependency>
               <groupId>org.ballerinalang</groupId>
               <artifactId>ballerina-core</artifactId>
               <version>${ballerina.version}</version>
           </dependency>
           <dependency>
               <groupId>org.ballerinalang</groupId>
               <artifactId>ballerina-native</artifactId>
               <version>${ballerina.version}</version>
           </dependency>
           <dependency>
               <groupId>org.ballerinalang</groupId>
               <artifactId>annotation-processor</artifactId>
               <version>${ballerina.version}</version>
           </dependency>
    </dependencies>

    We need to add the following plugin to copy the resources into the build jar:

    <!-- For creating the ballerina structure from connector structure -->
               <plugin>
                   <groupId>org.apache.maven.plugins</groupId>
                   <artifactId>maven-resources-plugin</artifactId>
                   <version>${mvn.resource.plugins.version}</version>
                   <executions>
                       <execution>
                           <id>copy-resources</id>

                           <phase>validate</phase>
                           <goals>
                               <goal>copy-resources</goal>
                           </goals>
                           <configuration>
                               <outputDirectory>${connectors.source.temp.dir}</outputDirectory>
                               <resources>
                                   <resource>
                                       <directory>gmail/src</directory>
                                       <filtering>true</filtering>
                                    </resource>
                               </resources>
                           </configuration>
                       </execution>
                   </executions>
               </plugin>

    And the following plugin is needed to auto-generate the connector API docs:

               <!-- Generate api doc -->
               <plugin>
                   <groupId>org.ballerinalang</groupId>
                   <artifactId>docerina-maven-plugin</artifactId>
                   <version>${docerina.maven.plugin.version}</version>
                   <executions>
                       <execution>
                           <phase>validate</phase>
                           <goals>
                               <goal>docerina</goal>
                           </goals>
                           <configuration>
                               <outputDir>${project.build.directory}/docs</outputDir>
                               <sourceDir>${connectors.source.temp.dir}</sourceDir>
                           </configuration>
                       </execution>
                   </executions>
               </plugin>

    And the plugins below handle the annotation processing and native validation:

    <!-- For ballerina natives processing/validation -->
               <plugin>
                   <groupId>org.bsc.maven</groupId>
                   <artifactId>maven-processor-plugin</artifactId>
                   <version>${mvn.processor.plugin.version}</version>
                   <configuration>
                       <processors>
                           <processor>org.ballerinalang.natives.annotation.processor.BallerinaAnnotationProcessor</processor>
                       </processors>
                       <options>
                           <packageName>${native.constructs.provider.package}</packageName>
                           <className>${native.constructs.provider.class}</className>
                           <srcDir>${connectors.source.directory}</srcDir>
                           <targetDir>${generated.connectors.source.directory}</targetDir>
                       </options>
                   </configuration>
                   <executions>
                       <execution>
                           <id>process</id>
                           <goals>
                               <goal>process</goal>
                           </goals>
                           <phase>generate-sources</phase>
                       </execution>
                   </executions>
               </plugin>
               <!-- For ballerina natives processing/validation -->
               <plugin>
                   <groupId>org.codehaus.mojo</groupId>
                   <artifactId>exec-maven-plugin</artifactId>
                   <version>${mvn.exec.plugin.version}</version>
                   <executions>
                       <execution>
                           <phase>test</phase>
                           <goals>
                               <goal>java</goal>
                           </goals>
                           <configuration>
                               <mainClass>org.ballerinalang.natives.annotation.processor.NativeValidator</mainClass>
                               <arguments>
                                   <argument>${generated.connectors.source.directory}</argument>
                               </arguments>
                           </configuration>
                       </execution>
                   </executions>
               </plugin>

    So finally, the pom file looks as follows:

    <?xml version="1.0" encoding="UTF-8"?>
    <project xmlns="http://maven.apache.org/POM/4.0.0"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
        <modelVersion>4.0.0</modelVersion>

        <parent>
           <groupId>org.wso2</groupId>
           <artifactId>wso2</artifactId>
           <version>5</version>
        </parent>


        <groupId>org.wso2.ballerina.connectors</groupId>
        <artifactId>gmail</artifactId>
        <version>1.0-SNAPSHOT</version>

        <dependencies>
           <dependency>
               <groupId>org.ballerinalang</groupId>
               <artifactId>ballerina-core</artifactId>
               <version>${ballerina.version}</version>
           </dependency>
           <dependency>
               <groupId>org.ballerinalang</groupId>
               <artifactId>ballerina-native</artifactId>
               <version>${ballerina.version}</version>
           </dependency>
           <dependency>
               <groupId>org.ballerinalang</groupId>
               <artifactId>annotation-processor</artifactId>
               <version>${ballerina.version}</version>
           </dependency>
        </dependencies>

        <build>
           <resources>
               <resource>
                   <directory>src/main/resources</directory>
                   <excludes>
                       <exclude>ballerina/**</exclude>
                   </excludes>
               </resource>
               <!-- copy built-in ballerina sources to the jar -->
               <resource>
                   <directory>${generated.connectors.source.directory}</directory>
                   <targetPath>META-INF/natives</targetPath>
               </resource>
               <!-- copy the connector docs to the jar -->
               <resource>
                   <directory>${project.build.directory}/docs</directory>
                   <targetPath>DOCS</targetPath>
               </resource>
           </resources>
           <plugins>
               <!-- For creating the ballerina structure from connector structure -->
               <plugin>
                   <groupId>org.apache.maven.plugins</groupId>
                   <artifactId>maven-resources-plugin</artifactId>
                   <version>${mvn.resource.plugins.version}</version>
                   <executions>
                       <execution>
                           <id>copy-resources</id>

                           <phase>validate</phase>
                           <goals>
                               <goal>copy-resources</goal>
                           </goals>
                           <configuration>
                               <outputDirectory>${connectors.source.temp.dir}</outputDirectory>
                               <resources>
                                   <resource>
                                       <directory>gmail/src</directory>
                                       <filtering>true</filtering>
                                   </resource>
                               </resources>
                           </configuration>
                       </execution>
                   </executions>
               </plugin>
               <!-- Generate api doc -->
               <plugin>
                   <groupId>org.ballerinalang</groupId>
                   <artifactId>docerina-maven-plugin</artifactId>
                   <version>${docerina.maven.plugin.version}</version>
                   <executions>
                       <execution>
                           <phase>validate</phase>
                           <goals>
                               <goal>docerina</goal>
                           </goals>
                           <configuration>
                               <outputDir>${project.build.directory}/docs</outputDir>
                               <sourceDir>${connectors.source.temp.dir}</sourceDir>
                           </configuration>
                       </execution>
                   </executions>
               </plugin>
               <!-- For ballerina natives processing/validation -->
               <plugin>
                   <groupId>org.bsc.maven</groupId>
                   <artifactId>maven-processor-plugin</artifactId>
                   <version>${mvn.processor.plugin.version}</version>
                   <configuration>
                       <processors>
                           <processor>org.ballerinalang.natives.annotation.processor.BallerinaAnnotationProcessor</processor>
                       </processors>
                       <options>
                           <packageName>${native.constructs.provider.package}</packageName>
                           <className>${native.constructs.provider.class}</className>
                           <srcDir>${connectors.source.directory}</srcDir>
                           <targetDir>${generated.connectors.source.directory}</targetDir>
                       </options>
                   </configuration>
                   <executions>
                       <execution>
                           <id>process</id>
                           <goals>
                               <goal>process</goal>
                           </goals>
                           <phase>generate-sources</phase>
                       </execution>
                   </executions>
               </plugin>
               <!-- For ballerina natives processing/validation -->
               <plugin>
                   <groupId>org.codehaus.mojo</groupId>
                   <artifactId>exec-maven-plugin</artifactId>
                   <version>${mvn.exec.plugin.version}</version>
                   <executions>
                       <execution>
                           <phase>test</phase>
                           <goals>
                               <goal>java</goal>
                           </goals>
                           <configuration>
                               <mainClass>org.ballerinalang.natives.annotation.processor.NativeValidator</mainClass>
                               <arguments>
                                   <argument>${generated.connectors.source.directory}</argument>
                               </arguments>
                           </configuration>
                       </execution>
                   </executions>
               </plugin>

           </plugins>
        </build>
        <properties>
           <ballerina.version>0.8.0-SNAPSHOT</ballerina.version>
           <mvn.exec.plugin.version>1.5.0</mvn.exec.plugin.version>
           <mvn.processor.plugin.version>2.2.4</mvn.processor.plugin.version>
           <mvn.resource.plugins.version>3.0.2</mvn.resource.plugins.version>

           <!-- Path to the generated natives ballerina files temp directory -->
           <native.constructs.provider.package>org.wso2.ballerina.connectors</native.constructs.provider.package>
           <native.constructs.provider.class>BallerinaConnectorsProvider</native.constructs.provider.class>
           <generated.connectors.source.directory>${project.build.directory}/natives</generated.connectors.source.directory>
           <connectors.source.directory>${connectors.source.temp.dir}</connectors.source.directory>
           <connectors.source.temp.dir>${basedir}/target/extra-resources</connectors.source.temp.dir>
           <docerina.maven.plugin.version>0.8.0-SNAPSHOT</docerina.maven.plugin.version>
        </properties>
    </project>

    Create the gmail connector and the operation (action)

    Create the following folder structure under the root folder:

    gmail/src/org/wso2/ballerina/connectors/gmail

    Under that folder, create a Ballerina file called gmailConnector.bal.



    Here we create the connector for gmail in gmailConnector.bal as follows:

    package org.wso2.ballerina.connectors.gmail; // the package name must match the folder structure

    import ballerina.net.http;
    import ballerina.lang.messages;

    // These annotations are used by docerina to generate the API docs at build time
    @doc:Description("Gmail client connector")
    @doc:Param("userId: The userId of the Gmail account which means the email id")
    @doc:Param("accessToken: The accessToken of the Gmail account to access the gmail REST API")
    connector ClientConnector (string userId, string accessToken) {

        http:ClientConnector gmailEP = create http:ClientConnector("https://www.googleapis.com/gmail");

        @doc:Description("Retrieve the user profile")
        @doc:Return("response object")
        action getUserProfile(ClientConnector g) (message) {

           message request = {};

           string getProfilePath = "/v1/users/" + userId + "/profile";
           messages:setHeader(request, "Authorization", "Bearer " + accessToken);
           message response = http:ClientConnector.get(gmailEP, getProfilePath, request);

           return response;
        }
    }

    In the above code we define a connector for gmail using the connector keyword. The connector is named ClientConnector, and userId and accessToken are the parameters needed to invoke the gmail getUserProfile action.

    Here we create an instance of an http ClientConnector to call the API endpoint. For that, we pass the gmail base URL “https://www.googleapis.com/gmail” to the http ClientConnector.

    Then we define an action for the particular operation, as shown above:

    action getUserProfile(ClientConnector g) (message) {
    }

    Here action is the keyword, the action name is getUserProfile, and the return type is message (the return type must be declared).

    Then we call the getUserProfile endpoint using the HTTP GET method as follows:

    message response = http:ClientConnector.get(gmailEP, getProfilePath, request);

    For authentication, we set the Authorization header to Bearer <The accessToken>. A valid accessToken must be passed to invoke this action.

    This connector does not include a token refresh mechanism. If you need the refresh flow, you can integrate the ballerinalang oauth2 connector with the ballerinalang gmail connector.

    After that, you need to add a dummy class to build the jar.


    The Builder class should look as follows:

    import org.ballerinalang.natives.annotations.BallerinaConnector;

    /**
    * This is a dummy class needed for annotation processor plugin.
    */
    @BallerinaConnector(
           connectorName = "ignore"
    )
    public class Builder {

    }

    Then go to the root folder and build it using mvn clean install. If the build succeeds, you will find the built jar in the target folder.

    How to invoke the action:

    When you build ballerina, you will find the ballerina zip under modules/distribution/target.

    Extract the zip file and place the built gmail jar into the ballerina-{version}/bre/lib folder of the extracted ballerina distribution.
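
    For example, assuming the versions from the pom above and that both the connector project and the extracted distribution are in the current directory:

    $ unzip ballerina-0.8.0-SNAPSHOT.zip
    $ cp gmail/target/gmail-1.0-SNAPSHOT.jar ballerina-0.8.0-SNAPSHOT/bre/lib/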

    And create a main function to invoke the action as follows,

    import org.wso2.ballerina.connectors.gmail;

    import ballerina.lang.jsons;
    import ballerina.lang.messages;
    import ballerina.lang.system;

    function main (string[] args) {

        gmail:ClientConnector gmailConnector = create gmail:ClientConnector(args[0], args[1]);

        message gmailResponse;
        json gmailJSONResponse;
        string deleteResponse;

        gmailResponse = gmail:ClientConnector.getUserProfile(gmailConnector);
        gmailJSONResponse = messages:getJsonPayload(gmailResponse);
        system:println(jsons:toString(gmailJSONResponse));

    }

    Save it as samples.bal and place it in the ballerina-{version}/bin folder and invoke the action with the following command.

    bin$ ./ballerina run main samples.bal tharis63@gmail.com ya29.Glz4A3Vh7XwHd8XQQKe1qMls5J7KmIBaC6y5fClTcKoDO45TlYN_BRCH7RH2mzknJQ4_3mdElAk1tM5VD-oKf6Zkn7rK2HsNtfb6nqy6tW2Qifdtzo16bjuA4pNYsw

    Alternatively, the main function can be written with the credentials inline:

    import org.wso2.ballerina.connectors.gmail;

    import ballerina.lang.jsons;
    import ballerina.lang.messages;
    import ballerina.lang.system;

    function main (string[] args) {

        string username = "tharis63@gmail.com";
        string accessToken = "ya29.Glz4A3Vh7XwHd8XQQKe1qMls5J7KmIBaC6y5fClTcKoDO45TlYN_BRCH7RH2mzknJQ4_3mdElAk1tM5VD-oKf6Zkn7rK2HsNtfb6nqy6tW2Qifdtzo16bjuA4pNYsw";
        gmail:ClientConnector gmailConnector = create gmail:ClientConnector(username,accessToken);

        message gmailResponse;
        json gmailJSONResponse;
        string deleteResponse;

        gmailResponse = gmail:ClientConnector.getUserProfile(gmailConnector);
        gmailJSONResponse = messages:getJsonPayload(gmailResponse);
        system:println(jsons:toString(gmailJSONResponse));

    }


    To invoke the above action, use the following command:

    bin$ ./ballerina run main samples.bal

    Both main functions produce a response like the following:

    {"emailAddress":"tharis63@gmail.com","messagesTotal":36033,"threadsTotal":29027,"historyId":"2635536"}

    That’s it!

    Welcome to Ballerina Language.




    Suhan DharmasuriyaBallerina 101

    In this tutorial you will learn the basic concepts of Ballerina and why we call it Flexible, Powerful, and Beautiful. You will also learn to run your ballerina programs in two modes, server and standalone, with simple examples.

    Introduction

    Ballerina is a programming language designed from the ground up specifically for integration which allows you to draw code to life. It allows you to connect apps and services to handle all kinds of integration scenarios. Why do we call it Flexible, Powerful and Beautiful?

    You can build your integrations by drawing sequence diagrams, or write your code in swagger or in ballerina. You can add plugins and write ballerina code in IntelliJ IDEA[2], Vim[3], Atom[4], Sublime Text 3[5] and more. Therefore it is FLEXIBLE.

    Ballerina can handle everything from a simple Hello World program to complex service chaining and content-based routing scenarios. It comes with native support for REST, Swagger, JSON, and XML, and it includes connectors for popular services like Twitter and Facebook. It has an incredibly fast lightweight runtime which can be deployed in a production environment without any development tools. Therefore it is POWERFUL.

    The integration code is written for you as you create the diagram in Ballerina Composer.


    Cool, isn't it? All you have to do is drag and drop the elements needed for your use case onto a canvas, which easily creates your integration scenario for you. You can switch between Source view and Design view anytime. Therefore it is BEAUTIFUL.

    Key Concepts

    Each Ballerina program represents a discrete unit of functionality that performs an integration task. The complexity of the ballerina program is up to you.

    You can create your ballerina program in two ways.
    1. Server mode: as a service that runs in the Ballerina server and awaits requests over HTTP.
    2. Standalone mode: as an executable program that executes a main() function and then exits.

    Following are the available constructs you can use [1].

    1. Service: When defining a Ballerina program as a service instead of an executable program, the service construct acts as the top-level container that holds all the integration logic and can interact with the rest of the world. Its base path is the context part of the URL that you use when sending requests to the service.

    2. Resource: A resource is a single request handler within a service. When you create a service in Ballerina using the visual editor, a default resource is automatically created as well. The resource contains the integration logic.

    3. Function: A function is a single operation. Ballerina includes a set of native functions you can call and you can define additional functions within your Ballerina programs.
     The main() function contains the core integration logic when creating an executable program instead of a service. When you run the program, the main() function executes, and then the program terminates. You can define additional functions, connectors, etc. inside the program and call them from main(). See here for a complex example.

    4. Worker: A worker is a thread that executes a function.

    5. Connector: A connector represents a participant in the integration and is used to interact with an external system or a service you've defined in Ballerina. Ballerina includes a set of standard connectors that allow you to connect to Twitter, Facebook, and more, and you can define additional connectors within your Ballerina programs.

    6. Action: An action is an operation you can execute against a connector. It represents a single interaction with a participant of the integration.

    See language reference for more information.

    Quick Start

    1. Download complete ballerina tools package from http://ballerinalang.org/downloads/
    2. Unzip it on your computer and let's call it <ballerina_home>
    e.g.: /WSO2/ballerina/ballerina-tools-<version>
    3. Add <ballerina_home>/bin directory to your $PATH environment variable so that you can run ballerina commands from anywhere.
    e.g.: on Mac OS X


    export BALLERINA_HOME=/WSO2/ballerina/ballerina-tools-0.8.1
    export PATH=$BALLERINA_HOME/bin:$PATH

    Run HelloWorld - Standalone Mode

    Now we are going to run the classic HelloWorld example using a main() function, i.e. in standalone mode, as follows.

    1. Create /WSO2/ballerina/tutorial/helloWorld directory.
    2. In this directory, create the file helloWorld.bal with the following contents.


    import ballerina.lang.system;

    function main (string[] args) {
      system:println("Hello, World!");
    }

    This is how the famous hello world sample looks in the ballerina programming language!

    3. Issue the following command to run the main() function in helloWorld.bal file.
    $ ballerina run main helloWorld.bal

    You can observe the following output on the command line.
    > Hello, World!

    After the HelloWorld program executes, Ballerina stops. This is useful when you want to execute a program once and then stop as soon as it has finished its job: it runs the main() function of the program you specify and then exits.

    Run HelloWorld - Server Mode

    Here Ballerina will deploy one or more services in the ballerina program that wait for requests.
    1. Create the file helloWorldService.bal with the following contents.

    import ballerina.lang.messages;
    @http:BasePath ("/hello")
    service helloWorld {

        @http:GET
        resource sayHello (message m) {
            message response = {};
            messages:setStringPayload(response, "Hello, World!");
            reply response;

        }

    }

    2. Issue the following command to deploy the helloWorld service in the helloWorldService.bal file.
    $ ballerina run service helloWorldService.bal

    You can observe from the following output on the command line that the service is waiting for requests.
    > ballerina: deploying service(s) in 'helloWorldService.bal'
    > ballerina: started server connector http-9090

    The Ballerina server is available at localhost:9090, and the helloWorld service is available at the context hello.

    3. Open another command line window and use the curl client to call the helloWorld service as follows.
    $ curl -v http://localhost:9090/hello

    The service receives the request, executes its logic, and replies with "Hello, World!", as shown in the curl output below.
    *   Trying 127.0.0.1...
    * Connected to localhost (127.0.0.1) port 9090 (#0)
    > GET /hello HTTP/1.1
    > Host: localhost:9090
    > User-Agent: curl/7.49.1
    > Accept: */*
    < HTTP/1.1 200 OK
    < Content-Type: text/plain
    < Content-Length: 13
    * Connection #0 to host localhost left intact
    Hello, World!

    Notice that the Ballerina server is still running in the background, waiting for more requests to serve. 

    4. Stop the Ballerina server by pressing Ctrl-C (Command-C).


    References:
    [1] http://ballerinalang.org/
    [2] https://github.com/ballerinalang/plugin-intellij/releases
    [3] https://github.com/ballerinalang/plugin-vim/releases
    [4] https://github.com/ballerinalang/plugin-atom/releases
    [5] https://github.com/ballerinalang/plugin-sublimetext3/releases


    Himasha GurugeAdd failed endpoint name and address through fault sequence in WSO2 ESB

    When generating custom fault sequence messages, one common use case is when you need to send the endpoint name and endpoint address of the failed endpoint back to the client. This can be done by getting the value of two properties which are 'last_endpoint' and 'ENDPOINT_PREFIX'.

    However, you can't use the 'last_endpoint' property value directly, as it returns an endpoint object. Therefore you have to write a class mediator like the one below, extract the endpoint name from that object, and set it on a custom property that you use in your fault sequence. Here the method is shown as a complete mediator class; the package and class name match the org.test.EPMediator reference in the sequence that follows.

    package org.test;

    import org.apache.synapse.MessageContext;
    import org.apache.synapse.endpoints.AbstractEndpoint;
    import org.apache.synapse.mediators.AbstractMediator;

    public class EPMediator extends AbstractMediator {

        public boolean mediate(MessageContext mc) {
            // Get the 'last_endpoint' property from the message context
            AbstractEndpoint failedEP = (AbstractEndpoint) mc.getProperty("last_endpoint");
            // Get the name of the failed endpoint
            String failedEPName = failedEP.getName();
            // Set the value on the custom property read by the fault sequence
            mc.setProperty("default_ep", failedEPName);
            return true;
        }
    }

    Now you can create your fault sequence like below.

    <faultSequence>
             <property name="default_ep" value="default"/>
             <class name="org.test.EPMediator"/>
             <payloadFactory media-type="xml">
                <format>
                   <tp:fault xmlns:tp="http://test.com">
                      <tp:message>Error connecting to the backend</tp:message>
                      <tp:description>Endpoint $1 with address $2 failed!</tp:description>
                   </tp:fault>
                </format>
                <args>
                   <arg evaluator="xml" expression="get-property('default_ep')"/>
                   <arg evaluator="xml" expression="get-property('ENDPOINT_PREFIX')"/>
                </args>
             </payloadFactory>
             <send/>
    </faultSequence>

    Imesh GunaratneRethinking Service Integrations with Microservices Architecture

    Image Reference: https://www.pexels.com/photo/ballet-ballet-dancer-beautiful-choreography-206274/

    The dawn of the microservices architecture (MSA) began revolutionizing the software paradigm in the past few years by introducing a new architectural style for optimizing infrastructure usage. MSA defines a complete methodology for developing software applications as a collection of independently deployable, lightweight services, in which each service runs in a dedicated process with decentralized control of languages and data. In spite of the wide variety of frameworks introduced for implementing business services in this architectural style, almost none were introduced for implementing service integrations. Very recently WSO2 initiated a new open source programming language and a complete ecosystem for this specific purpose.

    A new programming language? Yes, you heard it right; it’s not another integration framework with many different domain-specific languages (DSLs). It’s a purpose-built programming language for integration, with native constructs for implementing enterprise integration patterns (EIPs), support for industry-standard protocols and message formats, and optimizations for containerized environments. It is worth noting that Ballerina was designed from the ground up, with nearly a decade of experience in implementing integration solutions at WSO2, with the vision of making service integrations much easier to design, implement, and deploy, and, more importantly, adherent to MSA.

    The Impact on Microservices Architecture

    Figure 1: Using a Monolithic ESB in Outer Architecture

    Today most enterprises seek mechanisms for integrating services from various internal and external service providers to meet their business needs. Traditionally this could be achieved using an integration framework, an ESB, or an integration suite, depending on the complexity of the integrations. As illustrated in figure 1, one option would be to use a monolithic ESB in the outer architecture while implementing business services in the inner architecture in line with MSA. Although technically feasible, this contradicts the main design goals of MSA, as an ESB or an integration suite consumes a considerable amount of resources, takes longer to bootstrap, relies on in-process multi-tenancy, and carries comparatively higher development and deployment costs.

    For example, WSO2 ESB needs around 2 GB of memory for running a typical integration solution, takes around 20 to 30 seconds to bootstrap, and may not share resources evenly among tenants with in-JVM multi-tenancy; the development process may take longer as it depends on a single set of configurations and data stores, and the deployment would utilize more resources than optimally needed. Considering all of the above, plus the vision of adopting serverless architecture, a much lighter, ultra-fast integration framework with higher throughput is needed to gain the best out of MSA.

    Figure 2: A Reference Architecture for Implementing Integration Services in MSA

    The above figure illustrates a reference architecture for implementing integration services in MSA. Unlike an ESB, where a collection of integration workflows is deployed in a single process, in this architecture each integration workflow has its own process and container. Hence services can be independently designed, developed, deployed, scaled, and managed. More importantly, it allows resources to be specifically allocated to each integration service container cluster, optimizing the overall resource usage. Moreover, container cluster managers such as Kubernetes provide completely isolated contexts within a single container host cluster for managing multi-tenancy. Therefore this approach naturally fits MSA for implementing integration services.

    Ballerina Language Design

    As explained earlier, the Ballerina language has been carefully designed by studying the constructs of widely used programming languages such as Java, Golang, C, and C++. The following sections illustrate the high-level language design in brief:

    Packages

    The package is the topmost container in Ballerina; it holds functions or services. It is important to note that the package definition is optional, and if a package is defined, ballerina source files need to be stored in a folder structure matching the package hierarchy.
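
    For instance, a minimal sketch (assuming the file is saved as org/foo/bar/hello.bal relative to the source root):

    package org.foo.bar;

    import ballerina.lang.system;

    function printGreeting () {
        // the package declaration above must match the org/foo/bar folder path
        system:println("Hello from org.foo.bar!");
    }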

    Functions

    A function represents a set of instructions that performs a specific task and is intended to be reusable. There are mainly two types of functions: native functions and functions written in Ballerina itself. Functions support returning multiple return parameters and throwing exceptions.

    Main Function

    The main function is the entry point of Ballerina executable programs. Executables can be used for implementing integration logic that needs to run in the background on a time interval or an event trigger.
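
    A minimal executable program, consistent with the other samples in this post, looks like this:

    import ballerina.lang.system;

    function main (string[] args) {
        // runs once and exits; an external scheduler can re-run it on an interval
        system:println("integration task executed");
    }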

    Services

    Ballerina services allow integration workflows to be exposed as services. Services are protocol agnostic and can be extended to work with any messaging protocol with required message formats. Currently, services can be exposed as HTTP REST APIs, WebSockets, HTTP/2 services, and messages can also be delivered to mediation pipelines via JMS topics/queues (using an external broker), and files.

    Resources

    A resource represents a functional unit of a Ballerina service. A service exposed via a given protocol uses resources for managing different types of messages; for example, an HTTP REST API uses resources for implementing API resources, while a JMS service uses resources for receiving messages from a topic/queue.

    Workers

    A worker is a thread according to general programming terms. Workers provide the ability to execute a series of integration functions in parallel for reducing the overall mediation latency of an integration service.

    Connectors

    Connectors provide language extensions for talking to well-known external services such as Twitter, Google, Medium, etcd, and Kubernetes from Ballerina; the sketch below shows the general shape. Moreover, connectors also provide the ability to plug authentication and authorization features into the language.
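
    As a quick illustration, this sketch uses the generic HTTP client connector that ships with the language against a hypothetical endpoint (the base URL and path are placeholders):

    import ballerina.lang.messages;
    import ballerina.net.http;

    function fetchStatus () (string) {
        // create a connector instance pointing at the assumed base URL
        http:ClientConnector statusEP = create http:ClientConnector("http://example.com");
        message request = {};
        message response = http:ClientConnector.get(statusEP, "/status", request);
        return messages:getStringPayload(response);
    }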

    Ballerina Composer

    Figure 3: Ballerina Composer Design View

    Composer is the visual designer tool of the Ballerina language. It has been designed as a web application and is shipped with the Ballerina Tools distribution. Execute the set of commands below to download and run it; once started, access http://localhost:9091 in a web browser:

    $ version=0.8.1 # change this to the latest version
    $ wget http://ballerinalang.org/downloads/ballerina-tools/ballerina-tools-${version}.zip
    $ unzip ballerina-tools-${version}.zip
    $ cd ballerina-tools-${version} # consider this as [ballerina.home]
    $ cd bin/
    $ ./composer

    Composer not only provides a charming graphical designer; it also provides a text editor with syntax highlighting and code completion, and a Swagger editor for HTTP-based services. Composer offers all the language constructs and native functions needed for implementing integration programs and services. More interestingly, those programs can be run and debugged from the same editor.

    Figure 4: Ballerina Composer Source View

    For detailed information on Composer, please refer to this article.

    Ballerina CLI

    Ballerina ships two distributions: one for the Ballerina runtime and another for the tooling. The Ballerina runtime includes only the features required for running Ballerina programs and services. The tools distribution includes features for executing test cases, generating API documentation, generating Swagger definitions, and building Docker images:

    $ cd [ballerina.home]/bin/
    $ ./ballerina --help
    Ballerina is a flexible, powerful and beautiful programming language designed for integration.
    * Find more information at http://ballerinalang.org

    Usage:
      ballerina [command] [options]

    Available Commands:
      run      run Ballerina main/service programs
      build    create Ballerina program archives
      docker   create docker images for Ballerina program archives
      doc      generate Ballerina API documentation
      swagger  Generate connector/service using swagger definition
      test     test Ballerina program

    Flags:
      --help, -h  for more information

    Use "ballerina help [command]" for more information about a command.

    Ballerina Packaging Model

    Ballerina programs and services can be packaged into archive files for distribution. These files take the extension BSZ. Consider the sample HTTP service below; its source code can be found here:

    .
    └── hello-ballerina
        ├── README.md
        └── org
            └── foo
                └── bar
                    ├── helloWorldService.bal
                    └── helloWorldServiceTest.bal

    The following command can be executed to generate an archive file for this service:

    $ cd /path/to/hello-ballerina/
    $ /path/to/ballerina-home/bin/ballerina build service org/foo/bar/

    The generated bar.bsz file contains the following files:

    .
    ├── BAL_INF
    │   └── ballerina.conf
    ├── ballerina
    │   └── test
    │       └── assert.bal
    └── org
        └── foo
            └── bar
                ├── helloWorldService.bal
                └── helloWorldServiceTest.bal
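
    The archive can then presumably be run in the same way as a .bal file, by passing the archive name to the run command:

    $ /path/to/ballerina-home/bin/ballerina run service bar.bsz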

    Ballerina API Documentation Generator

    Ballerina tools distribution ships an API documentation generation tool called Docerina as a part of the Ballerina CLI. This allows developers to generate API documentation for Ballerina functions, connectors, structs, and type mappers. Currently, it does not include API documentation generation for Ballerina services as they are already supported with the Swagger integration for HTTP based services. In a future release, it may support non-HTTP services such as JMS and file.

    The API documentation of the Ballerina native functions in the v0.8 release can be found here. Execute the ballerina doc --help command for more information on generating API documentation for Ballerina code:

    $ cd ballerina-tools-${version}/bin/
    $ ./ballerina doc --help
    generate Ballerina API documentation

    Usage:
      ballerina doc <sourcepath>... [-o outputdir] [-n] [-e excludedpackages] [-v]

      sourcepath:
        Paths to the directories where Ballerina source files reside or a path to
        a Ballerina file which does not belong to a package

    Flags:
      --output, -o   path to the output directory where the API documentation will be written to
      --native, -n   read the source as native ballerina code
      --exclude, -e  a comma separated list of package names to be filtered from the documentation
      --verbose, -v  enable debug level logs

    Ballerina Test Framework

    Ballerina provides a test framework called Testerina for implementing unit tests for Ballerina code. In the v0.8 release, the following native test functions are available for starting services, asserting values, and setting mock values:

    package ballerina.test;
    startService(string servicename)
    assertTrue(boolean condition)
    assertTrue(boolean condition, string message)
    assertFalse(boolean condition)
    assertFalse(boolean condition, string message)
    assertEquals(string actual, string expected)
    assertEquals(string actual, string expected, string message)
    assertEquals(int actual, int expected)
    assertEquals(int actual, int expected, string message)
    assertEquals(float actual, float expected)
    assertEquals(float actual, float expected, string message)
    assertEquals(boolean actual, boolean expected)
    assertEquals(boolean actual, boolean expected, string message)
    assertEquals(string[] actual, string[] expected)
    assertEquals(string[] actual, string[] expected, string message)
    assertEquals(float[] actual, float[] expected)
    assertEquals(float[] actual, float[] expected, string message)
    assertEquals(int[] actual, int[] expected)
    assertEquals(int[] actual, int[] expected, string message)
    package ballerina.mock;
    setValue(string pathExpressionToMockableConnector)

    Following is a sample HTTP service written in Ballerina:

    package org.foo.bar;

    import ballerina.lang.messages as message;

    @http:BasePath ("/hello")
    service helloService {

        @http:GET
        resource helloResource (message m) {
            message response = {};
            message:setStringPayload(response, "Hello world!");
            reply response;
        }
    }

    It can be tested by implementing a test case as follows:

    package org.foo.bar;

    import ballerina.lang.messages as message;
    import ballerina.test;
    import ballerina.net.http;

    function testHelloService () {
        message request = {};
        message response = {};
        string responseString;

        string serviceURL = test:startService("helloService");
        http:ClientConnector endpoint = create http:ClientConnector(serviceURL);
        response = http:ClientConnector.get(endpoint, "/hello", request);
        responseString = message:getStringPayload(response);
        test:assertEquals(responseString, "Hello world!");
    }

    Ballerina Container Support

    The Ballerina docker CLI command can be used for creating Docker images for Ballerina program archives. Execute the command below for more information:

    $ cd ballerina-tools-${version}/bin/
    $ ./ballerina docker --help
    create docker images for Ballerina program archives

    Usage:
      ballerina docker <package-name> [--tag | -t <image-name>] [--host | -H <docker-hostURL>] [--yes | -y] [--help | -h]

    Flags:
      --tag, -t   docker image name. <image-name>:<version>
      --yes, -y   assume yes for prompts
      --host, -H  docker Host. http://<ip-address>:<port>
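
    Putting it together with the archive built earlier, building an image would look something like this (the image name is illustrative):

    $ ./ballerina docker bar.bsz --tag bar-service:0.1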

    Conclusion

    Ballerina is a brand new open source programming language purpose-built for implementing integration services in MSA. It provides a complete ecosystem for designing, developing, documenting, testing, and deploying integration workflows. Feel free to try it out, give feedback, report issues, and most importantly contribute back. Happy dancing with Ballerina!!


    Lakshman UdayakanthaSimple wait and notify example in Java

    This example demonstrates wait and notify. The main thread (threadA) creates threadB and starts it. After threadB starts, it prints that it has started and goes to the WAITING state by calling wait(). Meanwhile, threadA sleeps for 3 seconds, prints that it has awakened, and notifies threadB by calling notify(). This moves threadB to the RUNNABLE state; its execution resumes and it prints that it was notified.

    public class ThreadA {
        public static void main(String[] args) throws InterruptedException {
            ThreadB threadB = new ThreadB();
            Thread thread = new Thread(threadB);
            thread.start();
            Thread.sleep(3000);
            System.out.println("threadA is awaked.......");
            synchronized (threadB) {
                threadB.notify();
            }
        }
    }

    public class ThreadB implements Runnable {
        public void run() {
            System.out.println("threadB is started................");
            synchronized (this) {
                try {
                    wait();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                System.out.println("threadB is notified.............");
            }
        }
    }

    Note that wait() and notify() must be called inside a synchronized context; otherwise they throw java.lang.IllegalMonitorStateException. We have to pass a lock object to the synchronized block, and that object's monitor is held during the execution of the block. In this case I pass threadB itself as the lock object.
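
    For example, this minimal snippet fails at runtime because the calling thread does not own the monitor of lock:

    public class MonitorDemo {
        public static void main(String[] args) {
            Object lock = new Object();
            // notify() is called outside a synchronized (lock) block,
            // so the JVM throws java.lang.IllegalMonitorStateException here
            lock.notify();
        }
    }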

    Hariprasath ThanarajahHow to invoke the Ballerina Gmail connector actions using ballerina main function?

    Ballerina is a general purpose, concurrent and strongly typed programming language with both textual and graphical syntaxes, optimized for integration.

    Follow http://ballerinalang.org/docs/user-guide/0.8/ to understand and play with ballerina and its features.

    Here we are going to look at the ballerina gmail connector and how to invoke its actions by writing a ballerina main function.

    Requirements

    1. Download ballerina tool distribution and add the bin path to $PATH environment

    2. Create a main function to invoke the connector action.

    3. How to invoke the action.

    Download ballerina tool distribution and add the bin path to $PATH environment

    Download the Ballerina Tools distribution, which includes the Ballerina runtime plus the visual editor and other tools, including the connectors used here, from http://www.ballerinalang.org and unzip it on your computer.

    Add the <ballerina_home>/bin directory to your $PATH environment variable so that you can run the Ballerina commands from anywhere.

    The post at https://hariwso2.blogspot.com/2017/02/step-by-step-guide-to-create-third.html shows how to create a ballerina connector from scratch and invoke its actions via the main function. In this post, we instead use the connectors already bundled with the ballerina tools distribution to invoke a third-party API via the ballerina main function.

    At the moment the Ballerina Tools distribution ships with 12 connectors. Here we invoke gmail actions like createMail, getUserProfile, listMails, etc.


    Create a main function to invoke the connector action

    In Ballerina, we create a main function to invoke the actions of a connector such as gmail, twitter, or facebook.

    Create a main function within a test.bal file as follows:

    import org.ballerinalang.connectors.gmail;

    import ballerina.lang.jsons;
    import ballerina.lang.messages;
    import ballerina.lang.system;

    function main (string[] args) {

        gmail:ClientConnector gmailConnector = create gmail:ClientConnector(args[1], args[2], args[3], args[4], args[5]);

        message gmailResponse;
        json gmailJSONResponse;
        string deleteResponse;

        if( args[0] == "getUserProfile") {
            gmailResponse = gmail:ClientConnector.getUserProfile(gmailConnector);
            gmailJSONResponse = messages:getJsonPayload(gmailResponse);
            system:println(jsons:toString(gmailJSONResponse));
        }

        if( args[0] == "createDraft") {
            gmailResponse = gmail:ClientConnector.createDraft(gmailConnector , args[6], args[7], args[8],
            args[9], args[10], args[11], args[12], args[13] );
            gmailJSONResponse = messages:getJsonPayload(gmailResponse);
            system:println(jsons:toString(gmailJSONResponse));
        }

        if( args[0] == "updateDraft") {
            gmailResponse = gmail:ClientConnector.updateDraft(gmailConnector, args[6], args[7], args[8], args[9],
            args[10], args[11], args[12], args[13], args[14]);
            gmailJSONResponse = messages:getJsonPayload(gmailResponse);
            system:println(jsons:toString(gmailJSONResponse));
        }

        if( args[0] == "readDraft") {
            gmailResponse = gmail:ClientConnector.readDraft(gmailConnector, args[6], args[7]);
            gmailJSONResponse = messages:getJsonPayload(gmailResponse);
            system:println(jsons:toString(gmailJSONResponse));
        }

        if( args[0] == "listDrafts") {
            gmailResponse = gmail:ClientConnector.listDrafts(gmailConnector, args[6], args[7], args[8], args[9]);
            gmailJSONResponse = messages:getJsonPayload(gmailResponse);
            system:println(jsons:toString(gmailJSONResponse));
        }

        if( args[0] == "deleteDraft") {
            gmailResponse = gmail:ClientConnector.deleteDraft(gmailConnector, args[6]);
            gmailJSONResponse = messages:getJsonPayload(gmailResponse);
            deleteResponse = jsons:toString(gmailJSONResponse);
            if(deleteResponse == "null"){
                system:println("Draft with id: " + args[6] + " deleted successfully.");
            }
        }

        if( args[0] == "listHistory") {
            gmailResponse = gmail:ClientConnector.listHistory(gmailConnector, args[6], args[7], args[8], args[9]);
            gmailJSONResponse = messages:getJsonPayload(gmailResponse);
            system:println(jsons:toString(gmailJSONResponse));
        }

        if( args[0] == "createLabel") {
            gmailResponse = gmail:ClientConnector.createLabel(gmailConnector, args[6], args[7], args[8],
            args[9], args[10], args[11], args[12], args[13]);
            gmailJSONResponse = messages:getJsonPayload(gmailResponse);
            system:println(jsons:toString(gmailJSONResponse));
        }

        if( args[0] == "deleteLabel") {
            gmailResponse = gmail:ClientConnector.deleteLabel(gmailConnector, args[6]);
            gmailJSONResponse = messages:getJsonPayload(gmailResponse);
            deleteResponse = jsons:toString(gmailJSONResponse);
            if(deleteResponse == "null"){
                system:println("Label with id: " + args[6] + " deleted successfully.");
            }
        }

        if( args[0] == "listLabels") {
            gmailResponse = gmail:ClientConnector.listLabels(gmailConnector);
            gmailJSONResponse = messages:getJsonPayload(gmailResponse);
            system:println(jsons:toString(gmailJSONResponse));
        }

        if( args[0] == "updateLabel") {
            gmailResponse = gmail:ClientConnector.updateLabel(gmailConnector, args[6], args[7], args[8], args[9],
            args[10], args[11], args[12], args[13], args[14]);
            gmailJSONResponse = messages:getJsonPayload(gmailResponse);
            system:println(jsons:toString(gmailJSONResponse));
        }

        if( args[0] == "readLabel") {
            gmailResponse = gmail:ClientConnector.readLabel(gmailConnector, args[6]);
            gmailJSONResponse = messages:getJsonPayload(gmailResponse);
            system:println(jsons:toString(gmailJSONResponse));
        }

        if( args[0] == "readThread") {
            gmailResponse = gmail:ClientConnector.readThread(gmailConnector, args[6], args[7], args[8]);
            gmailJSONResponse = messages:getJsonPayload(gmailResponse);
            system:println(jsons:toString(gmailJSONResponse));
        }

        if( args[0] == "listThreads") {
            gmailResponse = gmail:ClientConnector.listThreads(gmailConnector, args[6], args[7], args[8], args[9], args[10]);
            gmailJSONResponse = messages:getJsonPayload(gmailResponse);
            system:println(jsons:toString(gmailJSONResponse));
        }

        if( args[0] == "deleteThread") {
            gmailResponse = gmail:ClientConnector.deleteThread(gmailConnector, args[6]);
            gmailJSONResponse = messages:getJsonPayload(gmailResponse);
            deleteResponse = jsons:toString(gmailJSONResponse);
            if(deleteResponse == "null"){
                system:println("Thread with id: " + args[6] + " deleted successfully.");
            }
        }

        if( args[0] == "trashThread") {
            gmailResponse = gmail:ClientConnector.trashThread(gmailConnector, args[6]);
            gmailJSONResponse = messages:getJsonPayload(gmailResponse);
            system:println(jsons:toString(gmailJSONResponse));
        }

        if( args[0] == "unTrashThread") {
            gmailResponse = gmail:ClientConnector.unTrashThread(gmailConnector, args[6]);
            gmailJSONResponse = messages:getJsonPayload(gmailResponse);
            system:println(jsons:toString(gmailJSONResponse));
        }

        if( args[0] == "listMails") {
            gmailResponse = gmail:ClientConnector.listMails(gmailConnector, args[6], args[7], args[8], args[9], args[10]);
            gmailJSONResponse = messages:getJsonPayload(gmailResponse);
            system:println(jsons:toString(gmailJSONResponse));
        }

        if( args[0] == "sendMail") {
            gmailResponse = gmail:ClientConnector.sendMail(gmailConnector, args[6], args[7], args[8], args[9], args[10], args[11],
            args[12], args[13]);
            gmailJSONResponse = messages:getJsonPayload(gmailResponse);
            system:println(jsons:toString(gmailJSONResponse));
        }

        if( args[0] == "modifyExistingMessage") {
            gmailResponse = gmail:ClientConnector.modifyExistingMessage(gmailConnector, args[6], args[7], args[8]);
            gmailJSONResponse = messages:getJsonPayload(gmailResponse);
            system:println(jsons:toString(gmailJSONResponse));
        }

        if( args[0] == "readMail") {
            gmailResponse = gmail:ClientConnector.readMail(gmailConnector, args[6], args[7], args[8]);
            gmailJSONResponse = messages:getJsonPayload(gmailResponse);
            system:println(jsons:toString(gmailJSONResponse));
        }

        if( args[0] == "deleteMail") {
            gmailResponse = gmail:ClientConnector.deleteMail(gmailConnector, args[6]);
            gmailJSONResponse = messages:getJsonPayload(gmailResponse);
            deleteResponse = jsons:toString(gmailJSONResponse);
            if(deleteResponse == "null"){
                system:println("Mail with id: " + args[6] + " deleted successfully.");
            }
        }

        if( args[0] == "trashMail") {
            gmailResponse = gmail:ClientConnector.trashMail(gmailConnector, args[6]);
            gmailJSONResponse = messages:getJsonPayload(gmailResponse);
            system:println(jsons:toString(gmailJSONResponse));
        }

        if( args[0] == "unTrashMail") {
            gmailResponse = gmail:ClientConnector.unTrashMail(gmailConnector, args[6]);
            gmailJSONResponse = messages:getJsonPayload(gmailResponse);
            system:println(jsons:toString(gmailJSONResponse));
        }
    }

    How to invoke the action

    Go to the location where you created the above test.bal file and run the following command to invoke an action of the gmail connector:

    $ ballerina run main test.bal <actionName> <userId> <accessToken> <refreshToken> <clientId> <clientSecret> 

    If an action needs more variables, pass them as additional space-separated arguments as above. You can follow https://github.com/ballerinalang/connectors/tree/master/gmail/docs/gmail to learn more about each action.

    The refreshToken, clientId, and clientSecret are taken from the user so that the accessToken can be refreshed automatically using the Ballerina OAuth2 connector.

    A sample command to invoke the getUserProfile action is as follows:

    $ ballerina run main test.bal getUserProfile tharis63@gmail.com ya29.Gl0ABHGIfWx1fNrTFW6yQK_KE-eCq_KfaJeNDGuAUO98Lsj-On32dWK7VmfOQud8NUQ6yzqWN3xzwkUfxA72HCswv4pg7Yo_FCh0z1QxFhsEhUsWFzYX2xl4Rj1Sa-I xxxxx yyyyyy zzzzz



    Lakshani GamageHow to block the login for the Management Console of WSO2 IoT Server

    Add the following configuration to <IoTS_HOME>/core/repository/conf/tomcat/carbon/WEB-INF/web.xml.


    <security-constraint>
        <display-name>Restrict direct access to certain folders</display-name>
        <web-resource-collection>
            <web-resource-name>Restricted folders</web-resource-name>
            <url-pattern>/carbon/*</url-pattern>
        </web-resource-collection>
        <auth-constraint />
    </security-constraint>

    Then restart the server.
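
    After the restart, any request to the Management Console should be rejected with an HTTP 403, since the empty <auth-constraint /> denies all roles. For example, assuming the default HTTPS port 9443:

    $ curl -sk -o /dev/null -w "%{http_code}\n" https://localhost:9443/carbon/admin/login.jsp
    403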

    Imesh GunaratneRethinking Service Integrations with Microservices Architecture

    Image reference: https://www.pexels.com/photo/ballet-ballet-dancer-beautiful-choreography-206274/

    The dawn of the microservices architecture (MSA) begun revolutionizing the software paradigm in the past few years by revealing a new architectural style for optimizing the infrastructure usage to its optimal level. MSA defines a complete methodology for developing software applications as a collection of independently deployable, lightweight services in which each service would run on a dedicated process with decentralized control of languages and data. In spite of the wide variety of frameworks introduced during past few years for implementing business services in this architectural style nearly none were introduced for implementing service integrations. Very recently WSO2 initiated a new open source programming language and a complete ecosystem for this specific purpose.

    A new programming language? Yes you heard it right, it’s not another integration framework with many different domain specific languages (DSLs). It’s a purposely built programming language for integration, with native constructs for implementing enterprise integration patterns (EIPs) including support for industry standard protocols, message formats, etc optimized for containerized environments. It may be worth to note that Ballerina is designed ground up with nearly a decade of experience in implementing integration solutions at WSO2 with the vision of making service integrations much more easier to design, implement, deployable, and more importantly adhere to MSA.

    The Impact on Microservices Architecture

    Figure 1: Using a Monolithic ESB in Outer Archtiecture

    Today most enterprises seek for mechanisms for integrating services from various internal and external service providers for meeting their business needs. Traditionally this could be achieved using an integration framework, ESB or using an integration suite depending on the complexity of the integrations. As illustrated in figure 1, one option would be to use a monolithic ESB in the outer architecture while implementing business services in the inner architecture inline with MSA. Despite the fact that it is technically feasible it may contradict the main design goals of MSA as an ESB or an integration suite would consume considerable amount of resources while taking longer to bootstrap, having to use in-process multi-tenancy, with comparatively higher development and deployment cost, etc.

For example, WSO2 ESB would need around 2 GB of memory for running a typical integration solution; it would take around 20 to 30 seconds to bootstrap; it may not evenly share resources among all the tenants with in-JVM multi-tenancy; the development process may take longer, as it may depend on a single set of configurations and data stores; and finally, the deployment would utilize more resources than optimally needed. Considering all of the above, plus the vision of adopting serverless architecture, a much lighter, ultra-fast integration framework with a higher throughput would be needed for gaining the best out of MSA.

    Figure 2: Using Ballerina for Implementing Integration Services in MSA

The above figure illustrates a reference architecture for implementing integration services in MSA. Unlike an ESB, where a collection of integration workflows is deployed in a single process, in this architecture each integration workflow will have its own process and container. Hence services can be independently designed, developed, deployed, scaled, and managed. More importantly, it will allow resources to be specifically allocated to each integration service container cluster, optimizing the overall resource usage. Moreover, container cluster managers such as Kubernetes provide completely isolated contexts within a single container host cluster for managing multi-tenancy. Therefore this approach naturally fits MSA for implementing integration services.

    Ballerina Language Design

As explained earlier, the Ballerina language has been carefully designed by studying constructs of widely used programming languages such as Java, Golang, C, and C++. The following section illustrates the high-level language design in brief:

    Packages

A package is the topmost container in Ballerina, which holds functions or services. It is important to note that the package definition is optional; if a package is defined, Ballerina source files need to be stored in a hierarchical folder structure matching the package hierarchy.

    Functions

A function represents a set of instructions that performs a specific task and is intended to be reusable. Mainly there are two types of functions: native functions and Ballerina functions. Functions support returning multiple parameters and throwing exceptions.

    Main Function

The main function is the entry point of Ballerina executable programs. Executables can be used for implementing integration logic that needs to run in the background on a time interval or an event trigger.

    Services

Ballerina services allow integration workflows to be exposed as services. Services are protocol agnostic and can be extended to work with any messaging protocol and the required message formats. Currently, services can be exposed as HTTP REST APIs, WebSockets, and HTTP/2 services, and messages can also be delivered to mediation pipelines via JMS topics/queues (using an external broker) and files.

    Resources

A resource represents a functional unit of a Ballerina service. A service exposed via a given protocol would use resources for managing different types of messages. For example, an HTTP REST API would use resources for implementing API resources, while a JMS service would use resources for receiving messages from a topic/queue.

    Workers

A worker is, in general programming terms, a thread. Workers provide the ability to execute a series of integration functions in parallel, reducing the overall mediation latency of an integration service.

    Connectors

Connectors provide language extensions for talking to well-known external services from Ballerina, such as Twitter, Google, Medium, etcd, and Kubernetes. Moreover, they also provide the ability to plug authentication and authorization features into the language.

    Ballerina Composer

    Figure 3: Ballerina Composer Design View

The Composer is the visual designer tool of the Ballerina language. It has been designed as a web application and is shipped with the Ballerina Tools distribution. Execute the below set of commands to download and run it; once started, access http://localhost:9091 in a web browser:

    $ version=0.8.1 # change this to the latest version
    $ wget http://ballerinalang.org/downloads/ballerina-tools/ballerina-tools-${version}.zip
    $ unzip ballerina-tools-${version}.zip
    $ cd ballerina-tools-${version} # consider this as [ballerina.home]
$ cd bin/
    $ ./composer

Not only does the Composer provide a charming graphical designer, it also provides a text editor with syntax highlighting and code completion features, and a Swagger editor for HTTP-based services. The Composer provides all language constructs and native functions needed for implementing integration programs and services. More interestingly, those can be run and debugged using the same editor.

    Figure 4: Ballerina Composer Source View

For detailed information on the Composer, please refer to this article.

    Ballerina CLI

Ballerina ships two distributions: one for the Ballerina runtime and the other for the tooling. The Ballerina runtime only includes features required for running Ballerina programs and services. The tools distribution includes features for executing test cases, generating API documentation, generating Swagger definitions, and building Docker images:

    $ cd [ballerina.home]/bin/
    $ ./ballerina --help
Ballerina is a flexible, powerful and beautiful programming language designed for integration.
* Find more information at http://ballerinalang.org

Usage:
  ballerina [command] [options]

Available Commands:
  run      run Ballerina main/service programs
  build    create Ballerina program archives
  docker   create docker images for Ballerina program archives
  doc      generate Ballerina API documentation
  swagger  Generate connector/service using swagger definition
  test     test Ballerina program

Flags:
  --help, -h for more information

Use "ballerina help [command]" for more information about a command.

    Ballerina Packaging Model

Ballerina programs and services can be packaged into archive files for distribution. These files take the extension BSZ. Consider the below sample HTTP service; its source code can be found here:

.
└── hello-ballerina
    ├── README.md
    └── org
        └── foo
            └── bar
                ├── helloWorldService.bal
                └── helloWorldServiceTest.bal

The following command can be executed to generate an archive file for this service:

$ cd /path/to/hello-ballerina/
$ /path/to/ballerina-home/bin/ballerina build service org/foo/bar/

The generated bar.bsz file would contain the following files:

.
├── BAL_INF
│   └── ballerina.conf
├── ballerina
│   └── test
│       └── assert.bal
└── org
    └── foo
        └── bar
            ├── helloWorldService.bal
            └── helloWorldServiceTest.bal

    Ballerina API Documentation Generator

The Ballerina tools distribution ships an API documentation generation tool called Docerina as a part of the Ballerina CLI. It allows developers to generate API documentation for Ballerina functions, connectors, structs, and type mappers. Currently, it does not include API documentation generation for Ballerina services, as those are already covered by the Swagger integration for HTTP-based services. In a future release it may support non-HTTP services such as JMS and file.

API documentation of the Ballerina native functions of the v0.8 release can be found here. Execute the ballerina doc --help command for more information on generating API documentation for Ballerina code:

    $ cd ballerina-tools-${version}/bin/
    $ ./ballerina doc --help
    generate Ballerina API documentation

Usage:
  ballerina doc <sourcepath>... [-o outputdir] [-n] [-e excludedpackages] [-v]

  sourcepath:
    Paths to the directories where Ballerina source files reside or a path to
    a Ballerina file which does not belong to a package

Flags:
  --output, -o   path to the output directory where the API documentation will be written to
  --native, -n   read the source as native ballerina code
  --exclude, -e  a comma separated list of package names to be filtered from the documentation
  --verbose, -v  enable debug level logs

    Ballerina Test Framework

Ballerina provides a test framework called Testerina for implementing unit tests for Ballerina code. In the v0.8 release, the following native test functions are available for starting services, asserting values, and setting mock values:

    package ballerina.test;
    startService(string servicename)
    assertTrue(boolean condition)
    assertTrue(boolean condition, string message)
    assertFalse(boolean condition)
    assertFalse(boolean condition, string message)
    assertEquals(string actual, string expected)
    assertEquals(string actual, string expected, string message)
    assertEquals(int actual, int expected)
    assertEquals(int actual, int expected, string message)
    assertEquals(float actual, float expected)
    assertEquals(float actual, float expected, string message)
    assertEquals(boolean actual, boolean expected)
    assertEquals(boolean actual, boolean expected, string message)
    assertEquals(string[] actual, string[] expected)
    assertEquals(string[] actual, string[] expected, string message)
    assertEquals(float[] actual, float[] expected)
    assertEquals(float[] actual, float[] expected, string message)
    assertEquals(int[] actual, int[] expected)
    assertEquals(int[] actual, int[] expected, string message)
    package ballerina.mock;
    setValue(string pathExpressionToMockableConnector)

The following is a sample HTTP service written in Ballerina:

package org.foo.bar;

import ballerina.lang.messages as message;

@http:BasePath("/hello")
service helloService {
    @http:GET
    resource helloResource(message m) {
        message response = {};
        message:setStringPayload(response, "Hello world!");
        reply response;
    }
}

    It can be tested by implementing a test case as follows:

package org.foo.bar;

import ballerina.lang.messages as message;
import ballerina.test;
import ballerina.net.http;

function testHelloService () {
    message request = {};
    message response = {};
    string responseString;
    string serviceURL = test:startService("helloService");
    http:ClientConnector endpoint = create http:ClientConnector(serviceURL);
    response = http:ClientConnector.get(endpoint, "/hello", request);
    responseString = message:getStringPayload(response);
    test:assertEquals(responseString, "Hello world!");
}

    Ballerina Container Support

The Ballerina Docker CLI command can be used for creating Docker images for Ballerina program archives. Execute the below command for more information:

$ cd ballerina-tools-${version}/bin/
$ ./ballerina docker --help
    create docker images for Ballerina program archives

Usage:
  ballerina docker <package-name> [--tag | -t <image-name>] [--host | -H <docker-hostURL>] [--help | -h] [--yes | -y]

Flags:
  --tag, -t   docker image name. <image-name>:<version>
  --yes, -y   assume yes for prompts
  --host, -H  docker Host. http://<ip-address>:<port>

    Conclusion

Ballerina is a brand new open source programming language purpose-built for implementing integration services in MSA. It provides a complete ecosystem for designing, developing, documenting, testing, and deploying integration workflows. Feel free to try it out, give feedback, report issues, and, most importantly, contribute back. Happy dancing with Ballerina!!

    References

    [1] Serverless Architectures, https://martinfowler.com/articles/microservices.html

    [2] What are Microservices, https://smartbear.com/learn/api-design/what-are-microservices

    [3] Introduction to Microservices, https://www.nginx.com/blog/introduction-to-microservices

    [4] The Future of Integration with Microservices, https://dzone.com/articles/the-future-of-integration-with-microservices

    [5] Ballerinalang Website, http://ballerinalang.org

    [6] Ballerinalang Documentation, http://ballerinalang.org/docs

[7] Ballerinalang Github Repository, https://github.com/ballerinalang/ballerina

    Chamara SilvaHow to operate P2 repositories through the admin service

Even though WSO2 products come with a default set of features, anybody can install additional features based on their requirements. With each product release, WSO2 releases a feature repository aligned with that release, containing all the features that can be installed in each product. This feature repository is called a P2 repository, and it is hosted at a location corresponding to the product release.

    Jayanga DissanayakeHow to create a heap dump of your Java application

The heap in a JVM is the place where it keeps all your runtime objects. The JVM creates a dedicated space for the heap at startup, the initial size of which can be controlled via the JVM option -Xms<size>, eg: -Xms100m (this allocates 100 MB for the heap). The JVM is capable of increasing and decreasing the size of the heap [1] based on demand, and the JVM has another option which allows setting the max size for the heap: -Xmx<size>, eg: -Xmx6g (this allows the heap to grow up to 6 GB).

The JVM automatically performs Garbage Collection (GC) when it detects it is about to reach the heap size limits. But GC can only clean the objects which are eligible for collection. If the JVM can't allocate the required memory even after GC, it will crash with "Exception in thread "main" java.lang.OutOfMemoryError: Java heap space".

If your Java application in production crashes due to an issue like this, you can't just ignore the incident and restart your application. You have to analyze what caused the JVM to crash and take the necessary actions to avoid it happening again. This is where the JVM heap dump comes into play.

JVM heap dumps are disabled by default; you have to enable them explicitly by providing the following JVM option: -XX:+HeapDumpOnOutOfMemoryError

The sample code below tries to create multiple large arrays of chars and keeps the references in a list, which makes those large arrays ineligible for garbage collection.

package com.test;

import java.util.ArrayList;
import java.util.List;

public class TestClass {
    public static void main(String[] args) {
        List<Object> list = new ArrayList<Object>();
        for (int i = 0; i < 1000; i++) {
            // each iteration keeps a ~2 MB char array reachable via the list
            list.add(new char[1000000]);
        }
    }
}

If you run the above code with the following command lines,

1. java -XX:+HeapDumpOnOutOfMemoryError -Xms10m -Xmx3g com.test.TestClass

Result: The program runs and exits without any error. The heap size starts at 10 MB and then grows as needed. The above needs less than 3 GB of memory, so it completes without any error.

2. java -XX:+HeapDumpOnOutOfMemoryError -Xms10m -Xmx1g com.test.TestClass

Result: The JVM crashes with an OOM error.

If we change the above code a bit to remove the char array from the list right after adding it, what would be the result?


package com.test;

import java.util.ArrayList;
import java.util.List;

public class TestClass {
    public static void main(String[] args) {
        List<Object> list = new ArrayList<Object>();
        for (int i = 0; i < 1000; i++) {
            list.add(new char[1000000]);
            // removing the previous reference makes the old array eligible for GC
            list.remove(0);
        }
    }
}

    3. java -XX:+HeapDumpOnOutOfMemoryError -Xms10m -Xmx10m com.test.TestClass

Result: This code runs without any issue, even with a heap of 10 MB.

NOTE:
1. There is no impact on your application if you enable heap dumps in the JVM. So, it is better to always enable -XX:+HeapDumpOnOutOfMemoryError in your applications.

2. You can create a heap dump of a running Java application with the use of jmap, which comes with the JDK. Creating a heap dump of a running application causes the application to halt everything for a while, so it is not recommended for production systems (unless there is an extreme situation); a programmatic alternative is sketched after these notes.
eg: jmap -dump:format=b,file=test-dump.hprof [PID]

3. The above sample codes are just for understanding the concept.
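For completeness, here is the programmatic route mentioned in note 2, sketched in Java: requesting a heap dump at runtime through the HotSpot diagnostic MBean. This is a minimal sketch assuming a HotSpot JVM; the output path is arbitrary.

import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class HeapDumper {
    public static void main(String[] args) throws Exception {
        // Obtain a proxy for the HotSpot diagnostic MBean from the platform MBean server.
        HotSpotDiagnosticMXBean diagnosticBean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // Write a dump containing live objects only (true), similar to jmap's live option.
        diagnosticBean.dumpHeap("/tmp/test-dump.hprof", true);
    }
}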

    [1] https://docs.oracle.com/cd/E13150_01/jrockit_jvm/jrockit/geninfo/diagnos/garbage_collect.html


    Edit:

The following are a few other important flags that could be useful in generating heap dumps:

-XX:HeapDumpPath=/tmp/heaps : sets the path to which heap dumps are written
-XX:OnOutOfMemoryError="kill -9 %p" : with this you can run a command when an out-of-memory error is first thrown
-XX:+ExitOnOutOfMemoryError : when you enable this option, the JVM exits on the first occurrence of an out-of-memory error. It can be used if you prefer restarting an instance of the JVM rather than handling out-of-memory errors [2].
-XX:+CrashOnOutOfMemoryError : if this option is enabled, when an out-of-memory error occurs, the JVM crashes and produces text and binary crash files (if core files are enabled) [2].

    [2] http://www.oracle.com/technetwork/java/javase/8u92-relnotes-2949471.html

    Supun SethungaCustom Transformers for Spark Dataframes

In Spark, a transformer is used to convert a Dataframe into another. But due to the immutability of Dataframes (i.e., existing values of a Dataframe cannot be changed), if we need to transform values in a column, we have to create a new column with those transformed values and add it to the existing Dataframe.

To create a transformer, we simply need to extend the org.apache.spark.ml.Transformer class and write our transforming logic inside the transform() method. Below are a couple of examples:

    A simple transformer

This is a simple transformer that raises each value of a given column to a given power.

public class CustomTransformer extends Transformer {

    private static final long serialVersionUID = 5545470640951989469L;

    String column;
    int power = 1;

    CustomTransformer(String column, int power) {
        this.column = column;
        this.power = power;
    }

    @Override
    public String uid() {
        return "CustomTransformer" + serialVersionUID;
    }

    @Override
    public Transformer copy(ParamMap arg0) {
        return null;
    }

    @Override
    public DataFrame transform(DataFrame data) {
        // add a new "power" column holding column^power, since existing columns are immutable
        return data.withColumn("power", functions.pow(data.col(this.column), this.power));
    }

    @Override
    public StructType transformSchema(StructType arg0) {
        return arg0;
    }
}
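For instance, assuming a DataFrame named df with a numeric column named "value" (both names are made up for illustration), the transformer can be applied as follows:

Transformer squarer = new CustomTransformer("value", 2);
DataFrame withSquares = squarer.transform(df); // df plus a new "power" column holding value^2
withSquares.show();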

    You can refer [1]  for another similar example.

    UDF transformer

We can also register some custom logic as a UDF in the Spark SQL context, and then transform the Dataframe with Spark SQL within our transformer.

    Refer [2] for a sample which uses a UDF to extract part of a string in a column.
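As a rough sketch of the same idea (this is not the exact code of [2]; the UDF name "extractPrefix", the input column "code", and the split-on-dash logic are made up for illustration), such a transformer could look like this:

import org.apache.spark.ml.Transformer;
import org.apache.spark.ml.param.ParamMap;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.api.java.UDF1;
import org.apache.spark.sql.functions;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;

public class PrefixTransformer extends Transformer {

    private static final long serialVersionUID = 1L;

    @Override
    public String uid() {
        return "PrefixTransformer" + serialVersionUID;
    }

    @Override
    public Transformer copy(ParamMap arg0) {
        return null;
    }

    @Override
    public DataFrame transform(DataFrame data) {
        // Register the custom logic as a UDF in the Dataframe's SQL context.
        data.sqlContext().udf().register("extractPrefix", new UDF1<String, String>() {
            @Override
            public String call(String value) {
                return value == null ? null : value.split("-")[0];
            }
        }, DataTypes.StringType);
        // Apply the registered UDF to the "code" column, adding a new "prefix" column.
        return data.withColumn("prefix", functions.callUDF("extractPrefix", data.col("code")));
    }

    @Override
    public StructType transformSchema(StructType arg0) {
        return arg0;
    }
}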


    References:

    [1] https://github.com/SupunS/play-ground/blob/master/test.spark.client_2/src/main/java/MeanImputer.java
    [2] https://github.com/SupunS/play-ground/blob/master/test.spark.client_2/src/main/java/RegexTransformer.java

    Supun SethungaSetting up a Fully Distributed Hadoop Cluster

Here I will discuss how to set up a fully distributed Hadoop cluster with one master and two slaves, where the three nodes are set up on three different machines.

    Updating Hostnames

To start things off, let's first give hostnames to the three nodes. Edit the /etc/hosts file with the following command.
sudo gedit /etc/hosts

Add the following hostnames against the IP addresses of all three nodes. Do this on all three nodes.
192.168.2.14    hadoop.master
192.168.2.15    hadoop.slave.1
192.168.2.16    hadoop.slave.2


Once you do that, update the /etc/hostname file to include hadoop.master, hadoop.slave.1, or hadoop.slave.2 as the hostname of each machine respectively.

    Optional:

For security reasons, one might prefer to have a separate user for Hadoop. In order to create a separate user, execute the following commands in the terminal:
sudo addgroup hadoop
sudo adduser --ingroup hadoop hduser
Provide a desired password when prompted.

    Then restart the machine.
    sudo reboot


    Install SSH

Hadoop needs to copy files between the nodes. For that, it should be able to access each node via ssh without having to give a username/password. Therefore, first we need to install the ssh client and server.
    sudo apt install openssh-client
    sudo apt install openssh-server

    Generate a key
    ssh-keygen -t rsa -b 4096

Copy the key to each node:
    ssh-copy-id -i $HOME/.ssh/id_rsa.pub hduser@hadoop.master
    ssh-copy-id -i $HOME/.ssh/id_rsa.pub hduser@hadoop.slave.1
    ssh-copy-id -i $HOME/.ssh/id_rsa.pub hduser@hadoop.slave.2

Try sshing to all the nodes, e.g.:
ssh hadoop.slave.1

You should be able to ssh to all the nodes without providing user credentials. Repeat this step on all three nodes.


    Configuring Hadoop

To configure Hadoop, change the following configurations:

Define the Hadoop master URL in <hadoop_home>/etc/hadoop/core-site.xml, on all nodes.
    <property>
      <name>fs.default.name</name>
      <value>hdfs://hadoop.master:9000</value>
    </property>

Create two directories, /home/wso2/Desktop/hadoop/localDirs/name and /home/wso2/Desktop/hadoop/localDirs/data (and make hduser the owner, if you created a separate user for Hadoop). Give read/write rights to those folders.

    Modify the <hadoop_home>/etc/hadoop/hdfs-site.xml as follows, in all nodes.
    <property>
      <name>dfs.replication</name>
      <value>3</value>
    </property>
    <property>
      <name>dfs.name.dir</name>
      <value>/home/wso2/Desktop/hadoop/localDirs/name</value>
    </property>
    <property>
      <name>dfs.data.dir</name>
      <value>/home/wso2/Desktop/hadoop/localDirs/data</value>
    </property>

    <hadoop_home>/etc/hadoop/mapred-site.xml (all nodes)
    <property>
      <name>mapreduce.job.tracker</name>
  <value>hadoop.master:5431</value>
    </property>


Add the hostname of the master node to the <hadoop_home>/etc/hadoop/masters file, on all nodes.
    hadoop.master

Add the hostnames of the slave nodes to the <hadoop_home>/etc/hadoop/slaves file, on all nodes.
    hadoop.slave.1
    hadoop.slave.2


(Only on the master) We need to format the namenode before we start Hadoop. For that, on the master node, navigate to the <hadoop_home>/bin/ directory and execute the following.
    ./hdfs namenode -format

Finally, start the Hadoop server by navigating to the <hadoop_home>/sbin/ directory and executing the following:
    ./start-dfs.sh

If everything goes well, HDFS should be started, and you can browse the web UI of the namenode at the URL: http://localhost:50070/dfshealth.jsp.
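As an optional, programmatic sanity check, a small client written against the Hadoop Java API can confirm that the cluster accepts writes. This is a minimal sketch assuming the hadoop-client library is on the classpath; the test path is arbitrary.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsSmokeTest {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Same master URL as configured in core-site.xml above.
        conf.set("fs.default.name", "hdfs://hadoop.master:9000");
        FileSystem fs = FileSystem.get(conf);

        // Write a small file, check it exists, then clean up.
        Path testPath = new Path("/tmp/hdfs-smoke-test.txt");
        FSDataOutputStream out = fs.create(testPath);
        out.writeUTF("hello hdfs");
        out.close();

        System.out.println("File exists: " + fs.exists(testPath)); // expect: true
        fs.delete(testPath, false);
        fs.close();
    }
}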

    Pamod SylvesterHow i got started with Ballerina

I am certain most of my friends would click on the link to see me dancing :)

With the announcement of Ballerina, the new integration language, I thought of writing a quick summary of how I got started.

    Installation 

I downloaded Ballerina from here. I also referred to the Installation-instructions to get started.

    Writing an EIP

CBR (Content Based Routing), a very common EIP in the integration world, was something I tried out with Ballerina. So here's how I did it.




    Creating a Mock Service in Ballerina




Something I was longing to try out in Ballerina was being able to write a service which could be executed in the same runtime. So here's how I did it.


I started the Composer, and voila, it provided a graphical view for me to represent the service and what it should do; all I had to do was drag and drop a few elements onto the canvas. This was like drawing a flow chart.


The service I created would accept an incoming HTTP message and send a mock response back. The source view showed the language syntax I could use; here's how that looked.

import ballerina.lang.messages;

@http:BasePath("/gadgets")
service GadgetInventoryMockService {
    @http:GET
    resource inquire(message m) {
        message response = {};
        json payload = `{"inquire":"gadget","availability":"true"}`;
        messages:setJsonPayload(response, payload);
        reply response;
    }
}

Similarly, I managed to create both of the services ("Widget Inventory" and "Gadget Inventory").

    Routing with Ballerina

Just like creating a service, I was able to drag and drop a set of elements in the graphical view and create the router.




import ballerina.net.http;
import ballerina.lang.jsons;
import ballerina.lang.messages;

@http:BasePath("/route")
service ContentBasedRouter {
    @http:POST
    resource lookup(message m) {
        http:ClientConnector widgetEP = create http:ClientConnector("http://localhost:9090/widgets");
        http:ClientConnector gadgetEP = create http:ClientConnector("http://localhost:9090/gadgets");
        json requestMessage = messages:getJsonPayload(m);
        string inventoryType = jsons:getString(requestMessage, "$.type");
        message response = {};
        if (inventoryType == "gadget") {
            response = http:ClientConnector.get(gadgetEP, "/", m);
        } else {
            response = http:ClientConnector.get(widgetEP, "/", m);
        }
        reply response;
    }
}

Looking back, I realize it was not only convenient to create the message flow, but it was also easier for me to describe the flow through the diagram, since it shows the connections, the message flow, and the client as separate entities (the picture was actually speaking 1000 words :) ).

    Running What I Wrote 


I was excited to see how this diagram would look when it's running.

This is all I had to do:


    ballerina run service ./gadgetInventoryMockService.bal ./widgetInventoryMockService.bal ./router.bal

where gadgetInventoryMockService.bal and widgetInventoryMockService.bal were the mock services I wrote, and router.bal is the routing logic. In this case I would've preferred to be able to bundle the whole project into one package instead of having to pass each individual file as an argument. I checked on this capability with the team, and it will be supported by the Composer in the near future, so I'll have my fingers crossed. As a result, on my local machine each of the bal files was running as a service at the following URLs. The files I used can be found here.


Service                          URL
Gadget Inventory Mock Service    http://localhost:9090/gadgets
Widget Inventory Mock Service    http://localhost:9090/widgets
Router                           http://localhost:9090/route


So, to practically experience how Ballerina routed the requests, I did the following: using the cURL client, I sent the following request,

    curl -v http://localhost:9090/route -d '{"type" : "gadget"}'


    The following response should be observed,

    {"inquire":"gadget","availability":"true"}

I re-executed the request with the following:
    curl -v http://localhost:9090/route -d '{"type" : "widget"}'

    Then the following response should be observed,
    {"inquire":"widget","availability":"true"}


In general, there are more components (e.g., the fork-join capability) that will be required to implement some of the EIPs I wanted to try out (e.g., scatter-gather), so tick tock until the next release. However, it was a great experience.

    Ayesha DissanayakaWSO2GREG-5.2.0- Writing extension to bind clientside javascript to pages in store

    In a previous post I have explained how to Write extensions to replicate more artifact metadata in Store
In this post I will explain how to bind some client-side javascript/jquery to improve the behavior of pages in the Store UI.

Following the sample steps explained in this previous post, let's see how to add a custom javascript file to the restservice asset type's details page.

In this sample js, I am going to set the active tab of the asset details page to a desired one, using a URL fragment.

As of now, when we are browsing assets in the Store and viewing the metadata details of an asset, the first tab is opened by default.

Let's say I wanted to go directly to the page with the 4th tab (Security) opened.

    To do that,
•  In the [HOME]/repository/deployment/server/jaggeryapps/store/extensions/assets/restservice/themes/store/js/ location, add a js file, select-tab.js, with the following content:

$(function() {
    var fragment = window.location.hash;

    if (fragment) {
        var tabName = '#asset-content-' + fragment.replace("#", "");
        var tab = $(tabName);
        var tabContentName = '#tab-content-' + fragment.replace("#", "");
        var tabContent = $(tabContentName);
        if (tab.length > 0 && tabContent.length > 0) {
            tab.addClass("active");
            tabContent.addClass("active");
        } else {
            showDefault();
        }
    } else {
        showDefault();
    }
});

function showDefault() {
    $('#asset-description').addClass("active");
    $('#tab-properties').addClass("active");
}


• Now bind this js to the restservice asset details page by editing [HOME]/repository/deployment/server/jaggeryapps/store/extensions/assets/restservice/themes/store/helpers/asset.js
var name;
var custom = require('/extensions/app/greg-store-defaults/themes/store/helpers/asset.js');
var that = this;
/*
 In order to inherit all variables in the default helper
 */
for (name in custom) {
    if (custom.hasOwnProperty(name)) {
        that[name] = custom[name];
    }
}
var fn = that.resources;
var resources = function(page, meta) {
    var o = fn(page, meta);
    if (!o.css) {
        o.css = [];
    }
    // code-mirror third party library to support syntax highlighting & formatting for WSDL content.
    o.css.push('codemirror.css');
    o.js.push('codemirror.js');
    o.js.push('javascript.js');
    o.js.push('formatting.js');
    o.js.push('xml.js'); // codemirror file to provide 'xml' type formatting.
    o.js.push('asset-view.js'); // renders the wsdl content with codemirror supported formatting.
    o.js.push('select-tab.js'); // renders active tab based on url fragment
    return o;
};

• Restart the server; after logging in to the Store, go to URLs like "https://192.168.122.1:9443/store/assets/restservice/details/3601ed3c-5f49-4115-ac7d-d6f578d4c593#security"

     


    Suhan DharmasuriyaBallerina is born!

    What is ballerina?
    What is ballerinalang?

    Ballerina - a new open source programming language that lets you 'draw' code to life!

    It is a programming language that lets you create integrations with diagrams.

At WSO2, we’ve created a language where diagrams can be directly turned into code. Developers can click and drag the pieces of a diagram together to describe the workings of a program. Cool, isn't it?

    We’re not just targeting efficiency, but also a radical new productivity enhancement for any company. By simplifying the entire process, we’re looking at reducing the amount of work that goes into the making of a program. It’s where we believe the world is headed.

As mentioned by Chanaka [4], there is a gap in the integration space where programmers and architects speak different languages, and sometimes this results in huge losses of time and money. Integration has a lot to do with diagrams. Top-level people always prefer diagrams to code, but programmers are the other way around. We thought of filling this gap with a more modernized programming language.

    Ballerina features both textual and graphical syntaxes that uniquely offer the exact same expressive capability and are fully reversible. The textual syntax follows the C/Java heritage while also adopting some aspects from Go. The graphical syntax of Ballerina follows a sequence diagram metaphor. There are no weird syntax exceptions, and everything is derived from a few key language concepts. Additionally, Ballerina follows Java and Go to provide a platform-independent programming model that abstracts programmers from machine-specific details.

    We are happy to announce the “Flexible, Powerful, Beautiful” programming language “Ballerina”. Here are the main features of the language in a short list [4].
    • Textual, Visual and Swagger representation of your code.
    • Parallel programming made easier with workers and fork-join.
    • XML, JSON and DataTable as built in data types for easier data handling.
    • Packaging and module system to write, share, distribute code in elegant fashion.
    • Composer (editor) makes it easier to write programs in a more visual manner.
    • Built in debugger and test framework (testerina) makes it easier to develop and test.
    Ballerina supports high-performance implementations—including the micro-services and micro-integrations increasingly driving digital products—with low latency, low memory and fast start-up. Notably, common integration capabilities are baked into the Ballerina language. These include deep alignment with HTTP, REST, and Swagger; connectors for both web APIs and non-HTTP APIs; and native support for JSON, XML, data tables, and mapping.

Try out Ballerina and let us know your thoughts on Medium, Twitter, Facebook, Slack, Google, and many other channels.

Ask a question on StackOverflow.

    Have fun!



You can find the introduction to Ballerina presentation below, presented by Sanjiva at WSO2Con 2017 USA.

    Dinusha SenanayakaWSO2 Identity Cloud in nutshell


WSO2 Identity Cloud is the latest addition to WSO2's public Cloud services. Identity Cloud is hosted using WSO2 Identity Server, which provides an Identity and Access Management (IAM) solution. The initial launch of Identity Cloud focuses on providing Single Sign-On (SSO) solutions for organizations.

Almost all organizations use different applications. These could be in-house developed and hosted applications, or SaaS applications like Salesforce, Concur, and AWS. Having a centralized authentication system for all the applications increases the efficiency of maintaining systems, centralizes monitoring, and improves company security from a system administration perspective, while making application users' lives easier. WSO2 Identity Cloud provides a solution for configuring SSO for these applications.

    What are the features offered by WSO2 Identity Cloud ?


• Single Sign-On support with authentication standards - SAML 2.0, OpenID Connect, WS-Federation
   Single Sign-On configurations for applications can be done using the SAML 2.0, OpenID Connect, and WS-Federation protocols.

• Admin portal
  A portal provided for organization administrators to log in and configure security for applications. A simplified UI is provided with minimal configuration. Pre-defined security configuration templates are available by default for the most popular SaaS apps. This list includes Salesforce, Concur, Zuora, GotoMeeting, Netsuite, and AWS.

• On-premise user store agent
  Organizations can connect a local LDAP with Identity Cloud without sharing LDAP credentials, and let the users in the organization LDAP access applications with SSO.

• Identity Gateway
  Acts as a simple application proxy that intercepts application requests and applies security checks.

• User portal
  The User Portal provides a central location for the users of an organization to log in and discover applications, while applications can be accessed with single sign-on.


    Why you should go for a Cloud solution ?


Depending on organization policies and requirements, you can go for either an on-premise deployment or a cloud identity solution. If you have the following concerns, then the cloud solution is the best fit for you.

• Facilitating infrastructure - You don't have to spend money on additional infrastructure with the cloud solution.
• System maintenance difficulties - If you do an on-premise deployment, there should be a dedicated team allocated to ensure the availability of the system, troubleshoot issues, etc. With the cloud solution, the WSO2 Cloud team takes care of system availability.
• Timelines - Identity Cloud is an already tested, up-and-running solution. This cuts off the deployment finalization and testing time that you would spend on an on-premise deployment.
• Cost - No cost is involved for infrastructure or maintenance with the cloud solution.

We hope WSO2 Identity Cloud can help you build an Identity Management solution for your organization. Register and try it out for free at http://wso2.com/cloud/, and give us your feedback at bizdev@wso2.com or dev@wso2.org.

Amalka SubasingheHow to change the organization name and key that appear in the WSO2 Cloud UI

Here are the instructions to change the Organization Name:

    1. Go to Organization Page from Cloud management app.



    2. Select the organization that you want to change and select profile


    3. Change the Organization name and update the profile


    How to change the Organization Key:

Changing the Organization Key is not possible. We generate the key from the organization name users provide at registration time. It is a unique value and plays a major role in multi-tenancy. We have certain internal criteria for this key.

Another reason why we cannot do this is that we use the organization key in the internal registries when storing API-related metadata. So, if we change it, there is a data migration involved.


Amalka SubasingheHow to change the organisation name that appears in WSO2 Cloud invoices

Let's say you want to change the organisation name that appears in invoices when you subscribe to a paid plan. Here are the instructions:

1. Log in to WSO2 Cloud and go to the Accounts page.

2. You can find the contact information on the Accounts page. Click on 'update contact Info'.





3. Change the organization name; add the organization name which you want to display in the invoice.



    4. Save the changes.

    5. You can see the changed organization name in the Accounts Summary.

    Amalka SubasingheHow to add a new payment method to the WSO2 Cloud

    Here are the instructions:

    1. Go to: https://cloudmgt.cloud.wso2.com/cloudmgt/site/pages/account-summary.jag
2. Log in with your WSO2 credentials (email and password).
    3. Click the 'New Payment Method' button:


    4. Supply the new credit card information, click the Payment Info button and then the Proceed button.


    Let us know if you need further help :)

    Tharindu EdirisingheHTTP GET vs. POST in HTML Forms - Security Considerations Explained with a Sample

This blog post explains security considerations when using HTTP GET as the request method, compared to HTTP POST. For accessing the form's data posted to the server, I use a PHP file for demonstration, but you can use any other technology (JSP, ASP.NET, etc.).

    Here I have a simple login page written in HTML.


    This is the sample source code of login.html file.

    <html>
       <head>
          <title>login page</title>
       </head>

       <h1>Welcome to My Site !</h1>

       <form action="validateuser.php" method="get">
          Username : <input type="text" id="username" name="username"/>
          <br>
          Password : <input type="password" id="password" name="password"/>
          <br>
          <input type="submit" value="login"/>      
       </form>
</html>

When you click the login button, the browser will redirect you to the web page/URL defined in the action of the HTML form. In this sample, I have a PHP file named validateuser.php, and both the login page and this file are deployed in the Apache web server.

In the HTML form of the login page, the method is defined as get.
Therefore, in the validateuser.php file, we need to access the form data using $_GET['parameter name'].

    This is the sample source code of validateuser.php file.

    <?php

       $username = $_GET["username"];
       $password = $_GET["password"];

       //perform authentication

    ?>

    Now enter some value for username and password and click the login button.

The browser will redirect to the validateuser.php page. However, since the HTML form's method was defined as get, all the form's data (here, the username and password) will be added as query parameters in the URL.

The risk here is that the URLs users request from the server (here, the Apache web server) are printed in the access logs of the server. Therefore, anybody having access to the filesystem of the web server could see the query parameters in the URLs printed in the log file. (By default on Linux, if you install the Apache server, the logs are written to the /var/log/apache2/access.log file.)
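For instance, the access log would then contain an entry along these lines (an illustrative line; the exact fields depend on the configured LogFormat):

127.0.0.1 - - [25/Apr/2017:10:15:32 +0530] "GET /validateuser.php?username=alice&password=secret123 HTTP/1.1" 200 312

Note how both the username and the password appear in clear text.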


    Therefore it is not recommended to use HTTP GET method when you need to send sensitive data in the request.

    Now let’s do a small modification to the login page and set the method to post.

    <html>
       <head>
          <title>login page</title>
       </head>

       <h1>Welcome to My Site !</h1>

       <form action="validateuser.php" method="post">
          Username : <input type="text" id="username" name="username"/>
          <br>
          Password : <input type="password" id="password" name="password"/>
          <br>
          <input type="submit" value="login"/>      
       </form>
</html>

    In the validateuser.php file I retrieve the HTML form’s data using $_POST[“parameter name”].

    Here’s the source code of validateuser.php file.

    <?php

       $username = $_POST["username"];
       $password = $_POST["password"];

       //perform authentication

    ?>

Now if you fill in the form on the login page and click the button, the data (username and password) will not be added to the URL as query parameters, but will be included in the body of the request.


    If you check the web server logs, you can’t see the form data in the request.


    Therefore, if your HTML web form sends sensitive information when the form is submitted, it is recommended to use HTTP POST method so that the data will not be sent to the server in the URL as query parameters.
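To observe the same difference from the client side, here is a small, self-contained Java sketch (the host and the credentials are placeholders) that sends the same form data once as GET and once as POST; only the GET variant exposes the values in the request URL:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class FormClient {
    public static void main(String[] args) throws Exception {
        String params = "username=alice&password=secret";

        // GET: the parameters travel in the URL, so they end up in server access logs.
        URL getUrl = new URL("http://localhost/validateuser.php?" + params);
        HttpURLConnection get = (HttpURLConnection) getUrl.openConnection();
        System.out.println("GET status: " + get.getResponseCode());

        // POST: the parameters travel in the request body, which is not logged by default.
        URL postUrl = new URL("http://localhost/validateuser.php");
        HttpURLConnection post = (HttpURLConnection) postUrl.openConnection();
        post.setRequestMethod("POST");
        post.setDoOutput(true);
        post.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        try (OutputStream body = post.getOutputStream()) {
            body.write(params.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("POST status: " + post.getResponseCode());
    }
}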


    Tharindu Edirisinghe (a.k.a thariyarox)
    Independent Security Researcher

    Dumidu HandakumburaMoving blog to a new home

Moving my blog to a new home: https://fossmerchant.blogspot.com/. Looking back at the kind of things I've posted in the last year, the move seems appropriate.

    sanjeewa malalgodaBallerina connector development sample - BallerinaLang

Ballerina is a general purpose, concurrent, and strongly typed programming language with both textual and graphical syntaxes, optimized for integration. In this post we will discuss how we can use the Ballerina Swagger connector development tool to develop a connector from an already designed Swagger API.

First, download the zip file content and unzip it on your local machine. You also need to download the Ballerina Composer and runtime from the ballerinalang web site to try this.


Now we need to start the back end for the generated connector.
Go to the student-msf4j-server directory and build it.
/swagger-connector-demo/student-msf4j-server>> mvn clean install

Now you will see the micro service jar file generated. Then run the MSF4J service using the following command.
    /swagger-connector-demo/student-msf4j-server>> java -jar target/swagger-jaxrs-server-1.0.0.jar
    starting Micro Services
    2017-02-19 21:37:44 INFO  MicroservicesRegistry:55 - Added microservice: io.swagger.api.StudentsApi@25f38edc
    2017-02-19 21:37:44 INFO  MicroservicesRegistry:55 - Added microservice: org.wso2.msf4j.internal.swagger.SwaggerDefinitionService@17d99928
    2017-02-19 21:37:44 INFO  NettyListener:68 - Starting Netty Http Transport Listener
    2017-02-19 21:37:44 INFO  NettyListener:110 - Netty Listener starting on port 8080
    2017-02-19 21:37:44 INFO  MicroservicesRunner:163 - Microservices server started in 307ms

Now we can check whether the MSF4J service is running, using cURL as follows.
    curl -v http://127.0.0.1:8080/students
    *   Trying 127.0.0.1...
    * Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
    > GET /students HTTP/1.1
    > Host: 127.0.0.1:8080
    > User-Agent: curl/7.43.0
    > Accept: */*
    >
    < HTTP/1.1 200 OK
    < Connection: keep-alive
    < Content-Length: 41
    < Content-Type: application/json
    <
    * Connection #0 to host 127.0.0.1 left intact
    {"code":4,"type":"ok","message":"magic!"}


Please use the following sample Swagger definition to generate the connector (this is available in the attached zip file).

    swagger: '2.0'
    info:
     version: '1.0.0'
     title: Swagger School (Simple)
     description: A sample API that uses a school as an example to demonstrate features in the swagger-2.0 specification
     termsOfService: http://helloreverb.com/terms/
     contact:
        name: Swagger API team
        email: foo@example.com
        url: http://swagger.io
     license:
        name: MIT
        url: http://opensource.org/licenses/MIT
    host: schol.swagger.io
    basePath: /api
    schemes:
     - http
    consumes:
     - application/json
    produces:
     - application/json
    paths:
     /students:
        get:
         description: Returns all students from the system that the user has access to
         operationId: findstudents
         produces:
           - application/json
           - application/xml
           - text/xml
           - text/html
         parameters:
           - name: limit
             in: query
             description: maximum number of results to return
             required: false
             type: integer
             format: int32
         responses:
           '200':
             description: student response
             schema:
               type: array
               items:
                 $ref: '#/definitions/student'
           default:
             description: unexpected error
             schema:
               $ref: '#/definitions/errorModel'
        post:
         description: Creates a new student in the school.  Duplicates are allowed
         operationId: addstudent
         produces:
           - application/json
         parameters:
           - name: student
             in: body
             description: student to add to the school
             required: true
             schema:
               $ref: '#/definitions/newstudent'
         responses:
           '200':
             description: student response
             schema:
               $ref: '#/definitions/student'
           default:
             description: unexpected error
             schema:
               $ref: '#/definitions/errorModel'
 /students/{id}:
        get:
         description: Returns a user based on a single ID, if the user does not have access to the student
         operationId: findstudentById
         produces:
           - application/json
           - application/xml
           - text/xml
           - text/html
         parameters:
           - name: id
             in: path
             description: ID of student to fetch
             required: true
             type: integer
             format: int64
           - name: ids
             in: query
             description: ID of student to fetch
             required: false
             type: integer
             format: int64
         responses:
           '200':
             description: student response
             schema:
               $ref: '#/definitions/student'
           default:
             description: unexpected error
             schema:
               $ref: '#/definitions/errorModel'
        delete:
         description: deletes a single student based on the ID supplied
         operationId: deletestudent
         parameters:
           - name: id
             in: path
             description: ID of student to delete
             required: true
             type: integer
             format: int64
         responses:
           '204':
             description: student deleted
           default:
             description: unexpected error
             schema:
               $ref: '#/definitions/errorModel'
    definitions:
     student:
        type: object
        required:
         - id
         - name
        properties:
         id:
           type: integer
           format: int64
         name:
           type: string
         tag:
           type: string
     newstudent:
        type: object
        required:
         - name
        properties:
         id:
           type: integer
           format: int64
         name:
           type: string
         tag:
           type: string
     errorModel:
        type: object
        required:
         - code
         - textMessage
        properties:
         code:
           type: integer
           format: int32
         textMessage:
           type: string


Generate the connector:
./ballerina swagger connector /home/sanjeewa/Desktop/sample.yaml  -p org.wso2 -d ./test
Then add the connector to the Composer and expose it as a service.

import ballerina.net.http;

@http:BasePath("/testService")
service echo {
    @http:POST
    resource echo(message m) {
        Default defaultConnector = create Default();
        message response1 = Default.employeeIDGet(defaultConnector, m);
        reply response1;
    }
}
connector Default() {

    http:ClientConnector endpoint = create http:ClientConnector("http://127.0.0.1:8080/students");

    action employeeIDDelete(Default c, message msg) (message) {
        message response;
        response = http:ClientConnector.delete(endpoint, http:getRequestURL(msg), msg);
        return response;
    }

    action employeeIDGet(Default c, message msg) (message) {
        message response;
        response = http:ClientConnector.get(endpoint, http:getRequestURL(msg), msg);
        return response;
    }

    action employeeIDPut(Default c, message msg) (message) {
        message response;
        response = http:ClientConnector.put(endpoint, http:getRequestURL(msg), msg);
        return response;
    }

    action rootGet(Default c, message msg) (message) {
        message response;
        response = http:ClientConnector.get(endpoint, http:getRequestURL(msg), msg);
        return response;
    }

    action rootPost(Default c, message msg) (message) {
        message response;
        response = http:ClientConnector.post(endpoint, http:getRequestURL(msg), msg);
        return response;
    }
}

Then you will see the relevant files in the output directory.

test
└── org
    └── wso2
        ├── default.bal
        ├── LICENSE
        ├── README.md
        └── types.json

Then you can copy the generated connector code into the Composer and start your service development. This is how it appears in the Composer source view:


This is how it is loaded in the Composer UI.
    Then run it.
     ./ballerina run service ./testbal.bal

Now invoke the Ballerina service as follows.

    curl -v -X POST http://127.0.0.1:9090/testService

    *   Trying 127.0.0.1...
    * Connected to 127.0.0.1 (127.0.0.1) port 9090 (#0)
    > POST /testService HTTP/1.1
    > Host: 127.0.0.1:9090
    > User-Agent: curl/7.43.0
    > Accept: */*
    >
    < HTTP/1.1 200 OK
    < Connection: keep-alive
    < Content-Length: 49
    < Content-Type: application/json
    <
    * Connection #0 to host 127.0.0.1 left intact
    {"code":4,"type":"ok","message":"test-ballerina"}

    Ushani BalasooriyaHow to auto generate salesforce search queries?

    If you are using salesforce as a developer you will need to know salesforce query language. Specially if you are using WSO2 salesforce connector, salesforce query is a must to know. Please read this article to know information on this.

    We have an awesome eclipse plugin which is available for you to perform this. In this blog post, I am demonstrating how to install it and to generate a sample query.

    For more information please have a look here.

    Steps :

    1. Install Eclipse IDE for Java developers
    2. Launch Eclipse and select Help -> Install New Software
3. Click Add, and in the repository dialog box, set the name to Force.com IDE and the location to https://developer.salesforce.com/media/force-ide/eclipse45. For Spring ’16 (Force.com IDE v36.0) and earlier Force.com IDE versions, use http://media.developerforce.com/force-ide/eclipse42.




4. Select the IDE and click Next to install.



    5. Accept terms and Finish.



    6. Restart the Eclipse.

7. When Eclipse restarts, select Window -> Open Perspective -> Other, select Force.com, and then click OK.






8. Now go to File -> New -> Force.com Project and provide your credentials to log in to your Salesforce account.



    9. Click Next and it will create a project on the left pane.


10. Double-click and open the schema; it will load the editor.



    11. Now you can click on the preferred SF object and its fields. It will generate the SF query accordingly. Then you can run it.
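For example, picking the Account object with its Name and Industry fields would produce a query along these lines (an illustrative SOQL statement, not tied to the screenshots above):

SELECT Id, Name, Industry FROM Account WHERE Industry = 'Technology' LIMIT 10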



    Reference: https://developer.salesforce.com/docs/atlas.en-us.eclipse.meta/eclipse/ide_install.htm

    sanjeewa malalgodaHow to use Ballerina code generator tools to generate connector from swagger definition - BallerinaLang

Download the samples and resources required for this project from this location.

Go to the Ballerina distribution:
/ballerina-0.8.0-SNAPSHOT/bin

Then run the following command, passing the Swagger input file; it will generate the connector.
Example commands for connector, skeleton, and mock service generation are, in order, as follows:

    ballerina swagger connector /tmp/testSwagger.json -d /tmp  -p wso2.carbon.test

    ballerina swagger skeleton /tmp/testSwagger.json -d /tmp  -p wso2.carbon.test

    ballerina swagger mock /tmp/testSwagger.json -d /tmp  -p wso2.carbon.test


    Command:
    >>./ballerina swagger connector /home/sanjeewa/Desktop/student.yaml -p org.wso2 -d ./test


    Please use following sample swagger definition for this.

    swagger: '2.0'
    info:
     version: '1.0.0'
     title: Swagger School (Simple)
     description: A sample API that uses a school as an example to demonstrate features in the swagger-2.0 specification
     termsOfService: http://helloreverb.com/terms/
     contact:
        name: Swagger API team
        email: foo@example.com
        url: http://swagger.io
     license:
        name: MIT
        url: http://opensource.org/licenses/MIT
    host: schol.swagger.io
    basePath: /api
    schemes:
     - http
    consumes:
     - application/json
    produces:
     - application/json
    paths:
     /students:
        get:
         description: Returns all students from the system that the user has access to
         operationId: findstudents
         produces:
           - application/json
           - application/xml
           - text/xml
           - text/html
         parameters:
           - name: limit
             in: query
             description: maximum number of results to return
             required: false
             type: integer
             format: int32
         responses:
           '200':
             description: student response
             schema:
               type: array
               items:
                 $ref: '#/definitions/student'
           default:
             description: unexpected error
             schema:
               $ref: '#/definitions/errorModel'
        post:
         description: Creates a new student in the school.  Duplicates are allowed
         operationId: addstudent
         produces:
           - application/json
         parameters:
           - name: student
             in: body
             description: student to add to the school
             required: true
             schema:
               $ref: '#/definitions/newstudent'
         responses:
           '200':
             description: student response
             schema:
               $ref: '#/definitions/student'
           default:
             description: unexpected error
             schema:
               $ref: '#/definitions/errorModel'
     /students/{id}:
        get:
         description: Returns a single student based on the ID supplied
         operationId: findstudentById
         produces:
           - application/json
           - application/xml
           - text/xml
           - text/html
         parameters:
           - name: id
             in: path
             description: ID of student to fetch
             required: true
             type: integer
             format: int64
           - name: ids
             in: query
             description: ID of student to fetch
             required: false
             type: integer
             format: int64
         responses:
           '200':
             description: student response
             schema:
               $ref: '#/definitions/student'
           default:
             description: unexpected error
             schema:
               $ref: '#/definitions/errorModel'
        delete:
         description: deletes a single student based on the ID supplied
         operationId: deletestudent
         parameters:
           - name: id
             in: path
             description: ID of student to delete
             required: true
             type: integer
             format: int64
         responses:
           '204':
             description: student deleted
           default:
             description: unexpected error
             schema:
               $ref: '#/definitions/errorModel'
    definitions:
     student:
        type: object
        required:
         - id
         - name
        properties:
         id:
           type: integer
           format: int64
         name:
           type: string
         tag:
           type: string
     newstudent:
        type: object
        required:
         - name
        properties:
         id:
           type: integer
           format: int64
         name:
           type: string
         tag:
           type: string
     errorModel:
        type: object
        required:
         - code
         - textMessage
        properties:
         code:
           type: integer
           format: int32
         textMessage:
           type: string


    Then you will see the generated files in the output directory:

    ├── test
      └── org
          └── wso2
              ├── default.bal
              ├── LICENSE
              ├── README.md
              └── types.json


    Now copy this connector content into the Ballerina editor and load it as a connector.

    import ballerina.lang.messages;
    import ballerina.lang.system;
    import ballerina.net.http;
    import ballerina.lang.jsonutils;
    import ballerina.lang.exceptions;
    import ballerina.lang.arrays;
    connector Default(string text) {
       action Addstudent(string msg, string auth)(message ) {
          http:ClientConnector rmEP = create http:ClientConnector("http://127.0.0.1:8080");
          message request = {};
          message requestH;
          message response;
          requestH = authHeader(request, auth);
          response = http:ClientConnector.post(rmEP, "/students", requestH);
          return response;
         
       }
        action Findstudents(string msg, string auth)(message ) {
          http:ClientConnector rmEP = create http:ClientConnector("http://127.0.0.1:8080");
          message request = {};
          message requestH;
          message response;
          requestH = authHeader(request, auth);
          response = http:ClientConnector.get(rmEP, "/students", requestH);
          return response;
         
       }
       
    }
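
    Both actions call an authHeader helper that is not shown in the listing above. A minimal sketch of what such a helper could look like, assuming the Ballerina 0.8 messages API (treat this as an illustration, not the generator's exact output):

    function authHeader(message request, string auth)(message) {
       // attach the caller-supplied credentials as an Authorization header
       messages:setHeader(request, "Authorization", auth);
       return request;
    }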


    Then go to the editor view to see the loaded Ballerina connector.


    Now we can start writing our service using the generated connector. The following sample service definition calls the connector and returns its output.


    @http:BasePath("/connector-test")
    service testService {
       
       @http:POST
       @http:Path("/student")
       resource getIssueFromID(message m) {
          StudentConnector studentConnector = create StudentConnector("test");
          message response = {};
          // action invocation uses the TypeName.action(instance, args...) style seen in the connector code
          response = StudentConnector.Findstudents(studentConnector, "", "");
          json complexJson = messages:getJsonPayload(response);
          json rootJson = `{"root":"someValue"}`;
          jsonutils:set(rootJson, "$.root", complexJson);
          string tests = jsonutils:toString(rootJson);
          system:println(tests);
          reply response;
         
       }
       
    }





    Now we need to start the back end for the generated connector. Go to the student-msf4j-server directory and build it:
    /swagger-connector-demo/student-msf4j-server>> mvn clean install

    Now you will see the microservice jar file generated. Then run the MSF4J service using the following command.
    /swagger-connector-demo/student-msf4j-server>> java -jar target/swagger-jaxrs-server-1.0.0.jar
    starting Micro Services
    2017-02-19 21:37:44 INFO  MicroservicesRegistry:55 - Added microservice: io.swagger.api.StudentsApi@25f38edc
    2017-02-19 21:37:44 INFO  MicroservicesRegistry:55 - Added microservice: org.wso2.msf4j.internal.swagger.SwaggerDefinitionService@17d99928
    2017-02-19 21:37:44 INFO  NettyListener:68 - Starting Netty Http Transport Listener
    2017-02-19 21:37:44 INFO  NettyListener:110 - Netty Listener starting on port 8080
    2017-02-19 21:37:44 INFO  MicroservicesRunner:163 - Microservices server started in 307ms

    Now we can check whether the MSF4J service is running, using cURL as follows.
    curl -v http://127.0.0.1:8080/students
    *   Trying 127.0.0.1...
    * Connected to 127.0.0.1 (127.0.0.1) port 8080 (#0)
    > GET /students HTTP/1.1
    > Host: 127.0.0.1:8080
    > User-Agent: curl/7.43.0
    > Accept: */*
    >
    < HTTP/1.1 200 OK
    < Connection: keep-alive
    < Content-Length: 41
    < Content-Type: application/json
    <
    * Connection #0 to host 127.0.0.1 left intact
    {"code":4,"type":"ok","message":"magic!"}


    Now we have the MSF4J student service up and running, a connector pointed at it, and a service that uses that connector. So we can start the Ballerina service with the final Ballerina file and then invoke the student service as follows.
    curl -v http://127.0.0.1:/connector-test/student
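
    As a sketch of those last two steps, assuming the combined connector and service are saved as student-service.bal (a hypothetical file name) and that the service comes up on Ballerina's default HTTP port, 9090:

    ./ballerina run service student-service.bal
    curl -v http://127.0.0.1:9090/connector-test/student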

    Afkham AzeezWSO2 started out as a middleware company.

    WSO2 started out as a middleware company. Since then, we’ve realized — and championed the fact that our products enable not just technological infrastructure, but radically change how a company works. All over the world, enterprises use our products to maximize revenue, create entirely new customer experiences and products, and interact with their employees in radically different ways. We call this digital transformation — the evolution of a company from one age to another, and our role in this has become more a technology partner than a simple software provider.

    In this realization, we’ve announced WSO2 Enterprise Integrator (EI) 6.0. Enterprise Integrator brings together all of the products and technologies WSO2’s created for the enterprise integration domain — a single package of digital transformation tools closely connected together for ease of use.

    When less is more

    Those of you who are familiar with WSO2 products will know that we had more than 20 products across the entire middleware stack.

    The rationale behind having such a wide array of products was to enable systems architects and developers to pick and choose the relevant bits required to build their solution architecture. These products were categorized into several broad areas such as integration, analytics, Internet of Things (IoT) and so on.

    We realized that it was overwhelming for the architects and developers to figure out which products should be chosen. We also realized that digital transformation requires these products to be used in certain common patterns that mirrored five fields: Enterprise Integration, API Management, Internet of Things, Security and Smart Analytics.

    In order to make things easier for everyone, we decided to match our offerings to how they’re used best. In Integration, this means we’ve combined the functionality of the WSO2 Enterprise Service Bus, Message Broker, Data Services Server and others; now, rather than installing and setting up many products to implement an enterprise integration solution, you can simply download and run Enterprise Integrator 6 (EI 6.0).

    What’s it got?

    EI 6.0 contains service integration or service bus functionality. It has data integration, service and app hosting, messaging, business processes, analytics and tooling. It also contains connectors, which enable you to connect to external services and systems.

    The package contains the following runtimes:

    1. Service Bus

    Includes functionality from ESB, WSO2 Data Services Server (DSS) and WSO2 App Server (AS)

    2. Business Processes

    Includes functionality of WSO2 Business Process Server (BPS).

    3. Message Broker

    Includes the functionality of WSO2 Message Broker (MB). However, this is not to be used for purely message brokering solutions; this runtime is there for guaranteed-delivery integration scenarios and Enterprise Integration Patterns (EIPs).

    4. Analytics

    The analytics runtime for EI 6.0, useful for tracking performance, tracing mediation flows and more.

    In order to provide a unified user experience, we’ve made some changes to the directory structure. This is what it looks like now:

    The main runtime is the integrator or service bus runtime and all directories relevant to that runtime are at the top level.

    This is very similar to the directory structure we use for other WSO2 products; the main difference is the WSO2 directory, under which the other runtimes are available.

    Under the other runtimes, you find the same directory structure as the older releases of those products, as shown below.
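
    As a rough sketch of that layout (directory names as in the EI 6.x distributions; verify against your own download, since they may differ slightly between releases):

    <EI_HOME>
    ├── bin/                   scripts for the main integrator (service bus) runtime
    ├── conf/                  configuration for the main runtime
    └── wso2/
        ├── business-process/  BPS runtime, with its own bin and conf
        ├── broker/            message broker runtime
        └── analytics/         analytics runtime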

    One might ask why we’ve included multiple runtimes instead of putting everything in a single runtime. The reason for doing so is separation of concerns. Short-running, stateless integrations will be executed on the service bus runtime, while long-running and possibly stateful integrations will be executed on the BPS runtime. We also have optional runtimes such as message broker and analytics, which will be required only for certain integration scenarios and when analytics are required, respectively.

    By leaving out unnecessary stuff, we can reduce the memory footprint and ensure that only what is required is loaded. In addition, when it comes to configuration files, only files related to a particular runtime will be available under the relevant runtime’s directory.

    On the Management Console

    There’s also been a change to the port that the management console uses. The 9443 servlet transport port is no longer accessible; we now use the 8243 HTTPS port. Integration services, web apps, data services and the management console are all accessible only on the passthrough transport port, which defaults to 8243.

    Tooling

    Eclipse based tooling is available for the main integration and business process runtimes. For data integration, we recommend using the management console itself from the main integration runtime.

    Why 6.0?

    As the name implies, EI is an integration product. The most widely used product in the integration domain is the WSO2 Enterprise Service Bus (ESB), which in the industry is known to run billions of transactions per day. EI is in effect the evolution of WSO2 ESB 5.0, adding features coming from other products. Thus, it’s natural to dub this product 6.0 — the heart of it is still the same.

    However, we’ve ensured that the user experience is largely similar to what it was in terms of the features of the previous generation of products. The Carbon platform that underlies all of our products made it easy to achieve that goal.

    Migration to EI 6.0

    The migration cost from the older ESB, BPS, DSS and other related products to EI 6.0 is minimal. The same Synapse and Data Services languages, specifications and standards have been followed in EI 6.0. Minimal changes would be required for deployment automation scripts such as Puppet scripts: the directory structures are still very similar, and the configuration files haven’t changed.

    Up Next: Enterprise Integrator 7.0

    EI 6.0 is based on several languages: Synapse for mediation, BPMN & BPEL for business processes, and the DSS language for data integration.

    A user who wants to implement an integration scenario involving mediation, business processes and data integration has to learn several languages with different tooling. While it’s effective, we believe we can do better.

    At WSO2Con 2017, we just unveiled Ballerina, an entirely new language for integration. EI 7.0 will be completely based on Ballerina — a single language and tooling experience. Now the integration developer can concentrate on the scenario, and implement it using a single language and tool with first level support for visual tooling using a sequence diagram paradigm to define integration scenarios.

    However, 7.0 will come with a high migration cost. Customers who are already using WSO2 products in the integration domain can transition over to EI 6.0 — which we’ll be fully supporting — while planning on their 7.0 migration effort in the long term; the team will be working on tooling which will allow migration of major code to Ballerina.

    WSO2 will continue to develop EI 6 and EI 7 in parallel. This means new features and fixes will be released as WUM updates and newer releases of the EI 6.0 family will be available over the next few years so that existing users are not forced to migrate to EI 7.0. This is analogous to how Tomcat continues to release 5.x, 6.x, 7.x and so on.

    EI 6.0 is available for download at wso2.com/integration and on github.com/wso2/product-ei/releases. Try it out and let us know what you think — it’s entirely open source, so you can take a look under the hood if that takes your fancy. To report issues and make suggestions, head over to https://github.com/wso2/product-ei/issues.

    Need more information? Looking to deploy WSO2 in an enterprise production environment? Contact us and we’ll get in touch with you.



    Chandana NapagodaHow to clean Registry log (REG_LOG) table

    If you are using the WSO2 Governance Registry or API Manager product, you might already be aware that all registry-related actions are logged. The REG_LOG table is read for Solr indexing (store and publisher searching), and artifact metadata is indexed based on the REG_LOG table entries. However, over time this table can grow large, so as a maintenance step you can clean up obsolete records from it.

    You can use the queries below to delete obsolete records from the REG_LOG table. As with any manual cleanup, back up the registry database first.

    DELETE n1 FROM REG_LOG n1, REG_LOG n2 WHERE n1.REG_LOG_ID < n2.REG_LOG_ID AND n1.REG_PATH = n2.REG_PATH AND n1.REG_TENANT_ID = n2.REG_TENANT_ID;

    DELETE FROM REG_LOG WHERE REG_ACTION = 7;
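
    If you want to see how many superseded rows the first query would remove before actually deleting them, a count over the same self-join condition works as a dry run (a sketch):

    SELECT COUNT(DISTINCT n1.REG_LOG_ID) FROM REG_LOG n1, REG_LOG n2 WHERE n1.REG_LOG_ID < n2.REG_LOG_ID AND n1.REG_PATH = n2.REG_PATH AND n1.REG_TENANT_ID = n2.REG_TENANT_ID;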

    Tharindu EdirisingheSecure Software Development with 3rd Party Dependencies and Continuous Vulnerability Management

    When developing enterprise-class software applications, 3rd party libraries have to be used whenever necessary. It can be either to reduce development costs, to meet deadlines, or simply because the existing libraries already provide the functionality that you are looking for. Even though the software developed in-house in your organization follows best practices and adheres to security standards, you cannot be certain that your external dependencies meet the same standard. If the security of the dependencies is not evaluated, they may introduce serious vulnerabilities into the systems you develop. Thus it has been identified by OWASP as one of the top 10 vulnerabilities [1]. In this article, I will discuss how to manage the security of your project dependencies and how to develop a company policy for using 3rd party libraries. I will also discuss and demonstrate how this can be automated as a process in the software development life cycle.

    Before moving ahead with the topic, we need to be familiar with the technical jargon. Go through the following content to get some idea on them.

    What is a 3rd Party Library ?

    A reusable software component developed to be either freely distributed or sold by an entity other than the original vendor of the development platform.

    The third-party software component market thrives because many programmers believe that component-oriented development improves the efficiency and the quality of developing custom applications. Common third-party software includes macros, bots, and software/scripts to be run as add-ons for popular developing software. [2]

    Using 3rd Party Components in Software Development

    If you have developed software using any 3rd party library (here I have considered C# and Java as an example), following should be familiar to you where you have injected your external dependencies to your project in the IDE.
    3rd party dependencies of a C# project in Microsoft Visual Studio
    3rd party dependencies of a Maven based Java project in IntelliJ IDEA


    Direct 3rd Party Dependencies

    The external software components (developed by some other organization/s) that your project depends on are called direct 3rd party dependencies. In the following example, the project com.tharindue.calc-1.0 (developed by myself) depends on several other libraries which are developed not by me but by other organizations.


    Direct 3rd Party Dependencies with Known Vulnerabilities

    The external software components (developed by some other organization/s) with known vulnerabilities that your project depends on are called direct 3rd party dependencies with known vulnerabilities. In this example, the project that I work on depends on the commons-httpclient-3.1 component, which has several known vulnerabilities [3].


    Transitive 3rd Party Dependencies

    The software components that your external dependencies depend on are called transitive 3rd party dependencies. The project I work on depends on the com.noticfication.email and com.data.analyzer components, which are its direct 3rd party dependencies. These libraries have their own dependencies, as shown below. Since my project indirectly depends on those libraries, they are called transitive 3rd party dependencies.

    Transitive 3rd Party Dependencies with Known Vulnerabilities

    The software components with known vulnerabilities that your external dependencies depend on belong to this category. Here my project has a transitive 3rd party dependency on the mysql-connector-5.1.6 library, which has several known vulnerabilities.


    What is a Known Vulnerability

    When we use 3rd party libraries that are publicly available (or even proprietary), we may find a weakness in the library, in terms of security, that can be exploited. In such a case we can report the issue to the organization that develops the component so that they can fix it and release a higher version of the same component. They will then publicly announce the issue they fixed (through a CWE or a CVE, discussed later) so that developers of other projects that use the vulnerable component get to know about the issue and apply safety precautions to their systems.

    Common Weakness Enumeration (CWE)

    A formal list or dictionary of common software weaknesses that can occur in software's architecture, design, code or implementation that can lead to exploitable security vulnerabilities. CWE was created to serve as a common language for describing software security weaknesses; serve as a standard measuring stick for software security tools targeting these weaknesses; and to provide a common baseline standard for weakness identification, mitigation, and prevention efforts. [4]

    Common Vulnerabilities and Exposures (CVE)

    CVE is a list of information security vulnerabilities and exposures that aims to provide common names for publicly known cyber security issues. The goal of CVE is to make it easier to share data across separate vulnerability capabilities (tools, repositories, and services) with this "common enumeration." [5]

    CVE Example

    ID : CVE-2015-5262
    Overview :
    http/conn/ssl/SSLConnectionSocketFactory.java in Apache HttpComponents HttpClient before 4.3.6 ignores the http.socket.timeout configuration setting during an SSL handshake, which allows remote attackers to cause a denial of service (HTTPS call hang) via unspecified vectors.
    Severity: Medium
    CVSS Score: 4.3



    CVE vs. CWE

    Software weaknesses are errors that can lead to software vulnerabilities. A software vulnerability, such as those enumerated on the Common Vulnerabilities and Exposures (CVE®) List, is a mistake in software that can be directly used by a hacker to gain access to a system or network [6].

    Common Vulnerability Scoring System (CVSS)

    CVSS provides a way to capture the principal characteristics of a vulnerability, and produce a numerical score reflecting its severity, as well as a textual representation of that score. The numerical score can then be translated into a qualitative representation (such as low, medium, high, and critical) to help organizations properly assess and prioritize their vulnerability management processes [7].


    National Vulnerability Database (NVD)

    NVD is the U.S. government repository of standards based vulnerability management data represented using the Security Content Automation Protocol (SCAP). This data enables automation of vulnerability management, security measurement, and compliance. NVD includes databases of security checklists, security related software flaws, misconfigurations, product names, and impact metrics.


    Using 3rd Party Dependencies Securely - The Big Picture

    All the 3rd party dependencies (including transitive 3rd party dependencies) should be checked against the NVD to detect known security vulnerabilities.

    When developing software, we need to use external dependencies to achieve the required functionality. Before using a 3rd party software component, it is recommended to search in the National Vulnerability Database and verify that there are no known vulnerabilities existing in those 3rd party components. If there are known vulnerabilities, we have to check the possibility of using alternatives or mitigate the vulnerability in the component before using it.

    We can manually check the NVD to find out whether the external libraries we use have known vulnerabilities. However, when the project grows to the point where we use many external libraries, this cannot be done manually. For that, we can use tools; some examples are given below.

    Veracode : Software Composition Analysis (SCA)

    This is a web based tool (not free !) where you can upload your software project and it will analyze the dependencies and give you a vulnerability analysis report.

    Source Clear (SRC:CLR)

    This provides tools for analyzing known vulnerabilities in the external dependencies you use. The core functionality is available in the free version of this software.

    OWASP Dependency Check
    Dependency-Check is free and it is a utility that identifies project dependencies and checks if there are any known, publicly disclosed, vulnerabilities. Currently Java, .NET, Ruby, Node.js, and Python projects are supported; additionally, limited support for C/C++ projects is available for projects using CMake or autoconf. This tool can be part of a solution to the OWASP Top 10 2013 A9 - Using Components with Known Vulnerabilities.
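
    For Maven-based Java projects, for instance, Dependency-Check ships a Maven plugin that can be wired into the build so that every mvn verify run scans the dependency tree against the NVD. A minimal sketch (the version shown is an example from that era; use the current release):

    <plugin>
       <groupId>org.owasp</groupId>
       <artifactId>dependency-check-maven</artifactId>
       <version>1.4.5</version>
       <executions>
          <execution>
             <goals>
                <goal>check</goal>
             </goals>
          </execution>
       </executions>
    </plugin>

    The HTML report lands under the project's target directory, and the plugin can also be configured to fail the build when a vulnerability above a chosen CVSS score is found.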

    The following are some very good resources for learning more about the OWASP Dependency-Check tool.



    Continuous Vulnerability Management in a Corporate Environment


    When developing enterprise-level software in an organization, the developers cannot just use any 3rd party dependency that provides the required functionality. They should request approval from engineering management before using any 3rd party software component. Normally, engineering management would check license compatibility in this approval process. However, it is also important to make sure that the 3rd party dependency carries no known security risks. To achieve this, they can search the National Vulnerability Database and check whether known issues exist. If no known security risks are associated with the component, engineering management can approve its use. This happens in the initial phase of using 3rd party dependencies.

    During the development phase, the developers themselves can check if the 3rd party dependencies have any known vulnerabilities reported. They can use IDE plugins that automatically detect the project dependencies, query the NVD and give the vulnerability analysis report.

    During the testing phase, the quality assurance team also can perform a vulnerability analysis and certify that the software product does not use external dependencies with known security vulnerabilities.

    Assume that a particular 3rd party software component has no known security vulnerabilities reported at the moment. We pack it in our software and our customers start using it. Say that two months after the release, a serious security vulnerability is reported against that 3rd party component, which makes our software vulnerable to attack as well. How do we handle a scenario like this? In the build process of the software development organization, we can configure a periodic build job (using a build server like Jenkins, we can schedule a weekly/monthly build of the released product's source code). We can integrate plugins into Jenkins that query the NVD and detect vulnerabilities in the software. In this case, the retrieved vulnerability analysis report would contain the newly reported vulnerability, so we can create a patch and release it to customers to make our software safer to use. You can read more on this in [8].

    Above, we talked about handling the security of 3rd party software components in a continuous manner. We can call this continuous vulnerability management.

    Getting Rid of Vulnerable Dependencies

    Upgrade direct 3rd party dependencies to a higher version. (For example, Apache httpclient 3.1 has several known vulnerabilities, but a later version such as 4.5.2 has no reported vulnerabilities.)

    For transitive dependencies, check if the directly dependent component has a higher version that depends on a safer version of the transitive dependency.
    Contact the developers of the component and get the issue fixed.



    Challenges : Handling False Positives

    Even though the vulnerability analysis tools report that there are vulnerabilities in a 3rd party dependency, there can be cases where those are not applicable to your product because of the way you have used that software component.


    Challenges : Handling False Negatives

    Even though the vulnerability analysis tools report that your external dependencies are safe to use, still there can be unknown vulnerabilities.


    Summary

    Identify the external dependencies of your projects
    Identify the vulnerabilities in those dependencies
    Analyze the impact
    Remove false positives
    Prioritize the vulnerabilities based on severity
    Get rid of vulnerabilities (upgrade versions, use alternatives)
    Provide patches to your products



    Notes :

    This is the summary of the tech talk I did on Jun 15th, 2016 at the Colombo Security Meetup on the topic ‘Secure Software Development with 3rd Party Dependencies’.



    The event is listed on the official OWASP website https://www.owasp.org/index.php/Sri_Lanka



    References





    Tharindu Edirisinghe (a.k.a thariyarox)
    Independent Security Researcher

    Ushani BalasooriyaHow to use an existing java class method inside a script mediator in WSO2

    If you need to access a Java class method inside the WSO2 ESB script mediator, you can simply call it by its fully qualified name.

    Below is an example that calls the matches() method of the java.util.regex.Pattern class.

    You can simply do it as below.

      <script language="js" description="extract username">  
    var isMatch = java.util.regex.Pattern.matches(".*test.*", "This is a test description!");
    </script>

    You can access this value using the property mediator if you set it into the message context.


      mc.setProperty("isMatch",isMatch);   

    So a sample Synapse configuration will be:



        <script language="js" description="extract username">
    var isMatch = java.util.regex.Pattern.matches(".*test.*", "This is a test description!");
    mc.setProperty("isMatch",isMatch);
    </script>

    <log level="custom">
    <property name="isMatch" expression="get-property('isMatch')"/>
    </log>


    You can use this in a custom sequence in WSO2 API Manager as well to perform your task.

    As the example shows, the java.util.regex.Pattern.matches method gives you Java's regular expression support inside the script mediator.
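
    The same approach works for any class available on the ESB's classpath. As another illustrative sketch (the property name requestId is hypothetical), generating a request ID with java.util.UUID:

      <script language="js" description="generate request id">
    var requestId = java.util.UUID.randomUUID().toString();
    mc.setProperty("requestId", requestId);
    </script>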



    Chathurika Erandi De SilvaSample demonstration of using multipart/form-data with WSO2 ESB


    Say you need to process data that is sent as multipart/form-data using WSO2 ESB. The following steps walk you through a quick sample of how it can be done.

    Sample form

    <html>  
     <head><title>multipart/form-data - Client</title></head>  
     <body>   
    <form action="endpoint" method="POST" enctype="multipart/form-data">  
    User Name: <input type="text" name="name">  
    User id: <input type="text" name="id">  
    User Address: <input type="text" name="add">  
    AGE: <input type="text" name="age">  
     <br>   
    Upload :   
    <input type="file" name="datafile" size="40" multiple>  
     <input type="submit" value="Submit">  
     </form>  
     </body>  
    </html>

    Here the requirement is to invoke the endpoint defined through the form action on submit. A WSO2 ESB API will be used as the endpoint.

    For that, I have created a sample API in the ESB as below:

    <api xmlns="http://ws.apache.org/ns/synapse" name="MyAPI" context="/myapi">
      <resource methods="POST GET" inSequence="mySeq"/>
    </api>

    The mySeq sequence above just contains a log mediator with the level set to full.

    Now provide the ESB endpoint to your form as below

    <html>  
     <head><title>multipart/form-data - Client</title></head>  
     <body>   
    <form action="http://<ip>:8280/myapi" method="POST" enctype="multipart/form-data">  
    User Name: <input type="text" name="name">  
    User id: <input type="text" name="id">  
    User Address: <input type="text" name="add">  
    AGE: <input type="text" name="age">  
     <br>   
    Upload :   
    <input type="file" name="datafile" size="40" multiple>  
     <input type="submit" value="Submit">  
     </form>  
     </body>  
    </html>

    Now open the above HTML in a browser, fill in the details and submit. Once done, output similar to the following will appear in the ESB console.

    [2017-02-15 16:52:05,411]  INFO - LogMediator To: /myapi, MessageID: urn:uuid:80b7a0b0-6769-4a8f-9c66-e5d247bb7ad0, Direction: request, Envelope: <?xml version='1.0' encoding='utf-8'?><soapenv:Envelope xmlns:soapenv="http://www.w3.org/2003/05/soap-envelope"><soapenv:Body><mediate><add>test@gmail.com</add><datafile></datafile><age>23</age><id>001</id><name>naleen</name></mediate></soapenv:Body></soapenv:Envelope>
    [2017-02-15 17:06:24,890]  INFO - CarbonAuthenticationUtil 'admin@carbon.super [-1234]' logged in at [2017-02-15 17:06:24,889+0530]

    What really happens backstage?

    WSO2 ESB contains a message builder as below

    <messageBuilder contentType="multipart/form-data"
                           class="org.apache.axis2.builder.MultipartFormDataBuilder"/>

    This builds the incoming multipart/form-data payload and turns it into a processable message, as shown in the sample above. Now any of the ESB mediators can be used to process it as needed.
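
    Since the form fields arrive as plain XML elements in the SOAP body (as seen in the log above), individual values can be picked out with ordinary XPath. A sketch that could be added to mySeq, assuming the payload shape from the log:

    <property name="userName" expression="//name/text()"/>
    <log level="custom">
       <property name="userName" expression="get-property('userName')"/>
    </log>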

    Ushani BalasooriyaHow to include batch of test data in to Salesforce Dev accounts?

    When you work with Salesforce, you will need test data in your Salesforce dev account. In WSO2, if you use the Salesforce connector, sometimes you will need to deal with the queryMore function. For more information, please check this link. This is a sample of how to include test data in Salesforce. Salesforce itself provides an awesome tool called Data Loader. You can go to its documentation from this link. I'm going to use this in an open source/Linux environment.

    Prerequisite: JDK 1.8

    Step 1: Install Data Loader.

    1. Check out the code from git (https://github.com/forcedotcom/dataloader):

    git clone https://github.com/forcedotcom/dataloader.git

    2. Build it:

    mvn clean package -DskipTests

    3. Run the Data Loader:

    java -jar target/dataloader-39.0-uber.jar

    Step 2: Log in to Data Loader.

    Provide your username (email address) and password along with your security token and login URL, e.g. https://login.salesforce.com/services/Soap/u/39.0. I have explained how to find your API login URL in one of my previous blog posts.

    Step 3: Create your test data.

    Click on "Export", click Next, and select the Salesforce object (here I have selected Account) where you need test data. Then select the fields from the check boxes and click Finish. The existing data will be exported into a CSV file. Open the exported CSV in a spreadsheet and create any number of test rows just by dragging the last cell; it will increment the data in each cell.

    Note: You should delete the existing data rows from the CSV before you upload, so that only the newly created data remains.

    Step 4: Import the test data with Data Loader.

    The next step is to click on "Import" -> select the Salesforce object (here, Account) -> click Next -> click on Create or Edit a Map -> map the attributes to the columns in the CSV. Click Next -> Finish. Select a file location to save error files. It will then insert the bulk data and notify you once it has finished successfully. You can also view errors if any exist.

    Now if you query Salesforce from the Developer Console, you will be able to see your data. That's it! :) Happy coding!
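
    For example, you can verify the upload from the Developer Console with a simple SOQL query (a sketch; the object and fields match the Account example above):

    SELECT Id, Name FROM Account ORDER BY CreatedDate DESC LIMIT 10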

    Charini NanayakkaraEnable/Disable Security in Firefox


    1. Open new tab
    2. Enter about:config
    3. Search browser.urlbar.filter.javascript
    4. Double-click the preference to toggle its value (true means the filter is enabled)

    Dhananjaya jayasingheHow to get all the default claims when using JWT - WSO2 API Manager

    There are situations where we need to pass the end user's attributes to the backend services when using WSO2 API Manager. We can use JSON Web Tokens (JWT) for that.

    You can find the documentation for this on the WSO2 site [1].

    Here I am going to discuss how we can get all the default claims in the JWT token, since just enabling the EnableJWTGeneration configuration will not give you all the claims.

    If you just enable the above, the configuration will look as follows.

       <JWTConfiguration>  
    <!-- Enable/Disable JWT generation. Default is false. -->
    <EnableJWTGeneration>true</EnableJWTGeneration>
    <!-- Name of the security context header to be added to the validated requests. -->
    <JWTHeader>X-JWT-Assertion</JWTHeader>
    <!-- Fully qualified name of the class that will retrieve additional user claims
    to be appended to the JWT. If not specified no claims will be appended.If user wants to add all user claims in the
    jwt token, he needs to enable this parameter.
    The DefaultClaimsRetriever class adds user claims from the default carbon user store. -->
    <!--ClaimsRetrieverImplClass>org.wso2.carbon.apimgt.impl.token.DefaultClaimsRetriever</ClaimsRetrieverImplClass-->
    <!-- The dialectURI under which the claimURIs that need to be appended to the
    JWT are defined. Not used with custom ClaimsRetriever implementations. The
    same value is used in the keys for appending the default properties to the
    JWT. -->
    <!--ConsumerDialectURI>http://wso2.org/claims</ConsumerDialectURI-->
    <!-- Signature algorithm. Accepts "SHA256withRSA" or "NONE". To disable signing explicitly specify "NONE". -->
    <!--SignatureAlgorithm>SHA256withRSA</SignatureAlgorithm-->
    <!-- This parameter specifies which implementation should be used for generating the Token. JWTGenerator is the
    default implementation provided. -->
    <JWTGeneratorImpl>org.wso2.carbon.apimgt.keymgt.token.JWTGenerator</JWTGeneratorImpl>
    <!-- This parameter specifies which implementation should be used for generating the Token. For URL safe JWT
    Token generation the implementation is provided in URLSafeJWTGenerator -->
    <!--<JWTGeneratorImpl>org.wso2.carbon.apimgt.keymgt.token.URLSafeJWTGenerator</JWTGeneratorImpl>-->
    <!-- Remove UserName from JWT Token -->
    <!-- <RemoveUserNameFromJWTForApplicationToken>true</RemoveUserNameFromJWTForApplicationToken>-->
    </JWTConfiguration>


    Then, by enabling wire logs [2], we can capture the encoded JWT token when an API is invoked.


    When we decode it, it looks as follows.



    You will notice that the role claim is not present. Basically, if you need all the default claims passed in this JWT token, you need to enable the following two configurations in api-manager.xml.



      <ClaimsRetrieverImplClass>org.wso2.carbon.apimgt.impl.token.DefaultClaimsRetriever</ClaimsRetrieverImplClass>  


     <ConsumerDialectURI>http://wso2.org/claims</ConsumerDialectURI>  

    Once you enable them and restart the server, you will get all the default claims in the token as below.
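
    To inspect the token yourself, note that a JWT consists of three base64url-encoded parts separated by dots. A minimal decoding sketch (the class name is hypothetical):

    import java.util.Base64;

    public class JwtPayloadDecoder {
        public static void main(String[] args) {
            // args[0] is the X-JWT-Assertion header value captured from the wire logs
            String[] parts = args[0].split("\\.");   // header.payload.signature
            byte[] payload = Base64.getUrlDecoder().decode(parts[1]);
            System.out.println(new String(payload)); // the JSON claim set
        }
    }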



    [1] https://docs.wso2.com/display/AM210/Passing+Enduser+Attributes+to+the+Backend+Using+JWT

    [2] http://mytecheye.blogspot.com/2013/09/wso2-esb-all-about-wire-logs.html

    Himasha GurugeFirefox issue with javascript functions directly called on tags

    If you add a JavaScript function call directly on an HTML link like below, you will run into issues in Firefox.

    <a href="javascript:functionA();" />

    This is because, if functionA returns some value (true/false) other than undefined, the return value will be converted to a string and rendered as the new page content, which will leave you on a blank page. Therefore it is always better to attach a JS function like below.

    <a href="#" onclick="functionA();"/>
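
    One refinement worth noting (an addition to the original example): returning false from the onclick handler also prevents the browser from following the "#" href, which would otherwise jump to the top of the page.

    <a href="#" onclick="functionA(); return false;"/>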

    Chamalee De SilvaHow to install datamapper mediator in WSO2 API Manager 2.1.0

    WSO2 API Manager 2.1.0 was released recently with outstanding new features and many improvements and bug fixes. Many mediators are supported by WSO2 API Manager out of the box, and some of them have to be installed as features.

    This blog post will guide you on how to install datamapper mediator as a feature in WSO2 API Manager 2.1.0.

    Download WSO2 API Manager 2.1.0 from the product web page if you haven't done so already.

    Please follow the below steps to install the datamapper mediator.

    1. Extract the product and start the server.

    2. Go to https://<host_address>:9443+offset/carbon and log in with admin credentials.

    3. Go to Configure > Features > Repository Management.

    4. Click on "Add Repository ".

    5. Give a name to the repository, add the P2 repository URL, which is http://product-dist.wso2.com/p2/carbon/releases/wilkes/, and click Add.


    This will add the repository to your API Manager.

    6. Now click on Available features tab, un-tick "Group features by category" and click on "Find Features" button to list the features in the repository.


    7. Filter by the feature name "datamapper" and you will get two versions of the datamapper mediator Aggregate feature: mediator versions 4.6.6 and 4.6.10.

    The relevant mediator version for API Manager 2.1.0 is Mediator version 4.6.10.

    8. Click on the datamapper mediator Aggregate feature with version 4.6.10 and install it.


    9. Allow restarting the server after installation.


    This will install the datamapper server feature and the datamapper UI feature in your API Manager instance. Now you have to install the datamapper engine feature. To do that, follow the steps below.

    Installing datamapper engine feature : 

    1. Go to WSO2 nexus repository :  https://maven.wso2.org/nexus/

    2. Type "org.wso2.carbon.mediator.datamapper.engine" in search bar and search for the jar file.



    3. You will find the set of releases of the org.wso2.carbon.mediator.datamapper.engine archives.


    4. Select the 4.6.10 version, select the jar from the archives and download it.

    5. Go to the <APIM_HOME>/repository/components/dropins directory of your API Manager instance and copy the downloaded jar (org.wso2.carbon.mediator.datamapper.engine_4.6.10.jar) into it.

    6. Restart WSO2 API Manager.


    Now you have an API Manager instance where you have successfully installed datamapper mediator. 


    Go ahead with mediation !!!


    Amalka SubasingheWSO2 ESB communication with WSO2 ESB Analytics

    This blog post is about how WSO2 ESB connects to WSO2 ESB Analytics and which ports are involved.

    How to configure: This document explains how to configure it
    https://docs.wso2.com/display/ESB500/Prerequisites+to+Publish+Statistics

    Let's say we want to run the WSO2 ESB and WSO2 ESB Analytics packs on the same physical machine; then we have to offset one instance. But we don't need to do that ourselves, since WSO2 ESB Analytics by default comes with the offset.

    So WSO2 ESB will run on port 9443 and WSO2 ESB Analytics will run on port 9444.

    WSO2 ESB publishes data to WSO2 ESB Analytics via Thrift. By default the Thrift port is 7611 and the corresponding SSL Thrift port is 7711 (7611+100); check the data-bridge-config.xml file, which is in the analytics server's config directory.

    Since the analytics products ship with offset 1, the Thrift port is 7612 and the SSL port is 7712. Here, the SSL port (7712) is used for the initial authentication of the data publisher; afterwards, the Thrift port (7612) is used for event publishing.

    Here's a common error people run into when configuring analytics with WSO2 ESB.

    [2017-02-14 19:42:56,477] ERROR - DataEndpointConnectionWorker Error while trying to connect to the endpoint. Cannot borrow client for ssl://localhost:7713
    org.wso2.carbon.databridge.agent.exception.DataEndpointAuthenticationException: Cannot borrow client for ssl://localhost:7713
            at org.wso2.carbon.databridge.agent.endpoint.DataEndpointConnectionWorker.connect(DataEndpointConnectionWorker.java:99)
            at org.wso2.carbon.databridge.agent.endpoint.DataEndpointConnectionWorker.run(DataEndpointConnectionWorker.java:42)
            at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
            at java.util.concurrent.FutureTask.run(FutureTask.java:266)
            at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
            at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
            at java.lang.Thread.run(Thread.java:745)
    Caused by: org.wso2.carbon.databridge.agent.exception.DataEndpointSecurityException: Error while trying to connect to ssl://localhost:7713
            at org.wso2.carbon.databridge.agent.endpoint.thrift.ThriftSecureClientPoolFactory.createClient(ThriftSecureClientPoolFactory.java:61)
            at org.wso2.carbon.databridge.agent.client.AbstractClientPoolFactory.makeObject(AbstractClientPoolFactory.java:39)
            at org.apache.commons.pool.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:1212)
            at org.wso2.carbon.databridge.agent.endpoint.DataEndpointConnectionWorker.connect(DataEndpointConnectionWorker.java:91)
            ... 6 more
    Caused by: org.apache.thrift.transport.TTransportException: Could not connect to localhost on port 7714
            at org.apache.thrift.transport.TSSLTransportFactory.createClient(TSSLTransportFactory.java:237)
            at org.apache.thrift.transport.TSSLTransportFactory.getClientSocket(TSSLTransportFactory.java:169)
            at org.wso2.carbon.databridge.agent.endpoint.thrift.ThriftSecureClientPoolFactory.createClient(ThriftSecureClientPoolFactory.java:56)
            ... 9 more
    Caused by: java.net.ConnectException: Connection refused: connect
            at java.net.DualStackPlainSocketImpl.connect0(Native Method)
            at java.net.DualStackPlainSocketImpl.socketConnect(DualStackPlainSocketImpl.java:79)
            at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
            at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
            at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
            at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:172)
            at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
            at java.net.Socket.connect(Socket.java:589)
            at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:668)
            at sun.security.ssl.SSLSocketImpl.<init>(SSLSocketImpl.java:427)
            at sun.security.ssl.SSLSocketFactoryImpl.createSocket(SSLSocketFactoryImpl.java:88)
            at org.apache.thrift.transport.TSSLTransportFactory.createClient(TSSLTransportFactory.java:233)
            ... 11 more

    This happens when people change the thrift port in the following configuration files by adding another 1 (7612+1), thinking that the analytics server's offset of 1 must be applied again; the shipped values already account for the offset.

    <ESB_HOME>/repository/deployment/server/eventpublishers/MessageFlowConfigurationPublisher.xml
    <ESB_HOME>/repository/deployment/server/eventpublishers/MessageFlowStatisticsPublisher.xml
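
    The fix is to leave the ports at the values the analytics pack already expects: 7612 for the Thrift receiver and 7712 for SSL authentication. A sketch of the relevant part of such a publisher file (the property names follow the wso2event output adapter; verify against your own file):

    <to eventAdapterType="wso2event">
       <property name="receiverURL">tcp://localhost:7612</property>
       <property name="authURL">ssl://localhost:7712</property>
       <property name="protocol">thrift</property>
       <property name="publishingMode">non-blocking</property>
       <property name="username">admin</property>
       <property name="password">admin</property>
    </to>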