WSO2 Venus

Lahiru Sandaruwan

How to Aggregate Responses of two REST endpoints and convert to JSON using WSO2 ESB



Here is a use case I came across recently. The ESB receives responses from two REST endpoints, aggregates them into a single XML body, and converts that body to JSON before sending back the response.

Highlights


  • Using the clone mediator to clone the message and send it to two endpoints
  • Using the aggregate mediator to aggregate the responses from the two endpoints
    • Enclosing the aggregated response using a custom tag
  • Converting the XML response to JSON using the property mediator (messageType)

Proxy

<proxy xmlns="http://ws.apache.org/ns/synapse"
       name="AggregateMessageProxy"
       transports="https,http"
       statistics="disable"
       trace="disable"
       startOnLoad="true">
   <target>
      <inSequence>
         <property name="enclosing_element" scope="default">
            <result xmlns=""/>
         </property>
         <clone>
            <target>
               <endpoint name="v1">
                  <address uri="http://api_1/api" format="get"/>
               </endpoint>
            </target>
            <target>
               <endpoint name="v2">
                  <address uri="http://api_2/api" format="get"/>
               </endpoint>
            </target>
         </clone>
      </inSequence>
      <outSequence>
         <aggregate>
            <completeCondition>
               <messageCount min="2" max="-1"/>
            </completeCondition>
            <onComplete xmlns:s12="http://www.w3.org/2003/05/soap-envelope"
                        xmlns:s11="http://schemas.xmlsoap.org/soap/envelope/"
                        xmlns:m0="http://services.samples"
                        expression="s11:Body/child::* | s12:Body/child::*"
                        enclosingElementProperty="enclosing_element">
               <property name="messageType" value="application/json" scope="axis2"/>
               <log level="full" separator=","/>
               <send/>
            </onComplete>
         </aggregate>
      </outSequence>
   </target>
   <description/>
</proxy>

Jayanga Dissanayake

How to register a servlet from a Carbon Component

There are three ways to register a servlet in Carbon:

  • Specifying the servlet details in the web.xml file
  • Specifying the servlet details in the component.xml file
  • Registering the servlet with httpService in your component

You can find the sample code in : https://github.com/jsdjayanga/How-to-register-a-servlet-from-a-Carbon-Component

Specifying the servlet details in the web.xml file

Specifying the servlet details in the web.xml file is not recommended when working with the Carbon framework, as you have less control over the servlet when it is specified directly in the web.xml.

Of the remaining two, neither is bad; it is totally up to the developer to decide what is best for a given scenario.

Specifying the servlet details in the component.xml file

Specifying the servlet details in the component.xml file is the easiest way of doing this.

In this approach, you need to have your own HttpServlet implementation (a minimal sketch of such a servlet is given after the configuration below). Then you have to specify the details about your servlet in the component.xml file, as follows:


<component xmlns="http://products.wso2.org/carbon">
    <servlets>
        <servlet id="SampleServlet">
            <servlet-name>sampleServlet</servlet-name>
            <url-pattern>/sampleservlet</url-pattern>
            <display-name>Sample Servlet</display-name>
            <servlet-class>
                org.wso2.carbon.samples.xmlbased.SampleServlet
            </servlet-class>
        </servlet>
    </servlets>
</component>

Once you restart the server, with your compiled .jar in the dropins directory (repository/components/dropins), all the requests to http://ip:port/sampleservlet will be routed to your custom servlet (org.wso2.carbon.samples.xmlbased.SampleServlet).
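For reference, here is the minimal servlet sketch referred to above. Only the class name org.wso2.carbon.samples.xmlbased.SampleServlet comes from the configuration; the response body is purely illustrative.

package org.wso2.carbon.samples.xmlbased;

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Minimal illustrative servlet; a real implementation can do anything an HttpServlet can
public class SampleServlet extends HttpServlet {

    private static final long serialVersionUID = 1L;

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/plain");
        response.getWriter().write("Hello from SampleServlet");
    }
}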


Registering the servlet with httpService

Registering the servlet with httpService allows you to dynamically register and unregister servlets. This gives you more control over the availability of the servlet.

In this approach, you need to have your HttpServlet implementation. Then you have to register your servlet with the org.osgi.service.http.HttpService once your bundle gets activated.


httpService.registerServlet("/sampledynamicservlet", new SampleDynamicServlet(), null, null);

From then onwards, requests received for http://ip:port/sampledynamicservlet will be routed to your custom servlet.

In this approach you can also unregister your servlet; this causes http://ip:port/sampledynamicservlet to become unavailable.


httpService.unregister("/sampledynamicservlet");
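
For completeness, the snippet below is a hedged sketch of how the register and unregister calls above could sit inside an OSGi declarative-services component. Only the HttpService calls and the SampleDynamicServlet name come from the post; the component class, its name, and the scr tags are illustrative assumptions.

package org.wso2.carbon.samples.dynamic;

import org.osgi.service.component.ComponentContext;
import org.osgi.service.http.HttpService;

/**
 * Illustrative component that registers the servlet when the bundle is
 * activated and unregisters it when the bundle is deactivated.
 * SampleDynamicServlet is assumed to be an HttpServlet in the same package.
 *
 * @scr.component name="sample.dynamic.servlet.component" immediate="true"
 * @scr.reference name="http.service" interface="org.osgi.service.http.HttpService"
 * cardinality="1..1" policy="dynamic" bind="setHttpService" unbind="unsetHttpService"
 */
public class SampleDynamicServletComponent {

    private HttpService httpService;

    protected void activate(ComponentContext ctxt) throws Exception {
        // Register the servlet under /sampledynamicservlet with no init parameters or HttpContext
        httpService.registerServlet("/sampledynamicservlet", new SampleDynamicServlet(), null, null);
    }

    protected void deactivate(ComponentContext ctxt) {
        // Makes http://ip:port/sampledynamicservlet unavailable again
        httpService.unregister("/sampledynamicservlet");
    }

    protected void setHttpService(HttpService httpService) {
        this.httpService = httpService;
    }

    protected void unsetHttpService(HttpService httpService) {
        this.httpService = null;
    }
}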


Evanthika Amarasiri

How to solve the famous token regeneration issue in an API-M cluster

In an API Manager clustered environment (in my case, I have a publisher, a store, two gateway nodes and two key manager nodes fronted by a WSO2 ELB 2.1.1), if you come across an error saying Error in getting new accessToken while regenerating tokens, with an exception like the one below at the Key Manager node, then this is due to a configuration issue.

TID: [0] [AM] [2014-09-19 05:41:28,321]  INFO {org.wso2.carbon.core.services.util.CarbonAuthenticationUtil} -  'Administrator@carbon.super [-1234]' logged in at [2014-09-19 05:41:28,321-0400] {org.wso2.carbon.core.services.util.CarbonAuthenticationUtil}
TID: [0] [AM] [2014-09-19 05:41:28,537] ERROR {org.wso2.carbon.apimgt.keymgt.service.APIKeyMgtSubscriberService} -  Error in getting new accessToken {org.wso2.carbon.apimgt.keymgt.service.APIKeyMgtSubscriberService}
TID: [0] [AM] [2014-09-19 05:41:28,538] ERROR {org.apache.axis2.rpc.receivers.RPCMessageReceiver} -  Error in getting new accessToken {org.apache.axis2.rpc.receivers.RPCMessageReceiver}
java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
    at java.lang.reflect.Method.invoke(Method.java:619)
    at org.apache.axis2.rpc.receivers.RPCUtil.invokeServiceClass(RPCUtil.java:212)
    at org.apache.axis2.rpc.receivers.RPCMessageReceiver.invokeBusinessLogic(RPCMessageReceiver.java:117)
    at org.apache.axis2.receivers.AbstractInOutMessageReceiver.invokeBusinessLogic(AbstractInOutMessageReceiver.java:40)
    at org.apache.axis2.receivers.AbstractMessageReceiver.receive(AbstractMessageReceiver.java:110)
    at org.apache.axis2.engine.AxisEngine.receive(AxisEngine.java:180)
    at org.apache.axis2.transport.http.HTTPTransportUtils.processHTTPPostRequest(HTTPTransportUtils.java:172)
    at org.apache.axis2.transport.http.AxisServlet.doPost(AxisServlet.java:146)
    at org.wso2.carbon.core.transports.CarbonServlet.doPost(CarbonServlet.java:231)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:755)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
    at org.eclipse.equinox.http.servlet.internal.ServletRegistration.service(ServletRegistration.java:61)
    at org.eclipse.equinox.http.servlet.internal.ProxyServlet.processAlias(ProxyServlet.java:128)
    at org.eclipse.equinox.http.servlet.internal.ProxyServlet.service(ProxyServlet.java:68)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
    at org.wso2.carbon.tomcat.ext.servlet.DelegationServlet.service(DelegationServlet.java:68)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:305)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
    at org.wso2.carbon.tomcat.ext.filter.CharacterSetFilter.doFilter(CharacterSetFilter.java:61)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)
    at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:472)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
    at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:178)
    at org.wso2.carbon.tomcat.ext.valves.CarbonTomcatValve$1.invoke(CarbonTomcatValve.java:47)
    at org.wso2.carbon.webapp.mgt.TenantLazyLoaderValve.invoke(TenantLazyLoaderValve.java:56)
    at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:47)
    at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:141)
    at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:156)
    at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:936)
    at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:52)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
    at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1004)
    at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589)
    at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1653)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1176)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
    at java.lang.Thread.run(Thread.java:853)
Caused by:
org.wso2.carbon.apimgt.keymgt.APIKeyMgtException: Error in getting new accessToken
    at org.wso2.carbon.apimgt.keymgt.service.APIKeyMgtSubscriberService.renewAccessToken(APIKeyMgtSubscriberService.java:281)
    ... 45 more
Caused by:
java.lang.RuntimeException: Token revoke failed : HTTP error code : 404
    at org.wso2.carbon.apimgt.keymgt.service.APIKeyMgtSubscriberService.renewAccessToken(APIKeyMgtSubscriberService.java:252)
    ... 45 more


This is what you have to do to solve this issue.

1. In your Gateway nodes, you need to change the host and the port values of the below APIs that reside under $APIM_HOME/repository/deployment/server/synapse-configs/default/api:

  • _TokenAPI_.xml
  • _AuthorizeAPI_.xml
  • _RevokeAPI_.xml

If you get an HTTP 302 error at the Key Manager side while regenerating the token, also make sure to check the RevokeURL in the api-manager.xml of the Key Manager node to see if it is pointing to the NIO port of the Gateway node.

Umesha Gunasinghe

Creating a metadata file for WSO2 IS as SP in a federation scenario

In today's post I would like to share some tips that you will need while creating a metadata file to be used with WSO2 IS.

Use Case :-

WSO2 IS has the capability of federating with multiple identity providers. Some of these IdPs request a metadata file in order to register IS as a trusted SP. For this we need to generate a metadata file for IS, but auto generation of the metadata file is not yet available in IS 5.0.0, hence we have to create it manually.

Following is a general metadata file for IS as an SP.




<EntityDescriptor entityID="carbonServer" xmlns="urn:oasis:names:tc:SAML:2.0:metadata">
    <SPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
        <NameIDFormat>urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified</NameIDFormat>
        <AssertionConsumerService index="1" Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"
            Location="https://localhost:9443/commonauth"/>
        <KeyDescriptor>
            <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
                <ds:X509Data>
                    <ds:X509Certificate>
MIICNTCCAZ6gAwIBAgIES343gjANBgkqhkiG9w0BAQUFADBVMQswCQYDVQQGEwJV
UzELMAkGA1UECAwCQ0ExFjAUBgNVBAcMDU1vdW50YWluIFZpZXcxDTALBgNVBAoM
BFdTTzIxEjAQBgNVBAMMCWxvY2FsaG9zdDAeFw0xMDAyMTkwNzAyMjZaFw0zNTAy
MTMwNzAyMjZaMFUxCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJDQTEWMBQGA1UEBwwN
TW91bnRhaW4gVmlldzENMAsGA1UECgwEV1NPMjESMBAGA1UEAwwJbG9jYWxob3N0
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCUp/oV1vWc8/TkQSiAvTousMzO
M4asB2iltr2QKozni5aVFu818MpOLZIr8LMnTzWllJvvaA5RAAdpbECb+48FjbBe
0hseUdN5HpwvnH/DW8ZccGvk53I6Orq7hLCv1ZHtuOCokghz/ATrhyPq+QktMfXn
RS4HrKGJTzxaCcU7OQIDAQABoxIwEDAOBgNVHQ8BAf8EBAMCBPAwDQYJKoZIhvcN
AQEFBQADgYEAW5wPR7cr1LAdq+IrR44iQlRG5ITCZXY9hI0PygLP2rHANh+PYfTm
xbuOnykNGyhM6FjFLbW2uZHQTY1jMrPprjOrmyK5sjJRO4d1DeGHT/YnIjs9JogR
Kv4XHECwLtIVdAbIdWHEtVZJyMSktcyysFcvuhPQK8Qc/E/Wq8uHSCo=
                    </ds:X509Certificate>
                </ds:X509Data>
            </ds:KeyInfo>
        </KeyDescriptor>
    </SPSSODescriptor>
</EntityDescriptor>




However, certain IdPs might request more details to be included in the metadata file. You can refer to the metadata standard specification at http://docs.oasis-open.org/security/saml/v2.0/saml-metadata-2.0-os.pdf. The X509 data in the above example metadata is from the self-signed certificate of WSO2 Identity Server. In a production deployment, you might want to use your own certificate signed by a CA.

Therefore, if you want to extract the X509 information of your certificate, you can use the following command with the Java keytool:

keytool -export -keystore pathToKeystore -rfc -alias aliasNameForCertificate

You might also want to sign the metadata file using different algorithms. A very handy tool that you can use for this is the XmlSecTool, which has a lot of options.

You can check for the tool at https://wiki.shibboleth.net/confluence/display/SHIB2/XmlSecTool#XmlSecToolSigningSAMLMetadata


Use the following command to sign the metadata file using the SHA256 algorithm (or another algorithm according to your requirement) after running the tool:

--sign --digest SHA256 --inFile metadata.xml --outFile signedmetadata.xml --referenceIdAttributeName ID --keystore keystore.jks --keystorePassword password --key keyname --keyPassword password

Lali Devamanthri

Are You Vulnerable to Shellshock

Run the following two commands,

env X="() { :;} ; echo busted" /bin/sh -c "echo completed"
env X="() { :;} ; echo busted" `which bash` -c "echo completed"

If you see “busted” then you are vulnerable. When I tried it on my PC, the second command revealed that my Ubuntu was vulnerable to Shellshock. On Ubuntu, /bin/sh is not bash (it is dash), and only bash is affected by this vulnerability. But the latest upgrade has fixed the issue.

Use dpkg to check your installed package version:

dpkg -s bash | grep Version

This will look up info on your bash package, and filter the output to only show you the version. The fixed versions are 4.3-7ubuntu1.4, 4.2-2ubuntu2.5, and 4.1-2ubuntu3.1.

For example, I see:

dpkg -s bash | grep Version
Version: 4.3-7ubuntu1.4

and can determine that I am not vulnerable. (When I was vulnerable to Shellshock, the output was Version: 4.3-7ubuntu1.)

The standard update manager will offer you this update. This is a prime example of how security updates are important, no matter what OS you use or how well-maintained it is.

The USN Bulletin states that new versions have been released for Ubuntu 14.04 Trusty Tahr, 12.04 Precise Pangolin, and 10.04 Lucid Lynx (this is why I like to stick to LTS versions). If you are not on one of these LTS versions, but are on a reasonably recent version, you'll most likely be able to find a patched package.

Jayanga Dissanayake

Custom Authenticator for WSO2 Identity Server (WSO2IS) with Custom Claims

WSO2IS is one of the best identity servers, and it enables you to offload your identity and user entitlement management burden totally from your application. It comes with many features, supports many industry standards and, most importantly, it allows you to extend it according to your security requirements.

In this post I am going to show you how to write your own authenticator, which uses a custom claim to validate users, and how to invoke your custom authenticator from your web app.

Create your Custom Authenticator Bundle

WSO2IS is based on OSGi, so if you want to add a new authenticator you have to create an OSGi bundle. Following is the source of the OSGi bundle you have to prepare.

This bundle will consist of three files,
1. CustomAuthenticatorServiceComponent
2. CustomAuthenticator
3. CustomAuthenticatorConstants

CustomAuthenticatorServiceComponent is an OSGi service component; it basically registers the CustomAuthenticator service. CustomAuthenticator is an implementation of org.wso2.carbon.identity.application.authentication.framework.ApplicationAuthenticator, which actually provides our custom authentication.


1. CustomAuthenticatorServiceComponent


package org.wso2.carbon.identity.application.authenticator.customauth.internal;

import java.util.Hashtable;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.osgi.service.component.ComponentContext;
import org.wso2.carbon.identity.application.authentication.framework.ApplicationAuthenticator;
import org.wso2.carbon.identity.application.authenticator.customauth.CustomAuthenticator;
import org.wso2.carbon.user.core.service.RealmService;

/**
 * @scr.component name="identity.application.authenticator.customauth.component" immediate="true"
 * @scr.reference name="realm.service"
 * interface="org.wso2.carbon.user.core.service.RealmService" cardinality="1..1"
 * policy="dynamic" bind="setRealmService" unbind="unsetRealmService"
 */
public class CustomAuthenticatorServiceComponent {

    private static Log log = LogFactory.getLog(CustomAuthenticatorServiceComponent.class);

    private static RealmService realmService;

    protected void activate(ComponentContext ctxt) {

        CustomAuthenticator customAuth = new CustomAuthenticator();
        Hashtable<String, String> props = new Hashtable<String, String>();

        ctxt.getBundleContext().registerService(ApplicationAuthenticator.class.getName(), customAuth, props);

        if (log.isDebugEnabled()) {
            log.info("CustomAuthenticator bundle is activated");
        }
    }

    protected void deactivate(ComponentContext ctxt) {
        if (log.isDebugEnabled()) {
            log.info("CustomAuthenticator bundle is deactivated");
        }
    }

    protected void setRealmService(RealmService realmService) {
        log.debug("Setting the Realm Service");
        CustomAuthenticatorServiceComponent.realmService = realmService;
    }

    protected void unsetRealmService(RealmService realmService) {
        log.debug("UnSetting the Realm Service");
        CustomAuthenticatorServiceComponent.realmService = null;
    }

    public static RealmService getRealmService() {
        return realmService;
    }

}


2. CustomAuthenticator

This is where your actual authentication logic is implemented


package org.wso2.carbon.identity.application.authenticator.customauth;

import java.io.IOException;
import java.util.Map;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.wso2.carbon.identity.application.authentication.framework.AbstractApplicationAuthenticator;
import org.wso2.carbon.identity.application.authentication.framework.AuthenticatorFlowStatus;
import org.wso2.carbon.identity.application.authentication.framework.LocalApplicationAuthenticator;
import org.wso2.carbon.identity.application.authentication.framework.config.ConfigurationFacade;
import org.wso2.carbon.identity.application.authentication.framework.context.AuthenticationContext;
import org.wso2.carbon.identity.application.authentication.framework.exception.AuthenticationFailedException;
import org.wso2.carbon.identity.application.authentication.framework.exception.InvalidCredentialsException;
import org.wso2.carbon.identity.application.authentication.framework.exception.LogoutFailedException;
import org.wso2.carbon.identity.application.authentication.framework.util.FrameworkUtils;
import org.wso2.carbon.identity.application.authenticator.customauth.internal.CustomAuthenticatorServiceComponent;
import org.wso2.carbon.identity.base.IdentityException;
import org.wso2.carbon.identity.core.util.IdentityUtil;
import org.wso2.carbon.user.api.UserRealm;
import org.wso2.carbon.user.core.UserStoreManager;
import org.wso2.carbon.utils.multitenancy.MultitenantUtils;

/**
* Username Password based Authenticator
*
*/
public class CustomAuthenticator extends AbstractApplicationAuthenticator
implements LocalApplicationAuthenticator {

private static final long serialVersionUID = 192277307414921623L;

private static Log log = LogFactory.getLog(CustomAuthenticator.class);

@Override
public boolean canHandle(HttpServletRequest request) {
String userName = request.getParameter("username");
String password = request.getParameter("password");

if (userName != null && password != null) {
return true;
}

return false;
}

@Override
public AuthenticatorFlowStatus process(HttpServletRequest request,
HttpServletResponse response, AuthenticationContext context)
throws AuthenticationFailedException, LogoutFailedException {

if (context.isLogoutRequest()) {
return AuthenticatorFlowStatus.SUCCESS_COMPLETED;
} else {
return super.process(request, response, context);
}
}

@Override
protected void initiateAuthenticationRequest(HttpServletRequest request,
HttpServletResponse response, AuthenticationContext context)
throws AuthenticationFailedException {

String loginPage = ConfigurationFacade.getInstance().getAuthenticationEndpointURL();
String queryParams = FrameworkUtils
.getQueryStringWithFrameworkContextId(context.getQueryParams(),
context.getCallerSessionKey(),
context.getContextIdentifier());

try {
String retryParam = "";

if (context.isRetrying()) {
retryParam = "&authFailure=true&authFailureMsg=login.fail.message";
}

response.sendRedirect(response.encodeRedirectURL(loginPage + ("?" + queryParams))
+ "&authenticators=" + getName() + ":" + "LOCAL" + retryParam);
} catch (IOException e) {
throw new AuthenticationFailedException(e.getMessage(), e);
}
}

@Override
protected void processAuthenticationResponse(HttpServletRequest request,
HttpServletResponse response, AuthenticationContext context)
throws AuthenticationFailedException {

String username = request.getParameter("username");
String password = request.getParameter("password");

boolean isAuthenticated = false;

// Check the authentication
try {
int tenantId = IdentityUtil.getTenantIdOFUser(username);
UserRealm userRealm = CustomAuthenticatorServiceComponent.getRealmService()
.getTenantUserRealm(tenantId);

if (userRealm != null) {
UserStoreManager userStoreManager = (UserStoreManager)userRealm.getUserStoreManager();
isAuthenticated = userStoreManager.authenticate(MultitenantUtils.getTenantAwareUsername(username),password);

Map<String, String> parameterMap = getAuthenticatorConfig().getParameterMap();
String blockSPLoginClaim = null;
if(parameterMap != null) {
blockSPLoginClaim = parameterMap.get("BlockSPLoginClaim");
}
if (blockSPLoginClaim == null) {
blockSPLoginClaim = "http://wso2.org/claims/blockSPLogin";
}
if(log.isDebugEnabled()) {
log.debug("BlockSPLoginClaim has been set as : " + blockSPLoginClaim);
}

String blockSPLogin = userStoreManager.getUserClaimValue(MultitenantUtils.getTenantAwareUsername(username),
blockSPLoginClaim, null);

boolean isBlockSpLogin = Boolean.parseBoolean(blockSPLogin);
if (isAuthenticated && isBlockSpLogin) {
if (log.isDebugEnabled()) {
log.debug("user authentication failed due to user is blocked for the SP");
}
throw new AuthenticationFailedException("SPs are blocked");
}
} else {
throw new AuthenticationFailedException("Cannot find the user realm for the given tenant: " + tenantId);
}
} catch (IdentityException e) {
log.error("CustomAuthentication failed while trying to get the tenant ID of the user", e);
throw new AuthenticationFailedException(e.getMessage(), e);
} catch (org.wso2.carbon.user.api.UserStoreException e) {
log.error("CustomAuthentication failed while trying to authenticate", e);
throw new AuthenticationFailedException(e.getMessage(), e);
}

if (!isAuthenticated) {
if (log.isDebugEnabled()) {
log.debug("user authentication failed due to invalid credentials.");
}

throw new InvalidCredentialsException();
}

context.setSubject(username);
String rememberMe = request.getParameter("chkRemember");

if (rememberMe != null && "on".equals(rememberMe)) {
context.setRememberMe(true);
}
}

@Override
protected boolean retryAuthenticationEnabled() {
return true;
}

@Override
public String getContextIdentifier(HttpServletRequest request) {
return request.getParameter("sessionDataKey");
}

@Override
public String getFriendlyName() {
return CustomAuthenticatorConstants.AUTHENTICATOR_FRIENDLY_NAME;
}

@Override
public String getName() {
return CustomAuthenticatorConstants.AUTHENTICATOR_NAME;
}
}

3. CustomAuthenticatorConstants

This is a helper class just to hold the constants you are using in your authenticator.


package org.wso2.carbon.identity.application.authenticator.customauth;

/**
* Constants used by the CustomAuthenticator
*
*/
public abstract class CustomAuthenticatorConstants {

public static final String AUTHENTICATOR_NAME = "CustomAuthenticator";
public static final String AUTHENTICATOR_FRIENDLY_NAME = "custom";
public static final String AUTHENTICATOR_STATUS = "CustomAuthenticatorStatus";
}

Once you are done with these files, your authenticator is ready. Now you can build your OSGi bundle and place the bundle inside <CARBON_HOME>/repository/components/dropins.

Create new Claim

Now you have to create a new claim in WSO2IS. To do this, log into the management console of WSO2IS and follow the steps described in [1]. In this example, I am going to create a new claim, "Block SP Login".

So, go to the configuration section of the management console, click on "Claim Management", then select the "http://wso2.org/claims" dialect.

Click on "Add New Claim Mapping" and fill in the details related to your claim.

Display Name: Block SP Login
Description: Block SP Login
Claim Uri: http://wso2.org/claims/blockSPLogin
Mapped Attribute (s): localityName
Regular Expression:
Display Order: 0
Supported by Default: true
Required: false
Read-only: false

Now, your new claim is ready in WSO2IS. As you set "Supported by Default" to true, this claim will be available in your user profile. So you will see this field appear when you try to create a user, but the field is not mandatory as you didn't mark it as "Required".

Change application-authentication.xml

There is another configuration change you have to do, as the authenticator reads the claim name from the configuration file (see the parameter lookup in processAuthenticationResponse of CustomAuthenticator.java). Add the information about your new claim to repository/conf/security/application-authentication.xml:


<AuthenticatorConfig name="CustomAuthenticator" enabled="true">
<Parameter name="BlockSPLoginClaim">http://wso2.org/claims/blockSPLogin</Parameter>
</AuthenticatorConfig>

If you check the processAuthenticationResponse method of CustomAuthenticator.java, you will see that, in addition to authenticating the user against the user store, it checks for the new claim.

So this finishes the basic steps to set up your custom authentication. Now you have to set up a new Service Provider in WSO2IS and assign your custom authenticator to it, so that whenever your SP tries to authenticate a user against WSO2IS, it will use your custom authenticator.

Create Service Provider and set the Authenticator

Follow the basic steps given in [2] to create a new Service Provider.

Then go to "Inbound Authentication Configuration" -> "SAML2 Web SSO Configuration", and make the following changes:


Issuer* = <name of you SP>
Assertion Consumer URL = <http://localhost:8080/your-app/samlsso-home.jsp>
Enable Response Signing = true
Enable Assertion Signing = true
Enable Single Logout = true
Enable Attribute Profile = true

Then go to the "Local & Outbound Authentication Configuration" section, select "Local Authentication" as the authentication type, and select your authenticator, here "custom".

Now you have completed all the steps needed to set up your custom authenticator with your custom claims.

You can now start WSO2IS and start using your service. Meanwhile, change the value of "Block SP Login" for a particular user and see the effect.


[1] https://docs.wso2.com/display/IS500/Adding+New+Claim+Mapping
[2] https://docs.wso2.com/display/IS500/Adding+a+Service+Provider

Aruna Sujith Karunarathna

WSO2 Carbon kernel 4.3.0 Alpha is Released!!!

Hi Folks, the WSO2 Carbon team is pleased to announce the alpha release of Carbon kernel 4.3.0. WSO2 Carbon redefines middleware by providing an integrated and componentized middleware platform that adapts to the specific needs of any enterprise IT project - on premise or in the cloud. 100% open source and standards-based, WSO2 Carbon enables developers to rapidly orchestrate business

Chris Haddad

The Politics of APIs

Politics is all about power.   Whether in Washington DC, Brussels, or Beijing, individuals jockey for advantage using the political process.  The politics of APIs centers on ‘knowledge being power’ and ‘data content being power’.  Individuals and corporations gain a powerful advantage in the API economy by enforcing content ownership, access privileges, and distribution rights to their advantage.

The Politics of API session panelists, Andy Thurai (@AndyThurai) Program Director at IBM, Kin Lane (@kinlane) API Evangelist at API Evangelist, Mehdi Medjaoui (@medjawii) Founder at oAuth.io, Pratap Ranade (@PratapRanade) Co-Founder and CEO at Kimonolabs, at the API Strategy and Practice Conference described the cultural, legal, and social politics surrounding API publication and consumption.  The intriguing discussion focused on access and distribution privileges (rather than techie API message representation and tooling).   As Kin Lane states:

[API availability] is more about the business and politics of APIs, and less about the technology

Session panelists described how API consumers and API providers operate in a murky legal and social environment.  Legislation and ethics around private versus public broadcast, fair use versus bad faith use, and machine consumption versus human consumption have not been well resolved.  Draconian Terms of Service (ToS), public API shutdowns, and litigation explicitly surface insider power politics.

 

Tension is building across content owners providing data, data aggregators building API feeds, and data consumers acting on analytics.  Participants tussle in an environment where access connections outweigh access controls, where chain of custody becomes diluted, and where usage patterns flaunt antiquated legal frameworks.

 

Pratap Ranade, KimonoLabs co-founder, has based his company’s business model on open data.  KimonoLabs’ technology makes website content available as an API. Pratap Ranade sees valuable informational wealth locked within websites, and he proposed a fair harbor exclusion clause for any participant who ‘adds value to the data.’

 

Andy Thurai championed the distinction between private data and public data. If data falls within the public domain, owners have little basis to restrict distribution. The analogies harken back to legal definitions of ‘confidential information’ and public versus private venues. Individuals and corporations must carefully guard information, and not disseminate private or confidential information across public channels (i.e. Twitter, Facebook). Andy has written a few blog posts about whether public APIs are going away (Part 1 and Part 2).

 

Kin Lane and Mehdi Medjaoui are API champions who advocate fair use and an expanding, participatory API economy.  Restrictive API Terms of Service (TOS) and private APIs minimize participation, but may maximize monetization.  Both individuals and corporations must carefully balance business models, the network effect, and value symmetry.  Mehdi often talks about API trust, and how open APIs require  a service provider who promotes access, transparency, freedom, reusability, and neutrality.  Kin has published an excellent post outlining how APIs provide power through access.

Also, Kin has  posted a politics of API roadmap that maps branding, terms of service, privacy, service level agreement, data license, code license, and deprecation policy dimensions.

Whether APIs are open or closed, public or private, the API economy relies on a community of consumers who learn, adopt, and gain value from each API.  Kin sums up how to succeed in the API game by creating a culture (and business model) based on “transparency, outreach, and providing meaningful resources.”

 

We will see how far individuals, groups, companies, and governments will transcend political posturing, promote API consumption, and share API power.

Chanika Geeganage

Writing a Simple AXIS 2 Service

In this blog post I'm going to discuss how to write a simple Axis2 service. Here I'm using the code-first approach to write the service. First of all, we can start by writing a Java class with a simple method.

import org.apache.axis2.context.MessageContext;
import org.apache.axis2.transport.http.HTTPConstants;

public class SimpleService {

    public void print(String value) throws PrintException {
         if (value ==null) {
              throw new PrintException("value is null");
         }
        MessageContext context = MessageContext.getCurrentMessageContext();
        context.setProperty(HTTPConstants.RESPONSE_CODE, 200);

        System.out.println("Value = " + value);
    }
}

The PrintException class would be

public class PrintException extends Exception {
    public PrintException(String message) {
        super(message);
    }
}

In order to package this as a valid Axis2 service, there should be a services.xml inside the service archive file. It contains the deployment description of the service.

<service>
    <parameter name="ServiceClass" locked="false">SimpleService</parameter>
    <operation name="print">
        <messageReceiver class="org.apache.axis2.rpc.receivers.RPCMessageReceiver"/>
    </operation>
</service>

The class name should be the fully qualified class name. As the SimpleService class in my example is not in any package, I have simply used the class name.

Axis2 has a set of built-in message receivers. According to this sample services.xml file, the 'print' operation of this service uses the Axis2 class 'org.apache.axis2.rpc.receivers.RPCMessageReceiver' as its message receiver.

If an operation is an in-only operation you can use

org.apache.axis2.rpc.receivers.RPCInOnlyMessageReceiver

Create the archive file using

jar cvf SimpleService.aar *
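
After deploying the .aar (for example by dropping it into the services directory of an Axis2 server), the print operation can be invoked with Axis2's RPC client. The following is a rough sketch; the endpoint URL and the target namespace are assumptions, so check the service's generated WSDL (?wsdl) for the actual values.

import javax.xml.namespace.QName;

import org.apache.axis2.addressing.EndpointReference;
import org.apache.axis2.client.Options;
import org.apache.axis2.rpc.client.RPCServiceClient;

public class SimpleServiceClient {

    public static void main(String[] args) throws Exception {
        RPCServiceClient client = new RPCServiceClient();
        Options options = client.getOptions();
        // Assumed endpoint; adjust it to where SimpleService.aar is actually deployed
        options.setTo(new EndpointReference("http://localhost:8080/axis2/services/SimpleService"));

        // Assumed target namespace; verify it against the generated WSDL of the service
        QName operation = new QName("http://ws.apache.org/axis2", "print");

        // 'print' returns void, so invokeRobust is used instead of invokeBlocking
        client.invokeRobust(operation, new Object[]{"hello"});
    }
}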




Sivajothy Vanjikumaran

Connecting and monitoring JMX of WSO2 products in an EC2 instance

Most current deployments are hosted on EC2 instances, as EC2 is very reliable and scalable. When it comes to monitoring WSO2 products via JMX on EC2 instances, you have to add some parameters in order to connect to them.


Add the parameters below to /bin/wso2server.sh:


    -Dcom.sun.management.jmxremote \
    -Dcom.sun.management.jmxremote.port=XXXX \
    -Dcom.sun.management.jmxremote.ssl=false \
    -Dcom.sun.management.jmxremote.authenticate=false \
    -Djava.rmi.server.hostname=XXX.XXX.XXX.XXX \

Here the port should be a port that is not used anywhere else in the given instance, and the hostname should be the private IP of the EC2 instance.


Access and monitor the server via JConsole using a JMX URL connection that has the domain name and the JMX ports defined in carbon.xml.
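
If you want to connect programmatically instead of through JConsole, a minimal sketch using the standard JMX remote API could look like the following. The host and port are placeholders for the values you set above, and no credentials are passed since authentication was disabled in the flags.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class JmxConnectTest {

    public static void main(String[] args) throws Exception {
        // Placeholder host/port; use the public DNS name of the EC2 instance and the
        // value given to -Dcom.sun.management.jmxremote.port in wso2server.sh
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://ec2-xx-xx-xx-xx.compute-1.amazonaws.com:9999/jmxrmi");

        JMXConnector connector = JMXConnectorFactory.connect(url, null);
        try {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            // Simple sanity check: read the heap usage of the remote JVM
            Object heap = connection.getAttribute(
                    new ObjectName("java.lang:type=Memory"), "HeapMemoryUsage");
            System.out.println("HeapMemoryUsage: " + heap);
        } finally {
            connector.close();
        }
    }
}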

Dinusha Senanayaka

Distributed Transactions with WSO2 ESB

A transaction is a set of operations that is executed as a single unit. When it comes to distributed transactions, two or more networked computers are involved, often using multiple databases.

A transaction manager is required to handle these distributed transactions. The WSO2 Carbon platform has integrated the "Atomikos" transaction manager, which is an implementation of the Java Transaction API (JTA). Also, some products like WSO2 DSS and WSO2 ESB ship this transaction manager by default, hence you don't need to provide an external JTA provider to use distributed transactions inside these products.

WSO2 ESB contains a Synapse mediator called the 'Transaction Mediator' which supports distributed transactions using the Java Transaction API. In this post we are going to write a sample proxy service that uses the Transaction Mediator and the in-built transaction manager that comes with WSO2 ESB to handle a distributed transaction.

Sample Scenario
The scenario used in this sample is the same as the one used in the WSO2 ESB Transaction Mediator Sample (the sample described there uses JBoss Application Server support to create the XA data-sources, and the transaction manager used is the one provided by JBoss). In this sample, however, we are going to use the in-built transaction manager that comes with WSO2 ESB, and the XA data-sources are going to be created using the Carbon data-source feature, without using the JBoss Application Server.

Use the following scenario to show how the Transaction Mediator works. Assume we have a record in one database and we want to delete that record from the first database and add it to the second database (these two databases can be run on the same server or they can be in two remote servers). The database tables are defined in such a way that the same entry cannot be added twice. So, in the successful scenario, the record will be deleted from the first table (of the first database) and will be added to the second table (of the second database). In a failure scenario (the record is already in the second database), no record will be deleted from first table and no record will be added into the second database.

Step 1:
Create the sample Database and tables using MySQL server.

Create database DB1;
CREATE table company_x(name varchar(10) primary key, id varchar(10), price double);
INSERT into company_x values ('IBM','c1',0.0);
INSERT into company_x values ('SUN','c2',0.0);

Create database DB2;
CREATE table company_x(name varchar(10) primary key, id varchar(10), price double);
INSERT into company_x values ('SUN','c2',0.0);
INSERT into company_x values ('MSFT','c3',0.0);

Step 2:
(i) Add the MySQL JDBC driver jar to {ESB_HOME}/repository/components/lib.

(ii) Create two XA data-sources for the above two databases. For that, add the following XML configuration to the master-datasources.xml file located in the {ESB_HOME}/repository/conf/datasources/ directory. (These data-sources can be created using the ESB UI as well, rather than using the master-datasources.xml config file.) In these data-sources, note that we have used the data-source provider class provided by the Atomikos transaction manager, "com.atomikos.jdbc.AtomikosDataSourceBean". Other than that, you can provide the XA data-source properties specific to the driver under <dataSourceProps>. Restart the server after modifying master-datasources.xml.
<datasource>
<name>DS1</name>
<jndiConfig>
<name>DS1</name>
</jndiConfig>
<definition type="RDBMS">
<configuration>
<dataSourceClassName>com.atomikos.jdbc.AtomikosDataSourceBean
</dataSourceClassName>
<dataSourceProps>
<property name="xaDataSourceClassName">com.mysql.jdbc.jdbc2.optional.MysqlXADataSource
</property>
<property name="uniqueResourceName">TXDB1</property>
<property name="xaProperties.user">root</property>
<property name="xaProperties.password">root</property>
<property name="xaProperties.URL">jdbc:mysql://localhost:3306/DB1</property>
<property name="poolSize">10</property>
</dataSourceProps>
</configuration>
</definition>
</datasource>

<datasource>
<name>DS2</name>
<jndiConfig>
<name>DS2</name>
</jndiConfig>
<definition type="RDBMS">
<configuration>
<dataSourceClassName>com.atomikos.jdbc.AtomikosDataSourceBean
</dataSourceClassName>
<dataSourceProps>
<property name="xaDataSourceClassName">com.mysql.jdbc.jdbc2.optional.MysqlXADataSource
</property>
<property name="uniqueResourceName">TXDB2</property>
<property name="xaProperties.user">root</property>
<property name="xaProperties.password">root</property>
<property name="xaProperties.URL">jdbc:mysql://localhost:3306/DB2</property>
<property name="poolSize">10</property>
</dataSourceProps>
</configuration>
</definition>
</datasource>

Step 3:
Create the ESB sequence to implement the mentioned scenario using the above two data-sources.
<sequence xmlns="http://ws.apache.org/ns/synapse" name="main">
<in>
<send>
<endpoint>
<address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
</endpoint>
</send>
</in>
<out>
<transaction action="new"/>
<log level="custom">
<property name="text" value="** Reporting to the Database DB1**"/>
</log>
<dbreport useTransaction="true">
<connection>
<pool>
<dsName>DS1</dsName>
</pool>
</connection>
<statement>
<sql>
<![CDATA[delete from company_x where name =?]]></sql>
<parameter xmlns:m1="http://services.samples/xsd"
xmlns:ns="http://org.apache.synapse/xsd"
xmlns:m0="http://services.samples"
expression="//m0:return/m1:symbol/child::text()" type="VARCHAR"/>
</statement>
</dbreport>
<log level="custom">
<property name="text" value="** Reporting to the Database DB2**"/>
</log>
<dbreport useTransaction="true">
<connection>
<pool>
<dsName>DS2</dsName>
</pool>
</connection>
<statement>
<sql>
<![CDATA[INSERT into company_x values (?,'c4',?)]]></sql>
<parameter xmlns:m1="http://services.samples/xsd"
xmlns:ns="http://org.apache.synapse/xsd"
xmlns:m0="http://services.samples"
expression="//m0:return/m1:symbol/child::text()" type="VARCHAR"/>
<parameter xmlns:m1="http://services.samples/xsd"
xmlns:ns="http://org.apache.synapse/xsd"
xmlns:m0="http://services.samples"
expression="//m0:return/m1:last/child::text()" type="DOUBLE"/>
</statement>
</dbreport>
<transaction action="commit"/>
<send/>
</out>
</sequence>


Testing
Successful Scenario

1. To remove the IBM record from the first database and add it to the second database, run the sample with the following options.
ant stockquote -Daddurl=http://localhost:9000/services/SimpleStockQuoteService -Dtrpurl=http://localhost:8280/ -Dsymbol=IBM

2. Check both databases to see how the record is deleted from the first database and added to the second database.

Failure Scenario

1. Try to add an entry which is already there in the second database. This time use Symbol SUN.
ant stockquote -Daddurl=http://localhost:9000/services/SimpleStockQuoteService -Dtrpurl=http://localhost:8280/ -Dsymbol=SUN

2. You will see that the whole transaction has been rolled back. Check both databases again; there is no record deleted from the first database and no record added into the second database.

Dinusha Senanayaka

How to write a Custom Authentication Handler for WSO2 API Manager?


WSO2 API Manager provides OAuth2 bearer tokens as its default authentication mechanism. But we can extend it to support authentication mechanisms other than bearer token authentication.

This post explains how we can write a custom authentication handler for WSO2 API Manager.

The implementation of the default authentication handler used in WSO2 API Manager can be found here. In the same way, we can write our own authentication handler class by extending the 'org.apache.synapse.rest.AbstractHandler' class.

In the authentication handler implementation class, we have to implement the 'handleRequest()' and 'handleResponse()' methods. See the sample 'CustomAPIAuthenticationHandler.java' class given below.
 
package org.wso2.carbon.apimgt.gateway.handlers.security;

import org.apache.synapse.MessageContext;
import org.apache.synapse.core.axis2.Axis2MessageContext;
import org.apache.synapse.rest.AbstractHandler;

import java.util.Map;

public class CustomAPIAuthenticationHandler extends AbstractHandler {

public boolean handleRequest(MessageContext messageContext) {
try {
if (authenticate(messageContext)) {
return true;
}
} catch (APISecurityException e) {
e.printStackTrace();
}
return false;
}

public boolean handleResponse(MessageContext messageContext) {
return true;
}

public boolean authenticate(MessageContext synCtx) throws APISecurityException {
Map headers = getTransportHeaders(synCtx);
String authHeader = getAuthorizationHeader(headers);
if (authHeader != null && authHeader.startsWith("userName")) {
return true;
}
return false;
}

private String getAuthorizationHeader(Map headers) {
return (String) headers.get("Authorization");
}

private Map getTransportHeaders(MessageContext messageContext) {
return (Map) ((Axis2MessageContext) messageContext).getAxis2MessageContext().
getProperty(org.apache.axis2.context.MessageContext.TRANSPORT_HEADERS);
}
}

  • Build the above class and copy the jar file to <AM_HOME>/repository/components/lib folder where <AM_HOME> is the root of the WSO2 API Manager distribution.
  •  You can engage this handler to the API through the Management Console. Log in to the console and select 'Service Bus > Source View' in the 'Main' menu.
  •  In the ESB configuration that opens, you can see following line as the first handler in the API, which is the current authentication handler used in API Manager. 
 
<handler class="org.wso2.carbon.apimgt.gateway.handlers.security.APIAuthenticationHandler"/>

Replace it with the one that we created.

  
<handler class="org.wso2.carbon.apimgt.gateway.handlers.security.CustomAPIAuthenticationHandler"/>


Sohani Weerasinghe

Removing additional namespace prefixes in the SOAP Fault using xslt


As some of you know, there is a limitation in AXIOM where it generates additional namespace prefixes inside the SOAP Fault. In order to get rid of the issue, you can include a namespace prefix in the fault message generated by the Fault Mediator before sending it back. You can easily do this by using an XSLT mediator. Please refer to the proxy configuration and the XSLT below as a sample.


TestProxy


<proxy xmlns="http://ws.apache.org/ns/synapse" name="TestProxy" transports="https,http" statistics="disable" trace="disable" startOnLoad="true"> 

    <target> 
        <inSequence> 
            <makefault version="soap11" response="true"> 
                <code xmlns:soap11Env="http://schemas.xmlsoap.org/soap/envelope/" value="soap11Env:VersionMismatch"/> 
                <reason value="Test the SOAP Message"/> 
                <role/> 
                <detail expression="/*[local-name()='Envelope']/*[local-name()='Body']/*"/> 
            </makefault> 
            <xslt key="prefixSet" source="//detail/child::node()"/> 
            <send/> 
        </inSequence> 
    </target> 
    <description/> 

</proxy> 



prefixSet.xslt 


<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:ns0="http://some.data" version="1.0"> 

        <xsl:output omit-xml-declaration="yes" indent="yes"></xsl:output> 
        <xsl:strip-space elements="*"></xsl:strip-space> 
        <xsl:template match="node()|@*"> 
            <xsl:copy> 
                <xsl:apply-templates select="node()|@*"></xsl:apply-templates> 
            </xsl:copy> 
        </xsl:template> 
        <xsl:template match="*"> 
            <xsl:element name="ns0:{name()}" namespace="http://some.data"> 
                <xsl:copy-of select="namespace::*"></xsl:copy-of> 
                <xsl:apply-templates select="node()|@*"></xsl:apply-templates> 
            </xsl:element> 
        </xsl:template> 
    </xsl:stylesheet> 

This transformation is applied to the fault message, the result is included in the OMElement created, and the response is sent with a predefined namespace prefix as follows.



<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">

   <soapenv:Body>
      <soapenv:Fault>
         <faultcode xmlns:soap11Env="http://schemas.xmlsoap.org/soap/envelope/">soap11Env:VersionMismatch</faultcode>
         <faultstring>Test the SOAP Message</faultstring>
         <detail>
            <ns0:someData xmlns:ns0="http://some.data">
               <ns0:blaData/>
            </ns0:someData>
         </detail>
      </soapenv:Fault>
   </soapenv:Body>

</soapenv:Envelope>



Sohani Weerasinghe

Save the thread dump when server starts as a background service

When you do a normal server startup and you send signal 3 to the process using the command
kill -3 <%PID%>, you can view the thread dump in the console. But if you start the server as a background service, it is impossible to view the thread dump in the console. In order to view it, follow the steps below.

1. Start the server as a background process by using the below command 

sh ./wso2server.sh start 

2. Run the below command

jstack %PID% > threaddump.txt

Please note that PID is the process ID; you can get this by running the command ps -ax | grep wso2.


Also, if you start the server with the nohup command (nohup wso2server.sh), then you can see the thread dump in nohup.out.



Sohani Weerasinghe

Validate XML messages using more than one schemas

The Validate Mediator of WSO2 ESB can be used to validate incoming requests, which is vital in order to maximize resource utilization. The Validate mediator facilitates validating the request against multiple XSDs as well.

This blog post describes a scenario where you have one XSD which has a reference to another XSD.

1. Create XSD files

Basically, we have the main XSD (TestSchema.xsd) which has a reference to another XSD (referenceSchema.xsd). The two schemas are as follows.

TestSchema.xsd


<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"

xmlns:ns0="http://www.wso2.org/refer"
xmlns:ns1="http://org.apache.axis2/xsd" xmlns:ns2="http://www.wso2.org/test"
attributeFormDefault="qualified" elementFormDefault="qualified"
targetNamespace="http://www.wso2.org/test">
 <xs:import namespace="http://www.wso2.org/refer"
               schemaLocation="referenceSchema.xsd" />
<xs:element name="test" type="ns0:refer">
</xs:element>
</xs:schema>


If you observe the schema definition, you can see there is a reference to the namespace xmlns:ns0="http://www.wso2.org/refer" which is defined in the other schema file, and the type 'refer' is also defined in that second schema as follows.

referenceSchema.xsd


<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"

xmlns:ns1="http://org.apache.axis2/xsd" xmlns:ns0="http://www.wso2.org/refer"
attributeFormDefault="qualified" elementFormDefault="qualified"
targetNamespace="http://www.wso2.org/refer">
<xs:element name="refer" type="ns0:refer"></xs:element>
<xs:complexType name="refer">
<xs:sequence>
<xs:element minOccurs="1" name="name" >
 <xs:simpleType>
        <xs:restriction base="xs:string">
            <xs:minLength value="1" />
        </xs:restriction>
    </xs:simpleType>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:schema>


After you create the XSDs, upload them to the registry of WSO2 ESB at the location /_system/conf/.


2. Create the Proxy Service

First, let me consider the validate mediator configuration which is used to validate the request.



 <validate>

            <schema key="conf:/TestSchema.xsd"/>
            <resource location="referenceSchema.xsd" key="conf:/referenceSchema.xsd"/>
            <on-fail>
               <makefault version="soap11">
                  <code xmlns:tns="http://www.w3.org/2003/05/soap-envelope" value="tns:Receiver"/>
                  <reason value="Request is Invalid........"/>
                  <role/>
               </makefault>
               <log level="full"/>
               <property name="RESPONSE" value="true"/>
               <header name="To" action="remove"/>
               <send/>
               <drop/>
            </on-fail>
         </validate>


Basically, we are validating the request against the schema TestSchema.xsd, stated as key="conf:/TestSchema.xsd", which has a reference to referenceSchema.xsd.

The Validate mediator has a <resource> element to indicate where to find other schemas. The key entry specifies the current location of the schema (key="conf:/referenceSchema.xsd"). In order to map schemas correctly, the 'location' entry in the Validate mediator should match the 'schemaLocation' in the XSD.

location="referenceSchema.xsd"
schemaLocation="referenceSchema.xsd"

The proxy configuration is as follows.



<?xml version="1.0" encoding="UTF-8"?>

<proxy xmlns="http://ws.apache.org/ns/synapse"
       name="TestProxy"
       transports="https,http"
       statistics="disable"
       trace="disable"
       startOnLoad="true">
   <target>
      <inSequence>
       <validate>
            <schema key="conf:/TestSchema.xsd"/>
            <resource location="referenceSchema.xsd" key="conf:/referenceSchema.xsd"/>
            <on-fail>
               <makefault version="soap11">
                  <code xmlns:tns="http://www.w3.org/2003/05/soap-envelope" value="tns:Receiver"/>
                  <reason value="Request is Invalid........"/>
                  <role/>
               </makefault>
               <log level="full"/>
               <property name="RESPONSE" value="true"/>
               <header name="To" action="remove"/>
               <send/>
               <drop/>
            </on-fail>
         </validate>
         <respond/>
      </inSequence>
      <outSequence>
         <send/>
      </outSequence>
   </target>
   <description/>
</proxy>
                               

3. Testing the scenario

First you can send the below request


<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
   <soapenv:Header/>
   <soapenv:Body>
      <p:test xmlns:p="http://www.wso2.org/test" xmlns:q="http://www.wso2.org/refer">
         <q:name>sohani</q:name>
      </p:test>
   </soapenv:Body>
</soapenv:Envelope>


Then you will have a response as below.



<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">

   <soapenv:Header/>
   <soapenv:Body>
      <p:test xmlns:p="http://www.wso2.org/test" xmlns:q="http://www.wso2.org/refer">
         <q:name>sohani</q:name>
      </p:test>
   </soapenv:Body>
</soapenv:Envelope>


If you send an invalid request like the one below (with an empty name element), then you will receive the fault response that follows.



<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
   <soapenv:Header/>
   <soapenv:Body>
      <p:test xmlns:p="http://www.wso2.org/test" xmlns:q="http://www.wso2.org/refer">
         <q:name></q:name>
      </p:test>
   </soapenv:Body>
</soapenv:Envelope>




<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
   <soapenv:Body>
      <soapenv:Fault>
         <faultcode xmlns:tns="http://www.w3.org/2003/05/soap-envelope">tns:Receiver</faultcode>
         <faultstring>Request is Invalid........</faultstring>
      </soapenv:Fault>
   </soapenv:Body>
</soapenv:Envelope>









Harshana Eranga Martin

Fix Maven Build Hanging and Waiting for Repositories

I tried to build WSO2 Developer Studio from source code on a new MacBook Pro (with 10.9 Mavericks) and encountered an issue where the Maven build hangs while trying to download some Eclipse Tycho plugins from the Eclipse Maven repository, http://maven.eclipse.org. This Maven repository is currently unavailable, and I was even unable to access it from my browser. Even though the repository is not available, the request to that particular server does not time out like other requests, which caused both the browser and the Maven process to hang for a long time before giving up. This caused a completely unacceptable situation with the Maven build. In order to fix this issue, I did the following and was able to overcome the problem and continue the Maven build as usual.

1. Go to the /etc/hosts file and add a new entry as shown below.

                                  127.0.0.1 maven.eclipse.org

2. Retry the Build and see whether it works for you.

When I tried the build again, Maven continued the build process as usual, since there is no service running on my local machine serving the artifact Maven tries to download.

So if you are having trouble with your Maven build hanging and waiting for a certain repository, you can follow the same steps and overcome the issue.

Hope this helps you!

Dinuka MalalanayakeSelecting proper data structures for the realtime problems

As a software engineer you need a proper understanding of data structures and their implementations. In real-world application development you have to think about the efficiency of operations and whether a data structure fits the domain your solution addresses. In this blog post I'm going to explain some of the data structures implemented in Java.

Look at the following class hierarchy.

(Diagram: Java Collections Framework class hierarchy)

1. LinkedList – Use a linked list if you do not need random access to the values but do need frequent insertions and deletions.
Insertion and deletion are O(1), and accessing the (k)th element is O(n).
Remember that LinkedList is not thread safe.

2. ArrayList – Use an array list if you need frequent random access to the values and do not need frequent insertion and removal operations.
Accessing the (k)th element is O(1), while insertion and deletion are O(n).
As with LinkedList, remember that ArrayList is not thread safe.
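
A small, illustrative sketch of this trade-off (the element counts are arbitrary): inserting at the head is cheap for a LinkedList but shifts every existing element in an ArrayList, while indexed access is the opposite.

import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class ListChoiceExample {
    public static void main(String[] args) {
        List<Integer> linkedList = new LinkedList<>();
        List<Integer> arrayList = new ArrayList<>();

        // Insertion at the head: O(1) per insert for LinkedList,
        // O(n) per insert for ArrayList (existing elements are shifted).
        for (int i = 0; i < 10_000; i++) {
            linkedList.add(0, i);
            arrayList.add(0, i);
        }

        // Random access: O(1) for ArrayList, O(n) for LinkedList
        // (the list is traversed from the nearest end).
        System.out.println(arrayList.get(5_000));
        System.out.println(linkedList.get(5_000));
    }
}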

3. Vector – Use Vector when you need thread safety on top of ArrayList behaviour. If you don't need synchronization on each and every operation, go for ArrayList instead; otherwise you pay a performance penalty on every operation.

4. PriorityQueue – A queue normally works on a first-come, first-served basis, but sometimes you need to retrieve elements according to their priority. If you have that kind of problem, use a PriorityQueue. Remember that PriorityQueue is not thread safe.
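
As a small illustration (the task names and the length-based comparator are just examples), poll() always returns the head according to the ordering rather than the insertion order:

import java.util.Comparator;
import java.util.PriorityQueue;

public class PriorityQueueExample {
    public static void main(String[] args) {
        // Order strings by length: shorter strings are treated as higher priority.
        PriorityQueue<String> queue =
                new PriorityQueue<>(Comparator.comparingInt(String::length));

        queue.offer("deploy");
        queue.offer("restart-server");
        queue.offer("log");

        // Prints: log, deploy, restart-server (priority order, not insertion order).
        while (!queue.isEmpty()) {
            System.out.println(queue.poll());
        }
    }
}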

5. HashSet – A HashSet does not keep duplicates; use it when you need to maintain a unique set of objects. HashSet allows a NULL element but does not maintain the insertion order. If you need to preserve the insertion order, use a LinkedHashSet.

6. TreeSet – Like HashSet, this data structure maintains a duplicate-free collection, and additionally it keeps the elements in sorted order. TreeSet does not allow NULL objects. It guarantees log(n) time cost for the basic operations add, remove and contains.

7. Hashtable – This data structure is useful when you need insertion, deletion and quick access to a given element in constant time; all of these operations are O(1). Hash tables do not maintain the insertion order. I would like to explain the hash table in a bit more depth because it is commonly used in industry, for example in a router: when a packet has to be routed to a specific IP address, the router has to determine the best route by querying its routing table efficiently.

To use hash tables you have to implement the equals() and hashCode() methods on the object type you are going to store in the Hashtable.
hashCode() – as a best practice, consider all attributes of the object when generating the hash code. See the following example.

public class Employee {
    int        employeeId;
    String     name;
    Department dept;
 
    // other methods would be in here 
 
    @Override
    public int hashCode() {
        int hash = 1;
        hash = hash * 17 + employeeId;
        hash = hash * 31 + name.hashCode();
        hash = hash * 13 + (dept == null ? 0 : dept.hashCode());
        return hash;
    }
}
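
equals() is mentioned above but not shown in the snippet. A matching implementation, added inside the Employee class, could look like the following sketch; it compares the same attributes used in hashCode() and assumes the Department type provides its own equals().

    @Override
    public boolean equals(Object obj) {
        if (this == obj) {
            return true;
        }
        if (!(obj instanceof Employee)) {
            return false;
        }
        Employee other = (Employee) obj;
        // Compare the same attributes used in hashCode() so that equal
        // objects always produce equal hash codes.
        return employeeId == other.employeeId
                && (name == null ? other.name == null : name.equals(other.name))
                && (dept == null ? other.dept == null : dept.equals(other.dept));
    }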

Once you call the insert method on the hash table, it calculates the hash value and then stores the data in the slot derived from that value:

 int hashValue = hashCode % hash_table_size;

There are several ways to handle collisions in a hash table.
1. Separate chaining – Maintain a list in each slot. If two objects have the same hash value, both are stored in that slot's list. This implementation has the extra overhead of maintaining the lists, and overall performance depends on the number of elements in each list.

2. Open addressing – In this approach the table finds the next available slot according to a probing function.
Linear probing – h(x), h(x)+1, h(x)+2, h(x)+3, ...
Quadratic probing – h(x), h(x)+1^2, h(x)+2^2, h(x)+3^2, ...
Double hashing – h(x), h(x)+1*h'(x), h(x)+2*h'(x), h(x)+3*h'(x), ...

Deleting elements from a hash table – If the hash table uses open addressing, a value is logically deleted from the table by setting a flag, so that probe sequences for other keys are not broken.
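
To make the open-addressing idea concrete, here is a minimal, simplified sketch (not how java.util.Hashtable works internally) of insertion with linear probing and flag-based logical deletion:

public class LinearProbingTable {

    private static final Object DELETED = new Object(); // tombstone flag for logical deletes
    private final Object[] slots;

    public LinearProbingTable(int size) {
        slots = new Object[size];
    }

    // Linear probing: try h(x), h(x)+1, h(x)+2, ... until a free slot is found.
    public boolean insert(Object key) {
        int hash = (key.hashCode() & 0x7fffffff) % slots.length;
        for (int i = 0; i < slots.length; i++) {
            int slot = (hash + i) % slots.length;
            if (slots[slot] == null || slots[slot] == DELETED) {
                slots[slot] = key;
                return true;
            }
        }
        return false; // table is full; a real implementation would rehash here
    }

    // Logical delete: flag the slot instead of clearing it, so that probe
    // sequences of other keys that passed through this slot are not broken.
    public boolean delete(Object key) {
        int hash = (key.hashCode() & 0x7fffffff) % slots.length;
        for (int i = 0; i < slots.length; i++) {
            int slot = (hash + i) % slots.length;
            if (slots[slot] == null) {
                return false; // reached an empty slot, so the key is not present
            }
            if (key.equals(slots[slot])) {
                slots[slot] = DELETED;
                return true;
            }
        }
        return false;
    }
}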

The performance of a hash table is characterized by the load factor, alpha:

 alpha = number_of_elements/table_size;

When the hash table fills up we need to rehash. Rehashing is done according to the load factor: once the load factor reaches a certain level, the table is rebuilt with a larger size and the elements are re-inserted. This is the main disadvantage of the hash table.
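
The JDK exposes the same idea through an initial capacity and a load factor; for example, java.util.HashMap resizes (rehashes) once the entry count exceeds capacity * loadFactor. The numbers below are only illustrative:

import java.util.HashMap;
import java.util.Map;

public class LoadFactorExample {
    public static void main(String[] args) {
        // Initial capacity 16 and load factor 0.75: once more than
        // 16 * 0.75 = 12 entries are stored, the map is rehashed into a larger table.
        Map<String, Integer> map = new HashMap<>(16, 0.75f);
        for (int i = 0; i < 13; i++) {
            map.put("key-" + i, i); // the 13th put triggers an internal resize
        }
        System.out.println(map.size()); // 13
    }
}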

8. HashMap – A hash map is used when you need to store elements as key-value pairs. In a hash map you cannot duplicate a key, but you can store the same value under different keys. It does not maintain the insertion order; if you need that, use a LinkedHashMap. Remember that HashMap is not thread safe; if you need thread safety over a HashMap, use a ConcurrentHashMap.

9. TreeMap – If you need to keep the key-value pairs in sorted order, this is the data structure for the job. It guarantees that the entries are arranged in sorted key order according to the compareTo() method of the key type stored in the TreeMap (or a supplied Comparator).
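
A small, illustrative comparison of the three map types and their iteration order (the keys are arbitrary):

import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.TreeMap;

public class MapOrderingExample {
    public static void main(String[] args) {
        String[] keys = {"banana", "apple", "cherry"};

        Map<String, Integer> hashMap = new HashMap<>();             // no ordering guarantee
        Map<String, Integer> linkedHashMap = new LinkedHashMap<>(); // insertion order
        Map<String, Integer> treeMap = new TreeMap<>();             // sorted by key (compareTo)

        for (String key : keys) {
            hashMap.put(key, key.length());
            linkedHashMap.put(key, key.length());
            treeMap.put(key, key.length());
        }

        System.out.println("HashMap:       " + hashMap.keySet());       // order not guaranteed
        System.out.println("LinkedHashMap: " + linkedHashMap.keySet()); // [banana, apple, cherry]
        System.out.println("TreeMap:       " + treeMap.keySet());       // [apple, banana, cherry]
    }
}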


John MathonArtificial Intelligence, The Brain as Quantum Computer – Talk about Disruptive

Brain_Area_Functions

The AI side of the equation

I started my career studying Artificial Intelligence at MIT.  Back then the researchers thought that we would have computers that were smarter than humans in short order.  What I discovered after observing what the AI people had done was hardly anything close to “intelligence.”   Marvin Minsky called the bluff and wrote an article back in the day basically spelling out in a somewhat comical way that using the variable “learning” in a computer program didn’t mean the program learned anything.

What we have done with AI since then is to build smarter and smarter algorithms and if there is a learning variable in those programs it doesn’t mean those programs are learning anything either.    Some of these algorithms running against massive data stores like the contents of the internet and with virtually unlimited processing power can appear to produce answers to questions that look like they “get” the question we asked and give us an answer that is better than a human being can.   I am not saying Watson has no value but as a “learning” “sentient” intelligence, NO!

The problem I believe is that computer algorithms are designed to operate on predefined abstractions.   No matter how smart the programmers are that create these algorithms and abstractions they are limited by the abstractions the programmers put in.  For instance, if they teach it about mathematics it can be programmed to be better at solving differential equations than a human being.  However, it really has no idea what a differential equation is.  It will NEVER discover Set Theory or come up with a way for you to go on your vacation for less than $1,000.   Hard AI advocates have said that if we keep programming expertise into the algorithms eventually it will cross the line into something we call “intelligence.”   No.  I emphatically say NO.  The problem is that we haven’t learned how to represent real “learning.”

What is impressive is not what computers have accomplished but what they have NOT accomplished. Given tens of billions of dollars in resources and unlimited computer power, an algorithm can answer memory-based questions about as well as a human whose brain is made from 15 cents of common chemicals, has no external memory storage other than what sits between the two ears, and runs on a few calories of energy an hour. Seriously, after 35 years of research and the work of many diligent scientists at companies and research facilities worldwide, the computer still has nothing approaching the human brain in real “intelligence.” In my opinion our success with traditional computer science rates an F.

Grade for AI Researchers : F

Progress from Biology side

Other things have bothered me a lot from the biology point of view.  When we look at the human brain researchers have NOT discovered where the hard drive for the brain is.  Where exactly and how exactly are memories stored in these neurons?  They can tell you parts of the brain that seem involved.  They’ve discovered different kinds of memory but the memory that I pull up smells, people’s faces, formulas, algorithms, no I don’t believe they can tell you how the brain actually stores the information or how it retrieves or matches against it.   How does the brain do such things as “recognize” patterns?   How does it create “abstractions?”  How does a brain have an “AHA” moment?  What is consciousness?  What are those pesky EEG patterns about anyways?  Why does meditation work?  I could go on but we have only nibbled at the edges of the brain operation never really having any sort of answer for things that should have been cracked by now.   We see the neurons firing at the edge going in, we see neurons firing in the brain, we see neurons sending signals to body parts.  What happens in-between is still 100% mystery.

Besides the hard drive problem, the other thing I find fascinating is that we don't know how the pattern recognition algorithms are being run. This is hard to believe, because to some extent matching patterns is all a brain does. Its sole operation is to look at patterns and continually match them against previously seen patterns. 99% of the computation the brain performs should be patently obvious, because this pattern matching should require a great deal of visible work. In a computer you would be burning billions of watts of energy constantly pattern-matching historical databases; disk heads would be flying about in a constant state of activity. We don't see that.

Instead it seems like the pattern recognition and memory storage are hidden from us and happen as a side effect of brain operation.   We only seem to see things at the edge, signals in and out, some parts of the brain flash that are being used but how?  No answer.  Not even a start.   Again, in the sense of how far have we come in understanding how intelligence works I have to give brain researchers a low grade.

Progress from the Biology side: D

Why so little progress?

dontleaveme

Attacking the wrong problem:  Chicken problem

The big problem with all these attempts at AI from the beginning is what I’ve called the “chicken” problem.  I said back in school many years ago that our goal should not be to make a computer as smart as a human.  That was hopeless considering where we were.  It was better to set our sights on a possibly achievable goal.  Show me a computer program that can be as smart as a chicken.   Many people may think I am jesting and chickens are not that intelligent but in fact the opposite is true.   A chicken is an autonomous living thing which has to make decisions about life and death situations.  It has to recognize threats, recognize food, handle repair, recognize friends and foe, predict the future based on past experiences but be able to correct its mental model if it is wrong to survive and chickens have survived for a million years whereas if we didn’t keep fixing them computers would be extinct in 3 years.    We have a tendency to always underestimate our partners on this planet and overestimate our knowledge but if you actually thought of trying to mimic the behavior of many of these “lower” creatures in even limited ways we would be writing an incredibly complex and long program and I don’t believe we even know how to start.   So, one reason we’ve made so little progress IMO is that we frequently have been attacking the problem wrongly.

Not looking in the right place:  Pre-built structures and processing

The basic problem is “abstractions” and our inability to come up with a way to construct “abstractions” in a systematic way. I have included a paper on some recent vision AI research. Frequently, in descriptions of the brain from people who have studied it, they report having discovered parts of the brain that they think do this or that kind of abstraction, as if the algorithms, like a computer program, have been programmed into the structure of the neurons there. They may be right, but I doubt it for several reasons. First, the amount of specificity they talk about, if added up across all the things they think are pre-programmed into the brain, would require a massive program far beyond what the DNA could encode. It is possible some of what they say is true and some brain regions are customized or dedicated to some kind of work, but I believe the number of these special areas must be very limited because of what I call the combinatorial problem. The fact that some parts of the brain appear to process input signals in some abstract way is pure coincidence. I believe the brain adapts to the input, and many of these regions form naturally as a result of the inputs rather than preceding the inputs. Therefore another dead end, in my opinion, is trying to study the brain to learn the abstractions it makes, or to presuppose in any way what the right set of abstractions is. After all, the whole point of the learning algorithm is to learn abstractions, so why short-circuit the process? Start at the bottom and see if we can get a program to do the basics.

Not looking for a generic solution to learning

Whatever the brain does it produces abstractions from sensory input, then is able to take those abstractions up another level quickly and create abstractions on abstractions.  Sometimes those abstractions can be successful instantly and sometimes you must practice them a lot to get them rooted in your brain.  As an example, we learn first the idea of reading, then the shape of letters, then the combinations of letters to form words, the structure of sentences, the meaning of words, the relationships of words in a sentence, the physical or abstract representations that match the words.  With every sense memory, every abstraction we associate everything else that happened at the same time or has some relationship so that we can call up that abstraction from numerous tags which are themselves abstractions.   Thinking of these abstractions can produce the memories associated with that abstraction such as the smell or taste or sound.

Quantum Mechanics in evolution is pervasive

The number of examples of nature using quantum mechanics for basic functions is large and now goes back possibly a billion years or more in the tool chest of evolution.  Check the articles at the end for the latest information on where we have discovered quantum effects in animal and cell physiology.

Plants:

There is a lot of evidence, emerging over the last 10 years, that nature has leveraged quantum effects all along, possibly from near the very beginning of life. One of the most interesting examples is the discovery that photosynthesis operates using quantum tunneling. When a photon hits a plant leaf, a molecule attached to the chlorophyll, called a chromophore, absorbs the photon. The chromophore molecules sit next to each other and form a quantum coherent state that allows the energy captured from the photon hit to be ferried, at essentially zero energy cost, to the place it is needed to drive the chemistry that ultimately turns CO2 and water into sugars and oxygen, for the plants' benefit and ours. The effect is similar to superconductivity.

Consider this: plants depend on this transfer, which could be 1,000 or 1,000,000 times more efficient than classical mechanics would allow, to grow to the extent they have. If they didn't use it they might never have produced a food supply sustainable for animals, or even been viable themselves. So this is a rather astonishing discovery. Life itself may depend on quantum effects, and if nature uses quantum mechanics for this it likely uses it for other purposes as well.

Birds

Recent studies have performed tests on various types of birds that have geolocating capability.  When the researchers turn on magnetic fields too small to move a single iron molecule the birds become unable to figure out anything.  When the field is turned off the birds return to being able to navigate.  Such a small magnetic field could only be operating through some kind of quantum effect.  We don’t know exactly how yet but clearly conventional physics is off the table.

Senses – Eyes, Smell, Hearing, Touch

chromophores

Smell is interesting because this sense in particular is one we have been especially poor at replicating. We have no good idea how to do the pattern matching that the noses of animals and humans perform routinely. Studies have pointed to quantum effects at work in human and other animal smell, which is not surprising at all. There is good evidence that all of our sensory input is transduced by quantum effects, for us and for most animals. It turns out our eyes use the same mechanism as plants use for photosynthesis: all of our visual receptors contain a similar molecule which allows them to ferry the energy of a single photon to the nerves and create a magnified, macro-level nerve signal from it. So it seems all or most of the senses of most animals are helped by evolution's reuse of quantum computers.

Immune system

Healthy_Human_T_Cell

I have not read this yet but I would lay big bucks that the immune system leverages quantum effects to recognize foreign viruses.   There is evidence that the same microtubule type structures seen in post-synaptic membranes that I will talk about in the next section are seen in T-cells and recent studies have implicated these in quantum effects.  How else to explain the unbelievable skill of our immune system?  No other explanation I am aware of could come close to explaining it.

The Brain

Would it be surprising, if nature has been using quantum technology for billions of years in our senses and in our defense systems, that it would also be leveraging quantum effects in our brains? I think it would be ridiculous if it didn't. Many things about the brain's function happen to correspond with some of the things quantum computers are good at. To me it would be ridiculous to assume at this point that the brain doesn't use quantum effects, given that we now have solid evidence evolution uses quantum mechanics all over the body.

In January of this year scientists discovered quantum effects in tubulin molecules around microtubules in post-synaptic neuron junctions.

Even more recently, scientists discovered how calcium ions generated by neuronal activity can cause CaMKII (a six-legged, dual-sided structure) to activate (program, via phosphorylation) 6-bit tubulin molecules on microtubules in post-synaptic junctions. They additionally showed that such tubulin molecules could lead to neuron firing. Thus it seems we finally have a mechanism by which neuron sequences can be stored on microtubule tubulin molecules and by which these “memories” can then trigger neuron firing. Finally the hard drive may have been discovered, if not the pattern matching machinery.

Penrose and Hameroff (*) propose a specific location for quantum activity (post-synaptic junctions in dendrites and soma) and a specific way it happens and is used. There is still some doubt about this, but P&H don't claim to have all the answers and admit theirs is a work in progress as far as precisely how it works. However, they now have a number of significant wins in terms of experimental evidence backing up the latest version of their theory, called “Orch OR” for orchestrated objective reduction.

In P&H’s theory every 20ms or so the brain has a decoherence event.  This happens in the P&H version due to quantum gravity, however, it is not necessary to invoke that idea to cause decoherence.  There could be any number of physical processes that could cause this.  What’s important is that they have postulated for once a reasonable model of how the brain could actually work using quantum computing.  Given that we have seen nature doing high temperature quantum computing and we ourselves have now built quantum high temperature material in Correlated Oxide it seems eminently plausible the human brain is doing quantum computing.

DNA and Evolution itself

nucleusfigure1

In the book Physics in Mind (1), Loewenstein describes the likely way that the eyes and other senses evolved. He also describes the specifics of the quantum effects involved in many of the things described above and puts forward an eminently more plausible theory of how DNA evolves and changes to create things that have so far been puzzles. I have struggled with evolution for a couple of reasons. The sheer complexity of life has never seemed sufficiently explained combinatorially by DNA (what I call the combinatorial problem). When the human genome project said there were only 30,000 genes it was apparent to me that something was missing. We discovered several years later that control DNA patterns were located in the 3 billion base pairs of “junk” DNA. I don't believe this is the entire answer either. Loewenstein gives some ideas to help solve this puzzle using quantum demons. Worth reading.

I am a computer scientist.  I know how complicated it is to write the simplest program to operate things like robots or planes.   There are frequently millions of lines of code involved.  The code is extremely fragile.  Numerous bugs exist that cause problems.  As a result we go through extensive debugging processes.  These robots and planes do not have a small fraction of the code needed by a human body.   The human body has 100s of thousands of important chemicals, molecules and each of these needs to be regulated and put in action when needed at the right time.   When damage happens to the body it must put a repair process in place that pretty precisely fixes the problem employing many materials.  The body is beset by invaders who want to do it harm all the time.  These invaders must be detected, recognized and appropriate action taken.   Constant breakdown of the body occurs and must be repaired.   While the size of the DNA is impressive at 3 billion nucleotides it is impossible for me to understand how it is possible that in there is the coding to operate a human body.

When I mention this to biologists they seem unimpressed. I don't understand that reaction. It reminds me of the fact that as early as the 1930s scientists were aware that galaxies should not have been possible: the stars in the galaxies were flying three times faster than Newton and Einstein would allow. Some bright cosmologists said back then that we would discover the universe was mostly made of invisible matter. The issue was tabled for decades until recently. When I went to school nobody mentioned this problem that the galaxies appeared to be disproving Newton's law of gravity. It was shuffled under the table.

The problem is really much worse than I described above.  If you look into each of these processes the human body does routinely you will see it is more complicated than you initially assumed.  For instance, looking at the ER (endoplasmic reticulum) in almost every cell of the body involves the construction of thousands of molecules and proteins, lipids, hormones.  The size of the ER and what it produces can vary dramatically during short time periods depending on demands and the chemicals needs, the stress on the body, the location and type of cell involved.   There are dozens of types of cell motor proteins for ferrying organelles and various proteins or other chemicals inside the cell as well as in and out of the nucleus and to other cells through microtubules and other gateways.   Some of the motor proteins can ferry individual molecules and some can ferry whole organelles like mitochondria.  Forget the human body, managing a single cell itself is like programming a city.  There are hundreds of cell types in the human body.

The complexity is mind boggling and imagining a program that could run a human body would require the programming output of all humanity for ages assuming we even knew how to start.   I get the idea that when I bring up issues such as these that the community is disregarding the problem.  There is no way that 3 billion nucleotides and 30,000 genes build, regulate, repair a human body.  I am not saying this based on some religious argument but based on solid foundation of understanding that the complexity of this is just too much.

I am certain the more we look into this we will find that the regulation and operation of DNA is controlled by pattern matching quantum gear in the cell nucleus somehow.   I would also bet that aspects of DNA replication and repair involve quantum effects.   This means that nature has probably been utilizing quantum effects since the whole of life began.

Quantum computers

d-wave-512-Qubit-computer

We have the understanding and the formulas to solve quantum multi-body problems, but the complexity of solving them is enormous and requires supercomputers doing massive computations to come up with the expected behavior of only 3 particles. The smallest visible object consists of a billion billion particles. Our ability to compute the possible or expected paths of something like a billion particles is far beyond what our computers can calculate. Yet nature does this 10 to the 40th times a second for the 10 to the 75 particles in our visible universe. This is a good argument that either quantum mechanics is fundamentally wrong or we don't live in a matrix, at least not a matrix built using conventional linear computer technology.

Is quantum mechanics wrong? Are we only deluding ourselves in thinking nature can possibly do such impressive calculations? No. It seems this really is how our world works. One proof is that we have built a quantum computer and it works. Several years ago a company in Vancouver, BC created the first quantum computer, called the D-Wave. This computer got a new version this year with 500 qubits. While 500 qubits is a small number, it still represents an enormous leap in computing power: a 500-qubit computer can hold 2 to the 500 different 500-bit patterns in quantum superposition and do pattern matching against these patterns in real time. The current D-Wave computer costs 10 million dollars and requires a supercooled environment to run, but it performs feats that traditional computers cannot match. This essentially proves to me that quantum mechanics is correct, at least in the sense that the universe is not deluding us about being able to do the things we theorize. Somehow nature is able to do things beyond our ability to understand. The D-Wave is being used by Google for accelerating pattern recognition. I find myself worried about that, but that's a distraction. The real point of this article is to explain how I think the human brain operates in some ways like a quantum computer, from the point of view of common sense.

First off, we are making progress in using quantum technology not only in computers but in a number of products. One example I found exciting recently is taking our basic silicon semiconductor technology and replacing it with a material that exploits a tunneling effect similar to the one nature uses in plants. In an article referenced below, scientists were able to reduce the resistance by a factor of 10 to the 7 in a material called Correlated Oxide. In their first experiment they were able to match the performance of the best semiconductor technology on the market. The Correlated Oxide has no need to operate at reduced temperatures at all; in fact it works a hundred degrees above room temperature just as well as at room temperature, yet it demonstrates quantum effects. So it seems we have now seen at least one case where we can manufacture quantum-mechanically operating devices that work at room temperature, something nature apparently discovered a billion years ago.

Quantum computers are programmed very differently than conventional computers.  A quantum computer has a number of basic operations that are not at all like traditional operators.  For instance, the “Fourier Operator”, the “Hadamard operation”, Grover’s Algorithm and Shor’s Algorithm.   These operations allow a quantum computer to solve difficult problems for instance that require searching for optimum solutions among a large number of possible answers.  The Hadamard operation is amazing because it allows us to build the first truly random number generator ever.

The world we live in solves these problems constantly finding the most probable answer among massive numbers of alternatives as much as  10 to the 40th times a second.   That electron is tunneled in the most efficient energy way in the plant.  In doing so nature has to look at all the possible ways the electron could travel and say this is the least energy one which has the highest probability of occurring.  Nature does this every time a photon hits a plant leaf so the effect is real and astonishing when you think about it in any depth.

Let’s say you have a huge database of information and you want to find the closest optimal match.   Such an algorithm is extremely costly with conventional techniques but falls out naturally in one cycle from a sufficiently large quantum computer.   Further, many of our algorithms that are the fastest produce answers which have locality problems.  They find a solution which happens to leverage one path that looks pretty good but the algorithm misses other minima that are better solutions.  To get the optimal solution requires looking at ALL the possible answers a much tougher problem.  The D-Wave quantum computer demonstrated this ability to find solutions that were unlike those provided by the cheap algorithms.  It finds the truly cheapest route instantly somehow.

A quantum computer can store a vast array of data in superposition, meaning that all the values exist at the same time as probabilities. In nature this is the fuzzy, probabilistic wave nature of things. The “memory” is stored as competing possibilities that all exist in superposition, and when the quantum system “decoheres” the result is the most probable state. The trick in quantum computers is to turn problems into these kinds of things: turn a problem into a search, among a large number of options, for the answer that is best according to some criteria. Quantum mechanics stipulates that particles will always choose the least-energy, most probable path. So, for example, by turning our problem from a particle traversal problem into a pattern matching problem, we solve the pattern match in the same time it takes nature to decide how a particle goes from here to there.

What is amazing is that in a  quantum computer we are actually making a particle to go from here to there and we construct the world around that particle so that as the nature solves its traversal problem it solves our problem in the process.  In computers we think of the real world as “virtualized.”  The computer is performing calculations but there is no real world.   In a quantum computer it is like we are running physics experiments every time we do an operation looking at what nature has figured out.  We don’t know how it does it so all we can do is watch and learn from what it does.  That is fascinating to me and does beg the question how is it possible it does it but that is a question for another time.  Sweep that one under the rug.

I want to put a little skepticism here.  Not all problems are tractable with quantum computers yet.  The D-Wave is an annealing quantum computer which has limitations.  Scientists are working on more “complete” quantum computers that can solve more general problems but D-Wave will introduce a 1152 qubit computer that is in testing now (release next year) and supposedly has improvements that may make it be able to do a wider set of operations and show entanglement.  My post is not to say D-Wave is the answer but it is evidence that quantum effects are real.

How does a Quantum Brain work?

In my conception of how intelligence could possibly come about it requires a learning period when the brain has to process many inputs from the real world and produce enough abstractions to make sense of all the data coming in.  This process of producing abstractions is far from trivial because the input data is immense and the ways it could be generalized are virtually unlimited.  Assuming this model is close to what the brain has to do, how could the brain do this?

Imagine that all of the abstractions are in superposition in a brain quantum computer. What we are looking for from the quantum brain computer is the best matching abstraction that fits the input data. This is something a quantum computer may be able to do. When the decoherence happens, the result is the abstraction that matches: the “AHA” moment.

There is new evidence to support the microtubules as being part of this process(*).  A study showed that cutting microtubules between brain neurons caused people to lose memories.  Also, research has shown that anesthetics that remove our consciousness operate by stopping the functioning of microtubules in the brain.  Stopping the microtubules stops consciousness.

If P&H are right then tubulin molecules on microtubules in post synaptic junctions are pushed into states that record our memories.  Our memories would be encoded on these tubulin molecules somehow.  The quantum computer of the brain which consists of these tubulin molecules and microtubules would go into superposition with new inputs from the senses or from other parts of the brain.  These inputs could be both raw data or abstractions or some combination.  The brain in each 20ms (about the time for decoherence in P&H model) does a pattern match to produce the matching past experience and abstractions to describe the current data.   The neurons that come up with the matching answer fire.  Fine.

That explains two really hard things that I don't think ever had a reasonable explanation before. The hard drive is these tubulin molecules, and the pattern recognition is hidden because it happens in a quantum fog, sometimes referred to as quantum foam, that we cannot observe until decoherence, at which point we have the answer and neurons fire. That is precisely what we see when looking at the brain's macro behavior. The P&H theory also explains other things. For instance, nobody has explained what brain waves are; well, it seems that the decoherence events are the brain waves, and the cadence P&H calculate matches roughly what we see in brain waves, amazingly. Also, meditation has the effect of slowing brain waves, which means extending the time between decoherence events. In quantum computers this means the computer can find better answers. Therefore this theory explains why meditation works and why increasing the brain's wavelength would increase intelligence. Making the brain less cacophonous means that more parts of the brain are potentially in coherence, meaning better reasoning as well.

Now the difficult part that is my contribution.   Imagine that what is encoded in microtubules and tubulin is not just memories but also abstractions that are the result of pattern matches from before.  Each successful pattern match from before results in an abstraction being built.  The matching of additional patterns would be used to increase the probability of that pattern but there is a problem.

The ways to construct abstractions are practically infinite. The weightings on these abstractions are hard to stabilize: sheer luck may cause some abstractions to appear good for a while, and a computer algorithm will get locked into local minima and assume it has a good abstraction too soon. What is needed is to continuously look at a very large set of abstractions and weights and pick the weightings that work best. This is the traveling salesman optimization problem multiplied by a billion. I want the quantum computer called the brain to look at all the possible weightings for abstractions and pick the best weights and abstractions considering all past experience. This is an unbelievably complex computing problem, way beyond our linear technology; it is almost unimaginably complex, and we wouldn't even know where to start. Yet this is precisely the kind of abstraction-building problem we need a learning machine to do: a learning machine which creates real “understanding” of the problem by creating the right abstractions to represent the data and its exceptions.

Some point out that the first layers of sight processing occur not in the brain but in the eyes themselves.  The neurons in the eyes learn to recognize certain invariances and certain patterns before it gets sent to the brain cortex.  There is no problem with this.  It makes sense to perform the first few levels of abstraction out in the eye neurons.  They are neurons and if they can be quantum computers too then they could compute certain first order abstractions so that the brain deals with higher order abstractions.

The microtubules that Penrose and Hameroff talk about are special microtubules in the brain. These are more static microtubules that form after the brain reaches a more complete state. This is fine, because it answers a troubling question some have struggled with: why do we remember hardly anything prior to a certain age? P&H's explanation for why we have trouble remembering things from early childhood is that until the abstraction matrix is built and the brain stabilizes enough for the memory storage mechanism to be stable, it is not possible to remember anything significant.

The cortex I have read is a remarkably uniform structure.  That would make sense and be numerically reasonable in terms of how much DNA would be needed to build a brain.  So, how come when researchers look at the brain they see these common areas where brain functions occur?  There is the possibility that this is simply the way a quantum system would find the abstractions and networking to process human speech or vision of our world or hearing.  In other words, the brain is uniform but the process of learning naturally ends up creating these structures in similar ways in most or all brains simply as a matter of coincidence.   I’m not suggesting all brain structures are built this way but it could relieve us of the need to be probing the brain to find structures that seem unlikely to really mean anything.

It has been noted the brain has tremendous plasticity and that people who are given alternate electrical mechanical based sense organs as a result of loss of eyes or other senses can regain from the new different input a sense of vision for instance.   If the brain is just a uniform matrix that learns abstractions on top of any input device then that makes sense.   It also means that we could construct almost any new input device for the brain and it could make sense of it.   That explains how animals learn how to utilize smells or directional signals that we are incapable of seeing or feeling.  It implies if we hooked up those senses to our brain we could probably learn to process them and have enhanced senses.  Scary and cool.

P&H have proposed a concrete theory of brain function, some of whose assertions have recently been verified experimentally. Their theory contends that microtubules in the post-synaptic junctions of neurons and soma are highly regular, fixed, water-free environments that allow quantum coherence to exist for periods as long as 20ms. P&H's theory leverages the theories of quantum gravity and discrete space-time that Penrose has developed, called loop quantum gravity, and the mathematics behind it, called spin networks. Penrose is not a lightweight, okay! He is over 80 years old, did work on seminal parts of Einstein's general relativity, published with Hawking on black holes and has produced an incredible array of intellectual achievements. I would say this man has been seriously underestimated by his peers and the world at large. Amazingly he is still writing and doing research, giving us new insights into the world.

Conclusion

mandelbrot Mandel_zoom_07_satellite

If the current theories of a number of scientists (and myself) are true, then every human being has billions, possibly up to a trillion, quantum computers operating in the brain. This puts the challenge for AI of coming up with a competitor using conventional technology out of reach. It also means many of life's lower creatures have quantum computing capabilities, which explains the complex behavior and abilities many of them have that are beyond our understanding, even the lowly chicken.

I would not be surprised if we found that quantum mechanics is being used by nature in all kinds of aspects of life that we have heretofore presumed were classical chemistry.  As an example I would be surprised if we didn’t find that the mitochondria in some way utilize quantum mechanics to perform miracles of energy production and transport in the body, that all of our senses use quantum effects to enhance their function, that our immune system uses quantum effects to match DNA patterns of threat viruses, that multicellular creatures use quantum effects for DNA expression, DNA repair, replication and regulation.  In short that life critically depends on quantum mechanics at every facet.    I say this because many of these things are well beyond our ability to really explain how they could possibly work using conventional molecular chemistry.

A child starts out knowing virtually nothing, but over a period of years picks up abstractions. Some abstractions are useful, some are not. The child's brain is processing millions of sensory inputs every second, and it must digest and find patterns to abstract from this jumble of data. Vision recognition computer scientists are always struggling with what abstractions to program into their models of vision to try to match a human's ability to do recognition. The human does it over a long period in an unbelievably wide range of scenarios. We call these invariances, and the human recognition system is good at a whole lot of invariances. You can take a face, put it under low light, lower the contrast, deform it in numerous ways, put masks on it from beards to obscuring items, caricature it, fuzz it up, look at it from different angles, go in close or look at the face from afar. There can be objects in front of the face, or different expressions, and we are still able to recognize it; when there is a difference our brains detect both the face and the exception. Our best algorithms can't even approach this fecundity, regardless of the compute power we throw at them. We keep trying to understand how the brain does what it does, as if we could build an intelligent system and shortcut the process of learning that the brain goes through. The problem with this is that it does not solve the problem that is really the crucial one to be solved, namely learning. All you do by trying to shortcut the learning process, by hard-coding the abstractions the brain comes up with naturally, is delay the time when we have to face the fact that we don't know how learning works.

I believe that the advent of quantum computers, and the recent understanding of how to leverage quantum systems even at room temperature, by nature and now by us in Correlated Oxide, means we are on the verge of a (excuse the pun) quantum leap in AI. Our ability to understand how quantum systems are employed by nature gives us a much deeper understanding of the complexity of nature and of the ways nature has solved problems beautifully. It will give us tools to leverage nature's jump start on this technology and see what we can do with it. I hope this also gives us a renewed appreciation of the amazing creatures we live close to, who were literally the guinea pigs that helped us get to where we are. They are more than a food source; so far as we know they are the only real relatives in the universe we will ever know. We are potentially also on the verge of a biological quantum leap in both understanding and capability. Lastly, this gives us an understanding of how special our abilities are: requiring billions of quantum computers, we are a long way from having computers be competitors to us. I believe people doing hard AI don't understand that the path they are on is fruitless and pointless with the methods in use so far.

Articles to read more:

Books:

Physics in Mind, Werner Loewenstein

Penrose Books

Quantum Mechanics in nature:

Photosynthesis efficiency tied to Quantum Mechanics

birds-might-actually-be-using-quantum-mechanics-to-find-their-way-through-the-skies

Discovery of Quantum effects in the brain

Microtubules in the brain discovered to be quantum correlated

Smell may be quantum sense

Quantum Mechanics and Quantum Computers:

Quantum Computer fundamentals

Quantum Microscope

for-electronics-beyond-silicon-new-contender-emerges

Brain Quantum Mechanics

quantum consciousnes

Orchestrated_objective_reduction

The visual Recognition problem

Scientists claim brain memory code cracked

Other Interesting articles:

uci-team-is-first-to-capture-motion-of-single-molecule-in-real-time/

infrared-new-renewable-energy-source


Lahiru SandaruwanAutoscaling in Apache Stratos: Part II

Elasticity fine tuning parameters to tune a PaaS built using Apache Stratos 4.0


This is a continuation of my first blog post on Apache Stratos autoscaling. It gives a brief explanation of how the elasticity parameters in Apache Stratos can be tuned. I have copied the Drools file we use in the Autoscaler of the Apache Stratos 4.0.0 release (the latest at the moment) at the end of this post, with the imports removed to shorten it.

Terms and how they are used

What is a partition?

A portion of an IaaS, defined by any kind of separation the IaaS provides.

E.g. Zone, Rack, etc.

What are Partition Groups/ Network partitions?
Partition groups are also known as "network partitions". As the name implies, a network partition is an area of an IaaS that is bound to one network of the IaaS. We can include several partitions inside a network partition.

E.g. Region

How are the partitions used by Stratos?

We define partitions in Stratos to manage the instance count; you can define a minimum and a maximum for each partition. We can also select either the 'round-robin' or the 'one-after-another' algorithm to decide how scaling happens across several partitions.

What is a deployment policy?

The deployment policy is where you define the layout of your PaaS: there you can organize your partitions and partition groups (network partitions) as you want.

What is an Autoscaling policy?

A policy in which the user defines the thresholds for memory consumption, load average, and requests in flight. These thresholds are used by the Autoscaler to take scaling decisions.

Implementation Details 

The basic logic has been described in part I of this series. Here I go into the implementation logic in detail. If you want a better understanding of the complete architecture and network partitions, go through Lakmal's blog.

We run the scaling rule against a network partition (a partition group defined in the deployment policy) of the cluster.

All the global attributes are set on the Drools session before it is fired (i.e. the global values must be passed to it before the rule runs). Since we have the network partition context object, we will have the cluster-wise average, gradient, and second derivative, relevant to the network partition of the particular cluster, for each of the following:

  • Requests in flight
  • Memory consumption
  • Load average
We take the scale-up decision based on all three of the above.

We also receive member-wise load average and memory consumption values. We use those values when taking the decision to scale down, to find the member with the least load at that moment; that member will then be terminated.

We also need to know whether the above parameters have been updated, since we do not need to run the rule twice for the same set of statistics. Therefore we have flags named 'rifReset', 'mcReset', and 'laReset' as global values, representing requests in flight, memory consumption, and load average respectively.

The autoscale policy object (which holds the information from the deployed autoscale policy JSON) carries the thresholds we need to take scale up/down decisions, as I explained in the terms section above.

Tuning parameters


Scale-up and scale-down control factors (0.8 and 0.1)

We use these values, together with the threshold value given by the user for the particular cluster, to decide whether we need to scale up or down.

E.g. here we multiply the threshold by the scale-up factor to reach the scale-up decision:

scaleUp : Boolean() from ((rifReset && (rifPredictedValue > rifAverageLimit * 0.8)) || (mcReset && (mcPredictedValue > mcAverageLimit * 0.8)) || (laReset && (laPredictedValue > laAverageLimit * 0.8)))

Users can adjust these factors according to their requirements, i.e. if a user wants to scale up when 70% of the threshold is reached, s/he should set the scale-up factor to 0.7.

"overallLoad" formula

Here we give a weight of 50% to both CPU and memory consumption when calculating the overall load. Users can adjust this weighting in the rules file, in the following formula:

overallLoad = (predictedCpu + predictedMemoryConsumption) / 2;

Predicted value for next minute

We pass 1 by default to the prediction formula, which means the result is a prediction one minute ahead. If a user wants to change the prediction interval, s/he should pass the desired number of minutes.

E.g.
getPredictedValueForNextMinute(loadAverageAverage, loadAverageGradient, loadAverageSecondDerivative, 1)
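
For reference, here is a minimal Java sketch of what such a second-order prediction could look like. This is only an assumption based on the parameter names (average, gradient, second derivative, time); the exact formula lives in the Autoscaler's RuleTasksDelegator in the Stratos source.

public class PredictionSketch {

    // Assumed second-order prediction:
    // predicted = average + gradient * t + 0.5 * secondDerivative * t^2
    public static double getPredictedValueForNextMinute(double average,
                                                        double gradient,
                                                        double secondDerivative,
                                                        int timeInMinutes) {
        return average
                + gradient * timeInMinutes
                + 0.5 * secondDerivative * timeInMinutes * timeInMinutes;
    }

    public static void main(String[] args) {
        // Example: load average is 40, rising by 5 per minute and accelerating by 2 per minute^2.
        double predicted = getPredictedValueForNextMinute(40, 5, 2, 1);
        System.out.println(predicted); // 46.0 one minute ahead
    }
}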

Scale Down Requests Count

The scale-down requests count is the number of turns we wait, after the first request to scale down is made, before actually taking the scale-down action for the cluster.

$networkPartitionContext.getScaleDownRequestsCount() > 5


[1] http://www.slideshare.net/hettiarachchigls1/autoscaler-architecture-of-apache-stratos-400
[2] http://www.youtube.com/watch?v=DyWtCXT8Vqk
[3] http://www.sc.ehu.es/ccwbayes/isg/administrator/components/com_jresearch/files/publications/autoscaling.pdf
[4] 

global org.apache.stratos.autoscaler.rule.RuleLog log;
global org.apache.stratos.autoscaler.rule.RuleTasksDelegator $delegator;
global org.apache.stratos.autoscaler.policy.model.AutoscalePolicy autoscalePolicy;
global java.lang.String clusterId;
global java.lang.String lbRef;
global java.lang.Boolean rifReset;
global java.lang.Boolean mcReset;
global java.lang.Boolean laReset;

rule "Scaling Rule"
dialect "mvel"
when
        $networkPartitionContext : NetworkPartitionContext ()

        $loadThresholds : LoadThresholds() from  autoscalePolicy.getLoadThresholds()
   algorithmName : String() from $networkPartitionContext.getPartitionAlgorithm();
        autoscaleAlgorithm : AutoscaleAlgorithm() from  $delegator.getAutoscaleAlgorithm(algorithmName)

        eval(log.debug("Running scale up rule: [network-partition] " + $networkPartitionContext.getId() + " [cluster] " + clusterId))
        eval(log.debug("[scaling] [network-partition] " + $networkPartitionContext.getId() + " [cluster] " + clusterId + " Algorithm name: " + algorithmName))
        eval(log.debug("[scaling] [network-partition] " + $networkPartitionContext.getId() + " [cluster] " + clusterId + " Algorithm: " + autoscaleAlgorithm))


        rifAverage : Float() from  $networkPartitionContext.getAverageRequestsInFlight()
        rifGradient : Float() from  $networkPartitionContext.getRequestsInFlightGradient()
        rifSecondDerivative : Float() from  $networkPartitionContext.getRequestsInFlightSecondDerivative()
        rifAverageLimit : Float() from  $loadThresholds.getRequestsInFlight().getAverage()
   rifPredictedValue : Double() from $delegator.getPredictedValueForNextMinute(rifAverage, rifGradient, rifSecondDerivative, 1)

        memoryConsumptionAverage : Float() from  $networkPartitionContext.getAverageMemoryConsumption()
        memoryConsumptionGradient : Float() from  $networkPartitionContext.getMemoryConsumptionGradient()
        memoryConsumptionSecondDerivative : Float() from  $networkPartitionContext.getMemoryConsumptionSecondDerivative()
        mcAverageLimit : Float() from  $loadThresholds.getMemoryConsumption().getAverage()
   mcPredictedValue : Double() from $delegator.getPredictedValueForNextMinute(memoryConsumptionAverage, memoryConsumptionGradient, memoryConsumptionSecondDerivative, 1)

        loadAverageAverage : Float() from  $networkPartitionContext.getAverageLoadAverage()
        loadAverageGradient : Float() from  $networkPartitionContext.getLoadAverageGradient()
        loadAverageSecondDerivative : Float() from  $networkPartitionContext.getLoadAverageSecondDerivative()
        laAverageLimit : Float() from  $loadThresholds.getLoadAverage().getAverage()
   laPredictedValue : Double() from $delegator.getPredictedValueForNextMinute(loadAverageAverage, loadAverageGradient, loadAverageSecondDerivative, 1)


        scaleUp : Boolean() from ((rifReset && (rifPredictedValue > rifAverageLimit * 0.8)) || (mcReset && (mcPredictedValue > mcAverageLimit * 0.8)) || (laReset && (laPredictedValue > laAverageLimit * 0.8)))
        scaleDown : Boolean() from ((rifReset && (rifPredictedValue < rifAverageLimit * 0.1)) && (mcReset && (mcPredictedValue < mcAverageLimit * 0.1)) && (laReset && (laPredictedValue < laAverageLimit * 0.1)))

        eval(log.debug("[scaling] " + " [cluster] " + clusterId + " RIF predicted value: " + rifPredictedValue))
        eval(log.debug("[scaling] " + " [cluster] " + clusterId + " RIF average limit: " + rifAverageLimit))

        eval(log.debug("[scaling] " + " [cluster] " + clusterId + " MC predicted value: " + mcPredictedValue))
        eval(log.debug("[scaling] " + " [cluster] " + clusterId + " MC average limit: " + mcAverageLimit))

        eval(log.debug("[scaling] " + " [cluster] " + clusterId + " LA predicted value: " + laPredictedValue))
        eval(log.debug("[scaling] " + " [cluster] " + clusterId + " LA Average limit: " + laAverageLimit))

        eval(log.debug("[scaling] " + " [cluster] " + clusterId + " Scale-up action: " + scaleUp))
        eval(log.debug("[scaling] " + " [cluster] " + clusterId + " Scale-down action: " + scaleDown))

then
        if(scaleUp){

            $networkPartitionContext.resetScaleDownRequestsCount();
            Partition partition =  autoscaleAlgorithm.getNextScaleUpPartition($networkPartitionContext, clusterId);
            if(partition != null){
                log.info("[scale-up] Partition available, hence trying to spawn an instance to scale up!" );
                log.debug("[scale-up] " + " [partition] " + partition.getId() + " [cluster] " + clusterId );
                $delegator.delegateSpawn($networkPartitionContext.getPartitionCtxt(partition.getId()), clusterId, lbRef);
            }
        } else if(scaleDown){

            log.debug("[scale-down] Decided to Scale down [cluster] " + clusterId);
            if($networkPartitionContext.getScaleDownRequestsCount() > 5 ){
                log.debug("[scale-down] Reached scale down requests threshold [cluster] " + clusterId + " Count " + $networkPartitionContext.getScaleDownRequestsCount());
                $networkPartitionContext.resetScaleDownRequestsCount();
                MemberStatsContext selectedMemberStatsContext = null;
                double lowestOverallLoad = 0.0;
                boolean foundAValue = false;
                Partition partition =  autoscaleAlgorithm.getNextScaleDownPartition($networkPartitionContext, clusterId);
                if(partition != null){
                    log.info("[scale-down] Partition available to scale down ");
                    log.debug("[scale-down] " + " [partition] " + partition.getId() + " [cluster] " + clusterId);
                    partitionContext = $networkPartitionContext.getPartitionCtxt(partition.getId());

                    for(MemberStatsContext memberStatsContext: partitionContext.getMemberStatsContexts().values()){

                        LoadAverage loadAverage = memberStatsContext.getLoadAverage();
                        log.debug("[scale-down] " + " [cluster] "
                            + clusterId + " [member] " + memberStatsContext.getMemberId() + " Load average: " + loadAverage);

                        MemoryConsumption memoryConsumption = memberStatsContext.getMemoryConsumption();
                        log.debug("[scale-down] " + " [partition] " + partition.getId() + " [cluster] "
                            + clusterId + " [member] " + memberStatsContext.getMemberId() + " Memory consumption: " + memoryConsumption);

                        double predictedCpu = $delegator.getPredictedValueForNextMinute(loadAverage.getAverage(),loadAverage.getGradient(),loadAverage.getSecondDerivative(), 1);
                        log.debug("[scale-down] " + " [partition] " + partition.getId() + " [cluster] "
                            + clusterId + " [member] " + memberStatsContext.getMemberId() + " Predicted CPU: " + predictedCpu);

                        double predictedMemoryConsumption = $delegator.getPredictedValueForNextMinute(memoryConsumption.getAverage(),memoryConsumption.getGradient(),memoryConsumption.getSecondDerivative(), 1);
                        log.debug("[scale-down] " + " [partition] " + partition.getId() + " [cluster] "
                            + clusterId + " [member] " + memberStatsContext.getMemberId() + " Predicted memory consumption: " + predictedMemoryConsumption);

                        double overallLoad = (predictedCpu + predictedMemoryConsumption) / 2;
                        log.debug("[scale-down] " + " [partition] " + partition.getId() + " [cluster] "
                            + clusterId + " [member] " + memberStatsContext.getMemberId() + " Overall load: " + overallLoad);

                        if(!foundAValue){
                            foundAValue = true;
                            selectedMemberStatsContext = memberStatsContext;
                            lowestOverallLoad = overallLoad;
                        } else if(overallLoad < lowestOverallLoad){
                            selectedMemberStatsContext = memberStatsContext;
                            lowestOverallLoad = overallLoad;
                        }
                    }
                    if(selectedMemberStatsContext != null) {
                        log.info("[scale-down] Trying to terminating an instace to scale down!" );
                        log.debug("[scale-down] " + " [partition] " + partition.getId() + " [cluster] "
                            + clusterId + " Member with lowest overall load: " + selectedMemberStatsContext.getMemberId());

                        $delegator.delegateTerminate(partitionContext, selectedMemberStatsContext.getMemberId());
                    }
                }
            } else{
                 log.debug("[scale-down] Not reached scale down requests threshold. " + clusterId + " Count " + $networkPartitionContext.getScaleDownRequestsCount());
                 $networkPartitionContext.increaseScaleDownRequestsCount();

             }
        }  else{
            log.debug("[scaling] No decision made to either scale up or scale down ... ");

        }

end

Lahiru SandaruwanAutoscaling in Apache Stratos: Part I

Apache Stratos Autoscaler supports a predictive approach beyond reactive horizontal autoscaling



In this blog post, I will briefly explain how it enables a predictive autoscaling approach for a PaaS set up using Apache Stratos.


You can get an idea about autoscaling technologies from my hangout on the Autoscaler (slides [1] and recording [2]). [3] is also a good read on this topic.


Current Implementation


Autoscaler, a key component of Apache Stratos, is designed to handle all elasticity aspects of the cloud.
It receives summarized statistics, the average, gradient, and second derivative of the load balancer requests in flight, memory consumption, and load average, from the Complex Event Processor engine. Currently Apache Stratos uses WSO2 CEP as the default Complex Event Processing engine.


CEP periodically receives statistics such as the number of requests in flight from the load balancer of a particular cluster, and the memory consumption and load average of each running cartridge instance. It then calculates the average, gradient, and second derivative over one minute (the default configuration) and sends the calculated values to the health stat message queue.


The Autoscaler subscribes to that queue and receives those stats. It then predicts the values of the stats for a time duration (configurable in the Drools file). It uses the famous motion equation S = u*t + 0.5*a*t*t to predict; this is also one of the basic relations used in Kalman filter motion models. Here the statistic's value is mapped to the displacement of an object in linear motion with constant acceleration.
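To make the mapping concrete, here is a minimal Python sketch of that prediction, assuming the current average is taken as the starting value and t is the prediction interval in minutes; the exact implementation inside the Autoscaler's delegator may differ.

def predict_value(average, gradient, second_derivative, t=1.0):
    # s = u*t + 0.5*a*t^2, taking the statistic's current average as the starting value,
    # its gradient as the velocity u and its second derivative as the acceleration a
    return average + gradient * t + 0.5 * second_derivative * t * t

# e.g. requests in flight predicted one minute ahead
predicted_rif = predict_value(average=120.0, gradient=4.0, second_derivative=0.5)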


The Autoscaler then compares the predicted values with the given thresholds to make scale-up and scale-down decisions. So in the current approach, as a first step, it is still using a threshold value to make the decision. There are a lot of improvements that can be done.

Future Improvements


Decide the number of instances to scale up/down according to the predicted value (Amazon EC2 has a good model)
Use better prediction approaches, such as control theory, to predict
Use approaches like time series analysis to detect patterns and be proactive about load
Make the Autoscaler work on pre-defined timely patterns (e.g. a yearly pattern to handle New Year spikes)
Consider application dependencies while scaling.


[1] http://www.slideshare.net/hettiarachchigls1/autoscaler-architecture-of-apache-stratos-400
[2] http://www.youtube.com/watch?v=DyWtCXT8Vqk

[3] http://www.sc.ehu.es/ccwbayes/isg/administrator/components/com_jresearch/files/publications/autoscaling.pdf

Madhuka UdanthaApache Solr and Search

Introduction

It is an open source enterprise search platform from the Apache Lucene project. Its major features include

  • text search
  • hit highlighting
  • faceted search
  • near real-time indexing
  • dynamic clustering
  • database integration
  • rich document (e.g., Word, PDF) handling
  • geospatial search

Solr runs as a standalone full-text search server within a servlet container. Solr uses the Lucene Java search library at its core for full-text indexing and search, and has REST-like HTTP/XML and JSON APIs. Documents are added to Solr (indexed) via XML, JSON, CSV or binary over HTTP, and users can query it via HTTP GET and receive XML, JSON, CSV or binary results.

The diagram below shows the conceptual Apache Solr architecture; Apache Solr is composed of multiple modules.

image

Download ::http://lucene.apache.org/solr/downloads.html

 

1. Start Apache Solr from ‘solr-4.9.0\example>java -jar start.jar’

image

1.1 Then go to http://localhost:8983/solr/

image

 

Indexing files

2. Now we will index some sample files in Solr.

solr-4.9.0\example\exampledocs>java -jar post.jar *.xml

image

Status can be checked from http://localhost:8983/solr/admin/cores?action=STATUS

or from the web user interface

image
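Besides post.jar, documents can also be indexed over plain HTTP. Below is a minimal Python sketch (using the requests library) that posts a JSON document to the update handler of the default collection1 core; the field values are just examples.

import requests

SOLR_UPDATE = "http://localhost:8983/solr/collection1/update?commit=true"

docs = [{"id": "sample-001", "name": "Sample Dell Server", "price": 199.0}]
resp = requests.post(SOLR_UPDATE, json=docs)  # sends Content-Type: application/json
print(resp.status_code, resp.json()["responseHeader"])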

 

Search

 

3.1 Here we search for ‘video’ in our index

http://localhost:8983/solr/collection1/select?q=video

image

3.2 Picking up field only that you need

http://localhost:8983/solr/collection1/select?q=video&fl=id,name

image

3.3 Search for the word "Dell" in the name field.

http://localhost:8983/solr/collection1/select?q=name:Dell

3.4.1 Using AND in a search
name:Dell AND manu:Dell

3.4.2 Using OR with AND
(name:Dell AND manu:Dell) OR name:fox

3.4.3 Using NOT
name:Dell -name:server

3.5 Wildcard matching
name:Samsu*
http://localhost:8983/solr/collection1/select?q=name:Samsu*

3.6 Proximity matching
Search for "foo bar" within 2 words from each other.
http://localhost:8983/solr/collection1/select?q=name:"Samsung SpinPoint"~2

3.7 Range
http://localhost:8983/solr/collection1/select?q=price:[10 TO 300]

3.8 Facet search

Faceted search is the dynamic clustering of items or search results into categories

http://localhost:8983/solr/collection1/select?q=price:[10%20TO%20300]&facet=true&facet.field=manu

http://localhost:8983/solr/collection1/select?q=price:[10%20TO%20300]&facet=true&facet.field=cat

image
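The same queries can also be run programmatically. Below is a minimal Python sketch (using the requests library) that runs the faceted price-range query above against the default collection1 core and prints the manufacturer facet counts.

import requests

SOLR_SELECT = "http://localhost:8983/solr/collection1/select"

params = {"q": "price:[10 TO 300]", "facet": "true", "facet.field": "manu", "wt": "json"}
data = requests.get(SOLR_SELECT, params=params).json()

print("Matching documents:", data["response"]["numFound"])
facets = data["facet_counts"]["facet_fields"]["manu"]
# Solr returns facet counts as a flat [value, count, value, count, ...] list
for value, count in zip(facets[::2], facets[1::2]):
    print(value, count)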

Heshan SuriyaarachchiJmeter plugins for Apache Jmeter

I came across the JMeter Plugins project today and tried it out. It has a nice set of add-ons to support and complement the existing functionality of JMeter.

Chintana WilamunaSetup WSO2 platform using Vagrant

Vagrant is a neat way to set up a machine and anything else you need for development, testing, etc. very quickly, without having to waste a lot of time. Using this script, you can now set up the WSO2 middleware stack on your machine. The README lists the steps to get going with it.

Currently it sets up WSO2 Identity Server and Application Server and configures SSO between the two.

Dulitha WijewanthaPython Based Device Agent for Internet of Things

Once we have all the backend infrastructure ready with Event Collection, Device Management, etc. – we need a Device Agent to connect the actual device to the aforementioned infrastructure. I have embedded the RA diagram below to refresh our thoughts.

After thinking for a while – I got a feeling that there are 2 types of things that the agent does.

  • Device Management Agent
  • Activity Agent

Device Management Agent

The DM Agent is a generic component set that provides Device Management and related utilities. For example, the DM Agent provides –

  • Communication adaptors for MQTT, HTTP
  • Device Enrollment
  • Token Management
  • Management for platform type

Activity Agent

The Activity Agent is a framework that allows developers to plug in custom Publishers and APIs. The framework can load the Publishers periodically and publish data using communication wrappers (provided by the framework). In a real-world example – this is a temperature sensor publishing data to an API.

The framework can expose custom API classes via an MQTT / HTTP API wrapper. This allows external invocation of hardware APIs through the wrapper. A buzzer connected to a Raspberry Pi and exposed as an API is the ideal example.
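As a purely illustrative sketch (the class and method names below are hypothetical, not the actual framework API), a temperature Publisher plugged into such a framework could look roughly like this:

import random
import time

class TemperaturePublisher(object):
    """Hypothetical Activity Agent publisher: reads a sensor value periodically
    and hands it to a communication wrapper supplied by the framework."""

    def __init__(self, transport, interval=30):
        self.transport = transport  # assumed to expose publish(topic, payload)
        self.interval = interval

    def read(self):
        return 20.0 + random.random() * 5  # replace with a real sensor read

    def run(self):
        while True:
            self.transport.publish("sensors/temperature", str(self.read()))
            time.sleep(self.interval)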

The White Paper by WSO2 on the Reference Architecture for the Internet of Things explains all the components in IoT and give details on implementation of different components. The RA diagram was taken from it.

Architecture of the Agent

It would help to see the components of the Device Agent stacked together to better understand the agent architecture.

Before – I took an example of a Temperature sensor and a Buzzer actuator. Let’s connect those to a RaspberryPi and see what the architecture looks like.

Implementation

Now comes the exciting part. I have started work on this at our RA for IoT Github repo. Currently I have added support for RaspberryPi and BeagleBone Black. A lot of you might ask why we chose Python instead of Java to write the base agent. This is mainly because of the out-of-the-box support provided by most hardware for running Python (RaspberryPi, BeagleBone, Arduino Yun).

Currently I manage to obtain the data below from both platforms –

{
    'hardware': {
        'node': 'beaglebone',
        'system': 'Linux',
        'machine': 'armv7l',
        'version': '#1SMPFriApr1101: 36: 09UTC2014',
        'release': '3.8.13-bone47',
        'processor': ''
    },
    'platform': {
        'terse': 'Linux-3.8.13-bone47-armv7l-with-glibc2.7',
        'alias': 'Linux-3.8.13-bone47-armv7l-with-debian-7.4',
        'normal': 'Linux-3.8.13-bone47-armv7l-with-debian-7.4'
    },
    'python_info': {
        'version_tuple': ('2',
        '7',
        '3'),
        'version': '2.7.3',
        'build': ('default',
        'Mar14201417: 55: 54'),
        'compiler': 'GCC4.6.3'
    },
    'os': {
        'name': ('Linux',
        'beaglebone',
        '3.8.13-bone47',
        '#1SMPFriApr1101: 36: 09UTC2014',
        'armv7l',
        '')
    }
}
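For reference, a snapshot like the one above can be assembled with Python's standard platform and os modules; the sketch below is only an approximation and not necessarily the agent's actual code.

import os
import platform

def collect_device_info():
    uname = platform.uname()  # (system, node, release, version, machine, processor)
    return {
        "hardware": {
            "node": uname[1], "system": uname[0], "machine": uname[4],
            "version": uname[3], "release": uname[2], "processor": uname[5],
        },
        "platform": {
            "terse": platform.platform(terse=True),
            "alias": platform.platform(aliased=True),
            "normal": platform.platform(),
        },
        "python_info": {
            "version_tuple": platform.python_version_tuple(),
            "version": platform.python_version(),
            "build": platform.python_build(),
            "compiler": platform.python_compiler(),
        },
        "os": {"name": os.uname()},  # POSIX only
    }

print(collect_device_info())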

WSO2 Device Manager

After device enrollment and provisioning – the user can view the device on the Device Manager. All the device info is shown as key-value pairs on the page. The plan is to show the APIs as operations on the Device Manager.

As for the enrollment part – I thought of writing a separate post on it. There are multiple enrollment mechanisms for IoT devices, and the option we see is to support all of them. Right now the implementation supports token-based enrollment. This approach assumes that the agent knows a token that is already registered to a user on the Device Manager.

sanjeewa malalgodaRead system property using proxy service deployed in WSO2 ESB

Here I have added a sample Synapse configuration that reads the Carbon server home property and returns it as the response. Add this configuration and invoke the proxy service; you will get carbon.home in the response.





<proxy name="EchoProxyTest"
          transports="https http"
          startOnLoad="true"
          trace="disable">
      <description/>
      <target>
         <inSequence>
            <sequence key="responseTest1"/>
         </inSequence>
         <outSequence>
            <send/>
         </outSequence>
      </target>
   </proxy>
 <sequence name="responseTest1">
      <script language="js">var carbonHome = java.lang.System.getProperty("carbon.home");
      mc.setPayloadXML(&lt;serverHome&gt;{carbonHome}&lt;/serverHome&gt;);</script>
      <header name="To" action="remove"/>
      <property name="NO_ENTITY_BODY" scope="axis2" action="remove"/>
      <property name="RESPONSE" value="true"/>
      <send/>
      <log level="full"/>
   </sequence>
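Once the configuration is deployed, the proxy can be invoked with a plain HTTP call. Here is a minimal Python sketch, assuming the ESB's default HTTP port 8280 and the standard /services/<proxy name> endpoint pattern.

import requests

# Assumed default ESB HTTP port and proxy endpoint pattern
PROXY_URL = "http://localhost:8280/services/EchoProxyTest"

resp = requests.get(PROXY_URL)
print(resp.status_code)
print(resp.text)  # expected to contain <serverHome>...</serverHome>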

Niranjan KarunanandhamGetting your iOS Device Managed by WSO2 Enterprise Mobility Manager

In 2007, Apple Inc. introduced the iPhone, which was one of the first mobile phones to use a multi-touch interface. Now, with iOS 7, iOS is enterprise-friendly with features like enterprise mobile management support, a remote wipe option if the device is lost or stolen, and support for enterprise single sign-on. WSO2 Enterprise Mobility Manager (EMM) allows organizations to enroll iOS devices and manage their own Mobile App Store.
WSO2 EMM is a free and open source EMM solution which supports both iOS and Android. It also comes with Enterprise Mobile Publisher and Mobile Store using which you can have your own enterprise mobile store.

WSO2 EMM 1.1.0 needs to be configured in order to manage iOS devices; the required steps are provided in the WSO2 EMM 1.1.0 documentation.

To find out more about how to get your iOS device managed by WSO2 Enterprise Mobility Manager, please join me for the webinar on Wednesday, September 23, 2014 at 10:00 AM – 11:00 AM (PDT).

John MathonPublish / Subscribe Event Driven Architecture in the age of Cloud, Mobile, Internet of Things(IoT), Social

connectivity

Event driven computing and publish / subscribe (pubsub) are critical to the new world of Cloud, Mobile and now the Internet of Things. I will show how publish / subscribe is a very powerful way for IoT devices to interact.

First, I am going to take you on a short journey through history so you can understand how event driven architecture led the way to the technology of today and fits into the new world of cloud, mobile, Internet of Things and social. For me, as the creator of publish/subscribe, this has been an exciting journey that hasn’t ended and seems on the verge of another renaissance as we move to the connected world of Cloud, Services, APIs, Mobile, and IoT.

What is publish / subscribe, event driven computing?

I tell people that I had this multi-colored dream.  In the spirit of Kurt Vonnegut’s Breakfast of Champions book, here is a picture in the dream I had back in 1984:

pubsubdream

At the time the idea of event driven computing was radical.  Virtually no VC or company saw the value of information delivered quickly to people or applications.  How quaint that seems now for the IM addicted world we live in.  :)   Publish/Subscribe is a message oriented communications paradigm that implements what we call event driven computing.

In a publish/subscribe architecture somebody publishes some information and others who are interested in that information find out about it pretty much instantly, ideally at the exact same time.  At TIBCO we preached this philosophy of doing everything as publish/subscribe for years against the culture of the centralized batch world that existed at the time.  In the batch world you ran “jobs” every week or so and processed information in large chunks.   Unbelievably, almost everybody thought that batch processing weekly or monthly was great and many saw little or no value in getting information any sooner.

Event driven architecture eventually succeeded beyond all expectation.  However, publish/subscribe never superseded the more centralized hub and spoke, point to point approach to event driven computing.    Much of the industry implemented publish/subscribe on top of point to point.  Let me explain what I mean by all that.   If you don’t want to understand the underlying technology of event driven computing and publish / subscribe you can skip the next few sections and get directly to the new stuff.

Google Example

I think a good way to understand this is to see how Google worked in the day.   In IPv4 internet there are roughly 2 billion addresses.  Google literally tries every single possible IP address.  If an answer comes back from that address Google walks the website and indexes the information so you could then find that website content in searches.   The process of scanning every possible IP address took about 6 months back in the early days.  I’m not joking.  If you put up a new web site it could take months before Google would recognize that you were there.   This is the opposite of publish/subscribe. (:))

Believe it or not in the early days of the internet that wasn’t such a big issue.  Web sites were pretty static.    Also, if information changed 2 or 3 times between the time they scan you they won’t know.   Google demonstrates the advantages and problems with polling very well.   Incidentally, here is what Google says today:

“There is No set time in for Google to initially index your site – the time taken can vary.   The time it does take may vary based upon factors such as; * Popularity of the site (Whether it has any links to it) * Whether the content is Crawl-able (Server Responses and Content type) * Site structure (how pages interlink)  It is possible for a site to be crawled/indexed within 4 days to 4 weeks.   In some cases, it may take longer.”

Wow!   Of course this not only applies to the first time they index you but also they make no promises how frequently they will come back and look for new content.

Let’s consider a different way to do this with Publish/subscribe

What if every website published the information when they changed using publish/subscribe.  Google or anybody interested in knowing about changes would get this information instantly.     Who needs that?  (I heard that argument many years ago – please don’t make me explain it again.  :) ) publish subscribe in the cloud

There are lots of ways to leverage publish / subscribe in the internet but this was never implemented.   Wouldn’t this be cool if we could do this?

a) website content would be delivered to interested parties instantly (including Google)

b) instead of missing key content parties would be notified of all changes

c) information could be classified according to a naming scheme such that you could get real-time updates on almost any information from stock prices to weather, traffic,voting results, sports results etc… simply by subscribing to that topic.

d) numerous intermediary services might have been created already that leveraged the information published to the cloud using this new way of providing information and services

Such a world doesn’t exist because publish/subscribe was never implemented in the cloud or internet.   Google does all the work of collecting all the information so we are left with waiting until Google or some other service creates an API to get at the information.   However, we finally have the world of APIs being created.  10s of thousands of services are being created each year by thousands of companies.  Check out my blog on cool APIs.

What is the excitement over Pubsub about?

social10

I frequently found people excited about the pubsub idea.  Speaking at college campuses, to different companies and colleagues at different companies there are a number of people that get very excited about pubsub.   Why?

“Simplified communication”

When you designed software back in the day it was pretty much an all or nothing thing.   You designed the whole thing as an entity.  Everything that talked from one thing to another was planned in advance.   You could draw boxes and lines and show how everything connected.  It was assumed that was the way the software was going to work for some time.  However, software is extremely malleable.   What we thought of one day as the architecture might shift the next day.   Since communication code was amongst the hardest code to write and everything was point to point, adding points or changing a point was a big hassle.  When you added something else to use a function or service you had to plan that well in advance and do a lot of new coding and testing.

Sometimes when I talk to engineers I realize they never got this “religion”; they think that pubsub gets in the way of writing code to go from point A to point B.   That seems the most efficient code possible.  Engineers still aren’t frequently taught how to gain maximum agility in the code they write by leveraging the patterns that still make sense.   They are not taught the pitfalls of tightly bound code and its long term consequences.

Pubsub simplified the whole communications paradigm dramatically.   Now, all you did was design independent pieces that didn’t care about the places it connected to.   I simply subscribed or published the relevant data and everything to do with delivery of the data to whoever was interested was handled by the “network.”   This makes building component architecture much simpler.   I didn’t even need to know the “URL” of the service.

Also, the asynchronous nature of pubsub led to highly efficient and scalable applications.   Pubsub was the node.js of platform 2.0 world.  :)

Pubsub is turning into the defacto way IoT works for partly this reason.  A need for simplified communication that is robust and flexible because we don’t know how all the devices will need to talk to each other.  Pubsub just simplifies all that which makes IoT easier and more fun.

“Late binding”

Pubsub implied late binding meaning a whole lot of related things.  I didn’t need to know much about a service to use it.  I could change the service easily and add things.   I could add new services, break services apart and easily “mediate” how everything used everything else.  In the financial world this meant I could add new data services trivially.  I could delete them as easily.  I could decide I wanted to handle orders one way one month and the next month decide to use a different clearing house the next.

In IoT world “late binding” means being able to add new devices or remove them easily without having to reconfigure the world or change anything.  No hiccups.   Devices can change, upgrade, be multiplied or disappear and the system dynamically adjusts.

“Easy Reuse”

I could add services dynamically and they were discovered.  I could add value easily to a service through another service.   I mentioned earlier the idea of “Business Activity Monitor” and calculation engine.  It was easy to take data that was being published and mine it for more information, produce something else that was useful.  For instance something as simple as taking the bid and ask price and producing an average.  I could take a basket of stocks and compute an arbitrage or a new risk assessment tool and add it to the mix easily.    The ease of doing this is empowering and created agility.

In the IoT world being able to reuse devices for numerous functions will be the key to the “Network Effect” I talk about.

“Discovery”

Pubsub is usually designed to announce services over pubsub itself, so it makes discovery of services easy.   This makes network architecture easier.   I can take down and bring up new components and the system automatically adjusts and notices the new services or the absence of other services.   Everything becomes an event and I can automate the response to the event rather than the old way of having to manually configure files, run command line scripts, restart applications and all the hullabaloo that came with point to point, configured, locked in approaches to communication.  Turning change into an “event” means that things can adjust automatically, whereas no event means a human must act as the event and go around making changes, rebooting things and so on.   Automatic, event driven pubsub is easier, more natural and less error prone.

Three Approaches to Publish / Subscribe: Broadcast, Hub and Spoke and Polling

The protocol that underlies networking in corporations is called IP and it has always supported a type of communication, UDP broadcast, which allows you to send information to all computers attached to the network simultaneously.   This protocol is, at a hardware level, analogous to the idea of publish/subscribe.

At first blush this may seem wasteful, having every computer get every piece of information, but consider that the information only needs to be sent once and everyone gets it.   If a piece of information is useful to only one subscriber it might be wasteful to send it to everyone, but we found ways to make this work really efficiently.   This implementation of publish/subscribe allowed TIBCO in 3 years to get 50% market share in financial trading floors.  The alternatives to implementing this are:

Hubandspoke

a) Hub and Spoke pattern: the information has to be copied N times to each of N interested parties.   Somebody has to do this work and it consumes network bandwidth and introduces latency.   If the server doing this doesn’t know you are interested in the information you won’t get it.  This is the broker pattern.

Connectivity_kucuk

b) Polling pattern: the parties can all contact you periodically to ask if you have anything of interest to them.   Your subscribers might contact you hundreds of times and you have nothing new for them.  That’s wasteful.  If I have 2 updates in rapid succession some may miss some information.

These techniques place a huge load on the publisher to answer many individual requests for information, keep track of lists of interested parties, answer lots of questions which have the same answer over and over and do a lot of work to manage the end clients.  These techniques also aren’t as flexible about introducing new suppliers of information or new consumers.

In a publish/subscribe broadcast architecture  information is sent once to all parties and they decide if it is of interest and throw it away if it isn’t.  If they can be very efficient about throwing away unneeded information then this scheme works well.  We were able to get the overhead to throw away messages so good at TIBCO that only 1% or less of the computers time was spent throwing away messages leaving 99% of the computer free to process the messages it considered more interesting.   Also, sending a message via broadcast frequently consumes no more bandwidth in the network than sending it to all the computers since for many network schemes the bandwidth is shared among a number of devices.  Another huge advantage of this model was that when someone new wanted to listen in or be a publisher they didn’t have to do a lot of work.  New publishers and subscribers could dynamically be added without bothering anybody and everyone learned of new services instantly.  It was auto-discovery.

At this point you should be seeing that broadcast pattern has numerous advantages over the hub and spoke or polling pattern.   I publish once and thousands of parties could get it instantly.  In the hub and spoke model some subscribers at the end of the queue are going to be pissed they get information much later than others.    If polling is used some of the subscribers may miss information entirely.  If they want a fast response they can poll more frequently but this will place a large burden on the servers.    I will still get the information later than with hub and spoke or publish/subscribe.

A cool demo of the efficiency of publish/subscribe over broadcast

We demonstrated the incredible efficiency of publish/subscribe with a cool demo.   We wrote a program which simulated balls bouncing around a room.

ball_example

You could start with one ball and increase the number.  Imagine each ball is its own IoT.  As the balls bounced the way they communicated was to send their position to a central server which computed their next position and sent them this information.  The central computer handled the bouncing trajectories and the physics.  In the hub and spoke or polling model the latencies and work required of the central server increases dramatically and as the square of the number of balls.   Even if you optimize this and figure out a linear algorithm the cost on the network bandwidth grows very fast.  With a larger number of balls the performance of a hub and spoke where each ball has to be informed from a central point becomes extremely poor very fast making the demo look bad.

bouncing-balls-23524-200x125

If the information between the balls is sent using publish / subscribe each ball sends only one message.  Here I am.  The other balls get all the messages of the other balls and compute on their own because they have all the information they need.  The result was the balls move smoothly even as you scale the number of balls up dramatically.   This very visually and dramatically gets the point of publish/subscribe very well.  It convinced a lot of people why our publish/subscribe was so much better at information distribution.

Publish/subscribe over broadcast worked really well on trading floors of financial firms and many other places where timeliness was king.

Problems with publish/subscribe over broadcast

You can skip this section if you don’t want to get too technically deep.

There are many interesting problems we solved in broadcast publish/subscribe.  You may find these enlightening.  If not, just skip it.

1) UDP isn’t reliable.  One problem with the internet broadcast protocol is that it is not reliable.  A message is sent to all computers on the network but the computers have the right to drop it if they are busy.    A trader doesn’t want to miss information.  The only way around this initially was to wait until the endpoint noticed it missed a message.   It would sit there and go:  “Wait, I got message 97 and then 99.  What happened to 98?  Send me 98 again.”   Our protocol understood this and promptly sent message 98 to the affronted party.  However, in the meantime a lot of time could have elapsed and they now have information much later than others.  If you are trying to win that 1/4 of a point better price that’s a problem.   Also, this turns out to have an even more serious problem because when someone misses a message they send a message to ask for a resend.   The server then has to do a lot of additional work to satisfy this party that missed the message.  Maybe 3 or 4 out of thousands miss the message so instead of sending the message only once I actually in reality have to send it 4 or 5 times.  Now imagine that I am getting loaded down and the message traffic is high and more computers are dropping messages so now lots more are asking for repeats.  This happens precisely when the network is busy so that it exacerbates the problem.  These were the first examples of network storms.   It happened and we found ways around it.

I later invented and patented the idea of publishing the message more than once.  This may seem wasteful at first.  However, the probability of dropping 2 messages by the same subscriber is a lot lower than dropping a single message.   Therefore if 3 or 4 needed rebroadcasts if I send the message twice there is a good chance these 3 or 4 will get the second message and there will be vastly less situations where I need to retransmit the message to specific parties.  For the reliability protocol it means instead of dealing with 3 or 4 failures there are no failures.  My cost for doing that is simply to send the message twice which is a lot less than sending it 5 times which I ended up doing by optimistically assuming everyone will get the first message.  Not only that but the guys who missed the first message are a lot happier because they got the second message which came only a millisecond late so for all intents and purposes they didn’t see nearly as large a latency problem.   Let’s say I am sending these messages not over local network but these messages are going out over multiple connected networks to get to its destination.  Along the way the message may have had to be retransmitted several times.  These errors in networks happen and the latency introduced when an error occurs is substantial.  If I send the message twice one of the messages may not need a retransmission so it gets there much faster than the other message.   So, even in a case where you use an underlying reliable protocol sending a message more than once can result in superior performance.  This fact is important in any scenario where timeliness is important and depending on the fanout of the messages it can be more efficient for the network.  Exercise left for the reader.

2) UDP doesn’t scale beyond local networks easily.  Broadcast protocol is great if the total world is a few hundred or thousand computers but when you connect networks of computers many of which are unlikely to be interested in information from the other networks then the efficiency of broadcast publish/subscribe loses out.   We tried to get internet providers to support UDP over wide area networks but they didn’t want to do that.  We invented a protocol called PGM which was implemented in most Cisco routers that allowed broadcast over wide area networks efficiently but it was never used widely.

So we developed the idea of a publish/subscribe proxy/router.  The proxy sat between 2 networks and helped manage load.  If everyone on a certain network had no interest in a particular subject then there is no point in broadcasting that information to that network.   This required knowing what people on this new network were interested in, so we developed the idea that when you subscribed to something it was sent out as a broadcast message.  A proxy could listen to this and make a note:  “Hey, somebody is interested in messages like this on this network.”   The proxy, knowing this subscription information, could then decide if it was useful to pass the information on to their network or not.   A hardware company called Solace Networks makes a device which does intelligent routing of messages like this.  It can do things like compute the distance from some point to decide if it should forward a message, allowing you to ask a question like:  “I want to know about anything happening 30 miles from Chicago.”   Very cool.

Brokers  Replaced Broadcast

BrokerDiagram

Instead of implementing true publish / subscribe over broadcast we ended up collectively implementing publish / subscribe over hub and spoke and created the notion of the broker.   In the broker model we abstract the idea of publish / subscribe with a software component.   I publish something once to the broker and it distributes it to all parties as if I had broadcast it.   The broker uses hub and spoke and sends the message to each interested party.  It removes a lot of the work to implement scalable, reliable, powerful, flexible communications.   It drastically simplifies integration and writing enterprise applications.   I send the message once like in the broadcast model and the broker takes on the load of copying it to all the interested parties and all the security and reliability issues.   It takes on the problem of keeping track of who is interested in every message and manages the reliability of the message delivery to every interested party.

Conceptually then the broker is identical to publish/subscribe over broadcast but it does have the dependency on the broker and the problems we talked about earlier. Modern message brokers are efficient and deliver messages in milliseconds.  Modern message brokers implement security robustly, are very reliable and can do a lot more than just distribute messages.  Also, since the Internet is not “Broadcast enabled” (which is another whole story)  and many corporations don’t support broadcast across their entire network the message broker became the only way to implement publish/subscribe scalably in Enterprises.

After publish/subscribe and event driven messaging came a flood of cool stuff (SOA too)

The messaging explosion of the late 80s and 90s spurred an innovation tsunami.  Most enterprises followed similar paths: after adopting publish/subscribe and SOA they found the following tools very useful for leveraging event driven computing.   These became the standard SOA tools.

Calculation Engines / Activity Monitoring

The very first thing people wanted after their data was on the “bus” was the ability to do computations on the data in realtime.    Maybe I want an average of something rather than the instantaneous value?  Maybe I want a computation of a combination of many related things, for instance calculating the sum of all the things bought,  the average of all the stocks in the S&P.   I might want to be notified when an order >$10,000 was made.   This can also be used to calculate information like the load on computers which can be used to decide when to allocate more instances of a service.   This activity monitoring and calculation service could be used at the operational as well as the application level.

Data Integration

Getting data on the “bus” meant having adapters and connectors to make it easy to get all your enterprise applications on the “bus.”   I might like to draw a real-time graph of the history of that data.    So, people wanted to integrate databases and all kinds of existing applications and protocols into the bus and make them compatible with the event driven messaging architecture.   This became known as “data services and adapter and connection frameworks.”   This was also consistent with the growing belief that applications should be broken into tiers, with data services separated from the business and presentation levels.

Mediation

The publish/subscribe paradigm made integration easy.  Part of how we did that was to build some intelligence into the routing and translation so that whatever information was on the bus could be combined with other data easily or broken up into pieces as all the other applications might like.   Somebody figured out the common ways people wanted to combine and bifurcate message flows.  This became known as EAI (Enterprise Application Integration) and these message mediation patterns became standardized as Enterprise Integration Patterns.  This is a super powerful way to combine services and data to produce new data.

Complex Event Processing

The next thing people wanted to do was to look for patterns in the messages and do things based on those patterns.   Maybe I want to look for a user looking at product X, then product Y and suggest they look at product Z automatically?  Maybe I want to make an offer to them based on their activity to incentivize them to buy?  Maybe I want to discover a certain behavior that is indicative of a security breach?  Complex event processing enabled me to detect complex patterns in activity in the enterprise and automate it.  Very powerful stuff.  In today’s bigdata world this is all still very relevant.

Mashups

Once I had lots of information “on the bus” it was useful to have a tool that let me combine the information in visual ways easily.   This led to the mashup craze.

Conclusion of Platform 2.0 / Distributed Era

In summary here are the traditional components of an event driven messaging architecture

Event Driven Components

 

Today Oracle, IBM, TIBCO, Software AG, WSO2 provide full suites of this Platform 2.0 event driven messaging technology.

 

Platform 3.0  The New Event Driven Architecture

 

I have written extensively about Platform 3.0 and the shift to a new development technology base.  Here is a blog you might consider reading if you want to learn more:

Enterprise Application Platform 3.0 is a Social, Mobile, API Centric, Bigdata, Open Source, Cloud Native Multi-tenant Internet of Things IOT Platform

Platform 3.0 will reinvigorate pubsub as social, mobile and IoT push the world toward doing more and more the instant it is possible, with events to tell everyone interested about it.  The pubsub philosophy is reemerging with Platform 3.0.

 

New Event Driven Components

The IoT (Internet of Things) and Pubsub

Exponential Value from Connectedness

We have a big hype cycle occurring now around “The Internet of Things.”   This is actually not entirely hype.  Things like Uber and Tesla, two of my favorite examples, prove that having devices connected has tremendous value.  Uber is valued at $17 billion and it is an app that depends on IoT smartphones being ubiquitous.   Tesla has changed the way cars are maintained.  My car wakes up in the middle of the night every few weeks and automatically downloads a new version of itself.  Last time it did this it upgraded my suspension system, my navigation system, the efficiency of my car and gave me a better way to handle driving on hills.  Not bad.

In the diagram above imagine I am connected to a glucose sensor.  When my body signs point to needing insulin I can be notified on my cell phone.  If I am older and my family might be worried if I will do the right thing they could be informed as well and give me a call to make sure I do what I need to do.  If the situation gets worse the sensor could talk to other devices around me and notify an emergency service.  The emergency service might automatically know who I am and where I am (like Uber) and find me quickly.   The emergency service using Waze and information about traffic on routes by IoTs all over the road system will tell me the fastest way to get to the patient.  In the meantime my vital signs are being sent to the system and ultimately this data can be used by researchers to figure out how to better prevent problems in the future.  IoT for health is more than about counting steps.  Numerous devices are being built to measure all kinds of things in the human body and ultimately this could be immensely useful for statisticians who can figure out what affects what and what leads to what and what doesn’t lead to what.

Things talk to things directly

social3

Imagine as you walk around you have devices on you physically or around you that are all connected and can talk to each other as well as to the internet.  I admit there is a certain amount of worry about how all this will work, but putting aside our concerns for the moment let’s consider the advantages this entails.  My next blog will get into the technical considerations for making this work well.

If you read this entire blog you will remember the “demo” I talked about that we did at TIBCO many years ago.  This demonstrated the power of publish/subscribe over centralized systems.  If every device has to have the ability to communicate over long distances and the full capability to do everything it needs, then each device becomes bigger, heavier, more expensive, breaks more often and runs out of power sooner.   It would be better if each device could leverage the devices around it to get the job done, or if it only needed to do a small part of a larger problem.

With pubsub I discover devices are near me and they can publish things to me and I can publish things to them.  They may not be interested and I may not be interested but if we are interested we can help each other and deliver more value to the user.

An Example:  I can save a lot of power if I don’t have to broadcast information over long distances.   It would be awfully nice if the other devices near me were able to forward the information farther on so I don’t have to use as much battery power.  In the mesh pubsub models being considered for some IoT protocols when one device publishes something another device near it can automatically forward the message closer to where it needs to go.  This powerful technology is similar to the Internet TCP/IP protocol routing which gives the internet the reliability and robustness that has proven itself.

If things can talk to things directly without having to go to a central hub you remove a serious latency and a serious failure point.  You also make it easier to build devices that don’t have to be “all-complete” within themselves.  Devices can leverage other devices directly reducing the cost and increasing the ubiquitousness of everything.

As a result you get:

1. Higher efficiency

2. Smaller devices that use less power

3. More reliability and robustness

4. Network Effect of Devices

5. More devices, more adoption

(I don’t want to suggest that there is no value in having a central server that knows and can interact with everything as well. All I am suggesting here is that having the ability for devices to communicate locally and operate more autonomously, leveraging each other, will result in a better user experience and higher reliability.)

As a result almost all the new protocols and messaging architectures being promoted for the “Internet of Things” are publish/subscribe.    This is the power of publish/subscribe being leveraged again.    It’s exciting to think of this entire new domain of hardware being built for the home or office on a true publish/subscribe paradigm.  Some of the protocols actually look like the original publish/subscribe we did on trading floors.  MQTT implements a publish/subscribe broker.  ZigBee and CoAP also implement publish/subscribe semantics.   CoAP already has a mesh network standard.  These protocols will all be used in the new IoT devices we buy.  Most of the devices can operate autonomously, but in the presence of a “hub” they can talk to the hub and deliver messages through it to a central server.  Some devices can detect cellphones close by.
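To make this concrete, here is a minimal publish/subscribe sketch in Python using the paho-mqtt client (v1.x API) against a local MQTT broker; the topic names are just examples.

import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    # Subscribe to temperature readings from any room
    client.subscribe("home/+/temperature")

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 1883, 60)

# Any device can publish without knowing who (if anyone) is listening
client.publish("home/livingroom/temperature", "21.5")
client.loop_forever()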

Apple has patented an approach to security in which, when you are in close proximity to your devices (for instance in your home), you can automatically access them easily, but if you are not at home you have to go through an additional security step to access your devices.

My next blog is going to be a lot more technical and we will get into the details of what the pubsub IoT world looks like today and what ideas I have for how it should work.

Here is more information you may find interesting:

Enterprise Application Platform 3.0 is a Social, Mobile, API Centric, Bigdata, Open Source, Cloud Native Multi-tenant Internet of Things IOT Platform

Publish/subscribe model overview

Cloud Security and Adoption

WSO2 Message Broker

Pub/sub, the Internet of Things, and 6LoWPAN connectivity

Wireless Mesh Networks Facilitate pubsub

Tesla Case Study

Software Development today, Cool APIs, Open Source, Scala, Ruby, Chef, Puppet, IntelliJ, Storm, Kafka


Shelan PereraCabo Da Roca - Western Most Cape of Europe


It is a new country for me and it is full of new places to travel. Luckily we have one friend at the apartment who knows the area pretty well, so we did not have much trouble locating the places to visit, but if you travel by public transport the following information will be useful for you.

If you are near IST you need to travel from Alameda Metro station to Cais do Sodré station, which is at the end of the Green line. Then you need to get the train from Cais do Sodré station to Cascais. To travel up to Cascais you can obtain a travel card from a Metro station which covers all 4 forms of transportation: bus, metro, train and ferry. You have different options, but it is better to buy a Viva Viagem card if you are a non-frequent traveller. It is a reusable and reloadable card.


source :http://www.metrolisboa.pt/eng/customer-info/viva-viagem-card

Loading this card with 10 Euros would be enough, as we experienced, and you can top it up conveniently at train stations or metro stations. It is advisable to keep 5 and 10 Euro notes, as sometimes the machine will issue a receipt for the balance, which you have to claim at a train station if you recharge there. Believe me, it is a hassle :(.

 You should spend some time at Cascais as it is a beautiful area with lots of tourist and local activities. We stayed there before moving to Cabo da Roca.



                  This was truly a beautiful beach...


From Cascais you need to take the 403 bus operated by Scotturb. It usually runs with a one-hour frequency. It goes via Cabo da Roca to Sintra, and you have to get off at Cabo da Roca on the way. These buses do not have display boards for the stops, but Cabo da Roca is a special stop and you will know once you arrive there; just enjoy the breathtaking views outside on the way from the bus. (Believe me, it is a special stop ;) )

Cabo da Roca is the westernmost point of mainland Europe. You will surely be amazed at how picturesque the view is. You will be lost in your thoughts for a moment when you reach the top. But make sure not to fall off the top into the Atlantic Ocean while taking selfies, because it would be a shame if you couldn't tell your friends how magnificent it was :P












Pushpalanka JayawardhanaLeveraging federation capabilities of Identity Server for API gateway - Configuration Details

In this post I am going to share the steps of a popular solution using WSO2 Identity Server and WSO2 API Manager. The following diagram gives an initial insight into this solution.

Overview




1.  Webapp that requires single sign on(SSO) facility with some other applications. 
                - To achieve this we are using WSO2 Identity Server(IS) as the Identity Provider(IDP). 

2.  Webapp needs to consume some APIs secured with OAuth tokens. 
                - To expose the APIs secured with OAuth tokens we are using WSO2 API Manager(AM) here.
                - Since we already have the SAML Response received at SSO step, SAML2 Bearer grant type is ideal to use at this scenario to request an OAuth token to access the required APIs.
                - To allow AM to properly issue an OAuth token in this scenario, we add IS as a trusted IDP in AM.

3.  Webapp requires to allow users registered in another IDP like Facebook or Google to be able to login with SSO functionality. 
                - We need to achieve this with minimal configuration on the internal IS and the external IDP side.
    The rest of this post deals with how we can configure the above without sharing any databases underneath. We will set this up in multi-tenancy mode to make it a more general scenario. Another instance of WSO2 Identity Server will be used in place of the external IDP; this can be replaced with Facebook, Google, etc. according to the requirement.

    Pre-requisites

    WSO2 Identity Server -5.0.0 - http://wso2.com/products/identity-server
    WSO2 API Manager 1.7.0 - http://wso2.com/products/api-manager
    As we are going to run several instances of WSO2 servers, we need to configure port offsets if all are set up on one machine. The following are the port offsets and ports I am going to use. I will be using two tenants in the two Identity Servers, and API Manager will run in super tenant mode.

    1. WSO2 Identity Server -5.0.0 - Internal IDP - Offset 0 - Port 9443 
                     Tenant - lanka.com
    2. WSO2 Identity Server -5.0.0 - Internal IDP - Offset 1 - Port 9444 
                     Tenant - lux.org
    3. WSO2 API Manager 1.7.0 - API Gateway - Offset 2 - Port 9445
                     Tenant - carbon.super


    Webapp Configurations 

    If you download the webapp from the above link, the keystore configurations are already done. You will only need to import the public certificate of the tenant in the internal IDP (lanka.com) into the webapp keystore 'travelocity.jks'.
    Please note the following configurations done in travelocity.properties file found inside webapp at '/travelocity.com/WEB-INF/classes/travelocity.properties'.


    EnableSAML2Grant=true

    #A unique identifier for this SAML 2.0 Service Provider application
    SAML.IssuerID=travelocity.com@lanka.com

    #The URL of the SAML 2.0 Identity Provider
    SAML.IdPUrl=https://localhost:9443/samlsso

    #Password of the KeyStore for SAML and OpenID
    KeyStorePassword=travelocity

    #Alias of the IdP's public certificate
    SAML.IdPCertAlias=lanka.com

    #Alias of the SP's private key
    SAML.PrivateKeyAlias=travelocity

    #Private key password to retrieve the private key used to sign
    #AuthnRequest and LogoutRequest messages
    SAML.PrivateKeyPassword=travelocity

    #OAuth2 token endpoint URL
    SAML.OAuth2TokenEndpoint=https://localhost:9445/oauth2/token

    #OAuth2 Client ID
    SAML.OAuth2ClientID=FxuhFBEcX5P1wjtqPqigJ0OVP5ca

    #OAuth2 Client Secret
    SAML.OAuth2ClientSecret=2eqfI11Y9dRZaiijbAK3dfJFNRMa

1. SSO Setup with Internal IDP

  • Login as tenant admin - <admin>@lanka.com
  • Export the public certificate of the private key used at webapp side to sign the SAML Authentication Request. Following command can be used to export it.
keytool -export -alias travelocity -file travelocity -keystore <path to travelocity.jks(which ever keystore used at webapp side)>
  • Import the above exported public certificate to the tenant key store of the internal IDP, identity server as below.
keystore1.png













  • After the import it will be listed as below.













  • Create a new Service Provider for Travelocity webapp as following.
  • Then we need to configure it as below.
- By enabling SaaS application, we are removing the tenant boundary for this service provider.
- Enable response and assertion signing according to your requirement.
- Enable signature verification for SAML Authentication Request



The configurations above are mostly what is needed to get the SSO scenario working with the webapp. We need to export the tenant public certificate to be imported into the trust store at the webapp side. This is in order to verify the signature of the SAML Response/Assertion at the webapp side. We can export the certificate from the UI as below, using the public key link.

keystore1.png


The exported key needs to be imported into the webapp truststore (in this case the travelocity.jks we located inside the webapp).

keytool -import -alias <The given alias name. Here lanka.com> -file <path to downloaded public certificate> -keystore <path to trust store of webapp. Here the travelocity.jks file>

Now if you try to login to travelocity web app as a tenant user, it should succeed.

2. External federation


The following configuration demonstrates how to configure an external identity provider in Identity Server. Here we will use another instance of Identity Server as the external IDP. The scenario is extended from the previous one.


With internal federation, we had the 'Travelocity' webapp registered as a Service Provider in the IDP, which decided the authenticity of the user. Now we will federate the decision on the authenticity of the user to an external IDP. For demonstration purposes I am using a tenant (named lux.org) in the external IDP (idp.lanka.com).

Configuring another IS instance to act as an external IDP (idp.lanka.com)

  • Create a tenant named ‘lux.org’ and login with this tenant.
  • First we need to import the public certificate of the internal IS into the tenant key store; this certificate is paired with the private key used to sign the SAML Request. This time it is 'wso2carbon'.
  • Configure the internal IS as a service provider here. This is because the SAML request is now sent to this IS by the internal IS we configured before.
  • Note that ‘Assertion Consumer URL’ points to ‘https://localhost:9443/commonauth’ of the internal IS. Also note the certificate alias we have selected to use for SAML Request signature validation. This is the one that we imported here.




Configure Internal IS to federate SSO requests from Travelocity webapp to the external IDP

In the internal IS, we need to configure it to make use of the external IDP we just configured.
  • Create an Identity Provider as below.
  • Upload the public certificate that we can download from the external IDP's tenant keystore. This is to validate the signatures of the SAML Responses and Assertions that will be sent to this internal IS from the external IS.



We have to make some additional changes in the service provider configuration for Travelocity webapp as well. 
  • In the drop down list of federated authenticators select the identity provider we just configured.


There is one more addition we have to make, as follows, to meet the audience restriction validation requirement in the SAML SSO scenario. This is not a requirement for federation, but for API access. The value we give here for the audience is the OAuth token endpoint, which we will consume to exchange the SAML token for an OAuth token.



Now we are in a position to test the external federation scenario with the Travelocity webapp sample. After hosting it in a Tomcat server, hit the URL ‘http://localhost:8080/travelocity.com/index.jsp’, which takes us to the page below. Click on the link to log in with SAML.


This takes us to the following screen. Note that the page is from our external IDP (idp.lanka.com), where we can enter the credentials of a user in that IDP to authenticate successfully. If our external authenticator were Google, this would be a Google page where the credentials are entered.

After successful authentication, the following screen is shown.


This completes the external federated SAML scenario.

3. API Access leveraging the federation

To achieve this, the following configuration is needed on the webapp side.
  • Register an application in the API-M Store, subscribe to some APIs, and provide the generated client ID and client secret values in the travelocity.properties file of the sample webapp.
  • Point SAML.OAuth2TokenEndpoint=https://localhost:9445/oauth2/token to the OAuth token endpoint of API-M.
  • Import the public certificate of the internal IS into the keystore used by the webapp. The alias given should be provided as ‘SAML.IdPCertAlias=lanka.com’ in the travelocity.properties file.

IDP Configuration at API-M

  • Configure ‘host name’ and ‘mgt host name’ in APIM_HOME/repository/conf/carbon.xml 
  • Log in as the super admin and add an identity provider as follows.

  • The following fields need to be filled. Note that we have imported the public certificate of the internal IS here, so that its SAML token can be validated.
  • API-M is not aware of the federation happening at the internal IS node.

  • When configuring the federated authenticator, note that the Identity Provider entity ID should be the same as the issuer of the SAML Response coming to API-M to be exchanged for an OAuth token. The SSO URL is the redirect URL of the internal IS.

  • Once these configurations are done, we can test the end-to-end scenario. From the page we left at the ‘Travelocity’ webapp, clicking the link ‘Request OAuth2 Access Token’ brings up the following page. It shows the details of the OAuth token received in exchange for the provided SAML token.

Now we can use this access token on the webapp side to consume any APIs we have subscribed to.
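Under the hood, this exchange is a standard OAuth2 token request using the SAML2 bearer grant type; the Travelocity sample performs it for us when we click 'Request OAuth2 Access Token'. The following is a minimal, hedged sketch of such a request in plain Java (Java 8 for java.util.Base64), in case you want to reproduce it outside the sample. The endpoint, client credentials and assertion value are placeholders, and trust store setup for the server's TLS certificate is omitted.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class Saml2BearerTokenClient {

    public static void main(String[] args) throws Exception {
        // Placeholders: the API-M token endpoint, the client id/secret generated
        // in the API Store, and the base64url-encoded SAML assertion.
        String tokenEndpoint = "https://localhost:9445/oauth2/token";
        String clientId = "your-client-id";
        String clientSecret = "your-client-secret";
        String encodedAssertion = "PHNhbWxwOl...";   // truncated placeholder

        String body = "grant_type="
                + URLEncoder.encode("urn:ietf:params:oauth:grant-type:saml2-bearer", "UTF-8")
                + "&assertion=" + URLEncoder.encode(encodedAssertion, "UTF-8");

        HttpURLConnection conn = (HttpURLConnection) new URL(tokenEndpoint).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        // Client credentials go in an HTTP Basic authorization header.
        String credentials = Base64.getEncoder().encodeToString(
                (clientId + ":" + clientSecret).getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + credentials);

        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);   // JSON containing access_token, etc.
            }
        }
    }
}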

Hope this helps. We can expand and customize this scenario in several ways according to requirements, using the options provided for federation, provisioning and extension points. I will discuss those in a later post.

Cheers!



Hasitha AravindaESB: Invoking secured backend - Part 1 - Username Token



Scenario 
  1. Backend service is secured using Username token. 
  2. Client invokes ESB proxy using http. ( no security between client and ESB) 
  3. At the ESB, proxy adds username token to outgoing message and invokes secured backend.
  4. ESB sends the echo service's response back to the client.

Setting up environment 

Backend ( WSO2 Application server 5.2.1)
  1. Start WSO2 AS 5.2.1 using ( Unix: sh wso2server.sh / Windows: wso2server.bat ) 
  2. Log in to management console. ( https://localhost:9443/carbon/ ) 
  3. Create two users called tom and bob
    • Go to Configure -> Users and Roles -> Users
    • Create a user called tom with password "tompass".
    • Create another user called bob with password "bobpass".
    • Assign both users to the "admin" role.
  4. Secure the echo service with Username Token.
    • Go to Main -> Services -> List
    • Click on the "echo" service. This will open the "Service Dashboard (echo)" page.
    • Under "Quality of Service Configuration", select "Security".
    • In the "Security for the service" page, select Enable Security.
    • Under Security scenarios, select "Username token" (the first security policy) and click Next.
    • In the next page, select "admin" under user groups.
    • Click Finish.
ESB ( WSO2 ESB 4.8.1 )
  1. Start WSO2 ESB with port offset = 1 (Unix: sh wso2server.sh -DportOffset=1 / Windows: wso2server.bat -DportOffset=1)


Rampart configuration for UsernameToken  ( ESB )
  • Create an ESB inline XML local entry called "UTOverTransport.xml" with the following content.

Password callback Implementation

  • Create a jar containing a password callback class such as the hedged sketch below, and drop it into <ESB_HOME>/repository/components/lib/.
  • Then restart the ESB server.
  • ( Maven Project is located here. )
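
The callback class itself is only linked above, so here is a minimal sketch of a Rampart/WSS4J password callback handler for the two users created earlier. The package and class names are illustrative; your Rampart configuration should point to whatever fully qualified class name you choose.

package org.example.security; // hypothetical package name

import java.io.IOException;

import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.UnsupportedCallbackException;

import org.apache.ws.security.WSPasswordCallback;

/**
 * Supplies the password for the username Rampart places in the outgoing
 * UsernameToken, so the token can be built for the secured backend.
 */
public class PWCBHandler implements CallbackHandler {

    public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException {
        for (Callback callback : callbacks) {
            if (!(callback instanceof WSPasswordCallback)) {
                throw new UnsupportedCallbackException(callback, "Unsupported callback type");
            }
            WSPasswordCallback pwcb = (WSPasswordCallback) callback;
            if ("tom".equals(pwcb.getIdentifier())) {
                pwcb.setPassword("tompass");
            } else if ("bob".equals(pwcb.getIdentifier())) {
                pwcb.setPassword("bobpass");
            }
        }
    }
}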
Some useful References on Rampart password callback handler:  
  1. http://wso2.com/library/3733/
  2. http://wso2.com/library/240/

ESB Proxy

  • Create a proxy called EchoUTProxy with the following content.


Testing Scenario

  • Enable the SOAP tracer on WSO2 AS.
  • Invoke EchoUTProxy using SoapUI.
You can see the Username Token in the request message as follows.



Hasitha AravindaESB: Invoking secured backend - Part 2 - Username Token - Dynamic username

My previous post showed how to invoke a Username Token secured backend using an ESB proxy. However, we used a static value for the username (tom), which is hard-coded in the policy file, so every request is authenticated as tom at the backend service.

But sometimes we may want to access the backend service as different users. This post discusses how to extend the previous setup to support a dynamic username in the policy file.


Setting up environment : 

Set up both WSO2 AS and WSO2 ESB as described in the previous post.


ClassMediator (ESB)
  • In this scenario, we set the username as a property in the ESB proxy.
  • To pass the username into the Rampart configuration, we use a custom class mediator called SetUserMediator (a hedged sketch follows this list).
  • This custom mediator adds the username to a rampartConfigCallbackProperties map and sets the map on the Axis2 MessageContext, so that the properties can later be accessed from the Rampart config callback handler.
  • We have to use a custom mediator, since we cannot set a Map using the standard ESB mediators.
  • ( Maven Project is located here. )
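
The mediator is only linked above; below is a minimal sketch of what such a class mediator could look like, assuming the username has been set earlier in the proxy as a synapse property called 'username' and that the map is published under the rampartConfigCallbackProperties property name mentioned above. The package name and property keys are illustrative.

package org.example.mediators; // hypothetical package name

import java.util.HashMap;
import java.util.Map;

import org.apache.synapse.MessageContext;
import org.apache.synapse.core.axis2.Axis2MessageContext;
import org.apache.synapse.mediators.AbstractMediator;

/**
 * Copies the 'username' synapse property into a map on the underlying Axis2
 * message context, where the Rampart config callback handler can read it.
 */
public class SetUserMediator extends AbstractMediator {

    public boolean mediate(MessageContext synCtx) {
        String username = (String) synCtx.getProperty("username");

        Map<String, String> configCallbackProps = new HashMap<String, String>();
        configCallbackProps.put("username", username);

        org.apache.axis2.context.MessageContext axis2MsgCtx =
                ((Axis2MessageContext) synCtx).getAxis2MessageContext();
        // Property name taken from the post; the config callback handler reads this map.
        axis2MsgCtx.setProperty("rampartConfigCallbackProperties", configCallbackProps);
        return true;
    }
}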


Rampart ConfigCallbackHandler (ESB)
  • Similar to the password callback handler, Rampart provides a configuration callback handler to load the Rampart configuration dynamically at runtime. We use this to set the username dynamically.
  • ( Maven Project is located here. )


Policy for UsernameToken  ( ESB )
  • Create an ESB inline XML local entry called "UTOverTransportDynamic.xml" with the following content.


Proxy Service (ESB)

  • Create a proxy called EchoUTDynamicProxy with the following content.

Testing Scenario

  • Enable the SOAP tracer on WSO2 AS.
  • Invoke EchoUTDynamicProxy using SoapUI.
You can see the Username Token in the request message as follows.


Hasitha AravindaESB: Invoking secured backend - Part 3 - Username Token with BasicAuth

This post shows how to invoke a UsernameToken secured backend (hosted in WSO2 AS) using basic auth. For this we use the POXSecurityHandler (which ships by default with WSO2 products) to convert the HTTP basic auth information into a wsse:UsernameToken.

Setting up environment : 

Set up both WSO2 AS and WSO2 ESB as described in the previous post.


ESB Proxy
  • Create a proxy called EchoUTBasicProxy with the following content.

Testing Scenario
  • Enable the SOAP tracer on WSO2 AS.
  • Enable wire logs in WSO2 ESB.
  • Invoke EchoUTBasicProxy using SoapUI.
You can see that there is no Username Token in the message arriving at the backend; instead, you see a basic auth header in the outgoing message from the ESB to the backend.




Danushka FernandoWSO2 App Factory - How to Create a new Application Type and how application type deployment works.

From WSO2 App Factory 2.1.0 onwards, Application Types can be added as archives. These archives, which should be named with the extension ".apptype", must contain a file named apptype.xml, which is the configuration for the new Application Type. The configuration given below is a sample apptype.xml for Java web applications.
  <ApplicationType>
<ApplicationType>war</ApplicationType>
<ProcessorClassName>org.wso2.carbon.appfactory.utilities.application.type.MavenBasedApplicationTypeProcessor</ProcessorClassName>
<DisplayName>Java Web Application</DisplayName>
<Extension>war</Extension>
<Description>Web Application Archive file</Description>
<Buildable>true</Buildable>
<BuildJobTemplate>maven</BuildJobTemplate>
<MavenArcheTypeRequest>-DarchetypeGroupId=org.wso2.carbon.appfactory.maven.webapparchetype
-DarchetypeArtifactId=webapp-archetype -DarchetypeVersion=2.0.1 -DgroupId=org.wso2.af
-Dversion=default-SNAPSHOT -DinteractiveMode=false
-DarchetypeCatalog=local
</MavenArcheTypeRequest>
<ServerDeploymentPaths>webapps</ServerDeploymentPaths>
<Enable>enabled</Enable>
<Comment>Test123</Comment>
<Language>Java</Language>
<SubscriptionAvailability>aPaaS</SubscriptionAvailability>
<IsUploadableAppType>false</IsUploadableAppType>
<DevelopmentStageParam>.dev</DevelopmentStageParam>
<TestingStageParam>.test</TestingStageParam>
<LaunchURLPattern>https://appserver{stage}.appfactory.private.wso2.com:9443/t/{tenantDomain}/webapps/{applicationID}-{applicationVersion}/</LaunchURLPattern>
</ApplicationType>
You can create an apptype.xml with similar content, create a zip file with the ".apptype" extension, and then copy the archive to the

   $APPFACTORY_HOME/repository/deployment/apptype
folder.

Then an underlying Axis2 deployer [1], which listens to the above-mentioned location, will extract the archive, read the apptype.xml, and fill an in-memory data structure in the Application Type Manager class [2]. This class is a singleton and can be accessed as follows:

  ApplicationTypeManager.getInstance()

There is an Application Type Bean [3] map which contains all the configurations provided in apptype.xml as properties. A property named "foo" of the application type "bar" can be accessed with the following code:

  ApplicationTypeManager.getInstance().getApplicationTypeBean("bar").getProperty("foo")
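
As a slightly more concrete (and hedged) illustration, the snippet below reads two properties of the 'war' application type defined in the sample apptype.xml above. It assumes it runs inside the App Factory server after the deployer has populated the manager, and that each apptype.xml element is exposed as a property under the same name; the package and class name here are made up for illustration.

package org.example.appfactory; // hypothetical package name

import org.wso2.carbon.appfactory.core.apptype.ApplicationTypeManager;

public class AppTypePropertyReader {

    public void printWarTypeProperties() {
        try {
            // Property names taken from the sample apptype.xml above.
            Object displayName = ApplicationTypeManager.getInstance()
                    .getApplicationTypeBean("war").getProperty("DisplayName");
            Object launchUrlPattern = ApplicationTypeManager.getInstance()
                    .getApplicationTypeBean("war").getProperty("LaunchURLPattern");
            System.out.println("DisplayName: " + displayName);
            System.out.println("LaunchURLPattern: " + launchUrlPattern);
        } catch (Exception e) {
            // The manager may not be populated yet, or the type may be missing.
            e.printStackTrace();
        }
    }
}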

A class matching the name mentioned in ProcessorClassName in the configuration should be provided to the server by copying a jar to the server. This should be an implementation of the interface ApplicationTypeProcessor [4]. There are a few events that can be customized according to the application type, so you can write a new implementation that matches your application type.

[1] https://svn.wso2.org/repos/wso2/scratch/appfactory_2.0.0/components/appfac/org.wso2.carbon.appfactory.core/2.0.1/src/main/java/org/wso2/carbon/appfactory/core/apptype/ApplicationTypeDeployer.java

[2] https://svn.wso2.org/repos/wso2/scratch/appfactory_2.0.0/components/appfac/org.wso2.carbon.appfactory.core/2.0.1/src/main/java/org/wso2/carbon/appfactory/core/apptype/ApplicationTypeManager.java

[3] https://svn.wso2.org/repos/wso2/scratch/appfactory_2.0.0/components/appfac/org.wso2.carbon.appfactory.core/2.0.1/src/main/java/org/wso2/carbon/appfactory/core/apptype/ApplicationTypeBean.java

[4] https://svn.wso2.org/repos/wso2/scratch/appfactory_2.0.0/components/appfac/org.wso2.carbon.appfactory.core/2.0.1/src/main/java/org/wso2/carbon/appfactory/core/apptype/ApplicationTypeProcessor.java

Sohani Weerasinghe

Share database across computers on the same network


In order to allow remote access to your MySQL server, you first need to change the "bind-address". By default it only allows connections from localhost; to allow connections from all networks, just comment out the line "bind-address = 127.0.0.1" in /etc/mysql/my.cnf.

Let's assume that you want to grant permissions on a database called 'testDB'; you can do so using the command below:


grant all privileges on testDB.* to '<uname>'@'%' IDENTIFIED BY '<password>';


Then another machine on the network can access this database using the command:

mysql -u<uname> -p<password> -h<IP>
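
If an application needs the same remote access, the connection is a standard JDBC one. Below is a minimal sketch assuming MySQL Connector/J 5.x is on the classpath; the host IP, database name and credentials mirror the placeholders above.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class RemoteMySqlClient {

    public static void main(String[] args) throws Exception {
        // Register the driver explicitly (needed for older Connector/J versions).
        Class.forName("com.mysql.jdbc.Driver");

        // Placeholders: the server's IP and the user granted above.
        String url = "jdbc:mysql://192.168.1.10:3306/testDB";
        try (Connection conn = DriverManager.getConnection(url, "uname", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println("Connected, test query returned: " + rs.getInt(1));
            }
        }
    }
}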

Nirmal FernandoOS X Start-up Disk is Full?

I faced a situation where my OS X start-up disk reached 100% usage and the machine became drastically slow. I realized that I needed to find the files responsible for this growth and delete or back them up.
I used the following command in my terminal to find the files larger than 1GB on my start-up disk.

sudo find -x / -type f -size +1G
 
From the above command I found that some of my VirtualBox images caused this growth, and I then backed them up to another partition.

Next, I need to find a way to make VirtualBox not use my start-up disk. :-)

Shelan PereraWant to learn anything faster ?

 We are in a world with lots of learning opportunities. But how much do we really learn? What holds us back from learning new things faster?

Learning how to learn is the greatest learning of all. I happened to read an interesting post on this blog [1], which had an amazing link about a guy who completed MIT's 4-year course within 1 year.

It starts with

"My friend Scott Young recently finished an astounding feat: he completed all 33 courses in MIT’s fabled computer science curriculum, from Linear Algebra to Theory of Computation, in less than one year. More importantly, he did it all on his own, watching the lectures online and evaluating himself using the actual exams. (See Scott’s FAQ page for the details of how he ran this challenge.)"

http://calnewport.com/blog/2012/10/26/mastering-linear-algebra-in-10-days-astounding-experiments-in-ultra-learning/

When I went a little deeper into the subject, I found another useful resource in one of the TED talks about the time it takes to learn a new thing. Amazingly, it is only twenty hours :). Yes, that is less than a day to acquire a new skill.

But...

You need to do it in four steps. Four simple steps to rapid skill acquisition.

1) Deconstruct the new skill.

     Divide and conquer: break the new skill into small chunks, because most big skills are sets of smaller skills.

2) Learn enough to self-correct.

 He points out the most common error we make in learning, which is trying to learn everything first and start practicing later. Ideally, you should learn just the essential or more obvious things so that you can self-correct.

3) Remove practice barriers.

The most common barrier I can think of is "Facebook" ;)

4) Practice at least 20 hours.

"The major barrier to learn new thing is not intellectual it is emotional"

So start practicing what you love to acquire..!!!




[1] http://kkpradeeban.blogspot.pt/2012/11/optimism-is-selection-of-nature.html

Chris HaddadResponsive IT: Harvard Business Review’s Expert Perspective

Harvard Business Review has made a not-so-shocking assertion: most IT teams desire a path to Responsive IT. Yet only a minority achieve high IT responsiveness. Digital transformation is required to successfully seize business opportunities.

Are you ready to build a Responsive IT team? 

HBR echoes a statement that we often hear in client conversations:

Cultural resistance, fixed processes, and outdated IT systems get in the way.

To blaze the path to Responsive IT, CIOs and IT leaders must adopt a New IT delivery model and New IT plan. A super-majority of IT leaders see resisting change as not a viable option today. The recent HBR survey indicates IT stakeholders believe successful IT is critical to their core business survival:

75 percent of respondents said their company’s survival depends on their ability to successfully exploit information technology, with 41 percent strongly agreeing that this is so.

However, when HBR quantified “How responsive (or quick to act) is your IT organization to ideas initiated by the business for fast-moving technology innovation?”, it identified only 20% of IT teams as achieving highly responsive IT.

Clearly, almost everyone wants to be responsive, but few organizations have overcome legacy infrastructure drag and process inefficiencies. By adopting DevOps, PaaS, APIs, Big Data streaming, and ecosystem platforms, you can avoid being among the half of respondents who said they

missed out on new technology-enabled business opportunities because their IT department was too slow to respond.

What is your path to Responsive IT, and have you presented the concept to your IT peers? Reach me via Twitter by including #ResponsiveIT in your tweet.

 

Responsive IT References

DevOps.com article: Harvard Business Review Survey: IT responsiveness predicts business success

Harvard Business Review Survey

The Path to Responsive IT

 Responsive IT and Connected Business Presentation

Dimuthu De Lanerolle

How to add test ui modules to products

Abstract

This article mainly focuses on the core areas engineers should know when adding integration UI test modules to products, in order to run them with the WSO2 Test Automation Framework. If you are familiar with Selenium, you can start writing UI tests directly after adding these UI modules to your product.

Table of contents

Introduction
Structure of the implementation
Dependency management for UI tests
Scopes
test
compile
Plugin and Configuration management for UI tests
maven-surefire-plugin
maven-clean-plugin
maven-dependency-plugin
maven-jar-plugin
maven-resources-plugin
Classes
Writing the test case
Execution of tests
Summary


Introduction

WSO2 TAF is an automation framework that performs equally well in all stages of the deployment lifecycle. For more information, please refer to the WSO2 TAF documentation.
Our objective is to present comprehensive guidance on adding UI test modules to WSO2 products, along with a step-by-step guide to executing the tests.
Note: Readers of this article are expected to be familiar with TestNG and Selenium for writing UI tests. (Please refer to the TestNG documentation and the Selenium documentation at http://www.seleniumhq.org/docs/.)

Structure of the implementation

To begin the implementation, we have to define a structure for our project.

For demonstration purposes we will consider the use case of introducing UI modules for the WSO2 BAM product. To learn more about WSO2 BAM, please refer to https://docs.wso2.com/display/BAM241/WSO2+Business+Activity+Monitor+Documentation

You can clone the WSO2 BAM product source from the following GitHub HTTP clone URL:
https://github.com/wso2-dev/product-bam

After cloning the source, we need to add the relevant modules to the project structure. Navigate to the …./product-bam/modules/integration module. If it does not exist, add a tests-common module to the integration module. Then navigate to the created tests-common module and create a ui-pages module inside it.

We also need to add a new tests-ui-integration module for writing the UI tests. To do so, navigate back to the …./product-bam/modules/integration module and create a tests-ui-integration Maven module.

Finally, our project structure should look similar to the graphical representation below.



[Image: Screenshot from 2014-09-16 19:24:34.png – project module structure]


tests-common

This module is used to add useful custom common utilities that help in writing our tests.

ui-pages

This module can be used to store the page object classes that we use inside our tests. For convenience, you can create separate sub-modules (e.g., home) for each page object type and store the relevant page object classes (e.g., HomePage.java) inside these sub-modules.

tests-ui-integration

Our test classes can be written inside this module.

Dependency management for UI tests

Dependency management is one of the key features of Maven, and for this exercise we need to identify the required Maven dependencies for the project. We will now list the basic Maven dependencies you need to add to the relevant pom.xml file(s), walking through each module and describing the dependencies that need to be added to each pom.xml.

Note the groupId, artifactId and version of each pom.xml file; you can replace the relevant values according to your module structure. As a best practice, we define all required dependencies in the root pom.xml (the product-bam level pom.xml) together with their versions (note the properties tags which define the actual version for each dependency), enabling the pom.xml files below the root pom.xml to derive the dependency versions automatically.

product-bam pom.xml

  ………………..  

    <dependencies>
      <dependency>
                <groupId>org.wso2.carbon.automation</groupId>
                <artifactId>org.wso2.carbon.automation.engine</artifactId>
                <version>${test.framework.version}</version>
                <scope>test</scope>
            </dependency>
            <dependency>
                <groupId>org.wso2.carbon.automation</groupId>
                <artifactId>org.wso2.carbon.automation.extensions</artifactId>
                <version>${test.framework.version}</version>
                <scope>test</scope>
            </dependency>
            <dependency>
                <groupId>org.wso2.carbon.automation</groupId>
                <artifactId>org.wso2.carbon.automation.test.utils</artifactId>
                <version>${test.framework.version}</version>
                <scope>test</scope>
            </dependency>
            <dependency>
                <groupId>org.wso2.bam</groupId>
                <artifactId>org.wso2.bam.integration.ui.pages</artifactId>
                <version>${bam.version}</version>
                <scope>test</scope>
            </dependency>
            <dependency>
                <groupId>org.testng</groupId>
                <artifactId>testng</artifactId>
                <version>${testng.version}</version>
            </dependency>
            <dependency>
                <groupId>org.wso2.carbon</groupId>
                <artifactId>org.wso2.carbon.integration.common.admin.client</artifactId>
                <version>${carbon.version}</version>
                <scope>test</scope>
            </dependency>
            <dependency>
                <groupId>org.wso2.carbon</groupId>
                <artifactId>org.wso2.carbon.integration.common.extensions</artifactId>
                <version>${carbon.version}</version>
                <scope>test</scope>
            </dependency>
        </dependencies>


 <properties>
          <carbon.version>4.3.0-SNAPSHOT</carbon.version>
           <test.framework.version>4.3.1-SNAPSHOT</test.framework.version>
          <bam.version>2.5.0-SNAPSHOT</bam.version>
          <testng.version>6.8</testng.version>
</properties>
…………...

tests-common pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>org.wso2.bam</groupId>
    <artifactId>bam-integration-tests-common</artifactId>
    <packaging>pom</packaging>
    <version>2.5.0-SNAPSHOT</version>
    <name>WSO2 BAM Server Integration Test Common</name>

    <modules>
        <module>ui-pages</module>
    </modules>
</project>


ui-pages pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <parent>
        <groupId>org.wso2.bam</groupId>
        <artifactId>bam-integration-parent</artifactId>
        <version>2.5.0-SNAPSHOT</version>
        <relativePath>../../pom.xml</relativePath>
    </parent>

    <modelVersion>4.0.0</modelVersion>
    <name>WSO2 BAM - Integration Test UI Module</name>
    <groupId>org.wso2.bam</groupId>
    <artifactId>org.wso2.bam.integration.ui.pages</artifactId>

    <dependencies>
        <dependency>
            <groupId>org.wso2.carbon.automation</groupId>
            <artifactId>org.wso2.carbon.automation.extensions</artifactId>
            <scope>compile</scope>
        </dependency>
        <dependency>
            <groupId>org.testng</groupId>
            <artifactId>testng</artifactId>
            <scope>compile</scope>
        </dependency>
        <dependency>
            <groupId>org.wso2.carbon.automation</groupId>
            <artifactId>org.wso2.carbon.automation.engine</artifactId>
            <scope>compile</scope>
        </dependency>
        <dependency>
            <groupId>org.wso2.carbon.automation</groupId>
            <artifactId>org.wso2.carbon.automation.test.utils</artifactId>
            <scope>compile</scope>
        </dependency>
        <dependency>
            <groupId>org.wso2.carbon</groupId>
            <artifactId>org.wso2.carbon.integration.common.admin.client</artifactId>
            <scope>compile</scope>
        </dependency>
    </dependencies>
</project>



tests-ui-integration pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <parent>
        <groupId>org.wso2.bam</groupId>
        <artifactId>bam-integration-parent</artifactId>
        <version>2.5.0-SNAPSHOT</version>
        <relativePath>../pom.xml</relativePath>
    </parent>

    <modelVersion>4.0.0</modelVersion>
    <name>BAM Server Integration test UI module</name>
    <artifactId>org.wso2.bam.ui.integration.test</artifactId>
    <packaging>jar</packaging>

    <build>
        <plugins>
            <plugin>
                <artifactId>maven-surefire-plugin</artifactId>
                <inherited>false</inherited>
                <version>2.12.3</version>
                <configuration>
                    <!-- <argLine>-Xmx1024m -XX:PermSize=256m -XX:MaxPermSize=512m -Xdebug -Xnoagent -Djava.compiler=NONE -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=5003</argLine>-->
                    <argLine>-Xmx1024m -XX:PermSize=256m -XX:MaxPermSize=512m</argLine>
                    <suiteXmlFiles>
                        <suiteXmlFile>src/test/resources/testng.xml</suiteXmlFile>
                    </suiteXmlFiles>

                    <skipTests>${skipUiTests}</skipTests>

                    <systemProperties>
                        <property>
                            <name>maven.test.haltafterfailure</name>
                            <value>false</value>
                        </property>
                        <property>
                            <name>carbon.zip</name>
                            <value>
                                ${basedir}/../../distribution/target/wso2bam-${project.version}.zip
                            </value>
                        </property>
                        <property>
                            <name>samples.dir</name>
                            <value>${basedir}/../../../samples/product</value>
                        </property>
                        <property>
                            <name>framework.resource.location</name>
                            <value>
                                ${basedir}/src/test/resources/
                            </value>
                        </property>
                        <property>
                            <name>server.list</name>
                            <value>
                                BAM
                            </value>
                        </property>
                        <property>
                            <name>usedefaultlisteners</name>
                            <value>false</value>
                        </property>


                        <sec.verifier.dir>${basedir}/target/security-verifier/</sec.verifier.dir>
                        <emma.home>${basedir}/target/emma</emma.home>
                        <instr.file>${basedir}/src/test/resources/instrumentation.txt</instr.file>
                        <filters.file>${basedir}/src/test/resources/filters.txt</filters.file>
                        <emma.output>${basedir}/target/emma</emma.output>
                    </systemProperties>

                    <workingDirectory>${basedir}/target</workingDirectory>
                </configuration>
            </plugin>

            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-clean-plugin</artifactId>
                <version>2.5</version>
                <configuration>
                    <filesets>
                        <fileset>
                            <directory>${basedir}/src/test/resources/client/modules</directory>
                            <includes>
                                <include>**/*.mar</include>
                            </includes>
                            <followSymlinks>false</followSymlinks>
                        </fileset>
                    </filesets>
                </configuration>
            </plugin>

            <plugin>
                <artifactId>maven-dependency-plugin</artifactId>
                <executions>

                    <execution>
                        <id>copy-emma-dependencies</id>
                        <phase>compile</phase>
                        <goals>
                            <goal>copy-dependencies</goal>
                        </goals>
                        <configuration>
                            <outputDirectory>${project.build.directory}/emma</outputDirectory>
                            <includeTypes>jar</includeTypes>
                            <includeArtifactIds>emma
                            </includeArtifactIds>
                        </configuration>
                    </execution>
                    <execution>
                        <id>copy-jar-dependencies</id>
                        <phase>compile</phase>
                        <goals>
                            <goal>copy-dependencies</goal>
                        </goals>
                        <configuration>
                            <outputDirectory>${basedir}/src/test/resources/artifacts/BAM/jar
                            </outputDirectory>
                            <includeTypes>jar</includeTypes>
                            <includeArtifactIds>mysql-connector-java
                            </includeArtifactIds>
                            <excludeTransitive>true</excludeTransitive>
                        </configuration>
                    </execution>
                    <execution>
                        <id>copy-secVerifier</id>
                        <phase>compile</phase>
                        <goals>
                            <goal>copy-dependencies</goal>
                        </goals>
                        <configuration>
                            <outputDirectory>${basedir}/target/security-verifier</outputDirectory>
                            <includeTypes>aar</includeTypes>
                            <includeArtifactIds>SecVerifier</includeArtifactIds>
                            <stripVersion>true</stripVersion>
                        </configuration>
                    </execution>

                    <execution>
                        <id>unpack-mar-jks</id>
                        <phase>compile</phase>
                        <goals>
                            <goal>unpack</goal>
                        </goals>
                        <configuration>
                            <artifactItems>
                                <artifactItem>
                                    <groupId>org.wso2.bam</groupId>
                                    <artifactId>wso2bam</artifactId>
                                    <version>${project.version}</version>
                                    <type>zip</type>
                                    <overWrite>true</overWrite>
                                    <outputDirectory>${basedir}/target/tobeCopied/</outputDirectory>
                                    <includes>**/*.jks,**/*.mar</includes>
                                </artifactItem>
                            </artifactItems>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-jar-plugin</artifactId>
                <version>2.4</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>test-jar</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>

            <plugin>
                <artifactId>maven-resources-plugin</artifactId>
                <version>2.6</version>
                <executions>
                    <execution>
                        <id>copy-resources-jks</id>
                        <phase>compile</phase>
                        <goals>
                            <goal>copy-resources</goal>
                        </goals>
                        <configuration>
                            <outputDirectory>${basedir}/src/test/resources/keystores/products
                            </outputDirectory>
                            <resources>
                                <resource>
                                    <directory>
                                        ${basedir}/target/tobeCopied/wso2bam-${project.version}/repository/resources/security/
                                    </directory>
                                    <includes>
                                        <include>**/*.jks</include>
                                    </includes>
                                </resource>
                            </resources>
                        </configuration>
                    </execution>
                    <execution>
                        <id>copy-resources-mar</id>
                        <phase>compile</phase>
                        <goals>
                            <goal>copy-resources</goal>
                        </goals>
                        <configuration>
                            <outputDirectory>${basedir}/src/test/resources/client/modules
                            </outputDirectory>
                            <resources>
                                <resource>
                                    <directory>
                                        ${basedir}/target/tobeCopied/wso2bam-${project.version}/repository/deployment/client/modules
                                    </directory>
                                    <includes>
                                        <include>**/*.mar</include>
                                    </includes>
                                </resource>
                            </resources>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

    <dependencies>

        <dependency>
            <groupId>org.wso2.carbon.automation</groupId>
            <artifactId>org.wso2.carbon.automation.engine</artifactId>
        </dependency>
        <dependency>
            <groupId>org.wso2.carbon.automation</groupId>
            <artifactId>org.wso2.carbon.automation.test.utils</artifactId>
        </dependency>
        <dependency>
            <groupId>org.wso2.carbon</groupId>
            <artifactId>org.wso2.carbon.integration.common.extensions</artifactId>
        </dependency>
        <dependency>
            <groupId>org.wso2.bam</groupId>
            <artifactId>org.wso2.bam.integration.ui.pages</artifactId>
            <scope>compile</scope>
        </dependency>

    </dependencies>

    <properties>
        <skipUiTests>true</skipUiTests>
    </properties>

</project>



Scopes

As you can see, we use Maven scopes to limit the transitivity of dependencies. Below are the scopes used in the above pom.xml files.

    test
        - This scope indicates that the dependency is not required for normal use of the application, and is only available for the test compilation and execution phases.
 
    compile
        - This is the default scope, compile dependencies are available in all classpaths of a project. Furthermore, those dependencies are propagated to dependent projects.


Plugin and Configuration management for UI tests

maven-surefire-plugin
 
This plugin is used during the test phase of the build lifecycle to execute our tests. You can define many configuration properties, such as those mentioned below, which become very handy when organizing your Maven test project.
         eg :
         
                       <skipTests>${skipUiTests}</skipTests>

We use this property to skip all UI tests in regular builds and enable UI tests only on build servers.

                <property>
                        <name>carbon.zip</name>
                        <value>${basedir}/../../distribution/target/wso2bam-${project.version}.zip</value>

</property>

You need to provide the correct location where your product zip file resides. The test suite extracts the zip file found at this location into the ${basedir}/target directory and starts running your test classes.

maven-clean-plugin

This plugin removes files generated at build time in a project's directory. The Clean Plugin assumes that these files are generated inside the target directory.

maven-dependency-plugin

Basically this plugin is capable of manipulating artifacts. You can define operations like copying artifacts from local or remote repositories to a specified location.

maven-jar-plugin

We use this plugin to build and sign jars. Basically you can define two goals inside this plugin.
jar:jar
                    - creates a jar file for your project classes, including resources.
jar:test-jar
                    - creates a jar file for your project test classes.

maven-resources-plugin

Copies project resources to the output directory.

Classes

We will now walk through the source code of each class, categorized under the respective UI modules.

tests-common

HomePage.java

The HomePage class holds information about the product's home page. It also contains a sign-out method.

package org.wso2.bam.integration.ui.pages.home;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.wso2.bam.integration.ui.pages.UIElementMapper;
import org.wso2.bam.integration.ui.pages.login.LoginPage;

import java.io.IOException;

public class HomePage {

    private static final Log log = LogFactory.getLog(HomePage.class);
    private WebDriver driver;
    private UIElementMapper uiElementMapper;

    public HomePage(WebDriver driver) throws IOException {
        this.driver = driver;
        this.uiElementMapper = UIElementMapper.getInstance();
        // Check that we're on the right page.
        if (!driver.findElement(By.id(uiElementMapper.getElement("home.dashboard.middle.text"))).getText().contains("Home")) {
            throw new IllegalStateException("This is not the home page");
        }
    }

    public LoginPage logout() throws IOException {
        driver.findElement(By.xpath(uiElementMapper.getElement("home.greg.sign.out.xpath"))).click();
        return new LoginPage(driver);
    }
}

LoginPage.java

This class basically performs the UI login scenario, i.e., it contains methods to log in to WSO2 products.

package org.wso2.bam.integration.ui.pages.login;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.wso2.bam.integration.ui.pages.UIElementMapper;
import org.wso2.bam.integration.ui.pages.home.HomePage;

import java.io.IOException;

public class LoginPage {
    private static final Log log = LogFactory.getLog(LoginPage.class);
    private WebDriver driver;
    private UIElementMapper uiElementMapper;

    public LoginPage(WebDriver driver) throws IOException {
        this.driver = driver;
        this.uiElementMapper = UIElementMapper.getInstance();
        // Check that we're on the right page.
        if (!(driver.getCurrentUrl().contains("login.jsp"))) {
            // Alternatively, we could navigate to the login page, perhaps logging out first
            throw new IllegalStateException("This is not the login page");
        }
    }

    /**
     * Provide facility to log into the products using user credentials
     *
     * @param userName login user name
     * @param password login password
     * @return reference to Home page
     * @throws java.io.IOException if mapper.properties file not found
     */
    public HomePage loginAs(String userName, String password) throws IOException {
        log.info("Login as " + userName);
        WebElement userNameField = driver.findElement(By.name(uiElementMapper.getElement("login.username")));
        WebElement passwordField = driver.findElement(By.name(uiElementMapper.getElement("login.password")));
        userNameField.sendKeys(userName);
        passwordField.sendKeys(password);
        driver.findElement(By.className(uiElementMapper.getElement("login.sign.in.button"))).click();
        return new HomePage(driver);
    }
}

BAMIntegrationUiBaseTest.java

This is an abstract class which helps us create custom automation context objects; in it we can define environment-related methods that are used regularly inside our test cases. This class can be extended by other test classes, which in turn eliminates code duplication in the project.

package org.wso2.bam.integration.ui.pages;

import org.wso2.carbon.automation.engine.context.AutomationContext;
import org.wso2.carbon.automation.engine.context.TestUserMode;
import org.wso2.carbon.automation.test.utils.common.HomePageGenerator;

import javax.xml.xpath.XPathExpressionException;

public abstract class BAMIntegrationUiBaseTest {

    protected AutomationContext automationContext;

    protected void init() throws Exception {
        automationContext = new AutomationContext("BAM", "bam001", TestUserMode.SUPER_TENANT_ADMIN);
    }


    protected String getServiceUrl() throws XPathExpressionException {
        return automationContext.getContextUrls().getServiceUrl();
    }

    protected String getLoginURL() throws XPathExpressionException {
        return HomePageGenerator.getProductHomeURL(automationContext);
    }
}


UIElementMapper.java

The objective of this class is to read the mapper.properties file and load its UI elements into a Properties object.

package org.wso2.bam.integration.ui.pages;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;


public class UIElementMapper {
    public static final Properties uiProperties = new Properties();
    private static final Log log = LogFactory.getLog(UIElementMapper.class);
    private static UIElementMapper instance;

    private UIElementMapper() {
    }

    public static synchronized UIElementMapper getInstance() throws IOException {
        if (instance == null) {
            setStream();
            instance = new UIElementMapper();
        }
        return instance;
    }

    public static Properties setStream() throws IOException {

      InputStream inputStream = UIElementMapper.class.getResourceAsStream("/mapper.properties");

        if (inputStream.available() > 0) {
            uiProperties.load(inputStream);
            inputStream.close();
            return uiProperties;
        }
        return null;
    }

    public String getElement(String key) {
        if (uiProperties != null) {
            return uiProperties.getProperty(key);
        }
        return null;
    }
}

Mapper.properties

Includes essential configuration properties related to the tests. It should be placed inside the …../tests-common/ui-pages/src/main/resources directory. Below is an excerpt of a mapper.properties file.

….
login.username=username
login.password=password
login.sign.in.button=button
home.dashboard.middle.text=middle
….

To view the structure of a complete mapper.properties file, click here.

Writing the test case

Add the following test class (LoginTestCase.java) to the tests-ui-integration module.

The basic objective of this class is to perform and verify a UI login to the BAM server. Note how LoginTestCase extends BAMIntegrationUiBaseTest. The @BeforeClass annotation causes the setUp() method to execute before the testLogin() method, which is under the @Test annotation. Inside setUp() (under @BeforeClass) we call the init() method of the BAMIntegrationUiBaseTest class, which initialises the automation context, and then invoke getWebDriver() of the BrowserManager class to obtain the WebDriver instance. After performing the required UI operations defined inside the testLogin() method, we can quit the driver, closing every associated window, as shown in the tearDown() method. (Note that the @AfterClass annotation causes tearDown() to execute after the operations inside testLogin() have completed.)


package org.wso2.bam.ui.integration.test;

import org.openqa.selenium.WebDriver;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;
import org.wso2.carbon.automation.extensions.selenium.BrowserManager;
import org.wso2.bam.integration.ui.pages.BAMIntegrationUiBaseTest;
import org.wso2.bam.integration.ui.pages.home.HomePage;
import org.wso2.bam.integration.ui.pages.login.LoginPage;

public class LoginTestCase extends BAMIntegrationUiBaseTest {

    private WebDriver driver;

    @BeforeClass(alwaysRun = true)
    public void setUp() throws Exception {
        super.init();

        driver = BrowserManager.getWebDriver();
        driver.get(getLoginURL());
    }

    @Test(groups = "wso2.bam", description = "verify login to bam server")
    public void testLogin() throws Exception {
        LoginPage test = new LoginPage(driver);
        HomePage home = test.loginAs(automationContext.getSuperTenant().getTenantAdmin().getUserName(),
                automationContext.getSuperTenant().getTenantAdmin().getPassword());
        home.logout();
        driver.close();
    }

    @AfterClass(alwaysRun = true)
    public void tearDown() throws Exception {
        driver.quit();
    }
}

Note :

Click on BrowserManager to view the source of the BrowserManager class.

Execution of tests

In order to run LoginTestCase, follow the steps below.

Configuring automation.xml

Click automation.xml to learn more about the automation.xml file. We will now consider the relevant segments you need to pay attention to in order to execute our UI test scenario, with a short description underneath each segment.

   <tools>
        <selenium>
            <!-- Change to enable remote webDriver -->
            <!-- URL of remote webDriver server  -->
            <remoteDriverUrl enable="false">http://10.100.2.51:4444/wd/hub/</remoteDriverUrl>

            <!-- Type of the browser selenium tests are running" -->
            <browser>
                <browserType>firefox</browserType>
                <!-- path to webDriver executable - required only for chrome-->
                <webdriverPath enable="false">/home/test/name/webDriver</webdriverPath>
            </browser>
        </selenium>
    </tools>

Description:

The above configuration defines the browser type the tests should run on, the path to the web driver executable, and whether to enable or disable a remote web driver instance.


    <userManagement>
        <superTenant>
            <tenant domain="carbon.super" key="superTenant">
                <admin>
                    <user key="superAdmin">
                        <userName>admin</userName>
                        <password>admin</password>
                    </user>
                </admin>
                <users>
                    <user key="user1">
                        <userName>testuser11</userName>
                        <password>testuser11</password>
                    </user>
                    <user key="user2">
                        <userName>testuser21</userName>
                        <password>testuser21</password>
                    </user>
                </users>
            </tenant>
        </superTenant>
    </userManagement>

Description:

This registers a set of system-wide users at the test initiation stage. Note the super tenant admin user and the set of tenant users controlled by the super tenant admin.


 <platform>
        <!--
        cluster instance details to be used to platform test execution
        -->
        <productGroup name="BAM" clusteringEnabled="false" default="true">

            <instance name="bam001" type="standalone" nonBlockingTransportEnabled="false">
                <hosts>
                    <host type="default">localhost</host>
                </hosts>
                <ports>
                    <port type="http">9763</port>
                    <port type="https">9443</port>
                </ports>

                <properties>
                    <!--<property name="webContext">admin</property>-->
                </properties>
            </instance>

        </productGroup>
    </platform>

Description:

You can define different product groups for the product category (in our case, BAM) and enable or disable clustering (true/false). Note how this configuration helps us initialise the automation context inside the BAMIntegrationUiBaseTest class.


1. Add the following listener entries to the testng.xml file.


<listeners>
    <listener class-name="org.wso2.carbon.automation.engine.testlisteners.TestExecutionListener"/>
    <listener class-name="org.wso2.carbon.automation.engine.testlisteners.TestManagerListener"/>
    <listener class-name="org.wso2.carbon.automation.engine.testlisteners.TestReportListener"/>
    <listener class-name="org.wso2.carbon.automation.engine.testlisteners.TestSuiteListener"/>
    <listener class-name="org.wso2.carbon.automation.engine.testlisteners.TestTransformerListener"/>
</listeners>

 <test name="facebook-connector" preserve-order="true" parallel="false">
        <classes>
            <class name="org.wso2.carbon.connector.integration.test.facebook.FacebookConnectorIntegrationTest"/>
        </classes>
    </test>


2.

 <test name="Login" preserve-order="true" verbose="2">
        <classes>
            <class name="org.wso2.bam.ui.integration.test.LoginTestCase"/>
        </classes>
    </test>

Description:

     1 - Implementing the TestNG listener interfaces provides a way to call event handlers inside custom listener classes, making it possible to perform pre-defined operations in the TestNG execution cycle. Click here to learn more about these listener classes.

     2 - Provide the test class you need to execute. This will run only the mentioned class in the test suite.

Note :

Alternatively, you can execute a whole test package containing one or many test classes in the test suite. To do so, simply add the snippet below to the testng.xml file.

<test name="Login-tests" preserve-order="true" parallel="false">
        <packages>
            <package name="org.wso2.bam.ui.integration.test"/>        
        </packages>
 </test>



Execute the Maven command below:
            mvn install -DskipUiTests=false

Summary

This article provided a step-by-step guide to our UI testing scenario. It can be used as a foundation for adding UI modules to products and implementing different UI testing scenarios.

sanjeewa malalgodaHow to write sample class mediator to get system property and inject to synapse properties / How to generate response with synapse property using script mediator

Here I have added the class mediator code and a sample synapse configuration to get the Carbon server home property. To test this, create a Java project, compile the following class mediator into a jar and copy it to the <ESB_HOME>/repository/components/lib directory, then add the following synapse configuration via the source view. After that you can invoke the created proxy service.

Class mediator code.

package main.java.org.wso2.carbon.custommediator;
import org.apache.synapse.MessageContext;
import org.apache.synapse.core.axis2.Axis2MessageContext;
import org.apache.synapse.mediators.AbstractMediator;
public class SampleCustomMediator extends AbstractMediator {

public boolean mediate(MessageContext synapseMsgContext) {
    String carbonHome = System.getProperty("carbon.home");
    log.info("Carbon Home is : "+carbonHome);
    synapseMsgContext.setProperty("CARBON_HOME", carbonHome);
    return true;
    }
}


Synapse configuration.

 

<proxy name="EchoProxyTest"
          transports="https http"
          startOnLoad="true"
          trace="disable">
  <target>
   <inSequence>
     <class name="main.java.org.wso2.carbon.custommediator.SampleCustomMediator"/>
     <sequence key="responseTest"/>
   </inSequence>
   <outSequence>
      <send/>
   </outSequence>
 </target>
</proxy>
<sequence name="responseTest">
      <script language="js">var carbonHome = mc.getProperty("CARBON_HOME");
      var carbonHomeTest = "sanjeewa";
      mc.setPayloadXML(&lt;serverHome&gt;{carbonHome}&lt;/serverHome&gt;);</script>
      <header name="To" action="remove"/>
      <property name="NO_ENTITY_BODY" scope="axis2" action="remove"/>
      <property name="RESPONSE" value="true"/>
      <send/>
      <log level="full"/>
</sequence>



You can invoke the created proxy service by calling the following URL:
https://130.76.202.98:9443/services/EchoProxyTest

Then you will get the following response:
 

<serverHome>/home/sanjeewa/work/packs/wso2esb-4.8.0</serverHome>

Shelan PereraNew window just opened ...!

  It has been a week since I became a student once more. I like recreating the feelings I had when I was last a student, two years ago. Until now this blog has been full of technical content, and I did not have much time to blog about my other experiences. But this is a new life, so I thought of doing the same with my blog. Let's see how the new story unfolds...

I am an EMDC (European Masters in Distributed Computing) student at IST (Instituto Superior Técnico) and at KTH (Royal Institute of Technology) in Sweden. This programme is great not only because of the content it offers but also because of the cultural experience you gain. It is a bonus.

During the first week I was fortunate to visit some very beautiful places in Lisbon. I should say the coastal area is really fantastic, even though I come from a place full of beaches, my beautiful island Sri Lanka. I will write more about those places in another post, with more details... and of course with pictures. Till then, just relax with the view from my apartment...



Hasini GunasingheInstalling Python for Beginners in Ubuntu

Hi all,

I have been away from my blog for a long time... Although I had many things to write, I didn't find time for it.
Here I am again, writing a small post on something I just started to learn...
It is Python, which is needed for the Machine Learning class that I am taking this semester.

Since using Python for machine learning involves handling large data sets in multi-dimensional settings, I wanted to install a Python distribution which includes supporting libraries such as NumPy and Pandas. I also wanted the advanced interpreter, IPython, over the basic command-line interpreter shipped with the core Python installation.

I went for the Canopy Python distribution (which was previously named EPD). Another option would be the Anaconda Scientific Python Distribution.

I am noting down the installation steps here:

1. Download the Canopy distribution that matches your platform from here. This will download a file named: "canopy-1.4.1-rh5-64.sh"

2. Install it by typing: $bash canopy-1.4.1-rh5-64.sh

3. After accepting the license agreement, it will ask where to put the Python files; give your preferred location (let's say it is ~/CanopyHome).

4. Although the installation completes, you cannot use the tools right away.
You need to set the PATH environment variable to point to the Canopy Python environment. You can do this either by running the UI tool (./canopy) or the command line tool (./canopy_cli) found in ~/CanopyHome, and specifying where you want to create the Canopy Python environment.

5. You can check whether the PATH variable is properly set by executing echo $PATH in a new terminal after step 4.
Then you can install the Pandas library using the easy_install tool: $ easy_install pandas

6. Verify the successful installation by running ipython from the command line, which will give output similar to the following:
$ ipython
Python 2.7.6 | 64-bit | (default, Jun  4 2014, 16:32:15)
Type "copyright", "credits" or "license" for more information.

IPython 2.1.0 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

In [1]:

Shazni NazeerWSO2 ESB Performance Round 7.5

WSO2 has published the latest performance test round of the WSO2 ESB, known as WSO2 ESB Performance Round 7.5.

The following are some of the stats from the test conducted with WSO2 ESB 4.8.1.

It indicates relative performance metrics against a number of leading open source ESBs.




The numbers clearly indicate that WSO2 ESB outperforms the other open source ESBs listed.


Shazni NazeerDownloading and running WSO2 Complex Event Processor

WSO2 CEP is a lightweight, easy-to-use, 100% open source Complex Event Processing Server licensed under Apache Software License v2.0. Modern enterprise transactions and activities consist of streams of events. Enterprises that monitor such events in real time and respond quickly undoubtedly have a greater advantage over their competitors. Complex Event Processing is all about listening to such events and detecting patterns in real time, without having to store those events. WSO2 CEP fulfills these requirements by identifying the most meaningful events within the event cloud, analyzing their impact, and acting on them in real time. It's extremely high performing and massively scalable.


How to run WSO2 CEP

  1. Extract the zip archive into a directory. Say the extracted directory is CEP_HOME
  2. Navigate to the CEP_HOME/bin in the console (terminal)
  3. Enter the following command  
        ./wso2server.sh       (In Linux)
        wso2server.bat        (In Windows)

Once started, you can access the management console by navigating to the following URL:

https://localhost:9443/carbon

You may log in with the default username (admin) and password (admin). When logged in, you will see the management console as shown below.


Shazni NazeerAccessing an existing H2 DB

H2 is one of the fastest file-based databases available. You can download it from http://www.h2database.com/

This post is a very short guide on how to access an already existing H2 database through the command line and the browser.

Let's assume you have a database named test.db in your home directory (referred to as $HOME). So the full path of your database is $HOME/test.db.

First, let me explain how to access the DB in the command shell.

Extract the h2 zip file downloaded from the above-mentioned URL to a location of your liking. Navigate to the bin directory of the extracted directory and enter the following.
$ java -cp h2*.jar org.h2.tools.Shell

Welcome to H2 Shell 1.3.174 (2013-10-19)
Exit with Ctrl+C
[Enter]   jdbc:h2:~/test
URL       jdbc:h2:$HOME/test
[Enter]   org.h2.Driver
Driver   org.h2.Driver
[Enter]   username
User      username
[Enter]   password
Password  password
Connected

Commands are case insensitive; SQL statements end with ';'
help or ?      Display this help
list           Toggle result list / stack trace mode
maxwidth       Set maximum column width (default is 100)
autocommit     Enable or disable autocommit
history        Show the last 20 statements
quit or exit   Close the connection and exit

sql>

Replace $HOME with your home directory path. Note that at the end you do not specify the .db extension (as in test.db) in the URL; instead it is just 'test'.
Now you can enter SQL commands against your DB schema.
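For instance, a quick session might look like the following (the table and column names are just placeholders for illustration):

sql> CREATE TABLE demo (id INT PRIMARY KEY, name VARCHAR(50));
sql> INSERT INTO demo VALUES (1, 'test');
sql> SELECT * FROM demo;
sql> quit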

Now let's see how to access the DB in the Web browser.

Type the following from within the bin directory.
$ java -jar h2*.jar

A web browser window should be opened as shown below.


Enter the following,

Saved Settings : Generic H2 (Embedded)
Setting name   : Any name you like (you can save or remove this configuration with the Save and Remove buttons)

Driver Class   : org.h2.Driver

JDBC URL       : jdbc:h2:$HOME/test

Also enter your username and password for the DB.

You should be able to see a window like the one below. Now you can enter SQL commands against your DB schema.


Enjoy using H2 DB in your applications.

Shazni NazeerIntegrating Sharepoint with WSO2 Governance Registry

WSO2 Governance Registry (GREG) is a fully open source registry-repository for storing and managing service artifacts and other resources.

Microsoft Sharepoint is a web application platform comprising a multipurpose set of web technologies. Sharepoint has historically been associated with content and document management, but the latest versions can be used for a lot more: document and file management, social networking, websites, intranet, extranet, enterprise search, business intelligence, system integration, process integration, and workflow automation. The latest version of Sharepoint is Sharepoint 2013.

In this guide I shall show you how we can integrate Sharepoint resources with WSO2 Governance Registry. This can be useful for governing, in GREG, resources and artifacts stored in Sharepoint.

In this guide I will create a Sharepoint site and a blog in that site, and create a resource in GREG with the blog post's URL.

You can find Sharepoint 2013 installation instructions here.

Let's first create a site in Sharepoint 2013. You can create a Sharepoint site collection using Sharepoint 2013 Central Administration, which you can find in the Start menu. It will prompt you for your administrative user name and password, which you would have configured at installation time. The Sharepoint 2013 Central Administration window will open in the browser as shown below.

Sharepoint 2013 Central Administration

You can create a site collection by clicking the 'Create site collection' link and following the onscreen instructions. I've created a site collection called 'MySiteCollection' for demonstration, and I can access it by navigating to http://win-nh67lj7lsq4/sites/mysites. You can configure your site collection's URL while following the above-mentioned instructions. When you navigate to your site collection with the configured URL, you will see a window similar to the following.

Site Collection
You can create a new post by clicking the 'Create a post' link and following the onscreen instructions. I've created a blog post called 'My first blog post'. After creating it, you can see the blog post listed in the site collection as shown in the following screenshot.

A blog post in Sharepoint

You can view the blog post by clicking the blog link. Its URL in my case is http://win-nh67lj7lsq4/sites/mysites/Lists/Posts/Post.aspx?ID=3

OK. Now we can import this blog post as a resource in the WSO2 Governance Registry. This would allow us to govern certain aspects of this resource in the WSO2 Governance Registry.

If you haven't downloaded and run WSO2 Governance Registry yet, look here for the details. Navigate to the Management Console in the browser using the URL https://localhost:9443/carbon, if you are running WSO2 Governance Registry on the local machine with the default port settings.

Now let's add a resource in the WSO2 Governance Registry corresponding to the blog post we created in Sharepoint. Log in to the Management Console, click Browse, and navigate to a path where you want to store the blog post in WSO2 Governance Registry, let's say /_system/governance.

Click 'Add Resource' and select 'import content from URL' as the method, as shown in the following picture. Provide the Sharepoint blog post URL and give it a name. This should import the blog post content into WSO2 Governance Registry.



In case you get an error, it is most probably because Sharepoint resources are protected; you can't access them without providing authentication. Even if you access the WSDL in the browser by providing the link, you will be prompted for credentials. So how do we handle this scenario in the WSO2 Governance Registry? WSO2 products offer a configuration option to allow this kind of authentication to external resources on the network. Open the carbon.xml located in GREG_HOME/repository/conf. There you will find a tag named <NetworkAuthenticatorConfig>. Provide the following configuration (of course, changing the pattern according to your requirement and providing your credentials).
<NetworkAuthenticatorConfig>
<Credential>
<Pattern>http://192.168.1.9.*</Pattern>
<Type>server</Type>
<Username>YourUserName</Username>
<Password>YourPassword</Password>
</Credential>
</NetworkAuthenticatorConfig>

Provide your Sharepoint username and password in the <Username> and <Password> tags. The <Pattern> tag allows any URL matching that pattern to be authenticated by the WSO2 product. Type can be either 'server' or 'proxy' depending on your setup.

After making this change, you need to restart the WSO2 Governance Registry and attempt the above import again. Now it should work.

Clicking the resource link in the tree view takes you to the following screen, where you can do all the conventional governance operations for a resource. You can add lifecycles, tags, and comments, all in the WSO2 Governance Registry.

If you just want to save the blog post as a URI, you may do so by adding the Sharepoint blog URL as a URI artifact. This step is further described below when adding a WSDL.

I'll wrap up this post by adding a WSDL file of a web service hosted in Sharepoint. The WSDL is of a default web service that lists the content of a site collection. The WSDL URL of this service, named List, is http://win-nh67lj7lsq4/sites/mysites/_vti_bin/Lists.asmx?wsdl. Replace win-nh67lj7lsq4/sites/mysites/ with your configured site URL.

Adding a WSDL in WSO2 Governance Registry would import any dependent schema, create a service artifact type and endpoints (if available). Let's create a WSDL artifact in WSO2 Governance Registry.

Click the Add WSDL in the left pane as shown below.



Provide the WSDL URL and a name for it. This will import the WSDL and create any corresponding artifacts; in this case it creates a List service and an endpoint. You can find the service by clicking Services in the left pane. The endpoint dependency can be seen by clicking 'View Dependency' in the WSDL list as shown below.



The above description showed how the WSDL and its dependencies were imported. You might just want to have the WSDL URL stored in WSO2 Governance Registry rather than the imported content. This can be done by adding the WSDL as a URI. For that, click the Add URI button in the left pane. This should bring up a window as shown below.



Provide the WSDL URL for the URI. Select WSDL for the Type. Provide the name List.wsdl (provide the .wsdl extension anyway) and click Save. Now go to the URI list. You should be able to see the WSDL listed there as shown below.



Click List.wsdl. This will bring up the following window with the dependencies and associations listed on the right side.



This post gave you a very basic guide on how to integrate some of the resources in Sharepoint with WSO2 Governance Registry. You can do a lot more with WSO2 Governance Registry. I recommend you download the product, play with it, and get more details from the official WSO2 documentation at https://docs.wso2.org/.

Hope this guide was useful.

Cheers


Shazni NazeerHaving frequent WiFi disconnection in your Linux?

I've lately been using Fedora 20 as my development environment and it's really fascinating. It is one of the best OSs for a developer, with extensive support for development no matter what technology you are focused on (of course, if it's Windows- or Mac-specific, then it's not).

A major issue I have been experiencing with Fedora 20 on my laptop (Lenovo T530) is frequent WiFi disconnection. It was very frustrating, even though the WiFi signal strength was good. At times the WiFi goes off and never gets connected back. The following are a few useful commands I came across that can force the WiFi adapter to re-initiate the connection. I hope this may be useful to some of you as well.
[shazni@shazniInWSO2 Disk1]$ nmcli -p con

=============================================================================================================
List of configured connections
=============================================================================================================
NAME UUID TYPE TIMESTAMP-REAL
-------------------------------------------------------------------------------------------------------------
TriXNet bd504b18-be5f-4d2d-9ea6-6bff40f29cca 802-11-wireless Mon 08 Sep 2014 01:55:20 PM IST
NAUMI LIORA HOTEL dc5a4e03-69f0-4f0b-80a4-b5c6f2755ace 802-11-wireless Thu 21 Aug 2014 10:20:16 PM IST
TriXNet_Groud_Floor 6142855d-7845-4f6a-88fb-068b15cbd029 802-11-wireless Thu 14 Aug 2014 11:30:43 PM IST
em1 32a2ab29-73af-477a-888c-ab9cf2a489b2 802-3-ethernet never
Dialog 4G 3b96c186-012a-43eb-bdb8-72679f318472 802-11-wireless Sun 31 Aug 2014 11:01:30 PM IST
PROLiNK_PRN3001 cb54a35f-e459-44cf-a2b3-32071e77460b 802-11-wireless Mon 07 Jul 2014 08:57:24 PM IST
ZTE d57d20b1-c556-4706-8849-acafdee1fc88 802-11-wireless Sat 09 Aug 2014 05:50:02 PM IST
WSO2 15da72d9-07f9-47bd-9a0a-f055399a6339 802-11-wireless Fri 05 Sep 2014 06:20:15 PM IST
AMB 0a521533-22f4-4adc-89c3-8f1b9118743e 802-11-wireless Fri 22 Aug 2014 05:01:37 PM IST

[shazni@shazniInWSO2 Disk1]$ nmcli con up uuid bd504b18-be5f-4d2d-9ea6-6bff40f29cca
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/3)
The first command shows the available connections together with their UUIDs. The second command initiates the connection given the UUID. This might prompt you for a password.

Well, I had been doing this for some time, whenever my connection went off and never re-connected. Then I found an easy solution (or rather a workaround) for the frequent disconnection: disabling IPv6 on my Linux box. I can't guarantee the solution will work if you are encountering the same issue, but it worked for me, and ever since, disconnections have been rare.

To disable IPv6 on your machine, do the following.
[shazni@shazniInWSO2 shazni]$ sudo vi /etc/sysctl.conf
Add the following line at the end of the file (or wherever is convenient for you):
net.ipv6.conf.all.disable_ipv6 = 1
Restart your machine for the changes to take effect.
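If you prefer not to reboot, the change can usually be applied on the fly as well (a small sketch, assuming you have sudo access):

$ sudo sysctl -p                                  # reload /etc/sysctl.conf
$ cat /proc/sys/net/ipv6/conf/all/disable_ipv6    # should print 1 once IPv6 is disabled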

Shazni NazeerStoreKeeper 1.0.0 Released

In the current world, a lot of things are bought and sold. The items sold are of various types; whether it is a consumer good or a service rendered to a client, the providers (sellers) need to keep track of their accounting details. Most often they also need to provide a bill to the customer. Traditionally, this was a manual process: the provider writes a bill by hand, noting the total price and all the items sold, with details such as quantity, unit price, and name. At the end of the day he calculates his revenue, and he also keeps an eye on the available items so that he can restock before they run out.

What if there were a multipurpose point-of-sale software package that could do all of the above in a snap? In the modern world, time is valuable, and the efficiency to serve more customers is vital for a successful business. Point-of-sale software is created just for that.

StoreKeeper is a software tool which you can use at the point of sale to add items to transactions and, at the end, print a bill (usually on a dot matrix printer). StoreKeeper makes this process extremely streamlined and is very easy to use. It's a lightweight, portable piece of software with bare-minimum system requirements.

Currently, the following features are supported.

  • Add an item
  • Update an item
  • Query an item or all items to view
  • Add users
  • Update a user
  • Query users
  • Add items to transaction
  • Delete item from transaction
  • Cancel the transaction
  • End a transaction
  • Print a bill (with a configured printer) together with a pdf copy in the computer
  • Get the total sales on a day

Features to be added in the future

  • Query the current day's transactions
  • Query the current day's transactions for a given item/service
  • View the sales history of an item
  • View the history of total sales
  • Notification of different events via email
  • Graphs to view the stats
  • Different access to different users with different credentials

Following is a screenshot of StoreKeeper taken from a Linux desktop. It has been tested and works on both Windows and Linux.



To give you a small technical idea of StoreKeeper, it's a classic client-server application. The server serves requests from a graphical front-end GUI client. This means StoreKeeper can support multiple clients using the software at the same time (this scenario is not yet completely tested).

Following is a sample bill (sorry for hiding some of the information) printed using a dot matrix printer.



Like our Facebook page for StoreKeeper: https://www.facebook.com/StoreKeeperService.

Please feel free to contact us via our Facebook page if you wish to purchase this product or to give us your feedback.

Lali DevamanthriMobile Devices: The Biggest Growth Force in IT Today

Over 1.5 billion smartphones and tablets will be shipped worldwide in 2016, growing at a combined annual growth rate (CAGR) of over 20%. In 2013, mobile devices will generate more than half of the total industry's overall growth. By comparison, the IT industry will grow by only 2.9% in 2013 if mobile devices are excluded.
Leaders in the mobile device market, and ultimately in the next-generation PC (and converged PC/mobile device) market, will be those that succeed in attracting large numbers of application developers. Market power fundamentally resides in the operating system and related application platform software, over 80% of which will be controlled by Apple and Google in 2016.

Cloud Driving Mobile and Vice Versa


IDC expects that one in three enterprises has already deployed a large-scale cloud-based mobile solution or will do so within 18 months. Furthermore, mobile application developers remain strongly committed to the cloud: 83% of all mobile app developers plan to use cloud-based back-end systems for their applications. Over 65% of mobile application developers plan to develop HTML5 and mobile web applications in 2013.

IT Consumerization and Line of Business Buyers

“Line of business” (LOB) executives will be directly involved in 58% of new investments involving 3rd Platform technologies in 2013. In 2016, that number will rise to 80% of new IT investments, with LOB taking the lead decision-maker role in at least half of those investments.

Mobile Collaboration: The Next Frontier in a $4 Billion Mobile Enterprise Market


Over 60% of organizations will have rolled out mobile applications beyond email to their workforce and/or customers by the end of 2013. Over 20% of firms plan to roll out five or more applications in the next 12 months. 80% of enterprises plan to invest in mobile app development resources in 2013.

Email has always been the most mature and successful business mobile application, but 12 months of rapid IT consumerization has accelerated the adoption of more advanced applications, including mobile content and collaboration tools. These apps enhance the value of collaboration by enabling users to store, access, and collaborate on their documents with any device and make decisions anywhere, anytime.

Chris HaddadEnabling Cloud-native, complex enterprise development in the Cloud

Forklifting terrestrial middleware into the cloud provides incremental benefits. To revolutionize project delivery and build a responsive IT, organizations operating at the speed of business should:

Adopt Complex Enterprise-Ready Cloud Solutions

Enterprise applications today require integration, identity, access control, business process workflow, mobility, APIs, event processing, and sophisticated big data reporting. Does your cloud platform provide a solid foundation?

While some environments only deliver a simple application server and database in the Cloud, WSO2 cloud solutions provide all platform components required to deliver an enterprise-ready cloud platform, supporting complex application development and deployment.

The WSO2 platform eliminates traditional IT challenges and fosters adoption by integrating seamlessly with your enterprise directory, identity management, monitoring systems, existing internal and external applications, and services.

In addition, WSO2 can help you reduce time to market by implementing transparent governance, hiding cloud complexity, and streamlining development and operations processes.

Why did Boeing, Verio, Cisco, and others pick WSO2?

Because WSO2 delivers Cloud-native Platform as a Service environments encompassing components required to deliver real-world, complex applications without requiring extensive system integration to tie together cloudy middleware technology from multiple vendors, or wrapping non-cloudy middleware technology.

 Additional Resources

Cloud-Native Datasheet

Infrastructure Cloud Services Model

Reducing Cloud Computing Cost 

 

Chandana NapagodaCustomizing Header and Footer of WSO2 API Manager Store

If you want to customize the header and footer of the WSO2 API Manager Store application, this blog post explains how to do that by adding a subtheme.

Adding a Subtheme

1). Navigate to "\repository\deployment\server\jaggeryapps\store\site\themes\fancy\subthemes" directory.
2). Create a directory with the name of your subtheme. Here I am going to create a subtheme called "test".
3). Edit the "repository\deployment\server\jaggeryapps\store\site\conf\site.json" file as below. This makes your subtheme the default theme.

"theme" : {
    "base" : "fancy",
    "subtheme" : "test"
},
Customize Header:

As an example, here I am going to remove the theme selection menu item from the store header (theme selection appears after a user logs in to the store).

1). Create a directory called "templates" inside your subtheme directory.
2). Copy the "template.jag" located in "\repository\deployment\server\jaggeryapps\store\site\themes\fancy\templates\user\login"
into your new "templates" directory, keeping the same "templates\user\login" directory structure.
3). To remove the theme selection menu item, remove the <li class="dropdown settingsSection">...</li> HTML tag section from "template.jag".

Customize Footer:

Here I am going to remove the documentation hyperlinks available in the API Store footer.

1). Create a directory called "templates" inside your subtheme directory. Skip this step if you have already done it for the header customization.
2). Copy the "template.jag" located in "\repository\deployment\server\jaggeryapps\store\site\themes\fancy\templates\page\base"
into your new "templates" directory, keeping the same "templates\page\base" directory structure.
3). To remove the Docs link, you can find it inside the "row-fluid" div tag. Customize it according to your requirements; the copy steps from both sections are summarized in the shell sketch below.
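Put together, the copy steps from the header and footer sections boil down to something like the following (a sketch assuming a Linux installation, run from the API Manager home directory, with the subtheme name "test"):

# create the subtheme template directories
$ cd repository/deployment/server/jaggeryapps/store/site/themes/fancy
$ mkdir -p subthemes/test/templates/user/login subthemes/test/templates/page/base
# copy the header and footer templates into the subtheme for editing
$ cp templates/user/login/template.jag subthemes/test/templates/user/login/
$ cp templates/page/base/template.jag subthemes/test/templates/page/base/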

Saliya EkanayakeGLIBCXX_3.4.9 Could Not Be Found with Apache Spark

If you encounter an error similar to the following, which complains that GLIBCXX_3.4.9 could not be found, while running an application with Apache Spark, you can avoid it by switching Spark's compression codec from snappy to something such as lzf.
...
Caused by: java.lang.UnsatisfiedLinkError: .../snappy-1.0.5.3-1e2f59f6-8ea3-4c03-87fe-dcf4fa75ba6c-libsnappyjava.so: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.9' not found (required by.../snappy-1.0.5.3-1e2f59f6-8ea3-4c03-87fe-dcf4fa75ba6c-libsnappyjava.so)
There are a few ways to pass configuration options to Spark. The simplest seems to be through the command line, as:
--conf "spark.io.compression.codec=lzf"
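For instance, with spark-submit this might look like the following (a sketch; the application class and jar names are placeholders):

spark-submit \
  --conf "spark.io.compression.codec=lzf" \
  --class com.example.MyApp \
  my-app.jar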
On a side note, you can find what GLIBC versions are available by running strings /usr/lib/libstdc++.so.6 | grep GLIBC

Manoj KumaraInstall SVN on RedHat Linux RHEL and configure DepSync on WSO2 worker manager nodes

  • After logging in, change to the root user
    • sudo -i
  • You can install the required packages using the following (this command will install Apache if it's not already installed)
    • yum install mod_dav_svn subversion
  • After this step SVN will be installed on the server and now we can configure it :)
    • Navigate to /etc/httpd/conf.d/subversion.conf and modify it as below

LoadModule dav_svn_module modules/mod_dav_svn.so
LoadModule authz_svn_module modules/mod_authz_svn.so

<Location /svn>
   DAV svn
   SVNParentPath /var/www/svn
   AuthType Basic
   AuthName "Subversion repositories"
   AuthUserFile /etc/svn-auth-users
   Require valid-user
</Location>
  • You can create SVN users using the following command
    • htpasswd -cm /etc/svn-auth-users testuser
  • This will request a password for the user
  • Now, finally, since we have successfully installed SVN and created an SVN user, we can create the repository :)
    • mkdir /var/www/svn
    • cd /var/www/svn
    • svnadmin create mySvnRepo
    • chown -R apache.apache mySvnRepo
  • Next we need to restart the Apache server 
    • service httpd restart
Go to the http://localhost/svn/mySvnRepo address in your browser. By giving the SVN user credentials, you can log in to the repo and view the content.
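You can also verify the repository from the command line (a quick sketch, assuming the svn client is installed):

# list the (currently empty) repository contents over HTTP
$ svn list http://localhost/svn/mySvnRepo --username testuser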


Once you have provided the credentials, you will be able to see your repository and its content, if any.


    Now let's configure the WSO2 servers as manager and workers

  • Download SVNKit (svnClientBundle-1.0.0.jar) from http://dist.wso2.org/tools/svnClientBundle-1.0.0.jar and copy it to the /repository/components/dropins folder.
  • Download http://maven.wso2.org/nexus/content/groups/wso2-public/com/trilead/trilead-ssh2/1.0.0-build215/trilead-ssh2-1.0.0-build215.jar and copy it to the /repository/components/lib folder.
  • Open <RepoLocation>/conf/svnserve.conf and set following lines to configure authentication for the new repository. 
    • anon-access = none 
    • auth-access = write (permission for authenticated users)
    • password-db = passwd (source of authentication)

      Enabling DepSync on the manager node

      You can configure DepSync in the /repository/conf/carbon.xml file on the manager node by making the following changes,

      <DeploymentSynchronizer>
          <Enabled>true</Enabled>
          <AutoCommit>true</AutoCommit>
          <AutoCheckout>true</AutoCheckout>
          <RepositoryType>svn</RepositoryType>
          <SvnUrl>http://localhost/svn/mySvnRepo/</SvnUrl>
          <SvnUser>testuser</SvnUser>
          <SvnPassword>testpass</SvnPassword>
          <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
      </DeploymentSynchronizer>

      Enabling DepSync on the worker nodes

      On the worker nodes you need to set the AutoCommit property to false, as below:

      <DeploymentSynchronizer>
          <Enabled>true</Enabled>
          <AutoCommit>false</AutoCommit>
          <AutoCheckout>true</AutoCheckout>
          <RepositoryType>svn</RepositoryType>
          <SvnUrl>http://localhost/svn/mySvnRepo/</SvnUrl>
          <SvnUser>testuser</SvnUser>
          <SvnPassword>testpass</SvnPassword>
          <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
      </DeploymentSynchronizer>

      [1] https://docs.wso2.com/display/CLUSTER420/SVN-based+Deployment+Synchronizer
      [2] www.if-not-true-then-false.com/2010/install-svn-subversion-server-on-fedora-centos-red-hat-rhel

    Lali DevamanthriNext Version of Windows Screenshots Leaked

    Two German-language blogs, Computerbase.de and Winfuture.de, posted 21 screenshots which appear to be from a genuine “Windows Technical Preview” build. The complete set of screenshots can be found on imgur.

    Manoj KumaraWSO2 ESB - JSON to SOAP (XML) transformation using Script sample


    • Required SOAP request, as generated using SoapUI
     <?xml version="1.0" encoding="utf-8"?>  
    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
    <soapenv:Body>
    <m:newOrder xmlns:m="http://services.com">
    <m:customerName>WSO2</m:customerName>
    <m:customerEmail>customer@wso2.com</m:customerEmail>
    <m:quantity>100</m:quantity>
    <m:recipe>check</m:recipe>
    <m:resetFlag>true</m:resetFlag>
    </m:newOrder>
    </soapenv:Body>
    </soapenv:Envelope>

    • The request that I used to test with the Advanced REST Client [2]:

    POST request
    Content-Type   : application/json
    Payload           : {"newOrder": { "request": {"customerName":"WSO2", "customerEmail":"customer@wso2.com", "quantity":"100", "recipe":"check", "resetFlag":"true"}}}
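    The same JSON payload can also be posted with curl (a sketch; the host and port assume the default ESB HTTP listener on 8280, and the proxy context matches the configuration below):

    curl -v -X POST http://localhost:8280/services/IntelProxy \
      -H "Content-Type: application/json" \
      -d '{"newOrder": { "request": {"customerName":"WSO2", "customerEmail":"customer@wso2.com", "quantity":"100", "recipe":"check", "resetFlag":"true"}}}'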


    • Proxy configuration

     <?xml version="1.0" encoding="UTF-8"?>  
    <proxy xmlns="http://ws.apache.org/ns/synapse"
    name="IntelProxy"
    transports="https,http"
    statistics="disable"
    trace="disable"
    startOnLoad="true">
    <target>
    <inSequence>
    <script language="js"><![CDATA[
    var customerName = mc.getPayloadXML()..*::customerName.toString();
    var customerEmail = mc.getPayloadXML()..*::customerEmail.toString();
    var quantity = mc.getPayloadXML()..*::quantity.toString();
    var recipe = mc.getPayloadXML()..*::recipe.toString();
    var resetFlag = mc.getPayloadXML()..*::resetFlag.toString();
    mc.setPayloadXML(
    <m:newOrder xmlns:m="http://services.com">
    <m:request>
    <m:customerName>{customerName}</m:customerName>
    <m:customerEmail>{customerEmail}</m:customerEmail>
    <m:quantity>{quantity}</m:quantity>
    <m:recipe>{recipe}</m:recipe>
    <m:resetFlag>{resetFlag}</m:resetFlag>
    </m:request>
    </m:newOrder>);
    ]]></script>
    <header name="Action" value="urn:newOrder"/>
    <log level="full"/>
    </inSequence>
    <outSequence>
    <log level="full"/>
    <property name="messageType" value="application/json" scope="axis2"/>
    <send/>
    </outSequence>
    <endpoint>
    <address uri="http://localhost/services/BusinessService/" format="soap11"/>
    </endpoint>
    </target>
    <description/>
    </proxy>


    Reference

    [1] https://docs.wso2.org/display/ESB481/Sample+441%3A+Converting+JSON+to+XML+Using+JavaScript

    [2] https://chrome.google.com/webstore/detail/advanced-rest-client/hgmloofddffdnphfgcellkdfbfbjeloo

    Manoj KumaraDid you forget your MySQL password?

    I have installed the MySQL server on my machine for various testing purposes, and afterwards I often forgot the password I used :D

    There is a very simple way to reconfigure MySQL in Linux. 

    manoj@manoj-Thinkpad:~$ sudo dpkg-reconfigure mysql-server-5.5 

    This will allow us to reset the password on our MySQL server.
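    You can then confirm that the new password works by opening a client session (assuming you reset the root account's password; it will prompt for it):

    $ mysql -u root -p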

    sanjeewa malalgodaWSO2 API Manager - Basic user operations curl command( How to do basic operations using curl commands)

    In this post I will list the curl commands required to cover a complete (basic) API Manager use case. We will carry out the following operations. We need to log in to the API Manager publisher and store multiple times. The user will be a tenant user (ttt@ttt.ttt); change any parameter if needed.

    1) Create APIs
    2) Publish APIs
    3) Create application
    4) Subscribe to APIs for that application
    5) Remove subscriptions
    6) Remove application
    7) Remove APIs

    API Publisher
    =============
    curl -X POST -c cookies http://localhost:9763/publisher/site/blocks/user/login/ajax/login.jag -d 'action=login&username=ttt@ttt.ttt&password=tttttt'

    1) Create APIs
    curl -X POST -b cookies http://localhost:9763/publisher/site/blocks/item-add/ajax/add.jag -d "action=addAPI&name=sanjeewa-api&context=/sanjeewa-api&version=1.0.0&tier=Bronze&tier=Gold&transports=http&http_checked=http&transports=https&https_checked=https&description=sanjeewa-api&visibility=public API&tags=sanjeewa-api,api&resourceCount=0&resourceMethod-0=GET,PUT,POST,DELETE,OPTIONS&resourceMethodAuthType-0=Application & Application User,Application & Application User,Application & Application User,Application & Application User,None&uriTemplate-0=/*&resourceMethodThrottlingTier-0=Unlimited, Unlimited, Unlimited, Unlimited, Unlimited&tiersCollection=Bronze,Gold,Silver,Unlimited" -d 'endpoint_config={"production_endpoints":{"url":"http://search.twitter.com","config":null},"endpoint_type":"http"}'

    2) Publish APIs
    curl -X POST -b cookies 'http://localhost:9763/publisher/site/blocks/life-cycles/ajax/life-cycles.jag' -d 'action=updateStatus&name=sanjeewa-api&version=1.0.0&provider=ttt@ttt.ttt&status=PUBLISHED&publishToGateway=true&requireResubscription=true'

    API Store
    =========
    curl -X POST -c cookies http://localhost:9763/store/site/blocks/user/login/ajax/login.jag -d 'action=login&username=ttt@ttt.ttt&password=tttttt'

    3) Create application
    curl -X POST -b cookies http://localhost:9763/store/site/blocks/application/application-add/ajax/application-add.jag -d 'action=addApplication&application=sanjeewa-application&tier=unlimited&description=&callbackUrl='

    4) Subscribe to APIs for that application
    curl -b cookies http://localhost:9763/store/site/blocks/application/application-list/ajax/application-list.jag?action=getApplications
    Get the application id for sanjeewa-application from the response and use it in the subsequent calls.

    curl -X POST -b cookies http://localhost:9763/store/site/blocks/subscription/subscription-add/ajax/subscription-add.jag -d 'action=addSubscription&name=sanjeewa-api&version=1.0.0&provider=ttt@ttt.ttt&tier=Unlimited&applicationId=7'

    5) Remove subscriptions
    curl -X POST -b cookies http://localhost:9763/store/site/blocks/subscription/subscription-remove/ajax/subscription-remove.jag -d 'action=removeSubscription&name=sanjeewa-api&version=1.0.0&provider=ttt@ttt.ttt&applicationId=7'

    6) Remove application
    curl -X POST -b cookies http://localhost:9763/store/site/blocks/application/application-remove/ajax/application-remove.jag -d "action=removeApplication&application=sanjeewa-application"

    API Publisher
    =============
    curl -X POST -c cookies http://localhost:9763/publisher/site/blocks/user/login/ajax/login.jag -d 'action=login&username=ttt@ttt.ttt&password=tttttt'
    7) Remove APIs
    curl -X POST -b cookies http://localhost:9763/publisher/site/blocks/item-add/ajax/remove.jag -d "action=removeAPI&name=sanjeewa-api&version=1.0.0&provider=ttt@ttt.ttt"

    Chris HaddadChoosing API Security Options Fostering API Ecosystems

    Choosing appropriate API security options will help you gain developer trust, increase API adoption, and build an effective API ecosystem. While APIs are the ‘coolest’ and most effective mechanism for exposing business functionality to the outside world and to other internal teams, API security requires learning new technologies (i.e. OAuth, MAC token profiles, and JSON Web Token [JWT]) and retrofitting existing identity management architecture with token chaining and identity brokering.

     

    Many mobile application developers and architects find API security and identity options arcane, jargon-filled, and confusing. They frequently ask whether selecting one choice over another is appropriate, and you need to cautiously identify and isolate the tradeoffs. A robust API security platform can help guide you in the right direction.

    API Security Basics

    Security is not an afterthought. Incorporate security as an integral part of any application development project. The same approach applies to API development as well. API security has evolved significantly in the past five years. The recent standards growth has been exponential. OAuth and bearer tokens are the most widely adopted standard, and are possibly now the de-facto standard for API security.

    What API security decisions should you consider?

    A few common API security decisions include:

    • How to pass a token used to authorize the API request
    • How to assert user identity to the API
    • How to propagate token revocation actions
    • How to transform (chain) access credentials

     

    Securing Access with OAuth

    Two divergent flavors of OAuth exist, 1.0 and 2.0.  OAuth 1.0 is a standard built for identity delegation. OAuth 2.0 is a highly extensible authorization framework, which is a significant attribute and selling point.

    OAuth 2.0 exposes three major authorization process phases:

    1. Requesting an Authorization Grant
    2. Exchanging the Authorization Grant for an Access Token
    3. Access the resource with the Access Token
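    As a rough illustration of phases 2 and 3, exchanging an authorization code for an access token typically looks something like this (the endpoint, client credentials, and code value are placeholders):

    curl -X POST https://authorization-server.example.com/token \
         -u "client_id:client_secret" \
         -d "grant_type=authorization_code" \
         -d "code=AUTHORIZATION_CODE_FROM_PHASE_1" \
         --data-urlencode "redirect_uri=https://client.example.com/callback"

    The token response then carries the access token used in phase 3 when calling the protected resource.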

    Access Tokens and Profiles

    A security architecture must specify a token format that communicates identity and entitlement assertions. The access token contains the information required to successfully make a request to the protected resource (along with type-specific attributes).  Access tokens are associated with profiles that specify how to communicate and interpret access token requests, access token responses, and interactions with protected resources.

    The OAuth 2.0 core specification does not mandate a specific access token type. The authorization server (which grants authorization) chooses an acceptable token type; the requester or client does not dictate the token type. The authorization server decides which token type to return during Phase 2 when crafting the access token response.

    An access token type definition may specify additional attributes (if any) sent to the client together with the “access_token” response parameter. The type also defines the HTTP authentication method used to include the access token when making a request to the protected resource.

    Bearer Token Profile

    Web applications commonly rely on SAML tokens or proprietary session cookies.  Most API security implementations today rely on Bearer Token Profiles. A Bearer token is a randomly generated string created by the authorization server and returned as a response to the client in Phase 2. Any client in possession of the token can use the token to access a secured API.

    The underlying token transport channel (e.g. TLS) enforces token integrity and privacy. TLS only provides security while the token is in transit. The OAuth token issuer (or the authorization server) and the OAuth client are responsible for protecting the access token while the token is at rest (stored in memory or on disk). In most cases, the access token needs to be encrypted. Moreover, the token issuer needs to guarantee the randomness of the generated access token, and the encryption has to be strong enough to withstand brute-force attacks.

    MAC Token Profile

    Rather than relying on a static random string known to the client, the authorization server, and the resource server, the MAC token profile does not directly pass the access token to the resource server. The profile relies on client-side code to sign the resource request with a shared session key, and the resource server checks the signature. The client uses the signature algorithm, the access token, and the MAC key to calculate the request token passed to the resource server.

    The OAuth authorization server will issue a MAC key along with the signature algorithm, session key,  nonce and an access token.  The access token  can be used as an identifier for the MAC key. The client signs a normalized string derived from request attributes. Unlike in Bearer token, the MAC access token will never be shared between the client and the resource server. The MAC access token is only known to the authorization server and the client.

    The MAC token profile is more complex than the Bearer token profile, and requires both the client and resource server to perform more sophisticated processing logic.

    Making the Assertion 

    Sending an access token (e.g. Bearer, MAC) to pass an authorization gate is just one piece of the security puzzle.

    A partner employee can log in to a web application using SAML2 SSO (you have to trust the partner's SAML2 IdP), and the web application can then access a secured API on behalf of the logged-in user. The web application will use the SAML2 assertion provided by the identity provider and exchange the SAML2 assertion for an OAuth access token via the SAML2 grant type.

    The SAML 2.0 Bearer Assertion Profile, which is built on top of OAuth 2.0 Assertion Profile, defines SAML 2.0 Bearer Assertion requests for an OAuth 2.0 access token in addition to authenticating the client. Under OAuth 2.0,  requesting an access token is known as a grant type.
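    In practice, that exchange is a token request using the SAML2 bearer grant type, roughly of the following shape (the endpoint and client credentials are placeholders, and the assertion value is the base64url-encoded SAML2 assertion):

    curl -X POST https://authorization-server.example.com/token \
         -u "client_id:client_secret" \
         --data-urlencode "grant_type=urn:ietf:params:oauth:grant-type:saml2-bearer" \
         --data-urlencode "assertion=BASE64URL_ENCODED_SAML2_ASSERTION"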

    JSON Web Token (JWT) Assertions

    JSON Web Token (JWT) Bearer Profile is almost the same as the SAML2 Assertion Profile. Instead of SAML tokens, JWT uses JSON Web Tokens.

    Linking Tokens with Associated Data

    The Internet draft OAuth Token Introspection, which is currently being discussed under the IETF OAuth working group, defines a method for a client or a protected resource (resource server) to query an OAuth authorization server and determine metadata about an OAuth token. The resource server sends the access token and the resource id (which is going to be accessed) to the authorization server’s introspection endpoint. The authorization server can check the validity of the token, evaluate any access control rules around it, and send back the appropriate response to the resource server. In addition to  token validity information, the response will detail token scopes, client_id, and any additional metadata associated with the token.

    Link Tokens with Identity

    When using OAuth 2.0, the client application is not required to know the end user (the only exception is the resource owner credentials grant type). The client simply obtains an access token to access a resource on behalf of a user. How does the application obtain an identity that can be asserted to the resource provider?

    OpenID Connect is a profile built on top of OAuth 2.0. OAuth is about access delegation, while OpenID Connect is about authentication. In other words, OpenID Connect builds an identity layer on top of OAuth 2.0. With OpenID Connect, the client gets an ID token in addition to the access token. The ID token represents the end user's identity.

    In-depth API Security

    To learn more about  security specifications and how to implement API security, check out the following resources:

    API Security Ecosystem White Paper

    An Ecosystem for API Security Presentation: OAuth 2.0, OpenID Connect, UMA, SAML, SCIM, and XACML

    Best Practices in Building an API Security Ecosystem

    Adam FirestoneIt’s in the Requirements: Cyber Resiliency as a Design Element

    This is the second installment of a two-part discussion of the threats and challenges involved with cybersecurity.  The first part of the discussion explored cyber threats and challenges using the Stuxnet attack as a lens.  This post picks up with an allegorical analysis of the cyber threat posed by nation-state attacks as well as ideas about how information systems can be built so that they are less tempting targets.

    For me, and others such as Ruth Bader Ginsberg, Donald Douglas and Alan Dershowitz, growing up in Brooklyn was an education in itself.  In addition to practical matters such as what constituted an ideal slice of pizza, how to parallel park and how to tell which blocks to avoid as a pedestrian after dark, there were more philosophical lessons to be learned.  Take, for example, the case of Anthony Gianelli.  (Note:  Names have been changed to protect the innocent.)

    Anthony, or Tony as he was called, was a hard working guy.  He had that common Brooklyn fondness for sitting on his stoop in the evenings and pontificating on weighty issues about the state of the world.  One week, as always, Tony played the lottery.  Only this week was different.  Tony won, and won big.  I won’t say just how much money Tony went home with after taxes, but it was bordering on life changing.  So what, you may ask, did Tony do with his winnings? 

    For those readers hailing from that storied borough, the answer is both obvious and easy.  For everyone else… I’ll tell you.  Tony bought a car.  And not just any car.  Tony bought a pristine, brand-spanking-new Ferrari GTSi.  However, his trip home from the dealership was only the beginning.  Knowing that he had about a month before the car was delivered, Tony set about fortifying his garage.

    Fortifying might have been a bit of an understatement.  Tony broke up the garage’s concrete floor and poured a new one - about eight feet deep.  Sunk deeply into the wet concrete were four hardened steel eye bolts.  The garage door was replaced with a high security model and a state of the art, sensor-based alarm system added.  During the construction process, Tony spent many an evening on his stoop declaiming enthusiastically about the high degree of security being engineered into his garage. 

    The big day came and the Ferrari arrived.  Tony drove it in a manner that was almost, well, reverent.  At the end of the day, the ritual began.  Tony lovingly parked the car in the garage, ran hardened steel chains through the undercarriage and secured each chain to an eye bolt with a high security padlock.  The door was shut and hermetically sealed.  The alarm was set, Tony wished the car good night, and then took to the stoop, passionately discussing the Ferrari’s security.

    One day, several months after taking delivery, Tony went down to the garage to greet the Ferrari.  To his horror and shock, the car was gone.  Not only was it gone, but there was no evidence of any burglary.  The door hadn’t been forced.  The alarm hadn’t been tripped.  The chains were neatly coiled around the eye bolts, the locks opened, ready for use.  Tony, predictably, went into mourning.

    After several months and stages of grief, Tony became somewhat philosophical about the loss.  It was, he mused, a case of “easy come, easy go.”  And so, you can only imagine Tony’s surprise when he walked into his dark garage on the way to retrieve the newspaper one morning only to bump into something with delightful, albeit hard, curves.  Turning on the light, Tony stared and crossed himself.  The Ferrari was back.  In fact, it was all back.  The chains were looped through the undercarriage.  The alarm, which was now going off, had been set, and the door was still sealed.  It was as if the car had never left.  Except for one small detail.

    Taped to the windshield was a note.  There were all of eight words:

    If we really want it, we’ll take it.

    Tony took his Ferrari and moved to New Jersey.
    ---
    Tales of braggadocio and grand theft auto notwithstanding, the story about Tony’s Ferrari has an important nugget of advice for cyber defenders.  Tony ran into a certain kind of reality.  Specifically, he discovered what happens when an individual of significant but finite resources is at odds with an organization that has almost limitless time and resources.  This reality, deriving from the axiom that “given enough time and money, all things are possible,” also applies when cybersecurity intersects with geopolitics.  That is to say, when a nation-state puts your information system in the crosshairs of its cyber capabilities, there’s generally little that can be done about it.

    That doesn’t mean that organizations should give up on cyber defense.  Dedicated, specific, targeted attacks by nation-states using Advanced Persistent Threats (e.g., “Stuxnet”) are rare.  The real cyber threats faced by commercial, government and military organizations – probes and penetration by external actors and data loss due to insider threats – are almost mundane in their ubiquity.  Moreover, these threats are so common that many security professionals simply assume that losses due to cyberattacks are just another terrain feature in cyberspace.

    That assumption is premised on the ideas that cyber defense is inherently reactive and that the architecture of distributed systems (and, for that matter, the internet) must remain inherently static.
    That premise is inherently flawed. 

    Technical standards and capabilities don’t remain static.  They continuously advance.  Many of the advances made over the last decade or so present engineers, architects, designers and developers with new options and choices when crafting responses to operational requirements.  Taken as a whole, this technical progress offers an ability to proactively design and integrally implement security in a manner that could alter much of the cybersecurity calculus. 

    This isn’t to say that there is a single silver bullet.  Rather, there are a number of technologies that, operating in concert, offer designers and defenders significant advantages.  An exhaustive discussion of all these technologies could fill volumes (and has) and is beyond the scope of this post.  However, highlighting just a few provides a useful overview of the way things could, and should, be.

    1.      Software is broken.  It’s created broken, it’s delivered broken and what’s worse, users become (unwitting) beta testers.  These flaws in the delivered product result in vulnerabilities which are exploited by hackers and malware authors.  In a disturbingly large proportion of cases, the delivery of flawed products can be traced to the nature of the software development life cycle itself.  In these cases, security verification and validation is the penultimate step prior to release.  As a result, it’s often rushed, resulting in flaws not being discovered.  Worse, it’s often too late or too expensive to fix a significant number of the flaws that are found.
     But what if security verification and validation could be pushed back to the beginning of the development lifecycle? If we could ensure that the only code modules that entered the trunk were those that had passed the complete battery of functional and non-functional (e.g., performance, scalability, interoperability and security) tests, the ensuing increase in the quality of software products would be accompanied by a significant decrease in delivered vulnerabilities.
     The good news is that this is exactly what the DevOps PaaS delivers.  By leveraging a shared, Cloud-based integrated development environment (IDE), environmental variances between Dev, Test and Operations that inject vulnerabilities can be eliminated.  Next, by automating DevOps practices such as Continuous Build, Continuous Integration, Continuous Test and Continuous Design, the onus is shifted to the developer, who must deliver flawless code, from the tester who had previously been (unrealistically) expected to find all the flaws in the code.

    2.      Many, if not most, critical systems are protected by access control systems that focus on authentication, or ensuring that the entity requesting access to the system is who it claims to be.  Authentication can be a powerful gate guard, sometimes requiring multiple concurrent methodologies (e.g., something you know, something you have, something you are, etc.).  The problem is that once a user is authenticated, these systems provide few, if any, controls or protections to system resources.  This vulnerability was exploited by both Bradley Manning and Edward Snowden.
     The answer is to add a layer that enforces fine-grained authorization and manages which resources can be accessed by authenticated users with a given set of attributes. This mechanism, called attribute-based access control, or ABAC, is implemented through an OASIS open standard known as the eXtensible Access Control Markup Language (XACML). XACML was first published in September 2003, and there are a significant number of commercial software packages (both proprietary and open source) that use it to bring ABAC’s powerful security to the enterprise.

    3.     When vulnerabilities are discovered in an enterprise’s key software components, it can take a significant amount of time to disseminate countervailing security measures.  During this time, the enterprise remains vulnerable.  The challenge is to rapidly close the security gap while ensuring that the enterprise’s operations suffer as little disruption as possible.
     The answer is to apply access control security at the operating system level, enabling an access control regime that is dynamic and centrally controlled.  In principle, this is similar to what ABAC implements for enterprise resources.  In this case, however, the control takes place at the inter-process communication (IPC) level.  In practice, this means that the organization can, upon learning about a vulnerability or compromise, push out a new access control policy to all hosts.  The policy can both enable and disable specific IPC types.  The net result is that the compromised software is prevented from executing while replacement software is seamlessly enabled.

    None of these things are a panacea to the cyber-vulnerability epidemic.  However, they all represent very real, tangible steps that engineers, designers and defenders can take to mitigate the risks faced while operating in an increasingly hostile environment.  They don’t solve everything.  But, taken in concert with other measures, they create a much more agile, resilient infrastructure.


    And that beats moving to New Jersey.

    Chamath GunawardanaEmail and normal user name configuration with WSO2 IS

    In this blog post I'm going to discuss how to configure WSO2 Identity Server to support email-based user names for one user store and normal string-type user names for another user store.

    This is supported in WSO2 IS 4.6 and 5.0 versions.

    First you need to configure the Identity Server to support email-based user names. You can refer to this [1] blog post for the configuration steps. You can configure the primary user store to have email user names as described in the blog.

    Then you can add a secondary user store from the Configure -> User Store Management configuration. Click on add secondary user store and give the necessary details for a user store that should support both email and normal user name types. We will call this user store domain "TEST".
    When you configure this and straight away try to add a user with only a string-type user name containing alphanumeric characters, it will complain that the user name does not conform to the policy, as shown below.


    So in order to have both types of user names for the TEST domain, we need to add the following configuration for the user store.
    <Property name="UsernameWithEmailJavaScriptRegEx">^[\S]{3,30}$</Property>
    This property defines the user name pattern to be used when email user names are enabled, as discussed in the [1] blog.

    However, you cannot add this property from the User Store Manager configuration UI; you need to edit the file manually. Usually the secondary user store properties for the super tenant are placed in the <IS_HOME>/repository/deployment/server/userstores/ directory, named after the domain; here it will be TEST.xml.

    After adding the property you need to restart the server. Then try to add a user to the TEST domain with a normal string-type user name (testuser1) from Configure -> Users and Roles -> Users, by clicking Add New User. Then try with an email user name (testuser2@email.com).

    With this configuration you will be able to add both types of users to this user store.

    Note:
    This is supported in super tenant mode only. So in a multi-tenant deployment it is recommended to have only one type of user name configuration.


    Ref:
    [1] - http://sureshatt.blogspot.de/2013/07/attribute-email-based-user.html

    Pushpalanka JayawardhanaHow to write a Custom SAML SSO Assertion Signer for WSO2 Identity Server

    This is the 3rd post I am writing to explain the use of extension points in WSO2 Identity Server. WSO2 Identity Server has many such extension points which are easily configurable and arm the server with a lot of flexibility. With these, we can support many domain-specific requirements with minimal effort.
    • This third post deals with writing a custom SAML SSO assertion signer.

    What can we customize?

    • Credentials used to sign the SAML Assertion (The private key)
    • Signing Algorithm
    • This sample can be extended to customize how we sign the SAML Response and validate the signature as well.

    How?

    We have to write a class extending
    • the class 'org.wso2.carbon.identity.sso.saml.builders.signature.DefaultSSOSigner', or
    implementing
    • the interface 'org.wso2.carbon.identity.sso.saml.builders.signature.SSOSigner'.
    In our case we need to override the following method to customize how we sign the assertion:


        @Override

        public Assertion doSetSignature(Assertion assertion, String signatureAlgorithm, X509Credential cred) throws IdentityException {

            try {
                //override the credentials with our desired one
                cred = getRequiredCredentials();
                Signature signature = (Signature) buildXMLObject(Signature.DEFAULT_ELEMENT_NAME);
                signature.setSigningCredential(cred);
                signature.setSignatureAlgorithm(signatureAlgorithm);
                signature.setCanonicalizationAlgorithm(Canonicalizer.ALGO_ID_C14N_EXCL_OMIT_COMMENTS);

                try {
                    KeyInfo keyInfo = (KeyInfo) buildXMLObject(KeyInfo.DEFAULT_ELEMENT_NAME);
                    X509Data data = (X509Data) buildXMLObject(X509Data.DEFAULT_ELEMENT_NAME);
                    X509Certificate cert = (X509Certificate) buildXMLObject(X509Certificate.DEFAULT_ELEMENT_NAME);

                    String value = org.apache.xml.security.utils.Base64.encode(cred
                            .getEntityCertificate().getEncoded());
                    cert.setValue(value);
                    data.getX509Certificates().add(cert);
                    keyInfo.getX509Datas().add(data);
                    signature.setKeyInfo(keyInfo);
                } catch (CertificateEncodingException e) {
                    throw new IdentityException("errorGettingCert");
                }

                assertion.setSignature(signature);

                List<Signature> signatureList = new ArrayList<Signature>();
                signatureList.add(signature);

                // Marshall and Sign
                MarshallerFactory marshallerFactory = org.opensaml.xml.Configuration
                        .getMarshallerFactory();
                Marshaller marshaller = marshallerFactory.getMarshaller(assertion);
                marshaller.marshall(assertion);

                org.apache.xml.security.Init.init();
                Signer.signObjects(signatureList);

                return assertion;
            } catch (Exception e) {
                throw new IdentityException("Error while signing the SAML Response message.", e);
            }
        }


    Finally we have to update the identity.xml as below, with the custom class we wrote overriding the method.

     <SAMLSSOSigner>org.wso2.custom.sso.signer.CustomSSOSigner</SAMLSSOSigner>
    Then place the compiled package containing the above class at 'IS_HOME/repository/components/lib'.

    Now if we restart the server and run the SAML SSO scenario, the SAML SSO assertion will be signed in the way we defined in the custom class we wrote.

    Here you can find a complete code sample to customize the assertion signing procedure.

    Hope this helps..
    Cheers!

    Aruna Sujith KarunarathnaEnable Java Security Manager for WSO2 Products

    Hi everyone, in this post we are going to explore how to enable the Java security manager for WSO2 products. For this we need to sign all the jars using the jarsigner program. For learning purposes I will use the wso2carbon.jks Java key store file, which ships by default with WSO2 products. Special thanks go to Sanjaya Ratnaweera who generously gave me the script files.. :) I am going to

    Pavithra MadurangiError while regenerating application token when IS is acting as key manager

    Tried the scenario in the following versions:
    WSO2 Identity Server 5.0.0
    WSO2 API Manger 1.7.1

    In this scenario WSO2 IS acts as the key manager and I was following this guide:
    https://docs.wso2.com/display/CLUSTER420/Configuring+WSO2+Identity+Server+as+the+Key+Manager

    After all the configuration was done, I created an API, published it, subscribed to it and created an application token.

    When regenerating the token, I observed the exact issue reported at https://wso2.org/jira/browse/APIMANAGER-2738

    [2014-09-09 11:12:04,737] ERROR - APIStoreHostObject Error in getting new accessToken
    org.apache.axis2.AxisFault: Error in getting new accessToken
        at org.apache.axis2.util.Utils.getInboundFaultFromMessageContext(Utils.java:531)
        at org.apache.axis2.description.OutInAxisOperationClient.handleResponse(OutInAxisOperation.java:370)
        at org.apache.axis2.description.OutInAxisOperationClient.send(OutInAxisOperation.java:445)
        at org.apache.axis2.description.OutInAxisOperationClient.executeImpl(OutInAxisOperation.java:225)
        at org.apache.axis2.client.OperationClient.execute(OperationClient.java:149)
        at org.wso2.carbon.apimgt.keymgt.stub.subscriber.APIKeyMgtSubscriberServiceStub.renewAccessToken(APIKeyMgtSubscriberServiceStub.java:1187)
        at org.wso2.carbon.apimgt.keymgt.client.SubscriberKeyMgtClient.regenerateApplicationAccessKey(SubscriberKeyMgtClient.java:83)
        at org.wso2.carbon.apimgt.hostobjects.APIStoreHostObject.jsFunction_refreshToken(APIStoreHostObject.java:3219)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:601)
        at org.mozilla.javascript.MemberBox.invoke(MemberBox.java:126)
        at org.mozilla.javascript.FunctionObject.call(FunctionObject.java:386)
        at org.mozilla.javascript.optimizer.OptRuntime.callN(OptRuntime.java:52)
        at org.jaggeryjs.rhino.store.modules.subscription.c4._c_anonymous_3(/store/modules/subscription/key.jag:61)
        at org.jaggeryjs.rhino.store.modules.subscription.c4.call(/store/modules/subscription/key.jag)
        at org.mozilla.javascript.ScriptRuntime.applyOrCall(ScriptRuntime.java:2430)
        at org.mozilla.javascript.BaseFunction.execIdCall(BaseFunction.java:269)
        at org.mozilla.javascript.IdFunctionObject.call(IdFunctionObject.java:97)
        at org.mozilla.javascript.optimizer.OptRuntime.call2(OptRuntime.java:42)
        at org.jaggeryjs.rhino.store.modules.subscription.c0._c_anonymous_10(/store/modules/subscription/module.jag:35)
        at org.jaggeryjs.rhino.store.modules.subscription.c0.call(/store/modules/subscription/module.jag)
        at org.mozilla.javascript.optimizer.OptRuntime.callN(OptRuntime.java:52)
        at org.jaggeryjs.rhino.store.site.blocks.subscription.subscription_add.ajax.c0._c_anonymous_1(/store/site/blocks/subscription/subscription-add/ajax/subscription-add.jag:206)
        at org.jaggeryjs.rhino.store.site.blocks.subscription.subscription_add.ajax.c0.call(/store/site/blocks/subscription/subscription-add/ajax/subscription-add.jag)
        at org.mozilla.javascript.optimizer.OptRuntime.call0(OptRuntime.java:23)
        at org.jaggeryjs.rhino.store.site.blocks.subscription.subscription_add.ajax.c0._c_script_0(/store/site/blocks/subscription/subscription-add/ajax/subscription-add.jag:3)
        at org.jaggeryjs.rhino.store.site.blocks.subscription.subscription_add.ajax.c0.call(/store/site/blocks/subscription/subscription-add/ajax/subscription-add.jag)
        at org.mozilla.javascript.ContextFactory.doTopCall(ContextFactory.java:394)
        at org.mozilla.javascript.ScriptRuntime.doTopCall(ScriptRuntime.java:3091)
        at org.jaggeryjs.rhino.store.site.blocks.subscription.subscription_add.ajax.c0.call(/store/site/blocks/subscription/subscription-add/ajax/subscription-add.jag)
        at org.jaggeryjs.rhino.store.site.blocks.subscription.subscription_add.ajax.c0.exec(/store/site/blocks/subscription/subscription-add/ajax/subscription-add.jag)
        at org.jaggeryjs.scriptengine.engine.RhinoEngine.execScript(RhinoEngine.java:570)
        at org.jaggeryjs.scriptengine.engine.RhinoEngine.exec(RhinoEngine.java:273)
        at org.jaggeryjs.jaggery.core.manager.WebAppManager.execute(WebAppManager.java:432)
        at org.jaggeryjs.jaggery.core.JaggeryServlet.doPost(JaggeryServlet.java:29)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:755)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:305)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
        at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:749)
        at org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:487)
        at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:379)
        at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:339)
        at org.jaggeryjs.jaggery.core.JaggeryFilter.doFilter(JaggeryFilter.java:21)
        at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
        at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
        at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)
        at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)
        at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:472)
        at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
        at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
        at org.wso2.carbon.tomcat.ext.valves.CompositeValve.continueInvocation(CompositeValve.java:178)
        at org.wso2.carbon.tomcat.ext.valves.CarbonTomcatValve$1.invoke(CarbonTomcatValve.java:47)
        at org.wso2.carbon.webapp.mgt.TenantLazyLoaderValve.invoke(TenantLazyLoaderValve.java:56)
        at org.wso2.carbon.tomcat.ext.valves.TomcatValveContainer.invokeValves(TomcatValveContainer.java:47)
        at org.wso2.carbon.tomcat.ext.valves.CompositeValve.invoke(CompositeValve.java:141)
        at org.wso2.carbon.tomcat.ext.valves.CarbonStuckThreadDetectionValve.invoke(CarbonStuckThreadDetectionValve.java:156)
        at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:936)
        at org.wso2.carbon.tomcat.ext.valves.CarbonContextCreatorValve.invoke(CarbonContextCreatorValve.java:52)
        at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
        at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407)
        at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1004)
        at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589)
        at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1653)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        at java.lang.Thread.run(Thread.java:722)
    [2014-09-09 11:12:04,749] ERROR - subscription-add:jag org.wso2.carbon.apimgt.api.APIManagementException: Error in getting new accessToken

    The solution: I had missed the following point in the guide...

    Copy the section from the identity.xml file in the API Manager (/repository/conf/identity.xml) and replace the section of the identity.xml file of the IS (/repository/conf/identity.xml).

    Thought of sharing this hoping it helps someone who comes across the same issue :)

    Chatura Dilan PereraGetting your Android Device Managed by WSO2 Enterprise Mobility Manager

    Android, the world-leading mobile operating system based on the Linux kernel, rules the world today, having acquired 80% of the smartphone market share. That means many employees in your organization are using Android phones for their day to day tasks and bringing them to your organization. If you are a CTO or system administrator who deals with security, you have to be worried about Android devices being brought into your workplace.

    WSO2 Enterprise Mobility Manager was developed to provide a platform which helps solve mobile computing challenges enterprises face today. It is a free and open source EMM solution which supports Android from the beginning.

    The new WSO2 EMM 1.1.0 supports Android with zero configuration. It supports both Android mobile device management and mobile application management out of the box. A policy-driven approach can also be used for Android in WSO2 EMM to manage BYOD and COPE devices separately.

    WSO2 EMM also comes with an enterprise mobile store and mobile publisher. You can create your own Google Play-like private enterprise mobile store with EMM, adding your private enterprise apps or public Google Play apps to it.

    To find out more about managing your Android devices with WSO2 EMM, please join me for the webinar on Wednesday, September 10, 2014 at 10:00 AM – 11:00 AM (PDT).

    Ishara PremadasaSending Form Data through WSO2 ESB with x-www-form-urlencoded content type

    This post is about how to post form data to a REST service from WSO2 ESB 4.8.1.
    Imagine that we have the following key-value pairs to be passed to a REST service which accepts x-www-form-urlencoded data.

    name=ishara&company=wso2&country=srilanka

    Now, when we are going to send this data from the ESB, we need to set the values as key-value pairs by using property mediators and then build the payload with a PayloadFactory mediator in the following format.

    <property name="name" value="ishara" scope="default" type="STRING"/>
    <property name="company" value="wso2" scope="default" type="STRING"/>/>
    <property name="country" value="srilanka" scope="default" type="STRING"/>/>

                 
    <payloadFactory media-type="xml">
                    <format>
                        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
                            <soapenv:Body>
                                <root>
                                    <name>$1</name>
                                    <company>$2</company>
                                    <country>$3</country>                               
                                </root>
                            </soapenv:Body>
                        </soapenv:Envelope>
                    </format>
                    <args>
                        <arg evaluator="xml" expression="$ctx:name"/>
                        <arg evaluator="xml" expression="$ctx:company"/>
                        <arg evaluator="xml" expression="$ctx:country"/>
                   </args> 

    </payloadFactory>           

    Then set the messageType property to 'application/x-www-form-urlencoded'. This is how the ESB identifies these key-value pairs as form data, and it will do the transformation. It is also required to disable chunking.
                
    <property name="messageType" value="application/x-www-form-urlencoded" scope="axis2" type="STRING"/>
    <property name="DISABLE_CHUNKING" value="true" scope="axis2" type="STRING"/>

                           
    Now we are all set to call the REST endpoint with this message data as below. You can use either a send or a call mediator.

    <call>
       <endpoint key="conf:endpoints/EmployeeDataServiceEndpoint.xml"/>
    </call>
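
    For reference, the wire-level request the ESB ends up sending is an ordinary form post. The standalone Java sketch below reproduces the same request outside the ESB; the endpoint URL is a placeholder and not part of the original configuration:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.net.URLEncoder;
    import java.nio.charset.StandardCharsets;

    public class FormPostExample {

        public static void main(String[] args) throws Exception {
            // Placeholder endpoint standing in for the REST service behind the ESB
            URL url = new URL("http://localhost:8280/services/EmployeeDataService");

            // Same key-value pairs as in the PayloadFactory example
            String body = "name=" + URLEncoder.encode("ishara", "UTF-8")
                    + "&company=" + URLEncoder.encode("wso2", "UTF-8")
                    + "&country=" + URLEncoder.encode("srilanka", "UTF-8");

            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
            try (OutputStream out = conn.getOutputStream()) {
                out.write(body.getBytes(StandardCharsets.UTF_8));
            }
            System.out.println("HTTP status: " + conn.getResponseCode());
        }
    }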

                           

    Dulitha WijewanthaLoad Balancing Proxy for WSO2 Servers

    I have written a post previously about setting up an Apache proxy for Carbon servers on Mac OS X. This time I will be focusing on the SSL and load balancing aspects of it. For this particular use case I am going to take the WSO2 Identity Server. The final scenario is a deployment architecture where the proxy load balances the incoming traffic across two WSO2 Identity Servers.

    Finding httpd.conf

    This depends largely on the OS. For Mac OS X it is usually located at /etc/apache2/httpd.conf. Red Hat Linux puts this file under /etc/httpd/.

    Necessary modules

    The Apache server is broken into the core and modules. Some modules are not enabled by default in certain distributions. Modules are defined in the httpd.conf file, which is read by the Apache server at startup to configure itself. Below are the necessary modules for Apache 2. Check whether these modules are enabled in httpd.conf; if not, you'll have to install them using your package manager.

    LoadModule proxy_module libexec/apache2/mod_proxy.so
    LoadModule proxy_connect_module libexec/apache2/mod_proxy_connect.so
    LoadModule proxy_ftp_module libexec/apache2/mod_proxy_ftp.so
    LoadModule proxy_http_module libexec/apache2/mod_proxy_http.so
    LoadModule proxy_scgi_module libexec/apache2/mod_proxy_scgi.so
    LoadModule proxy_ajp_module libexec/apache2/mod_proxy_ajp.so
    LoadModule proxy_balancer_module libexec/apache2/mod_proxy_balancer.so
    LoadModule ssl_module libexec/apache2/mod_ssl.so

    Certificates for SSL

    Generating a certificate is necessary to perform SSL proxying. We generate a private key using OpenSSL. When generating the private key, use wso2carbon as the pass phrase.

    openssl genrsa -des3 -out server.key 1024

    Afterwards – we generate the Certificate signing request (.csr).

    openssl req -new -key server.key -out server.csr

    By using both the CSR and the private key, we can generate a certificate valid for a particular number of days.

    openssl x509 -in server.csr -out server.crt -req -signkey server.key -days 365

    Copy your certificate file (server.crt) and private key (server.key) to a directory inside the Apache configuration directory. Let's put them in a folder called certs under apache.

    Configure Apache for certificates

    Let's get down and dirty with the httpd.conf file now. Forget about all the default configurations in the file and scroll to the bottom. First we are going to add the balancer, adding the two server host names as balancer members.

    <Proxy balancer://mycluster>
        Order Deny,Allow
        Deny from none
        Allow from all
        ProxySet lbmethod=byrequests
    
        # Define back-end servers:
        # Server 1
        BalancerMember https://localhost:9443/
        BalancerMember https://localhost:9453/
    </Proxy>

    Next we are going to configure a VirtualHost that uses the above balancer. First, Apache will have to listen on port 443, the default SSL port. The private key and the certificate are configured inside the virtual host. Also note that the trailing '/' after the cluster name (mycluster) is necessary.

    Listen 443
    NameVirtualHost *:443
    <VirtualHost *:443>
        SSLEngine On
        SSLProxyEngine On
       
        # Set the path to SSL certificate
        SSLCertificateFile /private/etc/apache2/certs/server.crt
        SSLCertificateKeyFile /private/etc/apache2/certs/server.key
       
        # Setup the balancer:
        ProxyPass / balancer://mycluster/
        ProxyPassReverse / balancer://mycluster/
           
    </VirtualHost>

    Configure the certificate password

    Create a file called pass in the certs folder and include the content below in it:

    #!/bin/sh
    echo "wso2carbon"

    In the httpd.conf file, outside the VirtualHost, put the configuration below to set up the password. Apache will read the pass phrase from that file; otherwise we would have to provide the pass phrase of the private key every time the server starts.

    SSLPassPhraseDialog  exec:/private/etc/apache2/certs/pass

    Now if you access https://localhost, you'll be proxied to one of the Identity Servers running on ports 9443 or 9453.

    Real world use case

    There are several use cases for using a proxy. First, a proxy can be used to securely proxy traffic to the Identity Server sitting in the internal network; this way the proxy sits in the DMZ. Another use case is to provide high availability (HA): the proxy can be used to direct traffic to a couple of servers so that if one server goes down, the others continue to process requests.

    Malintha AdikariHow to write test case using WSO2 Test Automation Framework

    1. Check out and build the latest version of the WSO2 Test Automation Framework from the WSO2 source repository.

    Once you build the Test Automation Framework (TAF) you can utilize the TAF artifacts to write test cases. WSO2 TAF is capable of executing the tests against the given environments.

    2. Then you have to include the automation.xml file under the resources directory of your test.

    Test writers can maintain all the configuration details related to the test in a single file called automation.xml, which categorizes the configurations into several modules.

    3. Now we can start writing the test case. Here is a sample test case which adds a proxy service to WSO2 ESB and invokes it. (In this example we use a StockQuote web service hosted in an Axis2 server as the backend of the proxy.)




    public class CarbonApplicationDeploymentTestCase {

        private CarbonAppUploaderClient carbonAppUploaderClient;
        private ApplicationAdminClient applicationAdminClient;
        private final int MAX_TIME = 120000;
        private AutomationContext automationContext;

        @BeforeClass(alwaysRun = true)
        protected void uploadCarFileTest() throws Exception {
            automationContext = new AutomationContext("ESB", TestUserMode.TENANT_USER);
            ProxyServiceAdminClient proxyServiceAdminClient = new ProxyServiceAdminClient(
                    automationContext.getContextUrls().getBackEndUrl(), automationContext.login());

            String proxy = "<proxy xmlns=\"http://ws.apache.org/ns/synapse\"\n" +
                    " name=\"StockQuoteProxy\"\n" +
                    " transports=\"https http\"\n" +
                    " startOnLoad=\"true\"\n" +
                    " trace=\"disable\">\n" +
                    " <description/>\n" +
                    " <target>\n" +
                    " <inSequence>\n" +
                    " <send>\n" +
                    " <endpoint>\n" +
                    " <address uri=\"http://localhost:9000/services/SimpleStockQuoteService\"/>\n" +
                    " </endpoint>\n" +
                    " </send>\n" +
                    " </inSequence>\n" +
                    " </target>\n" +
                    " <publishWSDL uri=\"http://localhost:9000/services/SimpleStockQuoteService?wsdl\"/>\n" +
                    "</proxy>";

            OMElement proxyOM = AXIOMUtil.stringToOM(proxy);
            proxyServiceAdminClient.addProxyService(proxyOM);
        }

        @Test(groups = {"wso2.esb"}, description = "Invoke the added proxy")
        public void endpointDeploymentTest() throws Exception {
            StockQuoteClient axis2C = new StockQuoteClient();
            String serviceURL = getProxyServiceURL("StockQuoteProxy");
            OMElement response = axis2C.sendSimpleStockQuoteRequest(serviceURL, null, "WSO2");
            axis2C.sendSimpleStockQuoteRequest(serviceURL, null, "WSO2");
            System.out.println(response);
        }
    }


    Now let's discuss the steps of writing a test case using the WSO2 Test Automation Framework, using the above example.

           1. Create the context (the environment for the test)
           2. Use the created context details in the test and develop the test

    Create the context

    You have to create the test environment using the provided configuration details (in automation.xml) before you start developing the test case. You can create a new environment by creating an AutomationContext instance in your test. You have to use the correct configuration details provided in automation.xml when you create the AutomationContext instance.

     automationContext = new AutomationContext("ESB", TestUserMode.TENANT_USER);

    In the above example we have used "ESB" as the product group and "TestUserMode.TENANT_USER" as the user type.

    Use the created context details in the test

    After you create an instance of AutomationContext providing suitable configurations, that instance can be used to obtain the context details related to the test (e.g. the session cookie of the given user's login, the backend URL of the given product group instance, the service URL of the given product group instance, etc.).

    ProxyServiceAdminClient(automationContext.getContextUrls().getBackEndUrl(), automationContext.login());

    In our example, we have created the ProxyServiceAdminClient using the backend URL and the session cookie of the created AutomationContext instance. Likewise, we can use these details wherever they are needed in the test.













    Chandana NapagodaWSO2 Governance Registry - Monitor database operations using log4jdbc

    log4jdbc is a Java JDBC driver that wraps the real database driver and can be used to log SQL and/or JDBC calls. So here I am going to show how to monitor JDBC operations on Governance Registry using log4jdbc.

    Here I assume you have already configured the Governance Registry instance with MySQL. If not, please follow the instructions available in the Governance Registry documentation.

    1). Download the log4jdbc driver

    You can download the log4jdbc driver from the below location: https://code.google.com/p/log4jdbc/

    2). Add log4jdbc driver

    Copy the log4jdbc driver into the CARBON_HOME/repository/components/lib directory.

    3). Configure log4j.properties file.

    Navigate to the log4j.properties file located in the CARBON_HOME/repository/conf/ directory and add the below entries to it.

    # Log all JDBC calls except for ResultSet calls
    log4j.logger.jdbc.audit=INFO,jdbc
    log4j.additivity.jdbc.audit=false

    # Log only JDBC calls to ResultSet objects
    log4j.logger.jdbc.resultset=INFO,jdbc
    log4j.additivity.jdbc.resultset=false

    # Log only the SQL that is executed.
    log4j.logger.jdbc.sqlonly=DEBUG,sql
    log4j.additivity.jdbc.sqlonly=false

    # Log timing information about the SQL that is executed.
    log4j.logger.jdbc.sqltiming=DEBUG,sqltiming
    log4j.additivity.jdbc.sqltiming=false

    # Log connection open/close events and connection number dump
    log4j.logger.jdbc.connection=FATAL,connection
    log4j.additivity.jdbc.connection=false

    # the appender used for the JDBC API layer call logging above, sql only
    log4j.appender.sql=org.apache.log4j.FileAppender
    log4j.appender.sql.File=${carbon.home}/repository/logs/sql.log
    log4j.appender.sql.Append=false
    log4j.appender.sql.layout=org.apache.log4j.PatternLayout
    log4j.appender.sql.layout.ConversionPattern=-----> %d{yyyy-MM-dd HH:mm:ss.SSS} %m%n%n

    # the appender used for the JDBC API layer call logging above, sql timing
    log4j.appender.sqltiming=org.apache.log4j.FileAppender
    log4j.appender.sqltiming.File=${carbon.home}/repository/logs/sqltiming.log
    log4j.appender.sqltiming.Append=false
    log4j.appender.sqltiming.layout=org.apache.log4j.PatternLayout
    log4j.appender.sqltiming.layout.ConversionPattern=-----> %d{yyyy-MM-dd HH:mm:ss.SSS} %m%n%n

    # the appender used for the JDBC API layer call logging above
    log4j.appender.jdbc=org.apache.log4j.FileAppender
    log4j.appender.jdbc.File=${carbon.home}/repository/logs/jdbc.log
    log4j.appender.jdbc.Append=false
    log4j.appender.jdbc.layout=org.apache.log4j.PatternLayout
    log4j.appender.jdbc.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss.SSS} %m%n

    # the appender used for the JDBC Connection open and close events
    log4j.appender.connection=org.apache.log4j.FileAppender
    log4j.appender.connection.File=${carbon.home}/repository/logs/connection.log
    log4j.appender.connection.Append=false
    log4j.appender.connection.layout=org.apache.log4j.PatternLayout
    log4j.appender.connection.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss.SSS} %m%n



    4). Update the master-datasources.xml file

    Update the master-datasources.xml file located in the CARBON_HOME/repository/conf/datasources directory. Change each datasource URL and driver class name as below:

    <url>jdbc:log4jdbc:mysql://localhost:3306/amdb?autoReconnect=true</url>
    <driverClassName>net.sf.log4jdbc.DriverSpy</driverClassName>

    5). Enjoy

    Some database drivers may not be supported by default (e.g. DB2); in that case you can pass the database driver class name as a VM argument:

    -Dlog4jdbc.drivers=com.ibm.db2.jcc.DB2Driver

    Restart the server and enjoy your work with log4jdbc. Log files are created under the CARBON_HOME/repository/logs/ directory, so using the sqltiming.log file you can monitor the execution time of each query.
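
    As a quick sanity check that the spy driver is wired in, any plain JDBC call made through a jdbc:log4jdbc: URL should now show up in these logs. A minimal standalone sketch (the credentials are placeholders; use whatever your datasource is configured with):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class Log4jdbcSmokeTest {

        public static void main(String[] args) throws Exception {
            // Load the spy driver explicitly (optional with JDBC 4+, shown for clarity)
            Class.forName("net.sf.log4jdbc.DriverSpy");

            try (Connection con = DriverManager.getConnection(
                    "jdbc:log4jdbc:mysql://localhost:3306/amdb?autoReconnect=true",
                    "reguser", "regpassword");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM REG_RESOURCE")) {
                while (rs.next()) {
                    // The query and its timing should also appear in sql.log / sqltiming.log
                    System.out.println("Registry resources: " + rs.getInt(1));
                }
            }
        }
    }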

    PS: If you want to simulate a lower bandwidth situation, you can use trickle when starting the server.
             Example: sh wso2server.sh trickle -d 64 -u 64

    Chandana NapagodaHow to configure WSO2 G-Reg with ELB - Updated

    This is the updated post for configuring the WSO2 ELB 2.1.0 and G-Reg 4.6.0 releases. If you are using earlier releases, please refer to the earlier post.

    When we front a WSO2 G-Reg (Governance Registry) node with WSO2 ELB (Elastic Load Balancer), all incoming messages go through the ELB node, which acts as an HTTP and HTTPS proxy to the G-Reg node. So once the ELB is configured in front of G-Reg, G-Reg won't be accessible on its own without the ELB.

    1. Download WSO2 ELB and WSO2 GREG. Rename the extracted ELB as ELB-HOME and extracted GREG as GREG-HOME.

    ELB Configuration 

    2. Go to ELB-HOME/repository/conf/loadbalancer.conf and add the following entry. There can be multiple entries according to the clustering requirements.

     governance {

      domains   {

               wso2.governance.domain {

                   tenant_range *;
                   group_mgt_port 4000;
                   mgt{
                       hosts governance.local.wso2.com;
                   }
               }
           }
    }

    3. Update the hosts files on the nodes with the relevant IP and domain information. In my example, G-Reg and ELB will be hosted on the same node.

    127.0.0.1 governance.local.wso2.com 
    If not, the governance.local.wso2.com domain should be mapped to the ELB node's IP address.

    Next, navigate to the ELB-HOME/repository/conf/axis2 directory, open the axis2.xml file and update the localMemberPort parameter value as below. This port number should not conflict with any other port.

    <parameter name="localMemberPort">5000</parameter>
    Greg Configuration

    4. Then clustering should be enabled in G-Reg as well. To enable it, go to GREG-HOME/repository/conf/axis2/axis2.xml and modify the clustering configuration as below.


    <clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
    <parameter name="membershipScheme">wka </parameter>
    </clustering>

    5. Uncomment the localMemberHost parameter in GREG-HOME/repository/conf/axis2/axis2.xml and specify the IP address (or host name) to be exposed to members of the cluster.

    <parameter name="localMemberHost">127.0.0.1</parameter>


    6. Then define the clustering domain information (in GREG-HOME/repository/conf/axis2/axis2.xml). This "domain" value should be the same as in loadbalancer.conf.

    <parameter name="domain">wso2.governance.domain</parameter>
    <parameter name="localMemberPort">4250</parameter>

    7. As shown below, add "subDomain" information into the same axis2.xml file.

    <parameter name="properties">
    <property name="backendServerURL" value="https://${hostName}:${httpsPort}/services/"/>
    <property name="mgtConsoleURL" value="https://${hostName}:${httpsPort}/"/>
    <property name="subDomain" value="mgt"/>
    </parameter>

    8. Add the load balancer IP address or host information into the same axis2.xml file. In this example scenario the IP address is 127.0.0.1.

    <members>
    <member>
    <hostname>127.0.0.1</hostname>
    <port>4000</port>
    </member>
    </members>

    9. Open the GREG-HOME/repository/conf/tomcat/catalina-server.xml file and add HTTP and HTTPS proxy port information to that file.

    <Connector port="9763" protocol="org.apache.coyote.http11.Http11NioProtocol" proxyPort="8280"/>
    <Connector port="9443" protocol="org.apache.coyote.http11.Http11NioProtocol" proxyPort="8243"/>


    10. Go to GREG-HOME/repository/conf/carbon.xml file and update the "HostName" and "MgtHostName" as below.

    <HostName>governance.local.wso2.com</HostName>
    <MgtHostName>governance.local.wso2.com</MgtHostName>

    11. This step only needs to be followed if you are hosting G-Reg and ELB on the same node, as we have done in this sample. In that case we should avoid port conflicts between ELB and G-Reg; to do so we can use the "Offset" entry in the carbon.xml file. Go to GREG-HOME/repository/conf/carbon.xml and change the port offset value.

    <Offset>1</Offset>

    12. Start the ELB instance

    13. Start the G-Reg instance(s)

    14. You can access the G-Reg instance using the following URL: "https://governance.local.wso2.com:8243/carbon/"

    Chandana NapagodaHow to configure WSO2 G-Reg with ELB

    When we front a WSO2 G-Reg (Governance Registry) node with WSO2 ELB (Elastic Load Balancer), all incoming messages go through the ELB node, which acts as an HTTP and HTTPS proxy to the G-Reg node. So once the ELB is configured in front of G-Reg, G-Reg won't be accessible on its own without the ELB.

     1. Download WSO2 ELB and WSO2 GREG. Rename the extracted ELB as ELB-HOME and extracted GREG as GREG-HOME.

    ELB Configuration 


     2. Go to ELB-HOME/repository/conf/loadbalancer.conf and add the following entry. There can be multiple entries according to the clustering requirements.
     governance {
    domains {

    wso2.governance.domain {
    min_app_instances 1;
    hosts governance.local.wso2.com;
    sub_domain mgt;
    tenant_range *;
    }
    }
    }

    3. Update the hosts files on the nodes with the relevant IP and domain information. In my example, G-Reg and ELB will be hosted on the same node.

     127.0.0.1 governance.local.wso2.com
    If not, the governance.local.wso2.com domain should be mapped to the ELB node's IP address.

    Greg Configuration 


    4. Uncomment the localMemberHost parameter in GREG-HOME/repository/conf/axis2/axis2.xml and specify the IP address (or host name) to be exposed to members of the cluster.

     <parameter name="localMemberHost">127.0.0.1</parameter>

    5. Then clustering should be enabled in G-Reg as well. To enable it, go to GREG-HOME/repository/conf/axis2/axis2.xml and modify the clustering configuration as below.
      <clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent" enable="true">
    <parameter name="membershipScheme">wka </parameter>

    </clustering>
    6. Then define the clustering domain information (in GREG-HOME/repository/conf/axis2/axis2.xml). This "domain" and "localMemberHost" information should be the same as in loadbalancer.conf.

    <parameter name="domain">wso2.governance.domain</parameter>
    <parameter name="localMemberHost">governance.local.wso2.com</parameter>
    <parameter name="localMemberPort">4250</parameter>

    7. As shown below, add the "subDomain" information into the same axis2.xml file.
     <parameter name="properties">
    <property name="backendServerURL" value="https://${hostName}:${httpsPort}/services/"/>
    <property name="mgtConsoleURL" value="https://${hostName}:${httpsPort}/"/>
    <property name="subDomain" value="mgt"/>
    </parameter>
    8. Add the load balancer IP address or host information into the same axis2.xml file. In this example scenario the IP address is 127.0.0.1.

    <members>
    <member>
    <hostname>127.0.0.1</hostname>
    <port>4000</port>
    </member>
    </members>
    9. Open the GREG-HOME/repository/conf/tomcat/catalina-server.xml file and add HTTP and HTTPS proxy port information to that file.
    <Connector port="9763" protocol="org.apache.coyote.http11.Http11NioProtocol" proxyPort="8280"/>
    <Connector port="9443" protocol="org.apache.coyote.http11.Http11NioProtocol" proxyPort="8243"/>

    10. Go to GREG-HOME/repository/conf/carbon.xml file and update the "HostName" and "MgtHostName" as below.

    <HostName>governance.local.wso2.com</HostName>
    <MgtHostName>governance.local.wso2.com</MgtHostName>

    11. This step only needs to be followed if you are hosting G-Reg and ELB on the same node, as we have done in this sample. In that case we should avoid port conflicts between ELB and G-Reg; to do so we can use the "Offset" entry in the carbon.xml file. Go to GREG-HOME/repository/conf/carbon.xml and change the port offset value.

    <Offset>1</Offset>

    12. Start the ELB instance

    13. Start the G-Reg instance(s)

    14. You can access the G-Reg instance using the following URL: "https://governance.local.wso2.com:8243/carbon/"

    Chandana NapagodaPerformance Results - WSO2 Governance Registry - Part 1

    Recently I have completed a performance analysis for WSO2 Governance Registry. Here I am publishing some of the results.

    Zip Upload Thread Pool Size

    What is the Zip Upload Thread Pool:

    When we upload ZIP files containing WSDLs and schemas, they hit the ZipWSDLMediaTypeHandler. There we can tune the number of threads that work on uploading WSDLs and schemas and on creating services.

    Recommended Other Configuration:
    • Increased the timeouts for the UI to 7200000 in axis2_client.xml. 
    • Disabled WSDL and Schema validation in registry.xml. 
    • Disabled registry indexing in registry.xml. 
    • Stopped automatic versioning of resources. 
    • Set the number of maximum active DB connections to 120 and max_connections to 500 (MySQL level), and the maximum wait time to 600000. 
    • Set G-Reg memory configurations to 4GB
    With the above configuration, here are the test results for different upload thread pool sizes. The uploaded ZIP file contains 280 WSDLs and 506 schema files; time is measured in minutes and the uploaded ZIP file size is 1.4 MB (1,359,833 bytes).

    Pool Size (265 WSDL)    Test 1    Test 2    Test 3    Notes
    10                      8.48      9.17      9.12      No retrying issues or failures
    20                      9.25      8.49      8.57      No retrying issues or failures
    30                      9.22      9.07      8.12      No retrying issues or failures
    40                      9.35      9.19      9.13      Saw retrying issues
    50                      10.2      9.31      9.47      Saw retrying issues
    60                      9.50      9.40      9.54      Saw retrying issues
    70                      9.05      9.51      9.44      Saw retrying issues


    Max File Upload Size

    Recommended Other Configuration:
    • Increased the timeouts for the UI to 7200000 in axis2_client.xml. 
    • Disabled registry indexing and caching in registry.xml. 
    • Stopped automatic versioning of resources. 
    • Set the number of maximum active DB connections to 120 and max_connection in to 500(MySQL level), and the maximum wait time to 600000. 
    • Set G-Reg memory configurations to 4GB
    With the above configuration, file upload time was tested for different sizes of text content. Please note that this test was done with a separate MySQL instance as the registry database.

    File Size (MB)    Time (ms)
    1.4               5003
    4.9               9829
    31                20922
    61.9              30515
    94                45071
    115               61599
    157.3             92036
    178.3             100767
    230.7             119764

    DB Tuning

    maxActive

    The G-Reg server needs at least two database connections to start up, and the recommended minimum database connection count is 8. The maxActive count should be defined according to your requirements and the maximum number of connections your database system can handle.

    maxActive    Result
    1            Server will not start
    2            Server starts, but WSDL upload failed
    5            23 WSDL upload is not working
    7            1 timeout trying to lock table "REG_RESOURCE"; SQL statement:
    8            Can upload 23 WSDLs with 23 services
    10           Can upload 23 WSDLs with 23 services
    20           Can upload 23 WSDLs with 23 services
    50           Can upload 23 WSDLs with 23 services
    100          Can upload 23 WSDLs with 23 services


    Max Wait
    To conduct this test, we configured 8 max active connections and measured the results for different max wait values (maxActive is 8).

    maxWait    Result
    -1         Can upload all WSDLs
    60000      Can upload all WSDLs
    10000      Can upload all WSDLs, with retrying messages
    5000       Can upload all WSDLs, with retrying messages
    2500       Can upload all WSDLs, with retrying messages
    1000       Can upload all WSDLs, with retrying messages
    100        Can upload all WSDLs, with retrying messages
    10         Can upload all WSDLs, with retrying messages
    1          Can upload all WSDLs, with retrying messages

    Validation Interval

    We configured 8 as the max active connection count and conducted the previous test with different validation interval values. However, there was no effect on normal G-Reg use and the server could be used without any operations being interrupted.

    Content Search/Indexing

    To conduct these tests we uploaded the "Enterprise Search Server" PDF ebook into G-Reg and after 7 minutes ran content searches for the following terms: wildcards, Wicket, Zappos, XSLT, Clustering, BinContentStreamDataSource, David, errata. This test was conducted with the Java agent provided by New Relic: Application Performance Management & Monitoring [http://newrelic.com].

    Indexing Frequency

    When increasing the indexing frequency, the Solr cache size will increase.

    Solr - RAM Buffer

    When increasing the RAM buffer size, the Solr response time decreases. Here is the Solr recommendation: "If you will be loading a lot of data at once, then increase this value. Many experts seem to find a number in the vicinity of 128 to be good for many apps."

    Solr - Merge Factor

    When the Merge Factor is low, indexed document count is low.

    Solr - Lock Timeouts

    When we set the Solr lock timeout to a higher value, it indexes more documents at once and the Solr response time is lower. With a lower lock timeout value, only two indexing operations happened.

    ** The current WSO2 Governance Registry Solr configuration is appropriate for most G-Reg related operations, except for indexing huge files.


    Memory

    Xms
    The WSO2 G-Reg server can be started with any Xms value.

    Xmx

    The G-Reg server needs an Xmx value of at least 192m to start the Java instance.

    MaxPermSize

    The G-Reg server needs at least 128m as the MaxPermSize value to do basic operations like uploading a ZIP file (with 23 WSDLs).

    Chandana NapagodaWSO2 Governance Registry - Apply Tags using Handler

    There can be scenarios where users want to apply tags to their resources while a resource is being inserted. I am writing this post based on my answer provided to this Stack Overflow question.

    The question was about how to apply a tag to a service in WSO2 Governance Registry at the time of service creation. We can use a registry handler to achieve this requirement. Handlers are well-known extension points in WSO2 Governance Registry.

    "Handlers are pluggable components, that contain custom processing logic for handling resources. All handlers extend an abstract class named Handler, which provides default implementations for resource handling methods as well as a few utilities useful for concrete handler implementations."[WSO2 Governance Registry Docs]

    I have modified the handler sample, which is shipped with the Governance Registry pack. Please download the G-Reg pack using the above link; the handler sample is located at GREG_HOME/samples/handler.

     
    public void put(RequestContext requestContext) throws RegistryException {

        if (!CommonUtil.isUpdateLockAvailable()) {
            return;
        }
        CommonUtil.acquireUpdateLock();
        try {
            String resourcePath = requestContext.getResourcePath().getPath();
            Registry registry = requestContext.getRegistry();
            registry.applyTag(resourcePath, "CustomTag");
        } finally {
            CommonUtil.releaseUpdateLock();
        }
    }

    The compiled jar file needs to be added to the GREG_HOME/repository/components/dropins folder, and the handler registered using either the management console or the registry.xml file.

    <handler class="org.wso2.carbon.registry.samples.handler.CustomServiceHandler">
    <filter class="org.wso2.carbon.registry.core.jdbc.handlers.filters.MediaTypeMatcher">
    <property name="mediaType">application/vnd.wso2-service+xml</property>
    </filter>
    </handler>

    When you are inserting a new service, this handler will get hit and it will add the given tag to the service.
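
    To quickly confirm the handler fired, the applied tags can be read back through the Registry API. A rough sketch is given below; the class and package names are written from memory of the G-Reg core Registry API, so treat them as assumptions and adjust to your registry version:

    import org.wso2.carbon.registry.core.Registry;
    import org.wso2.carbon.registry.core.Tag;
    import org.wso2.carbon.registry.core.exceptions.RegistryException;

    public class TagCheck {

        // Prints the tags currently applied to the given resource path
        public static void printTags(Registry registry, String resourcePath) throws RegistryException {
            for (Tag tag : registry.getTags(resourcePath)) {
                System.out.println(resourcePath + " -> " + tag.getTagName());
            }
        }
    }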

    Chandana NapagodaManage SOAPAction of the Out Message

     
    When you are sending a request message to a backend service through WSO2 ESB, there can be scenarios where you need to remove or change the SOAPAction header value.


    Using the header mediator and the property mediator, which are available in WSO2 ESB, we can remove the SOAPAction or set it to empty.

    Set SOAPAction as Empty:
    <header name="Action" value=""/>
    <property name="SOAPAction" scope="transport" value=""/>

    Remove SOAPAction:
    <header action="remove" name="Action"/> 
    <property action="remove" name="SOAPAction" scope="transport"/>

    Modify SOAPAction:

    When setting the SOAPAction, one of the below approaches can be used.

    1).
    <header name="Action" value="fixedAction"/>

    2).
    <header expression="xpath-expression" name="Action"/>

    More Info: Header Mediator

    TCPMon:

    If we need to monitor the messages passed between the ESB and the backend service, we can place TCPMon [1] between the backend and the ESB. Using TCPMon, we can monitor messages and their header information (including the SOAPAction).

    At the bottom of TCPMon there is a control available to view messages in XML format.

    [1]. http://ws.apache.org/tcpmon/tcpmontutorial.html

    Chris HaddadEmbrace the Shadow Today

    Enterprise IT must embrace Shadow IT today and establish a partnership that will move the business forward at the speed of now. By understanding the Shadow IT mindset, you can bridge the divide, accelerate solution development, and empower every team to build in an enterprise-safe manner.  Start today, and take small steps towards a big vision that delivers a flexible enterprise IT environment that enables and empowers Shadow IT teams.

    Shadow IT (also called rogue IT) brings risk and reward to IT business solution development. Because ECD (externalization, consumerization, and democratization) trends are driving significant Shadow IT project growth, Enterprise IT teams must adapt and address Shadow IT requirements, autonomy, and goals.  By understanding the Shadow IT mindset and bridging the divide between the two groups, Enterprise IT teams can embrace Shadow IT as a beneficial solution development partner.

    When working with Shadow IT teams, Enterprise IT is often challenged to establish suitable cross-team architecture, development lifecycle processes, governance, and tooling. Shadow IT wants to use multiple languages, frameworks, tools, and environments that don’t fit into enterprise DevOps processes, management, and security models. Shadow IT desires rapid iterations and creative experimentation, which may not fit enterprise development lifecycle processes. Shadow IT teams often view enterprise governance as an undue and unnecessary burden. Enterprise IT software development tools do not provide a collaborative environment joining diverse Shadow IT teams with Enterprise IT teams.

    Another major hurdle is that, when adopting enterprise IT solutions, Shadow IT teams may not have the skills and best-practice knowledge (or the desire) to use new Enterprise IT tools, patterns, and processes. Enterprise IT must lower the adoption hurdle.

    The presentation below describes a template for engaging with Shadow IT:

    Lali DevamanthriJava 9 Features


    Oracle has announced the first set of enhancement proposals (known as JEPs) for Java 9, which has been targeted for release in early 2016.
    Three new APIs have been announced:

    Process API Updates for interacting with non-Java operating system processes. The limitations of the current API often force developers to resort to native code. The main risk with this API is differences between operating systems, in particular Windows. The design of this API needs to accommodate possible deployment on smaller devices with different operating system models. It should also take into account environments where multiple Java virtual machines are running in the same operating system process. These considerations could lead to a more abstract API and/or increase the design effort.
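
    For a flavour of what this enables, the sketch below uses the ProcessHandle API that this JEP ultimately produced; the exact names were not final at the time of the announcement, so treat the API shown here as illustrative:

    import java.util.stream.Collectors;

    public class ProcessApiSketch {

        public static void main(String[] args) {
            // Pid of the current JVM, without resorting to native code
            System.out.println("my pid: " + ProcessHandle.current().pid());

            // Enumerate other OS processes and print their commands where visible
            String commands = ProcessHandle.allProcesses()
                    .limit(10)
                    .map(ph -> ph.info().command().orElse("<unknown>"))
                    .collect(Collectors.joining(System.lineSeparator()));
            System.out.println(commands);
        }
    }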

    New HTTP Client that includes HTTP/2 support.
    Problems with the existing API and implementation:

    • URLConnection based API was designed with multiple protocols in mind, nearly all of which are defunct now (ftp, gopher, etc.)
    • predates HTTP 1.1 and is too abstract
    • hard to use (much behavior undocumented)
    • works in blocking mode only (one thread per request/response)
    • very hard to maintain

    HTTP/2 support depends on TLS ALPN (Application-Layer Protocol Negotiation extension), which is not currently supported in the JDK. The HTTP/2 spec itself is still in internet-draft form, but is expected to be submitted as a draft standard in November 2014.
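
    As an illustration of the fluent, builder-based style the new client eventually settled on (shown with the java.net.http package names it was standardized under later; neither the package nor the method names were fixed at the time of this announcement):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class HttpClientSketch {

        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.org/"))
                    .GET()
                    .build();

            // Blocking call shown here; the API also offers sendAsync(...) returning a CompletableFuture
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());
            System.out.println(response.body());
        }
    }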

    New lightweight JSON API, providing a light-weight API for consuming and generating JSON documents and data streams. It is expected to build upon the JSON support already standardized as part of JSR 353.
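
    Since the JEP is expected to build on JSR 353, the existing javax.json API gives a feel for the intended programming model. A minimal sketch (it requires a JSR 353 implementation on the classpath):

    import java.io.StringReader;
    import javax.json.Json;
    import javax.json.JsonObject;

    public class JsonSketch {

        public static void main(String[] args) {
            // Generating a JSON document
            JsonObject obj = Json.createObjectBuilder()
                    .add("name", "WSO2")
                    .add("founded", 2005)
                    .build();
            System.out.println(obj.toString());

            // Consuming a JSON document
            JsonObject parsed = Json.createReader(new StringReader("{\"lang\":\"java\"}")).readObject();
            System.out.println(parsed.getString("lang"));
        }
    }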

    There are also three JVM / performance related features announced:

    Improve contended locking for better performance when threads are competing for access to objects. Improving contended locking will significantly benefit real world applications, in addition to industry benchmarks such as Volano and DaCapo.

    This project will explore performance improvements in the following areas related to contended Java Monitors:

    • Field reordering and cache line alignment
    • Speed up PlatformEvent::unpark()
    • Fast Java monitor enter operations
    • Fast Java monitor exit operations
    • Fast Java monitor notify/notifyAll operations
    • Adaptive spin improvements and SpinPause on SPARC

    Segmentation of the JIT compiler’s code cache (for better JIT performance on large applications). Divide the code cache into distinct segments, each of which contains compiled code of a particular type, in order to improve performance and enable future extensions.

    The organization and maintenance of compiled code has a significant impact on performance. Instances of performance regressions of several factors have been reported if the code cache takes the wrong actions. With the introduction of tiered compilation the role of the code cache has become even more important, since the amount of compiled code increases by a factor of 2X–4X compared to using non-tiered compilation. Tiered compilation also introduces a new compiled code type: instrumented compiled code (profiled code). Profiled code has different properties than non-profiled code; one important difference is that profiled code has a predefined, limited lifetime while non-profiled code potentially remains in the code cache forever.

    The current code cache is optimized to handle homogeneous code, i.e., only one type of compiled code. The code cache is organized as a single heap data structure on top of a contiguous chunk of memory. Therefore, profiled code which has a predefined limited lifetime is mixed with non-profiled code, which potentially remains in the code cache forever. This leads to different performance and design problems. For example, the method sweeper has to scan the entire code cache while sweeping, even if some entries are never flushed or contain non-method code.

    Further development of the “smart” Java compiler, sjavac, which promises parallel and shared compilation among other features.

    Due to various issues relating to stability and portability, sjavac is not used by default in the JDK build scripts. The first goal of this JEP is to resolve these issues. This involves making sure the tool produces reliable results on all software/hardware configurations at all times.

    The overarching goal is to improve the quality of sjavac to the point where it can serve as a general purpose javac wrapper able to compile large arbitrary Java projects.

    A follow-on project will explore how sjavac is to be exposed in the JDK tool chain, if at all; this might be a separate supported standalone tool, an unsupported standalone tool, integration with javac, or something else.

    Finally, one tantalizing feature has been promised in the form of JEP 201 – Modular Source Code. This is not, as yet, the modularity solution known as Project Jigsaw (initially targeted as part of Java 8).

    Project Jigsaw aims to design and implement a standard module system for the Java SE Platform and to apply that system to the Platform itself, and to the JDK. Its primary goals are to make implementations of the Platform more easily scalable down to small devices, improve the security and maintainability, enable improved application performance, and provide developers with better tools for programming in the large.

    This JEP is part of the first phase of Project Jigsaw; later JEPs will modularize the JRE and JDK images and then introduce a module system.

    The motivations to reorganize the source code at this early stage are to:

    1. Give JDK developers the opportunity to become familiar with the modular structure of the system;
    2. Preserve that structure going forward by enforcing module boundaries in the build, even prior to the introduction of a module system; and
    3. Enable further development of Project Jigsaw to proceed without always having to “shuffle” the present non-modular source code into modular form.

    Ishara Premadasa[Quick Note] Modifying Git Configurations in Ubuntu and Windows

    OS : Ubuntu 13.10 or Windows 7

    This is a brief post on how to change default configuration entries for Git.

    There might be requirements where you need to change the default username with which commits get written to GitHub, especially if you created a new GitHub account, changed your email, etc.

    Git uses a global configuration file called .gitconfig to store the global credentials. To change the default user you can simply edit this file.

    isha@thinkpad:~$ vim .gitconfig

    For Windows, the corresponding file can be found at the git/etc/gitconfig location.
    I have set the following three properties here.

     1. user name: the GitHub username of the author.
     2. user email: the GitHub email address of the user.
     3. credential helper: turn on the credential helper so that Git will save your password in memory for some time. By default Git will cache your password for 15 minutes. You can change this by changing the --timeout property in the command.

    [user]
            name = ishadil
            email = ishadil@github.com
    [credential]
            helper = cache --timeout=3600   # I have changed the timeout to 1 hour


    The other option is to do this from the command prompt, as below.

    git config --global user.name "ishadil"

    git config --global user.email "ishadil@github.com"

    git config --global credential.helper 'cache --timeout=3600'

    After that, use the git config --global --list command to verify whether the properties are set correctly.

    git config --global --list

    In a similar manner there are a lot more configuration entries that can be set in this file. For example, let's set a proxy URL in the git config file.

    [http]
        proxy = proxy-test.abc.com:8080

    Adam FirestoneSTUXNET: ANATOMY OF A CYBER WEAPON



    This is the first of a focused two part discussion of the threats and challenges involved with cyber security.  The exploration of cyber threats and challenges is conducted using the Stuxnet attack as a lens.  The following post picks up with an allegorical analysis of the cyber threat posed by nation-state attacks as well as ideas about how information systems can be built so that they are less tempting targets.

    Stuxnet is widely described as the first cyber weapon.  In fact, Stuxnet was the culmination of an orchestrated campaign that employed an array of cyber weapons to achieve destructive effects against a specific industrial target.  This piece explores Stuxnet’s technology, its behavior and how it was used to execute a cyber-campaign against the Iranian uranium enrichment program.  This discussion will continue in a subsequent post describing an orthogonal view on the art and practice of security – one that proposes addressing security as a design-time concern with runtime impacts.

    Stuxnet, discovered in June 2010, is a computer worm that was designed to attack industrial programmable logic controllers (PLC). PLCs automate electromechanical processes such as those used to control machinery on factory assembly lines, amusement park rides, or, in Stuxnet's case, centrifuges for separating nuclear material.  Stuxnet's impact was significant; forensic analyses conclude that it may have damaged or destroyed as many as 1,000 centrifuges at the Iranian nuclear enrichment facility located in Natanz.  Moreover, Stuxnet was not successfully contained; it has been "in the wild" and has appeared in several other countries, most notably Russia.

    There are many aspects of the Stuxnet story, including who developed and deployed it and why.  While recent events seem to have definitively solved the attribution puzzle, Stuxnet’s operation and technology remain both clever and fascinating. 

A Stuxnet attack begins with a USB flash drive infected with the worm.  Why a flash drive?  Because the targeted networks are not usually connected to the internet.  These networks have an “air gap” physically separating them from the internet for security purposes.  That being said, USB drives don’t insert themselves into computers.  The essential transmission mechanism for the virus is, therefore, biological: a user.

    I’m tempted to use the word “clueless” to describe such a user, but that wouldn’t be fair.  Most of us carbon-based, hominid, bipedal Terran life forms are inherently entropic – we’re hard-wired to seek the greatest return for the least amount of effort. In the case of a shiny new flash drive that’s just fallen into one’s lap, the first thing we’re inclined to do is to shove it into the nearest USB port to see what it contains.  And if that port just happens to be on your work computer, on an air-gapped network. . .well, you get the picture.

    It’s now that Stuxnet goes to work, bypassing both the operating system’s (OS) inherent security measures and any anti-virus software that may be present.  Upon interrogation by the OS, it presents itself as a legitimate auto-run file.  Legitimacy, in the digital world, is conferred by means of a digital certificate.  A digital certificate (or identity certificate) is an electronic cryptographic document used to prove identity or legitimacy.  The certificate includes information about a public cryptographic key, information about its owner's identity, and the digital signature of an entity that has verified the certificate's contents are correct.  If the signature is valid, and the person or system examining the certificate trusts the signer, then it is assumed that the public cryptographic key or software signed with that key is safe for use.

Stuxnet proffers a stolen digital certificate to prove its trustworthiness.  Now vetted, the worm begins its own interrogation of the host system: Stuxnet confirms that the OS is a compatible version of Microsoft Windows and, if an anti-virus program is present, whether it is one that Stuxnet’s designers had previously compromised.  Upon receiving positive confirmation, Stuxnet downloads itself into the target computer.

    It drops two files into the computer’s memory.  One of the files requests a download of the main Stuxnet archive file, while the other sets about camouflaging Stuxnet’s presence using a number of techniques, including modifying file creation and modification times to blend in with the surrounding system files and altering the Windows registry to ensure that the required Stuxnet files run on startup.  Once the archived file is downloaded, the Stuxnet worm unwraps itself to its full, executable form.

    Meanwhile, the original Stuxnet infection is still on the USB flash drive.  After successfully infecting three separate computers, it commits “security suicide.”  That is, like a secret agent taking cyanide to ensure that she can’t be tortured to reveal her secrets, Stuxnet deletes itself from the flash drive to frustrate the efforts of malware analysts.

    Internally to the target computer, Stuxnet has been busy.  It uses its rootkit to modify, and become part of the OS.  Stuxnet is now indistinguishable from Windows; it’s become part of the computer’s DNA.  It’s now that Stuxnet becomes a detective, exploring the computer and looking for certain files.  Specifically, Stuxnet is looking for industrial control system (ICS) software created by Siemens called Simatic PCS7 or Step 7 running on a Siemens Simatic Field PG notebook (a Windows-based system dedicated for ICS use).  

The problem facing Stuxnet at this point is that a computer can contain millions, if not tens of millions, of files and finding the right Step 7 file is a bit like looking for a needle in a haystack.  In order to systematize the search, Stuxnet needs to find a way to travel around the file system as it conducts its stealthy reconnaissance.  It does this by attaching itself to a very specific kind of process: one that is trusted at the highest levels by the OS and that looks at every single file on the computer.  Something like. . .

. . .the scan process used by anti-virus software.  In the attack on the facility in Natanz, Stuxnet compromised and used the scan processes from leading anti-virus programs.  (It’s worth noting that all of the companies whose products were compromised have long since remedied the vulnerabilities that Stuxnet exploited.)  Along the way, Stuxnet compromises every comparable process it comes across, pervading the computer’s memory and exploiting every resource available to execute the search.

    All the while, Stuxnet is constantly executing housekeeping functions.  When two Stuxnet worms meet, they compare version numbers, and the earlier version deletes itself from the system.   Stuxnet also continuously evaluates its system permission and access level.  If it finds that it does not have sufficient privileges, it uses a previously unknown system vulnerability (such a thing is called a “Zero-Day,” and will be discussed below) to grant itself the highest administrative privileges and rights.    If a local area network (LAN) connection is available, Stuxnet will communicate with Stuxnet worms on other computers and exchange updates – ensuring that the entire Stuxnet cohort running within the LAN is the most virulent and capable version.   If an Internet connection is found, Stuxnet reaches back to its command and control (C2) servers and uploads information about the infected computers, including their internet protocol (IP) addresses, OS types and whether or not Step 7 software has been found.

As noted earlier, Stuxnet relied on four Zero-Day vulnerabilities to conduct its attacks.  Zero-Days are of particular interest to hacker communities: since they’re unknown, they are by definition almost impossible to defend against.  Stuxnet’s four Zero-Days included:


• The Microsoft Windows shortcut automatic file execution vulnerability, which allowed the worm to spread through removable flash drives;
• A print spooler remote code execution vulnerability; and
• Two different privilege escalation vulnerabilities.

    Once Stuxnet finds Step 7 software, it patiently waits and listens until a connection to a PLC is made.  When Stuxnet detects the connection, it penetrates the PLC and begins to wreak all sorts of havoc.  The code controlling frequency converters is modified and Stuxnet takes control of the converter drives.  What’s of great interest is Stuxnet’s method of camouflaging its control.   

    Remember the scene in Mission Impossible, Ocean’s 11 and just about every other heist movie where the spies and/or thieves insert a video clip into the surveillance system?  They’re busy emptying the vault, but the hapless guard monitoring the video feed only sees undisturbed safe contents.  Stuxnet turned this little bit of fiction into reality.  Reporting signals indicating abnormal behavior sent by the PLC are intercepted by Stuxnet and in turn signals indicating nominal, normal behavior are sent to the monitoring software on the control computer.

Stuxnet is now in the position to effect a physical attack against the gas centrifuges.  To understand the attack it’s important to understand that centrifuges work by spinning at very high speeds and that maintaining these speeds within tolerance is critical to their safe operation.  Typically, gas centrifuges used to enrich uranium operate at between 807 Hz and 1,210 Hz, with 1,064 Hz as a generally accepted standard.

Stuxnet used the infected PLCs to cause the centrifuge rotors to spin at 1,410 Hz for short periods of time over a 27-day period.  At the end of the period, Stuxnet would cause the rotor speed to drop to 2 Hz for fifty minutes at a time.  Then the cycle repeated.  The result was that over time the centrifuge rotors became unbalanced, the motors wore out and, in the worst cases, the centrifuges failed violently.

Stuxnet destroyed as much as twenty percent of the Iranian uranium enrichment capacity.  There are two really fascinating lessons that can be learned from the Stuxnet story.  The first is that cyber-attacks can and will have effects in the kinetic and/or physical realm.  Power grids, water purification facilities and other utilities are prime targets for such attacks.  The second is that within the current design and implementation paradigms by which software is created and deployed, if a bad actor with the resources of a nation-state wants to ruin your cyber-day, your day is pretty much going to be ruined.

    But that assumes that we maintain the current paradigm of software development and deployment.  In my next post I’ll discuss ways to break the current paradigm and the implications for agile, resilient systems that can go into harm’s way, sustain a cyber-hit and continue to perform their missions.

    Ishara Premadasa[WSO2 ESB] Enrich Mediator Patterns

The WSO2 ESB Enrich mediator can process a message based on a given source configuration and then perform the specified action on the message using the target configuration. The following post specifies some of the source/target patterns that can be used with the Enrich mediator. The flow is explained in the 'description' attribute of each mediator entry to make it easily understandable.

    1. Adding a property value into message body as a sibling

    <property name="failureResultProperty" scope="default" description="FailureResultProperty">
            <result xmlns="">failure</result> 

    </property>
    <enrich>
            <source xpath="$ctx:failureResultProperty"/>
            <target type="body" action="sibling"/> 

    </enrich>   
       
2. Append new elements into a property using two Enrich mediators

    <property name="testMessages" scope="default" description="Test Messages Property">
            <
    testMessages/> 
    </property>
              
    <property name="testMsg" expression="//testMessage" scope="default" type="OM"/>

    <enrich description="Get testMessages into current msg body">
                <source type="property" property="testMessages"/>
                <target type="body"/> 

    </enrich>
    <enrich description="Append testMsg into testMessages property">
                <source xpath="$ctx:testMsg"/>
                <target type="body" action="child"/> 

    </enrich>

    <property name="testMessages" description="Set testMessages property back"  expression="//testMessages" scope="default" type="OM"/>
           
           
The final output for the 'testMessages' property after enriching will be the <testMessages> element with the appended <testMessage> elements as its children.
3. Append directly to a property using a single Enrich mediator.

<property name="testMessages" scope="default" description="Test Messages Property">
    <testMessages/>
</property>
<enrich>
    <source xpath="//testMessage"/>
    <target action="child" xpath="$ctx:testMessages"/>
</enrich>

        

    Ishara Premadasa[WSO2 ESB] How to Read Local Entries and Registry Entries

This post is rather a quick note to myself. In WSO2 ESB we can have an external reference to a file using either a local entry or a registry entry.

    Local-entry :

    An entry stored in the local registry. The local registry acts as a memory registry where you can store text strings, XML strings, and URLs. These entries can be retrieved from a mediator.

    Registry-entry :
    WSO2 ESB makes use of a registry to store various configurations and artifacts such as sequences and endpoints. A registry is a content store and a metadata repository. Various SOA artifacts such as services, WSDLs, and configuration files can be stored in a registry and referred to by a key, which is a path similar to a UNIX file path.

1. This is how we can read a value stored in a local entry. Let's say I have a value stored in the local entry 'testEntry'.

    <localEntry xmlns="http://ws.apache.org/ns/synapse" key="testEntry">12345</localEntry>

     This is how we can read this value into a mediator.

    <property name="testProp" expression="get-property('testEntry')" scope="default" type="STRING"/>

2. If this 'testEntry' file is stored in the registry, we have to use the 'registry' scope with the get-property() XPath extension function and read the entry as below.

    <property name="testProp" expression="get-property('registry', 'conf://custom/testEntry')" scope="default" type="STRING"/>

    Ishara Premadasa[WSO2 ESB] Property mediator : Performing XPath Expressions with and without Namespaces

Assume that we have an XML payload where several namespaces are defined and we need to retrieve the values of some elements from the payload into properties using XPath. There are several ways to do this. We can either define the namespace in the property mediator and refer to the XML element via a namespace-qualified XPath expression, or avoid namespaces altogether using the local-name() function. Let's get started with this.

    Below is our payload.

    <abc  xmlns="http://abc.xyz.com" >
             <A>YES</A>
             <B>{"abc":"Y","d":"N","e":"N"}</B>
    </abc>

For e.g. I need to read the value of 'A' into a property.  Following are the two ways to do this. However, the 2nd option is a cleaner approach since we can get rid of adding namespace references everywhere. In addition to the Property mediator, this can be used with other mediators in WSO2 ESB where XPath operations are supported.

    1. Use name space entries with property mediator

    <property  xmlns:ns="http://abc.xyz.com" name="aValue" expression="//ns:abc/ns:A" scope="default" type="STRING"/>

    2. Use local-name() option in XPath in order to avoid name spaces and get the same element value with property mediator.

    <property name="aValueWithoutNS" expression="//*[local-name()='A']" scope="default" type="STRING"/>


     

    Ishara Premadasa[WSO2 ESB] XSLT mediator : Writing a simple style sheet

The XSLT mediator of WSO2 ESB applies a specified XSLT transformation to a selected element of the current message payload. In this post I am going to write a simple XSLT style sheet which reads values from the current XML payload inside the ESB using XPath and populates them into the style sheet to create a new, different payload.

    Below is our original XML payload.
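
A minimal sketch of such a payload, assuming an <Order> element with <lunch> entries containing the <drinkName> and <drinkPrice> elements referenced by the style sheet below (values are illustrative only):

    <Order>
        <lunch>
            <drinkName>Coffee</drinkName>
            <drinkPrice>2.50</drinkPrice>
        </lunch>
        <lunch>
            <drinkName>Tea</drinkName>
            <drinkPrice>1.80</drinkPrice>
        </lunch>
    </Order>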



We will be passing this payload into the XSLT mediator, specifying a certain drink name as a parameter to the style sheet. For e.g. I am passing the drink name as 'Coffee'. The style sheet will traverse the incoming payload and find the <lunch> elements which contain 'Coffee' as the drink name. If matches are found, we add the prices of those elements under a new <Payment> element. So when we come out of the XSLT mediator the payload will have changed to the <Payment> entry containing the drinkPrices of the matching elements.

    The style sheet 'discountPayment.xsl' is like this.

    <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:fn="http://www.w3.org/2005/02/xpath-functions" xmlns:m0="http://services.samples" version="2.0" exclude-result-prefixes="m0 fn">
            <xsl:output method="xml" omit-xml-declaration="yes" indent="yes"/>
            <xsl:param name="drink_name"/>
            <xsl:template match="/">
                <Payment>
                    <xsl:for-each select="//Order/lunch[contains(drinkName, $drink_name)]">
                        <discount>
                            <xsl:value-of select="drinkPrice"/>
                        </discount>
                    </xsl:for-each>
                </Payment>
            </xsl:template>
    </xsl:stylesheet>

Add this style sheet into the local entries of the ESB and it can be referred to from the XSLT mediator as below.
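
A minimal sketch of that mediator configuration, assuming the style sheet is saved as a local entry with the key 'discountPayment.xsl' and the drink name is passed in through the 'drink_name' parameter:

    <xslt key="discountPayment.xsl">
        <!-- passed to the style sheet as the drink_name parameter -->
        <property name="drink_name" value="Coffee"/>
    </xslt>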

After coming out of the XSLT mediator, our current message payload has changed as below.
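
With the sketch payload shown earlier, the transformed payload would look roughly like this (each <discount> simply echoes a matching drinkPrice):

    <Payment>
        <discount>2.50</discount>
    </Payment>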


Ishara Premadasa[WSO2 ESB] Using [resource] attribute for [xsd:import] elements in a schema file with Validate Mediator

The Validate mediator in WSO2 ESB is used for schema validation of messages; the incoming message is validated against the schema specified in the mediator configuration.

When we use the Validate mediator against a schema, that schema file itself can contain references to other schemas under <xsd:import> elements like below.
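
A minimal sketch of such a parent schema, assuming a hypothetical imported schema file named 'child.xsd' and placeholder namespaces:

    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                targetNamespace="http://services.samples">
        <!-- the imported schema that the Validate mediator must be able to resolve -->
        <xsd:import namespace="http://services.samples/types" schemaLocation="child.xsd"/>
    </xsd:schema>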


If our schema file is like this, the Validate mediator tries to locate the internal schemas which are defined in the <xsd:import> elements. If these cannot be found, the ESB will log an error saying "The resource {resource_file_name} can not be located".

    Therefore in order to use these schema files with validate mediator follow the steps given below.

1. First add all the xsd files as local entries to the ESB. There is no support for relative paths or nested folders, so the xsd files have to be in the local-entries location itself. However, if you put the schemas into the registry, nested folders are supported.

2. The validate mediator supports a <resource> element. Use this option to tell the Validate mediator where to find the files to import, as shown below. Make sure the 'location' and 'schemaLocation' entries have the same file name in order to map them correctly. The 'key' attribute is then used to specify the current location of this schema file, as shown in the given example.
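
A minimal sketch of such a Validate mediator configuration, assuming the parent schema and 'child.xsd' have been added as local entries:

    <validate>
        <schema key="parentSchema"/>
        <!-- 'location' matches the schemaLocation used in the parent schema's xsd:import;
             'key' points to where the imported schema actually lives -->
        <resource location="child.xsd" key="child.xsd"/>
        <on-fail>
            <log level="custom">
                <property name="validation" value="failed"/>
            </log>
            <drop/>
        </on-fail>
    </validate>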


If you have added the schema files as registry entries, and there are relative paths mentioned in the parent schema, the Validate mediator can be used in the following manner.
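
A sketch along the same lines for the registry case, assuming the schemas were uploaded under a hypothetical conf:/custom/schemas/ collection:

    <validate>
        <schema key="conf:/custom/schemas/parent.xsd"/>
        <!-- key now points to the registry path of the imported schema -->
        <resource location="child.xsd" key="conf:/custom/schemas/child.xsd"/>
        <on-fail>
            <drop/>
        </on-fail>
    </validate>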



    Lali DevamanthriMultitenancy support with IBM SDK,7R1


The multitenant JVM recently became available as part of the IBM SDK Java™ Technology Edition, Version 7 Release 1, as a tech preview. It is a mechanism for sharing run-time resources across multiple Java applications running in a single shared JVM.

A cloud system can speed up application processing and reduce memory usage by running multiple applications together within a single multitenant JVM. According to cloud providers, there are two popular multitenant architectures: shared and non-shared. In the shared architecture, the underlying hardware and software are shared among customers. In the non-shared multitenant architecture, the complete set of hardware and software is fully dedicated to a single customer only. Obviously, in the shared architecture the overall cost involved is a lot lower.

• Besides reducing processing time, better isolation is achieved between tenants and applications sharing the JVM.
• Reduces applications’ start times, as subsequent applications take less time to start when the JVM is already running.
• Reduces overall cost, as a single set of hardware and software is shared by multiple tenants.

Multitenant JVM vs. multiple standard JVMs

Instead of using the multitenant JVM, a developer can use multiple standard JVMs. This approach comes with the following memory-consumption issues.

    • The Java heap consumes hundreds of megabytes of memory. Heap objects cannot be shared between JVMs, even when the objects are identical. Furthermore, JVMs tend to use all of the heap that’s allocated to them even if they need the peak amount for only a short time.
    • The Just-in-time (JIT) compiler consumes tens of megabytes of memory, because generated code is private and consumes memory. Generated code also takes significant processor cycles to produce, which steals time from applications.
    • Internal artifacts for classes (many of which, such as String and Hashtable, exist for all applications) consume memory. One instance of each of these artifacts exists for each JVM.
    • Each JVM has a garbage-collector helper thread per core by default and also has multiple compilation threads. Compilation or garbage-collection activity can occur simultaneously in one or more of the JVMs, which can be suboptimal as the JVMs will compete for limited processor time.

Given these costs, the maximum number of concurrent applications that can be run on the multitenant JVM improves by nearly 5X.

Application    Description                           Improvement with multitenant JVM
Hello World    Print “HelloWorld” and then sleep     4.2X to 4.9X
Jetty          Start Jetty and wait for requests     1.9X
Tomcat         Start Tomcat and wait for requests    2.1X
JRuby          Start JRuby and wait for requests     1.2X to 2.1X

    Using the multitenant JVM

    import java.io.*;
    
    public class HelloFile {
      public static void main(String[] args) throws IOException {
        try(PrintStream out = new PrintStream("hello.txt")) {
          out.println("Hello, Tenant!");
        }
      }
    }

    Compiling and invoking the above program:

    $ javac HelloFile.java
    $ java -Xmt HelloFile

    Resource constraints
    The multitenant JVM provides controls that can be configured to limit a tenant’s ability to misbehave and use resources in a way that affects other tenants. Values that can be controlled include:

    • Processor time
    • Heap size
    • Thread count
    • File I/O: read bandwidth, write bandwidth
    • Socket I/O: read bandwidth, write bandwidth

    These controls can be specified in the -Xmt command line. For example:

    • -Xlimit:cpu=10-30 (10 percent minimum CPU, 30 percent maximum)
    • -Xlimit:cpu=30 (30 percent maximum CPU)
    • -Xlimit:netIO=20M (maximum bandwidth of 20 Mbps)
• -Xms8m -Xmx64m (initial 8 MB heap, 64 MB maximum)
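
Putting these together, a sketch of an invocation that combines the tenant limits with the HelloFile example from above (values chosen purely for illustration):

    $ java -Xmt -Xlimit:cpu=10-30 -Xlimit:netIO=20M -Xms8m -Xmx64m HelloFile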

     

    Documented limitations

    Multitenancy cannot be applied arbitrarily to Java applications. There are documented limitations.

    • Native libraries (including GUIs like SWT)
    • Debuggers and profilers

     

     


    Chandana NapagodaHost name verification failed for

    Are you getting a "javax.net.ssl.SSLException: Host name verification failed for host" exception as below while trying to connect with a different host? 

    TID: [0] [AM] [2014-08-28 14:41:51,936] ERROR {org.apache.synapse.transport.passthru.TargetHandler} - I/O error: Host name verification failed for host : vminstance.domain.com {org.apache.synapse.transport.passthru.TargetHandler}

    javax.net.ssl.SSLException: Host name verification failed for host : vminstance.domain.com at org.apache.synapse.transport.http.conn.ClientSSLSetupHandler.verify(ClientSSLSetupHandler.java:152)
    at org.apache.http.nio.reactor.ssl.SSLIOSession.doHandshake(SSLIOSession.java:285)
    at org.apache.http.nio.reactor.ssl.SSLIOSession.isAppInputReady(SSLIOSession.java:380)
    at org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:118)
    at org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:160)
    at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:342)
    at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:320)
    at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:280)
    at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:106)
    at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:604)
    at java.lang.Thread.run(Thread.java:745)


The reason is that hostname verification is set to 'Default'. In order to get rid of this issue, modify the https transportSender configuration in axis2.xml to turn hostname verification off, as below.


<parameter name="HostnameVerifier">AllowAll</parameter>
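
As a sketch of where this parameter sits, assuming the pass-through HTTPS sender used by recent ESB versions (other parameters such as the keystore configuration are omitted):

    <transportSender name="https" class="org.apache.synapse.transport.passthru.PassThroughHttpSSLSender">
        <!-- existing keystore/truststore parameters go here -->
        <parameter name="HostnameVerifier">AllowAll</parameter>
    </transportSender>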

    More Information about HostnameVerifier.

    Lali DevamanthriWeb Services Authentication Security Patterns


In the development of secure applications, patterns are useful in the design of security functionality. Mature security products or frameworks are usually employed to implement such functionality. One of the most exciting developments in software engineering is the emergence of design patterns as an approach to capturing, reusing, and teaching software design expertise. Yet, without a deeper comprehension of these products, the implementation of security patterns is difficult, as a non-guided implementation leads to non-deterministic results. Security engineering aims for consistently secure software development by introducing methods, tools, and activities into a software development process.

There are two main patterns (when considering architecture) for authentication. Both patterns focus on the relationships that exist between a client and a service participating in a Web service interaction.

1. Direct authentication
The Web service acts as an authentication service to validate credentials from the client. The credentials, which include proof-of-possession based on shared secrets, are verified against an identity store.

2. Brokered authentication
The Web service validates the credentials presented by the client, without the need for a direct relationship between the two parties. An authentication broker that both parties trust independently issues a security token to the client. The client can then present credentials, including the security token, to the Web service.

Considering the design, brokered authentication can be subcategorized as follows.

    Brokered Authentication: Kerberos
    Use the Kerberos protocol to broker authentication between clients and Web services.

The Web server has to perform a handshake with the browser to obtain the Kerberos token. The token can be validated against a keytab file or by connecting through Active Directory.
The diagram below explains how the handshake happens between the browser and the web server to obtain a Kerberos token for authentication.

    Brokered Authentication: X.509 PKI
    Use brokered authentication with X.509 certificates issued by a certificate authority (CA) in a public key infrastructure (PKI) to verify the credentials presented by the requesting application.

The X.509, PKI X.509 and Public Key Cryptography Standards (PKCS) are the building blocks of a PKI system that define the standard formats for certificates and their use. A typical X.509 standard digital certificate has the following format:

    Brokered Authentication: STS
    Use brokered authentication with a security token issued by a Security Token Service (STS). The STS is trusted by both the client and the Web service to provide interoperable security tokens.
    The Security Token Service, based on WS-Trust specification addresses the token translation challenge where a web service or client can translate one token to another token. Security Token Service should be part of Web Services Security Architecture and it acts as a broker in translating one token to another token format. Whether or not you have a WS-Security product (which may or may not have STS), your Security Architecture should consider STS as a key Architecture building block.

    Client Applications + STS + WS-Security Gateway == SOAP Message with appropriate authentication token.

    With the above architecture, you actually delegate the token translation and certain cryptographic key management to a central service. STS can be extended to include any number of input and output token formats without affecting the client applications and removing redundant code across various client applications.

     


    Saliya EkanayakeReferring Methods that Throw Exceptions in Java

The ability to refer to (pass) methods in Java 8 is a convenient feature. However, as a programmer you might face the situation where some code that seemingly follows the correct syntax to refer to a method that throws an exception gives a compilation error of an Unhandled Exception, which doesn't go away by wrapping the call in a try/catch or adding a throws clause to the method signature. See the following code:
import java.util.function.Function;

public class PassingMethodsThatThrowExceptions {

    // NotANumberException is assumed to be a custom checked exception; its definition is omitted here.
    public static int addOne(String value) throws NotANumberException {
        int v = 0;
        try {
            v = Integer.parseInt(value);
        } catch (NumberFormatException e) {
            throw new NotANumberException();
        }
        return v + 1;
    }

    public static void increment(Function<String, Integer> incrementer, String value) {
        System.out.println(incrementer.apply(value));
    }

    public static void main(String[] args) {
        increment(PassingMethodsThatThrowExceptions::addOne, "10");
    }
}
    This is a simple code, which has
    • an addOne function that takes in a String value representing a number then adds 1 to it and returns the result as an int.
    • an increment function that simply takes a function, which can perform the increment and a value then apply the function to the value.
    • the main method that calls increment with addOne function and value "10"
Note: the addOne function is declared to throw a possible exception of type NotANumberException (the specific exception type is NOT important here).
This code will result in the following compilation error:
        Error: java: incompatible thrown types exceptions.NotANumberException in method reference
If you use an IDE such as IntelliJ IDEA, it'll show Unhandled Exception: NotANumberException for the increment method call in main, and adding try/catch will not work.
    What's going wrong here? It's actually a mistake on your end.
    The increment function expects a function that takes a String and returns an int, but you forgot to mention that this method may also throw an exception of type NotANumberException.
    The solution is to correct the type of incrementer parameter in increment function.
Note: you'll need to write a new functional interface because you can't add throws NotANumberException to the java.util.function.Function interface that's used to define the type of the incrementer parameter here.
    Here's the working solution in full.
public class PassingMethodsThatThrowExceptions {

    // Functional interface whose single abstract method declares the checked exception.
    public interface IncrementerSignature {
        int apply(String value) throws NotANumberException;
    }

    public static int addOne(String value) throws NotANumberException {
        int v = 0;
        try {
            v = Integer.parseInt(value);
        } catch (NumberFormatException e) {
            throw new NotANumberException();
        }
        return v + 1;
    }

    public static void increment(IncrementerSignature incrementer, String value) throws NotANumberException {
        System.out.println(incrementer.apply(value));
    }

    public static void main(String[] args) {
        try {
            increment(PassingMethodsThatThrowExceptions::addOne, "10");
        } catch (NotANumberException e) {
            e.printStackTrace();
        }
    }
}
Also, note this is NOT something to do with method references or Java 8 in general. You may face a similar situation even in a case where you implement a method of an interface and in the implementation you add throws SomeException to the signature. Here's a stackoverflow post you might like to see on this.
    Hope this helps!

    Sivajothy VanjikumaranHow to log the Content-Type in WSO2 ESB


To identify the Content-Type during mediation in WSO2 ESB, please refer to the configuration given below. I have modified one of the samples shipped with WSO2 ESB.
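
A minimal sketch of the relevant log mediator entry, assuming the Content-Type is read from the transport headers:

    <log level="custom">
        <!-- reads the Content-Type transport header of the current message -->
        <property name="ContentType" expression="get-property('transport', 'Content-Type')"/>
    </log>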


    Sivajothy VanjikumaranList of "conditional content aware mediators" and "content aware mediators"

I have listed down the set of "conditional content aware mediators" and "content aware mediators" below.

    conditional content aware mediator

    fastXSLT
    filter
    header
    log
    property
    switch

    content aware mediators

    bean
    cache
    callout
    clone
    command
    conditional router
    dblookup
    dbreport
    ejb
    enrich
    event
    payloadfactory
    script
    spring
    store
    validate
    xquery
    xslt
    iterate


    Aruna Sujith KarunarathnaUsing WSO2 admin services to upload Carbon Applications - With Sample

Hi all, in this post we are going to explore how to use Carbon admin services and how to consume them properly. There are a lot of Carbon admin services available for WSO2 Carbon based products. To list out all the admin services, follow the steps below. Start a WSO2 product using the following command (in this particular example I am using WSO2 ESB 4.8.0):

aruna@aruna:~$ ./wso2server.sh
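
A minimal sketch of how the list is usually obtained, assuming the server is started with the OSGi console enabled:

    aruna@aruna:~$ ./wso2server.sh -DosgiConsole
    # once the server has started, at the OSGi console prompt:
    osgi> listAdminServices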

    Udara LiyanageWSO2 ESB ConcurrentConsumers and MaxConcurrentConsumers

    When using WSO2 ESB as a JMS consumer you can use ConcurrentConsumers and MaxConcurrentConsumers properties to control the number of threads used to consume messages in the JMS queue or topic.

ConcurrentConsumers is the minimum number of threads used for message consumption. If there are more messages to be consumed while those running threads are busy, additional threads are started until the total number of threads reaches MaxConcurrentConsumers. Simply put, initially "ConcurrentConsumers" threads are started in order to consume messages; if more messages remain in the queue/topic while the running threads are busy, additional threads are started to consume them, until the total number of threads reaches the "MaxConcurrentConsumers" limit.

Below is a sample JMSListener configuration in axis2.xml. It tells the listener to use 5 threads initially and to increase the number of threads up to 20 according to the load.

<transportReceiver name="jms" class="org.apache.axis2.transport.jms.JMSListener">
    <parameter name="myTopicConnectionFactory" locked="false">
        <parameter locked="false" name="transport.jms.ConcurrentConsumers">5</parameter>
        <parameter locked="false" name="transport.jms.MaxConcurrentConsumers">20</parameter>
        <parameter name="java.naming.factory.initial" locked="false">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
        <parameter name="java.naming.provider.url" locked="false">failover:tcp://localhost:61616</parameter>
        <parameter name="transport.jms.ConnectionFactoryJNDIName" locked="false">TopicConnectionFactory</parameter>
        <parameter name="transport.jms.ConnectionFactoryType" locked="false">topic</parameter>
    </parameter>
</transportReceiver>

     

Additionally, you can override the values from axis2.xml in your proxy service too.
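
A minimal sketch of such an override, assuming the values are declared as service-level parameters on a hypothetical JMS proxy (only the relevant parts shown):

    <proxy xmlns="http://ws.apache.org/ns/synapse" name="JMSConsumerProxy" transports="jms" startOnLoad="true">
        <target>
            <inSequence>
                <log level="full"/>
                <drop/>
            </inSequence>
        </target>
        <!-- these override the defaults defined in axis2.xml for this proxy only -->
        <parameter name="transport.jms.ConcurrentConsumers">10</parameter>
        <parameter name="transport.jms.MaxConcurrentConsumers">50</parameter>
    </proxy>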


    Udara LiyanageUpdating API documentation using Swagger- Add a custom header

In order to understand and test the APIs created, WSO2 API Manager provides interactive documentation. WSO2 AM incorporates Swagger [https://developers.helloreverb.com/swagger] for this purpose. In Swagger, we can define an API using a static JSON file. In APIM, when we create an API, it automatically generates the JSON representation of the API, which is loaded by Swagger. In this tutorial, let's see how we can edit the JSON representation in order to add a custom header when calling the API. Below are the steps that we are going to follow in this tutorial.

• Create an API
• Use Swagger to invoke the API
• Describe the Swagger documentation
• Edit the Swagger documentation to add a custom header named “username”
• Invoke the API using Swagger with the newly added header

     

Step 1: Create an API

Design API

Implement API

Manage API

Swagger doc

To view the Swagger documentation, navigate to the APIM Store -> select the API -> click on the "API Console" tab.

Here you see a header called "Authorization" and a query parameter named "Query Parameters". However, if you want to add another header or query parameter, you have to edit the Swagger documentation of the API.

     

Step 2: Edit the Swagger doc

To edit the Swagger documentation, navigate to the APIM Publisher -> select the API -> click the Docs tab -> Edit Content link.

     

    Below is the existing JSON representation of the API we created.

{
    "apiVersion": "1.0.0",
    "swaggerVersion": "1.1",
    "basePath": "http://192.168.122.1:8280",
    "resourcePath": "/swagger",
    "apis": [
        {
            "path": "/swagger/1.0.0/users",
            "description": "",
            "operations": [
                {
                    "httpMethod": "GET",
                    "summary": "",
                    "nickname": "",
                    "parameters": [
                        {
                            "name": "Query Parameters",
                            "description": "Request Query Parameters",
                            "paramType": "body",
                            "required": false,
                            "allowMultiple": false,
                            "dataType": "String"
                        },
                        {
                            "name": "Authorization",
                            "description": "OAuth2 Authorization Header",
                            "paramType": "header",
                            "required": false,
                            "allowMultiple": false,
                            "dataType": "String"
                        }
                    ]
                }
            ]
        }
    ]
}

Under the "parameters" section you can see the already defined header "Authorization". Let's add another header, "username", by adding the below parameter definition under the same "parameters" section.

{
    "name": "username",
    "description": "username of the user",
    "paramType": "header",
    "required": false,
    "allowMultiple": false,
    "dataType": "String"
}

After adding the header representation, the whole Swagger documentation of the API becomes:

{
    "apiVersion": "1.0.0",
    "swaggerVersion": "1.1",
    "basePath": "http://192.168.122.1:8280",
    "resourcePath": "/swagger",
    "apis": [
        {
            "path": "/swagger/1.0.0/users",
            "description": "",
            "operations": [
                {
                    "httpMethod": "GET",
                    "summary": "",
                    "nickname": "",
                    "parameters": [
                        {
                            "name": "Query Parameters",
                            "description": "Request Query Parameters",
                            "paramType": "body",
                            "required": false,
                            "allowMultiple": false,
                            "dataType": "String"
                        },
                        {
                            "name": "Authorization",
                            "description": "OAuth2 Authorization Header",
                            "paramType": "header",
                            "required": false,
                            "allowMultiple": false,
                            "dataType": "String"
                        },
                        {
                            "name": "username",
                            "description": "username of the user",
                            "paramType": "header",
                            "required": false,
                            "allowMultiple": false,
                            "dataType": "String"
                        }
                    ]
                }
            ]
        }
    ]
}

    New Swagger Doc


     

Now add the new header name to the Access-Control-Allow-Headers section of repository/conf/api-manager.xml as below.

<Access-Control-Allow-Headers>authorization,Access-Control-Allow-Origin,Content-Type,username</Access-Control-Allow-Headers>
    

     

    HTTP request before and after

    Request to the API before

    udara@udara$ nc -l 7777
    GET http://localhost:7777/users HTTP/1.1
    Accept: */*
    Host: localhost:7777
    Connection: Keep-Alive
    User-Agent: Synapse-PT-HttpComponents-NIO
    

    Request to the API after adding the header to the Swagger documentation

    udara@udara$ nc -l 7777
    GET http://192.168.122.1:7777/users HTTP/1.1
    username: udara
    Accept: */*
    Host: 192.168.122.1:7777
    Connection: Keep-Alive
    User-Agent: Synapse-PT-HttpComponents-NIO
    

     


    John MathonEnterprise Application Platform 3.0 is a Social, Mobile, API Centric, Bigdata, Open Source, Cloud Native Multi-tenant Internet of Things IOT Platform

    What is Platform 3.0?

It is the new development methodology, technologies and best practices that have evolved since the revolution caused by the growth of mobile, cloud, social, bigdata, open source and APIs.  This article explains what Platform 3.0 is, what it is composed of, and why those things are part of the platform.

    I’m not the only one talking about a new platform.   You can find others promoting the idea we are migrating to a new software world.  I have some references to them at the end of the article.

    The source of Platform 3.0

I have written about the virtuous circle as the basis for Platform 3.0, providing the motivation for why you should adopt a large part of the circle to succeed.    It is not any particular technology of the “day” that will gain your success, e.g. Cassandra.

    The circle introduces a disruptive improvement in real business opportunity:

    1. More and better interaction with customers and partners leading to higher customer satisfaction and transactions

    2. faster time to market

    3. More insight into your customers 

    4. lower capital costs

    5. lower transaction or variable cost

     

    Virtuous Circle

    History of Enterprise Platforms

    Let’s start with a small intro to the past Enterprise Software platforms 1.0 and 2.0.

    Platform 1.0

The yin-yang that has been seen over and over again in the tech industry is between centralized vs distributed computing.  In the early days of computing and up to the mainframe era there was a lot of support for centralization of data and processing due to the cost and power of the mainframe.   IBM dominated this era and still has many customers for its mainframe centralized technology platform.  While those of us raised in the distributed era may give short shrift to this platform, the fact that most large companies today still have large parts of platform 1.0 running in their infrastructure performing critical business functions testifies to the enormous lasting value created.  This was the era in which hardware dominated computing.

    Platform 2.0

    During the late 80s and through 2010 we saw the growth of the distributed platform.   The distributed platform was started by the development and promulgation of low cost computers, the invention of TCP/IP and the internet.  The widespread adoption of networking technology, rapid improvement in performance of the lower and lower cost servers allowed the industry to move to a distributed architecture that enabled moving compute and data across many computers. This became the dominant new paradigm and architecture and allowed scalability to the millions of users.   The tools that distributed architectures required were middleware technologies like messaging from TIBCO, JAVA J2EE, JMS, Application Servers, Web servers and numerous other technologies.  The open source movement grew during this era but a lot of questions about licensing, the legality of open source, how to support open source limited the wide scale success of this movement during this era.   This era really saw the growth of the enterprise software companies.   When I started TIBCO the astonishing thing to realize is that VCs refused to fund us because there were no successful examples of software only companies.

    Platform 3.0

    Several simultaneous things happened that define Platform 3.0:

    1. In the mid-2000s Apple came up with the smart phone and the app store.  I have an interesting article on the success of the App Store paradigm and the implications for computing, IT and CIOs.

    2. Simultaneously, Google, Yahoo, Facebook, Netflix, and other Internet companies started new open source projects to enable them to scale to billions of customers.  I have an article how they created a new paradigm of how they collaborated using open source.  Here

3. At the same time Amazon realized that it could sell infrastructure as well as books, toys and toiletries.  :)  They were driven to this because they had chosen a different approach than Ebay in how to extend their value.   Amazon embraced the problem of facilitating its partners to rent infrastructure.  Internally Amazon was structured differently.  Ebay had a problem in its early days when a major outage took the site down for 2 days in a row.  They solved that major outage by choosing to centralize their technology and keeping strict control.  Amazon had a looser philosophy and saw the opportunity to rent infrastructure sooner.  This is a surprising development to come from Amazon but it happened.  I think it is fascinating how this happened.  Jeff Bezos was obviously a critical factor in Amazon creating the cloud.  To have a simple explanation of cloud technology read this article.

    4. Facebook and some other social companies succeeded wildly beyond what anybody imagined.   The social movement was clearly an important new paradigm that changed the way companies saw they could learn about their customers and markets and interact with them.

    The emergence and growth of these simultaneous activities can only be explained by the virtuous circle.  Without open source the companies may have had trouble growing to billions of users so easily.  The existence of billions of users of these services created massive new demand for APIs, infrastructure and social.  These things all played together to create a new set of technologies and paradigms for building applications.

Platform 2.0 doesn’t really help you build applications for this new paradigm.  Platform 2.0 doesn’t include rapid deployment technologies, API management capabilities, or social capabilities.  Platform 2.0 encouraged the development of reusable services but it largely failed at this.  The idea of distributed software by itself didn’t solve the problem of reuse, of scaling applications, or of how to gain adoption of the services it created.   A large missing piece of Platform 2.0 was the social aspect required for true re-use to occur.

    Platform 3.0 incorporates these above technologies and combines it with Platform 2.0.   You can build a Platform 3.0 infrastructure from building blocks in open source yourself or work with vendors to acquire pieces or some combination.

    It is my belief that Platform 3.0 heralds in the era of cloud computing, i.e. SaaS, PaaS, IaaS.   The hardware and software that many companies acquired during Platform 1.0 and Platform 2.0 is now set to be replaced by incremental services in the cloud.  This makes sense because these services then are scalable to the billions and are more cost effective initially and in the long term put the management and expertise for technology into the hands of those most expert in delivering it leaving most companies to be able to concentrate on their core business competencies.

    Critical Components of Platform 3.0

    This is what I believe what the minimum a platform 3.0 must include and some of your options in getting there.

    1.  Open Source

    This is because it is composed of many open source projects and depends on those projects.   One of the primary benefits of Platform 3.0 is agility which open source is a critical component to achieve.

    This doesn’t mean you must make your software open source.  However, ideally everyone in your organization has access to the source code of the entire company so any group or project can improve and contribute back improvements to any part of your applications, services or platform.  Check out “inner source” article to understand how to do this.  You don’t need to be an open source company but you should take advantage of open source methodology to maximize the benefit from Platform 3.0.  You need to incorporate values of agility, transparency, rapid iterations.   Your platform should include tools to help you with the culture of open source development.

    2. RESTful APIs throughout and API Management throughout facilitating an API-Centric programming model

RESTful APIs and the idea of socially advertised APIs that can be managed, versioned, iterated rapidly, usage-tracked, quality-of-service controlled, and scaled arbitrarily are a key aspect of Platform 3.0 as described in the virtuous circle.

    APIs or services are key to the agility in Platform 3.0 by enabling the rapid composition of applications by reusing APIs.  Also, being able to understand how those APIs are used, to improve them easily requires an API management platform.   There are numerous API management platforms available in open source:

    Google Search for open source API management platforms

    Your API Management platform should include:

    a. a social store for APIs that provides a community with the ability to see transparently all there is to know about services, to see how to use the API, who has used the API and their experiences, tips, etc…

    b. tiers of service and management of tenants and users across tiers

    c. tracking usage of the services by giving you bigdata stream that you can perform analytics on

    d. a way to manage the services, load balancing of usage of the services, proxying services

You may also want to include the capability to secure your APIs and provide OAuth2 low-level entitlement controls on the data.   You want to be able to manage internal APIs used only within your organization as easily as APIs you export to the outside world.

    When you have this capability you have the ability to rapidly leverage APIs both inside and outside your organization and build API-Centric applications which gives you more agility as well.

    3. Social capabilities (transparency, store, streaming usage, analytics, visualization, real-time actionable business processes) around any asset in the platform: mobile app, web application and APIs and even IoTs.

A key aspect of Platform 3.0 is Social, because Social is a key element of the business advantage of the new paradigm.  Reaching customers through mobile, IoT devices, web apps or APIs is key, as is learning from those customers, understanding how to improve your service, as well as offering them things based on your improved understanding.   Platform 3.0 should make it easy to build APIs and applications of any type that incorporate social aspects.

    A key element of this part is the Enterprise Store.  This is where you can offer information about your services and products, encourage community and leverage that community.  The community could be initially simply within your own organization but ultimately it is expected you would offer APIs, Mobile Apps, Web apps, IoT devices externally as well.  You will want to “social” enable these so that you can collect information and analyze it.

    Platform 3.0 should automatically enable you to instrument applications, APIs, web Apps etc to produce social usage data and facilitate leveraging that data through visualizations as well as actionable business processes.

    If you are building your own social application consider using a social API such as http://opensocial.org/.

You need to have adapters and technology to stream data into bigdata databases.  There are several technologies to do this, such as BAM and Kafka, which enable you to easily collect social information.

    Most of the time you aren’t necessarily building your own social app but leveraging the usage of your web applications, APIs, mobile apps or other software or hardware such as IoTs.    API Management automatically tracks the social usage of the APIs and the applications which use the APIs.  That is a key component.

    You also need to use one or more bigdata architectures.  The common ones at this time are HBase, Cassandra, MongoDB.

    Most of the successful cloud companies are leveraging MULTIPLE big data technologies.  Each of these technologies and the RDB database technologies have a place depending on the type of data and how it is used.

    Once you have social big data information you need to be able to process, analyze and create actionable business processes to automate the intelligence you’ve gained.   There are several components you need to consider in your Platform to facilitate this.  Hive and Pentaho are considered the leading open source bigdata analysis and visualization platforms at this time.  Numerous others are available.

Frequently the actionable part requires a real-time ability to react to social activity.  The currently accepted architecture to implement an actionable bigdata streaming and analysis architecture is called the Lambda Architecture.   The tools that can do this are the aforementioned big data components and some real-time stream event processing capability such as WSO2 CEP and Apache Storm.

    These components give you the ability to easily create applications and services that collect bigdata on social usage of your enterprise assets, give you the ablity to do analysis of this data, visualization and to create actionable business processes off of this data.

    4. Cloud PaaS – Fast deployment and scaling

    A key aspect of Platform 3.0 is the ability to build software cheaply and fast, deploy it instantly and if successful rapidly iterate and grow the users up to virtually an infinite number with costs growing linearly as usage and revenues grow.   This requires the cloud, PaaS, DevOps technologies to be in Platform 3.0.

    As the virtuous circle started gaining traction and speed one thing was apparent right away.  Being able to use cloud resources was a key element of success.   The cloud reduced the startup cost and risk of building anything and the scalability meant virtually infinite resources could be deployed to meet demand.  These resources would only be used if they were needed and therefore presumably the revenue or assets to support the usage would be there as well.  As usage grew the advantage of sharing resources would reduce costs even further.

    In order to take advantage of this, early adopters of the cloud started building DevOps tools such as Puppet and Chef to make deployment across cloud architectures easier and less labor intensive.  They also dramatically sped up the process of deployment.  Companies such as Heroku came into existence, providing development environments on demand and allowing companies to start work with hardly any startup cost and grow their usage as needed.   I believe that the DevOps Puppet and Chef approach is a halfway step, as it doesn't deal with key aspects of reducing the costs of development, deployment and operation.

    How you incorporate Cloud and PaaS into your Platform 3.0 is complex.  There are a lot of things to think about in making this decision, as it affects your future dependencies and cost structure.  I have an article to help you decompose the kinds of features and things you need in deciding on an enterprise PaaS.

    The 3 main open source PaaS technologies to consider in my opinion are:   WSO2 Private PaaS, CloudFoundry, OpenShift

    5.  Multi-tenancy, Componentisation, Containerization, Reuse, Resource Sharing

    A critical element of Platform 3.0 is the idea of reuse embodied in open source and APIs as well as the resource sharing that comes from PaaS.   In order to take advantage of this you must support a number of architectural patterns:

    A) Software must be written to be multi-tenant

    Designing your software to be multi-tenant is simply good architectural practice.  It means separating "user" and "enterprise" data from "application" data so that this information can be delivered to the component in whatever way the PaaS architecture decides is most efficient.   You should make sure that logs of activities in your application relevant to users or customers are similarly segregated, to make social data analysis easy.
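    To make this concrete, here is a minimal, purely illustrative sketch (the interface and class names are my own, not from any particular product) of what keeping tenant-owned data out of application code can look like: every data access is scoped by an explicit tenant identifier, so the platform is free to decide where each tenant's data actually lives.

    public interface CustomerRepository {
        // All access is scoped by tenant; the application never hard-codes
        // which store or schema a given tenant's data resides in.
        Customer findCustomer(String tenantId, String customerId);
        void saveCustomer(String tenantId, Customer customer);
    }

    public class Customer {
        private final String id;
        private final String name;

        public Customer(String id, String name) {
            this.id = id;
            this.name = name;
        }

        public String getId() { return id; }
        public String getName() { return name; }
    }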

    B) Software should be designed to be components

    Componentisation is simply good architecture that has been advocated for a long time.  There are several aspects to making something a component.  One is limiting the functionality of any service or software piece to a single purpose so that the functionality can be reused easily.  Another is limiting the usage of other components, so that using the component doesn't require bringing in too many other components, which would obviate the purpose of componentisation.    It is also about reusing other components to do things in a consistent way throughout your architecture.   It can include going as far as making your components OSGi compatible bundles.  This is not necessary, but one problem that can emerge with components is that they have dependencies on other components that can break if those other components change.   OSGi makes it clear what dependencies different components have and what versions of those components a component will work with.  It allows components to really operate as components safely.    You should seriously consider using a container technology like OSGi that facilitates building reusable components.  There are other ways to accomplish this, but it is a solid way to build a component architecture.  OSGi also gives you the ability to stop, start and replace individual components while your software is still operating, allowing a zero-downtime application.  Componentisation also means being able to spin up multiple instances of each component to meet demand.  Being able to scale just the component rather than the whole application is a vastly more efficient way to scale usage.
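    As a small illustration of the OSGi style of componentisation described above, here is a hypothetical bundle activator sketch using the standard org.osgi.framework API (the component name is invented); the bundle's MANIFEST.MF would additionally declare Import-Package entries with version ranges, which is how OSGi makes a component's dependencies and compatible versions explicit.

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;

    // Hypothetical single-purpose component that can be started, stopped and
    // replaced independently of the rest of the application
    public class InventoryServiceActivator implements BundleActivator {

        public void start(BundleContext context) throws Exception {
            // Register this component's service so other bundles can look it up
            // without compiling against its implementation classes
            context.registerService(Runnable.class.getName(), new Runnable() {
                public void run() {
                    // single-purpose component logic goes here
                }
            }, null);
        }

        public void stop(BundleContext context) throws Exception {
            // Services registered by this bundle are unregistered automatically on stop
        }
    }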

    C) Software should be designed to be encapsulated in containers

    Ultimately in Platform 3.0 components will be encapsulated in a container to provide isolation and virtualization.  These make it easy for a PaaS to automatically scale and reuse components.   Most software environments you get today will be supported automatically by many container technologies.   When selecting your software environments to do development and the tools you use you need to make sure they can be containerized in the container technology you choose efficiently.   There are several open source containerization standards and technologies.  Selecting one or more is desirable.  An important advanced future consideration is “Templating” which is discussed later and is related to containers and synthesizing applications from multiple containers.

    D) Software should be published in a social Store to facilitate transparency, improvements, feedback, tracking of usage

    A key aspect of the value and advantage of Platform 3.0 is reuse and increasing innovation. Platform 2.0 largely failed at getting widespread reuse happening.  A key aspect of that is the transparency and social nature of the assets you build.   Developing everything else and using all the right tools, but limiting it to a select team who are the only people who know how to use it, defeats the whole advantage of Platform 3.0.  Therefore a key point is how to socialize the assets and tools you are using and building.  This could be simply within your own organization or it could include a wider array of external partners, entrepreneurs and developers.  It is your choice how open or social you want to be, but you should consider how to gain maximum visibility for any pieces you can.  An Enterprise Asset (API, App, Application) store is the paradigm that is most being promulgated today.

    Today you can get social components like this as part of a mobile platform for mobile applications and API social capabilities from an API management platform.   Another innovative approach is to use a combined asset store such as the WSO2 Enterprise Store which lets you advertise any type of asset even code fragments or pieces that are not strictly a component.   The Store gives everyone in your organization the ability to see what are the building blocks of your company software infrastructure.  This allows each individual the ability to contribute to the improvement of those assets in the way they can.

    If your organization adopts these practices then you will be able to rapidly develop, deploy, iterate and scale your applications.  You will see greater reuse, faster innovation and you will join the virtuous circle and see the benefits that other organizations are seeing.

    6. Standard Messaging SOA components

    The SOA world was created for a reason that is still very valid.   In the messaging world, which is still very much a part of Platform 3.0, we built standard software "applications" or "components" that provided significant value in reducing the complexity and increasing the agility of building enterprise software.   These components can now be thought of as providing basic functionality along the various axes of the software applications you build.  Please consult my article on the completeness of a platform to understand the minimum set of components.

    a. Enterprise Service Bus for mediation and integration of anything, connectors and adapters to almost everything including IoT protocols

    b. Message Broker for storage, reliable delivery, publish/subscribe of messages

    c. BAM for collecting data from multiple logs and sources to create key metrics and store streams of data to databases or bigdata

    d. Storage Server support for multiple bigdata databases as well as relational database services

    e. Complex Event Processor (CEP) engine for analyzing sequences of events and producing new events

    f. Business Process Server to support both human and machine processes

    g. Data services to support creating services around data in relational databases and bigdata databases

    h. Business Rules to be used anywhere in the platform

    i. Governance Registry to support configuration

    j. Load Balancing anywhere in the platform

    k. Application services

    l. User Engagement Services for visualizing data anywhere in the platform

    New Event Driven Components

    Additional Components

    7. Application templating

    Platform 3.0 allows you to build components, APIs, Web Applications, Mobile applications and IoT applications.   These are the pieces you use to build more of these things.  This creates layers of technology.

    Applications typically are composed of other applications, APIs and components.   For instance, building an Account Creation application might involve using a business process server, an enterprise service bus, APIs to data services to acquire information about customers, and a Business Rules process to manage special rules for different classes of customers.   The result is that the account creation app is really a web app plus a set of APIs used by several mobile applications that allow users to create accounts.

    When deploying these pieces to production, a PaaS doesn't necessarily understand how the various pieces of the application fit together.  Various description languages have been proposed for describing the structure of applications as combinations of components.  These description languages allow a PaaS to automate the deployment of more complex applications composed of multiple pieces, manage them in failure modes, and scale the application more efficiently.

    I will write a blog about this topic because it involves a lot of very interesting topics and this is a very recent evolution of the PaaS framework.

    8. Backend services for IoT, Mobile

    A complete Platform 3.0 environment should give developers building mobile applications and IoT devices basic services to help them build quality Mobile Apps and IoT products.  The types of services these kinds of applications find useful are:

    1) Proxy to enterprise databases and enterprise data

    2) a simple storage mechanism for application storage

    3) connections to social services such as Facebook, LinkedIn

    4) connections to payment services

    5) connections to identity stores

    6) advertising services

    9. Support for development in the cloud

    It is clear that over time more and more development will be done directly in the cloud without the need of a local desktop computer.

    10. Lifecycle Tools

    Platform 3.0 should be built using what is emerging as standard lifecycle management tools for the cloud:

     

    Summary

    Platform 3.0 in my opinion is a real thing.  It is a true revolutionary change from the distributed architectural pattern of the last 30+ years even though it subsumes many of the ideas of Platform 2.0.  I think the evidence of this will be apparent as the emphasis on API-centric service based development becomes more and more the dominant way people build and deliver applications and services.   Ultimately this model makes computing so easy and transparent that almost anybody can create applications and new services easily by composing existing services and adding some business logic.

    It is important to realize that ultimately Platform 3.0 will be cloud based.   You will get most or all of your pieces of Platform 3.0 as services in the cloud.  In the short term this is not possible.  It will take another 5 years or more for the markets to mature for services and the component technologies to be available and enough competitors to make the “all-services” based enterprise possible.  So, today your only option is to acquire most of Platform 3.0 from open source and run it on a cloud infrastructure either public or private and stitch the pieces together as your own services.

    The real advantage of Platform 3.0 is that it is a radical change in the cost to develop, deploy and operate software.  It provides mechanisms to promote reuse and adoption and, most importantly, constant innovation and agility.  Without this any enterprise will rapidly fall behind others in their ability to provide services to their customers and partners.

    The good news is that Platform 3.0 is cheap and it can be adopted in incremental steps.  You don't have to swallow the whole thing in one bite.  You may not get all the advantages, but Platform 3.0 is component oriented so it can be consumed in pieces.

     

    Articles referenced in this blog and additional sources:

    Open Group Platform 3.0 Definition

    Wikipedia Platform 3.0 Definition

    Bloomberg says IT Platform 3.0 is about agility

    value and advantage of Platform 3.0 is reuse and increasing innovation

    Why OSGi?

    Google Search for open source API management platforms

    “inner source” article to understand how to do this.

    The Virtuous Circle.

    The Lambda Architecture.

    Success of the App Store paradigm and the implications for computing

    Decompose the kinds of features and things you need in deciding on an enterprise PaaS.


    Lali DevamanthriFog Before The Cloud

    Cisco is working on carving out a new computing category, introduced as Fog Computing, by combining two existing categories: the "Internet of Things" and "cloud computing". Fog computing, also known as fogging, is a model in which data, processing and applications are concentrated in devices at the network edge rather than existing almost entirely in the cloud.

    (When people talk about "edge computing," what they literally mean is the edge of the network, the periphery where the Internet ends and the real world begins. Data centers are in the "center" of the network; personal computers, phones, surveillance cameras and IoT devices are on the edge.)

    The problem of how to get things done when we're dependent on the cloud is becoming all the more acute as more and more objects become "smart," or able to sense their environments, connect to the Internet, and even receive commands remotely. Everything from jet engines to refrigerators is being pushed onto wireless networks and joining the "Internet of Things." Modern 3G and 4G cellular networks simply aren't fast enough to transmit data from devices to the cloud at the pace it is generated, and as every mundane object at home and at work gets in on this game, it's only going to get worse unless bandwidth increases.

    If the devices that do the network routing can be self-learning, self-organizing and self-healing, the network becomes decentralized.  Cisco wants to turn its routers into hubs for gathering data and making decisions about what to do with it. In Cisco's vision, its smart routers will never talk to the cloud unless they have to, say, to alert operators to an emergency on a sensor-laden rail car on which one of these routers acts as the nerve center.

    Fog Computing can enable a new breed of aggregated applications and services, such as smart energy distribution. This is where energy load-balancing applications run on network edge devices that automatically switch to alternative energies like solar and wind, based on energy demand, availability, and the lowest price.


    Fog computing applications and services include:

    • Interplay between the Fog and the Cloud. Typically, the Fog platform supports real-time, actionable analytics, processes, and filters the data, and pushes to the Cloud data that is global in geographical scope and time.
    • Data collection and analytics (pulled from access devices, pushed to Cloud)
    • Data storage for redistribution (pushed from Cloud, pulled by downstream devices)
    • Technologies that facilitate data fusion in the above contexts.
    • Analytics relevant for local communities across various verticals (ex: advertisements, video analytics, health care, performance monitoring, sensing etc.)
    • Methodologies, Models and Algorithms to optimize the cost and performance through workload mobility between Fog and Cloud.

    Another example is smart traffic lights. A video camera senses an ambulance's flashing lights and then automatically changes the streetlights for the vehicle to pass through traffic. Also, through Fog Computing, sensors on self-maintaining trains can monitor train components. If they detect trouble, they send an automatic alert to the train operator to stop at the next station for emergency maintenance.


    Saliya EkanayakeWeekend Carpentry: Wall Shelf

    Well, it wasn't really a weekend project, but it could have been, hence the title.

    Update Aug 2014
     
         Sketchup file at https://www.dropbox.com/s/0qa79linxceagwr/shelf.skp
         PDF file at https://www.dropbox.com/s/5iaalnhmfmkkke8/Shelf.pdf


    If you like to give it a try, here's the plan.




    The top two are the vertical and horizontal center pieces. The last four pieces are for top, bottom, left, and right dividers. The joints are simply half lap joints (see  http://jawoodworking.com/wp-content/uploads/2008/09/half-lap-joint.jpg).

    Just remember to finish wood pieces before assembling. It's much easier than having to apply finish to the assembled product, which unfortunately is what I did.

    Senaka FernandoAPI Management for OData Services

    The OData protocol is a standard for creating and consuming Data APIs. While REST gives you the freedom of choice to choose how you design your API and the queries you pass to it, OData tends to be a little bit more structured but at the same time more convenient, in terms of exposing data repositories as universally accessible APIs.


    However, when it comes to API Management for OData endpoints, there aren't many good options out there. WSO2 API Manager makes it fairly straightforward for you to manage your OData APIs. In this post, we will look at how to manage a WCF Data Service based on the OData protocol using WSO2 API Manager 1.7.0. The endpoint that I have used in this example is accessible at http://services.odata.org/V3/Northwind/Northwind.svc.

    Open the WSO2 API Publisher by visiting https://localhost:9443/publisher on your browser. Login with your credentials and click on Add to create a new API. Set the name as northwind, the context as /northwind and the version as 3.0.0 as seen below. Once done, click on the Implement button towards the bottom of your screen. Then click Yes to create a wildcard resource entry and click on Implement again.

    Please note that instead of creating a wildcard resource here, you can specify some valid resources. I have explained this towards the end of this post.


    In the next step, specify the Production Endpoint as http://services.odata.org/V3/Northwind/Northwind.svc/ and click Manage. Finally, select Unlimited from the Tier Availability list box, and click Save and Publish. Once done, you should find your API created.

    Now open the WSO2 API Store by visiting https://localhost:9443/store on your browser, where you should find the northwind API we just created. Make sure you are logged in and click on the name of the northwind API, which should bring you to a screen as seen below.


    You now need to click on the Subscribe button, which will then take you to the Subscriptions page. In here, you need to click on the Generate button to create an access token. If everything went well, your screen should look something similar to what you find below. Take special note of the access token. Moving forward, you will need to make a copy of this to your clipboard.


    The next step is to try the API. You have several choices here. The most convenient way is to use the RESTClient tool which comes with the product. You simply need to select RESTClient from the Tools menu on the top. To use this tool, simply set the URL as http://localhost:8280/northwind/3.0.0/Customers('ALFKI')/ContactName/$value and the Headers as Authorization:Bearer TOKEN. Remember to replace TOKEN with the access token you got from the step above. Once you click Send, you should see something similar to the screenshot below.


    Another easy option is to use curl. You can install curl on most machines and it is a very straightforward command line tool. After having installed curl, run the following command in your command line interface:
    curl -H "Authorization:Bearer TOKEN" -X GET "http://localhost:8280/northwind/3.0.0/Customers('ALFKI')/ContactName/$value"
    Remember to replace TOKEN with the access token you got from the step above.

    For more challenging queries, please read through Microsoft’s guidelines on Accessing Data Service Resources (WCF Data Services). Remember to replace http://services.odata.org/Northwind/Northwind.svc with http://localhost:8280/northwind/3.0.0 in every example you find. For the RESTClient, note that you will have to replace " " with "%20" for things to work. Also, for curl note that on some command line interfaces such as the Terminal in your Mac OS X, you might have to replace "$" with "\$" and " " with "%20" for things to work.

    In the very first step, note that we used a wildcard resource. Instead of that, you can specify some resources to control what types of access are possible. For example, in the list of queries mentioned in the link above, if you want to allow the queries related to Customers but not the ones related to Orders, you can set up a restriction as follows.

    Open the WSO2 API Publisher by visiting https://localhost:9443/publisher on your browser. First click on the northwind API and then click the Edit link. Now at the very bottom of your screen, in the Resources section, set URL Pattern to /Customers* and Resource Name to /default. Then click Add New Resource. After having this done, click on the delete icon in front of all the contexts marked /*. If everything went well your screen should look similar to the following.


    Finally, click on the Save button. Now, retry some of the queries. You should find the queries related to Customers working well but the queries related to Orders failing, unlike before. This is a very simple example of how to make use of these resources. More information can be found here.

    Please read the WSO2 API Manager Documentation to learn more on managing OData Services and also other types of endpoints.

    Melan JayasinghaHello World using OpenCL

    I've recently acquired a new laptop with an AMD Radeon GPU. I'm always interested in HPC and how it is used in various industries and research areas. I had experience with MPI/OpenMP earlier but never got a chance to look into GPU-style frameworks such as CUDA or OpenCL. I have a lot of free time these days, so I studied some online tutorials and got a little bit familiar with OpenCL, and it's very interesting to me.

    I'm using AMD's APP SDK 2.9 for my work. My first OpenCL code is below; it can be compiled using gcc with -lOpenCL.

    #include <stdio.h>
    #include <string.h>
    #include <CL/cl.h>

    const char source[] = " \
    __kernel void hello( __global char* buf, __global char* buf2 ){ \
    int x = get_global_id(0); \
    buf2[x] = buf[x]; \
    }";


    int main() {
    char buf[]="Hello, World!";
    char build_c[4096];
    size_t srcsize, worksize=strlen(buf);

    cl_platform_id platform;
    cl_device_id device;
    cl_uint platforms, devices;

    /* Fetch the Platforms, we only want one. */
    clGetPlatformIDs(1, &platform, &platforms);

    /* Fetch the Devices for this platform */
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 1, &device, &devices);

    /* Create a memory context for the device we want to use */
    cl_context_properties properties[]={CL_CONTEXT_PLATFORM, (cl_context_properties)platform,0};
    cl_context context=clCreateContext(properties, 1, &device, NULL, NULL, NULL);

    /* Create a command queue to communicate with the device */
    cl_command_queue cq = clCreateCommandQueue(context, device, 0, NULL);

    const char *src=source;
    srcsize=strlen(source);

    const char *srcptr[]={src};
    /* Submit the source code of the kernel to OpenCL, and create a program object with it */
    cl_program prog=clCreateProgramWithSource(context, 1, srcptr, &srcsize, NULL);

    /* Compile the kernel code (after this we could extract the compiled version) */
    clBuildProgram(prog, 0, NULL, "", NULL, NULL);

    /* Create memory buffers in the Context where the desired Device is. These will be the pointer
    parameters on the kernel. */
    cl_mem mem1, mem2;
    mem1=clCreateBuffer(context, CL_MEM_READ_ONLY, worksize, NULL, NULL);
    mem2=clCreateBuffer(context, CL_MEM_WRITE_ONLY, worksize, NULL, NULL);

    /* Create a kernel object with the compiled program */
    cl_kernel k_hello=clCreateKernel(prog, "hello", NULL);

    /* Set the kernel parameters */
    clSetKernelArg(k_hello, 0, sizeof(mem1), &mem1);
    clSetKernelArg(k_hello, 1, sizeof(mem2), &mem2);

    /* Create a char array in where to store the results of the Kernel */
    char buf2[sizeof buf];
    buf2[0]='?';
    buf2[worksize]=0;

    /* Send input data to OpenCL */
    clEnqueueWriteBuffer(cq, mem1, CL_FALSE, 0, worksize, buf, 0, NULL, NULL);

    /* Tell the Device, through the command queue, to execute the Kernel */
    clEnqueueNDRangeKernel(cq, k_hello, 1, NULL, &worksize, &worksize, 0, NULL, NULL);

    /* Read the result back into buf2 */
    clEnqueueReadBuffer(cq, mem2, CL_FALSE, 0, worksize, buf2, 0, NULL, NULL);

    /* Await completion of all the above */
    clFinish(cq);

    /* Finally, output the result */
    puts(buf2);
    }

    Ganesh PrasadMy Books On Dependency-Oriented Thinking - Why They Should Count

    InfoQ has published both volumes of my book "Dependency-Oriented Thinking". The links are below.


    I'll admit it feels somewhat anticlimactic for me to see these books finally published, because I finished writing them in December 2013 after about two years of intermittent work. They have been available as white papers on Slideshare since Christmas 2013. The last seven months have gone by in reviews, revisions and the various other necessary steps in the publication process. And they have made their appearance on InfoQ's site with scarcely a splash. Is that all?, I feel like asking myself. But I guess I shouldn't feel blasé. These two books are a major personal achievement for me and represent a significant milestone for the industry, and I say this entirely without vanity.

    You see, the IT industry has been misled for over 15 years by a distorted and heavyweight philosophy that has gone by the name "Service-Oriented Architecture" (SOA). It has cost organisations billions of dollars of unnecessary spend, and has fallen far short of the benefits that it promised. I too fell victim to the hype around SOA in its early days, and like many other converted faithful, tried hard to practise my new religion. Finally, like many others who turned apostate, I grew disillusioned with the lies, and what disillusioned me the most was the heavyhandedness of the "Church of SOA", a ponderous cathedral of orthodox practice that promised salvation, yet delivered nothing but daily guilt.

    But unlike others who turned atheist and denounced SOA itself, I realised that I had to found a new church. Because I realised that there was a divine truth to SOA after all. It was just not to be found in the anointed bible of the SOA church, for that was a cynical document designed to suit the greed of the cardinals of the church rather than the needs of the millions of churchgoers. The actual truth was much, much simpler. It was not easy, because "simple" and "easy" are not the same thing. (If you find this hard to understand, think about the simple principle "Don't tell lies", and tell me whether it is easy to follow.)

    I stumbled upon this simple truth through a series of learnings. I thought I had hit upon it when I wrote my white paper "Practical SOA for the Solution Architect" under the aegis of WSO2. But later, I realised there was more. The WSO2 white paper identified three core components at the technology layer. It also recognised that there was something above the technology layer that had to be considered during design. What was that something? Apart from a recognition of the importance of data, the paper did not manage to pierce the veil.

    The remaining pieces of the puzzle fell into place as I began to consider the notion of dependencies as a common principle across the technology and data layers. The more I thought about dependencies, the more things started to make sense at layers even above data, and the more logically design at all these layers followed from requirements and constraints.

    In parallel, there was another train of thought to which I once again owe a debt of gratitude to WSO2. While I was employed with the company, I was asked to write another white paper on SOA governance. A lot of the material I got from company sources hewed to the established industry line on SOA governance, but as with SOA design, the accepted industry notion of SOA governance made me deeply uncomfortable. Fortunately, I'm not the kind to suppress my misgivings to please my paymasters, and so at some point, I had to tell them that my own views on SOA governance were very different. To WSO2's credit, they encouraged me to write up my thoughts without the pressure to conform to any expected models. And although the end result was something so alien to establishment thought that they could not endorse it as a company, they made no criticism.

    So at the end of 2011, I found myself with two related but half-baked notions of SOA design and SOA governance, and as 2012 wore on, my thoughts began to crystallise. The notion of dependencies, I saw, played a central role in every formulation. The concept of dependencies also suggested how analysis, design, governance and management had to be approached. It had a clear, compelling logic.

    I followed my instincts and resisted all temptation to cut corners. Gradually, the model of "Dependency-Oriented Thinking" began to take shape. I conducted a workshop where I presented the model to some practising architects, and received heartening validation and encouragement. The gradual evolution of the model mainly came about through my own ruminations upon past experiences, but I also received significant help from a few friends. Sushil Gajwani and Ravish Juneja are two personal friends who gave me examples from their own (non-IT) experience. These examples confirmed to me that dependencies underpin every interaction in the world. Another friend and colleague, Awadhesh Kumar, provided an input that elegantly closed a gaping hole in my model of the application layer. He pointed out that grouping operations according to shared interface data models and according to shared internal data models would lead to services and to products, respectively. Kalyan Kumar, another friend who attended one of my workshops, suggested that I split my governance whitepaper into two to address the needs of two different audiences - designers and managers.

    And so, sometime in 2013, the model crystallised. All I then had to do was write it down. On December 24th, I completed the two whitepapers and uploaded them to Slideshare. There has been a steady trickle of downloads since then, but it was only after their publication by InfoQ that the documents have gained more visibility.

    These are not timid, establishment-aligned documents. They are audacious and iconoclastic. I believe the IT industry has been badly misled by a wrongheaded notion of SOA, and that I have discovered (or re-discovered, if you will) the core principle that makes SOA practice dazzlingly simple and blindingly obvious. I have not just criticised an existing model. I have been constructive in proposing an alternative - a model that I have developed rigorously from first principles, validated against my decades of experience, and delineated in painstaking detail. This is not an edifice that can be lightly dismissed. Again, these are not statements of vanity, just honest conviction.

    I believe that if an organisation adopts the method of "Dependency-Oriented Thinking" that I have laid out in these two books (after testing the concepts and being satisfied that they are sound), then it will attain the many benefits of SOA that have been promised for years - business agility, sustainably lower operating costs, and reduced operational risk.

    It takes an arc of enormous radius to turn around a gigantic oil tanker cruising at top speed, and I have no illusions about the time it will take to bring the industry around to my way of thinking. It may be 5-10 years before the industry adopts Dependency-Oriented Thinking as a matter of course, but I'm confident it will happen. This is an idea whose time has come.

    Saliya EkanayakeWeekend Carpentry: Baby Gate

    My 10 month old son is pioneering his crawling skills and has just begun to cruise. It's been hard to keep him out of the shoe rack with these mobile skills, so I decided to make this little fence.
    Download Sketchup file
    Download PDF file
    Here's a video of the sliding lock mechanism I made.

    Saliya EkanayakeBlogging with Markdown in Blogger

    tl;dr - Use Dillinger and paste the formatted content directly to blogger
    Recently, I tried many techniques which would allow me to write blogs in markdown. The available choices, in broad categories, are:
    • Use a markdown-aware static blog generator such as Jekyll or something based on it like Octopress
    • Use a blogging solution based on markdown such as svbtle
    • Use a tool that'll either enable markdown support in blogger (see this post) or can post to blogger (like StackEdit)
    The first is the obvious choice if you need total control over your blog, but I didn't want to get into too much trouble just to blog, because it involves hosting the generated static html pages on your own - not to mention the trouble of enabling comments. I liked the second solution and went the distance to even move my blog to svbtle. It's pretty simple and straightforward, but after doing a post or two I realized the lack of comments is a showstopper. I agree it's good for posts intended for "read only" use, but usually that's not the case for me.
    This is when I started investigating the third option and thought StackEdit would be a nice solution as it allows posting to blogger directly. However, it doesn't support syntax highlighting for code blocks - bummer!
    Then came the "aha!" moment. I've been using Dillinger to edit markdown regularly as it's very simple and gives you instant formatted output. I thought why not just copy the formatted content and paste it in the blog post - duh. No surprises - it worked like a charm. Dillinger beautifully formats everything including syntax highlighting for code/scripts. Also, it allows you to link with either Dropbox or Github; I use Github.
    All in all, I found Dillinger to be the easiest solution and if you like to see a formatted post see my first post with it.

    Saliya EkanayakeGet PID from Java

    This may not be elegant, but it works !

    public static String getPid() throws IOException {
        // Spawn a bash process and echo its parent PID, which is the PID of this JVM
        byte[] bo = new byte[100];
        String[] cmd = {"bash", "-c", "echo $PPID"};
        Process p = Runtime.getRuntime().exec(cmd);
        p.getInputStream().read(bo);
        // Trim the trailing newline and unused buffer bytes before returning
        return new String(bo).trim();
    }
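    If spawning a shell is not desirable, a commonly used alternative is to parse the runtime MXBean name, which on typical JVMs has the form pid@hostname. Note that this format is a JVM convention rather than a guarantee, so treat the following as a sketch based on that assumption.

    import java.lang.management.ManagementFactory;

    public class PidUtil {
        // Returns the JVM's PID by parsing the "pid@hostname" runtime name.
        // The "pid@hostname" format is a convention of common JVMs, not part of the spec.
        public static String getPid() {
            String jvmName = ManagementFactory.getRuntimeMXBean().getName();
            return jvmName.split("@")[0];
        }
    }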

    sanjeewa malalgodaFixing issues in WSO2 API Manager due to "no matching resource found" or API authentication failures for an API call with a valid access token

     

    "No matching resource found" errors or authentication failures can happen for a few reasons. Here we will discuss errors that can happen due to how resources are defined.

    In this article we will see how resource mapping works in WSO2 API Manager. When you create an API with resources, they are stored in the API Manager database and the synapse configuration. When a request comes to the gateway, it will first look for a matching resource and then dispatch into it. For this scenario the resource is as follows.

    /resource1/*

    In this configuration, * means you can have any string (in the request URL) after that point. If we take the first resource sample, then a matching request would be something like this.

    http://gatewayhostname:8280/t/test.com/apicontext/resource1/

    The above request is the minimal matching request. In addition to that, the following requests will also map to this resource.

    http://gatewayhostname:8280/t/test.com/apicontext/resource1/data?name=value

    http://gatewayhostname:8280/t/test.com/apicontext/resource1/data?name=value&id=97

    And the following request will not map properly to this resource. The reason is that we specifically expect /resource1/ in the request (* means you can have any string after that point).

    http://gatewayhostname:8280/t/test.com/apicontext/resource1?name=value&id=97

    From the web service call you will get the following error response.

    <am:fault xmlns:am="http://wso2.org/apimanager"><am:code>403</am:code><am:type>Status report</am:type><am:message>Runtime Error</am:message><am:description>No matching resource found in the API for the given request</am:description></am:fault>

    If you send a request to t/test.com/apicontext/1.0/resource1?id=97&Token=cd it will not work, because unfortunately there is no matching resource for that. As I explained earlier, your resource definition has /resource1/*. The request will not map to any resource and you will get a "no matching resource found" error and an auth failure (because it tries to authenticate against a non-existing resource).

    The solution for this issue is as follows.

    API Manager supports both uri-template and url-mapping based resource definitions. If you create an API from the API Publisher user interface, it will create a url-mapping based definition. From API Manager 1.7.0 onwards both options are supported at the UI level. Normally when we need to do some kind of complex pattern matching we use uri-template. So here we will update the synapse configuration to use uri-template instead of url-mapping. For this, edit the wso2admin-AT-test.com--apicontext_v1.0.xml file as follows.

    Replace <resource methods="GET" url-mapping="/resource1/*"> with <resource methods="GET" uri-template="/resource1?*">

    Hope this will help you to understand how resource mapping works. You will find more information at this link [1].

    [1]http://charithaka.blogspot.com/2014/03/common-mistakes-to-avoid-in-wso2-api.html

    sanjeewa malalgodaHow to avoid dispatching Admin service calls to ELB services - WSO2 ELB

    We can front WSO2 services with WSO2 ELB. In this kind of deployment, all requests to services should route through the WSO2 ELB. In some scenarios we might need to invoke admin services deployed in the back-end servers through the ELB. If you send a request intended for one of the back-end servers' admin services, the load balancer will try to find that service in itself. To avoid that, we need to define a different service path for the ELB, so that the ELB's own admin services are accessed through the defined service path and other services do not get mixed up with it.

    For this, we can change the ELB's service context to /elbservices/. Edit the servicePath property in axis2.xml as follows.


    <parameter name="servicePath">elbservices</parameter>

    sanjeewa malalgodaTrust all hosts when sending an HTTPS request – How to avoid SSL errors when connecting to an HTTPS service

    Sometimes when we write client applications we might need to communicate with services exposed over SSL. In some scenarios we might need to skip the certificate check on the client side. This is a bit risky, but if we know the server and can trust it, we can skip the certificate check. We can also skip host name verification. So basically we are going to trust all certs. See the following sample code.

    //Connect to the HTTPS service
    HttpsURLConnection conHttps = (HttpsURLConnection) new URL(urlVal).openConnection();
    conHttps.setRequestMethod("HEAD");
    //We will skip host name verification as this is just a testing endpoint. This verification skip
    //will be limited only to this connection
    conHttps.setHostnameVerifier(DO_NOT_VERIFY);
    //Call the trustAllHosts method so that we trust all certs
    trustAllHosts();
    if (conHttps.getResponseCode() == HttpURLConnection.HTTP_OK) {
        return "success";
    }
    //Required utility methods
    static HostnameVerifier DO_NOT_VERIFY = new HostnameVerifier() {
        public boolean verify(String hostname, SSLSession session) {
            return true;
        }
    };

    private static void trustAllHosts() {
        // Create a trust manager that does not validate certificate chains
        TrustManager[] trustAllCerts = new TrustManager[] { new X509TrustManager() {
            public java.security.cert.X509Certificate[] getAcceptedIssuers() {
                return new java.security.cert.X509Certificate[] {};
            }

            public void checkClientTrusted(X509Certificate[] chain,
                                           String authType) throws CertificateException {
            }

            public void checkServerTrusted(X509Certificate[] chain,
                                           String authType) throws CertificateException {
            }
        } };

        // Install the all-trusting trust manager
        try {
            SSLContext sc = SSLContext.getInstance("TLS");
            sc.init(null, trustAllCerts, new java.security.SecureRandom());
            HttpsURLConnection
                    .setDefaultSSLSocketFactory(sc.getSocketFactory());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    sanjeewa malalgodaHow to build and access the message body from a custom handler – WSO2 API Manager

    From API Manager 1.3.0 onwards we use the pass-through transport inside API Manager. Normally in pass-through we do not build the message body, so when you use pass-through you need to build the message inside the handler to access the message body. Please note that this is a somewhat costly operation compared with the default mediation; we introduced the pass-through transport to improve the performance of the gateway, and there we do not build or touch the message body. Add the following to your handler to see the message body.

     

    Add the following dependency to your handler implementation project:


           <dependency>
               <groupId>org.apache.synapse</groupId>
               <artifactId>synapse-nhttp-transport</artifactId>
               <version>2.1.2-wso2v5</version>
           </dependency>


    Then import RelayUtils into the handler as follows.
    import org.apache.synapse.transport.passthru.util.RelayUtils;

    Then build the message before processing the message body, as follows (add try/catch blocks where needed).
    RelayUtils.buildMessage(((Axis2MessageContext)messageContext).getAxis2MessageContext());


    Then you will be able to access the message body, for example:
    <soapenv:Body><test>sanjeewa</test></soapenv:Body>
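    For context, here is a minimal sketch of how these pieces fit together inside a custom handler. The class name is hypothetical, and it assumes the handler extends Synapse's org.apache.synapse.rest.AbstractHandler, which is the usual extension point for API Manager gateway handlers.

    import org.apache.synapse.MessageContext;
    import org.apache.synapse.core.axis2.Axis2MessageContext;
    import org.apache.synapse.rest.AbstractHandler;
    import org.apache.synapse.transport.passthru.util.RelayUtils;

    public class MessageBodyLoggingHandler extends AbstractHandler {

        public boolean handleRequest(MessageContext messageContext) {
            try {
                // Force the pass-through transport to build the full message body
                RelayUtils.buildMessage(((Axis2MessageContext) messageContext).getAxis2MessageContext());
                // Now the SOAP body is available on the message context
                System.out.println(messageContext.getEnvelope().getBody());
            } catch (Exception e) {
                // In a real handler, log this and decide whether to continue or fault
                e.printStackTrace();
            }
            return true;
        }

        public boolean handleResponse(MessageContext messageContext) {
            return true;
        }
    }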

    sanjeewa malalgodaHow to configure WSO2 API Manager to be accessed by multiple devices (with a single user and token) at the same time

     

    This is very useful when we set up production-grade deployments used by many users. According to the current architecture, if a user logs out from one device and the key is revoked, all other calls made with that token will get authentication failures. In that case the application should be smart enough to detect the authentication failure and request a new token. Once the user logs into the application, that user provides a user name and password, so we can use that information plus the consumer/secret keys to retrieve a new token once an authentication failure is detected. In our honest opinion, this should be handled from the client application side. If we allowed users to have multiple active tokens at the same time, that would cause security related issues, and users would finally end up with thousands of tokens that they cannot maintain. It could also be a problem when it comes to usage metering and statistics.

     
    So the recommended solution for this issue is to have one active user token at a given time, and to make the client application aware of the error responses sent by the API Manager gateway. You should also consider the refresh token approach for this application. When you request a user token you will get a refresh token along with the token response, so you can use that to refresh the access token.

    How this should work

    Let's assume the same user logged in from a desktop and a tablet. The client should provide a user name and password when they log into both the desktop and tablet apps. At that time we can generate a token request with the username, password and consumer key/secret pair. We can keep this request in memory until the user closes or logs out from the application (we do not persist this data anywhere, so there is no security issue).

    Then, when they log out from the desktop, or the application on the desktop decides to refresh the OAuth Token first, the user would normally be prompted for their username and password on the tablet, since the tablet now has a revoked or inactivated OAuth Token. But here we should not prompt for the username and password, as the client has already provided them and we have the token request in memory. Once the tablet app detects the auth failure, it immediately sends a token generation request and gets a new token. The user is not aware of what happened underneath.
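    As a rough illustration of the client-side flow described above, the following sketch sends a standard OAuth2 refresh-token grant to the gateway's /token endpoint once an authentication failure has been detected. The endpoint URL, port and class name here are assumptions for illustration, not values mandated by API Manager.

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.Base64;

    public class TokenRefresher {

        // Hypothetical token endpoint; adjust the host/port to your gateway
        private static final String TOKEN_ENDPOINT = "https://localhost:8243/token";

        public static int refreshToken(String refreshToken, String consumerKey, String consumerSecret) throws Exception {
            HttpURLConnection con = (HttpURLConnection) new URL(TOKEN_ENDPOINT).openConnection();
            con.setRequestMethod("POST");
            con.setDoOutput(true);
            // The consumer key/secret pair goes in the Basic auth header
            String credentials = Base64.getEncoder().encodeToString((consumerKey + ":" + consumerSecret).getBytes("UTF-8"));
            con.setRequestProperty("Authorization", "Basic " + credentials);
            con.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
            // Standard OAuth2 refresh token grant
            String body = "grant_type=refresh_token&refresh_token=" + refreshToken;
            try (OutputStream os = con.getOutputStream()) {
                os.write(body.getBytes("UTF-8"));
            }
            // A 200 response carries a JSON payload with the new access and refresh tokens
            return con.getResponseCode();
        }
    }

    On a successful response, the application can keep the new access and refresh tokens in memory, as described above, without prompting the user again.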

    sanjeewa malalgodaHow to retrieve properties and process/iterate them in synapse using XPath and the script mediator - WSO2 ESB

    Sometimes we need to retrieve properties and manipulate them according to a custom requirement. For this I can suggest two approaches.



    01. Retrieve roles list and fetch them using script mediator.
    //Store Roles into message context
     <property name="ComingRoles" expression="get-property('transport','ROLES')"/>

    //Or you can use the following if the property is already set in the default message context
     <property name="ComingRoles" expression="get-property('ROLES')"/>


    //process them inside script mediator
    <script language="js">
                var rolelist = mc.getProperty('ComingRoles');
    //Process rolelist and set roles or required data to message context as follows. Here we set same role set
                mc.setProperty('processedRoles',rolelist);
    </script>
     <log>
                <property name="Processed Roles List" expression="get-property('processedRoles')"/>
    </log>





    02. Retrieve the roles list and fetch the roles using the provided XPath support.
    //Retrieve incoming role list
      <property name="ComingRoles" expression="get-property('transport','ROLES')"/>

    //Or you can use the following if the property is already set in the default message context
     <property name="ComingRoles" expression="get-property('ROLES')"/>

    //Fetch roles one by one using xpath operations
             <property name="Role1"
                       expression="fn:substring-before(get-property('ComingRoles'),',')"/>
             <property name="RemainingRoles"
                       expression="fn:substring-after(get-property('transport','ROLES'),',')"/>

    //Fetch roles one by one using xpath operations
             <property name="Role2"
                       expression="fn:substring-before(get-property('RemainingRoles'),',')"/>
             <property name="RemainingRoles"
                       expression="fn:substring-after(get-property('RemainingRoles'),',')"/>

    //Fetch roles one by one using xpath operations
             <property name="Role3" expression="(get-property('RemainingRoles'))"/>

    //Then log all properties using log mediator
             <log>
                <property name="testing" expression="get-property('Role1')"/>
             </log>
             <log>
                <property name="testing" expression="get-property('Role2')"/>
             </log>
             <log>
                <property name="testing" expression="get-property('Role3')"/>
             </log>

    //Check whether the roles list contains the string "sanjeewa". If so, isRolesListHavingSanjeewa is set to true, else it is false.
             <log>
                <property name="isRolesListHavingSanjeewa"
                          expression="fn:contains(get-property('transport','ROLES'),'sanjeewa')"/>
             </log>


    You will find XPath expressions and samples here (http://www.w3schools.com/xpath/xpath_functions.asp).

    sanjeewa malalgodaHow to clear token cache in gateway nodes – API Manager 1.7.0 distributed deployment

     

    In API Manager deployments we need to clear the gateway cache when we regenerate application tokens from the API Store user interface (or by calling the revoke API).  So we added a new configuration for that in API Manager 1.7.0. Let's see how we can apply and use it.

    01. If we generate a new application access token from the UI, old tokens remain active in the gateway cache.

    02. If we use the revoke API deployed in the gateway, it will clear only the super tenant's cache.

    To address these issues we recently introduced a new parameter named RevokeAPIURL. In a distributed deployment we need to configure this parameter in the API key manager node. The key manager will then call the API pointed to by the RevokeAPIURL parameter, which should be the revoke API deployed in the API gateway node. If the gateway is clustered we can point to one node. So from this release (1.7.0) onwards, all revoke requests route to the OAuth service through the revoke API deployed in the API gateway. When the revoke response routes back through the revoke API, the cache clear handler is invoked. It then extracts the relevant information from the transport headers and clears the associated cache entries. In a distributed deployment we should configure the following.

    01. In the key manager node, point to the revoke API endpoint in the gateway as follows.

    <!-- This is the API URL for the revoke API. When we revoke tokens, revoke requests should go through this
         API deployed in the API gateway. Then it will do cache invalidations related to the revoked tokens.
         In a distributed deployment we should configure this property in the key manager node by pointing to the
         gateway https url. Also please note that we should point the gateway revoke service to the key manager. -->

    <RevokeAPIURL>https://${carbon.local.ip}:${https.nio.port}/revoke</RevokeAPIURL>

    02. In the API gateway, the revoke API should point to the OAuth application deployed in the key manager node.

      <api name="_WSO2AMRevokeAPI_" context="/revoke">
          <resource methods="POST" url-mapping="/*" faultSequence="_token_fault_">
              <inSequence>
                  <send>
                      <endpoint>
                          <address uri="https://keymgt.wso2.com:9445/oauth2/revoke"/>
                      </endpoint>
                  </send>
              </inSequence>
              <outSequence>
                  <send/>
              </outSequence>
          </resource>
          <handlers>
              <handler class="org.wso2.carbon.apimgt.gateway.handlers.ext.APIManagerCacheExtensionHandler"/>
          </handlers>
      </api>

    sanjeewa malalgodaHow to skip host name verification when we do an HTTP request over SSL

     

    Sometimes we need to skip host name verification when we make an HTTPS call to an external server. In most such cases you will get an error saying host name verification failed. To avoid this we can implement a host name verifier and return true from the verify method.  See the following sample code.

    HttpsURLConnection conHttps = (HttpsURLConnection) new URL(urlVal).openConnection();
    conHttps.setRequestMethod("HEAD");
    //We will skip host name verification as this is just a testing endpoint. This verification skip
    //will be limited only to this connection
    conHttps.setHostnameVerifier(DO_NOT_VERIFY);
    if (conHttps.getResponseCode() == HttpURLConnection.HTTP_OK) {
        //Connection was successful
    }

    static HostnameVerifier DO_NOT_VERIFY = new HostnameVerifier() {
        public boolean verify(String hostname, SSLSession session) {
            return true;
        }
    };

    sanjeewa malalgodaHow to avoid web application deployment failures due to deployment listener class loading issues - WSO2 Application Server

    WSO2 Application Server can be used to deploy web applications and services. For some advanced use cases we might need to handle deployment tasks and post-deployment tasks. To achieve this, listeners can be defined globally in repository/conf/tomcat/web.xml as follows.


    <listener>
      <listener-class>com.test.task.handler.DeployEventGenerator</listener-class>
    </listener>
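    For reference, a listener registered this way is typically just a javax.servlet.ServletContextListener implementation. The following is a hypothetical skeleton of such a class; the DeployEventGenerator name simply mirrors the configuration above and is not a real WSO2 class.

    package com.test.task.handler;

    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;

    // Hypothetical skeleton of the listener referenced in the web.xml snippet above
    public class DeployEventGenerator implements ServletContextListener {

        public void contextInitialized(ServletContextEvent sce) {
            // Runs when a web application context is deployed/started;
            // post-deployment tasks go here
        }

        public void contextDestroyed(ServletContextEvent sce) {
            // Runs when the web application context is undeployed/stopped
        }
    }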



    When we deploy a web application to WSO2 Application Server, the class loading environment can sometimes change. To fix this we can deploy the webapp with the "Carbon" class loading environment. For some use cases we ship CXF/Spring dependencies within the web application, so any class loader environment other than "CXF" might fail.

    To fix this, we need to add a file named webapp-classloading.xml to META-INF. The content should be as follows.


    <Classloading xmlns="http://wso2.org/projects/as/classloading">
       <Environments>Carbon</Environments>
    </Classloading>

    Saliya EkanayakeRunning C# MPI.NET Applications with Mono and OpenMPI

    I wrote an earlier post on the same subject, but just realized it's not detailed enough even for me to retry, hence the reason for this post.
    I've tested this in FutureGrid with InfiniBand to run our C# based pairwise clustering program on real data up to 32 nodes (I didn't find any restriction to go above this many nodes - it was just the maximum I could reserve at that time).
    What you'll need
    • Mono 3.4.0
    • MPI.NET source code revision 338.
        svn co https://svn.osl.iu.edu/svn/mpi_net/trunk -r 338 mpi.net
    • OpenMPI 1.4.3. Note this is a retired version of OpenMPI and we are using it only because that's the best that I could get MPI.NET to compile against. If in future MPI.NET team provides support for a newer version of OpenMPI, you may be able to use it as well.
    • Automake 1.9. Newer versions may work, but I encountered some errors in the past, which made me stick with version 1.9.
    How to install
    1. I suggest installing everything to a user directory, which will avoid you requiring super user privileges. Let's create a directory called build_mono inside home directory.
       mkdir ~/build_mono
      The following lines added to your ~/.bashrc will help you follow the rest of the document.
       BUILD_MONO=~/build_mono
      PATH=$BUILD_MONO/bin:$PATH
      LD_LIBRARY_PATH=$BUILD_MONO/lib
      ac_cv_path_ILASM=$BUILD_MONO/bin/ilasm

      export BUILD_MONO PATH LD_LIBRARY_PATH ac_cv_path_ILASM
      Once these lines are added do,
       source ~/.bashrc
    2. Build automake by first going to the directory that contains automake-1.9.tar.gz and doing,
       tar -xzf automake-1.9.tar.gz
      cd automake-1.9
      ./configure --prefix=$BUILD_MONO
      make
      make install
      You can verify the installation by typing which automake, which should point to automake inside $BUILD_MONO/bin
    3. Build OpenMPI. Again, change directory to where you downloaded openmpi-1.4.3.tar.gz and do,
       tar -xzf openmpi-1.4.3.tar.gz
      cd openmpi-1.4.3
      ./configure --prefix=$BUILD_MONO
      make
      make install
      Optionally if Infiniband is available you can point to the verbs.h (usually this is in /usr/include/infiniband/) by specifying the folder /usr in the above configure command as,
       ./configure --prefix=$BUILD_MONO --with-openib=/usr
       If building OpenMPI is successful, you'll see the following output for the mpirun --version command,
       mpirun (Open MPI) 1.4.3

      Report bugs to http://www.open-mpi.org/community/help/
      Also, to make sure the Infiniband module is built correctly (if specified) you can do,
       ompi_info|grep openib
      which, should output the following.
       MCA btl: openib (MCA v2.0, API v2.0, Component v1.4.3)
    4. Build Mono. Go to directory containing mono-3.4.0.tar.bz2 and do,
       tar -xjf mono-3.4.0.tar.bz2
      cd mono-3.4.0
      Mono 3.4.0 release is missing a file, which you'll need to add by pasting the following content into a file called ./mcs/tools/xbuild/targets/Microsoft.Portable.Common.targets
       <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
      <Import Project="..\Microsoft.Portable.Core.props" />
      <Import Project="..\Microsoft.Portable.Core.targets" />
      </Project>
      You can continue with the build by following,
       ./configure --prefix=$BUILD_MONO
      make
      make install
      There are several configuration parameters that you can play with and I suggest going through them either in README.md or in ./configure --help. One parameter, in particular, that I'd like to test with is --with-tls=pthread
    5. Build MPI.NET. If you were wondering why we had that ac_cv_path_ILASM variable in ~/.bashrc then this is where it'll be used. MPI.NET by default tries to find the Intermediate Language Assembler (ILASM) at /usr/bin/ilasm2, which does not exist because 1. we built Mono into $BUILD_MONO and not /usr, and 2. newer versions of Mono call this ilasm, not ilasm2. Therefore, after digging through the configure file I found that we can specify the path to the ILASM by exporting the above environment variable.
      Alright, back to building MPI.NET. First copy the downloaded Unsafe.pl.patch to the subversion checkout of MPI.NET. Then change directory there and do,
       patch MPI/Unsafe.pl < Unsafe.pl.patch
      This will say some hunks failed to apply, but that should be fine. It only means that those are already fixed in the checkout. Once patching is completed continue with the following.
       ./autogen.sh
      ./configure --prefix=$BUILD_MONO
      make
      make install
      At this point you should be able to find MPI.dll and MPI.dll.config inside the MPI directory, which you can use to bind against your C# MPI application.
    How to run
    • Here's a sample MPI program written in C# using MPI.NET.
        using System;
      using MPI;

      namespace MPINETinMono
      {
          class Program
          {
              static void Main(string[] args)
              {
                  using (new MPI.Environment(ref args))
                  {
                      Console.Write("Rank {0} of {1} running on {2}\n",
                                    Communicator.world.Rank,
                                    Communicator.world.Size,
                                    MPI.Environment.ProcessorName);
                  }
              }
          }
      }
    • There are two ways that you can compile this program.
      1. Use Visual Studio referring to MPI.dll built on Windows
      2. Use mcs from Linux referring to MPI.dll built on Linux
        mcs Program.cs -reference:$MPI.NET_DIR/tools/mpi_net/MPI/MPI.dll
        where $MPI.NET_DIR refers to the subversion checkout directory of MPI.NET
        Either way you should be able to get Program.exe in the end.
    • Once you have the executable you can use mono with mpirun to run this in Linux. For example you can do the following within the directory of the executable,
        mpirun -np 4 mono ./Program.exe
      which will produce,
        Rank 0 of 4 running on i81
      Rank 2 of 4 running on i81
      Rank 1 of 4 running on i81
      Rank 3 of 4 running on i81
      where i81 is one of the compute nodes in FutureGrid cluster.
      You may also use other advanced options with mpirun to control process mapping and binding. Note that the syntax for this differs from the latest versions of OpenMPI, so it's a good idea to look at the available options with mpirun --help. For example, you may be interested in specifying the following options,
        hostfile=<path-to-hostfile-listing-available-computing-nodes>
      ppn=<number-of-processes-per-node>
      cpp=<number-of-cpus-to-allocate-for-a-process>

      mpirun --display-map --mca btl ^tcp --hostfile $hostfile --bind-to-core --bysocket --npernode $ppn --cpus-per-proc $cpp -np $(($nodes*$ppn)) ...
      where --display-map prints how processes are bound to processing units and --mca btl ^tcp turns off the tcp transport.
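      For reference, here is a minimal sketch of how these pieces could fit together, assuming the OpenMPI 1.4 hostfile syntax and made-up node names and counts (i81/i82 with 8 slots each):
        # hostfile (~/hosts): one line per node, slots = processes that node can take
      i81 slots=8
      i82 slots=8

      hostfile=~/hosts
      ppn=8
      cpp=1
      nodes=2

      mpirun --display-map --mca btl ^tcp --hostfile $hostfile --bind-to-core --bysocket --npernode $ppn --cpus-per-proc $cpp -np $(($nodes*$ppn)) mono ./Program.exe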
    That's all you'll need to run C# based MPI.NET applications in Linux with Mono and OpenMPI. Hope this helps!

    Sajith RavindraDetermining the size of a SOAP message inside a proxy service of WSO2 ESB

    Below are two methods that can be used to determine the SOAP message size inside a WSO2 ESB proxy service.

    Method 1 - Using script mediator

    You can use the script mediator to find the size of the complete message. The following is an example of how you can do it,

     <script language="js">var msgLength = mc.getEnvelopeXML().toString().length;
    mc.setProperty("MSG_LENGTH", msgLength);</script>

    <log level="custom">
    <property name="MSG_LENGTH" expression="get-property('MSG_LENGTH')"/>
    </log>

    In this sample, the script gets the string length of the message envelope and assigns the value to a property. The value is then read outside the script mediator and logged.

    You can also get the length of only the message payload by calling mc.getPayloadXML() inside the script mediator.
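    For example, a minimal sketch of that variant (the same pattern as above, only measuring the payload; the property name is arbitrary):

     <script language="js">var payloadLength = mc.getPayloadXML().toString().length;
    mc.setProperty("PAYLOAD_LENGTH", payloadLength);</script>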

    Refer [1] for more information on script mediator.

    Method 2 - Read the Content-Length header

    Note that this method can only be used if the Content-Length header is present, since it is not a required header. The value of the Content-Length header can be read as follows,

    <property name="CONTENT_LENGTH" expression="get-property('transport', 'Content-Length')"/>
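    As with Method 1, the captured value can then be logged; a minimal sketch, assuming the property name used above:

    <log level="custom">
    <property name="CONTENT_LENGTH" expression="get-property('CONTENT_LENGTH')"/>
    </log>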

    Please reply if you know of any alternative or better methods for finding the SOAP message size inside a proxy service.

    [1]-https://docs.wso2.com/display/ESB480/Script+Mediator

    Dimuthu De Lanerolle

    Java Tips .....

    To get directory names inside a particular directory ....

    private String[] getDirectoryNames(String path) {

            File fileName = new File(path);
            String[] directoryNamesArr = fileName.list(new FilenameFilter() {
                @Override
                public boolean accept(File current, String name) {
                    return new File(current, name).isDirectory();
                }
            });
            log.info("Directories inside " + path + " are " + Arrays.toString(directoryNamesArr));
            return directoryNamesArr;
        }



    To retrieve links on a web page ......

     private List<String> getLinks(String url) throws ParserException {
            Parser htmlParser = new Parser(url);
            List<String> links = new LinkedList<String>();

            NodeList tagNodeList = htmlParser.extractAllNodesThatMatch(new NodeClassFilter(LinkTag.class));
            for (int x = 0; x < tagNodeList.size(); x++) {
                LinkTag loopLinks = (LinkTag) tagNodeList.elementAt(x);
                String linkName = loopLinks.getLink();
                links.add(linkName);
            }
            return links;
        }


    To search a directory recursively for files with specific extension(s) ......

    private File[] getFilesWithSpecificExtensions(String filePath) {

        // Extension list - do not specify the leading "."
        // Uses org.apache.commons.io.FileUtils
        Collection<File> files = FileUtils.listFiles(new File(filePath),
                new String[]{"txt"}, true);

        File[] extensionFiles = new File[files.size()];
        int count = 0;

        for (File file : files) {
            extensionFiles[count] = file;
            count++;
        }
        return extensionFiles;
    }



    Reading files in a zip

         public static void main(String[] args) throws IOException {
            final ZipFile file = new ZipFile("Your zip file path goes here");
            try
            {
                final Enumeration<? extends ZipEntry> entries = file.entries();
                while (entries.hasMoreElements())
                {
                    final ZipEntry entry = entries.nextElement();
                    System.out.println( "Entry "+ entry.getName() );
                    readInputStream( file.getInputStream( entry ) );
                }
            }
            finally
            {
                file.close();
            }
        }
            private static int readInputStream( final InputStream is ) throws IOException {
                final byte[] buf = new byte[8192];
                int read = 0;
                int cntRead;
                while ((cntRead = is.read(buf, 0, buf.length) ) >=0)
                {
                    read += cntRead;
                }
                return read;
            }

    John MathonThe Virtuous Circle is key to understanding how the world is changing – Mobile, Social, Cloud, Open Source, APIs, DevOps

    Virtuous Circle

    I could talk about each of these components in isolation, for instance Mobile, its growth and the changes it is fostering, but you wouldn't get the big picture.  The change we are seeing can only be understood by considering all these components together.  They each affect each other, and the widespread adoption and improvement of each drives the success of the others.  Therefore, this article is intended to give you the context to understand the entire change we are undergoing rather than any particular element.  Seen that way, it becomes clear that adopting any one technology or doing any one thing in isolation will ultimately fail.

    The idea of the virtuous circle is that each component of the circle has been critical to the success of the other parts.  It is hard to imagine the success of any of these elements in isolation.  The massive adoption of key aspects of the circle is clearly needed, but each element has also been key to the success and adoption of the other parts.  The constant improvement in each aspect of the circle drives improvements in the other parts, driving more adoption and more innovation.  The process feeds itself and grows faster as each element grows.  It is impossible to say which element is less critical, so they have to be considered as one.

    These components of the virtuous circle imply that to leverage the benefits of the circle you have to be using components of the circle in your processes.   The more components you use the more benefit you gain from the circle and the faster the change you will experience.   The combination of these technologies is being called Platform 3.0.

    The benefits of each part of the virtuous circle

    1. APIs

    APIs are the honeybee of software (Apis is Latin for honeybee)

    RESTful APIs became popular after the Cloud and Mobile became big.  APIs were the basic building blocks of Mobile Applications and most Mobile Applications were hosted in the cloud.

    After Apple introduced the App Store, some 600,000 applications were created over the next 5 years.  These Apps went through a massive increase in power and capability, and they needed services (APIs) in the cloud more and more to support the mobile application growth.

    Increasing API services in the Cloud drove new Mobile Apps, but also drove billions in new revenue to companies that were able to sell those APIs, further powering the API revolution.  As Enterprises and developers became more and more enamored with the simple RESTful API paradigm, it began to replace the SOAP / SOA mantra in Enterprises.

    The increasing success of APIs fosters a parallel development to the App Stores, with the proliferation of social API Stores which let you find APIs, subscribe to APIs and publish APIs to communities of developers.  The number of APIs has ballooned to tens of thousands, many earning billions for their creators.  The success of APIs is changing the role of the CIO, giving them a role in the top line of a company and not just its costs, and making them a more important player.

    The growth and success of APIs encourages Enterprises to refactor their architectures using RESTful APIs, implement an API Store, and deploy API Managers for internal development as well as for publishing APIs for external consumption, leading to vast changes in Enterprise Architecture.  Selling services as APIs requires a new way to manage APIs and the applications built on them, whether mobile or conventional, for internal or external consumption.  The success of APIs inside organizations forces vendors to support RESTful APIs in Enterprise Software of all types.  Most open source software has been changed to use RESTful APIs as its standard way of interfacing.

    2. Mobile


    Mobile took off after the introduction of the Apple iPhone in 2007.  Less than 10 years later there are now over 1 BILLION smartphones in use, and smartphones are expected to reach ubiquity fairly fast.

    Even more amazing, the adoption of powerful, expensive smartphones which can support powerful mobile apps is keeping pace.  Mobile App usage has grown dramatically, so that users now spend 84% of their time on their phones in Apps on average, not making calls or doing other traditional phone things.

    The social aspect of the mobile devices themselves lends itself to the proliferation of mobile apps, and socialization via the stores and other social apps leads to massive adoption and growth of both.

    The growing adoption of Mobile was accelerated by the introduction of the iPad and subsequently other tablet devices.   The mass adoption in Enterprises and consumer space of the mobile interface has led many to believe that within a few years all interaction with computers will be via mobile devices like tablets and smartphones transforming how everyone believes applications will be built and delivered in the future.

    Mobile devices support literally hundreds of Apps because of the successful model of the App Store and the integration of Apps in the mobile devices.  This has pushed Desktop software to become more and more like a mobile device.  The trend toward all interactions eventually happening via a mobile-like interface, with applications managed on any device with App Stores and Social capabilities, is unstoppable.

    Lest one think this is simply a fad dependent on the smartphone's success, one has to realize that companies such as Uber have a valuation of $17 billion, and all they have is an App.  The Uber app delivers a disruptive capability that empowers many people to make money in new ways and lets people find services faster than ever.

    Other examples of the transformative power of the combination of mobile, applications and social are ubiquitous.  Companies in the retail sector frequently hold weekly meetings to review their feedback on Yelp and see if they can improve their service.  New ways of sharing documents and pictures transform how we view privacy and how Enterprise data is distributed.  Mobile has a major impact on the security strategies of companies.

    This unstoppable force of mobile is driving all the other aspects of the virtuous circle as well.  For instance, the requirement to deliver, update and improve mobile apps frequently has put pressure on DevOps automation and Cloud scalability.

    3. Social


    The growth of social on the desktop started before the Smartphone but really took off in the last 8 years with the smartphone.

    Most users now do their social activity on the smartphone in order to interact in the moment wherever they are doing something.   Pictures, texting, various forms of interconnection apps are being created constantly.

    Facebook and other pioneers have more than 1 billion users and the use of social has fed a tremendous change in the way Enterprises think of connecting with customers.  They want more and more to learn from social interactions and be part of them so they can influence buying behavior.

    Enterprises realize they have to have social capabilities, to be able to capture detailed usage information, detailed interactions and then to mine that information for actionable knowledge like the social companies, Google, other cloud companies have.   Such capability helps them improve their services, make their services more intelligent as well as increasing opportunities to sell to customers.  This requires big-data in most cases.  This new way of storing and analyzing data helps improve applications and services rapidly and is being adopted by Enterprises en masse.

    The growth of social applications, bigdata and the need to scale to billions of users has driven collaboration in open source like never before.  The success of social has become a key part of the success of mobile, APIs, services and Applications, so that most companies must deal with this new way of interacting with and learning from customers and users.

    4. Cloud


    Cloud started at a time close to the start of Mobile.  One could look at Cloud as simply an extension of the Internet era, but the real start of the Cloud is really about Amazon and the public infrastructure services it created.

    Today, this business is at close to $25 billion and Amazon has a 50% market share.  Amazon’s business is growing at 137% annually and the cloud is becoming an unstoppable force as much as any of these other components in the virtuous circle.

    Cloud is the underpinning of most of the other elements of the virtuous circle as the way that companies deliver services.  The way most startups get going is by leveraging the disruptive power of the Cloud.  The Cloud enables a company (big or small) to acquire hardware for offering a service instantly instead of over the months required before.  More important for small companies, the ability to build and deliver a service in a fraction of the time it used to take, with almost zero capital cost and variable expenses that grow as they grow, makes many more companies viable as startups.

    The Cloud disruption means most companies no longer need as much venture capital, putting more of the benefit in entrepreneurs' hands and fostering increasing numbers of startups.  The cloud and social also promote a new way of funding companies, with Kickstarter campaigns able to raise millions for entrepreneurs.  This drives massive innovation and the creation of new devices, applications and services.

    Larger Enterprises are realizing that the cloud has benefits for them too.  Many are adopting more and more cloud services.  Numerous SaaS companies started in the internet era are now based on cloud services.   Companies can’t avoid the Cloud adoption as Personal Cloud use explodes and more and more skunk works usage of the cloud happens.

    SaaS has grown into an industry worth over $130 billion.  SaaS applications are combined with IaaS, and now PaaS (DevOps) is changing the infrastructure and how Enterprises are built.

    The transformation of Enterprise infrastructure to Cloud will take decades but is a multi-trillion dollar business eventually.

    The economics of the Cloud are unstoppable.  Most Enterprises are simply not in the technology business and have no reason or basis for running, hosting, buying technology infrastructure and basic applications.

    Open source projects have driven massive adoption of cloud technology and cloud technology is dependent on the open source technology that underlies much of it.

    5. DevOps


    The Cloud by itself allowed you to speed the acquisition of hardware but the management of this hardware was still largely manual, laborious and expensive.  DevOps is the process of taking applications from development into production.  This was a significant cost and time sink for new services, applications and technology.

    DevOps automates the acquisition, operation, upgrade, deployment and customer support of services and applications.  Without DevOps automation, the ability to upgrade daily, the cost to maintain, and the reliability of Cloud based services would have faltered.  DevOps started with the growth of the open source projects Puppet and Chef but quickly went beyond that with the growth of PaaS.  PaaS is expected to be a $6-14 billion market in 3 or 4 years.  Heroku demonstrated within several years of its founding that it had 70,000 applications developed, built and deployed in the cloud, demonstrating the power of PaaS to reduce costs and make it easier for small companies to do development.

    The ability to deliver and develop applications faster and more easily comes from the automation and capability of PaaS and DevOps, which let people dream up applications, implement them, deploy them and scale them almost effortlessly.  This has allowed so many people to offer new mobile applications and new services, and new companies to be formed and to succeed faster than ever before.  It has allowed applications to grow to billions of users.

    Numerous open source projects provide the basic building blocks of DevOps and PaaS technology which drive the industry forward.   The success of the DevOps / PaaS technology is also changing the way Enterprises build and deploy software for inside or outside consumption.

    6. Open Source


    Underlying all these other elements of the virtuous circle has been a force of collaboration allowing companies to share technology and ideas that has enabled the rapid development and proliferation of new technologies.

    The number of open source projects is doubling every 13 months according to surveys.   Enterprises now consider Open Source software the best quality software available.  In many cases it is the ONLY way to build and use some technologies.

    The open source movement is powering Cloud technology, Big-data, analytics for big-data, social, APIs, Mobile technology with so many projects and useful components it is beyond elaboration.  It is an essential piece of building any application today.

    The growth of open source has fostered dramatically increasing innovation.  Initially HBase was one of the only BigData open source projects, but Cassandra, MongoDB and numerous others soon appeared.  The NSA itself, having built its own big-data technology, open sourced it as well.  In every area of the virtuous circle we see open source companies critical to the growth and innovation taking place.

    Companies form around open source projects to support the organizations using those projects, which is critical to the success of the open source project.  Some of these companies are now approaching the valuation and sales of traditional proprietary software companies.  There is no doubt that many of these companies will eventually become as big as traditional closed source companies, and we may see more and more disruption of the closed source model as companies realize there is no advantage to it.

    The Impact of the Virtuous Circle


    The virtuous circle of technology has been in operation like this for the last 8 years or so.   Its existence cannot be denied so the questions are:

    1) To what extent do these technologies change the underlying costs and operation of my business?

    2) To what extent do these technologies change the way I sell my services to the world?

    3) To what extent do I need to adopt these technologies or become a victim of disruption?

    These questions should be critical to any company, to its business leaders as well as the CIO, CTO and software technologists.  An example of this is Uber, which I refer to every now and then.  The cab industries in NYC, Paris and other cities weren't looking at the Cloud, Mobile apps or Social.  They didn't see that they could offer dramatically better service to customers by integrating their cabs with mobile devices, the cloud and social.  So, they have uniformly been surprised by the growth of Uber and now competitors like Lyft etc.  I don't know how this will resolve in that case, but we can see how the music industry hasn't had a smooth transition to the new technologies.

    Some businesses such as advertising are undergoing a radical transformation.  Advertising was one of the least technology savvy industries for many years.  The growth of digital advertising has changed this business to one of the most technology intensive businesses.  One advertising business I talked to is contemplating 70 billion transactions/day.

    Every organization that faces consumers is feeling the effects of the virtuous circle.  The need to adopt mobile apps, to adopt social technology, big-data, APIs and consequently to adopt Cloud, DevOps, Open Source is unmistakable.

    The impact of these technology improvements affects the way everyone develops software, the cost to operate a business, and the ability to innovate, be more agile and adapt to changes that are happening faster and faster.  Some are calling this change in the basic building blocks of software Platform 3.0.  I will explain Platform 3.0 and what it is in later blogs, but it is a critical change in Enterprise and software development that every organization needs to look at.

    Therefore the impact of the virtuous circle has become virtually ubiquitous.  The scale of these businesses, mobile with billions of users, social with billions of users, APIs with billions in revenue and tens of thousands of APIs, Cloud now a $160 billion/year business growing at a very high rate, and the other changes that have spun off from them in terms of how everyone operates, makes this technology and this circle critical to understand in today's world.

    Changes to the Virtuous Circle

    As we move forward there are some things we can already see happening.  I will be blogging more on each of the topics below.

    1) Internet of Things is real and growing very fast


    The Internet of Things (IoT) is expected to become somewhere between a $7 and $19 trillion business in very few years.  This figure is reached by looking at all the hardware that will have IoT capability.  In business we will see IoT everywhere.  The underlying technology of IoT will undergo massive change like all the previously described areas, so I see IoT as already integrated into the Virtuous Circle.  Some IoT technologies are already open source, and there is more and more movement toward standards and collaborative development.

    2) The Network Effect

    Having a thermostat that can learn and be managed remotely is cool and somewhat useful, but when you combine it with other IoT devices the value grows substantially.  Being able to monitor your body's workings and consumption is useful to the individual, but nobody knows what we could do if we had this information over many millions of people.  The effect on health could be dramatic, essentially allowing us to reduce the cost of medical trials and affecting health care costs and outcomes dramatically.  The same goes for all IoT devices, and for all APIs.  Each API by itself has some utility.  However, when one combines APIs, IoT devices, mobile apps and billions of people with billions of devices, we don't know where all this is going, but I believe it means the virtuous circle will continue to dominate the change we see in the world for the foreseeable future.

    3) Privacy and Security

    So far in this evolution of the virtuous circle we have mostly sidestepped issues of privacy and security.  It is only in 2013 and now in 2014 that we have seen the Cloud start to pull ahead in dealing with some security issues.  All of these trends have had a negative effect on privacy.  People seem to be waiting for the first scandal, or to see where this will go, before they make any decisions about how we will adjust our ideas of privacy.  I believe at some point in the next decade we will see a tremendous change in technology to support privacy, but that remains to be seen.

    Other resources you can read on this topic:

    The virtuous Circle 

    The Nexus of Forces: Social, Mobile, Cloud and Information

    The “Big Five” IT trends of the next half decade: Mobile, social, cloud, consumerization, and big data

    Cloud computing empowers Mobile, Social, Big Data

    Nexus of New Forces – Big Data, Cloud, Mobile and Social

    The technology “disruption” occurring in today’s business world is driven by open source and APIs and a new paradigm of enterprise collaboration

    IoT 7 TRILLION, 14 TRILLION or 19 TRILLION DOLLARS!


    Prabath Siriwardena[Book] Advanced API Security: Securing APIs with OAuth 2.0, OpenID Connect, JWS, and JWE

    APIs are becoming increasingly popular for exposing business functionalities to the rest of the world. According to an infographic published by Layer 7, 86.5% of organizations will have an API program in place in the next five years. Of those, 43.2% already have one. APIs are also the foundation of building communication channels in the Internet of Things (IoT). From motor vehicles to kitchen appliances, countless items are beginning to communicate with each other via APIs. Cisco estimates that as many as 50 billion devices could be connected to the Internet by 2020.

    This book is about securing your most important APIs. As is the case with any software system design, people tend to ignore the security element during the API design phase. Only at deployment or at the time of integration do they start to address security. Security should never be an afterthought—it’s an integral part of any software system design, and it should be well thought out from the design’s inception. One objective of this book is to educate you about the need for security and the available options for securing an API. The book also guides you through the process and shares best practices for designing APIs for rock-solid security.

    API security has evolved a lot in the last five years. The growth of standards has been exponential. OAuth 2.0 is the most widely adopted standard. But it’s more than just a standard—it’s a framework that lets people build standards on top of it. The book explains in depth how to secure APIs, from traditional HTTP Basic Authentication to OAuth 2.0 and the standards built around it, such as OpenID Connect, User Managed Access (UMA), and many more. JSON plays a major role in API communication. Most of the APIs developed today support only JSON, not XML. This book also focuses on JSON security. JSON Web Encryption (JWE) and JSON Web Signature (JWS) are two increasingly popular standards for securing JSON messages. The latter part of this book covers JWE and JWS in detail.


    Another major objective of this book is to not just present concepts and theories, but also explain each of them with concrete examples. The book presents a comprehensive set of examples that work with APIs from Google, Twitter, Facebook, Yahoo!, Salesforce, Flickr, and GitHub. The evolution of API security is another topic covered in the book. It’s extremely useful to understand how security protocols were designed in the past and how the drawbacks discovered in them pushed us to where we are today. The book covers some older security protocols such as Flickr Authentication, Yahoo! BBAuth, Google AuthSub, Google ClientLogin, and ProtectServe in detail.

    There are so many people who helped me write this book. Among them, I would first like to thank Jonathan Hassel, senior editor at Apress, for evaluating and accepting my proposal for this book. Then, of course, I must thank Rita Fernando, coordinating editor at Apress, who was extremely patient and tolerant of me throughout the publishing process. Thank you very much Rita for your excellent support—I really appreciate it. Also, Gary Schwartz and Tiffany Taylor did an amazing job reviewing the manuscript—many thanks, Gary and Tiffany! Michael Peacock served as technical reviewer—thanks, Michael, for your quality review comments, which were extremely useful. Thilina Buddhika from Colorado State University also helped in reviewing the first two chapters of the book—many thanks, again, Thilina!

    Dr. Sanjiva Weerawarana, the CEO of WSO2, and Paul Fremantle, the CTO of WSO2, are two constant mentors for me. I am truly grateful to both Dr. Sanjiva and Paul for everything they have done for me. I also must express my gratitude to Asanka Abeysinghe, the Vice President of Solutions Architecture at WSO2 and a good friend of mine—we have done designs for many Fortune 500 companies together, and those were extremely useful in writing this book. Thanks, Asanka!

    Of course, my beloved wife, Pavithra, and my little daughter, Dinadi, supported me throughout this process. Pavithra wanted me to write this book even more than I wanted to write it. If I say she is the driving force behind this book, it’s no exaggeration. She simply went beyond just feeding me with encouragement—she also helped immensely in reviewing the book and developing samples. She was always the first reader. Thank you very much, Pavithra.

    My parents and my sister have been the driving force behind me since my birth. If not for them, I wouldn’t be who I am today. I am grateful to them for everything they have done for me. Last but not least, my wife’s parents—they were amazingly helpful in making sure that the only thing I had to do was to write this book, taking care of almost all the other things that I was supposed to do.

    The point is that although writing a book may sound like a one-man effort, it’s the entire team behind it who makes it a reality. Thank you to everyone who supported me in many different ways.

    I hope this book effectively covers this much-needed subject matter for API developers, and I hope you enjoy reading it.

    Amazon : http://www.amazon.com/Advanced-API-Security-Securing-Connect/dp/1430268182

    Lali DevamanthriCan We Trust Endpoint Security ?

     

    Endpoint security is an approach to network protection that requires each computing device on a corporate network to comply with certain standards before network access is granted. Endpoints can include PCs, laptops, smart phones, tablets and specialized equipment such as bar code readers or point of sale (POS) terminals.

    Endpoint security systems work on a client/server model in which a centrally managed server or gateway hosts the security program and an accompanying client program is installed on each network device. When a client attempts to log onto the network, the server program validates user credentials and scans the device to make sure that it complies with defined corporate security policies before allowing access to the network.

    When it comes to endpoint protection,  information security professionals believe that their existing security solutions are unable to prevent all endpoint infections, and that anti-virus solutions are ineffective against advanced targeted attacks. Overall, end-users are their biggest security concern.

    “The reality today is that existing endpoint protection, such as anti-virus, is ineffective because it is based on an old-fashioned model of detecting and fixing attacks after they occur,” said Rahul Kashyap, chief security architect at Bromium, in a statement. “Sophisticated malware can easily evade detection to compromise endpoints, enabling cybercriminals to launch additional attacks that penetrate deeper into sensitive systems. Security professionals should explore a new paradigm of isolation-based protection to prevent these attacks.”

    Saltzer’s and Schroeder’s design principles ( http://nob.cs.ucdavis.edu/classes/ecs153-2000-04/design.html ) provide us with an opportunity to reflect on the protection mechanisms that we employ (as well as on some principles that we may have forgotten about). Using these to examine AV’s effectiveness as a protection mechanism leads us to conclude that AV, as a protection mechanism, is a non-starter.

    That does not mean that AV is completely useless — on the contrary, its utility as a warning or detection mechanism that primary protection mechanisms have failed is valuable — assuming of course that there is a mature security incident response plan and process in place (i.e. with proper post incident review (PIR), root cause analysis (RCA) and continual improvement process (CIP) mechanisms).

    Unfortunately, many organisations employ AV as a primary endpoint defense against malware. But that is not all: their expectation of the technology is not only to protect, but to perform remediation as well. They “outsource” the PIR, RCA and CIP to the AV vendor. The folly of their approach is painfully visible as they float rudderless from one malware outbreak to the next.

    There are many alternatives for endpoint security: AppLocker, LUA, SEHOP, ASLR and DEP are all freely provided by Microsoft. So is removing users’ administrative rights (why did we ever give them to users in the first place?).

    Other whitelisting technologies worthy of consideration are NAC (with remediation) and other endpoint compliance checking tools, as well as endpoint firewalls in default deny mode.

     

     

     


    Footnotes