Channel: Sanjeewa Malalgoda's Blog

How to use Authorization code grant type (Oauth 2.0) with WSO2 API Manager 1.8.0

1. Create an API in the WSO2 API Manager publisher and create an application in the API store. When you create the application, give a callback URL as follows: http://localhost:9764/playground2/oauth2client
Since I'm running the playground2 application in Application Server with port offset 1, I used the above address, but you are free to use any URL.

2. Paste the following authorize URL in the browser, substituting your own client_id and redirect_uri:

http://localhost:8280/authorize?response_type=code&scope=PRODUCTION&client_id=O2OkOAfBQlicQeq5ERgE7Wh4zeka&redirect_uri=http://localhost:9764/playground2/oauth2client

3. It will then redirect back to your callback URL with the authorization code. Copy the code from the response:
Response from step 2:
http://localhost:9764/playground2/oauth2client?code=e1934548d0a0883dd5734e24412310
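If you are scripting this flow, the authorization code can be pulled out of the redirect URL programmatically. A minimal Python sketch (using the sample callback URL above; the variable names are my own):

```python
from urllib.parse import urlparse, parse_qs

# Sample redirect received by the callback application (from the step above)
redirect_url = ("http://localhost:9764/playground2/oauth2client"
                "?code=e1934548d0a0883dd5734e24412310")

# The authorization code arrives as the 'code' query parameter
query = parse_qs(urlparse(redirect_url).query)
auth_code = query["code"][0]
print(auth_code)  # e1934548d0a0883dd5734e24412310
```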

4. Exchange the authorization code for an access token using the following:

Sample command:
curl -v -X POST --basic -u YOUR_CLIENT_ID:YOUR_CLIENT_SECRET -H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" -k -d "client_id=YOUR_CLIENT_ID&grant_type=authorization_code&code=YOUR_AUTHORIZATION_CODE&redirect_uri=https://localhost/callback" https://localhost:9443/oauth2/token

Exact command:
curl -v -X POST --basic -u O2OkOAfBQlicQeq5ERgE7Wh4zeka:Eke1MtuQCHj1dhM6jKsIdxsqR7Ea -H "Content-Type: application/x-www-form-urlencoded;charset=UTF-8" -k -d "client_id=O2OkOAfBQlicQeq5ERgE7Wh4zeka&grant_type=authorization_code&code=e1934548d0a0883dd5734e24412310&redirect_uri=http://localhost:9764/playground2/oauth2client" http://localhost:8280/token

Response from step 04:
{"scope":"default","token_type":"bearer","expires_in":3600,
"refresh_token":"a0d9c7c4f96baed42da2c167e1ebbb75","access_token":"2de7da7e3822cf75fd7983cfe1337ec"}

5. Now call your API with the access token from step 4:

curl -k -H "Authorization: Bearer 2de7da7e3822cf75fd7983cfe1337ec" http://10.100.1.65:8280/test-api/1.0.0

Deploy WSO2 API Manager across multiple datacenters - High availability for API Manager

In this post I will discuss how to deploy WSO2 API Manager across multiple data centers.


Problems with a normal clustered deployment across multiple data centers:
  • The databases are accessed (by the gateway nodes in the secondary data center) across two regions. This slows down server startup, since multiple DB calls are invoked.
  • Publishing an API to the gateway is done through web-service calls across data centers.
  • Since the gateway at the secondary site uses the Key Manager node at the master site, API access token validation is done through web-service calls across data centers.
  • As we observed, gateways in the two deployments will not be synced up properly.
  • Throttle counts will be maintained per data center.
So this is not a scalable solution, as servers need to communicate across data centers.
The large number of database and web service calls can cause extreme slowness and, over time, lead to many other issues.
In situations like this, the ideal solution is to have one master data center and a few read-only data centers.
We still cannot overcome the issues caused by missing cluster communication across nodes, but we can perform basic tasks across all data centers.

Master data center.
The API Store, Publisher, Gateway and Key Manager are deployed here, all connected to read/write databases.
All API creation, subscription creation and token generation should happen here.
Once a new API is published, it is pushed to an artifact store such as Dropbox or a file server. The read-only nodes then pick up the API configuration from there (we may not be able to use the deployment synchronizer here).
Only in that way can we avoid API publishing calls between data centers.


Read-only data centers.
These data centers have gateway and key manager nodes, which only serve API requests.
Their database servers are kept in sync with the master data center databases; we can use a replicated database cluster for this.
We also may not be able to enable clustering across data centers, so each data center will keep its own throttle counters and similar state (which we cannot avoid).

Here is a sample deployment diagram for the suggested solution.

How to add an API using curl commands via the 3 UI steps: design/implement/manage

I'm listing these instructions here as they may help someone.

You can create an API with the same design/implement/manage calls the publisher UI uses.
Execute the following three commands and the API will be created, exactly as it is from the UI.

curl -F name="test-api" -F version="1.0" -F provider="admin" -F context="/test-apicontext" -F visibility="public" -F roles="" -F apiThumb="" -F description="" -F tags="testtag" -F action="design" -F swagger='{"apiVersion":"1.0","swaggerVersion":"1.2","authorizations":{"oauth2":{"scopes":[],"type":"oauth2"}},"apis":[{"index":0,"file":{"apiVersion":"1.0","basePath":"http://10.100.5.112:8280/test-apicontext/1.0","swaggerVersion":"1.2","resourcePath":"/test","apis":[{"index":0,"path":"/test","operations":[{"nickname":"get_test","auth_type":"Application & Application User","throttling_tier":"Unlimited","method":"GET","parameters":[{"dataType":"String","description":"AccessToken","name":"Authorization","allowMultiple":false,"required":true,"paramType":"header"},{"description":"RequestBody","name":"body","allowMultiple":false,"required":true,"type":"string","paramType":"body"}]},{"nickname":"options_test","auth_type":"None","throttling_tier":"Unlimited","method":"OPTIONS","parameters":[{"dataType":"String","description":"AccessToken","name":"Authorization","allowMultiple":false,"required":true,"paramType":"header"},{"description":"RequestBody","name":"body","allowMultiple":false,"required":true,"type":"string","paramType":"body"}]}]}]},"description":"","path":"/test"}],"info":{"title":"test-api","termsOfServiceUrl":"","description":"","license":"","contact":"","licenseUrl":""}}' -k -X POST -b cookies https://localhost:9443/publisher/site/blocks/item-design/ajax/add.jag


curl -F implementation_methods="endpoint" -F endpoint_type="http" -F endpoint_config='{"production_endpoints":{"url":"http://appserver/resource/ycrurlprod","config":null},"endpoint_type":"http"}' -F production_endpoints="http://appserver/resource/ycrurlprod" -F sandbox_endpoints="" -F endpointType="nonsecured" -F epUsername="" -F epPassword="" -F wsdl="" -F wadl="" -F name="test-api" -F version="1.0" -F provider="admin" -F action="implement" -F swagger='{"apiVersion":"1.0","swaggerVersion":"1.2","authorizations":{"oauth2":{"scopes":[],"type":"oauth2"}},"apis":[{"index":0,"file":{"apiVersion":"1.0","basePath":"http://10.100.5.112:8280/test-apicontext/1.0","swaggerVersion":"1.2","resourcePath":"/test","apis":[{"index":0,"path":"/test","operations":[{"nickname":"get_test","auth_type":"Application & ApplicationUser","throttling_tier":"Unlimited","method":"GET","parameters":[{"dataType":"String","description":"AccessToken","name":"Authorization","allowMultiple":false,"required":true,"paramType":"header"},{"description":"RequestBody","name":"body","allowMultiple":false,"required":true,"type":"string","paramType":"body"}]},{"nickname":"options_test","auth_type":"None","throttling_tier":"Unlimited","method":"OPTIONS","parameters":[{"dataType":"String","description":"AccessToken","name":"Authorization","allowMultiple":false,"required":true,"paramType":"header"},{"description":"RequestBody","name":"body","allowMultiple":false,"required":true,"type":"string","paramType":"body"}]}]}]},"description":"","path":"/test"}],"info":{"title":"test-api","termsOfServiceUrl":"","description":"","license":"","contact":"","licenseUrl":""}}' -k -X POST -b cookies https://localhost:9443/publisher/site/blocks/item-design/ajax/add.jag


curl -F default_version_checked="" -F tier="Unlimited" -F transport_http="http" -F transport_https="https" -F inSequence="none" -F outSequence="none" -F faultSequence="none" -F responseCache="disabled" -F cacheTimeout="300" -F subscriptions="current_tenant" -F tenants="" -F bizOwner="" -F bizOwnerMail="" -F techOwner="" -F techOwnerMail="" -F name="test-api" -F version="1.0" -F provider="admin" -F action="manage" -F swagger='{"apiVersion":"1.0","swaggerVersion":"1.2","authorizations":{"oauth2":{"scopes":[],"type":"oauth2"}},"apis":[{"index":0,"file":{"apiVersion":"1.0","basePath":"http://10.100.5.112:8280/test-apicontext/1.0","swaggerVersion":"1.2","resourcePath":"/test","apis":[{"index":0,"path":"/test","operations":[{"nickname":"get_test","auth_type":"Application & ApplicationUser","throttling_tier":"Unlimited","method":"GET","parameters":[{"dataType":"String","description":"AccessToken","name":"Authorization","allowMultiple":false,"required":true,"paramType":"header"},{"description":"RequestBody","name":"body","allowMultiple":false,"required":true,"type":"string","paramType":"body"}]},{"nickname":"options_test","auth_type":"None","throttling_tier":"Unlimited","method":"OPTIONS","parameters":[{"dataType":"String","description":"AccessToken","name":"Authorization","allowMultiple":false,"required":true,"paramType":"header"},{"description":"RequestBody","name":"body","allowMultiple":false,"required":true,"type":"string","paramType":"body"}]}]}]},"description":"","path":"/test"}],"info":{"title":"test-api","termsOfServiceUrl":"","description":"","license":"","contact":"","licenseUrl":""}}' -F outSeq="" -F faultSeq="json_fault" -F tiersCollection="Unlimited" -k -X POST -b cookies https://localhost:9443/publisher/site/blocks/item-design/ajax/add.jag

Planning large scale API Management deployment with clustering - WSO2 API Manager

When we do capacity planning we need to consider several factors. Here I will take a basic use case as the scenario and explain production recommendations.

With the default configuration we can expect the following TPS per gateway node:
Single gateway = 1000 TPS
Single gateway with a 30% buffer = 1300 TPS

Normally the following are mandatory for an HA setup:

WSO2 API Manager : Gateway - 1 active, 1 passive
WSO2 API Manager : Authentication - 1 active, 1 passive
WSO2 API Manager : Publisher - 1 active, 1 passive
WSO2 API Manager : Store - 1 active, 1 passive

You can compute the exact instance count from your expected load.
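As a rough sketch of that computation (the 1300 TPS figure is the buffered single-gateway estimate above; the 5000 TPS target is a hypothetical load, not from the post):

```python
import math

TPS_PER_GATEWAY = 1300  # single gateway with the 30% buffer, from above
target_tps = 5000       # hypothetical expected peak load

# Active gateways needed to carry the load, plus one passive node for HA
active_nodes = math.ceil(target_tps / TPS_PER_GATEWAY)
total_nodes = active_nodes + 1
print(active_nodes, total_nodes)  # 4 5
```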

Hardware Recommendation
Physical:
3 GHz dual-core Xeon/Opteron (or newer), 4 GB RAM (minimum: 2 GB for the JVM and 2 GB for the OS), 10 GB free disk space (minimum), sized to the expected storage requirements (calculate by considering file uploads and backup policies). E.g. if 3 Carbon instances run on one machine, it requires 4 CPUs, 8 GB RAM and 30 GB free space.
Virtual Machine:
2 compute units minimum (each unit having a 1.0-1.2 GHz Opteron/Xeon processor), 4 GB RAM, 10 GB free disk space; one CPU unit for the OS and one for the JVM. E.g. 3 Carbon instances require a VM with 4 compute units, 8 GB RAM and 30 GB free space.
EC2: a c3.large instance to run one Carbon instance (e.g. for 3 Carbon instances, an extra-large EC2 instance). Note: based on the I/O performance of the c3.large instance, it is recommended to run multiple instances in a larger instance (c3.xlarge or c3.2xlarge).


When we set up clusters, we normally have a gateway cluster, a store-publisher cluster and a key manager cluster separately.
Let me explain why we need this.
In API Manager, all Store and Publisher nodes need to be in the same cluster, as they perform cluster communication related to registry artifacts.
When you create an API from the Publisher, it should immediately appear in the Store node. For this, the registry cache has to be shared between Store and Publisher.
To get that replication we need to have them in a single cluster.

In the same way, all gateway nodes need to be in a single cluster, as they share throttle counts and other runtime-specific data.

Having a few (10-15) gateway nodes in a single cluster will not cause any issues.
The only thing to keep in mind is that as the node count increases within a cluster, cluster communication may take a small amount of additional time.

So in production deployments we normally do not cluster all nodes together.
Instead we cluster gateways, key managers, and stores/publishers separately.

How to import, export APIs with WSO2 API Manager 1.9.0

In API Manager 1.9.0 we introduced an API import/export capability. With it you can download an API from one deployment and import it into another.
The feature retrieves all the required meta information and registry resources for the requested API and generates a zipped archive, which can then be uploaded to another API Manager server.

To try this, first get the web application source code from this git repo (https://github.com/thilinicooray/api-import-export).

Then build it to generate the web application.
After that, deploy it in API Manager. You can use the web application UI for this: log in to the management console as an admin user and go to

Home > Manage > Applications > Add > Web Applications

and add the web application.
The zipped archive of an API consists of the following structure:

-
|_ Meta Information
   |_ api.json
|_ Documents
   |_ docs.json
|_ Image
   |_ icon.
|_ WSDL
   |_ -.wsdl
|_ Sequences
   |_ In Sequence
      |_.xml
   |_ Out Sequence
      |_.xml
   |_ Fault Sequence
      |_.xml
API Import accepts the exported zipped archive and creates the API in the target environment.

This feature has been implemented as a RESTful API.
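Before importing, you can sanity-check an exported archive against the layout above by listing its entries. A minimal sketch (it builds a toy in-memory archive with the folder names shown above rather than reading a real export):

```python
import io
import zipfile

# Build a tiny in-memory archive mimicking the exported layout shown above
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("Meta Information/api.json", "{}")
    z.writestr("Documents/docs.json", "{}")

# Verify the expected top-level folders are present before importing
with zipfile.ZipFile(buf) as z:
    names = z.namelist()
    has_meta = any(n.startswith("Meta Information/") for n in names)
    print(has_meta)  # True
```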

Please use the following curl command to export an API.
You need to provide basic auth headers for the admin user
and pass the following parameters:

Name of the API > name=test-sanjeewa
Version of the API > version=1.0.0
Provider of the API > provider=admin

curl -H "Authorization:Basic YWRtaW46YWRtaW4=" -X GET "https://localhost:9443/api-import-export/export-api?name=test-sanjeewa&version=1.0.0&provider=admin" -k > exportedApi.zip
You will now see the downloaded zip file in the current directory.

Then import the downloaded zip file into the other deployment.
See the following sample command.

Here 'file' is the archive downloaded above,
and the call should go to the server into which we want to import the API. I'm running my second server with port offset 1, so the URL is "https://localhost:9444".
curl -H "Authorization:Basic YWRtaW46YWRtaW4=" -F file=@"/home/sanjeewa/work/exportedApi.zip" -k -X POST "https://localhost:9444/api-import-export/import-api"
Now go to the API Publisher and change the API lifecycle state to Published (imported APIs are in the Created state by default).

Then go to the API Store, subscribe, and use it :-)
Thanks Thilini and Chamin for getting this done.

How to enable AWS based clustering mode in WSO2 Carbon products (WSO2 API Manager cluster with AWS clustering)

To try AWS-based clustering, change the membership scheme to aws and provide the following parameters in the clustering section of the axis2.xml file. Before trying this on API Manager 1.8.0, please download this jar [JarFile] and add it as a patch.

1. accessKey
<parameter name="accessKey">TestKey</parameter>

2. secretKey
<parameter name="secretKey">testkey</parameter>

3. securityGroup
<parameter name="securityGroup">AWS_Cluster</parameter>

4. connTimeout (optional)
5. hostHeader (optional)
6. region (optional)
7. tagKey (optional)
8. tagValue (optional)


See the following sample configuration. Edit the clustering section in the CARBON_HOME/repository/conf/axis2/axis2.xml file as follows.

<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent"
                enable="true">
        <parameter name="AvoidInitiation">true</parameter>
        <parameter name="membershipScheme">aws</parameter>
        <parameter name="domain">wso2.am.domain</parameter>
        <parameter name="localMemberPort">5701</parameter>
        <parameter name="accessKey">test</parameter>
        <parameter name="secretKey">test</parameter>
        <parameter name="securityGroup">AWS_Cluster</parameter>
</clustering>


By default, Hazelcast uses port 5701. It is recommended to create a Hazelcast-specific security group, and then add an inbound rule for port 5701 from sg-hazelcast to this security group:
Open the Amazon EC2 console.
Click Security Groups in the left menu.
Click Create Security Group, enter a name (e.g. sg-hazelcast) and a description for the security group, and click Yes, Create.
On the Security Groups page, select the security group sg-hazelcast in the right pane.
You will see a field below the security group list with the tabs Details and Inbound. Select Inbound.
Select Custom TCP rule in the Create a new rule field.
Type 5701 into the Port range field and sg-hazelcast into Source.

When the cluster initializes, all nodes in the same security group will be added as WKA members.
Once you are done with the configuration, restart the servers.

You will then see the following messages in the carbon logs.
[2015-06-23 10:02:47,730]  INFO - HazelcastClusteringAgent Cluster domain: wso2.am.domain
[2015-06-23 10:02:47,731]  INFO - HazelcastClusteringAgent Using aws based membership management scheme
[2015-06-23 10:02:57,604]  INFO - HazelcastClusteringAgent Hazelcast initialized in 9870ms
[2015-06-23 10:02:57,611]  INFO - HazelcastClusteringAgent Local member: [5e6bd517-512a-45a5-b702-ebf304cdb8c4] - Host:10.0.0.172, Remote Host:null, Port: 5701, HTTP:8280, HTTPS:8243, Domain: wso2.am.domain, Sub-domain:worker, Active:true
[2015-06-23 10:02:58,323]  INFO - HazelcastClusteringAgent Cluster initialization completed
Then spawn the next instance. When the next server's startup completes, you will see the following message on the current node.
[2015-06-23 10:06:21,344]  INFO - AWSBasedMembershipScheme Member joined [417843d3-7456-4368-ad4b-5bad7cf21b09]: /10.0.0.245:5701
Then terminate the second instance. You will see the following message.
[2015-06-23 10:07:39,148]  INFO - AWSBasedMembershipScheme Member left [417843d3-7456-4368-ad4b-5bad7cf21b09]: /10.0.0.245:5701
This means you have done the configuration properly.

Enable debug logs and check token expiry time in WSO2 API Manager

To do that, enable debug logs for the following class:
org.wso2.carbon.apimgt.impl.dao.ApiMgtDAO

It will then print the following log:
log.debug("Checking Access token: " + accessToken + " for validity." + "((currentTime - timestampSkew) > (issuedTime + validityPeriod)) : " + "((" + currentTime + "-" + timestampSkew + ")" + "> (" + issuedTime + " + " + validityPeriod + "))");


Whenever a token validation call fails, check for this log around that time; it gives a clear picture of the validity period calculation.
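The comparison in that log line can be reproduced by hand. A small sketch of the same check (all timestamp values are made-up examples in milliseconds; the actual skew depends on your configured timestamp skew):

```python
# Token is invalid when (currentTime - timestampSkew) > (issuedTime + validityPeriod),
# mirroring the debug log above. All values are hypothetical, in milliseconds.
current_time = 1_000_000_000
timestamp_skew = 300_000        # e.g. a 5 minute skew
issued_time = 998_000_000
validity_period = 3_600_000     # 1 hour

expired = (current_time - timestamp_skew) > (issued_time + validity_period)
print(expired)  # False: the token is still within its validity period
```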

To enable debug logs, add the line below to the log4j.properties file that resides in /repository/conf/:
log4j.logger.org.wso2.carbon.apimgt.impl.dao.ApiMgtDAO=DEBUG

Then restart the server. If you use WSO2 Identity Server as the Key Manager, you need to enable the debug log on the Identity Server side as well.


Then you can check how the token validity period behaves with each API call you make.

WSO2 API Manager CORS support and how it works with API gateway - APIM 1.8.0

According to Wikipedia, cross-origin resource sharing (CORS) is a mechanism that allows restricted resources (e.g. fonts, JavaScript, etc.) on a web page to be requested from a domain outside the one the resource originated from. Cross-domain AJAX requests are forbidden by default because of their ability to perform advanced requests (POST, PUT, DELETE and other HTTP methods, along with custom HTTP headers) that introduce many security issues, as described under cross-site scripting.

In WSO2 API Manager, cross-origin resource sharing happens between the API Manager gateway and the client application.
See the following sample CORS-specific headers:
< Access-Control-Allow-Headers: authorization,Access-Control-Allow-Origin,Content-Type
< Access-Control-Allow-Origin: localhost
< Access-Control-Allow-Methods: GET,PUT,POST,DELETE,OPTIONS
The 'Access-Control-Allow-Origin' header in the response is set by the API gateway after validating the 'Origin' header of the request
(CORS requests should carry an 'Origin' header identifying the requesting domain).
Please refer to the following config element in the api-manager.xml file.

    <CORSConfiguration>
    <!--Configuration to enable/disable sending CORS headers from the Gateway-->
    <Enabled>true</Enabled>
    <!--The value of the Access-Control-Allow-Origin header. Default values are
        API Store addresses, which is needed for swagger to function.-->

    <Access-Control-Allow-Origin>localhost</Access-Control-Allow-Origin>
    <!--Configure Access-Control-Allow-Methods-->
    <Access-Control-Allow-Methods>GET,PUT,POST,DELETE,OPTIONS</Access-Control-Allow-Methods>
    <!--Configure Access-Control-Allow-Headers-->
    <Access-Control-Allow-Headers>authorization,Access-Control-Allow-Origin,Content-Type</Access-Control-Allow-Headers>
    </CORSConfiguration>

We set the CORS-related headers in the response from the APIAuthenticationHandler before sending the response back to the client application.

The API gateway first checks the 'Origin' header value of the request (the one sent by the client) against the list defined in api-manager.xml.
If the host is in the list, we set it in the Access-Control-Allow-Origin header of the response.
Otherwise we set it to null, in which case the header is removed from the response (access is not allowed).
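That decision boils down to a simple allow-list lookup. A sketch of the logic as described above (the function name is mine; the allow-list value comes from the sample config):

```python
# Mirror of the gateway's origin check described above: return the origin if it
# appears in the configured allow-list, otherwise None (header gets removed).
ALLOWED_ORIGINS = {"localhost"}  # from <Access-Control-Allow-Origin> in api-manager.xml

def allow_origin(request_origin):
    return request_origin if request_origin in ALLOWED_ORIGINS else None

print(allow_origin("localhost"))    # localhost
print(allow_origin("localhostXX"))  # None
```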

See the following sample curl commands and responses to see how the Origin header changes the response.

curl -k -v -H "Authorization: Bearer 99c85b7da8691f547bd46d159f1d581" -H "Origin: localhost"  https://10.100.1.65:8243/qqqq/1.0.0/

< HTTP/1.1 200 OK
< ETag: "b1-4fdc9b19d2b93"
< Access-Control-Allow-Headers: authorization,Access-Control-Allow-Origin,Content-Type
< Vary: Accept-Encoding
< Access-Control-Allow-Origin: localhost
< Last-Modified: Wed, 09 Jul 2014 21:50:16 GMT
< Access-Control-Allow-Methods: GET,PUT,POST,DELETE,OPTIONS
< Content-Type: text/html
< Accept-Ranges: bytes
< Date: Wed, 24 Jun 2015 14:17:16 GMT
* Server WSO2-PassThrough-HTTP is not blacklisted
< Server: WSO2-PassThrough-HTTP
< Transfer-Encoding: chunked


 curl -k -v -H "Authorization: Bearer 99c85b7da8691f547bd46d159f1d581" -H "Origin: localhostXX"  https://10.100.1.65:8243/qqqq/1.0.0/
< HTTP/1.1 200 OK
< ETag: "b1-4fdc9b19d2b93"
< Access-Control-Allow-Headers: authorization,Access-Control-Allow-Origin,Content-Type
< Vary: Accept-Encoding
< Last-Modified: Wed, 09 Jul 2014 21:50:16 GMT
< Access-Control-Allow-Methods: GET,PUT,POST,DELETE,OPTIONS
< Content-Type: text/html
< Accept-Ranges: bytes
< Date: Wed, 24 Jun 2015 14:17:53 GMT
* Server WSO2-PassThrough-HTTP is not blacklisted
< Server: WSO2-PassThrough-HTTP
< Transfer-Encoding: chunked


As you can see, the Access-Control-Allow-Origin header is missing from the 2nd response, because we sent an origin that was not defined in the CORS configuration in the api-manager.xml file.

How to send a specific status code and message based on different authentication failures - WSO2 API Manager

In WSO2 API Manager, all authentication failures hit the auth failure handler. There you can change the message body, content and headers based on internal error codes.
For example, if we get a resource-not-found error while doing token validation, the error code will be 900906. In the same way we have different error codes for different failures.

So in this sample we will generate a custom message for resource-not-found issues during token validation.
For this we specifically check for error code 900906 and route the request to a specific sequence.

Please refer to the following sequences, and change _auth_failure_handler_ to call the custom sequence.

_auth_failure_handler_

<sequence name="_auth_failure_handler_" xmlns="http://ws.apache.org/ns/synapse">
    <property name="error_message_type" value="application/xml"/>
    <filter source="get-property('ERROR_CODE')" regex="900906">
      <then>
          <sequence key="sample"/>
          <drop/>
      </then>
      <else>
      </else>
    </filter>
    <sequence key="_build_"/>
</sequence>


sequence

<?xml version="1.0" encoding="UTF-8"?>
<sequence xmlns="http://ws.apache.org/ns/synapse" name="sample">
    <payloadFactory media-type="xml">
        <format>
            <am:fault xmlns:am="http://wso2.org/apimanager">
                <am:message>Resource not found</am:message>
                <am:description>Wrong http method</am:description>
            </am:fault>
        </format>
    </payloadFactory>
    <property name="RESPONSE" value="true"/>
    <header name="To" action="remove"/>
    <property name="HTTP_SC" value="405" scope="axis2"/>
    <property name="messageType" value="application/xml" scope="axis2"/>
    <send/>
</sequence>

How to write custom throttle handler to throttle requests based on IP address - WSO2 API Manager

Below is sample source code for a custom throttle handler that throttles requests based on IP address. You can change the logic here to match your requirements.

package org.wso2.carbon.apimgt.gateway.handlers.throttling;

import org.apache.axiom.om.OMAbstractFactory;
import org.apache.axiom.om.OMElement;
import org.apache.axiom.om.OMFactory;
import org.apache.axiom.om.OMNamespace;
import org.apache.axis2.context.ConfigurationContext;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.http.HttpStatus;
import org.apache.neethi.PolicyEngine;
import org.apache.synapse.Mediator;
import org.apache.synapse.MessageContext;
import org.apache.synapse.SynapseConstants;
import org.apache.synapse.SynapseException;
import org.apache.synapse.config.Entry;
import org.apache.synapse.core.axis2.Axis2MessageContext;
import org.apache.synapse.rest.AbstractHandler;
import org.wso2.carbon.apimgt.gateway.handlers.Utils;
import org.wso2.carbon.apimgt.gateway.handlers.security.APISecurityUtils;
import org.wso2.carbon.apimgt.gateway.handlers.security.AuthenticationContext;
import org.wso2.carbon.apimgt.impl.APIConstants;
import org.wso2.carbon.throttle.core.AccessInformation;
import org.wso2.carbon.throttle.core.RoleBasedAccessRateController;
import org.wso2.carbon.throttle.core.Throttle;
import org.wso2.carbon.throttle.core.ThrottleContext;
import org.wso2.carbon.throttle.core.ThrottleException;
import org.wso2.carbon.throttle.core.ThrottleFactory;

import java.util.Map;
import java.util.TreeMap;


public class IPBasedThrottleHandler extends AbstractHandler {

    private static final Log log = LogFactory.getLog(IPBasedThrottleHandler.class);

    /** The Throttle object - holds all runtime and configuration data */
    private volatile Throttle throttle;

    private RoleBasedAccessRateController applicationRoleBasedAccessController;

    /** The key for getting the throttling policy - key refers to a/an [registry] entry    */
    private String policyKey = null;
    /** The concurrent access control group id */
    private String id;
    /** Version number of the throttle policy */
    private long version;

    public IPBasedThrottleHandler() {
        this.applicationRoleBasedAccessController = new RoleBasedAccessRateController();
    }

    public boolean handleRequest(MessageContext messageContext) {
        return doThrottle(messageContext);
    }

    public boolean handleResponse(MessageContext messageContext) {
        return doThrottle(messageContext);
    }

    private boolean doThrottle(MessageContext messageContext) {
        boolean canAccess = true;
        boolean isResponse = messageContext.isResponse();
        org.apache.axis2.context.MessageContext axis2MC = ((Axis2MessageContext) messageContext).
                getAxis2MessageContext();
        ConfigurationContext cc = axis2MC.getConfigurationContext();
        synchronized (this) {

            if (!isResponse) {
                initThrottle(messageContext, cc);
            }
        }
        // If access succeeded through the concurrency throttle and this is a request
        // message, then do access-rate-based throttling
        if (!isResponse && throttle != null) {
            AuthenticationContext authContext = APISecurityUtils.getAuthenticationContext(messageContext);
            String tier;
            if (authContext != null) {
                AccessInformation info = null;
                try {

                    String ipBasedKey = (String) ((TreeMap) axis2MC.
                            getProperty("TRANSPORT_HEADERS")).get("X-Forwarded-For");
                    if (ipBasedKey == null) {
                        ipBasedKey = (String) axis2MC.getProperty("REMOTE_ADDR");
                    }
                    tier = authContext.getApplicationTier();
                    ThrottleContext apiThrottleContext =
                            ApplicationThrottleController.
                                    getApplicationThrottleContext(messageContext, cc, tier);
                    //    if (isClusteringEnable) {
                    //      applicationThrottleContext.setConfigurationContext(cc);
                    apiThrottleContext.setThrottleId(id);
                    info = applicationRoleBasedAccessController.canAccess(apiThrottleContext,
                                                                          ipBasedKey, tier);
                    canAccess = info.isAccessAllowed();
                } catch (ThrottleException e) {
                    handleException("Error while trying evaluate IPBased throttling policy", e);
                }
            }
        }
        if (!canAccess) {
            handleThrottleOut(messageContext);
            return false;
        }

        return canAccess;
    }

    private void initThrottle(MessageContext synCtx, ConfigurationContext cc) {
        if (policyKey == null) {
            throw new SynapseException("Throttle policy unspecified for the API");
        }
        Entry entry = synCtx.getConfiguration().getEntryDefinition(policyKey);
        if (entry == null) {
            handleException("Cannot find throttling policy using key: " + policyKey);
            return;
        }
        Object entryValue = null;
        boolean reCreate = false;
        if (entry.isDynamic()) {
            if ((!entry.isCached()) || (entry.isExpired()) || throttle == null) {
                entryValue = synCtx.getEntry(this.policyKey);
                if (this.version != entry.getVersion()) {
                    reCreate = true;
                }
            }
        } else if (this.throttle == null) {
            entryValue = synCtx.getEntry(this.policyKey);
        }
        if (reCreate || throttle == null) {
            if (entryValue == null || !(entryValue instanceof OMElement)) {
                handleException("Unable to load throttling policy using key: " + policyKey);
                return;
            }
            version = entry.getVersion();
            try {
                // Creates the throttle from the policy
                throttle = ThrottleFactory.createMediatorThrottle(
                        PolicyEngine.getPolicy((OMElement) entryValue));

            } catch (ThrottleException e) {
                handleException("Error processing the throttling policy", e);
            }
        }
    }

    public void setId(String id) {
        this.id = id;
    }

    public String getId() {
        return id;
    }

    public void setPolicyKey(String policyKey) {
        this.policyKey = policyKey;
    }

    public String getPolicyKey() {
        return policyKey;
    }

    private void handleException(String msg, Exception e) {
        log.error(msg, e);
        throw new SynapseException(msg, e);
    }

    private void handleException(String msg) {
        log.error(msg);
        throw new SynapseException(msg);
    }

    private OMElement getFaultPayload() {
        OMFactory fac = OMAbstractFactory.getOMFactory();
        OMNamespace ns = fac.createOMNamespace(APIThrottleConstants.API_THROTTLE_NS,
                                               APIThrottleConstants.API_THROTTLE_NS_PREFIX);
        OMElement payload = fac.createOMElement("fault", ns);

        OMElement errorCode = fac.createOMElement("code", ns);
        errorCode.setText(String.valueOf(APIThrottleConstants.THROTTLE_OUT_ERROR_CODE));
        OMElement errorMessage = fac.createOMElement("message", ns);
        errorMessage.setText("Message Throttled Out");
        OMElement errorDetail = fac.createOMElement("description", ns);
        errorDetail.setText("You have exceeded your quota");

        payload.addChild(errorCode);
        payload.addChild(errorMessage);
        payload.addChild(errorDetail);
        return payload;
    }

    private void handleThrottleOut(MessageContext messageContext) {
        messageContext.setProperty(SynapseConstants.ERROR_CODE, 900800);
        messageContext.setProperty(SynapseConstants.ERROR_MESSAGE, "Message throttled out");

        Mediator sequence = messageContext.getSequence(APIThrottleConstants.API_THROTTLE_OUT_HANDLER);
        // Invoke the custom error handler specified by the user
        if (sequence != null && !sequence.mediate(messageContext)) {
            // If needed user should be able to prevent the rest of the fault handling
            // logic from getting executed
            return;
        }

        // By default we send a 503 response back
        if (messageContext.isDoingPOX() || messageContext.isDoingGET()) {
            Utils.setFaultPayload(messageContext, getFaultPayload());
        } else {
            Utils.setSOAPFault(messageContext, "Server", "Message Throttled Out",
                               "You have exceeded your quota");
        }
        org.apache.axis2.context.MessageContext axis2MC = ((Axis2MessageContext) messageContext).
                getAxis2MessageContext();

        if (Utils.isCORSEnabled()) {
            /* For CORS support adding required headers to the fault response */
            Map headers = (Map) axis2MC.getProperty(org.apache.axis2.context.MessageContext.TRANSPORT_HEADERS);
            headers.put(APIConstants.CORSHeaders.ACCESS_CONTROL_ALLOW_ORIGIN, Utils.getAllowedOrigin((String)headers.get("Origin")));
            headers.put(APIConstants.CORSHeaders.ACCESS_CONTROL_ALLOW_METHODS, Utils.getAllowedMethods());
            headers.put(APIConstants.CORSHeaders.ACCESS_CONTROL_ALLOW_HEADERS, Utils.getAllowedHeaders());
            axis2MC.setProperty(org.apache.axis2.context.MessageContext.TRANSPORT_HEADERS, headers);
        }
        Utils.sendFault(messageContext, HttpStatus.SC_SERVICE_UNAVAILABLE);
    }
}

As listed above, your custom handler class is org.wso2.carbon.apimgt.gateway.handlers.throttling.IPBasedThrottleHandler. The following is the handler definition for your API.


<handler class="org.wso2.carbon.apimgt.gateway.handlers.throttling.IPBasedThrottleHandler">
    <property name="id" value="A"/>
    <property name="policyKey" value="gov:/apimgt/applicationdata/tiers.xml"/>
</handler>

Then try to invoke the API and see how throttling works.

How to add a secondary user store domain name to the SAML response from the Shibboleth side. WSO2 Identity Server SSO with secondary user stores.

When we configure Shibboleth as an identity provider for WSO2 Identity Server as described in this article (http://xacmlinfo.org/2014/12/04/federatation-shibboleth/), the deployment looks like the one below.

http://i0.wp.com/xacmlinfo.org/wp-content/uploads/2014/12/sidp0.png



In this case Shibboleth acts as the identity provider for WSO2 IS and provides the SAML assertion to WSO2 IS. However, the actual permission check happens on the IS side, and for that we may need the fully qualified user name. If the user store is configured as a secondary user store, the user store domain should be part of the name. But Shibboleth does not know about secondary user stores, so on the IS side you will see UserName instead of DomainName/UserName. This becomes an issue when we try to validate permissions per user.

To overcome this we can configure Shibboleth to send a domain-aware user name from its end. Say the domain name is LDAP-Domain; we can set it on the Shibboleth side with the following configuration, so that it sends the user name as LDAP-Domain/userName.

 (attribute-resolver.xml)

    <!-- This is the NameID value we send to the WS02 Identity Server. -->
    <resolver:AttributeDefinition xsi:type="ad:Script" id="eduPersonPrincipalNameWSO2">
        <resolver:Dependency ref="eduPersonPrincipalName" />

        <resolver:AttributeEncoder xsi:type="enc:SAML2StringNameID" nameFormat="urn:oasis:names:tc:SAML:2.0:nameid-format:persistent" />

        <ad:Script>
            <![CDATA[
                importPackage(Packages.edu.internet2.middleware.shibboleth.common.attribute.provider);

                eduPersonPrincipalNameWSO2 = new BasicAttribute("eduPersonPrincipalNameWSO2");
                eduPersonPrincipalNameWSO2.getValues().add("LDAP-Domain/" + eduPersonPrincipalName.getValues().get(0));
            ]]>

        </ad:Script>
    </resolver:AttributeDefinition>
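On the WSO2 IS side, the incoming domain-qualified name can then be split back into domain and user name when checking permissions. A minimal JDK-only sketch; the class, the method names, and the PRIMARY fallback are my own assumptions, not WSO2 APIs:

```java
// Hypothetical helper for splitting a domain-aware user name such as
// "LDAP-Domain/userName" into its user store domain and plain user name.
public class DomainAwareName {

    // Returns the user store domain, or "PRIMARY" when no domain prefix exists.
    public static String extractDomain(String username) {
        int idx = username.indexOf('/');
        return idx > 0 ? username.substring(0, idx) : "PRIMARY";
    }

    // Returns the user name without the domain prefix.
    public static String extractUsername(String username) {
        int idx = username.indexOf('/');
        return idx > 0 ? username.substring(idx + 1) : username;
    }
}
```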

How to change endpoint configurations and timeouts of a large number of already created APIs - WSO2 API Manager

Sometimes in deployments we need to change endpoint configurations and other parameters of APIs after they have been created. We can go to the management console or publisher and change them one by one, but with a large number of APIs that can be extremely hard. In this post let's see how we can do it for a batch of APIs.

Please test this end to end before you push the change to a production deployment. Also note that some properties are stored in the registry, the database, and the synapse configurations, so we need to change all three places. In this example we consider endpoint configurations only (which are available in the registry and synapse).

Changing the velocity template will work for new APIs, but for already published APIs you have to follow the process below if you are not modifying them manually.

Write a simple application to change the synapse configuration and add the new properties (as an example we consider the timeout value).
Use a checkin/checkout client to edit the registry files with the new timeout value. You can follow the steps below to use the checkin/checkout client:
 Download the Governance Registry binary from http://wso2.com/products/governance-registry/ and extract the zip file.
 Copy the content of the Governance Registry into the APIM home.
 Go into the bin directory of the Governance Registry directory.
 Run the following command to check out the registry files to your local repository.
         ./checkin-client.sh co https://localhost:9443/registry/path -u admin -p admin  (linux environment)
           checkin-client.bat co https://localhost:9443/registry/path -u admin -p admin (windows environment)
        
Here the path is where your registry files are located. Normally API metadata is listed under each provider at '_system/governance/apimgt/applicationdata/provider'.

Once you run this command, the registry files are downloaded to your Governance Registry/bin directory. You will find directories named after the users who created the APIs. Inside those directories there are files named 'api', located at '_system/governance/apimgt/applicationdata/provider/{user name}/{api name}/{api version}/api', and you can edit the timeout value using a batch operation (a shell script or any other way).

Then check in your changes using the following command.
     ./checkin-client.sh ci https://localhost:9443/registry/path -u admin -p admin  (linux)
      checkin-client.bat ci https://localhost:9443/registry/path -u admin -p admin (windows)
   

Open the APIM console and click Browse under Resources. Provide the location as '/_system/governance/apimgt/applicationdata/provider'. Inside the {user name} directory
there are directories named after your APIs. Open the 'api' files inside those directories and make sure the value has been updated.

It is recommended to change both the registry and the synapse configuration. This change is not applicable to all properties available in API Manager;
this solution is specifically designed for endpoint configurations such as timeouts.
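For the synapse side, the batch edit can be as simple as a regex rewrite over the deployed API definition files. A minimal sketch, assuming the default synapse-configs layout and that only the <duration> value needs changing; verify both assumptions against your own deployment before use:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class TimeoutUpdater {

    // Rewrites every <duration> value in the given synapse XML fragment.
    public static String updateDuration(String xml, long newTimeoutMs) {
        return xml.replaceAll("<duration>\\d+</duration>",
                              "<duration>" + newTimeoutMs + "</duration>");
    }

    // Applies the rewrite in place to every XML file under the given directory.
    public static void updateAll(Path apiDir, long newTimeoutMs) throws IOException {
        try (Stream<Path> files = Files.walk(apiDir)) {
            files.filter(p -> p.toString().endsWith(".xml")).forEach(p -> {
                try {
                    Files.writeString(p, updateDuration(Files.readString(p), newTimeoutMs));
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            });
        }
    }
}
```

For example, calling updateAll(Paths.get("repository/deployment/server/synapse-configs/default/api"), 12000000L) would rewrite the timeout of every deployed API definition in one pass.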

How to use the SAML2 grant type to generate access tokens in web applications (generate access tokens programmatically using the SAML2 grant type) - WSO2 API Manager

Exchanging SAML2 bearer tokens with OAuth2 (SAML extension grant type)

SAML 2.0 is an XML-based protocol. It uses security tokens containing assertions to pass information about an end user between a SAML authority and a SAML consumer.
A SAML authority is an identity provider (IDP) and a SAML consumer is a service provider (SP).
A lot of enterprise applications use SAML2 to engage a third-party identity provider to grant access to systems that are only authenticated against the enterprise application.
These enterprise applications might need to consume OAuth-protected resources through APIs, after validating them against an OAuth2.0 authentication server.
However, an enterprise application that already has a working SAML2.0 based SSO infrastructure between itself and the IDP prefers to use the existing trust relationship, even if the OAuth authorization server is entirely different from the IDP. The SAML2 Bearer Assertion Profile for OAuth2.0 helps leverage this existing trust relationship by presenting the SAML2.0 token to the authorization server and exchanging it to an OAuth2.0 access token.

You can use SAML grant type for web applications to generate tokens.
https://docs.wso2.com/display/AM160/Token+API#TokenAPI-ExchangingSAML2bearertokenswithOAuth2(SAMLextensiongranttype)


Sample curl command:
curl -k -d "grant_type=urn:ietf:params:oauth:grant-type:saml2-bearer&assertion=&scope=PRODUCTION" -H "Authorization: Basic SVpzSWk2SERiQjVlOFZLZFpBblVpX2ZaM2Y4YTpHbTBiSjZvV1Y4ZkM1T1FMTGxDNmpzbEFDVzhh, Content-Type: application/x-www-form-urlencoded" https://serverurl/token

How to invoke token API from web app and get token programmatically.

To generate a user access token using a SAML assertion, you can add the following code block inside your web application.
When you log in to your app using SSO, you will receive a SAML response. You can store it in the application session and use it to get a token whenever required.



Please refer to the following code for the access token issuer.

package com.test.org.oauth2;
import org.apache.amber.oauth2.client.OAuthClient;
import org.apache.amber.oauth2.client.URLConnectionClient;
import org.apache.amber.oauth2.client.request.OAuthClientRequest;
import org.apache.amber.oauth2.common.token.OAuthToken;
import org.apache.catalina.Session;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class AccessTokenIssuer {
    private static Log log = LogFactory.getLog(AccessTokenIssuer.class);
    private Session session;
    private static OAuthClient oAuthClient;

    public static void init() {
        if (oAuthClient == null) {
            oAuthClient = new OAuthClient(new URLConnectionClient());
        }
    }

    public AccessTokenIssuer(Session session) {
        init();
        this.session = session;
    }

    public String getAccessToken(String consumerKey, String consumerSecret, GrantType grantType)
            throws Exception {
        OAuthToken oAuthToken = null;

        if (session == null) {
            throw new Exception("Session object is null");
        }
// You need to implement logic for this operation according to your system design. some url
        String oAuthTokenEndPoint = "token end point url";

        if (oAuthTokenEndPoint == null) {
            throw new Exception("OAuthTokenEndPoint is not set properly in digital_airline.xml");
        }


        String assertion = "";
        if (grantType == GrantType.SAML20_BEARER_ASSERTION) {
    // You need to implement logic for this operation according to your system design
            String samlResponse = "get SAML response from session";
    // You need to implement logic for this operation according to your system design
            assertion = "get assertion from SAML response";
        }
        OAuthClientRequest accessRequest = OAuthClientRequest.
                tokenLocation(oAuthTokenEndPoint)
                .setGrantType(getAmberGrantType(grantType))
                .setClientId(consumerKey)
                .setClientSecret(consumerSecret)
                .setAssertion(assertion)
                .buildBodyMessage();
        oAuthToken = oAuthClient.accessToken(accessRequest).getOAuthToken();

        session.getSession().setAttribute("OAUTH_TOKEN" , oAuthToken);
        session.getSession().setAttribute("LAST_ACCESSED_TIME" , System.currentTimeMillis());

        return oAuthToken.getAccessToken();
    }

    private static org.apache.amber.oauth2.common.message.types.GrantType getAmberGrantType(
            GrantType grantType) {
        if (grantType == GrantType.SAML20_BEARER_ASSERTION) {
            return org.apache.amber.oauth2.common.message.types.GrantType.SAML20_BEARER_ASSERTION;
        } else if (grantType == GrantType.CLIENT_CREDENTIALS) {
            return org.apache.amber.oauth2.common.message.types.GrantType.CLIENT_CREDENTIALS;
        } else if (grantType == GrantType.REFRESH_TOKEN) {
            return org.apache.amber.oauth2.common.message.types.GrantType.REFRESH_TOKEN;
        } else {
            return org.apache.amber.oauth2.common.message.types.GrantType.PASSWORD;
        }
    }
}


After you log in to the system, get the session object and initialize the access token issuer as follows.
AccessTokenIssuer accessTokenIssuer = new AccessTokenIssuer(session);

Keep a reference to that object for the duration of the session.
When you need an access token, request it as follows, passing the consumer key and consumer secret.

tokenResponse = accessTokenIssuer.getAccessToken(key,secret, GrantType.SAML20_BEARER_ASSERTION);

Then you will get access token and you can use it as required.

How to get the MD5 sum of all files in the conf directory

We can use the following command to get the MD5 sum of all files in a directory. This approach lets us compare the configuration files of multiple servers and check whether they are the same.
find ./folderName -type f -exec md5sum {} \; > test.xml
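The same check can be done programmatically, which is handy when comparing many servers. A sketch using only the JDK; the class and method names are my own:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Stream;

public class Md5Dir {

    // Hex-encoded MD5 digest of a byte array, matching md5sum output.
    public static String md5(byte[] data) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(data)) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    // Relative path -> checksum for every regular file under dir, so the
    // maps produced on two servers' conf directories can simply be compared.
    public static Map<String, String> checksums(Path dir) throws Exception {
        Map<String, String> result = new TreeMap<>();
        try (Stream<Path> paths = Files.walk(dir)) {
            for (Path p : (Iterable<Path>) paths.filter(Files::isRegularFile)::iterator) {
                result.put(dir.relativize(p).toString(), md5(Files.readAllBytes(p)));
            }
        }
        return result;
    }
}
```

Two servers are in sync when checksums(confDir) returns equal maps on both.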

How to handle a distributed counter across a cluster when each node contributes to the counter - Distributed throttling

Handling throttling in a distributed environment is a tricky task. For this we need to maintain a time window and counters per instance, and those counters should also be shared across the cluster. Recently I worked on a similar issue, and I will share my thoughts on this problem.

Let's say we have 5 nodes, and each node serves x requests per minute, so across the cluster we can serve 5x requests per minute. In some cases node 1 may serve 2x while the others serve less, but we still need to cap the total at 5x across the cluster. To address this we need a shared counter across the cluster, to which each and every node can contribute.

To implement something like that we may use following approach.

We can maintain two Hazelcast IAtomicLong data structures (or a similar distributed counter) as follows. These are handled at the cluster level,
and individual nodes do not have to do anything about replication.

  • Shared counter: maintains the global request count across the cluster
  • Shared timestamp: used to manage the time window across the cluster for a particular throttling period

In each instance we should maintain the following per counter object:
  • A local global counter, which syncs up with the shared counter during the replication task (local global counter = shared counter + local counter)
  • A local counter, which holds request counts until the replication task runs (after replication, local counter = 0)

We may use a replication task that runs periodically.
During the replication task, the shared counter is updated with the node's local counter, and then the local global counter is refreshed from the shared counter.
If the shared counter has been reset to zero (a new time window), the local global counter is reset as well.
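The counter layout above can be sketched as follows. A plain AtomicLong stands in for the cluster-wide Hazelcast IAtomicLong so the sketch is self-contained; in a real deployment the shared counter would live in the Hazelcast cluster:

```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal sketch of the per-node counters described above. The shared
// counter here is a local stand-in for a distributed IAtomicLong.
public class ThrottleCounter {
    private final AtomicLong sharedCounter;  // cluster-wide request count (stand-in)
    private long localGlobalCounter;         // shared count as of the last replication
    private final AtomicLong localCounter = new AtomicLong(); // hits since last replication

    public ThrottleCounter(AtomicLong sharedCounter) {
        this.sharedCounter = sharedCounter;
    }

    // Called per request on this node; returns this node's current view
    // of the global count (local global counter + local counter).
    public long increment() {
        return localGlobalCounter + localCounter.incrementAndGet();
    }

    // Periodic replication task: push local hits to the shared counter,
    // then refresh the local view of the global count.
    public void replicate() {
        long delta = localCounter.getAndSet(0);
        localGlobalCounter = sharedCounter.addAndGet(delta);
    }
}
```

A throttling decision would then compare the value returned by increment() against the tier limit for the current time window.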


See following diagrams.






How to minimize Solr indexing time (registry artifact loading time) in a newly spawned instance

In an API Management platform we sometimes need to add Store and Publisher nodes to the cluster. If you have a large number of resources in the registry, Solr indexing will take some time.
Solr indexing is used to index registry data in the local file system. In this post we discuss how to minimize the time taken by this loading process. Please note this applies to all Carbon kernel 4.2.0 or earlier versions; in G-Reg 5.0.0 we have handled this issue and nothing needs to be done for this scenario.

You can minimize the time taken to list existing APIs in the Store and Publisher by copying an already indexed solr/data directory to a fresh APIM instance.
However, note that you should NOT copy and replace solr/data directories across different APIM product versions. (For example, you cannot copy the solr/data directory of APIM 1.9 to APIM 1.7.)

[1] First create a backup of the Solr indexed files from the currently running APIM instance:
   the [APIM_Home]/solr/data directory.
[2] Copy and replace the [Product_Home]/solr/data directory in the new APIM instance(s) before puppet initializes it. Existing APIs will then be listed, since by the time the new Carbon instance starts running, the Solr indexed files have already been copied to the instance.

If you are using an automated process, it is recommended to automate this step as well.
You can follow these instructions:
01. Back up the solr/data directory of a running server and push it to an artifact server (you can use rsync or svn for this).
02. When a new instance is spawned, copy the updated Solr content from the remote artifact server before starting it.
03. Then start the new server.
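Step 02 boils down to a recursive directory copy. A minimal JDK-only sketch (the class and method names are my own) that could place the backed-up solr/data into the new instance:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.stream.Stream;

public class SolrDataCopy {

    // Recursively copies src (e.g. the backed-up solr/data) into dst,
    // overwriting any files that already exist there.
    public static void copyTree(Path src, Path dst) throws IOException {
        try (Stream<Path> paths = Files.walk(src)) {
            for (Path p : (Iterable<Path>) paths::iterator) {
                Path target = dst.resolve(src.relativize(p).toString());
                if (Files.isDirectory(p)) {
                    Files.createDirectories(target);
                } else {
                    Files.createDirectories(target.getParent());
                    Files.copy(p, target, StandardCopyOption.REPLACE_EXISTING);
                }
            }
        }
    }
}
```

Run this against [APIM_Home]/solr/data before the server's first start, so indexing does not begin from scratch.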


If you need to manually re-index the data, you can follow the approach below.

Shut down the server if it is already started.
Rename the lastAccessTimeLocation in registry.xml,
e.g.:
/_system/local/repository/components/org.wso2.carbon.registry/indexing/lastaccesstime
to
/_system/local/repository/components/org.wso2.carbon.registry/indexing/lastaccesstime_1
Back up the solr directory and then delete it:

/solr

Restart the server and keep it idle for a few minutes to re-index.

How to avoid getting incorrect access tokens due to constraint violations under high load on the token API (CON_APP_KEY violated).

Sometimes you may see the following behavior under very high load on the token API.
1. Call https://localhost:8243/token
2. Get constraint error.
{org.wso2.carbon.identity.oauth2.dao.TokenPersistenceTask} - Error occurred while persisting access token bsdsadaa209esdsadasdae21a17d {org.wso2.carbon.identity.oauth2.dao.TokenPersistenceTask}
org.wso2.carbon.identity.oauth2.IdentityOAuth2Exception: Access Token for consumer key : H5sadsdasdasddasdsa, user : sanjeewa and scope : default already exists
at org.wso2.carbon.identity.oauth2.dao.TokenMgtDAO.storeAccessToken(TokenMgtDAO.java:194)
at org.wso2.carbon.identity.oauth2.dao.TokenMgtDAO.persistAccessToken(TokenMgtDAO.java:229)
at org.wso2.carbon.identity.oauth2.dao.TokenPersistenceTask.run(TokenPersistenceTask.java:56)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: java.sql.SQLIntegrityConstraintViolationException: ORA-00001: unique constraint (WSO2_APIM.CON_APP_KEY) violated

3. Attempt to use the access token when calling an API but get HTTP Status-Code=401 (Unauthorized) Invalid Credentials error

This issue happens because our token storing logic is a non-blocking call (this was implemented as an improvement to the token API, since persisting can block the token generation flow).
Because of that, we may have already returned a token to the client that has not actually been persisted. This happens only when the constraint fails while we try to persist the token,
but by then the token may already have been returned.

If we set the token persisting pool size to 0, this issue does not occur: the user immediately gets an error (probably an internal server error) and no token is returned to the client.
See the following code block:

try {
    tokenMgtDAO.storeAccessToken(accessToken, oAuth2AccessTokenReqDTO.getClientId(),
                                 accessTokenDO, userStoreDomain);
} catch (IdentityException e) {
    throw new IdentityOAuth2Exception(
            "Error occurred while storing new access token : " + accessToken, e);
}

You can set the pool size as follows. By default it is set to 100.
wso2am-1.9.0/repository/conf/identity.xml

<JDBCPersistenceManager>
    <SessionDataPersist>
        <PoolSize>0</PoolSize>
    </SessionDataPersist>
</JDBCPersistenceManager>
 

This will resolve the issue.
 

How to increase time out value in WSO2 API Manager

When we increase timeout value in API Manager we have to set 3 properties.

1) Global timeout defined in synapse.properties (repository/conf/synapse.properties)

synapse.global_timeout_interval=60000000


2) Socket timeout defined in passthru-http.properties (ESB_HOME/repository/conf/passthru-http.properties)

http.socket.timeout=60000000

3) We also need to set the timeout at API level for each API:
<endpoint name="admin--Stream_APIproductionEndpoint_0">
    <address uri="http://localhost:9763/example-v4/example">
        <timeout>
            <duration>12000000</duration>
            <responseAction>fault</responseAction>
        </timeout>
    </address>
</endpoint>

How to install Redis on Ubuntu and send events

Please follow the instructions below to install and use Redis.
Type the following commands on the command line.

wget http://download.redis.io/releases/redis-stable.tar.gz

tar xzf redis-stable.tar.gz

cd redis-stable

make

make test

sudo make install

cd utils

sudo ./install_server.sh
As the script runs, you can choose the default options by pressing Enter.

The port depends on what you set during installation; 6379 is the default.

sudo service redis_6379 start
sudo service redis_6379 stop

Access the Redis command line tool:
redis-cli

You will see the following prompt:
redis 127.0.0.1:6379>


Then publish an event to a channel as follows.
127.0.0.1:6379> publish EVENTCHANNEL sanjeewa11111199999999
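The same publish can be done from code by speaking Redis's wire protocol (RESP) over a plain socket. A JDK-only sketch: the encoder follows the RESP format for a PUBLISH command, and the send helper assumes a Redis server reachable on the given host and port. Note that RESP bulk-string lengths are byte counts, so this simple version assumes ASCII channel names and payloads.

```java
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class RedisPublish {

    // Encodes a PUBLISH command as a RESP array of three bulk strings.
    public static String respPublish(String channel, String message) {
        return "*3\r\n"
             + "$7\r\nPUBLISH\r\n"
             + "$" + channel.length() + "\r\n" + channel + "\r\n"
             + "$" + message.length() + "\r\n" + message + "\r\n";
    }

    // Sends the encoded command to a Redis server (e.g. localhost:6379).
    public static void publish(String host, int port, String channel, String message)
            throws Exception {
        try (Socket socket = new Socket(host, port)) {
            OutputStream out = socket.getOutputStream();
            out.write(respPublish(channel, message).getBytes(StandardCharsets.UTF_8));
            out.flush();
        }
    }
}
```

For example, publish("localhost", 6379, "EVENTCHANNEL", "sanjeewa11111199999999") is equivalent to the redis-cli command above.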



Sample data source configuration for WSO2 servers to connect to JDBC via LDAP

Please find below a sample data source configuration to access JDBC using an LDAP connection.

       <datasource>
            <name>DATASOURCE_NAME</name>
            <description>The datasource used for BPS</description>
            <jndiConfig>
                <name>jdbc/JNDI_NAME</name>
            </jndiConfig>
            <definition type="RDBMS">
                <configuration>
                    <url>jdbc:oracle:thin:@ldap://localhost:389/cn=wso2dev2,cn=OracleContext,dc=test,dc=com</url>
                    <username>DB_USER_NAME</username>
                    <password>DB_PASSWORD</password>
                    <driverClassName>oracle.jdbc.OracleDriver</driverClassName>
                    <maxActive>50</maxActive>
                    <maxWait>60000</maxWait>
                    <testOnBorrow>true</testOnBorrow>
                    <validationQuery>SELECT 1 FROM DUAL</validationQuery>
                    <validationInterval>30000</validationInterval>
                </configuration>
            </definition>
        </datasource>



