Data-Driven Microservices
- Hasitha Abeykoon
- Technical Lead - WSO2
Introduction
Data is a precious asset, and the ability to collect, manage, and use it gives a competitive advantage to any digital business. Within a microservices architecture, there are multiple practices and patterns for dealing with data. This article discusses the patterns, limitations, and pitfalls around data integration in microservices and then moves on to a sample implementation.
Database Per Service
Scale and autonomy for business units are significant motivations for enterprises to move towards microservices. This gives rise to the "database per service" pattern: each microservice communicates only with its own data store, and any exchange of data with other services happens through a set of well-defined APIs. This pattern has the following advantages.
- Data scaling: a database does not become a bottleneck for scaling as it is not shared.
- Each service can use its own database implementation (e.g., one microservice can work with MySQL, whereas another works with Cloud Spanner).
- Each team can design its service and data integration without coordinating with other teams.
However, in practice, this pattern is not easy to implement and should be applied with extra care: as the number of services grows, data exchange can degenerate into spaghetti, with a tangle of API calls crisscrossing the microservices.
Identify Correct Boundaries for Microservices
Avoid Nano Services
It is important to identify the boundaries and granularity of microservices. For example, if the design contains too many fine-grained microservices, several of them might need to call each other to deliver meaningful business value. If those calls also involve data transactions, the system's overall performance will drop drastically. Hence, it is important to identify which elements of the enterprise should become independent services. Designers might need a few iterations to figure out the correct boundaries of services; however, it is important to identify them correctly early in the design process to avoid ending up with a lot of nano services.
Make Cohesiveness Prominent
The boundaries of microservices should be drawn along business domains and elements, not technical requirements. Typically, when boundaries are decided based on business requirements, transactions between services decrease and the services become cohesive. This is important when it comes to data handling because data independence is determined by business requirements. One service calling another to fetch data in order to deliver business value is perfectly fine if that communication is itself a requirement of business units operating in the real world.
Transactions Across Microservices (Transactional Boundaries)
When moving from a single normalized database to microservices, designers face several challenges.
- Sharing a database across services. This makes the scaling of services interdependent. Teams need to coordinate when designing database tables and, therefore, this can cause runtime conflicts.
- Redesigning the boundaries of microservices. If a transaction needs to happen across microservices, it essentially means that the services are not cohesive. Take the example of a credit and debit transaction at a bank, where the reliability of the transaction is the top business priority. The credit and debit services can be merged, as they are tightly coupled with each other; this avoids handling transactions across services.
- Using distributed transactions. To achieve this, protocols such as two-phase commit can be used. However, this increases the complexity of data integration in a microservices platform and, at the same time, makes the system fragile:
- The network between the microservices can break.
- One microservice can crash in the middle of the transaction.
- Different services can use different database implementations, which makes performing a distributed transaction among them complex. Forcing every microservice to communicate with the same type of database takes away a key benefit of using microservices.
- Microservices need to track the state of the transaction and of their peers.
- This can lead to data inconsistencies that affect the business. Dealing with transient states, eventual consistency between services, isolation, and rollbacks makes the system error-prone.
Based on the above, for a simple and consistent design, the second option (redesigning service boundaries) is the most important one to consider.
Other patterns have emerged in microservices architecture to handle transactions that span multiple microservices. The Saga Pattern (refer here as well) models such a transaction as a sequence of local transactions, where each local transaction updates data within a single service and triggers the next step; if a step fails, compensating transactions undo the changes made by the preceding steps.
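As a minimal illustration, the saga below is sketched in plain SQL with hypothetical orders and payments tables; in a real system each step runs inside a different service, and the steps are coordinated through events or an orchestrator.

-- Step 1 (order service): a local transaction creates the order.
BEGIN;
INSERT INTO orders (id, status) VALUES (42, 'PENDING');
COMMIT;

-- Step 2 (payment service): a local transaction records the payment.
BEGIN;
INSERT INTO payments (order_id, amount) VALUES (42, 100.00);
COMMIT;

-- Compensating transaction (order service): runs only if a later step
-- fails, semantically undoing step 1 instead of rolling it back.
BEGIN;
UPDATE orders SET status = 'CANCELLED' WHERE id = 42;
COMMIT;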
Outbox Pattern (Data Change Events)
Eventing (with the adoption of technologies such as Kafka) has become a common approach when one microservice needs to pass information to another. It allows microservices to scale independently: when events are received, each service can process the information individually as needed.
The same concept can be adopted for data communication across microservices. Within its own boundary, a microservice can provide transactional guarantees and generate events for any other interested microservice to consume. For example, in a banking system, if a user updates the account type, the "user manager" microservice can trigger an event so that any other service that needs to act on it can receive it and perform the changes necessary to update itself and its data.
Figure 1: Outbox pattern
The Outbox Pattern ensures guaranteed delivery of events with the help of a message broker in the middle. Generated events are first stored locally, inside the service's own database, as part of the business transaction, and are then sent over to the message broker; upon successful delivery to the broker, the events are removed from the local store. If the message broker service goes down, events simply accumulate locally until it recovers.
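To make the idea concrete, here is a minimal SQL sketch of the pattern, assuming hypothetical accounts and outbox tables:

-- The state change and the outbox record are committed atomically, so
-- an event is never lost and never emitted without its state change.
BEGIN;
UPDATE accounts SET type = 'PREMIUM' WHERE user_id = 7;
INSERT INTO outbox (event_type, payload)
VALUES ('AccountTypeChanged', '{"userId": 7, "newType": "PREMIUM"}');
COMMIT;
-- A separate relay process polls the outbox table, publishes each row
-- to the message broker, and deletes the row once delivery is confirmed.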
API Composition
This is a simple and direct solution to the problem of implementing complex queries that span services in a microservices architecture. In this pattern, an API composer (i.e., a coarse-grained service) invokes the other data APIs in the required order and joins the data it receives to compose the reply to the query it accepts.
OData is an attempt to standardize the way data is exposed. Exposing data as REST APIs (over HTTP/HTTPS) is also possible. The advantage here is that other REST or web services can reuse the same APIs rather than re-composing the data. The caller of a data API may transform the data it receives, or join it with data from another service, before presenting it, as the sketch below shows.
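As a rough sketch, a composer can be as simple as the following shell commands; the service hostnames, ports, paths, and JSON shapes here are invented for illustration.

# Call two independent data APIs...
doctor=$(curl -s "http://doctor-service:8290/doctors/skin")
hospital=$(curl -s "http://hospital-service:8291/hospitals/Clemency")

# ...then join the two JSON objects into a single composed reply with jq.
printf '%s\n%s\n' "$doctor" "$hospital" | jq -s 'add'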
The rest of this article discusses how WSO2 Micro Integrator (the cloud native version of WSO2 Enterprise Integrator) can be used to expose data in a relational database, such as MySQL, as REST services.
Example Implementation
This section assumes that you have hands-on experience with Docker, a platform for building and running containers.
Prerequisites
- Install Docker
- Install Java
- Install Apache Maven
Solution Overview
In this section, we discuss how to use WSO2 Micro Integrator to expose data as an API that other microservices or external applications can consume. Data is stored in a MySQL database instance.
There is a web service hosted by government services to find a suitable doctor from a given set of hospitals in the country. As input, a user specifies the field (e.g., skin, heart, eyes) in which he/she is looking for a specialist. The service returns information about a doctor and the hospital so that the user can book an appointment.
Before going into details, let's look at an overview of the solution we are going to implement.
Figure 2: A solution overview
In summary, the data stored in the database is exposed as a well-defined API via WSO2 Micro Integrator. For this conversion, we need a definition of how operations and SQL statements, executed against the databases and tables, map to the API's context, resources, and parameters. We will discuss this in detail in a subsequent section. The API exposed by WSO2 Micro Integrator can be managed by fronting it with a WSO2 API Gateway service, which adds QoS features such as security (i.e., OpenID Connect, OAuth 2.0) and throttling limits to the API. At the same time, the gateway layer can provide essential information about the API for server admins, such as statistics and monetization details.
In a microservices architecture, however, the database itself can be treated as another microservice, holding the databases and tables required for a specific piece of functionality in the solution. The WSO2 Micro Integrator service container can connect to the MySQL server container to access data. Hence, as the starting point, let's see how to configure and run the MySQL server container with the necessary data.
Setting Up the MySQL Container
Let's pull the latest MySQL image from Docker Hub and configure the required databases.
Pull the latest image for MySQL
docker pull mysql/mysql-server
Run MySQL server
docker run --name=mysqlDataStore --rm -d mysql/mysql-server
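The mysql/mysql-server image generates a one-time root password at startup. Retrieve it from the container logs and open a MySQL shell inside the container; the server will then require you to replace that password, which is exactly what the next step does.

# Find the generated root password in the container logs...
docker logs mysqlDataStore 2>&1 | grep GENERATED

# ...then open a MySQL shell inside the container using that password.
docker exec -it mysqlDataStore mysql -uroot -p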
Update the root password
ALTER USER 'root'@'localhost' IDENTIFIED BY 'root';
Now we have a MySQL container (named mysqlDataStore) running in Docker. The next step is to create the tables and insert some data.
mysql> create database doctors;
mysql> use doctors;
mysql> create table doctor(name varchar(100), field varchar(100), hospital varchar(100));
mysql> insert into doctor (name, field, hospital) values ('David','skin','Clemency');
mysql> insert into doctor (name, field, hospital) values ('Paul','heart','Pine Vally');
mysql> insert into doctor (name, field, hospital) values ('Brown','eyes','Pine Vally');
Set permissions
mysql> CREATE USER wso2@'%' IDENTIFIED BY 'wso2';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'wso2'@'%';
mysql> FLUSH PRIVILEGES;
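Before moving on, a quick query confirms the data is in place:

mysql> SELECT name, field, hospital FROM doctor WHERE field = 'skin';

This should return a single row: David, skin, Clemency.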
With the above quick steps, we have a MySQL instance running with the data we need for our implementation. To learn more about running MySQL containers, please refer here.
Writing a Data Service
In general, when we talk about microservices, a service is written using a programming language; a typical choice is Python. WSO2 also has its own programming language, Ballerina, which you can use to "code" your services. We call this the code-first approach to constructing microservices.
In contrast, WSO2 Micro Integrator uses a configuration-driven approach: all the libraries that read and execute that configuration are packaged into WSO2 Micro Integrator. You will see that the service we configure is up and running in under five seconds when we run the container. This is where the power and uniqueness of WSO2 Micro Integrator come into play: to set up and run a microservices cluster built from images of WSO2 Micro Integrator services, you do not need sophisticated coding skills; you only need Integration Studio to construct a comprehensive data service or a service orchestration. This, in turn, improves the agility of digitizing your business.
In short, WSO2 Micro Integrator uses a configuration-driven approach to construct integrated services, with a studio for developing the artifacts.
Download Integration Studio here and follow the documentation on how to create a data service. The final data service implementation can be downloaded from GitHub; you can import it into Integration Studio (data service project name: DoctorRegistry; data service name: findDoctor).
NOTE:
You need to specify a MySQL data source for the data service you created. The data source contains the details of the DB connection.
Figure 3: Creating a MySQL datasource - Integration Studio
- Datasource ID (name for the datasource): doctorInformationStore
- Username (privileged user to access the database, as configured in the previous section): wso2
- Password (password of the above user): wso2
- JDBC URL (discussed below): jdbc:mysql://mysqlDataStore:3306/doctors
- Driver class: com.mysql.jdbc.Driver
Please note the hostname in the JDBC URL. We need to specify the IP address or hostname of the MySQL container we are going to connect to. For that, we can use the container name of the MySQL container, relying on Docker's built-in name resolution for containers on the same network. This makes the implementation independent of the deployment.
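For orientation, the core of such a data service definition (the .dbs artifact that Integration Studio generates) maps a SQL query to a REST resource roughly as follows. The actual artifact for this example lives in the GitHub project above; treat the query and element names here as illustrative.

<data name="findDoctor" transports="http https">
   <config id="doctorInformationStore">
      <property name="driverClassName">com.mysql.jdbc.Driver</property>
      <property name="url">jdbc:mysql://mysqlDataStore:3306/doctors</property>
      <property name="username">wso2</property>
      <property name="password">wso2</property>
   </config>
   <!-- The SQL query that runs against the doctors database -->
   <query id="findDoctorByField" useConfig="doctorInformationStore">
      <sql>SELECT name, field, hospital FROM doctor WHERE field = ?</sql>
      <param name="field" sqlType="STRING"/>
      <result element="doctors" rowName="doctor">
         <element name="name" column="name" xsdType="string"/>
         <element name="field" column="field" xsdType="string"/>
         <element name="hospital" column="hospital" xsdType="string"/>
      </result>
   </query>
   <!-- The REST resource the query is mapped to -->
   <resource path="doctor/{field}" method="GET">
      <call-query href="findDoctorByField">
         <with-param name="field" query-param="field"/>
      </call-query>
   </resource>
</data>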
Now we have the data service definition to expose data as a REST API. It is time to bake this service into a Docker image.
Building a Docker Image
As the next step, we need to embed the data service into WSO2 Micro Integrator and create a Docker image out of it. Integration Studio makes this simple: a user can export a Docker image out of the artifacts that have been developed.
The deployable unit of artifacts for WSO2 MI is a CAR application (a Carbon application, or CApp). Users can bundle the developed artifacts into a CAR file and export it.
In Integration Studio, add a CAR application project as shown below (navigate to help >> getting started) and select the data service developed in the above steps.
Figure 4: Creating a CAPP project - Integration Studio
The Image with a Single Click
You can right-click the CAR application project and select "Generate Docker Image".
Figure 5: Generating a Docker image - Integration Studio
However, here we need some control over creating the image because we need to pack external dependencies. Therefore, we will not take this route and will instead go through a few more steps.
The Image with a Few More Clicks
- Create a folder named docker, and a folder named files inside it.
- Export the CAR application using Integration Studio to the docker/files folder.
Figure 6: Exporting artifacts as a CAPP - Integration Studio
- Download the MySQL connector JAR file from here (the latest version at the time of writing is 8.0.16) and copy it to the same folder.
- Create a file named Dockerfile inside the docker folder with the following content:
FROM wso2/micro-integrator:latest
COPY files/FindDoctorApplication_1.0.0.car /home/wso2carbon/wso2mi/repository/deployment/server/carbonapps
COPY files/mysql-connector-java-8.0.16.jar /home/wso2carbon/wso2mi/lib
This file is the definition of the Docker image we are going to create. It copies the data service artifacts and the required MySQL dependency onto the micro-integrator base image to create the child image.
- Create the Docker image by executing the following from inside the docker folder (replace <docker-id> with your Docker Hub username):
docker build -t <docker-id>/doctor-registry .
Figure 7: Creating a Docker image using Micro Integrator
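You can confirm that the image was created:

docker images | grep doctor-registry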
Deploying on Docker and Testing
At this point, we have the MySQL container running and a WSO2 MI Docker image containing the data service we developed. It is time to deploy and run it.
Figure 8: Deploying containers
We need the MySQL container and the WSO2 MI container to discover each other. Remember that in the data source configuration we defined the MySQL server URL using the container name "mysqlDataStore"? For this to work, we need to place both containers on the same Docker network.
- Stop the already running MySQL container.
docker stop mysqlDataStore
- Create a new Docker network of type bridge (here we simply name it "overlay"). In Docker terms, a bridge network uses a software bridge that allows containers connected to the same bridge network to communicate, while providing isolation from containers that are not connected to it.
docker network create overlay
- Start the MySQL container with the correct name under the above network.
docker run --name mysqlDataStore --net overlay --rm -d mysql/mysql-server
- Start the WSO2 Micro Integrator container from the image created above, on the same network. The container listens for HTTP requests on port 8290, and we map the same port on the host.
docker run -d --rm -p 8290:8290 --name doctor-registry --net overlay <docker-id>/doctor-registry
- If you inspect the overlay network, you will see both containers listed.
docker network inspect overlay
"Containers": { "5aca7be657733b15bfef51be1ab44d68cba8f22c1beaf3b166ea4d8f855b0d8a": { "Name": "doctor-registry", "EndpointID": "e5316f427373c1c0513f0c6b12cd56e0d00190adab80f08266960756acd36ff9", "MacAddress": "02:42:ac:15:00:03", "IPv4Address": "172.21.0.3/16", "IPv6Address": "" }, "c3357e2bb673ad0621297497c180330cdab8601e73d5fdd617ef860dfadc92bd": { "Name": "mysqlDataStore", "EndpointID": "d5ca9b49a8ed50ae4b21114bebea5853aa621f4e47cb1a8d15b1073d59994a2e", "MacAddress": "02:42:ac:15:00:02", "IPv4Address": "172.21.0.2/16", "IPv6Address": "" }
- You can invoke the API using Postman or curl; a sample response is shown after the command.
curl -v -X GET "http://localhost:8290/services/findDoctor.HTTPEndpoint/doctor/skin" -H "accept: application/xml"
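With the sample data inserted earlier, the response should contain the skin specialist. The exact element names depend on the result mapping in the data service, but it will look roughly like this:

<doctors xmlns="http://ws.wso2.org/dataservice">
   <doctor>
      <name>David</name>
      <field>skin</field>
      <hospital>Clemency</hospital>
   </doctor>
</doctors>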
- To view the WSO2 Micro Integrator logs, you can use:
docker logs doctor-registry
Automating All with Docker Compose
In the sections above, we deployed the MySQL container and the WSO2 Micro Integrator container by hand. What if we could start and shut down the whole setup with one command? Docker Compose is a tool for defining and running multi-container Docker applications; with Compose, you use a YAML file to configure your application's services.
Note that because the MySQL container (named mysqlDataStore) was started with --rm, stopping it deletes the container, and all the data we inserted will be lost. To make the data survive container restarts, a volume mount can be used; please refer here to learn how to do that.
Create a folder named compose and create the docker-compose.yml file as shown below. Note that the image name (here abeykoon/doctor-registry) should match the tag you used when building the image.
version: "3" services: mysqlDataStore: image: mysql/mysql-server container_name: mysqlDataStore networks: - overlay volumes: - /usr/local/opt/mysql/8.0:/var/lib/mysql doctor-registry: image: abeykoon/doctor-registry container_name: doctor-registry depends_on: - mysqlDataStore ports: - 8290:8290 networks: - overlay networks: overlay: driver: bridge
To run the composition, use the command below. It will create the Docker network, start the containers, mount the volumes, and handle container dependencies according to the configuration.
docker-compose up
Figure 9: Using Docker Compose
You can now invoke the API and test it. The whole setup comes up with one command!
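Shutting everything down (containers and the network) is just as simple:

docker-compose down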
Managing Data APIs
API management is a trending subject. Exposing a set of plain APIs for anyone to use is not practical: APIs should be protected from misuse by wrapping them with policies and security measures. Monitoring traffic from outside applications is useful for seeing which parts of the enterprise system customers use most, and controlling and throttling API invocations is essential to ensure that a hostile application cannot bring the system down. Performance, easy scalability, and redundancy to handle traffic spikes are the characteristics of a good API management solution.
WSO2 has a separate product, WSO2 API Manager, as its API management solution. For microservices architectures, it offers a separate gateway distribution named Microgateway. For example, the findDoctor API, which is exposed by WSO2 Micro Integrator, can be managed using WSO2 API Manager.
Example Implementation
Please follow the official documentation here to set up Microgateway in Docker. Using the Publisher, you can configure the findDoctor API and publish it to the Store. On the Store side, you can create an application and generate the required OAuth 2.0 tokens. Then, you can build the Docker images and run them.
Analytics support is described here; it uses an API analytics server to compile statistics.
Conclusion
In every business, data handling is a prominent factor for success, and various patterns and technologies are emerging to deal with enterprise data in a microservices architecture. We discussed how identifying proper microservice boundaries in an enterprise system leads to well-designed data integration, and we covered patterns for moving data between services, such as sagas, the outbox pattern, and API composition. The latter part focused on how to compose an HTTP API out of data in a relational database using WSO2 technologies.