The workshop gives a practical perspective on the key principles needed to develop, build, and maintain a set of microservices in the Node.js stack. It covers the specifics of creating isolated, dockerized TypeScript services using the monorepo approach with Lerna and Yarn workspaces. The workshop includes an overview and a live exercise to create a cloud environment with the Pulumi framework and Azure services. The session best fits developers who want to learn and practice build and deployment techniques with Azure, Pulumi, and Docker for Node.js.
Alex Korzhikov & Andrew Reddikh
Software Engineer, Netherlands
My primary interest is self development and craftsmanship. I enjoy exploring technologies, coding open source and enterprise projects, teaching, speaking and writing about programming - JavaScript, Node.js, TypeScript, Go, Java, Docker, Kubernetes, JSON Schema, DevOps, Web Components, Algorithms 👋 ⚽️ 🧑‍💻 🎧
Software Engineer, United Kingdom
Passionate software engineer with expertise in software development, microservice architecture, and cloud infrastructure. On a daily basis, I use Node.js, TypeScript, Golang, and DevOps best practices to build a better tech world by contributing to open source projects.
We’re building a currency converter that can be used over gRPC calls.
Our intention is to send a request similar to “convert 0.345 ETH to CAD”,
and as a result we want to know the final amount in CAD and the conversion rate.
We also assume there could be more than one currency provider.
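The conversion itself boils down to a rate lookup and a multiplication. A minimal sketch in TypeScript, using made-up rates (the currency values below are illustrative, not live market data or part of the project):

```typescript
// Rates expressed as units of each currency per one common base unit.
// The values used below are made-up examples, not live market data.
type Rates = Record<string, number>;

function convert(
  sellAmount: number,
  sellCurrency: string,
  buyCurrency: string,
  rates: Rates
): { buyAmount: number; rate: number } {
  // Cross rate: how many units of buyCurrency one unit of sellCurrency buys.
  const rate = rates[buyCurrency] / rates[sellCurrency];
  return { buyAmount: sellAmount * rate, rate };
}

// "convert 0.345 ETH to CAD" with hypothetical rates
const rates: Rates = { ETH: 0.00031, CAD: 1.36 };
const { buyAmount, rate } = convert(0.345, "ETH", "CAD", rates);
console.log(buyAmount.toFixed(2), rate.toFixed(2)); // final amount and cross rate
```

The real services fetch rates from providers over gRPC, but the arithmetic at the end is the same.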
Here is how it works:
Let’s get started by cloning the demo monorepo:
git clone git@github.com:x-technology/micro-services-infrastructure-pulumi-azure-devops.git
To work efficiently with the .proto format and to generate TypeScript-based representations of protocol buffers, we need to install the protoc compiler.
If you’re a macOS user and have the brew package manager, the following command is the easiest way to install it:
brew install protobuf
# Ensure it's installed and the compiler version is at least 3
protoc --version
For Linux users, run the following commands:
PROTOC_ZIP=protoc-3.14.0-linux-x86_64.zip
curl -OL https://github.com/protocolbuffers/protobuf/releases/download/v3.14.0/$PROTOC_ZIP
sudo unzip -o $PROTOC_ZIP -d /usr/local bin/protoc
sudo unzip -o $PROTOC_ZIP -d /usr/local 'include/*'
rm -f $PROTOC_ZIP
Alternatively, you can manually download and install protoc from here.
Make sure you have Node.js v14+ installed. If not, nvm is a very good tool to install multiple Node.js versions locally and easily switch between them.
Then we need to install dependencies and bootstrap lerna within the monorepo:
yarn install
yarn lerna bootstrap
Yay! 🎉 Now we’re ready to go with the project.
For better monorepo project management we use Lerna & Yarn Workspaces.
The project has the following structure:
./packages/common - common libraries used by the other project services
./packages/services/grpc - the gRPC services we build to ship the product
./proto - proto files describing the input/output protocol and communication between the services
./node_modules - dependencies shared between all microservices
./lerna.json - lerna’s configuration file, defining how it should work with the monorepo
./package.json - the description of our package, containing the important part:
"workspaces": [
  "packages/common/*",
  "packages/services/grpc/*"
]
Let’s move on 🚚
Lerna brings to the table a few commands which can easily be executed across all packages, or a filtered subset.
We use our common modules compiled to JavaScript, so before using them in services we need to build them first.
The following command executes the build script against all common packages, filtered with the --scope=@common/* flag:
yarn lerna run build --scope=@common/*
gRPC is a modern, open source remote procedure call (RPC) framework that can run anywhere. It enables client and server applications to communicate transparently, and makes it easier to build connected systems.
// http://protobuf-compiler.herokuapp.com/
syntax = "proto3";
package hello;
service HelloService {
rpc JustHello (HelloRequest) returns (HelloResponse);
rpc ServerStream(HelloRequest) returns (stream HelloResponse);
rpc ClientStream(stream HelloRequest) returns (HelloResponse);
rpc BothStreams(stream HelloRequest) returns (stream HelloResponse);
}
message HelloRequest {
string greeting = 1;
}
message HelloResponse {
string reply = 1;
}
An efficient technology to serialize structured data
message Person {
string name = 1;
int32 id = 2;
bool has_ponycopter = 3;
}
Does anyone know what the numbers on the right side mean?
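They are field numbers: stable identifiers that mark each field in the binary wire format. Field names never travel over the wire, only the numbers, which is why a number must not change once the message is in use. A tiny sketch of how a field's tag byte is derived:

```typescript
// Each encoded field starts with a tag: (field_number << 3) | wire_type.
// Wire types: 0 = varint (int32, bool, ...), 2 = length-delimited (string, bytes).
function fieldTag(fieldNumber: number, wireType: number): number {
  return (fieldNumber << 3) | wireType;
}

console.log(fieldTag(1, 2)); // "name" (string)        -> 10 (0x0A)
console.log(fieldTag(2, 0)); // "id" (int32)           -> 16 (0x10)
console.log(fieldTag(3, 0)); // "has_ponycopter" (bool) -> 24 (0x18)
```

This is also why low field numbers (1-15) are preferred for frequent fields: they fit in a single tag byte.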
.proto format
protoc - the protocol buffers compiler
syntax = "proto3";
package hello;
service HelloService {
rpc SayHello (HelloRequest) returns (HelloResponse);
}
message HelloRequest {
string greeting = 1;
}
message HelloResponse {
string reply = 1;
}
npm start
const all = require('@common/go-grpc')
const client = new all.ecbProvider.EcbProviderClient('0.0.0.0:50051', all.createInsecure());
const response = await client.GetRates(new all.currencyProvider.GetRatesRequest())
response.toObject()
// inside converter container
const all = require('@common/go-grpc')
const client = new all.currencyConverter.CurrencyConverterClient('0.0.0.0:50052', all.createInsecure());
const response = await client.Convert(new all.currencyConverter.ConvertRequest({ sellAmount: 100, sellCurrency: 'USD', buyCurrency: 'GBP' }));
response.toObject()
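The snippets above await client calls directly, which assumes the generated stubs are promisified. Plain @grpc/grpc-js stubs are callback-based; a generic wrapper along these lines converts them (sketched here against a stand-in callback function rather than a real stub):

```typescript
type Callback<T> = (err: Error | null, res?: T) => void;

// Turn a callback-style unary call into a Promise-returning function.
function promisify<Req, Res>(
  call: (req: Req, cb: Callback<Res>) => void
): (req: Req) => Promise<Res> {
  return (req) =>
    new Promise((resolve, reject) =>
      call(req, (err, res) => (err ? reject(err) : resolve(res as Res)))
    );
}

// Stand-in for a generated stub method, simply echoing its request.
const echo = (req: string, cb: Callback<string>) => cb(null, `reply:${req}`);

promisify(echo)("GetRates").then((r) => console.log(r)); // prints "reply:GetRates"
```

The `@common/go-grpc` package used in the workshop already exposes awaitable methods, so this wrapper is only needed when working with raw stubs.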
Toolset to develop, declare, deliver and run applications
docker version
docker run hello-world
Powered by Linux Containers - virtualization method to run multiple isolated Linux systems (containers) on a control host
cgroups - limitation and prioritization of CPU, memory, block I/O, and network resources
namespaces - isolated process trees, networking, users, and file systems
Features
docker-compose
…
Dockerfile
Instructions to declare an application
Build to image
FROM - base image
RUN - install assets and dependencies
CMD - command to start the container
COPY - copy files into the image
Lifecycle
docker build -t tmp-base1 . # build image from Dockerfile
docker run hello-world # create a container from image
# docker run = docker create + docker start
# docker run -it tmp-base2:latest bash
docker ps # list running containers (--all includes stopped ones)
docker kill # forcefully stop a container (see also docker stop)
docker system prune
docker logs
# step into running container
docker exec -it $DOCKER_CONTAINER_HASH /bin/sh
# also ports, mount -v
To start all components at once in a configured network:
docker-compose up
# docker-compose.yaml
redis:
  image: 'redislabs/redismod'
  ports:
    - '6379:6379'
web1:
  restart: on-failure
  build: ./web
  hostname: web1
  ports:
    - '81:5000'
web2:
  restart: on-failure
  build: ./web
  hostname: web2
  ports:
    - '82:5000'
nginx:
  build: ./nginx
  ports:
    - '80:80'
  depends_on:
    - web1
    - web2
What does the docker run ... echo hello command do?
What does this Dockerfile do?
COPY package.json yarn.lock /usr/src/main
# Install runtime dependencies
RUN yarn install
COPY . /usr/src/main
Copying package.json and yarn.lock before the rest of the sources lets Docker cache the yarn install layer, since dependencies change less often than application code.
Why the node:lts-alpine base image?
Microsoft Azure is an enormous cloud ecosystem that enables you to organize, develop, and publish applications worldwide.
docker login xtechnology.azurecr.io
docker image tag tmp-base2:latest xtechnology.azurecr.io/microservices-united:latest
docker push xtechnology.azurecr.io/microservices-united:latest
Pulumi - Developer-First Infrastructure as Code
Loops, conditionals, functions, classes, and more.
Gets things done in seconds rather than hours.
Define and consume patterns and practices to reduce boilerplate.
Terraform is an open-source infrastructure as code software tool that provides a consistent CLI workflow to manage hundreds of cloud services.
Let’s get started with the pulumi installation and the initial infrastructure repo setup.
Install azure-cli with a single command. If you’re a macOS user, use brew:
brew install azure-cli
Make sure you also have the pulumi CLI installed:
brew install pulumi
Initialize the project in the infra folder with the following command:
pulumi new azure-typescript
az login
pulumi login
yarn install
Let’s get started with a Kubernetes cluster in Azure; for this purpose we’re going to use Pulumi.
We need to import a file containing the description of our cluster.
import * as cluster from "./cluster";
import * as resourceGroup from "./resourceGroup";
export let clusterName = cluster.k8sCluster.name;
export let groupName = resourceGroup.resourceGroup.name;
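For reference, cluster.ts might look roughly like the sketch below. This is an assumption-heavy illustration based on the @pulumi/azure package, not the workshop's exact file: the resource names, node count, and VM size are all placeholders.

```typescript
import * as azure from "@pulumi/azure";
import { resourceGroup } from "./resourceGroup";

// Hypothetical sketch of an AKS cluster definition; all values are illustrative.
export const k8sCluster = new azure.containerservice.KubernetesCluster("workshop-cluster", {
  resourceGroupName: resourceGroup.name,
  dnsPrefix: "workshop",
  defaultNodePool: {
    name: "default",
    nodeCount: 2,
    vmSize: "Standard_D2_v2",
  },
  // Managed identity, used later to grant the cluster pull access to the registry.
  identity: { type: "SystemAssigned" },
});
```
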
Now, let’s run a simple command to build our infrastructure in the cloud:
pulumi up
Let’s check our cluster on the Azure website. Great! It’s there, in just a few lines of TypeScript code.
Now we’re ready to add a docker registry, where we’re going to put our application code as a Docker image.
Let’s add the following lines into our index.ts:
import { registry } from "./registry";
export let registryName = registry.loginServer;
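A registry.ts module might look something like this sketch (again an illustration assuming @pulumi/azure; the resource name and SKU are placeholders):

```typescript
import * as azure from "@pulumi/azure";
import { resourceGroup } from "./resourceGroup";

// Hypothetical container registry definition; name and SKU are illustrative.
// The loginServer output is what index.ts re-exports as registryName.
export const registry = new azure.containerservice.Registry("workshopRegistry", {
  resourceGroupName: resourceGroup.name,
  sku: "Basic",
  adminEnabled: false,
});
```
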
Run pulumi up once again to see it deployed.
Now we can see our new registry created; its name appears in the output of the pulumi command. Let’s build our application code into a docker image and push it to the newly created registry with the following commands.
Replace registry-name with the real registry name from the pulumi output.
az acr login --name registry-name
docker build -t registry-name.azurecr.io/grpc:latest .
docker push registry-name.azurecr.io/grpc:latest
Great, the image is there in the cloud! Is it ready to be pulled by the cluster, or not yet?
We need to grant read permission to our cluster so it’s allowed to pull images from the registry.
import * as azure from "@pulumi/azure";
const principalId = cluster.k8sCluster.identityProfile.apply(p => p!["kubeletidentity"].objectId!);
const assignment = new azure.authorization.Assignment("workshop-assignment", {
principalId: principalId,
roleDefinitionName: "AcrPull",
scope: registry.id,
skipServicePrincipalAadCheck: true,
});
What is Ingress? Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
Let’s add it with the following code:
import * as k8s_system from "./k8s/system";
export let ingressServiceIP = k8s_system.ingressServiceIP;
Right after applying this code, we can see ingressServiceIP in the output; it’s the public IP of our cluster.
Now let’s attach DNS to this IP.
We’re going to use CloudFlare for DNS as it provides a very rich API plus extra features like anti-DDoS protection and more.
import * as dns from "./dns";
export let dnsRecord = dns.mainRecord.hostname;
Run pulumi up once again to see the changes applied.
import * as apps from "./k8s/apps";
export let currencyConverter = apps.currencyConverter.urn;
export let appNamespace = apps.appNamespace.metadata.name;
Get credentials for using kubectl
az aks get-credentials --admin --name workshop-cluster1437dadd -g workshop-group5e64df12
Let’s create a proxy forwarding to our service inside the Kubernetes cluster:
kubectl port-forward -n apps-q0fg8ahd svc/currency-converter-grpc 50051:50051
Now it’s the moment to call our currency-converter:
echo '{"sellCurrency": "GBP", "buyCurrency": "USD", "sellAmount": 150}' | grpcurl -plaintext -import-path ./proto -proto currency-converter.proto -d @ 127.0.0.1:50051 currencyConverter.CurrencyConverter.Convert
Great 🎉!
We’ve just created the full infrastructure and deployed our microservices into the Kubernetes cluster using Helm charts and Pulumi.
Helm is a package manager for Kubernetes. Helm is the K8s equivalent of yum or apt. Helm deploys charts, which you can think of as a packaged application.
We store our helm charts inside the ./infrastructure/charts folder.
By running a command we can create a new helm chart:
helm create grpc
Helm also provides an ability to easily template our package, so we can provide multiple values into the chart, when we deploy it.
The following command shows us the rendered YAML definition of the helm chart:
helm template grpc
It’s time to have some practice and evolve our services even more!
Let’s grab a task based on the things you’d like to do 👇
Please share your feedback on our workshop. Thank you, and happy coding!
If you like the workshop, you can become our patron, yay! 🙏
microservices pulumi azure devops node.js javascript protobuf grpc typescript lerna npm yarn docker git architecture crypto currency