by Jef Jansen, IBM Integration Specialist.
I wanted to experiment with the latest features of IBM API Connect v10.0.2.0, so I thought: let's perform a quick install on the IBM Kubernetes Service (IKS). As both come from the same vendor, it should be easy …
I downloaded the following files from Fix Central:
- apiconnect-operator-release-files_v10.0.2.0.zip
- apiconnect-image-tool-10.0.2.0.tar.gz
The following (relevant) tools are installed on my machine:
- ibmcloud cli
- Docker Desktop for Windows (WSL2 engine)
- Helm3
- TightVNC Viewer
- kubectl
To install IBM API Connect 10.0.2.0 on IKS, three major steps need to be performed:
- Upload the container images to an Image repository
- Create and configure the IKS cluster
- Deploy IBM API Connect to the Kubernetes cluster
Upload the container images to an Image Repository

First, we need to upload the container images to a registry. I already had a Container Registry instance created in the IBM Cloud, located in Frankfurt. Depending on the location, the subdomain changes; in my case the registry is reachable at ‘de.icr.io’. I created a namespace id-apic in the registry using the web UI.
I had some authentication problems with the ibmcloud cr plugin, so I switched to my local Docker installation. The login procedure needs an IAM API key. You can create one by logging in to the IBM Cloud console and clicking Manage > Access (IAM) > API keys.
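The key can also be created from the CLI; the key name and description below are just examples:

```shell
# Create an IAM API key and save it to a file for later use
ibmcloud iam api-key-create apic-push-key -d "Key for pushing APIC images" --file apic-push-key.json
```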


You can now log in by executing ‘docker login de.icr.io -u iamapikey’ and using the IAM API key as the password.
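To keep the key out of your shell history and out of an interactive prompt, you can pipe it in; this assumes the key is stored in an IAM_APIKEY environment variable:

```shell
# Log in to the Frankfurt registry non-interactively
echo "$IAM_APIKEY" | docker login de.icr.io -u iamapikey --password-stdin
```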

Now load the container images in your local image repository.
docker load < apiconnect-image-tool-10.0.2.0.tar.gz
After it completes, you can run the image tool container, which uploads all the images to the registry in the cloud.
docker run --rm -v ~/.docker:/root/.docker --user 0 apiconnect-image-tool-10.0.2.0 upload de.icr.io/id-apic --username iamapikey --password <IAM API key>
We can now verify that the images are uploaded by going to our Image repository.

Create and configure the IKS Cluster
Create the IKS Cluster
If you don’t have an IKS cluster yet, you can create one in the IBM Cloud console or with the ibmcloud CLI. I used the command below, which creates a Kubernetes cluster with 3 worker nodes.
ibmcloud ks cluster create classic --name ID-IKS --zone fra04 --flavor c3c.16x16 --hardware shared --workers 3 --public-vlan <public_VLAN_ID> --private-vlan <private_VLAN_ID>

It will take some time for the cluster to become fully functional.
Change vm.max_map_count
For the analytics service, vm.max_map_count must be increased on the worker nodes. To change it we need shell access to each worker node, which we can get through the KVM console.
The first step is to set up a VPN to the private network. You can create the VPN user under Manage > Access (IAM) > Users.

Scroll down to VPN Password. Check that the VPN subnets you have selected contain the correct VLAN ID of your cluster.

Look up the nearest VPN access point and log in. If you are having trouble, see https://cloud.ibm.com/docs/iaas-vpn?topic=iaas-vpn-getting-started for more information.

Now go to your device list in the classic infrastructure.

Select your worker node

Click Actions > KVM Console > Continue

You will see a screen with the IP address, port and root password.
Open TightVNC Viewer, enter <ip address>::<port> as the remote host, and select ‘Raw’ as the preferred encoding under Options.


Click connect

You now have bash access to the worker node.
Log in and add the following line to /etc/sysctl.conf using vi:
vm.max_map_count = 262144
Then execute sudo sysctl -w vm.max_map_count=262144 so the change becomes active immediately. Make the same change on the other worker nodes.
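The same change can be scripted; a sketch, to be run on each worker node:

```shell
# Persist the setting across reboots
echo 'vm.max_map_count = 262144' | sudo tee -a /etc/sysctl.conf

# Apply it immediately without a reboot
sudo sysctl -w vm.max_map_count=262144

# Verify the active value
sysctl vm.max_map_count
```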
Create the correct storage class
The APIC containers need a block storage class to create the necessary volumes (via PVCs). By default, no block storage class is available in IKS, but you can install the IBM block storage plugin in your cluster.
Set the context of your kubectl to the IKS cluster
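Setting the kubectl context can be done with the ibmcloud CLI; ID-IKS is the cluster name used when creating the cluster earlier:

```shell
# Point kubectl at the IKS cluster
ibmcloud ks cluster config --cluster ID-IKS
```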

Install the IBM block storage plugin
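If the iks-charts Helm repository is not configured yet, add it first; the URL below is the one documented for the IKS charts repository, but verify it against the current IBM Cloud docs:

```shell
# Add the IBM Cloud charts repository and refresh the local index
helm repo add iks-charts https://icr.io/helm/iks-charts
helm repo update
```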
helm install ibm-block-storage iks-charts/ibmcloud-block-storage-plugin -n kube-system

Create the new storage class. I used the YAML below. The most important settings are that it is block storage and that volumeBindingMode: WaitForFirstConsumer is set. Note that I set the reclaimPolicy to Delete; in a real setup you would probably set it to Retain.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: apic-storageclass
  labels:
    kubernetes.io/cluster-service: "true"
provisioner: ibm.io/ibmc-block
parameters:
  billingType: "hourly"
  classVersion: "2"
  sizeIOPSRange: |-
    "[20-39]Gi:[100-1000]"
    "[40-79]Gi:[100-2000]"
    "[80-99]Gi:[100-4000]"
    "[100-499]Gi:[100-6000]"
    "[500-999]Gi:[100-10000]"
    "[1000-1999]Gi:[100-20000]"
    "[2000-2999]Gi:[200-40000]"
    "[3000-3999]Gi:[200-48000]"
    "[4000-7999]Gi:[300-48000]"
    "[8000-9999]Gi:[500-48000]"
    "[10000-12000]Gi:[1000-48000]"
  type: "Performance"
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
Storageclass.yaml
kubectl apply -f storageclass.yaml

Install the community Ingress nginx controller
The IKS Ingress controller doesn’t support SSL passthrough, which is a requirement for IBM API Connect, so we are going to install the community ingress controller instead. In a real environment you should also tune the read and write timeout settings.
controller:
  config:
    hsts-max-age: "31536000"
    keepalive: "32"
    log-format: '{ "@timestamp": "$time_iso8601", "@version": "1", "clientip": "$remote_addr",
      "tag": "ingress", "remote_user": "$remote_user", "bytes": $bytes_sent, "duration":
      $request_time, "status": $status, "request": "$request_uri", "urlpath": "$uri",
      "urlquery": "$args", "method": "$request_method", "referer": "$http_referer",
      "useragent": "$http_user_agent", "software": "nginx", "version": "$nginx_version",
      "host": "$host", "upstream": "$upstream_addr", "upstream-status": "$upstream_status"
      }'
    main-snippets: load_module "modules/ngx_stream_module.so"
    proxy-body-size: "0"
    proxy-buffering: "off"
    server-name-hash-bucket-size: "128"
    server-name-hash-max-size: "1024"
    server-tokens: "False"
    ssl-ciphers: HIGH:!aNULL:!MD5
    ssl-prefer-server-ciphers: "True"
    ssl-protocols: TLSv1.2
    use-http2: "true"
    worker-connections: "10240"
    worker-cpu-affinity: auto
    worker-processes: "1"
    worker-rlimit-nofile: "65536"
    worker-shutdown-timeout: 5m
  daemonset:
    useHostPort: false
  extraArgs:
    annotations-prefix: ingress.kubernetes.io
    enable-ssl-passthrough: true
  hostNetwork: true
  kind: DaemonSet
  name: controller
rbac:
  create: "true"
Ingress-config.yaml
Execute
helm install ingress stable/nginx-ingress --values ingress-config.yaml --namespace kube-system

The nginx ingress controller is now running in our IKS cluster. It is a best practice to put a load balancer in front of the controller; the load balancer also gives us a resolvable DNS name. First we need to retrieve the public IP address of the ingress controller. Execute
kubectl get svc -n kube-system ingress-nginx-ingress-controller
This will return the External-IP. Next, we use this external IP to create the load balancer.
ibmcloud ks nlb-dns create classic --cluster <clustername> --ip <External ip>

The hostname returned can be used as $STACK_HOST in the custom resources.
Install the Cert-Manager v0.12
The default cert-manager v0.10.1 is not compatible with IKS, so we use v0.12. During the deployment in the next phase we also need to change some custom resources.
Execute
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v0.12.0/cert-manager.yaml

Validation
If you want you can check the installation of the cert-manager and the nginx controller by checking the pods.
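The pod checks can be done with kubectl; the namespaces follow from the installs above (the ingress controller went into kube-system, and the v0.12 cert-manager manifest creates a cert-manager namespace):

```shell
# Ingress controller pods
kubectl get pods -n kube-system | grep ingress

# cert-manager pods
kubectl get pods -n cert-manager
```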


Deploy IBM API Connect to the Kubernetes cluster
We now have an IKS cluster that can be used to install APIC. We start by creating a namespace to keep everything organized.
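Creating the namespace (apic, matching the -n apic flags used in the commands below):

```shell
kubectl create namespace apic
```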
Secrets
We need a secret to download the container image from the registry.
kubectl create secret docker-registry apic-registry-secret --docker-server=de.icr.io/id-apic --docker-username=iamapikey --docker-password=TheSameKeyUsedToUploadTheImages -n apic
We also need a secret that sets the admin password of the DataPower gateway.
kubectl create secret generic datapower-admin-credentials --from-literal=password=ChangeThisPassword -n apic
Custom Resources
We now need to create the custom resources. You get these by unzipping the helper file, which contains a number of templates that need to be changed to reflect your environment ($STACK_HOST, storage class, …).
We start by deploying the IBM APIC and DataPower operators.
kubectl apply -f ibm-apiconnect-crds.yaml

kubectl apply -f ibm-apiconnect.yaml

kubectl apply -f ibm-datapower.yaml -n apic

If you want you can check if everything has been deployed correctly by checking the pods.
kubectl get pod -n apic

Now we create the custom certificates with the help of our cert-manager.

When everything is in place we can create the API Manager.

You can check the status by executing the following command.
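The original screenshot with the exact command is not reproduced here; assuming the CRD names installed by the APIC operator (ManagementCluster for the API Manager subsystem), a status check could look like:

```shell
# List the management subsystem custom resource and its readiness state (CRD name assumed)
kubectl get ManagementCluster -n apic
```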


If there is a problem with the deployment you can check the logs from the pods.
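Inspecting pod logs uses standard kubectl; the pod name placeholder below is whatever `kubectl get pod -n apic` reports:

```shell
# Show the log output of a failing pod
kubectl logs <pod-name> -n apic

# Events and scheduling details for the same pod
kubectl describe pod <pod-name> -n apic
```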


When the API Manager is started, you can access it at https://admin.<STACK_HOST>/admin, where STACK_HOST is the DNS name of the load balancer.
The default username and password are admin / 7iron-hide.

When the API manager is deployed you can continue with the other components.

Some pods of the Developer Portal will keep restarting because the generated certificates are not correct. If you check the logs of those pods, you will find something similar to the excerpt below.

We need to correct the certificates manually. The certificates below are created by the APIC operator, so you first need to create the Developer Portal before changing them. We need to copy the tls.crt part of the portal-ca secret into the ca.crt part of the portal-server and portal-client secrets.
To do this I used the edit command.
kubectl edit secret portal-ca -n apic
Copy the tls.crt from the portal-ca secret.

Paste the tls.crt into the ca.crt of the portal-client and portal-server secrets.
kubectl edit secret portal-client -n apic
kubectl edit secret portal-server -n apic
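Instead of hand-editing both secrets, the same copy can be scripted; a sketch, using the secret and key names mentioned above:

```shell
# Read the base64-encoded CA certificate from the portal-ca secret
CA=$(kubectl get secret portal-ca -n apic -o jsonpath='{.data.tls\.crt}')

# Write it into the ca.crt key of both portal secrets
kubectl patch secret portal-client -n apic --type merge -p "{\"data\":{\"ca.crt\":\"$CA\"}}"
kubectl patch secret portal-server -n apic --type merge -p "{\"data\":{\"ca.crt\":\"$CA\"}}"
```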

The next time the pods are restarted your IBM API cloud will be ready to be configured.