Frappe Deployment in Azure Kubernetes Service


  1. Create an Azure Kubernetes Service (AKS) cluster in the Azure Portal: Home > Kubernetes services > Create Kubernetes cluster.


  1. The wizard asks for the cluster configuration you want, such as the number of nodes, the VM size of each node, and so on.

  2. After creation, you can access the AKS cluster in the Azure Portal.



  1. To access the AKS cluster from your local system, follow the steps below:

    1. Install kubectl, the Kubernetes command-line tool, with the following commands:

      sudo apt-get update
      sudo apt-get install -y apt-transport-https ca-certificates curl gnupg
      curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
      sudo chmod 644 /etc/apt/keyrings/kubernetes-apt-keyring.gpg
      echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
      sudo chmod 644 /etc/apt/sources.list.d/kubernetes.list
      sudo apt-get update
      sudo apt-get install -y kubectl

    2. Install the Azure CLI (az) by running curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash in an Ubuntu terminal.

    3. Run az login. It opens an authentication page in the browser; sign in with your Azure account email and password.

    4. Run az aks get-credentials -n {kubernetes service name} -g {resource group name}. This adds the cluster credentials to your ~/.kube/config file.

    5. You can verify that the cluster was added using kubectl config get-contexts, which lists all clusters configured locally.

    6. To switch to another cluster, use kubectl config use-context {context name}.

    7. You can set an alias for kubectl by adding alias k='kubectl' to your ~/.bashrc file.
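The alias can also be given kubectl's tab completion; a possible ~/.bashrc fragment, assuming kubectl is installed and the bash-completion package is available:

```shell
# ~/.bashrc additions
alias k='kubectl'
# enable kubectl tab completion, and attach it to the alias as well
source <(kubectl completion bash)
complete -o default -F __start_kubectl k
```

After editing the file, run source ~/.bashrc (or open a new terminal) for the alias to take effect.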

  2. The next step is to create a storage class. For that, use the storage_class.yaml file below.


    storage_class.yaml:


    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: {storage_class_name}
    provisioner: file.csi.azure.com # replace with "kubernetes.io/azure-file" if aks version is less than 1.21
    allowVolumeExpansion: true
mountOptions:
  - dir_mode=0777
  - file_mode=0777
  - uid=0
  - gid=0
  - mfsymlinks
  - cache=strict
  - actimeo=30
    parameters:
      skuName: Premium_LRS


  3. Create the storage class with the command kubectl apply -f storage_class.yaml.

  4. The next step is to create namespaces. We need two: one for MariaDB and one for Frappe. Create each with the command kubectl create ns {namespace name}.
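The same namespaces can also be created declaratively and applied with kubectl apply -f; a sketch, assuming the names mariadb and finiteerp used elsewhere in this guide:

```yaml
# namespaces.yaml -- hypothetical file name; namespace names match this guide's examples
apiVersion: v1
kind: Namespace
metadata:
  name: mariadb
---
apiVersion: v1
kind: Namespace
metadata:
  name: finiteerp
```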

  5. The next step is to create a Persistent Volume Claim (PVC). For that, use the persistant_volume_claim.yaml file below.


    persistant_volume_claim.yaml:


    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      labels:
        app: "erpnext"
        chart: "erpnext-7.0.84"
        heritage: "Helm"
        release: "finiteerp"
      name: "{pvc name}"
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 8Gi
      storageClassName: "{Storage class name}"
  6. Create the PVC with the command kubectl apply -f persistant_volume_claim.yaml -n {namespace}. The PVC must be created in the Frappe service's namespace (e.g. finiteerp, claimgenie).

  7. The next step is to deploy MariaDB. For that we use the MariaDB Helm chart provided by Bitnami, installed with our own values file.

mariadb-values.yaml:

auth:
  rootPassword: "someSecurePassword"

primary:
  configuration: |-
    [mysqld]
    character-set-client-handshake=FALSE
    skip-name-resolve
    explicit_defaults_for_timestamp
    basedir=/opt/bitnami/mariadb
    plugin_dir=/opt/bitnami/mariadb/plugin
    port=3306
    socket=/opt/bitnami/mariadb/tmp/mysql.sock
    tmpdir=/opt/bitnami/mariadb/tmp
    max_allowed_packet=16M
    bind-address=0.0.0.0
    pid-file=/opt/bitnami/mariadb/tmp/mysqld.pid
    log-error=/opt/bitnami/mariadb/logs/mysqld.log
    character-set-server=utf8mb4
    collation-server=utf8mb4_unicode_ci

    [client]
    port=3306
    socket=/opt/bitnami/mariadb/tmp/mysql.sock
    default-character-set=utf8mb4
    plugin_dir=/opt/bitnami/mariadb/plugin

    [manager]
    port=3306
    socket=/opt/bitnami/mariadb/tmp/mysql.sock
    pid-file=/opt/bitnami/mariadb/tmp/mysqld.pid


  1. Create the MariaDB service using the command: helm install mariadb --namespace mariadb oci://registry-1.docker.io/bitnamicharts/mariadb -f mariadb-values.yaml. The resulting in-cluster DNS name follows the pattern {release}.{namespace}.svc.cluster.local, i.e. mariadb.mariadb.svc.cluster.local here, which is the value used as dbHost in frappe-values.yaml.

    Note: Helm itself must be installed first. Install it with:

    curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
    sudo apt-get install apt-transport-https --yes
    echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
    sudo apt-get update
    sudo apt-get install helm
  2. The next step is to deploy the Frappe service. For that we use the Frappe Helm chart, again with our own values file.

frappe-values.yaml:

dbHost: "mariadb.mariadb.svc.cluster.local"
dbPort: 3306
dbRootUser: "root"
dbRootPassword: "someSecurePassword"
# dbRds: false

image:
  repository: balamurugan1207/claimgenie
  tag: latest
  pullPolicy: Always

nginx:
  replicaCount: 1
  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 3
    targetCPU: 75
    targetMemory: 75
  # config: |
  #   # custom conf /etc/nginx/conf.d/default.conf
  environment:
    upstreamRealIPAddress: "127.0.0.1"
    upstreamRealIPRecursive: "off"
    upstreamRealIPHeader: "X-Forwarded-For"
    frappeSiteNameHeader: "$host"
  livenessProbe:
    tcpSocket:
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10
  readinessProbe:
    tcpSocket:
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10
  service:
    type: ClusterIP
    port: 8080
  resources: {}
  nodeSelector: {}
  tolerations: []
  affinity: {}
  envVars: []
  initContainers: []
  sidecars: []

worker:
  gunicorn:
    replicaCount: 1
    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 3
      targetCPU: 75
      targetMemory: 75
    livenessProbe:
      tcpSocket:
        port: 8000
      initialDelaySeconds: 5
      periodSeconds: 10
    readinessProbe:
      tcpSocket:
        port: 8000
      initialDelaySeconds: 5
      periodSeconds: 10
    service:
      type: ClusterIP
      port: 8000
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}
    args: []
    envVars: []
    initContainers: []
    sidecars: []

  default:
    replicaCount: 1
    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 3
      targetCPU: 75
      targetMemory: 75
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}
    livenessProbe:
      override: false
      probe: {}
    readinessProbe:
      override: false
      probe: {}
    envVars: []
    initContainers: []
    sidecars: []

  short:
    replicaCount: 1
    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 3
      targetCPU: 75
      targetMemory: 75
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}
    livenessProbe:
      override: false
      probe: {}
    readinessProbe:
      override: false
      probe: {}
    envVars: []
    initContainers: []
    sidecars: []

  long:
    replicaCount: 1
    autoscaling:
      enabled: false
      minReplicas: 1
      maxReplicas: 3
      targetCPU: 75
      targetMemory: 75
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}
    livenessProbe:
      override: false
      probe: {}
    readinessProbe:
      override: false
      probe: {}
    envVars: []
    initContainers: []
    sidecars: []

  scheduler:
    replicaCount: 1
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}
    livenessProbe:
      override: false
      probe: {}
    readinessProbe:
      override: false
      probe: {}
    envVars: []
    initContainers: []
    sidecars: []

  healthProbe: |
    exec:
      command:
        - bash
        - -c
        - echo "Ping backing services";
        {{- if .Values.mariadb.enabled }}
        {{- if eq .Values.mariadb.architecture "replication" }}
        - wait-for-it {{ .Release.Name }}-mariadb-primary:{{ .Values.mariadb.primary.service.ports.mysql }} -t 1;
        {{- else }}
        - wait-for-it {{ .Release.Name }}-mariadb:{{ .Values.mariadb.primary.service.ports.mysql }} -t 1;
        {{- end }}
        {{- else if .Values.dbHost }}
        - wait-for-it {{ .Values.dbHost }}:{{ .Values.mariadb.primary.service.ports.mysql }} -t 1;
        {{- end }}
        {{- if index .Values "redis-cache" "enabled" }}
        - wait-for-it {{ .Release.Name }}-redis-cache-master:{{ index .Values "redis-cache" "master" "containerPorts" "redis" }} -t 1;
        {{- else if index .Values "redis-cache" "host" }}
        - wait-for-it {{ index .Values "redis-cache" "host" }} -t 1;
        {{- end }}
        {{- if index .Values "redis-queue" "enabled" }}
        - wait-for-it {{ .Release.Name }}-redis-queue-master:{{ index .Values "redis-queue" "master" "containerPorts" "redis" }} -t 1;
        {{- else if index .Values "redis-queue" "host" }}
        - wait-for-it {{ index .Values "redis-queue" "host" }} -t 1;
        {{- end }}
        {{- if index .Values "redis-socketio" "enabled" }}
        - wait-for-it {{ .Release.Name }}-redis-socketio-master:{{ index .Values "redis-socketio" "master" "containerPorts" "redis" }} -t 1;
        {{- else if index .Values "redis-socketio" "host" }}
        - wait-for-it {{ index .Values "redis-socketio" "host" }} -t 1;
        {{- end }}
        {{- if .Values.postgresql.host }}
        - wait-for-it {{ .Values.postgresql.host }}:{{ .Values.postgresql.primary.service.ports.postgresql }} -t 1;
        {{- else if .Values.postgresql.enabled }}
        - wait-for-it {{ .Release.Name }}-postgresql:{{ .Values.postgresql.primary.service.ports.postgresql }} -t 1;
        {{- end }}
    initialDelaySeconds: 15
    periodSeconds: 5

socketio:
  replicaCount: 1
  autoscaling:
    enabled: false
    minReplicas: 1
    maxReplicas: 3
    targetCPU: 75
    targetMemory: 75
  livenessProbe:
    tcpSocket:
      port: 3000
    initialDelaySeconds: 5
    periodSeconds: 10
  readinessProbe:
    tcpSocket:
      port: 3000
    initialDelaySeconds: 5
    periodSeconds: 10
  resources: {}
  nodeSelector: {}
  tolerations: []
  affinity: {}
  service:
    type: ClusterIP
    port: 3000
  envVars: []
  initContainers: []
  sidecars: []

persistence:
  worker:
    enabled: true
    existingClaim: "claimgenie-pvc"
    size: 8Gi
    storageClass: "tfs-azurefile"
  logs:
    # Container based log search and analytics stack recommended
    enabled: false
    existingClaim: "claimgenie-pvc"
    size: 8Gi
    storageClass: "nfs"

# Ingress
ingress:
  enabled: false
  ingressName: "dev.finite-erp.tech"
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
  hosts:
  - host: dev.finite-erp.tech
    paths:
    - path: /
      pathType: Prefix
  tls:
   - secretName: dev-finiteerp-tech-tls
     hosts:
       - dev.finite-erp.tech
jobs:
  volumePermissions:
    enabled: false
    backoffLimit: 0
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}

  configure:
    enabled: true
    fixVolume: true
    backoffLimit: 0
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}
    envVars: []
    command: []
    args: []

  createSite:
    enabled: false
    forceCreate: false
    siteName: "dev.finite-erp.tech"
    adminPassword: "tfinite@24"
    installApps:
    - "erpnext"
    dbType: "mariadb"
    backoffLimit: 0
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}

  dropSite:
    enabled: false
    forced: false
    siteName: "erp.cluster.local"
    backoffLimit: 0
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}

  backup:
    enabled: false
    siteName: "erp.cluster.local"
    withFiles: true
    backoffLimit: 0
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}

  migrate:
    enabled: false
    siteName: "erp.cluster.local"
    skipFailing: false
    backoffLimit: 0
    resources: {}
    nodeSelector: {}
    tolerations: []
    affinity: {}

  custom:
    enabled: false
    jobName: ""
    labels: {}
    backoffLimit: 0
    initContainers: []
    containers: []
    restartPolicy: Never
    volumes: []
    nodeSelector: {}
    affinity: {}
    tolerations: []

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  # Specifies whether a service account should be created
  create: true

podSecurityContext:
  supplementalGroups: [1000]

securityContext:
  capabilities:
    add:
    - CAP_CHOWN
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

redis-cache:
  # https://github.com/bitnami/charts/tree/master/bitnami/redis
  enabled: true
  # host: ""
  architecture: standalone
  auth:
    enabled: false
    sentinel: false
  master:
    containerPorts:
      redis: 6379
    persistence:
      enabled: true

redis-queue:
  # https://github.com/bitnami/charts/tree/master/bitnami/redis
  enabled: true
  # host: ""
  architecture: standalone
  auth:
    enabled: false
    sentinel: false
  master:
    containerPorts:
      redis: 6379
    persistence:
      enabled: true

redis-socketio:
  # https://github.com/bitnami/charts/tree/master/bitnami/redis
  enabled: true
  # host: ""
  architecture: standalone
  auth:
    enabled: false
    sentinel: false
  master:
    containerPorts:
      redis: 6379
    persistence:
      enabled: false

mariadb:
  # https://github.com/bitnami/charts/tree/master/bitnami/mariadb
  enabled: false
  auth:
    rootPassword: "rPLEcYyTiky2"
    username: "erpnext"
    password: "CwM1kVDv175b"
    replicationPassword: "rPLEcYyTiky2"
  primary:
    service:
      ports:
        mysql: 3306
    extraFlags: >-
      --skip-character-set-client-handshake
      --skip-innodb-read-only-compressed
      --character-set-server=utf8mb4
      --collation-server=utf8mb4_unicode_ci

postgresql:
  # https://github.com/bitnami/charts/tree/master/bitnami/postgresql
  enabled: false
  # host: ""
  auth:
    username: "postgres"
    postgresPassword: "changeit"
  primary:
    service:
      ports:
        postgresql: 5432


  1. For creating the Frappe service use this command: helm install finiteerp --namespace finiteerp frappe/erpnext --version {chart version} -f frappe-values.yaml. Before this, the Frappe Helm repository must be added locally with the command: helm repo add frappe https://helm.erpnext.com.

  2. Now the Frappe service will be running in Kubernetes. Next we have to expose the site with an Ingress; for that we use the ingress-nginx controller.

  3. The next step is to install the ingress-nginx controller. For that, use the command: helm upgrade --install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --create-namespace

  4. The next step is to create an Ingress that exposes the Frappe service. For that we can use ingress.yaml. Before that, we have to store our TLS certificate as a secret and reference that secret from the Ingress. Create it with the command: kubectl create secret tls finite-erp-tls --cert={certfile} --key={keyfile} -n erpnext.
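Equivalently, the TLS secret can be declared as a manifest instead of created imperatively; a sketch with placeholder values (kubectl create secret tls produces the same object and base64-encodes the certificate and key files for you):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: finite-erp-tls
  namespace: erpnext
type: kubernetes.io/tls
data:
  tls.crt: "<base64-encoded certificate PEM>"  # placeholder, not real data
  tls.key: "<base64-encoded private key PEM>"  # placeholder, not real data
```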

ingress.yaml:

# Source: erpnext/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: claimgenie-erpnext
  labels:
    helm.sh/chart: erpnext-6.0.96
    app.kubernetes.io/name: erpnext
    app.kubernetes.io/instance: claimgenie
    app.kubernetes.io/version: "latest"
    app.kubernetes.io/managed-by: Helm
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - "tef.claimgenie.ai"
      secretName: claimgenie-tls
  rules:
    - host: "tef.claimgenie.ai"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: claimgenie-erpnext
                port:
                  number: 8080
  1. Create the Ingress using the command: kubectl apply -f ingress.yaml -n {namespace}.