
Installing ERPNext on MicroK8s/Kubernetes


  1. Add ERPNext Helm chart repository
  2. Prepare Kubernetes
  3. Install frappe/erpnext Helm chart
  4. Create Resources

1. Add ERPNext Helm chart repository

kubectl config use-context microk8s
helm repo add frappe https://helm.erpnext.com
helm repo update
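To confirm the repository was added correctly, you can list the charts it provides (a quick sanity check; the versions shown will vary over time):

```shell
# List ERPNext charts available from the newly added "frappe" repo.
helm search repo frappe/erpnext
```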

2. Prepare Kubernetes

This phase includes:

  1. LoadBalancer Service
  2. Certificate Management
  3. MariaDB
  4. Shared Filesystem

2.1. LoadBalancer Service

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx

2.3. MariaDB Installation (and AWS RDS MariaDB Workaround)

See ERPNext Helm chart > Prepare Kubernetes > MariaDB. Changes:

  • Container image tag: 10.4 (AWS RDS MariaDB already supports this version as of June 2020)
  • For local development, set slave.replicas to 0.
  • You may want to set master.persistence.size (default is 8Gi).
  • For DigitalOcean, set master.persistence.storageClass and slave.persistence.storageClass to "do-block-storage".

ERPNext AWS RDS MariaDB bug workaround: Due to #22658, our workaround is:

  1. Install a temporary MariaDB using Helm chart.
  2. Install ERPNext and create site using that MariaDB.
  3. Backup from Kubernetes MariaDB and restore into AWS RDS MariaDB. (see section “Moving (Temporary) MariaDB Database to AWS RDS MariaDB” below for details)
  4. Reconfigure ERPNext to use AWS RDS MariaDB.
  5. Delete the temporary Kubernetes MariaDB.
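Step 3 can be sketched roughly as follows. This is an illustration only: the database name, RDS endpoint, and credentials are placeholders, and it assumes you can port-forward to the in-cluster MariaDB service.

```shell
# Rough sketch of step 3: dump the site database from the temporary
# in-cluster MariaDB and restore it into AWS RDS MariaDB.
# Database name and RDS hostname below are illustrative placeholders.
kubectl port-forward -n mariadb svc/mariadb 3306:3306 &
mysqldump -h 127.0.0.1 -P 3306 -u root -p --databases _0123456789abcdef > site-backup.sql
mysql -h your-rds-endpoint.rds.amazonaws.com -u root -p < site-backup.sql
```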

To install:

helm install -n mariadb mariadb bitnami/mariadb -f values-production.yaml

MariaDB Host should be mariadb.mariadb.svc.cluster.local. Check the StatefulSet progress:

$ kubectl get statefulset -n mariadb -o wide
NAME             READY   AGE   CONTAINERS        IMAGES
mariadb-master   0/1     12m   mariadb,metrics   docker.io/bitnami/mariadb:10.4,docker.io/bitnami/mysqld-exporter:0.12.1-debian-10-r146
mariadb-slave    0/1     12m   mariadb,metrics   docker.io/bitnami/mariadb:10.4,docker.io/bitnami/mysqld-exporter:0.12.1-debian-10-r146

You may get an error due to the PVC’s storage class:

ceefour@amanah:~/project/lovia/lovia-devops/erpnext$ kubectl describe statefulset -n mariadb mariadb-master
  Type     Reason        Age                  From                    Message
  ----     ------        ----                 ----                    -------
  Warning  FailedCreate  12m (x12 over 12m)   statefulset-controller  create Pod mariadb-master-0 in StatefulSet mariadb-master failed error: failed to create PVC data-mariadb-master-0: persistentvolumeclaims "data-mariadb-master-0" is forbidden: Internal error occurred: 2 default StorageClasses were found
  Warning  FailedCreate  119s (x18 over 12m)  statefulset-controller  create Claim data-mariadb-master-0 for Pod mariadb-master-0 in StatefulSet mariadb-master failed error: persistentvolumeclaims "data-mariadb-master-0" is forbidden: Internal error occurred: 2 default StorageClasses were found

To monitor deployment progress, use:

ceefour@amanah:~/project/lovia/lovia-devops/erpnext$ kubectl get pods -w --namespace mariadb -l release=mariadb
NAME               READY   STATUS              RESTARTS   AGE
mariadb-master-0   0/2     Pending             0          7s
mariadb-master-0   0/2     Pending             0          9s
mariadb-master-0   0/2     ContainerCreating   0          9s
mariadb-master-0   0/2     Running             0          35s

2.4. Shared Filesystem (NFS Server Provisioner)

Tutorial by DigitalOcean: https://www.digitalocean.com/community/tutorials/how-to-set-up-readwritemany-rwx-persistent-volumes-with-nfs-on-digitalocean-kubernetes

Save the following as nfs-server-provisioner-values.yaml (you can change the size as needed, for example 20Gi for production):

persistence:
  enabled: true
  storageClass: "microk8s-hostpath"
  size: 8Gi

storageClass:
  defaultClass: true

To list storage classes in your Kubernetes cluster, use:

kubectl get storageclass

For production on DigitalOcean Kubernetes, you can use 22Gi (you need 2 Gi of spare space for overhead, since ERPNext will need a full 20 Gi). DigitalOcean’s block storage class is do-block-storage. WARNING: persistence.storageClass is required! If you don’t set it, nfs-server-provisioner-0 will store data in an EmptyDir instead, which is definitely not what you want. Example:

persistence:
  enabled: true
  storageClass: "do-block-storage"
  size: 22Gi

storageClass:
  defaultClass: true


helm install nfs-server-provisioner stable/nfs-server-provisioner -f nfs-server-provisioner-values.yaml

The storage class to be used by ERPNext is “nfs”. You can check this with:

$ kubectl get storageclass
NAME                         PROVISIONER                            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
do-block-storage (default)   dobs.csi.digitalocean.com              Delete          Immediate           true                   93d
nfs                          cluster.local/nfs-server-provisioner   Delete          Immediate           true                   6m23s

Check if nfs-server-provisioner-0 pod is Running:

kubectl describe po nfs-server-provisioner-0

A PVC named data-nfs-server-provisioner-0 should exist; it is used by nfs-server-provisioner as the backing store.

kubectl describe pvc data-nfs-server-provisioner-0
kubectl get pvc -A

Note: If you want to delete this, it is not enough to just helm delete nfs-server-provisioner; you also need to kubectl delete pvc data-nfs-server-provisioner-0.
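The full cleanup, expressed as commands:

```shell
# Remove the NFS provisioner release AND its backing PVC;
# "helm delete" alone leaves the PVC (and its data) behind.
helm delete nfs-server-provisioner
kubectl delete pvc data-nfs-server-provisioner-0
```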

3. Install frappe/erpnext Helm chart

Create erpnext-values.yaml so you can helm upgrade later (note: by default the erpnext chart’s PVC persistence.size is 8Gi; change it to at least 20Gi for production):

mariadbHost: mariadb.mariadb.svc.cluster.local

persistence:
  storageClass: nfs
  size: 8Gi

nginxImage:
  # repository: registry.gitlab.com/lovia/frappe_docker/lovia-nginx
  tag: version-13-beta
pythonImage:
  # repository: registry.gitlab.com/lovia/frappe_docker/lovia-worker
  tag: version-13-beta
socketIOImage:
  # repository: frappe/frappe-socketio
  tag: version-13-beta

# imagePullSecrets:
#   - name: regcred

The Helm values above use ERPNext images without custom app.

Install ERPNext without custom app:

kubectl create namespace erpnext
helm install frappe-bench-0001 --namespace erpnext frappe/erpnext -f erpnext-values.yaml

Note that an erpnext pod contains 2 containers: erpnext-assets and erpnext-python. By default, the Deployment’s replicaCount is 1.

Ensure frappe-bench-0001-erpnext pod PVC is working/Bound:

ceefour@amanah:~/project/erpnext-local$ kubectl get pvc -A
NAMESPACE            NAME                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
container-registry   registry-claim                  Bound    pvc-1f86ebc5-2239-4af1-8b84-28cd3e07e8d1   20Gi       RWX            microk8s-hostpath   59m
default              data-nfs-server-provisioner-0   Bound    pvc-9529a0ff-329c-47d9-93e9-0f3af4d6bad2   1Gi        RWO            microk8s-hostpath   67s
erpnext              frappe-bench-0001-erpnext       Bound    pvc-3b96af59-60f8-469a-88c4-2de73e506a89   8Gi        RWX            nfs                 15m
mariadb              data-mariadb-master-0           Bound    pvc-95e4caf2-96be-4669-a639-798ca3b68b5f   8Gi        RWO            microk8s-hostpath   35m

You’ll get the following services:

ceefour@amanah:~/project/erpnext-local$ kubectl get svc -n erpnext
NAME                                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
frappe-bench-0001-erpnext                  ClusterIP    <none>        80/TCP      14s
frappe-bench-0001-erpnext-redis-cache      ClusterIP   <none>        13000/TCP   14s
frappe-bench-0001-erpnext-redis-queue      ClusterIP    <none>        12000/TCP   14s
frappe-bench-0001-erpnext-redis-socketio   ClusterIP   <none>        11000/TCP   14s
frappe-bench-0001-erpnext-socketio         ClusterIP   <none>        9000/TCP    14s

Set the mariadb-root-password secret with key password:

kubectl create secret -n erpnext generic mariadb-root-password --from-literal=password=super_secret_password
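To double-check what was stored, you can read the secret back and decode it (the value is base64-encoded at rest):

```shell
# Read back and decode the password stored in the secret.
kubectl get secret -n erpnext mariadb-root-password \
  -o jsonpath='{.data.password}' | base64 -d
```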

Troubleshooting: Warning FailedMount 2m13s kubelet, amanah MountVolume.SetUp failed for volume “pvc-3b96af59-60f8-469a-88c4-2de73e506a89” : mount failed: exit status 32

Problem: This appears in the erpnext pod’s events (kubectl describe po -n erpnext frappe-bench-0001-erpnext-erpnext-7bd5c94d46-lnv8m).

Solution (Hendy): For some reason, after I deleted the pod, it worked.

Post-install (No site yet)

You can open a browser at service/frappe-bench-0001-erpnext’s Cluster IP on the same computer. You should get a “Sorry! We will be back soon.” message. Now you can create a Site and Ingress.

4. Create Resources

Reference: ERPNext Helm chart > Kubernetes Resources.

  1. Create New Site Job.
  2. Create New Site Ingress.
  3. Create CronJob to take backups and push them to cloud regularly.

4.1. Create New Site Job

Create add-example-site-job.yaml. Important things to change:

  • Make sure you have set the Kubernetes secret mariadb-root-password, key password, in namespace erpnext.
  • Change the frappe/erpnext-worker version to the latest specific stable version, or alternatively use a rolling tag like v12.
  • Change SITE_NAME to your real subdomain name.
  • Generate and save the admin password (e.g. using Bitwarden), and set it as ADMIN_PASSWORD.
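One way to generate a strong ADMIN_PASSWORD locally, assuming openssl is installed (a sketch; any password manager’s generator works just as well):

```shell
# Generate a random 32-character password suitable for ADMIN_PASSWORD.
# 24 random bytes base64-encode to exactly 32 characters.
ADMIN_PASSWORD="$(openssl rand -base64 24)"
echo "$ADMIN_PASSWORD"
```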

AWS RDS MariaDB Notes: The following attempts did not work. Instead, see workaround in “Moving (Temporary) MariaDB Database to AWS RDS MariaDB” section below.

kubectl exec -n erpnext frappe-bench-0001-erpnext-worker-d-847966697-fwkwc -it -- bash
# IGNORE the commands below; they were experimental and turned out to be unnecessary. Just skip to the "common_site_config.json" part.
#apt update
#apt install nano less
#export SITE_NAME=erp.lovia.life
#export DB_ROOT_USER=root
#export INSTALL_APPS=erpnext
# need to patch GRANT ALL PRIVILEGES: nano /home/frappe/frappe-bench/commands/new.py
#. /home/frappe/frappe-bench/env/bin/activate
#su frappe -c 'python ~/frappe-bench/commands/new.py'

Then use cat to update common_site_config.json as follows (nano, vim, and vi are not available in the container):

    "rds_db": 1,
    "db_host": "****************",
    "db_port": 3306,
    "redis_cache": "redis://frappe-bench-0001-erpnext-redis-cache:13000",
    "redis_queue": "redis://frappe-bench-0001-erpnext-redis-queue:12000",
    "redis_socketio": "redis://frappe-bench-0001-erpnext-redis-socketio:11000",
    "socketio_port": 9000

Create a job to create the site. Important: make sure the worker version here matches the image tags used by the Frappe Bench Helm chart. Otherwise, you’ll get the “Sorry! We will be back soon.” error page (and it won’t be back soon).

apiVersion: batch/v1
kind: Job
metadata:
  name: create-erp-example-com
spec:
  backoffLimit: 0
  template:
    spec:
      securityContext:
        supplementalGroups: [1000]
      containers:
      - name: create-site
        image: frappe/erpnext-worker:v13-beta
        args: ["new"]
        imagePullPolicy: IfNotPresent
        volumeMounts:
          - name: sites-dir
            mountPath: /home/frappe/frappe-bench/sites
        env:
          - name: "SITE_NAME"
            value: erpnext-example.svc.cluster.local
          - name: "DB_ROOT_USER"
            value: root
          - name: "MYSQL_ROOT_PASSWORD"
            valueFrom:
              secretKeyRef:
                name: mariadb-root-password
                key: password
          - name: "ADMIN_PASSWORD"
            value: super_secret_password
          - name: "INSTALL_APPS"
            value: "erpnext"
      restartPolicy: Never
      volumes:
        - name: sites-dir
          persistentVolumeClaim:
            claimName: frappe-bench-0001-erpnext
            readOnly: false

Note: If you use a custom image (including a custom app), you can change the “INSTALL_APPS” value to e.g. “erpnext,lovia”.

kubectl create -n erpnext -f add-example-site-job.yaml
kubectl -n erpnext describe job create-erp-example-com

You’ll get a new Job pod for that site, e.g. create-erp-example-com-c2wzv. You can follow that job’s logs (this takes about 2 minutes; after that, the pod’s status will change to Completed):

ceefour@amanah:~/project/erpnext-local$ kubectl logs -f -n erpnext create-erp-example-com-c2wzv
Attempt 1 to connect to mariadb.mariadb.svc.cluster.local:3306
Attempt 1 to connect to frappe-bench-0001-erpnext-redis-queue:12000
Attempt 1 to connect to frappe-bench-0001-erpnext-redis-cache:13000
Attempt 1 to connect to frappe-bench-0001-erpnext-redis-socketio:11000
Connections OK
Created user _684bfe87ae59e1a8
Created database _684bfe87ae59e1a8
Granted privileges to user _684bfe87ae59e1a8 and database _684bfe87ae59e1a8
Starting database import...
Imported from database /home/frappe/frappe-bench/apps/frappe/frappe/database/mariadb/framework_mariadb.sql

Installing frappe...
Updating DocTypes for frappe        : [========================================]
Updating country info               : [========================================]

Installing erpnext...
Updating DocTypes for erpnext       : [========================================]
Updating customizations for Address
*** Scheduler is disabled ***

Troubleshooting AWS RDS MariaDB: Unfortunately, GRANT ALL won’t work on AWS RDS MariaDB. The good news is that frappe/database/db_manager.py has supported AWS RDS since v12.8. You need to apply the AWS RDS MariaDB workaround.

Troubleshooting AWS RDS MariaDB Part 2:

Updating country info               : [========================================]

Installing erpnext...
Updating DocTypes for erpnext       : [========================================]
Updating customizations for Address
*** Scheduler is disabled ***
ERROR 1054 (42S22) at line 1: Unknown column 'ERROR (RDS): SUPER PRIVILEGE CANNOT BE GRANTED OR MAINTAINED' in 'field list'
ERROR 1396 (HY000) at line 1: Operation ALTER USER failed for '_9a6f28ddcbb1acb1'@'%'
ERROR 1044 (42000) at line 1: Access denied for user 'root'@'%' to database '_9a6f28ddcbb1acb1'

Troubleshooting pymysql.err.InternalError: Packet sequence number wrong (with frappe/erpnext-worker:v13.0.0-beta.3, and also v12 with rds_db=1): this is caused by AWS RDS MariaDB blocking the IP address:

ERROR 1129 (HY000): Host ‘…’ is blocked because of many connection errors; unblock with ‘mysqladmin flush-hosts’

Quick fix is to run: mysqladmin -h... -u... -p flush-hosts

A longer-term solution is to find out what actually caused the errors: inspect the MariaDB variables max_connections and max_connect_errors, and increase their limits.

show variables like "max_connections";
show variables like "max_connect_errors";

For example, you can set (using AWS RDS MariaDB Parameter Group) max_connect_errors to 10000 and max_connections to 200.
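If you manage the Parameter Group with the AWS CLI, the change looks roughly like this (the parameter group name is an illustrative placeholder):

```shell
# Raise connection limits via an RDS Parameter Group.
# "my-mariadb-params" is a placeholder for your parameter group name.
aws rds modify-db-parameter-group \
  --db-parameter-group-name my-mariadb-params \
  --parameters "ParameterName=max_connect_errors,ParameterValue=10000,ApplyMethod=immediate" \
               "ParameterName=max_connections,ParameterValue=200,ApplyMethod=immediate"
```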

Some people also report that this error is caused by multi-threading. An example of the failure:

Attempt 1 to connect to frappe-bench-0001-erpnext-redis-queue:12000
Attempt 1 to connect to frappe-bench-0001-erpnext-redis-cache:13000
Attempt 1 to connect to frappe-bench-0001-erpnext-redis-socketio:11000
Connections OK
Traceback (most recent call last):
  File "/home/frappe/frappe-bench/commands/new.py", line 127, in <module>
  File "/home/frappe/frappe-bench/commands/new.py", line 75, in main
  File "/home/frappe/frappe-bench/apps/frappe/frappe/commands/site.py", line 88, in _new_site
    db_password=db_password, db_type=db_type, db_host=db_host, db_port=db_port, no_mariadb_socket=no_mariadb_socket)
  File "/home/frappe/frappe-bench/apps/frappe/frappe/installer.py", line 35, in install_db
    setup_database(force, source_sql, verbose, no_mariadb_socket)
  File "/home/frappe/frappe-bench/apps/frappe/frappe/database/__init__.py", line 16, in setup_database
    return frappe.database.mariadb.setup_db.setup_database(force, source_sql, verbose, no_mariadb_socket=no_mariadb_socket)
  File "/home/frappe/frappe-bench/apps/frappe/frappe/database/mariadb/setup_db.py", line 39, in setup_database
    if force or (db_name not in dbman.get_database_list()):
  File "/home/frappe/frappe-bench/apps/frappe/frappe/database/db_manager.py", line 61, in get_database_list
    return [d[0] for d in self.db.sql("SHOW DATABASES")]
  File "/home/frappe/frappe-bench/apps/frappe/frappe/database/database.py", line 122, in sql
  File "/home/frappe/frappe-bench/apps/frappe/frappe/database/database.py", line 75, in connect
    self._conn = self.get_connection()
  File "/home/frappe/frappe-bench/apps/frappe/frappe/database/mariadb/database.py", line 91, in get_connection
    local_infile = frappe.conf.local_infile)
  File "/home/frappe/frappe-bench/env/lib/python3.7/site-packages/pymysql/__init__.py", line 94, in Connect
    return Connection(*args, **kwargs)
  File "/home/frappe/frappe-bench/env/lib/python3.7/site-packages/pymysql/connections.py", line 325, in __init__
  File "/home/frappe/frappe-bench/env/lib/python3.7/site-packages/pymysql/connections.py", line 598, in connect
  File "/home/frappe/frappe-bench/env/lib/python3.7/site-packages/pymysql/connections.py", line 975, in _get_server_information
    packet = self._read_packet()
  File "/home/frappe/frappe-bench/env/lib/python3.7/site-packages/pymysql/connections.py", line 671, in _read_packet
    % (packet_number, self._next_seq_id))
pymysql.err.InternalError: Packet sequence number wrong - got 1 expected 0

ERPNext v12 vs v13-beta? For discussion of v12 vs v13 stability, see: https://discuss.erpnext.com/t/release-note-erpnext-and-frappe-version-13-beta-3/63308/14

Option 1: Access Website using /etc/hosts (Temporarily/Development)

Now add an entry to your /etc/hosts file mapping the service’s Cluster IP to erpnext-example.svc.cluster.local.
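For example (a sketch; the Cluster IP is looked up from your cluster, and editing /etc/hosts requires sudo):

```shell
# Look up the service's ClusterIP, then map it to the site name in /etc/hosts.
CLUSTER_IP="$(kubectl get svc -n erpnext frappe-bench-0001-erpnext \
  -o jsonpath='{.spec.clusterIP}')"
echo "$CLUSTER_IP  erpnext-example.svc.cluster.local" | sudo tee -a /etc/hosts
```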

And open the browser at the SITE_NAME, e.g. http://erpnext-example.svc.cluster.local/ . You should get a proper welcome page. 🙂 Congratulations!

Option 2: Access Website using DNS, Ingress, and SSL (Production)

1. Create a DNS record for erp.lovia.life: a CNAME pointing to the load balancer’s hostname, or an A record pointing to the nginx node’s external IP address (not proxied).

2. Create frappe-bench-0001-erpnext-ingress.yaml:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: frappe-bench-0001-erpnext-ingress
  namespace: erpnext
  annotations:
    kubernetes.io/ingress.class: nginx
    # https://github.com/nginxinc/kubernetes-ingress/issues/21#issuecomment-521338887
    nginx.ingress.kubernetes.io/proxy-body-size: 64m
    # https://discuss.erpnext.com/t/erpnext-ssl-https-config-not-working-with-nginx/11314 (default is 60)
    nginx.ingress.kubernetes.io/proxy-read-timeout: '120'
    # https://pumpingco.de/blog/using-signalr-in-kubernetes-behind-nginx-ingress/
    nginx.ingress.kubernetes.io/affinity: cookie
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  # REQUIRES helm cert-manager
  tls:
    - hosts:
        - erp.lovia.life
      secretName: frappe-bench-0001-erpnext-tls
  rules:
    - host: erp.lovia.life
      http:
        paths:
          - backend:
              serviceName: frappe-bench-0001-erpnext
              servicePort: 80

3. Deploy the ingress:

kubectl apply -f frappe-bench-0001-erpnext-ingress.yaml

To check certificate issuance progress:

ceefour@amanah:~/project/lovia/lovia-devops/erpnext$ kubectl describe cert -n erpnext frappe-bench-0001-erpnext-tls
  Type    Reason        Age   From          Message
  ----    ------        ----  ----          -------
  Normal  GeneratedKey  3m3s  cert-manager  Generated a new private key
  Normal  Requested     3m3s  cert-manager  Created new CertificateRequest resource "frappe-bench-0001-erpnext-tls-4179684101"
  Normal  Issued        105s  cert-manager  Certificate issued successfully

4. Access ERPNext at https://erp.lovia.life/desk
