We left our cluster while sysbench was loading data into Vitess. This process may take some time, so let’s start this blog post by discussing how we can monitor its progress.
If you would like to follow our steps, you can clone this repository: vitesstests
Monitoring the replication
Using vtctlclient to execute SQL
First of all, we can check what vttablets are available and running in our Vitess cluster:
root@k8smaster:~/vitesstests# vtctlclient listalltablets
zone1-2344898534 sbtest - replica 10.244.3.8:15000 10.244.3.8:3306 []
zone1-2646235096 sbtest - primary 10.244.2.8:15000 10.244.2.8:3306 [] 2021-09-16T13:44:09Z
As you can see, we have two vttablets with aliases zone1-2646235096 (primary) and zone1-2344898534 (replica).
Let’s check how many rows have already been loaded on the primary tablet:
root@k8smaster:~/vitesstests# vtctlclient VtTabletExecute zone1-2646235096 "SELECT COUNT(*) FROM sbtest1;"
+----------+
| COUNT(*) |
+----------+
|   473358 |
+----------+
We can repeat it on the replica, switching the vttablet alias:
root@k8smaster:~/vitesstests# vtctlclient VtTabletExecute zone1-2344898534 "SELECT COUNT(*) FROM sbtest1;"
+----------+
| COUNT(*) |
+----------+
|    88755 |
+----------+
As you can see, there is quite a significant replication lag in our system. We can verify it by running “SHOW SLAVE STATUS”:
root@k8smaster:~/vitesstests# vtctlclient VtTabletExecute zone1-2344898534 "SHOW SLAVE STATUS;"
+--------------------------------+-------------+-------------+-------------+---------------+--------------------------+---------------------+----
| Slave_IO_State                 | Master_Host | Master_User | Master_Port | Connect_Retry | Master_Log_File          | Read_Master_Log_Pos | ...
+--------------------------------+-------------+-------------+-------------+---------------+--------------------------+---------------------+----
| Queueing master event to the   | 10.244.2.8  | vt_repl     | 3306        | 10            | vt-2646235096-bin.000001 | 160119406           | ...
+--------------------------------+-------------+-------------+-------------+---------------+--------------------------+---------------------+----
(The rest of this very wide row is truncated here for readability; among the remaining columns we can find Slave_IO_Running: Yes, Slave_SQL_Running: Yes, Seconds_Behind_Master: 990, Retrieved_Gtid_Set: e2f1d84d-16f3-11ec-b84c-d24f6e4899bd:3-350 and Executed_Gtid_Set: e2f1d84d-16f3-11ec-b84c-d24f6e4899bd:1-154.)
Indeed, there’s a significant (990 seconds) lag. The output formatting is unfortunate, and we couldn’t find a way to use “\G” with vtctlclient. Luckily, with a bit of overhead, we can run the query directly on the MySQL container. It’s quite likely that you’ll need to do this at some point anyway, so let’s get it out of the way now and show you how to locate and then connect directly to the database container.
Connecting directly to the database container
We are going to start in the same way – we need to get the vttablet alias. Our replica is called “zone1-2344898534”. Now we have to find where it is physically located. We can use:
root@k8smaster:~/vitesstests# kubectl describe nodes | less
to check on which node our vttablet is located. In our case it is node1:
Addresses:
  InternalIP:  10.20.0.11
  Hostname:    k8snode1
Capacity:
  cpu:                8
  ephemeral-storage:  64284292Ki
  hugepages-2Mi:      0
  memory:             16397144Ki
  pods:               110
Allocatable:
  cpu:                8
  ephemeral-storage:  59244403410
  hugepages-2Mi:      0
  memory:             16294744Ki
  pods:               110
System Info:
  Machine ID:                 b92621f6e4c84b9aa624778f7d27121a
  System UUID:                3754d7df-e0a5-7f4e-a539-2d17efd6ede1
  Boot ID:                    40305419-15e6-4bfa-aba2-29b7af3f670a
  Kernel Version:             5.4.0-80-generic
  OS Image:                   Ubuntu 20.04.2 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.7
  Kubelet Version:            v1.20.9
  Kube-Proxy Version:         v1.20.9
PodCIDR:                      10.244.3.0/24
PodCIDRs:                     10.244.3.0/24
Non-terminated Pods:          (5 in total)
  Namespace    Name                                            CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------    ----                                            ------------  ----------  ---------------  -------------  ---
  default      example-etcd-faf13de3-3                         100m (1%)     0 (0%)      256Mi (1%)       256Mi (1%)     7h50m
  default      example-vttablet-zone1-2344898534-e9abaf0e      2010m (25%)   100m (1%)   4128Mi (25%)     128Mi (0%)     7h50m
  default      example-zone1-vtctld-1d4dcad0-64668cccc8-swmj4  100m (1%)     0 (0%)      128Mi (0%)       128Mi (0%)     7h50m
  kube-system  kube-flannel-ds-njdfv                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      9h
  kube-system  kube-proxy-9j7wp                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         9h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                2310m (28%)  200m (2%)
  memory             4562Mi (28%) 562Mi (3%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
Events:              <none>
As a next step, let’s connect to node1 and check the docker containers running on it:
root@k8snode1:~# docker ps | grep zone1-2344898534
9a4e59f15727   vitess/lite            "/vt/bin/vttablet --…"   8 hours ago   Up 8 hours   k8s_vttablet_example-vttablet-zone1-2344898534-e9abaf0e_default_43e46b7a-624b-48d7-9a29-dc98048078f5_1
cd60189a0d2e   e80442e91b90           "/bin/mysqld_exporte…"   8 hours ago   Up 8 hours   k8s_mysqld-exporter_example-vttablet-zone1-2344898534-e9abaf0e_default_43e46b7a-624b-48d7-9a29-dc98048078f5_0
3994bcee9342   vitess/lite            "/vt/bin/mysqlctld -…"   8 hours ago   Up 8 hours   k8s_mysqld_example-vttablet-zone1-2344898534-e9abaf0e_default_43e46b7a-624b-48d7-9a29-dc98048078f5_0
0549b34b2cda   k8s.gcr.io/pause:3.2   "/pause"                 8 hours ago   Up 8 hours   k8s_POD_example-vttablet-zone1-2344898534-e9abaf0e_default_43e46b7a-624b-48d7-9a29-dc98048078f5_0
As you can see, we have four containers running for our vttablet: the “pause” pod infrastructure container, the Prometheus mysqld exporter, vttablet, and mysqlctld. The vttablet container runs the vttablet binary, which acts as a management layer, while the mysqlctld container runs the database itself. Let’s open a bash session to the database container:
root@k8snode1:~# docker exec -it 3994bcee9342 /bin/bash
We are in. Let’s see how we can connect to the MySQL database:
vitess@example-vttablet-zone1-2344898534-e9abaf0e:/$ ps auxf | more
USER   PID %CPU %MEM     VSZ    RSS TTY   STAT START TIME COMMAND
vitess 667  0.0  0.0    3996   3268 pts/0 Ss   21:38 0:00 /bin/bash
vitess 674  0.0  0.0    7636   2760 pts/0 R+   21:38 0:00  \_ ps auxf
vitess 675  0.0  0.0    2496    900 pts/0 S+   21:38 0:00  \_ more
vitess   1  0.0  0.1  726916  20488 ?     Ssl  13:42 0:02 /vt/bin/mysqlctld --db-config-dba-uname=vt_dba --db_charset=utf8 --init_db_sql_file=/vt/secrets/db-init-script/init_db.sql --logtostderr=true --mysql_socket=/vt/socket/mysql.sock --socket_file=/vt/socket/mysqlctl.sock --tablet_uid=2344898534 --wait_time=2h0m0s
vitess  48  0.0  0.0    2384   1720 ?     S    13:43 0:00 /bin/sh /usr/bin/mysqld_safe --defaults-file=/vt/vtdataroot/vt_2344898534/my.cnf --basedir=/usr
vitess 598  0.2  2.1 4141016 345024 ?     Sl   13:43 1:13  \_ /usr/sbin/mysqld --defaults-file=/vt/vtdataroot/vt_2344898534/my.cnf --basedir=/usr --datadir=/vt/vtdataroot/vt_2344898534/data --plugin-dir=/usr/lib/mysql/plugin --log-error=/vt/config/stderr.symlink --pid-file=/vt/vtdataroot/vt_2344898534/mysql.pid --socket=/vt/socket/mysql.sock --port=3306
We should use the /vt/socket/mysql.sock socket to connect to MySQL:
vitess@example-vttablet-zone1-2344898534-e9abaf0e:/$ mysql -uroot -S /vt/socket/mysql.sock
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 123
Server version: 5.7.31-log MySQL Community Server (GPL)

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>
And we are in. From here you can execute any query you like.
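Before we move on: the container lookup we just did by hand can be scripted if you find yourself doing it often. Below is a minimal sketch (a hypothetical helper, not part of Vitess) that pulls the mysqlctld container ID out of the `docker ps` output; here it runs against a trimmed sample of the listing shown above, while on a real node you would feed it `docker ps | grep <tablet alias>` instead.

```shell
#!/bin/sh
# Trimmed sample of the `docker ps | grep zone1-2344898534` output above.
containers='9a4e59f15727 "/vt/bin/vttablet --…" k8s_vttablet_example-vttablet-zone1-2344898534-e9abaf0e
3994bcee9342 "/vt/bin/mysqlctld -…" k8s_mysqld_example-vttablet-zone1-2344898534-e9abaf0e'

# The database lives in the container running mysqlctld -- grab its ID.
mysqld_id=$(printf '%s\n' "$containers" | awk '/mysqlctld/ {print $1}')
echo "$mysqld_id"

# On the node itself you would then run:
#   docker exec -it "$mysqld_id" /bin/bash
```

With the ID in a variable, the `docker exec` step becomes a one-liner you can wrap in a function per tablet alias.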
mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
Slave_IO_State: Queueing master event to the relay log
Master_Host: 10.244.2.8
Master_User: vt_repl
Master_Port: 3306
Connect_Retry: 10
Master_Log_File: vt-2646235096-bin.000001
Read_Master_Log_Pos: 328207471
Relay_Log_File: vt-2344898534-relay-bin.000002
Relay_Log_Pos: 118617220
Relay_Master_Log_File: vt-2646235096-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 118617512
Relay_Log_Space: 328207394
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 2028
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 1872311303
Master_UUID: e2f1d84d-16f3-11ec-b84c-d24f6e4899bd
Master_Info_File: mysql.slave_master_info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: System lock
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set: e2f1d84d-16f3-11ec-b84c-d24f6e4899bd:3-678
Executed_Gtid_Set: e2f1d84d-16f3-11ec-b84c-d24f6e4899bd:1-270
Auto_Position: 1
Replicate_Rewrite_DB:
Channel_Name:
Master_TLS_Version:
1 row in set (0.05 sec)
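Eyeballing this output works once, but if you want to track the lag while the data load runs, it helps to pull the number out programmatically. Here is a minimal sketch fed with a sample of the output above; in practice you would capture the real output (e.g. `mysql -uroot -S /vt/socket/mysql.sock -e 'SHOW SLAVE STATUS\G'`) instead, and the 300-second threshold is an arbitrary example value.

```shell
#!/bin/sh
# Sample excerpt of the SHOW SLAVE STATUS\G output shown above.
status='Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Seconds_Behind_Master: 2028'

# Extract the lag value: split each line on "colon plus spaces".
lag=$(printf '%s\n' "$status" | awk -F': *' '/Seconds_Behind_Master/ {print $2}')
echo "Replication lag: ${lag}s"

# Simple threshold check (300s picked arbitrarily for the example).
if [ "$lag" -gt 300 ]; then
  echo "replica is lagging, wait before adding traffic"
fi
```

Run in a loop with `sleep`, this gives you a crude but effective way to watch the replica catch up.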
On top of that, this docker container is quite well prepared when it comes to tooling:
vitess@example-vttablet-zone1-2344898534-e9abaf0e:/$ pt
pt-align                   pt-diskstats               pt-fingerprint             pt-ioprofile
pt-online-schema-change    pt-sift                    pt-stalk                   pt-table-usage
ptar                       pt-archiver                pt-duplicate-key-checker   pt-fk-error-logger
pt-kill                    pt-pmp                     pt-slave-delay             pt-summary
pt-upgrade                 ptardiff                   pt-config-diff             pt-fifo-split
pt-heartbeat               pt-mext                    pt-query-digest            pt-slave-find
pt-table-checksum          pt-variable-advisor        ptargrep                   pt-deadlock-logger
pt-find                    pt-index-usage             pt-mysql-summary           pt-show-grants
pt-slave-restart           pt-table-sync              pt-visual-explain          ptx
Percona Toolkit is ready for use, making it quite feasible to execute various management tasks: checking data consistency, running a schema change, collecting diagnostic data with pt-stalk, parsing the slow query log with pt-query-digest, and so on. Some of those tasks can be executed from the outside, through mechanisms available in Vitess, but even then you may still prefer to run, for example, schema changes directly on the database, bypassing vtctlclient.
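For instance, a data consistency check could be started from inside the mysqld container along these lines. This is just a sketch and is printed rather than executed, since it needs a live replication topology; the vt_dba user comes from the mysqlctld flags we saw earlier, and the S= key in the Percona DSN points at the local socket.

```shell
#!/bin/sh
# Build the pt-table-checksum invocation; we echo it instead of running
# it, because it requires a primary with reachable replicas.
cmd="pt-table-checksum S=/vt/socket/mysql.sock,u=vt_dba --databases vt_sbtest"
echo "$cmd"
```

Adjust the credentials and database name to your setup before running the printed command for real.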
Adding replica to the keyspace
Let’s assume that the data has been loaded and that our replica is up to date and in sync with the primary. We can now add a new replica to the cluster, which will come in handy should our existing setup be under load. Keep in mind that adding a replica lets our cluster handle more SELECT queries; if you are looking to scale out writes, sharding is the way to go, and we’ll get there later. Read scale-out – replicas; write scale-out – shards.
Creating backup
We are going to start by taking a backup. It will make the whole process much faster than if we just used replication to copy the data from the primary to the new replica. As we mentioned earlier, for now our vttablets are configured to use the “native” Vitess backup. We will show you how to switch to XtraBackup later; for now, the existing method is enough. To run the backup we have to use, as usual, vtctlclient.
First, double-check what alias is used by the replica:
root@k8smaster:~/vitesstests# vtctlclient listalltablets
zone1-2344898534 sbtest - replica 10.244.3.8:15000 10.244.3.8:3306 [] <null>
zone1-2646235096 sbtest - primary 10.244.2.8:15000 10.244.2.8:3306 [] 2021-09-16T13:44:09Z
Then we’ll run the backup:
root@k8smaster:~/vitesstests# vtctlclient Backup zone1-2344898534
I0916 23:01:18.464510  312594 main.go:67] I0916 23:01:18.199353 backup.go:178] I0916 23:01:18.198178 builtinbackupengine.go:141] Hook: , Compress: true
I0916 23:01:18.474143  312594 main.go:67] I0916 23:01:18.210537 backup.go:178] I0916 23:01:18.209848 builtinbackupengine.go:151] getting current replication status
I0916 23:01:18.550303  312594 main.go:67] I0916 23:01:18.282539 backup.go:178] I0916 23:01:18.281277 builtinbackupengine.go:192] using replication position: e2f1d84d-16f3-11ec-b84c-d24f6e4899bd:1-1531
I0916 23:01:22.628677  312594 main.go:67] I0916 23:01:22.366198 backup.go:178] I0916 23:01:22.364816 builtinbackupengine.go:287] found 299 files to backup
It should be stored on our additional volume. We can check its contents in the /storage/backup directory on every node:
root@k8smaster:~/vitesstests# ls -alh /storage/backup/backup/sbtest/-/
total 12K
drwxr-xr-x 3 systemd-coredump systemd-coredump 4.0K Sep 16 23:01 .
drwxr-xr-x 3 systemd-coredump systemd-coredump 4.0K Sep 16 23:01 ..
drwxr-xr-x 2 systemd-coredump systemd-coredump 4.0K Sep 16 23:01 2021-09-16.230118.zone1-2344898534
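As the listing suggests, the backup directory name encodes the date, the time, and the tablet the backup was taken from, which is handy if you ever script backup rotation. A quick sketch splitting the name apart with plain shell parameter expansion:

```shell
#!/bin/sh
backup="2021-09-16.230118.zone1-2344898534"

# The name has three dot-separated parts: date, HHMMSS time, and the
# source tablet alias (the alias itself contains no dots).
day=${backup%%.*}        # strip everything from the first dot on
rest=${backup#*.}        # drop the date part
hhmmss=${rest%%.*}
tablet=${rest#*.}

echo "taken on $day at $hhmmss from tablet $tablet"
```

From here it is a short step to, say, deleting every directory whose date part is older than your retention window.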
We can see that the backup has been created for the “sbtest” keyspace and the “-” shard (which covers the whole key range, i.e., no sharding at all). Now it’s time to add a replica. We can apply 102_add_replica_to_sbtest.yaml, which simply increments the value of replicas in the keyspace definition.
keyspaces:
- name: sbtest
turndownPolicy: Immediate
partitionings:
- equal:
parts: 1
shardTemplate:
databaseInitScriptSecret:
name: example-cluster-config
key: init_db.sql
replication:
enforceSemiSync: false
tabletPools:
- cell: zone1
type: replica
replicas: 3
vttablet:
extraFlags:
db_charset: utf8mb4
backup_storage_implementation: file
backup_engine_implementation: builtin
restore_from_backup: 'true'
file_backup_storage_root: /mnt/backup
resources:
requests:
cpu: 1
memory: 2Gi
mysqld:
resources:
requests:
cpu: 1
memory: 2Gi
dataVolumeClaimTemplate:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 100Gi
extraVolumes:
- name: backupvol
persistentVolumeClaim:
claimName: "backupvol"
accessModes: ["ReadWriteMany"]
resources:
requests:
storage: 100Gi
volumeName: backup
extraVolumeMounts:
- name: backupvol
mountPath: /mnt
We could also edit the existing cluster definition:
root@k8smaster:~/vitesstests# kubectl get vt
NAME      AGE
example   17h
root@k8smaster:~/vitesstests# kubectl edit vt example
and do the same: increase the number of replicas for the ‘sbtest’ keyspace.
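If you prefer a non-interactive, scriptable change over `kubectl edit`, `kubectl patch` can make the same edit. The sketch below builds (and only prints) the command; the JSON path is an assumption derived from the keyspace YAML shown above, presumed to sit under spec — verify it against `kubectl get vt example -o json` for your operator version before applying.

```shell
#!/bin/sh
# JSON patch bumping replicas for the first tablet pool of the first
# keyspace (path assumed from the YAML above -- double-check it).
patch='[{"op": "replace", "path": "/spec/keyspaces/0/partitionings/0/equal/shardTemplate/tabletPools/0/replicas", "value": 3}]'
cmd="kubectl patch vt example --type=json -p '$patch'"
echo "$cmd"
# Once verified, run it with: eval "$cmd"
```

This is convenient in CI or runbooks, where an interactive editor session is not an option.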
One way or the other, we should see a new pod starting:
root@k8smaster:~# kubectl get pod,pv
NAME                                                 READY   STATUS     RESTARTS   AGE
pod/example-etcd-faf13de3-1                          1/1     Running    0          17h
pod/example-etcd-faf13de3-2                          1/1     Running    0          17h
pod/example-etcd-faf13de3-3                          1/1     Running    0          17h
pod/example-vttablet-zone1-2179083526-f3060bc1       0/3     Init:0/2   0          3s
pod/example-vttablet-zone1-2344898534-e9abaf0e       3/3     Running    1          17h
pod/example-vttablet-zone1-2646235096-9ba85582       3/3     Running    1          17h
pod/example-zone1-vtctld-1d4dcad0-64668cccc8-swmj4   1/1     Running    1          17h
pod/example-zone1-vtgate-bc6cde92-8665cd4df-kwgcn    1/1     Running    1          17h
pod/vitess-operator-f44545df8-l5kk9                  1/1     Running    0          19h
We can now track the progress by tailing the logs from vttablet and mysqld containers:
kubectl logs -f example-vttablet-zone1-2179083526-f3060bc1 vttablet
kubectl logs -f example-vttablet-zone1-2179083526-f3060bc1 mysqld
On the vttablet we should see something like this:
I0917 07:42:45.408679 1 tm_state.go:176] Changing Tablet Type: RESTORE
I0917 07:42:45.408712 1 tm_state.go:355] Publishing state: alias:{cell:"zone1" uid:2179083526} hostname:"10.244.1.16" port_map:{key:"grpc" value:15999} port_map:{key:"vt" value:15000} keyspace:"sbtest" shard:"-" key_range:{} type:RESTORE db_name_override:"vt_sbtest" mysql_hostname:"10.244.1.16" mysql_port:3306
I0917 07:42:45.417721 1 syslogger.go:129] sbtest/-/zone1-2179083526 [tablet] updated
I0917 07:42:45.417747 1 backup.go:270] Restore: looking for a suitable backup to restore
I0917 07:42:45.417798 1 shard_sync.go:70] Change to tablet state
I0917 07:42:45.421498 1 backupengine.go:221] Restore: found backup sbtest/- 2021-09-16.230118.zone1-2344898534 to restore
I0917 07:42:45.429810 1 backupengine.go:240] Restore: shutdown mysqld
I0917 07:42:45.429823 1 mysqld.go:485] Mysqld.Shutdown
I0917 07:42:45.429828 1 mysqld.go:489] executing Mysqld.Shutdown() remotely via mysqlctld server: /vt/socket/mysqlctl.sock
I0917 07:42:49.457017 1 backupengine.go:245] Restore: deleting existing files
I0917 07:42:49.457035 1 backup.go:223] Restore: removing files in BinLogPath.* (/vt/vtdataroot/vt_2179083526/bin-logs/vt-2179083526-bin.*)
I0917 07:42:49.471432 1 backup.go:241] Restore: removing files in DataDir (/vt/vtdataroot/vt_2179083526/data)
I0917 07:42:51.617767 1 backup.go:241] Restore: removing files in InnodbDataHomeDir (/vt/vtdataroot/vt_2179083526/innodb/data)
I0917 07:42:51.641330 1 backup.go:241] Restore: removing files in InnodbLogGroupHomeDir (/vt/vtdataroot/vt_2179083526/innodb/logs)
I0917 07:42:51.672061 1 backup.go:223] Restore: removing files in RelayLogPath.* (/vt/vtdataroot/vt_2179083526/relay-logs/vt-2179083526-relay-bin.*)
I0917 07:42:51.673486 1 backup.go:238] Restore: skipping removal of nonexistent RelayLogIndexPath (/vt/vtdataroot/vt_2179083526/relay-logs/vt-2179083526-relay-bin.index)
I0917 07:42:51.673687 1 backup.go:238] Restore: skipping removal of nonexistent RelayLogInfoPath (/vt/vtdataroot/vt_2179083526/relay-logs/relay-log.info)
I0917 07:42:51.673700 1 backupengine.go:250] Restore: reinit config file
I0917 07:42:51.673704 1 mysqld.go:905] Mysqld.ReinitConfig
I0917 07:42:51.673706 1 mysqld.go:909] executing Mysqld.ReinitConfig() remotely via mysqlctld server: /vt/socket/mysqlctl.sock
I0917 07:42:51.694562 1 builtinbackupengine.go:479] Restore: copying 299 files
I0917 07:42:51.694606 1 builtinbackupengine.go:512] Copying file 19: _vt/vreplication_log.frm
I0917 07:42:51.694699 1 builtinbackupengine.go:512] Copying file 141: performance_schema/memory_summary_by_account_by_event_name.frm
I0917 07:42:51.694706 1 builtinbackupengine.go:512] Copying file 298: vt_sbtest/sbtest4.ibd
This is where the restore process kicks in: it detects a backup on the mounted volume and uses it to provision the data. Eventually, the new vttablet should become ready to serve traffic:
root@k8smaster:~# vtctlclient listalltablets
zone1-2179083526 sbtest - replica 10.244.1.16:15000 10.244.1.16:3306 [] <null>
zone1-2344898534 sbtest - replica 10.244.3.8:15000 10.244.3.8:3306 [] <null>
zone1-2646235096 sbtest - primary 10.244.2.8:15000 10.244.2.8:3306 [] 2021-09-16T13:44:09Z
To scale the keyspace down, all you need to do is reverse this change: apply the 101_initial_cluster.yaml file, or edit the cluster definition and change the number of replicas back to 2. This will terminate one of the pods. In our “poor man’s” K8s cluster we will most likely have to wipe the data on the PV, as it will end up in the “Failed” state. Cleaning it should let Kubernetes recycle it.
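One way to do that cleanup, sketched below: after wiping the leftover data on the node backing the PV, clearing the PV’s claimRef detaches it from the deleted PVC so Kubernetes can bind it again. The PV name here is a placeholder (look up the real one with `kubectl get pv`), and the command is printed rather than executed.

```shell
#!/bin/sh
# Placeholder PV name -- find the actual Failed one first:
#   kubectl get pv | grep Failed
pv_name="your-failed-pv"

# Clearing spec.claimRef releases the PV from the deleted claim; do this
# only after manually wiping the old data on the node.
patch='{"spec":{"claimRef": null}}'
echo kubectl patch pv "$pv_name" -p "$patch"
# Remove `echo` above to actually run it.
```

Once patched, the PV should go back to the “Available” state and be reusable by the next replica you add.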
Please keep in mind that after every change in the number of replicas you will want to re-establish port forwarding by stopping and restarting the script:
root@k8smaster:~/vitesstests# ./pf.sh
Forwarding from 127.0.0.1:15306 -> 3306
Forwarding from [::1]:15306 -> 3306
Forwarding from 127.0.0.1:15000 -> 15000
Forwarding from [::1]:15000 -> 15000
Forwarding from 127.0.0.1:15999 -> 15999
Forwarding from [::1]:15999 -> 15999
You may point your browser to http://localhost:15000, use the following aliases as shortcuts:
alias vtctlclient="/root/go/bin/vtctlclient -server=localhost:15999 -logtostderr"
alias mysql="mysql -h 127.0.0.1 -P 15306 -u user"
Hit Ctrl-C to stop the port forwards
In the next blog post we will take a look at adding a second keyspace. We’ll also try to use xtrabackup to get the backup going.