Last time we finished by scaling the ‘sbtest’ keyspace up and down, and we discussed how to monitor the state of replication between the primary vttablet and its replicas. Let’s continue by adding another keyspace to the mix.
If you would like to follow along, you can clone this repository: vitesstests
Adding a new keyspace
As you may have already noticed, keyspaces in Vitess are, in essence, logical databases (schemas). They can contain one or more tables, and they can be sharded or unsharded. As of now, we have just one keyspace in our cluster:
root@k8smaster:~/vitesstests# vtctlclient GetKeyspaces
sbtest
It is a non-sharded keyspace with four tables and two vttablets – a primary and a replica. Let’s proceed by adding another keyspace. It will also be non-sharded, and we will start it with a single vttablet – the primary. We’ll do that by applying the file 103_add_newsbtest_keyspace.yaml, where we added the definition of another keyspace:
- name: newsbtest
  turndownPolicy: Immediate
  partitionings:
  - equal:
      parts: 1
      shardTemplate:
        databaseInitScriptSecret:
          name: example-cluster-config
          key: init_db.sql
        replication:
          enforceSemiSync: false
        tabletPools:
        - cell: zone1
          type: replica
          replicas: 1
          vttablet:
            extraFlags:
              db_charset: utf8mb4
              backup_storage_implementation: file
              backup_engine_implementation: builtin
              restore_from_backup: 'true'
              file_backup_storage_root: /mnt/backup
            resources:
              requests:
                cpu: 1
                memory: 2Gi
          mysqld:
            resources:
              requests:
                cpu: 1
                memory: 2Gi
          dataVolumeClaimTemplate:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 100Gi
          extraVolumes:
          - name: backupvol
            persistentVolumeClaim:
              claimName: "backupvol"
              accessModes: ["ReadWriteMany"]
              resources:
                requests:
                  storage: 100Gi
              volumeName: backup
          extraVolumeMounts:
          - name: backupvol
            mountPath: /mnt
Let’s apply the file and wait until the pods are ready.
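The apply-and-wait step can be sketched as a small script. This is a dry-run sketch: the `run` wrapper only echoes the commands, so it is safe to execute without a cluster, and the `planetscale.com/component` label selector is an assumption about labels set by the Vitess operator, not something shown in the output below.

```shell
# Dry-run sketch of applying the new keyspace and waiting for its pods.
# On a real cluster, drop the wrapper and execute the commands directly.
run() { echo "+ $*"; }

run kubectl apply -f 103_add_newsbtest_keyspace.yaml

# Assumed label selector: the operator is expected to label vttablet pods
# with planetscale.com/component=vttablet.
run kubectl wait pod --for=condition=Ready \
    -l "planetscale.com/component=vttablet" --timeout=10m
```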
root@k8smaster:~/vitesstests# kubectl get pods
NAME                                             READY   STATUS     RESTARTS   AGE
example-etcd-faf13de3-1                          1/1     Running    0          18h
example-etcd-faf13de3-2                          1/1     Running    0          18h
example-etcd-faf13de3-3                          1/1     Running    0          18h
example-vttablet-zone1-1817996704-e4f712d6       0/3     Init:0/2   0          16s
example-vttablet-zone1-2179083526-f3060bc1       3/3     Running    1          31m
example-vttablet-zone1-2344898534-e9abaf0e       3/3     Running    1          18h
example-vttablet-zone1-2646235096-9ba85582       3/3     Running    1          18h
example-zone1-vtctld-1d4dcad0-64668cccc8-swmj4   1/1     Running    1          18h
example-zone1-vtgate-bc6cde92-8665cd4df-kwgcn    1/1     Running    1          18h
Once this is done, let’s restart port forwarding and then populate the database with some data.
root@k8smaster:~/vitesstests# ./pf.sh
Forwarding from 127.0.0.1:15306 -> 3306
Forwarding from [::1]:15306 -> 3306
Forwarding from 127.0.0.1:15000 -> 15000
Forwarding from [::1]:15000 -> 15000
Forwarding from 127.0.0.1:15999 -> 15999
Forwarding from [::1]:15999 -> 15999
You may point your browser to http://localhost:15000, use the following aliases as shortcuts:
alias vtctlclient="/root/go/bin/vtctlclient -server=localhost:15999 -logtostderr"
alias mysql="mysql -h 127.0.0.1 -P 15306 -u user"
Hit Ctrl-C to stop the port forwards
We’ll make the dataset slightly larger this time: two tables, 5M rows each. This will help us later when we start testing operations like schema changes.
root@k8smaster:~/vitesstests# sysbench /root/sysbench/src/lua/oltp_read_write.lua --threads=4 --events=0 --time=3600 --mysql-host=127.0.0.1 --mysql-user=sbtest --mysql-password=sbtest --mysql-port=15306 --mysql-db=newsbtest --tables=2 --report-interval=1 --skip-trx=on --table-size=5000000 --db-ps-mode=disable prepare
sysbench 1.1.0-ead2689 (using bundled LuaJIT 2.1.0-beta3)

Initializing worker threads...

Creating table 'sbtest1'...
Creating table 'sbtest2'...
Inserting 5000000 records into 'sbtest1'
Inserting 5000000 records into 'sbtest2'
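Before moving on, it is worth sanity-checking the load. The block below is a dry-run sketch (the `run` wrapper echoes instead of executing); on a live cluster, dropping the wrapper would run the row counts through vtgate, using the same connection parameters as the mysql alias printed by pf.sh.

```shell
# Dry-run sketch: count rows in each freshly loaded table through vtgate.
# Drop the 'run' wrapper to actually execute the queries.
run() { echo "+ $*"; }

for t in sbtest1 sbtest2; do
  run mysql -h 127.0.0.1 -P 15306 -u user \
      -e "SELECT COUNT(*) FROM newsbtest.$t"
done
```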
Once the data is loaded into the database, we can proceed further. As of now, we have just one vttablet. This is not suitable for production, as there is no replica to fail over to should the need arise. This is intentional. We can create a new replica by following the process we described in the previous blog post: create a backup and then scale out the keyspace. What is different is that this time we will use XtraBackup instead of the native Vitess backup engine.
Switch to XtraBackup
To make the change, we need to edit the keyspace definition and change the flags that are passed to vttablets. We can accomplish this by applying the following yaml file:
root@k8smaster:~/vitesstests# kubectl apply -f 104_xtrabackup_for_newsbtest.yaml
In short, we changed backup_engine_implementation for the “newsbtest” keyspace to xtrabackup. We also added two flags:
xtrabackup_root_path: /usr/bin
xtrabackup_user: root
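For context, after the change the extraFlags block of the “newsbtest” tablet pool presumably looks as follows; the surrounding structure from 103_add_newsbtest_keyspace.yaml is unchanged, only the engine value and the two new flags differ:

```yaml
vttablet:
  extraFlags:
    db_charset: utf8mb4
    backup_storage_implementation: file
    backup_engine_implementation: xtrabackup
    restore_from_backup: 'true'
    file_backup_storage_root: /mnt/backup
    xtrabackup_root_path: /usr/bin
    xtrabackup_user: root
```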
These changes require a pod restart, which will be performed by the operator.
root@k8smaster:~# kubectl get pods
NAME                                             READY   STATUS        RESTARTS   AGE
example-etcd-faf13de3-1                          1/1     Running       0          21h
example-etcd-faf13de3-2                          1/1     Running       0          21h
example-etcd-faf13de3-3                          1/1     Running       0          21h
example-vttablet-zone1-1817996704-e4f712d6       3/3     Terminating   1          165m
example-vttablet-zone1-2179083526-f3060bc1       3/3     Running       1          3h16m
example-vttablet-zone1-2344898534-e9abaf0e       3/3     Running       1          21h
example-vttablet-zone1-2646235096-9ba85582       3/3     Running       1          21h
example-zone1-vtctld-1d4dcad0-64668cccc8-swmj4   1/1     Running       1          21h
example-zone1-vtgate-bc6cde92-8665cd4df-kwgcn    1/1     Running       1          21h
It is a very quick operation, and the data is not affected. Still, it is not necessarily something you would want to do on production systems.
Once the pod has been restarted, we can create our backup. Please note that we had to add -allow_primary=true, as the only vttablet we have is marked as the primary.
root@k8smaster:~/vitesstests# vtctlclient backup -allow_primary=true zone1-1817996704
I0917 11:19:26.268413 629000 main.go:67] I0917 11:19:26.006505 backup.go:178] I0917 11:19:26.340518 xtrabackupengine.go:147] Detected MySQL flavor: MySQL56
I0917 11:19:26.268465 629000 main.go:67] I0917 11:19:26.006531 backup.go:178] I0917 11:19:26.340543 xtrabackupengine.go:157] Starting backup with 0 stripe(s)
I0917 11:19:26.296840 629000 main.go:67] I0917 11:19:26.035425 backup.go:178] I0917 11:19:26.369467 xtrabackupengine.go:311] xtrabackup stderr: xtrabackup: recognized server arguments: --datadir=/vt/vtdataroot/vt_1817996704/data --innodb_data_home_dir=/vt/vtdataroot/vt_1817996704/innodb/data --innodb_log_group_home_dir=/vt/vtdataroot/vt_1817996704/innodb/logs --log_bin=/vt/vtdataroot/vt_1817996704/bin-logs/vt-1817996704-bin --server-id=211328461 --tmpdir=/vt/vtdataroot/vt_1817996704/tmp
I0917 11:19:26.297298 629000 main.go:67] I0917 11:19:26.035889 backup.go:178] I0917 11:19:26.370110 xtrabackupengine.go:311] xtrabackup stderr: xtrabackup: recognized client arguments: --backup=1 --socket=/vt/socket/mysql.sock --slave-info=1 --user=root --target-dir=/vt/vtdataroot/vt_1817996704/tmp --stream=tar
I0917 11:19:26.369649 629000 main.go:67] I0917 11:19:26.108198 backup.go:178] I0917 11:19:26.442277 xtrabackupengine.go:311] xtrabackup stderr: 210917 11:19:26 version_check Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_group=xtrabackup;mysql_socket=/vt/socket/mysql.sock' as 'root' (using password: NO).
I0917 11:19:26.370568 629000 main.go:67] I0917 11:19:26.109055 backup.go:178] I0917 11:19:26.443193 xtrabackupengine.go:311] xtrabackup stderr: 210917 11:19:26 version_check Connected to MySQL server
I0917 11:19:26.371273 629000 main.go:67] I0917 11:19:26.109924 backup.go:178] I0917 11:19:26.443400 xtrabackupengine.go:311] xtrabackup stderr: 210917 11:19:26 version_check Executing a version check against the server...
I0917 11:19:27.133888 629000 main.go:67] I0917 11:19:26.872244 backup.go:178] I0917 11:19:27.206312 xtrabackupengine.go:311] xtrabackup stderr:
I0917 11:19:27.133912 629000 main.go:67] I0917 11:19:26.872385 backup.go:178] I0917 11:19:27.206333 xtrabackupengine.go:311] xtrabackup stderr: # A software update is available:
I0917 11:19:27.133917 629000 main.go:67] I0917 11:19:26.872498 backup.go:178] I0917 11:19:27.206468 xtrabackupengine.go:311] xtrabackup stderr: 210917 11:19:27 version_check Done.
I0917 11:19:27.137857 629000 main.go:67] I0917 11:19:26.876501 backup.go:178] I0917 11:19:27.210600 xtrabackupengine.go:311] xtrabackup stderr: 210917 11:19:27 Connecting to MySQL server host: localhost, user: root, password: not set, port: not set, socket: /vt/socket/mysql.sock
I0917 11:19:27.139574 629000 main.go:67] I0917 11:19:26.878192 backup.go:178] I0917 11:19:27.212363 xtrabackupengine.go:311] xtrabackup stderr: Using server version 5.7.31-log
I0917 11:19:27.159402 629000 main.go:67] I0917 11:19:26.897754 backup.go:178] I0917 11:19:27.231776 xtrabackupengine.go:311] xtrabackup stderr: /usr/bin/xtrabackup version 2.4.23 based on MySQL server 5.7.34 Linux (x86_64) (revision id: 3320f39)
I0917 11:19:27.159900 629000 main.go:67] I0917 11:19:26.897956 backup.go:178] I0917 11:19:27.231989 xtrabackupengine.go:311] xtrabackup stderr: xtrabackup: uses posix_fadvise().
I0917 11:19:27.159914 629000 main.go:67] I0917 11:19:26.897999 backup.go:178] I0917 11:19:27.232006 xtrabackupengine.go:311] xtrabackup stderr: xtrabackup: cd to /vt/vtdataroot/vt_1817996704/data
I0917 11:19:27.159921 629000 main.go:67] I0917 11:19:26.898013 backup.go:178] I0917 11:19:27.232010 xtrabackupengine.go:311] xtrabackup stderr: xtrabackup: open files limit requested 0, set to 1048576
I0917 11:19:27.159925 629000 main.go:67] I0917 11:19:26.898049 backup.go:178] I0917 11:19:27.232013 xtrabackupengine.go:311] xtrabackup stderr: xtrabackup: using the following InnoDB configuration:
I0917 11:19:27.159929 629000 main.go:67] I0917 11:19:26.898061 backup.go:178] I0917 11:19:27.232017 xtrabackupengine.go:311] xtrabackup stderr: xtrabackup: innodb_data_home_dir = /vt/vtdataroot/vt_1817996704/innodb/data
I0917 11:19:27.159935 629000 main.go:67] I0917 11:19:26.898178 backup.go:178] I0917 11:19:27.232021 xtrabackupengine.go:311] xtrabackup stderr: xtrabackup: innodb_data_file_path = ibdata1:12M:autoextend
I0917 11:19:27.159943 629000 main.go:67] I0917 11:19:26.898199 backup.go:178] I0917 11:19:27.232024 xtrabackupengine.go:311] xtrabackup stderr: xtrabackup: innodb_log_group_home_dir = /vt/vtdataroot/vt_1817996704/innodb/logs
I0917 11:19:27.159951 629000 main.go:67] I0917 11:19:26.898211 backup.go:178] I0917 11:19:27.232030 xtrabackupengine.go:311] xtrabackup stderr: xtrabackup: innodb_log_files_in_group = 2
I0917 11:19:27.159959 629000 main.go:67] I0917 11:19:26.898222 backup.go:178] I0917 11:19:27.232036 xtrabackupengine.go:311] xtrabackup stderr: xtrabackup: innodb_log_file_size = 50331648
I0917 11:19:27.159965 629000 main.go:67] I0917 11:19:26.898268 backup.go:178] I0917 11:19:27.232296 xtrabackupengine.go:311] xtrabackup stderr: InnoDB: Number of pools: 1
I0917 11:19:27.379051 629000 main.go:67] I0917 11:19:27.117596 backup.go:178] I0917 11:19:27.451612 xtrabackupengine.go:311] xtrabackup stderr: 210917 11:19:27 >> log scanned up to (2480748800)
I0917 11:19:27.586983 629000 main.go:67] I0917 11:19:27.325429 backup.go:178] I0917 11:19:27.659544 xtrabackupengine.go:311] xtrabackup stderr: xtrabackup: Generating a list of tablespaces
I0917 11:19:27.587724 629000 main.go:67] I0917 11:19:27.326349 backup.go:178] I0917 11:19:27.660519 xtrabackupengine.go:311] xtrabackup stderr: InnoDB: Allocated tablespace ID 21 for sys/sys_config, old maximum was 0
I0917 11:19:27.625069 629000 main.go:67] I0917 11:19:27.363544 backup.go:178] I0917 11:19:27.697466 xtrabackupengine.go:311] xtrabackup stderr: 210917 11:19:27 [01] Streaming /vt/vtdataroot/vt_1817996704/innodb/data/ibdata1
We are looking for confirmation that the backup completed successfully:
I0917 11:20:06.043644 629000 main.go:67] I0917 11:20:05.772390 backup.go:178] I0917 11:20:06.104471 xtrabackupengine.go:311] xtrabackup stderr: xtrabackup: Transaction log of lsn (2480748791) to (2480748800) was copied.
I0917 11:20:06.240205 629000 main.go:67] I0917 11:20:05.978636 backup.go:178] I0917 11:20:06.312686 xtrabackupengine.go:311] xtrabackup stderr: 210917 11:20:06 completed OK!
I0917 11:20:06.253342 629000 main.go:67] I0917 11:20:05.991785 backup.go:178] I0917 11:20:06.322701 xtrabackupengine.go:634] Found position: 0c526d17-178f-11ec-9d8d-b6bf3fb70a78:1-35,0f2cbc65-178f-11ec-9105-f20b4a03d898:1-3761
I0917 11:20:06.253361 629000 main.go:67] I0917 11:20:05.991828 backup.go:178] I0917 11:20:06.322721 xtrabackupengine.go:117] Closing backup file backup.tar.gz
I0917 11:20:06.253365 629000 main.go:67] I0917 11:20:05.991850 backup.go:178] I0917 11:20:06.323226 xtrabackupengine.go:164] Writing backup MANIFEST
I0917 11:20:06.267813 629000 main.go:67] I0917 11:20:06.006312 backup.go:178] I0917 11:20:06.340350 xtrabackupengine.go:198] Backup completed
I0917 11:20:06.267831 629000 main.go:67] I0917 11:20:06.006364 backup.go:178] I0917 11:20:06.340368 xtrabackupengine.go:117] Closing backup file MANIFEST
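With logs this verbose, it is handy to check the result mechanically. The sketch below greps a captured copy of the vtctlclient output for the success marker and extracts the GTID position; backup.log is a hypothetical file name, here stubbed with two lines of the output shown above.

```shell
# Stub of the captured backup output; on a real run you could save it with:
#   vtctlclient backup -allow_primary=true zone1-1817996704 2>&1 | tee backup.log
cat > backup.log <<'EOF'
I0917 11:20:06.240205 629000 main.go:67] I0917 11:20:05.978636 backup.go:178] I0917 11:20:06.312686 xtrabackupengine.go:311] xtrabackup stderr: 210917 11:20:06 completed OK!
I0917 11:20:06.253342 629000 main.go:67] I0917 11:20:05.991785 backup.go:178] I0917 11:20:06.322701 xtrabackupengine.go:634] Found position: 0c526d17-178f-11ec-9d8d-b6bf3fb70a78:1-35,0f2cbc65-178f-11ec-9105-f20b4a03d898:1-3761
EOF

# Fail loudly if xtrabackup did not report success
grep -q 'completed OK!' backup.log || { echo 'backup failed' >&2; exit 1; }

# Extract the GTID set at which the backup was taken
position=$(sed -n 's/.*Found position: //p' backup.log)
echo "backup position: $position"
```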
Once we have the backup, we can scale out our keyspace and add a new replica to it. We can do it either by editing the cluster definition or by applying the 105_add_replica_to_newsbtest.yaml file:
root@k8smaster:~/vitesstests# kubectl apply -f 105_add_replica_to_newsbtest.yaml
As before, we can track the process by tailing the logs of the vttablet and, if you like, mysqld containers:
root@k8smaster:~/vitesstests# kubectl logs -f example-vttablet-zone1-3154448388-73c3b20a vttablet
root@k8smaster:~/vitesstests# kubectl logs -f example-vttablet-zone1-3154448388-73c3b20a mysqld
Again, we should see log entries indicating that the backup is being used to provision the vttablet:
I0917 11:44:30.372023 1 tm_state.go:176] Changing Tablet Type: RESTORE
I0917 11:44:30.372069 1 tm_state.go:355] Publishing state: alias:{cell:"zone1" uid:3154448388} hostname:"10.244.3.25" port_map:{key:"grpc" value:15999} port_map:{key:"vt" value:15000} keyspace:"newsbtest" shard:"-" key_range:{} type:RESTORE db_name_override:"vt_newsbtest" mysql_hostname:"10.244.3.25" mysql_port:3306
I0917 11:44:30.386738 1 syslogger.go:129] newsbtest/-/zone1-3154448388 [tablet] updated
I0917 11:44:30.386774 1 backup.go:270] Restore: looking for a suitable backup to restore
I0917 11:44:30.386801 1 shard_sync.go:70] Change to tablet state
I0917 11:44:30.390849 1 backupengine.go:221] Restore: found backup newsbtest/- 2021-09-17.111926.zone1-1817996704 to restore
I0917 11:44:30.396985 1 backupengine.go:240] Restore: shutdown mysqld
I0917 11:44:30.397023 1 mysqld.go:485] Mysqld.Shutdown
I0917 11:44:30.397033 1 mysqld.go:489] executing Mysqld.Shutdown() remotely via mysqlctld server: /vt/socket/mysqlctl.sock
W0917 11:44:30.397453 1 component.go:41] [core] grpc: addrConn.createTransport failed to connect to {/vt/socket/mysqlctl.sock /vt/socket/mysqlctl.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /vt/socket/mysqlctl.sock: connect: no such file or directory". Reconnecting...
I0917 11:44:35.424256 1 backupengine.go:245] Restore: deleting existing files
I0917 11:44:35.424365 1 backup.go:223] Restore: removing files in RelayLogPath.* (/vt/vtdataroot/vt_3154448388/relay-logs/vt-3154448388-relay-bin.*)
I0917 11:44:35.430360 1 backup.go:238] Restore: skipping removal of nonexistent RelayLogIndexPath (/vt/vtdataroot/vt_3154448388/relay-logs/vt-3154448388-relay-bin.index)
I0917 11:44:35.431400 1 backup.go:238] Restore: skipping removal of nonexistent RelayLogInfoPath (/vt/vtdataroot/vt_3154448388/relay-logs/relay-log.info)
I0917 11:44:35.431433 1 backup.go:223] Restore: removing files in BinLogPath.* (/vt/vtdataroot/vt_3154448388/bin-logs/vt-3154448388-bin.*)
I0917 11:44:35.454570 1 backup.go:241] Restore: removing files in DataDir (/vt/vtdataroot/vt_3154448388/data)
I0917 11:44:38.463435 1 backup.go:241] Restore: removing files in InnodbDataHomeDir (/vt/vtdataroot/vt_3154448388/innodb/data)
I0917 11:44:38.484417 1 backup.go:241] Restore: removing files in InnodbLogGroupHomeDir (/vt/vtdataroot/vt_3154448388/innodb/logs)
I0917 11:44:38.516808 1 backupengine.go:250] Restore: reinit config file
I0917 11:44:38.516821 1 mysqld.go:905] Mysqld.ReinitConfig
I0917 11:44:38.516824 1 mysqld.go:909] executing Mysqld.ReinitConfig() remotely via mysqlctld server: /vt/socket/mysqlctl.sock
I0917 11:44:38.535154 1 xtrabackupengine.go:394] Restore: Extracting files from backup.tar.gz
I0917 11:44:38.544544 1 xtrabackupengine.go:557] Executing tar cmd with flags [-C /vt/vtdataroot/vt_3154448388/tmp/xtrabackup-2021-09-17.114438 -xiv]
After everything is ready, we should see another vttablet assigned to the “newsbtest” keyspace.
root@k8smaster:~/vitesstests# vtctlclient listalltablets
zone1-1817996704 newsbtest - primary 10.244.2.11:15000 10.244.2.11:3306 [] 2021-09-17T12:57:38Z
zone1-2179083526 sbtest - replica 10.244.1.16:15000 10.244.1.16:3306 [] <null>
zone1-2344898534 sbtest - replica 10.244.3.8:15000 10.244.3.8:3306 [] <null>
zone1-2646235096 sbtest - primary 10.244.2.8:15000 10.244.2.8:3306 [] 2021-09-16T13:44:09Z
zone1-3154448388 newsbtest - replica 10.244.3.26:15000 10.244.3.26:3306 [] <null>
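A quick way to see the keyspace layout at a glance is to aggregate this listing. The sketch below uses a heredoc mirroring the output above; on a live cluster you would pipe the listalltablets output into the same awk instead.

```shell
# Sample listalltablets output (copied from above);
# column 2 is the keyspace, column 4 the tablet type.
cat > tablets.txt <<'EOF'
zone1-1817996704 newsbtest - primary 10.244.2.11:15000 10.244.2.11:3306 [] 2021-09-17T12:57:38Z
zone1-2179083526 sbtest - replica 10.244.1.16:15000 10.244.1.16:3306 [] <null>
zone1-2344898534 sbtest - replica 10.244.3.8:15000 10.244.3.8:3306 [] <null>
zone1-2646235096 sbtest - primary 10.244.2.8:15000 10.244.2.8:3306 [] 2021-09-16T13:44:09Z
zone1-3154448388 newsbtest - replica 10.244.3.26:15000 10.244.3.26:3306 [] <null>
EOF

# Count tablets per keyspace and type
awk '{count[$2 " " $4]++} END {for (k in count) print k, count[k]}' tablets.txt | sort
# → newsbtest primary 1
#   newsbtest replica 1
#   sbtest primary 1
#   sbtest replica 2
```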
This is it for now. In the next blog post, we are going to explain how to shard our “newsbtest” schema using Vitess.