In the previous post in our Vitess series we explained how to migrate your data into a Vitess cluster using a logical backup, and then keep it in sync during the testing period using asynchronous replication. This time, let's try something similar but slightly different. We are going to use a nice Vitess feature that lets us spin up vttablets backed by external databases. This will eventually let us treat the production node as part of Vitess, which in turn allows us to use some of the Vitess magic to migrate the data into vttablets fully managed by Vitess.
First, we have to create a user on our production system. We will use it to connect to the database from Vitess.
mysql> CREATE USER vitess_admin@'%' IDENTIFIED BY 'Vitesspass';
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT ALL ON *.* TO vitess_admin@'%';
Query OK, 0 rows affected (0.00 sec)
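Before deploying anything, it is worth confirming that this account is actually reachable from inside the Kubernetes cluster. Here is a minimal, hedged check; the throwaway pod name and the mysql:8.0 image are just examples, and 192.168.10.121 is the production host we will reference later in this post:
# Spin up a short-lived client pod and test the connection to production.
root@k8smaster:~/vitesstests# kubectl run mysql-client --rm -it --image=mysql:8.0 --restart=Never -- \
  mysql -h 192.168.10.121 -P 3306 -u vitess_admin -pVitesspass -e "SELECT 1;"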
Then, as the next step, we need to deploy our Vitess cluster. We will use 203_external_vttablet.yaml. The most important bits of this file are:
tabletPools:
- cell: zone1
  type: externalreplica
  replicas: 1
  vttablet:
    extraFlags:
      db_charset: utf8mb4
      mysql_auth_static_reload_interval: 30s
      watch_replication_stream: "true"
      track_schema_versions: "true"
      enable_semi_sync: "false"
      disable_active_reparents: "true"
We are defining the tablet pool as type: externalreplica. Strictly speaking, we are going to connect to the primary node, but there seems to be an issue in Vitess that prevents us from declaring it as such, resulting in an error:
flag provided but not defined: -demote_master_type
A replica will work just fine though; we will let Vitess detect its true state later.
Second important bit is the definition of connectivity details to the external database:
externalDatastore:
  user: vitess_admin
  host: 192.168.10.121
  port: 3306
  database: sbtest
  credentialsSecret:
    name: example-cluster-config
    key: external_users.json
We defined the user that we'll use to connect (the one we created at the beginning), the host, the port and the database that will be mapped to our "externalsbtest" keyspace. We also defined the credentials for that user:
apiVersion: v1
kind: Secret
metadata:
  name: example-cluster-config
type: Opaque
stringData:
  external_users.json: |
    {
      "vitess_admin": ["Vitesspass"]
    }
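If you prefer not to keep the password in a manifest, an equivalent Secret can be generated straight from a local file. This is just an alternative we are sketching here, not something the example files require:
# Create the same Secret from a local external_users.json file.
root@k8smaster:~/vitesstests# echo '{"vitess_admin": ["Vitesspass"]}' > external_users.json
root@k8smaster:~/vitesstests# kubectl create secret generic example-cluster-config \
  --from-file=external_users.json=./external_users.json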
After applying the yaml file, a vttablet should show up:
root@k8smaster:~/vitesstests# kubectl apply -f 203_external_vttablet.yaml
vitesscluster.planetscale.com/vitesstpcc created
secret/example-cluster-config created
.
.
.
root@k8smaster:~/vitesstests# kubectl get pods
NAME                                                READY   STATUS    RESTARTS   AGE
vitess-operator-f44545df8-l5kk9                     1/1     Running   0          34d
vitesstpcc-etcd-8f68363b-1                          1/1     Running   0          9h
vitesstpcc-etcd-8f68363b-2                          1/1     Running   0          9h
vitesstpcc-etcd-8f68363b-3                          1/1     Running   0          9h
vitesstpcc-vttablet-zone1-2413274560-db084ec8       1/1     Running   2          9h
vitesstpcc-zone1-vtctld-f38ee748-7b75487874-flfql   1/1     Running   1          9h
vitesstpcc-zone1-vtgate-67d1e711-7b8556d549-b8jsw   1/1     Running   2          9h
Unlike "normal" Vitess pods, this one runs just a vttablet; there is no mysqlctl. That makes sense, as the connection goes to an external database. The vttablet should also show up on the list:
root@k8smaster:~/vitesstests# vtctlclient listalltablets
zone1-2413274560 externalsbtest - replica 10.244.3.154:15000 192.168.10.121:3306 [] <null>
We started it as a replica, thus this is its state. We can change that by letting Vitess reassess the state of the vttablet, just as if an external replication topology change had happened. The TabletExternallyReparented command does exactly that:
root@k8smaster:~/vitesstests# vtctlclient TabletExternallyReparented zone1-2413274560
root@k8smaster:~/vitesstests# vtctlclient listalltablets
zone1-2413274560 externalsbtest - primary 10.244.3.154:15000 192.168.10.121:3306 [] 2021-10-20T22:24:12Z
As expected, it shows as a primary now. After making sure that the port forwarding is properly set up:
root@k8smaster:~/vitesstests# ./pf.sh
Forwarding from 127.0.0.1:15306 -> 3306
Forwarding from [::1]:15306 -> 3306
Forwarding from 127.0.0.1:15000 -> 15000
Forwarding from [::1]:15000 -> 15000
Forwarding from 127.0.0.1:15999 -> 15999
Forwarding from [::1]:15999 -> 15999
You may point your browser to http://localhost:15000, use the following aliases as shortcuts:
alias vtctlclient="/root/go/bin/vtctlclient -server=localhost:15999 -logtostderr"
alias mysql="mysql -h 127.0.0.1 -P 15306 -u user"
Hit Ctrl-C to stop the port forwards
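The pf.sh script itself is part of the test files and is not shown here. A rough sketch of what such a helper does could look like the snippet below; the service names are assumptions derived from the pod names in the listing above, so adjust them to your cluster:
#!/bin/bash
# Forward vtgate (MySQL protocol), the vtctld web UI and the vtctld gRPC port.
kubectl port-forward service/vitesstpcc-zone1-vtgate-67d1e711 15306:3306 &
kubectl port-forward service/vitesstpcc-zone1-vtctld-f38ee748 15000:15000 15999:15999 &
wait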
we can log into MySQL via vtgate and, to verify where we have ended up, run SHOW TABLES:
root@k8smaster:~/vitesstests# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.7.9-vitess-12.0.0-SNAPSHOT Version: 12.0.0-SNAPSHOT (Git revision 912fb2b85a branch 'main') built on Tue Oct 12 04:17:46 UTC 2021 by vitess@buildkitsandbox using go1.17 linux/amd64

Copyright (c) 2000, 2021, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show tables from externalsbtest;
+--------------------------+
| Tables_in_externalsbtest |
+--------------------------+
| customer1                |
| customer10               |
| customer2                |
| customer3                |
| customer4                |
| customer5                |
.
.
.
| warehouse8               |
| warehouse9               |
+--------------------------+
90 rows in set (0.02 sec)

mysql> SELECT * FROM externalsbtest.orders1 LIMIT 1;
+------+--------+--------+--------+---------------------+--------------+----------+-------------+
| o_id | o_d_id | o_w_id | o_c_id | o_entry_d           | o_carrier_id | o_ol_cnt | o_all_local |
+------+--------+--------+--------+---------------------+--------------+----------+-------------+
|    1 |      1 |      1 |   2383 | 2021-10-12 14:09:26 |            8 |       15 |           1 |
+------+--------+--------+--------+---------------------+--------------+----------+-------------+
1 row in set (0.00 sec)
As you can see, we have access to the original server. Now, we have to create "native" vttablets that will eventually take over from the current setup. We can do that by applying the 204_add_native_vttablets.yaml file:
root@k8smaster:~/vitesstests# kubectl apply -f 204_add_native_vttablets.yaml
vitesscluster.planetscale.com/vitesstpcc configured
secret/example-cluster-config configured
root@k8smaster:~/vitesstests# kubectl get pods,pv
NAME                                                    READY   STATUS      RESTARTS   AGE
pod/recycler-for-pv1                                    0/1     Completed   0          14h
pod/recycler-for-pv20                                   0/1     Completed   0          36h
pod/recycler-for-pv4                                    0/1     Completed   0          14h
pod/vitess-operator-f44545df8-l5kk9                     1/1     Running     0          34d
pod/vitesstpcc-etcd-8f68363b-1                          1/1     Running     0          13h
pod/vitesstpcc-etcd-8f68363b-2                          1/1     Running     0          13h
pod/vitesstpcc-etcd-8f68363b-3                          1/1     Running     0          13h
pod/vitesstpcc-vttablet-zone1-1817996704-58f13a5a       2/3     Running     1          8s
pod/vitesstpcc-vttablet-zone1-2413274560-db084ec8       1/1     Running     2          13h
pod/vitesstpcc-vttablet-zone1-3154448388-e058f0f6       2/3     Running     1          8s
pod/vitesstpcc-vttablet-zone1-3896337564-86d89914       2/3     Running     1          7s
pod/vitesstpcc-zone1-vtctld-f38ee748-7b75487874-flfql   1/1     Running     1          13h
pod/vitesstpcc-zone1-vtgate-67d1e711-7b8556d549-b8jsw   1/1     Running     2          13h
Now, we have to prepare the migration. We are going to use vtctlclient and its MoveTables workflow for that. Importantly, the data set includes foreign keys, so the migration has to be performed in a particular order. We analysed the tables and their relations and came up with a proper order in which the tables can be migrated:
root@k8smaster:~/vitesstests# cat order
warehouse
district
customer
history
item
orders
new_orders
stock
order_line
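This ordering can be derived from the schema itself. A hedged way to do it is to list the foreign key relations from information_schema on the production server, so that referenced tables come before the tables that reference them:
# Which tables reference which? Referenced tables must be migrated first.
root@k8smaster:~/vitesstests# mysql -h 192.168.10.121 -u vitess_admin -pVitesspass -e \
  "SELECT DISTINCT TABLE_NAME, REFERENCED_TABLE_NAME \
   FROM information_schema.KEY_COLUMN_USAGE \
   WHERE TABLE_SCHEMA = 'sbtest' AND REFERENCED_TABLE_NAME IS NOT NULL;"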
Then, we used a Bash one-liner to generate the proper vtctlclient command:
root@k8smaster:~/vitesstests# VAR="" ; for t in $(cat order) ; do for tab in $(mysql -e "SHOW TABLES FROM externalsbtest LIKE '${t}%';" | grep -v Tables_in_externalsbtest) ; do VAR+="${tab},"; done ; done ; echo "vtctlclient MoveTables -source externalsbtest -tables '${VAR%?}' Create newsbtest.migration"
vtctlclient MoveTables -source externalsbtest -tables 'warehouse1,warehouse10,warehouse2,warehouse3,warehouse4,warehouse5,warehouse6,warehouse7,warehouse8,warehouse9,district1,district10,district2,district3,district4,district5,district6,district7,district8,district9,customer1,customer10,customer2,customer3,customer4,customer5,customer6,customer7,customer8,customer9,history1,history10,history2,history3,history4,history5,history6,history7,history8,history9,item1,item10,item2,item3,item4,item5,item6,item7,item8,item9,orders1,orders10,orders2,orders3,orders4,orders5,orders6,orders7,orders8,orders9,new_orders1,new_orders10,new_orders2,new_orders3,new_orders4,new_orders5,new_orders6,new_orders7,new_orders8,new_orders9,stock1,stock10,stock2,stock3,stock4,stock5,stock6,stock7,stock8,stock9,order_line1,order_line10,order_line2,order_line3,order_line4,order_line5,order_line6,order_line7,order_line8,order_line9' Create newsbtest.migration
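As a quick sanity check (our own addition, not part of the original workflow), the generated list should contain all 90 tables we saw earlier in SHOW TABLES:
# Count the comma-separated table names; this should print 90.
root@k8smaster:~/vitesstests# echo "${VAR%?}" | tr ',' '\n' | wc -l
90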
Finally, we can start the process of migration:
root@k8smaster:~/vitesstests# vtctlclient MoveTables -source externalsbtest -tables 'warehouse1,warehouse10,warehouse2,warehouse3,warehouse4,warehouse5,warehouse6,warehouse7,warehouse8,warehouse9,district1,district10,district2,district3,district4,district5,district6,district7,district8,district9,customer1,customer10,customer2,customer3,customer4,customer5,customer6,customer7,customer8,customer9,history1,history10,history2,history3,history4,history5,history6,history7,history8,history9,item1,item10,item2,item3,item4,item5,item6,item7,item8,item9,orders1,orders10,orders2,orders3,orders4,orders5,orders6,orders7,orders8,orders9,new_orders1,new_orders10,new_orders2,new_orders3,new_orders4,new_orders5,new_orders6,new_orders7,new_orders8,new_orders9,stock1,stock10,stock2,stock3,stock4,stock5,stock6,stock7,stock8,stock9,order_line1,order_line10,order_line2,order_line3,order_line4,order_line5,order_line6,order_line7,order_line8,order_line9' Create newsbtest.migration
Waiting for workflow to start: 0% ... 0% ... 0% ... 0% ... 0% ... 0% ... 0% ... 0% ... 0% ... 0% ... 0% ... 0% ... 0% ... 0% ... 0% ... 0% ... 0% ... 0% ... 0% ... 0% ... 0% ... 0% ... 0% ... 0% ... 0% ... 0% ... 0% ... 0% ... 0% ... 0% ...
E1021 11:55:51.279808  549762 main.go:67] E1021 11:55:51.359824 vtctl.go:2273] workflow did not start within 30s
MoveTables Error: rpc error: code = Unknown desc = workflow did not start within 30s
E1021 11:55:51.279913  549762 main.go:76] remote error: rpc error: code = Unknown desc = workflow did not start within 30s
root@k8smaster:~/vitesstests# vtctlclient MoveTables Progress newsbtest.reshard
MoveTables Error: rpc error: code = Unknown desc = workflow newsbtest.reshard does not exist
E1021 11:59:38.672667  551627 main.go:76] remote error: rpc error: code = Unknown desc = workflow newsbtest.reshard does not exist
The client timed out waiting for the workflow to start (and our first Progress check above used a wrong workflow name, newsbtest.reshard), but the workflow itself was created successfully. As usual with migrations in Vitess, we can monitor the progress and see which tables have not yet been migrated:
root@k8smaster:~/vitesstests# vtctlclient MoveTables Progress newsbtest.migration
Copy Progress (approx):
item10: rows copied 0/99510 (0%), size copied 16384/11026432 (0%)
history1: rows copied 0/536727 (0%), size copied 16384/38338560 (0%)
warehouse3: rows copied 0/8 (0%), size copied 16384/16384 (100%)
stock6: rows copied 0/981764 (0%), size copied 16384/371195904 (0%)
orders1: rows copied 0/501395 (0%), size copied 16384/27836416 (0%)
warehouse1: rows copied 0/8 (0%), size copied 16384/16384 (100%)
warehouse2: rows copied 0/8 (0%), size copied 16384/16384 (100%)
stock3: rows copied 0/979578 (0%), size copied 16384/372244480 (0%)
order_line4: rows copied 0/5985917 (0%), size copied 16384/540016640 (0%)
order_line1: rows copied 0/5751430 (0%), size copied 16384/545259520 (0%)
order_line9: rows copied 0/5269073 (0%), size copied 16384/544210944 (0%)
orders9: rows copied 0/538434 (0%), size copied 16384/27836416 (0%)
district8: rows copied 0/100 (0%), size copied 16384/16384 (100%)
new_orders1: rows copied 0/86427 (0%), size copied 16384/4341760 (0%)
history3: rows copied 0/535797 (0%), size copied 16384/38338560 (0%)
order_line2: rows copied 0/5073464 (0%), size copied 16384/548405248 (0%)
new_orders8: rows copied 0/87384 (0%), size copied 16384/4390912 (0%)
new_orders2: rows copied 0/90493 (0%), size copied 16384/4276224 (0%)
warehouse9: rows copied 0/8 (0%), size copied 16384/16384 (100%)
history10: rows copied 0/537881 (0%), size copied 16384/38338560 (0%)
item3: rows copied 0/99541 (0%), size copied 16384/11026432 (0%)
item2: rows copied 0/99541 (0%), size copied 16384/11026432 (0%)
district4: rows copied 0/100 (0%), size copied 16384/16384 (100%)
order_line3: rows copied 0/5142364 (0%), size copied 16384/547340288 (0%)
district10: rows copied 0/100 (0%), size copied 16384/16384 (100%)
orders6: rows copied 0/577494 (0%), size copied 16384/27836416 (0%)
district1: rows copied 0/100 (0%), size copied 16384/16384 (100%)
history7: rows copied 0/536601 (0%), size copied 16384/38338560 (0%)
order_line10: rows copied 0/5432538 (0%), size copied 16384/545243136 (0%)
district7: rows copied 0/100 (0%), size copied 16384/16384 (100%)
new_orders6: rows copied 0/89291 (0%), size copied 16384/4259840 (0%)
customer7: rows copied 0/289204 (0%), size copied 16384/203276288 (0%)
warehouse7: rows copied 0/8 (0%), size copied 16384/16384 (100%)
item1: rows copied 0/99601 (0%), size copied 16384/11026432 (0%)
item5: rows copied 0/99571 (0%), size copied 16384/11026432 (0%)
district6: rows copied 0/100 (0%), size copied 16384/16384 (100%)
warehouse10: rows copied 0/8 (0%), size copied 16384/16384 (100%)
stock4: rows copied 0/981898 (0%), size copied 16384/369082368 (0%)
stock2: rows copied 0/981035 (0%), size copied 16384/372244480 (0%)
orders10: rows copied 0/555192 (0%), size copied 16384/27836416 (0%)
item7: rows copied 0/99662 (0%), size copied 16384/11026432 (0%)
orders7: rows copied 0/496483 (0%), size copied 16384/27836416 (0%)
customer6: rows copied 0/286698 (0%), size copied 16384/201146368 (0%)
stock9: rows copied 0/980709 (0%), size copied 16384/369098752 (0%)
warehouse8: rows copied 0/8 (0%), size copied 16384/16384 (100%)
warehouse6: rows copied 0/8 (0%), size copied 16384/16384 (100%)
order_line5: rows copied 0/5218243 (0%), size copied 16384/536870912 (0%)
orders5: rows copied 0/499476 (0%), size copied 16384/27836416 (0%)
history9: rows copied 0/536954 (0%), size copied 16384/38338560 (0%)
stock5: rows copied 0/980441 (0%), size copied 16384/371195904 (0%)
stock1: rows copied 0/979508 (0%), size copied 16384/370147328 (0%)
item4: rows copied 0/99449 (0%), size copied 16384/11026432 (0%)
stock7: rows copied 0/979044 (0%), size copied 16384/370130944 (0%)
history6: rows copied 0/534879 (0%), size copied 16384/38338560 (0%)
warehouse5: rows copied 0/8 (0%), size copied 16384/16384 (100%)
warehouse4: rows copied 0/8 (0%), size copied 16384/16384 (100%)
order_line8: rows copied 0/5181321 (0%), size copied 16384/551550976 (0%)
order_line7: rows copied 0/4816734 (0%), size copied 16384/541065216 (0%)
stock10: rows copied 0/980887 (0%), size copied 16384/371195904 (0%)
customer9: rows copied 0/289530 (0%), size copied 16384/202227712 (0%)
history5: rows copied 0/537069 (0%), size copied 16384/38338560 (0%)
orders8: rows copied 0/535050 (0%), size copied 16384/27836416 (0%)
customer8: rows copied 0/291672 (0%), size copied 16384/204341248 (0%)
orders3: rows copied 0/553823 (0%), size copied 16384/27836416 (0%)
orders4: rows copied 0/570694 (0%), size copied 16384/26787840 (0%)
new_orders10: rows copied 0/85311 (0%), size copied 16384/4276224 (0%)
district9: rows copied 0/100 (0%), size copied 16384/16384 (100%)
item8: rows copied 0/99571 (0%), size copied 16384/11026432 (0%)
new_orders3: rows copied 0/88581 (0%), size copied 16384/4341760 (0%)
new_orders5: rows copied 0/87897 (0%), size copied 16384/3309568 (0%)
history8: rows copied 0/537302 (0%), size copied 16384/38338560 (0%)
new_orders4: rows copied 0/83205 (0%), size copied 16384/4243456 (0%)
district3: rows copied 0/100 (0%), size copied 16384/16384 (100%)
stock8: rows copied 0/981289 (0%), size copied 16384/370130944 (0%)
orders2: rows copied 0/527200 (0%), size copied 16384/27836416 (0%)
new_orders9: rows copied 0/90559 (0%), size copied 16384/4259840 (0%)
new_orders7: rows copied 0/89397 (0%), size copied 16384/4308992 (0%)
order_line6: rows copied 0/6538036 (0%), size copied 16384/616562688 (0%)
district5: rows copied 0/100 (0%), size copied 16384/16384 (100%)
item9: rows copied 0/99632 (0%), size copied 16384/11026432 (0%)
item6: rows copied 0/99601 (0%), size copied 16384/11026432 (0%)
history2: rows copied 0/536842 (0%), size copied 16384/38338560 (0%)
district2: rows copied 0/100 (0%), size copied 16384/16384 (100%)
history4: rows copied 0/536038 (0%), size copied 16384/38338560 (0%)
Following vreplication streams are running for workflow newsbtest.migration:
id=1 on -/zone1-3896337564: Status: Copying. VStream Lag: 0s.
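Rather than re-running the Progress command by hand, a simple polling loop (our own helper sketch, assuming the vtctlclient alias from pf.sh is in place) can wait for the copy phase to finish:
# Poll the workflow every minute until the copy phase reports completion.
while ! vtctlclient MoveTables Progress newsbtest.migration | grep -q 'Copy Completed'; do
  sleep 60
done
echo "Copy phase finished"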
After a while, the process completes:
root@k8smaster:~/vitesstests# vtctlclient MoveTables Progress newsbtest.migration
Copy Completed.
Following vreplication streams are running for workflow newsbtest.migration:
id=1 on -/zone1-3896337564: Status: Running. VStream Lag: -1s. Tx time: Thu Oct 21 12:33:19 2021.
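Before switching any traffic, it is worth verifying that the source and target data actually match. Vitess provides the VDiff command for this; a hedged invocation for our workflow (it can take a long time on a data set of this size, as it compares both sides row by row) would be:
root@k8smaster:~/vitesstests# vtctlclient VDiff newsbtest.migration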
Now, we have to decide how to proceed from here. Currently, traffic is directed to the database through ProxySQL, with reads split across replicas. If the load is low enough to be handled by a single node, we can switch the traffic to vtgate and initially keep using the 'externalsbtest' schema while moving reads, step by step, to the "newsbtest" schema in the application. We can also make the full switch at any moment: just start sending queries to "newsbtest" and then finalize the migration. One way or the other, we will eventually move the traffic to the new vttablets:
root@k8smaster:~/vitesstests# vtctlclient MoveTables SwitchTraffic newsbtest.migration
I1103 22:19:59.398286 394505 main.go:67] I1103 22:19:59.468387 traffic_switcher.go:442] Built switching metadata: &{migrationType:0 wr:0xc000b5cb40 workflow:migration frozen:false reverseWorkflow:migration_reverse id:9072913144584891300 sources:map[-:0xc00059a2a0] targets:map[-:0xc000e06d50] sourceKeyspace:externalsbtest targetKeyspace:newsbtest tables:[customer1 customer10 customer2 customer3 customer4 customer5 customer6 customer7 customer8 customer9 district1 district10 district2 district3 district4 district5 district6 district7 district8 district9 history1 history10 history2 history3 history4 history5 history6 history7 history8 history9 item1 item10 item2 item3 item4 item5 item6 item7 item8 item9 new_orders1 new_orders10 new_orders2 new_orders3 new_orders4 new_orders5 new_orders6 new_orders7 new_orders8 new_orders9 order_line1 order_line10 order_line2 order_line3 order_line4 order_line5 order_line6 order_line7 order_line8 order_line9 orders1 orders10 orders2 orders3 orders4 orders5 orders6 orders7 orders8 orders9 stock1 stock10 stock2 stock3 stock4 stock5 stock6 stock7 stock8 stock9 warehouse1 warehouse10 warehouse2 warehouse3 warehouse4 warehouse5 warehouse6 warehouse7 warehouse8 warehouse9] sourceKSSchema:0xc00059a780 optCells: optTabletTypes:primary,replica,rdonly externalCluster: externalTopo:<nil>}
I1103 22:19:59.427163 394505 main.go:67] I1103 22:19:59.497453 traffic_switcher.go:480] No previous journals were found. Proceeding normally.
I1103 22:19:59.427199 394505 main.go:67] I1103 22:19:59.497486 traffic_switcher.go:490] Stopping streams
I1103 22:19:59.427211 394505 main.go:67] I1103 22:19:59.497495 traffic_switcher.go:502] Stopping source writes
I1103 22:19:59.437321 394505 main.go:67] I1103 22:19:59.507700 keyspace.go:40] RefreshTabletsByShard called on shard externalsbtest/-
I1103 22:19:59.442350 394505 main.go:67] I1103 22:19:59.512788 keyspace.go:69] Calling RefreshState on tablet zone1-2413274560
.
.
.
I1103 22:19:59.755093 394505 main.go:67] I1103 22:19:59.825376 traffic_switcher.go:1524] Marking target streams frozen for workflow migration db_name vt_newsbtest
SwitchTraffic was successful for workflow newsbtest.migration
Start State: Reads Not Switched. Writes Not Switched
Current State: All Reads Switched. Writes Switched
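One reassuring detail before we finish: as long as the workflow has not been completed, MoveTables can route traffic back to the source keyspace. A hedged rollback example, should anything misbehave after the switch:
# Send reads and writes back to externalsbtest if problems show up.
root@k8smaster:~/vitesstests# vtctlclient MoveTables ReverseTraffic newsbtest.migration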
Now, we assume that, one way or another, you have moved the traffic to vtgate and it is being handled by Vitess. If so, we can complete the migration:
root@k8smaster:~/vitesstests# vtctlclient MoveTables Complete newsbtest.migration
I1104 12:39:48.972084  761157 main.go:67] I1104 12:39:49.042141 traffic_switcher.go:1472] Dropping table sbtest.customer1
E1104 12:39:48.987919  761157 main.go:67] E1104 12:39:49.058256 traffic_switcher.go:1480] Error removing table customer1: rpc error: code = Unknown desc = TabletManager.ExecuteFetchAsDba on zone1-2413274560 error: rpc error: code = Unknown desc = Cannot drop table 'customer1' referenced by a foreign key constraint 'fkey_orders_1_1' on table 'orders1'. (errno 3730) (sqlstate HY000) during query: drop table sbtest.customer1: rpc error: code = Unknown desc = Cannot drop table 'customer1' referenced by a foreign key constraint 'fkey_orders_1_1' on table 'orders1'. (errno 3730) (sqlstate HY000) during query: drop table sbtest.customer1
E1104 12:39:48.994814  761157 main.go:67] E1104 12:39:49.065254 vtctl.go:2067] rpc error: code = Unknown desc = TabletManager.ExecuteFetchAsDba on zone1-2413274560 error: rpc error: code = Unknown desc = Cannot drop table 'customer1' referenced by a foreign key constraint 'fkey_orders_1_1' on table 'orders1'. (errno 3730) (sqlstate HY000) during query: drop table sbtest.customer1: rpc error: code = Unknown desc = Cannot drop table 'customer1' referenced by a foreign key constraint 'fkey_orders_1_1' on table 'orders1'. (errno 3730) (sqlstate HY000) during query: drop table sbtest.customer1
I1104 12:39:49.014643  761157 main.go:67] I1104 12:39:49.085110 vtctl.go:2069] Workflow Status: All Reads Switched. Writes Switched
Following vreplication streams are running for workflow newsbtest.migration:
id=1 on -/zone1-3896337564: Status: Stopped. VStream Lag: -1s. Tx time: Wed Nov 3 11:13:08 2021.
MoveTables Error: rpc error: code = Unknown desc = TabletManager.ExecuteFetchAsDba on zone1-2413274560 error: rpc error: code = Unknown desc = Cannot drop table 'customer1' referenced by a foreign key constraint 'fkey_orders_1_1' on table 'orders1'. (errno 3730) (sqlstate HY000) during query: drop table sbtest.customer1: rpc error: code = Unknown desc = Cannot drop table 'customer1' referenced by a foreign key constraint 'fkey_orders_1_1' on table 'orders1'. (errno 3730) (sqlstate HY000) during query: drop table sbtest.customer1
E1104 12:39:49.028812  761157 main.go:76] remote error: rpc error: code = Unknown desc = TabletManager.ExecuteFetchAsDba on zone1-2413274560 error: rpc error: code = Unknown desc = Cannot drop table 'customer1' referenced by a foreign key constraint 'fkey_orders_1_1' on table 'orders1'. (errno 3730) (sqlstate HY000) during query: drop table sbtest.customer1: rpc error: code = Unknown desc = Cannot drop table 'customer1' referenced by a foreign key constraint 'fkey_orders_1_1' on table 'orders1'. (errno 3730) (sqlstate HY000) during query: drop table sbtest.customer1
OK, it did not work, again due to a foreign key constraint. What happened here is that Vitess wanted to clean up the source vttablet, which points to the primary node in our old cluster. The process failed because, with foreign keys in place, you cannot drop a table that is referenced by a foreign key from another table. We can work around this by disabling foreign key checks. That is not a problem, as we are going to shut that cluster down anyway and the traffic has already been redirected to Vitess.
mysql> SET GLOBAL foreign_key_checks=OFF;
Query OK, 0 rows affected (0.00 sec)
Once this is done, we can retry the last command and this time it will work just fine. Keep in mind that SET GLOBAL affects connections opened after the change, not already-established sessions:
root@k8smaster:~/vitesstests# vtctlclient MoveTables Complete newsbtest.migration
I1104 12:40:15.591185  761365 main.go:67] I1104 12:40:15.661189 traffic_switcher.go:1472] Dropping table sbtest.customer1
I1104 12:40:15.808242  761365 main.go:67] I1104 12:40:15.878747 traffic_switcher.go:1483] Removed table sbtest.customer1
I1104 12:40:15.808278  761365 main.go:67] I1104 12:40:15.878787 traffic_switcher.go:1472] Dropping table sbtest.customer10
I1104 12:40:15.951036  761365 main.go:67] I1104 12:40:16.021621 traffic_switcher.go:1483] Removed table sbtest.customer10
.
.
.
I1104 12:40:19.144271  761365 main.go:67] I1104 12:40:19.214137 traffic_switcher.go:1472] Dropping table sbtest.warehouse9
I1104 12:40:19.159065  761365 main.go:67] I1104 12:40:19.229517 traffic_switcher.go:1483] Removed table sbtest.warehouse9
I1104 12:40:19.171176  761365 main.go:67] I1104 12:40:19.240343 keyspace.go:40] RefreshTabletsByShard called on shard externalsbtest/-
I1104 12:40:19.177991  761365 main.go:67] I1104 12:40:19.248155 keyspace.go:69] Calling RefreshState on tablet zone1-2413274560
I1104 12:40:19.181501  761365 main.go:67] I1104 12:40:19.251809 traffic_switcher.go:1546] Deleting reverse streams for workflow migration db_name sbtest
I1104 12:40:19.199220  761365 main.go:67] I1104 12:40:19.269483 traffic_switcher.go:1537] Deleting target streams for workflow migration db_name vt_newsbtest
Complete was successful for workflow newsbtest.migration
Start State: All Reads Switched. Writes Switched
Current State: Workflow Not Found
This is almost it. The external vttablet is still around:
root@k8smaster:~/vitesstests# vtctlclient listalltablets
zone1-1817996704 newsbtest - replica 10.244.2.81:15000 10.244.2.81:3306 [] <null>
zone1-2413274560 externalsbtest - primary 10.244.3.154:15000 192.168.10.121:3306 [] 2021-10-20T22:24:12Z
zone1-3154448388 newsbtest - replica 10.244.1.175:15000 10.244.1.175:3306 [] <null>
zone1-3896337564 newsbtest - primary 10.244.3.156:15000 10.244.3.156:3306 [] 2021-10-21T11:18:37Z
We can fix that by editing the cluster definition, removing the external vttablet from it and then applying the new yaml file. You can find the required changes in 205_remove_external_vttablet.yaml.
root@k8smaster:~/vitesstests# kubectl apply -f 205_remove_external_vttablet.yaml
vitesscluster.planetscale.com/vitesstpcc configured
secret/example-cluster-config configured
root@k8smaster:~/vitesstests# vtctlclient listalltablets
zone1-1817996704 newsbtest - replica 10.244.2.81:15000 10.244.2.81:3306 [] <null>
zone1-3154448388 newsbtest - replica 10.244.1.175:15000 10.244.1.175:3306 [] <null>
zone1-3896337564 newsbtest - primary 10.244.3.156:15000 10.244.3.156:3306 [] 2021-10-21T11:18:37Z
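As a final piece of housekeeping on the old production server, we can also drop the account we created for Vitess at the beginning. This is our own optional suggestion rather than a step from the example files, and the root credentials below are an assumption:
# On the old production server, remove the now-unused migration account.
mysql -h 192.168.10.121 -u root -p -e "DROP USER vitess_admin@'%';"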
That is it: we have migrated the application data from a MySQL replication cluster into Vitess.