All 3 of my nodes crashed. After all the nodes had been brought back up, I noticed that MariaDB was dead, and I could not restart it.
I am using CentOS 7 on all servers.
I tried starting the first node and then the others, but without success.
First of all, I tried to find the most recent seqno, as the documentation says. So I looked at this file on all 3 nodes: /var/lib/mysql/grastate.dat
and I noticed that the content is identical on all 3 nodes (the uuid is the same and the seqno is the same)! Here is that file:
# GALERA saved state
version: 2.1
uuid: ec3e180d-bbff-11e6-b989-3273ac13ba57
seqno: -1
cert_index:
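As a side note, seqno: -1 means the file records no usable commit position (the node did not shut down cleanly), so identical files do not by themselves identify a safe bootstrap candidate. A minimal sketch for comparing the seqno across nodes; the helper name and the /tmp path are mine:

```shell
# Hypothetical helper: print the seqno recorded in a grastate.dat file.
grastate_seqno() {
  awk '/^seqno:/ {print $2}' "$1"
}

# Demo on a sample file with the same content as the one above:
cat > /tmp/grastate.dat <<'EOF'
# GALERA saved state
version: 2.1
uuid: ec3e180d-bbff-11e6-b989-3273ac13ba57
seqno: -1
cert_index:
EOF

grastate_seqno /tmp/grastate.dat   # prints: -1
```

Running this against /var/lib/mysql/grastate.dat on each node (e.g. over ssh) gives the values to compare.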
OK. Since all the nodes are identical, I figured I could bootstrap any one of them as a new cluster and then join the other nodes to it. I used the following command:
galera_new_cluster
And it did not work. The node did not start.
Here is what I got:
-- Unit mariadb.service has begun starting up.
Dec 07 18:20:55 GlusterDC1_1 sh[4298]: 2016-12-07 18:20:55 139806456780992 [Note] /usr/sbin/mysqld (mysqld 10.1.19-MariaDB) starting as process 4332 ...
Dec 07 18:20:58 GlusterDC1_1 sh[4298]: WSREP: Recovered position ec3e180d-bbff-11e6-b989-3273ac13ba57:83
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] /usr/sbin/mysqld (mysqld 10.1.19-MariaDB) starting as process 4364 ...
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: Read nil XID from storage engines, skipping position init
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: wsrep_load(): loading provider library '/usr/lib64/galera/libgalera_smm.so'
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: wsrep_load(): Galera 25.3.18(r3632) by Codership Oy <[email protected]> loaded successfully.
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: CRC-32C: using hardware acceleration.
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: Found saved state: ec3e180d-bbff-11e6-b989-3273ac13ba57:-1
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: Passing config to GCS: base_dir = /var/lib/mysql/; base_Host = 192.168.0.120; base_port = 4567; cert.log_conflicts = no; debug = no; evs.auto_evict = 0; evs.delay_margin = PT1S; evs.delayed_keep_period = PT30S; evs.inactive_check_period = PT0.5S; evs.inactive_timeout = PT15S; evs.join_retrans_period = PT1S; evs.max_install_timeouts = 3; evs.send_window = 4; evs.stats_report_period = PT1M; evs.suspect_timeout = PT5S; evs.user_send_window = 2; evs.view_forget_timeout = PT24H; gcache.dir = /var/lib/mysql/; gcache.keep_pages_size = 0; gcache.mem_size = 0; gcache.name = /var/lib/mysql//galera.cache; gcache.page_size = 128M; gcache.size = 128M; gcomm.thread_prio = ; gcs.fc_debug = 0; gcs.fc_factor = 1.0; gcs.fc_limit = 16; gcs.fc_master_slave = no; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = no; gmcast.segment = 0; gmcast.version = 0; pc.announce_timeout = PT3S; pc.checksum = false; pc.ignore_quorum = false
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830658434816 [Note] WSREP: Service thread queue flushed.
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: Assign initial position for certification: 83, protocol version: -1
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: wsrep_sst_grab()
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: Start replication
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: 'wsrep-new-cluster' option used, bootstrapping the cluster
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: Setting initial position to ec3e180d-bbff-11e6-b989-3273ac13ba57:83
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: protonet asio version 0
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: Using CRC-32C for message checksums.
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: backend: asio
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: gcomm thread scheduling priority set to other:0
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Warning] WSREP: access file(/var/lib/mysql//gvwstate.dat) failed(No such file or directory)
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: restore pc from disk failed
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: GMCast version 0
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: (23356fd8, 'tcp://0.0.0.0:4567') listening at tcp://0.0.0.0:4567
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: (23356fd8, 'tcp://0.0.0.0:4567') multicast: , ttl: 1
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: EVS version 0
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: gcomm: bootstrapping new group 'my_cluster'
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: start_prim is enabled, turn off pc_recovery
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [ERROR] WSREP: Address already in use
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [ERROR] WSREP: failed to open gcomm backend connection: 98: error while trying to listen 'tcp://0.0.0.0:4567?socket.non_blocking=1', asio error 'Address already in use': 98 (Address already in use)
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: at gcomm/src/asio_tcp.cpp:listen():810
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [ERROR] WSREP: gcs/src/gcs_core.cpp:gcs_core_open():208: Failed to open backend connection: -98 (Address already in use)
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [ERROR] WSREP: gcs/src/gcs.cpp:gcs_open():1380: Failed to open channel 'my_cluster' at 'gcomm://192.168.0.120,192.168.0.121,192.168.0.122': -98 (Address already in use)
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [ERROR] WSREP: gcs connect failed: Address already in use
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [ERROR] WSREP: wsrep::connect(gcomm://192.168.0.120,192.168.0.121,192.168.0.122) failed: 7
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [ERROR] Aborting
Dec 07 18:20:59 GlusterDC1_1 systemd[1]: mariadb.service: main process exited, code=exited, status=1/FAILURE
Dec 07 18:20:59 GlusterDC1_1 systemd[1]: Failed to start MariaDB database server.
-- Subject: Unit mariadb.service has failed
OK, so I tried to start the node manually, with the following command:
systemctl start mariadb
And I got:
-- Unit mariadb.service has begun starting up.
Dec 07 18:31:55 GlusterDC1_1 sh[4505]: 2016-12-07 18:31:55 139834720598208 [Note] /usr/sbin/mysqld (mysqld 10.1.19-MariaDB) starting as process 4539 ...
Dec 07 18:31:58 GlusterDC1_1 sh[4505]: WSREP: Recovered position ec3e180d-bbff-11e6-b989-3273ac13ba57:83
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] /usr/sbin/mysqld (mysqld 10.1.19-MariaDB) starting as process 4571 ...
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: Read nil XID from storage engines, skipping position init
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: wsrep_load(): loading provider library '/usr/lib64/galera/libgalera_smm.so'
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: wsrep_load(): Galera 25.3.18(r3632) by Codership Oy <[email protected]> loaded successfully.
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: CRC-32C: using hardware acceleration.
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: Found saved state: ec3e180d-bbff-11e6-b989-3273ac13ba57:-1
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: Passing config to GCS: base_dir = /var/lib/mysql/; base_Host = 192.168.0.120; base_port = 4567; cert.log_conflicts = no; debug = no; evs.auto_evict = 0; evs.delay_margin = PT1S; evs.delayed_keep_period = PT30S; evs.inactive_check_period = PT0.5S; evs.inactive_timeout = PT15S; evs.join_retrans_period = PT1S; evs.max_install_timeouts = 3; evs.send_window = 4; evs.stats_report_period = PT1M; evs.suspect_timeout = PT5S; evs.user_send_window = 2; evs.view_forget_timeout = PT24H; gcache.dir = /var/lib/mysql/; gcache.keep_pages_size = 0; gcache.mem_size = 0; gcache.name = /var/lib/mysql//galera.cache; gcache.page_size = 128M; gcache.size = 128M; gcomm.thread_prio = ; gcs.fc_debug = 0; gcs.fc_factor = 1.0; gcs.fc_limit = 16; gcs.fc_master_slave = no; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = no; gmcast.segment = 0; gmcast.version = 0; pc.announce_timeout = PT3S; pc.checksum = false; pc.ignore_quorum = false
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525285508864 [Note] WSREP: Service thread queue flushed.
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: Assign initial position for certification: 83, protocol version: -1
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: wsrep_sst_grab()
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: Start replication
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: Setting initial position to ec3e180d-bbff-11e6-b989-3273ac13ba57:83
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: protonet asio version 0
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: Using CRC-32C for message checksums.
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: backend: asio
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: gcomm thread scheduling priority set to other:0
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Warning] WSREP: access file(/var/lib/mysql//gvwstate.dat) failed(No such file or directory)
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: restore pc from disk failed
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: GMCast version 0
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: (acad4591, 'tcp://0.0.0.0:4567') listening at tcp://0.0.0.0:4567
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: (acad4591, 'tcp://0.0.0.0:4567') multicast: , ttl: 1
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: EVS version 0
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: gcomm: connecting to group 'my_cluster', peer '192.168.0.120:,192.168.0.121:,192.168.0.122:'
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [ERROR] WSREP: Address already in use
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [ERROR] WSREP: failed to open gcomm backend connection: 98: error while trying to listen 'tcp://0.0.0.0:4567?socket.non_blocking=1', asio error 'Address already in use': 98 (Address already in use)
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: at gcomm/src/asio_tcp.cpp:listen():810
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [ERROR] WSREP: gcs/src/gcs_core.cpp:gcs_core_open():208: Failed to open backend connection: -98 (Address already in use)
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [ERROR] WSREP: gcs/src/gcs.cpp:gcs_open():1380: Failed to open channel 'my_cluster' at 'gcomm://192.168.0.120,192.168.0.121,192.168.0.122': -98 (Address already in use)
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [ERROR] WSREP: gcs connect failed: Address already in use
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [ERROR] WSREP: wsrep::connect(gcomm://192.168.0.120,192.168.0.121,192.168.0.122) failed: 7
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [ERROR] Aborting
Dec 07 18:31:59 GlusterDC1_1 systemd[1]: mariadb.service: main process exited, code=exited, status=1/FAILURE
Dec 07 18:31:59 GlusterDC1_1 systemd[1]: Failed to start MariaDB database server.
-- Subject: Unit mariadb.service has failed
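Note that in both logs the actual failure is not about the cluster state at all: something on this host is still bound to TCP port 4567, the Galera replication port, most likely a mysqld left over from the crash that systemd no longer tracks. A sketch of how one might check and clean that up on CentOS 7 (the process path in the pkill pattern is an assumption):

```shell
# See what is holding the Galera port 4567 (ss ships with CentOS 7):
ss -tlnp 2>/dev/null | grep ':4567' || echo 'port 4567 is free'

# If a stale mysqld shows up, stop the unit and kill the leftover
# process before retrying galera_new_cluster:
systemctl stop mariadb 2>/dev/null || true
pkill -f '/usr/sbin/mysqld' 2>/dev/null || true
```

Once nothing listens on 4567 anymore, the bootstrap attempt should at least get past this error.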
I tried both commands on the other nodes and got the same error.
I also tried running the following commands, but again without success:
/etc/init.d/mysql start --wsrep-new-cluster
service mysql start --wsrep_cluster_address="gcomm://192.168.0.120,192.168.0.121,192.168.0.122" \
--wsrep_cluster_name="my_cluster"
Is it possible to recover a cluster in such a situation?
Pre-recovery state:
Post-crash recovery steps:
Find a valid seqno. Look at the grastate.dat file on each server to see which machine has the most recent data. The node with the highest seqno is the node with the current data.
Now look at the three grastate.dat files.
a) Node0: this grastate.dat shows a graceful shutdown. Note the seqno; we are looking for the node with the highest one.
/var/lib/mysql/grastate.dat
version: 2.1
uuid: cbd332a9-f617-11e2-b77d-3ee9fa637069
seqno: 43760
b) Node1: this grastate.dat shows -1 for the seqno. This node crashed while processing transactions. Start this node with the --wsrep-recover option; MySQL stores the last committed GTID in the InnoDB data header.
/var/lib/mysql/grastate.dat
version: 2.1
uuid: cbd332a9-f617-11e2-b77d-3ee9fa637069
seqno: -1
c) Node2: this grastate.dat has no seqno or group ID. This node crashed during DDL.
/var/lib/mysql/grastate.dat
version: 2.1
uuid: 00000000-0000-0000-0000-000000000000
seqno: -1
/path/to/mysql/bin/mysqld --wsrep-recover. mysqld will read the InnoDB header files and shut down immediately. The last wsrep position is printed to the mysqld.log file.
Example: 140716 12:55:45 [Note] WSREP: Found saved state: cbd332a9-f617-11e2-b77d-3ee9fa637069:36742
Compare the seqno of Node0 (seqno: 43760) and Node1 (seqno: -1). Node0 has the current snapshot of the data and must be started first.
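The --wsrep-recover run for a seqno: -1 node, plus extracting the position from the log, can be sketched as follows; the helper name and the /tmp demo file are mine:

```shell
# On the crashed node, have mysqld read the InnoDB header, print the
# last committed position to its error log, and exit immediately:
#   mysqld --wsrep-recover --user=mysql 2>> /var/log/mariadb/mariadb.log

# Hypothetical helper: pull the last uuid:seqno pair out of a log file
# (a uuid is exactly 36 hex-and-dash characters, followed by ':seqno').
recovered_position() {
  grep -Eo '[0-9a-f-]{36}:-?[0-9]+' "$1" | tail -n 1
}

# Demo on the example log line quoted above:
printf '%s\n' \
  '140716 12:55:45 [Note] WSREP: Found saved state: cbd332a9-f617-11e2-b77d-3ee9fa637069:36742' \
  > /tmp/mysqld.log
recovered_position /tmp/mysqld.log   # prints: cbd332a9-f617-11e2-b77d-3ee9fa637069:36742
```

The seqno after the colon is what you compare across nodes.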
On Node0, run this command to start the node:
a) nohup /path/to/mysql/bin/mysqld_safe --wsrep_cluster_address=gcomm:// & and wait until this node is online.
b) Then start Node1 and Node2. These two nodes should be started one at a time, and can be started the way you normally would.
c) Once all three nodes are up and in the Primary state, restart Node0 the normal way (so that it comes up as part of the whole cluster, and not just as a bootstrap node).
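The decision rule used throughout these steps (bootstrap from the node with the numerically largest recovered seqno) can be sketched as a tiny helper; the node names and the "node seqno" input format are hypothetical:

```shell
# Read "node seqno" pairs on stdin and print the node to bootstrap from
# (the one with the numerically largest seqno):
pick_bootstrap_node() {
  sort -k2,2nr | head -n 1 | awk '{print $1}'
}

# Demo with the seqnos from the example above:
printf 'Node0 43760\nNode1 -1\nNode2 -1\n' | pick_bootstrap_node   # prints: Node0
```

Ties at -1 mean you must run --wsrep-recover on those nodes first to get real values to compare.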