The applier thread did not stop in time after receiving a STOP GROUP_REPLICATION signal, so MySQL raises error 3095.
MySQL error 3095 ER_GROUP_REPLICATION_STOP_APPLIER_THREAD_TIMEOUT occurs when the STOP GROUP_REPLICATION command times out because the applier thread is still processing a transaction. Wait for the thread to finish, remove blocking locks, or raise group_replication_components_stop_timeout to resolve the issue.
ER_GROUP_REPLICATION_STOP_APPLIER_THREAD_TIMEOUT
MySQL throws error 3095 ER_GROUP_REPLICATION_STOP_APPLIER_THREAD_TIMEOUT when the STOP GROUP_REPLICATION command cannot shut down the applier thread within the configured timeout. The applier thread is still processing a transaction and signals that it will stop only after finishing its current task.
The error appears only on Group Replication members and was introduced in MySQL 5.7.6. Replication itself keeps running, but administrative scripts may hang or exit with this error code until the applier thread finishes its current task and stops.
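In a client session the failure typically looks similar to this (the exact wording varies by MySQL version):

    mysql> STOP GROUP_REPLICATION;
    ERROR 3095 (HY000): The STOP GROUP_REPLICATION command execution is incomplete: The applier thread got the stop signal while it was busy. The applier thread will stop once the current task is complete.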
Large or long running transactions can keep the applier thread busy, delaying its response to the stop signal.
Row locks or metadata locks held by concurrent sessions can block the applier from committing, extending the wait time.
Slow disk I/O, high network latency, or insufficient CPU can slow the transaction apply rate and trigger the timeout.
Schema differences between members or missing secondary indexes can make row lookups during apply slower, prolonging applier activity.
First, check whether the applier thread is still working through a backlog by querying performance_schema.replication_group_member_stats. If the remaining queue is small and shrinking, simply wait.
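A minimal check, assuming MySQL 8.0, where the applier queue counter is exposed (on 5.7 only the certification queue column COUNT_TRANSACTIONS_IN_QUEUE is available):

    -- How many remote transactions are still waiting to be applied on each member
    SELECT MEMBER_ID,
           COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE AS queued_transactions
    FROM performance_schema.replication_group_member_stats;

If queued_transactions shrinks on each run, the applier is making progress and waiting is usually enough.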
If the queue is large or stuck, locate the blocking sessions with sys.innodb_lock_waits (or performance_schema.data_lock_waits on MySQL 8.0) and kill them to free the applier.
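A sketch using the sys schema (bundled with MySQL since 5.7.7) to surface InnoDB lock waits and terminate the blocker:

    -- Who is blocked, and which session holds the lock
    SELECT waiting_pid, waiting_query,
           blocking_pid, blocking_query
    FROM sys.innodb_lock_waits;

    -- Terminate the blocking session; replace 1234 with the blocking_pid from above
    KILL 1234;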
You can extend the timeout by raising group_replication_components_stop_timeout before issuing STOP GROUP_REPLICATION.
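For example (the variable is in seconds; the default depends on the MySQL version):

    -- Allow Group Replication components up to 10 minutes to shut down
    SET GLOBAL group_replication_components_stop_timeout = 600;
    STOP GROUP_REPLICATION;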
After fixing blockers, run STOP GROUP_REPLICATION again and verify that the applier thread disappears from SHOW PROCESSLIST.
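A quick verification; after a clean stop the local member should report itself as OFFLINE:

    SHOW PROCESSLIST;
    -- The local member's state should read OFFLINE after a clean stop
    SELECT MEMBER_HOST, MEMBER_STATE
    FROM performance_schema.replication_group_members;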
When deploying rolling upgrades, large schema changes pushed by pt-online-schema-change may keep the applier busy. Schedule upgrades during low traffic windows or chunk the migration.
If analytical workloads create huge transactions, cap them with group_replication_transaction_size_limit or split batches to reduce apply time (see the example after this list).
During failover tests, background backup jobs can lock tables. Pause backups before stopping replication.
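The transaction-size cap mentioned above is a one-line setting; a minimal sketch (the value is in bytes, and transactions above the limit are rejected on the originating member):

    -- Reject transactions larger than roughly 50 MB
    SET GLOBAL group_replication_transaction_size_limit = 52428800;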
Keep transactions small and short to let the applier stop quickly.
Monitor replication apply delay with performance_schema tables or custom queries in Galaxy to spot backlogs early.
Set a realistic group_replication_components_stop_timeout based on workload characteristics and test it in staging.
Create automated alerts in Galaxy whenever the apply queue surpasses a threshold so engineers act before issuing stop commands.
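A query like the following can feed such an alert (a sketch assuming MySQL 8.0; the threshold of 1000 is an arbitrary example to tune for your workload):

    -- Surface members whose applier backlog exceeds the alert threshold
    SELECT MEMBER_ID,
           COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE AS backlog
    FROM performance_schema.replication_group_member_stats
    WHERE COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE > 1000;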
ER_GROUP_REPLICATION_START_APPLIER_THREAD_TIMEOUT occurs when the applier cannot start promptly; investigate long DDL and resource limits.
ERROR 3093 ER_GROUP_REPLICATION_RUNNING signals that START GROUP_REPLICATION was issued on an already running member.
ER_GROUP_REPLICATION_COMMAND_FAILURE appears when a group replication command fails for unspecified reasons; review the error log for details.
Gigantic INSERT or UPDATE statements keep the applier thread occupied until the entire batch finishes, delaying the stop operation.
Row or metadata locks held by other sessions prevent the applier from committing, so it cannot honor the stop request until the locks are released.
CPU, I/O, or network bottlenecks slow down the apply process, making the timeout more likely during peak load.
Missing indexes or schema differences force full table scans during apply, extending execution time.
Timeout while starting the applier thread; often caused by locks or high load at startup.
Start command issued but replication is already running; check node state before starting.
Generic failure executing a group replication command; inspect the error log for details.
Error parsing a replication conflict-resolution function; verify the function specification in your configuration.
If COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE is steadily decreasing, wait until it reaches zero. If it stays flat for several minutes, investigate locks and consider killing blockers.
The node continues to operate, but maintenance scripts may hang. Always verify that replication stops cleanly before shutdown.
Raising group_replication_components_stop_timeout only changes how long the STOP command waits; it does not impact runtime performance.
Galaxy lets you run the monitoring queries above, share them with your team, and receive AI suggestions to optimize timeout settings.