[Bug]: Unable to open db at path ...ds_1 with error: Invalid argument ...ds_1/db//CURRENT: does not exist (create_if_missing is false) #166

@xiexiaoy

Description

[2025-12-19T12:21:06.110270 I 1133524] [cc_stream_receiver.cpp:1527] RecoverStateCheckRequest, node 1 is not leader of node group 1
[2025-12-19T12:21:06.112022 I 1133630] [sharder.cpp:701] update node group ng2 leader to node_id: 2
[2025-12-19T12:21:06.113811 I 1133630] [sharder.cpp:701] update node group ng2 leader to node_id: 2
[2025-12-19T12:21:06.114836 I 1133630] [sharder.cpp:701] update node group ng2 leader to node_id: 2
[2025-12-19T12:21:06.277074 I 1133524] [sharder.cpp:701] update node group ng0 leader to node_id: 0
[2025-12-19T12:21:06.352659 I 1133524] [log_instance.cpp:850] Received ReplayLogRequest.
[2025-12-19T12:21:06.352993 I 1133524] [log_instance.cpp:933] The node is not leader, term_if_is_lg_leader_ < 0
[2025-12-19T12:21:06.353053 I 1133524] [log_state.h:844] Updating node group term, ng:0,term:2
[2025-12-19T12:21:06.354315 I 1133524] [log_state.h:885] Create a new leader info
[2025-12-19T12:21:06.862104 I 1133525] [data_store_service.cpp:393] Connecting and starting data store for shard id:1, open_mode:2, create_db_if_missing:0, data_store_ is null:1
[2025-12-19T12:21:06.868587 I 1133525] [rocksdb_data_store.cpp:256] DB Open took 2 ms
[2025-12-19T12:21:06.868821 E 1133525] [rocksdb_data_store.cpp:262] Unable to open db at path /home/xiexy/edisk/service/eloqdata-mongodb-cluster/data/data-b/eloq_dss/rocksdb_data/ds_1 with error: Invalid argument: /home/xiexy/edisk/service/eloqdata-mongodb-cluster/data/data-b/eloq_dss/rocksdb_data/ds_1/db//CURRENT: does not exist (create_if_missing is false)
[2025-12-19T12:21:06.869271 E 1133525] [rocksdb_data_store_factory.h:65] Failed to start db instance in data store service
[2025-12-19T12:21:06.869603 I 1133525] [rocksdb_data_store.cpp:71] Shutting down RocksDBDataStore
[2025-12-19T12:21:06.871012 E 1133525] [data_store_service.cpp:405] Failed to create data store
[2025-12-19T12:21:06.871053 E 1133525] [data_store_service.cpp:2223] OpenDataStore failed for DSS shard 1, shard_id_: 1, shard_status_: 4, use time: 9 ms
[2025-12-19T12:21:06.871136 I 1133525] [cc_node.cpp:382] CC node 1 becomes the leader of ng#1. Term: 2
[2025-12-19T12:21:06.873618 I 1133524] [sharder.cpp:701] update node group ng1 leader to node_id: 1
[2025-12-19T12:21:06.879593 I 1133525] [log_replay_service.cpp:321] replay service accepting new stream: 8589934694 from log group: 0 to cc_ng: 1 at term: 2
[2025-12-19T12:21:06.884104 I 1133524] [cc_node.cpp:125] The leader of cc node group ng#1 with the term 2 has been recovered.
[2025-12-19T12:21:06.884148 I 1133524] [log_replay_service.cpp:671] replay connection: cc node group: 1, term: 2, log group: 0, set recovering status to finished
[2025-12-19T12:21:06.884451 I 1133524] [log_replay_service.cpp:772] replay service stream: 8589934694, is closed, active stream cnt: 0
[2025-12-19T12:21:06.958520 I 1133524] [log_instance.cpp:850] Received ReplayLogRequest.
[2025-12-19T12:21:06.958622 I 1133524] [log_instance.cpp:933] The node is not leader, term_if_is_lg_leader_ < 0
[2025-12-19T12:21:06.958645 I 1133524] [log_state.h:844] Updating node group term, ng:1,term:2
[2025-12-19T12:21:06.958992 I 1133524] [log_state.h:885] Create a new leader info
[2025-12-19T12:21:07.111126 I 1133630] [sharder.cpp:701] update node group ng0 leader to node_id: 0
[2025-12-19T12:21:07.111111 I 1133524] [cc_stream_receiver.cpp:1553] RecoverStateCheckResponse with error, node_group: 2, error_code:-1
[2025-12-19T12:21:07.111961 I 1133630] [sharder.cpp:701] update node group ng1 leader to node_id: 1
[2025-12-19T12:21:07.113524 I 1133630] [sharder.cpp:701] update node group ng2 leader to node_id: 2
[2025-12-19T12:21:07.114226 I 1133630] [sharder.cpp:701] update node group ng0 leader to node_id: 0
[2025-12-19T12:21:07.115046 I 1133630] [sharder.cpp:701] update node group ng1 leader to node_id: 1
[2025-12-19T12:21:07.116200 I 1133630] [sharder.cpp:701] update node group ng2 leader to node_id: 2
[2025-12-19T12:21:07.117269 I 1133630] [sharder.cpp:701] update node group ng2 leader to node_id: 2
[2025-12-19T12:21:08.110986 I 1133524] [cc_stream_receiver.cpp:1553] RecoverStateCheckResponse with error, node_group: 2, error_code:-1
[2025-12-19T12:21:08.111560 I 1133630] [sharder.cpp:701] update node group ng0 leader to node_id: 0
[2025-12-19T12:21:08.112385 I 1133630] [sharder.cpp:701] update node group ng1 leader to node_id: 1
[2025-12-19T12:21:08.113775 I 1133630] [sharder.cpp:701] update node group ng2 leader to node_id: 2
[2025-12-19T12:21:08.114792 I 1133630] [sharder.cpp:701] update node group ng2 leader to node_id: 2
[2025-12-19T12:21:09.111903 I 1133524] [cc_stream_receiver.cpp:1553] RecoverStateCheckResponse with error, node_group: 2, error_code:-1
[2025-12-19T12:21:09.112032 I 1133630] [sharder.cpp:701] update node group ng0 leader to node_id: 0
[2025-12-19T12:21:09.112887 I 1133630] [sharder.cpp:701] update node group ng1 leader to node_id: 1
[2025-12-19T12:21:09.114332 I 1133630] [sharder.cpp:701] update node group ng2 leader to node_id: 2
[2025-12-19T12:21:09.115479 I 1133630] [sharder.cpp:701] update node group ng2 leader to node_id: 2
[2025-12-19T12:21:10.111411 I 1133524] [cc_stream_receiver.cpp:1553] RecoverStateCheckResponse with error, node_group: 2, error_code:-1
[2025-12-19T12:21:10.111611 I 1133630] [sharder.cpp:701] update node group ng0 leader to node_id: 0
[2025-12-19T12:21:10.112023 I 1133630] [sharder.cpp:701] update node group ng1 leader to node_id: 1
[2025-12-19T12:21:10.113174 I 1133630] [sharder.cpp:701] update node group ng2 leader to node_id: 2
[2025-12-19T12:21:10.114403 I 1133630] [sharder.cpp:701] update node group ng2 leader to node_id: 2
[2025-12-19T12:21:11.111896 I 1133524] [cc_stream_receiver.cpp:1553] RecoverStateCheckResponse with error, node_group: 2, error_code:-1
[2025-12-19T12:21:11.112426 I 1133630] [sharder.cpp:701] update node group ng0 leader to node_id: 0
[2025-12-19T12:21:11.113127 I 1133630] [sharder.cpp:701] update node group ng1 leader to node_id: 1
[2025-12-19T12:21:11.114302 I 1133630] [sharder.cpp:701] update node group ng2 leader to node_id: 2
[2025-12-19T12:21:11.115054 I 1133630] [sharder.cpp:701] update node group ng2 leader to node_id: 2
[2025-12-19T12:21:12.112342 I 1133524] [cc_stream_receiver.cpp:1553] RecoverStateCheckResponse with error, node_group: 2, error_code:-1
[2025-12-19T12:21:12.112742 I 1133630] [sharder.cpp:701] update node group ng0 leader to node_id: 0
[2025-12-19T12:21:12.113730 I 1133630] [sharder.cpp:701] update node group ng1 leader to node_id: 1
[2025-12-19T12:21:12.114725 I 1133630] [sharder.cpp:701] update node group ng2 leader to node_id: 2
[2025-12-19T12:21:12.115468 I 1133630] [sharder.cpp:701] update node group ng2 leader to node_id: 2
[2025-12-19T12:21:13.112493 I 1133524] [cc_stream_receiver.cpp:1553] RecoverStateCheckResponse with error, node_group: 2, error_code:-1
[2025-12-19T12:21:13.113023 I 1133630] [sharder.cpp:701] update node group ng0 leader to node_id: 0
[2025-12-19T12:21:13.113693 I 1133630] [sharder.cpp:701] update node group ng1 leader to node_id: 1
[2025-12-19T12:21:13.115110 I 1133630] [sharder.cpp:701] update node group ng2 leader to node_id: 2
[2025-12-19T12:21:13.115871 I 1133630] [sharder.cpp:701] update node group ng2 leader to node_id: 2
[2025-12-19T12:21:13.944593 I 1133618] [cc_request.h:3153] ccs 0 memory usage report, committed 9558240, allocated 9096000, frag ratio 4.8 , heap full: 0
[2025-12-19T12:21:13.944674 I 1133618] [cc_request.h:3153] ccs 1 memory usage report, committed 9558240, allocated 9096000, frag ratio 4.8 , heap full: 0
[2025-12-19T12:21:13.944804 I 1133618] [local_cc_shards.h:486] Table range memory report: allocated 0, committed 0, full: 0
[2025-12-19T12:21:13.945168 I 1133618] [checkpointer.cpp:229] Begin checkpoint of node group #1 with timestamp: 1766118068944303. The memory usage of node is: 17765 KB.
[2025-12-19T12:21:13.946324 I 1133618] [checkpointer.cpp:372] Checkpoint of node group #1 succeeded with timestamp: 1766118068944303
[2025-12-19T12:21:13.946877 I 1133618] [log_agent.cpp:453] UpdateCheckpointTs lg_id:0 node_id:0 ckpt_ts:1766118068944303
[2025-12-19T12:21:14.112579 I 1133524] [cc_stream_receiver.cpp:1553] RecoverStateCheckResponse with error, node_group: 2, error_code:-1
[2025-12-19T12:21:14.112686 I 1133630] [sharder.cpp:701] update node group ng0 leader to node_id: 0
[2025-12-19T12:21:14.112928 I 1133630] [sharder.cpp:701] update node group ng1 leader to node_id: 1
[2025-12-19T12:21:14.113539 I 1133630] [sharder.cpp:701] update node group ng2 leader to node_id: 2
[2025-12-19T12:21:14.114056 I 1133630] [sharder.cpp:701] update node group ng2 leader to node_id: 2
[2025-12-19T12:21:14.814174 I 1133525] [data_store_service.cpp:393] Connecting and starting data store for shard id:2, open_mode:2, create_db_if_missing:0, data_store_ is null:1
[2025-12-19T12:21:14.817266 I 1133525] [rocksdb_data_store.cpp:256] DB Open took 0 ms
[2025-12-19T12:21:14.817296 E 1133525] [rocksdb_data_store.cpp:262] Unable to open db at path /home/xiexy/edisk/service/eloqdata-mongodb-cluster/data/data-b/eloq_dss/rocksdb_data/ds_2 with error: Invalid argument: /home/xiexy/edisk/service/eloqdata-mongodb-cluster/data/data-b/eloq_dss/rocksdb_data/ds_2/db//CURRENT: does not exist (create_if_missing is false)
[2025-12-19T12:21:14.817340 E 1133525] [rocksdb_data_store_factory.h:65] Failed to start db instance in data store service
[2025-12-19T12:21:14.817360 I 1133525] [rocksdb_data_store.cpp:71] Shutting down RocksDBDataStore
[2025-12-19T12:21:14.817941 E 1133525] [data_store_service.cpp:405] Failed to create data store
[2025-12-19T12:21:14.818003 E 1133525] [data_store_service.cpp:2223] OpenDataStore failed for DSS shard 2, shard_id_: 2, shard_status_: 4, use time: 3 ms
[2025-12-19T12:21:14.818037 I 1133525] [cc_node.cpp:382] CC node 1 becomes the leader of ng#2. Term: 3
[2025-12-19T12:21:14.822325 I 1133525] [sharder.cpp:701] update node group ng2 leader to node_id: 1
[2025-12-19T12:21:14.826436 I 1133525] [log_replay_service.cpp:321] replay service accepting new stream: 17179869609 from log group: 0 to cc_ng: 2 at term: 3
[2025-12-19T12:21:14.827185 I 1133524] [cc_node.cpp:125] The leader of cc node group ng#2 with the term 3 has been recovered.
[2025-12-19T12:21:14.827256 I 1133524] [log_replay_service.cpp:671] replay connection: cc node group: 2, term: 3, log group: 0, set recovering status to finished
[2025-12-19T12:21:14.827448 I 1133524] [log_replay_service.cpp:772] replay service stream: 17179869609, is closed, active stream cnt: 0
[2025-12-19T12:21:14.827703 I 1133495] [data_substrate.cpp:192] DataSubstrate started successfully
[2025-12-19T12:21:14.842269 I 1133525] [log_instance.cpp:850] Received ReplayLogRequest.
[2025-12-19T12:21:14.842339 I 1133525] [log_instance.cpp:933] The node is not leader, term_if_is_lg_leader_ < 0
[2025-12-19T12:21:14.842351 I 1133525] [log_state.h:844] Updating node group term, ng:2,term:3
[2025-12-19T12:21:14.842406 I 1133525] [log_state.h:885] Create a new leader info
[2025-12-19T12:21:14.844724 I 1133495] [data_store_service_client_closure.cpp:671] DiscoverAllTableNamesCallback, error_code:2, error_msg: KV store not opened yet.
[2025-12-19T12:21:14.846159 I 1133495] [data_store_service_client_closure.cpp:671] DiscoverAllTableNamesCallback, error_code:2, error_msg: KV store not opened yet.
[2025-12-19T12:21:14.850543 I 1133495] [data_store_service_client_closure.cpp:671] DiscoverAllTableNamesCallback, error_code:2, error_msg: KV store not opened yet.
[2025-12-19T12:21:14.851691 I 1133903] [data_substrate.cpp:227] Shutting down the tx service.
[2025-12-19T12:21:14.852574 I 1133618] [cc_request.h:3153] ccs 0 memory usage report, committed 9558240, allocated 9096000, frag ratio 4.8 , heap full: 0
[2025-12-19T12:21:14.852631 I 1133618] [cc_request.h:3153] ccs 1 memory usage report, committed 9558240, allocated 9096000, frag ratio 4.8 , heap full: 0
[2025-12-19T12:21:14.852650 I 1133618] [local_cc_shards.h:486] Table range memory report: allocated 16, committed 4096, full: 0
[2025-12-19T12:21:14.852707 I 1133618] [checkpointer.cpp:229] Begin checkpoint of node group #2 with timestamp: 1766118074852421. The memory usage of node is: 17765 KB.
[2025-12-19T12:21:14.852782 I 1133618] [checkpointer.cpp:372] Checkpoint of node group #2 succeeded with timestamp: 1766118074852421
[2025-12-19T12:21:14.852826 I 1133618] [log_agent.cpp:453] UpdateCheckpointTs lg_id:0 node_id:0 ckpt_ts:1766118074852421
[2025-12-19T12:21:14.855968 I 1133618] [cc_request.h:3153] ccs 0 memory usage report, committed 9558240, allocated 9096000, frag ratio 4.8 , heap full: 0
[2025-12-19T12:21:14.856089 I 1133618] [cc_request.h:3153] ccs 1 memory usage report, committed 9558240, allocated 9096000, frag ratio 4.8 , heap full: 0
[2025-12-19T12:21:14.856127 I 1133618] [local_cc_shards.h:486] Table range memory report: allocated 32, committed 4096, full: 0
[2025-12-19T12:21:14.856220 I 1133618] [checkpointer.cpp:229] Begin checkpoint of node group #1 with timestamp: 1766118074855854. The memory usage of node is: 17765 KB.
[2025-12-19T12:21:14.856313 I 1133618] [checkpointer.cpp:372] Checkpoint of node group #1 succeeded with timestamp: 1766118074855854
[2025-12-19T12:21:14.856405 I 1133618] [log_agent.cpp:453] UpdateCheckpointTs lg_id:0 node_id:0 ckpt_ts:1766118074855854
[2025-12-19T12:21:14.863565 I 1133903] [sharder.cpp:67] Shutting down the sharder at node #1
[2025-12-19T12:21:14.864886 I 1133903] [server.cpp:1189] Server[txservice::remote::CcStreamReceiver] is going to quit
[2025-12-19T12:21:14.865962 I 1133631] [log_replay_service.cpp:142] replay service notify thread quits
[2025-12-19T12:21:14.866353 I 1133903] [server.cpp:1189] Server[txservice::fault::RecoveryService] is going to quit
[2025-12-19T12:21:14.866726 I 1133903] [server.cpp:1189] Server[txservice::remote::CcNodeService] is going to quit
[2025-12-19T12:21:14.869184 I 1133903] [sharder.cpp:110] The sharder at node #1 shut down.
[2025-12-19T12:21:14.901480 I 1133903] [tx_service.h:1322] Tx service is unregistered.
[2025-12-19T12:21:14.901634 I 1133903] [sharder.cpp:115] Close Stream sender at node #1
[2025-12-19T12:21:14.904505 I 1133903] [data_substrate.cpp:229] Tx service shut down.
[2025-12-19T12:21:14.904567 I 1133903] [data_substrate.cpp:234] Shutting down the storage handler.
[2025-12-19T12:21:14.905517 I 1133903] [server.cpp:1189] Server[EloqDS::DataStoreService] is going to quit
[2025-12-19T12:21:14.907719 I 1133903] [data_substrate.cpp:242] Storage handler shut down.
[2025-12-19T12:21:14.907805 I 1133903] [data_substrate.cpp:248] Shutting down the internal logservice.
[2025-12-19T12:21:14.907930 I 1133903] [server.cpp:1189] Server[braft::RaftStatImpl+braft::FileServiceImpl+braft::RaftServiceImpl+braft::CliServiceImpl+txlog::LogServiceImpl] is going to quit
[2025-12-19T12:21:14.908987 I 1133903] [node.cpp:994] node lg0:127.0.0.1:12012:0:0 shutdown, current_term 2 state FOLLOWER
[2025-12-19T12:21:14.909447 I 1133903] [node.cpp:994] node lg0:127.0.0.1:12012:0:0 shutdown, current_term 2 state SHUTDOWN
[2025-12-19T12:21:14.909478 I 1133903] [node.cpp:994] node lg0:127.0.0.1:12012:0:0 shutdown, current_term 2 state SHUTDOWN
[2025-12-19T12:21:14.914058 I 1133903] [data_substrate.cpp:282] Internal logservice shut down.
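For context on the error above: the log shows `create_db_if_missing:0`, so RocksDB is being asked to open an *existing* database, and it fails because the `CURRENT` manifest-pointer file is absent under `.../ds_1/db/` and `.../ds_2/db/`. RocksDB writes `CURRENT` when a database is first created, so its absence means either the shard's DB directory was never initialized or the configured data path points at the wrong location. A minimal diagnostic sketch to distinguish the two cases before opening (the helper name is hypothetical, not part of EloqDSS):

```python
import os

def rocksdb_dir_initialized(db_dir: str) -> bool:
    """Return True if db_dir looks like an initialized RocksDB database,
    i.e. the CURRENT manifest-pointer file exists. RocksDB creates
    CURRENT on first open with create_if_missing=True; opening with
    create_if_missing=False fails exactly when it is absent."""
    return os.path.isfile(os.path.join(db_dir, "CURRENT"))

# Example: probe the shard path from the log before opening it.
shard_db = ("/home/xiexy/edisk/service/eloqdata-mongodb-cluster/data/"
            "data-b/eloq_dss/rocksdb_data/ds_1/db")
if not rocksdb_dir_initialized(shard_db):
    print(f"{shard_db} is not an initialized RocksDB directory; "
          "either the data path is wrong or the shard DB was never created")
```

If the directory really is a fresh shard, the service would need to open it with `create_if_missing` enabled (or pre-create the DB); if data is expected to exist there, the path configuration or the contents of the data directory should be checked instead.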

Metadata

Labels

bug (Something isn't working)
