
[Bug]: Query failed with error "failed to query: loaded collection do not found any channel in target, may be in recovery: collection on recovering" after rebooting mixcoord during continuous major compaction #38811

Open
binbinlv opened this issue Dec 27, 2024 · 3 comments

binbinlv (Contributor) commented Dec 27, 2024

Is there an existing issue for this?

- [x] I have searched the existing issues

Environment

- Milvus version: 2.4-20241224-648078e8
- Deployment mode(standalone or cluster): cluster
- MQ type(rocksmq, pulsar or kafka): pulsar
- SDK version(e.g. pymilvus v2.0.0rc2): 2.4.13rc5
- OS(Ubuntu or CentOS): 
- CPU/Memory: 
- GPU: 
- Others:

Current Behavior

Query failed with the error "failed to query: loaded collection do not found any channel in target, may be in recovery: collection on recovering" after rebooting mixcoord during continuous major compaction.

Expected Behavior

Queries succeed after mixcoord is rebooted during major compaction.

Steps To Reproduce

1. Create a collection and insert 10M rows (dim=128), with is_clustering_key=True set on the int64 field (see the sketch below)
2. Trigger continuous major compaction on this collection
3. Run a continuous query(count(*)) against this collection
4. Reboot the mixcoord
5. Check the query(count(*)) result
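
For reference, a minimal pymilvus sketch of these steps, assuming pymilvus 2.4.x; the endpoint, collection/field names, batch sizes, and the IVF_FLAT index are illustrative, and `compact(is_clustering=True)` follows the documented clustering-compaction API:

```python
import random
from pymilvus import (
    connections, Collection, CollectionSchema, FieldSchema, DataType,
)

connections.connect(host="localhost", port="19530")  # placeholder endpoint

# Step 1: an int64 clustering key plus a 128-dim vector field.
schema = CollectionSchema([
    FieldSchema("pk", DataType.INT64, is_primary=True, auto_id=True),
    FieldSchema("key", DataType.INT64, is_clustering_key=True),
    FieldSchema("vec", DataType.FLOAT_VECTOR, dim=128),
])
coll = Collection("major_compaction_repro", schema)

# Insert 10M rows in 1000 batches of 10k, then flush to seal segments.
for _ in range(1000):
    coll.insert([
        [random.randint(0, 1 << 20) for _ in range(10_000)],
        [[random.random() for _ in range(128)] for _ in range(10_000)],
    ])
coll.flush()

coll.create_index("vec", {"index_type": "IVF_FLAT", "metric_type": "L2",
                          "params": {"nlist": 128}})
coll.load()

# Steps 2, 3, 5: repeatedly trigger clustering (major) compaction while
# running query(count(*)); in the real repro both run continuously and
# mixcoord is rebooted (step 4) in the middle.
for _ in range(20):
    coll.compact(is_clustering=True)
    print(coll.query(expr="", output_fields=["count(*)"]))
```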

Milvus Log

grafana:
https://grafana-4am.zilliz.cc/d/uLf5cJ3Ga/milvus2-0?orgId=1&from=now-1h&to=now&var-datasource=prometheus&var-cluster=&var-namespace=qa-milvus&var-instance=major-24-ndoap&var-collection=All&var-app_name=milvus

log:
https://grafana-4am.zilliz.cc/explore?orgId=1&panes=%7B%22ITw%22:%7B%22datasource%22:%22vhI6Vw67k%22,%22queries%22:%5B%7B%22refId%22:%22A%22,%22expr%22:%22%7Bcluster%3D%5C%224am%5C%22,namespace%3D%5C%22qa-milvus%5C%22,pod%3D~%5C%22major-24-ndoap.%2A%5C%22%7D%22,%22datasource%22:%7B%22type%22:%22loki%22,%22uid%22:%22vhI6Vw67k%22%7D%7D%5D,%22range%22:%7B%22from%22:%22now-1h%22,%22to%22:%22now%22%7D%7D%7D&schemaVersion=1

Anything else?

collection name: major_compaction_collection_enable_scalar_clustering_key_1kw

binbinlv added the kind/bug and needs-triage labels Dec 27, 2024
binbinlv added this to the 2.4.19 milestone Dec 27, 2024
binbinlv (Contributor, Author) commented:

/assign @xiaocai2333

binbinlv added the triage/accepted label and removed the needs-triage label Dec 27, 2024
xiaocai2333 (Contributor) commented:
Here, compaction has run many times. To prevent GetRecoveryInfo from returning duplicate data, we recursively walk every valid segment's compaction "From" list when returning the segment view, and in this case that takes a very long time. There are 450 segments in total and each compaction produces 10 segments, so the walk runs about 45 generations deep, which amounts to checking on the order of 10 to the power of 45 segment combinations.
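
A small, self-contained Python sketch of this blowup; hypothetical: `Segment`, `compaction_from`, and the toy lineage stand in for datacoord's actual segment metadata:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Segment:
    id: int
    compaction_from: List["Segment"] = field(default_factory=list)

def walk_naive(seg: Segment, stats: Dict[str, int]) -> None:
    """Expand every ancestor path; shared ancestors are re-expanded,
    so visits grow like fanout ** generations (10 ** 45 in this issue)."""
    stats["visits"] = stats.get("visits", 0) + 1
    for parent in seg.compaction_from:
        walk_naive(parent, stats)

# Toy lineage: each generation has 10 segments, and every segment lists
# all 10 segments of the previous generation in its compaction "From".
gens = [[Segment(i) for i in range(10)]]
for g in range(1, 6):  # 6 generations instead of 45
    gens.append([Segment(g * 10 + i, compaction_from=gens[-1])
                 for i in range(10)])

stats: Dict[str, int] = {}
walk_naive(gens[-1][0], stats)
print(stats["visits"])  # 111111 visits starting from a single segment
```

At 45 generations the same walk would need on the order of 10^45 expansions, matching the slowdown described above.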

xiaocai2333 (Contributor) commented:

Perhaps we can use a map here to track whether a segment has already been visited, so that each segment is checked only once.
Alternatively, we could limit the recursion depth of the check: once it exceeds a certain number of generations, we stop processing.
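
A hypothetical sketch of both mitigations, representing the lineage as a map from segment id to the ids in its compaction "From" list; the names and the 45-generation cap are illustrative:

```python
from typing import Dict, List, Set

def collect_ancestors(seg_id: int, lineage: Dict[int, List[int]],
                      visited: Set[int], max_depth: int = 45,
                      depth: int = 0) -> None:
    """Visited set: each segment is expanded at most once, so the walk is
    linear in the number of segments. Depth cap: stop descending past
    max_depth generations, the second mitigation suggested above."""
    if seg_id in visited or depth > max_depth:
        return
    visited.add(seg_id)
    for parent in lineage.get(seg_id, []):
        collect_ancestors(parent, lineage, visited, max_depth, depth + 1)

# Toy lineage: segment 2 was compacted from 0 and 1, segment 3 from 1 and 2.
lineage = {3: [1, 2], 2: [0, 1]}
seen: Set[int] = set()
collect_ancestors(3, lineage, seen)
print(sorted(seen))  # [0, 1, 2, 3] -- each segment visited exactly once
```

The visited set alone makes the walk linear in the number of segments; the depth cap is a fallback bound if deduplication is not applied.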

yanliang567 removed their assignment Dec 28, 2024