Add link to NO_COPIES allocation explain message #113656

Draft: wants to merge 1 commit into base: main

Conversation

matthewabbott

Adds a no_valid_shard_copies reference link to the NO_COPIES allocation explanation string.

@matthewabbott added the >non-issue, :Distributed/Allocation, Team:Distributed, and Supportability labels on Sep 27, 2024
@elasticsearchmachine added the v9.0.0 and external-contributor labels on Sep 27, 2024
@matthewabbott added and removed the >docs label on Sep 27, 2024
@@ -29,10 +29,10 @@ public static final class Allocation {
 will allocate this shard when a node containing a good copy of its data joins the cluster. If no such node is available, \
 restore this index from a recent snapshot.""";
 
-public static final String NO_COPIES = """
+public static final String NO_COPIES = String.format("""
Contributor

This API is forbidden - you forgot to run ./gradlew precommit before opening this.
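For readers who haven't hit this check before: ./gradlew precommit runs, among other things, a forbidden-APIs scan, and the Locale-less String.format(String, Object...) overload is a typical entry on such lists because its output depends on the JVM's default locale. The sketch below only illustrates that general pattern and the usual remedy of passing an explicit Locale; it is an assumption about why the call is flagged here, not necessarily the fix this PR needs.

```java
import java.util.Locale;

public class ForbiddenFormatExample {
    // Typically flagged by forbidden-apis checks: the result depends on the JVM's default locale.
    static String defaultLocaleMessage(long shardSizeBytes) {
        return String.format("shard size: %,d bytes", shardSizeBytes);
    }

    // Locale-independent alternative: pass an explicit locale such as Locale.ROOT.
    static String rootLocaleMessage(long shardSizeBytes) {
        return String.format(Locale.ROOT, "shard size: %,d bytes", shardSizeBytes);
    }

    public static void main(String[] args) {
        System.out.println(rootLocaleMessage(1_234_567L)); // prints "shard size: 1,234,567 bytes"
    }
}
```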

Contributor

💬 internal

@@ -43,5 +43,6 @@
 "MAX_SHARDS_PER_NODE": "size-your-shards.html#troubleshooting-max-shards-open",
 "FLOOD_STAGE_WATERMARK": "fix-watermark-errors.html",
 "X_OPAQUE_ID": "api-conventions.html#x-opaque-id",
-"FORMING_SINGLE_NODE_CLUSTERS": "modules-discovery-bootstrap-cluster.html#modules-discovery-bootstrap-cluster-joining"
+"FORMING_SINGLE_NODE_CLUSTERS": "modules-discovery-bootstrap-cluster.html#modules-discovery-bootstrap-cluster-joining",
+"ALLOCATION_EXPLAIN_NO_COPIES": "cluster-allocation-explain.html#_no_valid_shard_copy"
Contributor

Please add a fixed [[anchor-name]] to the docs rather than using the #_auto-generated one that might change inadvertently.

Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

See #113667 which forbids this.

stefnestor (Contributor) commented on Sep 27, 2024

@matthewabbott, elaborating ...

(I assume the way Dave knew the above was wrong is that auto-generated anchors can break: they prefix the heading with _ and join the heading's words with _.) On this doc line you'll want to add an explicit anchor line just above the heading, like the example sketched below. So we will be editing the *.asciidoc in this PR after all. Afterwards, whatever you put inside the square brackets, e.g. [[explain-no-valid-shard-copy]], becomes the end of the URL: cluster-allocation-explain.html#explain-no-valid-shard-copy.
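For concreteness, the explicit anchor being described would look roughly like this in cluster-allocation-explain.asciidoc (the heading text and level are assumptions inferred from the auto-generated #_no_valid_shard_copy anchor, not a quote of the actual file):

```asciidoc
[[explain-no-valid-shard-copy]]
==== No valid shard copy

// existing section content stays unchanged; only the [[...]] anchor line is added
```

With that anchor in place, the JSON entry above can point at cluster-allocation-explain.html#explain-no-valid-shard-copy instead of the auto-generated #_no_valid_shard_copy.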
