
feat: user-facing part of variable vnode count #18515

Merged: 13 commits into main on Sep 26, 2024

Conversation

BugenZhao
Member

@BugenZhao commented on Sep 12, 2024

I hereby agree to the terms of the RisingWave Labs, Inc. Contributor License Agreement.

What's changed and what's your intention?

This is progress towards #15900.

Significant changes in this PR:

  • The vnode count for singletons is always 256 (whether existing or newly created), instead of the 1 used in previous PRs. This simplifies backwards compatibility.
  • Add a session variable named streaming_max_parallelism to control the maximum parallelism used by future streaming jobs. Note that the default value of the session variable can be changed globally with ALTER SYSTEM (see the usage sketch below).
  • Retrieve the correct vnode count from the expression context in rw_vnode(), which is used internally for 2-phase aggregation.
  • Rename VirtualNode::COUNT to VirtualNode::COUNT_FOR_COMPAT, indicating that it should not be used in most of the code. Also update the Java binding function accordingly.
  • Other minor fixes for existing TODOs.

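A brief usage sketch of the new session variable (the value 64 is purely illustrative; the variable name and the ALTER SYSTEM behavior come from the list above, while the exact defaults and bounds are defined in the code):

    -- Cap the maximum parallelism of streaming jobs created later in this session.
    SET streaming_max_parallelism = 64;

    -- Override the default of the session variable cluster-wide, as described above.
    ALTER SYSTEM SET streaming_max_parallelism = 64;
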
After this PR, the variable vnode count feature is generally available, with only a few individual TODOs left to fix.

Checklist

  • I have written necessary rustdoc comments
  • I have added necessary unit tests and integration tests
  • All checks passed in ./risedev check (or alias, ./risedev c)

Documentation

  • My PR needs documentation updates. (Please use the Release note section below to summarize the impact on users)

Release note

Introduce a new session config streaming_max_parallelism and update the documentation of streaming_parallelism. Please refer to the rustdoc for details.
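As a hedged verification sketch for the release note (assuming the PostgreSQL-style SHOW/SET commands that RisingWave session variables generally follow):

    -- Inspect the current values of the new variable and the existing one.
    SHOW streaming_max_parallelism;
    SHOW streaming_parallelism;

    -- Reset the new variable to its default (which ALTER SYSTEM may have overridden).
    SET streaming_max_parallelism TO DEFAULT;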

@BugenZhao force-pushed the bz/var-vnode-user-facing-local_split branch from 968ee5d to dd4b4f7 on September 13, 2024 04:26
@BugenZhao force-pushed the bz/var-vnode-user-facing-local_split branch from dd4b4f7 to 8b0a5df on September 13, 2024 06:13
@BugenZhao force-pushed the bz/var-vnode-user-facing-local branch 2 times, most recently from 4ae30fd to 9cdeee0, on September 16, 2024 05:31
@BugenZhao force-pushed the bz/var-vnode-user-facing-local_split branch 2 times, most recently from 0802213 to 61f8b0d, on September 16, 2024 05:35
@BugenZhao changed the base branch from bz/var-vnode-user-facing-local to bz/var-vnode-row-id-gen on September 16, 2024 05:35
@BugenZhao changed the title from "feat: support per-fragment vnode count more more" to "feat: user-facing part of variable vnode count" on Sep 16, 2024
@BugenZhao marked this pull request as ready for review on September 16, 2024 06:59
@BugenZhao force-pushed the bz/var-vnode-user-facing-local_split branch from 632ae87 to f32cb85 on September 16, 2024 07:07
@BugenZhao force-pushed the bz/var-vnode-user-facing-local_split branch from f32cb85 to 669a70e on September 16, 2024 07:09
@BugenZhao force-pushed the bz/var-vnode-user-facing-local_split branch from 669a70e to 43f4a88 on September 16, 2024 07:39
@BugenZhao force-pushed the bz/var-vnode-user-facing-local_split branch from 43f4a88 to bf7205a on September 17, 2024 07:33
@BugenZhao force-pushed the bz/var-vnode-user-facing-local_split branch from bf7205a to 4a20c1c on September 19, 2024 06:22
@BugenZhao force-pushed the bz/var-vnode-user-facing-local_split branch from 4a20c1c to f39bc7d on September 20, 2024 15:51
@wenym1 requested a review from hzxa21 on September 23, 2024 10:30
@BugenZhao force-pushed the bz/var-vnode-user-facing-local_split branch from f531007 to 8414be3 on September 24, 2024 07:34
Base automatically changed from bz/var-vnode-row-id-gen to main on September 24, 2024 08:02
@BugenZhao force-pushed the bz/var-vnode-user-facing-local_split branch from e29b738 to 4e93810 on September 24, 2024 09:20
@BugenZhao force-pushed the bz/var-vnode-user-facing-local_split branch from 4e93810 to a3a5472 on September 24, 2024 09:37
Member

@fuyufjh left a comment

Generally LGTM!

src/meta/src/rpc/ddl_controller.rs (outdated review thread, resolved)
// TODO(var-vnode): after `ServingVnodeMapping::upsert` is made vnode-count-aware,
// we may return 1 for singleton.
///
/// For backwards compatibility, [`VirtualNode::COUNT_FOR_COMPAT`] is used for singleton.
Member

I'm wondering about the original reason we used 256 as the singleton's vnode_count here... What if we change it to 1? That is, which parts of the code rely on the singleton's vnode count being 256?

Member Author

I don't remember exactly, but I previously encountered some problems and found that using 256 for the singleton simplified things.

However, it's true that using 1 is much clearer and more consistent. Given that this PR has already passed the tests, I'll make another attempt at switching to 1 in future PRs.

@graphite-app bot requested a review from a team on September 26, 2024 06:56
@BugenZhao added this pull request to the merge queue on Sep 26, 2024
Merged via the queue into main with commit dba5975 on Sep 26, 2024
31 of 32 checks passed
@BugenZhao deleted the bz/var-vnode-user-facing-local_split branch on September 26, 2024 10:10