
Add support for "nowait" mode in file synchronization #1337

Draft · wants to merge 1 commit into master from persister_via_channels_new
Conversation

@haze518 (Collaborator) commented Nov 11, 2024

Added a nowait mode for file operations: the call returns control to the client without waiting for the operation to complete, so the client can continue working without blocking.

The append, overwrite, and delete methods in FilePersister now take a new confirmation parameter that controls how the operation behaves:

- WaitWithFlush — the operation waits for completion and synchronizes to disk.
- Wait — the operation waits for completion but skips disk synchronization.
- Nowait — the operation returns immediately without waiting.

By default, WaitWithFlush is used to avoid changing current system behavior (a rough sketch of the new type follows below).
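For reference, a minimal sketch of what the new parameter could look like, based on the description above (the derives, the error type, and the exact persister signature are assumptions, not the PR's actual code):

```rust
// Sketch of the new SDK type (sdk/src/confirmation.rs) described above.
#[derive(Debug, Default, Clone, Copy, PartialEq, Eq)]
pub enum Confirmation {
    /// Wait for completion and fsync to disk (the default, preserving
    /// current behavior).
    #[default]
    WaitWithFlush,
    /// Wait for completion, but skip the fsync.
    Wait,
    /// Enqueue the operation and return immediately.
    Nowait,
}

// The FilePersister methods then accept it, roughly:
// async fn append(&self, path: &str, bytes: &[u8], confirmation: Confirmation)
//     -> Result<(), Error>; // error type assumed
```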

A task queue has been added to handle nowait operations asynchronously, so file operations execute in the background without blocking the client.
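Roughly, each file gets a background task draining a channel of commands; something like the following sketch (flume is assumed because the idle-notifier code in the diff below uses `send_async`; `FileCommand` and `spawn_file_task` are illustrative names, not the PR's):

```rust
use tokio::{fs::OpenOptions, io::AsyncWriteExt};

// Commands the background task consumes; illustrative, not the PR's enum.
enum FileCommand {
    Append(Vec<u8>),
    Overwrite(Vec<u8>),
    Delete,
}

// Spawn one task per file; nowait callers just enqueue and return.
fn spawn_file_task(path: String) -> flume::Sender<FileCommand> {
    let (tx, rx) = flume::unbounded::<FileCommand>();
    tokio::spawn(async move {
        while let Ok(cmd) = rx.recv_async().await {
            match cmd {
                FileCommand::Append(bytes) => {
                    if let Ok(mut file) =
                        OpenOptions::new().append(true).create(true).open(&path).await
                    {
                        let _ = file.write_all(&bytes).await;
                    }
                }
                FileCommand::Overwrite(bytes) => {
                    let _ = tokio::fs::write(&path, &bytes).await;
                }
                FileCommand::Delete => {
                    let _ = tokio::fs::remove_file(&path).await;
                    break; // nothing left to do for this file
                }
            }
        }
    });
    tx
}
```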

Inactivity tracking has been added for these tasks: if a task has not been used for a specified period, it is marked as inactive and the system is notified about it.

@haze518 haze518 requested review from spetz, numinnex and hubcio and removed request for numinnex November 11, 2024 15:47
```rust
_ = idle_interval.tick() => {
    if last_used.elapsed() > idle_timeout {
        if let Err(e) = idle_notifier.send_async(path.to_string()).await {
            error!("Error receiving command: {:?}", e);
        }
    }
}
```
@numinnex (Contributor) commented Nov 11, 2024

The log message should be "Error sending idle notification", since this arm sends the notification rather than receiving a command.

@hubcio (Collaborator) left a comment

looks good, but how exactly are you handling closing of the segment? e.g. when it's full. IMHO your new tokio task should be shut down

@hubcio (Collaborator) commented Nov 11, 2024

Also, could you please run benches?

```bash
rm local_data -rf; cargo run --bin iggy-server --release
```

in another shell:

```bash
cargo run --release --bin iggy-bench -- send tcp
cargo run --release --bin iggy-bench -- poll tcp
```

do it for both master and your PR

@haze518 (Collaborator, Author) commented Nov 11, 2024

> looks good, but how exactly are you handling closing of the segment? e.g. when it's full. IMHO your new tokio task should be shut down

@hubcio In that case, the idle_interval.tick() arm will trigger, and the task will shut down since it hasn't been used for a certain period of time
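In other words, the task pairs the command channel with an idle timer, roughly like this (a sketch reconstructed around the diff excerpt above; `rx`, `handle`, and the `break` on idle are assumptions about the surrounding loop):

```rust
let mut idle_interval = tokio::time::interval(idle_timeout);
let mut last_used = tokio::time::Instant::now();
loop {
    tokio::select! {
        cmd = rx.recv_async() => match cmd {
            Ok(cmd) => {
                last_used = tokio::time::Instant::now();
                handle(cmd).await; // hypothetical dispatch of the file operation
            }
            Err(_) => break, // all senders dropped
        },
        _ = idle_interval.tick() => {
            if last_used.elapsed() > idle_timeout {
                if let Err(e) = idle_notifier.send_async(path.to_string()).await {
                    error!("Error sending idle notification: {:?}", e);
                }
                break; // shut the task down after notifying
            }
        }
    }
}
```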

@hubcio (Collaborator) commented Nov 11, 2024

> looks good, but how exactly are you handling closing of the segment? e.g. when it's full. IMHO your new tokio task should be shut down
>
> @hubcio In that case, the idle_interval.tick() arm will trigger, and the task will shut down since it hasn't been used for a certain period of time

why did you go with such an approach? why not just add another command, e.g. Shutdown, and handle it in select? :)
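For context, hubcio's alternative would look roughly like this (the `Shutdown` variant and the surrounding names are hypothetical, extending the command-enum sketch above):

```rust
enum FileCommand {
    Append(Vec<u8>),
    Overwrite(Vec<u8>),
    Delete,
    Shutdown, // sent explicitly when the segment is closed
}

// In the task loop, no idle timer is needed:
loop {
    tokio::select! {
        cmd = rx.recv_async() => match cmd {
            Ok(FileCommand::Shutdown) | Err(_) => break,
            Ok(other) => handle(other).await, // hypothetical dispatch
        },
    }
}
```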

@haze518 (Collaborator, Author) commented Nov 11, 2024

> looks good, but how exactly are you handling closing of the segment? e.g. when it's full. IMHO your new tokio task should be shut down
>
> @hubcio In that case, the idle_interval.tick() arm will trigger, and the task will shut down since it hasn't been used for a certain period of time
>
> why did you go with such an approach? why not just add another command, e.g. Shutdown, and handle it in select? :)

Because in that case, we would need to create a persister for each segment so that the task is deleted when the segment is closed. Alternatively, if we created it manually in some operation, we would also have to call shutdown manually, which didn't seem very convenient.

@hubcio (Collaborator) commented Nov 11, 2024

> looks good, but how exactly are you handling closing of the segment? e.g. when it's full. IMHO your new tokio task should be shut down
>
> @hubcio In that case, the idle_interval.tick() arm will trigger, and the task will shut down since it hasn't been used for a certain period of time
>
> why did you go with such an approach? why not just add another command, e.g. Shutdown, and handle it in select? :)
>
> Because in that case, we would need to create a persister for each segment so that the task is deleted when the segment is closed. Alternatively, if we created it manually in some operation, we would also have to call shutdown manually, which didn't seem very convenient.

ok, acknowledged. let's see if all tests pass and how performance looks :) it should be around 2x-3x faster than current master (AFAIR)

@haze518 force-pushed the persister_via_channels_new branch 4 times, most recently from caa55e7 to afd2fdc on November 12, 2024 15:51
@haze518 force-pushed the persister_via_channels_new branch from afd2fdc to a0dff0b on November 12, 2024 15:58
@coveralls (Collaborator) commented

Pull Request Test Coverage Report for Build 11800982726

Details

  • 168 of 194 (86.6%) changed or added relevant lines in 17 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall first build on persister_via_channels_new at 75.241%

| Changes Missing Coverage | Covered Lines | Changed/Added Lines | % |
|---|---|---|---|
| server/src/main.rs | 6 | 7 | 85.71% |
| server/src/compat/storage_conversion/mod.rs | 0 | 2 | 0.0% |
| server/src/streaming/storage.rs | 0 | 2 | 0.0% |
| sdk/src/confirmation.rs | 10 | 15 | 66.67% |
| server/src/streaming/persistence/persister.rs | 98 | 114 | 85.96% |

Totals Coverage Status
  • Change from base Build 11768736752: 75.2%
  • Covered Lines: 23248
  • Relevant Lines: 30898

💛 - Coveralls

@haze518 haze518 marked this pull request as draft November 13, 2024 16:18