The Pull Request for Issue 101 for no_std #102

Open: wants to merge 13 commits into base: main
Conversation

@Lol3rrr Lol3rrr commented Mar 19, 2022

This is the corresponding pull request for Issue #101.

Currently this only contains the changes for the yielding behaviour of the WriteHandle.



The WriteHandle now stores a function pointer to a yield function, which will be called to avoid simply spinning
while waiting for all the Readers to move on. This function can be specified when creating the WriteHandle and
will default to std::thread::yield_now.
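The approach in this commit message can be sketched roughly as follows. This is a hypothetical illustration, not the crate's actual API: the field and method names (`yield_fn`, `with_yield`, `wait_for_readers`) are made up for the example.

```rust
// Hypothetical sketch of the yield mechanism described in the commit
// message above; names are illustrative, not the crate's actual API.
pub struct WriteHandle<T> {
    inner: T,
    // Called while waiting for readers, instead of busy-spinning.
    yield_fn: fn(),
}

impl<T> WriteHandle<T> {
    /// With the `std` default, this would use `std::thread::yield_now`.
    pub fn new(inner: T) -> Self {
        Self { inner, yield_fn: std::thread::yield_now }
    }

    /// A no_std user would supply their own yield function instead.
    pub fn with_yield(inner: T, yield_fn: fn()) -> Self {
        Self { inner, yield_fn }
    }

    /// Spin until `done` reports that all readers have moved on,
    /// yielding between checks rather than burning CPU.
    pub fn wait_for_readers(&self, mut done: impl FnMut() -> bool) {
        while !done() {
            (self.yield_fn)();
        }
    }
}
```

Using a plain `fn()` pointer (rather than a closure) keeps the type free of extra generics and works without `std`.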
@codecov

codecov bot commented Mar 19, 2022

Codecov Report

Merging #102 (ba70de8) into main (d6d1312) will increase coverage by 2.80%.
Report is 74 commits behind head on main.
The diff coverage is 85.96%.

❗ Current head ba70de8 differs from pull request most recent head 490377a. Consider uploading reports for the commit 490377a to get more accurate results

Files Changed        Coverage
src/aliasing.rs      0.00%
src/read/factory.rs  0.00%
src/read/guard.rs    ø
src/utilities.rs     ø
src/read.rs          66.66%
src/lib.rs           90.90%
src/handle_list.rs   92.30%
src/write.rs         92.85%


Lol3rrr added 2 commits March 20, 2022 18:29
Added the "std" feature to the crate, enabled by default, to allow switching between no_std and std.

Then replaced most uses of "std" with either "core" or "alloc" depending on the type. However, it is still unclear
how to deal with the Mutex/MutexGuard types, as these are currently needed but not directly supported in no_std.
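The std/no_std split described in this commit typically looks something like the following lib.rs preamble. This is a sketch under the assumption of a default `std` feature (as the commit message says); the specific imports are illustrative.

```rust
// Hypothetical lib.rs preamble for the std/no_std split described in the
// commit above; the specific imports are illustrative.
#![cfg_attr(not(feature = "std"), no_std)]

// `alloc` provides Box, Arc, Vec, etc. without the full standard library.
extern crate alloc;

// Import from `core`/`alloc` so the paths resolve in both configurations.
use alloc::{boxed::Box, sync::Arc, vec::Vec};
use core::sync::atomic::{AtomicUsize, Ordering};

// Mutex/MutexGuard have no core/alloc equivalent; this is the open
// question from the commit message. Under the `std` feature they can
// still come from the standard library.
#[cfg(feature = "std")]
use std::sync::{Mutex, MutexGuard};
```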
Owner

@jonhoo jonhoo left a comment


Overall this looks good. I left some minor nits, plus a TODO for the Mutex bit. We'll also want to update CI to test compiling in a no_std environment. You can use this CI file as a starting point.

src/lib.rs: 5 resolved review threads (outdated)
src/sync.rs Outdated
pub(crate) use std::sync::{Arc, Mutex, MutexGuard};
pub(crate) use core::sync::atomic::{fence, AtomicPtr, AtomicUsize, Ordering};
#[cfg(not(loom))]
pub(crate) use std::sync::{Mutex, MutexGuard};
Owner


Just a note to self that this is still a std dependency as we've discussed in #101.

Lol3rrr added 6 commits March 26, 2022 22:23
Made the small adjustments from the initial review on the Pull-Request
A basic implementation of a Lock-Free HandleList
This includes all the basic changes needed to swap to the HandleList and remove the Mutex.
However there are a lot of TODOs that still need to be resolved
Fixed some more TODOs that were still open
Updated the tests to now also work in a no_std environment, which mainly meant swapping out function names
and using the core/alloc versions of std items.
However, the main doc test is still failing, and I'm not entirely sure how to mitigate this problem in an elegant way.
@Lol3rrr
Author

Lol3rrr commented Mar 27, 2022

Fixed all the notes from the review and also added the lock-free list, called HandleList.
Also updated most of the tests to support running in no_std where possible. However, this is not possible for the doc test in the main comment, so I'm unsure whether it should even be touched or just ignored, given that all other tests work in no_std.

Otherwise it should now be fully no_std compliant as far as I can tell, and just needs some minor finishing touches in terms of internal comments and style in certain places.

@jonhoo
Owner

jonhoo commented Mar 28, 2022

Hmm, I think this is going to end up leaking all the entries since there's no Drop implementation. I wonder if the trick here is going to be using some existing crate that provides a lock free list — what do you think?

@Lol3rrr
Author

Lol3rrr commented Mar 28, 2022

Using an existing crate for a lock-free list would certainly make things a bit easier, since we wouldn't need to maintain our own implementation. However, I don't know how we could then have these "snapshots" for the write handle, because the crate would need to support that or something similar.

I think for now I'll try to improve the list I wrote and see if I can get it a bit further, but if that does not work out we should consider using some other crate.

@jonhoo
Owner

jonhoo commented Apr 2, 2022

I think basically any singly-linked atomic linked list gives you "snapshot" read iterators. They sort of have to by nature — if you only add to the head, then any read has to start at the head at the time of the first read, and just walk "backwards in time" from then.
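The prepend-only property described here can be sketched as a minimal atomic list (not the PR's actual HandleList; all names are illustrative). An iterator taken at any moment is a natural snapshot: it can only ever see nodes reachable from the head at that time. Note that this sketch deliberately leaks its nodes, which is exactly the missing-Drop concern raised earlier in the thread.

```rust
use std::marker::PhantomData;
use std::ptr;
use std::sync::atomic::{AtomicPtr, Ordering};

// Minimal sketch of a prepend-only atomic list; nodes are intentionally
// never freed here, so no Drop impl is needed for this illustration.
struct Node<T> {
    value: T,
    next: *mut Node<T>,
}

pub struct AtomicList<T> {
    head: AtomicPtr<Node<T>>,
}

impl<T> AtomicList<T> {
    pub fn new() -> Self {
        Self { head: AtomicPtr::new(ptr::null_mut()) }
    }

    pub fn push(&self, value: T) {
        let node = Box::into_raw(Box::new(Node { value, next: ptr::null_mut() }));
        let mut head = self.head.load(Ordering::Acquire);
        loop {
            // Point the new node at the current head, then try to swing
            // the head over to the new node.
            unsafe { (*node).next = head };
            match self
                .head
                .compare_exchange_weak(head, node, Ordering::Release, Ordering::Acquire)
            {
                Ok(_) => return,
                Err(actual) => head = actual,
            }
        }
    }

    /// The iterator starts from the head as of this call; nodes pushed
    /// afterwards are prepended and therefore invisible to it.
    pub fn snapshot(&self) -> SnapshotIter<'_, T> {
        SnapshotIter { cur: self.head.load(Ordering::Acquire), _list: PhantomData }
    }
}

// Clone is cheap: the iterator is just a pointer, so the same snapshot
// can be walked multiple times.
#[derive(Clone)]
pub struct SnapshotIter<'a, T> {
    cur: *mut Node<T>,
    _list: PhantomData<&'a AtomicList<T>>,
}

impl<'a, T> Iterator for SnapshotIter<'a, T> {
    type Item = &'a T;
    fn next(&mut self) -> Option<&'a T> {
        if self.cur.is_null() {
            return None;
        }
        // Safety: nodes are never freed in this sketch, so the pointer
        // remains valid for the lifetime of the list borrow.
        let node = unsafe { &*self.cur };
        self.cur = node.next;
        Some(&node.value)
    }
}
```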

@Lol3rrr
Author

Lol3rrr commented Apr 2, 2022

Yeah, right, I didn't really think about that. But I think there was a problem where we want to iterate over the same list of handles in the writer while we wait for them to move on (the new code creates a new iterator in a loop, whereas the old code created the iterator in the loop from the MutexGuard, so it was always over the same list).
That's why I create an explicit snapshot and then build an iterator based on that snapshot: we will then always iterate over the same handles.

If we instead created a new iterator every time we check them, a new handle might have been added in the meantime, and we would need to keep track of which handles we saw last time and where they are now (a mapping between the current list state and the last seen "sub-list").

@jonhoo
Owner

jonhoo commented Apr 10, 2022

That's a good point. However, I think you'll generally find those iterators to be Clone specifically for this reason. When you grab an iterator for a linked list, it really just holds a pointer to a head node. And for a shared-reference iterator, that iterator should be Clone so that you can iterate from that same node multiple times.

Reworked the structure of the new HandleList to now handle dropping the list correctly and free
all of its entries.
This meant creating a new InnerList that is wrapped in an Arc, which lets us know that once InnerList
is dropped, no one else can have access to it or any of its entries anymore.
@Lol3rrr
Author

Lol3rrr commented May 18, 2022

Now fixed the previous Drop problem, and also found and fixed another minor problem with the list snapshot.

I first fixed the Drop problem for the HandleList by moving the actual ownership of the list into a new InnerList, which the HandleList then wraps in an Arc. This ensures that once InnerList is dropped, no one can still hold a reference to any of its entries, so we can safely free all the entries of the list at that point.

This also revealed that ListSnapshot was not tied to the actual list in any way (by a lifetime or anything like that), which could have resulted in use-after-frees with the new Drop behaviour. To address this, ListSnapshot now also stores a clone of the Arc that wraps the InnerList. This ensures that even if every other handle to the list is dropped, the snapshot itself keeps the list alive, so none of the entries are freed yet and we can safely access them.
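The ownership fix described above can be sketched like this. The names HandleList, InnerList, and ListSnapshot come from the comment itself; everything else (the `usize` entries in particular) is illustrative.

```rust
use std::sync::Arc;

// Sketch of the Arc-based ownership scheme described above.
struct InnerList {
    // Stand-in for the real entry nodes. In the scheme described above,
    // dropping InnerList is the point at which freeing every entry
    // becomes safe, because no clone of the Arc remains.
    entries: Vec<usize>,
}

pub struct HandleList {
    inner: Arc<InnerList>,
}

pub struct ListSnapshot {
    // The snapshot holds its own clone of the Arc, so even if every
    // other handle is dropped, the entries stay alive: no use-after-free.
    inner: Arc<InnerList>,
}

impl HandleList {
    pub fn snapshot(&self) -> ListSnapshot {
        ListSnapshot { inner: Arc::clone(&self.inner) }
    }
}

impl ListSnapshot {
    pub fn entries(&self) -> &[usize] {
        &self.inner.entries
    }
}
```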

Owner

@jonhoo jonhoo left a comment


Thanks for checking back in on this! I left some more in-depth notes this time.

src/write.rs (outdated; review thread resolved)
Comment on lines +56 to +57
// We dont need a Drop implementation as of now, because the Epoch for this Handle will not
// be freed again
Owner


I don't think it's okay for us to only free a handle's entry after all handles have been dropped. Doing so would effectively mean that the application has unbounded memory growth, and if it, for example, creates a new read handle regularly or in a loop or something, it'll run into problems. I think we have to support removal, even if it's maybe delayed or periodic.

Author


Yeah, I also thought about this, but then we'd run into the problem of having to deal with memory reclamation. What would you think about not actually removing the entries, but instead being able to reuse entries in the list? I haven't thought about it too much yet, and it would not entirely solve the problem of unbounded memory usage, but it would at least help mitigate most cases. Just something to maybe consider as well.

Owner


Ah, yes, reuse would be a nice way to go about it!

src/write.rs (outdated; review thread resolved)
src/write.rs Outdated
self.last_epochs.resize(epochs.capacity(), 0);

// make sure we have enough space for all the epochs in the current snapshot
self.last_epochs.resize(epochs.iter().count(), 0);
Owner


It's a little unfortunate to end up having to traverse the list twice. It'd be better if we could avoid that.

src/write.rs (review thread resolved)
src/handle_list.rs (outdated; review thread resolved)
Comment on lines 152 to 153
// We also have no one mutating Entries on the List and therefore we can access this without
// any extra synchronization needed.
Owner


This will no longer be true once we have removal of individual elements.

src/handle_list.rs: 3 resolved review threads (outdated)
Fixed most of the basic problems picked out in the last PR review, mostly the ones that had very
straightforward solutions and didn't require any big changes or new ideas/features.
@Lol3rrr
Author

Lol3rrr commented May 21, 2022

Fixed most of the comments you made on the last commit.

There are still some open questions for me though, especially how we want to do the removal of entries in the list, which I already touched on in one of my comments. My main concern is that we would either need nodes that can represent an "empty" state, which would certainly make things more complicated and would mean we could never actually reduce our memory usage, or, if we start removing entries, we would need to worry about memory reclamation, which, as I'm sure you know, can be very tricky (especially when targeting no_std, I think).

Besides that, there is only the small question of how we could avoid having to traverse the list twice in publish, but that's not of the highest priority compared to the other problem we need to solve.

@jonhoo
Owner

jonhoo commented May 28, 2022

I think having a way to represent "unused" is the way to go here. We don't actually have to empty them, since the whole Arc<AtomicUsize> can be re-used by another reader that comes along later! The "used" bit can just be an AtomicBool that you compare_exchange from false to true, and keep searching (or eventually allocate) if nothing is found.

You may find this code from the haphazard crate, which more or less implements the exact same thing, helpful:

Maybe we could even take that implementation and extract it into its own crate so we can re-use it!
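The reuse scheme suggested above might look roughly like the following. This is a single-threaded sketch for clarity (not sento's or the PR's actual code); a real implementation would use a lock-free prepend-only list rather than a `Vec` behind `&mut self`.

```rust
use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
use std::sync::Arc;

// Each slot keeps a "used" flag; acquiring a slot first tries to flip an
// existing flag from false to true before allocating a new slot.
struct Slot {
    used: AtomicBool,
    epoch: Arc<AtomicUsize>,
}

struct Slots {
    slots: Vec<Slot>,
}

impl Slots {
    fn acquire(&mut self) -> Arc<AtomicUsize> {
        // Keep searching for a released slot...
        for slot in &self.slots {
            if slot
                .used
                .compare_exchange(false, true, Ordering::AcqRel, Ordering::Acquire)
                .is_ok()
            {
                return Arc::clone(&slot.epoch);
            }
        }
        // ...and eventually allocate if nothing is found.
        let slot = Slot {
            used: AtomicBool::new(true),
            epoch: Arc::new(AtomicUsize::new(0)),
        };
        let epoch = Arc::clone(&slot.epoch);
        self.slots.push(slot);
        epoch
    }

    fn release(&self, epoch: &Arc<AtomicUsize>) {
        for slot in &self.slots {
            if Arc::ptr_eq(&slot.epoch, epoch) {
                // The whole Arc<AtomicUsize> is re-used by whichever
                // reader acquires this slot next.
                slot.used.store(false, Ordering::Release);
            }
        }
    }
}
```

Because slots are never removed, there is no memory-reclamation problem: memory use is bounded by the peak number of concurrent handles rather than growing without bound.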

@jonhoo
Owner

jonhoo commented May 29, 2022

I extracted that list into its own crate: sento. Give that a try?

@Lol3rrr
Author

Lol3rrr commented May 30, 2022

Hopefully I can find some time this week to properly look into it and integrate it. I'll give an update then.

@indietyp

I would love to use left-right in a no-std project of mine, but development on no-std support seems to have stalled. What is the current state of development, and is there any way I could help out?

@Lol3rrr
Author

Lol3rrr commented Nov 13, 2022

I'm not exactly sure anymore what the stopping factor was, but from a quick look there seem to be two points that need to be addressed:

  • Add CI test for the no_std implementation
  • Switch to the proposed sento crate for the List

Lol3rrr added 2 commits September 7, 2023 21:29
Every list entry now also stores the number of following entries, which is relatively cheap to do,
as we only ever append new entries at the start.

This allows us to easily get the current length of the list, as we can simply read the current head
and read how many followers it has.
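The length trick from this commit can be sketched as follows (illustrative names, not the PR's actual code). Because entries are only ever prepended, a node's follower count is fixed at insertion time and never needs updating, so the list length is a single load of the head.

```rust
use std::ptr;
use std::sync::atomic::{AtomicPtr, Ordering};

// Prepend-only list where each node records how many nodes follow it.
struct Node {
    followers: usize,
    next: *mut Node,
}

struct List {
    head: AtomicPtr<Node>,
}

impl List {
    fn new() -> Self {
        Self { head: AtomicPtr::new(ptr::null_mut()) }
    }

    fn push(&self) {
        let mut head = self.head.load(Ordering::Acquire);
        loop {
            // The new head has exactly `old length` followers.
            let followers = if head.is_null() {
                0
            } else {
                unsafe { (*head).followers + 1 }
            };
            let node = Box::into_raw(Box::new(Node { followers, next: head }));
            match self
                .head
                .compare_exchange_weak(head, node, Ordering::Release, Ordering::Acquire)
            {
                Ok(_) => return,
                Err(actual) => {
                    // Lost the race: free the speculative node and retry
                    // against the new head (and its follower count).
                    unsafe { drop(Box::from_raw(node)) };
                    head = actual;
                }
            }
        }
    }

    /// O(1): just the head plus its follower count, no traversal.
    fn len(&self) -> usize {
        let head = self.head.load(Ordering::Acquire);
        if head.is_null() {
            0
        } else {
            unsafe { (*head).followers + 1 }
        }
    }
}
```

An O(1) length also offers a way around the double traversal in publish noted earlier in the review, since `last_epochs` can be sized without walking the list first.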
Added a first iteration of the reuse of entries in the new HandleList
3 participants