
Conversation

@MrCroxx (Contributor) commented Jul 3, 2025

Which issue does this PR close?

Closes #6370 #6372.

FYI: #5906

Rationale for this change

Introduce FoyerLayer as the foyer hybrid cache integration.

What changes are included in this PR?

Are there any user-facing changes?

MrCroxx added 6 commits July 3, 2025 23:49
@MrCroxx MrCroxx marked this pull request as ready for review July 8, 2025 06:51
@MrCroxx MrCroxx requested a review from Xuanwo as a code owner July 8, 2025 06:51
@dosubot added the size:L ("This PR changes 100-499 lines, ignoring generated files.") and releases-note/feat ("The PR implements a new feature or has a title that begins with 'feat'") labels Jul 8, 2025

@MrCroxx (Contributor, Author) commented Jul 8, 2025

Please hold off on merging after approval. I'll release a new foyer version once all the problems here are fixed, then switch this PR to that version. 🙏

@Xuanwo (Member) left a comment:

Thank you @MrCroxx for working on this, really great. Only a few comments about the details.

@erickguan (Member) left a comment:

This looks cool! I'm going to tag along to see the progress.

```rust
let range = BytesContentRange::default()
    .with_range(start, end - 1)
    .with_size(entry.len() as _);
let buffer = entry.slice(start as usize..end as usize);
```

A Member commented:

Maybe add a comment as a reminder? Up to you.

@MrCroxx changed the title from "feat: introduce foyer layer, partially impl it" to "feat: introduce foyer layer" Jul 15, 2025
@MrCroxx (Contributor, Author) commented Jul 27, 2025

Hi. Just back from a vacation. I'll keep working on this PR tomorrow. 🥰

```rust
let entry = self
    .inner
    .cache
    .fetch(path.clone(), || {
```

A Contributor commented:

OpRead contains a version field that I think we should include in the cache key

A Member commented:

Nice idea!

@MrCroxx (Contributor, Author) commented:

I just found that this requirement is a bit tricky for foyer. As a cache, foyer does not support versioning (and it would be difficult to support: caches retain only partial data, and supporting versioning would require a lot of additional overhead). If a user wants to read the latest version without a version tag, it may lead to reading an incorrect object or to cache misses.

I think a better approach might be to bypass the cache when there is a versioning requirement, or to treat objects without a version and those with an explicit version as two separate objects in the cache, without fallback.

Any ideas? cc @Xuanwo @jorgehermo9 for help.

@jorgehermo9 (Contributor) commented:

I was thinking of something like this https://github.com/jorgehermo9/opendal/blob/0b5520869ca53afd522fbc3faa345fba9f812d3a/core/src/layers/foyer.rs#L204 (taking it from my draft PR of the foyer cache layer)

Just adding it to the cache key so paths with different version are treated as separate cache keys

Example:

reading path=/path/to/blob, version=None -> storing entry with key "/path/to/blob-"
reading path=/path/to/blob, version="1.0" -> storing entry with key "/path/to/blob-1.0"
reading path=/path/to/blob, version="2.0" -> storing entry with key "/path/to/blob-2.0"

This way, you would have one entry per (path, version) combination. I think this is the better approach, right?

> or to treat objects without a version and those with an explicit version as two separate objects in the cache, without fallback.

With that, you meant this? Or did I understand it wrong?
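
To make the combination concrete, a composite key could look roughly like the sketch below (illustrative only, not the draft's actual code; the fields and the helper are assumptions):

```rust
// Illustrative: key the cache by (path, version) so reads with different
// versions never collide. A struct key also avoids the delimiter ambiguity
// of string concatenation (path "a-1" + version "0" and path "a" +
// version "1-0" would both produce the string key "a-1-0").
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
struct FoyerKey {
    path: String,
    version: Option<String>, // None = caller did not request a version
}

fn make_key(path: &str, version: Option<&str>) -> FoyerKey {
    FoyerKey {
        path: path.to_string(),
        version: version.map(str::to_string),
    }
}
```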

@MrCroxx (Contributor, Author) replied:

> Example:
>
> reading path=/path/to/blob, version=None -> storing entry with key "/path/to/blob-"
> reading path=/path/to/blob, version="1.0" -> storing entry with key "/path/to/blob-1.0"
> reading path=/path/to/blob, version="2.0" -> storing entry with key "/path/to/blob-2.0"

Yeah, this example is what I mean. But it treats a read without a version (which falls back to the latest version) and a read with an explicit version differently.

e.g.

obj-1: v1, v2, v3 (latest)

Expected:

read obj-1, v3 => cache read obj-1+v3 (as expected)
read obj-1 => cache read obj-1+ (should be: cache read obj-1+v3)

Although this will not affect correctness, I still want to confirm: is this behavior expected?

@jorgehermo9 (Contributor) commented Aug 6, 2025:

Mmmm yeah, I understand what you mean. It would be a performance improvement, but I'm not sure this layer should assume that the backend treats version=None as the latest one. That is the most common approach, but think of a backend that requires the version to be set and fails with version=None (no fallback from None to latest): the backend's response would differ in those two cases (an error with version=None, the blob bytes with version=latest).

Doing that fallback to latest would also be hard for this cache layer: how can it know which version is the latest? Maybe the remote backend has a version v4 that never reached the cache, which only recorded v3. Should a version=None read fall back to the v3 we have in the cache, or go to the remote backend and check the latest version (v4)?

I think I prefer not to optimize this case, so we don't make assumptions about the underlying backend, in favor of correctness.

A Contributor commented:

Also, the version field is a free-form string; how can you know that v3 is the latest? We could only interpret the version and induce an ordering if it complied with semver or something like that, but with the current implementation we don't really know which tag is the latest based only on its name.

Maybe some backend has another kind of version system that goes v1 (latest), v2, v3, and when a new version arrives, every blob moves one version back: v1 (latest, new blob), v2 (old v1), v3 (old v2), v4 (old v3). This is weird, but users could do it.

@MrCroxx (Contributor, Author) commented Aug 7, 2025:

Makes sense. Let me implement it.

MrCroxx added 2 commits August 6, 2025 10:59
```rust
    writer.write_all(&version_len.to_le_bytes())?;
    writer.write_all(version.as_bytes())?;
} else {
    writer.write_all(&0u64.to_le_bytes())?;
```

A Contributor commented:

This way, the version "" (empty string) would have the same serialization as None, and its decoding would always be None (per the condition at L72, where None is returned if version_len == 0). I think it is wrong to assume None == "" for the version. It is a corner case, but it seems weird to me.

Maybe we could use some "magic" len value such as len = -1 (storing that magic value in a constant?) to represent the None value?

@jorgehermo9 (Contributor) commented Aug 7, 2025:

Or maybe we should do the equivalent of #[serde(skip_serializing_if = "Option::is_none")] here and not write the len at all; then in the decode part, before the reader.read_exact(&mut u64_buf)?, we check whether the buffer is empty at that point. If the buffer is empty, the version is None, because there are no more bytes describing the length.
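
A sketch of that decode-side check (a hypothetical helper over std::io only; error handling simplified):

```rust
use std::io::Read;

// If nothing was written for the version, hitting EOF where the length
// prefix would start means the version is None.
fn decode_version(reader: &mut impl Read) -> std::io::Result<Option<String>> {
    let mut len_buf = [0u8; 8];
    let n = reader.read(&mut len_buf)?;
    if n == 0 {
        return Ok(None); // no bytes left: no version was encoded
    }
    reader.read_exact(&mut len_buf[n..])?; // finish reading the length prefix
    let len = u64::from_le_bytes(len_buf) as usize;
    let mut bytes = vec![0u8; len];
    reader.read_exact(&mut bytes)?;
    String::from_utf8(bytes)
        .map(Some)
        .map_err(|e| std::io::Error::new(std::io::ErrorKind::InvalidData, e))
}
```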

@MrCroxx (Contributor, Author) replied:

Good catch. I only considered the case of S3. In practice, I think these two should be the same for almost all storage backends. However, if there are counterexamples, this is not appropriate.

@MrCroxx (Contributor, Author) added:

> Or maybe we should do the equivalent of #[serde(skip_serializing_if = "Option::is_none")] here and not write the len at all; then in the decode part, before the reader.read_exact(&mut u64_buf)?, we check whether the buffer is empty at that point. If the buffer is empty, the version is None, because there are no more bytes describing the length.

Good idea! I like this solution. Let me fix it.

@jorgehermo9 (Contributor) commented Aug 7, 2025:

Take a look at how bincode does that!

https://github.com/bincode-org/bincode/blob/55fd02934cff567ce1b2ff9d007608818ea6481b/src/enc/mod.rs#L89
https://github.com/bincode-org/bincode/blob/trunk/src/de/mod.rs#L312

They encode a u8 at the start to flag whether the value is Some(_) or None: if Some, the value 1 is stored, followed by the String encoding; if None, only 0 is stored and nothing more.
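
The same scheme in miniature (a sketch of the tag-byte idea, not bincode's actual code):

```rust
use std::io::Write;

// One tag byte distinguishes None (0) from Some (1); the length and the
// string bytes follow only in the Some case, so None and Some("") end up
// with different encodings.
fn encode_version(writer: &mut impl Write, version: Option<&str>) -> std::io::Result<()> {
    match version {
        None => writer.write_all(&[0u8]),
        Some(v) => {
            writer.write_all(&[1u8])?;
            writer.write_all(&(v.len() as u64).to_le_bytes())?;
            writer.write_all(v.as_bytes())
        }
    }
}
```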

@jorgehermo9 (Contributor) commented Aug 7, 2025:

I'm afraid that we are kind of reimplementing the functionality of bincode... Should we really reimplement it just to avoid having a dependency on it? I think it would be a lot easier to just forward the struct to bincode (this is how the default serializer of foyer works, right? If I remember correctly, it is behind a feature flag).

The issue with reimplementing is the danger of having bugs like this.

@MrCroxx (Contributor, Author) replied:

> I think it would be a lot easier to just forward the struct to bincode (this is how the default serializer of foyer works, right?

Yep. Foyer supports serde and bincode.

> Should we really reimplement it just to avoid having a dependency on it?

I'm open to both options because the key and value structs are simple enough.

A Contributor commented:

Also, it would be nice to add a test to assert that the serialization of a key with version=None and version="" is different.

And also a test (a fuzz test if possible, but at least a simple test with a few cases) where we check that decode(encode(key)) == key, so we ensure that decode is the inverse of encode.
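
Something along these lines (a sketch; it assumes FoyerKey implements foyer's Code trait shown later in this thread and derives Debug/PartialEq):

```rust
#[cfg(test)]
mod tests {
    use super::*;

    // Round-trip property: decode(encode(key)) == key, including the
    // None-vs-empty-string corner case discussed above.
    #[test]
    fn foyer_key_roundtrip() {
        let keys = vec![
            FoyerKey { path: "/path/to/blob".into(), version: None },
            FoyerKey { path: "/path/to/blob".into(), version: Some("".into()) },
            FoyerKey { path: "/path/to/blob".into(), version: Some("1.0".into()) },
        ];
        for key in keys {
            let mut buf = Vec::new();
            key.encode(&mut buf).unwrap();
            let decoded = FoyerKey::decode(&mut buf.as_slice()).unwrap();
            assert_eq!(decoded, key);
        }
    }
}
```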

A Contributor commented:

Utilized bincode for FoyerKey, with a test case covered in e1abdb3.

@jorgehermo9 PTAL

@flaneur2020 (Contributor) commented:

> thank you very much @flaneur2020. Feel free to ping me and I can help review that!
>
> Let me know if you need some kind of help.

@jorgehermo9 excited to have your help! I'll definitely reach out when I have something ready for review. Looking forward to it! 😁

@flaneur2020 (Contributor) commented Jan 1, 2026:

> I think foyer has a feature to enable auto-serialization/deserialization of cache keys/values which is based on bincode, so we might not have to write any code (I think that is the serde feature? @MrCroxx)

@jorgehermo9 +1, I found the following code that already implements Code for structs that implement serde::Serialize + serde::de::DeserializeOwned, so I think all we need is to enable the "serde" feature and derive serde::Serialize, serde::Deserialize for FoyerKey:

```rust
#[cfg(feature = "serde")]
impl<T> Code for T
where
    T: serde::Serialize + serde::de::DeserializeOwned,
{
    fn encode(&self, writer: &mut impl std::io::Write) -> std::result::Result<(), CodeError> {
        bincode::serialize_into(writer, self).map_err(CodeError::from)
    }

    fn decode(reader: &mut impl std::io::Read) -> std::result::Result<Self, CodeError> {
        bincode::deserialize_from(reader).map_err(CodeError::from)
    }

    fn estimated_size(&self) -> usize {
        bincode::serialized_size(self).unwrap() as usize
    }
}
```
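
With that blanket impl, deriving serde on the key should be all that's needed; roughly (a sketch, with the fields assumed from the earlier discussion):

```rust
// With foyer's "serde" feature enabled, the blanket `impl Code` above
// covers any type deriving Serialize/Deserialize. bincode encodes Option
// with a tag, so version=None and version=Some("") stay distinct.
#[derive(serde::Serialize, serde::Deserialize)]
struct FoyerKey {
    path: String,
    version: Option<String>,
}
```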

@jorgehermo9 (Contributor) commented Jan 1, 2026:

yep, that's it! I think this would be the best approach.

Also, thanks for the tests you introduced! Awesome work 🙌🏻

@flaneur2020 (Contributor) commented:

@Xuanwo @MrCroxx @jorgehermo9 mind taking another look at this PR when you're free? thx a lot

@meteorgan (Contributor) commented:

Do you still need a FoyerLayer, or should we instead use CacheLayer + foyer service?

@flaneur2020 (Contributor) commented Jan 3, 2026:

> Do you still need a FoyerLayer, or should we instead use CacheLayer + foyer service?

@meteorgan there was a discussion around this in #7107 (comment)

imo we could introduce a concrete FoyerLayer first, to meet the concrete requirements at the moment without being blocked too long on reaching consensus on the abstraction design, and then later turn FoyerLayer into an alias for CacheLayer; this should be a smooth transition for users.

@meteorgan (Contributor) replied:

> @meteorgan there was a discussion around this in #7107 (comment)
>
> imo we could introduce a concrete FoyerLayer first, to meet the concrete requirements at the moment without being blocked too long on reaching consensus on the abstraction design, and then later turn FoyerLayer into an alias for CacheLayer; this should be a smooth transition for users.

Got it.

@Xuanwo (Member) left a comment:

I think this PR itself is good enough to go. Will continue the discussion inside CacheLayer.

A Member commented:

I'm guessing we are adding a wrong file?

@flaneur2020 (Contributor) commented Jan 4, 2026:

fixed in 22711a4, @Xuanwo PTAL quq

```rust
async move {
    let (_, mut reader) = inner
        .accessor
        .read(&path, args.with_range(BytesRange::new(0, None)))
```

A Member commented:

Yes, I have some new ideas to share later in a new RFC.

@jorgehermo9 (Contributor) left a comment:

Love this! Thank you so much @flaneur2020 for pushing this further! And also @MrCroxx for the initial work.

LGTM!

```rust
    }

    async fn abort(&mut self) -> Result<()> {
        self.w.abort().await
```

A Contributor commented:

Should self.buf be cleared here?

A Contributor replied:

fixed in 1a550be


```rust
impl<A: Access> oio::Write for Writer<A> {
    async fn write(&mut self, bs: Buffer) -> Result<()> {
        self.buf.push(bs.clone());
```

A Contributor commented:

I think we should stop buffering (and clear the buffer?) once we reach inner.size_limit in write, right? Otherwise we will buffer big files (bigger than inner.size_limit) only to discard the buffer in the close method (where the size_limit check lives).
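
A stand-alone model of that guard (names and fields are assumptions, not the PR's actual code):

```rust
/// Illustrative: stop buffering once the running total exceeds the
/// cacheable size, so an over-limit upload is not held in memory only
/// to be thrown away at close().
struct WriteBuf {
    parts: Vec<Vec<u8>>,
    buffered: usize,
    size_limit: usize,
    over_limit: bool,
}

impl WriteBuf {
    fn push(&mut self, bs: &[u8]) {
        if self.over_limit {
            return; // already past the limit: pass through, don't buffer
        }
        self.buffered += bs.len();
        if self.buffered > self.size_limit {
            self.over_limit = true;
            self.parts.clear(); // release everything collected so far
        } else {
            self.parts.push(bs.to_vec());
        }
    }
}
```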

@flaneur2020 (Contributor) commented Jan 5, 2026:

Makes sense, I added this logic in 1a550be, PTAL

```rust
    .read(&path, args.with_range(BytesRange::new(0, None)))
    .await
    .map_err(FoyerError::other)?;
let buffer = reader.read_all().await.map_err(FoyerError::other)?;
```

A Contributor commented:

Hmmm, does this have the "do not buffer things bigger than size_limit" logic? I see the write method has it, but we shouldn't cache things bigger than size_limit when reading either, right?

A Contributor replied:

it's a good point; I think with size_limit specified it should apply to both read and write, let me modify this

@flaneur2020 (Contributor) commented Jan 5, 2026:

implemented the logic in 1dbf2b1:

  1. read the metadata if the cache misses, before sending the read request to upstream
  2. if the size is too large, return a FetchSizeTooLarge error; we skip the cache and simply forward the request to upstream with the user's original args (e.g. the first 4 KB of a 2 GB object)
  3. if the data is properly sized, fetch the ENTIRE object from upstream and slice the range the user requested
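
A stand-alone model of that flow (synchronous and simplified; the real layer is async against opendal's Access trait, and the types here are toys):

```rust
use std::collections::HashMap;
use std::ops::Range;

/// Toy stand-in for the remote backend.
struct Upstream(HashMap<String, Vec<u8>>);

impl Upstream {
    fn stat(&self, path: &str) -> usize {
        self.0.get(path).map_or(0, Vec::len)
    }
    fn read_all(&self, path: &str) -> Vec<u8> {
        self.0.get(path).cloned().unwrap_or_default()
    }
    fn read_range(&self, path: &str, range: Range<usize>) -> Vec<u8> {
        self.read_all(path)[range].to_vec()
    }
}

fn read_with_cache(
    cache: &mut HashMap<String, Vec<u8>>,
    upstream: &Upstream,
    path: &str,
    range: Range<usize>,
    size_limit: usize,
) -> Vec<u8> {
    if let Some(entry) = cache.get(path) {
        return entry[range].to_vec(); // cache hit: serve the requested slice
    }
    // 1. stat first so oversized objects never enter the cache path
    if upstream.stat(path) > size_limit {
        // 2. too large: forward the caller's original ranged read upstream
        return upstream.read_range(path, range);
    }
    // 3. properly sized: fetch the entire object, cache it, return the slice
    let full = upstream.read_all(path);
    cache.insert(path.to_string(), full.clone());
    full[range].to_vec()
}
```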

@jorgehermo9 PTAL

@jorgehermo9 (Contributor) commented Jan 7, 2026:

I'll review it again when I'm free; give me a day or two, @flaneur2020!
