
Conversation

@valadaptive
Contributor

Adds a few different benchmark variants to test the performance impact of underline drawing and caching.

For some reason, on my machine (Linux) I need to pass the -p (parallel) flag to tango when performing a comparison. Without it, the "new" code always comes back as ridiculously faster, even when it hasn't been changed at all. I can't find any information on why this happens.
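For context, tango benchmark variants are registered roughly as in the sketch below. This is a minimal illustration only: `draw_underlined_layout` and the benchmark names are hypothetical stand-ins, not the benchmarks this PR actually adds.

```rust
use tango_bench::{benchmark_fn, tango_benchmarks, tango_main, IntoBenchmarks};

// Hypothetical stand-in for the real underline-drawing workload.
fn draw_underlined_layout(use_cache: bool) -> usize {
    // ... build a layout, draw its decorations, return something observable ...
    usize::from(use_cache)
}

fn underline_benchmarks() -> impl IntoBenchmarks {
    [
        benchmark_fn("underlines/uncached", |b| {
            b.iter(|| draw_underlined_layout(false))
        }),
        benchmark_fn("underlines/cached", |b| {
            b.iter(|| draw_underlined_layout(true))
        }),
    ]
}

tango_benchmarks!(underline_benchmarks());
tango_main!();
```

The comparison itself is then run against a previously exported baseline, which is the stage where the -p flag mentioned above comes into play.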

@nicoburns
Collaborator

This applies to tests as well as benchmarks: I think collocating the parley_draw tests/benchmarks with the regular parley tests/benchmarks is a bad idea, as I'm anticipating that we'll want to move parley_draw out of this repo at some point.

@valadaptive
Contributor Author

I'm anticipating that we'll want to move parley_draw out of this repo at some point.

OK, what's the deal with this? I haven't been able to find any relevant Zulip threads about what on earth is actually in-scope for parley_draw. Is it just glue code for various renderer frontends? If we move it into its own repository, why can't we just move these benchmarks there too? Given that testing parley_draw will require a backend to drive text layout, won't the parley_draw test suite depend on parley proper?

I guess I just don't understand the motivation for why we want to move parley_draw into its own repo. What do we gain?

@nicoburns
Collaborator

I haven't been able to find any relevant Zulip threads about what on earth is actually in-scope for parley_draw.

I'm not sure there's full consensus on this yet, but my vision/understanding is that the scope is:

  • Glyph outline extraction
  • Glyph scaling/hinting
  • Glyph bounding box computation
  • Conversion of glyphs into "drawing commands" suitable for use with a general-purpose rasterizer
  • Opt-in: rasterization to bitmap using vello_cpu
  • Opt-in: rasterization of "text decorations" (underline, strikethrough, etc.)
  • Opt-in: caching of all of the above (hinted outlines, bounding boxes, rendered bitmaps; see the sketch below)
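
To make that more concrete, here is a minimal sketch of what such a surface could look like, assuming a command-list outline representation and a hash-map cache keyed on (font, glyph, size, hinting). Every identifier here is hypothetical, not parley_draw's actual API:

```rust
use std::collections::HashMap;

/// Drawing commands suitable for handing to a general-purpose rasterizer.
#[derive(Clone, Debug)]
pub enum PathCommand {
    MoveTo(f32, f32),
    LineTo(f32, f32),
    QuadTo(f32, f32, f32, f32),
    Close,
}

/// Outlines depend on font, glyph, size, and hinting state, so all four
/// belong in the cache key. Size is stored as a fixed-point integer so the
/// key stays hashable.
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
pub struct GlyphKey {
    pub font_id: u64,
    pub glyph_id: u16,
    pub size_fixed: u32, // font size in 1/64ths of a pixel
    pub hinted: bool,
}

/// Opt-in cache over extraction: hinted outlines, reused across frames.
#[derive(Default)]
pub struct OutlineCache {
    outlines: HashMap<GlyphKey, Vec<PathCommand>>,
}

impl OutlineCache {
    /// Fetch a cached outline, running extraction/scaling/hinting on a miss.
    pub fn outline(
        &mut self,
        key: GlyphKey,
        extract: impl FnOnce() -> Vec<PathCommand>,
    ) -> &[PathCommand] {
        self.outlines.entry(key).or_insert_with(extract)
    }

    /// Conservative bounding box folded over the outline's control points
    /// (quadratic control points give a conservative, not tight, box).
    pub fn bounding_box(&self, key: GlyphKey) -> Option<(f32, f32, f32, f32)> {
        let mut points = self.outlines.get(&key)?.iter().flat_map(|c| match c {
            PathCommand::MoveTo(x, y) | PathCommand::LineTo(x, y) => vec![(*x, *y)],
            PathCommand::QuadTo(cx, cy, x, y) => vec![(*cx, *cy), (*x, *y)],
            PathCommand::Close => vec![],
        });
        let (x0, y0) = points.next()?;
        Some(points.fold((x0, y0, x0, y0), |(xa, ya, xb, yb), (x, y)| {
            (xa.min(x), ya.min(y), xb.max(x), yb.max(y))
        }))
    }
}
```

The opt-in bitmap rasterization and text-decoration pieces would presumably layer on top of the same cache in the same style.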

It therefore serves as a replacement for the "scaling" and "rasterizing" functionality in swash, and as such I expect it to be adopted much more widely than just Parley: by other text layout systems like Cosmic Text, potentially by 2D rendering libraries (like femtovg and WebRender), and by rendering abstractions like AnyRender.

I therefore think that the crate definitely shouldn't be Parley-branded (have "parley" in the name), and I expect its development to be more closely aligned with Vello and rendering abstractions. To be fair, though, neither of those technically precludes it from staying in this repo.

Given that testing parley_draw will require a backend to drive text layout, won't the parley_draw test suite depend on parley proper?

Yes, but it could quite easily be the crates.io version.

@conor-93
Contributor

Thanks for adding these.

I think collocating the parley_draw tests/benchmarks with the regular parley tests/benchmarks is a bad idea

For now, I think having the parley_draw tests in their own file is a sufficient degree of separation. I agree that will likely need to change as we get a clearer picture of where parley_draw is headed.

If we move it into its own repository, why can't we just move these benchmarks there too?

It'd be good to have these benchmarks now though, yes. I don't think moving them will be that hard, and it will follow naturally from any separation that happens later.

@valadaptive valadaptive added this pull request to the merge queue Feb 8, 2026
Merged via the queue into linebender:main with commit 4f25d22 Feb 8, 2026
24 checks passed
@valadaptive valadaptive deleted the underlines-bench branch February 8, 2026 10:30