
Implement fractional scale #36

Open
moverest wants to merge 1 commit into any1:master from moverest:fractional-scale

Conversation

@moverest commented Nov 1, 2024

Without this, if the compositor's output is set to a fractional value, e.g. ×1.5, the surface appears blurry as the compositor will scale the surface. With this the surface's pixels can match the pixels on the screen.

@any1 (Owner) left a comment

Because we don't really have any control over the scale of the buffer from the server, I don't really see how this is supposed to help.

We already submit buffers that are scaled to suit the compositor's output scaling, but the reported output scale must be accurate for that to be effective. We should really be using wl_surface::preferred_buffer_scale instead of the way that it's currently being done.

I can see fractional scale being useful if client-side desktop resizing is implemented, but otherwise I don't understand what it achieves.

However, viewporter would still be useful because it allows us to offload the scaling to the compositor. I.e. we can skip one copy, which is a nice optimisation. That would be a much bigger change though.

struct wl_compositor* wl_compositor = NULL;
struct wl_shm* wl_shm = NULL;
struct zwp_linux_dmabuf_v1* zwp_linux_dmabuf_v1 = NULL;
static struct wp_viewporter *wp_viewporter = NULL;
@any1 (Owner)

The star should be on the left side here.

.close = xdg_toplevel_close,
};

static void wp_fractional_scale_preferred(void *data,
@any1 (Owner)

There are also inconsistencies here with the location of the star.

@moverest (Author)

We already submit buffers that are scaled to suit the compositor's output scaling, but the reported output scale must be accurate for that to be effective. We should really be using wl_surface::preferred_buffer_scale instead of the way that it's currently being done.

The issue with this is that it only reports the scale as an integer (e.g. 1, 2, 3...). If the display output has a non-integer scale, e.g. 1.5, it will report either 1 or 2, which makes the client render at a different scale than the one used by the compositor. This means that the compositor will scale the surface up or down.

A way to see this yourself is to set your display's scaling to a non-integer value, e.g. 1.75, and compare the rendering with and without the changes on a VNC remote that has the same resolution as your display. The version without the changes will appear blurry.

[Screenshot comparison: without changes vs. with changes]

What the fractional scaling protocol does is report the actual scaling used by the output and not the closest integer.

However, viewporter would still be useful because it allows us to offload the scaling to the compositor. I.e. we can skip one copy, which is a nice optimisation. That would be a much bigger change though.

I didn't think about that, but by doing this we wouldn't even need to care about the scaling.

@moverest (Author) commented Nov 10, 2024

In my example the screenshot without the changes doesn't look that bad because I have another screen with 2x scaling, so it's down-scaling from 2x (as it takes the highest scaling it finds). If I remove that display, the rendering is worse, as it will upscale from 1x instead.

[Screenshot, 2024-11-10]

@any1 (Owner) commented Nov 11, 2024

Actually, if anything, the left one looks better than the right one, but it's really hard to tell. But the one where we scale down to 1 and the compositor scales it up again is definitely much worse, like you say.

I would argue that the compositor should report preferred_buffer_scale that's rounded up to the nearest integer, regardless of how large the fractional part is.

This is what wlroots does for both preferred_buffer_scale: https://gitlab.freedesktop.org/wlroots/wlroots/-/blob/master/types/scene/surface.c?ref_type=heads#L22 and the reported output scale: https://gitlab.freedesktop.org/wlroots/wlroots/-/blob/master/types/output/output.c?ref_type=heads#L52 , at least as far as I can tell.

Wouldn't it be better to fix this in the compositor instead?

@moverest (Author)

Actually, if anything, the left one looks better than the right one, but it's really hard to tell. But the one where we scale down to 1 and the compositor scales it up again is definitely much worse, like you say.

I see what you mean. What happens is that, because of the compression (from the VNC server), the colours appear faded, which reduces the contrast (especially with dark colours). GitHub adds another layer of compression when I upload the image, making it even more faded. Scaling it up (to 2x) and then down (to 1.75x) increases the contrast (though not uniformly) as a side effect. I'm pretty sure there are better ways to increase the contrast more uniformly, but that's another topic.

What happens here is that we have a 1920x1080 buffer coming from the VNC server that's scaled up into a 2258x1270 buffer, which is then scaled down to 1920x1080 by the compositor. What the fractional scaling patch tries to do is have that initial 1920x1080 buffer scaled only once into the buffer that's actually rendered, in my case the same 1920x1080. There's only one scaling transformation happening.

I would argue that the compositor should report preferred_buffer_scale that's rounded up to the nearest integer, regardless of how large the fractional part is.

This is what wlroots does for both preferred_buffer_scale: https://gitlab.freedesktop.org/wlroots/wlroots/-/blob/master/types/scene/surface.c?ref_type=heads#L22 and the reported output scale: https://gitlab.freedesktop.org/wlroots/wlroots/-/blob/master/types/output/output.c?ref_type=heads#L52 , at least as far as I can tell.

Wouldn't it be better to fix this in the compositor instead?

That's a good point. Not sure why they decided to scale down instead of using the nearest value. 🤷 It would definitely improve the rendering for clients that don't support fractional scaling. Though, I think it still makes sense to render at the "correct" resolution when possible to avoid repeated scaling transformations, which also means less processing and lower memory use.

However, you made a great point about the viewporter thingy. We could have a surface that's the size set by the VNC server, make no transformation on our side, and let the compositor handle the scaling. For the black borders, I wonder if we couldn't just use two 1x1px surfaces and have them rendered to the correct size with the viewport as well. I'll try that to see if it works.

@any1 (Owner) commented Nov 11, 2024

That's a good point. Not sure why they decided to scale down instead of using the nearest value. 🤷 It would definitely improve the rendering for clients that don't support fractional scaling.

If they didn't think about it, they most likely did it wrong because the default is to truncate the value. Which compositor are you using? Have you reported this to them? It's not technically wrong according to spec, but it is a poor implementation.

Though, I think it still makes sense to render at the "correct" resolution when possible to avoid repeated scaling transformations, which also means less processing and lower memory use.

You always need to copy the buffer anyway, so scaling while you're at it doesn't make much difference. A slightly larger buffer can make a difference on some puny embedded device, but maybe such a device shouldn't be set to fractional scaling values anyway. I'd rather avoid the added complexity of having a possibly slightly faster path for compositors that support fractional scaling. This also isn't really the intended application of that protocol.

However, you made a great point about the viewporter thingy. We could have a surface that's the size set by the VNC server, make no transformation on our side, and let the compositor handle the scaling. For the black borders, I wonder if we couldn't just use two 1x1px surfaces and have them rendered to the correct size with the viewport as well. I'll try that to see if it works.

Yeah, maybe you can even put a single single-pixel buffer under the whole thing, but I haven't looked into it closely. Having the compositor perform the scaling is an enticing concept, but like I said before, the RFB buffer always needs to be copied. I see no way around it other than to block updates to it while the compositor is using it. In the case of AVFrame, it will work very well because we get those from a buffer pool.

The biggest upside of doing this would be that we could remove all the GL code, but then viewporter would become an absolute requirement.

@stacyharper (Contributor) commented Feb 9, 2025

I also want fractional scaling to be handled properly by wlvncc. Tell me if I'm wrong somehow:

Taking a simple example (which is mine): a remote machine at 1920x1080 with 1.5 scale, and a local machine at 1920x1080 with 1.4 scale.

afaik the server, wayvnc in my case, seems to be doing a correct job. It uses the buffer given by the compositor, which here is 1920x1080, and which has been correctly rendered into by clients, whether they support fractional scaling or not. But the OP is right here. wlvncc, on a full-screen xdg toplevel, will render the frame into a 2742x1542 buffer (1371x771 x2), which the compositor will then downscale a bit to 1920x1080. This causes a slight but still noticeable blur (or loss of sharpness, depending on how you look at it).

@stacyharper (Contributor)

(btw we should also fix output_list_get_max_scale to only use the outputs the surface entered)

@any1 (Owner) commented Feb 9, 2025

I am rather sceptical that fractional scaling will help here if the compositor has done the correct thing when calculating preferred_buffer_scale. In fact, it would be best to use buffers of equal size to the server's buffer and just let the compositor take care of the whole thing via viewporter.

(btw we should also fix output_list_get_max_scale to only use the outputs the surface entered)

Actually, this can be replaced with preferred_buffer_scale.

@stacyharper (Contributor)

I am rather sceptical that fractional scaling will help here if the compositor has done the correct thing when calculating preferred_buffer_scale. In fact, it would be best to use buffers of equal size to the server's buffer and just let the compositor take care of the whole thing via viewporter.

The compositor can't, even theoretically, scale it perfectly. That's why the fractional scaling protocol exists. The clients have to render pixel-perfect buffers and tell the compositor how to position them correctly in the scaled surface coordinates, using the viewporter.

Actually, this can be replaced with preferred_buffer_scale

Right, but this requires a higher protocol version that maybe not all compositors support. Ideally, clients do both.

@any1 (Owner) commented Feb 9, 2025

I am rather sceptical that fractional scaling will help here if the compositor has done the correct thing when calculating preferred_buffer_scale. In fact, it would be best to use buffers of equal size to the server's buffer and just let the compositor take care of the whole thing via viewporter.

The compositor can't, even theoretically, scale it perfectly. That's why the fractional scaling protocol exists. The clients have to render pixel-perfect buffers and tell the compositor how to position them correctly in the scaled surface coordinates, using the viewporter.

This makes sense if you're rendering UI elements. wlvncc receives pre-rendered buffers from the server. Scaling needs to happen regardless unless the server's buffer exactly fits your window or if you put scrollbars on the window. Ideally, in the case of a headless remote desktop, the VNC client would communicate the scale and the buffer size to the VNC server. In that case, applying the fractional scale protocol would make sense, but then there would be no scaling.

Actually, this can be replaced with preferred_buffer_scale

Right, but this requires a higher protocol version that maybe not all compositors support. Ideally, clients do both.

Hmm, I thought that this had been in there longer, but it's only been 2 years.

@stacyharper (Contributor)

While figuring out implementation details, I think I'm starting to understand what you mean. We don't actually have to know the fractional scaling value, because we don't render things ourselves. The only necessary changes are: if viewporter is available, we stop scaling the buffer ourselves and use a buffer size equal to the server's. The buffer would then be 1920x1080, applied to the scaled surface.

You're right about the blurriness with non-fullscreen, non-matching dimensions. There will always be some scaling, except with equal dimensions and a full-screen client (with some black borders), but I still expect the result to be slightly better. And when the client is full-screened and the resolutions match, this should be way better.

@stacyharper (Contributor) commented Feb 10, 2025

I investigated further into the viewporter and how we should use it. When the server and client buffer aspect ratios mismatch, the buffer would get stretched to fit exactly into the surface.

To achieve what we want, I think we have to completely rework how we compose the images: stop using pixman or egl for that, and rely instead on wl_subcompositor (core, v1) and viewporter (pretty old by now). We should be able to scale and position the images perfectly and offload everything to the compositor. I think we still need to know the fractional scale value at some point, to build the buffers or to position things precisely.

edit:

There would be two surfaces: one background surface, a black box, sized using the fractional scaling value when available and scaled to the xdg surface using the viewporter; and one sub-surface, positioned at the centre and scaled with the viewporter too, to fit in the parent surface geometry.
