Conversation
any1 left a comment
Because we don't really have any control over the scale of the buffer from the server, I don't really see how this is supposed to help.
We already submit buffers that are scaled to suit the compositor's output scaling, but the reported output scale must be accurate for that to be effective. We should really be using wl_surface::preferred_buffer_scale instead of the way that it's currently being done.
I can see fractional scale being useful if client-side desktop resizing is implemented, but otherwise I don't understand what it achieves.
However, viewporter would still be useful because it allows us to offload the scaling to the compositor. I.e. we can skip one copy, which is a nice optimisation. That would be a much bigger change though.
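(For illustration, a minimal sketch of what listening for wl_surface::preferred_buffer_scale could look like, assuming a hypothetical `struct window` holding client state; the handler names are illustrative, not wlvncc's actual code. The event requires wl_surface version 6.)

```c
/* Hypothetical handler: adopt the compositor's preferred integer
 * buffer scale instead of tracking wl_output scales manually. */
static void surface_preferred_buffer_scale(void *data,
		struct wl_surface *surface, int32_t scale)
{
	struct window *w = data; /* illustrative client state */

	if (w->buffer_scale == scale)
		return;

	w->buffer_scale = scale;
	wl_surface_set_buffer_scale(surface, scale);
	/* re-render at the new scale on the next frame */
}

static void surface_preferred_buffer_transform(void *data,
		struct wl_surface *surface, uint32_t transform)
{
	/* not needed for this sketch */
}

static const struct wl_surface_listener surface_listener = {
	.enter = surface_enter, /* existing handlers, defined elsewhere */
	.leave = surface_leave,
	.preferred_buffer_scale = surface_preferred_buffer_scale,
	.preferred_buffer_transform = surface_preferred_buffer_transform,
};
```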
```c
struct wl_compositor* wl_compositor = NULL;
struct wl_shm* wl_shm = NULL;
struct zwp_linux_dmabuf_v1* zwp_linux_dmabuf_v1 = NULL;
static struct wp_viewporter *wp_viewporter = NULL;
```
The asterisk should be on the left side here (attached to the type), to match the declarations above.
```c
	.close = xdg_toplevel_close,
};

static void wp_fractional_scale_preferred(void *data,
```
There are also inconsistencies here with the placement of the asterisk.
The issue with this is that it only reports the scale as an integer (e.g. 1, 2, 3...). If the display output has a non-integer scale, e.g. 1.5, it will report either 1 or 2, which makes the client render at a different scale than the one used by the compositor. The compositor then has to scale the surface up or down. A way to see this yourself is to set your display's scaling to a non-integer value, e.g. 1.75, and compare the rendering with and without the changes on a VNC remote that has the same resolution as your display. The version without the changes will appear blurry.
What the fractional scaling protocol does is report the actual scaling used by the output and not the closest integer.
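(For context, the protocol delivers the scale as an integer numerator over a fixed denominator of 120, so e.g. 180 means 1.5×. A minimal sketch of the listener; the storage pointer is illustrative:)

```c
#include "fractional-scale-v1-client-protocol.h"

/* The protocol reports the preferred scale premultiplied by 120. */
static void handle_preferred_scale(void *data,
		struct wp_fractional_scale_v1 *fractional_scale,
		uint32_t scale)
{
	double *out = data; /* illustrative storage for the scale */
	*out = scale / 120.0;
}

static const struct wp_fractional_scale_v1_listener
fractional_scale_listener = {
	.preferred_scale = handle_preferred_scale,
};
```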
I didn't think about that, but by doing this we wouldn't even need to care about the scaling.
Actually, if anything, the left one looks better than the right one, but it's really hard to tell. But the one where we scale down to 1 and the compositor scales it up again is definitely much worse, like you say.

I would argue that the compositor should report 2 in that case instead of truncating to 1. This is what wlroots does for both wl_output and wl_surface::preferred_buffer_scale.

Wouldn't it be better to fix this in the compositor instead?
I see what you mean. What happens is that because of the compression (from the VNC server) the colours appear faded, which reduces the contrast (especially with dark colours). GitHub adds another layer of compression when I upload the image, which makes it even more faded. Scaling it up (to 2x) and then down (to 1.75x) increases the contrast (but not uniformly) as a side effect. I'm pretty sure there are better ways to increase the contrast more uniformly, but that's another topic.

What happens here is that we have a 1920x1080 buffer coming from the VNC server that's scaled up into a 2258x1270 buffer, which is then scaled down to a 1920x1080 buffer by the compositor. What the fractional scaling patch is trying to do is have that initial 1920x1080 buffer be scaled once into the buffer that's actually rendered, in my case the same 1920x1080. There's only one scaling transformation happening.
That's a good point. Not sure why they decided to scale down instead of using the nearest value. 🤷 It would definitely improve the rendering for clients that don't support fractional scaling. Though, I think it still makes sense to render at the "correct" resolution when possible, to avoid the repeated scaling transformations; that means less processing and less memory use.

However, you made a great point about the viewporter thingy. We could have a surface that's the size set by the VNC server, make no transformation on our side, and let the compositor handle the scaling. For the black borders, I wonder if we couldn't just use two 1x1px surfaces and have them rendered to the correct size with the viewport as well. I'll try that to see if it works.
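(A rough sketch of that idea, assuming wp_viewporter has been bound from the registry; variable names like `server_buffer` and `surface_width` are illustrative. The server's buffer is attached unscaled and the compositor maps it onto the surface's logical size:)

```c
/* Create a viewport for the surface (once, after binding wp_viewporter). */
struct wp_viewport *viewport =
	wp_viewporter_get_viewport(wp_viewporter, wl_surface);

/* Attach the unscaled buffer from the VNC server, e.g. 1920x1080... */
wl_surface_attach(wl_surface, server_buffer, 0, 0);

/* ...and let the compositor scale it to the surface's logical size. */
wp_viewport_set_source(viewport,
		wl_fixed_from_int(0), wl_fixed_from_int(0),
		wl_fixed_from_int(1920), wl_fixed_from_int(1080));
wp_viewport_set_destination(viewport, surface_width, surface_height);

wl_surface_commit(wl_surface);
```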
If they didn't think about it, they most likely did it wrong because the default is to truncate the value. Which compositor are you using? Have you reported this to them? It's not technically wrong according to spec, but it is a poor implementation.
You always need to copy the buffer anyway, so scaling while you're at it doesn't make much difference. A slightly larger buffer can make a difference on some puny embedded device, but maybe such a device shouldn't be set to fractional scaling values anyway. I'd rather avoid the added complexity of having a possibly slightly faster path for compositors that support fractional scaling. This also isn't really the intended application of that protocol.
Yeah, maybe you can even put a single single-pixel buffer under the whole thing, but I haven't looked into it closely.

Having the compositor perform the scaling is an enticing concept, but like I said before, the RFB buffer always needs to be copied. I see no way around it other than to block updates to it while the compositor is using it.

The biggest upside of doing this would be that we could remove all the GL code, but then viewporter would become an absolute requirement.
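(If the single-pixel idea pans out, the single-pixel-buffer-v1 protocol could cover the background; a hedged sketch, assuming the manager global was bound as `single_pixel_manager` and the surface/size names are illustrative:)

```c
#include <stdint.h>
#include "single-pixel-buffer-v1-client-protocol.h"

/* One opaque black pixel, stretched over the whole background surface
 * by the compositor via a viewport. Components are 32-bit,
 * premultiplied. */
struct wl_buffer *black =
	wp_single_pixel_buffer_manager_v1_create_u32_rgba_buffer(
			single_pixel_manager, 0, 0, 0, UINT32_MAX);

struct wp_viewport *bg_viewport =
	wp_viewporter_get_viewport(wp_viewporter, background_surface);

wl_surface_attach(background_surface, black, 0, 0);
wp_viewport_set_destination(bg_viewport, surface_width, surface_height);
wl_surface_commit(background_surface);
```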
I also want fractional scaling to be handled properly by wlvncc. Tell me if I'm wrong somehow.

Taking a simple example (mine): a distant machine at 1920x1080 with a 1.5 scale, and the local machine at 1920x1080 with a 1.4 scale. AFAIK the server, wayvnc in my case, seems to be doing a correct job. It uses the buffer given by the compositor, which here is 1920x1080, and which has been correctly rendered to by clients, whether or not they support fractional scaling.

But the OP is right here. wlvncc, on a full-screen xdg toplevel, will render the frame into a 2742x1542 buffer (1371x771 ×2), which the compositor will then downscale a bit to 1920x1080. This causes a slight but still noticeable blur (or loss of sharpness, depending on how you call it).
(btw we should also fix …)
I am rather sceptical that fractional scaling will help here if the compositor has done the correct thing when calculating the output scale.
Actually, this can be replaced with wl_surface::preferred_buffer_scale.
The compositor can't, even theoretically, scale it perfectly. That's why the fractional scaling protocol exists. Clients have to render pixel-perfect buffers, and tell the compositor how to position them correctly in the scaled surface coordinates, using the viewporter.
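(Concretely, the fractional-scale protocol suggests deriving the buffer size by rounding half away from zero, then mapping it 1:1 onto the logical size with a viewport. A sketch with illustrative names:)

```c
#include <math.h>

/* fractional_scale comes from wp_fractional_scale_v1::preferred_scale,
 * divided by 120; logical_* is the surface size in logical pixels. */
int buffer_width = (int)round(logical_width * fractional_scale);
int buffer_height = (int)round(logical_height * fractional_scale);

/* Tell the compositor the buffer maps exactly onto the logical size,
 * so it doesn't scale it a second time. */
wp_viewport_set_destination(viewport, logical_width, logical_height);
```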
Right, but this requires a higher protocol version that maybe not all compositors support. Ideally clients do both.
This makes sense if you're rendering UI elements. wlvncc receives pre-rendered buffers from the server. Scaling needs to happen regardless, unless the server's buffer exactly fits your window or you put scrollbars on the window. Ideally, in the case of a headless remote desktop, the VNC client would communicate the scale and the buffer size to the VNC server. In that case, applying the fractional scale protocol would make sense, but then there would be no scaling.
Hmm, I thought that this had been in there longer, but it's only been 2 years.
While figuring out implementation details, I think I'm starting to understand what you mean. We don't actually have to know the fractional scaling value, because we don't render things ourselves. The only necessary changes are: if viewporter is available, we stop scaling the buffer ourselves and use a buffer size equal to the server's. The buffer should then be 1920x1080, applied to the scaled surface.

You're right about the blurriness on non-fullscreen, non-matching dimensions. There will always be some scaling, except with equal dimensions and a full-screen client (with some black borders), but I still expect the result to be slightly better. In case the client is full-screened and the resolutions match, this should be way better.
I investigated further on the viewporter and how we should use it. When the server and client buffer dimension ratios mismatch, the buffer would get stretched to fit the surface exactly. To achieve what we want, I think we have to completely rework how we compose the images: stop using pixman or egl to do so, and rely instead on wl_subcompositor (core v1) and viewporter (pretty old now). We should be able to scale and position the images perfectly, and offload everything to the compositor. I think we still need to know the fractional scale value at some point, to build the buffers or to position things precisely.

edit: There would be two surfaces. One background surface, a black box, scaled perfectly using the fractional scaling value when available, and scaled to the xdg surface using the viewporter. And one sub-surface, positioned at the center, and scaled with the viewporter too, to fit in the parent surface geometry.
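(A sketch of that two-surface setup, with illustrative names: `bg` for the background surface, `video` for the centered frame; the positions and sizes would be computed from the window geometry and the fractional scale:)

```c
/* Background: a black box covering the whole window. */
struct wl_surface *bg = wl_compositor_create_surface(wl_compositor);
/* Frame: a sub-surface carrying the server's buffer, centered. */
struct wl_surface *video = wl_compositor_create_surface(wl_compositor);

struct wl_subsurface *sub =
	wl_subcompositor_get_subsurface(wl_subcompositor, video, bg);

/* Position in the parent's logical coordinates, computed from the
 * window size and the scaled frame size. */
wl_subsurface_set_position(sub, frame_x, frame_y);
wl_subsurface_set_sync(sub); /* commit atomically with the parent */

/* Let the compositor scale the server's buffer to fit. */
struct wp_viewport *video_vp =
	wp_viewporter_get_viewport(wp_viewporter, video);
wp_viewport_set_destination(video_vp, frame_width, frame_height);
```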



Without this, if the compositor's output is set to a fractional scale, e.g. ×1.5, the surface appears blurry because the compositor scales the surface. With this, the surface's pixels can match the pixels on the screen.