Nixers Book Club - Book #3: The Wayland Book
Here's my summary/review/notes on chapters 7 to 9.
These were heavier, more hands-on chapters: after getting to know the basic ideas from the previous chapters regarding interfaces and decoupling, we now put it all into practice with actual windows and inputs.

XDG shell basics

The XDG (cross-desktop group) shell is a protocol extension for Wayland which describes the semantics for application windows. Roles are like child classes in OOP, or traits, defining extra methods for wl_surfaces. This extension of wl_surface will keep reappearing over and over; that's how everything is done in Wayland: adding capabilities, functionalities, and features on top of it, new traits. xdg-shell defines two roles, toplevel and popup, which are used to form a tree of surfaces. It's not part of the core protocol but a stable extension, defined in /usr/share/wayland-protocols/stable/xdg-shell/xdg-shell.xml. xdg_surfaces are surfaces in the domain of xdg-shell, new traits that add functionality on top of normal surfaces, namely toplevel and popup: xdg_toplevel and xdg_popup. xdg_toplevel is finally our window.

I initially tried the example movq showed on IRC using the wl_shell get_shell_surface method, but apparently that's deprecated and we should use xdg_wm_base with get_xdg_surface instead. As we said, it's not defined in the usual Wayland headers that come with distro packages, so at this point we need to generate the files with the wayland-scanner we've seen before and include them in our Makefile. The actual drawing of pixels happens in the configure handler and is done after acknowledging the event. To answer movq: we assume we have all the globals from the registry after the roundtrip, i.e. after wl_display_roundtrip(state.wl_display). (A sketch of the whole setup follows at the end of this section.)

The example is extensive; it takes quite a lot of implementation to get a window drawn, and that without decoration. It's even more decoupled than I thought. Yet, in the following chapter about inputs, we add even more boiler/glue code. Also, there's no decoration, but let's wait, maybe the book will talk about that later.

Surfaces in depth

This chapter dives deeper into the functionality of the wl_surface, which, as I understood it, is extended heavily into different roles, adding traits with methods that extensions can fill in: from wl_surface to xdg_surface to xdg_toplevel. These wl_surfaces have their own lifecycle. The wl_surface drives atomicity through its pending, committed, and applied states. A lot of state can be changed before committing to the surface, such as the wl_buffer, the damage region, the input region, etc. To give it its first state, you need to give the surface a role, allocate and attach the buffer, then commit again. We saw that in the previous example at the end of the configure handler in xdg_surface_listener.configure; I guess that's what it does.

Quote:The next question is: when should I prepare a new frame? And how too? in the event loop? …

And no, it's done preferably after receiving an event from the wl_callback called "done", or after input events in event-driven applications. This is interestingly efficient and also low-level; we can manage each frame. To get this frame-callback behavior, we need an object implementing the wl_callback interface, which we get from the wl_surface.frame request. We then set a listener for the "done" event, whose callback_data is the current time in milliseconds. Inside this callback, we destroy and recreate the callback, call draw again, reattach the buffer to the wl_surface, damage the entire surface, and commit. The destruction and reconstruction of the callback is a bit confusing (second sketch below).
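To make that concrete, here's a minimal sketch of the xdg-shell setup, along the lines of the book's example. The generated file names and the draw_frame() helper are my own assumptions, not anything mandated by the protocol:

```c
/* Generated from the extension XML, roughly:
 *   wayland-scanner client-header \
 *     /usr/share/wayland-protocols/stable/xdg-shell/xdg-shell.xml \
 *     xdg-shell-client-protocol.h
 *   wayland-scanner private-code \
 *     /usr/share/wayland-protocols/stable/xdg-shell/xdg-shell.xml \
 *     xdg-shell-protocol.c
 */
#include "xdg-shell-client-protocol.h"

static void
xdg_surface_configure(void *data, struct xdg_surface *xdg_surface,
        uint32_t serial)
{
    struct client_state *state = data;
    /* Acknowledge the configure event, then draw and commit. */
    xdg_surface_ack_configure(xdg_surface, serial);
    struct wl_buffer *buffer = draw_frame(state); /* hypothetical helper */
    wl_surface_attach(state->wl_surface, buffer, 0, 0);
    wl_surface_commit(state->wl_surface);
}

static const struct xdg_surface_listener xdg_surface_listener = {
    .configure = xdg_surface_configure,
};

/* In main(), after wl_display_roundtrip() gave us the globals: */
state.xdg_surface = xdg_wm_base_get_xdg_surface(
        state.xdg_wm_base, state.wl_surface);
xdg_surface_add_listener(state.xdg_surface, &xdg_surface_listener, &state);
state.xdg_toplevel = xdg_surface_get_toplevel(state.xdg_surface);
xdg_toplevel_set_title(state.xdg_toplevel, "Example client");
wl_surface_commit(state.wl_surface); /* first commit, no buffer attached yet */
```

That first commit with no buffer attached is what provokes the compositor into sending the initial configure; the pixels only go up once we acknowledge it.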
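And the frame-callback dance, a rough sketch assuming the same client_state and hypothetical draw_frame() as above. The destroy/recreate stops being confusing once you know a wl_callback only ever fires once, so a fresh one has to be requested for every frame:

```c
static const struct wl_callback_listener wl_surface_frame_listener;

static void
wl_surface_frame_done(void *data, struct wl_callback *cb, uint32_t time)
{
    struct client_state *state = data;
    /* Callbacks are one-shot: destroy the old one, request the next. */
    wl_callback_destroy(cb);
    cb = wl_surface_frame(state->wl_surface);
    wl_callback_add_listener(cb, &wl_surface_frame_listener, state);

    struct wl_buffer *buffer = draw_frame(state); /* hypothetical helper */
    wl_surface_attach(state->wl_surface, buffer, 0, 0);
    wl_surface_damage_buffer(state->wl_surface, 0, 0, INT32_MAX, INT32_MAX);
    wl_surface_commit(state->wl_surface);
}

static const struct wl_callback_listener wl_surface_frame_listener = {
    .done = wl_surface_frame_done,
};
```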
So I guess internally it'll automatically only redraw what needs to be redrawn if we damage only a certain area. That's exactly what is done in section 8.3. Overall, it seems like we need to keep all these structures, these objects adding "traits" to the surface, and manipulate them from everywhere. The global state invades everything, passed to all events. I'm sure it could be done otherwise though.

Surface regions

wl_compositor can be used to create an object of type wl_region by calling the create_region request. A region is a group of rectangles forming an arbitrary shape, built by doing add/subtract operations between different rectangles. These arbitrary regions can then be passed to the wl_surface, either as the opaque region, a hint telling the compositor which part of the wl_surface is fully opaque so it can optimize what it draws beneath, or as the input region, which part can accept input. These are interesting for controlling surfaces, I think.

Subsurfaces

In the core protocol, wayland.xml, only one surface role is defined, and that's the subsurface. Subsurfaces are child surfaces positioned relative to a parent surface with a z-index, kind of like transient/modal/popup/dialog windows. This can be used to do window decoration. Funnily, these are created from yet another global object, wl_subcompositor. Even more separation of roles, yey! The subsurface can then be manipulated like a normal surface but has place_above/place_below requests. It's in sync with the parent surface's lifecycle as far as the atomic operations on buffers and the rest go.

High density surfaces (HiDPI)

wl_output, which represents a display object, I guess, sends an event announcing the scale factor in place. This scale factor can then be applied to the wl_surface via set_buffer_scale. I think that's a really nice way to handle HiDPI; it should solve a lot of things. However, as with everything in Wayland, this is only the protocol, which in practice is only the XML definition of interfaces, so we have to handle the scaling ourselves.

Chapter 9: Seats, handling inputs

Finally, we're going to interact with windows. A seat represents a user with their inputs: pointer, keyboard, and touch. It's yet another global that you can bind during startup. It offers a pointer, keyboard, and touch device that you can get through requests; each has its own interface defining how to interact with it. You can know what is supported from the capabilities bitfield, comparing it with bitwise operations against constants of the form `WL_SEAT_CAPABILITY_*` (sketched below, after the next few examples).

A concept of serial IDs is introduced: each input event comes with an ID that needs to be sent back so that the server can decide whether to respect the request or not. Another concept, the input frame, is also introduced: each input event is actually fragmented into multiple events, which are sent separately, until a final "frame" event indicates that they form a single set of states belonging to the same input. We're advised to buffer things until we actually receive that event. That mindset goes along with the drawing on wl_surface; it's efficient.

The book then dives into each input type. The first one is pointer input, through a `wl_pointer`. It has all the usual events we would guess: enter, leave, button clicked, scrolling/axis, etc. We can notice the serial ID being included. We can create a cursor for the pointing device using the set_cursor request and passing a surface. It's interesting how the axis source is well-defined.
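Before moving on to the keyboard, a few sketches for the pieces above. Surface regions first; a minimal sketch assuming wl_compositor and wl_surface are already bound in the usual state struct, with made-up coordinates:

```c
struct wl_region *region = wl_compositor_create_region(state->wl_compositor);
wl_region_add(region, 0, 0, width, height);   /* the whole surface... */
wl_region_subtract(region, 64, 64, 128, 128); /* ...minus a hole */
wl_surface_set_input_region(state->wl_surface, region);
wl_region_destroy(region); /* the surface keeps the pending copy */
wl_surface_commit(state->wl_surface); /* double-buffered, like everything */
```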
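Subsurfaces look like this; a sketch where child_buffer stands in for a wl_buffer rendered for the child surface:

```c
struct wl_surface *child =
        wl_compositor_create_surface(state->wl_compositor);
struct wl_subsurface *sub = wl_subcompositor_get_subsurface(
        state->wl_subcompositor, child, state->wl_surface /* parent */);
wl_subsurface_set_position(sub, 20, 20); /* relative to the parent */
wl_subsurface_place_above(sub, state->wl_surface);

wl_surface_attach(child, child_buffer, 0, 0);
wl_surface_commit(child);
/* In synchronized mode (the default) the child's state is only
 * applied when the parent commits: */
wl_surface_commit(state->wl_surface);
```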
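And the two halves of the HiDPI story, sketched under the assumption that wl_output was bound with version >= 2 (the scale event only exists from then on) and that the state struct carries a scale field:

```c
static void
wl_output_scale(void *data, struct wl_output *output, int32_t factor)
{
    struct client_state *state = data;
    state->scale = factor; /* e.g. 2 on a HiDPI display */
}

/* At render time: allocate the buffer scale times larger than the
 * surface's logical size, then tell the compositor about it: */
wl_surface_set_buffer_scale(state->wl_surface, state->scale);
wl_surface_commit(state->wl_surface);
```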
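The capabilities check itself is close to the book's own pattern; wl_pointer_listener is assumed to be defined elsewhere:

```c
static void
wl_seat_capabilities(void *data, struct wl_seat *seat, uint32_t capabilities)
{
    struct client_state *state = data;
    uint32_t have_pointer = capabilities & WL_SEAT_CAPABILITY_POINTER;

    if (have_pointer && state->wl_pointer == NULL) {
        state->wl_pointer = wl_seat_get_pointer(seat);
        wl_pointer_add_listener(state->wl_pointer,
                &wl_pointer_listener, state);
    } else if (!have_pointer && state->wl_pointer != NULL) {
        /* Capabilities can go away, e.g. the mouse gets unplugged. */
        wl_pointer_release(state->wl_pointer);
        state->wl_pointer = NULL;
    }
    /* Same dance for WL_SEAT_CAPABILITY_KEYBOARD and _TOUCH. */
}
```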
The second one is keyboard input, through a `wl_keyboard`. We get an explanation of XKB, keymaps, and scancodes; this shouldn't be new to people who have been using X11. xkbcommon is the standalone library offering the translation from scancodes to keymap symbols. We receive the keymap from the wl_keyboard in an event called "keymap", which carries a format and a file descriptor. I would've guessed it would be a string, but no.

Quote:Bulk data like this is transferred over file descriptors.

Well… that's something new! The keymap mmap seems to fail on my machine for the example though; I had to use plain malloc and read the file descriptor manually instead. The keyboard events themselves are also somewhat obvious: key, key state, modifiers. I like that they are separated; maybe that should fix some issues I've personally had while manipulating keys in X11 when they get modified. Key repeat event, alright...

The third one is touch input, through a `wl_touch`. Now that makes it easy to go to the next level on multi-touch screens! The "frame" event makes a lot of sense in this case, when multiple fingers press the screen, each finger having a different id. "down", "up", "motion", yep, that's nice.

Now for the example code. It's extensive: new globals, states, and listeners everywhere, glue code programming. We add the wl_seat, wl_keyboard, wl_pointer, wl_touch. We set the wl_seat in the listener for the registry and set listeners for the capabilities it supports. We then create pointer event storage structures, and obviously also add them to our global state. After that we can check whether we have the capabilities to set the pointer and the many related listeners that will update the new structure in our global state (see the sketch below). Interestingly, I've discovered that my touchpad supports 3 touch input buttons: single finger, two fingers, and three fingers (similar to middle mouse). I think in general this could always be present and people would just need to handle the "frame" event instead of all this. We do a similar thing for the keyboard, but this time we need libxkbcommon to compile. We also get a glimpse of the wl_array helpers, with wl_array_for_each.
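The pointer storage structure mentioned above looks roughly like this, following the book's accumulate-until-frame idea; the event mask flags and handle_pointer() are my own names, not protocol ones:

```c
#include <string.h>

enum pointer_event_mask { /* our own bookkeeping, not protocol */
    POINTER_EVENT_MOTION = 1 << 0,
    POINTER_EVENT_BUTTON = 1 << 1,
    /* ... one bit per event type ... */
};

struct pointer_event {
    uint32_t event_mask;
    wl_fixed_t surface_x, surface_y;
    uint32_t button, state;
    uint32_t serial;
};

static void
wl_pointer_motion(void *data, struct wl_pointer *pointer, uint32_t time,
        wl_fixed_t x, wl_fixed_t y)
{
    struct client_state *state = data;
    state->pointer_event.event_mask |= POINTER_EVENT_MOTION;
    state->pointer_event.surface_x = x;
    state->pointer_event.surface_y = y;
}

static void
wl_pointer_frame(void *data, struct wl_pointer *pointer)
{
    struct client_state *state = data;
    /* Everything accumulated so far is one logical input event. */
    handle_pointer(state, &state->pointer_event); /* hypothetical */
    memset(&state->pointer_event, 0, sizeof(state->pointer_event));
}
```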
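About the failing mmap: one plausible culprit is the mapping flags. Since wl_keyboard version 7 the compositor seals the keymap fd and it must be mapped MAP_PRIVATE, so a MAP_SHARED mapping (which some older example code uses) fails on recent compositors. A sketch of the handler with xkbcommon, assuming the xkb_context and friends live in the state struct:

```c
#include <assert.h>
#include <sys/mman.h>
#include <unistd.h>
#include <xkbcommon/xkbcommon.h>

static void
wl_keyboard_keymap(void *data, struct wl_keyboard *keyboard,
        uint32_t format, int32_t fd, uint32_t size)
{
    struct client_state *state = data;
    assert(format == WL_KEYBOARD_KEYMAP_FORMAT_XKB_V1);

    /* MAP_PRIVATE, not MAP_SHARED: the fd may be sealed. */
    char *map_shm = mmap(NULL, size, PROT_READ, MAP_PRIVATE, fd, 0);
    assert(map_shm != MAP_FAILED);

    struct xkb_keymap *keymap = xkb_keymap_new_from_string(
            state->xkb_context, map_shm,
            XKB_KEYMAP_FORMAT_TEXT_V1, XKB_KEYMAP_COMPILE_NO_FLAGS);
    munmap(map_shm, size);
    close(fd);

    state->xkb_keymap = keymap;
    state->xkb_state = xkb_state_new(keymap);
}
```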
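And wl_array_for_each in context: the keyboard "enter" event hands over the keys already held down as a wl_array of 32-bit scancodes. A sketch (the +8 converts evdev scancodes to xkb keycodes):

```c
static void
wl_keyboard_enter(void *data, struct wl_keyboard *keyboard, uint32_t serial,
        struct wl_surface *surface, struct wl_array *keys)
{
    struct client_state *state = data;
    uint32_t *key;
    wl_array_for_each(key, keys) {
        xkb_keysym_t sym = xkb_state_key_get_one_sym(
                state->xkb_state, *key + 8);
        /* ... log or handle the already-pressed key ... */
        (void)sym;
    }
}
```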
After these chapters, I think I have a better idea of Wayland's mechanisms and way of thinking. Having things defined in a protocol of objects and interfaces makes it easy to know what to expect, yet it also somehow decouples things a bit too much. Still, once the glue code is in place, it's a really clear and clean way to handle things.