But I believe that many technologies, protocols, and formats that were commonplace in late 90s/early 2000s are just here to stay forever: X11, BIOS/MBR, FAT32, 32-bit x86 PE, Win32, GTK2, Cocoa... The Linux kernel still runs static binaries built in mid-90s.
Judging from the feature list, TinyX hits a very sweet spot for otherwise underpowered or obsolete machines. You can of course spend €50 and get a RasPi that can handle a more complex / demanding stack, but also you can take advantage of the spare power, or just appreciate the architectural simplicity.
(And thus, I avoid anything Wayland.)
I'm also not surprised at all at how bad Wayland is. The protocol makes the wrong abstractions at the lowest level, which causes problems everywhere else - the worst being massive fragmentation. It won't ever be good despite receiving large amounts of funding. But this is obviously by design.
Can you point out any specifics? I think both the low-level operations (allocate some shared memory for a surface, agree on a pixel format, etc) and the high-level features (scaling!) are solid, the holes are mostly in the ecosystem fragmentation/extension support ("can you please draw some window borders for me, thanks?"). If you were designing a modern windowing system from scratch, what would you do differently?
How much time do you have? First of all, it needs to be much simpler to do simple things. You are right that you need to "allocate some shared memory for a surface, agree on a pixel format, etc", but it shouldn't take 250 LOC just to put a simple window on the screen (look at https://wayland.app/protocols/wayland - there is no reason to make bare essential things that complicated). A proper standardized client library should need 1 LOC for that.

The whole polling/ping mechanism is completely flawed. Since you can't solve the halting problem, let the user decide whether an application is unresponsive, and reliably kill the PID behind the window when the user presses that "x". Until then, assume the process is working. This also means a hard requirement for server-side window decorations, since they need to be rendered by another process with higher priority (also because temporarily unresponsive windows might occasionally be moved or resized).

Windows should also be able to find out where they are on the screen in relation to windows from other processes. There should also be processes that are allowed to read out and modify all window positions (e.g. for window management and automation). Likewise, processes should be able to read image data from the screen or other windows (e.g. for screen sharing). Standardized interfaces and proper access control for all of these should be baked into the protocol from day one, not bolted onto non-standardized external infrastructure (like dbus/portals). And those interfaces should be as simple as possible.

Latency is generally bad on Wayland. The mouse pointer should be rendered by a separate thread on its own hardware layer, which requires no vsync. Keyboard input events should be able to immediately update dirty rectangles, disregarding vsync for those pixels, to make typing as smooth as possible.
Generally allow tearing for windows by choice of the user/program and be able to have windows that don't allow tearing next to tearing windows at the same time. (I have more but nobody reads that shit anyways.)
> If you were designing a modern windowing system from scratch, what would you do differently?
Personally I would recreate something like MGR with a modern spin. But this is obviously a very opinionated design.
ya, most of this i think people don't understand because they don't code, and even then a lot of people will defend it. the real problems show up even before any of the flow, design or logic though. if you were to spend millions on a house, you wouldn't cheap out on the foundation, would you? yet xml generation and horrible names like 'globals' and 'shell' litter wayland nomenclature. i would rather have the 'window' namespace stepped on again like X did. the source of most wayland woes seems to be 'security' ideas and epoll being used to funnel everything. as for the amount of code, the 'struct thing thing' plus extralongnametosetawhole3funcpointers boilerplate is just astounding.
i don't think it would be 'that hard' to redo with the above sane types of considerations, avoid everyone reinventing the wheel and writing the same code for every project, and something that has a core protocol ACL configurable at the system level. it certainly shouldn't take a decade. but the main problem is then everyone would have to switch display server stacks, again.
i still haven't switched because i'm too lazy to rewrite anything just to get nothing from wayland.
Exactly one lifetime ;)
> A proper standardized client library should need 1 LOC for that.
Agree. See my sibling comment: https://news.ycombinator.com/item?id=43016206
Something like wlroots (with a high-level API) should've been the go-to thing for all downstream compositors and/or client apps, unless you have very particular needs. The "wire" protocol underneath, however? Meet the present and near-future needs, and make it as complex as it needs to be to match the state of the art on other platforms (OS X managed this over 25 years ago, dammit!)
Graphics are complex though, even before you touch e.g. color science, and maybe you don't feel any pressure to meet the demands of the professional market, but it's like the curb-cut effect... I do enjoy the heck out of my Zoom UAC-232; a 32-bit float ADC may sound like overkill for an enthusiast setup, but I just don't ever have to worry about gain levels - it's one less thing that gets in the way of having fun with a guitar and/or mic. I don't work with picture as much, but I do understand why sRGB can feel limiting, and I recognise that I don't understand enough to make the call whether Wayland is too complex for its task or not.
> Windows should also be able to obtain where they are on the screen in relation to other windows from other processes.
Wasn't this kind of poking and probing the root of all security evil under X11? You have to draw the line somewhere, and I think it was the right call to start conservative - you can always relax the restrictions later.
> There should also be processes that are allowed to read out and modify all window positions (e.g. for window managing and automation).
This is a much bigger topic that touches accessibility in general. I use Shortcat on macOS; I can press Cmd-Shift-space, and every interactive element on the screen gets a tiny popup with a two-letter keyboard shortcut - no need to reach for the mouse, unless some app insists on pushing its own pixels. The ability to only move windows around pales in comparison.
It's possible because stuff on macOS uses a standard set of libraries and APIs; the problem lies not with the display server (I think it's the wrong layer of abstraction), but in the layer in between that and the applications, which with the Qt/KDE/GTK2/3/4/5/whatever-tk-of-the-year was always a shitshow.
> Latency is generally bad on Wayland.
Isn't gamescope also a Wayland compositor? Or does it just isolate games?
> The mouse pointer should be rendered by a separate thread on its own hardware layer which requires no vsync.
I always thought this was the general/default case, with only the insane attempting software cursors. Darn I despise laggy cursors (StarCraft player checking in).
> Keyboard input events should be able to immediately update dirty rectangles disregarding vsync for those pixels to make typing as smooth as possible.
Noted and noted. It makes a lot of sense in general, vsync mostly matters for dynamic scenes and typing text isn't one. But you either need tight coupling (bleh), or a sane way to cut thru the layers.
> (I have more but nobody reads that shit anyways.)
I appreciate when people share different perspectives.
I'm actually interested in this topic, as I've embarked on a journey to write an entire NIH, from-scratch Linux userland in pure Go - no cgo, therefore no Mesa/OpenGL/etc; likely gonna go thru fbdev unless I can figure out if and how I can use DRI/DRM. Still aiming for a kinda late-90s vibe; it's amazing what Windows 2000 or E16 could do with software-only rendering.
I do think Wayland is too complex to tackle in a NIH/hobby project, but a very rough outline of how it does things is generally appealing.
> Personally I would recreate something like MGR with a modern spin. But this is obviously a very opinionated design.
I had to look it up. In my opinion, terminals/PTYs, network transparency, drawing primitives, tight coupling between display and widgets, are all waaay overrated. The sole use case for network transparent windowing died with the rise of the WWW, and you won't find a computer made in the last 30 years that can't draw its own windows. Remote access is solved by protocols such as VNC. A REPL shell does not need a terminal, it needs an input widget where you can edit text - where Ctrl-C means copy, and a scrollable output that supports automatic bookmarks for easy navigation (the stock macOS terminal is my hero). TUIs are just the lowest-common-denominator GUIs, with all the quirks of emulating a VT100 thrown in, including in-band sign^Q^Salling. Serial consoles are for rescue access, not for playing nethack, and modern servers have BMCs anyway. IMHO drawing belongs in a library like Cairo, it has no business in a display server, unless you're planning to make it talk to a printer.
Widgets are a tough nut though; consistency between window frame and its contents is essential, so either the display server has to understand widgets to draw correct frames, or throw the towel and delegate that to the higher-level toolkit and/or the application. Add another layer (like a separate WM) and it's starting to feel like too much indirection. It's an interesting dilemma though.
Overall maybe I just can't see why ManaGeR is appealing, at a glance it stinks of everything that held X11 back.
To be specific, configuring things like scaling, auto rotation on my laptop/tablet thing (and rotating the touch screen and stylus to match), and automatically using external displays as they are plugged in (with kanshi) has been much easier than with Xorg. Maybe if I was still just sitting at a desktop it would be different, but for these scenarios Wayland is nice.
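For a sense of scale, the plugged-in-display part boils down to a few lines of kanshi config. A sketch (connector and monitor names are made up; directives follow kanshi's profile/output syntax):

```
# Laptop alone: just the internal panel
profile undocked {
    output eDP-1 enable mode 1920x1080 position 0,0
}

# When the external monitor appears, kanshi switches profiles automatically
profile docked {
    output eDP-1 disable
    output "Some Vendor Monitor 1234" enable mode 2560x1440 position 0,0
}
```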
For better or worse, Wayland is where all of the attention is right now, and there already are systems (like Asahi Linux) where Xorg is unsupported. The world will continue moving forward, sticking with Xorg may prove more difficult over time, so TinyX may be of interest even if purely for the sake of continued maintenance.
Late in the sense that OS X has had Quartz from the beginning, and Vista started the move to DWM; meanwhile the Linux community was patting itself on the back because Compiz looked cooler than either of those, and didn't see past the surface. X11 is not only an outdated design, it's also a security nightmare.
Early because it's still a mixed bag, and kinda suffers from second-system effect. It's a protocol rather than a library - while it's simpler than X11, it relies heavily on optional extensions for things such as screen capture or even "server"-side decorations. Under X11, you could write a spartan-but-functional WM in 100 lines of C. Under Wayland, a functional compositor needs roughly 50k lines; wlroots can handle most of that, but neither KDE nor Gnome uses it, so effectively there are three parallel worlds, with varying levels of support for different extensions. Early on, IMHO too much effort went into Weston (the "reference" implementation nobody cares about), and too little into building something like wlroots. It may take a while for the situation to settle.
Whilst I agree with most of your points, you're comparing a window manager and a compositor. I don't think there was any compositor for X written in fewer than 100 lines of C.
Which is my point. A project like wlroots should have been in the spotlight the same way Xorg was, so that new compositors (or existing X11 WMs) could adopt it easily, and both users and developers could benefit from an emerging-but-stable ecosystem. Instead it took like a decade to standardise and implement screen capture.
Because compositors had graphics pipelines and could add those effects with OpenGL shaders. But there's still much more to a compositor: namely, they actually composite the windows/screen, rendering things into GL textures, intercepting the rendering pipeline in X, and ensuring everything is synced to avoid tearing. This is their main purpose.
Wayland is just an API that lets clients talk to the compositor directly rather than having to go through the X server.
So even a "trivial" compositor like xcompmgr, that runs alongside a WM, has to do all of this dance? The WM still has the authority to tell X, "draw this window here", but X lets xcompmgr take the wheel? (I can tell from here why X was already becoming more of a drag.)
This is exactly what Wayland (as an API) was designed to solve, which is why in theory it's supposed to have better performance, latency and power efficiency, although it hasn't worked out great in all cases.
I recall having played around with DSL/TCL (Damn Small Linux & Tiny Core Linux) around 2010 or so, and IIRC they already had their minimalist X11-but-not-Xorg server even back then, which I found fascinating at the time. I'm pretty sure I took a curious peek at the source code back then; IIRC it simply didn't live in git yet (tarball and a few patches?).
The XVesa code base that this is forked off of is of course much older.
What’s the argument on why to change to a less permissive license?
What may not be immediately obvious is that it still favors the downstream developer to contribute their changes back; the network effect grows roughly with the square of the number of participants. Making the changes copyleft complicates the matter, as their copyright status may become less clear as further downstream contributors chip in.
There's always some trade-off, but this is exactly what the MIT license permits. If it bothers you, you'd probably find yourself preferring copyleft anyway.
But no euphemism ("public", "free software", "copyleft") makes the change anything less than taking away a freedom and calling it "free as in speech".
... and I actually do that ...
... do you actually have the right to speak? Maybe banning airhorns would increase speech rights.
You’re mostly free to do things that do not harm others. If you don’t like my speaking, you can and should just move along. Blasting an air horn not only harms me, it harms everyone around you, interfering with everyone else’s right not to be subjected to public nuisances.
It isn’t remotely similar to what I’m saying anyway.
The free software foundation has brainwashed y’all to think less freedom is more freedom. And yes, I will die on this hill.
It isn’t my fault if you have an issue with objective reality.
First, to address the readme: their goal in switching to GPL for closed devices is kind of pointless. For a closed device, they can still use it without modification. And even if they do modify it, how would you know? How would you prove it's your codebase they modified? The GPL does little to protect small projects. If they made money and got caught, they can just pay their way out of it. If they didn't make any money, well, no one would probably have found out until they were a defunct company or the product was cancelled.
Second, more on the delusions of grandeur. Let's be real, this project is a small project with very little chance to grow big. It might become the de facto X server for old tech, but I doubt it would be used for anything new. And even if it is, not by many. The GPL won't protect it because it's obscure.
The GPL works for big high-profile projects, like the Linux kernel. It worked in the past because companies hadn't figured out a way around it, or realized that they could just ignore it and deal with the consequences later. They usually get away with it too, at little cost to them, as they have already made their money.
I personally avoid releasing any of my projects as GPL, because the likelihood of any of my projects being huge is minimal, and if one goes big, I can change the license then. Maybe a non-open-source license like the Fair Source License that then transitions to MIT or Apache after 1-2 years.
Via disassembly? It isn't that hard.
However, I think you miss the point. The GPL is more a statement of principles than anything else.
Copyleft isn't how the world works. MIT and Apache and liberal licenses liberate ideas for reuse and practical application in the real world. If you want your project to be a quirky curiosity visited and abandoned by people searching for some better solution for their problem, use copyleft.
If you want your idea to make the world a better place, use permissive, liberal licensing and move on.
Forking something from MIT to GPLv3 is disappointing, from this standpoint.
Most examples of this directly contradict what you say. Whilst Apple has contributed somewhat to BSD, the vast majority of their work is closed up and sealed off. It's not making the world a better place.
GPL3 and AGPL are really the only defense against the tragedy of the commons.
> It's the equivalent of screaming "why can't we all just get along" into the Matrix.
You have it the wrong way around. MIT is screaming for people to play nicely and work together. GPL sets a strict contract and rules for playing; there is no screaming, and there is no room for people to create their own forks and not work together.
There are people who won't contribute to your program if it's BSD or MIT. There are those who won't contribute to your program if it's GPL.
A program with any license is at risk of being abandoned as a quirky curiosity, if it, doh, is a quirky curiosity.
And contributors have the security that the project will stay free and will not do the classic MIT bait-and-switch to a commercial license.
But if you contributed under the MIT license, you allow them to re-license as whatever they want.
EDIT: having thought about it again, I realized that I was bringing strong assumptions about a CLA being in effect, regardless of the specific license. So, without a CLA in consideration, relicensing is more viable (or more of a danger depending on your specific concerns) with MIT. If a CLA is in play then MIT and A/GPL are likely on equal footing, depending on the jurisdiction.
If you want to contribute to an open-source project, you can just work in your own repo and work together with other people, just as if it were a real open-source project.
The good part is, you can pull any improvements from the original repo that they publish under A/GPL, but they can't pull from you if they want to re-license their code later. So the community fork will always be ahead in features and bug fixes compared to the CLA-crippled repo.
How do you explain the success of software projects such as the Linux kernel?
> MIT and Apache and liberal licenses liberate ideas for reuse and practical application in the real world.
No. The likes of MIT and Apache allow for royalty-free commercial use of third party software. Aka free labour.
The likes of GPL arguably have a greater impact on the whole concept of reuse because they require consumers to also be reusable by third parties.
It's ok if your goal is to just use someone else's projects for your own commercial benefit. I do that all the time. However, it's not right to try to frame it as something else.
Outside of software, copyleft is much closer to how the world works. In the real world, nothing is free, but reciporical agreements are common. Something like a free trade agreement is kind of like a copyleft license.
> Forking something from MIT to GPLv3 is disappointing, from this standpoint.
How can you both find that disappointing and support MIT licenses at the same time? Feels like a contradiction. The entire premise of a BSD/MIT license is that people should have the right to do this.
One way to think of copyleft is it neutralises copyright. Permissive licences do not do that. People who support free software do not believe software copyright should be part of the world.
The GPL depends on the validity of the premise that the creator of something gets to set the terms for how they give it to anyone else.
The terms just happen to be something other than the usual cash.
Anyone who doesn't think copyright should exist at all doesn't even specify a licence, or they specify public domain, or MIT, or similar. By specifying GPL, you declare your right to set the terms of copying.
You can't just "not copyright" something nor can you "opt out" of copyright. Many countries don't even have a concept of public domain at all. But even if you could, if someone takes the work and alters it in any way it is immediately under copyright again.
Copyright came from a different time. It was invented due to the printing press around 500 years ago. Its purpose was essentially to protect authors (the many) from the owners of printing presses (the few). Now it's used by a few to extract money from the masses.
So, the only tool we have to effectively "opt out" of copyright is copyleft. This stops copyright being able to attach itself again, which can and does happen to public domain or permissive works. It's a funny twist, but that's the way it is. Silly laws require silly solutions.
I said what someone who did not accept the very premise of copyright would or should logically be expected to do.
They would not participate in the thing they don't recognize. If everyone else gives their works a default copyright, that's everyone else's problem, they didn't do it.
But GPL is very definitely not that.
It is not subverting or perverting or sabotaging copyright; it IS a bog-standard employment of the right to set the terms over a thing that you believe you have the right to set the terms for, and you have very definite ideas about what you want and don't want other people to be allowed to do.
When I choose GPL for something, make no mistake that the reason I do so is because those are the terms I want to enforce. I want to impose exactly the limits it imposes. That is my payment for the product. I am using copyright for its intended (or at least advertised) purpose, not trying to take a notch out of the concept of copyright because I don't believe in the validity of copyright.
I strongly think you are wrong, and I strongly disagree with your points, but I don't see why your opinion should disappear, lest this thread turn into an echo chamber.