Self-plug: last month I demoed [0] my own terminal. The goal is to dispense with traditional shells and see what happens. It generated quite a bit of hoopla, with a roughly 50-50 split in how people felt about it.
I'll keep an eye on Fui; might come in handy for future Linux builds.
[0] https://terminal.click/posts/2025/04/the-wizard-and-his-shel...
It’s great to see others working in this field. When I started out (nearly a decade ago now) there was a lot more resistance to these kinds of concepts than there is now.
I wanted to download it, but I see a contribution is needed first. I will see how I can convince you :)
If someone thinks they're a good fit to test experimental terminals, an email will suffice: abner at terminal dot click
(Not interested in testing, sad to say. I've spent the last fifteen years translating "I wish this shell/terminal had X feature" into "time to find a tool for X that isn't as garbage as a shell and a terminal." But I have also used Emacs for about the same length of time, and it implements its own terminal emulators anyway.)
Anyway, unless you were a happy Genera user at the time, I would like to know what terminal you used back then with color highlighting, dynamic feedback, auto-completion, transparency, and the other features...
Until 2005, I also had to put up with using Xenix, DG/UX, AIX, Solaris, HP-UX, and GNU/Linux via telnet as development servers.
Thankfully, by 1996, X-Win32 and Hummingbird came to the rescue as my X Window servers of choice, when possible.
As for all those features, you could already do most of them in 4DOS (1989).
https://en.m.wikipedia.org/wiki/4DOS
It is like asking me about electronic typewriter features when the world has moved on to digital printing.
I've seen many debates about the correct carriage return. The world does not move on uniformly.
My understanding is that modern hardware is significantly more complicated at the lowest levels and generally no longer has a dedicated framebuffer, at least not in the same sense that old hardware did.
My understanding of the memory access provided by fbdev is that it's an extremely simple API. In other words an outdated abstraction that's still useful so it's kept around.
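To give a sense of just how simple: here's a minimal sketch of my own (not lifted from the fbdev docs) that queries the mode, mmaps /dev/fb0, and fills the screen with a solid color. It assumes a 32-bpp pixel format and permission to open the device:

  /* Minimal fbdev sketch: query the mode, mmap the framebuffer,
     and fill it with a solid color. Assumes 32 bpp and permission
     to open /dev/fb0 (root or the "video" group). */
  #include <fcntl.h>
  #include <linux/fb.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void)
  {
      int fd = open("/dev/fb0", O_RDWR);
      if (fd < 0) { perror("open /dev/fb0"); return 1; }

      struct fb_var_screeninfo var;
      struct fb_fix_screeninfo fix;
      if (ioctl(fd, FBIOGET_VSCREENINFO, &var) || ioctl(fd, FBIOGET_FSCREENINFO, &fix)) {
          perror("ioctl"); return 1;
      }

      size_t len = (size_t)fix.line_length * var.yres;
      uint8_t *fb = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
      if (fb == MAP_FAILED) { perror("mmap"); return 1; }

      /* Paint every visible pixel; the 32-bpp assumption is the only
         "format" knowledge this needs. */
      for (uint32_t y = 0; y < var.yres; y++) {
          uint32_t *row = (uint32_t *)(fb + (size_t)y * fix.line_length);
          for (uint32_t x = 0; x < var.xres; x++)
              row[x] = 0x002060a0;  /* arbitrary XRGB color */
      }

      munmap(fb, len);
      close(fd);
      return 0;
  }

That's essentially the whole API: two ioctls for mode information, then one mmap.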
An example of the complexity on modern hardware is video streams using hardware-accelerated decoding. Those often won't show up in screenshots, or if they do they might be out of sync or otherwise not quite what you saw on screen, because the driver is attempting to construct a single cohesive snapshot for you where one never actually existed in the first place.
If I got anything wrong please let me know. My familiarity generally stops at the level of the various cross-platform APIs (OpenGL, Vulkan, OpenCL, etc.).
Maybe some fbdev implementations are like that, but most of them are not. They use VGA/VESA interfaces to get at real video memory and write into it. The text console also uses VGA video memory, writing character data into it.
I still wonder whether there is any way to use VGA to its full potential. Like loading sprites into video memory that isn't visible on screen and copying them into the right place on screen. VGA allowed you to copy eight 4-bit pixels by copying one byte, for example. Were these things just dropped in favor of a nicer abstraction, or are there ioctls to switch modes for reads and writes into video memory? I don't know, and was never interested enough to do the research.
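For anyone curious what that one-byte trick looked like: the VGA graphics controller has a "write mode 1" in which a read loads all four plane latches and a write copies them back out, so one byte written moves eight 4-bit pixels. A rough DOS-era sketch (needs a 16-bit real-mode compiler such as Turbo/Borland C; register values recalled from the standard VGA documentation, so treat it as illustrative rather than tested):

  /* DOS-era sketch: VGA "latch copy" in a 16-color planar mode
     (e.g. mode 0x12, 640x480x16). Write mode 1 makes each byte
     written copy the four plane latches, i.e. 8 pixels at once. */
  #include <dos.h>

  static volatile unsigned char far *vga =
      (volatile unsigned char far *)MK_FP(0xA000, 0);

  void latch_copy(unsigned src, unsigned dst, unsigned bytes)
  {
      unsigned char latch;

      outportb(0x3CE, 5);       /* Graphics Controller index: Mode register */
      outportb(0x3CF, 1);       /* write mode 1: writes come from the latches */

      while (bytes--) {
          latch = vga[src++];   /* read fills the four plane latches */
          vga[dst++] = latch;   /* write stores them: 8 pixels per byte */
      }

      outportb(0x3CF, 0);       /* back to write mode 0 */
  }

Whether anything in Linux still exposes an equivalent, I can't say either.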
> In other words an outdated abstraction that's still useful so it's kept around.
Yes, it is kind of like that, but the outdated abstraction is realized on the video card; the kernel just gives access to it.
In Linux, fbdev is more of a fallback device for when drivers for the specific video card are not available. fbdevs are also used to give the text console more than 80x25 characters. Video acceleration or OpenGL can work on top of fbdev only as a software implementation.
Modern hardware still generally can be put into a default VGA-compatible[1] mode that does use a dedicated framebuffer. This mode is used by the BIOS and during boot until the model-specific GPU driver takes over.
the frame
> eventually be written to the screen
the buffer
It's not literally that, but in practice it acts/works like that.
I still have not figured out how to do fullscreen graphics on my Mac.
You can't; you don't have direct access to the framebuffer. Unless by "fullscreen" you just mean spanning edge to edge, in which case you can create an OpenGL or Metal view and set the fullscreen style mask.
Why is this the case? What would be the problem with allowing it?
They used to allow it, but they removed the API after 10.6.
https://developer.apple.com/library/archive/documentation/Gr...
I guess on modern macOS CGDisplayCapture() is the closest example that still works (although clearly there is still some compositing going on, since the orange-dot microphone indicator still appears, and you can get the Dock to appear over it if you activate Mission Control; I'm guessing it does the equivalent of a full-screen window but then tries to lock input somehow).
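A rough sketch of that approach in C against CoreGraphics (these calls are deprecated, and on recent macOS you may also need screen-capture permission, so take it as illustrative):

  /* Sketch: capture the main display with the legacy CoreGraphics calls
     and draw into the context they hand back. Deprecated since 10.14 but
     still present. Build with: cc capture.c -framework CoreGraphics */
  #include <ApplicationServices/ApplicationServices.h>
  #include <unistd.h>

  int main(void)
  {
      CGDirectDisplayID display = CGMainDisplayID();

      if (CGDisplayCapture(display) != kCGErrorSuccess)
          return 1;

      CGContextRef ctx = CGDisplayGetDrawingContext(display);
      if (ctx) {
          CGRect full = CGRectMake(0, 0,
                                   CGDisplayPixelsWide(display),
                                   CGDisplayPixelsHigh(display));
          CGContextSetRGBFillColor(ctx, 0.1, 0.1, 0.3, 1.0);
          CGContextFillRect(ctx, full);
          CGContextFlush(ctx);
      }

      sleep(5);                     /* admire the captured display */
      CGDisplayRelease(display);
      return 0;
  }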
The philosophy really creates a lot of problems that end up just frustrating users. It's a really lazy form of "security". Here's a simple example of the annoyance: if I have Private Relay enabled, then turning on my VPN causes an alert (which will not dismiss automatically). The same thing happens when I turn it off! It's a fine default, but why can't I decide I want it to dismiss automatically? This is such a silly thing to do, considering I can turn off Private Relay without any privileges and without any notifications! NOTHING IS BEING SOLVED except annoying users... There are TONS of stupid little bugs that normal people just get used to but that have no reason for existing (you can always identify an iPhone user because "i" is never capitalized...).
The philosophy only works if Apple is able to meet everybody's needs, but we're also talking about a company that took years to integrate a flashlight into its smartphone. It's okay that you can't create a perfect system. No one expects you to. It is also okay to set defaults and think about design; that's what people love you for (design). But none of this needs to come at a cost to "power users." I understand we're a small portion of users, but you need to understand that our efforts make your products better. Didn't see an issue with something? That's okay, you're not omniscient. But luckily there are millions of power users who will discover the problems and provide solutions. Even if these solutions are free, you still benefit! Because it makes people's experiences better! If it is a really good solution, you can directly integrate it into your product too! Everybody wins! Because designing computers is about designing an ecosystem. You design things people can build on top of! That's the whole fucking point of computers in the first place!
Get your head out of your ass and make awesome products!
[0] If any...
[another example]: Why is it that my AirPods can only connect to one device, constantly switching between my MacBook and iPhone? This often creates unnecessary notifications. I want magic from my headphones. There's no reason I shouldn't be able to be playing Spotify on my computer, walk away from it with my phone in my pocket, and have the source switch to my phone. I can pick up my phone and manually change the source. But in reality what happens is I pick up my phone and Spotify pauses because my headphones switched to my iPhone... If you open up access then you bet these things will be solved. I know it means people might then be able to seamlessly switch from their MacBook to their Android, but come on, do you really think you're going to convert them by making their lives harder? Just wait till they try an iPhone! It isn't a good experience... I'm speaking from mine...
All software/hardware in the system (Core Animation, Retina scaling, HDR, Apple Silicon's GPUs) assumes an isolated/composited display model. A model where a modern rendering pipeline is optional would add major complexity, and would also prevent Apple from freely evolving its hardware, system software, and developer APIs.
Additionally, a mandatory compositor blocks several classes of potential security threats, such as malware that could suppress or imitate system UI or silently spy on the user and their applications (at least without user permission).
For another, security. Your application doesn’t get to scrape other applications’ windows and doesn’t get to put stuff in them, and they don’t get to do that to yours, unless properly entitled and signed. You can self-sign for development, but for distribution, signing means malware can be blocked by having its signature revoked.
Sometimes a distinction is made between "TTY" (teletype: traditionally separate hardware, now built into the kernel, accessed via Ctrl-Alt-F1, at /dev/tty1 etc.) and "PTY" (pseudo-teletype: terminal emulator programs running under X11 etc., at /dev/pts/0 etc. nowadays; confusingly, the traditional paths used /dev/ptyxx for masters and /dev/ttyxx for slaves, which is not the same as the TTY-vs-PTY distinction here). "VT" or "virtual console" are unambiguous and more commonly used terms than "TTY" in this sense. Serial, parallel, and USB terminals don't really fit into this distinction properly: even though they're close to the original definition of a teletype, they don't support the VT APIs.
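To make the PTY half concrete, here's a minimal sketch of what a terminal emulator does under the hood: allocate a master/slave pair, print the slave path (/dev/pts/N on Linux), and run a program on the slave end (plain POSIX calls; error handling kept minimal):

  /* Sketch: allocate a pseudo-terminal pair the way a terminal emulator
     does, run a command on the slave side, and relay its output. */
  #define _XOPEN_SOURCE 600
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/types.h>
  #include <unistd.h>

  int main(void)
  {
      int master = posix_openpt(O_RDWR | O_NOCTTY);
      if (master < 0 || grantpt(master) < 0 || unlockpt(master) < 0) {
          perror("pty setup");
          return 1;
      }
      fprintf(stderr, "slave side is %s\n", ptsname(master));

      pid_t pid = fork();
      if (pid == 0) {                      /* child: attach to the slave end */
          setsid();                        /* new session, so the pty becomes our controlling terminal */
          int slave = open(ptsname(master), O_RDWR);
          dup2(slave, 0); dup2(slave, 1); dup2(slave, 2);
          execlp("tty", "tty", (char *)NULL);   /* prints /dev/pts/N */
          _exit(127);
      }

      /* parent: whatever the child writes to the slave shows up on the master */
      char buf[4096];
      ssize_t n;
      while ((n = read(master, buf, sizeof buf)) > 0)
          write(STDOUT_FILENO, buf, (size_t)n);
      return 0;
  }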
There are many things the kernel provides in a real TTY (raw keyboard, beeper control, VCS[A] buffers, etc.). "The" framebuffer at /dev/fb0 etc. is usually swapped out when the TTY is (assuming proper keyboard/event handling, which is really icky), so it counts. Actually it's more complicated than "framebuffer" since X11/Wayland actually use newer abstractions that the framebuffer is now built on top of (if I understand correctly; I've only dabbled here).
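As an aside on those VCS[A] buffers: /dev/vcsa1 etc. expose the text console's character/attribute cells behind a tiny header, so a "text screenshot" of VT 1 is roughly this sketch (layout per vcs(4); needs read access to the device):

  /* Sketch: dump the text contents of /dev/vcsa1 (virtual console 1).
     Per vcs(4): 4 header bytes (rows, cols, cursor x, cursor y), then
     rows*cols pairs of (character, attribute). Needs read permission. */
  #include <stdio.h>

  int main(void)
  {
      FILE *f = fopen("/dev/vcsa1", "rb");
      if (!f) { perror("/dev/vcsa1"); return 1; }

      unsigned char hdr[4];
      if (fread(hdr, 1, 4, f) != 4) return 1;
      int rows = hdr[0], cols = hdr[1];

      for (int r = 0; r < rows; r++) {
          for (int c = 0; c < cols; c++) {
              int ch = fgetc(f);   /* character cell */
              fgetc(f);            /* attribute byte, ignored here */
              putchar(ch > 0 ? ch : ' ');
          }
          putchar('\n');
      }
      fclose(f);
      return 0;
  }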
Note that there are 3 special devices the kernel provides:
/dev/tty0 is a sort-of alias for whichever /dev/tty1 etc. is currently active (see the sketch after this list). This is not necessarily the one the program was started on; getting that is very hacky.
/dev/tty is a sort-of alias for the current program's controlling terminal (see credentials(7)), which might be a PTY or a TTY or none at all.
/dev/console is where the kernel logs stuff and single-user logins are done, which by default is /dev/tty0 but you can pass console= multiple times (the last will be used in contexts where a single device is needed).
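For example, asking which VT is currently active (i.e. what /dev/tty0 points at right now) is a single Linux-specific ioctl; this sketch assumes permission to open the console device:

  /* Sketch: ask /dev/tty0 which virtual console is currently active.
     Linux-specific (VT_GETSTATE from <linux/vt.h>); typically needs
     root or suitable group permissions. */
  #include <fcntl.h>
  #include <linux/vt.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <unistd.h>

  int main(void)
  {
      int fd = open("/dev/tty0", O_RDONLY);
      if (fd < 0) { perror("open /dev/tty0"); return 1; }

      struct vt_stat vts;
      if (ioctl(fd, VT_GETSTATE, &vts) < 0) { perror("VT_GETSTATE"); return 1; }

      printf("active VT: /dev/tty%d\n", vts.v_active);
      close(fd);
      return 0;
  }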
This is probably wrong but hopefully informative.

If you're ever bored, from a TTY, type
sudo dd if=/dev/urandom of=/dev/fb0
This provides a nifty demonstration of how both the framebuffer and urandom work.
You can also take a sort of "screenshot" in a TTY by typing dd if=/dev/fb0 of=./shot.fb
and then view it by flipping those filenames around, so that shot.fb is now the input and /dev/fb0 is now the output.
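If you want that dump in a format a normal image viewer understands, the same idea in C: read the mode, then convert pixels to a PPM. This sketch assumes a 32-bpp XRGB layout, which is common but not guaranteed (check fb_var_screeninfo's red/green/blue offsets otherwise):

  /* Sketch: write /dev/fb0 out as a PPM screenshot on stdout.
     Assumes 32 bpp laid out as XRGB; run from a VT as e.g.
       ./fbshot > shot.ppm */
  #include <fcntl.h>
  #include <linux/fb.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <unistd.h>

  int main(void)
  {
      int fd = open("/dev/fb0", O_RDONLY);
      if (fd < 0) { perror("open /dev/fb0"); return 1; }

      struct fb_var_screeninfo var;
      struct fb_fix_screeninfo fix;
      if (ioctl(fd, FBIOGET_VSCREENINFO, &var) || ioctl(fd, FBIOGET_FSCREENINFO, &fix)) {
          perror("ioctl"); return 1;
      }

      unsigned char *line = malloc(fix.line_length);
      printf("P6\n%u %u\n255\n", var.xres, var.yres);   /* PPM header */

      for (uint32_t y = 0; y < var.yres; y++) {
          if (read(fd, line, fix.line_length) != (ssize_t)fix.line_length) break;
          for (uint32_t x = 0; x < var.xres; x++) {
              uint32_t px;
              memcpy(&px, line + x * 4, 4);   /* the 32-bpp assumption lives here */
              putchar((px >> 16) & 0xff);     /* R */
              putchar((px >> 8) & 0xff);      /* G */
              putchar(px & 0xff);             /* B */
          }
      }
      free(line);
      close(fd);
      return 0;
  }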