Cool project! Setting up a quick local HTML server can be annoying.
Alas, it looks like it's web/Electron based. :/ Downloaded it, and yep: 443.8 MB on macOS. The Linux one is a bit better at 183.3 MB.
Electron really should get a kickback from disk manufacturers! ;)
Shameless plug: I've been working on an HTML-inspired lightweight UI toolkit because I'm convinced we can make these sorts of apps, and they should only be ~10-20 MB [1], with nice syntax, animation, theming, etc. I'm finally making a suite of widgets. Maybe I can make a basic clone of this! Bet I could get it in < 10 MB. :)
I use Python for serving my local static-site development, with this custom little Bash wrapper script I wrote:
    #!/usr/bin/env bash
    # Serve a directory over HTTP; usage: http-server [PORT] [DIRECTORY]
    set -e; [[ $TRACE ]] && set -x

    port=8080
    dir="."

    if [[ "$1" == "-h" || "$1" == "--help" ]]; then
        echo "usage: http-server [PORT] [DIRECTORY]"
        echo "  PORT       Port to listen on (default: 8080)"
        echo "  DIRECTORY  Directory to serve (default: .)"
        exit 0
    fi

    if [[ -n "$1" ]]; then
        port=$1
    fi
    if [[ -n "$2" ]]; then
        dir=$2
    fi

    # --protocol requires Python >= 3.11; HTTP/1.1 enables keep-alive
    python3 -m http.server --directory "$dir" --protocol HTTP/1.1 "$port"
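For example, `http-server 3000 ./public` serves ./public on port 3000, and a bare `http-server` serves the current directory on 8080.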
It's absolutely (almost) correct! HTTP/0.9 does not require you to send back a status code or any headers. Some modern web servers even recognise a lone "GET /" to mean HTTP/0.9 and will respond accordingly.
This is exactly my point - it successfully accomplishes a very specific task, in a way that is fragile and context dependent, and completely fails to handle any errors or edge cases, or reckon with any complexity at all.
For anyone baffled by this: This works because HTTP/0.9 (just called "HTTP" at the time) worked in an extremely simple way, and browsers mostly retained compatibility for this.
HTTP/0.9 web browser sends:

    GET /

Netcat sends:

    <!doctype html>
    ...
Nowadays a browser will send `GET / HTTP/1.1` and then a bunch of headers, which a true HTTP/0.9 server may be able to filter out and ignore, but of course this script will just send the document and the browser will still assume it's a legacy server.
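The one-liner in question presumably looks something like this (a sketch; GNU/traditional netcat wants `nc -l -p 8080` instead of the BSD-style `nc -l 8080` used here):

    # loop because nc exits after serving a single connection
    while true; do printf '<!doctype html><h1>hello</h1>' | nc -l 8080; done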
Random tangent: it appears that most of Electron's funding actually comes from the German government.
The Sovereign Tech Agency, under the Federal Ministry for Economic Affairs and Climate Action, funds OpenJS, specifically for improving the state of open source in JavaScript.
I like and understand the paradigm of using web technologies to build GUI apps. I have yet to find any desktop framework that even comes close to the DX of using web tech.
I recently explored both Tauri [1] and Wails [2].
Wails especially is lots of fun. The simplicity of Go paired with the fast iteration I get from using React just feels awesome.
And the final application is ~10 MB in size!
In SW development you need to make compromises. You can't have all of: quality, performance, memory/CPU/disk efficiency, security, development speed, low effort, cross-platform support, accessibility, all the business features, etc. Which corners would you cut? You mention native tech, but you seem to ignore the enormous tax in development time, knowledge, etc. So let's say you aim for the best UX. Are you ready to sacrifice business features, or any other aspect? I'm not advocating for crappy UI/UX, but I would rather use an Electron app that has all the features I need than a native app that doesn't.
I have thought about this and I'm not sure that Electron really is to blame here.
It makes building an application accessible, which means that there will be lots of apps built with it, many of which won't be any good.
Just like many native apps will also be horrible in terms of UX.
Good apps are good. And I believe that it's entirely possible to build an amazing app with Electron.
Although not everyone might agree, IMO VSCode is a great example of that.
I fully agree with this. Electron is hated here as if it were the source of all evil. When Electron came along (and node-webkit as well), there were very limited options for creating fully cross-platform apps easily. I tried multiple ways (including Qt), but it was very cumbersome and progress was slow. With Electron, not only was I able to create a useful app quickly, I could reuse almost all the code on the web. OK, it takes space and consumes memory inefficiently, but thanks to Electron a lot of useful apps exist that otherwise wouldn't, or would be much worse. Today Tauri or something else might be a better choice, but hating Electron seems really out of place.
I dunk on Electron, but it's a love-hate sort of thing. There are some great apps out there thanks to Electron. VSCode is great. This HTTP server is well done and looks handy!
Personally, though, I'm just greedy. I want the best of Qt and Electron. Figuro is my attempt at realizing that. ;)
Interesting links, though it seems more due to SwiftUI than anything. SwiftUI still seems rough compared to good ole Cocoa. I also remember when Electron apps ate 100% CPU due to blinking text cursors.
For what it's worth, my Figuro library does pretty well for live-updating text and scrolling! And I haven't even optimized layouts yet; it currently re-lays out the entire tree every frame.
I have tried Flutter and liked it for mobile development. Maybe I should give it a shot for desktop.
Though I believe those that dislike Electron and the like for not being native would also have a bone to pick with Flutter.
Even the full-featured TLS/HTTPS forward proxy I use, linked with bloated OpenSSL, is still less than 10 MB as a static binary: 8.7 MB. When linked against WolfSSL, it's only a 4.6 MB static binary. The proxy can serve small, static HTML pages, preloading them into memory at startup.
lighttpd is awesome for a quick, local server on Ubuntu. One command installs it. You tell the firewall to allow it. Then, just move your files into the directory. Use a CDN, like BunnyCDN, for HTTPS and caching.
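On Ubuntu that whole flow is roughly the following (a sketch; /var/www/html is the default docroot on Debian/Ubuntu, and ./site is a placeholder for your files):

    sudo apt install lighttpd           # one command installs it
    sudo ufw allow 80/tcp               # tell the firewall to allow it
    sudo cp -r ./site/* /var/www/html/  # move your files into the directory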
It's not only easy: it runs (or ran) huge sites in production.
Sorta! It didn't start out that way, but I've been building more from HTML over time while keeping it fast and lightweight. I've cherry-picked a subset of HTML/CSS, like CSS Grid, which adds a lot of layout power without tons of the usual HTML hacks.
I want to try adding a JavaScript runtime with a simple DOM built on Figuro nodes instead. But there are some issues between Nim's memory management and QuickJS.
Am I reading this wrong or does this almost open up any server bound to localhost to the outside?
I think proxy_pass will forward traffic even when the root and try_files directives fail because the junction/symlink doesn't exist? And "listen 80" binds on all interfaces, doesn't it, not just on localhost?
Is this clever? Sure. But this is also the thing you forget about in 6 months and then when you install any app that has a localhost web management interface (like syncthing) you've accidentally exposed your entire computer including your ssh keys to the internet.
Nothing prevents you from adding an IP whitelist and/or basic auth to the same configuration. That's what I do in all my nginx configurations to be extra careful, so nothing slips by accident.
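In nginx that looks something like the following (a sketch; the CIDR range, htpasswd path, and upstream port are placeholders):

    location / {
        allow 192.168.1.0/24;                       # IP whitelist: trusted range only
        deny  all;
        auth_basic           "restricted";          # plus HTTP basic auth
        auth_basic_user_file /etc/nginx/.htpasswd;  # generated with htpasswd
        proxy_pass http://127.0.0.1:8384;           # e.g. a local admin UI
    }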
I got something similar running with nginx myself, with the purpose of getting access to my internal services from outside. The main idea here is that the internal services are not on the same machine this nginx is running on, so it passes requests along to the needed server on the internal network. It goes like this:
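(A sketch of the idea; example.com, the cert paths, and the resolver address are placeholders:)

    server {
        listen 443 ssl;
        server_name ~^(?<service>[a-z0-9-]+)\.example\.com$;  # capture the subdomain
        ssl_certificate     /etc/nginx/certs/wildcard.pem;
        ssl_certificate_key /etc/nginx/certs/wildcard.key;

        location / {
            resolver 10.0.0.53;                   # internal DNS server
            proxy_pass http://$service.internal;  # forward to the matching host
        }
    }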
Basically, any regex-matched subdomain is extracted, resolved as $service.internal, and proxy-passed to it. For this to work, of course, any new service has to be registered in the internal DNS. Adding whitelisted IPs and basic auth is also a good idea (which I have, just removed from the example).
That's why I switched to Caddy for most of my needs. I create one Caddy server template, and then instantiate it as a new host with one line per server.
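The template trick looks roughly like this in a Caddyfile (a sketch; the snippet-argument syntax `{args[0]}` needs a reasonably recent Caddy 2.x, and the hostnames/ports are placeholders):

    # reusable server template, instantiated per host below
    (service) {
        reverse_proxy {args[0]}
    }

    syncthing.example.com {
        import service 127.0.0.1:8384
    }
    grafana.example.com {
        import service 127.0.0.1:3000
    }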
This looks nice with a friendly UI. I've been very happy with Caddy[1], but this seems like something I might recommend to someone that is new to the web environment.
> native UI libraries need to step up their game in terms of approachability.
GNOME does this; you can develop apps in TypeScript.
But they started to migrate some of their own apps to TypeScript and immediately received backlash from the community [0]. Although, granted, the Phoronix forums can be quite toxic.
My observation is that there is just a big disconnect between younger devs who just want to get the job done, and the more old-school community that cares about resource efficiency. Both are good intentions that I can understand, but they clash badly. This unfortunately hinders progress on this point.
I agree that this is, at least often, a case of where your roots lie.
What's most shocking to me is that the likes of Apple and Microsoft don't seem to be interested in, or capable of, building an actually good framework.
I feel like Microsoft tried with .NET MAUI, but that really isn't a viable choice if you go by developer sentiment.
TypeScript is a Microsoft project, so they did build an actually good framework. The Swift work that Apple's doing is pretty cool, though I haven't used it in anger.
I come from an async/lock-free C++ then Rust background, but am using TypeScript quite a bit these days. Rust is data-race free because of the borrow checker. Swift async actors are too, by construction (similar to other actor-based frameworks like Orleans). TypeScript is trivially data-race free (only one thread). Very few other popular languages provide that level of safety these days. Golang certainly does not.
I was benchmarking some single-threaded WASM Rust code, and couldn't figure out why half the runs were 2x slower than the others. It turns out I was accidentally running native half the time, so those runs were faster. I'm shocked the single-core performance difference is so low.
Anyway, as bad as JavaScript used to be, TypeScript is actually a nice language with a decent ecosystem. Like Rust and C++, its type system is a joy to work with, and it greatly improves productivity vs. languages like Java, C#, etc.
It is more a side effect of JavaScript bootcamp programming without learning anything else.
I have been coding since 1986, nowadays most of the UIs I get paid to work on are Web based, yet when I want to have fun in side projects I only code for native UIs, if a GUI is needed.
Want to code like VB and Delphi? Plenty of options are available, and yes, they do scalable layouts, just like they already did back in the 1990s, for anyone that actually bothered to read the programming manuals.
Yes, I've dabbled in GTK, wxWidgets, and several other systems. All of them are meh.
The big player these days seems to be web-based (Electron and friends), though the JVM stack with a native theme for Win/Mac is certainly usable in an environment where you can rely on Java being around.
I think the best option would be some kind of cross-application client-side HTML etc. renderer that apps could use for their user interaction. We could call it a "browser". That avoids the problem of 10 copies of the whole electron stack for 10 apps.
Years ago, Microsoft had their own version of this called HTA (HTML Application), where you could delegate UI to the built-in browser (IE) and get native-looking controls. Something like that but cross-platform would be nice, especially as one motivation for this project is that Chrome apps are no longer supported, so "Web Server for Chrome" is going away. So the "like Electron but most of the overhead is handled by Chrome" option is actively being discontinued.
The real problem is that frontend work with anything else is such a pain in the ass.
You want to write separate versions for macOS, Linux, and Windows Visual .NET#++, maintain 3 separate source trees in 3 languages, sync all their features, and deal with every bug 3 times?
It does. It also includes a dozen other things beyond what that one-liner would do. Keep in mind, if it fits with what you're trying to test and how you're trying to develop, just doing things on http://localhost will be treated as a secure origin in most browsers these days.
There does seem to be a weird limitation that you can't enable both HTTP and HTTPS on the same port, for some reason. That should be easy enough to code a fix for, though.
It's the same transport (TCP, assuming something like HTTP/1.1), and trying to mix HTTP and HTTPS seems like a difficult thing to do correctly and securely.
NGINX detects attempts to use http for server blocks configured to handle https traffic and returns an unencrypted http error: "400 The plain HTTP request was sent to HTTPS port".
Doing anything other than disconnecting or returning an error seems like a bad idea though.
Theoretically it would be feasible with something like STARTTLS, which allows upgrading a connection (it's part of SMTP, and maybe IMAP), but browsers do not support this as it is not part of standard HTTP.
It actually is part of standard HTTP [0], just not part of commonly implemented HTTP.
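For reference, that mechanism (RFC 2817, "Upgrading to TLS Within HTTP/1.1") looks roughly like this on the wire:

    GET / HTTP/1.1
    Host: example.com
    Upgrade: TLS/1.0
    Connection: Upgrade

    HTTP/1.1 101 Switching Protocols
    Upgrade: TLS/1.0, HTTP/1.1
    Connection: Upgrade

with the TLS handshake then following on the same connection.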
The basic difference between SMTP and HTTP in this context is that email addresses do not contain enough information for the client to know whether it should be expecting encrypted transport or not (hence MTA-STS and SMTP/DANE [1]), so you need to negotiate it with STARTTLS or the like, whereas the https URL scheme tells the client to expect TLS, so there is no need to negotiate, you can just start in with the TLS ClientHello.
In general, it would be inadvisable at this point to try to switch hit between HTTP and HTTPS based on the initial packets from the client, because then you would need to ensure that there was no ambiguity. We use this trick to multiplex DTLS/SRTP/STUN and it's somewhat tricky to get right [2] and places limitations on what code points you can assign later. If you wanted to port multiplex, it would be better to do something like HTTP Upgrade, but at this point port 443 is so entrenched, that it's hard to see people changing.
> In general, it would be inadvisable at this point to try to switch hit between HTTP and HTTPS based on the initial packets from the client, because then you would need to ensure that there was no ambiguity.
Exactly my original point. If you really understand the protocols, there is probably zero ambiguity (I'm assuming here). But with essentially nothing to gain from supporting this, it's obvious to me that any minor risk outweighs the (lack of) benefits.
You can in fact run HTTP, HTTPS (and SSH and many others) on the same port with sslh (it's in the Debian repos). sslh will forward incoming connections based on the protocol detected in the initial packets. Probes for HTTP, TLS/SSL (including SNI and ALPN), SSH, OpenVPN, tinc, XMPP, and SOCKS5 are implemented, and any other protocol that can be tested using a regular expression can be recognised.
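An invocation looks something like this (a sketch; the addresses and ports are placeholders, check the sslh man page for the exact flags):

    sslh --user sslh --listen 0.0.0.0:443 \
         --tls 127.0.0.1:4443 \
         --ssh 127.0.0.1:22 \
         --http 127.0.0.1:8080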
How is it more of a security issue than exposing the same services on other ports? Seems to me it's actually better, don't-call-it-security-through-obscurity?
I think what I had seen before was replacing the HTTP variant of the "bad request" page with a redirect to the HTTPS base URL, something akin to https://serverfault.com/a/1063031. Looking at it now, this is probably more "hacky" than it'd be worth and, as you note, probably comes with some security risks (though for a local development app like this maybe that's acceptable, just as using plain HTTP otherwise is), so it does make sense that it's not an included feature after all.
In general, the way that works is: a user navigates to http://contoso.com, which implicitly uses port 80. The Contoso server/CDN listening on port 80 redirects them, through whatever means, to https://contoso.com, which implicitly uses 443.
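With nginx, for example, the port-80 side of that is typically just (a sketch; contoso.com standing in for the real name):

    server {
        listen 80;
        server_name contoso.com;
        return 301 https://$host$request_uri;  # permanent redirect to HTTPS
    }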
I don't see the value in both being on the same port. Why would I ever want to support this when http: or https: essentially defines the default port?
Now of course someone could go to http://contoso.com:443, but WHY would they do that? Again, I'm failing to see a reason for this.
The "why/value" is usually in clearly handling accidents in hardcoding connection info, particularly for local API/webdev environments where you might pass connection information as an object/list of parameters rather than normal user focused browser URL bar entry. The upside is a connection error can be a bit more opaque than an explicit 400 or 302 saying what happened or where to go instead. That's the entire reason webservers tend to respond with an HTTP 400 in such scenarios in the first place.
Like I said though, once I remembered this was more a "hacky" type solution to give an error than built-in protocol-upgrade functionality, I'm not so sure the small amount of juice would actually be worth the relatively complicated squeeze for such a tool anymore.
I use voidtools Everything on Windows for instant file lookup. It has an HTTP server built in. Whenever the browser complains about a feature only being available via a webserver URL, not a local file, it comes in handy. Open the Everything webserver, enter the name of the file, and click.
Tailscale does this: you can serve a port on your tailnet, or you can serve a directory of files, or you can expose it to the internet. Comes with HTTPS. It's pretty neat.
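From the CLI that's roughly the following (a sketch from memory; the serve/funnel syntax has changed between Tailscale releases, so check `tailscale serve --help`):

    tailscale serve 3000    # share localhost:3000 over HTTPS on your tailnet
    tailscale funnel 3000   # the same, but exposed to the public internet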
It looks nice and friendly, but for developers I can recommend exploring caddy[1] or nginx[2]. It's a useful technology to have worked with, even if they're ultimately only used for proxying analytics.
Second that. I don't really see a reason not to run a proper web server, actually, especially if one does web development and would use it for multiple projects anyway.
I often have the requirement, during development cycles, to bring up a static webserver. After trying several options, I always happily come back to the PHP built-in webserver:
    php -S localhost:8080
Many helpful options are available, just check the docs...
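For instance, `-t` sets the document root (a sketch; public/ is a placeholder):

    php -S localhost:8080 -t public/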
Seems like most of the people in these comments have missed the point, as while I also lament the use of Electron, pasting one-liner scripts does not obviate the usefulness of this project. Clearly the point of this project is not just about setting up a simple webserver, but to provide a quick and easy gui to configure that webserver, and there's a fair amount that it allows you to configure. Your one-liner that does nothing but pipe static files as a response is not that. If that's all you need, great, then this project is not for you and that's okay.
Missed opportunity to actually serve the UI via the webserver in the first place, as we used to do 25 years ago, like with the IIS web management UI in my default browser.
for a webserver with an awesome GUI served through the web i recommend roxen. it is not a web framework requiring you to write code to serve content; it serves static files out of the box and lets you add dynamic content on top of that. and it can outperform apache and other webservers depending on your workload.
to anyone concerned about needing to learn a new language, unless you want to build very custom dynamic sites you don't. to conveniently serve static files you won't ever have to touch the code. just like you wouldn't do that with apache or nginx.
and even if you do want something dynamic, a lot of dynamic features can be had by embedding custom tags in html. still no code required.
if you have concerns about pike as a language for the server implementation itself, pike is a very performant language with a long history of being used in high-profile sites and services. both pike and roxen go back to the early 90s.
and if you do want to create custom features and need help you can hire me. i am looking for work (pike,js,ts,python,php,ruby,go,... ;-)
Lol. I used to work at an IIS shop. 100% of engineering thought using it was a bad idea. Any one of us could configure apache correctly in a few minutes, but we ultimately had to hire a full time Microsoft guru to keep IIS (and the rest of the ecosystem that implies) running. He was a pleasure to work with, but it wasn’t clear why his job existed.
Wow, this is a "full circle" moment. I can distinctly remember installing my first WAMP (Windows, Apache, MySQL, PHP) stack back in ~03 when I was learning to program. It was all easy point-and-click installers. I think I may have had to edit a config file to enable PHP, but that was it.
I wrote the original version of this "simple web server" app (https://chromewebstore.google.com/detail/web-server-for-chro...) because the built-in Python HTTP server is a bit buggy. It would hang on some connections when loading a webpage with a lot of small assets.
I was surprised how many people found it useful. More so when Chrome web apps were supported on other platforms (mac/linux/windows).
127.0.0.1 means "self" in IP. Presumably that means that if you browse to it from your computer it will work, but from your phone it will not.
I usually do the opposite - 0.0.0.0 - which allows connections from any device.
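With the Python one-liner, that's the difference between these two (loopback-only vs. all interfaces):

    python3 -m http.server 8080 --bind 127.0.0.1  # reachable only from this machine
    python3 -m http.server 8080 --bind 0.0.0.0    # reachable from your phone too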