All of a sudden one day, I was cut off from all my music, by the creators of the iPod!
I switched away from Apple Music and will never return. 15 years of extensive usage of iTunes, and now I will never trust Apple with my music needs again. I'm sure they don't care, or consider the move a good tradeoff for their user base, but it's the most user-hostile thing I've ever experienced in two decades on Apple platforms.
Add music on macOS, and on your phone. Then sync.
RESULT: one overwrites the other, regardless of any settings.
You no longer have the audio you formerly owned.
It has nothing installed but VLC.
Life is too short to deal with the ridiculous interoperability of (simple music files) and (any modern computing platform).
I still remember spending days inside during summers as a kid, downloading, cataloging and tagging MP3 files while others were probably experiencing life haha.
But I do long for the days where I could just press 'play' and I would hear music, without waiting for Spotify's Electron crap to finish loading its 'optimistic UI', declining 10 cookie popups and agreeing to upload the soul of my unborn kids to Daniel Ek's private cloud.
Software using libparanoia and lame or ffmpeg is free. The very first time you use it, you might spend 30 minutes figuring things out. It generally takes 3-8 minutes to rip and encode a full CD these days.
The market for CDs and used CDs is quite open. $10-15 for an album is quite common. For those not aware, an album is usually 8-20 songs, so roughly the same $0.99 price as for individual tracks -- but without DRM, and with physical backup.
An awful lot of artists have their own shops; frequently, if you buy the CD from there, you also get a digital copy in WAV, FLAC or MP3 immediately.
I make my music library available as a read-only NFS export in my house network, and remotely via various bits of software to members of my family.
A lot of music is still available for sale, if not through Bandcamp then through stores like Qobuz[1]. Sometimes I have to look around for a bit to find a store that sells what I'm looking for, but I can usually find it on Bandcamp or there. Occasionally it's not for sale, in which case I don't feel bad about torrenting or downloading from YouTube, but that's rare.
> Life is too short to deal with (the ridiculous interoperability of simple music files) and (any modern computing platform).
iCloud: $1000 in Apple's pocket
Apple didn’t communicate that well and many folks lost stuff, particularly if they are picky about recordings.
All of the CD collection stuff has degraded everywhere as the databases of tracks have been passed around to various overlords.
Otherwise, you sync with iTunes/Music.app or manage outside of Apple like we did from 2000 till whenever match came out.
My wife had an extensive collection of recordings that aren’t available on Apple Music and never will be, and they’ve flawlessly synced since Match came out like 20 years ago.
I think the complaints about albums are 100% legit. But a lot of the lost data/miscategorized albums are likely more related to old farts like me forgetting that many of my “CD rips” may have fallen off the Napster truck 30 years ago.
When it comes to purchased music, Apple has been pretty awesome to its customers. (Apple Music… meh) Buying songs has been a sideshow for what… a decade? Unlike many providers, it’s all still there humming away. Every person involved is long retired, yet it’s still alive.
Oh and all my lossless got shit on.
Fuck me I guess??
I couldn't be bothered to spend time manually selecting stuff to download back then. It was offensive to even ask that I spend 30 minutes manually correcting a completely unnecessary mistake on their part. And this was during a really, really bad time in interface design, with the flat UI idiocy all the rage, and when people were abandoning all UI standards that gave any affordances at all.
If I'm going to go and correct Apple's mistake, I may as well switch to another vendor and do it. Which is what I did. I'm on Spotify to this day, even though it has many of the same problems as Apple Music. At least Spotify had fewer bugs at the time, and they hadn't deleted music off my device.
Good riddance and I'll never go back to Apple Music.
Ideally, apps shouldn't detect if you have internet and then act differently. They should pull up your cached/offline data immediately and then update/sync as attempted connections return results.
The model where you have offline data but you can't even see your playlists because it wants to update them first because it thinks you have internet is maddening.
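Roughly the pattern that implies, as a minimal Python sketch (the cache path and the fetch_remote/render callbacks are hypothetical stand-ins for whatever your app actually uses):

    import json
    import threading
    from pathlib import Path

    CACHE = Path("playlists_cache.json")  # hypothetical local cache location

    def load_playlists(fetch_remote, render):
        """Render cached data immediately, then refresh in the background."""
        if CACHE.exists():
            render(json.loads(CACHE.read_text()))  # show stale data right away

        def refresh():
            try:
                fresh = fetch_remote()             # may be slow or fail while offline
            except OSError:
                return                             # keep showing the cached copy
            CACHE.write_text(json.dumps(fresh))
            render(fresh)                          # update the UI once sync lands

        threading.Thread(target=refresh, daemon=True).start()

The point is simply that the network call never sits between the user and data you already have.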
Hitting the back button in my browser on a static page redownloads it, along with the accompanying however many MB of framework...
Now I open links in new tabs always.
(FB Lite "app" - as well as mobile FB site - are notorious offenders there)
I can (and do) find things around the house that don't depend on a screen, but it's annoying to know that I don't really have much of a backup way to access the internet if the power is out for an extended period of time. (Short of plunking down for an inverter generator or UPS I suppose.)
Or you could use a Raspberry Pi or similar and a USB WiFi adapter (make sure it supports AP mode) and a battery bank, for an "emergency" battery-operated WiFi router that you'd only use during power outages.
EDIT: Unless your ISP's CPE (modem/whatever) runs on 5 volts, you'd need more than just a USB power bank to keep things going. Maybe a cheap Amazon boost converter could get you the extra voltage you need.
Cellular, FTTH and DSL do usually have battery backup though, so they should continue to work with a UPS.
I run my router + my RPi server off-grid with ~1kWh of usable (lead-acid) battery capacity.
So with those and my laptop's battery, I sailed into our last couple of minor daytime power cuts without even noticing. Sounds of commotion from neighbours alerted me that something was up!
If I have a podcast already downloaded, but I am on an iffy connection, Spotify will block me from getting to that podcast view while it tries to load the podcast view from the web instead of using downloaded data.
I frequently put my phone in airplane mode to force spotify into offline mode to get content to play.
Turns out, it's really tough to do accurately. The main reason is that the public datasets are a mess. For example, the internet availability data is in neat hexagons, while the census demographic data is in weird, irregular shapes that don't line up. Trying to merge them is a nightmare and you lose a ton of detail.
So our main takeaway, rather than just being a pretty map, was that our public data is too broken to even see the problem clearly.
I wrote up our experience here if anyone's curious: https://zeinh.ca/projects/mapping-digital-divide/
I think in so many fields the datasets are by far the highest-impact thing someone can work on, even if it seems a bit mundane and boring. Basically every field I've worked in struggles for lack of reliable, well-maintained and open-access data, and when they do get it, it usually sets off a massive amount of related work (I've seen this happen in genetics, and in ML of course once we got ImageNet and also started getting social media text instead of just old newspaper corpuses).
That would definitely be advice I'd give to many people searching for a project in a field -- high quality data is the bedrock infrastructure for basically all projects in academic and corporate research, so if you provide the data, you will have a major impact, pretty much guaranteed.
So anyways, I bring this up with my local government in Chicago and they recommend that I switch to AT&T Fiber because it's listed as available at my address in the FCC's database. Well, I would love to do that except that
1. The FCC's database was wrong and rejected my corrections multiple times before AT&T finally ran fiber to my building this year (only 7 years after they claimed that it was available in the database despite refusing to connect to the building whenever we tried).
2. Now that fiber is in the building, their fiber ISP service can't figure out that my address exists, or that it already has copper telephone lines run to it by AT&T themselves, so their system cannot sell me the service. I've been arguing with them for 3 months on this and have even sent them pictures of their own demarc and the existing copper lines to my unit.
3. Even if they fixed the 1st issue, they coded my address as being on a different street than its mailing address and can't figure out how to sell me a consumer internet plan with this mismatch. They could sell me a business internet plan at 5x the price though.
And that's just my personal issues. And I haven't even touched on how not every cell phone is equally reliable, how the switch to 5G has made many cell phones less reliable compared to 3G and 4G networks, how some people live next to live event venues where they can have great mobile connections 70% of the time but the other 30% of the time it becomes borderline unusable, etc.
It's really eye-opening to set up something like toxiproxy, configure bandwidth limitations, latency variability, and packet loss in it, and run your app, or your site, or your API endpoints over it. You notice all kinds of UI freezing, lack of placeholders, gratuitously large images, lack of / inadequate configuration of retries, etc.
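For anyone who hasn't tried it, this is roughly what driving Toxiproxy looks like from Python via its admin HTTP API (assuming the default listener on localhost:8474; the proxy name and ports are invented, and the toxic attribute names are from memory, so check them against the Toxiproxy docs):

    import requests

    TOXIPROXY = "http://localhost:8474"  # default Toxiproxy admin API (assumed)

    # Route app traffic through 127.0.0.1:8081 instead of the real backend.
    requests.post(f"{TOXIPROXY}/proxies", json={
        "name": "api_under_test",
        "listen": "127.0.0.1:8081",
        "upstream": "127.0.0.1:8080",
    }).raise_for_status()

    # Add ~1s latency with jitter, plus a tight bandwidth cap on responses.
    for toxic in (
        {"type": "latency", "attributes": {"latency": 1000, "jitter": 500}},
        {"type": "bandwidth", "attributes": {"rate": 64}},  # KB/s
    ):
        requests.post(f"{TOXIPROXY}/proxies/api_under_test/toxics",
                      json=toxic).raise_for_status()

Point your app at 127.0.0.1:8081 instead of the real backend and watch what breaks.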
So I was tasked with fixing the issue. Instead of loading the whole list, I established a paginated endpoint and a search endpoint. The page now loaded in less than a second, and searches of customer data loaded in a couple seconds. The users hated it.
Their previous way of handling the work was to just keep the index of all customers open in a browser tab all day, Ctrl+F the page for an instant result, and open the link to the customer details in a new tab as needed. My upgrades made the index page load faster, but effectively made the users wait seconds every single time for a response that used to be instant, at the cost of one long wait per day.
There's a few different lessons to take from this about intent and design, user feedback, etc. but the one that really applies here is that sometimes it's just more friendly to let the user have all the data they need and allow them to interact with it "offline".
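If you do go the "give them everything" route, you can still make the once-a-day load cheap on repeat visits. A sketch with Flask (load_all_customers() is a hypothetical loader, and the cache lifetime is just an example):

    import hashlib
    import json
    from flask import Flask, Response, request

    app = Flask(__name__)

    @app.get("/customers/index")
    def full_index():
        payload = json.dumps(load_all_customers())        # hypothetical loader
        etag = '"%s"' % hashlib.sha256(payload.encode()).hexdigest()
        if request.headers.get("If-None-Match") == etag:
            return Response(status=304)                   # re-opening the tab costs nothing
        resp = Response(payload, mimetype="application/json")
        resp.headers["ETag"] = etag
        resp.headers["Cache-Control"] = "private, max-age=86400"  # roughly one load per day
        return resp

The browser keeps the whole index locally, so Ctrl+F stays instant and the server only pays for the full payload when the data actually changes.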
Of course if the system is a total mess then it might have been a lot of work, but what you describe is really more of a skill issue than a technical limitation.
It is an effective product, in so far as it generates revenue, but it is not an example I would use to describe a system that provides a good and useful experience to its end users - which was and generally is my goal when designing software.
Computers can do unfathomable amounts of computation in the blink of an eye. If your app takes seconds or longer to do stuff it's probably because it's ass. Nearly all common operations should be measured in nano or milliseconds. If they're slow it's probably the dev's fault.
In particular, I welcome you to experience the lovely world of corporate VPNs, whose maintainers and developers seem to have latency expectations that have not changed in 30 years.
And yes there's a corporate VPN and firewalls and vents and subnets and whatever else, but when I create a feature it doesn't take 5 seconds to load. It takes milliseconds to load. Because there's nothing in our environment that justifies this bullshit, the guys who built this app just suck. And I'm going to fix it like I always do.
I have also worked on large production systems where I've fixed lots of performance issues. Often I can see them just by reading the code, I'll find some code that looks ass then I'll run it and sure enough it's slow. So I fix it. It doesn't take a profiler or micro optimization, it just takes some basic understanding of what we're doing.
Sometimes slow code is justified; sometimes there's just a lot of processing to be done or a lot of network requests to send or something. But most of the time it's just devs who don't understand fundamentals.
So if you have a website and a backend, the most basic request will be fast. Make an endpoint that just returns hello world, a request to this endpoint will generally take a number of milliseconds. Maybe 10, maybe a few hundred, something like that assuming a decent connection. That's the round trip time, including overhead from protocols and auth etc.
Now you know the best case scenario - when you create a real endpoint that actually does something useful it will be slower. How much slower depends on what you are doing and how. That is what I am talking about. If you send 50 requests from your backend to various other services then obviously it's going to take a lot more time, so we want to avoid sending a lot of requests - especially in series, where the first request has to complete before the next one etc.
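Measuring that baseline only takes a few lines; something like this, against a hypothetical do-nothing endpoint:

    import time
    import requests

    def baseline(url="https://api.example.com/hello", n=20):
        """Measure the best-case round trip to a do-nothing endpoint."""
        samples = []
        for _ in range(n):
            start = time.perf_counter()
            requests.get(url, timeout=5).raise_for_status()
            samples.append((time.perf_counter() - start) * 1000)
        samples.sort()
        print(f"min {samples[0]:.0f} ms, median {samples[n // 2]:.0f} ms")

Everything a real endpoint does can only add to that number.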
We also want to avoid doing a lot of heavy processing. You can do a large amount of processing with no discernible performance impact, but at some point and especially with bad algorithms this processing time can explode and make your system slow. For example I once looked at some code to generate a report that took 25 minutes to run. It was getting two lists of objects and combining them by iterating through the first one and linear searching the other for a match by id. The time complexity of this is O(n^2). I turned the other list into a dictionary which allows O(1) lookup, eliminating the O(n) linear search, making the overall time complexity O(n) and the report generated in less than 5 minutes. Still painfully slow by my standards, and I'm sure I could have optimized it further if I didn't have more important tasks to work on, but it's a pretty good improvement from a few minutes of work.
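For the curious, that fix looks roughly like this in Python (the variable names are invented and the structure is simplified from what the real report code did):

    # Invented sample data standing in for the two report queries.
    left = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]
    right = [{"id": 2, "total": 40}, {"id": 1, "total": 17}]

    # Before: O(n*m) -- for every row in `left`, linearly scan `right` for a match by id.
    merged = [{**row, **next((r for r in right if r["id"] == row["id"]), {})}
              for row in left]

    # After: O(n+m) -- build a dict once, then every lookup is O(1).
    right_by_id = {r["id"]: r for r in right}
    merged = [{**row, **right_by_id.get(row["id"], {})} for row in left]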
Another common culprit is just sending too much data. I've seen websites where a request returns huge Json documents of multiple megabytes, then uses a tiny fraction of the data. By changing the system so that the website only fetches the data it needs you can reduce the request time from seconds to milliseconds.
I hope this gives you a better idea of what I'm saying.
You could also look for inefficiencies in the search. Maybe the query is inefficient, maybe you can make use of database functionality such as full-text search and/or indexes etc. If you don't have access to make those changes to the db, you can cache the data in your backend memory, your own DB, Redis or whatever you prefer, so your app can be nice and snappy regardless of how ass your dependency is.
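The cache-aside version of that is only a few lines; a sketch with redis-py, where slow_search stands in for the expensive dependency and the TTL is arbitrary:

    import json
    import redis

    r = redis.Redis()  # assumes a local Redis instance

    def search_customers(query, slow_search, ttl=300):
        """Cache-aside: serve repeated searches from Redis instead of the slow dependency."""
        key = f"search:{query.lower()}"
        cached = r.get(key)
        if cached is not None:
            return json.loads(cached)
        result = slow_search(query)            # the expensive call you can't change
        r.setex(key, ttl, json.dumps(result))  # expire after ttl seconds
        return result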
Then I got a degree and a dev job. Apprenticeship? Nah dude, here's a big legacy app for you, have fun. Mentorship? Okay, I technically had a mentor. We had lunch every couple of months, talked about stuff a bit but nothing much. And I mean this is going to sound a bit pompous but I'm above average. I had mostly A's in university, I finished every single project alone and then helped others. I was a TA. I corrected the professors when they made mistakes. I wrote a lot of code in my free time. I can't imagine what it must be like for one of my peers who honestly didn't know jack shit and still graduated somehow.
I'm working on an app right now, took over after two other guys worked on it for about a year. This app isn't even in prod yet and it's already legacy code. Complete mess, everything takes like 5 seconds to load, the frontend does a crapload of processing because the data is stored and transferred in entirely the wrong structure so basically they just send all the data and sort it out on the frontend.
I honestly think the fastest way to get this app working properly is to scrap the whole thing and start from scratch but we have a deadline in a couple months so I guess I'll see how it goes.
I had reasonable - not mega - internet in Australia at the time. But the first 5 years of Steam sucked hard. Get home from work, go to play [game of choice] only to find it will take hours to update.
You can easily see this when using WiFi aboard a flight, where latency is around 600 msec at minimum (most airlines use geostationary satellites, NGSO for airline use isn't quite there yet). There is so much stuff that happens serially in back-and-forth client-server communication in modern web apps. The developer sitting in SF with a sub-10 ms latency to their development instance on AWS doesn't notice this, but it's sure as heck noticeable when the round trip is 60x that. Obviously, some exchanges have to be serial, but there is a lot of room for optimization and batching that just gets left on the floor.
It's really useful to use some sort of network emulation tool like tc-netem as part of basic usability testing. Establish a few baseline cases (slow link, high packet loss, high latency, etc) and see how usable your service is. Fixing it so it's better in these cases will make it better for everyone else too.
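A small helper makes it easy to flip those baseline cases on and off during testing; a sketch that shells out to tc-netem (requires root and iproute2, and eth0/the numbers are just example values):

    import subprocess

    def set_netem(iface="eth0", delay="200ms", jitter="50ms", loss="2%"):
        """Apply a 'bad link' profile to an interface with tc-netem."""
        subprocess.run(["tc", "qdisc", "replace", "dev", iface, "root", "netem",
                        "delay", delay, jitter, "loss", loss], check=True)

    def clear_netem(iface="eth0"):
        """Remove the netem qdisc and restore the normal link."""
        subprocess.run(["tc", "qdisc", "del", "dev", iface, "root"], check=True)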
What kills me is that 90% of these heavy SPAs are doing pretty mundane stuff - showing product listings, displaying articles, basic forms. You really don't need a React framework plus a dozen dependencies just to show some images and text. The old "boring" approach of server-rendered pages with a sprinkle of vanilla JS actually provides a better UX for most use cases, especially when the network isn't perfect.
The irony is that by chasing "modern" development practices, we've made the web worse for users while making it more complex for developers. Sometimes the simple solution really is the best solution.
There are lots of cases for sending MORE data on "iffy" internet connections.
One of our websites is a real estate for-sale browsing site (think Zillow). It works great from home or the office, but if you are actively out seeing properties it can be really frustrating when internet comes and goes, and any interaction with the page can take 10-60 seconds to refresh because of latency and packet loss.
A few months ago I vibe-coded a prototype that would locally cache everything, use the cached versions primarily, and update the cache in the background. Using developer tools to simulate bad networking, it was a night-and-day experience. Largely because I would prefetch the first photo of every property, as well as details about the first few hundred properties that matched your search criteria.
"Bloat" when used intelligently, isn't so bad. ;-)
Essentially local-first apps. Have a local database with all the info, use straight up SQL (or whatever) for your interactions, sync periodically with the mothership. Great solution for many usecases.
This was one of the original promises of mobile apps and what makes them better than websites. But the industry went a different way – many mobile and desktop apps became glorified browsers where nothing works without good internet.
So many times I pick up my phone because I'm stuck waiting somewhere, only to realize that I don't have a good connection anymore, and none of the sites I had open are usable anymore.
This often fails in all sorts of ways:
* The client treats timeout as end-of-file, and thinks the resource is complete even though it isn't. This can be very difficult for the user to fix, except as a side-effect of other breakages.
* The client correctly detects the truncation, but either it or the server are incapable of range-based downloads and try to download the whole thing from scratch, which is likely to eventually fail again unless you're really lucky.
* Various problems with automatic refreshing.
* The client's only (working) option is "full page refresh", and that re-fetches all resources including those that should have been cached.
* There's some kind of evil proxy returning completely bogus content. Thankfully less common on the client end in a modern HTTPS world, but there are several ways this can still happen in various contexts.
wget -c https://zigzag.com/file1.zip
Note that -c only works with FTP servers and with HTTP servers that support the "Range" header.
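The same Range-based resume is easy to do in your own client code, and it sidesteps the "treat timeout as end-of-file" failure mode above; a rough Python sketch (the server still has to honor Range, and a 200 instead of a 206 means starting over):

    import os
    import requests

    def resume_download(url, path, chunk=1 << 16):
        """Resume a partial download using an HTTP Range request."""
        have = os.path.getsize(path) if os.path.exists(path) else 0
        headers = {"Range": f"bytes={have}-"} if have else {}
        with requests.get(url, headers=headers, stream=True, timeout=30) as resp:
            if have and resp.status_code != 206:   # 206 Partial Content
                open(path, "wb").close()           # server ignored Range; start over
            resp.raise_for_status()
            with open(path, "ab") as f:
                for part in resp.iter_content(chunk):
                    f.write(part)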
- Depending on your product or use case, somewhere between a majority and a vast majority of your users will be using your product from a mobile device. Throughput and latency can be extremely high, but also highly variable over time. You might be able to squeeze 30Mbps and 200ms pings for one request and then face 2Mbps and 4000ms pings seconds later.
- WiFi generally sucks for most people. The fact that they have a 100Mbps/20Mbps terrestrial link doesn't mean squat if they're eking out 3Mbps with eye-watering packet loss because they're in their attic office. The vast majority of your users are using wireless links (WiFi or cell) and are not in any way hardlined to the internet.
It's a nice feature, but it would be even nicer if you could pin some apps to prevent their offloading even if you haven't used them in ages.
That change would make it _viable_ for me at all; right now it's next to useless.
Currently iOS will offload apps that provide widgets (like Widgetsmith) even when I have multiple Widgetsmith widgets on my 1st and 2nd homescreens, I just never open the app (I don't need to, the widgets are all I use). One day the widgets will just be black and clicking on them does nothing. I have to search for Widgetsmith and then make the phone re-download it. So annoying.
Also annoying: you can get push notifications from offloaded apps. Tapping on the notification does _nothing_: no alert, no re-download, just nothing. Again, you have to connect the dots and redownload it yourself.
This "feature" is very badly implemented. If they just allowed me to pin things and added some better UX (and logic for the widget issue) it would be much better.
0: https://support.apple.com/guide/iphone/manage-storage-on-iph...
I don't look much into phones that don't promise a reasonable support life, but if I go look at motorola all these midrange phones don't even have size options. At least some of them accept microsd.
I forget which it was, but years ago there was a common customer-service library that a lot of apps included which on its own added like 25MB to your app package size. That’s insane; I’ve built and shipped full apps that aren’t that large. Adding that library would’ve more than doubled their size for questionable utility.
It doesn’t take dropping too many dependencies like that to reduce package size significantly.
Should you assume all your customers have smartphones? Smartphones with internet connections? Working cameras? Zelle? Venmo? Facebook? WhatsApp? Uncracked screens (for displaying QR codes to be scanned)? The ability to install an app?
I recently bought a snack from a pop-up food vendor who could only accept Venmo, which luckily I have, or exact cash, since he didn't have change. I'm pretty sure he only told me this after he handed me my food. I know lots of people who don't have Venmo—some don't want it because they see it as a security risk, some have had their accounts banned, some just never used it and probably don't want to set it up in a hurry while buying lunch.
I also recently stayed at a rural motel that mentioned in the confirmation email that the front desk isn't staffed 24/7, so if you need to check in after hours, you have to call the on-call attendant. Since cell service is apparently spotty there (though mine worked fine), they included the Wi-Fi password so you could call via Wi-Fi. There were also no room phones, so if the Wi-Fi goes out after hours, guests are potentially incommunicado, which sounds like the basis of a mystery novel.
Seems like a bad idea to me. Even Square with Cash App support would be better
Just a warning about the screenshot he's referencing here: the slice of map that he shows is of the western half of the US, which includes a lot of BLM land and other federal property where literally no one lives [0], which makes the map look a lot sparser in rural areas than it is in practice for humans on the ground. If you look instead at the Midwest on this map you'll see pretty decent coverage even in most rural areas.
The weakest coverage for actually-inhabited rural areas seems to be the South and Appalachia.
[0] https://upload.wikimedia.org/wikipedia/commons/0/0f/US_feder...
It's grounds for endless debate because it's inherently a fuzzy answer, and everyone has their own limits. However the outcome naturally becomes an amalgamation of everyone's response. So perhaps a post like this leads to a few more slim websites.
Part of the problem is the acceptance of the term "long tail" as normal. It is not. It is a method of marginalizing people.
These are not numbers, these are people. Just because someone is on an older phone or a slower connection does not make them any less of a human being than someone on a new phone with the latest, fastest connection.
You either serve, or you don't. If your business model requires you to ignore 20% of potential customers because they're not on the latest tech, then your business model is broken and you shouldn't be in business.
The whole reason companies are allowed to incorporate is to give them certain legal and financial benefits in exchange for providing benefits (economic and other) to society. If your company can't hold up its end of the bargain, then please do go out of business.
Or, at least the business needs to recognize that their ending support for Y is literally cutting off potential customers, and affirmatively decide that's good for their business. Ask your company's sales team if they'd be willing to answer 10% of their inbound sales calls with "fuck off, customer" and hang up. I don't think any of them would! But these very same companies think nothing of ending support for 'old' phones or supporting only Chrome browser, or programming for a single OS platform, which is effectively doing the same thing: Telling potential customers to fuck off.
If I squeeze you to be more precise, it becomes uncomfortable and untenable, as no matter what you are either marginalizing people or marginalizing yourself, your company, or everyone else. It's something where it is extremely easy to have moral high ground when you have zero stake yourself, but anyone who understands the nuance of the problem can see right through it.
Approximately zero people are protesting because Gucci doesn't make a budget line of handbags that those below the poverty line can afford, or because car companies don't make cars that can be driven by quadriplegics, or because Google isn't making a version of Google Docs that operates through the post office mail for people without internet.
Every service, whether from the government or from a company, has some implicit requirements attached to it. Any sane person can see that. Statements like "Part of the problem is the acceptance of the term "long tail" as normal. It is not. It is a method of marginalizing people." indicate a level of detachment from reality, and the fact that you have to use emotionally manipulative phrases like "does not make them any less of a human being" to make your statements even seem halfway plausible conclusively prove this line of argument is on the level of randomly-generated nonsense in terms of logical coherence.
Made me think more about poor & unstable connections when building out new features or updating existing things in the product. Easily parsable loading states, better user-facing alerts about requests timing out, moving calculations/compute client-side where it made sense, etc.
A 10mb download over 3G is fine if you can actually start it. But when the page needs 15 round trips before first render, you're already losing the user.
We started simulating 1500ms+ RTT and packet loss by default on staging. That changed everything. Suddenly we saw how spinners made things worse, how caching saved entire flows, and how doing SSR with stale-while-revalidate wasn’t just optimization anymore. It was the only way things worked.
If your app can work on a moving train in Bangladesh, then it's gonna feel instant in SF.
The rule I've come up with is one user action, one request, one response. By 'one response', I mean one HTTP response containing DOM data; if that response triggers further requests for CSS, images, fonts, or whatever, that's fine, but all the modifications to the DOM need to be in that first request.
An amazing thing.
Grateful for the blog w/ nice data tho TY
You're also opening up to more potential customers in rural areas or areas with poor reception, where internet may exist but may not be consistent or low latency.
I lived happily on dialup when I was a teenager, with just one major use case for more bandwidth.
A different perspective on this shows up in a recent HN submission, "Start your own Internet Resiliency Club" (https://news.ycombinator.com/item?id=44287395). The author of the article talks about what it would take to have working internet in a warzone where internet communications are targeted.
While we can frame this as whether we should design our digital products to accommodate people with iffy internet, I think seeing this as a resiliency problem that affects our whole civilization is a better perspective. It is no longer about accommodating people who are underserved, but rather, should we really be building for a future where the network is assumed to be always-connected? Do we really want our technologies to be concentrated in the hands of the few?
This is spot on for me. I live in a low-density community that got telcom early and the infrastructure has yet to be upgraded. So, despite being a relatively wealthy area, we suffer from poor service and have to choose between flaky high latency high bandwidth (Starlink) and flaky low latency low bandwidth (DSL). I’ve chosen the latter to this point. Point to point wireless isn’t an option because of the geography.
Client-side rendering with piecemeal API calls is definitely not the solution if you are having trouble getting packets from A to B. The more you spread the information across different requests, the more likely you are to lose packets, force arbitrary retries, and otherwise jank up the UI.
From the perspective of the server, you could install some request timing middleware to detect that a client is in a really bad situation and actually do something about it. Perhaps a compromise could be to have the happy path as a websocketed React experience that falls back to an ultralight, one-shot SSR experience if the session gets flagged as having a bad connection.
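One crude way to do that detection server-side is to time the whole response write-out per request; a WSGI sketch (whether the elapsed time really reflects the client's link depends on how much your server and proxies buffer, and the threshold and on_slow hook are placeholders):

    import time

    class ConnectionQualityMiddleware:
        """WSGI wrapper: time each response, including body write-out, and flag slow clients."""
        def __init__(self, app, threshold_s=2.0, on_slow=print):
            self.app = app
            self.threshold_s = threshold_s
            self.on_slow = on_slow

        def __call__(self, environ, start_response):
            start = time.perf_counter()
            for chunk in self.app(environ, start_response):
                yield chunk                   # streaming to a slow client happens here
            elapsed = time.perf_counter() - start
            if elapsed > self.threshold_s:
                # e.g. flag the session so the next page falls back to the lightweight SSR view
                self.on_slow(f"slow client: {environ.get('PATH_INFO')} took {elapsed:.1f}s")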
Even if I SSR and inline all the packages/content, that overall response could be broken up into multiple TCP packets that could also be dropped (missing parts in the middle of your overall response).
How does using SSR account for this?
I have to deal with this problem when designing TCP/UDP game networking during the streaming of world data. Streaming a bunch of data (~300 KB) is similar to one big SSR render and send. A single TCP segment tops out at ~64 KB, and on the wire packets are usually limited to the ~1500-byte MTU anyway.
Believing that one request maps to one packet is a frequent "gotcha" I have to point out to new network devs.
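A toy illustration of why that matters (not the actual game protocol): once you split a ~300 KB "send" into datagrams sized under the path MTU, every chunk needs enough header for the receiver to detect loss and reordering.

    import struct

    MTU_PAYLOAD = 1200  # conservative payload size to stay under a typical path MTU

    def make_chunks(stream_id: int, blob: bytes):
        """Split one logical 'send' into many datagrams, each tagged for reassembly."""
        total = (len(blob) + MTU_PAYLOAD - 1) // MTU_PAYLOAD
        for seq in range(total):
            payload = blob[seq * MTU_PAYLOAD:(seq + 1) * MTU_PAYLOAD]
            # header: stream id, chunk index, chunk count -> receiver knows what's missing
            yield struct.pack("!IHH", stream_id, seq, total) + payload

    # ~300 KB of world data becomes ~250 datagrams, any of which can be lost or reordered.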
If there's 15 different components sending 25 different requests to different endpoints, some of which are triggered by activities like scrolling etc, then the user needs a consistent connection to have a good experience.
Packet loss in TCP doesn't fail the whole request. It just means some packets need to be resent which takes more time.
I hear you. That is the "promise" of TCP.
I have (unfortunately) seen many instances where this is not true.
> If you are dropping packets and losing data, why would it matter if you're making one request or several?
Let's just focus on the second part: why would it matter if you're making one request or several?
Because people make bad assumptions about the order that requests complete in, don't check that previous requests completed successfully, maybe don't know how (or care) because that's all buried in some frontend framework... maybe that's the point!
> Believing that one request maps to one packet is a frequent "gotcha" I have to point out to new network devs.
What is a "network dev"? Unless they're using UDP... maybe you're thinking of DNS? Nah probably not. QUIC? Is that the entire internet for you? Oh. What about encryption? That takes whole handshakes.
Send one packet, recipient always receives one packet, is a "gotcha" I have to point out to experienced network administrators... along with DNS requires TCP as well as UDP these days, what with DNSSEC, attack mitigations, etc.
As for what am I thinking of? All of the above! They're all part of my day job for the software I build.
Also, this isn't meant to be flippant, I agree with what you're saying! :D
BTW, frags are bad. DNS infra is still kneecapped by what turned out to be an extremely exuberant kicking of the can down the road packaged as "best practice". I think the architectural discussion must have been "100 nameservers for an AD domain, plus AUTHORITY and ADDITIONAL, not to mention DNSSEC..." "Oh UDP is fine. Frags aren't a problem, the routers and smart NICs will handle it fine." "4096 ought to be enough for anybody." "Good. I'll have another Old Fashioned then." And then the clever attacks begin.
Jumbos are great, but the PMTU has to support it. Localhost or a datacenter, maybe a local network. Somewhere between BIND 9.12.3 and BIND 9.18.21 the default for max-udp-size changed from 4096 to 1232. Just sayin....
FSVO required. Images beyond a few bytes shouldn't be inlined for example since loading them would block the meat of the content after them.
Cache-control immutable the code and assets of the app and it will only be reloaded on changes. Offline-first and/or stale-while-revalidate approaches (as in the React swr library) can hugely help with interactivity while (as quickly as possible) updating in the background things that have changed and can be synced. (A service worker can even update the app in the background so it's usable while being updated.) HTTP3/QUIC solves the "many small requests" and especially the "head of line blocking" problems of earlier protocols (though only good app/API design can prevent waterfalls). The client can automatically redo bad connections/requests as needed. Once the app is loaded (you can still use code splitting), the API requests will be much smaller than redownloading the page over and over again
Of course this requires a lot of effort in non-trivial cases, and most don't even know how to do it/that it is possible to do.
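The header side of that is the easy part; a Flask sketch, where the asset filenames are assumed to carry a content hash and load_feed() is a hypothetical data loader:

    from flask import Flask, jsonify, send_from_directory

    app = Flask(__name__)

    @app.get("/assets/<path:name>")       # e.g. app.3f9c1b.js -- hash in the filename
    def asset(name):
        resp = send_from_directory("dist", name)
        # safe to cache forever: a new build ships under a new hashed filename
        resp.headers["Cache-Control"] = "public, max-age=31536000, immutable"
        return resp

    @app.get("/api/feed")
    def feed():
        resp = jsonify(load_feed())       # hypothetical data loader
        # serve a stale copy instantly while the client revalidates in the background
        resp.headers["Cache-Control"] = "max-age=60, stale-while-revalidate=86400"
        return resp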
I'd like to introduce you to tight mode:
https://www.smashingmagazine.com/2025/01/tight-mode-why-brow...
Congested and/or weak wifi and cell service are what "iffy" is about. Will a page _eventually_ load if I wait long enough? Or are there 10 sequential requests, 100 KB each, that all have to succeed just to show me 2 sentences of text?
Most kinds of communication products (Zoom) work OK, except for anything from Google.
The other issue that's under-considered is lower spec devices. Way more people use cheap Android phones than fancy last-five-years iPhones. Are you testing on those more common devices?
(if you don't have a favorite, try react.dev)
We're using this benchmark all the time on https://www.firefly-lang.org/ to try to keep it a perfect 100%.
For such things as streaming audio/video, the codec and other things need to be considered as well. If the data can be encoded in real time, or if multiple qualities are already available on the server, then a lower-quality file can be offered to clients that request one. The client can download the file for later use and may be able to continue the download later, if needed.
There is also, e.g. do you know that you should need a video call (or whatever else you need) at all? Sometimes, you can do without it, or it can be an optional possibility.
There is also avoiding the need for specific computers, too. It is not only about internet access, although that is a part of it, too. However, this does not mean that computer and internet cannot be helpful. They can be helpful, but should not be relied on too heavily.
The Gemini protocol does not have anything like the Range request and Content-length header, and I thought this was not good enough so I made one that does have these things. (HTTP allows multiple ranges per request, but I thought that is more complicated than it needs to be, and it is simpler to only allow one range per request.)
Seems sensible to take a small convenience hit now to mitigate those risks.
This is a major part of why I cannot stand software devs (I am loathe to call them “engineers”). Of COURSE YOU SHOULD design for an iffy internet. It’s never perfect. Thank the LORD code monkeys don’t build anything important like bridges or airplanes.
* Heart monitors
* Medication dosage systems
* Precision Guided Munition targeting systems
* "AI" controlled suicide drones
* 911 systems
* Software-controlled building-wide fire monitoring and suppression systems
Tons of other stuff.
Has your head exploded yet? Hint: Nobody seems to give a damn.
Use the software that you make, in the same conditions that your users will use it in.
Most mobile apps are developed by people in offices with massive connections, or home offices with symmetric gigabit fiber or similar. The developers make sure the stuff works and then they're on to the next thing. The first time someone tries to use it on a spotty cellular connection is probably when the first user installs the update.
You don't have to work on a connection like that all the time, but you need to experience your app on that sort of connection, on a regular basis, if you care about your users' experience.
Of course, it's that last part that's the critical missing piece from most app development.
I am unsure if there is a different word for the idea, but dogfooding is not it, in my opinion.
I attempt as I did way back in the '90s to ensure that a page will load in a bearable time over a nominal 9600 baud modem (~1kBps), and that the first packet window over TCP will contain some useful readable information. (All complicated by TLS handshakes and so on.) And I try not to rely for anything important on any very new features (eg CSS, HTML) so older cheaper hardware will still work.
My main site's home page works reasonably well on the emulator for the very first (CERN) browser, though the <img> tag had not yet been invented, so no images showed!
One of our biggest sticking points when new forms of multifactor came around is that it can sometimes take longer than a minute to deliver a push notification or text message even in areas that are solid red on Verizon's coverage map.
> This is likely worse for B2C software than B2B.
These are regional retail banks that all use the same LOB software. Despite the product being sold mainly to banks, which famously have branches, the developer never realized that there could be more than a millisecond between a client and a server. The reason they have VDI is so their desktop environment is in the same datacenter as their app server. It's a fucking CRUD app and the server has to deal with maybe a couple hundred transactions per hour.
I think this is pretty typical for B2B. You don't buy software because it is good. You buy software because the managers holding the purse strings like the pretty reports it makes and they are willing to put up with A LOT of bullshyt to keep getting them.
The ol reliable plain HTML stuff usually works great though, even when you have to wait a bit for it to load.
HOWEVER the main problem (apart from just not having service) is congestion. There is a complete shortage of low bandwidth spectrum which can penetrate walls well in (sub)urban areas, at ~600-900MHz. Most networks have managed to add enough capacity in the mid/upper bands to avoid horrendous congestion, but (eg) 3.5GHz does not penetrate buildings well at all.
This means it is very common to walk into a building and go from 100meg++ speeds on 5G to dropping down to 5G on 700MHz which is so congested that you are looking at 500kbit/sec on a good day.
Annoyingly, phone OSes haven't got with the times yet and just display signal quality for the bars, which will usually be excellent. It really needs to also have a congestion indicator (which could be based on how long your device is waiting for a transmission slot, for example).
I've been trying to convince them to try Starlink, but they're unwilling to pay for the $500+ equipment costs.
One of my neighbors is apparently using Starlink since I see a Starlink router show up in my Wi-Fi scan.
Many people have already said designing for iffy internet helps everyone: this is true for slimming your payload, but not necessarily designing around dropped connections. On a plane or train, you might alternate between no internet and good internet, so you can just retry anything that failed when the connection is back, but a rural connection can be always spotty. And I think the calculus for devs isn't clearly positive when you have to design qualitatively new error handling pathways that many people will never use.
For example, cloning a git repo is non-resumable. Cloning a larger repo can be almost impossible since the probability the connection doesn't drop in the middle falls to zero. The sparse checkout feature has helped a lot here. Cargo also used to be very hard to use on rural internet until sparse registries.
Translation: shitty servers.
That means that the connection might be fine, but the backend is not.
I need to have a lot of error management in my apps, and try to keep the server interactions to a minimum. This also means that I deal with bad connections fairly well.
[1] <https://http3-explained.haxx.se/en/quic-future>
[2] <https://www.ietf.org/archive/id/draft-michel-quic-fec-01.htm...>
[3] <https://www.ietf.org/id/draft-zheng-quic-fec-extension-00.ht...>
As with lossy, laggy, and slow connections, this scenario is also more common than the average tragically online product manager will grasp.
It's hard to make a website that doesn't work reasonably well with that though. Even with all the messed up Javascript dependencies you might have.
I feel for those on older non-Starlink Satellite links. eg. islands in the pacific that still rely on Inmarsat geostationary links. 492 kbit/s maximum (lucky if you get that!), 3 second latency, pricing by the kb of data. Their lifestyle just doesn't use the internet much at all by necessity but at those speeds even when willing to pay the exorbitant cost sites will just timeout.
Starlink has been a revolution for these communities but it's still not everywhere yet.
I commute in tunnels where signal can drop out. I walk down busy city streets where I technically have 5G signal but often degrade to low bandwidth 4G because of capacity issues. I live in Australia so I'm 200ms from us-east-1 regardless of how much bandwidth I have.
It's amazing how, on infrastructure that's pretty much as good as you can get, I still experience UX issues with apps and sites that are only designed for the absolutely perfect case of US-based hard-wired connections.
The NTIA or FCC just released an updated map a few days ago (part of the BEAD overhaul) that shows the locations currently covered by existing unlicensed fixed wireless.
Quick Google search didn't find a link but I have it buried in one of my work slack channels. I'll come back with the map data if somebody else doesn't.
The state of broadband is way, way worse than people think in the US.
Indirect Link: https://medium.com/spin-vt/impact-of-unlicensed-fixed-wirele...
https://www.reddit.com/r/MapPorn/comments/vfwjsc/approximate...
I know a lot of the West has terrible broadband, but a not insignificant majority of that land area is uninhabited federal land -- wilderness, such as high mountains, desert, etc. By focusing on the West and focusing on maps that don't acknowledge inhabitedness as an important factor, it confuses the issue.
I'd argue it's more of a travesty that actual fiber optic internet is only available in maybe 15% of addresses nationwide, than the white holes in Eastern Oregon or Northern Nevada. One major reason I believe this is that even at my house, where I have "gigabit" available via DOCSIS, my upload is barely 20Mbps and I have a bandwidth cap of 1.25TB a month which means if I saturate my bandwidth I can only use it for 2 hours 46 minutes per month.
If you compare "things that would be possible if everyone had a 500Mbps upload without a bandwidth cap" vs "things I can do on this connection" it's a huge difference.
I can think of at least two supermarkets where I have crap internet inside in spite of having whole city decent 5G coverage outside.
One thing that never loads is the shopping app for our local equivalent of Amazon. I'm sure they lost some orders because I was in said supermarkets and couldn't check out the competition's offers. Minor cheap-ish stuff or I would have looked for better signal, but still lost orders.
* blind
* deaf
* reading impaired
* other languages/cultures
* slow/bad hardware/iffy internet
To me, at some point we need to get to an LCARS-like system, where we don't program bespoke UIs at all. Instead the APIs are available and the UI consumes them, knows what to show (with LLMs), and a React interface is JITted on the spot. And the LLM will remember all the rules for blind/deaf/etc...
Also I think until LLMs become reliable (which may be never), using them in the way you describe is a terrible idea. You don't want your UI to all of a sudden hallucinate something that screws it up.
As far as emitting internationalized interfaces goes - yes, it absolutely makes sense to do it this way. If you're asking for an address and the customer is in the US, the LLM can easily whip up a form for that kind of address. If you're somewhere else, it can do that too. There's no reason for bespoke interfaces that never get the upgrade because someone made it overly complicated for some reason.
Back in the day, AOP was almost a big thing (for a small subset of programmers). Perhaps what was missing was having a generalized LLM that allowed for the concern to be injected. Forgot your ALT tag? LLM, Not internationalized? LLM, Non-complicated Lynx compatible view? LLM
There comes a point at which attempting to address everyone means you start making sacrifices that impact your product/offering (lowest common denominator), which itself can eliminate some higher-end clients. Or you're spending so much creating multiple separate experiences that it significantly impacts the effort you have to put in, and in business that hits profitability, or otherwise can cause burnout.
So, follow elegance, as well as efficiency, in the architecture and design to make it accessible to as wide an audience as is practical. You have to think about what is practical, and what your obligations are to your audience. Being thoughtful and intentional in design is no bad thing; it stops you being lazy and loading a 50MB JPEG as the backdrop when something else will do.
Huh, worked fine for me: https://i.imgur.com/Y7lTOac.png
We’ve gotten used to “try again later” or pull-to-refresh, but very few apps are built to handle offline states gracefully. Sometimes I wonder if developers ever test outside their office WiFi.
I hope people making apps to unlock cars or other critical things that you might need at 1am on a road trip in the middle of nowhere don’t have this attitude of “everyone has reliable internet these days!”
Concrete example: I made an app for Prospect Park in Brooklyn that had various features that were meant to be accessed while in the park which had (has?) very spotty cell service, so it was designed to sync and store locally using an eventually consistent DB, even with things that needed to be uploaded.
Systems hardened against authoritarianism are a great thing. Even the Taliban have mobile coverage in Kabul, and thus every woman forced under the chador holding on to a phone has a connection to the world in her hand. Harden humanity against the worst of its sides in economic decline. I dream of a math proof coming out of some Kabul favela.
Earnings have practically nothing to do with it.
Funny that American civil service is used by militaries on both sides. At least it was used in some role during the war, not sure about now.
I like ogg 0 not only because it's fairly small, especially for "clean" audio (audio not already MP3-encoded with a bad setting, for example); it also lets me know which services to avoid spending time on: the ones that support encumbered (or previously encumbered) codecs but not open ones.
So if your market is a global one, there's a chance even a fortune 500 company could struggle to load your product in their HQ because of their terrible internet connection. And I suspect it's probably even worse in some South American/African/Asian countries in the developing world...
This is the only reason I know why some websites/apps perform poorly on a bad connection (Discord struggles to simply load the text content of messages).
Another one is being at a huge event with thousands of people trying to use mobile data at the same time.
If nothing else I think these two cases are enough to motivate caring a little bit about poor connections. Honestly I find them more motivating than developing for whatever % of users have a bad connection all the time.
The article looks at broadband penetration in the US. Which is useful, but you need to plan for worst-case scenarios, not statistically likely cases.
I have blazing fast internet at home, and that isn't helpful for the AAA app when I need to get roadside assistance.
I want the nytimes app to sync data for offline reading, locally caching literally all of the text from this week should be happening.
Your point reminded me of the NASA Mars rover deployed in 2021 with the little Ingenuity helicopter on board.
The helicopter had a bug that required a software update, which NASA had to upload over three network legs: the Deep Space Network to Mars, a UHF leg from Mars-orbiting robotic vehicles to the rover, and a ZigBee connection from the rover to the Ingenuity helicopter. A single message could take between 5 and 20 minutes to arrive...
Edit: I described this in an article back then:
Ping statistics for <an IP in our DC>:
Packets: Sent = 98585, Received = 96686, Lost = 1899 (1% loss),
Approximate round trip times in milli-seconds:
Minimum = 43ms, Maximum = 3197ms, Average = 58ms
It's almost exactly 5s per 60s of loss^. Has been since I got it. For "important" live stuff I have to switch to my cellphone, in a specific part of my house. Otherwise the fact that most things are usable on "mobile" means my experience isn't "the worst" - but it does suck. I haven't played a multiplayer game with my friends in a year and a half - since AT&T shut off fixed wireless to our area. Oh well, 250mbit is almost worth it.
^: When I say this, I wasn't averaging; it drops from 0:54-0:59, in essence, "5 seconds of every minute".
Dropped packets? Throttling? Jitter?
I am trying to figure out if there are good testing suites for this or if it is something I need to manually setup.
It's actually worse than this. Companies will claim they offer gigabit within a zip code if there's a single gigabit connection, but they will not actually offer gigabit lines at any other addresses in the zip code.
per the USDA (for example).
It is telling that tech giants make tools to test their software in poor networking conditions. It may not look like they care, until you try software by those who really don't care.
The phone is connected to the car on Bluetooth. The user shuts off the car. The call continues without delay.
While there are so many more features that the phone could provide, staying connected during the call seems essential. However, I’m sure this wasn’t part of the first release of the app. But, once they get it to work, it is fucking magic.
- Mobile first design
- Near unlimited high speed bandwidth
There's never been a case where both are blanket true.
Programmers: Let's design for crappy internet
Internet providers: Maybe it's not necessary
This isn’t true anymore. Starlink changed the whole game. It’s fast and low latency now, and almost everyone on any service that isn’t Starlink has switched en masse to Starlink because previous satellite internet services were so bad.
Most of the time no one would notice. For some applications it's definitely something that needs to get designed in.
I still occasionally get blips on Comcast, mostly late at night when I'm one of the few who notices.
Loads of people who would otherwise have fast internet are on a slow link or iffy internet. Like... plane wifi! Or driving through less populated areas (or the UK outside of London) with spotty phone reception.
Intelligent local echo. Did you type that char? How many times? Are you sure? Now you never have to guess..
UDP based means avoiding Path MTU problems.
Connection "roaming" and auto connect, etc.
Really, really cool. I wish mosh was everywhere :/
When I complain about this, I get downvoted by angry people. They blame me for using "old" or "buggy" devices (they're not that old or slow), and blame my internet connection (it's fast and stable). Is it the CPU? The bandwidth? Latency? Some weird platform-specific bug? Who knows. But if every other web page I visit does not have this problem, then it's not my device, it's the website's design.
Whenever practical, you should design for efficiency. That means not using more resources than you have to, choosing a method that's fast rather than slow, trying to avoid unnecessary steps, etc. Of course people will downvote me for saying that too. Their first comment is going to be that this is "premature optimization". But it isn't "premature" to pick a design that isn't bloated and slow. If you know what you are doing, it's not hard to choose an efficient design.
Every year software is more bloated, more buggy. New software is released constantly, but it isn't materially better than what we had decades ago. New devs I talk to seem to know less and less about how computers work at all. Perhaps the enshittification of technology isn't the tech itself getting shittier, as it can't actually make itself worse. In an industry that doesn't have minimum standards, perhaps it's the people that are enshittifying.
- Minutes-long dead spots on public transit, even above ground
- Bad reception in buildings
- Bad wi-fi at various accommodations
- Google Maps eating through my data in a day or two
And no, geography is not an excuse. Neighboring Austria has same geography as Bavaria, and yet it is immediately noticeable when exactly you have passed the border by the cellphone signal indicator going up to full five bars. And neither is money an excuse, Romania - one of the piss poorest countries of Europe - has 5G built out enough to watch youtube in 4k on a train moving 15 km/h with open doors to Sannicolao Mare.
The issue is braindead politics ("we don't need 5G at any remote milk jug") and too much tolerance for plain and simple retarded people who think that all radiation is evil.
Except I sometimes read articles on the subway and not all subway tunnels in my city have cell service. Or sometimes I read articles when I eat in some place that's located deep inside an old building with thick brick walls. Public wifi is also not guaranteed to be stable — I stayed in hotels where my room was too far from the AP so the speed was utter shit. Once, I loaded some Medium articles on my phone before boarding a plane, only to discover, after takeoff, that these articles don't make sense without images that didn't load.
Anyway. As a user, for these kinds of static pages, I expect the page to be fully loaded as soon as my browser hides the progress bar. Dear web developers, please do your best to meet this expectation.
I don't quote the following to discount what the article is saying, I think it is what the article is saying:
> This may or may not be OK for your market—"good internet" tends to be in population centers, and population centers tend to contain more businesses and consumers.
and this:
> That said, I'm deliberately not making any moral judgments here. If you think you're in a situation where you can ignore this data, I'm not going to come after you. But if you dismiss it out of hand, you're likely going to be putting your users (and business) in a tough spot.
So yes, please assume that even your most adept power users will have crappy internet at least some of the time.
New headline: Betteridge's rule finally defeated. Or is it?
Next que...<loading>
There is no need or moral obligation for all of the internet to be accessible to everyone. If you're not a millionaire, you're not going to be joining a rich country club. If you don't have a background in physics, the latest research won't be accessible to you. If you don't have a decent video card, you won't be able to play many of the latest games. The idea that everything should be equally accessible to everyone is simply the wrong assumption. Inequality is not a bad thing per se.
However, good design principles involve an element of parsimony. Not minimalism, mind you, but a purposeful use of technology. So if the content you wish to show is best served by something resource intensive that excludes some or even most people from using it, but those that can access it are the intended audience, then that's fine. But if you're just jamming a crapton of worthless gimmickry into your website that doesn't serve the purpose of the website, and on top of that, it prevents your target audience from using it, then that's just bad design.
Begin with purpose and a clear view of your intended audience and most of this problem will go away. We already do that by making websites that work with both mobile and desktop browsers. You don't necessarily need to make resource heaviness a first-order concern. It's already entailed by audience and informed by the needs of the presentation.