We've seen the wheel re-invented many times and would prefer to work on something other than the wheel again. Stuff like solving user problems and making money.
Meanwhile you have the coworker who uses some new but soon-to-be-deprecated language/framework on every project, leaving a field of unsupportable debris in their wake.
Turns out brand new stuff doesn't always survive, and even if it does you don't know its tradeoffs & pain points yet.
Everything is perfect & bug-free when it has no product use.
Seen it many times, and seen the wreckage later.
https://www.joelonsoftware.com/2002/04/11/our-net-strategy/
The idea isn't that you can't rewrite in a new language; it's that you want to do it bit by bit. Don't just do the big-bang thing.
Capable people know the constraints (and perhaps write good articles about them) and they know when to break those rules.
Good engineering is making the right compromises.
The engineering team in question had proposed to rewrite an old Java app with AWS Amplify and replace their Postgres database with DynamoDB. The whole thing was then duct-taped together with Lambdas and spread across multiple regions. They had not bothered writing a single test, never mind having any kind of build and deployment pipeline; they didn't even have basic error reporting.
After doing a deeper dive, we discovered that engineers didn't have a local environment, only access to staging; however, they could easily connect to the production database from local by just updating environment variables. It turned out that one of them had been debugging something on production from his machine and had forgotten to switch it back, and he had a background cleanup job running at intervals which was wiping data.
It was a complete nightmare. The schemaless nature of Dynamo made it harder to understand what was happening, and the React UI would crash every 15 minutes due to an infinite loop.
The operations team had learned to open the Chrome console and clear local storage manually before the window froze.
However, I can't count how many companies I've seen decide to get into "the cloud" only to do lift-and-shifts, and they are now running their stacks in slower and more expensive ways.
There's a reason this youtube video exists and it's because people get into hype and then are saddled with technical debt: https://www.youtube.com/watch?v=b2F-DItXtZs
Over the last decade I worked for a fintech that did analytics for the investment banking industry, and between 2016 and 2020 the number of people who were shocked we weren't trying to shove blockchain in somewhere was surreal.
I’m pretty sure the majority of HN thinks that even hosting directly on EC2 is for NIH adventurers only.
If you want to be successful in your career, when you are put in charge of a big new project, on a tight timeline, high management visibility, etc.. you dig into your existing tried & true toolkit to get the job done. There's so many other variables, why needlessly add more risk no one asked for?
But yes, I'm glad there are maniacs out there.. I just don't want to work with them.
Contrast this with frameworks that are created for the framework's own sake, hoping to attract its first application developers.
Haha. Yes. But when the C-suite is made up of top-level management pushed out of the S&P 500, they always assume it's their tried and true toolkit from another company. Believe me, it's never the hammer the current engineering staff is holding. I'm slow clapping so hard for the business school graduates right now…
It depends on what you mean by "multi-decade old app".
Excel is a multi-decade old app. Windows is a multi-decade old app.
They're both still being actively developed with a lot of new and cool features added though, and that keeps it fresh.
The multi-decade old app that hasn't been touched in 20 years? It needs a rewrite. And yesterday.
I suspect though, that is not what you meant.
And we need to stop all new feature work on Excel since it's legacy, so give me 80% of the dev team to do the above. Oh, btw, they don't know the alphabet soup of stuff I decided to use, so we will also start firing them as well, as I need to hire for these special skills.
Eventually styles will change and you will have to redo the UI. This will happen much more often than the above. Your program may look very different but if you have a good architecture this is a superficial change. It may still be expensive, but none of your core logic changes. Normally you keep the old and new UI running side by side (depending on the type of program may be different builds, other times it is just a front end) until you trust the new one. (depending on details it may be an all at once switch or one screen/widget at a time)
https://www.rocketsoftware.com/en-us/products/cobol/visual-c...
Anyway, as soon as LLMs can reliably produce executables directly, languages will lose their value; it's the same evolution as from assembly to high-level languages.
What is that supposed to mean?
To frame your argument, TIOBE ranks COBOL as the 19th most popular language, ahead of Ruby.
It's beyond useless.
This is a better list, and COBOL is not even part of the top 20, while Ruby is in 9th place: https://redmonk.com/sogrady/2024/03/08/language-rankings-1-2...
It's like I'm sitting in a meeting.
Depending on the project, it might actually be the only sane option if you're still required to make significant changes to the application and features have to be continuously added - and the project has already become so full of technical debt that even minor changes such as mapping an additional attribute can take days.
As an easily explained example: I remember an Angular frontend I had to maintain a while ago.
The full scope of the application was to display a dynamic form with multiple inputs that were interdependent on the selected choices (so if question 1 had answer A, questions 2 and 3 had to be answered, etc.).
While I wouldn't call such behavior completely trivial, on a difficulty scale it was definitely towards the easy end - but the actual code was so poorly written that any tiny change always led to regressions.
It was quite silly that we weren't allowed to invest the ~2 weeks to recreate it.
Another example that comes to mind is the rewrite of a multi-million PHP API to a Java backend. They just put the new Java backend in front of the old API and proxied through while they implemented the new routes. While it took over 2 years in total, it went relatively well.
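The proxy-through approach described above is essentially the strangler-fig pattern: the new backend serves the routes it has already reimplemented and transparently forwards everything else to the legacy API until nothing is left to forward. The real system was Java; the sketch below is a minimal Python/Flask illustration of the idea, with made-up hostnames and routes, not the actual setup.

    # Minimal sketch of "put the new backend in front and proxy the rest
    # through" (illustrative only; hostnames and routes are invented).
    import requests
    from flask import Flask, Response, request

    LEGACY_API = "http://legacy-php-api.internal"  # hypothetical upstream

    app = Flask(__name__)

    # Routes that have already been rewritten are served natively...
    @app.route("/v2/orders/<order_id>")
    def get_order(order_id):
        return {"id": order_id, "source": "new-backend"}

    # ...and everything else falls through to the old API until it is migrated.
    @app.route("/", defaults={"path": ""})
    @app.route("/<path:path>", methods=["GET", "POST", "PUT", "DELETE"])
    def proxy(path):
        upstream = requests.request(
            method=request.method,
            url=f"{LEGACY_API}/{path}",
            params=request.args,
            data=request.get_data(),
            headers={k: v for k, v in request.headers if k.lower() != "host"},
            timeout=30,
        )
        return Response(upstream.content, upstream.status_code, dict(upstream.headers))

The nice property is that each route can be migrated, tested, and rolled back individually, which is exactly why this kind of rewrite can run for two years without a big-bang cutover.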
But yeah, lots of Greenfield rewrites end in disaster. I give you that!
I'm convinced in hindsight that we could have just refactored in place and been just as well off. Sure, there would be some code that is still the ugly mess that made us jump to the big rewrite in the first place. However, we would have had working code to ship all along. Much more importantly, we fixed a lot of problems in the rewrite - but we introduced other problems that we didn't anticipate at the same time, and fixing them means either another rewrite or an in-place refactor. The in-place refactor gives one advantage: if whatever new hotness we choose doesn't work in the real world, we will discover it before it becomes a difficult-to-change architecture decision.
The only time it really makes sense to do a rewrite, is when either a new architecture/technology is going to be used that will impact team competency, or the team's competency has improved significantly but is being held back by the legacy application.
In both of those situations though, you could and should absolutely cut the application into pieces, and rewrite in place, in digestible, testable chunks.
The only time it makes sense to do a wholesale greenfield rewrite, is political. For example, maybe you have a one-time funding opportunity, with a hard deadline (rewrite before acquisition or ipo, etc).
We have also improved a lot as an industry. The rewrite was started in C++98 because C++11 was still a couple of years away. Today C++23 has a lot of nice things, but some of the core APIs are still C++98 (though fewer every year) because that is all we had the ability to use. And of course Rust wasn't an option back then, but if we were to start today it would get a serious look.
In hindsight, cleaning up that Perl and C++ code, even where both languages stand today, would have been a much better outcome than everything else that was produced out of that rewrite.
But hey, we all got to improve our CVs during that rewrite and got assigned better roles at the end, so who cares. /s
You are describing a web form.
If something is a 2 week rewrite, write it in BASIC, write it in a language you invented, whatever, have fun.
Their next example was exactly what you asked for: a 2-year rewrite.
Bonus points from me because they didn't wait for the whole rewrite to be done, and instead started using the new project by replacing only parts of the original one.
Building the bridge between the old & new, replacing piecemeal, and maintaining service to users the whole time.
Somewhat similarly, I feel like the boring/mature infra often gets ripped up in favor of something hip and new by a CIO who wants a career checkmark that they "modernized" everything. Then they move on to the next company and forget the consequences of breaking what was stable.
Same as I have for people resisting it because they want stability.
It is the management’s job to handle this decision and any potential problem caused by it, period.
Promise the world, hire, build, fail to deliver, move on.
Lots of respect for any worker, management or otherwise, who prioritizes their own welfare over someone else’s profits :)
All the company has to do is change the rules of the game!
The buck stops at the top of course, perhaps unless it’s something like government or a non-profit, but even then I’m not sure.
I've seen many fads over my lifetime and I expect to see many more. Some fads I regret their death, while others I'm glad we saw the light and don't use that anymore.
You can build a brand new Greenfield project with Java. You can also do minor enhancements on a cutting-edge tech project.
Interviewers care about the scope of work.
> We’ve seen (…) and would prefer (…) making money.
You think “the youths” don’t care about making money? That’s got to be in the top two reasons why people get into coding for at least the last decade. It’s also one of the top two reasons everything is shit, too many people only care about a quick buck.
They did.
For Mongo, going to Postgres went great, but for complicated reasons we're stuck with ElastiCache forever on our main product.
(This is a massive oversimplification, but still used rule of thumb.)
And most companies vastly overestimate their data, and believe it to be "big", when it could be trivially handled decades ago by server-grade hardware.
And, most importantly, lack of market availability. Nobody is going to sell you a relational database nowadays, and rolling your own is... daring. Postgres was the last serious kick at a relational database, but even it eventually gave into becoming tablational when it abandoned QUEL and adopted SQL.
But tablational databases are arguably better for most use-cases, not to mention easier to understand for those who don't come from a math background. There is good reason tablational databases are the default choice.
Just playing the devil's advocate here. I would prefer using boring technology that gets the boring work done as quickly and easily as possible anyways, but that's because I have more fun doing other things than working.
There is still lots of interesting stuff if your company uses the newest Java or newest .NET, but both are "boring" in a good way: mostly stable, with incremental changes and steady progress.
Heck Angular with its latest improvements in v17 and v18 is quite interesting - but counts as totally boring and stable tech. Migration to signals and standalone components is a bit of a hassle but still is rather easy.
We want to get the job done and get on with our lives.
Oh, and be able to meet the changes to the requirements we know will keep coming.
What they don’t understand is that after 10 or 20 years of this drama you start to realize how much of this new shit is a repetition of the shit the “old shit” replaced, but with new jargon. Similar Shit, Different Day. Progress is not a ladder in IT, it’s a spiral. The scenery is never exactly the same but it swings through the old neighborhood all the fucking time.
Eat your vegetables kid.
I would only agree it can be a Bus Number skill rather than an everyone skill. My point is some day there will be a new OS the kids will embrace because everything is new to everyone and it’s their chance to shine.
A customer wrote in trying to figure out why his Fargate application kept crashing. The app would hit 100% CPU usage and then eventually start failing health checks before getting bounced (rebooted).
I relayed this back to the customer who insisted the app shouldn't be spiking in CPU usage and wanted to know why. Of course being a Fargate workload there's minimal ways to attach debugging to it. You can't just spawn htop on Fargate!
Doing due diligence I fired off an email to the team that managed the infrastructure. They curtly replied
"it failed healthchecks and got bounced"
"Okay but why"
"It hit 100% CPU"
"Okay but why?"
"It failed healthchecks and got bounced"
At no point were they either willing to interrogate or even consider the lower layers of the stack. The very existence of everything below the containerized app was seemingly irrelevant to them.
After going back and forth about this for nearly a month and a half with the customer I asked my boss to add me to the "Linux" support Slack channel. Reasoning that there's got to be other greybeards in there who (frankly) knew what they were doing better than these kids.
After writing a multi-paragraph explanation of all my findings along with the customer, moments before I hit SEND I got an email
The customer's app was not releasing threads properly, causing the system to reach thread exhaustion and begin context switching, a CPU-intensive process that eventually would take so long that the health check probes would breach their timeout, declare the app down, and restart it.
Saying that "Linux knowledge is unnecessary" is, to put it bluntly, ignorant to the point of clownishness. Having a holistic understanding of how a system operates is invaluable.
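For anyone who hasn't run into this failure mode: the shape of the bug is a worker thread that is started per request (or per poll) and never finishes, so the live-thread count climbs until the machine spends its CPU scheduling threads instead of doing work, and even a trivial health-check handler misses its deadline. The customer's stack isn't stated above, so this is just a minimal Python sketch of the pattern, not their code.

    # Minimal sketch of a thread leak (illustrative only): each request spawns
    # a worker that busy-waits for a reply that never comes, so threads are
    # never released and the box eventually drowns in context switching.
    import threading
    import queue

    replies: queue.Queue = queue.Queue()

    def wait_for_downstream(job_id: int) -> None:
        # Bug: no timeout, and nothing ever puts a reply, so this loop (and the
        # thread running it) lives forever and burns CPU.
        while True:
            try:
                payload = replies.get_nowait()
            except queue.Empty:
                continue
            print(job_id, payload)
            return

    def handle_request(job_id: int) -> None:
        # Fire-and-forget: nothing joins or reaps these threads, so the live
        # thread count only ever grows with traffic.
        threading.Thread(target=wait_for_downstream, args=(job_id,), daemon=True).start()

    def healthcheck() -> str:
        # Cheap on a healthy box; with thousands of leaked runnable threads it
        # can easily miss the load balancer's deadline and get the task bounced.
        return "ok"

Fixing it means adding timeouts and bounding the worker pool, but you only find that by looking below the container, which was the whole point.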
That happens, sure, but all too often these same greybeards will fail to recognize when there is a truly novel thing. So do give new tech an honest chance before writing it off as reinventing the wheel.
Turns out they work pretty well.
Don't bet your application stack on the latest flavor of this cyclical fad.
They don't know. And they shouldn't!
I don't have any idea what kind of tooling was used to create the drill I bought, just that it drills holes and is at the right price. Same for any other number of products and services.
The person who made the drill cares deeply, as they should. But consumers don't.
Why is software any different?
You leaned too far into the joke and it undermined the point. There’re roughly three kinds of Show HN:
1. “I made this product to solve this need I perceived. I hope it does well and makes me money. Come check it out.”
2. “I had this insane idea which is funny just from the description and made it as an experiment. Let me share what was interesting about it. Oh, and here’s the code.”
3. “I made this AI NFT Data Harvesting scam and think everyone is too stupid to notice this isn’t a real product filled with fake reviews. Come click this link from this account with 80% flagged comments.”
What you described seems closer to 2 than anything else. There’s nothing wrong with those. They stimulate curiosity, which is HN’s goal.
I lust over a lot of gear (not guitar gear, but gear), but I keep telling myself that my goal is to play, so I focus more on what I already have than on the next gadget, even though sometimes it really is my bad instrument that is the problem. Sadly, despite the above, I'm not any better than the other two groups (even when my equipment is a problem, better gear wouldn't help much, since I'm still a large part of the problem).
Never been flyfishing, though.
Easier to geek out over the gear and/or spend the cash than to do the hard work and take the time to actually, you know, do the hobby.
Not sure if that’s possible when building software systems.
Because software is living, often online, has various types of your and others' personal data, and usually runs on devices that you use for other things as well. The attack surface is massive, and more often than not, accessible. (Your drill might have a way to override the safety switch by soldering stuff to it, but you need physical access for that, and the reward is minimal; if you get into Equifax' publicly accessible servers, you get lots of important and lucrative data).
You might not care that your bank's website uses some obsolete Java framework with security holes, but you will care if they use it to drain your bank account. You might not care that your recipe generator app uses an old version of Jenkins for releases and doesn't do code signing, but you will if someone hacks them and releases a fake version that installs a keylogger on your machine, or exploits a bug in their backend to ship you malware that scrapes your Pictures folder with your nudes and important documents.
And on and on and on. A piece of software not updated for years, or a phone with no security updates, or with horrible legacy like Jenkins, is a security risk you as a user have to take at least some interest in. Yeah, you won't pick your bank by their tech stack, but when you see an atrocious website with UX out of the 1950s, absurd password length restrictions, broken domains/certificates/etc., you can be pretty sure their tech stack is a disaster, and maybe, you can try to avoid them.
First, I want to see numbers to back that up. New versions bring new bugs that aren't known yet, by anyone. Old versions have older, more widely known bugs. From a stability POV, sticking with the old may be good: you know how to work around the old issues. From a security POV, that's probably bad: every script kiddie has a Burp Suite plugin to exploit it.
Second, there ain't no such thing as an immune system. You can asymptotically approach zero, like you can approach the speed of light, but it would require infinite resources to reach either.
I don't buy into the "cyber security" arguments, and frankly I consider it a grift to keep hackers employed by playing on the fears of people. The same thing as "anti-virus" software, which never really worked in real life and isn't widely used anymore.
Or how about Heartbleed, where the OpenSSL library had a bug. OpenSSL is on the external web server, and the attack could compromise the server's private keys. Perfect for impersonating the server. The solution was to update the OpenSSL library.
There have been browser zero-days. Hacker News sanitizes input so users can't compromise anything, but Hacker News itself could deliver an attack.
Never say never.
The first is just blatantly irresponsible and dumb "advice", while I do agree that most of the "you need to tick this box in order to get the contract" kind of "security" software is just malware, and often worse than what they supposedly cure.
It’s dependencies all the way down.
For most businesses, credit card processing is outsourced to Stripe or similar services, and the security for that is their responsibility. Customer data is only stored on local machines with encryption. So it's very possible to architect solutions that aren't vulnerable. Unless you want to go into very unlikely scenarios.
You appreciate why that would be a problem, surely?
As for leaking private data, now you're in the territory of some hackers having access to read RAM. Which I guess is a possibility, but not something that every business in the world needs to concern themselves with.
If you call your local auto dealer and say you want to buy all their cars, don't you think they have some process stopping them from just sending all their cars to your address? A hacker could make that call, you know...
It's the boring tech of the web ui world.
The entire ecosystem is the definition of not-boring.
[0] https://vercel.com/guides/corepack-errors-github-actions
Well obviously you need to catch up with the times. If the 'CTO' (who has at least 5 years of experience) of my shiny new SaaS can't do a regular blog post about how clever he is solving this week's obtuse problem in version 0.3.75beta01 of this MemeDrvrUltra thing he found last month and bet the company on, is it even really a SaaS startup? And he'd be denied that multipart year end expose (crossposted to every social platform on the planet using AIoftheWeek v.五.九 to dress it up to fit) about HOW FUCKING HEROIC his team was staying up for 8 days straight migrating from MemeDrvrUltra to SuperMegaMemeblaster ("Closed Private Beta FTW! We're special") and how it almost worked because MemeDrvrUltra was at least a couple of years old and clearly not what the VCs were talking up at the last speed pitch angel event (tho version 0.4.22.beta03 did close a bunch of our tickets (what's EWONTFIX mean again?) and changed its mascot to some funny looking frog). If all you did in life is be old and lazy and boring and pick "what works" or "what's well supported" or "stuff that doesn't get me an outage call once a week at 2:45am", what's the point of even living? Really, what sort of loser wants to work at a company like that?
/S
By calling it boring, they characterize their _preference_ as the majority accepted, mature, obvious decision and anything else is merely software engineers chasing shiny objects. The truth is almost always more nuanced. Both solutions have pros and cons. They trade-off different values that resonate more or less with different people.
So, please be careful with the "it's boring and therefore obviously better" argument. Don't let it be a way to summarily dismiss someone else's preferences in favor of your own--without a deeper discussion of trade-offs. Otherwise it's no better than any other condescending attempt (ex. I'm in charge, so we are doing it this way. No one ever got fired choosing IBM/Microsoft/..) to win an argument without having to present real arguments.
Anything that makes money is boring, anything that they like is boring, another poster saying that "boring tech stacks tend to be highly scalable".
By tomorrow we'll have people saying boring tech also brings world peace and cures world hunger.
The "boring" meme is just a conversation killer.
In the same way, I would say be wary of an argument which can be boiled down to "it's newer so it's obviously better". I used to make such arguments myself at the beginning of my career, but now I see that preferring everything new is a bad strategy.
Comparing software objectively is hard (if not impossible), and there is a place for personal preferences too. But if, after a discussion of pros and cons, it's not obvious which of two options is significantly better, I would be inclined to choose the older and more established technology.
But I would suggest that Kotlin gets you all the "boring advantages" of Java, with feature adds.
I think you typically see this with older, established things. But there is nothing guaranteeing it. And, indeed, it is often the result of specific action by the stewards of a technology.
This can often be couched in terms of backwards compatibility. Which is a clear action one can pursue to get stability. However, it can also be seen in greatly limiting scope. As an example, we love talking about scale, but that doesn't mean you have to design and build for scale you will never see.
If you like something it’s a “best practice.” If you don’t, it’s an “anti-pattern.” If you are used to something and like it, it’s “boring.” If you are not used to something and expect you won’t like it, it’s a “shiny object.”
IME, these sorts of terms are not helpful when discussing tech. They gloss over all the details. IMO it's better to recognize the various tradeoffs all languages and tools make and discuss those specifically, as these sorts of labels are almost always used as an excuse not to do that.
"Boring" works in this regard because you are saying there is not a lot of activity behind the scenes on it. Most of the work is spent doing the "boring" parts of the job for most of us. Documentation and testing.
Which doesn't mean it's bad or anything. But "boring" shouldn't be redefined to "something I like" or "something I make money with".
A system that breaks when updating dependencies, introduces unexpected behaviour through obscure defaults, forces you to navigate layers of abstraction isn't mature... (looking at you Spring and Java ecosystem), it's old and unstable.
Stability, predictability, and well-designed simplicity define maturity, not age alone.
Is Python mature and boring? With toolchain issues and headaches of all kinds... Newer languages like Go or Rust, for example, solve all these toolchain issues and make things truly boring in the best way possible.
The big issue, IMHO, is that when you're dealing with interpreted languages it's very hard to lock down issues before runtime. With compiled or statically typed languages you tend to know a lot sooner where issues lie.
I've had to update requests to deal with certificate issues (to support more modern ciphers/hashes etc) but I won't know until runtime if it even works.
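To make the "only at runtime" point concrete: the usual way to change which ciphers requests negotiates is to mount a custom transport adapter, roughly like the sketch below (cipher string and URL are placeholders, not a recommendation). Nothing in it is verified ahead of time; an invalid cipher string, an incompatible requests/urllib3 combination, or a server that refuses the handshake all show up only when a request actually goes out.

    # Hedged sketch: pinning the TLS ciphers requests will negotiate by
    # mounting a custom HTTPAdapter. Whether any of this works is only
    # discovered at runtime, on the first real request.
    import ssl
    import requests
    from requests.adapters import HTTPAdapter

    class PinnedCipherAdapter(HTTPAdapter):
        def init_poolmanager(self, *args, **kwargs):
            ctx = ssl.create_default_context()
            ctx.set_ciphers("ECDHE+AESGCM")  # placeholder cipher string
            kwargs["ssl_context"] = ctx
            return super().init_poolmanager(*args, **kwargs)

    session = requests.Session()
    session.mount("https://", PinnedCipherAdapter())
    print(session.get("https://example.com").status_code)

A compiler won't save you from a server rejecting a cipher suite either, but a statically typed stack would at least catch the cases where the adapter API has moved underneath you.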
(I do not want 20 copies of the same library in my process)
It's worth noting that all of these involved anaconda, which was the recommended way to install numeric libraries at the time. Other package managers might be better.
Sometimes old versus new affects this. For example, in Rust the language improves so fast I've literally had a 3-month-old rustc be unable to compile a Rust program (an SDR FFT thing) because it used a new feature my 3-month-old rustc didn't support. As I continued to encounter Rust projects this happened a few more times. Then I decided to stop trying to compile Rust projects.
Right now the dev culture is mostly bleeding-edge types who always use the latest version and target it. As Rust becomes more generally popular and the first-adopter bleeding-edge types make up proportionally less of the userbase I expect this will happen less. Bash still gets new features added all the time too; it's just that the type of developers who chose to write in Bash care about their code being able to work on most machines. Even ones years (gasp!) out of date.
I have some Java projects using the newest 21 version, and older ones using 8.
I don't have to set up a custom install of a language for every single application in any other language (although Python in the machine learning domain is getting there). This is an abnormality which complicates software maintenance and leads to problems. It should be avoided if possible. And setting up a container for every application is also not a solution; it's a symptom. Like a fever is a symptom of infection, containers are a symptom of development future shock.
To be clear, I'm talking about in the context of a human person and a desktop computer. Not employed work at a business.
> I don't have to set up a custom install of a language for every single application for any other language
Which languages do you use? I find that using a version manager saves a lot of headaches for every language that I use, and is very much a normality. Otherwise I run into issues if I need different versions for different projects. The fact that Rust has a first-party version manager is a blessing.
>Which languages do you use?
c, c++, perl, bash. A program written in perl+inline c today will compile and run on system perl+gcc from 2005. And a perl+inline c program written in 2005 will compile and run just fine on system perl+distro today. And pure Perl is completely time/version portable from the late 90s to now and back. No need for containerization or application specific installs of a language at all. System perl just works everywhere, every time.
There are C++xx-isms and non-C89-isms in some programs written in these languages. But at least those only happen a couple of times a decade, and because of the languages' wide popularity and dev culture, their use lags much further behind introduction than in Rust or Python.
Even better, a metric for refused PRs (maybe including PR size somehow), tracking where users cared enough to try to contribute and the owner just refused to accept the changes. Easily gamed though.
Think of how much time is wasted because so much software has been written but not maintained, and can't be used because of how libraries have "evolved" since then.
In my experience there are almost always some issues (either legit or misguided support requests), PRs for docs and features, and updates to libraries for vulnerabilities or deprecated code. It's worth looking at stuff that isn't open also. Even if it's a 10-year-old project and the last commit is from 6 months ago fixing a seemingly minor bug, if there are more than a few stars or forks, and there aren't ignored issues and PRs sitting there, I'd consider it trustworthy.
And if it does look abandoned but was active at some point, it’s always a good idea to take a stroll through the forks.
This one was actually archived last year, so it's clearly unmaintained, but so far it seems to be working great. But no successor fork. I think in this case I will use it, especially because it's in test code and not production code.
but if you find a mature/dead project there's a chance it's just old and stable.
what I'm getting at is that in an unmaintained project, any stranger should be able to figure out how to address any novel issues. That's the point of open source; I don't understand why this is failing. Maybe because understanding an anonymous code base is hard work?
It's sometimes possible for a project to have 0 bugs to fix and 0 in-scope improvements (for performance, compatibility, etc.) left to make, but only if its scope is extremely limited. Even Knuth still gets bug reports for his age-old TeX.
If you get many "wtf" moments while doing this, the project is not mature.
I have a feeling that I don't get many replies to job applications because the vast majority of work I've done is "boring", and the majority of open source code I've written is shell scripts. It all works fantastic, and has zero bugs and maintenance cost, but it's not sexy. Intellectual elitism has also defined my role ("DevOps Engineer" is literally just a "Sysadmin in the cloud", but we can't say that because we're supposed to be embarrassed to administrate systems); I'm fairly confident if my resume was more "Go and Rust" than "Python and Shell", I'd get hired in a heartbeat.
Python is a very old tech (v1 in 1991), but you won't have a hard time finding a job because the popularity of the language has been kept up first with the web, then with data analysis and now with AI.
Also, the more something has been out there, the more legacy there is to maintain. There is no shortage of PHP or Java jobs. Sure they are not sexy, but you'll have work.
"Dive headfirst into the realms of React.js, Vue.js, and Angular for mind-blowing interfaces. Unleash the raw power of Svelte and Next.js for lightning-fast performance. Style with Tailwind CSS and code with TypeScript for ultra-modern, maintainable projects. Integrate GraphQL and WebAssembly for next-level data handling and execution. Build Progressive Web Apps (PWA) and leverage Server-Side Rendering (SSR) for out-of-this-world user experiences. Embrace the Jamstack architecture and Micro Frontends for infinitely scalable, modular applications. Focus on Component-Driven Development and Headless CMS for ultimate flexibility. Create Single Page Applications (SPA) with Responsive Design and CSS-in-JS for seamless adaptability. Master State Management with Redux, MobX, or Zustand. Supercharge your workflow with Automated Testing (Jest, Cypress) and CI/CD Pipelines. Prioritize Web Performance Optimization, Accessibility (a11y), and User Experience (UX) for top-tier applications. Implement Design Systems, Code Splitting, and Lazy Loading for hyper-efficient, user-friendly experiences. Join the vanguard of front-end development and shatter the boundaries of what’s possible!"
Except for the mentions of next.js and one mention of AI assistants, these job postings could be from 2020 or even 2015.
Senior Frontend Developer at SimplyAnalytics
- 8+ years of professional software development experience on large, structured code bases using vanilla JavaScript (this is not a React, Angular, Node.js, or full-stack position)
- Strong UI development skills (CSS & HTML)
- Open to learning new technologies
- Self-starter who gets things done
- Attention to detail
--- Frontend Developer at Nutrient
- Good knowledge of web technologies (e.g., HTML, CSS, React.js. Next.js, Javascript/jQuery, HTTP, REST, PHP, Cookies, DOM).
- Familiarity with UI frameworks (e.g., Bootstrap, Tailwind CSS).
- Familiarity and regular use of AI assisted IDEs like Cursor, Windsurf, Co-Pilot, etc.
- Manage and prioritize multiple concurrent projects, meeting deadlines in a fast-paced environment.
- Have good communication skills and enjoy working with a passionate team and experience working on a globally distributed team.
- Have a well-rounded approach to problem solving, and understand the difference between when to apply a fix and when to refactor to remove a specific class of bugs.
- Experience integrating with various Marketing technologies and tools/APIs (e.g., Hubspot, Google Analytics, Salesforce, etc).
--- Senior Frontend Software Engineer at Hopper
- Senior-level experience & familiarity with React
- The ability to effectively drive towards a solution in a thoughtful and creative manner
- The ability to work autonomously, iterate on solutions, and manage different contexts
- Dealt with ambiguity and can balance building out multiple features at once without jeopardizing the quality of the code
--- Senior React Developer at SKYCATCHFIRE
- Expert-level React development
- Strong background in TypeScript and Next.js
- Using AI assistance tools to develop better software
- Experience building and maintaining large-scale applications
- Clear written communicator that prefers emails to meetings
- Portfolio showing systematic, well-structured work
On the other hand the Go tool chain is superb, but you need to reinvent a few things (until recently Go didn’t have generics) and the libraries out there are just a fraction of what Java offers.
Boring build tools on Java land would be Ant/Ivy, Maven.
Is your assumption that the learning curve on Gradle is higher than that of Maven?
I ask as someone who endured both tools/ecosystems and is intrigued with the idea of moving on to a system like Bazel.
Which is especially sad because he fails to even understand what Gradle does (see his comparison with Ant, which is just completely inaccurate). Gradle operates on a proper build graph - actually better than Maven at this: there is never a need to do a clean install in Gradle, but you often have to do one with Maven to get the proper output.
Also, they are very keen on breaking the DSL every couple of releases, and before the Kotlin-based DSL they had to come up with a background daemon to cache Groovy execution, due to its performance, or lack thereof.
So far, I never had any reason to try out Bazel.
Gradle and Maven have one rare superpower, maybe you could lump Ivy in there, which is to navigate and cache Maven repos to build dynamic classpaths. Java is like a 10-ton elephant hurtling downhill on one foot on a squeaky old rollerskate. Somehow it manages to stay up, but it's always a near-run thing whether it crashes. I appreciate that Gradle continues to make incremental advances that keep the elephant up. You complain that Gradle keeps breaking the DSL, but they have to navigate new Java, Groovy, and Maven releases that have relevant feature developments. It's a 16-year-old program; it's going to either change or die. I look at JavaExec and consider the power of all it does expressed in such a simple way. Just that one feature makes Gradle tough to beat and worthwhile: https://docs.gradle.org/current/dsl/org.gradle.api.tasks.Jav...
Here are the recent-ish feature adds that I found super impactful and that improved the way I worked with Gradle and the overall quality of my builds:
* https://docs.gradle.org/current/userguide/sharing_build_logi...
* https://docs.gradle.org/current/userguide/platforms.html
To me, the fact that Gradle as an organization has been able to keep doing updates over years and make a niche business out of that product has been laudable. I think the way they positioned themselves as thought leaders during the whole Log4Shell fracas was really excellent, and I appreciate that paying them for support mostly means you get their best performance tools and training. I'm skilled enough after a lifetime with the tool that I can do an ok job at performance, but I'm sure things would improve if I could pay.
Bazel is a huge step up complexity wise, and as far as I can tell, it would not be useful for "watcher mode" style builds the way gradle is. If anyone from Gradle, Inc. reads this:
* I like the idea of hermetic builds and reproducible, cacheable builds touted by Bazel, but I also want my ability to use the watcher mode to script hot redeployments and manage local test server lifecycles like I can do in Gradle. Could Gradle be morphed or have a constrained mode somehow to be able to do both those things? EDIT: OMG, maybe you are already headed this direction! https://blog.gradle.org/declarative-gradle-first-eap
* It seems to me that software supply chain in the Java ecosystem could use a package management expert like you. I would love it if Gradle had its own, private for-pay repository that contained a trusted subset of artifacts that you certify and watch new versions and do independent code quality checks and 1st party code reviews (AI-assist ooh la la!) If we can keep our builds to your trusted set, we could have a lot of confidence that our software supply chain is safe. Maybe you could charge customers to add more artifacts to the repo that you approve of quality-wise and then validate and serve going forward.
I don't like the way public maven repos are centralized that we blindly trust and give our telemetry to and would love to point my builds at a private repo.
Any halfway non-horrible tech just needs enough funding and usage from some company to become mainstream.
It's like those hobbyists who invest all of their time and money into buying high-end expensive kit, but lack ability.
Photography is less about the technology and more about the art.
Many SLR users, wannabe photographers, will never achieve what a professional is able to do with even a throw away fixed focus camera.
If it doesn’t, then why do professionals use gear that’s often worth tens or hundreds of thousands of dollars? Try shooting wildlife or soccer with a smartphone from 2010. Are you sure it’s not limiting you? ;)
In photography both things matter: skill AND tools.
Also not all photographers are alike and not all photography is art. Event / documentation photography is also professional photography. It requires repeatability and acceptable results, not great results once in a while. That’s why professionals use expensive, sophisticated gear with autofocus, eyetracking, high speed drive etc. They cannot afford getting only 1 shot out of 1000 great. They need all (most) shots good enough.
Btw: Many amateurs who do photography for fun are often way more artistically skilled than professionals so I would not look down on them. It applies not just to photography but probably any discipline. The original meaning of the word „amateur” is actually very positive. Being paid for something does not mean you’re good at it. I’ve seen plenty of terrible quality work from professionals (in photography, in computer programming, in electronics design/rework).
I don’t really care about the brand or model but I like to know the settings. Ie aperture used, max aperture, focal length, etc.
Was very subtle flash used and if so what was the placement and mod, etc?
I just want to know how it was made.
Even with shots from people like Cartier-Bresson where he almost always used the same lens, film and settings. I still wonder if anything changed.
I don’t miss all the arse sniffing, snobbery and inverse snobbery involved in photography though.
What are some boring techs you use that you cannot live without? How long have you been using them?
For me, it'd be stuff like Vim, C, Python, Fedora, mutt and I've been using them for 25-30 years! How about you?
Just as an idea, not listing everything.
"Cannot live without" is a strong wording, but software that I use a lot and that's mature/stable in my experience: shell (zsh, bash, sh), GNU utils, vim, nmap, xfce, git, ssh, mpv, Xorg, curl, and lots of little old CLI tools.
Debian. Moves at its own pace, flexible yet stable enough to cope with my idea of a smart move.
Go. I've been playing with it since before 1.0, threw it into production at my first occasion, and all those years it continues to deliver.
Django. Identical story.
...StarCraft (1, then 2). Technically not "tech", but in a competitive setting, it remains strategically, tactically, and mechanically demanding, it reflects that path of mastery, "teach yourself programming in ten years". It shaped my spirit, attitude, but also humility.
SC2 in its first decade has been about always keeping the game fresh - new units, new spells, adventurous changes, crazy maps. As of 2020, the changes not only stopped, but the game was caught for a few years in a pretty crappy state: nigh unbeatable PvT cheeses, 40min-long PvZs where Zerg is clearly winning but can't close the game, meanwhile no Protoss in top 10 GM or major tourney semifinals. The worst of it has now been fixed, but the changes are only slight tweaks and nudges, now again reminding me of Brood War. It's still an excellent game though.
- IE9 which finally allowed you to use modern (at the time) web features (like flexbox) without having to support IE specific hacks.
- ES6 which added a lot of syntax changes to JS to make it much nicer to use (and pretty much killed Coffeescript).
- Popularization of type-checking with Typescript and Flow around 2020 which is almost standard these days.
And of course the frameworks evolved a lot as well, but that was mostly project-specific not so much the platform. Someone doing React doesn't care about Angular2 release.
The Esc key used to stop animated gifs and cancel AJAX calls, it was like a 'stop the world, lemme get off!' button.
Canvas tag (with desynchronized context), Gamepad API, and Web Audio API made the browser into a full-blown operating environment supportive of game development.
CSS3 - grids, aspect-ratio, media queries, oh my!
Web Workers, ASM.js, and WebAssembly -- what even is web development anymore?!?
I was just highlighting the stuff that really made a huge difference for everyone. Even if you don't use typescript your deps probably do and your IDE can show type hints.
Typescript only ever paid off in terms of capabilities for me when my dependencies went all-in with the runtime type information - it felt a lot like Java development, marshalling and unmarshalling JSON to objects. But by then, my build times were turning into molasses.
Express has so many different uses. I've used it on very large scale backend projects for Fortune 500s, shoved it inside Lambda functions, used it to host email templates, and built dev tooling with it to overcome shitty APIs at work that always go down.
In all those times I don't think I've ever run into an issue that wasn't already solved.
https://netflixtechblog.com/node-js-in-flames-ddd073803aa4
But the alternatives like Restify and Fastify are very similar to Express in developer experience, so it was not a huge deal to move away from it. One could think that these new frameworks are just a new major version of Express that had a lot of breaking changes.
In the netflix blog post they're complaining about increasing latency over time because they have a function that *reloads all express routes in-memory* that didn't properly remove all the previous routes, so the routes array got bigger and bigger. That's not a fundamental problem with express[1], that's an obscure (ab)use case implemented wrong. Hardly a damning indictment of express.
> This turned out be caused by a periodic (10/hour) function in our code. The main purpose of this was to refresh our route handlers from an external source. This was implemented by deleting old handlers and adding new ones to the array. Unfortunately, it was also inadvertently adding a static route handler with the same path each time it ran.
[1]: Admittedly an array is not the "best" data structure for routing, but that absolutely wasn't the performance issue they were having. Below a couple thousand routes it barely matters.
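For readers who didn't click through: the shape of the bug is easy to reproduce in any framework that matches routes by scanning a list. Here is a framework-agnostic Python sketch of the mechanism (invented names; not Express or Netflix's actual code):

    # The periodic refresh swaps out the dynamic handlers but also re-registers
    # a static handler on every run and never removes the old copies, so the
    # route list grows forever and the linear scan on each request gets slower.
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Route:
        path: str
        handler: Callable[[], str]
        dynamic: bool

    routes: List[Route] = []

    def refresh_routes(new_dynamic: List[Route]) -> None:
        # Intended: replace the previously loaded dynamic handlers.
        routes[:] = [r for r in routes if not r.dynamic]
        routes.extend(new_dynamic)
        # Bug: appended on every refresh (10x/hour) and never filtered out above.
        routes.append(Route("/some/static/path", lambda: "static", dynamic=False))

    def dispatch(path: str) -> str:
        for r in routes:            # linear scan, like Express 4's router
            if r.path == path:
                return r.handler()
        return "404"

Run refresh_routes ten times an hour for a few days and dispatch latency creeps up the way the post describes, with no bug in Express itself.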
Any NodeJS web projects I run are on Express 4 and _still_ use express-async-router, helmet, and a hodgepodge of other add-ons to make it passable.
It is highly productive and useful, but I would never serve to the internet with NodeJS, much less with Express 4. Only internal apps. Scary NPM ecosystem of abandonware and resume fodder, it's not okay!
What is the native UI system provided by Windows? They release a new one with each OS, and it's instantly deprecated.
Or by Linux? Is it GTK or Qt? Xmotif?
Using that perspective I think digital circuits which can be created in any controllable media is what I'm pursuing. I'm trying to make large circuits that represent the physical environment and the logic of the technical tasks that need to be done. My thinking is that next wave technology improvement will come from large environmental circuits that represent the human use of the physical environment.
I see the pursuit of smaller and smaller circuits as only one method of technology development best left to sophisticated global production. I think there will be cottage industry based on locally or "last mile" implementation of more esoteric but easy to understand systems.
The general-purpose computer has resulted in systems that are overly complicated and inefficient for most regular people's actual computing needs.
Maybe it makes sense for your business maybe it doesn't. If you don't want to be on the forefront of technology, well don't. I don't think that launching every startup using the late adopter strategy is necessarily going to result in better business performance. Those that find a better way to do things will win.
And also these legacy techs don't pay that much in general. For every consultant making bank fixing COBOL bugs there are probably dozens or more COBOL developers making less than a JavaScript developer, so you'll find yourself making less money than a kid with 3 years of experience in web development, and when you go out into the market trying to switch jobs, you'll have a hard time finding a new one, or the salaries will suck. Don't be stupid and become the fall guy keeping the legacy debt going while everybody else in the company is learning the in-demand cool stuff and padding their resumes with hireable skills. You will regret it.
But introducing it willy nilly into production applications because you want to gain experience with it is bad. Bad for the business, at least. For you it might be resume driven development.
That's why I always advocate for time and space for developers to play with new things on the company's dime. Some good options:
- conferences
- hackfests
- spikes
After some investigation, you can layer in new tech where it makes sense, which makes for an even more compelling story on the resume.
Of course, you also have to have business buy-in that this is a worthwhile use of time. R&D and investing have a lot longer history than the craft of software engineering, so that's the approach I'd take.
Very true! I would also add: Sometimes boring things are boring for reasons that actually make them poor solutions.
For example, some systems require tedious, mind-numbing configuration. This is boring, but also a good reason to NOT use something. If it takes hours and hours of manual tuning, by someone with special training, then the solution is incomplete and at least needs some better defaults and/or documentation. It might be a poor option.
Another example is a system that does not lend itself to automation. Requiring manual interaction is certainly boring, but also does not scale well. It is a valid reason to disqualify a solution.
Boring can often be a smell--an intuition that a solution is not fully solving the problem.
Most of my career was building dev tools, test infrastructure, etc. The cardinal rule there was that it has to be boring. Practically, this meant no surprises, almost invisible, people take it for granted and it just facilitates things without getting in your way.
Similar to electricity, air, water etc. Very few people really notice these things and talk about them. Till they stop working in which case, it's all people can think and speak of.
let's schedule deployment for Friday afternoon
Just before a one-month backpacking trip to the Sahara desert.
just this once.
What is old becomes new, and what is new becomes old. You think you're creating some cool new "shitz", but someone else has already done it. You're just reinventing the wheel with some extra "shitz" tacked on
Often this is part of the value proposition with commercial offerings - Obtaining access to solutions that work for people with much bigger problems than you.
This can differ subtly from the raw scale of deployment. For example, SQL Server and Oracle may not be as widely deployed as something like SQLite, but in the cases where they are deployed the demands tend to be much more severe.
Wireguard is a pretty simple vpn to setup, has a very predictable and stable behaviour which makes it boring, but it's fairly new compared to other vpn software.
Today I successfully ran `pip install -r requirements.txt` in one Python project I cloned from GitHub, I couldn't believe my own eyes. Usually it's at least half an hour trying to figure out how to install dependencies.
In JavaScript ecosystem, installing packages works, but the packages get deprecated within a few months. React and many other frontend frameworks completely change their philosophy and the recommended way of writing code that you need to rewrite your app every 1-2 years or be left behind with deprecated packages.
If it's still like the RoR I used, it can never be considered "boring", just old.
It was a framework where code you dump in "conventional" locations is autoloaded everywhere.
With DSLs based on interpreting the method name as an expression, reflecting on them in `method_missing` implementations you get from inheriting classes.
Where state is shared between instance objects by way of reflection.
Where source diving is the documentation in many third party packages.
No, these were reasons why "rockstar" became a pejorative in the programming community for a while.
The article says "The opposite of being bored is to be surprised".
RoR is the surprises and metaprogramming magic all the way down, with a culture that hates code comments like it's some sort of plague.
People here are using "boring" as a synonym for "I can make money with it".
If I recall, this was one reason GNU Octave integrated the Forth+Fortran code used at NASA, and kept the reasoned algorithmic assumptions consistent with legacy calculations.
All legacy code has a caveat that depends on if a team cared about workmanship. =3
If I'm being honest, I'm a bit tired of seeing this trope. The details _always_ matter, and when it's written like this, it comes across as universally true. Ironically, Kubernetes seems to pass the test of "boring" by the author's standards, considering how long it's been around and how many hundreds of thousands of clusters have been running on it over the years.
Do people with really bad architecture skills use k8s to design overly complex clusters? Absolutely. Does that represent the entirety of the k8s community? Not even close. Kubernetes is dangerous because of how far down the rabbit hole you can go. People who try to be Google on day one with k8s are generating so much unnecessary negative PR.
Alas, it's tiresome.
But personally, I think it's silly to say Kubernetes has more of a propaganda issue than any other <insert big name framework>. There are "purists" like any other tool. However, I don't let them bother me so much that I just throw the baby out with the bathwater.
We got into a meta discussion about how we still have to approach it methodically, but the nice thing about boring, mature software is having a good gut feel that it probably won’t dip into the risk budget much at all.
Not that I'm unappreciative of what projects like Postgres, Django, etc. have given me. Like a good physician, I appreciate you and everything you do, but we'd both be happier seeing each other as rarely as possible.
I say that sometimes to people and they look at me weird. When you work in bigger projects with a lot of people it is harder to argue why you shouldn't write the code than to just do it.
Like, for example, I had to code some build-step that updated some assets that took about 5 seconds to run. That operation was done maybe once a month by other developers, during review another person asked why I didn't parallelize the process and cache already processed files and I was just like: it would add 200+ lines of extra code and error handling and it is not like I mind doing it, I just don't think it is worth the overhead of understanding this code and troubleshooting any possible bugs of this extra optimization code.
And it is harder to argue this kind of thing back and forth than it is to just do it. And now there are 200 extra lines of code that would take anyone else besides me at least an hour to grasp before they can make changes.
Same applies on discussing why you shouldn't add a dependency. If anything that is harder because you need to justify the extra time of not using the dependency.
If you're experienced enough, you either know the rough way to code a task or realize that you need to take time to investigate the problem space. I don't think I ever ask myself how to write more code with less effort. All the improvements I've made targeted precisely the thing I wanted to edit.
Most compilers of ISO languages; some of them have all-caps names, others not.
This would probably not have been as big of a headache if it weren't for the fact that it was running in a container and was deployed as part of a separate project, meaning CloudNativePG (which probably handles this for you) was not an option.
Boring cryptography is obviously secure.
The guiding principle for whether something is boring or not is the Principle of Least Astonishment. If I can, say, send you a ciphertext that was encrypted with an authenticated mode, and then decrypt it to two valid plaintexts using two different keys, this is astonishing even if the impact of it is negligible.
I had frequently thought that I would prefer to maintain a batch system than an online system.
(Though curiously many of those systems still run in actual virtual machines emulating the whole mainframe and whatnot)
The JVM is a reliable stack. I used it at work on most projects.
I highly recommend in the future starting from what you actually value about software. Oldness and boringness are not the reasons, and if they are, they are extremely bad reasons.
I can't resist asking, though: does this make Common LISP boring?
Incompatible is not.
The challenge is to make new software compatible with mature software. The more we try, the easier it gets.
Eh, the opposite of being bored is being excited/engaged, and the opposite of being surprised is being predictable. I don't disagree that predictable is probably what we want for a lot of what we see around us (no one wants unpredictable traffic lights), but I think we are lying to ourselves if we deny that for at least some subset of our work life we want cool and shiny, so long as we stay within the bounds of business objectives.
boring tech is nice, if it can get your job done, is compatible with modern security standards and allows fast reliable development
sadly that isn't always the case
especially security standards have shifted a lot in the last 10+ years, partially due to attacks getting more advanced partially due to more insight into what works and what doesn't
deployment environment and pipelines have shifted a ton, too, but here most "old" approaches continue to work just fine
data privacy laws, including but not limited to GDPR, bring additional challenges wrt. logging, statistics and data storage
regulations in many places also require increased due diligence from IT companies in all kinds of ways, bringing new challenges to the software life cycle, dependency management, and location of deployment. Things like the 4-eyes principle, immutable audit logs, and a reasonable standard of both dynamic and static vulnerability scanning/code analysis can, depending on your country and kind of business, be required by law.
If your boring tech can handle all that just fine, perfect use it.
But if you just use it blindly without checking whether it's still up to the task, it can easily be a very costly mistake - as costly as blindly using the new, widespread, hyped tech.