To be honest, it's hard for me not to get kind of emotional about this. Obviously I don't know what's going to happen, but I can imagine a future where some future model is better at proving theorems than any human mathematician, like the situation, say, chess has been in for some time now. In that future, I would still care a lot about learning why theorems are true --- the process of answering those questions is one of the things I find the most beautiful and fulfilling in the world --- and it makes me really sad to hear people talk about math being "solved", as though all we're doing is checking theorems off of a to-do list. I often find the conversation pretty demoralizing, especially because I think a lot of the people I have it with would probably really enjoy the thing mathematics actually is much more than the thing they seem to think it is.
> "The rapid advance of computers has helped dramatize this point, because computers and people are very different. For instance, when Appel and Haken completed a proof of the 4-color map theorem using a massive automatic computation, it evoked much controversy. I interpret the controversy as having little to do with doubt people had as to the veracity of the theorem or the correctness of the proof. Rather, it reflected a continuing desire for human understanding of a proof, in addition to knowledge that the theorem is true."
Incidentally, I've also run into a similar problem when reviewing HCI and computer systems papers. OK sure, this deep learning neural net worked better, but what did we as a community actually learn that others can build on?
The Four Colour Theorem is true because there exists a finite set of unavoidable yet reducible configurations. QED.
To verify this computational fact one uses a (very) glorified pocket calculator.
The thing is that the underlying reasoning (the logic) is what provides real insights. This is how we recognize other problems that are similar or even identical. The steps in between are just as important, and often more important.
I'll give an example from physics. (If you're unsatisfied with this one, pick another physics fact and I'll do my best. _Any_ will do.) Here's a "fact"[0]: atoms with an even number of electrons are more stable than those with an odd number. We knew this in the 1910s, and this fact directly led to the Pauli Exclusion Principle, which led us to better understand chemical bonds. Asking why Pauli Exclusion happens furthers our understanding, leading us to a better atomic model. It goes on and on like this.
It has always been about the why. The why is what leads us to new information. The why is what leads to generalization. The why is what leads to causality and predictive models. The why is what makes the fact useful in the first place.
[0] Quotes are because truth is very very hard to derive. https://hermiene.net/essays-trans/relativity_of_wrong.html
I'm fairly sure that people are only getting hung up on the size of this finite set, for no good reason. I suspect that if the size of this finite set were 2 instead of 633, and you could draw these unavoidable configurations on the chalkboard and easily illustrate that both of them are reducible, everyone would be saying "ah yes, the four colour theorem, such an elegant proof!"
Yet, whether the finite set were of size 2 or size 633, the fundamental insight would be identical: there exists some finite unavoidable and reducible set of configurations.
I think that is exactly correct, except for the "no good reason" part. There aren't many (any?) practical situations where the 4-colour theorem's provability matters. So the major reason for studying it is coming up with a pattern that can be used in future work.
Having a pattern with a small set (a single-digit number of configurations) means that it can be stored in the human brain. 633 objects can't be. That limits the proof.
But, I can understand if pure mathematicians don't feel this way. This might be only really an intriguing and beautiful concept to someone who is interested in scaling up algorithms and AI.
Surely, reducing the infinite ways in which polygons can be placed on a plane to a finite set, no matter how large, must involve some pattern useful for future work?
That’s the whole point of math.
Have programmers given up on wanting their mind blown by unbelievable simplicity?
OK but respectfully that's just restating the problem in an alternative form. We don't get any insight from it. Why does there exist this limit? What is it about this problem that makes this particular structure happen?
https://blog.tanyakhovanova.com/2024/11/foams-made-out-of-fe...
You just summarised (nearly) everything a mathematician can get out of that computerised proof. That's unsatisfying. It doesn't give you any insight into any other areas of math, nor does it suggest interesting corollaries, nor does it tell you which pre-condition of the statement does what work.
That's rather underwhelming. That's less than you can get out of the 100th proof of Pythagoras.
https://blog.tanyakhovanova.com/2024/11/foams-made-out-of-fe...
This is also nice because only pre-1600 tech is involved
We "know" it's true, but only because a machine ground mechanically through lots of tedious cases. I'm sure most mathematicians would appreciate a simpler and more elegant proof.
However, by the end of that period, it seemed to transition to a situation where the most important skill in achieving a good score was manipulating statistical machine learning tools (Random Forests was a popular one, I recall), rather than gaining a deep understanding of the physics or sociology of the problem, and I started doing worse and lost interest in Kaggle.
So be it. If you want to win, you use the best tools. But the part that brought joy to me was not fighting for the opportunity to win a few hundred bucks (which I never did), but for the intellectual pleasure and excitement of learning about an interesting problem in a new field that was amenable to mathematical analysis.
"Mathematics advances by solving problems using new techniques because those techniques open up new areas of mathematics."
Think of the problem as requiring you to spend a certain amount of complexity to solve it. If you don't spend it on developing a new way of thinking, then you spend it on long and tedious calculations that nobody can keep in working memory.
It's similar to how you can write an AI model in Pytorch or you can write down the logic gates that execute on the GPU. The logic gate representation uses only elementary techniques. But nobody wants to read or check it by hand.
For actual research mathematics, there is no reason why an AI (maybe not the current, purely statistical models) shouldn't be able to guide you through it exactly the way you prefer. Then it's just a matter of becoming honest with your own desires.
But it'll also vastly blow up the field of recreational mathematics. Have the AI toss a problem your way you can solve in about a month. A problem involving some recent discoveries. A problem Franklin could have come up with. During a brothel visit. If he was on LSD.
Like I said, I don't have any idea what's going to happen. The thing that makes me sad about these conversations is that the people I talk to sometimes don't seem to have any appreciation for the thing they say they want to dismantle. It might even be better for humanity on the whole to arrive in this future; I'm not arguing that one way or the other! Just that I think there's a chance it would involve losing something I really love, and that makes me sad.
Yes! This is what frustrates me about the pursuit of AI for the arts too.
I see both cases as people who aren’t well served by the artisanal version attempting to acquire a better-than-commoditized version because they want more of that thing to exist. We regularly have both things in furniture and don’t have any great moral crisis that chairs are produced mechanistically by machines. To me, both things sound like “how dare you buy IKEA furniture — you have no appreciation of woodwork!”
Maybe artisanal math proofs are more beautiful or some other aesthetic concern — but what I’d like is proofs that business models are stable and not full of holes, constructed each time a new ML pipeline deploys; which is the sort of boring, rote work that most mathematicians are “too good” to work on. But they’re what’s needed to prevent, eg, the Amazon 2018 hiring freeze.
That’s the need that, eg, automated theorem proving truly solves — and mathematicians are being ignored (much like artists) by people they turn up their noses at.
Who is "they"?
Most AI for math work is being done by AI researchers who are not themselves academic mathematicians (obviously there are exceptions). Similarly, most AI for music and AI for visual art is being done by AI researchers who are not themselves professional musicians or artists (again, there are exceptions). This model can work fine if the AI researchers collaborate with mathematicians or artists to understand whether the use of AI is actually useful in the workflow of those fields, but often that doesn't happen and there is a savior-like arrogance where AI researchers think they'll just automate those fields. The same thing happens in AI for medicine. So the reason many of those AI researchers want to do this is the usual incentives - money and publications.
Clearly, there are commercial use cases for AI in all these fields and those may involve removing humans entirely. But in the case of art, and I (and Hardy) would argue academic math, there's a human aspect that can't be removed. Both of those approaches can exist in the world and have value but AI can't replace Van Gogh entirely. It'll automate the process of creating mass produced artwork or become a tool that human artists can use. Both of those require understanding the application domain intimately, so my point stands I think.
In my experience, the vast majority is people who are hobbyists or amateurs in those fields, who are looking to innovate in approaches — eg, the overwhelming majority of AI music is hobbyists using models to experiment. Similarly, the overwhelming majority of people using AI graphics tools are making memes or pictures to share with friends.
Those people are poorly served by the artisanal approach and are looking to create more art — they’re not engaging in “savior-like arrogance” but trying to satisfy unmet desire for new music and art. You’re merely being snooty.
> But in the case of art, and I (and Hardy) would argue academic math, there's a human aspect that can't be removed.
This is the pretentiousness I called out (and you completely failed to address):
> That’s the need that, eg, automated theorem proving truly solves — and mathematicians are being ignored (much like artists) by people they turn up their noses at.
Nobody is stopping you from your artisanal proofs — have at it. You’re refusing to do the ugly work people actually want, so they’re solving their problems with a tool that doesn’t involve you.
Oh… I didn't anticipate this would bother you. Would it be fair to say that it's not that you like understanding why it's true, because you have that here, but that you like the process of discovering why?
Perhaps that's what you meant originally. But my understanding was that you were primarily just concerned with understanding why, not with being the one to discover why.
I can only speak for myself, but it's not that I care a lot about me personally being the first one to discover some new piece of mathematics. (If I did, I'd probably still be doing research, which I'm not.) There is something very satisfying about solving a problem for yourself rather than being handed the answer, though, even if it's not an original problem. It's the same reason some people like doing sudokus, and why those people wouldn't respond well to being told that they could save a lot of time if they just used a sudoku solver or looked up the answer in the back of the book.
But that's not really what I'm getting at in the sentence you're quoting --- people are still free to solve sudokus even though sudoku solvers exist, and the same would presumably be true of proving theorems in the world we're considering. The thing I'd be most worried about is the destruction of the community of mathematicians. If math were just a fun but useless hobby, like, I don't know, whittling or something, I think there would be way fewer people doing it. And there would be even fewer people doing it as deeply and intensely as they are now when it's their full-time job. And as someone who likes math a lot, I don't love the idea of that happening.
Why would mathematics be different than woodworking?
Do you believe there’s a limited demand for mathematics? — my experience is quite the opposite, that we’re limited by the production capacity.
> You’re comparing something many people do as a hobby to the life’s work of others.
You’re denigrating the talents and educational efforts of artisanal woodworkers to make a shallow dismissal of my point.
One place I think the analogy breaks down, though, is that I think you're pretty severely underestimating the time and effort it takes to be productive at math research. I think my path is pretty typical, so I'll describe it. I went to college for four years and took math classes the whole time, after which I was nowhere near prepared to do independent research. Then I went to graduate school, where I received a small stipend to teach calculus to undergrads while I learned even more math, and at the end of four and a half years of that --- including lots of one-on-one mentorship from my advisor --- I was just barely able to kinda sorta produce some publishable-but-not-earthshattering research. If I wanted to produce research I was actually proud of, it probably would have taken several more years of putting in reps on less impressive stuff, but I left the field before reaching that point.
Imagine a world where any research I could have produced at the end of those eight and a half years would be inferior to something an LLM could spit out in an afternoon, and where a different LLM is a better calculus instructor than a 22-year-old nicf. (Not a high bar!) How many people are going to spend all those years learning all those skills? More importantly, why would they expect to be paid to do that while producing nothing the whole time?
- apprenticing
- journeyman phase
- only finally achieving mastery
CNC never replaced those people, rather, it scaled the whole field — by creating much higher demand for furniture. People who never made that full journey instead work at factories where their output is scaled. What was displaced was mediocre talent in average homes, eg, building your own table from a magazine design.
You still haven’t answered why you think mathematics will follow a different trajectory — and the only substantial displacement will, eg, be business analysts no longer checking convexity of models and outsourcing that to AI-scaled math experts at the company.
First, right now presumably the reason a few people still become master woodworkers is that their work is actually better than the mass-produced furniture that you can get for much less money. Imagine a world where instead it was possible to cheaply and automatically produce furniture that is literally indistinguishable from, or maybe even noticeably superior to, anything a human woodworker could ever make. Do you really think the same number of people would still spend years and years developing those skills?
Second, you've talked about business logic and "math experts at the company" a few times now, which makes me wonder if we're just referring to different things with the word "mathematics". I'm talking about a specific subset, what's sometimes called "pure math," the kind of research that mostly only exists within academia and is focused on proving theorems with the goal of improving human understanding of mathematical patterns with no particular eye on solving any practical problems. It sounds like you're focused on the sort of mathematical work that gets done in industry, where you're using mathematical tools, but the goal is to solve a practical problem for a business.
These are actually quite different activities --- the same individuals who are good at one stand a decent chance of being good at the other, but that's most of what they have in common, and even there I know many people who are much more skilled at one than the other. I'm not really asking anyone who doesn't care about pure math to start caring about it, but when I'm talking about the effect of AI on the future of the field, I'm referring specifically to pure math research.
In mathematics it is just as important (if not more so) to be able to apply the techniques used in a novel proof as it is to know that the theorem itself is true. Not only might those techniques be used to solve similar problems that the theorem alone cannot, but they might even uncover wholly new mathematical concepts that lead you to mathematics you previously could not even conceive of.
Machine proofs in their current form are basically huge searches/brute forces from some initial statements to the theorem being proved, by way of logical inference. Mathematics is in some ways the opposite of this: it's about understanding why something is true, not solely whether it is true. Machine proofs give you a path from A to B, but that path could be understandable-but-not-generalizable (a brute force, or some simple application of existing theorems that mathematicians simply missed), or neither understandable-nor-generalizable (imagine gigabytes of pure propositional logic on variables with names like n098fne09 and awbnkdujai).
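To make the brute-force case concrete, here is a minimal Lean sketch (my own toy example, not from the comment above): the kernel settles the statement by checking all one hundred cases, so the proof is trusted, but it records nothing of the one-line human reason (one of n and n+1 is always even).

```lean
-- True by exhaustive check: `decide` has the kernel test every case,
-- so the resulting proof term encodes a search, not an explanation.
example : ∀ n < 100, (n * (n + 1)) % 2 = 0 := by decide
```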
Interestingly, some mathematicians like Terry Tao are starting to experiment with combining LLMs with automated theorem proving, because it might help in both guiding the theorem-prover and explaining its results. I find that philosophically fascinating because LLMs rely on some practices which are not fully understood, hence the article, and may validate combining formal logic with informal intuition as a way of understanding the world (both in mathematics, and generally the way our own minds combine logical reasoning with imprecise language and feelings).
I think they also adjust their heuristics, based on looking at thousands of computer moves.
This algorithm will happily predict whatever it was fed with: just ask ChatGPT to write a review of a non-existent camera, car, or washing machine, and you will receive a nicely written list of the item's advantages; so what if it does not exist?
I really wish that had been my experience taking undergrad math courses.
Instead, I remember linear algebra where the professor would prove a result by introducing an equation pulled out of thin air, plugging it in, showing that the result was true, and that was that. OK sure, the symbol manipulation proved it was true, but zero understanding of why. And when I'd ask professors about the why, I'd encounter outright hostility -- all that mattered was whether it was proven, and asking "why" was positively amateurish and unserious. It was irrelevant to the truth of a result. The same attitude prevailed when it got to quantum mechanics -- "shut up and calculate".
I know there are mathematicians who care deeply about the why, and I have to assume it's what motivates many of them. But my actual experience studying math was the polar opposite. And so I find it very surprising to hear the idea of math being described as being more interested in why than what. The way I was taught didn't just not care about the why, but seemed actively contemptuous of it.
I majored in math at MIT, and even at the undergraduate level it was more like what OP is describing and less like what you're saying. I actually took linear algebra twice since my first major was Economics before deciding to add on a math major, and the version of linear algebra for your average engineer or economist (i.e.: a bunch of plug and chug matrices-type stuff), which is what I assume you're referring to, was very different. Linear algebra for mathematicians was all about vector spaces and bases and such, and was very interesting and full of proofs. I don't think actually concretely multiplying matrices was even a topic!
So I guess linear algebra is one of those topics where the math side is interesting and very much what all the mathematicians here are describing, but where it turned out to be so useful for everything, that there's a non-mathematician version of it which is more like what it sounds like you experienced.
Maybe because CS is more engineering than science (at least as far as what drives the sociology), a lot of people approach AI from the same industrial perspective -- be it applications to math, science, art, coding, and whatever else. Ideas like _the bitter lesson_ only reinforce the zeitgeist.
Which is to say, if you only concern yourself with theorems which have short, understandable proofs, aren't you cutting yourself off from vast swathes of math space?
If you're talking about questions that are well-motivated but whose answers are ugly and incomprehensible, then a milder version of this actually happens fairly often --- some major conjecture gets solved by a proof that everyone agrees is right but which also doesn't shed much light on why the thing is true. In this situation, I think it's fair to describe the usual reaction as, like, I'm definitely happy to have the confirmation that the thing is true, but I would much rather have a nicer argument. Whoever proved the thing in the ugly way definitely earns themselves lots of math points, but if someone else comes along later and proves it in a clearer way then they've done something worth celebrating too.
Does that answer your question?
The new spin on these older unresolved issues IMHO is really the black-box aspect of our statistical approaches. Lots of mathematicians who are fine with proof systems like Lean and some million-step process that can in principle be followed are also happy with more open-ended automated search and exploration of model spaces, proof spaces, etc. But can they ever really be happy with a million-gigabyte network of weighted nodes masquerading as some kind of "explanation", though? Not a mathematician, but I sympathize. Given the difficulty of building/writing/running it, that looks more like a product than like "knowledge" to me (compare this to how Lean can prove Gödel on your laptop).
Maybe it's easier to swallow the bitter pill of poor-quality explanations, though, after the technology itself is a little easier to actually handle. People hate ugly things less if they are practical, and actually something you can build pretty stuff on top of.
https://en.wikipedia.org/wiki/Quasi-empiricism_in_mathematic...
I think even if N is quite large, that just means it may take decades or millennia to publish and understand all k necessary papers, but maybe it’s still worth the effort even if we can get the length-N paper right away. What are you going to do with a mathematical proof that no one can understand anyway?
Care or not, what are they supposed to do with it?
Sure, they can now assume the theorem to be true, but nothing stopped them from doing that before.
> the primary aim isn't really to find out whether a result is true but why it's true.
I'm honestly surprised that there are mathematicians who think differently (my background[0]). There are so many famous mathematicians stating this through the years, some more subtly, like Poincare stating that math is not the study of numbers but of the relationships between them, while others are far more explicit. This sounds more like what I hear from the general public, who think mathematics is discovered and not invented (how does anyone think anything different after taking Abstract Algebra?).
But being over in the AI/ML world now, this is my NUMBER ONE gripe. Very few are trying to understand why things are working. I'd argue that the biggest reason machines are black boxes is that no one is bothering to look inside of them. You can't solve things like hallucinations and errors without understanding these machines (and there's a lot we already do understand). There's a strong pushback against mathematics and I really don't understand why. It has so many tools that can help us move forward, but yes, it takes a lot of work. It's bad enough that I know people who have gotten PhDs from top CS schools (top 3!) and don't understand things like probability distributions.
Unfortunately doing great things takes great work and great effort. I really do want to see the birth of AI, I wouldn't be doing this if I didn't, but I think it'd be naive to believe that this grand challenge can entirely be solved by one field and something so simple as throwing more compute (data, hardware, parameters, or however you want to reframe the Bitter Lesson this year).
Maybe I'm biased because I come from physics where we only care about causal relationships. The "_why_" is the damn Chimichanga. And I should mention, we're very comfortable in physics working with non-deterministic systems and that doesn't mean you can't form causal relationships. That's what the last hundred and some odd years have been all about.[1]
[0] Undergrad in physics, moved to work as an engineer, then went to grad school to do CS because I was interested in AI and specifically in the mathematics of it. Boy did I become disappointed years later...
[1] I think there is a bias in CS. I notice there is a lot of test-driven development, despite that being well known to be full of pitfalls. You unfortunately can't test your way into a proof; any mathematician or physicist can tell you that. Just because your thing does well on some tests doesn't mean there is proof of anything. Evidence, yes, but that's far from proof. Don't make the mistake Dyson did: https://www.youtube.com/watch?v=hV41QEKiMlM
People do look, but it's extremely hard. Take a look at how hard the mechanistic interpretability people have to work for even small insights. Neel Nanda[1] has some very nice writeups if you haven't already seen them.
> People do look
This was never in question.
> Very few are trying to understand why things are working
What is in question is why this is given so little attention. You can hear Neel talk about this himself; it is the reason he is trying to rally people and get more of them into Mech Interp. And frankly, this side of ML is as old as ML itself. Personally, I believe that if you aren't trying to interpret results and ask why, then you're not actually doing science. Which is fine. There are plenty of good things that come from outside science. I just think it's weird to call something science if you aren't going to do hypothesis testing and find out why things are the way they are.
I would really encourage others to read works that go through the history of the topic they are studying. If you're interested in quantum mechanics, the one I'd recommend is "The Quantum Physicists" by William Cropper[0]. It won't replace Griffiths[1] but it is a good addition.
The reason that getting information like this is VERY helpful is that it teaches you how to solve problems and actually go into the unknown. It is easy to learn things from a book because someone is there telling you all the answers, but texts like these instead put you in the shoes of the people of those times and focus on which questions were being asked, and why. This is the hard thing when you're at the "end", when you can't just read new knowledge from a book, because there is no one who knows! Or the issue Thomas Wolf describes here[2] and why he struggled.
[0] https://www.amazon.com/Quantum-Physicists-Introduction-Their...
[1] https://www.amazon.com/Introduction-Quantum-Mechanics-David-...
But absolutely worst of all is the arrogance. The hubris. The thinking that because some human somewhere has figured a thing out, it's then just implicitly known by these types. The casual disregard for their fellow humans. The lack of true care for anything and anyone they touch.
Move fast and break things!! Even when it's the society you live in.
That arrogance and/or hubris is just another type of stupidity.
It's not just that comments that vent denunciatory feelings are lower-quality themselves, though usually they are. It's that they exert a degrading influence on the rest of the thread, for a couple reasons: (1) people tend to respond in kind, and (2) these comments always veer towards the generic (e.g. "lack of true care for anything and anyone", "just another type of stupidity"), which is bad for curious conversation. Generic stuff is repetitive, and indignant-generic stuff doubly so.
By the time we get further downthread, the original topic is completely gone and we're into "glorification of management over ICs" (https://news.ycombinator.com/item?id=43346257). Veering offtopic can be ok when the tangent is even more interesting (or whimsical) than the starting point, but most tangents aren't like that—mostly what they do is replace a more-interesting-and-in-the-key-of-curiosity thing with a more-repetitive-and-in-the-key-of-indignation thing, which is a losing trade for HN.
This is the part I don't get honestly
Are people just very shortsighted and don't see how these changes are potentially going to cause upheaval?
Do they think the upheaval is simply going to be worth it?
Do they think they will simply be wealthy enough that it won't affect them much, they will be insulated from it?
Do they just never think about consequences at all?
I am trying not to be extremely negative about all of this, but the speed at which things are moving makes me think we'll hit the cliff before even realizing it is in front of us
That's the part I find unnerving
Yes, partly that. Mostly they only care about their rank. Many people would burn down the country if it meant they could be king of the ashes. Even purely self-interested people should welcome a better society for all, because a rising tide lifts all boats. But not only are they selfish, they're also very stupid, at least in this aspect. They can't see the world as anything but zero sum, and themselves as either winning or losing, so they must win at all costs. And those costs are huge.
Yes, I think this is it. Frequently using social media and being “online” leads to less critical thought, less thinking overall, a smaller window that you allow yourself to think in, thoughts that are merely sound bites rather than fully fleshed-out thoughts, and so on. One's thoughts can easily become a milieu of memes and falsehoods. A person whose mind is in this state will do whatever anyone suggests for that next dopamine hit!
I am guilty of it all myself which is how I can make this claim. I too fear for humanity’s future.
> Do they think the upheaval is simply going to be worth it?
All technology causes upheaval. We've benefited from many of these upheavals to the point where it's impossible for most to imagine a world without the proliferation of movable type, the internal combustion engine, the computer, or the internet. All of your criticisms could have easily been made word for word by the Catholic Church during the medieval era. The "society" of today is no more of a sacred cow than its antecedent incarnations were half a millennium ago. As history has shown, it must either adapt, disperse, or die.
I am not concerned about some kind of "sacred cow" that I want to preserve
I am concerned about a future where those with power no longer need 90% of the population so they deploy autonomous weaponry that grinds most of the population into fertilizer
And I'm concerned there are a bunch of short sighted idiots gleefully building autonomous weaponry for them, thinking they will either be spared from mulching, or be the mulchers
Edit: The thing about appealing to history is that it also shows that when upper classes get too powerful they start to lose touch with everyone else, and this often leads to turmoil that affects the common folk most
As one of the common folk, I'm pretty against that
There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”
aka optimization-for-its-own-sake, aka pathological optimization.
It's basically meatspace internalizing and adopting the paperclip problem as a "good thing" to pursue, screw externalities and consequences.
And, lo and behold, my read on why it gets downvoted here is that a lot of folks on HN subscribe to this mentality, as it is part of the HN ethos to optimize, often pathologically.
You flatten the heap
You decrease or eliminate the reward for being at the top
You decrease or eliminate the penalty for being at the bottom
The main problem is that we haven't figured out a good way to do this without creating a whole bunch of other problems
I worked in an organization afflicted by this and still have friends there. In the case of that organization, it was caused by an exaggerated glorification of management over ICs. Managers truly did act according to the belief, and show every evidence of sincerely believing in it, that their understanding of every problem was superior to the sum of the knowledge and intelligence of every engineer under them in the org chart, not because they respected their engineers and worked to collect and understand information from them, but because managers are a higher form of humanity than ICs, and org chart hierarchy reflects natural superiority. Every conversation had to be couched in terms that didn't contradict those assumptions, so the culture had an extremely high tolerance for hand-waving and BS. Naturally this created cover for all kinds of selfish decisions based on politics, bonuses, and vendor perks. I'm very glad I got out of there.
I wouldn't paint all of tech with the same brush, though. There are many companies that are better, much better. Not because they serve higher ideals, but because they can't afford to get so detached from reality, because they'd fail if they didn't respect technical considerations and respect their ICs.
Why is the AI field so secretive? Because it's all trade secrets, and maybe soon to become patents. You don't give away precisely how semiconductor fabs work, only base-level research of "this direction is promising".
Why is everyone pushed to add AI in? Because that's where the money is, that's where the product is.
Why does AI need results fast? Because it's a production line, and you create and design stuff.
Even the core distinction mentioned, that AI is about "speculation and possibility", is all about tool experimenting and prototyping. It's all about building and constructing, aka the Engineering/Technology letters of STEM.
I guess the next step is to ask "what to do next?". IMO, the math and AI fields should realise the divide and slowly diverge, leaving each other alone at arm's length, just as engineers and programmers (not computer scientists) already do.
Long story short, current AI is doing cargo-cult math, i.e., going through the motions with mimicry. Experts can see through it, but excited AI hypesters are blind and lap it up. Even AlphaGeometry (with its built-in theorem prover) is largely doing brute-force search of a limited axiomatized domain. This is not to say AI is not useful, just that the hype exceeds the actual.
I can see a day might come when we (research mathematicians, math professors, etc) might not exist as a profession anymore, but there will continue to be mathematicians. What we'll do to make a living when that day comes, I have no idea. I suspect many others will also have to figure that out soon.
[0] I've seen this attributed to the Character of Physical Law but haven't confirmed it
I'd include writing, art-, and music-making in that category.
This is well studied and not unique to AI, the USA, English, or even Western traditions. Here is what I mean: a book called Diffusion of Innovations by Rogers explains a history of technology introduction. If the results are tallied in population, money, or other prosperity, the civilizations and their language groups that have systematic ways to explore and apply new technology are "winners" in the global context.
AI is a powerful lever. The meta-conversation here might be around concepts of cancer, imbalance and chairs on the deck of the Titanic.. but this is getting off-topic for maths.
Engineering has always involved large amounts of both math and secrecy, what's different now?
(But the engineers want the benefits of academic research -- going to conferences to give talks, credibility, intellectual prestige -- without paying the costs, e.g. actually sharing new knowledge and information.)
Not exactly AI by today's standards, but a lot of the math that they need has been rolled into their software tools. And Excel is quite powerful.
I have listened to Colin McLarty talk about the philosophy of math, and there was a contingent of mathematicians who solely cared about solving problems via "algorithms". The time period was the one just preceding modern math, roughly before the late 1800s, when the algorithmists, intuitionists, and logic-oriented mathematicians coalesced into a combination that includes intuition, algorithms, and the importance of logic, leading to the modern way we do proofs and our focus on proofs.
These algorithmists didn’t care about the so called “meaningless” operations that got an answer, they just cared they got useful results.
I think the article downplays this side of math, and it is the side AI will be best or most useful at. Having read AI proofs, they are terrible in my opinion. But if AI can prove something useful, even if the proof is grossly unappealing to the modern mathematician, there should be nothing to clamor about.
This is the talk I have in mind https://m.youtube.com/watch?v=-r-qNE0L-yI&pp=ygUlQ29saW4gbWN...
This really isn't about mathematics or AI, this is about the gap between academia and business. The academic wants to pursue knowledge for the sake of knowledge, while a business wants to make money.
Compare to computer science or engineering, where business has near completely pervaded the fields. I've never heard anybody lamenting their inability to "pursue understanding for its own sake" and when someone does advance the theory, there's also a conversation about how to make it profitable. The academic aspect isn't gone, but it's found a way to coexist with the business aspect, for better or worse.
Honestly it sounds like mathematicians have had things pretty good if this is one of their biggest complaints.
I think this is an interesting question. In a hypothetical SciFi world where we somehow provably know that AI is infallible and the results are always correct, you could imagine mathematicians grudgingly accepting some conjecture as "proven by AI" even without understanding the why.
But for real-world AI, we know it can produce hallucinations and its reasoning chains can have massive logical errors. So if it came up with a proof that no one understands, how would we even be able to verify that the proof is indeed correct and not just gibberish?
Or more generally, how do you verify a proof that you don't understand?
Just so this isn't misunderstood, not that much cutting-edge math is presently possible to code in Lean. The famous exceptions (such as the results by Clausen-Scholze and Gowers-Green-Manners-Tao) have special characteristics which make them much more ground-level and easier to code in Lean.
What's true is that it's very easy to check whether a Lean-coded proof is correct. But it's hard and time-consuming to formulate most math as Lean code. It's something many AI research groups are working on.
"Special characteristics" is really overstating it. It's just a matter of getting all the prereqs formalized in Lean first. That's a bit of a grind to be sure, but the Mathlib effort for Lean has the bulk of the undergrad curriculum and some grad subjects formalized.
I don't think AI will be all that helpful wrt. this kind of effort, but it might help in some limited ways.
The main bottleneck is having the libraries that define the theorems and objects you need to operate at those levels. Everything is founded on axiomatic foundations and you need to build all of maths on top of that. Projects like mathlib are getting us there but it is a massive undertaking.
It’s not just that it is a lot of maths to go through, it’s also that most maths has not really been proven to this degree of exactitude and there is much gap-filling to do when trying to translate existing proofs, or the reasoning style might be quite distant to how things are expressed in Lean. Some maths fields are also self-consistent islands that haven’t been yet connected to the common axiomatic foundations, and linking them is a serious research endeavor.
Although Lean does allow you to declare theorems as axioms. It is not common practice, but you can skip high up the abstraction ladder and set up a foundation up there if you are confident those theorems are correct. But still defining those mathematical objects can be quite hard on its own, even if you skip the proving.
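As a sketch of that pattern (the name and statement below are my own rough, illustrative rendering of Fermat's Last Theorem, not a Mathlib definition):

```lean
-- Postulate a result instead of proving it; everything downstream can use
-- `fermat_last` like a theorem, but `#print axioms` will flag the trust involved.
axiom fermat_last : ∀ a b c n : Nat, 3 ≤ n → a ^ n + b ^ n = c ^ n → a * b * c = 0
```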
Anyways, the complexity of the Lean language itself doesn’t help either. The mode of thinking you need to have to operate it is much closer to programming than maths, and for those that think that the Rust borrow-checker is a pain, this is an order of magnitude more complex.
Lean was a significant improvement in ergonomics compared to the previous generation (Coq, Isabelle, Agda…), but still I think there is a lot of work to be done to make it mathematician-friendly.
Most reinforcement-learning AI for maths right now is focused on olympiad problems, hard but quite low in the maths abstraction ladder. Often they don’t even create a proof, they just solve problems that end with an exact result and you just check that. Perhaps the reasoning was incorrect, but if you do it for enough problems you can be confident that it is not just guessing.
On the other side of the spectrum you have mathematicians like Tao just using ChatGPT for brainstorming. It might not be great at complex reasoning, but it has a much wider memory than you do and it can remind you of mathematical tools and techniques that could be useful.
Could you elaborate on this? I'm interested to learn what the complexities are (beyond the mathematical concepts themselves).
Something like 5 different DSLs in the same language, most of them in a purist functional style that is familiar neither to most mathematicians nor to most programmers, with type-checking an order of magnitude more strict and complex than in any programming language (that's the point of it), and with rather obscure errors most of the time.
It's really tedious to translate any non-trivial proofs to this model, so usually you end up proving it again almost from scratch within Lean, and then it's really hard to understand as it is written. Much of the information to understand a proof is hidden away as runtime data that is usually displayed via a complex VSCode extension. It's quite difficult to understand from the proof code itself, and usually it doesn't look anything like a traditional mathematical proof (even if they kind of try by naming keywords with a similar terminology as in normal proofs and sprinkling some unicode symbols).
I never-ever feel like I'm doing maths when I'm using Lean. I'm fighting with the syntax to figure out how to express mathematical concepts in the style that it likes, always having several different ways of achieving similar things (anti Zen of Python). And I'm fighting with the type-checker to transform this abstract expression into this other abstract expression (that's really what a proof is when it boils down to it), completely forgetting about the mathematical meaning, just moving puzzle pieces around with obscure tools.
And even after all of this, it is so much more ergonomic than the previous generation of proof-assistants :)
---
I think that the main reasons for Lean's complexity are:
- That it has a very purist functional style and syntax, literally reflecting the Curry-Howard Correspondence (function = proof), rather than trying to bridge the gap to more familiar maths notation.
- That it aims to be a proof assistant, it is fundamentally semi-automatic and interactive, this makes it a hard design challenge.
- A lot of the complexity is aimed at giving mathematicians the tools to express real research maths in it, on this it has been more successful than any alternative.
- Because of this it has at least 5 different languages embedded in it: functional expressions of theorems, forward proofs with expression transformations, backward proofs with tactics, the tactics metaprogramming macro language, and the language to define data-types (and at least 4 kinds of data-types with different syntax).
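As a toy side-by-side of two of those embedded languages (my own illustration, not from the comment): the same trivial fact written as a term-mode proof, where the proof is itself a functional expression, and as a tactic proof, where you write a small program that constructs that expression.

```lean
theorem four_term : 2 + 2 = 4 := rfl          -- term mode: the proof is a functional expression
theorem four_tactic : 2 + 2 = 4 := by decide  -- tactic mode: a command that builds the proof for you
```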
I thought the rhetoric sounded somewhat like the AGI/accelerationist folks who postulate some sort of eventual "godlike" AI whose thought processes are somehow fundamentally inaccessible to humans. So if you had a proof that was only understandable to this sort of AI, then mathematics as a discipline of understanding would be over for good.
But this sounds like it would at least theoretically let you tackle the proof? Like, it's imaginable that some AI generates a proof that is several TB (or EB) in size but still validates - which would of course be impossible to understand for human readers in the way you can understand a paper. But then "understanding" that proof would probably become a field of research of its own, sort of like the "BERTology" papers that try to understand the semantics of specific hidden states in BERT (or similar approaches for GPTs).
So I'd see an incomprehensible AI-generated proof not as the end of research in some conjecture, but more as a sort of guidance: Unlike before, you now know that the treasure chest exists and you even have its coordinates, you just don't have the route to that location. The task then becomes about figuring out that route.
Lean makes the proofs verifiable, and sure to a degree understandable, but normal Lean proofs written by humans are already a challenge to understand without additional documentation, and, as with all code-generation, AI generated proofs are probably even harder to understand. There are many paths to arrive at a proof, the AI may take very messy routes that don't align with how humans create abstractions and assign meaning to break the process down into understandable steps.
In a sense all AI is understandable, nothing is hidden, you can see all the numbers inside a Transformer. Just being able to see the code might not necessarily help mathematicians too much in extracting deeper meaning and lessons from AI-generated proofs, at least not without significant additional work.
I understand that mathematicians are all about understanding why a theorem is true, not just checking that it is, and then using that understanding to push forward other research. Same as most scientists care more about understanding how the world works rather than developing methods to control and manipulate it.
But proven theorems are mathematical tools that can be used to make progress, regardless of your understanding of how they work. I think there is a path forward for maths as an engineering discipline, possibly with much faster progress, just as computer science expanded into software engineering. AI is just the next step of many, software libraries, interfaces and abstractions are also black-boxes in a sense that do stuff for you without knowing how.
A good engineer learns to build on the shoulders of giants, mastering the understanding of what tools do and how they behave, without necessarily needing to know how they are built from scratch. It might be less satisfying to the curious, but you can get a lot more done.
But when you talk about "getting a lot more done," I want to ask, get a lot more done to what end? Despite what mathematicians sometimes write in their grant applications, resolving most of the big open problems in the field probably won't lead to new technologies or anything. To use the Riemann Hypothesis example again, most number theorists already think it's probably true, and there are a lot of papers being published already which prove things like "if the Generalized Riemann Hypothesis is true, then [my new result]".
No one is really waiting around just for the literal, one-bit answer to the question of whether RH is true; if we got that information and nothing else, I'm sure number theorists would be happy to know, but not a whole lot about the work being done in the field would change. It's not just being "satisfying to the curious"; virtually the entire reason we want a proof is to use the new ideas it would presumably contain to do more mathematics. This is exactly what's happened with the proof of the Poincare Conjecture, the only one of the Millennium Problems that's been resolved so far.
This is what I was lamenting in my comment earlier: the thing you're describing, where we set proof-finding models to work and they spit out verifiable but totally opaque proofs of big open problems in math, very well might happen someday, but it wouldn't actually be all that useful for anything, and it would also mean the end of the only thing about the whole enterprise that the people working in it actually care about.
A proof of the conjecture would essentially just move the situation from "we think there could be counterexamples, but so far we haven't found any" to "there really are no counterexamples anywhere, you can stop looking". The interesting thing here would be the explanation why there can be no counterexamples, which is exactly the thing that the proof wouldn't give you.
On the other hand, a counterproof would either directly present the community with a counterexample - and maybe reveal some interesting class of objects that was overlooked so far - or at least establish that there have to be counterexamples somewhere, which would probably give more motivation to efforts of finding one.
(Generally speaking here. I'm not a mathematician and I can't really say anything about the hypotheses in question)
This is the big question! Computer-aided proof has been around forever. AI seems like just another tool from that box. Albeit one that has the potential to provide 'human-friendly' answers, rather than just a bunch of symbolic manipulation that must be interpreted.
In this hypothetical Riemann Hypothesis example, the only thing the human would have to check is that (a) the proof-verification software works correctly, and that (b) the statement of the Riemann Hypothesis at the very beginning is indeed a statement of the Riemann Hypothesis. This is orders of magnitude easier than proving the Riemann Hypothesis, or even than following someone else's proof!
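To make step (b) concrete, here is a rough Lean sketch of what that "statement at the very beginning" might look like (the name and phrasing are my own illustrative rendering using Mathlib's riemannZeta; an actual formalization would differ in details). The human audit is confined to these few lines, not to the machine-generated proof that would follow them.

```lean
import Mathlib

-- Illustrative only: one way the claim to be audited might be phrased.
def RiemannHypothesisStatement : Prop :=
  ∀ s : ℂ, riemannZeta s = 0 → s.re = 1 / 2 ∨ ∃ n : ℕ, s = -2 * ((n : ℂ) + 1)

-- theorem rh : RiemannHypothesisStatement := (enormous machine-generated proof term)
```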
An issue in these discussions is that mathematics is at once an art, a sport, and a science. And the development of AI that can build 'useful' libraries of proven theorems means different things for each. The sport of mathematics will be basically over. The art of mathematics will thrive as it becomes easier to explore the mathematical world. For the science of mathematics, it's hard to say; it's been kind of shaky for ~50 years anyway, but it can only help.
Modern AI is about "well, it looks like it works, so we're golden".
What I think mathematicians should remind themselves is that a lot of prestigious mathematicians, the likes of Cantor or Erdos, often employed only a handful of “tricks”/heuristics for their proofs over their careers. They repeatedly and successfully applied these strategies to unsolved problems.
I argue it would not take a tremendous jump in performance for an AI to begin their own journey similar in kind to the greats; the only thing standing in their way (as with all contemporary mathematicians) is the extreme specialisation required to reach the boundary of unsolved problems.
AI need not be Euler to be an important tool and figure within mathematics
I know this claim is often made, but it seems obvious that in this discussion, "trick" means something far wider and more subtle than any set computer program. In a lot of ways, "he just uses a few tricks" is akin to the way a mathematician will say "and the rest of the proof is elementary" (when it's still quite long and hard for anyone not versed in a given specialty). I mean, before category theory was formalized, the proofs that are now possible with it might have been classified as "all done with this trick", but grasping said trick was far from an elementary matter.
> I argue it would not take a tremendous jump in performance for an AI to begin their own journey similar in kind to the greats; the only thing standing in their way (as with all contemporary mathematicians) is the extreme specialisation required to reach the boundary of unsolved problems.
Not that LLMs can't do some impressive things but your narrative seems to anthropomorphize them in a less than useful way.
Mathematicians who practice constructive math and view existence proofs as mere intellectual artifacts tend to embrace AI, physics, engineering and even automated provers as worthy subjects.
In math, there's an urban legend that the first Greek who proved sqrt(2) is irrational (sometimes credited to Hippasus of Metapontum) was thrown overboard to drown at sea for his discovery. This is almost certainly false, but it does capture the spirit of a mission in pure math. The unspoken dream is this:
~ "Every beautiful question will one day have a beautiful answer."
At the same time, ever since the pure and abstract nature of Euclid's Elements, mathematics has gradually become a more diverse culture. We've accepted more and more kinds of "numbers:" negative, irrational, transcendental, complex, surreal, hyperreal, and beyond those into group theory and category theory. Math was once focused on measurement of shapes or distances, and went beyond that into things like graph theory and probabilities and algorithms.
In each of these evolutions, people are implicitly asking the question:
"What is math?"
Imagine the work of introducing the sqrt() symbol into ancient mathematics. It's strange because you're defining a symbol as answering a previously hard question (what x has x^2=something?). The same might be said of integration as the opposite of a derivative, or of sine defined in terms of geometric questions. Over and over again, new methods become part of the canon by proving to be both useful, and in having properties beyond their definition.
AI may one day fall into this broader scope of math (or may already be there, depending on your view). If an LLM can give you a verified but unreadable proof of a conjecture, it's still true. If it can give you a crazy counterexample, it's still false. I'm not saying math should change, but that there's already a nature of change and diversity within what math is, and that AI seems likely to feel like a branch of this in the future; or a close cousin the way computer science already is.
* AI could get better at thinking intuitively about math concepts.
* AI could get better at looking for solutions people can understand.
* AI could get better at teaching people about ideas that at first seem abstruse.
* AI could get better at understanding its own thought, so that progress is not only a result, but also a method for future progress.
lol, took me a second to get the plausible reason for that
Henri Cartan of the Bourbaki had not only a more comprehensive view, but a greater scope of the potential of mathematical modeling and description
I fear AI is just going to lower our general epistemic standards as a society, and we will forget essential truth-verifying techniques in the technical (and other) realms altogether. Needless to say, this has an impact on our society's ethical and, effectively, legal foundations, because ultimately, without clarity on the hows and whys, it will be near impossible to justly assign damages.
I don't think this is (generally) true? Speaking as a math postdoc right now, at least in my field of computational mathematics there's definitely a notion of first author. Though, a note of individual contributions at the bottom of the paper is becoming more common.
This seems like a caricature; one thing I've often heard in the AI community is that it would be interesting to train models with an old data cutoff date (say, 1900) and see whether the model is able to reinvent modern science.
This is not the point, but the saying "there is no royal road to geometry" is far older than Gauss! It goes back at least to Proclus, who attributes it to Euclid.
The story goes that the (royal) pharaoh of Egypt wanted to learn geometry, but didn't want to have to read Euclid. He wanted a faster route. But, "there is no royal road to geometry."
Unless "the royal pharaoh of Egypt" refers to Ptolemy I Soter, the Macedonian general who became the first ruler of the Ptolemaic Kingdom of Egypt after Alexander's death.
"He [Euclid] lived in the time of Ptolemy the First, for Archimedes, who lived after the time of the first Ptolemy, mentions Euclid. It is also reported that Ptolemy once asked Euclid if there was not a shorter road to geometry that through the Elements, and Euclid replied that there was no royal road to geometry."
Source: http://aleph0.clarku.edu/~djoyce/java/elements/Euclid.html
There is a major caveat here. Most 'serious math' in AI papers is wrong and/or irrelevant!
This is even the case for famous papers: each lemma in Kingma and Ba's Adam optimization paper is wrong, the geometry in McInnes and Healy's UMAP paper is mostly gibberish, etc.
I think it's pretty clear that AI researchers (albeit surely with some exceptions) just don't know how to construct or evaluate a mathematical argument. Moreover the AI community (at large, again surely with individual exceptions) seems to just have pretty much no interest in promoting high intellectual standards.
Wrong in the strict formal sense or do you mean even wrong in “spirit”?
Physicists are well-known for using “physicist math” that isn’t formally correct but can easily be made rigorous with the help of a mathematician. Are you saying the papers of the AI community aren’t even correct “in spirit”?
However the math in AI papers is indeed different. For example, Kingma and Ba's paper self-presents as having a theorem with a rigorous proof via a couple of lemmas proved by a chain of inequalities. The key thing is that the mathematical details are purportedly all present, but are just wrong.
This isn't at all like what you see in physics papers, which might just openly lack detail, or might use mathematical objects whose existence or definition remain conjectural. There can be some legitimate problems with that, but at least in the best cases it can be very visionary. (Mirror symmetry is a standard example.) By contrast I'm not sure what 'spirit' is even possible in a detailed couple-page 'proof' that its authors probably don't even fully understand. In most cases, the 'theorem' isn't remotely interesting enough as pure mathematics and is also not of any serious relevance to the empirical problem at hand. It just adds an impressive-looking section to the paper.
I do think it's possible that in the future there will be very interesting pure mathematics inspired by AI. But it hasn't been found yet, and I'm very certain it won't come from reconsidering these kinds of badly-written theorems and proofs.
However, the discussion there is more about math which is unnecessary to a paper, not so much about the problem of math which is unintelligible or, if intelligible, then incorrect. I don't have other papers off the top of my head, although by now it's my default expectation when I see a math-centric AI paper. If you have any such papers in mind, I could tell you my thoughts on them.
In fact, the modern practice of mathematics as this ultimate understandable system of truth and elegance (the concept predates the practice, of course, but was more of an opinion than a ritual) seemingly began in Ancient Greece with their practice of proofs and early development of mathematical "frameworks". It didn't reach its current level of rigor and sophistication until 100-150 years ago, when Formalism became the dominant school of thought (https://en.wikipedia.org/wiki/Formalism_(philosophy_of_mathe...), spearheaded by a group of mathematicians who held even deeper beliefs that are often referred to as Mathematical Platonism (https://en.wikipedia.org/wiki/Mathematical_Platonism). (Note that these Wikipedia articles are not amazing explanations of the concepts, how they relate to realism, or how they developed historically, but they are adequate primers.)
Of course, Gödel proved that truths exist outside of these formal systems (only a couple of decades after mathematicians had started building a secret religion around worshipping Logos; these beliefs were pervasive - see, e.g., Einstein's concept of God as a clockmaker or Erdős's references to "The Book"), which leaves us almost back where we started: we may need to accept that there are some empirical results and patterns which "work" but that we do not fully understand - and may never understand. Personally, I think this philosophically justifies not subjecting oneself to the burden of spending excess time understanding or proving things that have never been understood before - they may elude elegance (as with the 4-color proof) or even knowability.
We can always look backwards and explain things later, and of course, it's a false dichotomy that some theorems or results must be fully understood and proven (or proven elegantly) before they can be considered true and used as a basis for further results. Perhaps it is unsatisfying to those who wish to truly understand the universe in terms of mathematical elegance, but that asshole used mathematical elegance to disprove mathematical elegance as a perfect tool for understanding the universe already, so take it up with him.
Personally, as someone who at one time seriously considered pursuing a life in mathematics, in part because of its ability to answer deep truths, I think Gödel set us free: to understand or know things, we cannot rely solely on mathematics. Formal mathematics itself tells us that there are things we can only understand by discovering them, building them, or experimenting with them. There are truths that CUDA Cowboys can uncover that LaTeX Liturgy cannot.
With an AI advisor I do not have this problem. It explains the parts I need, in a way I understand. If I study some complicated topic, AI shortens the process from months to days.
I was somewhat mathematically gifted when younger; sadly, I often reinvented my own math because I did not even know that part of math already existed. Watching how DeepSeek thinks before answering is REALLY beneficial. It gives me many hints and references. Human teachers are like black boxes while teaching.
We will clearly soon have the technology for that... but it requires a rich, opinionated benefactor or an inspired government agency to fund the development... or perhaps it could be done as an open-model variant through crowdsourcing.
An LLM personal assistant that detects my preferences, echoes my biases, massages my ego, and avoids challenging me with facts and new ideas... whose goal is to maximize screen time and credits for shareholder value... seems to be where things are heading.
I guess this is an argument for having open models.
I thought I understood calculus until I realised I didn't. And that took a big thwack in the face, really. I could use it, but I didn't understand it.
My point is that a human advisor does not have enough time to answer questions and properly explain the subject. I may get something like 4 hours a week, if I'm lucky. Books are just a cheap substitute for real dialog and reasoning with a teacher.
Most ancient philosophy was written in the form of dialogs. It is a much faster way to explain things.
AI is a game changer. It shortens the feedback loop from a week to an hour! It makes mistakes (as humans do), but it is faster to find them. And finding them also develops cognitive skills.
It is like programming in low-level C in Notepad 40 years ago, versus a high-level language with an IDE, VCS, and unit tests...
Or like farming resources in Rust. Boring, repetitive grind...
I don't think professional programmers were using Notepad in 1985. Here's talk of IDEs in an article from 1985: https://dl.acm.org/doi/10.1145/800225.806843 It mentions the Xerox Development Environment, from 1977: https://en.wikipedia.org/wiki/Xerox_Development_Environment
The feedback loop for programming / mathematics / other things I've studied was not a week in the year 2019. In that ancient time the feedback loop was maybe 10% slower than with any of these LLMs, since you had to look at Google search.
He’s saying go read these books, wrestle with the ideas, go to bed, dream about them, think about them in the shower. Repeat that until you understand enough to understand the answer.
There’s no shortcut here. If you had unlimited time with the advisor he couldn’t just sit you down and make you understand in a few sessions.
And it's not that AI can't contribute to this effort. I can certainly see how a chatbot research partner could be super valuable for lit review, brainstorming, and even 'talking things through' (much like mathematicians get value from talking aloud). This doesn't even touch on the ability to generate potentially valid proofs, which I do think has a lot of merit. But the idea that we could totally outsource the work to a generative model seems impossible by definition. The point of the labor is to develop human understanding; removing the human from the loop changes the nature of the endeavor entirely (basically to algorithm design).
Something similar holds for art (at a high level, and glossing over 'craft art'); IMO art is an expressive endeavor: one person communicating a hard-to-express feeling to an audience. GenAI can obviously create really cool pictures, and these can be grist for art, but without some kind of mind-to-mind connection and empathy the picture is ultimately just an artifact. The human context is what turns the artifact into art.