To be honest, it's hard for me not to get kind of emotional about this. Obviously I don't know what's going to happen, but I can imagine a future where some future model is better at proving theorems than any human mathematician, like the situation, say, chess has been in for some time now. In that future, I would still care a lot about learning why theorems are true --- the process of answering those questions is one of the things I find the most beautiful and fulfilling in the world --- and it makes me really sad to hear people talk about math being "solved", as though all we're doing is checking theorems off of a to-do list. I often find the conversation pretty demoralizing, especially because I think a lot of the people I have it with would probably really enjoy the thing mathematics actually is much more than the thing they seem to think it is.
> "The rapid advance of computers has helped dramatize this point, because computers and people are very different. For instance, when Appel and Haken completed a proof of the 4-color map theorem using a massive automatic computation, it evoked much controversy. I interpret the controversy as having little to do with doubt people had as to the veracity of the theorem or the correctness of the proof. Rather, it reflected a continuing desire for human understanding of a proof, in addition to knowledge that the theorem is true."
Incidentally, I've noticed a similar problem when reviewing HCI and computer systems papers. OK, sure, this deep learning neural net worked better, but what did we as a community actually learn that others can build on?
The Four Colour Theorem is true because there exists a finite set of unavoidable yet reducible configurations. QED.
To verify this computational fact one uses a (very) glorified pocket calculator.
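To make that concrete, here's a toy sketch (my own illustration, not the Appel-Haken computation): brute-force checking that one small planar graph is 4-colourable. The real proof machine-checks unavoidability and reducibility for hundreds of configurations, the same mechanical spirit with vastly more bookkeeping.

```python
from itertools import product

# Toy version of the "pocket calculator" work: verify that K4 (a planar
# graph on 4 vertices) is 4-colourable by trying every colour assignment.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
four_colourable = any(
    all(c[u] != c[v] for u, v in edges)
    for c in product(range(4), repeat=4)
)
print(four_colourable)  # True
```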
You just summarised (nearly) everything a mathematician can get out of that computerised proof. That's unsatisfying. It doesn't give you any insight into any other areas of math, nor does it suggest interesting corollaries, nor does it tell you which pre-condition of the statement does what work.
That's rather underwhelming. That's less than you can get out of the 100th proof of Pythagoras.
The thing is that the underlying reasoning (the logic) is what provides real insights. This is how we recognize other problems that are similar or even identical. The steps in between are just as important, and often more important.
I'll give an example from physics. (If you're unsatisfied with this one, pick another physics fact and I'll do my best. _Any_ will do.) Here's a "fact"[0]: atoms with an even number of electrons are more stable than those with an odd number. We knew this in the 1910s, and it's a fact that directly led to the Pauli Exclusion Principle, which in turn led us to better understand chemical bonds. Asking why Pauli Exclusion happens furthers our understanding, leading us to a better atomic model. It goes on and on like this.
It has always been about the why. The why is what leads us to new information. The why is what leads to generalization. The why is what leads to causality and predictive models. The why is what makes the fact useful in the first place.
[0] Quotes are because truth is very very hard to derive. https://hermiene.net/essays-trans/relativity_of_wrong.html
I'm fairly sure that people are only getting hung up on the size of this finite set, for no good reason. I suspect that if the size of this finite set were 2, instead of 633, and you could draw these unavoidable configurations on the chalkboard and easily illustrate that both of them are reducible, everyone would be saying "ah yes, the four colour theorem, such an elegant proof!"
Yet, whether the finite set were of size 2 or size 633, the fundamental insight would be identical: there exists some finite unavoidable and reducible set of configurations.
Have programmers given up on wanting their mind blown by unbelievable simplicity?
https://blog.tanyakhovanova.com/2024/11/foams-made-out-of-fe...
This is also nice because it involves only pre-1600 tech.
We "know" it's true, but only because a machine ground mechanically through lots of tedious cases. I'm sure most mathematicians would appreciate a simpler and more elegant proof.
Like I said, I don't have any idea what's going to happen. The thing that makes me sad about these conversations is that the people I talk to sometimes don't seem to have any appreciation for the thing they say they want to dismantle. It might even be better for humanity on the whole to arrive in this future; I'm not arguing that one way or the other! Just that I think there's a chance it would involve losing something I really love, and that makes me sad.
Yes! This is what frustrates me about the pursuit of AI for the arts too.
I see both cases as people who aren’t well served by the artisanal version attempting to acquire a better-than-commoditized version because they want more of that thing to exist. We regularly have both things in furniture and don’t have any great moral crisis that chairs are produced mechanistically by machines. To me, both things sound like “how dare you buy IKEA furniture — you have no appreciation of woodwork!”
Maybe artisanal math proofs are more beautiful or some other aesthetic concern — but what I’d like is proofs that business models are stable and not full of holes, constructed each time a new ML pipeline deploys; which is the sort of boring, rote work that most mathematicians are “too good” to work on. But they’re what’s needed to prevent, eg, the Amazon 2018 hiring freeze.
That’s the need that, eg, automated theorem proving truly solves — and mathematicians are being ignored (much like artists) by people they turn up their noses at.
Oh… I didn't anticipate this would bother you. Would it be fair to say that it's not that you like understanding why it's true, because you have that here, but that you like the process of discovering why?
Perhaps that's what you meant originally. But my understanding was that you were primarily just concerned with understanding why, not with being the one to discover why.
I can only speak for myself, but it's not that I care a lot about me personally being the first one to discover some new piece of mathematics. (If I did, I'd probably still be doing research, which I'm not.) There is something very satisfying about solving a problem for yourself rather than being handed the answer, though, even if it's not an original problem. It's the same reason some people like doing sudokus, and why those people wouldn't respond well to being told that they could save a lot of time if they just used a sudoku solver or looked up the answer in the back of the book.
But that's not really what I'm getting at in the sentence you're quoting --- people are still free to solve sudokus even though sudoku solvers exist, and the same would presumably be true of proving theorems in the world we're considering. The thing I'd be most worried about is the destruction of the community of mathematicians. If math were just a fun but useless hobby, like, I don't know, whittling or something, I think there would be way fewer people doing it. And there would be even fewer people doing it as deeply and intensely as they are now when it's their full-time job. And as someone who likes math a lot, I don't love the idea of that happening.
Why would mathematics be different than woodworking?
Do you believe there’s a limited demand for mathematics? — my experience is quite the opposite, that we’re limited by the production capacity.
I think they also adjust their heuristics, based on looking at thousands of computer moves.
"Mathematics advances by solving problems using new techniques because those techniques open up new areas of mathematics."
Think of the problem as requiring spending a certain amount of complexity to solve. If you don't spend it on developing a new way of thinking then you spend it on long and tedious calculations that nobody can keep in working memory.
It's similar to how you can write an AI model in Pytorch or you can write down the logic gates that execute on the GPU. The logic gate representation uses only elementary techniques. But nobody wants to read or check it by hand.
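A minimal sketch of that contrast (my own toy example, not from the comment above): the high-level PyTorch module states intent in one line, while the same computation spelled out as elementary operations is correct but unreadable at scale.

```python
import torch
import torch.nn as nn

# High-level: the intent ("a linear layer") is legible.
layer = nn.Linear(2, 1)

# Low-level: the same computation as elementary scalar operations.
x = torch.randn(2)
y_manual = layer.weight[0, 0] * x[0] + layer.weight[0, 1] * x[1] + layer.bias[0]

# Both give the same answer; only one is pleasant to read or check by hand.
assert torch.allclose(layer(x), y_manual.unsqueeze(0))
```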
> the primary aim isn't really to find out whether a result is true but why it's true.
I'm honestly surprised that there are mathematicians who think differently (my background[0]). There are so many famous mathematicians stating this through the years, some more subtly, like Poincaré saying that math is not the study of numbers but of the relationships between them, while others are far more explicit. This sounds more like what I hear from the general public, who think mathematics is discovered and not invented (how does anyone think anything different after taking Abstract Algebra?).

But being over in the AI/ML world now, this is my NUMBER ONE gripe. Very few are trying to understand why things are working. I'd argue that the biggest reason machines are black boxes is that no one is bothering to look inside them. You can't solve things like hallucinations and errors without understanding these machines (and there's a lot we already do understand). There's a strong pushback against mathematics and I really don't understand why. It has so many tools that can help us move forward, but yes, it takes a lot of work. It's bad enough that I know people who have gotten PhDs from top CS schools (top 3!) and don't understand things like probability distributions.
Unfortunately doing great things takes great work and great effort. I really do want to see the birth of AI, I wouldn't be doing this if I didn't, but I think it'd be naive to believe that this grand challenge can entirely be solved by one field and something so simple as throwing more compute (data, hardware, parameters, or however you want to reframe the Bitter Lesson this year).
Maybe I'm biased because I come from physics where we only care about causal relationships. The "_why_" is the damn Chimichanga. And I should mention, we're very comfortable in physics working with non-deterministic systems and that doesn't mean you can't form causal relationships. That's what the last hundred and some odd years have been all about.[1]
[0] Undergrad in physics, moved to work as an engineer, then went to grad school to do CS because I was interested in AI and specifically in the mathematics of it. Boy did I become disappointed years later...
[1] I think there is a bias in CS. I notice there is a lot of test driven development, despite that being well known to be full of pitfalls. You unfortunately can't test your way into a proof. Any mathematician or physicist can tell you. Just because your thing does well on some tests doesn't mean there is proof of anything. Evidence, yes, but that's far from proof. Don't make the mistake Dyson did: https://www.youtube.com/watch?v=hV41QEKiMlM
People do look, but it's extremely hard. Take a look at how hard the mechanistic interpretability people have to work for even small insights. Neel Nanda[1] has some very nice writeups if you haven't already seen them.
> People do look
This was never in question.

> Very few are trying to understand why things are working
What is in question is why this is given so little attention. You can hear Neel talk about this himself. It is the reason he is trying to rally people and get more of them into Mech Interp, which, frankly, is a side of ML as old as ML itself. Personally, I believe that if you aren't trying to interpret results and ask why, then you're not actually doing science. Which is fine. There's plenty of good that comes from outside science. I just think it's weird to call something science if you aren't going to do hypothesis testing and find out why things are the way they are.
Which is to say, if you only concern yourself with theorems which have short, understandable proofs, aren't you cutting yourself off from vast swathes of math space?
If you're talking about questions that are well-motivated but whose answers are ugly and incomprehensible, then a milder version of this actually happens fairly often --- some major conjecture gets solved by a proof that everyone agrees is right but which also doesn't shed much light on why the thing is true. In this situation, I think it's fair to describe the usual reaction as, like, I'm definitely happy to have the confirmation that the thing is true, but I would much rather have a nicer argument. Whoever proved the thing in the ugly way definitely earns themselves lots of math points, but if someone else comes along later and proves it in a clearer way then they've done something worth celebrating too.
Does that answer your question?
I think even if N is quite large, that just means it may take decades or millennia to publish and understand all k necessary papers, but maybe it’s still worth the effort even if we can get the length-N paper right away. What are you going to do with a mathematical proof that no one can understand anyway?
The new spin on these older unresolved issues IMHO is really the black-box aspect of our statistical approaches. Lots of mathematicians who are fine with proof systems like Lean and some million-step process that can in principle be followed are also happy with more open-ended automated search and exploration of model spaces, proof spaces, etc. But can they ever really be happy with a million-gigabyte network of weighted nodes masquerading as some kind of "explanation", though? Not a mathematician, but I sympathize. Given the difficulty of building/writing/running it, that looks more like a product than like "knowledge" to me (compare this to how Lean can prove Gödel on your laptop).
Maybe it's easier to swallow the bitter pill of poor-quality explanations, though, once the technology itself is a little easier to actually handle. People hate ugly things less if they are practical, and actually something you can build pretty stuff on top of.
https://en.wikipedia.org/wiki/Quasi-empiricism_in_mathematic...
But absolutely worst of all is the arrogance. The hubris. The thinking that because some human somewhere has figured a thing out, it's then just implicitly known by these types. The casual disregard for their fellow humans. The lack of true care for anything and anyone they touch.
Move fast and break things!! Even when it's the society you live in.
That arrogance and/or hubris is just another type of stupidity.
It's not just that comments that vent denunciatory feelings are lower-quality themselves, though usually they are. It's that they exert a degrading influence on the rest of the thread, for a couple reasons: (1) people tend to respond in kind, and (2) these comments always veer towards the generic (e.g. "lack of true care for anything and anyone", "just another type of stupidity"), which is bad for curious conversation. Generic stuff is repetitive, and indignant-generic stuff doubly so.
By the time we get further downthread, the original topic is completely gone and we're into "glorification of management over ICs" (https://news.ycombinator.com/item?id=43346257). Veering offtopic can be ok when the tangent is even more interesting (or whimsical) than the starting point, but most tangents aren't like that—mostly what they do is replace a more-interesting-and-in-the-key-of-curiosity thing with a more-repetitive-and-in-the-key-of-indignation thing, which is a losing trade for HN.
This is the part I don't get, honestly.
Are people just very shortsighted and don't see how these changes are potentially going to cause upheaval?
Do they think the upheaval is simply going to be worth it?
Do they think they will simply be wealthy enough that it won't affect them much, they will be insulated from it?
Do they just never think about consequences at all?
I am trying not to be extremely negative about all of this, but the speed at which things are moving makes me think we'll hit the cliff before even realizing it is in front of us.
That's the part I find unnerving
Yes, partly that. Mostly they only care about their rank. Many people would burn down the country if it meant they could be king of the ashes. Even purely self-interested people should welcome a better society for all, because a rising tide lifts all boats. But not only are they selfish, they're also very stupid, at least in this aspect. They can't see the world as anything but zero sum, and themselves as either winning or losing, so they must win at all costs. And those costs are huge.
Yes, I think this is it. Frequently using social media and being “online” leads to less critical thought, less thinking overall, a smaller window that you allow yourself to think in, thoughts that are merely sound bites rather than fully fleshed-out thoughts, and so on. One's thoughts can easily become a milieu of memes and falsehoods. A person whose mind is in this state will do whatever anyone suggests for that next dopamine hit!
I am guilty of it all myself which is how I can make this claim. I too fear for humanity’s future.
> Do they think the upheaval is simply going to be worth it?
All technology causes upheaval. We've benefited from many of these upheavals, to the point where it's impossible for most to imagine a world without the proliferation of movable type, the internal combustion engine, the computer, or the internet. All of your criticisms could have easily been made, word for word, by the Catholic Church during the medieval era. The "society" of today is no more of a sacred cow than its antecedent incarnations were half a millennium ago. As history has shown, it must either adapt, disperse, or die.
I am not concerned about some kind of "sacred cow" that I want to preserve
I am concerned about a future where those with power no longer need 90% of the population so they deploy autonomous weaponry that grinds most of the population into fertilizer
And I'm concerned there are a bunch of short sighted idiots gleefully building autonomous weaponry for them, thinking they will either be spared from mulching, or be the mulchers
Edit: The thing about appealing to history is that it also shows that when upper classes get too powerful they start to lose touch with everyone else, and this often leads to turmoil that affects the common folk most
As one of the common folk, I'm pretty against that
There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”
aka optimization-for-its-own-sake, aka pathological optimization.
It's basically meatspace internalizing and adopting the paperclip problem as a "good thing" to pursue, screw externalities and consequences.
And, lo and behold, my read on why it gets downvoted here is that a lot of folks on HN subscribe to this mentality, as it is part of the HN ethos to optimize, often pathologically.
You flatten the heap
You decrease or eliminate the reward for being at the top
You decrease or eliminate the penalty for being at the bottom
The main problem is that we haven't figured out a good way to do this without creating a whole bunch of other problems
I worked in an organization afflicted by this and still have friends there. In the case of that organization, it was caused by an exaggerated glorification of management over ICs. Managers truly did act according to the belief, and show every evidence of sincerely believing in it, that their understanding of every problem was superior to the sum of the knowledge and intelligence of every engineer under them in the org chart, not because they respected their engineers and worked to collect and understand information from them, but because managers are a higher form of humanity than ICs, and org chart hierarchy reflects natural superiority. Every conversation had to be couched in terms that didn't contradict those assumptions, so the culture had an extremely high tolerance for hand-waving and BS. Naturally this created cover for all kinds of selfish decisions based on politics, bonuses, and vendor perks. I'm very glad I got out of there.
I wouldn't paint all of tech with the same brush, though. There are many companies that are better, much better. Not because they serve higher ideals, but because they can't afford to get so detached from reality, because they'd fail if they didn't respect technical considerations and respect their ICs.
Mathematicians who practice constructive math and view existence proofs as mere intellectual artifacts tend to embrace AI, physics, engineering and even automated provers as worthy subjects.
I can see a day might come when we (research mathematicians, math professors, etc) might not exist as a profession anymore, but there will continue to be mathematicians. What we'll do to make a living when that day comes, I have no idea. I suspect many others will also have to figure that out soon.
[0] I've seen this attributed to the Character of Physical Law but haven't confirmed it
I'd include writing, art-, and music-making in that category.
Why is the AI field so secretive? Because it's all trade secrets - and maybe soon to become patents. You don't give away precisely how semiconductor fabs work, only base-level research of "this direction is promising".
Why is everyone pushed to add AI in? Because that's where the money is, that's where the product is.
Why does AI need results fast? Because it's a production line, and you create and design stuff.
Even the core distinction mentioned - that AI is about "speculation and possibility" - is all about tool experimenting and prototyping. It's all about building and constructing. Aka the Engineering/Technology letters of STEM.
I guess the next step is to ask "what to do next?". IMO, the math and AI fields should realise the divide and slowly diverge, leaving each other alone at arm's length. Just as engineers and programmers (not computer scientists) already do.
What I think mathematicians should remind themselves is that a lot of prestigious mathematicians, the likes of Cantor or Erdős, often employed only a handful of “tricks”/heuristics for their proofs over their careers. They repeatedly and successfully applied these strategies to unsolved problems.
I'd argue it would not take a tremendous jump in performance for an AI to begin its own journey similar in kind to the greats'; the only thing standing in its way (as with all contemporary mathematicians) is the extreme specialisation required to reach the boundary of unsolved problems.
AI need not be Euler to be an important tool and figure within mathematics
I know this claim is often made, but it seems obvious that in this discussion "trick" means something far wider and more subtle than any set computer program. In a lot of ways, "he just uses a few tricks" is akin to the way a mathematician will say "and the rest of the proof is elementary" (when it's still quite long and hard for anyone not versed in a given specialty). I mean, before category theory was formalized, the proofs that are now possible with it might have been classified as "all done with this trick", but grasping said trick was far from an elementary matter.
> I'd argue it would not take a tremendous jump in performance for an AI to begin its own journey similar in kind to the greats'; the only thing standing in its way (as with all contemporary mathematicians) is the extreme specialisation required to reach the boundary of unsolved problems.
Not that LLMs can't do some impressive things but your narrative seems to anthropomorphize them in a less than useful way.
Engineering has always involved large amounts of both math and secrecy, what's different now?
(But the engineers want the benefits of academic research -- going to conferences to give talks, credibility, intellectual prestige -- without paying the costs, e.g. actually sharing new knowledge and information.)
Not exactly AI by today's standards, but a lot of the math that they need has been rolled into their software tools. And Excel is quite powerful.
In math, there's an urban legend that the first Greek who proved sqrt(2) is irrational (sometimes credited to Hippasus of Metapontum) was thrown overboard to drown at sea for his discovery. This is almost certainly false, but it does capture the spirit of a mission in pure math. The unspoken dream is this:
~ "Every beautiful question will one day have a beautiful answer."
At the same time, ever since the pure and abstract nature of Euclid's Elements, mathematics has gradually become a more diverse culture. We've accepted more and more kinds of "numbers:" negative, irrational, transcendental, complex, surreal, hyperreal, and beyond those into group theory and category theory. Math was once focused on measurement of shapes or distances, and went beyond that into things like graph theory and probabilities and algorithms.
In each of these evolutions, people are implicitly asking the question:
"What is math?"
Imagine the work of introducing the sqrt() symbol into ancient mathematics. It's strange because you're defining a symbol as answering a previously hard question (what x has x^2=something?). The same might be said of integration as the opposite of a derivative, or of sine defined in terms of geometric questions. Over and over again, new methods become part of the canon by proving to be both useful, and in having properties beyond their definition.
AI may one day fall into this broader scope of math (or may already be there, depending on your view). If an LLM can give you a verified but unreadable proof of a conjecture, it's still true. If it can give you a crazy counterexample, it's still false. I'm not saying math should change, but that there's already a nature of change and diversity within what math is, and that AI seems likely to feel like a branch of this in the future; or a close cousin the way computer science already is.
* AI could get better at thinking intuitively about math concepts.
* AI could get better at looking for solutions people can understand.
* AI could get better at teaching people about ideas that at first seem abstruse.
* AI could get better at understanding its own thought, so that progress is not only a result, but also a method for future progress.
An issue in these discussions is that mathematics is at once an art, a sport, and a science. And the development of AI that can build 'useful' libraries of proven theorems means something different for each. The sport of mathematics will be basically over. The art of mathematics will thrive as it becomes easier to explore the mathematical world. For the science of mathematics it's hard to say; it's been kind of shaky for ~50 years anyway, but it can only help.
I have listened to Colin McLarty talk about the philosophy of math, and there was a contingent of mathematicians who cared solely about solving problems via “algorithms”. The period was the one just preceding modern math, roughly the late 1800s, when the algorithmists, intuitionists, and logically oriented mathematicians coalesced into a combination of the intuitive, the algorithmic, and the importance of logic, leading to the modern way we do proofs and our focus on proofs.
These algorithmists didn’t care about the so called “meaningless” operations that got an answer, they just cared they got useful results.
I think the article downplays this side of math, and it is the side AI will be best at, or most useful for. Having read AI proofs, they are terrible in my opinion. But if AI can prove something useful, even if the proof is grossly unappealing to the modern mathematician, there should be nothing to clamor about.
This is the talk I have in mind https://m.youtube.com/watch?v=-r-qNE0L-yI&pp=ygUlQ29saW4gbWN...
I think this is an interesting question. In a hypothetical SciFi world where we somehow provably know that AI is infallible and the results are always correct, you could imagine mathematicians grudgingly accepting some conjecture as "proven by AI" even without understanding the why.
But for real-world AI, we know it can produce hallucinations and its reasoning chains can have massive logical errors. So if it came up with a proof that no one understands, how would we even be able to verify that the proof is indeed correct and not just gibberish?
Or more generally, how do you verify a proof that you don't understand?
Just so this isn't misunderstood: not that much cutting-edge math can presently be coded in Lean. The famous exceptions (such as the results by Clausen-Scholze and Gowers-Green-Manners-Tao) have special characteristics which make them much more ground-level and easier to code in Lean.
What's true is that it's very easy to check whether a Lean-coded proof is correct. But it's hard and time-consuming to formulate most math as Lean code. It's something many AI research groups are working on.
"Special characteristics" is really overstating it. It's just a matter of getting all the prereqs formalized in Lean first. That's a bit of a grind to be sure, but the Mathlib effort for Lean has the bulk of the undergrad curriculum and some grad subjects formalized.
I don't think AI will be all that helpful wrt. this kind of effort, but it might help in some limited ways.
In this hypothetical Riemann Hypothesis example, the only thing the human would have to check is that (a) the proof-verification software works correctly, and that (b) the statement of the Riemann Hypothesis at the very beginning is indeed a statement of the Riemann Hypothesis. This is orders of magnitude easier than proving the Riemann Hypothesis, or even than following someone else's proof!
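For concreteness, here is a minimal Lean sketch of that division of labor. It assumes Mathlib's `RiemannHypothesis` statement, and `sorry` stands in for the hypothetical machine-generated proof; the human's job is only (a) trusting the kernel and (b) auditing the one-line statement.

```lean
import Mathlib

-- `sorry` is a placeholder for a hypothetical AI-generated proof term.
-- A real candidate proof would be accepted or rejected by the Lean kernel
-- alone; the human never needs to follow its internal reasoning.
theorem rh_settled : RiemannHypothesis := sorry
```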
This is the big question! Computer-aided proof has been around forever. AI seems like just another tool from that box. Albeit one that has the potential to provide 'human-friendly' answers, rather than just a bunch of symbolic manipulation that must be interpreted.
This seems very caricatural, one thing I've often heard in the AI community is that it'd be interesting to train models with an old data cutoff date (say 1900) and see whether the model is able to reinvent modern science
Modern AI is about "well, it looks like it works, so we're golden".
This is not the point, but the saying "there is no royal road to geometry" is far older than Gauss! It goes back at least to Proclus, who attributes it to Euclid.
The story goes that the (royal) pharaoh of Egypt wanted to learn geometry, but didn't want to have to read Euclid. He wanted a faster route. But, "there is no royal road to geometry."
Unless "the royal pharaoh of Egypt" refers to Ptolemy I Soter, the Macedonian general who became the first ruler of the Ptolemaic Kingdom of Egypt after Alexander's death.
There is a major caveat here. Most 'serious math' in AI papers is wrong and/or irrelevant!
It's even the case for famous papers. Each lemma in Kingma and Ba's ADAM optimization paper is wrong, the geometry in McInnes and Healy's UMAP paper is mostly gibberish, etc...
I think it's pretty clear that AI researchers (albeit surely with some exceptions) just don't know how to construct or evaluate a mathematical argument. Moreover the AI community (at large, again surely with individual exceptions) seems to just have pretty much no interest in promoting high intellectual standards.
This is well-studied and not unique to AI, to the English-speaking USA, or even to Western traditions. Here is what I mean: a book called Diffusion of Innovations by Rogers lays out a history of technology introduction. If the results are tallied in population, money, or other prosperity, the civilizations and language groups that have systematic ways to explore and apply new technology are the "winners" in the global context.
AI is a powerful lever. The meta-conversation here might be around concepts of cancer, imbalance and chairs on the deck of the Titanic.. but this is getting off-topic for maths.
Henri Cartan of the Bourbaki had not only a more comprehensive view, but a greater scope of the potential of mathematical modeling and description
With an AI advisor I do not have this problem. It explains the parts I need, in a way I understand. If I study some complicated topic, AI shortens it from months to days.
I was somewhat mathematically gifted when younger; sadly, I often reinvented my own math because I did not even know that part of math existed. Watching how Deepseek thinks before answering is REALLY beneficial. It gives me many hints and references. Human teachers are like black boxes while teaching.
I thought I understood calculus until I realised I didn't. And that took a bit of a thwack in the face, really. I could use it, but I didn't understand it.
My point is that a human advisor does not have enough time to answer questions and correctly explain the subject. I may get like 4 hours a week, if lucky. Books are just a cheap substitute for real dialog and reasoning with a teacher.
Most ancient philosophy papers were in the form of dialog. It is a much faster way to explain things.
AI is a game changer. It shortens the feedback loop from a week to an hour! It makes mistakes (as humans do), but it is faster to find them. And it also develops cognitive skills while finding them.
It is like programming in low level C in notepad 40 years ago. Versus high level language with IDE, VCS, unit tests...
Or like farming resources in Rust. Boring, repetitive grind...
I don't think professional programmers were using notepad in 1985. Here's talk of IDEs from an article from 1985: https://dl.acm.org/doi/10.1145/800225.806843 It mentions Xerox Development Environment, from 1977 https://en.wikipedia.org/wiki/Xerox_Development_Environment
The feedback loop for programming / mathematics / other things I've studied was not a week in the year 2019. In that ancient time the feedback loop was maybe 10% slower than with any of these LLMs, since you had to look at Google search.
And it's not that AI can't contribute to this effort. I can certainly see how a chatbot research partner could be super valuable for lit review, brainstorming, and even 'talking things through' (much like mathematicians get value from talking aloud). This doesn't even touch on the ability to generate potentially valid proofs, which I do think has a lot of merit. But the idea that we could totally outsource the work to a generative model seems impossible by definition. The point of the labor is to develop human understanding; removing the human from the loop changes the nature of the endeavor entirely (basically to algorithm design).
Similar stuff holds about art (at a high level, and glossing over 'craft art'); IMO art is an expressive endeavor. One person communicating a hard-to-express feeling to an audience. GenAI can obviously create really cool pictures, and this can be grist for art, but without some kind of mind-to-mind connection and empathy the picture is ultimately just an artifact. The human context is what turns the artifact into art.