Having the ability to throw math-heavy ML papers at the assistants and get simplified explanations / pseudocode back is absolutely amazing, as someone who's forgotten most of what I learned at uni 25+ years back and never really used it since.
This is where LLMs shine for learning imo: throwing a paper into Claude, getting an overview, and then being able to ask questions.
Especially for fields that I didn't study at the Bachelor's or Master's level, like biology. Getting to engage with deeper research with a knowledgeable tutor-assistant has enabled me to go deeper than I otherwise could.
I'm not sure of the exact dollar value of feeling safe enough to ask really stupid questions that I should already know the answer to, and that I'd be embarrassed if anyone saw me ask Claude, but it's more than I'm paying them. Maybe that's the enshittification play: an extra $20/month if you don't want it to sound judgey about your shit.
I hope we continue to see gains for scientific professionals and companies doing research.
Even imperfect assistants increase leverage.
iOS formalization app mentioned by Tao (beta)
https://aristotle.harmonic.fun/
TIL: startup founded by the Robinhood CEO
'Vibe formalizing' is a logical extension of 'vibe engineering' implemented by 'vibe coding'. Sometimes I have trouble getting the individual puzzle pieces of a problem to fall into place; a hypothetical 'Move 37 As A Service' that unifies informal methods with mathematical rigor deserves to be explored!
I've had mixed results with AI on research mathematics. I've gotten it to auto-complete non-trivial arguments, and I've found some domains where it seems hopelessly lost. I think we're still at a point in history where mathematicians will not be replaced by AI and can only benefit by dabbling with it.
I've had similar results in both mathematics and programming. For one paper I was writing I wanted to try them on a fairly straightforward problem: counting the number of permutations. I spent much more time trying to get the AI to figure it out than it took to solve it myself, and I couldn't get it there even after I had solved the problem. Similarly, in coding I've had it fail to find trivial bugs, like an undefined keyword that would easily have been caught had I turned on ctags (the keyword was inherited from a class, which made it easy to miss). But I've also had them succeed on non-trivial tasks, and those have been a great help, though nothing has ever been end-to-end just the AI.
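Just to give the flavor of that kind of counting problem (a toy stand-in of my own invention, not the actual problem from the paper, which isn't specified): the simplest version, counting permutations of n distinct items, is a two-line recursion in, say, Lean.

```lean
-- Hypothetical toy example: the number of permutations of n distinct
-- items is n!, defined by recursion.
def perms : Nat → Nat
  | 0     => 1
  | n + 1 => (n + 1) * perms n

#eval perms 5  -- 120
```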
So I agree. I think these tools are very helpful, but I also think it is unhelpful that people are trying to sell them as much more than they are. The overhype not only legitimizes pushback but hands it ammunition. I believe the truth is that they're helpful tools but still far from replacing expertise, and that if we oversell them we run the risk of ruining their utility. If you promise the moon you have to deliver the moon. That's different from aiming for the moon and landing in the trees.
Also interesting that the responses include anti-Lean material.
I'm not a mathematician, but how credible is that anti-Lean material? Are they marketing an alternative programmatic approach, as in they're anti-Lean because "I got something else", or are they philosophically anti-Lean with valid arguments?
It's mainly the latter, although the author makes half-hearted gestures at some sort of CAS (Computer Algebra System) being better.
It's not very credible. There are individual fragments that make sense but it's not consistent when taken together.
For example, by making reference to Gödelian problems and his overall mistrust of infinitary structures, he's implicitly endorsing ultrafinitism (not just finitism, because e.g. PRA, the usual theory for finitary proofs, also falls prey to Gödel's incompleteness theorems). But this is inconsistent with his expressed support for CASes, which very happily manipulate structures that are meant to be infinitary.
He tries to justify this on the grounds that CASes only perform a finite number of symbol manipulations to arrive at an answer, but the same is true of Lean, otherwise typechecking would never terminate. Indeed, this is true of any formal system you could run on a computer.
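To make that concrete (my illustration, not the author's): even a Lean proof of a statement quantifying over all natural numbers is a finite term, and the kernel checks it in finitely many steps.

```lean
-- The statement ranges over the infinite type Nat, but the proof term
-- is a finite object, and the kernel typechecks it in finitely many
-- symbol manipulations, exactly as a CAS performs a finite computation.
theorem zero_add' : ∀ n : Nat, 0 + n = n := by
  intro n
  induction n with
  | zero => rfl
  | succ n ih => rw [Nat.add_succ, ih]
```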
Leaving aside his inconsistent set of arguments for CAS over Lean (and there isn't really a strong distinction between the two, honestly; you could argue that Lean and other dependently typed proof assistants are just another kind of CAS), his implicit support of ultrafinitism would already require a huge amount of work to make applicable to a computer system. There isn't a consensus on the logical foundations of ultrafinitism yet, so building a proof checker that satisfies ultrafinitistic demands isn't even really well-defined and requires a huge amount of theorycrafting.
And just for clarity, finitism is the notion that unboundedness is okay but actual infinities are suspect. E.g. it's okay to say "there are an infinite number of natural numbers", understood as shorthand for "there is no bound on the natural numbers", but it's not okay to treat the completed infinitary object N of all natural numbers as a real thing. So e.g. some finitists are okay with PA over PRA.
Ultrafinitists, on the other hand, deny unboundedness and say that sufficiently large natural numbers simply do not exist. The most common operationalization is that whether a number exists is a matter of a computation that scales with the size of the number: if the computation has not completed, we cannot have confidence the number exists, and hence sufficiently large numbers, for which the relevant computations have not been completed, do not exist. This means that quantified statements of the form "for all natural numbers..." are very dangerous, and there is not yet a consensus on the correct ultrafinitistic formalization of them (or on whether such statements are even coherent).
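A quick sketch of the distinction (my example, not the author's): the finitist-friendly "no bound" reading of infinitude is just a ∀/∃ statement, with no completed infinite set in sight; but the ∀ over Nat is exactly the construct an ultrafinitist would already be suspicious of.

```lean
-- "There is no largest natural number", stated as unboundedness: for
-- every n there is a strictly larger m. No completed infinite set is
-- mentioned, but the ∀ over Nat is the kind of quantification an
-- ultrafinitist would reject.
theorem no_largest : ∀ n : Nat, ∃ m : Nat, n < m :=
  fun n => ⟨n + 1, Nat.lt_succ_self n⟩
```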
Due to his position and general fame, Tao has to deal with a larger-than-usual number of kooks.
I think Zeilberger is taken heavily out of context and confused with Norman Wildberger a lot. He certainly has some eccentric opinions, but that one is not at all reflected in his blog's contents (which are largely things like "[particular paper] presents [conjecture/proof] that can be [resolved/shortened] by routine methods", routine only because of his decades of work). It's a shame that his being the go-to example of a crank seems to have become ingrained in LLMs.
Because he is smart enough to use the existing (frontier) tools to get good results and create a sort of collaborative environment that is novel for research maths.
As for collaboration, I meant: https://terrytao.wordpress.com/2024/09/25/a-pilot-project-in... The issue with horizontal scaling of maths research is trust: if you don't know the author, it is more work to verify their work, especially non-formal proofs. Lean4 enables large projects to be split up into pieces where Lean can validate each intermediate result, so a much broader group of people can contribute pieces without jeopardizing the overall soundness.
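A minimal sketch of what that decomposition looks like (hypothetical lemma names, in the spirit of the blueprint-style projects Tao describes): the main theorem is reduced to stub lemmas that compile with `sorry`, so contributors can claim pieces independently while Lean guarantees each piece slots correctly into the whole.

```lean
-- Hypothetical skeleton of a collaborative Lean project: each `sorry`
-- is an open sub-task a contributor can claim. Lean checks that the
-- main theorem really follows from the lemmas, so nobody has to trust
-- an unknown contributor's prose proof.
theorem step_one (n : Nat) : n ≤ n * n + 1 := sorry

theorem step_two (n : Nat) : n * n + 1 ≤ (n + 1) * (n + 1) := sorry

-- Machine-checked glue: sound as soon as the pieces land.
theorem main_result (n : Nat) : n ≤ (n + 1) * (n + 1) :=
  Nat.le_trans (step_one n) (step_two n)
```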
Indeed. Who’s to say whether Wiles would have used AI assistance if it had, you know, existed, in 1994.
Collaborative environment meaning that any PFY employed by the "AI" providers can read your most intimate thought processes and keep track of embarrassing failures or misconceptions.
The embarrassing failures or misconceptions of math experts with regards to research level mathematics? Definitely a serious problem.
Though by your "Perelman and Wiles didn't need "AI" assistance" comment, you'd surely be there on the sidelines to ridicule them for each and every single one. I guess maybe that's where your concerns are coming from?
I can practically see how these concerns of yours would suddenly evaporate if they started using self-hosted models instead... yeah, right, who are we kidding?
If mathematicians aren't occasionally saying something that's obviously wrong to a 300-level student, are they really pushing the envelope? I'm just a programmer, but I find that my, and my coworkers', biggest insights always come right after we've said something seriously dumb.
Thankfully it's mathematics, so people powerscaling their idols, deifying them to the detriment of others, and putting terms into quotes mockingly is not what determines whether results hold. Perhaps it's the only field not fundamentally shackled by this type of quackery, even if people try their hardest from time to time to make it so.
It's fine, at least you admit that what you wrote was just to insult.
For people who at least pretend to care not to think in strawmen: it's been nearly six years, and their deus has never exited said machina (if it was ever in there to begin with, or anywhere else).
They should name one of the AI's "Erdos". Then we can all have an Erdos number of one!
Or a successor of https://en.wikipedia.org/wiki/DR-DOS.
There is an AI-integrated IDE called Erdos...
https://www.lotas.ai/erdos
Erdős
I was told by a Hungarian that Hungarian written spelling and spoken pronunciation are pretty precisely aligned, compared to, say, English. Except when it comes to names, when it gets a bit random!
Why not do the bloke the decency of spelling his name correctly? Those diacritics are important.
Anyway, I was told that Paul's name is very roughly pronounced by an anglophone as: "airdish".
(I saw this on a math department bulletin board about 1960)
A theorem both deep and profound
States that every circle is round.
But in a paper by Erdős,
Written in Kurdish,
A counterexample is found.
He's not American, so nobody here cares...