malux85 2 days ago

This is what has excited me for many years - the idea I call "scientific refactoring"

What happens if we reason upwards but change some universal constants? What happens if we use Tao instead of Pi everywhere? These kinds of fun questions would otherwise require enormous intellectual effort, whereas with the mechanisation and automation of thought, we might be able to just run them and see!

kridsdale3 2 days ago | parent | next [-]

Not just for math: all of science suffers heavily from the problem that leading researchers can read less than 1% of the published work in their field.

Google Scholar was a huge step forward for doing meta-analysis vs a physical library.

But agents scanning the vast sea of PDFs for correlations and insights far beyond human context capacity will, I hope, surface a lot of knowledge that we have technically already collected but remain ignorant of.

semi-extrinsic 2 days ago | parent | next [-]

This idea is just ridiculous to anyone who's worked in academia. The theory is nice, but academic publishing is currently in the late stages of a huge death spiral.

In any given scientific niche, there is a huge amount of tribal knowledge that never gets written down anywhere; it's just passed on from one grad student to the rest of the group, and from there spreads by percolation through the tiny niche. And papers are never honest about the performance of their results or about what does not work; there is always cherry-picking of benchmarks, comparisons, etc.

There is absolutely no way you can get these kinds of insights beyond human context capacity that you speak of. The information necessary does not exist in any dataset available to the LLM.

charcircuit a day ago | parent [-]

The same could be said about programmers, but we have adapted and started writing it all down so that AI can use it.

semi-extrinsic a day ago | parent [-]

No no, in comparison to academia, programmers have been extremely diligent at documenting exactly how stuff works and providing fairly reproducible artifacts since the 1960s.

Imagine trying to teach an AI how to code based only on slide decks from consultants. No access to documentation, no Stack Overflow, no open source code in the training data; just sales pitches and success stories. That's close to how absurd this idea is.

newyankee 2 days ago | parent | prev | next [-]

Exactly, and I think not every instance can be dismissed as a hallucination; there will be so much latent knowledge they might have explored.

It is likely we might see some AlphaGo-style new moves in existing research workflows that AI could work out if there is some verification logic. Humans could probably never go into that space, or maybe none of the researchers ever ventured there for various reasons, since progress in general is almost always incremental.

zozbot234 2 days ago | parent | prev [-]

Google Scholar is still ignoring a huge amount of scholarship that is decades old (pre-digital) or even centuries old (and written in now-unused languages that ChatGPT could easily make sense of).

stouset 2 days ago | parent | prev | next [-]

> What happens if we use Tao instead of Pi everywhere

Literally nothing other than mild convenience. It’s just 2pi.

lapetitejort 2 days ago | parent | next [-]

Call me a mathematical extremist but I think pi should equal 6.28... and tau, which looks like half of pi, should equal 3.14...

measurablefunc 2 days ago | parent [-]

In 1897, the Indiana General Assembly attempted to legislate a new value for pi, proposing it be defined as 3.2, which was based on a flawed mathematical proof. This bill, known as the Indiana pi bill, never became law due to its incorrect assertions and the prior proof that squaring the circle is impossible: https://en.wikipedia.org/wiki/Indiana_pi_bill

measurablefunc 2 days ago | parent | prev [-]

You're forgetting that some equations have π/2 so on balance nothing will change. It will be the same number of symbols.

ogogmad a day ago | parent [-]

I don't think it's just the sheer number of symbols. It's also the fact that the symbol τ means "turn". So you can say "quarter-turn" instead of π/2.

I'm not sure why that point gets lost in these discussions. And personally, I think of the set of fundamental mathematical objects as having a unique and objective definition. So, I get weirdly bothered by the offset in the Gamma function.

chmod775 2 days ago | parent | prev | next [-]

I can write a sed command/program that replaces every occurrence of PI with TAU/2 in LaTeX formulas, and it'll take me about 30 minutes.

The "intellectual effort" this requires is about 0.

Maybe you meant Euler's number? Since it also relates to PI, it can be used and might actually change the framework in an "interesting way" (making it more awkward in most cases - people picked PI for a reason).
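For what it's worth, a minimal sketch of that substitution (assuming formulas use the literal \pi macro; the filenames are placeholders, and the algebraic cleanup afterwards is the actual work):

```shell
# Naive textual substitution of \pi with (\tau/2) in a LaTeX file.
# It does none of the simplification, e.g. 2\pi is left as 2(\tau/2).
sed 's/\\pi/(\\tau\/2)/g' input.tex > output.tex
```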

observationist 2 days ago | parent | next [-]

I think they mean it in a more general way: thinking with tau instead of pi might shift the context toward another method or problem-solving algorithm, or there might be obscure or complex uses of tau or pi that haven't cross-fertilized in the literature. It may be natural to think of clever extensions or use cases in one context but not the other, and those extensions and extrapolations would be apparent to an AI within reach of a tedious and exhaustive review of existing literature.

I think what they were getting at is something like this: the application of existing ideas that simply haven't been applied in certain ways because it's too boring, obvious, or abstract for humans to have bothered with. But AI can plow through a year's worth of human drudgery in a day or a month, and that sort of "brute force" won't require any amazing new technical capabilities from AI.

saulpw 2 days ago | parent | prev [-]

Yeah but you also have to replace all (2*tau/2) with tau, and 4*(tau/2)^2 with tau^2, etc etc...
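That cleanup is exactly where a computer algebra system helps; a hedged sketch with sympy (assuming the formula is already parsed into a symbolic expression):

```python
# Substitute pi = tau/2 symbolically, so 2*pi collapses to tau
# automatically instead of being left as the textual 2*(tau/2).
import sympy as sp

r, tau = sp.symbols('r tau', positive=True)
circumference = 2 * sp.pi * r
in_tau = sp.simplify(circumference.subs(sp.pi, tau / 2))
print(in_tau)  # the simplified form tau*r, not 2*(tau/2)*r
```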

sublinear 2 days ago | parent | prev | next [-]

* Tau

ogogmad a day ago | parent | prev | next [-]

I'm using LLMs to rewrite every formula featuring the Gamma function to instead use the factorial. Just let "z!" mean "Gamma(z+1)", substitute everywhere, and simplify. Then have the AI rewrite any prose.
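The mechanical half of that is already automatable in sympy; a small sketch (rewriting the surrounding prose is the part that actually needs an LLM):

```python
# Rewrite Gamma-function expressions in factorial notation,
# using the identity z! = Gamma(z + 1).
import sympy as sp

z = sp.Symbol('z')
expr = sp.gamma(z + 1)
print(expr.rewrite(sp.factorial))  # factorial(z)
```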

kelipso a day ago | parent [-]

I’m going to replace every instance of 1 with 0.999 repeating, do the equivalent for all integers, and see how my mind totally explodes.

HardCodedBias 2 days ago | parent | prev [-]

Think of how this opened up EM:

https://ddcolrs.wordpress.com/2018/01/17/maxwells-equations-...