| ▲ | lwansbrough a day ago |
Can anyone with specific knowledge of a sophisticated/complex field such as physics or math tell me: do you regularly talk to AI models? Do you feel like there's anything to learn? As a programmer, I can come to the AI with a problem and it can come up with a few different solutions, some I may have thought of, some not. Are you getting the same value in your work, in your field?
| ▲ | ceh123 a day ago | parent | next [-] |
Context: I finished a PhD in pure math in 2025 and have transitioned to being a data scientist, and I do ML/stats research on the side now. For me, deep research tools have been essential for getting caught up with a quick lit review on research ideas now that I'm transitioning fields. They have also been quite helpful with some routine math that I'm not as familiar with but that is relatively established (like standard random matrix theory results from ~5 years ago). The spectrum of utility feels pretty aligned with what you might expect: routine programming > applied ML research > stats/applied math research > pure math research. I will say that ~1 year ago they were still useless for my math research area, but things have been changing quickly.
| ▲ | jacquesm a day ago | parent | prev | next [-] |
I don't have a degree in either physics or math, but what AI helps me do is stay focused on the job in front of me rather than dig through a mountain of textbooks, Wikipedia pages, or scientific papers trying to find an equation I know I've seen somewhere but didn't register the location of and didn't copy down. This saves a lot of time, every day. Even then, I still check the references once I've found it, because errors can and do slip into anything these pieces of software produce, sometimes quite large ones (those are easy to spot, though). So yes, there is value here, and quite a bit, but it requires a lot of forethought in how you structure your prompts, and you need to be super skeptical about the output as well as able to check that output minutely. If you just plug in a bunch of data, formulate a query, and then use the answer uncritically, you're setting yourself up for a world of hurt and lost time by the time you realize you've been building your castle on quicksand.
| ▲ | D-Machine a day ago | parent | prev | next [-] |
I do / have done research building deep learning models and custom / novel attention layers, architectures, etc., and AI (ChatGPT) is tremendously helpful in facilitating (semantic) search for papers in areas where you may not quite know the magic keywords / terminology for what you are looking for. It is also very good at linking you to ideas / papers that you might not have realized were related. I also found it can be helpful when exploring your mathematical intuitions about something, e.g. how a dropout layer might affect learned weights and matrix properties, etc. Sometimes it will find some obscure rigorous math that can be very enlightening or relevant to correcting clumsy intuitions.
| ▲ | randomizedalgs 10 hours ago | parent | prev | next [-] |
I'm an active researcher in TCS. For me, AI has not been very helpful with technical things (or even technical writing), but has been super helpful for (1) literature reviews; (2) editing papers (e.g., changing a convention everywhere in the paper); and (3) generating TikZ figures/animations.
| ▲ | ancillary 11 hours ago | parent | prev | next [-] |
I did a theoretical computer science PhD a few years ago and write one or two papers a year in industry. I have not had much success getting models to come up with novel ideas or even prove theorems, but I have had some success asking them to prove smaller, narrower results and using them as an assistant for reading papers (why are they proving this result, what is this notation they're using, expand this step of their proof, etc.). Asking one to find bugs in a draft before uploading to arXiv also usually turns up some minor things to clarify. Overall: useful, but not yet particularly "accelerating" for me.
| ▲ | Davidzheng a day ago | parent | prev | next [-] |
I talk to them (I do math research in algebraic geometry), but they're not really helpful outside of literature search, unfortunately. Others around me get a lot more utility, so it varies. (The most powerful models I tried were Gemini 2.5 Deep Think and Gemini 3.0 Pro.) Not sure if the new GPTs are much better.
| ▲ | abdullahkhalids a day ago | parent | prev | next [-] |
I work in quantum computing. There is quite a lot of material about quantum computing out there that these LLMs must have been trained on. I have tried a few different ones, but they all start spouting nonsense about anything that is not super basic. But maybe that is just me. I have read some of Terence Tao's transcripts, and the questions he asks LLMs are higher complexity than what I ask. Yet he often gets reasonable answers. I don't yet know how I can get these tools to do better.
| ▲ | hyperadvanced a day ago | parent | prev | next [-] |
I’m a hobbyist math guy (with a math degree), and LLMs can at least talk the talk a little and entertain the random attempts at proofs I make. In general they rebuke my wilder attempts and lead me to well-trodden answers for solved problems. I generally enjoy (as a hobby) finding fun or surprising solutions to basic problems more than solving novel math, so LLMs are fun for me.
| ▲ | ramraj07 a day ago | parent | prev | next [-] |
As the other person said, Deep Research is invaluable, but hypothesis generation is not as good at the true bleeding edge of research. The OG ChatGPT-4, with no guardrails, briefly generated outrageously good hypotheses that actually made sense. Since then they have all been neutered beyond use in this direction.
| ▲ | kmaitreys 18 hours ago | parent | prev | next [-] |
My experience has been mixed. Honestly, though, talking to AI and discussing a problem with it is better than doing nothing and just procrastinating. It's mostly wrong, but the conversation helps me think. In the end, once my patience runs out and my own mind has been "refreshed" through the conversation (even if it was frustrating), I can work on the problem myself. Some bits of the conversation will help, but the "one-shot" doesn't exist. tl;dr: AI chatbots can get you going, and that may be better than just postponing and procrastinating over the problem you're trying to solve.
| ▲ | j2kun a day ago | parent | prev [-] |
They are good for a jump start on literature search, for sure.