londons_explore 3 hours ago

I think we aren't far from AI being able to solve this sort of problem too.

Imagine you are Apple and can just set an LLM loose on the codebase for a weekend with the task to reduce RAM usage of every component by 50%...

al_borland 3 hours ago | parent | next [-]

From everything I’ve seen, LLMs aren’t exactly known for writing extremely optimized code.

Also, what happens to the stability and security of my phone after they let an LLM loose on the entire code base for a weekend?

There are 1.5 billion iPhones out there. It’s not a place to play fast and loose with bleeding edge tech known for hallucinations and poor architecture.

rescbr 3 hours ago | parent | next [-]

If you ask an LLM to code whatever, it definitely won’t produce optimized code.

If you direct it to do a specific task to find memory and cpu optimization points, based on perf metrics, then it’s a completely different world.

jfim 3 hours ago | parent [-]

You can also tell it the optimization to implement.

I asked Claude to find all the valid words on a Boggle board given a dictionary, and it wrote a simple implementation that basically searched for every single word on the board. Telling it to first prune the dictionary, by building a bit mask of the letters in each word and of the letters on the board and then checking whether each word is even possible on the board, gave something like a 600x speedup from a simple prompt describing what to do.

That does assume that one has an idea of how to optimize though and what are the bottlenecks.
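The pruning idea described above can be sketched in a few lines. This is a hypothetical illustration, not the code Claude produced: each word and the board are folded into a 26-bit letter mask, and a word is kept only if its mask is a subset of the board's mask (it ignores letter counts and adjacency, which the full solver would still have to check).

```python
def letter_mask(letters):
    """Fold lowercase letters into a 26-bit mask, one bit per letter."""
    mask = 0
    for ch in letters:
        mask |= 1 << (ord(ch) - ord("a"))
    return mask

def prune_dictionary(words, board_letters):
    """Keep only words whose letters all occur somewhere on the board."""
    board = letter_mask(board_letters)
    # A word is possible only if its mask has no bits outside the board's mask.
    return [w for w in words if letter_mask(w) & ~board == 0]

words = ["cat", "dog", "act", "zebra"]
board = "catgodx"  # letters on a hypothetical board
print(prune_dictionary(words, board))  # -> ['cat', 'dog', 'act']
```

Since the mask check is a couple of bitwise operations per word, it discards most of the dictionary before any board search happens, which is where the large speedup comes from.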

al_borland 2 hours ago | parent [-]

Can we assume at this point that, if the problems are well known, the low-hanging fruit has already been addressed? The Boggle example seems like a pretty basic optimization that anyone writing a Boggle solver would do.

iOS is 19 years old, built on top of macOS, which is 24 years old, built on top of NeXTSTEP, which is 36 years old, built on top of BSD, which is 47 years old. We’re very far from greenfield.

teeray 3 hours ago | parent | prev | next [-]

> LLMs aren’t exactly known for writing extremely optimized code.

They are trained on everything, and as a result write code like the average developer on the Internet.

robinwassen 2 hours ago | parent | prev [-]

They kind of do if you prompt them. As a POC, I had mine reimplement the Windows calculator (almost fully feature-complete) in Rust, running in 2 MB of RAM instead of the 40 MB or whatever the Win 11 version uses.

A handwritten C implementation would most likely be better, but there is so much to gain from just slaughtering the abstraction bloat that it does not really matter.

alpaca128 3 hours ago | parent | prev [-]

LLMs are trained on currently existing code.