| |
| ▲ | righthand 7 days ago | parent | next [-] | | “AI is getting better rapidly” is a false premise, as AI is a large domain and there is no way to quantify “better” against that entire domain. “LLMs are improving rapidly during a short period in which they are gaining popularity” is more accurate. LLMs getting better != a path to AGI. | | |
| ▲ | metalman 7 days ago | parent [-] | | The same is true of solar eclipses: there are no partial eclipses. Until the very last instant of the moon covering the sun, it is far too bright to look at, and then the stars come out, the solar flares are visible, and the birds sing their evening songs.
And here I have told you of it, but at best it will be a hint.
AI is worse, much worse, as we have our own intelligence but can’t even offer a hint of where that line is, how to cross it, or where to go to see it. |
| |
| ▲ | backpackviolet 7 days ago | parent | prev | next [-] | | > "AI is getting better rapidly" … is it? I hear people saying that. I see “improvement”: the art generally has the right number of fingers more often, the text looks like text, the code agents don’t write stuff that even the linter says is wrong. But I still see the wrong number of fingers sometimes. I still see the chat bots count the wrong number of letters in a word. I still see agents invent libraries that don’t exist. I don’t know what “rapid” is supposed to mean here. It feels like Achilles and the Tortoise and also has the energy costs of a nation-state. | | |
| ▲ | righthand 7 days ago | parent [-] | | Agreed, there really aren’t any metrics that indicate this is true, considering many models are still too complex to run locally. LLMs are getting better for the corporations that sell access to them, not necessarily for the people who use them. |
| |
| ▲ | camillomiller 7 days ago | parent | prev [-] | | Compare Altman’s outlandish claims about GPT-5 with the reality of this update. Do you think they square in any reasonable way? | | |
| ▲ | bpodgursky 7 days ago | parent [-] | | Please, please seriously think back to your 2020 self, and think about whether your 2020 self would be surprised by what AI can do today. You've frog-boiled yourself into timelines where "no WORLD-SHAKING AI launch in the past 4 months" means "AI is frozen". In 4 months, you will be shocked if AI doesn't have a major improvement every 2 months. In 6 months, you will be shocked if it doesn't have a major update every month. It's hard to see an exponential curve while you're on it; I'm not trying to fault you here. But it's really important to stretch yourself to try. | | |
| ▲ | backpackviolet 7 days ago | parent | next [-] | | I’m still surprised by what AI can do. It’s amazing. … but I still have to double-check when it’s important that I get the right answer, I still have to review the code it writes, and I’m still not sure there is enough business to cover what it will actually cost to run once it needs to pay for itself. | |
| ▲ | th0ma5 7 days ago | parent | prev | next [-] | | To be honest, I had the right idea back then... This technology has fundamental qualities that mean its token predictions are only statistically probable, and therefore sometimes inaccurate. They aren't even trying to change this situation, other than finding more data to train on, saying you have to keep stacking more layers of models, or saying it is the user's responsibility. There's been the obvious notion that digitizing the world's information is not enough, and that hasn't changed. | |
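(A minimal sketch of what “only statistically probable” means, assuming a toy vocabulary and invented probabilities rather than any real model: generation samples from a distribution over next tokens, so a plausible-but-wrong completion is always a possible outcome.)

    import random

    # Toy next-token distribution after a prompt like "The capital of Australia is".
    # Tokens and probabilities are invented purely for illustration.
    next_token_probs = {
        "Canberra": 0.55,    # correct
        "Sydney": 0.35,      # plausible but wrong
        "Melbourne": 0.10,   # plausible but wrong
    }

    def sample_token(probs):
        # Draw one token in proportion to its probability, which is roughly
        # what an LLM decoder does when sampling at temperature 1.
        tokens, weights = zip(*probs.items())
        return random.choices(tokens, weights=weights, k=1)[0]

    # Even when the correct token is the most likely one, a wrong completion
    # still appears a sizable fraction of the time.
    samples = [sample_token(next_token_probs) for _ in range(1000)]
    print("wrong completions:", sum(t != "Canberra" for t in samples), "out of 1000")

(Greedy decoding, always taking the most probable token, would pick “Canberra” here, but the model is still only ranking tokens by probability; nothing in the mechanism checks facts.)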
| ▲ | jononor 7 days ago | parent | prev | next [-] | | I for one am quite surprised. Sometimes impressed. But also often frustrated. And occasionally disappointed. Sometimes worried about the negative follow-on effects. Working with current LLMs spans the whole gamut...
But for coding we are at the point where even the current level is quite useful. And as the tools/systems get better, the usefulness is going to increase quite a bit, even if models improve slowly from this point on. It will impact the whole industry over the coming years, and since software is eating the world, it will impact many other industries as well.
Exponential? Perhaps in the same way computers and the Internet have been exponential: cost per X (say, per token) will probably go down exponentially over the coming years and decades, the same way cost per FLOP or per megabyte transferred went down. But those exponential gains did not result in exponential growth in productivity, or if they did, the exponent is much, much lower. And I suspect it will likely be the same for artificial intelligence. | |
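(A rough illustration of that decoupling, with numbers that are entirely made up for the sake of the arithmetic rather than measurements: if cost per token halves every year while productivity grows a few percent per year, the two exponents diverge sharply over a decade.)

    # Hypothetical figures for illustration only: cost per million tokens
    # halving each year vs. a productivity index growing 3% per year.
    cost_per_m_tokens = 10.0   # dollars, made-up starting point
    productivity_index = 1.0   # arbitrary baseline

    for year in range(1, 11):
        cost_per_m_tokens /= 2        # exponential decline, like cost per FLOP
        productivity_index *= 1.03    # much smaller exponent on the output side
        print(f"year {year:2d}: ${cost_per_m_tokens:8.4f}/M tokens, "
              f"productivity index {productivity_index:.2f}")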
| ▲ | righthand 7 days ago | parent | prev [-] | | What if I’ve not been impressed by handing a bunch of people a spam bot tuned to educational materials? Am I frog-boiled? Who cares about the actual advancement of this singular component if I was never impressed? You assume everyone is “impressed”. |
|
|
|