asadm 5 days ago

I think you and I are having very different experiences with these copilots/agents. So I have questions for you: how do you:

- generate new modules/classes in your projects

- integrate module A into module B, or entire codebase A into codebase B?

- get someone's GitHub project up and running on your machine? Do you manually fiddle with cmakes and npms?

- convert an idea or plan.md or a paper into working code?

- fix flakes, fix test<->code discrepancies, or increase coverage, etc.?

If you do all this manually, why?

skydhash 5 days ago | parent | next [-]

> generate new modules/classes in your projects

If it's formulaic enough, I will use the editor templates/snippets generator. Or write a code generator (if it involves a bunch of files). If it's not, I probably have another class I can copy and strip out (especially in UI and CRUD).
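
For what it's worth, the "write a code generator" route can be a one-page script. A minimal sketch (the generateCrudModule name and the file layout are invented for illustration, not taken from any real project):

  // Hypothetical generator that stamps out the formulaic files for one entity;
  // only the entity name varies, the rest is a fixed template.
  import { mkdirSync, writeFileSync } from "node:fs";

  function capitalize(s: string): string {
    return s.charAt(0).toUpperCase() + s.slice(1);
  }

  function generateCrudModule(name: string): void {
    const dir = `src/${name}`;
    mkdirSync(dir, { recursive: true });
    writeFileSync(
      `${dir}/${name}.service.ts`,
      [
        `export class ${capitalize(name)}Service {`,
        `  findAll() { /* query all ${name}s */ }`,
        `  findOne(id: string) { /* query one ${name} by id */ }`,
        `}`,
        ``,
      ].join("\n"),
    );
  }

  generateCrudModule("invoice"); // writes src/invoice/invoice.service.ts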

> integrate module A into module B

If it can't be done easily, that's a sign of a less-than-optimal API.

> entire codebase A into codebase B

Is that a real need?

> get someones github project up and running on your machine, do you manually fiddle with cmakes and npms

If the person can't be bothered to provide proper documentation, why should I run the project? But in practice, I will look at the AUR (Arch Linux) or a Homebrew formula if someone has already done the first job of figuring out dependency versions. If there's a Dockerfile, I will use that instead.

> convert an idea or plan.md or a paper into working code?

Iteratively. First get a hello world or something working, then mow down the task list.

> Fix flakes, fix test<->code discrepancies or increase coverage etc

Either the test is wrong or the code is wrong. Figure out which and rework it. The figuring-out part always takes longer, as you will need to ask around.

> If you do all this manually, why?

Because when something happens in prod, you really don't want that feeling of being the last one who touched that part of the code while having no idea what has changed.

frakt0x90 5 days ago | parent | prev | next [-]

To me, using AI to convert an idea or paper into working code is outsourcing the only enjoyable part of programming to a machine. Do we not appreciate problem solving anymore? Wild times.

mackeye 5 days ago | parent | next [-]

I'm an undergrad, so when I need to implement a paper, the idea is that I'm supposed to learn something from implementing it. I feel fortunate that AI is not yet effective enough to let me be lazy and skip that process, lol

craftkiller 5 days ago | parent [-]

When I was younger, we all had to memorize phone numbers. I still remember those numbers (even the defunct ones) but I haven't learned a single new number since getting a cellphone.

When I was younger, I had to memorize how to drive to work/the grocery store/New Jersey. I still remember those routes, but I haven't learned a single new route since getting a smartphone.

Are we ready to stop learning as programmers? I certainly am not and it sounds like you aren't either. I'll let myself plateau when I retire or move into management. Until then, every night debugging and experimenting has been building upon every previous night debugging and experimenting, ceaselessly progressing towards mastery.

tracker1 5 days ago | parent | next [-]

I can largely relate... that said, I rarely rely on my phone for remembering routes to places I've been before. It does help that I've lived in different areas of my city and suburbs (Phoenix) so I'm generally familiar with most of the main streets, even if I haven't lived on a given side of town in decades.

The worst is when I get inclined to go to a specific restaurant I haven't been to in years and it's completely gone. I've started to look online to confirm before driving half an hour or more.

fapjacks 5 days ago | parent | prev [-]

I noticed this also, and ever since, I've made it a point to keep my SO's number and my best friend's number memorized.

mirkodrummer 5 days ago | parent | prev | next [-]

Outsourcing to a parrot on steroids which will make mistakes, produce stale, ugly UI with 100px border radius, 50px padding, and rainbow hipster shadows, write code biased towards low-quality training data, and so on. It's the perfect recipe for disaster.

xpe 5 days ago | parent [-]

Over the top humor duly acknowledged.

Disastrous? Quite possibly, but my worries rest on different concerns.

Almost everything changes, so isn’t it better to rephrase these statements as metrics to avoid fixating on one snapshot in an evolving world?

As the metrics get better, what happens? Do you still have objections? What objections remain as AI capabilities get better and better without limit? The growth might be slow or irregular, but there are many scenarios where AIs reach the bar where they are better at almost all knowledge work.

Stepping back, do you really think of AI systems as stochastic parrots? What does this metaphor buy you? Is it mostly a card you automatically deal out when you pattern-match on something? Or does it serve as a reusable engine for better understanding the world?

We’ve been down this road; there is already much HN commentary on the SP metaphor. (Not that I recommend HN for this kind of thing. This is where I come to see how a subset of tech people are making sense of it, often imperfectly with correspondingly inappropriate overconfidence.)

TLDR: smart AI folks don’t anchor on the stochastic parrots metaphor. It is a catchy phrase and helped people’s papers get some attention, but it doesn’t mean what a lot of people think it means. Easily misunderstood, it serves as a convenient semantic stop sign so people don’t have to dig in to the more interesting aspects of modern AI systems. For example: (1) transformers build conceptual models of language that transcend any particular language. (2) They also build world models with spatial reasoning. (3) Many models are quite resilient to low quality training data. And more.

To make this very concrete: under the assumption of universal laws of physics, people are just following the laws of physics, and to a first approximation, our brains are just statistical pattern matchers. By this definition, humans would also be "stochastic parrots". I go to all this trouble to show that this metaphor doesn't cut to the heart of the matter. There are clearer questions to ask; they require getting a lot more specific about various forms and applications of intelligent behavior. For example:

- under what circumstances does self-play lead to superhuman capability in a particular domain?

- what limits exist (if any) in the self-supervised training paradigm used for sequential data? If a transformer trained in this way can write valid programs, then it can create almost any Turing machine, limited only by time, space, and energy. What more could you want? (Lots, but I'm genuinely curious as to people's responses after reflecting on these.)

jeremyjh 5 days ago | parent | next [-]

Until the thing can learn on its own and advance its capabilities to the same degree that a junior developer can, it is not intelligent enough to do that work. It doesn't learn our APIs, it doesn't learn our business domain, it doesn't learn from the countless mistakes I correct it on. What we have now is interesting; it is helpful sometimes and wasteful at others. It is not intelligent.

xpe 4 days ago | parent | next [-]

> It is not intelligent.

Which of the following would you agree to... ?

1. There is no single bar for intelligence.

2. Intelligence is better measured on a scale than with 1 bit (yes/no).

3. Intelligence is better considered as having many components instead of just one. When people talk about intelligence, they often mean different things across domains, such as emotional, social, conceptual, spatial, kinetic, sensory, etc.

4. Many researchers have looked for -- and found -- in humans, at least, some notions of generalized intellectual capability that tends to help across a wide variety of cognitive tasks.

If some of these make sense, I suggest it would be wise to conclude:

5. Reasonable people accentuate different aspects and even definitions of intelligence.

6. Expecting a yes/no answer for "is X intelligent?" without considerable explanation is approximately useless. (Unless it is a genuinely curious opener for an in-depth conversation.)

7. Asking "is X intelligent?" tends to be a poorly framed question.

4 days ago | parent [-]
[deleted]
xpe 4 days ago | parent | prev | next [-]

> Until the thing can learn on its own and advance its capabilities to the same degree that a junior developer can, it is not intelligent enough to do that work.

This confuses intelligence with memory (or state) which tends to enable continuous learning.

xpe 2 days ago | parent | next [-]

Update: it might have been clearer and more helpful if I wrote this instead…

The idea of intelligence stated above seems to combine computation, memory, and self-improvement. These three concepts (as I understand them) are distinct and logically decoupled.

For example, in the context of general agents, computational ability can change without affecting memory capability. Also, high computational ability does not necessarily confer self-improvement abilities. Having more memory does not necessarily benefit self-improvement.

In the case of biology, it is possible that self improvement demands energy savings and therefore sensory processing degradation. This conceptually relates to a low power CPU mode or a gasoline engine that can turn off some cylinders.

jeremyjh 4 days ago | parent | prev [-]

No confusion here.

This is just semantics, but you brought it up. The very first definition of intelligence provided by Webster:

1.a. the ability to learn or understand or to deal with new or trying situations : reason also : the skilled use of reason

https://www.merriam-webster.com/dictionary/intelligence

xpe 2 days ago | parent | next [-]

A time traveler from the future has recommended we both read or reread “Disputing Definitions” by Yudkowsky (2008).

Some favorite quotes of mine from it:

> Dictionary editors are historians of usage, not legislators of language. Dictionary editors find words in current usage, then write down the words next to (a small part of) what people seem to mean by them.

> Arguing about definitions is a garden path; people wouldn't go down the path if they saw at the outset where it led.

>> Eliezer: "Personally I'd say that if the issue arises, both sides should switch to describing the event in unambiguous lower-level constituents, like acoustic vibrations or auditory experiences. Or each side could designate a new word, like 'alberzle' and 'bargulum', to use for what they respectively used to call 'sound'; and then both sides could use the new words consistently. That way neither side has to back down or lose face, but they can still communicate. And of course you should try to keep track, at all times, of some testable proposition that the argument is actually about. Does that sound right to you?"

xpe 2 days ago | parent | prev [-]

Ok, let’s work with that definition for this subthread. Even so, one can satisfy that definition without having the ability to:

> “advance its capabilities”

(your phrase)

An example would be a person with damaged short-term memory. And (I'm pretty sure) an AI system that keeps no history and cannot modify itself.

xpe 4 days ago | parent | prev [-]

Another thing that jumps out to me is just how fluidly people redefine "intelligence" to mean "just beyond what machines today can do". I can't help but wonder how much your definition has changed. What would happen if we reviewed your previous opinions, commentary, thoughts, etc... would your time-varying definitions of "intelligence" be durable and consistent? Would this sequence show movement towards a clearer and more testable definition over time?

My guess? The tail is wagging the dog here -- you are redefining the term in service of other goals. Many people naturally want humanity to remain at the top of the intellectual ladder and will distort reality as needed to stay there.

My point is not to drag anyone through the mud for doing the above. We all do it to various degrees.

Now, for my sermon. More people need to wake up and realize that machine intelligence has no physics-based constraints preventing it from surpassing us.

A. Businesses will boom and bust. Hype will come and go. Humanity has an intrinsic drive to advance thinking tools. So AI is backed by huge incentives to continue to grow, no matter how many missteps economic or otherwise.

B. The mammalian brain is an existence proof that intelligence can be grown / evolved. Homo sapiens could have bigger brains if not for birth-canal size constraints and energy limitations.

C. There are good reasons to suggest that designing an intelligent machine will be more promising than evolving one.

D. There are good reasons to suggest silicon-based intelligence will go much further than carbon-based brains.

E. We need to stop deluding ourselves by moving the goalposts. We need to acknowledge reality, for this is reality we are living in, and this is reality we can manipulate.

Let me know if you disagree with any of the sentences above. I'm not here to preach to the void.

xpe 4 days ago | parent | next [-]

> A. Businesses will boom and bust. Hype will come and go. Humanity has an intrinsic drive to advance thinking tools. So AI is backed by huge incentives to continue to grow, no matter how many missteps economic or otherwise.

Corrected to:

A. Businesses will boom and bust. Hype will come and go. Nevertheless, humanity seems to have an intrinsic drive to innovate, which means pushing the limits of technology. People will seek more intelligent machines, because we perceive them as useful tools. So AI is pressurized by long-running, powerful incentives, no matter how many missteps economic or otherwise. It would take a massive and sustained counter-force to prevent a generally upwards AI progression.

jeremyjh 4 days ago | parent | prev [-]

Did Webster also redefine the term in service of other goals?

1. the ability to learn or understand or to deal with new or trying situations

https://www.merriam-webster.com/dictionary/intelligence

xpe a day ago | parent | next [-]

This also reveals a failure mode in conversations that might go as follows. You point to some version of Webster’s dictionary, but I point to Stuart Russell (an expert in AI). If this is all we do, it is nothing more than an appeal to authority and we don’t get far.

xpe a day ago | parent | prev [-]

This misunderstands the stated purpose of a dictionary: to catalog word usage, not to define an ontology that others must follow. Usage precedes cataloging.

ITjournalist 5 days ago | parent | prev | next [-]

Regarding the phrase statistical parrot, I would claim that statistical parrotism is an ideology. As with any ideology, what we see is a speciation event. The overpopulation of SEO parrots has driven out a minority of parrots who now respecialize in information dissemination rather than information pollution, leaving their former search-engine ecological niche and settling in a new one that allows them to operate at a higher level of density, compression and complexity. Thus it's a major step in evolution, but it would be a misunderstanding to claim that evolution is the emergence of intelligence.

mirkodrummer 4 days ago | parent [-]

The overpopulation of AI BS, prophetic predictions, pseudo-philosophers/anthropologists, and so on that this site has been contaminated with is astonishing.

mirkodrummer 5 days ago | parent | prev [-]

LLMs ARE stochastic parrots; throw whatever ChatGPT slop answer at me you like, but facts are facts.

xpe 4 days ago | parent [-]

> Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.

https://news.ycombinator.com/newsguidelines.html

mirkodrummer 4 days ago | parent [-]

Even then, facts remain facts ;)

vehemenz 5 days ago | parent | prev | next [-]

Drawing blueprints is more enjoyable than putting up drywall.

jeremyjh 5 days ago | parent [-]

The code is the blueprint.

“The final goal of any engineering activity is some type of documentation. When a design effort is complete, the design documentation is turned over to the manufacturing team. This is a completely different group with completely different skills from the design team. If the design documents truly represent a complete design, the manufacturing team can proceed to build the product. In fact, they can proceed to build lots of the product, all without any further intervention of the designers. After reviewing the software development life cycle as I understood it, I concluded that the only software documentation that actually seems to satisfy the criteria of an engineering design is the source code listings.” - Jack Reeves

asadm 5 days ago | parent | prev [-]

Depends. If I'm converting it so I can use it in my project, I don't care who writes it, as long as it works.

pnathan 5 days ago | parent | prev | next [-]

I'm pretty fast at coding and know what I'm doing. My ideas are too complex for Claude to just crap out. If I'm really tired I'll use Claude to write tests. Mostly they aren't really good, though.

AI doesn't really help me code vs me doing it myself.

AI is better doing other things...

asadm 5 days ago | parent [-]

> AI is better doing other things...

I agree. For me the other things are non-business logic, build details, duplicate/bootstrap code that isn't exciting.

mackeye 5 days ago | parent | prev | next [-]

> how do you convert a paper into working code?

This is something I've found LLMs almost useless at. Consider https://arxiv.org/abs/2506.11908 --- the paper explains its proposed methodology pretty well, so I figured this would be a good LLM use case. I tried to get a prototype running with Gemini 2.5 Pro, but got nowhere even after a couple of hours, so I wrote it by hand. And I write a fair bit of code with LLMs, but it's primarily questions about best practices or simple errors, and I copy/paste from the web interface, which I guess is no longer in vogue. That being said, would Cursor excel here at a one-shot (or even a few hours of back-and-forth), elegant prototype?

asadm 5 days ago | parent [-]

I have found that whenever it fails for me, it's likely because I was trying to one-shot the solution; I retry by breaking the problem into smaller chunks or doing planning work with the Gemini CLI first.

mackeye 5 days ago | parent [-]

Smaller chunks work better, but IME it takes as long as writing it manually that way, unless the chunk is very simple, e.g. essentially API examples. I tend not to use LLMs for planning because that's the most fun part for me :)

chamomeal 5 days ago | parent | prev | next [-]

For stuff like generating and integrating new modules, the helpfulness of AI varies wildly.

If you're using NestJS, which is great but also comically bloated with boilerplate, AI is fantastic. When my code is like 1 line of business logic per 6 lines of boilerplate, then yes please, AI, do it all for me.
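
To make that ratio concrete, here is a hedged sketch of a single trivial NestJS endpoint (the PriceService/PriceController names are invented for illustration): one line of business logic wrapped in decorator and module boilerplate that an LLM can stamp out reliably.

  import { Controller, Get, Injectable, Module, Param } from '@nestjs/common';

  @Injectable()
  export class PriceService {
    quote(sku: string): number {
      return sku.length * 100; // the single line of actual business logic
    }
  }

  @Controller('prices')
  export class PriceController {
    // NestJS injects the service through the constructor
    constructor(private readonly prices: PriceService) {}

    @Get(':sku')
    getQuote(@Param('sku') sku: string): number {
      return this.prices.quote(sku);
    }
  }

  @Module({ controllers: [PriceController], providers: [PriceService] })
  export class PriceModule {}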

Projects with less cruft benefit less. I’m working on a form generator mini library, and I struggle to think of any piece I would actually let AI write for me.

Similar situation with tests. If your tests are mostly “mock x y and z, and make sure that this spied function is called with this mocked payload result”, AI is great. It’ll write all that garbage out in no time.
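
Something like this hypothetical Jest test (the ./billing and ./mailer modules are assumed for the sake of the example) is the mock-and-spy style an LLM can churn out in no time:

  import { sendInvoice } from './billing'; // assumed module under test
  import { mailer } from './mailer';       // assumed dependency to be mocked

  // Replace the real mailer with a spy so the test only checks the call.
  jest.mock('./mailer', () => ({
    mailer: { send: jest.fn().mockResolvedValue(undefined) },
  }));

  test('sendInvoice emails the rendered invoice', async () => {
    await sendInvoice({ id: 'inv-1', total: 4200 });
    expect(mailer.send).toHaveBeenCalledWith(
      expect.objectContaining({ subject: expect.stringContaining('inv-1') }),
    );
  });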

If your tests are doing larger chunks of biz logic, like running against a database, or if you're doing some kind of generative property-based testing, LLMs are probably more trouble than they're worth.
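
By contrast, here is a hedged sketch of a property-based test using fast-check (the dedupe helper is invented for illustration). Most of the work is choosing the invariant, which is the part that's hard to delegate:

  import fc from 'fast-check';

  // Invented helper under test: remove duplicates while keeping every member.
  function dedupe(xs: number[]): number[] {
    return [...new Set(xs)];
  }

  test('dedupe keeps every element and removes duplicates', () => {
    fc.assert(
      fc.property(fc.array(fc.integer()), (xs) => {
        const out = dedupe(xs);
        return xs.every((x) => out.includes(x)) && new Set(out).size === out.length;
      }),
    );
  });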

stevenbedrick 5 days ago | parent | prev | next [-]

To do those things, I do the same thing I've been doing for the thirty years that I've been programming professionally: I spend the (typically modest) time it takes to learn to understand the code that I am integrating into my project well enough to know how to use it, and I use my brain to convert my ideas into code. Sometimes this requires me to learn new things (a new tool, a new library, etc.). There is usually typing involved, and sometimes a whiteboard or notebook.

Usually it's not all that much effort to glance over some other project's documentation to figure out how to integrate it, and as to creating working code from an idea or plan... isn't that a big part of what "programming" is all about? I'm confused by the idea that suddenly we need machines to do that for us: at a practical level, that is literally what we do. And at a conceptual level, the process of trying to reify an idea into an actual working program is usually very valuable for iterating on one's plans, and identifying problems with one's mental model of whatever you're trying to write a program about (c.f. Naur's notions about theory building).

As to why one should do this manually (as opposed to letting the magic surprise box take a stab at it for you), a few answers come to mind:

1. I'm professionally and personally accountable for the code I write and what it does, and so I want to make sure I actually understand what it's doing. I would hate to have to tell a colleague or customer "no, I don't know why it did $HORRIBLE_THING, and it's because I didn't actually write the program that I gave you, the AI did!"

2. At a practical level, #1 means that I need to be able to be confident that I know what's going on in my code and that I can fix it when it breaks. Fiddling with cmakes and npms is part of how I become confident that I understand what I'm building well enough to deal with the inevitable problems that will occur down the road.

3. Along similar lines, I need to be able to say that what I'm producing isn't violating somebody's IP, and to know where everything came from.

4. I'd rather spend my time making things work right the first time, than endlessly mess around trying to find the right incantation to explain to the magic box what I want it to do in sufficient detail. That seems like more work than just writing it myself.

Now, I will certainly agree that there is a role for LLMs in coding: fancier auto-complete and refactoring tools are great, and I have also found Zed's inline LLM assistant mode helpful for very limited things (basically as a souped-up find and replace feature, though I should note that I've also seen it introduce spectacular and complicated-to-fix errors). But those are all about making me more efficient at interacting with code I've already written, not doing the main body of the work for me.

So that's my $0.02!

craftkiller 5 days ago | parent | prev [-]

> generate new modules/classes in your projects

I type:

  class Foo:

or:

  pub(crate) struct Foo {}

> integrate module A into module B

What do you mean by this? If you just mean moving things around then code refactoring tools to move functions/classes/modules have existed in IDEs for millennia before LLMs came around.

> get someones github project up and running on your machine

docker

> convert an idea or plan.md or a paper into working code

I sit in front of a keyboard and start typing.

> Fix flakes, fix test<->code discrepancies or increase coverage etc

I sit in front of a keyboard, read, think, and then start typing.

> If you do all this manually, why?

Because I care about the quality of my code. If these activities don't interest you, why are you in this field?

asadm 5 days ago | parent [-]

> If these activities don't interest you, why are you in this field?

I am in this field to deliver shareholder value. Writing individual lines of code, unless absolutely required, is below me?

craftkiller 5 days ago | parent | next [-]

Ah well then, this is the cultural divide that has been forming since long before LLMs happened. Once software engineering became lucrative, people started entering the field not because they're passionate about computers or because they love the logic/problem solving but because it is a high paying, comfortable job.

There was once a time when only passionate people became programmers, before y'all ruined it.

asadm 5 days ago | parent [-]

I think you are mis-categorizing me. I have been programming for fun since I was a kid. But that doesn't mean I have to solve the mundane, boring stuff myself when I know I can get someone else, or AI, to figure those parts out so I can do the fun stuff.

craftkiller 5 days ago | parent [-]

Ah perhaps. Then I think we had different understandings of my "why are you in this field?" question. I would say that my day job is to "deliver shareholder value"[0] but I'd never say that is why I am in this field, and it sounds like it isn't why you're in this field either since I doubt you were thinking about shareholders when you were programming as a kid.

[0] Actually, I'd say it is "to make my immediate manager's job easier", but if you follow that up the org chart eventually it ends up with shareholders and their money.

asadm 5 days ago | parent [-]

Well sure, I may have oversimplified it. The shareholder is usually me :)

barnabee 5 days ago | parent | prev | next [-]

Every human who defines the purpose of their life's work as "to deliver shareholder value" is a failure of society.

How sad.

asadm 5 days ago | parent [-]

As opposed to fluff like "make the world a better place"?

barnabee 4 days ago | parent [-]

Defining one's worth by shareholder value is pretty dystopian, so yeah, even "make the world a better place" is preferable, at least if whoever said it really means it…

5 days ago | parent | prev [-]
[deleted]