| ▲ | skydhash 7 days ago |
| Programmers are mostly translating business rules into the very formal process execution of the computer world. And you need to know both what the rules mean and how the computer works (or at least how the abstracted version you’re working with works). The translation is messy at first, which is why you need to revise it again and again, especially when later rules come along challenging all the assumptions you’ve made or even contradicting each other. Even translations between human languages (which allow for ambiguity) can be messy. Imagine if the target language is for a system that will do exactly as told unless someone has qualified those actions as bad. |
|
| ▲ | noduerme 6 days ago | parent | next [-] |
| Good programmers working hand in glove with good companies do much more than this. We question the business logic itself and suggest non-technical, operational solutions to user issues before we take a hammer to the code. Also, as someone else said, we consider the root causes of an issue, whether those lie in code logic or business ops or some intersection between the two. When I save twenty hours of a client's money and my own time, by telling them that a new software feature they want would be unnecessary if they changed the order of questions their employees ask on the phone, I've done my job well. By the same token, if I'm bored and find weird stuff in the database indicating employees tried to perform the same action twice or something, that is something that can be solved with more backstops and/or a better UI. Coding business logic is not a one-way street. Understanding the root causes and context of issues in the code itself is very hard and requires you to have a mental model of both domains. Going further and actually requesting changes to the business logic which would help clean up the code requires a flexible employer, but also an ability to think on a higher order than simply doing some CRUD tasks. The fact that I wouldn't trust any LLM to touch any of my code in those real-world cases makes me think that most people who are touting them are not, in fact, writing code at the same level or doing the same job I do. Or don't understand it very well. |
| |
| ▲ | shinycode 6 days ago | parent | next [-] | | True, and LLMs have no incentive to avoid writing code. It’s even worse: they are « paid » by the amount of code they generate. So the default behavior is to avoid asking questions to refine the need. They thrive on blurry and imprecise prompts because in any case they’ll generate thousands of lines of code, regardless of their pertinence.
Many people confirmed that in their experience.
I’ve never seen an LLM step back, ask questions, and only then code (or decline to code). It’s by design a choice to generate as much as possible, because of money. So right now an LLM and the developer you describe here are two very different things, and an LLM will, by design, never replace you. | |
| ▲ | danielrico 6 days ago | parent | prev | next [-] | | > When I save twenty hours of a client's money and my own time, by telling them that a new software feature they want would be unnecessary if they changed the order of questions their employees ask on the phone, I've done my job well. I like to explain my work as "do whatever is needed to do as little work as possible". Be it by improving logs, improving the architecture, pushing responsibilities around, or rejecting some features. | | |
| ▲ | withinboredom 6 days ago | parent [-] | | "The best programmers are lazy, or more accurately, they work hard to be as lazy as possible." -- CS101, first day | | |
| ▲ | K0balt 6 days ago | parent [-] | | The most clever lines of code are the ones you don’t write. Often this is a matter of properly defining the problem in terms of data structure. LLMs are not at all good at seeing that a data structure is inside out and that by turning it right side in, we can fix half the problems. More significantly though, OP seems right on to me. The basic functionality of LLMs is handy for a code writing assistant, but does not replace a software engineer, and is not ever likely to, no matter how many janky accessories we bolt on. LLMs are fundamentally semantic pattern matching engines, and are only problem solvers in the context of problems that are either explicitly or implicitly defined and solved in their training data. They will always require supervision because there is fundamentally no difference between a useful LLM output and a “hallucination” except the utility rating that a human judge applies to the output. LLMs are good at solving fully defined, fully solved problems. A lot of work falls into that category, but some does not. | | |
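As a toy illustration of the “inside out” data structure point (every name below is made up for the example, not taken from any real codebase): the same question answered by re-scanning everything on each query versus by inverting the structure once.

```python
from collections import defaultdict

# Hypothetical data: orders, each carrying a list of product names.
orders = [
    {"id": 1, "items": ["apple", "pear"]},
    {"id": 2, "items": ["pear", "plum"]},
    {"id": 3, "items": ["apple"]},
]

# "Inside out": every query re-scans every order.
def orders_containing_scan(product: str) -> list[int]:
    return [o["id"] for o in orders if product in o["items"]]

# "Right side in": invert the structure once, then each lookup is a dict access.
index: dict[str, list[int]] = defaultdict(list)
for o in orders:
    for item in o["items"]:
        index[item].append(o["id"])

def orders_containing_indexed(product: str) -> list[int]:
    return index.get(product, [])

assert orders_containing_scan("pear") == orders_containing_indexed("pear") == [1, 2]
```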
| ▲ | noduerme 5 days ago | parent [-] | | >> The most clever lines of code are the ones you don’t write. Just to add, I think there are three things that LLMs don't address here, but maybe it's because they're not being asked the broader questions: 1. What are some reasonable out-of-band alternatives to coding the thing I'm being asked to code? 2. What kind of future modifications might the client want, and how can we ensure this mod will accommodate those without creating too many new constraints, but also without over-preparing for something that might not happen? 3. What is the client missing that we're also missing? This could be as simple as forgetting that under some circumstances, the same icon is being used in a UI to mean something else. Or that an error box might obscure the important thing that just triggered the error. Or that six years ago, we created a special user level called "-1" that is a reserved level for employees in training, and users on that level can't write to certain tables. And asking the question whether we want them to be able to train on the new feature, and if so, whether there are exceptions to that which would open the permissions on the DB but restrict some operations in the middleware. "What are we missing" is 95% of my job, and unit tests are useless if you don't know all the potential valid or invalid inputs. |
|
|
| |
| ▲ | 1dom 6 days ago | parent | prev | next [-] | | I think this is a fair and valuable comment. The only part I think could be more nuanced is: > The fact that I wouldn't trust any LLM to touch any of my code in those real-world cases makes me think that most people who are touting them are not, in fact, writing code at the same level or doing the same job I do. Or don't understand it very well. I agree with this specifically for agentic LLM use. However, I've personally increased my coding speed and quality with LLMs for sure, using purely local models as a really fancy autocomplete for 1 or 2 lines at a time. The rest of your comment is good, but the last paragraph to me reads like someone inexperienced with LLMs looking to find excuses to justify not being productive with them, when others clearly are. Sorry. | |
| ▲ | jlcummings 6 days ago | parent | prev | next [-] | | Being effective with LLM agents requires not just the ability to code or to appreciate nuance in libraries or business rules, but the ability and proclivity for pedantry. Dad-splain everything, always. And have boundless contextual awareness… dig a rabbit hole, but beware that you are in your own hole. At this point you can escape the hole, but you have to be purposefully aware of what guardrails and ladders you give the agent to evoke action. The better and more explicit the guardrails you provide, the more likely the agent is to do what is expected and honor the scope and context you establish. If you tell it to use silverware to eat, be assured that doesn't mean it will use it appropriately or idiomatically: it will try eating soup with a fork. Lastly, don't be afraid of commits and checkpoints, or of rejecting/rolling back proposed changes and restating or resetting the context. The agent might be the leading actor, but you are the director. When a scene doesn't play out, try it again after clarifying, or changing the camera perspective or lighting or lines, or cut/replace the scene entirely. | | |
| ▲ | cmsj 6 days ago | parent | next [-] | | I find that level of pedantry and hand-holding to be extremely tedious, and I frequently find myself just thinking fuck it, I'll write it myself and get what I want the first time. | | |
| ▲ | skydhash 6 days ago | parent [-] | | This. That’s why every programmer strives for a good architecture and writes tests. When you have that, and all your bug fixes and feature requests are only a small number of lines, that is pure bliss, even if it requires hours of reading and designing. Anything is better than dumping lots of lines. |
| |
| ▲ | dingi 4 days ago | parent | prev [-] | | Why would anyone bother at this point though? Tedious handholding and extra effort for code reviews. Just write the damn thing yourself. | | |
| ▲ | etherealG 19 hours ago | parent [-] | | Because once you figure out the correct way to handhold, you can automate it and the tediousness goes away. It’s only tedious once per codebase or task; then you find the less tedious recipe and you’re done. You can even get others to do the tedious part at their layer of abstraction so that you don’t have to anymore. Same as compilers, CPU design, or any other part of the stack lower than the one you’re using. |
|
| |
| ▲ | gxs 6 days ago | parent | prev | next [-] | | To be honest you sound super defensive, not just in the classic way of a programmer whose turf is being invaded, but also in the classic way of people who are reluctant to accept a new technology. This sentiment - a human will always be needed, there's no replacement for the human touch, the stakes are too high - is as old as time. You just said, quite literally, that people leveraging LLMs to code are not doing it at your level - that borders on hubris. The fact of the matter is that, like most tools, you get out of AI what you put into it. I know a lot of engineers, and this pride, this reluctance to accept the help, is super common. The best engineers, on the other hand, are leveraging this just fine - just another tool for them that speeds things up. | | |
| ▲ | geraldwhen 6 days ago | parent [-] | | Hubris? The offshore team submitting 2000 line nonsense PRs from AI is reality. We’re living it. We see it every day. The business leaders cannot be convinced that this isn’t making less skilled developers more productive. | | |
| ▲ | gibbitz 3 days ago | parent | next [-] | | Worth noting that there are business leaders who see high LOC and number of commits as metrics of good programmers. To them the 2000 LOC commits from offshore are proof that it's working. Sadly the proof that it's not will show in their sales and customer satisfaction if they keep producing their product long enough. For too long the business model in tech has been to get bought out so this doesn't often matter to business. | |
| ▲ | 6 days ago | parent | prev [-] | | [deleted] |
|
| |
| ▲ | danielbln 6 days ago | parent | prev [-] | | I'm not sure what any of what you just wrote has to do with LLMs. If you use LLMs to rubber duck or write tests/code, then all of the things you mentioned should still apply. That last logical leap, the fact that _you_ wouldn't trust LLM to touch your code means that people who do aren't at the same level as you is a fallacy. |
|
|
| ▲ | mgaunard 6 days ago | parent | prev | next [-] |
| That's not quite true; programmers adjust what the business rules should be as they write code for them. Those rules are also very fuzzy and only get defined more formally by the coding process. |
| |
| ▲ | area51org 6 days ago | parent | next [-] | | That seems very dependent on which company you work for. Many would not grant you that kind of flexibility. | | |
| ▲ | hansifer 6 days ago | parent | next [-] | | At their peril, because any set of rules, no matter how seemingly simple, has edge cases that only become apparent once we take on the task of implementing them at the code level into a functioning app. And that's assuming specs have been written up by someone who has made every effort to consider every relevant condition, which is never the case. | | |
| ▲ | tharkun__ 6 days ago | parent | next [-] | | And the example of "why is this 401 happening" is another one of those. The spec might have said to return a 401 both for not being authenticated and for not having enough privileges. But that's just plain wrong, and a proper developer would be allowed to change it. If you're not authenticating properly, you get a 401. That means you can't prove you're who you say you are. If you are past that, i.e. we know that you are who you say you are, then the proper return code is 403, which says "You are not allowed to access what you're trying to access, given who you are". Which, funnily enough, seems to be a very elusive concept to many humans as well, never mind an LLM. | | |
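A minimal sketch of that split, assuming a hypothetical allowed_users check (nothing here is tied to any particular framework):

```python
from http import HTTPStatus

def check_access(request_user, allowed_users):
    """request_user is None when no valid credentials were presented."""
    if request_user is None:
        # Authentication failed: we can't tell who you are.
        return HTTPStatus.UNAUTHORIZED  # 401
    if request_user not in allowed_users:
        # Authentication succeeded, authorization failed: we know who you are,
        # and you are not allowed to access this.
        return HTTPStatus.FORBIDDEN  # 403
    return HTTPStatus.OK  # 200

assert check_access(None, {"alice"}) == HTTPStatus.UNAUTHORIZED
assert check_access("bob", {"alice"}) == HTTPStatus.FORBIDDEN
assert check_access("alice", {"alice"}) == HTTPStatus.OK
```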
| ▲ | HeWhoLurksLate 3 days ago | parent [-] | | ...then there are the other fun ones, like not wanting to tell people things exist that they don't have access to, like Github returning 404 errors for private repositories you know exist when you aren't logged into an account that has access to them. | | |
| ▲ | tharkun__ 2 days ago | parent [-] | | That one at least makes sense, if you ask me. It's not just Github doing it. On the web side of things you'd return the same "no such thing here" page whether you lack access or it really doesn't exist. Leaking more info via the status code than the page shown in the browser would not be good. I.e. it's the appropriate thing to do if you're trying to prevent leakage of information, i.e. enumeration of resources. But you still should not return 401 for this. A 404 is the appropriate response for pretending that "it's just not there", if you ask me. And you can't return 404 when it's not there but 403 when you have no access, if enumeration is the concern. So for example, if you don't have access to, say, the settings of a repo you do have access to, a 403 is OK. No use pretending with a 404, because we all know the settings are just a feature of Github. However, pretending with a 404 that a repo which exists but which you don't have access to isn't there is appropriate, because otherwise you could prove the existence of "superSecretRepo123" simply by guessing and getting a 403 instead of a 404. |
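A sketch of that anti-enumeration behaviour with made-up data (the pattern described above, not GitHub's actual implementation):

```python
from http import HTTPStatus

# Hypothetical data for the sketch: private repos and who may read them.
PRIVATE_REPOS = {"superSecretRepo123": {"alice"}}

def repo_status(user: str, repo: str) -> HTTPStatus:
    readers = PRIVATE_REPOS.get(repo)
    if readers is None or user not in readers:
        # A repo that doesn't exist and one you may not see answer identically,
        # so existence can't be probed by comparing a 403 against a 404.
        return HTTPStatus.NOT_FOUND  # 404 either way
    return HTTPStatus.OK

assert repo_status("mallory", "superSecretRepo123") == HTTPStatus.NOT_FOUND
assert repo_status("mallory", "noSuchRepo") == HTTPStatus.NOT_FOUND
assert repo_status("alice", "superSecretRepo123") == HTTPStatus.OK
```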
|
| |
| ▲ | 6 days ago | parent | prev [-] | | [deleted] |
| |
| ▲ | motorest 6 days ago | parent | prev | next [-] | | > That seems very dependent on which company you work for. Many would not grant you that kind of flexibility. It really boils down to what scenario you have in mind. Developers do interact with product managers, and discussions do involve information flowing both ways. Even if a PM ultimately decides what the product should do, you as a developer have a say in the process and outcome. Also, there are always technological constraints, and sometimes even practical constraints are critical. A PM might want to push this or that feature, but if it's impossible to deliver by a specific deadline they have no alternative but to compromise, and the compromise is determined by what developers call out. | |
| ▲ | gregors 6 days ago | parent | prev [-] | | The majority of places I've worked don't adjust business rules on the fly because of flexibility. They do it because "we need this out the door next month". They need to ship and ship now. Asking clarifying questions at some of these dumpster fires is actually looked down upon, much less taking the time to write or even informally have a spec. |
| |
| ▲ | specialist 6 days ago | parent | prev | next [-] | | > adjust ... the business rules Successful projects do this. Ideally, front loaded. Unsuccessful projects attempt to reify the chaos. | |
| ▲ | nicbou 6 days ago | parent | prev [-] | | How does that work in an AI-supported development process? I'm a bit out of the loop since I left the industry. Usually there is a lot of back and forth over things like which fields go in a form, and whether asking for a last name will impact the conversion rate and so on. | | |
| ▲ | serpix 6 days ago | parent [-] | | Well, the AI will just steamroll through and will therefore go off the rails, just like a junior dev on a coding binge. | | |
| ▲ | gibbitz a day ago | parent [-] | | But all the senior business folks think AI can do no wrong and want to put it out the door anyway, assuming all the experienced engineers are just trying to get more money or something. |
|
|
|
|
| ▲ | benreesman 6 days ago | parent | prev | next [-] |
| This is a very common statement but doesn't match my experience at all, unless you expand "business rules" to mean "anything that isn't code already". There's plenty of that work, and it goes by many names ("enterprise", among others). But lots and lots and lots of programmers are concerned with using computers for computations: making things with the new hardware that you couldn't with the old hardware, for example. Embedded, cryptography, graphics, simulation, ML, drones, compilers and all kinds of other stuff are much more about resources than business logic. You can define business logic broadly enough to cover anything, I guess, but at some point it's no longer what you meant by it. |
|
| ▲ | physicsguy 7 days ago | parent | prev | next [-] |
| Yes although many software engineers try as hard as possible to avoid learning what the business problem is. In my experience though those people never make great engineers. |
| |
| ▲ | trimbybj 7 days ago | parent | next [-] | | Often those of us that do want to learn what the business problem is are not allowed to be involved in those discussions, for various reasons. Sometimes it's "Oh we can take care of that so you don't have to deal with it," and sometimes it's "Just build to this design/spec" and they're not used to engineers (the good ones) questioning things. | | |
| ▲ | marcus_holmes 6 days ago | parent [-] | | "Just shut up and push the nerd-buttons, nerd." I went and got an MBA to try and get around this. It didn't work. | | |
| ▲ | lisbbb 6 days ago | parent [-] | | I had a professor in grad school, Computer Engineering, that begged me not to get an MBA--he had worked in industry, particularly defense, and had a very low opinion of MBAs. I tend to agree nowadays. I really think the cookie-cutter "safe" approach that MBA types take, along with them maximizing profits using data science tools, has made the USA a worse place overall. | | |
|
| |
| ▲ | lisbbb 6 days ago | parent | prev | next [-] | | My problem was that the business problems were so tough on most of the gigs I had that it was next to impossible to build a solution for them! Dealing with medical claims in real time at volume was horrendous. | |
| ▲ | ruslan_sure 6 days ago | parent | prev | next [-] | | Understanding the business problem or goal is actually the context for correctly writing code. Without it, you start acting like an LLM that didn't receive all the necessary code to solve a task. When a non-developer writes code with an LLM, their ability to write good code decreases. But at the same time, it goes up thanks to more "business context." In a year or two, I imagine that a non-developer with a proper LLM may surpass a vanilla developer. | |
| ▲ | tempodox 6 days ago | parent | prev | next [-] | | Going by your first sentence, you must be working in a very bad environment. How can anyone solve a problem they don't understand? | | |
| ▲ | skydhash 6 days ago | parent [-] | | Hint: they don't. They usually code for the happy path and add edge cases as bugs are discovered in production. But after a while the happy path and the edge cases blend into a ball of mud that you need the correct incantation to get running, a logic maze that contradicts every piece of documentation you can find (tickets, emails). Then it quickly becomes something that people don't dare to touch. |
| |
| ▲ | pjmlp 6 days ago | parent | prev | next [-] | | Usually this only happens to those doing product development. When the employer's business isn't shipping software, engineers have no option other than to actually learn the business as well. | |
| ▲ | sodapopcan 7 days ago | parent | prev [-] | | I guess that really is a thing, eh? That concept is pretty foreign to me. How on earth are you supposed to do domain modelling if you don't understand the domain? | | |
| ▲ | victorbjorklund 6 days ago | parent [-] | | What percentage of software is domain modeled? It must be a small minority. | | |
| ▲ | perrylaj 5 days ago | parent | next [-] | | Nearly 100%. They don't call it that or use that term, and almost never _design_ thinking about the domain. But the absence of a formal 'domain model' still results in domain modeling - it's just done at the level of IC who may or may not have any awareness of the broader implications of the model they are creating. | |
| ▲ | alexanderchr 6 days ago | parent | prev | next [-] | | I’d say all (useful) software is modelling some domain. | |
| ▲ | pjmlp 6 days ago | parent | prev [-] | | Plenty if developed under consulting contract. |
|
|
|
|
| ▲ | nonethewiser 7 days ago | parent | prev | next [-] |
| >Software engineers are able to step back, think about the whole thing, and determine the root cause of a problem. Agree strongly, and I think this is basically what the article is saying as well about keeping a mental model of requirements/code behavior. We kind of already knew this was the hard part. How many times have you heard that once you get past junior level, the hard part is not writing the code, it's knowing what code to write? This realization is practically a rite of passage. Which raises the question of what the software engineering job looks like in the future. It definitely depends on how good the AI is. In the most simplistic case, AI can do all the coding right now and all you need is a task issue. And frankly probably a user-written (or at least reviewed, but probably written) test. You could make the issue and test upfront, farm out the PR to an agent, and manually approve when you see it passed the test case you wrote. In that case you are basically PM and QA. You are not even forming the prompt, just detailing the requirements. But as the tech improves, can all tasks fit into that model? Not design/architecture tasks - or at least not without a different task-completion model than the one described above. The window will probably grow, but it's hard to imagine that it will handle all pure coding tasks. Even for large tasks that theoretically can fit into that model, you are going to have to do a lot of thinking and testing and prototyping to figure out the requirements and test cases. In theory you could apply the same task/test process, but that seems like it would be too much structure and indirection to actually be helpful compared to knowing how to code. |
| |
| ▲ | ruslan_sure 6 days ago | parent [-] | | What if LLMs get 'a mental model of requirements/code behavior'? LLMs may have experts within them, each with its own specialty. You can even combine several LLMs, each doing its own thing: one creates the architecture, another writes documentation, a third critiques, a fourth writes code, a fifth creates and updates the "mental model," etc. I agree with the PM role, but with such low requirements that anyone can do it. | | |
| ▲ | chrz 6 days ago | parent [-] | | And each LLM can invent some ridiculous surprise. Who is going to check whether it did the right thing? |
|
|
|
| ▲ | isaacremuant 6 days ago | parent | prev | next [-] |
| No. That's the narrow definition of a code monkey who gets told what to do. The good ones wear multiple hats and actually define the problem, learn enough about a domain to interact with it or with the experts in said domain, and figure out the short- vs long-term tradeoffs so they can focus on the value and not just the technical aspect. |
|
| ▲ | graycat 6 days ago | parent | prev | next [-] |
| "Rules"? An earlier effort at AI was based on rules and the C. Forgy RETE algorithm. Soooo, rules have been tried?? |
| |
| ▲ | pjmlp 6 days ago | parent [-] | | C? Rules engines were traditionally written in Prolog or Lisp during the AI wave when they were cool. | | |
| ▲ | graycat 6 days ago | parent [-] | | > "C?" Forgy was Charles Forgy. For a "rules engine", there was also IBM's YES/L1. | | |
|
|
|
| ▲ | Gehinnn 6 days ago | parent | prev | next [-] |
| I wouldn't say "translating", but "finding/constructing a model that satisfies the business rules".
This can be quite hard in some cases, in particular if some business rules contradict each other or can be combined in surprisingly complex ways. |
|
| ▲ | EGreg 6 days ago | parent | prev [-] |
| Programmers, maybe. But software architects (especially of various reusable frameworks) have to maintain the right set of abstractions and make sure the system is correct and fast, easy to debug, that developers fall into the pit of success, etc. Here are just a few major ones, each of which would be a chapter in a book I would write about software engineering:
ENVIRONMENTS & WORKFLOWS
Environment Setup
Set up a local IDE with a full clone of the app (frontend, backend, DB).
Use .env or similar to manage config/secrets; never commit them.
Debuggers and breakpoints are more scalable than console.log.
Prefer conditional or version-controlled breakpoints in feature branches.
Test & Deployment Environments
Maintain at least 3 environments: Local (dev), Staging (integration test), Live (production).
Make state cloning easy (e.g., DB snapshots or test fixtures).
Use feature flags to isolate experimental code from production.
BUGS & REGRESSIONS
Bug Hygiene
Version control everything except secrets.
Use linting and commit hooks to enforce code quality.
A bug isn’t fixed unless it’s reliably reproducible.
Encourage bug reporters to reset to clean state and provide clear steps.
Fix in Context
Keep branches showing the bug, even if it vanishes upstream.
Always fix bugs in the original context to avoid masking root causes.
EFFICIENCY & SCALE
Lazy & On-Demand
Lazy-load data/assets unless profiling suggests otherwise.
Use layered caching: session, view, DB level.
Always bound cache size to avoid memory leaks.
Pre-generate static pages where possible—static sites are high-efficiency caches.
Avoid I/O
Use local computation (e.g., HMAC-signed tokens) over DB hits.
Encode routing/logic decisions into sessionId/clientId when feasible.
Partitioning & Scaling
Shard your data; that’s often the bottleneck.
Centralize the source of truth; replicate locally.
Use multimaster sync (vector clocks, CRDTs) only when essential.
Aim for O(log N) operations; allow O(N) preprocessing if needed.
CODEBASE DESIGN
Pragmatic Abstraction
Use simple, obvious algorithms first—optimize when proven necessary.
Producer-side optimization compounds through reuse.
Apply the 80/20 rule: optimize for the common case, not the edge.
Async & Modular
Default to async for side-effectful functions, even if not awaited (in JS).
Namespace modules to avoid globals.
Autoload code paths on demand to reduce initial complexity.
Hooks & Extensibility
Use layered architecture: Transport → Controller → Model → Adapter.
Add hookable events for observability and customization.
Wrap external I/O with middleware/adapters to isolate failures.
SECURITY & INTEGRITY
Input Validation & Escaping
Validate all untrusted input at the boundary.
Sanitize input and escape output to prevent XSS, SQLi, etc.
Apply defense-in-depth: validate client-side, then re-validate server-side.
Session & Token Security
Use HMACs or signatures to validate tokens without needing DB access (see the sketch after this list).
Enable secure edge-based filtering (e.g., CDN rules based on token claims).
Tamper Resistance
Use content-addressable storage to detect object integrity.
Append-only logs support auditability and sync.
INTERNATIONALIZATION & ACCESSIBILITY
I18n & L10n
Externalize all user-visible strings.
Use structured translation systems with context-aware keys.
Design for RTL (right-to-left) languages and varying plural forms.
Accessibility (A11y)
Use semantic HTML and ARIA roles where needed.
Support keyboard navigation and screen readers.
Ensure color contrast and readable fonts in UI design.
GENERAL ENGINEERING PRINCIPLES
Idempotency & Replay
Handlers should be idempotent where possible.
Design for repeatable operations and safe retries.
Append-only logs and hashes help with replay and audit.
Developer Experience (DX)
Provide trace logs, debug UIs, and metrics.
Make it easy to fork, override, and simulate environments.
Build composable, testable components.
ADDITIONAL TOPICS WORTH COVERING
Logging & Observability
Use structured logging (JSON, key-value) for easy analysis (see the sketch after this list).
Tag logs with request/session IDs.
Separate logs by severity (debug/info/warn/error/fatal).
Configuration Management
Use environment variables for config, not hardcoded values.
Support override layers (defaults → env vars → CLI → runtime).
Ensure configuration is reloadable without restarting services if possible.
Continuous Integration / Delivery
Automate tests and checks before merging.
Use canary releases and feature flags for safe rollouts.
Keep pipelines fast to reduce friction. |
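Picking up the "Use HMACs or signatures to validate tokens without needing DB access" item, here is a minimal sketch using only the Python standard library. The claim format and the way SECRET is loaded are assumptions made for the example; a real system would add an expiry and would likely use a vetted token format such as JWT.

```python
import hashlib
import hmac
import json
from base64 import urlsafe_b64decode, urlsafe_b64encode

SECRET = b"server-side-secret"  # assumption: loaded from config, never hard-coded or committed

def issue_token(claims: dict) -> str:
    """Sign the claims so any server holding SECRET can verify them later."""
    payload = urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return f"{payload.decode()}.{sig}"

def verify_token(token: str) -> dict | None:
    """Validate the signature locally; no database round trip required."""
    try:
        payload, sig = token.rsplit(".", 1)
    except ValueError:
        return None
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    return json.loads(urlsafe_b64decode(payload.encode()))

token = issue_token({"clientId": "c42", "level": 3})
assert verify_token(token) == {"clientId": "c42", "level": 3}
assert verify_token(token + "tampered") is None
```

Any server (or edge rule) holding the secret can verify the claims locally, which is what lets routing and filtering decisions skip a database lookup.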
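And for the structured-logging item, a small sketch using the standard logging module; the field names and the request-ID scheme are just placeholders for the example.

```python
import json
import logging
import uuid

class JsonFormatter(logging.Formatter):
    """Render each record as one JSON object per line (key-value, greppable)."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "message": record.getMessage(),
            "request_id": getattr(record, "request_id", None),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("app")
log.addHandler(handler)
log.setLevel(logging.INFO)

def handle_request(path: str) -> None:
    # One ID per request, attached to every log line that request produces.
    request_id = str(uuid.uuid4())
    log.info("request received: %s", path, extra={"request_id": request_id})
    log.warning("slow downstream call", extra={"request_id": request_id})

handle_request("/orders")
```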
| |
| ▲ | ghurtado 6 days ago | parent [-] | | > a book I would write about software engineering: You should probably go do that, rather than using the comment section of HN as a scratch pad of your stream of consciousness. That's not useful to anyone other than yourself. Is this a copypasta you just have laying around? | | |
| ▲ | MisterMower 6 days ago | parent [-] | | On the flip side, his comment actually contributes to the conversation, unlike yours. Poorly written? Sure. You can keep scrolling though. | |
| ▲ | ghurtado 6 days ago | parent | next [-] | | > unlike yours If irony was a ton of bricks, you'd be dead | |
| ▲ | motorest 6 days ago | parent | prev [-] | | > On the flip side, his commment actually contributes to the conversation (...) Not really. It goes off on a tangent, and frankly I stopped reading the wall of text because it adds nothing of value. | | |
| ▲ | EGreg 6 days ago | parent [-] | | How would you know if it adds nothing of value if you stopped reading it? :) | | |
| ▲ | actionfromafar 6 days ago | parent | next [-] | | Here let me attach a copy of Wikipedia. Don’t stop reading! :-) | |
| ▲ | motorest 6 days ago | parent | prev [-] | | > How would you know if it adds nothing of value if you stopped reading it? :) If you write a wall of text where the first pages are inane drivel, what do you think are the odds that the rest of that wall of text suddenly adds readable gems? Sometimes a turd is just a turd, and you don't need to analyze all of it to know the best thing to do is to flush it. | | |
| ▲ | EGreg 6 days ago | parent [-] | | Every sentence there is meaningful. You can go 1 by 1. But yea the formatting should be better! | | |
| ▲ | motorest 6 days ago | parent [-] | | > Every sentence there is meaningful. It really isn't. There is no point to pretend it is, and even less of a point to expect anyone should waste their time with an unreadable and incoherent wall of text. You decide how you waste your time, and so does everyone else. | | |
| ▲ | EGreg 6 days ago | parent [-] | | For developers to know:
1. Set up a local IDE with a full clone of the app (frontend, backend, DB). Thus the app must be fully able to run in a small, local environment, which is true of open source apps but not always of for-profit companies’ apps.
2. Use .env or similar to manage config/secrets; never commit them. A lot of people don’t properly exclude secrets from version control, leading to catastrophic secret leaks. Also, when everyone has their own copy, the developer secrets and credentials aren’t that important.
3. Debuggers and breakpoints are more scalable than console.log. Prefer conditional or version-controlled breakpoints in feature branches. A lot of people don’t use debuggers and breakpoints, instead relying on logging. Also, they have no idea how to maintain DIFFERENT sets of breakpoints, which you can do by checking the project files into version control and varying them by branch.
4. Test & Deployment Environments: maintain at least 3 environments: Local (dev), Staging (integration test), Live (production). This is fairly standard advice, but it is best practice, so people can test in local and staging.
5. Make state cloning easy (e.g., DB snapshots or test fixtures). This is not trivial. For example, downloading a local copy of a test database to test your local copy of Facebook with a production-style database. Make it fast, e.g. by rsyncing MySQL InnoDB files. |
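On point 2, a small sketch of what "use .env, never commit it" can look like in practice. The loader below is a stand-in for libraries like python-dotenv, and the variable names are made up; the important parts are the .gitignore entry and falling back to real environment variables in CI/production.

```python
import os

# Assumed layout: a git-ignored .env file next to the code, e.g.
#   DATABASE_URL=postgres://dev:dev@localhost/app
#   SESSION_SECRET=local-dev-only
# and a ".env" line in .gitignore so it can never be committed.

def load_dotenv(path: str = ".env") -> None:
    """Read KEY=VALUE lines into os.environ without overriding variables
    that are already set (so real env vars still win in CI/production)."""
    try:
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())
    except FileNotFoundError:
        pass  # fine where the real environment provides the values instead

load_dotenv()
DATABASE_URL = os.environ.get("DATABASE_URL", "sqlite:///local-dev.db")
```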
|
|
|
|
|
|
|
|