ekjhgkejhgk 12 hours ago

Do the easy thing while it works, and when it stops working, fix the problem.

Julia does the same thing, and going by the Rust numbers in the article, Julia has about 1/7th the number of packages that Rust does [1] (95k / 13k ≈ 7.3).

It works fine; Julia has some heuristics to avoid re-downloading the registry too often.

But more importantly, there's a simple path to improvement. The top-level Registry.toml [1] has a path to each package, and once downloading everything proves unsustainable, you can download just that one file and use it to fetch the rest as needed. I don't think this is a difficult problem.

[1] https://github.com/JuliaRegistries/General/blob/master/Regis...
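To make "download as needed" concrete, here is a rough sketch of what a lazy client could do. This is illustrative only: the real Pkg client works differently, and the raw-GitHub URL and helper name are just stand-ins.

    using TOML, Downloads

    # Registry.toml maps every package UUID to a name and a path in the repo.
    raw = "https://raw.githubusercontent.com/JuliaRegistries/General/master"
    io = Downloads.download("$raw/Registry.toml", IOBuffer())
    registry = TOML.parse(String(take!(io)))

    # Fetch a single package's metadata only when someone asks for it.
    function versions_for(uuid::AbstractString)
        entry = registry["packages"][uuid]  # e.g. { name = "Example", path = "E/Example" }
        io = Downloads.download("$raw/$(entry["path"])/Versions.toml", IOBuffer())
        return TOML.parse(String(take!(io)))
    end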

galenlynch 11 hours ago

I believe Julia only uses the Git registry as an authoritative ledger where new packages are registered [1]. My understanding is that, as you mention, most clients don't access it directly and instead use the "Pkg Protocol" [2], which does not use Git.

[1] https://github.com/JuliaRegistries/General

[2] https://pkgdocs.julialang.org/dev/protocol/
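For the curious, the flow in [2] is plain HTTPS against a Pkg server rather than Git. Roughly, with endpoint shapes taken from the linked doc (treat the details as approximate):

    using Downloads

    server = "https://pkg.julialang.org"

    # /registries lists the available registries, one per line, as
    # /registry/<uuid>/<tree-sha1> paths.
    io = Downloads.download("$server/registries", IOBuffer())
    registry_path = first(eachline(seekstart(io)))

    # The registry snapshot then comes down as one tarball; package
    # sources are served the same way under /package/<uuid>/<hash>.
    Downloads.download("$server$registry_path", "registry.tar.gz")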

mi_lk 9 hours ago

> Do the easy thing while it works, and when it stops working, fix the problem

Another way to phrase this mindset is "fuck around and find out" in Gen Z speak. It's usually practical to an extent, but I'm personally not a fan.

sagarm 5 hours ago

I've mostly heard FAFO used to describe something obviously stupid.

Building on the same thing people use for code doesn't seem stupid to me, at least initially. You might have to migrate later if you're successful enough, but that's not a sign of bad engineering. It's just building for where you are, not where you expect to be in some distant future.

zephen 5 hours ago

Not at all.

When you fuck around optimizing prematurely, you find out that you're too late and nobody cares.

Oh, well, optimization is always fun, so there's that.

0xbadcafebee 11 hours ago

This is basically unethical. Imagine if anything important in the world worked this way: "Do nuclear engineering the easy way while it works, and when it stops working, fix the problem."

Software engineers always make the excuse that what they're making now is unimportant, so who cares? But then everything gets built on top of that unimportant thing, and one day the world crashes down. Worse, "fixing the problem" becomes near impossible, because now everything depends on it.

But really, the reason not to do it is that there's no need to. There are plenty of other solutions besides Git that work as well or better, without all the pitfalls. The lazy engineer picks bad solutions not because they're necessarily easier than the alternatives, but because it's the path of least resistance for themselves.

Not only is this not better, it's often actively worse. But it's excused by the same culture that gave us "move fast and break things". All you have to do is use any modern software to see how that worked out: slow, bug-riddled garbage that we're all now addicted to.

xboxnolifes 9 hours ago

Most of the world does work this way. Problems are solved within certain conditions and for use over a certain time frame. Once those change, the problem gets revisited.

Most software gets to take this further than many engineering fields, since there isn't physical danger. It's telling that the counterexamples always involve potentially dangerous domains like medicine or nuclear engineering. The software in those fields is more stringently engineered.

hombre_fatal 10 hours ago

On the other hand, GitHub wants to be the place you choose to build your registry for a new project, and they are clearly on board with the idea, given that they help massive projects like Nixpkgs instead of kicking them off.

As opposed to something like using a flock of free blogger.com blogs to host media for an offsite project.

baobun 4 hours ago

...For now. The writing is on the wall.

ModernMech 9 hours ago

Hold up... "lazy engineers" are the problem here? What about a society that insists on shoving the work product of unfunded, volunteer engineers into critical infrastructure because they don't want to pay what it costs to do things the right way? Imagine building a nuclear power plant with an army of volunteer nuclear engineers.

It cannot be the case that software engineers are labelled lazy for not building the at-scale solution to start with, while at the same time everyone wants to use their work and there are next to no resources for said engineers to actually build the at-scale solution.

> the path of least resistance for themselves.

Yeah, because they're investing their own personal time and money, so of course they're going to take the path that is of least resistance for them. If society feels that's "unethical", maybe pony up the cash, because you all still want to rely on the work product they're giving out for free.

rovr138 8 hours ago

> If society feels that's "unethical", maybe pony up the cash, because you all still want to rely on the work product they're giving out for free.

I like OSS and everything.

Having said that: ethically, should society be paying for this work? Maybe that is what should happen. In some places we have programs to support artists. Should we have the same for software?

ekjhgkejhgk 9 hours ago

Fixing problems as they appear is unethical? Ok then.

You realize there are people who think differently? Some would argue that if you keep working on problems you don't have yet but might have someday, you end up never finishing anything.

It's a matter of striking a balance, and I think you're way on one end of the spectrum. The vast majority of people using Julia aren't building nuclear plants.

BenjiWiebe 5 hours ago

Fixing problems when they appear is ethical.

Refusing to fix a problem that hasn't appeared yet, but has been or could be foreseen - that's different. I personally wouldn't call it unethical, but I'd consider it a negative.

zephen 4 hours ago

The problem is that popularity is governed by power laws.

Literally anybody could foresee that, _if_ something scales to millions of users, there will be issues. Some of the people who foresee that could even fix it. But they might spend their time optimizing for something that will never hit 1000 users.

Also, the problem discussed here isn't that things don't work; it's that they get slow and consume too many resources.

So there is certainly an optimal time to fix such problems, which is, yes, OK, _before_ things get _too_ slow and consume _too_ many resources, but is most assuredly _after_ you have a couple of thousand users.

IshKebab 9 hours ago

> when it stops working, fix the problem

This is too naive. Fixing the problem costs a different amount depending on when you do it: the later you leave it, the more expensive it becomes, very often to the point where it's prohibitively expensive and you just put up with it being a bit broken.

This article even has an example of that - see the vcpkg entry.

zahlman 12 hours ago

> 00000000-1111-2222-3333-444444444444 = { name = "REPLTreeViews", path = "R/REPLTreeViews" }

... Should it be concerning that someone was apparently able to engineer an ID like that?

ekjhgkejhgk 12 hours ago

Could you please articulate specifically why that should be concerning?

Right now I don't see the problem because the only criterion for IDs is that they are unique.

zahlman 11 hours ago

I didn't know whether they were supposed to be within the developer's control (in which case the only real concern is whether someone else has already used the id), or generated by the system (in which case a developer demonstrated manipulation of that system).

Apparently it is the former, and most developers independently generate random IDs because it's easy and is extremely unlikely to result in collisions. But it seems the dev at the top of the list had a sense of vanity instead.

KenoFischer 10 hours ago

You're supposed to generate a random one, but the only consequence of not doing so is that you won't be able to register your package if someone else already took the UUID (which is a pain if you have registered versions in a private registry). That said, "vanity" UUIDs are a bad look, so we'd probably reject them if someone tried that today, but there isn't any actual issue with them.
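For reference, "generate a random one" is a one-liner with the stdlib:

    using UUIDs

    uuid4()  # fresh random version-4 UUID; collisions are astronomically unlikely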

skycrafter0 12 hours ago

If you read the repo README, it just says "generate a uuid". You can use whatever you want as long as it fits the format, it seems.

adestefan 12 hours ago

It’s as random as any other UUID.

Severian 11 hours ago

Incorrect; only some UUID versions are random, specifically v4 (and v7, which combines a timestamp with random bits).

https://en.wikipedia.org/wiki/Universally_unique_identifier

> 00000000-1111-2222-3333-444444444444

This would technically be version 2, the DCE Security version, which is built from the date-time, the MAC address, and a local domain identifier.

But overall, if you allow any yahoo to pick a UUID, it's not really a UUID; it's just an arbitrary string that looks like one.
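You can check the nibbles directly. The stdlib does read the version field as 2, although the variant bits are 00 rather than the RFC 4122 pattern 10, so strictly speaking it isn't even in the RFC 4122 space (u.value below is the raw UInt128 behind the UUID, an internal field but fine for a quick check):

    using UUIDs

    u = UUID("00000000-1111-2222-3333-444444444444")
    uuid_version(u)        # => 2, from the "2" nibble in the third group
    (u.value >> 62) & 0x3  # => 0; RFC 4122 requires 0b10 here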

ekjhgkejhgk 9 hours ago

> if you allow any yahoo to pick a UUID, it's not really a UUID

universally unique identifier (UUID)

> 00000000-1111-2222-3333-444444444444

It's unique.

Anyway, we're talking about a package that doesn't matter. It's abandoned. Furthermore, it's broken: it uses REPL without importing it, so you can't even precompile it.

https://github.com/pfitzseb/REPLTreeViews.jl/blob/969f04ce64...
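The failure mode in miniature (a hypothetical module, not the actual package's code):

    module Broken
    # No `import REPL` here, so this top-level reference throws
    # `UndefVarError: REPL not defined` as soon as the module is
    # loaded or precompiled. The fix is a one-line `import REPL`.
    const Menus = REPL.TerminalMenus
    end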

anonymars 9 hours ago

Which is to say, not guaranteed at all. GUIDs are designed to be unique, not random or unpredictable.

https://devblogs.microsoft.com/oldnewthing/20120523-00/?p=75...