1a527dd5 9 hours ago

1. Don't use bash; use a scripting language that is more CI-friendly. I strongly prefer pwsh.

2. Don't have logic in your workflows. Workflows should be dumb and simple (KISS) and they should call your scripts.

3. Having standalone scripts will allow you to develop/modify and test locally without having to get caught in a loop of hell.

4. Design your entire CI pipeline for easier debugging: put that print statement in, echo out the version of whatever. You don't need it _now_, but your future self will thank you when you do need it.

5. Consider using third-party runners that have better debugging capabilities.

Storment33 9 hours ago | parent | next [-]

I would disagree with 1. If you need anything more than shell, that starts to become a smell to me. The build/testing process etc. should be simple enough not to need anything more.

embedding-shape 9 hours ago | parent | next [-]

That's literally point #2, but I had the same reaction as you when I first read point #1 :)

Storment33 9 hours ago | parent [-]

I agree with #2. I meant more that if you are calling out to something that is not a task runner (Make, Taskfile, Just, etc.) or a shell script, that's a bit of a smell to me. E.g. I have seen people call out to Python scripts and it concerns me.

masfuerte 6 hours ago | parent | next [-]

My software runs on Windows, Linux, and macOS. The same Python testing code runs on all three platforms. I mostly dislike Python but I can't think of anything better for this use case.

tracker1 5 hours ago | parent | next [-]

You might consider Deno with TypeScript... it's a single-executable runtime with a self-update mechanism (deno upgrade), and it can run TypeScript/JavaScript files that directly reference the repository/HTTP modules they need, so it doesn't require a separate install step for dependency management.

I've been using it for most of my local and environment scripting since relatively early on.

Storment33 5 hours ago | parent | prev [-]

I don't touch Windows so I would not know.

> The same Python testing code runs on all three platforms.

I have no objections to Python being used for testing, I use it myself for the end to end tests in my projects. I just don't think Python as a build script/task runner is a good idea, see below where I got Claude to convert one of my open source projects for an example.

WorldMaker 6 hours ago | parent | prev | next [-]

It's interesting because #1 is still suggesting a shell script, it's just suggesting a better shell to script.

Storment33 5 hours ago | parent [-]

I had no idea 'pwsh' was PowerShell. Personally not interested; maybe if you're a Microsoft shop or something, then yeah.

WorldMaker 4 hours ago | parent | next [-]

"pwsh" is often used as the short-hand for modern cross-platform PowerShell to better differentiate it from the old Windows-only PowerShell.

I think pwsh is worth exploring. It is cross-platform. It post-dates Python and embraces the Python mantra that "~~code~~ scripts are read more often than they are written". It provides a lot of nice tools out of the box. It's built in an "object-oriented" way, resembling Python and owing much to C#. When done well, the "object-oriented" way provides a number of benefits over the "dumb text pipes" that shells like bash were built on. It is easy to extend with C# and a few other languages, should you need to.

I would consider not dismissing it offhand, without trying it, just because Microsoft built it and/or because it was for a while Windows-only.

Rohansi 2 hours ago | parent | prev [-]

It's actually a pretty good shell! FOSS and cross-platform, too.

embedding-shape 9 hours ago | parent | prev [-]

Huh? Who cares if the script is .sh, .bash, Makefile, Justfile, .py, .js or even .php? If it works it works, as long as you can run it locally, it'll be good enough, and sometimes it's an even better idea to keep it in the same language as the rest of the project. It all depends, and what language a script is written in shouldn't be considered a "smell".

moduspol 6 hours ago | parent | next [-]

Once you get beyond shell, make, docker (and similar), dependencies become relevant. At my current employer, we're mostly in TypeScript, which means you've got NPM dependencies, the NodeJS version, and operating system differences that you're fighting with. Now anyone running your build and tests (including your CI environment) needs to be able to set all those things up and keep them in working shape. For us, that includes different projects requiring different NodeJS versions.

Meanwhile, if you can stick to the very basics, you can do anything more involved inside a container, where you can be confident that you, your CI environment, and even your less tech-savvy coworkers can all be using the exact same dependencies and execution environment. It eliminates entire classes of build and testing errors.

tracker1 5 hours ago | parent | next [-]

I've switched to using Deno for most of my orchestration scripts, especially shell scripts. It's a single portable, self-upgradeable executable, and your shell scripts can directly reference the repositories/http(s) modules/versions they need to run without a separate install step.

I know I've mentioned it a few times in this thread, just a very happy user and have found it a really good option for a lot of usage. I'll mostly just use the Deno.* methods or jsr:std for most things at this point, but there's also npm:zx which can help depending on what you're doing.

It also is a decent option for e2e testing regardless of the project language used.

Storment33 5 hours ago | parent | prev [-]

I used to have my Makefile call out and do `docker build ...` and `docker run ...` etc. with a volume mount of the source code to manage and maintain tooling versions etc.

It works okay, better than a lot of other workflows I have seen. But it is a bit slow, a bit cumbersome (for languages like Go or Node.js that want to write to HOME), and I had some issues on my ARM MacBook with images not being available for ARM etc.

I would recommend taking a look at Nix, it is what I switched to.

* It is faster.
* It has access to more tools.
* It works on ARM, x86, etc.

pamcake 4 hours ago | parent | prev | next [-]

Shell and bash are easy to write insecurely, opening your CI runners or dev machines up to exploitation by shell injection. Non-enthusiasts writing complex CI pipelines that pull and pipe remote assets in bash, without ShellCheck, is risky business.

Python is a lot easier to write safely.
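A minimal sketch of what I mean (the branch-name input is hypothetical): passing untrusted input as a list argument in Python means it is never re-parsed by a shell, which is the mistake an unquoted bash variable makes by default.

    import subprocess
    import sys

    # Hypothetical untrusted input, e.g. a branch name from a CI event payload.
    branch = sys.argv[1]

    # Risky bash equivalent: git log --oneline origin/$BRANCH (unquoted, shell-expanded).
    # Here the value is a single argv element, so spaces, globs and `;` in it
    # cannot inject extra commands or split into more arguments.
    subprocess.run(["git", "log", "--oneline", f"origin/{branch}"], check=True)

It doesn't make the logic correct for you, but it removes the whole class of quoting mistakes ShellCheck exists to catch.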

snovv_crash 3 hours ago | parent [-]

You shouldn't be pulling untrusted assets in CI regardless. Hacking your bash runner is the hardest approach anyway; it's easier to just patch some subroutine in a dependency that gets called during your build or tests.

Storment33 8 hours ago | parent | prev [-]

> Huh? Who cares if the script is .sh, .bash, Makefile, Justfile, .py, .js or even .php?

Me. Typically I have found it to be a sign of over-engineering, with no benefits over just using a shell script/task runner, as all it should be is plumbing simple enough that a task runner can handle it.

> If it works it works, as long as you can run it locally, it'll be good enough,

Maybe when it is your own personal project, "If it works it works" is fine. But in a corporate environment, I have found there start to be issues of readability, maintainability, proprietary tooling, additional dependencies, etc. when people over-engineer and use programming languages (like Python).

E.g.

> never_inline 30 minutes ago | parent | prev | next [–]

> Build a CLI in Python or whatever which does the same thing as CI; every CI stage should just call its subcommands.

However,

> and sometimes it's an even better idea to keep it in the same language as the rest of the project

I'll agree. Depending on the project's language etc., other options might make sense. But personally, so far every time I have come across something not using a task runner it has just been the wrong decision.

embedding-shape 8 hours ago | parent | next [-]

> But personally, so far every time I have come across something not using a task runner it has just been the wrong decision.

Yeah, tends to happen a lot when you hold strong opinions with strong conviction :) Not that it's wrong or anything, but it's highly subjective in the end.

Typically I see larger issues being created from "under-engineering" and just rushing with the first idea people can think of when they implement things, rather than "over-engineering" causing similarly sized future issues. But then I also know everyone's history is vastly different; my views are surely shaped more by the specific issues I've witnessed (and sometimes contributed to :| ) than by anything else.

Storment33 7 hours ago | parent [-]

> Yeah, tends to happen a lot when you hold strong opinions with strong conviction :) Not that it's wrong or anything, but it's highly subjective in the end.

Strong opinions, loosely held :)

> Typically I see larger issues being created from "under-engineering" and just rushing with the first idea people can think of when they implement things, rather than "over-engineering"

Funnily enough running with the first idea I think is creating a lot of the "over-engineering" I am seeing. Not stopping to consider other simpler solutions or even if the problem needs/is worth solving in the first place.

I quickly asked Claude to convert one of my open source repos from Make/Nix/Shell -> Python/Nix, to see how it would look. It is actually one of the better Python-as-a-task-runner setups I have seen.

* https://github.com/DeveloperC286/clean_git_history/pull/431

While the Python version is not as bad as I have seen previously, I am still struggling to see why you'd want it over Make/Shell.

It introduces more dependencies (Python itself, which I solved via Nix, but others haven't solved this problem) and the Python script has its own dependencies (such as Click for the CLI).

It is less maintainable as it is more code, roughly 3x the amount of the Makefile.

To me the Python code is more verbose and not as simple as the Makefile's targets, so it is less readable as well.

Imustaskforhelp 6 hours ago | parent [-]

> It introduces more dependencies (Python itself, which I solved via Nix, but others haven't solved this problem) and the Python script has its own dependencies (such as Click for the CLI).

UV scripts are great for this type of workflow

There are even scripts which will install uv from within the same file, effectively making it equivalent to just running ./run-file.py; it handles all the dependency management, the Python version management, everything included, and works everywhere.

https://paulw.tokyo/standalone-python-script-with-uv/

Personally I end up just downloading uv, and so don't use the uv download script from this, but if I am using something like GitHub Actions, which is more (ephemeral?), I'd just do this.

Something like this can start out simple and scale well past the limitations of bash, which can be abundant at times.

That being said, I still make some shell scripts, because executing other applications is first-class in bash but not so much in Python. After discovering this, though, I might create some new scripts in Python with automated uv, because I end up installing uv on many devices anyway (uv's really good for Python).

I am interested in Bun Shell as well, but that feels way too bloated and isn't used by many, so there's less AI assistance at times; I also haven't really understood Bun Shell yet, so bash is usually superior to it for me.

Storment33 6 hours ago | parent | next [-]

> UV scripts are great for this type of workflow

So previously when I have seen Python used as a task runner, I think they used UV to call it. Although I don't think they had as complete a solution as yours here, auto-installing UV etc.

Although the example you've linked installs UV if it is missing, the version is not pinned, and I also don't think it handles a missing Python, which is not pinned even if installed locally. So you could get different versions on CI vs locally.

While, yes, you are removing some of the dependency problems created by using Python over Make/Shell, I don't think this completely solves it.

> Something like this can start out simple and scale well past the limitations of bash, which can be abundant at times

I personally haven't witnessed any time I would consider the scales to have tipped in favour of Python, and I would be concerned if they ever did, as really the task runner etc. should be plumbing, so it should be simple.

> That being said, I still make some shell scripts, because executing other applications is first-class in bash but not so much in Python. After discovering this, though, I might create some new scripts in Python with automated uv, because I end up installing uv on many devices anyway (uv's really good for Python)

Using Python/UV to do anything more complex than my example PR above?

Imustaskforhelp 5 hours ago | parent [-]

I think UV scripts can/will actually install Python and manage it themselves as well, and you can pin a specific version of Python itself via UV scripts.

I copied this from their website (https://docs.astral.sh/uv/guides/scripts/#declaring-script-d...)

uv also respects Python version requirements (example.py):

    # /// script
    # requires-python = ">=3.12"
    # dependencies = []
    # ///

    # Use some syntax added in Python 3.12
    type Point = tuple[float, float]
    print(Point)

> Using Python/UV to do anything more complex than my example PR above?

I can agree that this might be complex, but that complexity has a trade-off, and of course no one shoe fits all. There are times when someone has to manage a complex CI environment, and when I looked there are some deterministic CI options too, like Invoke etc. When you combine all of these, I feel like the workflow can definitely be interesting, to say the least.

Once again, I don't know what really ends up in GitHub Actions since I have never really used it properly. I am basing my critiques on what I've read, on what solutions come up (Python comes up quite frequently), and on something I discovered recently (which was the blog).

quotemstr 6 hours ago | parent | prev [-]

This thing does a global uv install when run? That's obnoxious! Never running stuff from whoever wrote this.

Oh, and later the author suggests the script modify itself after running. What the fuck. Absolutely unacceptable way to deploy software.

Imustaskforhelp 6 hours ago | parent [-]

Does it really matter if it's a global install of uv or not, especially on GitHub Actions?

Also, if this still bothers you, nothing stops you from removing the first X lines of code and having them in another .py file, if this feels obnoxious to you.

> Oh, and later the author suggests the script modify itself after running. What the fuck. Absolutely unacceptable way to deploy software.

Regarding the author suggesting the script removes itself: it's because it does still feel cluttered, but there is virtually zero overhead in keeping it there if you are already using uv or want to use uv.

Oh also (I am not the author), but I have played extensively with UV, and I feel like the script can definitely be changed to install it locally rather than globally.

They themselves mention it as #overkill on their website, but even then it is better than whatever GitHub Actions is.

quotemstr 3 hours ago | parent [-]

I'm a huge believer in the rule that everything GH actions does should be a script you can also run locally.

Imustaskforhelp 3 hours ago | parent [-]

Yes, I believe the same, and I think we have the same goal. I think I can probably patch this code to install uv locally instead of globally, if that's a major concern. I feel like it's not that hard.

quotemstr 3 hours ago | parent [-]

It's easy enough to patch. It's the philosophy that bugs me. We already have a huge problem with routine workflows pulling things from the network (often without even a semblance of hash-locking) and foregoing the traditional separation between environment setup and business logic. There's a lot of value in having discrete steps for downloading/installing stuff and doing development, because then you can pay special attention to the former, look for anything odd, read release notes, and so on. Between explicit, human-solicited upgrades, dev workflows should ideally be using vendored dependencies, or, if not that, then at least stuff that's hash-verified end-to-end.

Someday, someone is going to have a really big disaster that comes from casually getting unauthenticated stuff from somebody else's computer.

pjc50 8 hours ago | parent | prev | next [-]

Using shell becomes deeply miserable as soon as you encounter its kryptonite, the space character. Especially but not limited to filenames.

catlifeonmars 6 hours ago | parent | prev | next [-]

I find that shell scripting has a sharp cliff. I agree with the sentiment that most things are over engineered. However it’s really easy to go from a simple shell script running a few commands to something significantly more complex just to do something seemingly simple, like parse a semantic version, make an api call and check the status code etc, etc.

The other problem with shell scripting on things like GHA is that it's really easy to introduce security vulnerabilities by, e.g., forgetting to quote your variables and letting an uncontrolled input through.

There’s no middle ground between bash and python and a lot of functionality lives in that space.

Storment33 5 hours ago | parent [-]

> However it’s really easy to go from a simple shell script running a few commands to something significantly more complex just to do something seemingly simple, like parse a semantic version, make an api call and check the status code etc, etc.

Maybe I keep making the wrong assumption that everyone is using the same tools the same way, and that's why my opinions seem very strong. But I wouldn't even think of trying to "parse a semantic version" in shell; I am treating the shell scripts and task runners as plumbing, and I would be handing that off to a dedicated tool to action.

jcon321 6 hours ago | parent | prev [-]

yea imagine having to maintain a python dependency (which undergoes security constraints) all because some junior can't read/write bash... and then that junior telling you you're the problem lmao

dijit 9 hours ago | parent | prev [-]

I mean, at some point you are bash calling some other language anyway.

I'm a huge fan of "train as you fight", whatever build tools you have locally should be what's used in CI.

If your CI can do things that you can't do locally: that is a problem.

maccard 8 hours ago | parent | next [-]

> If your CI can do things that you can't do locally: that is a problem.

IME this is where all the issues lie. Our CI pipeline can push to a remote container registry, but we can't do this locally. CI uses wildly different caching strategies from local builds, which diverge. Breaking up builds into different steps means that you need to "stash" the output of stages somewhere. If all your CI does is `make test && make deploy` then sure, but when you grow beyond that (my current project takes 45 minutes with a _warm_ cache) you need to diverge, and that's where the problems start.

tracker1 5 hours ago | parent [-]

Ironically, at least for a couple recent projects... just installing dependencies fresh is as fast on GH Actions as the GH caching methods, so I removed the caching and simplified the workflows.

embedding-shape 9 hours ago | parent | prev | next [-]

> If your CI can do things that you can't do locally: that is a problem.

Probably most of the times when this is an actual problem, is building across many platforms. I'm running Linux x86_64 locally, but some of my deliverables are for macOS and Windows and ARM, and while I could cross-compile for all of them on Linux (macOS was a bitch to get working though), it always felt better to compile on the hardware I'm targeting.

Sometimes there are Windows/macOS-specific failures, and if I couldn't just ssh in and correct/investigate, and instead had to "change > commit > push" in an endless loop, it's possible I'd quite literally lose my mind.

ethin 7 hours ago | parent [-]

I literally had to do this push > commit > test loop yesterday because apparently building universal Python wheels on macOS is a pain in the ass. And I don't have a Mac, so if I want to somewhat reliably reproduce how the runner might behave, I have to either test it on GH Actions or rent one from something like Scaleway, mainly because I don't currently know how else to do it. It's so, so frustrating, and if anyone has ideas on making my life a bit better that would be nice lol.

Imustaskforhelp 6 hours ago | parent [-]

There is quickemu, which can install a macOS VM on Linux (or any other host) rather quickly. What are your thoughts on it? (I am an absolute quickemu shill because I love that software.)

https://github.com/quickemu-project/quickemu [ Quickly create and run optimised Windows, macOS and Linux virtual machines ]

tracker1 5 hours ago | parent [-]

Thank you so much for this... If I could +1 a dozen times I would.

Imustaskforhelp 5 hours ago | parent [-]

Thanks! Glad I could help. If I may ask, what specific use case are you using quickemu for? Is it also for running Mac machines on, say, Linux?

tracker1 4 hours ago | parent [-]

That's what I intend to use it for, Mac and Windows... I'm starting on an app that I want to work cross platform (tauri/rust w/ react+mui) and want to be able to do manual testing or troubleshooting as needed on mac and windows without needing a separate machine.

My laptop is an M1 MacBook Air, and I do have an N100 I could use for Windows... I'd just as soon use my fast desktop, which even emulated is likely faster, and not have to move seats.

Imustaskforhelp 4 hours ago | parent [-]

Yes, I think just the amount of friction it can reduce might make it worth it in the first place.

Oh, btw, although there are many primitives which help with transferring files between VMs and hosts, such as sshfs etc., one of the things I enjoyed doing in quickemu is using the beloved piping-server:

https://github.com/nwtgck/piping-server Infinitely transfer between every device over pure HTTP with pipes or browsers

The speeds might be slow, but I was using it to build simple shell scripts, and you can self-host it or most likely deploy it on Cloudflare Workers too, which is really simple, but I haven't done it.

But for quick deployments/transfers of binaries/simple files, it's great as well. Tauri is meant to be lightweight and produce small binaries, so I suppose one can try it, but there are other options as well.

Piping Server + quickemu felt like a cheat code to me, at least for a more ephemeral-VM-based workflow, but of course YMMV.

Good luck with your project! I tried building a Tauri app for Android once on Linux, just out of mere curiosity, and it was hell. I didn't know anything about Android development, but setting up the developer environment was really hard, and I think I've forgotten everything I learnt from that. I wish I had made notes or even a video documenting the process.

tracker1 4 hours ago | parent [-]

Fortunately/unfortunately it wouldn't be a good experience for phone use, maybe tablet, as part of it will be displaying BBS ANSI art and messages, which lends itself to a larger display.

Storment33 9 hours ago | parent | prev [-]

> If your CI can do things that you can't do locally: that is a problem.

Completely agree.

> I'm a huge fan of "train as you fight", whatever build tools you have locally should be what's used in CI.

That is what I am doing, having my GitHub Actions just call the Make targets I am using locally.

> I mean, at some point you are bash calling some other language anyway.

Yes, shell scripts and/or task runners (Make, Just, Task, etc.) are really just plumbing around calling other tools. Which is why it feels like a smell to me when you need something more.

zelphirkalt 7 hours ago | parent | prev | next [-]

I don't agree with (1), but agree with (2). I recommend just putting a Makefile in the repo and having it define CI targets, which you can then easily call from CI via a simple `make ci-test` or similar. And don't make the Makefiles overcomplicated.

Of course, if you use something else as a task runner, that works as well.

Wilder7977 6 hours ago | parent | next [-]

For certain things, makefiles are a great option. For others, though, they are a nightmare. From a security perspective, especially if you are trying to reach SLSA level 2+, you want all the build execution to be isolated and executed in a trusted, attestable and disposable environment, following predefined steps. Having makefiles (or scripts) with logical steps within them makes it much, much harder to have properly attested outputs.

Using makefiles mixes execution contexts between the CI pipeline and the code within the repository (which ends up containing the logic for the build), instead of using centrally stored external workflows that contain all the business logic for the build steps (e.g., compiler options, docker build steps etc.).

For example, how can you attest in the CI that your code is tested if the workflow only contains "make test"? You need to double check at runtime what the makefile did, but the makefile might have been modified by that time, so you need to build a chain of trust etc. Instead, in a standardized workflow, you just need to establish the ground truth (e.g., tools are installed and are at this path), and the execution cannot be modified by in-repo resources.

quotemstr 6 hours ago | parent [-]

That doesn't make any sense. Nothing about SLSA precludes using make instead of some other build tool. Either inputs to a process are hermetic and attested or they're not. Makefiles are all about executing "predefined steps".

It doesn't matter whether you run "make test" or "npm test whatever": you're trusting the code you've checked out to verify its own correctness. It can lie to you either way. You're either verifying changes or you're not.

Wilder7977 4 hours ago | parent [-]

You haven't engaged with what I wrote; of course it doesn't make sense.

The easiest and most accessible way to attest what has been done is to have all the logic of what needs to be done in a single context, a single place. A reusable workflow that is executed by hash in a trusted environment and will execute exactly those steps, for example. In this case, step A does x, and step B attests that x has been done, because the logic is immutably in a place that cannot be tampered with by whoever invokes that workflow.

In the case of the makefile, in most cases, the makefile (and therefore the steps to execute) will be in a file in the repository, i.e., under partial control of anybody who can commit and under full control of those who can merge. If I execute a CI run and step A now says "make x", the semantics actually depend on what the makefile in the repo includes, so the contexts are mixed between the GHA workflow and the repository content. Any step of the workflow now can't attest directly that x happened, because the logic of x is not in its context.

Of course, you can do everything in the makefile, including the attestation steps, bringing them back into the same context, but that makes it so that, once again, the security-relevant steps are in a potentially untrusted environment. My thinking specifically hints at the case of an organization with hundreds of repositories that need to be brought under control. Even more, what I am saying makes sense if you want to use the objectively convenient GH attestation service (probably one of the only good features they pushed in the last 5 years).

zelphirkalt 3 hours ago | parent [-]

Usually, the people writing the Makefile are the same that could also be writing this stuff out in a YAML (lol) file as the CI instructions, often located in the same repository anyway. The irony in that is striking. And then we have people who can change environment variables for the CI workflows. Usually, also developers, often the same people that can commit changes to the Makefile.

I don't think it changes much, aside from security theater. If changes are not properly reviewed, then all the fancy titles will not help. If anything, using Make will allow for a less flaky CI experience that doesn't break the next time the git hoster changes something about their CI language and doesn't suffer from YAMLitis.

quotemstr 3 hours ago | parent [-]

You're correct. It's absolutely security theater. Either you trust the repository contents or you don't. There's no, none, zilch trust improvement arising from the outer orchestration being done in a YAML file checked into the repo and executed by CI instead of a Makefile also executed by CI.

What's the threat model Wilder is using exactly? Look, I'm ordinarily all for nuance and saying reasonable people can disagree when it comes to technical opinions, but here I can't see any merit whatsoever to the claim that orchestrating CI actions with Make is somehow a security risk when the implementations of these actions at some level live in the repo anyway.

antihipocrat 27 minutes ago | parent [-]

That's a great point. If we keep following the requirement for attestation to its logical conclusion, we would end up replicating the entire server running the repository at the source, and then the cycle repeats.

reactordev 7 hours ago | parent | prev | next [-]

Makefile or scripts/do_thing: either way, this is correct. CI workflows should do only one thing per step. That one thing should be a command. What that command does is up to you in the Makefile or scripts. This keeps workflows/actions readable and mostly reusable.

pydry 7 hours ago | parent | prev | next [-]

>I don't agree with (1)

Neither do most people, probably, but it's kinda neat how their suggested fix for GitHub Actions' ploy to maintain vendor lock-in is to swap it for a language invented by that very same vendor.

elSidCampeador 7 hours ago | parent | prev [-]

makefile commands are the way

kstrauser 7 hours ago | parent | prev | next [-]

I was once hired to manage a build farm. All of the build jobs were huge pipelines of Jenkins plugins that did various things in various orders. It was a freaking nightmare. Never again. Since then, every CI setup I’ve touched is a wrapper around “make build” or similar, with all the smarts living in Git next to the code it was building. I’ll die on this hill.

jayd16 2 hours ago | parent | prev | next [-]

#2 is not a slam dunk because the CI system loses insight into your build process if you just use one big script.

Does anyone have a way to mark script sections as separate build steps with defined artifacts? Would be nice to just have scripts with something like:

    BeginStep("Step Name") 
    ... 
    EndStep("Step Name", artifacts)

They could no-op on local runs but be reflected in GitHub/GitLab as separate steps/stages and allow resumes and retries and such. As it stands, there's no way to really have CI/CD run the exact same scripts locally and get all the insights and functionality.

I haven't seen anything like that but it would be nice to know.
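The closest I've been able to cobble together is wrapping GitHub's `::group::` log commands in a small helper that no-ops locally. It only gets you collapsible log sections, not retries or per-step artifacts, so treat it as a rough sketch (GITHUB_ACTIONS is the environment variable GitHub's runners set):

    import contextlib
    import os

    @contextlib.contextmanager
    def step(name):
        # On GitHub Actions, emit workflow commands so the log lines inside
        # the block fold into a collapsible group; locally it's a no-op.
        on_gha = os.environ.get("GITHUB_ACTIONS") == "true"
        if on_gha:
            print(f"::group::{name}", flush=True)
        try:
            yield
        finally:
            if on_gha:
                print("::endgroup::", flush=True)

    with step("Build"):
        print("compiling...")

    with step("Test"):
        print("running tests...")

Retries, resumes and artifacts still have to be declared at the workflow level, which is exactly the gap I'm talking about.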

arwhatever 4 hours ago | parent | prev | next [-]

Do you (or does anyone) see possible value in a CI tool that just launches your script directly?

It seems like if you

> 2. Don't have logic in your workflows. Workflows should be dumb and simple (KISS) and they should call your scripts.

then you’re basically working against or despite the CI tool, and at that point maybe someone should build a better or more suitable CI tool.

zelphirkalt 3 hours ago | parent [-]

Can we have a CI tool that simply takes a Makefile as input? Perhaps one that takes all targets that start with "ci" or something.

never_inline 9 hours ago | parent | prev | next [-]

Build a CLI in Python or whatever which does the same thing as CI; every CI stage should just call its subcommands.
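A bare-bones sketch of the shape I mean (the file name, the stages, and the commands they run are all made up), so a workflow step is only ever `python ci.py build` or `python ci.py test`:

    # ci.py - hypothetical single entry point that both CI and developers call.
    import argparse
    import subprocess

    def run(*cmd):
        # Echo the command so CI logs show exactly what ran, then execute it.
        print("+", " ".join(cmd), flush=True)
        subprocess.run(cmd, check=True)

    def build(args):
        run("docker", "build", "-t", "myapp", ".")

    def test(args):
        run("pytest", "-q")

    def main():
        parser = argparse.ArgumentParser(prog="ci")
        sub = parser.add_subparsers(dest="stage", required=True)
        sub.add_parser("build").set_defaults(func=build)
        sub.add_parser("test").set_defaults(func=test)
        args = parser.parse_args()
        args.func(args)

    if __name__ == "__main__":
        main()

The workflow stays dumb, and the same entry point runs locally.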

Storment33 9 hours ago | parent [-]

Just use a task runner (Make, Just, Taskfile); this is what they were designed for.

jonhohle 8 hours ago | parent | next [-]

I typically use make for this and feel like I’m constantly clawing back scripts written in workflows that are hard to debug if they’re even runnable locally.

This isn’t only a problem with GitHub Actions though. I’ve run into it with every CI runner I’ve come across.

never_inline 7 hours ago | parent | prev [-]

In many enterprise environments, deployment logic would be quite large for bash.

Storment33 7 hours ago | parent [-]

Personally, I have never found Python as a task runner to be less code, more readable, or more maintainable.

ufo 9 hours ago | parent | prev | next [-]

How do you handle persistent state in your actions?

For my actions, the part that takes the longest to run is installing all the dependencies from scratch. I'd like to speed that up but I could never figure it out. All the options I could find for caching deps sounded so complicated.

embedding-shape 9 hours ago | parent | next [-]

> How do you handle persistent state in your actions?

You shouldn't. Besides caching that is.

> All the options I could find for caching deps sounded so complicated.

In reality, it's fairly simple, as long as you leverage content-hashing. First, take your lock file and compute its sha256sum. Then check if the cache has an artifact with that hash as the ID. If it's found, download and extract it; those are your dependencies. If not, you run the installation of the dependencies, then archive the results with the ID set to the hash.

There really isn't more to it. I'm sure there are helpers/sub-actions/whatever Microsoft calls them, for doing all of this in 1-3 lines or something.
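As a rough illustration of the shape of it (the lock file name, cache directory and npm/tar commands are just stand-ins for whatever your stack and cache store actually are):

    import hashlib
    import pathlib
    import subprocess

    LOCK_FILE = pathlib.Path("package-lock.json")  # hypothetical lock file
    CACHE_DIR = pathlib.Path("/tmp/dep-cache")     # stand-in for the real cache store

    # Content-hash the lock file; that hash is the cache key.
    key = hashlib.sha256(LOCK_FILE.read_bytes()).hexdigest()
    archive = CACHE_DIR / f"deps-{key}.tar.gz"

    if archive.exists():
        # Cache hit: restore node_modules/ from the archive.
        subprocess.run(["tar", "-xzf", str(archive)], check=True)
    else:
        # Cache miss: install, then archive the result under the key.
        subprocess.run(["npm", "ci"], check=True)
        CACHE_DIR.mkdir(parents=True, exist_ok=True)
        subprocess.run(["tar", "-czf", str(archive), "node_modules"], check=True)

The hosted helpers (e.g. actions/cache) do the same dance for you, keyed on a hash of the lock file.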

ufo 8 hours ago | parent [-]

The tricky bit for me was figuring out which cache to use, and how to use and test it locally. Do you use the proprietary github actions stuff? If the installation process inside the actions runner is different from what we use in the developer machines, now we have two sets of scripts and it's harder to test and debug...

embedding-shape 8 hours ago | parent [-]

> Do you use the proprietary github actions stuff?

If I can avoid it, no. Almost everything I can control is outside of the Microsoft ecosystem. But as a freelancer, I have to deal a bunch with GitHub and Microsoft anyways, so in many of those cases, yes.

Many times, I end up using https://github.com/actions/cache for the clients who already use Actions, and none of that runs in the local machines at all.

Typically I use a single Makefile/Justfile that sometimes has most of the logic inside of it for running tests and whatnot, and sometimes shells out to "proper" scripts.

But that's disconnected from the required "setup", so Make/Just doesn't actually download dependencies; that's outside the responsibilities of whatever runs the tests.

And also, with a lot of languages it doesn't matter if you run an extra "npm install" over an already existing node_modules/; it'll figure out what's missing/already there. So you could in theory still have "make test" do absolutely everything locally, including installing dependencies (if you wish), and still do the whole "hash > find cache > extract > continue" thing before running "make test" in CI, and it'll skip the dependencies part if it's already there.

philipp-gayret 9 hours ago | parent | prev | next [-]

Depends on the build toolchain but usually you'd hash the dependency file and that hash is your cache key for a folder in which you keep your dependencies. You can also make a Docker image containing all your dependencies but usually downloading and spinning that up will take as long as installing the dependencies.

For caching you use GitHub's own cache action.

1a527dd5 9 hours ago | parent | prev | next [-]

You don't.

For things like installing deps, you can use GitHub Actions' caching, or several third-party runners that have their own caching capabilities, which are more mature than what GHA offers.

plagiarist 9 hours ago | parent | prev [-]

If you are able to use the large runners, custom images are a recent addition to what GitHub offers.

https://docs.github.com/en/actions/how-tos/manage-runners/la...

tracker1 5 hours ago | parent | prev | next [-]

Minor variance on #1: I've come to use Deno TypeScript scripts for anything more complex than what can easily be done in bash or PowerShell. While I recognize that pwsh can do a LOT out of the box, I absolutely hate the ergonomics, and a lot of the interactions are awkward for people not used to it, while IMO more developers will be more closely aligned with TypeScript/JavaScript.

Not to mention, Deno can run TS directly and can reference repository/http modules directly without a separate install step, which is useful for shell scripting beyond what pwsh can do, e.g. pulling a DBMS client and interacting with it directly for testing, setup or configuration.

For the above reasons, I'll also use Deno for e2e testing over other languages that may be used for the actual project/library/app.

newsoftheday 7 hours ago | parent | prev | next [-]

> Don't use bash

What? Bash is the best scripting language available for CI flows.

linuxftw 8 hours ago | parent | prev | next [-]

1. Just no. Unless you are some sort of Windows shop.

jayd16 3 hours ago | parent | next [-]

Pwsh scripts are portable across macOS, Linux and Windows with arguably less headache than bash. It's actually really nice. You should try it.

If you don't like it, you can get bash to work on windows anyway.

rerdavies 7 hours ago | parent | prev [-]

If you're building for Windows, then bash is "just no", so it's either cmd/.bat or pwsh/.ps1. <shrugs>

c-hendricks 6 hours ago | parent | next [-]

All my Windows work / CI runs still use bash.

zabzonk 6 hours ago | parent | prev | next [-]

I develop on Windows, and I use bash and (GNU) make, a combination that cannot be beat, in my experience.

import 7 hours ago | parent | prev | next [-]

That’s the only reason for sure.

pixl97 7 hours ago | parent | prev [-]

I mean, if you're a Windows shop you really should be using powershell.

embedding-shape 9 hours ago | parent | prev [-]

Step 0: Stop using CI services that purposefully waste your time, and use CI services that have "Rebuild with SSH" or similar. From previous discussions (https://news.ycombinator.com/item?id=46592643), it seems like Semaphore CI still offers that.