pizlonator 13 hours ago

This is a good write up and I agree with pretty much all of it.

Two comments:

- LLVM IR is actually remarkably stable these days. I was able to rebase Fil-C from LLVM 17 to 20 in a single day of work. In other projects I’ve maintained an LLVM pass that worked across multiple LLVM versions, and it was straightforward to do.

- LICM register pressure is a big issue, especially when the source isn’t C or C++. I don’t think the problem here is necessarily LICM. It might be that regalloc needs to be taught to rematerialize.
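To make the register-pressure point concrete, here's a minimal C sketch (hypothetical function name; this shows the shape of the problem, not LLVM's actual transformation):

```c
/* When LICM hoists the invariant a * b out of the loop, its value
   stays live in a register across the entire loop body.  With many
   such hoisted invariants, those long live ranges force spills; a
   regalloc that can rematerialize would instead recompute a * b at
   the use, since a and b are live anyway. */
long sum_scaled(const long *v, long n, long a, long b) {
    long sum = 0;
    for (long i = 0; i < n; i++)
        sum += v[i] * (a * b);   /* a * b is loop-invariant */
    return sum;
}
```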

theresistor 13 hours ago | parent | next [-]

> It might be that regalloc needs to be taught to rematerialize

It knows how to rematerialize, and has for a long time, but the backend is generally more local/has less visibility than the optimizer. This causes it to struggle to consistently undo bad decisions LICM may have made.

pizlonator 13 hours ago | parent [-]

> It knows how to rematerialize

That's very cool, I didn't realize that.

> but the backend is generally more local/has less visibility than the optimizer

I don't really buy that. It's operating on SSA, so it has exactly the same view as LICM in practice (to my knowledge LICM doesn't cross function boundary).

LICM can't possibly know the cost of hoisting; regalloc does have decent visibility into cost. That's why this feels like a regalloc remat problem to me.

CalChris 10 hours ago | parent [-]

> to my knowledge LICM doesn't cross function boundary

LICM runs per-loop via runOnLoop(), but it runs after function inlining. Inlining enlarges functions, possibly revealing more invariants.

pizlonator 10 hours ago | parent [-]

Sure. Any pass that is scoped to functions (or even loops, or basic blocks) will have increased scope if run after inlining, and most passes run after inlining.

In the context of this thread, your observation is not meaningful. The point is: LICM doesn't cross function boundary and neither does regalloc, so LICM has no greater scope than regalloc.

weinzierl 11 hours ago | parent | prev | next [-]

"LLVM IR is actually remarkably stable these days."

I'm by no means an LLVM expert, but my takeaway from when I played with it a couple of years ago was that it is more like the union of different languages. Every tool and component in the LLVM universe had its own set of rules and requirements for the LLVM IR that it understands. The IR is more like a common vocabulary than a common language.

My bewilderment about LLVM IR not being stable between versions had given way to understanding that this freedom was necessary.

Do you think I misunderstood?

pizlonator 10 hours ago | parent | next [-]

> like the union of different languages

No. Here are two good ways to think about it:

1. It's the C programming language represented as SSA form and with some of the UB in the C spec given a strict definition.

2. It's a low level representation. It's suitable for lowering other languages to. Theoretically, you could lower anything to it since it's Turing-complete. Practically, it's only suitable for lowering sufficiently statically-typed languages to it.

> Every tool and component in the LLVM universe had its own set of rules and requirements for the LLVM IR that it understands.

Definitely not. All of those tools have a shared understanding of what happens when LLVM executes on a particular target and data layout.

The only flexibility is that you're allowed to alter some of the semantics on a per-target and per-datalayout basis. Targets have limited power to change semantics (for example, they cannot change what "add" means). Data layout is its own IR, and that IR has its own semantics - and everything that deals with LLVM IR has to deal with the data layout "IR" and has to understand it the same way.

> My bewilderment about LLVM IR not being stable between versions had given way to understanding that this freedom was necessary.

Not parsing this statement very well, but bottom line: LLVM IR is remarkably stable because of Hyrum's law within the LLVM project's repository. There's a TON of code in LLVM that deals with LLVM IR. So, it's super hard to change even the smallest things about how LLVM IR works or what it means, because any such change would surely break at least one of the many things in the LLVM project's repo.

jcranmer 10 hours ago | parent | next [-]

> 1. It's the C programming language represented as SSA form and with some of the UB in the C spec given a strict definition.

This is becoming steadily less true over time, as LLVM IR is growing somewhat more divorced from C/C++, but that's probably a good way to start thinking about it if you're comfortable with C's corner case semantics.

(In terms of frontends, I've seen "Rust needs/wants this" as much as Clang these days, and Flang and Julia are also pretty relevant for some things.)

There's currently a working group in LLVM on building better, LLVM-based semantics, and the current topic du jour of that WG is a byte type proposal.

pizlonator 9 hours ago | parent [-]

> This is becoming steadily less true over time, as LLVM IR is growing somewhat more divorced from C/C++, but that's probably a good way to start thinking about it if you're comfortable with C's corner case semantics.

First of all, you're right. I'm going to reply with amusing pedantry but I'm not really disagreeing

I feel like in some ways LLVM is becoming more like C-in-SSA...

> and the current topic du jour of that WG is a byte type proposal.

That's a case of becoming more like C! C has pointer provenance and the idea that byte copies can copy "more" than just the 8 bits, somehow.

(The C provenance proposal may be in a state where it's not officially part of the spec - I'm not sure exactly - but it's effectively part of the language in the sense that a lot of us already consider it to be part of the language.)
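A tiny C example of what "more than just the 8 bits" means in practice (hypothetical helper name, just to illustrate provenance flowing through a byte copy):

```c
#include <stddef.h>

/* A user-written memcpy over unsigned char.  Under a provenance
   model, each copied byte must carry the provenance of the source
   bytes, or a pointer reassembled in dst would not be
   dereferenceable -- so a "byte" holds more than its 8 value bits. */
void byte_copy(void *dst, const void *src, size_t n) {
    unsigned char *d = dst;
    const unsigned char *s = src;
    while (n--)
        *d++ = *s++;
}
```

Copying an `int *` with `byte_copy` and then dereferencing the copy is exactly the pattern a byte type is meant to bless.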

jcranmer 9 hours ago | parent [-]

The C pointer provenance proposal is still in TS form and is largely constructed by trying to retroactively justify the semantics of existing compilers (which all follow some form of pointer provenance, just not necessarily coherently). This is still an area where we have a decent idea of what we want the semantics to be, but it's challenging to come up with a working formalization.

I'd have to double-check, but my recollection is that the current TS doesn't actually require that you be able to implement user-written memcpy; rather, the authors threw up their hands and said "we hope compilers support this, but we can't specify how." In that sense, byte type is going beyond what C does.

pizlonator 9 hours ago | parent | next [-]

> The C pointer provenance is still in TS form and is largely constructed by trying to retroactively justify the semantics of existing compilers

That's my understanding too

> I'd have to double-check, but my recollection is that the current TS doesn't actually require that you be able to implement user-written memcpy, rather it's just something that the authors threw their hands up and said "we hope compilers support this, but we can't specify how."

That's also my understanding

> In that sense, byte type is going beyond what C does.

I disagree, but only because I probably define "C" differently than you.

"C", to me, isn't what the spec describes. If you define "C" as what the spec describes, then almost zero C programs are "C". (Source: in the process of making Fil-C, I experimented with various points on the spectrum here and have high confidence that to compile any real C program you need to go far beyond what the spec promises.)

To me, when we say "C", we are really talking about:

- What real C programs expect to happen.

- What real C compilers (like LLVM) make happen.

In that sense, the byte type is a case of LLVM hardening the guarantee that it already makes to real C programs.

So, LLVM having a byte type is a necessary component of LLVM supporting C-as-everyone-practically-uses-it.

Also, I would guess that we wouldn't be talking about the byte type if it weren't for C. Type-safe languages with well-defined semantics have no need to let the user write a byte-copy loop that does the right thing when it copies data of arbitrary type.

(Please correct me if I'm wrong, this is fun)

uecker 7 hours ago | parent [-]

The C standard has a conformance model that distinguishes between "strictly conforming" and "conforming" C programs. Almost zero C programs are strictly conforming, but many are conforming.

uecker 7 hours ago | parent | prev [-]

Bytewise copy just works with the TS. What it does not support is tracking provenance across the copy and doing optimization based on this. What we hope is that compilers drop these optimizations, because they are unsound.

weinzierl 9 hours ago | parent | prev [-]

Thanks for your detailed answer. You encouraged me to give it another try and have a closer look this time.

enos_feedler 11 hours ago | parent | prev [-]

This take makes sense in the context of MLIR's creation, which introduced dialects: namespaces within the IR. Given that it was created by Chris Lattner, I would guess he saw these problems with LLVM as well.

fooker 12 hours ago | parent | prev [-]

There is a rematerialize pass, there is no real reason to couple it with register allocation. LLVM regalloc is already somewhat subpar.

What would be neat is to expose all the right knobs and levers so that frontend writers can benchmark a number of possibilities and choose the right values.

I can understand this is easier said than done of course.

pizlonator 12 hours ago | parent [-]

> There is a rematerialize pass, there is no real reason to couple it with register allocation

The reason to couple it to regalloc is that you only want to remat if it saves you a spill.
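A hedged C sketch of that trade-off (illustrative names, not LLVM internals):

```c
/* mask is one instruction away from a live value (a), so it's a
   classic remat candidate: if registers run out, recomputing a << 3
   at the use is cheaper than spilling mask to the stack and
   reloading it.  If nothing spills, remat buys nothing -- which is
   why it's natural to make the decision inside regalloc. */
long mix(long a, long b, long c, long d, long e) {
    long mask = a << 3;
    long t = b * c + d * e;   /* other values keep registers busy */
    return t ^ mask;
}
```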

fooker 12 hours ago | parent [-]

Remat can produce a performance boost even when everything has a register.

Admittedly, this comes up more often in non-CPU backends.

pizlonator 12 hours ago | parent [-]

> Remat can produce a performance boost even when everything has a register.

Can you give an example?

fooker 12 hours ago | parent [-]

Rematerializing 'safe' computation from across a barrier or thread sync/wait works wonders.

Also loads and stores and function calls, but that's a bit finicky to tune. We usually tell people to update their programs when this is needed.

pizlonator 11 hours ago | parent [-]

> Rematerializing 'safe' computation from across a barrier or thread sync/wait works wonders.

While this is literally "rematerialization", it's such a different case of remat from what I'm talking about that it should be a different phase. It's optimizing for a different goal.

Also feels very GPU specific. So I'd imagine this being a pass you only add to the pipeline if you know you're targeting a GPU.

> Also loads and stores and function calls, but that's a bit finicky to tune. We usually tell people to update their programs when this is needed.

This also feels like it's gotta be GPU specific.

No chance that doing this on a CPU would be a speed-up unless it saved you reg pressure.