| ▲ | nhumrich 9 days ago |
| Nick Humrich here, the author who helped rewrite PEP 501 to introduce t-strings, which was the foundation for this PEP. I am not an author on this accepted PEP, but I know this PEP and story pretty well. Let me know if you have any questions. I am super excited this is finally accepted. I started working on PEP 501 4 years ago. |
|
| ▲ | Waterluvian 9 days ago | parent | next [-] |
| I often read concerns that complexity keeps being added to the language with yet another flavour of string or whatnot. Given that those who author and deliberate on PEPs are, kind of by definition, experts who spend a lot of time with the language, they might struggle to grok the Python experience from the perspective of a novice or beginner. How does the PEP process guard against this bias? |
| |
| ▲ | rtpg 9 days ago | parent | next [-] | | There are many long-term users of Python who participate in PEP discussion and argue on behalf of beginners[0], often because they teach Python professionally. There are also loads of people basically defaulting to "no" on new features, because they understand that there is a cost to supporting things. I will often disagree about the evaluation of that cost, but it's hard to say there is no cost. Nobody wants a system that is unusable, slow, hard to implement for, or hard to understand. People sometimes just have different weights on each of these properties. And some people are in a very awkward position of overestimating costs due to overestimating implementation effort. So you end up in discussions like "this is hard to understand!" "No it isn't!" Hard to move beyond, but the existence of these kinds of conversations serves, in a way, as proof that people aren't jumping on every new feature. Python is still a language that is conservative in what it adds. This should actually inspire more confidence in people that features added to Python are _useful_, because there are many people who default to not adding new features. The recent pickup in the pace of additions to Python is more an indicator of the process improving and identifying the good stuff than of a lowering of the bar. [0]: These discussions often get fairly intense. Understandability is definitely a core Python value, but I think sometimes discussions confuse "understandability" with "amount of things in the system". You don't have to fully understand pervasive hashing to understand Python's pervasive value equality semantics! A complex system is needed to support a simple one! | |
| ▲ | nhumrich 9 days ago | parent | prev | next [-] | | All discussion on PEPs happens in public forums where anyone can opine on things before they are accepted. I agree that the experts are more likely to participate in this exchange. And while this is wishy-washy, I feel like the process is really intended to benefit the experts more than the novices anyway. There have been processes put into place in recent years to try to curb the difficulty of things. One of those is that all new PEPs have to include a "how can you teach this to beginners" section, as seen here in this PEP: https://peps.python.org/pep-0750/#how-to-teach-this | | |
| ▲ | Waterluvian 9 days ago | parent | next [-] | | I think "how can you teach this to beginners?" is a fantastic, low-hanging fruit option for encouraging the wizards to think about that very important type of user. Other than a more broad "how is the language as a whole faring?" test, which might be done through surveys or other product-style research, I think this is just plainly a hard problem to approach, just by the nature that it's largely about user experience. | | |
| ▲ | gtirloni 8 days ago | parent [-] | | "How does this fit with everything else beginners have to learn to understand basic code?" is sorely needed. |
| |
| ▲ | anon-3988 8 days ago | parent | prev [-] | | The average Python developer does not even know what a "PEP" is. Open discussion is good, yes, but no one really knows what the average developer wants, because they simply don't care whether it's Python or Java or whatever else. "Some hammers are just shaped weird, oh well, just make do with it." For example, some people that I interview don't "get" why you have to initialize the dict entry before doing dict[k] += 1. They know that they have to do some ritual of checking for k in dict and setting dict[k] = 0, but they don't get that += desugars into dict[k] = dict[k] + 1. | | |
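To make that interview example concrete (plain dict behavior, nothing specific to this PEP):
counts = {}
word = "spam"
# counts[word] += 1 desugars to counts[word] = counts[word] + 1,
# so the read on the right-hand side raises KeyError for a missing key.
if word not in counts:
    counts[word] = 0
counts[word] += 1
# collections.defaultdict(int), or counts.get(word, 0) + 1, avoids the explicit check.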
| |
| ▲ | davepeck 9 days ago | parent | prev | next [-] | | You might find the Python discussion forums ([0] and [1]) interesting; conversation that guides the evolution of PEPs happens there. As Nick mentioned, PEP 750 had a long and winding road to its final acceptance; as the process wore on and the complexities of the earliest cuts were reconsidered, the two PEPs converged. [0] The very first announcement: https://discuss.python.org/t/pep-750-tag-strings-for-writing... [1] Much later in the PEP process: https://discuss.python.org/t/pep750-template-strings-new-upd... | |
| ▲ | jackpirate 9 days ago | parent | prev [-] | | Building off this question, it's not clear to me why Python should have both t-strings and f-strings. The difference between the two seems like a stumbling block to new programmers, and my "ideal python" would have only one of these mechanisms. | | |
| ▲ | nhumrich 9 days ago | parent | next [-] | | f-strings immediately become a string, and are indistinguishable from a normal string at runtime. t-strings introduce an object so that libraries can do custom logic/formatting on the template strings, such as deciding _how_ to format the string. My main motivation as an author of 501 was to ensure user input is properly escaped when inserted into SQL, which you can't enforce with f-strings. | |
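To sketch what that enables (not any particular library's API; `to_query` and its placeholder style are made up here, and the `string.templatelib` details follow the PEP's description, so Python 3.14+):
from string.templatelib import Template, Interpolation
def to_query(template: Template) -> tuple[str, list]:
    # Static parts pass through verbatim; interpolated values become
    # placeholders plus a separate parameter list for the DB driver.
    sql, params = [], []
    for part in template:
        if isinstance(part, Interpolation):
            sql.append("?")
            params.append(part.value)
        else:
            sql.append(part)
    return "".join(sql), params
user_id = "1; DROP TABLE users"   # hostile input stays a bound parameter
query, params = to_query(t"SELECT * FROM users WHERE id = {user_id}")
# query  == "SELECT * FROM users WHERE id = ?"
# params == ["1; DROP TABLE users"]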
| ▲ | williamdclt 9 days ago | parent | next [-] | | > ensure user input is properly escaped when inserting into sql I used to wish for that and got it in JS with template strings and libs around it. For what it’s worth (you got a whole PEP done, you have more credibility than I do) I ended up changing my mind, I think it’s a mistake. It’s _nice_ from a syntax perspective. But it obscures the reality of SQL query/parameter segregation, it builds an abstraction on top of SQL that’s leaky and doesn’t even look like an abstraction. And more importantly, it looks _way too close_ to the wrong thing. If the difference between the safe way to do SQL and the unsafe way is one character and a non-trivial understanding of string formatting in Python… bad things will happen. In a one-person project it’s manageable, in a bigger one where people have different experiences and seniority it will go wrong. It’s certainly cute. I don’t think it’s a good thing for SQL queries. | | |
| ▲ | nine_k 9 days ago | parent [-] | | I understand your concern, and I think the PEP addresses it. Quite bluntly, t"foo" is not a string, while f"foo" is. You'll get a typecheck error if you run a typechecker like any reasonable developer, and will get a runtime error if you ignore the type mismatch, because t"foo" even lacks a __str__() method. One statement the PEP could put front and center in the abstract could be "t-strings are not strings". | | |
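Concretely (a small sketch assuming the `string.templatelib` names the PEP describes, i.e. Python 3.14+):
from string.templatelib import Template
name = "world"
f_val = f"hello {name}"      # plain str
t_val = t"hello {name}"      # a Template object, not a str
isinstance(f_val, str)       # True
isinstance(t_val, Template)  # True
isinstance(t_val, str)       # False -- anything expecting a str will flag it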
| |
| ▲ | jackpirate 8 days ago | parent | prev [-] | | That all makes sense to me. But it definitely won't make sense to my intro to programming students. They already have enough weird syntax to juggle. | | |
| |
| ▲ | davepeck 9 days ago | parent | prev | next [-] | | For one thing, `f"something"` is of type `str`; `t"something"` is of type `string.templatelib.Template`. With t-strings, your code can know which parts of the string were dynamically substituted and which were not. | | |
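A small sketch of what that looks like, going off the attributes described in the PEP (details could shift before 3.14 ships):
from string.templatelib import Template
amount = 42
tmpl: Template = t"total: {amount:>6} units"
tmpl.strings                    # ("total: ", " units") -- the static parts
interp = tmpl.interpolations[0] # the dynamically substituted part
interp.value                    # 42, already evaluated
interp.expression               # "amount", the source text of the expression
interp.format_spec              # ">6"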
| ▲ | all2 9 days ago | parent | next [-] | | The types aren't so important. __call__ or reference returns type string, an f and a t will be interchangeable from the consumer side. Example, if you can go through (I'm not sure you can) and trivially replace all your fs with ts, and then have some minor fixups where the final product is used, I don't think a migration from one to the other would be terribly painful. Time-consuming, yes. | | | |
| ▲ | 9 days ago | parent | prev [-] | | [deleted] |
| |
| ▲ | skeledrew 9 days ago | parent | prev [-] | | Give it a few years to when f-string usage has worn off to the point that a decision can be made to remove it without breaking a significant number of projects in the wild. | | |
| ▲ | milesrout 9 days ago | parent | next [-] | | That will never happen. | | |
| ▲ | skeledrew 9 days ago | parent | next [-] | | Well if it continues to be popular then that is all good. Just keep it. What matters is that usage isn't complex for anyone. | | |
| ▲ | macNchz 9 days ago | parent [-] | | Well now we'll have four different ways to format strings, since removing old ones is something that doesn't actually happen: "foo %s" % "bar"
"foo {}".format("bar")
bar = "bar"; f"foo {bar}"
bar = "bar"; t"foo {bar}" # has extra functionality!
| | |
| ▲ | amenghra 9 days ago | parent | next [-] | | This is where an opinionated linter comes in handy. Ensures people gradually move to the “better” version while not breaking backwards compatibility. It does suck for beginners who end up having to know about all variations until their usage drops off. | | |
| ▲ | QuercusMax 9 days ago | parent [-] | | The linter is a big deal, actually. I've worked with Python off and on during the past few decades; I just recently moved onto a project that uses Python with a bunch of linters and autoformatters enabled. I was used to writing my strings ('foo %s' % bar), and the precommit linter told me to write f'foo {bar}'. Easy enough! |
| |
| ▲ | rtpg 9 days ago | parent | prev | next [-] | | printf-style formatting ("foo %s" % "bar") feels the most ready to be retired (except insofar as it probably never will, because it's a nice shortcut). The other ones at least are based on the same format string syntax. "foo {}".format("bar") would be an obvious "just use an f-string" case, except when the formatting happens far off. But in that case you could "just" use t-strings? Except in cases where you're (for example) reading a format string from a file. Remember, t- and f-strings are syntactic elements, so dynamism prevents usage of them! So you have the following use cases:
- printf-style formatting: some C-style string formatting is needed
- .format: you can't use an f-string because of non-locality in the data to format, and you can't use a t-string due to dynamism in the template itself
- f-string: you have the template and the data in the same spot lexically, and you just want string concatenation (very common!)
- t-string: you have the template and the data in the same spot lexically, but want to use special logic to actually build up your resulting value (which might not even be a string!)
The last two additions being syntax makes it hard to use them to cover all use cases of the first two. But in a specific use case? It's very likely that there is an exact best answer amongst these 4. | |
| ▲ | masklinn 8 days ago | parent | next [-] | | > printf-style formatting ("foo %s" % "bar") feels the most ready to be retired (except insofar as it probably never will, because it's a nice shortcut). It’s also the only one that is anywhere near safe to accept from user input. | |
| ▲ | pansa2 8 days ago | parent | next [-] | | I don’t think I’ve ever used % formatting in Python - what makes it safer than `format`? | | |
| ▲ | masklinn 8 days ago | parent [-] | | `str.format` allows the format string to navigate through indexes, entries, and attributes. If the result of the formatting is echoed back and any non-trivial object is passed in, it allows for all sorts of introspection. printf-style... does not support any of that. It can only format the objects passed in. |
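A toy illustration of the difference (the class and key names are invented; the point is only how far an attacker-controlled format string can navigate):
SECRET_KEY = "hunter2"          # a module-level secret
class Event:
    def __init__(self, name):
        self.name = name
# str.format lets the *format string* walk attributes and items of the argument:
"{0.__init__.__globals__[SECRET_KEY]}".format(Event("x"))   # -> "hunter2"
# printf-style can only call str()/repr() on the object, nothing more:
"hello %s" % Event("x")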
| |
| ▲ | rtpg 7 days ago | parent | prev [-] | | Very good point. While I think we could do away with the syntactic shorthand, definitely would want to keep some function/method around with the capabilities. |
| |
| ▲ | milesrout 8 days ago | parent | prev [-] | | .format is also nice because you can have more complex subexpressions broken over multiple lines instead of having complex expressions inside the {}. |
| |
| ▲ | skeledrew 9 days ago | parent | prev | next [-] | | And if it's being used, and isn't considered problematic, then it should remain. I've found use for all the current ones:
(1) for text that naturally has curlies,
(2) for templating
(3) for immediate interpolation and improved at-site readability
I see (4) being about the flexibility of (2) and the readability of (3). Maybe it'll eventually grow to dominate one or both, but it's also fine if it doesn't. I don't see (1) going away at all since the curly collision still exists in (4). | |
| ▲ | milesrout 8 days ago | parent | prev | next [-] | | Don't forget string.Template:
import string
t = string.Template("foo $bar")
t.substitute(bar="bar")
| |
| ▲ | darthrupert 8 days ago | parent | prev [-] | | Five, if you count the logging module. I hope t-strings will come there soon.
log.error("foo happened %s", reason)
|
| |
| ▲ | bcoates 8 days ago | parent | prev [-] | | Putting down my marker on the opposite. Once you're targeting a version of python that has t-strings, decent linters/libraries have an excuse to put almost all uses of f-strings in the ground. |
| |
| ▲ | aatd86 9 days ago | parent | prev [-] | | No backward compatibility?! | | |
| ▲ | skeledrew 9 days ago | parent [-] | | If the usage of a feature drops close enough to zero because there is a well-used alternative, what need is there for backward compatibility? If anything, it can be pushed to a third-party package on PyPI. |
|
|
|
|
|
| ▲ | patrec 9 days ago | parent | prev | next [-] |
| My memory is that ES6's template strings preceded f-strings. If that is correct, do you happen to know why python was saddled with f-strings, which seem like an obviously inferior design, in the first place? We are now at five largely redundant string interpolation systems (%, .format, string.Template, f-string, t-string). |
| |
| ▲ | nhumrich 9 days ago | parent | next [-] | | PEP 501, as originally written (not by me), was intended to be the competing standard against f-strings, and to be more in line with ES6's template strings. There was debate between the simpler f-string PEP (PEP 498) and PEP 501. Ultimately, it was decided to go with f-strings as the less confusing, more approachable version (and also easier to implement) and to "defer" PEP 501 to "see what happens". Since then, the Python internals have also changed, allowing t-strings to be even easier to implement (see PEP 701). We have seen what happens, and now it's introduced. f-strings and t-strings are not competing systems; they are different. Much like ES6 template literals and tagged templates, they are used for different things while the API intentionally feels similar.
f-strings are not inferior to t-strings; they are better for most use cases of string templating, where what you really want is just a string. | | |
| ▲ | patrec 7 days ago | parent [-] | | Thanks! > they are better for most use cases of string templating, where what you really want is just a string. I think use cases where you want to unconditionally bash a string together are rare. I'd bet that in >80% of cases the "just a string" is really a terrible representation of what is actually either some tree structure (html, sql, python, ...) or something that at least requires lazy processing (logging, where you only want to pay for expensive string formatting and generation if you are running at or above the level that the relevant logging line targets). |
| |
| ▲ | mardifoufs 8 days ago | parent | prev [-] | | I'm not familiar with ES6 template strings, but why are they better than f-strings? F-strings just work, and work well, in my experience so I'm wondering what I'm missing out on. Especially since the language I use the most is c++... So I guess I don't expect much out of string manipulation lol. | | |
| ▲ | patrec 7 days ago | parent | next [-] | | The problem with f-strings is that they make an extremely limited use case convenient (bashing unstructured text together) and thus people invariably use them for the less limited use cases for which no such convenient mechanism exists: constructing ASTs (including html and SQL), or logging (where you want to avoid unconditionally computing some expensive string representation). I do this myself. I basically always use the subtly wrong log.warning(f"Unexpected {response=} encountered") and not the correct, and depending on the log level cheaper, log.warning("Unexpected response=%s encountered", response). The extra visual noise is typically not worth the extra correctness and performance (I'd obviously not do this in some publicly exposed service receiving untrusted inputs). I'd argue these use cases are in fact more prevalent than the bashing-unstructured-text use case. Encouraging people to write injection vulnerabilities or performance and correctness bugs isn't great language design. |
| ▲ | WorldMaker 8 days ago | parent | prev [-] | | ES2015 template strings from the beginning supported "tagged template literals" where the tag is a function that gets the template itself and the objects passed to the "holes" in the template as separate arguments. From there that function can do things like turn the holes themselves into something like SQL parameter syntax and wrap the things that go in those holes in properly escaped SQL parameters. `some template ${someVar}` was f-strings and someFunction`some template ${someVar}` was more like what these t-strings provide to Python. t-strings return an object (called Template) with the template and the things that go into the "holes", whereas tagged templates are a function-calling pattern, but t-strings are still basically the other, richer half of ES2015+ template strings. |
|
|
|
| ▲ | _cs2017_ 9 days ago | parent | prev | next [-] |
Thank you! Curious what options for deferred evaluation were considered and rejected? IMHO, the main benefit of deferred evaluation isn't in the saving of a bit of code to define a deferred evaluation class, but in standardizing the API so that anyone can read the code without having to learn what it means in each project. Also: were prompt templates for LLM prompt chaining a use case that influenced the design in any way (examples being LangChain and dozens of other libraries with similar functionality)?
| |
| ▲ | nhumrich 9 days ago | parent | next [-] | | One solution that existed for a while was using the `!` operator for deferred evaluation: `t!'my deferred {str}'`. The main reason for not having deferred evaluation was that it over-complicated the feature quite a bit and introduced a rune. Deferred evaluation also has the potential to dramatically increase complexity for beginners in the language, as it can be confusing to follow if you don't know what is going on. Which means "deferred by default" wasn't going to be accepted. As for LLMs, they were not the main consideration, as the PEP process here started before LLMs were popular. | |
| ▲ | _cs2017_ 8 days ago | parent [-] | | Ah interesting, so the complexity wasn't in the API design or implementation, but only in the additional rune? Is that really such a big cost? |
| |
| ▲ | davepeck 9 days ago | parent | prev [-] | | > were prompt templates for LLM prompt chaining a use case that influenced the design in any way Maybe not directly, but the Python community is full of LLM users and so I think there's a general awareness of the issues. | | |
| ▲ | andy99 9 days ago | parent [-] | | Is there an example of how these could be used in LLM prompting? |
|
|
|
| ▲ | smnrchrds 8 days ago | parent | prev | next [-] |
| Thank you for your work on this topic and for answering questions here. I have a question: is there a way to avoid the security issues with string formatting described here? It seems like all (most?) string formatting options suffer from the same issue. https://lucumr.pocoo.org/2016/12/29/careful-with-str-format/ |
|
| ▲ | leobuskin 8 days ago | parent | prev | next [-] |
As I understand it, this may help a bit with logging performance; not sure, still trying to understand the Template abilities. So, right now, you have two options to log: 1. `logger.debug(f'Processing {x}')` - looks great, but evaluates anyway, even if the logging level > `logging.DEBUG`; 2. `logger.debug('Processing %s', x)` - won't evaluate till necessary. What would be the approach with t-strings in this case? Would we get any benefits?
| |
| ▲ | bcoates 8 days ago | parent | next [-] | | The expression (x) is eagerly evaluated in both cases, cuz that's how Python works. You can defer the format call but Python fundamentally doesn't have an equivalent of lazy/compile time flag argument evaluation and this doesn't change that. For a logger t-strings are mostly just a more pleasant and less bug-prone syntax for #2 | |
| ▲ | davepeck 8 days ago | parent | prev [-] | | T-strings, like f-strings, are eagerly evaluated -- so in this sense, no, there's no benefit. | | |
| ▲ | trashburger 8 days ago | parent [-] | | Not quite; the interpolations are not eagerly stringified which is the potentially expensive part. In this sense it's kind of a middle ground between the two approaches. | | |
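A rough sketch of that distinction (the render helper assumes the Template iteration API described in the PEP):
import time
class Big:
    def __repr__(self):
        time.sleep(1)              # stand-in for an expensive dump
        return "<big object>"
obj = Big()
record = t"state: {obj}"           # obj is captured eagerly, but repr() has not run yet
def render(template):              # only pays the formatting cost if actually called
    return "".join(
        part if isinstance(part, str) else format(part.value, part.format_spec)
        for part in template
    )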
| ▲ | davepeck 8 days ago | parent [-] | | Sure, good point; I generally think of evaluation, not stringification, as the likely dominant expense. But maybe it’s sometimes the other way around? |
|
|
|
|
| ▲ | frainfreeze 9 days ago | parent | prev | next [-] |
Nice work on PEP 501! Probably a silly question, but how come PEP 292 isn't mentioned anywhere in PEP 750?
| |
| ▲ | davepeck 9 days ago | parent [-] | | My hope is to write some new documentation as 3.14 nears release that explains the (growing) constellation of string formatting mechanisms in Python and describes when they might each be useful. They overlap to some degree, but each has a unique twist that makes it useful in different situations. PEP 292 isn't going anywhere and is used, for instance, in really powerful libraries like `flufl.i18n`. | |
| ▲ | sevensor 8 days ago | parent [-] | | Is a PEP 750 Template entirely different from a PEP 292 Template? I’m a bit confused about the relationship. | | |
| ▲ | davepeck 8 days ago | parent [-] | | Yeah, they’re unrelated. (PEP 750 places its Template and Interpolation classes in the new string.templatelib) |
|
|
|
|
| ▲ | bjourne 9 days ago | parent | prev | next [-] |
| Does Python really need yet another type of string literal? I feel like while templating is a good addition to the standard library, it's not something that needs syntactic support. t"blah blah" is just an alias for Template("blah blah", context), isn't it? |
| |
| ▲ | nhumrich 9 days ago | parent | next [-] | | Yes, it does actually need syntax support. In order for it to work, you need to preserve which parts of the string are static (hard-coded) and which parts are dynamic (likely user input), which you can only do at the syntax level. You could potentially do it by hand, using placeholders like with `%`, but we now live in an f-string world and have something better. The syntax highlighting and ergonomics of f-strings are so good that devs prefer them in most cases. The idea is to make the most ergonomic thing also the safest thing. By decreasing ergonomics, you reduce the adoption of safer semantics. | |
| ▲ | bjourne 9 days ago | parent [-] | | That's why I specified the context argument. Something like Template("{name} {address}", dict(name = ..., address = ...)) would be exactly equivalent to t"{name} {address}" assuming those variables are fetched from the local scope. | | |
| ▲ | nhumrich 8 days ago | parent | next [-] | | Yes, which is essentially how SQLAlchemy works today. You can still put strings in the context though, so for more complex things, it's turtles all the way down. Also, as f-strings are more ergonomic, people now reach for them, even when they shouldn't. | |
| ▲ | thayne 8 days ago | parent | prev [-] | | So now you are repeating the name of each interpolated value three times (once in the template string, once for the key in the dict, once for the actual value). Yes, you can do that, but a t-string is much more ergonomic and, IMO, more readable. | |
| ▲ | bjourne 8 days ago | parent [-] | | Yes, of course. Any function with syntactic support will be more "ergonomic" than one without. But t-strings are for a relatively niche use case and can't even fully replace f-strings since they aren't actually strings. Even for input sanitizing they seem insufficient since you can't use them to create pre-compiled statements/templates. | |
| ▲ | thayne 8 days ago | parent [-] | | Preventing injection attacks in sql queries, html, etc. is a niche use case? | | |
| ▲ | bjourne 8 days ago | parent [-] | | Yes. And it also relies on pre-compilation, which t-strings do not support. | | |
| ▲ | thayne 7 days ago | parent [-] | | No, you don't need pre-compilation, assuming by pre-compilation you mean compiling a template that you pass values to later.[1] T-strings allow a library to perform transformations on the values, such as escaping them or passing them as separate values to a parameterized query. Escaping HTML and parameterizing SQL queries were the first two example use cases given in the PEP. And I disagree that such use cases are niche. In my experience, sanitizing user input is an extremely common thing to need to do, and having the language and library make it as easy as possible to do correctly is a very good thing. [1]: I do wish they hadn't called these Templates, because it's not really a template so much as an intermediate representation of an interpolated value. | |
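A sketch of the HTML case (hand-rolled here, not any shipped library's API; the point is that a library writes this once and callers just pass a t-string):
import html
from string.templatelib import Template
def safe_html(template: Template) -> str:
    # Static parts are trusted markup; interpolated values get escaped.
    return "".join(
        part if isinstance(part, str) else html.escape(str(part.value))
        for part in template
    )
comment = '<script>alert("pwned")</script>'
safe_html(t"<p>{comment}</p>")
# -> '<p>&lt;script&gt;alert(&quot;pwned&quot;)&lt;/script&gt;</p>'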
| ▲ | bjourne 7 days ago | parent [-] | | Yes, the web is a niche. Outside of the web there is simply no possibility of turning untrusted input into executable code, so sanitization isn't needed. In web development you already have two dozen templating libraries that offer much more comprehensive, safe, and fast text-generation solutions than t-strings do. Pre-compilation means that you first compile the template, then you supply the template with values when you render it, possibly multiple times. This is not possible with t-strings since the values are bound when the t-string is created. | |
| ▲ | thayne 5 days ago | parent [-] | | Even if you accept that "web" is niche (which I don't), and all your input is trusted not to be malicious (which is not necessarily true for non-web applications, especially if they are privileged), you still need to worry about input with special characters causing bugs. Web apps don't have a monopoly on using a database, or on generating strings in a specific syntax that includes user input. With respect to compilation, that is basically how t-strings work, but it is the Python interpreter that does the compilation. When it parses the t-string, it compiles it to (byte)code that generates a Template object from the expressions in scope when it is evaluated, which may happen more than once. And if you really want a template that is a separate object that is passed the values separately, you can just wrap a t-string in a function that takes the parameters as arguments. > two dozen templating libraries that offer much more comprehensive safe and fast text-generation solutions than what t-strings do But t-strings allow those libraries to be safer (users are less likely to accidentally interpolate values with an f-string if a t-string is required) and possibly faster (since the Python interpreter does the hard work of splitting up the string for you). t-strings don't replace those libraries; they allow them to be better. | |
| ▲ | bjourne 5 days ago | parent [-] | | In non-web contexts untrusted input is not interpolated into the executable streams so you don't worry about special characters. E.g., there is no point in "sanitizing" the name of a variable in a C compiler. > And if you really want a template that is a separate object that is passed the values separately, you can just wrap a t-string in a function that takes the parameters as arguments. No, you can't do that: "Template strings are evaluated eagerly from left to right, just like f-strings. This means that interpolations are evaluated immediately when the template string is processed, not deferred or wrapped in lambdas." Every function evaluation creates a new Template object, it does not reuse a precompiled one. > and possibly faster Possibly not, since precompilation is not supported. | | |
| ▲ | thayne 5 days ago | parent [-] | | > In non-web contexts untrusted input is not interpolated into the executable streams I don't know what you mean by `executable` streams, but besides databases, as I've already mentioned, a common thing that shows up in non-web applications is invoking a shell command that includes a user-supplied file name as part of it. Currently doing so safely means you need to call `shlex.quote` or similar on the filename, but with t-strings you could have something like: `shell(t"some-command {filename} 2> somefile | other-command")`. And that is just one specific example. There are other cases where it might be useful as well, like, say, generating an XML configuration file from a template that includes user-supplied input. > No, you can't do that... Every function evaluation creates a new Template object, it does not reuse a precompiled one. The code that generates that Template object is pre-compiled, though. If you define a function like:
def my_template(a, b, c):
    return t"a={a} b={b} c={c}"
When Python parses that, it will generate bytecode equivalent to:
def my_template(a, b, c):
    return Template("a=", Interpolation(a, ...), " b=", Interpolation(b, ...), " c=", Interpolation(c, ...))
Yes, it does create a new `Template` object every time `my_template` is called, but it doesn't have to re-parse the template string each time, which is an improvement over existing APIs that do re-parse a template string every time it is used.
|
|
|
|
|
|
|
|
|
| |
| ▲ | thayne 8 days ago | parent | prev [-] | | A library can't capture interpolated variables |
|
|
| ▲ | EndsOfnversion 9 days ago | parent | prev [-] |
| [flagged] |