| ▲ | Feynman vs. Computer(entropicthoughts.com) |
| 34 points by cgdl 4 hours ago | 18 comments |
| |
|
| ▲ | JKCalhoun 3 hours ago | parent | next [-] |
| As a hobbyist, I'm playing with analog computer circuits right now. If you can match your curve with a similar voltage profile, a simple analog integrator (an op-amp with a capacitor connected in feedback) will also give you the area under the curve (also as a voltage, of course). Analog circuits (and op-amps just generally) are surprisingly cool. I know, kind of off on a tangent here, but I have integration on the brain lately. You say "4 lines of Python", and I say "1 op-amp". |
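A minimal numerical sketch of what that one op-amp does, assuming an ideal inverting integrator (Vout = -(1/RC) · ∫ Vin dt; the component values below are made up for illustration):

```python
# Euler-step simulation of an ideal op-amp integrator:
#   Vout(t) = -(1/(R*C)) * integral of Vin dt
R, C = 10e3, 1e-6      # assumed values: 10 kOhm, 1 uF -> RC = 10 ms
dt = 1e-5              # 10 us simulation time step
vin = 1.0              # constant 1 V input
vout = 0.0
for _ in range(1000):  # simulate 10 ms (one RC time constant)
    vout -= vin / (R * C) * dt
print(round(vout, 6))  # -> -1.0, i.e. the area under Vin, scaled by -1/RC
```

A constant 1 V input integrated over one RC time constant lands at exactly -1 V, which is the standard sanity check for this circuit.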
| |
| ▲ | dreamcompiler 2 hours ago | parent | next [-] | | Yep. This is also how you solve differential equations with analog computers. (You need to recast them as integral equations because real-world differentiators are not well-behaved, but it still works.) https://i4cy.com/analog_computing/ | | |
| ▲ | ogogmad 27 minutes ago | parent [-] | | How does this compare to the Picard-Lindelof theorem and the technique of Picard iteration? |
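Picard iteration can be carried out numerically in exactly the spirit of the parent comment: repeatedly apply a cumulative integral to the current guess. A sketch (not an answer to the comparison question) for y' = y, y(0) = 1, using the trapezoid rule on an assumed grid; the iterates converge to e^x:

```python
import math

# Picard iteration for y' = y, y(0) = 1 on [0, 1]:
#   y_{k+1}(x) = 1 + integral_0^x y_k(t) dt
N = 100
h = 1.0 / N
y = [1.0] * (N + 1)          # initial guess: y_0(x) = 1
for _ in range(30):          # Picard iterations (converge like 1/k!)
    new = [1.0]
    for i in range(N):
        # cumulative trapezoid integral of the current iterate
        new.append(new[i] + h * (y[i] + y[i + 1]) / 2)
    y = new
print(abs(y[N] - math.e))    # small: limited by the trapezoid step, not by the iteration
```

After ~30 iterations the remaining error is entirely discretization error from the trapezoid rule, which is the numerical analogue of the Picard-Lindelof fixed point having been reached.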
| |
| ▲ | addaon 2 hours ago | parent | prev [-] | | One of my favorite circuits from Korn & Korn [0] is an implementation of an arbitrary function of a single variable. Take an oscilloscope-style display tube. Put your input on the X axis as a deflection voltage. Close a feedback loop on the Y axis with a photodiode, and use the Y axis deflection voltage as your output. Cut your function of one variable out of cardboard and tape to the front of the tube. [0] https://www.amazon.com/Electronic-Analog-Computers-D-c/dp/B0... |
|
|
| ▲ | Animats an hour ago | parent | prev | next [-] |
| Good numerical integration is easy, because summing smooths out noise. Good numerical differentiation is hard, because noise is amplified. Conversely, good symbolic integration is hard, because you can get stuck and have to try another route through a combinatoric maze. Good symbolic differentiation is easy, because just applying the next obvious operation usually converges. Huh. Mandatory XKCD: [1] [1] https://xkcd.com/2117/ |
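The asymmetry is easy to demonstrate. A sketch with a deterministic worst-case "noise" (alternating ±1e-4) added to sin(x): summation cancels it, while a finite difference divides it by the step size:

```python
import math

h = 1e-3
xs = [k * h for k in range(1001)]
noise = 1e-4
# sin(x) plus a worst-case alternating "measurement noise" of +/- 1e-4
ys = [math.sin(x) + noise * (-1) ** k for k, x in enumerate(xs)]

# Integration (trapezoid rule): the alternating noise cancels in the sum
integral = sum((ys[k] + ys[k + 1]) / 2 * h for k in range(1000))
print(abs(integral - (1 - math.cos(1.0))))   # ~4e-8: essentially noise-free accuracy

# Differentiation (forward difference): the noise is divided by h
deriv = (ys[501] - ys[500]) / h
print(abs(deriv - math.cos(xs[500])))        # ~0.2: the 1e-4 noise, amplified by 2/h
```

Shrinking h makes the truncation error of the difference quotient smaller but the noise amplification worse, which is exactly why naive numerical differentiation is hard.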
| |
| ▲ | kkylin 14 minutes ago | parent [-] | | That's exactly right. A couple more things: - Differentiating a function composed of simpler pieces always "converges" (the process terminates). One just applies the chain rule. Among other things, this is why automatic differentiation is a thing. - If you have an analytic function (a function expressible locally as a power series), a surprisingly useful trick is to turn differentiation into integration via the Cauchy integral formula. Provided a good contour can be found, this gives a nice way to evaluate derivatives numerically. |
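The Cauchy-integral trick is short to sketch: parametrize a circle around the point of interest, and the trapezoid rule (spectrally accurate for periodic integrands) does the rest. Function and parameter names here are illustrative:

```python
import cmath
import math

def cauchy_derivative(f, a, order=1, radius=0.5, n_points=64):
    """n-th derivative of an analytic f at a, via the Cauchy integral formula
        f^(n)(a) = n!/(2*pi*i) * contour integral of f(z)/(z-a)^(n+1) dz
    on a circle of the given radius (which must stay inside f's domain of
    analyticity). Substituting z = a + r*e^(i*theta) and applying the
    trapezoid rule gives the sum below."""
    acc = 0j
    for k in range(n_points):
        w = radius * cmath.exp(2j * math.pi * k / n_points)  # w = z - a
        acc += f(a + w) / w ** order
    return math.factorial(order) * acc / n_points

print(cauchy_derivative(cmath.exp, 0.0).real)           # ~1.0  (d/dx e^x at 0)
print(cauchy_derivative(cmath.sin, 0.0, order=3).real)  # ~-1.0 (third derivative of sin at 0)
```

Because the noise-free integrand is smooth and periodic in theta, a few dozen sample points already give near machine-precision derivatives, with none of the step-size dilemma of finite differences.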
|
|
| ▲ | bananaflag 3 hours ago | parent | prev | next [-] |
| > I hear that in electronics and quantum dynamics, there are sometimes integrals whose value is not a number, but a function, and knowing that function is important in order to know how the thing it’s modeling behaves in interactions with other things. I'd be interested in this. So finding classical closed form solutions is the actual thing desired there? |
| |
| ▲ | morcus 2 hours ago | parent [-] | | I think what the author was alluding to was the path integral formulation [of quantum mechanics] which was advanced in large part by Feynman. It's not that finding closed form solutions is what matters (I don't think most path integrals would have closed form solutions), but that the integration is done over the space of functions, not over Euclidean space (or a manifold in Euclidean space, etc...) |
|
|
| ▲ | messe 2 hours ago | parent | prev | next [-] |
| An integral trick I picked up from a lecturer at university: if you know the result has to be of the form ax^n for some a that's probably rational and some integer n but you're feeling really lazy and/or it's annoying to simplify (even for mathematica), just plug in a transcendental value for x like Zeta[3]. Then just divide by powers of that irrational number until you have something that looks rational. That'll give you a and n. It's more or less numerical dimensional analysis. It's not that useful for complicated integrals, but when you're feeling lazy it's a fucking godsend to know what the answer should be before you've proven it. EDIT: s/irrational/transcendental/ |
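A sketch of the trick, with every specific made up for illustration: the "unknown" integral is ∫₀ᵗ x² dx (true answer x³/3), evaluated numerically at the transcendental point t = π, and the loop hunts for the power of t that leaves behind a rational coefficient:

```python
from fractions import Fraction
import math

def simpson(f, a, b, n=1000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

# Suppose we "know" the answer has the form a * x**n but are too lazy
# to derive a and n. Evaluate at a transcendental point:
t = math.pi
value = simpson(lambda x: x * x, 0.0, t)   # numerically t**3 / 3

for n in range(8):
    candidate = value / t ** n
    guess = Fraction(candidate).limit_denominator(100)
    if abs(float(guess) - candidate) < 1e-9:
        print(f"answer looks like ({guess}) * x**{n}")  # -> answer looks like (1/3) * x**3
        break
```

The transcendence of π is doing the work: no wrong power of π lands within 1e-9 of a small rational, so the first hit is (with overwhelming likelihood) the right a and n.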
|
| ▲ | eig 3 hours ago | parent | prev | next [-] |
| What is the advantage of this Monte Carlo approach over a typical numerical integration method (like Runge-Kutta)? |
| |
| ▲ | a-dub a minute ago | parent | next [-] | | as i understand it: classical numerical methods -> analytically inspired and computationally efficient, with care taken to smooth out noise from sampling/floating point error/etc, whereas monte carlo -> high-cost brute-force random sampling, where you can improve accuracy by throwing more compute at the problem. | |
| ▲ | kens 2 hours ago | parent | prev | next [-] | | I was wondering the same thing, but near the end, the article discusses using statistical techniques to determine the standard error. In other words, you can easily get an idea of the accuracy of the result, which is harder with typical numerical integration techniques. | | |
| ▲ | ogogmad 31 minutes ago | parent [-] | | Numerical integration using interval arithmetic gets you the same thing but in a completely rigorous way. |
| |
| ▲ | edschofield an hour ago | parent | prev | next [-] | | Numerical integration methods suffer from the “curse of dimensionality”: they require exponentially more points in higher dimensions. Monte Carlo integration methods have an error that is independent of dimension, so they scale much better. See, for example, https://ww3.math.ucla.edu/camreport/cam98-19.pdf | |
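A sketch of both points in this subthread (the standard-error estimate kens mentions, and the dimension-independence): plain Monte Carlo over the unit hypercube, with the error bar read straight off the sample variance. The integrand is made up; its exact integral over [0,1]^d is d/3 in any dimension d:

```python
import math
import random

def mc_integrate(f, dim, n_samples, seed=0):
    """Monte Carlo estimate of the integral of f over the unit hypercube
    [0,1]^dim, plus a standard-error estimate from the sample variance."""
    rng = random.Random(seed)
    total = total_sq = 0.0
    for _ in range(n_samples):
        x = [rng.random() for _ in range(dim)]
        y = f(x)
        total += y
        total_sq += y * y
    mean = total / n_samples
    var = total_sq / n_samples - mean * mean
    stderr = math.sqrt(max(var, 0.0) / n_samples)
    return mean, stderr

# exact value of the integral of sum(x_i^2) over [0,1]^d is d/3
for d in (1, 10):
    est, se = mc_integrate(lambda x: sum(v * v for v in x), d, 100_000)
    print(d, est, se)
```

The error shrinks like 1/sqrt(n_samples) in both runs; going from 1 to 10 dimensions costs nothing extra, whereas a grid-based rule would need roughly the 1-D point count raised to the 10th power.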
| ▲ | MengerSponge 2 hours ago | parent | prev [-] | | Typical numerical methods are faster and way cheaper for the same level of accuracy in 1D, but it's trivial to integrate over a surface, volume, hypervolume, etc. with Monte Carlo methods. | | |
| ▲ | adrianN 2 hours ago | parent | next [-] | | At least if you can sample the relevant space reasonably accurately, otherwise it becomes really slow. | |
| ▲ | jgalt212 2 hours ago | parent | prev [-] | | The writer would have been well served to discuss why he chose Monte Carlo over simply summing up all the small trapezoids. |
|
|
|
| ▲ | ogogmad 35 minutes ago | parent | prev [-] |
| The usage of confidence intervals here reminds me of the clearest way to see that integration is a computable operator, in exactly the same way that a function like sin() or sqrt() is computable. It's true thanks to a natural combination of (i) interval arithmetic and (ii) the "Darboux integral" approach to defining integration. So, intervals can do magic. |
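A simplified sketch of the Darboux half of that claim (real interval arithmetic would use directed rounding; here, monotonicity of the example integrand stands in for interval evaluation): for an increasing f, the left and right Riemann sums are exactly the lower and upper Darboux sums, so they rigorously bracket the integral, and the bracket shrinks like 1/n:

```python
import math

def darboux_enclosure(f, a, b, n):
    """Lower/upper Darboux sums for a monotone-increasing f on [a, b]:
    left endpoints give the infimum on each cell, right endpoints the
    supremum, so the pair encloses the true integral."""
    h = (b - a) / n
    lower = sum(f(a + k * h) for k in range(n)) * h        # left endpoints
    upper = sum(f(a + (k + 1) * h) for k in range(n)) * h  # right endpoints
    return lower, upper

lo, hi = darboux_enclosure(math.exp, 0.0, 1.0, 10_000)
print(lo <= math.e - 1 <= hi, hi - lo)  # enclosure holds; width shrinks like 1/n
```

Since the output is an interval guaranteed to contain the answer, and its width can be driven below any requested tolerance by increasing n, this is the computable-operator structure the comment alludes to, in miniature.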