JohnKemeny · 4 days ago:

But a random walk is precisely a stochastic process for which the _next state_ depends only on the _current state_. In terms of graphs (where the term _random walk_ comes from), the next node is decided by randomly selecting a neighbor of the current node.
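The "depends only on the current state" condition is the Markov property. For the simple random walk on a graph described here, it can be written as (a standard formulation, not taken from the thread itself):

```latex
\Pr\left(X_{t+1}=y \mid X_t=x,\, X_{t-1},\dots,X_0\right)
  = \Pr\left(X_{t+1}=y \mid X_t=x\right)
  = \begin{cases}
      \dfrac{1}{\deg(x)} & \text{if } \{x,y\} \in E,\\[4pt]
      0 & \text{otherwise.}
    \end{cases}
```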
seanhunter · 3 days ago:

I don't think that's true for random walks in general. A random walk is a process derived from taking random steps in some mathematical space. It can include jumps, and it can include memory. Take a "path-avoiding" (self-avoiding) random walk: at time t, the distribution of the next step depends on which of the current node's neighbors the walk has already visited. That's not the current state, that's memory.
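A minimal sketch of the path-avoiding walk described here (graph and function names are illustrative, not from the thread): the next-step distribution depends on the entire visited history, so the process is not Markov on the node space.

```python
import random

def self_avoiding_walk(graph, start, max_steps=100, seed=None):
    """Random walk that never revisits a node. The allowed moves
    depend on the whole history of the walk, not just the current
    node -- this is the 'memory' being discussed."""
    rng = random.Random(seed)
    visited = [start]
    current = start
    for _ in range(max_steps):
        # Memory enters here: candidate moves exclude every visited node.
        choices = [v for v in graph[current] if v not in visited]
        if not choices:  # walk is trapped; stop
            break
        current = rng.choice(choices)
        visited.append(current)
    return visited

# Toy 4-cycle: 0 - 1 - 2 - 3 - 0
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
path = self_avoiding_walk(cycle, 0, seed=1)
```

On the 4-cycle the walk always visits all four nodes and then gets trapped, whichever direction it starts in.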
JohnKemeny · 3 days ago:

A random walk on a graph is a stochastic process that starts at a vertex and, at each step, moves to a randomly chosen neighboring vertex. Formally: given a graph, a random walk is a sequence of vertices [v1, v2, ..., vk] such that each v{i+1} is selected uniformly at random from the neighbors of vi. In weighted graphs, the next vertex is chosen with probability proportional to the edge weights.
seanhunter · 3 days ago:

I'm pretty sure it's not a requirement that the distribution be uniform, and also not that it be path-independent, as per the example I gave: a random walk where you're not allowed to visit a node more than once.
JohnKemeny · 3 days ago:

Here's a reference for the definition I'm using: https://www.cs.yale.edu/homes/spielman/561/lect10-18.pdf

It's from lecture notes (PDF) from a course on Spectral Graph Theory by Professor Daniel Spielman at Yale.
seanhunter · 3 days ago:

Those notes look interesting, thanks. I've really never heard anyone say that a random walk on a graph has to use uniform probabilities. In fact, in the context where I'm most familiar with them (lattice/grid pricers), it's always specified something like "the probability of branch a is p and of b is 1 − p" (i.e. explicitly not uniform), and those aren't weighted graphs in the usual sense.
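The p / (1 − p) branching mentioned here can be sketched as a biased walk on the integers (a toy illustration in the spirit of a binomial lattice, not any particular pricer):

```python
import random

def biased_lattice_walk(steps, p=0.6, seed=None):
    """Walk on the integers: up with probability p, down with 1 - p.
    The branch probabilities are fixed but not uniform, and no
    edge-weighted graph is involved -- just a biased coin."""
    rng = random.Random(seed)
    x, path = 0, [0]
    for _ in range(steps):
        x += 1 if rng.random() < p else -1
        path.append(x)
    return path
```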
srean · 3 days ago:

It doesn't have to be uniform or memoryless. However, when people mention random walks without further qualification, they are usually talking about the uniform, memoryless case. That vanilla case can be generalized extensively: in the kind of state space, the kind of index set, the kind of dependence, and so on.
vladimirralev · 3 days ago:

But also, "the memory" of a random walk can be encoded in the state itself. In your example, you can just keep accumulating the visited nodes, so your state space becomes the space of tuples of nodes from your initial space.
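A sketch of this state-augmentation trick (illustrative names): each state is the full tuple of visited nodes, so the transition depends only on the current (augmented) state, and the process is Markov again.

```python
import random

def markov_step(graph, state, rng=random):
    """One step of the Markov chain obtained by augmenting the state.
    A state is the tuple of all visited nodes; the next state is a
    function of that tuple alone -- no hidden memory remains."""
    current = state[-1]
    choices = [v for v in graph[current] if v not in state]
    if not choices:
        return state                       # absorbing: walk is trapped
    return state + (rng.choice(choices),)

# Same path-avoiding behaviour as a self-avoiding walk, but each
# transition reads only the current augmented state.
g = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
rng = random.Random(0)
state = (0,)
for _ in range(10):
    state = markov_step(g, state, rng)
```

Once the walk is trapped, the augmented chain simply stays in its absorbing state.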
jmaker · 4 days ago:

A Markov chain is commonly understood to be a time-discrete Markov process. Intuitively, it's a "chain" because you can single out its states at discrete points in time rather than over intervals. That's also Markov's original definition. Instead of Wasserman, one can look up the notion on Wikipedia. A path is a notion relevant to the state space of a process: it's a realization of the states at every single point in time over the time span of interest.