Questions tagged [markov-chain]

For questions about the use of Markov models in the field of AI/ML.

A Markov chain is "a stochastic model describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event."

Markov Chain - Wikipedia
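The definition above can be illustrated with a minimal simulation. This is a sketch in Python; the two-state weather chain and its transition probabilities are invented purely for illustration:

```python
import random

# Hypothetical two-state weather chain: the distribution over the next
# state depends only on the current state (the Markov property).
transitions = {
    "sunny": [("sunny", 0.8), ("rainy", 0.2)],
    "rainy": [("sunny", 0.4), ("rainy", 0.6)],
}

def step(state):
    """Sample the next state given only the current one."""
    states, probs = zip(*transitions[state])
    return random.choices(states, weights=probs)[0]

def simulate(start, n):
    """Run the chain for n steps, returning the visited states."""
    chain = [start]
    for _ in range(n):
        chain.append(step(chain[-1]))
    return chain

print(simulate("sunny", 5))
```

Note that `step` never looks further back than the current state; that is exactly the "depends only on the state attained in the previous event" part of the quoted definition.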

18 questions
8
votes
1 answer

What is ergodicity in a Markov Decision Process (MDP)?

I have read about the concept of ergodicity in the safe RL paper by Moldovan (section 3.2) and in the RL book by Sutton (chapter 10.3, 2nd paragraph). The first one says that "a belief over MDPs is ergodic if and only if any state is reachable from any…
7
votes
2 answers

What is a Markov chain and how can it be used in creating artificial intelligence?

I believe a Markov chain is a sequence of events where each subsequent event depends probabilistically on the current event. What are examples of the application of a Markov chain and can it be used to create artificial intelligence? Would a…
WilliamKF
  • 2,493
  • 1
  • 24
  • 31
6
votes
2 answers

What is the difference between a Bayesian Network and a Markov Chain?

I am trying to understand the difference between a Bayesian Network and a Markov Chain. When I search for this on the web, the unanimous answer seems to be that a Bayesian Network is directional (i.e. it's a DAG) and a Markov Chain is not…
Newskooler
  • 163
  • 5
5
votes
1 answer

Detect patterns in sequences of actions

I have to analyse sequences of actions that look more or less like this JSON blob. The question I'm trying to answer is whether there are recurring (sub)patterns that different users adopt when asked to perform a specific task -- in this…
Morpheu5
  • 101
  • 4
3
votes
2 answers

Can a Markov decision process be dependent on the past?

As far as I know, MDPs are independent of the past. But the definition says that the same policy should always take the same action depending on the state. What if I define my state as the current "main" state + previous decisions? For example, in…
Miemels
  • 389
  • 2
  • 10
3
votes
1 answer

How can I use a Hidden Markov Model to recognize images?

How could I use a 16x16 image as input to an HMM? And at the same time, how would I train it? Can I use backpropagation?
3
votes
2 answers

Difference in continuing and episodic cases in Sutton and Barto - Introduction to RL, exercise 3.5

Exercise 3.5 The equations in Section 3.1 are for the continuing case and need to be modified (very slightly) to apply to episodic tasks. Show that you know the modifications needed by giving the modified version of…
2
votes
3 answers

Forward Diffusion Process Derivation In Diffusion Models

In papers and other material regarding diffusion models the forward diffusion process is defined by adding a small amount of Gaussian noise to an image $x_0$ for $T$ time steps. In each time step the noise has a variance of $\beta_t$. This process…
nkam
  • 21
  • 1
2
votes
0 answers

Can $Q$-learning or SARSA be thought of a Markov Chain?

I might just be overthinking a very simple question but nonetheless the following has been bugging me a lot. Given an MDP with non-trivial state and action sets, we can implement the SARSA algorithm to find the optimal policy or the optimal…
2
votes
0 answers

Are there any ways to model markov chains from time series data?

I want to make a thing that produces a stochastic process from time series data. The time series data is recorded every hour over the year, which means a 24-hour pattern exists for each of 365 days. What I want to do is something like below: Fit a…
JH Lee
  • 125
  • 4
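For questions like the one above, a common starting point is the maximum-likelihood estimate of the transition probabilities, obtained by counting observed transitions in the recorded sequence. A minimal sketch in Python (the `fit_markov_chain` helper and the toy data are hypothetical, standing in for discretized hourly readings):

```python
from collections import defaultdict

def fit_markov_chain(sequence):
    """Estimate transition probabilities from one observed sequence
    by counting transitions (the maximum-likelihood estimate)."""
    counts = defaultdict(lambda: defaultdict(int))
    for current, nxt in zip(sequence, sequence[1:]):
        counts[current][nxt] += 1
    probs = {}
    for state, nexts in counts.items():
        total = sum(nexts.values())
        probs[state] = {s: c / total for s, c in nexts.items()}
    return probs

# Toy data: each symbol could stand for a discretized hourly load level.
data = ["low", "low", "high", "low", "high", "high", "low", "low"]
print(fit_markov_chain(data))
```

With real hourly data one would first discretize the continuous values into states (e.g. by binning) before counting transitions.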
1
vote
0 answers

In the original GAN paper, why is it mentioned that you can sample deep directed graphical models without a Markov chain?

In the original GAN paper (table 2), why is it mentioned that you can sample deep directed graphical models without a Markov chain (well, they say without difficulties, but others list MCMC as a difficulty). I was wondering how this is done because…
1
vote
0 answers

Looking for help on initializing a continuous HMM for word-level ASR

I have been studying HMM implementation approaches on ASR for the last couple of weeks. This probabilistic model is very new to me. I am currently using a Python package called Pomegranate to implement an ASR model of my own for the Librispeech…
1
vote
1 answer

How is the probability transition matrix populated in the Markov process (chain) for a board game?

Following on from my other (answered) question: with regards to the Markov process (chain), if an environment is a board game and its states are the various positions the game pieces may be in, how would the transition probability matrix be…
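One common construction is sketched below for a hypothetical 4-square circular board where a fair coin flip advances the piece one or two squares: entry `P[i][j]` holds the probability of moving from square `i` to square `j`, so each row sums to 1. The board and the move rule are invented for illustration:

```python
# Hypothetical 4-square circular board: each turn a fair coin flip
# moves the piece forward 1 or 2 squares (wrapping around the board).
# P[i][j] is the probability of moving from square i to square j.
n = 4
P = [[0.0] * n for _ in range(n)]
for i in range(n):
    P[i][(i + 1) % n] += 0.5  # heads: advance one square
    P[i][(i + 2) % n] += 0.5  # tails: advance two squares

for row in P:
    assert abs(sum(row) - 1.0) < 1e-9  # each row is a probability distribution
print(P)
```

For a real board game the same recipe applies: for each state, enumerate the possible dice/card outcomes, map each outcome to its resulting state, and accumulate the outcome probabilities into that row of the matrix.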
1
vote
1 answer

In the Markov chain, how are the directions to each successive state defined?

I'm watching the David Silver series on YT which has raised a couple of questions: In the Markov process (or chain), how are the directions to each successive state defined? For example, how are the arrow directions defined for the MP below? What's…
1
vote
0 answers

Predicting Hot Categories In a Reference Manager

Reference managers like Zotero or Mendeley allow researchers to categorize papers into hierarchical categories called collections. The user navigates through a listing of these collections when filing a new item. The retrieval time grows something…