For example, consider a man who lives in a big city and can be in only 4 states: sleeping, eating, walking in the park, and playing cards.
A Markov chain can be used to predict the next state: for example, if the current state is sleeping, what is the probability that the next state is walking?
current state \ next state | sleep | eat | walking | play cards |
---|---|---|---|---|
sleep | 0.2 | 0.4 | 0.3 | 0.1 |
eat | 0.1 | 0.2 | 0.6 | 0.1 |
walking | 0.2 | 0.2 | 0.3 | 0.2 |
play cards | 0.1 | 0.4 | 0.2 | 0.3 |
If the man is sleeping, the next state is probably eating (because it is the most probable next event, with probability 0.4);
if he is eating, the next state is probably walking, and so on.
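To make this concrete, here is a minimal Python sketch of the idea (the names `transitions`, `most_probable_next` and `sample_next` are my own, not from any library): it encodes the table above as a dictionary and predicts the next state either by taking the most probable entry or by sampling it at random.

```python
import random

# Transition matrix from the table above: the outer key is the current state,
# the inner dict gives the probabilities of each possible next state.
transitions = {
    "sleep":      {"sleep": 0.2, "eat": 0.4, "walking": 0.3, "play cards": 0.1},
    "eat":        {"sleep": 0.1, "eat": 0.2, "walking": 0.6, "play cards": 0.1},
    "walking":    {"sleep": 0.2, "eat": 0.2, "walking": 0.3, "play cards": 0.2},
    "play cards": {"sleep": 0.1, "eat": 0.4, "walking": 0.2, "play cards": 0.3},
}

def most_probable_next(state):
    """Return the next state with the highest probability."""
    probs = transitions[state]
    return max(probs, key=probs.get)

def sample_next(state):
    """Pick the next state at random, weighted by the probabilities."""
    probs = transitions[state]
    return random.choices(list(probs), weights=probs.values())[0]

print(most_probable_next("sleep"))   # -> "eat" (0.4)
print(most_probable_next("eat"))     # -> "walking" (0.6)
print(sample_next("walking"))        # random, e.g. "walking" or "eat"
```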
This procedure can learn: if the observed next state is not the predicted one, we can update the probabilities in the matrix, and next time the prediction will be different.
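One simple way to do this kind of learning (just a rough sketch, not the only possible scheme; `observe` and `probabilities` are placeholder names) is to count the observed transitions and turn the counts into probabilities:

```python
from collections import defaultdict

# Count how many times each (current -> next) transition is observed,
# then convert the counts into probabilities. This lets the matrix be
# learned from data instead of being written by hand.
counts = defaultdict(lambda: defaultdict(int))

def observe(current, nxt):
    counts[current][nxt] += 1

def probabilities(current):
    total = sum(counts[current].values())
    return {state: n / total for state, n in counts[current].items()}

# Example: feed a few observed transitions and look at the result.
for current, nxt in [("sleep", "eat"), ("sleep", "eat"), ("sleep", "walking")]:
    observe(current, nxt)

print(probabilities("sleep"))  # approximately {'eat': 0.67, 'walking': 0.33}
```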
I know Markov chains have been used in some chatbots; in that case each state is one or more words, and the
sequence is a sequence of words. By training the chain on common phrases, the system was able
to produce phrases that made sense. Obviously the machine has no idea of the meaning of the phrase it is generating.
I'm still new to Markov chains and I still have to assimilate the concept, but I'm sure it's a powerful tool. These days I'm thinking about Markov chains, looking for some interesting application.
I started with neural networks, but they are heavy to use in Python on a device like the ESP32: training requires too much CPU computation.
I think Markov chains can be a simple alternative to neural networks, but they have to be used in the right way for the right application.