Monads vs Objects
The examples in the documentation of MonadRandom do a good job of showing how monads are useful. I have cut and pasted them here:
There is some correspondence between notions in programming and in mathematics:
random generator ~ random variable / probabilistic experiment
result of a random generator ~ outcome of a probabilistic experiment
Thus the signature
rx :: (MonadRandom m, Random a) => m a
can be read as "rx is a random variable". In do-notation, the line
x <- rx
means "x is an outcome of rx".
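For concreteness, rx might be a die roll. The following definition is only a hypothetical illustration (it is not part of the original text) and assumes the Control.Monad.Random module from the MonadRandom package:

import Control.Monad.Random (MonadRandom, getRandomR)

-- a hypothetical random variable: a fair six-sided die roll
rx :: MonadRandom m => m Int
rx = getRandomR (1, 6)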
In a language without higher-order functions, where the random generator is an ordinary "function", it is not possible to work with random variables; one can only compute with outcomes, e.g. rand() + rand(). In a language where random generators are implemented as objects, computing with random variables is possible but still cumbersome [as you need to wrap the generators in an object (Composition Pattern) and perform the computation as methods of the new object].
In Haskell we have both options: either computing with outcomes
do x <- rx
   y <- ry
   return (x+y)
or computing with random variables
liftM2 (+) rx ry
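To make the two styles concrete, here is a minimal runnable sketch. It assumes Control.Monad.Random from the MonadRandom package, reuses the hypothetical die roll rx from the earlier sketch, and adds a second hypothetical roll ry:

import Control.Monad (liftM2)
import Control.Monad.Random (MonadRandom, getRandomR, evalRandIO)

-- two hypothetical random variables: independent six-sided die rolls
rx, ry :: MonadRandom m => m Int
rx = getRandomR (1, 6)
ry = getRandomR (1, 6)

-- computing with outcomes
sumOutcomes :: MonadRandom m => m Int
sumOutcomes = do x <- rx
                 y <- ry
                 return (x + y)

-- computing with random variables
sumVariables :: MonadRandom m => m Int
sumVariables = liftM2 (+) rx ry

main :: IO ()
main = do
  s1 <- evalRandIO sumOutcomes   -- run the experiments in IO
  s2 <- evalRandIO sumVariables
  print (s1, s2)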
This means that liftM-like functions convert ordinary arithmetic into random-variable arithmetic [since you can't lift in OO, you would need to write your own version of each operation you'd like to support]. But there is also some arithmetic on random variables which cannot be performed on outcomes [or would be ad hoc in OO]. For example, given a function that repeats an action until the result fulfills a certain property (I wonder if there is already something of this kind in the standard libraries)
untilM :: Monad m => (a -> Bool) -> m a -> m a
untilM p m = do x <- m
                if p x then return x else untilM p m
we can suppress certain outcomes of an experiment. E.g. if
getRandomR (-10,10) is a uniformly distributed random variable between −10 and 10, then
untilM (0/=) (getRandomR (-10,10)) is a random variable with a uniform distribution on {−10, …, −1, 1, …, 10}.
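As a usage sketch (again assuming Control.Monad.Random from the MonadRandom package, and repeating the untilM defined above so the snippet stands alone), one can sample this non-zero random variable a few times and observe that 0 never appears:

import Control.Monad (replicateM)
import Control.Monad.Random (MonadRandom, getRandomR, evalRandIO)

-- untilM as defined above
untilM :: Monad m => (a -> Bool) -> m a -> m a
untilM p m = do x <- m
                if p x then return x else untilM p m

-- a uniformly distributed random variable on {-10, ..., -1, 1, ..., 10}
nonZero :: MonadRandom m => m Int
nonZero = untilM (0 /=) (getRandomR (-10, 10))

main :: IO ()
main = do
  xs <- evalRandIO (replicateM 20 nonZero)
  print xs   -- twenty samples, none of them 0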