Difference between Monad and Applicative in Haskell
I just read the following from typeclassopedia about the difference between `Monad` and `Applicative`. I can understand that there is no `join` in `Applicative`. But the following description looks vague to me and I couldn't figure out what exactly is meant by "the result" of a monadic computation/action. So, if I put a value into `Maybe`, which makes a monad, what is the result of this "computation"?
> Let’s look more closely at the type of `(>>=)`. The basic intuition is that it combines two computations into one larger computation. The first argument, `m a`, is the first computation. However, it would be boring if the second argument were just an `m b`; then there would be no way for the computations to interact with one another (actually, this is exactly the situation with `Applicative`). So, the second argument to `(>>=)` has type `a -> m b`: a function of this type, given a result of the first computation, can produce a second computation to be run. ... Intuitively, it is this ability to use the output from previous computations to decide what computations to run next that makes `Monad` more powerful than `Applicative`. The structure of an `Applicative` computation is fixed, whereas the structure of a `Monad` computation can change based on intermediate results.
Is there a concrete example illustrating "ability to use the output from previous computations to decide what computations to run next", which Applicative does not have?
My favorite example is the "purely applicative `Either`". We'll start by analyzing the base `Monad` instance for `Either`:

```haskell
instance Monad (Either e) where
  return        = Right
  Left e  >>= _ = Left e
  Right a >>= f = f a
```
This instance embeds a very natural short-circuiting notion: we proceed from left to right, and once a single computation "fails" into the `Left` then all the rest do as well. There's also the natural `Applicative` instance that any `Monad` has:

```haskell
instance Applicative (Either e) where
  pure  = return
  (<*>) = ap
```
where `ap` is nothing more than left-to-right sequencing before a `return`:

```haskell
ap :: Monad m => m (a -> b) -> m a -> m b
ap mf ma = do
  f <- mf
  a <- ma
  return (f a)
```
Now the trouble with this `Either` instance comes to light when you'd like to collect error messages which occur anywhere in a computation and somehow produce a summary of errors. This flies in the face of short-circuiting. It also flies in the face of the type of `(>>=)`:

```haskell
(>>=) :: m a -> (a -> m b) -> m b
```
If we think of `m a` as "the past" and `m b` as "the future" then `(>>=)` produces the future from the past so long as it can run the "stepper" `(a -> m b)`. This "stepper" demands that the value of `a` really exists in the future... and this is impossible for `Either`. Therefore `(>>=)` demands short-circuiting.
So instead we'll implement an `Applicative` instance which cannot have a corresponding `Monad`:

```haskell
instance Monoid e => Applicative (Either e) where
  pure = Right
```
Now the implementation of `(<*>)` is the special part worth considering carefully. It performs some amount of "short-circuiting" in its first 3 cases, but does something interesting in the fourth.

```haskell
  Right f <*> Right a = Right (f a)     -- neutral
  Left  e <*> Right _ = Left e          -- short-circuit
  Right _ <*> Left  e = Left e          -- short-circuit
  Left e1 <*> Left e2 = Left (e1 <> e2) -- combine!
```
Notice again that if we think of the left argument as "the past" and the right argument as "the future" then `(<*>)` is special compared to `(>>=)`: it's allowed to "open up" the future and the past in parallel instead of necessarily needing results from "the past" in order to compute "the future".
This means, directly, that we can use our purely `Applicative` `Either` to collect errors, ignoring any `Right`s if `Left`s exist in the chain:

```haskell
> Right (+1) <*> Left [1] <*> Left [2]
> Left [1,2]
```
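To make the error-collecting behavior concrete, here is a minimal, self-contained sketch. It wraps the idea in a hypothetical `Validation` newtype so it doesn't clash with the `Applicative (Either e)` instance that base already defines; the names `Validation`, `validateName`, and `validateAge` are illustrative, not from the original answer:

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- Hypothetical wrapper: same semantics as the instance above, but on a
-- fresh type so it doesn't conflict with base's Applicative (Either e).
newtype Validation e a = Validation (Either e a)
  deriving (Functor, Show)

instance Monoid e => Applicative (Validation e) where
  pure = Validation . Right
  Validation (Right f) <*> Validation (Right a) = Validation (Right (f a))
  Validation (Left e)  <*> Validation (Right _) = Validation (Left e)
  Validation (Right _) <*> Validation (Left e)  = Validation (Left e)
  Validation (Left e1) <*> Validation (Left e2) = Validation (Left (e1 <> e2))

data User = User String Int deriving Show

validateName :: String -> Validation [String] String
validateName n | null n    = Validation (Left ["name is empty"])
               | otherwise = pure n

validateAge :: Int -> Validation [String] Int
validateAge a | a < 0     = Validation (Left ["age is negative"])
              | otherwise = pure a

-- > User <$> validateName "" <*> validateAge (-1)
-- Validation (Left ["name is empty","age is negative"])
```

Both failing fields are reported together, which the short-circuiting `Monad` instance could never do.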
So let's flip this intuition on its head. What can we *not* do with a purely applicative `Either`? Well, since its operation depends upon examining the future prior to running the past, we must be able to determine the structure of the future without depending upon values in the past. In other words, we cannot write

```haskell
ifA :: Applicative f => f Bool -> f a -> f a -> f a
```
which satisfies the following equations:

```haskell
ifA (pure True)  t e == t
ifA (pure False) t e == e
```
while we can write `ifM`:

```haskell
ifM :: Monad m => m Bool -> m a -> m a -> m a
ifM mbool th el = do
  bool <- mbool
  if bool then th else el
```
such that

```haskell
ifM (return True)  t e == t
ifM (return False) t e == e
```
This impossibility arises because `ifA` embodies exactly the idea of the result computation depending upon the values embedded in the argument computations.
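To see this concretely, here is a small sketch using base's `Maybe`; `ifA'` is the best purely applicative approximation, not a real `ifA`:

```haskell
-- The closest Applicative analogue: both branches' effects always run.
ifA' :: Applicative f => f Bool -> f a -> f a -> f a
ifA' fb ft fe = (\b t e -> if b then t else e) <$> fb <*> ft <*> fe

-- > ifM  (Just True) (Just 1) Nothing   -- ifM as defined above
-- Just 1    -- the False branch is never run
-- > ifA' (Just True) (Just 1) Nothing
-- Nothing   -- Nothing's effect runs even though the condition is True
```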
The terse answer is context sensitivity: with a monad, you can make decisions about which processing path to follow based on previous results, whereas with applicative functors you always have to apply the same functions.
`Just 1` describes a "computation", whose "result" is 1. `Nothing` describes a computation which produces no results.
The difference between a Monad and an Applicative is that in the Monad there's a choice. The key distinction of Monads is the ability to choose between different paths in computation (not just break out early). Depending on a value produced by a previous step in the computation, the rest of the computation's structure can change.
Here's what this means. In the monadic chain

```haskell
return 42
  >>= (\x ->
    if x == 1
      then return (x + 1)
      else return (x - 1)
        >>= (\y -> return (1 / y)))
```
the `if` chooses what computation to construct. In the case of Applicative, in

```haskell
pure (1/) <*> (pure (+(-1)) <*> pure 1)
```
all the functions work "inside" computations; there's no chance to break up the chain. Each function just transforms the value it's fed. The "shape" of the computation structure is entirely "on the outside" from the functions' point of view.

A function could return a special value to indicate failure, but it can't cause the next steps in the computation to be skipped. They will all have to process the special value in a special way too. The shape of the computation cannot be changed according to a received value.
With monads, the functions themselves construct computations of their choosing.
Monads are not a replacement for applicative functors. Instead, every monad is an applicative functor (as well as a functor). It is considered good practice not to use `>>=` if all you need is `<*>`, or even `fmap`.
The key difference can be observed in the type of `ap` vs. the type of `=<<`:

```haskell
ap    :: Monad m => m (a -> b) -> (m a -> m b)
(=<<) :: Monad m => (a -> m b) -> (m a -> m b)
```
In both cases there is an `m a`, but only in the second case can the `m a` decide whether the function `(a -> m b)` gets applied. In its turn, the function `(a -> m b)` can "decide" whether the function bound next gets applied, by producing an `m b` that does not "contain" a `b` (like `[]`, `Nothing`, or `Left`).

In `Applicative` there is no way for the functions "inside" `m (a -> b)` to make such "decisions"; they always produce a value of type `b`.
```haskell
f 1 = Nothing  -- here f "decides" to produce Nothing
f x = Just x

Just 1 >>= f >>= g  -- g doesn't get applied, because f decided so
```
In `Applicative` this is not possible, so I can't show an example. The closest is:

```haskell
f 1 = 0
f x = x

g <$> f <$> Just 1  -- oh well, this will produce Just 0, but can't stop g
                    -- from getting applied
```
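For reference, here is a self-contained version of the first snippet above, with a concrete `g` chosen arbitrarily for illustration:

```haskell
f :: Int -> Maybe Int
f 1 = Nothing        -- here f "decides" to produce Nothing
f x = Just x

g :: Int -> Maybe Int
g x = Just (x * 10)  -- an arbitrary second step

-- > Just 1 >>= f >>= g
-- Nothing     -- g never runs: f cut the chain short
-- > Just 2 >>= f >>= g
-- Just 20     -- f let the value through, so g runs
```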
Another way to put it: applicatives normalize products, while monads normalize exponentials. `Applicative` takes a product of `m`s of any length (even the empty product `()`) and normalizes it to a single `m`, while `Monad` takes any level of nesting of `m` (even zero levels) and normalizes it to one level of `m`.
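A sketch of that "normalization" view in code (the helper names `pairA` and `flatten` are mine, not standard):

```haskell
import Control.Monad (join)

-- Applicative: collapse a product of effects into one effect.
pairA :: Applicative m => m a -> m b -> m (a, b)
pairA mx my = (,) <$> mx <*> my

-- Monad: collapse nested effects into one level.
flatten :: Monad m => m (m a) -> m a
flatten = join
```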
Here is my take on @J. Abrahamson's example as to why `ifA` cannot use the value inside e.g. `(pure True)`. In essence, it still boils down to the absence of the `join` function from `Monad` in `Applicative`, which unifies the two different perspectives given in typeclassopedia to explain the difference between `Monad` and `Applicative`.

So, using @J. Abrahamson's example of the purely applicative `Either`:

```haskell
instance Monoid e => Applicative (Either e) where
  pure = Right

  Right f <*> Right a = Right (f a)     -- neutral
  Left  e <*> Right _ = Left e          -- short-circuit
  Right _ <*> Left  e = Left e          -- short-circuit
  Left e1 <*> Left e2 = Left (e1 <> e2) -- combine!
```
(which has a similar short-circuiting effect to the `Either` `Monad`), and the `ifA` function

```haskell
ifA :: Applicative f => f Bool -> f a -> f a -> f a
```

what if we try to achieve the mentioned equations?

```haskell
ifA (pure True)  t e == t
ifA (pure False) t e == e
```
Well, as already pointed out, ultimately the content of `(pure True)` cannot be used by a later computation. But technically speaking, this isn't right. We *can* use the content of `(pure True)`, since an `Applicative` is also a `Functor` with `fmap`. We can do:

```haskell
ifA' b t e = fmap (\x -> if x then t else e) b
```
The problem is with the return type of `ifA'`, which is `f (f a)`. In `Applicative`, there is no way of collapsing two nested `Applicative`s into one. But this collapsing function is precisely what `join` in `Monad` performs. So,

```haskell
ifA = join . ifA'
```
would satisfy the equations for `ifA`, if we could implement `join` appropriately. What `Applicative` is missing here is exactly the `join` function. In other words, we can somehow use the result from a previous result in `Applicative`. But doing so in an `Applicative` framework will involve augmenting the type of the return value to a nested applicative value, which we have no means to bring back to a single-level applicative value. This will be a serious problem because, e.g., we cannot compose functions using `Applicative`s appropriately. Using `join` fixes the issue, but the very introduction of `join` promotes the `Applicative` to a `Monad`.
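To see the nesting concretely, here is a small check with `Maybe`, which (being a `Monad`) does have `join`:

```haskell
-- ghci> ifA' (Just True) (Just 'a') (Just 'b')
-- Just (Just 'a')    -- nested result of type Maybe (Maybe Char)
-- ghci> import Control.Monad (join)
-- ghci> join (ifA' (Just True) (Just 'a') (Just 'b'))
-- Just 'a'           -- collapsing the nesting required Monad's join
```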
> But the following description looks vague to me and I couldn't figure out what exactly is meant by "the result" of a monadic computation/action.
Well, that vagueness is somewhat deliberate, because what "the result" of a monadic computation is depends on each type. The best answer is a bit tautological: the "result" (or results, since there can be more than one) is whatever value(s) the instance's implementation of `(>>=) :: Monad m => m a -> (a -> m b) -> m b` invokes the function argument with.
> So, if I put a value into `Maybe`, which makes a monad, what is the result of this "computation"?
The `Maybe` monad looks like this:

```haskell
instance Monad Maybe where
  return = Just
  Nothing >>= _ = Nothing
  Just a  >>= k = k a
```
The only thing in here that qualifies as a "result" is the `a` in the second equation for `>>=`, because it's the only thing that ever gets "fed" to the second argument of `>>=`.
Other answers have gone into depth about the `ifA` vs. `ifM` difference, so I thought I'd highlight another significant difference: applicatives compose, monads don't. With `Monad`s, if you want to make a `Monad` that combines the effects of two existing ones, you have to rewrite one of them as a monad transformer. In contrast, if you have two `Applicative`s you can easily make a more complex one out of them, as shown below. (The code is copy-pasted from `transformers`.)
```haskell
-- | The composition of two functors.
newtype Compose f g a = Compose { getCompose :: f (g a) }

-- | The composition of two functors is also a functor.
instance (Functor f, Functor g) => Functor (Compose f g) where
    fmap f (Compose x) = Compose (fmap (fmap f) x)

-- | The composition of two applicatives is also an applicative.
instance (Applicative f, Applicative g) => Applicative (Compose f g) where
    pure x = Compose (pure (pure x))
    Compose f <*> Compose x = Compose ((<*>) <$> f <*> x)

-- | The product of two functors.
data Product f g a = Pair (f a) (g a)

-- | The product of two functors is also a functor.
instance (Functor f, Functor g) => Functor (Product f g) where
    fmap f (Pair x y) = Pair (fmap f x) (fmap f y)

-- | The product of two applicatives is also an applicative.
instance (Applicative f, Applicative g) => Applicative (Product f g) where
    pure x = Pair (pure x) (pure x)
    Pair f g <*> Pair x y = Pair (f <*> x) (g <*> y)

-- | The sum of a functor @f@ with the 'Identity' functor.
data Lift f a = Pure a | Other (f a)

-- | The sum of two functors is always a functor.
instance (Functor f) => Functor (Lift f) where
    fmap f (Pure x)  = Pure (f x)
    fmap f (Other y) = Other (fmap f y)

-- | The sum of any applicative with 'Identity' is also an applicative.
instance (Applicative f) => Applicative (Lift f) where
    pure = Pure
    Pure f  <*> Pure x  = Pure (f x)
    Pure f  <*> Other y = Other (f <$> y)
    Other f <*> Pure x  = Other (($ x) <$> f)
    Other f <*> Other y = Other (f <*> y)
```
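For instance, assuming the definitions above are in scope, composing `Maybe` with `[]` immediately yields a working `Applicative`, with no new instance to write:

```haskell
-- ghci> getCompose ((+) <$> Compose (Just [1, 2]) <*> Compose (Just [10, 20]))
-- Just [11,21,12,22]
```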
Now, if we add in the `Constant` functor/applicative:

```haskell
newtype Constant a b = Constant { getConstant :: a }

instance Functor (Constant a) where
    fmap f (Constant x) = Constant x

instance (Monoid a) => Applicative (Constant a) where
    pure _ = Constant mempty
    Constant x <*> Constant y = Constant (x `mappend` y)
```
...we can assemble the "applicative `Either`" from the other responses out of `Lift` and `Constant`:

```haskell
type Error e a = Lift (Constant e) a
```
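A quick sanity check of the assembled type (assuming the definitions above; the names `ok` and `bad` are mine):

```haskell
ok :: Error [String] Int
ok = (+) <$> pure 1 <*> pure 2
-- ok is Pure 3

bad :: Error [String] Int
bad = (+) <$> Other (Constant ["e1"]) <*> Other (Constant ["e2"])
-- bad is Other (Constant ["e1","e2"]): both errors collected,
-- just like the hand-written "applicative Either" above
```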
Comments
- `Just 1` describes a "computation", whose "result" is 1. `Nothing` describes a computation which produces no results.
- See also arrowdodger's question "What advantage does Monad give us over an Applicative?", which has some good answers (full disclosure: including one of mine).
- What's wrong with `ifA t c a = g <$> t <*> c <*> a where g b x y = if b then x else y`?
- @WillNess: That always uses all the computational structure/runs all the effects. For instance, `ifA (Just True) (Just ()) Nothing == Nothing`, whereas `ifM (Just True) (Just ()) Nothing == Just ()`. It'd probably be more accurate to say "we cannot write `ifA` with the expected semantics".
with the expected semantics". - I'd argue that
ifM
is exactly the vehicle for examining the power of monad, though it does assume thatifM
is the expected semantics notif' <$> a <*> b <*> c where if' b t e = if b then t else e
. The real challenge is when effects get conflated with values. - I think it's critical to note that when a type has a
Monad
instance defined, itsApplicative
instance must be compatible with thatMonad
instance (pure = return
, (<*>) = ap). While the second
Applicative` instance definition in this answer satisfies theApplicative
laws, it violates this documented requirement. The proper way to get this secondApplicative
instance is to define it for some other type that's isomorphic toEither
- Note also that the `Errors` type in `Control.Applicative.Lift` implements precisely the "collect all errors" behavior described in this answer.
- This example demonstrates a few things succinctly: you can not only transform values as with applicative functors, but you can also... 1) store the history of computations anywhere within a chain of monadic operations, 2) decide how and when to transform values (if the values are to be transformed at all, and in a possibly non-linear manner) based on the history of computations saved, 3) model side effects within the body of these monadic operations, 4) more trivially, use `do`-block notation.
- cf. this, later, related, answer of mine for some clarifying comparisons.
- The one marked as correct is useless, while this answer really helps, especially when you are not familiar with the language...
- @Ivan: it might be harder for non-Haskellers, but it is much better actually. The key difference is that all the computation descriptions involved in an applicative combination (`a <*> b <*> ...`) are known upfront, but with a monadic combination (`a >>= (\ ... -> b >>= ... )`) each next computation is calculated ("is dependent") on the value produced by the previous computation. There are two timelines involved, two worlds: a pure one where computation descriptions (`a`, `b`, ...) are created and combined, and a potentially impure one where they are "run", where the actual computations happen.
- Is the problem joining here that these 3 are okay: `join Right (Right a) = Right a; join Right (Left e) = Left e; join Left (Left e) = Left e` but this is no good: `join Left (Right a) =? Left (Right a)`?
- Actually trying it out a bit for the first time, I see I'm ending up in a mess of a type for `join`'s argument of `Result<Result<_,_>,Result<_,_>>`, or worse.