A CT and F(R)P Diary - Archive

Naturality Condition

I’m currently working through Mio Alter’s post, “Yoneda, Currying, and Fusion” (his other piece on equivalence relations is equally stellar).

Early on, Mio mentions:

It turns out that being a natural transformation is hard: the commutative diagram that a natural transformation has to satisfy means that the components […] fit together in the right way.

Visually, the diagram formed by functors F and G between categories C and D, together with the natural transformation α, must commute (signaled by the rounded arrow in the center).

I’m trying to get a sense for why this condition is “hard” to meet.

What’s helped is making the commutativity explicit by drawing the diagonals and seeing that they must be equal. The four legs of the two triangles that cut the square must share a common diagonal; that is, α_B ∘ F(f) = G(f) ∘ α_A.

Maybe that’s why natural transformations are rare? There might be many morphisms between F(A) and G(A) and between F(B) and G(B), but only a select few (or none) make the two composites coincide.
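To make the condition concrete, here’s a quick Swift check (borrowing the Collection.first example from a later note) that the two paths around the square agree:

// α = Collection.first, a natural transformation from Array to Optional.
let xs = [1, 2, 3]
let f = { (n: Int) in "\(n)" }

// One triangle’s legs: map over the Array (F(f)), then take first (α_B).
let viaArray = xs.map(f).first     // Optional("1")

// The other’s: take first (α_A), then map over the Optional (G(f)).
let viaOptional = xs.first.map(f)  // Optional("1")

viaArray == viaOptional // true, for any xs and f: the square commutes.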

For more on commutative diagrams, Tai-Danae Bradley has a post dedicated to the topic.

Why applicatives are monoidal

Applicative functors are […] lax monoidal functors with tensorial strength.

I still don’t quite have the foundation to grok the “lax” and “tensorial strength” bits (I will, some day). Even so, seeing applicatives as monoidal always felt out of reach.

They’re often introduced with the pure and apply duo (sketched out below for Optional).
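Here’s a minimal sketch of the pair (my own reconstruction; the standard library doesn’t ship these):

// pure: lift a value into Optional’s context.
func pure<A>(_ a: A) -> A? {
  a
}

// apply: run a function held in the context on a value held in the context.
func apply<A, B>(_ f: ((A) -> B)?, _ a: A?) -> B? {
  guard let f = f, let a = a else { return nil }
  return f(a)
}

let increment: (Int) -> Int = { $0 + 1 }
apply(pure(increment), pure(1)) // Optional(2)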

(An aside: it finally clicked that the name “applicative” was chosen because, well, it supports function application inside of the functor’s context.)

And then, coincidentally, I recently asked:

is there any special terminology around types that support a zip in the same way functors have a map and monads have a flatMap?

To which, Matthew Johnson let me in on the secret.

zip is an alternate way to formulate applicative

!!!.

That makes way more sense and sheds light on why Point-Free treated map, flatMap, and zip as their big three instead of introducing pure and apply.

I can only sort of see that apply implies monoidal-ness (pardon the informality) in that it reduces an Optional<(A) -> B> and an Optional<A> into a single Optional<B>. Still, the fact that the two containers hold different shapes (a function and a value) always made me wonder.

zip conveys the ability to combine more readily. “Hand me an Optional<A> and an Optional<B> and I’ll give you an Optional<(A, B)>.”

Yesterday, I finally got around to defining apply in terms of zip to see the equivalence.
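Here’s the shape of it for Optional (a sketch; the standard library’s zip covers sequences, not Optionals):

// zip: combine two Optionals into an Optional pair.
func zip<A, B>(_ a: A?, _ b: B?) -> (A, B)? {
  guard let a = a, let b = b else { return nil }
  return (a, b)
}

// apply, recovered from zip: pair the function with its argument,
// then map plain function application over that pair.
func apply<A, B>(_ f: ((A) -> B)?, _ a: A?) -> B? {
  zip(f, a).map { f, a in f(a) }
}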

Funnily enough, Brandon pointed me to exercise 25.2 which asks exactly this.

In short,

  • Functors allow for a map.
  • Applicatives, a zip.
  • Monads, a flatMap.

Type erasure and forgetful functors

Type erasure and forgetful functors, at least in terms of intuition, feel very similar.

One removes detail (e.g. Publishers.Zip -> AnyPublisher) and the other strips structure, leaving the underlying set behind (a monoid or group being mapped onto its base set).
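For instance, in Combine:

import Combine

// The pipeline’s construction is spelled out in the type:
// Publishers.Zip<Just<Int>, Just<String>>.
let zipped = Publishers.Zip(Just(1), Just("a"))

// Erasure forgets how the publisher was built; only Output and Failure remain.
let erased: AnyPublisher<(Int, String), Never> = zipped.eraseToAnyPublisher()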

Wonder if there’s a way to visualize this by considering eraseToAnyPublisher as a sort of forgetful endofunctor into a subcategory of Swift (hand waving) that only contains AnyPublishers?

Co-, contra-, and invariance

Folks often refer to the “polarity” of a functor’s components. “The input is in the negative position.” “The output is positive.”

And that made me wonder, is there a, well, neutral polarity?

Maybe that’s when a component is in both a negative and positive position, canceling one another out.

Let’s see what happens.

A -> ….

A is in a negative position? Check. Let’s add it to the positive spot.

A -> A.

This is our old friend, Endo! At the bottom of the file, I noticed imap and asked some folks what the “i” stood for. Turns out it’s short for “invariant,” which reads nicely in that co- and contravariance net out to invariance.
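Here’s the shape, as I remember it from Point-Free’s source (treat it as a sketch):

// A function from a type onto itself.
struct Endo<A> {
  let call: (A) -> A

  // A sits in both a negative (input) and a positive (output) position,
  // so transforming it requires functions in both directions.
  func imap<B>(_ f: @escaping (A) -> B, _ g: @escaping (B) -> A) -> Endo<B> {
    Endo<B> { b in f(self.call(g(b))) }
  }
}

let increment = Endo<Int> { $0 + 1 }
increment.imap({ "\($0)" }, { Int($0) ?? 0 }).call("41") // "42"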

Pairing functor type, variance(s), and *map name:

  • Functor, covariant, map.
  • Functor, contravariant, contramap (a.k.a. pullback).
  • Bifunctor, covariant (and I’m guessing contra-, maybe both working in the same direction is what matters?), bimap.
  • Invariant functor, invariant (co- and contravariant in the same component), imap.
  • Profunctor, co- and contravariant along two components, dimap.
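To see those variances in signatures, here’s a sketch over a bare function wrapper (a hypothetical Func, in the spirit of Point-Free’s):

struct Func<A, B> {
  let run: (A) -> B

  // B sits in positive (output) position: covariant, so map.
  func map<C>(_ f: @escaping (B) -> C) -> Func<A, C> {
    Func<A, C> { a in f(self.run(a)) }
  }

  // A sits in negative (input) position: contravariant, so contramap.
  func contramap<Z>(_ f: @escaping (Z) -> A) -> Func<Z, B> {
    Func<Z, B> { z in self.run(f(z)) }
  }

  // Both components at once: dimap.
  func dimap<Z, C>(_ f: @escaping (Z) -> A, _ g: @escaping (B) -> C) -> Func<Z, C> {
    Func<Z, C> { z in g(self.run(f(z))) }
  }
}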

What makes natural transformations, natural?

Learning category theory often involves patiently sitting with a concept until it—eventually—clicks and then considering it as a building block for the next¹.

Grokking natural transformations went that way for me.

I still remember the team lunch last spring where I couldn’t keep my excitement for the abstraction a secret and had to share with everyone (I’m a blast at parties).

After mentioning the often-cited example of a natural transformation in engineering, Collection.first (a transformation between the Collection and Optional functors), a teammate asked me the question heading this note:

What makes a natural transformation, natural?

I found an interpretation of the word.

Say we have some categories C and D and functors F and G between them, diagrammed below:

If we want to move from the image of F acting on C to the image of G, we have to find a way of moving between objects in the same category.

The question, rephrased, becomes what connects objects in a category? Well, morphisms!

Now, how do we pick them? Another condition on natural transformations is that the square formed by mapping two objects, A and B connected by a morphism f, across two functors F and G must commute.

Let’s call our choices of morphisms between F(A) and G(A) and between F(B) and G(B), α_A and α_B, respectively.

Even if f flips directions across F and G—i.e. they’re contravariant functors—our choice of α_A and α_B is fixed!

The definition, the choice of morphisms, seems to follow naturally from the structure at hand. It doesn’t depend on arbitrary choices.

Tangentially, a definition shaking out from some structure reminds me of how the Curry–Howard correspondence causes certain functions to have a unique implementation. Brandon covered this topic in a past Brooklyn Swift presentation (timestamped).

For more resources on natural transformations:


“Oh, the morphisms you’ll see!”

There are many prefixes placed in front of the word “morphism”—here’s a glossary of the ones I’ve seen so far:

  • Epimorphism
    • The categorical analog to surjectivity—the “epi” root connotes morphisms mapping “over” the entirety of the codomain. Bartosz covers this well in lecture 2.1 (timestamped) of Part I of his category theory course.
  • Monomorphism
    • Injectivity’s analog and epimorphism’s dual (it blew my mind to realize injectivity and surjectivity, two properties I never thought twice about, are duals). “Mono” in that it generalizes one-to-one functions. Bartosz also mentions them in 2.1.
  • Bimorphism
    • “A morphism that is both an epimorphism and a monomorphism is called a bimorphism.”
    • I don’t quite have the foundation needed to grok when a bimorphism isn’t an isomorphism—maybe because I spend most of my time working in Hask and Swift (Set, in disguise) and Wikipedia mentions “a category, such as Set, in which every bimorphism is an isomorphism is known as a balanced category.” On the other hand, and I need to read more into what “split” means in the following, “while every isomorphism is a bimorphism, a bimorphism is not necessarily an isomorphism. For example, in the category of commutative rings the inclusion Z → Q is a bimorphism that is not an isomorphism. However, any morphism that is both an epimorphism and a split monomorphism, or both a monomorphism and a split epimorphism, must be an isomorphism.”
  • Isomorphism
    • Isomorphisms show up all over mathematics. A morphism that “admits a two-sided inverse, meaning that there is another morphism in [the] category [at hand] such that [their forward and backward compositions emit identity arrows on the domain and codomain, respectively].” “Iso” for equal in the sense that if an isomorphism exists, there is a sort of sameness to the two objects.
  • Endomorphism
    • A morphism from an object onto itself that isn’t necessarily an identity arrow. “Endo” for “within” or “inner.” The prefix shed light on why the Point-Free folks named functions in the form (A) -> A, Endo<A>. Looking at that file now, I wonder what the “i” in imap stands for and I may or may not have gotten nerd sniped into checking if imap’s definition shakes out from Func.dimap when dealing with Func<A, A>s and Z == C == B (the B being imap’s generic parameter). Looks like it does?…a few messages later and Peter Tomaselli helped me out! The “i” stands for “invariant,” which reads nicely in that imap’s co- and contravariant parameters kind of cancel out to invariance.
  • Automorphism
    • An isomorphic endomorphism. “Auto” for same or self.
  • Homomorphism
    • The star of algebra, a structure-preserving mapping between two algebraic structures. i.e. a homomorphism f on some structure with a binary operation, say *, will preserve it across the mapping—f(a * b) = f(a) * f(b). I’ll cover the etymological relationship between “hom” and its appearances in category theory—hom-sets and hom-functors—that isn’t quite restricted to sets in the way algebra generally is in a future note.
  • Homeomorphism
    • The one I’m least familiar with—in another life (or maybe later in this one), I want to dig into (algebraic) topology. Seems to be the topologist’s isomorphism (in the category Top).
  • Catamorphism, Anamorphism, and Hylomorphism
    • I’ve only dealt with these three in the functional programming sense. Catamorphisms break down a larger structure into a reduced value (“cata” for “down”), anamorphisms build structure from a smaller set of values (“ana” for “up”), and hylomorphism is an ana- followed by a catamorphism (oddly, “hylo” stands for “matter” or “wood,” wat).
    • I ran into catamorphism the other day when trying to put a name on a function in the form ((Left) -> T) -> ((Right) -> T) -> (Either<Left, Right>) -> T. Turns out folks call this either, analysis, converge, or fold (the last of which was somewhat surprising to me in that the Foldable type class requires a monoidal instance, whereas this transformation doesn’t quite have the same requirement). This function is catamorphic in that it reduces an Either into a T (see the sketch after this list).
    • zip is an example of an anamorphism that builds a Zip2Sequence from two Sequences; by extension, zipWith is a hylomorphism that zips and then reduces down to a summary value by a transformation.
    • Hylomorphisms and imap both seem to be compositions of dual transformations. Wonder if this pattern pops up elsewhere?
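Here’s that catamorphic fold, sketched over a hypothetical Either (names vary across libraries):

// A sum type with two branches.
enum Either<Left, Right> {
  case left(Left)
  case right(Right)
}

// Analyze both branches to reduce an Either down to a single T.
func either<Left, Right, T>(
  _ ifLeft: @escaping (Left) -> T,
  _ ifRight: @escaping (Right) -> T
) -> (Either<Left, Right>) -> T {
  { e in
    switch e {
    case let .left(l): return ifLeft(l)
    case let .right(r): return ifRight(r)
    }
  }
}

let describe = either({ (n: Int) in "\(n)" }, { (s: String) in s })
describe(.left(42)) // "42"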

Getting an intuition around why we can call contramap a pullback

Last October, Brandon and Stephen reintroduced contramap—covered in episode 14—as pullback.

(The analog for the Haskell peeps is Contravariant’s contramap requirement.)

However, the name contramap isn’t fantastic. In one way it’s nice because it is indeed the contravariant version of map. It has basically the same shape as map, it’s just that the arrow flips the other direction. Even so, the term may seem a little overly-jargony and may turn people off to the idea entirely, and that would be a real shame.

Luckily, there’s a concept in math that is far more general than the idea of contravariance, and in the case of functions is precisely contramap. And even better it has a great name. It’s called the pullback. Intuitively it expresses the idea of pulling a structure back along a function to another structure.

—“Some news about contramap”

I had trouble getting an intuition around why contramap’ing is a pullback in the categorical sense, and here’s why (mirroring a recent Twitter thread):

@pointfreeco—Hey y’all 👋🏽 I’m a tad confused when it comes to grounding the canonical pullback diagram with types and functions between them. (Pardon the rough sketch hah, I forgot my LaTeX and this was more fun. 😄) /1

Pullbacks, generally, seem to give two morphisms and objects worth of freedom—f, g, X, and Y—whereas in code, we almost always seem to pullback along one [path across] the square. /2

Do y’all call this operation pullback because there isn’t a restriction preventing f from equaling g and X equaling Y (collapsing the square into a linear diagram)? /3

Yesterday, Eureka Zhu cleared up my confusion on why we don’t take the upper path through pullback’s definitional diagram:

In [the] Set [category], the pullback of f: a -> c along g: b -> c is { (a, b) | f a = g b }, which is sometimes undecidable in Haskell [and by extension, Swift].

Hand-waving past Hask and Swift not quite being categories, we reason about them through the category Set.

And Zhu points out that a pullback in Set requires us to equate two functions, f and g in the diagram above, for a subset of inputs, and that’s undecidable in programming.

How do we get around this?

Well, setting X to be Y and f to g:

Since pure functions equal themselves, we can sidestep that undecidability by collapsing the diagram. Wickedly clever.

That’s how pullback’s flexibility allows us to consider contramap as a specialization of it.
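To ground that, here’s roughly the Predicate example from the episode (the types are my own stand-ins):

// Wrapping a predicate makes the contravariant position explicit.
struct Predicate<A> {
  let contains: (A) -> Bool

  // Pull a Predicate<A> back along (B) -> A to get a Predicate<B>.
  // This is contramap, under its more general name.
  func pullback<B>(_ f: @escaping (B) -> A) -> Predicate<B> {
    Predicate<B> { b in self.contains(f(b)) }
  }
}

struct User { let age: Int }
let isAdult = Predicate<Int> { $0 >= 18 }
let userIsAdult = isAdult.pullback { (user: User) in user.age }
userIsAdult.contains(User(age: 21)) // true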

(un)zurry

(un)zurry moves arguments in and out of Void-accepting closures in the same way (un)curry moves arguments in and out of tuples. Concretely, here’s how they’re defined:
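(Reproduced from memory of Point-Free’s Overture; treat as a sketch.)

// zurry: the zero-argument uncurry. Apply a () -> A to free the A.
func zurry<A>(_ f: () -> A) -> A {
  f()
}

// unzurry: the zero-argument curry. Tuck an A behind a () -> A.
func unzurry<A>(_ value: A) -> () -> A {
  { value }
}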

The former is often helpful when staying point-free with functions that are close, but not quite the right shape you need. e.g. mapping unapplied method references:
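Reconstructing the example, with the same arrays used further below:

// An unapplied method reference carries an extra () -> layer; the annotation
// also pins down which sorted overload we mean.
let sort: ([Int]) -> () -> [Int] = [Int].sorted

[[2, 1], [3, 1], [4, 1]].map(sort)
// [() -> [Int]]: an array of unapplied closures, not of sorted arrays.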

Eep.

Let’s try to make this work.

We’d like [Int].sorted to have the form ([Int]) -> [Int], and since SE-0042 got rejected, we have to chisel it down on our own.

First, let’s flip the first two arguments and then address the rogue ().
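Point-Free’s Overture ships a zero-argument overload of flip for exactly this shape (again, from memory):

// flip, specialized to a rogue () in the middle of a curried function.
func flip<A, C>(_ f: @escaping (A) -> () -> C) -> () -> (A) -> C {
  return { { a in f(a)() } }
}

flip([Int].sorted) // () -> (Array<Int>) -> Array<Int>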

Getting closer—this is where zurry can zero out the initial Void for us.

zurry(flip([Int].sorted)) // (Array<Int>) -> Array<Int>

Wicked, now we can return to our original example:

[[2, 1], [3, 1], [4, 1]].map(zurry(flip([Int].sorted))) // [[1, 2], [1, 3], [1, 4]].

Dually, unzurry shines when you’re working against a () -> Return interface that isn’t @autoclosure’d and you only have a Return in hand. Instead of opening a closure, you can pass along unzurry(yourReturnInstance) and call it a day.

The Point-Free folks link to Eitan Chatav’s post where he introduced the term and shows how function application is a zurried form of function composition (!).

ObservableType.do or .subscribe (Publisher.handleEvents or .sink)?

ObservableType.do or .subscribe?

(The question, rephrased for Combine, involves swapping in Publisher.handleEvents and .sink, respectively.)

do (handleEvents) gives you a hook into a sequence’s events without triggering a subscription, while subscribe (sink) gives you the same hooks and triggers a subscription—a way to remember this is that the former returns an Observable (Publisher) and the latter returns a Disposable (AnyCancellable).

So, when should we use one over the other?

In the past, I’ve always placed effectful work in the do block, even if it meant an empty subscribe() call at the end of a chain.

And I’m starting to change my mind here—putting that solo do work in the subscribe block makes the chain more succinct (there are definitely cases where it’s cleaner to use both, to group effects, e.g. UI-related versus persistence-related).

I dug up an old bit from Bryan Irace in an iOS Slack we’re a part of that puts it well:

do is for performing a side effect inside of a bigger chain

if all you’re doing is performing the side effect, just do that in your subscribe block
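A minimal Combine sketch of that rule of thumb (the pipeline’s contents are made up):

import Combine

let numbers = [1, 2, 3].publisher

// The side effect sits inside a bigger chain: handleEvents.
let doubled = numbers
  .handleEvents(receiveOutput: { print("saw \($0)") })
  .map { $0 * 2 }

// The side effect is the whole point: do it in sink.
let cancellable = doubled
  .sink { print("got \($0)") }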

  1. Then, repeating further. One day, I hope to have built the machinery needed to read Riehl’s research and writing on ∞-category theory.