Diary Archive

(Note: these are entries from my technical diary. There will likely be typos, mistakes, or wider logical leaps—the intent here is to “[let] others look over my shoulder while I figure things out.”)


Stress testing variadic zip

Yesterday, Daniel Williams and I messaged through a crasher he ran into when using CombineExt.Collection.zip (similarly with .Collection.combineLatest).

For the uninitiated, Combine ships with zip (and combineLatest) overloads up to arity four in the Publisher namespace.

But, if you want to zip arbitrarily many publishers, you’re kind of stuck, and as more Combine code gets written, folks are quickly realizing this. That’s why we’ve been heads down filling in gaps with an extensions package to sit next to Combine proper.

Daniel was attempting to first ping https://hacker-news.firebaseio.com/v0/topstories.json for an array of Hacker News story IDs and then hydrate each by hitting the https://hacker-news.firebaseio.com/v0/item/:id.json endpoint. The former returns on the order of 500 entries and that turned out to be enough to push variadic zip beyond its limits.

We can reduce the scenario down with convenience publishers for a closer look.

(Gist permalink.)

(You might need to tweak count to trigger the crash.)
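Roughly, the reduction looked like this (a sketch from memory, the gist above is canonical; count is the knob to turn):

import Combine
import CombineExt

let count = 500
let publishers = (0..<count).map { Just($0).eraseToAnyPublisher() }

let subscription = publishers
    .zip() // CombineExt’s variadic zip over a collection.
    .sink { values in
        print(values.count) // Should print `count`—or crash along the way.
    }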

The stack trace is a head scratcher.

And the repeated Zip, Map, and PublisherBox frames hint at the issue.

CombineExt’s variadic zip and combineLatest are “composed” operators — they’re built up from existing Publisher methods instead of a more dedicated conformance. While this simplifies things and keeps each implementation to roughly 15 lines, it also introduces intermediate runtime overhead.

Let’s take a look at why (in shorter form, here’s the fuller implementation).

(Gist permalink.)

  1. The only way to line up seed’s type with reduce’s accumulator is to erase — or, at least, I tried without erasing in hopes of preserving fusion and got type-checked into a corner.
  2. This and the following two lines are the source of the Zip, Map, and PublisherBox stack trace dance. As we approach thousands of publishers, we’re triply nesting for each.
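For reference, here’s roughly the composed shape (abridged and from memory, not CombineExt’s exact source):

import Combine

extension Collection where Element: Publisher {
    func zip() -> AnyPublisher<[Element.Output], Element.Failure> {
        guard let first = self.first else {
            return Empty<[Element.Output], Element.Failure>().eraseToAnyPublisher()
        }
        // 1. Erase the seed so its type lines up with reduce’s accumulator.
        let seed = first.map { [$0] }.eraseToAnyPublisher()
        // 2. Each pass nests a Zip, a Map, and (via erasure) a PublisherBox.
        return dropFirst().reduce(seed) { zipped, next in
            zipped
                .zip(next)
                .map { $0.0 + [$0.1] }
                .eraseToAnyPublisher()
        }
    }
}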

Can we fix this?

Yep — by writing a specialized ZipCollectionType à la RxSwift’s! But with WWDC around the corner, it’s probably best to hang tight and see if the Combine team will address the gap.

Until then, and if you want to read more about variadic zipping, an older entry has your back.

Why “sink”?

“Sink” is a word you’ll see all over reactive, declarative programming, and Combine is no exception.

There’s Subscribers.Sink (the subscriber behind the two Publisher.sink overloads), CombineExt.Sink (albeit internally-scoped), and similarly in the framework’s predecessor, RxSwift.Sink.

The often-cited kitchen sink metaphor aside, the term’s etymology is a bit unclear. My guess would be it borrows from the corresponding graph theory term.

A local sink is a node of a directed graph with no exiting edges, also called a terminal.

We can view a subscription graph as a directed graph between upstream publishers, through various operators, and down towards local sinks (which, in Combine’s language, are Subscribers).

Profunctors generalize relations

An aside before the note: Day One reminded me that I started studying category theory, in earnest, almost a year ago today.

Despite being a hobby, I’m pretty proud of how far I’ve come (I struggle with saying that aloud, since I’m the type to keep pushing forward and not really look around along the way).


Richard Guy put how I’ve felt as of late succinctly.

…and I love anybody who can [do mathematics] well, so I just like to hang on and try to copy them as best I can, even though I’m not really in their league.

(I’m not sure if anyone reads these entries hah (quite literally, I removed analytics on the site a few years ago). If so, pardon the moment to reflect.)

I was revisiting profunctors yesterday and Bartosz mentioned an intuition in lecture III.6.1 (timestamped) that made their motivation click.

You can think of a profunctor as [generalizing] a relation between objects.

Huh, okay. Let’s take a step back and recap what a relation is in plain ol’ set theory and follow the intuition.

A binary relation over two sets $X$ and $Y$ is a subset of their Cartesian product, $X \times Y$. That is, a set of ordered pairs indicating which $X$s are related to specific $Y$s.

And now to generalize.

First, let’s swap out the sets for categories $C$ and $D$.

$C \times D$. Okay, the product category. Since $C$ and $D$ are possibly distinct categories, we can’t directly consider morphisms between them. But, we can in their product category—morphisms between objects $(c, d)$ and $(c^\prime, d^\prime)$ are those in the form $(f, g)$ with $f: c \rightarrow c^\prime$ and $g: d \rightarrow d^\prime$. So, in a sense, the collection of relationships between $c$ and $d$ is $\textrm{Hom}((c, d), (c^\prime, d^\prime))$.

That hom-set is, well, a set (assuming we’re working with small categories)! What if we tried to create a functor $C \times D \rightarrow \mathbf{Set}$ defined by $(c, d) \mapsto \ldots$

Wait. $c$ and $d$ come from different categories and hom-sets only work in a single category. I read around to reconcile this and stumbled upon heteromorphisms—“morphisms” between two objects from different categories that use a bridging functor to then construct a hom-set. I got lost trying to read further, even with the warning on slide 3/41 of David Ellerman’s presentation on the topic.

So, let’s assume $C = D$ and carry on (I’ll understand that presentation someday).

Okay, let’s map $c, d$ (both objects in $C$) to $\textrm{Hom}(c, d)$. And for morphisms, we need to map some $(f, g)$ for $f: c \rightarrow c^\prime$ and $g: d \rightarrow d^\prime$ to a function between $\textrm{Hom}(c, d)$ and $\textrm{Hom}(c^\prime, d^\prime)$. Let’s pluck out a morphism, say $h$ from $\textrm{Hom}(c, d)$.

We have $f, g, h$ in hand and need to construct a morphism from $c^\prime$ to $d^\prime$. There’s…no way to do this. None of our morphisms map from $c^\prime$.

That’s where the contravariance in the profunctor construction comes from when folks write $C^{\textrm{op}} \times C \rightarrow \mathbf{Set}$ (or, in the general case $C^{\textrm{op}} \times D \rightarrow \mathbf{Set}$). Taking the dual in the first component of the product category flips $f$ and now lets us get from $c^\prime$ to $d^\prime$ by way of $g \circ h \circ f$.

It’s okay if you need to walk around the park with that composition before it makes sense. I certainly needed to, and it demystified the rogue dimaps I’d see in Preludes.

But, let’s take stock of how this generalizes relations. In the same-category setting, we’re constructing a functor that maps two objects to the ways in which they’re related, their hom-sets. Since it’s a functor, we also need to map morphisms to functions between hom-sets, and dimap (link to Haskell’s Prelude) does just that.
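To make that concrete, here’s dimap for plain Swift functions, the canonical profunctor (a sketch; the names are mine):

// pre plays the flipped f, the wrapped function plays h, and post plays g,
// mirroring the g ∘ h ∘ f composition above.
func dimap<A, B, C, D>(
    _ pre: @escaping (C) -> A,
    _ post: @escaping (B) -> D,
    _ f: @escaping (A) -> B
) -> (C) -> D {
    { c in post(f(pre(c))) }
}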


⇒ “Profunctors as relations”

⇒ “Understanding profunctors”

All metric spaces are Lawvere metric spaces

I’m definitely the rookie in my research group, so the diary will be a bit math-heavy as I try to catch up.

To start, here’s an entry on a topic—amongst many—Jade walked us through during our first meeting, Lawvere metric spaces.

nLab’s definition is a bit impenetrable. At a glance, it seems like tacking Lawvere’s name onto an already general concept means added axioms.

It’s…surprisingly the opposite.

All metric spaces are Lawvere metric spaces—that is, we lift some of the constraints on plain ol’ metrics.

Recapping, a metric space is a set $X$ equipped with a distance function $d: X \times X \rightarrow [0, \infty)$ satisfying the following coherences:

Assuming $x, y, z \in X$,

  • $d(x, y) = 0 \iff x = y$ (zero-distance coincides with equality).
  • $d(x, y) = d(y, x)$ (symmetry).
  • $d(x, y) + d(y, z) \geq d(x, z)$ (the triangle inequality).

And Lawvere relaxed a few bits. A Lawvere metric space has a distance function

  • that respects the triangle inequality,
  • whose codomain includes $\infty$ (which is helpful when we want a “disconnectedness” between points),
  • and that satisfies $d(x, x) = 0$ (points are zero-distance from themselves).

We’re dropping the symmetry requirement and allowing for possibly zero distances between distinct points.

The former lets us represent non-symmetric costs in distance-as-cost situations. Borrowing from Baez, imagine the commute from $x$ to $y$ being cheaper than from $y$ to $x$.

Easing zero-distance from being necessary and sufficient for equality down to only one side of the implication adds the ability to reach distinct points “for free” (continuing with the transportation theme).
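A toy example of the extra freedom (the fare table is made up for illustration):

// One-way transit fares as a Lawvere “distance”: asymmetric, ∞ when unreachable.
let fares: [String: [String: Double]] = [
    "x": ["x": 0, "y": 5],   // x to y is cheap…
    "y": ["y": 0, "x": 12],  // …but the return trip isn’t (no symmetry).
    "z": ["z": 0]            // z is disconnected from everything else.
]

func d(_ from: String, _ to: String) -> Double {
    fares[from]?[to] ?? .infinity
}

d("x", "y") // 5
d("y", "x") // 12
d("z", "x") // inf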

I need to read up on more applications this freedom affords us. In the meantime, here are some links I’ve come across:

Easing AnyCancellable storage

Quick note on a late-night PR I drafted for CombineExt. It tidies the repetitive AnyCancellable.store(in:) calls needed to hold onto cancellation tokens.

(Gist permalink.)

I’ve also added a Sequence variant.

And both are Element == AnyCancellable constrained to avoid crowding Set’s namespace.
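A sketch of the shape (my guess at the PR’s API, not its exact form):

import Combine

extension Sequence where Element == AnyCancellable {
    // Store a whole batch of cancellables in one call.
    func store(in set: inout Set<AnyCancellable>) {
        set.formUnion(self)
    }
}

// Usage:
var cancellables = Set<AnyCancellable>()
[
    Just(1).sink { print($0) },
    Just("two").sink { print($0) }
].store(in: &cancellables)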

Weak assignment in Combine

Publisher.assign(to:on:) comes with a pretty big warning label,

The Subscribers.Assign instance created by this operator maintains a strong reference to object […]

and we need to read this label when piping publishers to @Published properties in ObservableObject conformances.

(Gist permalink.)

Of course, we could do the sink-weak-self-receiveValue dance. But, that’s a bit of ceremony.

(Gist permalink.)

My first instinct was to weak-ly overload assign, and I PR’d it to CombineExt in case it’d help others, too. With some distance and thoughtful feedback from both Shai and Adam, I decided to re-think that instinct.

There are a few downsides to Publisher.weaklyAssign(to:on:).

  • It crowds an already packed Publisher namespace.
  • In its current form, it doesn’t relay which argument is weakly captured. A clearer signature would be .assign(to:onWeak:) (and similarly, for an unowned variant).

Adam mentioned a couple of alternatives:

So, I’m back to where I started, with a slightly modified overload. Gist’ing it below for the curious.

(Gist permalink.)
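Roughly, the overload’s shape (a sketch; the gist above is canonical):

import Combine

extension Publisher where Failure == Never {
    func assign<Root: AnyObject>(
        to keyPath: ReferenceWritableKeyPath<Root, Output>,
        onWeak object: Root
    ) -> AnyCancellable {
        sink { [weak object] value in
            object?[keyPath: keyPath] = value
        }
    }
}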

Now call sites can read—

(Gist permalink.)

Ah! Almost forgot. Writing tests for the operator had me reaching for Swift.isKnownUniquelyReferenced(_:)—“a [free-function] I haven’t heard [from] in a long time, a long time.”

There’s sliced bread, and then Result.publisher

Another learning from Adam.

A situation I often find myself in is sketching an operator chain and exercising both the value and failure paths by swapping upstream with Just or Fail, respectively.

And it turns out that Apple added a Combine overlay to Result with the .publisher property that streamlines the two. That is, while all three of Just, Fail, and Result.Publisher have their uses, the last might be the easiest to reach for in technical writing. Moreover, it’s a quick way to materialize a throwing function and pipe it downstream.

(Gist permalink.)
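For example (Fetch is a stand-in error type of mine):

import Combine

struct Fetch: Error {}

// Exercising both paths by swapping a single upstream:
let values = Result<Int, Fetch>.success(42).publisher
let failures = Result<Int, Fetch>.failure(Fetch()).publisher

// And materializing a throwing function:
func risky() throws -> Int { 42 }
let materialized = Result { try risky() }.publisher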

Or, as I’ll call it going forward—“the ol’ razzle dazzle.”

Postfix type erasure

A belated entry on an operator I posted before…all of this (gestures wildly) started.

There’s nuance in determining whether or not to type erase a publisher—my next longer-form post will cover this—but when you need to, eraseToAnyPublisher()’s ergonomics aren’t great.

It requires 22 characters (including the dot for the method call and the empty argument list) to apply what is essentially a one-character concept.

And I know operators are borderline #holy-war—still, if you’re open to them, I’ve borrowed prior art from Bow and Point-Free by using a ^ postfix operator.

It passes the three checks any operator should pass.

  • Does the operator overload an existing one in Swift? Thankfully not (since bitwise XOR is infix).
  • Does its shape convey its intent? To an extent! The caret hints at a sort of “lifting” and that’s what erasure is after all. Removing specific details, leaving behind a more general shape.
  • Does it have prior art? Yep.

The operator has tidied the Combine I’ve written so far. Here’s a gist with its definition.
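And the definition, roughly (the gist is canonical):

import Combine

postfix operator ^

extension Publisher {
    static postfix func ^ (publisher: Self) -> AnyPublisher<Output, Failure> {
        publisher.eraseToAnyPublisher()
    }
}

// Before: Just(1).map(String.init).eraseToAnyPublisher()
// After:  Just(1).map(String.init)^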

Parity and arity

Two tucked-away, somewhat-related terms I enjoy: parity and arity.

The former is the odd or even-ness of an integer.

The latter describes the number of arguments a function accepts.

Example usage of parity:

Today I learned about the Handshaking Lemma. It states that any finite undirected graph will have an even number of vertices with odd degree.

The proof rests on parity. Specifically, if you sum the degrees of every vertex in a graph, you’ll double count each edge (each edge contributes to exactly two vertices’ degrees). That double counting implies the sum is even, and even parity is only maintained if there is an even—including zero—number of vertices with odd degree.

Put arithmetically, a sum can only be even if its components contain an even number of odd terms.
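In symbols: $\sum_{v \in V} \deg(v) = 2|E|$, which is manifestly even.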

Examples of arity:

Contrasting isomorphisms, equivalences, and adjunctions

When I was first introduced to adjunctions, I reacted in the way Spivak anticipated during an applied category theory lecture (timestamped, transcribed below).

…and when people see this definition [of an adjunction], they think “well, that seems…fine. I’m glad you told me about…that.”

And it wasn’t until I stumbled upon a Catsters lecture from 2007 (!), where Eugenia Cheng clarified the intent behind the definition (and contrasted it with isomorphisms and equivalences), that it clicked.

To start, assume we have two categories C and D with functors F and G between them (F moving from C to D and G in the other direction).

There are a few possible scenarios we could find ourselves in.

  • The round trips—i.e. GF and FG—are equal to the identity functors on C and D (denoted 1_C and 1_D below).
  • The round trip is isomorphic to each identity functor.
  • Or, the round trip lands us a morphism away from where we started.

Moving between the scenarios, there’s a sort of “relaxing” of strictness:

  • The round trip is the identity functor.
  • […] an isomorphism away from the identity functor.
  • […] a hop away from the identity functor.
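In symbols (the third using the unit and counit that come packaged with an adjunction):

  • $GF = 1_C$ and $FG = 1_D$.
  • $GF \cong 1_C$ and $FG \cong 1_D$.
  • $\eta: 1_C \Rightarrow GF$ and $\varepsilon: FG \Rightarrow 1_D$.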

Why optics?

The better part of my weekend was spent reading Chris Penner’s incredibly well-written Optics by Example and attempting the exercises with BowOptics.

I’m early in my learning. Still, I wanted to note the motivation behind optics.

They seek to capture, as values, the control flow that’s usually baked into languages with for, while, if, guard, switch, and similar statements.

In the same way that effect systems capture effects as values—decoupling them from execution—optics separate control flow when navigating data structures from the actions taken on them.
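A minimal lens makes the “navigation as a value” bit concrete (a sketch; BowOptics’ actual API differs):

// A lens packages up how to read and write one part of a whole.
struct Lens<Whole, Part> {
    let get: (Whole) -> Part
    let set: (Part, Whole) -> Whole
}

struct User { var name: String }

let userName = Lens<User, String>(
    get: { $0.name },
    set: { name, user in
        var copy = user
        copy.name = name
        return copy
    }
)

// The navigation is now data we can pass around and compose:
let ada = userName.set("Ada", User(name: "ada"))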

Sara Fransson put this well in their recent talk: Functional Lenses Through a Practical Lens.

Noticing that optics excite me much like FRP did back when I first learned about it.

And I haven’t even gotten to the category-theoretic backings yet.

(Attempts to contain excitement. Back to reading.)

TIL about Publishers.MergeMany.init

A few weeks back, Jordan Morgan nerd sniped me1 into writing a Combine analog to RxSwift’s ObservableType.merge(sources:)—an operator that can merge arbitrarily many observable sequences.

Here’s a rough, not-tested-in-production sketch (if you know a way to ease the eraseToAnyPublisher dance, let me know):
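(Reconstructing the sketch’s shape from the entry—this is my guess, not the original gist:)

import Combine

// Fold the sources together with merge(with:), erasing as we go.
func merge<P: Publisher>(sources: [P]) -> AnyPublisher<P.Output, P.Failure> {
    let empty = Empty<P.Output, P.Failure>().eraseToAnyPublisher()
    return sources.reduce(empty) { merged, next in
        merged
            .merge(with: next)
            .eraseToAnyPublisher()
    }
}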

And in writing this entry, I decided to check and make sure I wasn’t missing something. After all, it’s odd (pun intended) that the merging operators on Publisher stop at arity seven.

Then, I noticed the Publishers.MergeMany return type on the first overload and, below the (hopefully temporary) “No overview available” note in its documentation, there’s a variadic initializer!

So, there you have it. TIL merging a sequence of publishers goes by the name of Publishers.MergeMany.init(_:).
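Both spellings, for the record:

import Combine

let first = Just(1), second = Just(2), third = Just(3)

let variadically = Publishers.MergeMany(first, second, third)
let fromASequence = Publishers.MergeMany([first, second, third])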


Naturality condition

I’m currently working through Mio Alter’s post, “Yoneda, Currying, and Fusion” (his other piece on equivalence relations is equally stellar).

Early on, Mio mentions:

It turns out that being a natural transformation is hard: the commutative diagram that a natural transformation has to satisfy means that the components […] fit together in the right way.

Visually, the diagram formed by functors F and G between C and D and with natural transformation α must commute (signaled by the rounded arrow in the center).

I’m trying to get a sense for why this condition is “hard” to meet.

What’s helped is making the commutativity explicit by drawing the diagonals and seeing that they must be equal. The four legs of the two triangles that cut the square must share a common diagonal.
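In symbols: for every $f: A \rightarrow B$, naturality demands $G(f) \circ \alpha_A = \alpha_B \circ F(f)$, both sides being morphisms $F(A) \rightarrow G(B)$—the shared diagonal.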

Maybe that’s why natural transformations are rare? There might be many morphisms between F(A) and G(A) and F(B) and G(B), but only a select few (or none) which cause their compositions to coincide.

For more on commutative diagrams, Tai-Danae Bradley has a post dedicated to the topic.

Why applicatives are monoidal

Applicative functors are […] lax monoidal functors with tensorial strength.

I still don’t quite have the foundation to grok the “lax” and “tensorial strength” bits (I will, some day), and seeing applicatives as monoidal always felt out of reach.

They’re often introduced with the pure and apply duo (sketched out below for Optional).
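Something like the following (my reconstruction, since the original snippet lived in a gist):

// pure and apply for Optional:
func pure<A>(_ a: A) -> A? {
    a
}

func apply<A, B>(_ f: ((A) -> B)?, _ a: A?) -> B? {
    guard let f = f, let a = a else { return nil }
    return f(a)
}

let increment: (Int) -> Int = { $0 + 1 }
apply(pure(increment), pure(1)) // Optional(2)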

(An aside: it finally clicked that the name “applicative” comes from, well, supporting function application inside of the functor’s context.)

And then, coincidentally, I recently asked:

is there any special terminology around types that support a zip in the same way functors have a map and monads have a flatMap?

To which, Matthew Johnson let me in on the secret.

zip is an alternate way to formulate applicative

!!!.

That makes way more sense and sheds light on why Point-Free treated map, flatMap, and zip as their big three instead of introducing pure and apply.

I can only sort of see that apply implies monoidal-ness (pardon the informality) in that it reduces an Optional<(A) -> B> and an Optional<A> into a single Optional<B>. However, the fact that the two containers hold different shapes always made me wonder.

zip relays the ability to combine more readily. “Hand me an Optional<A> and an Optional<B> and I’ll give you an Optional<(A, B)>.”

Yesterday, I finally got around to defining apply in terms of zip to see the equivalence.
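Here’s the gist of it for Optional (my reconstruction):

func zip<A, B>(_ a: A?, _ b: B?) -> (A, B)? {
    guard let a = a, let b = b else { return nil }
    return (a, b)
}

// apply, recovered from zip: pair up the function and value, then evaluate.
func apply<A, B>(_ f: ((A) -> B)?, _ a: A?) -> B? {
    zip(f, a).map { $0.0($0.1) }
}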

Funnily enough, Brandon pointed me to exercise 25.2 which asks exactly this.

In short,

  • Functors allow for a map.
  • Applicatives, a zip.
  • Monads, a flatMap.

Type erasure and forgetful functors

Type erasure and forgetful functors, at least in terms of intuition, feel very similar.

One removes detail (e.g. Publishers.Zip -> AnyPublisher) and the other strips structure, leaving the underlying set behind (a monoid or group being mapped onto its base set).

Wonder if there’s a way to visualize this by considering eraseToAnyPublisher as a sort of forgetful endofunctor into a subcategory of Swift (hand waving) that only contains AnyPublishers?

Co-, contra-, and invariance

Folks often refer to the “polarity” of a functor’s components. “The input is in the negative position.” “The output is positive.”

And that made me wonder, is there a, well, neutral polarity?

Maybe that’s when a component is in both a negative and positive position, canceling one another out.

Let’s see what happens.

A -> ….

A is in a negative position? Check. Let’s add it to the positive spot.

A -> A.

This is our old friend, Endo! At the bottom of the file, I noticed imap and asked some folks what the “i” stood for. Turns out it’s short for, “invariant,” which reads nicely in that both co- and contravariance net out to invariance.
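Sketched in Swift (mirroring Point-Free’s Endo from memory):

struct Endo<A> {
    let call: (A) -> A

    // A sits in both negative and positive position, so mapping to Endo<B>
    // needs functions in both directions—hence “invariant” map.
    func imap<B>(_ f: @escaping (A) -> B, _ g: @escaping (B) -> A) -> Endo<B> {
        Endo<B> { b in f(self.call(g(b))) }
    }
}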

Pairing functor type, variance(s), and *map name:

  • Functor, covariant, map.
  • Functor, contravariant, contramap (a.k.a. pullback).
  • Bifunctor, covariant (and I’m guessing contra-, maybe both working in the same direction is what matters?), bimap.
  • Invariant functor, invariant (co- and contravariant in the same component), imap.
  • Profunctor, co- and contravariant along two components, dimap.

What makes natural transformations, natural?

Learning category theory often involves patiently sitting with a concept until it—eventually—clicks and then considering it as a building block for the next1.

Grokking natural transformations went that way for me.

I still remember the team lunch last spring where I couldn’t keep my excitement for the abstraction a secret and had to share with everyone (I’m a blast at parties).

After mentioning the often-cited example of a natural transformation in engineering, Collection.first (a transformation between the Collection and Optional functors), a teammate asked me the question heading this note:

What makes a natural transformation, natural?

I found an interpretation of the word.

Say we have some categories C and D and functors F and G between them, diagrammed below:

If we wanted to move from the image of F acting on C to the image of G, we’d have to find a way of moving between objects in the same category.

The question, rephrased, becomes what connects objects in a category? Well, morphisms!

Now, how do we pick them? Another condition on natural transformations is that the square formed by mapping two objects A and B, connected by a morphism f, across two functors F and G must commute.

Let’s call our choices of morphisms between F_A and G_A and F_B and G_B, α_A and α_B, respectively.

Even if f flips directions across F and G—i.e. they’re contravariant functors—our choice of α_A and α_B is fixed!

The definition, the choice of morphisms, seems to naturally follow from structure at hand. It doesn’t depend on arbitrary choices.
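Back to the Collection.first example, the condition is checkable directly:

let xs = [1, 2, 3]
let double: (Int) -> Int = { $0 * 2 }

// Both paths around the square agree:
xs.map(double).first // Optional(2)
xs.first.map(double) // Optional(2)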

Tangentially, a definition shaking out from some structure reminds me of how the Curry–Howard correspondence causes certain functions to have a unique implementation. Brandon covered this topic in a past Brooklyn Swift presentation (timestamped).

For more resources on natural transformations:


“Oh, the morphisms you’ll see!”

There are many prefixes placed in front of the word “morphism”—here’s a glossary of the ones I’ve seen so far:

  • Epimorphism
    • The categorical analog to surjectivity—the “epi” root connotes morphisms mapping “over” the entirety of the codomain. Bartosz covers this well in lecture 2.1 (timestamped) of Part I of his category theory course.
  • Monomorphism
    • Injectivity’s analog and epimorphism’s dual (it blew my mind to realize injectivity and surjectivity, two properties I never thought twice about, are duals). “Mono” in that it generalizes one-to-one functions. Bartosz also mentions them in 2.1.
  • Bimorphism
    • “A morphism that is both an epimorphism and a monomorphism is called a bimorphism.”
    • I don’t quite have the foundation needed to grok when a bimorphism isn’t an isomorphism—maybe because I spend most of my time working in Hask and Swift (Set, in disguise) and Wikipedia mentions “a category, such as Set, in which every bimorphism is an isomorphism is known as a balanced category.” On the other hand, and I need to read more into what “split” means in the following, “while every isomorphism is a bimorphism, a bimorphism is not necessarily an isomorphism. For example, in the category of commutative rings the inclusion Z → Q is a bimorphism that is not an isomorphism. However, any morphism that is both an epimorphism and a split monomorphism, or both a monomorphism and a split epimorphism, must be an isomorphism.”
  • Isomorphism
    • They show up all over mathematics. A morphism that “admits a two-sided inverse, meaning that there is another morphism in [the] category [at hand] such that [their forward and backward compositions emit identity arrows on the domain and codomain, respectively].” “Iso” for equal in the sense that if an isomorphism exists, there is a sort of sameness to the two objects.
  • Endomorphism
    • A morphism from an object onto itself that isn’t necessarily an identity arrow. “Endo” for “within” or “inner.” The prefix shed light on why the Point-Free folks named functions in the form (A) -> A, Endo<A>. Looking at that file now, I wondered what the “i” in imap stands for and may or may not have gotten nerd sniped into checking if imap’s definition shakes out from Func.dimap when dealing with Func<A, A>s and Z == C == B (the B being imap’s generic parameter). Looks like it does? …A few messages later, Peter Tomaselli helped me out! The “i” stands for “invariant,” which reads nicely in that imap’s co- and contravariant parameters kind of cancel out to invariance.
  • Automorphism
    • An isomorphic endomorphism. “Auto” for same or self.
  • Homomorphism
    • The star of algebra, a structure-preserving mapping between two algebraic structures. i.e. a homomorphism f on some structure with a binary operation, say *, will preserve it across the mapping—f(a * b) = f(a) * f(b). I’ll cover the etymological relationship between “hom” and its appearances in category theory—hom-sets and hom-functors—that isn’t quite restricted to sets in the way algebra generally is in a future note.
  • Homeomorphism
    • The one I’m least familiar with—in another life (or maybe later in this one), I want to dig into (algebraic) topology. Seems to be the topologist’s isomorphism (in the category Top).
  • Catamorphism, Anamorphism, and Hylomorphism
    • I’ve only dealt with these three in the functional programming sense. Catamorphisms break down a larger structure into a reduced value (“cata” for “down”), anamorphisms build structure from a smaller set of values (“ana” for “up”), and hylomorphism is an ana- followed by a catamorphism (oddly, “hylo” stands for “matter” or “wood,” wat).
    • I ran into catamorphism the other day when trying to put a name on a function in the form ((Left) -> T) -> ((Right) -> T) -> (Either<Left, Right>) -> T. Turns out folks call this either, analysis, converge, or fold (the last of which was somewhat surprising to me in that the Foldable type class requires a monoidal instance, whereas this transformation doesn’t quite have the same requirement). This function is catamorphic in that it reduces an Either into a T.
    • zip is an example of an anamorphism that builds a Zip2Sequence from two Sequences and by extension, zipWith is a hylomorphism that zips and then reduces down to a summary value by a transformation.
    • Hylomorphisms and imap both seem to be compositions of dual transformations. Wonder if this pattern pops up elsewhere?
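And the catamorphic fold on Either from above, sketched in Swift:

enum Either<Left, Right> {
    case left(Left)
    case right(Right)
}

// ((Left) -> T) -> ((Right) -> T) -> (Either<Left, Right>) -> T,
// i.e. reduce an Either down to a single T.
func either<Left, Right, T>(
    _ onLeft: @escaping (Left) -> T,
    _ onRight: @escaping (Right) -> T
) -> (Either<Left, Right>) -> T {
    { value in
        switch value {
        case let .left(left): return onLeft(left)
        case let .right(right): return onRight(right)
        }
    }
}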

Getting an intuition around why we can call contramap a pullback

Last October, Brandon and Stephen reintroduced contramap—covered in episode 14—as pullback.

(The analog for the Haskell peeps is Contravariant’s contramap requirement.)

However, the name contramap isn’t fantastic. In one way it’s nice because it is indeed the contravariant version of map. It has basically the same shape as map, it’s just that the arrow flips the other direction. Even so, the term may seem a little overly-jargony and may turn people off to the idea entirely, and that would be a real shame.

Luckily, there’s a concept in math that is far more general than the idea of contravariance, and in the case of functions is precisely contramap. And even better it has a great name. It’s called the pullback. Intuitively it expresses the idea of pulling a structure back along a function to another structure.

—“Some news about contramap”

I had trouble getting an intuition around why contramap’ing is a pullback in the categorical sense, and here’s why (mirroring a recent Twitter thread):

@pointfreeco—Hey y’all 👋🏽 I’m a tad confused when it comes to grounding the canonical pullback diagram with types and functions between them. (Pardon the rough sketch hah, I forgot my LaTeX and this was more fun. 😄) /1

Pullbacks, generally, seem to give two morphisms and objects worth of freedom—f, g, X, and Y—whereas in code, we almost always seem to pullback along one [path across] the square. /2

Do y’all call this operation pullback because there isn’t a restriction preventing f from equaling g and X equaling Y (collapsing the square into a linear diagram)? /3

Yesterday, Eureka Zhu cleared up my confusion on why we don’t take the upper path through pullback’s definitional diagram:

In [the] Set [category], the pullback of f: a -> c along g: b -> c is { (a, b) | f a = g b }, which is sometimes undecidable in Haskell [and by extension, Swift].

Hand-waving past Hask and Swift not quite being categories, we reason about them through the category Set.

And Zhu points out that a pullback in Set requires us to equate two functions, f and g in the diagram above, on a subset of inputs, and that’s undecidable in programming.

How do we get around this?

Well, setting X to be Y and f to g:

Since pure functions equal themselves, we can sidestep that undecidability by collapsing the diagram. Wickedly clever.

That’s how pullback’s flexibility allows us to consider contramap as a specialization of it.
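And in code, the specialization reads (a sketch on a Predicate type, à la Point-Free’s episodes):

struct Predicate<A> {
    let contains: (A) -> Bool

    // Pull a Predicate<A> back along (B) -> A to get a Predicate<B>.
    func pullback<B>(_ f: @escaping (B) -> A) -> Predicate<B> {
        Predicate<B> { b in self.contains(f(b)) }
    }
}

struct User { let age: Int }

let isAdult = Predicate<Int> { $0 >= 18 }
let isAdultUser = isAdult.pullback { (user: User) in user.age }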

(un)zurry

(un)zurry moves arguments in and out of Void-accepting closures in the same way (un)curry moves arguments in and out of tuples. Concretely, here’s how they’re defined:
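(Reconstructed from Point-Free’s definitions:)

func zurry<A>(_ f: () -> A) -> A {
    f()
}

func unzurry<A>(_ a: A) -> () -> A {
    { a }
}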

The former is often helpful when staying point-free with functions that are close, but not quite the right shape you need. e.g. mapping unapplied method references:
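(My reconstruction of the example:)

// An unapplied method reference carries an extra Void layer:
// [Int].sorted: (Array<Int>) -> () -> [Int]
[[2, 1], [3, 1], [4, 1]].map([Int].sorted) // [() -> [Int]], not the sorted arrays we want.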

Eep.

Let’s try to make this work.

We’d like [Int].sorted to have the form [Int] -> [Int], and since SE-0042 got rejected, we have to chisel it down on our own.

First, let’s flip the first two arguments and then address the rogue ().
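(With an Overture-style flip, reconstructed:)

// flip moves the Void argument out front:
func flip<A, C>(_ f: @escaping (A) -> () -> C) -> () -> (A) -> C {
    { { a in f(a)() } }
}

flip([Int].sorted) // () -> (Array<Int>) -> [Int]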

Getting closer—this is where zurry can zero out the initial Void for us.

zurry(flip([Int].sorted)) // (Array<Int>) -> Array<Int>

Wicked, now we can return to our original example:

[[2, 1], [3, 1], [4, 1]].map(zurry(flip([Int].sorted))) // [[1, 2], [1, 3], [1, 4]].

Dually, unzurry shines when you’re working against a () -> Return interface that isn’t @autoclosure’d and you only have a Return in hand. Instead of opening a closure, you can pass along unzurry(yourReturnInstance) and call it a day.

The Point-Free folks link to Eitan Chatav’s post where he introduced the term and shows how function application is a zurried form of function composition (!).

ObservableType.do or .subscribe (Publisher.handleEvents or .sink)?

ObservableType.do or .subscribe?

(The question, rephrased for Combine, involves swapping in Publisher.handleEvents and .sink, respectively.)

do (handleEvents) gives you a hook into a sequence’s events without triggering a subscription, while subscribe (sink) does the same and triggers a subscription—a way to remember this is the former returns an Observable (Publisher) and the latter returns a Disposable (AnyCancellable).

So, when should we use one over the other?

In the past, I’ve always placed effectful work in the do block, even if it meant an empty subscribe() call at the end of a chain.

And I’m starting to change my mind here—putting that sole do work in the subscribe block makes the chain more succinct (there are definitely cases where it’s cleaner to use both to sort of group effects, e.g. UI-related versus persistence-related).

I dug up an old bit from Bryan Irace in an iOS Slack we’re a part of that puts it well:

do is for performing a side effect inside of a bigger chain

if all you’re doing is performing the side effect, just do that in your subscribe block
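In Combine terms, the rule of thumb reads:

import Combine

var cancellables = Set<AnyCancellable>()
let names = ["Ada", "Grace"].publisher

// A side effect inside a bigger chain: handleEvents earns its keep.
names
    .handleEvents(receiveOutput: { print("saw \($0)") })
    .map { $0.uppercased() }
    .sink { print("rendered \($0)") }
    .store(in: &cancellables)

// The side effect is all there is: keep it in sink.
names
    .sink { print("logged \($0)") }
    .store(in: &cancellables)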

  1. Then, repeating further. One day, I hope to have built the machinery needed to read Riehl’s research and writing on ∞-category theory.