 Theoretical Computer Science

There is a sound theory of overloading operators and functions, realized by type classes in Haskell, to a rougher extent by traits in Rust, and so on.

In mathematics, however, there are many situations where one set carries the same kind of structure in multiple ways: the integers, for example, form a monoid under both addition and multiplication.

We normally just give these different names, but one may want to abstract over some tougher mathematical function, or over a proof done in the type system.
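The "different names" workaround can be made concrete with newtype wrappers, in the style of Haskell's `Sum`/`Product` from `Data.Monoid`. Here is a minimal sketch in Rust; the `Monoid` trait and wrapper names are illustrative, not from any particular library.

```rust
// A home-grown monoid trait; coherence forces one instance per type,
// so we distinguish the two structures on i64 with wrapper types.
trait Monoid {
    fn empty() -> Self;
    fn combine(self, other: Self) -> Self;
}

#[derive(Debug, PartialEq, Clone, Copy)]
struct Sum(i64);
#[derive(Debug, PartialEq, Clone, Copy)]
struct Product(i64);

impl Monoid for Sum {
    fn empty() -> Self { Sum(0) }
    fn combine(self, other: Self) -> Self { Sum(self.0 + other.0) }
}

impl Monoid for Product {
    fn empty() -> Self { Product(1) }
    fn combine(self, other: Self) -> Self { Product(self.0 * other.0) }
}

// One generic fold now works for both structures on the same carrier.
fn mconcat<M: Monoid>(xs: impl IntoIterator<Item = M>) -> M {
    xs.into_iter().fold(M::empty(), M::combine)
}

fn main() {
    let s = mconcat([Sum(1), Sum(2), Sum(3), Sum(4)]);
    let p = mconcat([Product(1), Product(2), Product(3), Product(4)]);
    println!("{:?} {:?}", s, p);
}
```

The cost of this style is the wrapping and unwrapping at every use site, which is exactly what motivates the alternatives discussed below.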

In principle, one could do this the way mathematicians do: apply the type class to a type that associates the operations with the base type, rather than to the bare set with one fixed, canonical association.
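One way to read "the way mathematicians do it" is to pass the structure around as an ordinary value, a first-class dictionary of operations, so the same carrier can be paired with many structures without wrapper types. A sketch in Rust, with illustrative names:

```rust
// The monoid structure as a plain value, decoupled from any canonical
// instance on the carrier type A.
struct MonoidDict<A> {
    empty: A,
    combine: fn(A, A) -> A,
}

// Generic code takes the dictionary explicitly, the way a mathematician
// would say "let (M, ·, e) be a monoid".
fn mconcat<A: Copy>(d: &MonoidDict<A>, xs: &[A]) -> A {
    xs.iter().fold(d.empty, |acc, &x| (d.combine)(acc, x))
}

fn main() {
    // Two structures on the very same carrier, no newtypes needed.
    let additive = MonoidDict { empty: 0i64, combine: |a, b| a + b };
    let multiplicative = MonoidDict { empty: 1i64, combine: |a, b| a * b };
    let xs = [1i64, 2, 3, 4];
    println!("{} {}", mconcat(&additive, &xs), mconcat(&multiplicative, &xs));
}
```

The trade-off is that the dictionary must be threaded through every call; type classes exist precisely to infer this argument automatically.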

Is that the "right" or "only" way to gain this flexibility? Or is there something else?

In particular, there are a number of languages, like Scala, that resolve overloading in rather dangerously complex ways, and even Haskell's incoherent instances probably qualify. It would be interesting if there were a clearer, "more parametric" way to achieve the ends that motivate those design decisions.

## Solution

As far as I know, there is no "right" or "only" way of associating operations or theorems with mathematical structures. Certainly there is no canonical way; even on paper you often see sentences such as "we write foo instead of foobar when no ambiguity occurs". Even this does not prevent misunderstandings, and clarification by the author or guesswork by the reader is often required.

There has been a lot of work on disambiguating mathematical statements of the kind you describe, especially in proof assistants like Coq, Isabelle, and Mizar, but there are many annoyances in practice, and a lot of compromise goes into designing a disambiguation system that ensures:

1. Inferring reasonable statements, e.g. $1+\pi$ should be easy to recognize as a real number.
2. Acceptable performance, e.g. $1+1$ in $\mathbb{N}$ should be recognized nigh-instantaneously, even as part of a much larger expression.
3. Understandable error messages, e.g. $\pi+\mathbb{Z}$ should fail because $\mathbb{Z}$ is not (coercible to) a real number, rather than with some crazy failure involving additive structure on quotient sets.

I can't overstate how important (and difficult) point 3 is: experience shows it is better to fail predictably than to succeed in a bizarre manner, as I'm sure Scala users are aware.

Another issue is that notations like $+$ and $\cdot$ are used to describe abstract operations, such as those of commutative monoids, but also the concrete operations on the sets that realize them; so if you want $$\forall\ x\ y\ z,\ (x\cdot y)\cdot z=x \cdot (y\cdot z)$$ to hold for any monoid, you have to be able to instantiate $\cdot$ with, say, addition on the naturals.
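The same point can be made in code: state the abstract law with the operation as a parameter, then instantiate it at concrete operations. A minimal sketch in Rust, with illustrative names (this only tests the law at sample points; a proof assistant would prove it for all inputs):

```rust
// The associativity law, abstracted over the operation.
fn associative<A: PartialEq + Copy>(op: fn(A, A) -> A, x: A, y: A, z: A) -> bool {
    op(op(x, y), z) == op(x, op(y, z))
}

fn main() {
    // Instantiate the abstract dot with concrete addition and multiplication.
    assert!(associative(|a: i64, b| a + b, 2, 3, 4));
    assert!(associative(|a: i64, b| a * b, 2, 3, 4));
    println!("ok");
}
```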

In general, the idea of attaching operations to a whole structure rather than to just the carrier set is called the bundled approach; it is used throughout the Mathematical Components library, which underpinned the recent formalization of the Feit-Thompson theorem in Coq.
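A bundled structure packages the carrier set together with its operations (and laws) in a single record. The following sketch uses Lean 4 syntax rather than the Coq/SSReflect of Mathematical Components, and the names are illustrative:

```lean
-- A bundled monoid: the carrier is a field of the structure,
-- not a parameter of it.
structure BundledMonoid where
  carrier : Type
  unit    : carrier
  op      : carrier → carrier → carrier
  assoc   : ∀ a b c, op (op a b) c = op a (op b c)
  unit_l  : ∀ a, op unit a = a
  unit_r  : ∀ a, op a unit = a

-- Two distinct bundled structures over the same carrier Nat.
def natAddMonoid : BundledMonoid :=
  { carrier := Nat, unit := 0, op := (· + ·),
    assoc := Nat.add_assoc, unit_l := Nat.zero_add, unit_r := Nat.add_zero }

def natMulMonoid : BundledMonoid :=
  { carrier := Nat, unit := 1, op := (· * ·),
    assoc := Nat.mul_assoc, unit_l := Nat.one_mul, unit_r := Nat.mul_one }
```

Because the carrier is inside the record, nothing stops you from having several monoid structures on `Nat`; the price is that relating them (e.g. distributivity between the two) requires extra plumbing.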

One big disadvantage of the bundled approach is that, as you noticed, mathematical sets often carry a great deal of structure, e.g. $\mathbb{R}$ is a group, a ring, and a field (as well as a topological space, and a great many other things), and one wants to exploit the knowledge that the additive group operation of $\mathbb{R}$ is the same as the $+$ operation of the ring, and that the order $\leq$ is compatible with them.

The unbundled approach is much more amenable to these kinds of mixins, which are a more severe issue in mathematics than in CS, since properties need to be shared in addition to operations.

The Mathematical Components team has written about these issues in the 2009 paper *Packaging Mathematical Structures*, as have we in the context of the Lean interactive theorem prover in the paper *Elaboration in Dependent Type Theory*.

I feel that the problem of designing a reasonable inference mechanism for disambiguation is still very much open, though.