We often want to define an object $A \in U$ by a collection of inference rules. Those rules denote a generating function $F$ which, when monotonic, has a least fixed point $\mu F$. We take $A := \mu F$ to be the "inductive definition" of $A$. Moreover, monotonicity of $F$ allows us to reason with the "principle of induction" to determine when a set contains $A$ (i.e. when a property holds universally on $A$).
In Coq this corresponds to writing an $\mathtt{Inductive}$ definition of $A$ with explicit introduction terms. While this definition denotes a particular function $F$, that function is not necessarily monotonic. Coq therefore employs syntactic checks to guarantee the "well-formedness" of the definition. To a first approximation, it rejects occurrences of $A$ in negative positions in the types of the introduction terms.
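For instance (a small made-up example), Coq rejects the following, since Bad occurs to the left of an arrow in the type of its constructor:

    (* Rejected with a "Non strictly positive occurrence of Bad" error;
       the Fail keyword just records that the command is expected to fail. *)
    Fail Inductive Bad : Type :=
      | bad : (Bad -> nat) -> Bad.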
(If my understanding up to this point is flawed, please correct me!)
First, some questions in the context of Coq:
1) Does the syntactic check in Coq merely serve to ensure that the definition of $A$ is predicative? (If so, is impredicativity the only way in which the definition would be ill-defined?) Or is it checking for monotonicity? (Correspondingly, is non-monotonicity what might kill it?)
2) Does such a negative occurrence of $A$ necessarily imply that $A$'s definition is impredicative/non-monotonic? Or is Coq simply unable to verify that it's well-defined in that case?
And more generally:
3) What is the relation between predicativity of an inductive definition and monotonicity of that definition's generating function? Are they two sides of the same coin? Are they unrelated? Informally, which one matters more?
No, in this case, predicativity and monotonicity are not closely related.
The positivity check in Coq/Agda serves to ensure that you are taking the least fixed point of a monotonic thing, roughly.
Here's how to think of inductive types in terms of lattices and monotone operators. Recall that the Knaster-Tarski theorem says that on a complete lattice $L$, every monotone operator $f : L \to L$ has a least fixed point $\mu(f)$. Next, we can think of the types in a type theory as forming a lattice under provability: a type $S$ is below $T$ if the truth of $S$ entails that of $T$. Now, what we would like to do is take a monotone operator $F$ on types, and use Knaster-Tarski to get an interpretation of its least fixed point $\mu(F)$.
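As a concrete sketch, in Coq's impredicative Prop (where propositions do form such a lattice under implication) the Knaster-Tarski least pre-fixed point can even be written down internally; the names mu and mu_least below are purely illustrative:

    (* Sketch: mu F is the "intersection" of all pre-fixed points X of F,
       i.e. all X with F X -> X. This relies on the impredicativity of Prop. *)
    Definition mu (F : Prop -> Prop) : Prop :=
      forall X : Prop, (F X -> X) -> X.

    (* mu F sits below every pre-fixed point, as Knaster-Tarski promises. *)
    Lemma mu_least (F : Prop -> Prop) (X : Prop) :
      (F X -> X) -> mu F -> X.
    Proof. intros H m. exact (m X H). Qed.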
However, types in type theory aren't just a lattice: they form a category. That is, given two types $S$ and $T$, there are potentially many ways for $S$ to be below $T$, one for each proof $e : S \to T$. So a type operator $F$ also has to do something sensible on these proofs. The appropriate generalization of monotonicity is functoriality. That is, we want $F$ to have an action on types, and also an action on proofs, such that if $e : S \to T$, then $F(e) : F(S) \to F(T)$.
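For instance, for the made-up operator $F(X) = \mathtt{nat} + X \times X$, the action on proofs is the evident map:

    (* A sample type operator and its action on maps (its functoriality witness). *)
    Definition F (X : Type) : Type := (nat + X * X)%type.

    Definition F_map {A B : Type} (e : A -> B) (x : F A) : F B :=
      match x with
      | inl n        => inl n
      | inr (a1, a2) => inr (e a1, e a2)
      end.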
Now, functoriality is preserved by sums and products (i.e., if $F$ and $G$ are endofunctors on types, then $F+G$ and $F\times G$, acting pointwise, are also functors on types, assuming we have sums and products in our algebra of types). However, it is not preserved by the function space, since the exponential bifunctor $F \to G$ is contravariant in its left argument. So when you write an inductive type definition, you are defining a functor to take a least fixed point of. To ensure that it is indeed a functor, you need to rule out occurrences of the recursive parameter on the left-hand side of function spaces --- hence the positivity check.
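For example, this (made-up) infinitely branching tree type is accepted, because Tree occurs only to the right of the arrow in its constructor, i.e. only in positive positions:

    (* Accepted: the recursive occurrence sits on the right of the arrow. *)
    Inductive Tree : Type :=
    | node : (nat -> Tree) -> Tree.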
Impredicativity (in the sense of System F) is generally avoided, because it is a principle that forces you to choose between classical logic and set-theoretic models. You can't interpret types as sets in classical set theory if you have F-style indexing. (See Reynolds' famous "Polymorphism is Not Set-Theoretic".)
Categorically, F-style impredicativity says that the category of types and terms forms a small complete category (that is, homs and objects are both sets, and limits of all small diagrams exist). Classically this forces a category to be a poset. Many constructivists are constructive because they want their theorems to hold in more systems than just classical logic, and so they don't want to prove anything that would be classically false. Hence they are leery of impredicative polymorphism.
However, polymorphism lets you state many conditions that are classically "large" internally to your type theory -- and positivity is one of them! A type operator $F$ is functorial if you can produce a polymorphic term:
$$ \mathsf{Fmap} : \forall \alpha, \beta.\; (\alpha \to \beta) \to (F(\alpha) \to F(\beta)) $$
See how this corresponds to functoriality? IMO, this would be a very nice option to have in Coq, since it would let you do generic programming much more easily. The syntactic nature of the positivity check is a big hindrance to generic programming, and I would be happy to trade the possibility of classical axioms for more flexible functional programs.
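In Coq syntax one could render such a witness roughly as follows (Functorial and list_functorial are illustrative names; note that Coq's Type is predicative, so this is only an approximation of the System-F statement):

    (* What it means for a type operator F to come with a polymorphic map. *)
    Definition Functorial (F : Type -> Type) : Type :=
      forall A B : Type, (A -> B) -> F A -> F B.

    (* For example, the list operator is functorial, witnessed by List.map. *)
    Require Import List.
    Definition list_functorial : Functorial list :=
      fun A B f xs => map f xs.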
EDIT: The question you are asking about the difference between Prop and Set arises from the fact that the Coq developers want to permit you to think about Coq theorems in naive set-theoretic terms if you want, without forcing you to do so. Technically, they split Prop and Set, and then prohibit sets from depending on the computational content of Prop.
So you can interpret Prop as truth values in ZFC, which are the booleans true and false. In this world, all proofs of propositions are equal, and so obviously you should not be able to branch on the proof of a proposition. So the prohibition on sets depending on the computational content of proofs of Prop is totally sensible. Furthermore, the 2-element boolean lattice is obviously a complete lattice, so it should support impredicative indexing, since arbitrary set-valued meets exist. The predicativity restriction on Set arises from the fact (mentioned above) that F-style indexing is degenerate in classical set-theoretic models.
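To see the split concretely (a small sketch): quantifying over all propositions still yields a Prop, while the analogous quantification at Type must land in a strictly bigger universe:

    (* Accepted: Prop is impredicative, so a quantification over all
       propositions is itself a proposition. *)
    Definition all_props : Prop := forall P : Prop, P -> P.

    (* At Type, predicativity kicks in:
       (forall T : Type@{i}, T -> T) lives in Type@{i+1}, not Type@{i}. *)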
Coq has other models (it's constructive logic!) but the point is that off the shelf it will never prove anything that a classical mathematician would be puzzled by.
Comments:

"Inductive Blah : Prop := Foo : (Blah -> Blah) -> Blah [...] same as anything else?" — Apr 13, 2011 at 14:56

"Inductive prop : Prop := prop_intro : Prop -> prop. vs. Inductive set : Set := set_intro : Set -> set. Why the distinction if predicativity is of no concern to inductive definition?" — Apr 13, 2011 at 15:04

"[...] Type@{i}, must live in a bigger universe, at least Type@{i+1}." — Mar 31, 2019 at 17:14