The complexity of counting edge colorings and a dichotomy for some higher domain Holant problems
Jin-Yi Cai^{1},
Heng Guo^{2} and
Tyson Williams^{1}
https://doi.org/10.1186/s40687-016-0067-8
© The Author(s) 2016
Received: 9 June 2015
Accepted: 5 May 2016
Published: 1 September 2016
Abstract
We show that an effective version of Siegel’s theorem on finiteness of integer solutions for a specific algebraic curve and an application of elementary Galois theory are key ingredients in a complexity classification of some Holant problems. These Holant problems, denoted by \({\text {Holant}}(f)\), are defined by a symmetric ternary function f that is invariant under any permutation of the \(\kappa \ge 3\) domain elements. We prove that \({\text {Holant}}(f)\) exhibits a complexity dichotomy. The hardness, and thus the dichotomy, holds even when restricted to planar multigraphs. A special case of this result is that counting edge \(\kappa \)-colorings is \({\#\mathrm {P}}\)-hard over planar 3-regular multigraphs for all \(\kappa \ge 3\). In fact, we prove that counting edge \(\kappa \)-colorings is \({\#\mathrm {P}}\)-hard over planar r-regular multigraphs for all \(\kappa \ge r \ge 3\). The problem is polynomial-time computable in all other parameter settings. The proof of the dichotomy theorem for \({\text {Holant}}(f)\) depends on the fact that a specific polynomial p(x, y) has an explicitly listed finite set of integer solutions and on the determination of the Galois groups of some specific polynomials. In the process, we also encounter the Tutte polynomial, medial graphs, Eulerian partitions, Puiseux series, and a certain lattice condition on the (logarithms of the) roots of polynomials.
1 Introduction
What do Siegel’s theorem and Galois theory have to do with complexity theory? In this paper, we show that an effective version of Siegel’s theorem on finiteness of integer solutions for a specific algebraic curve and an application of elementary Galois theory are key ingredients in a chain of steps that lead to a complexity classification of some counting problems. More specifically, we consider a certain class of counting problems that are expressible as Holant problems with an arbitrary domain of size \(\kappa \) over 3-regular multigraphs (i.e., self-loops and parallel edges are allowed) and prove a dichotomy theorem for this class of problems. The hardness, and thus the dichotomy, holds even when restricted to planar multigraphs. Among other things, the proof of the dichotomy theorem depends on the following: (A) the specific polynomial \(p(x,y) = x^5 - 2 x^3 y - x^2 y^2 - x^3 + x y^2 + y^3 - 2 x^2 - x y\) has only the integer solutions \((x,y) = (-1,1), (0,0), (1,-1), (1,2), (3,3)\), and (B) the determination of the Galois groups of some specific polynomials. In the process, we also encounter the Tutte polynomial, medial graphs, Eulerian partitions, Puiseux series, and a certain lattice condition on the (logarithms of the) roots of polynomials such as p(x, y).
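Lemma 7.6 certifies that p has exactly these five integer solutions. As an illustration only (a finite search is of course not a proof of finiteness), a brute-force scan of a small box recovers them:

```python
# Illustrative sanity check: enumerate integer points in a small box and
# confirm that the only zeros of p(x, y) there are the five listed solutions.
def p(x, y):
    return (x**5 - 2*x**3*y - x**2*y**2 - x**3
            + x*y**2 + y**3 - 2*x**2 - x*y)

solutions = sorted((x, y)
                   for x in range(-5, 6)
                   for y in range(-5, 6)
                   if p(x, y) == 0)
print(solutions)  # [(-1, 1), (0, 0), (1, -1), (1, 2), (3, 3)]
```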
A special case of this dichotomy theorem is the problem of counting edge colorings over planar 3-regular multigraphs using \(\kappa \) colors. In this case, the corresponding constraint function is the \({\textsc {AllDistinct}}_{3,\kappa }\) function, which takes value 1 when all three inputs from \([\kappa ]\) are distinct and 0 otherwise. We further prove that the problem using \(\kappa \) colors over r-regular multigraphs is \({\#\mathrm {P}}\)-hard for all \(\kappa \ge r \ge 3\), even when restricted to planar multigraphs. The problem is polynomial-time computable in all other parameter settings. This solves a long-standing open problem.
Holant problems appear in many areas under a variety of different names. They are equivalent to counting constraint satisfaction problems (#CSPs) [7, 9] with the restriction that all variables are read twice,^{1} to the contraction of a tensor network [25, 41], and to the partition function of graphical models in Forney normal form [42, 47] from artificial intelligence, coding theory, and signal processing. Special cases of Holant problems include simulating quantum circuits [48, 56], counting graph homomorphisms [2, 5, 12, 27, 34], and evaluating the partition function of the edge-coloring model [2, Section 3.6].
An edge \(\kappa \)-coloring of a graph G is an edge \(\kappa \)-labeling of G such that any two incident edges have different colors. A fundamental problem in graph theory is to determine how many colors are required to edge color G. The obvious lower bound is \(\Delta (G)\), the maximum degree of the graph. By Vizing’s theorem [60], an edge coloring using just \(\Delta (G) + 1\) colors always exists for simple graphs (i.e., graphs without self-loops or parallel edges). Whether \(\Delta (G)\) colors suffice depends on the graph G.
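For a concrete instance of these definitions (an illustration, not part of any reduction in this paper): \(K_4\) is planar and 3-regular, its six edges decompose into three perfect matchings, and a proper edge 3-coloring must color each matching monochromatically, giving \(3! = 6\) colorings. A brute-force count confirms this:

```python
from itertools import product

# Count proper edge 3-colorings of K4 by brute force over all 3^6 labelings.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

def is_proper(coloring):
    # incident edges (those sharing a vertex) must receive different colors
    for i in range(len(edges)):
        for j in range(i + 1, len(edges)):
            if set(edges[i]) & set(edges[j]) and coloring[i] == coloring[j]:
                return False
    return True

count = sum(is_proper(c) for c in product(range(3), repeat=6))
print(count)  # 6
```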
Consider the edge-coloring problem over 3-regular graphs. It follows from the parity condition (Lemma 4.4) that any graph containing a bridge does not have an edge 3-coloring. For bridgeless planar simple graphs, Tait [55] showed that the existence of an edge 3-coloring is equivalent to the four-color theorem. Thus, the answer for the decision problem over planar 3-regular simple graphs is that there is an edge 3-coloring iff the graph is bridgeless.
Without the planarity restriction, determining whether a 3-regular (simple) graph has an edge 3-coloring is \(\text {NP}\)-complete [39]. This hardness extends to finding an edge \(\kappa \)-coloring over \(\kappa \)-regular (simple) graphs for all \(\kappa \ge 3\) [45]. However, these reductions are not parsimonious, and, in fact, it is claimed that no parsimonious reduction exists unless \(\text {P}= \text {NP}\) [62, p. 118]. The counting complexity of this problem has remained open.
We prove that counting edge colorings over planar regular multigraphs is \({\#\mathrm {P}}\)-hard.^{2}
Theorem 1.1
#\(\kappa \)-EdgeColoring is \({\#\mathrm {P}}\)-hard over planar r-regular multigraphs if \(\kappa \ge r \ge 3\).
This theorem is proved in Theorem 4.8 for \(\kappa = r\) and Theorem 4.20 for \(\kappa > r\).
The techniques we develop to prove Theorem 1.1 naturally extend to a class of Holant problems with domain size \(\kappa \ge 3\) over planar 3-regular multigraphs. Functions such as \({\textsc {AllDistinct}}_{3,\kappa }\) are symmetric, which means that they are invariant under any permutation of their three inputs. But \({\textsc {AllDistinct}}_{3,\kappa }\) has another invariance: it is invariant under any permutation of the \(\kappa \) domain elements. We call this second property domain invariance.
A ternary function that is both symmetric and domain invariant is specified by three values, which we denote by \(\langle a,b,c \rangle \). The output is a when all inputs are the same, the output is c when all inputs are distinct, and the output is b when two inputs are the same but the third input is different.
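Such a signature is straightforward to realize in code. The following sketch (with illustrative names of our own choosing) evaluates \(\langle a,b,c \rangle \) by counting the distinct inputs:

```python
def make_succinct_ternary(a, b, c):
    """Return the symmetric, domain-invariant ternary signature <a, b, c>."""
    def f(x, y, z):
        distinct = len({x, y, z})
        if distinct == 1:
            return a      # all three inputs the same
        if distinct == 3:
            return c      # all three inputs distinct
        return b          # exactly two inputs the same
    return f

# AllDistinct_{3,k} is the special case <0, 0, 1>:
ad3 = make_succinct_ternary(0, 0, 1)
print(ad3(0, 1, 2), ad3(0, 0, 2), ad3(1, 1, 1))  # 1 0 0
```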
We prove a dichotomy theorem for such functions with complex weights.
Theorem 1.2
Suppose \(\kappa \ge 3\) is the domain size and \(a,b,c \in {\mathbb {C}}\). Then either \({\text {Holant}}(\langle a,b,c \rangle )\) is computable in polynomial time or \({\text {PlHolant}}(\langle a,b,c \rangle )\) is \({\#\mathrm {P}}\)-hard. Furthermore, given a, b, c, there is a polynomial-time algorithm that decides which is the case.
See Theorem 10.1 for an explicit listing of the tractable cases. Note that counting edge \(\kappa \)-colorings over 3-regular multigraphs is the special case when \(\langle a,b,c \rangle = \langle 0,0,1 \rangle \).
There is only one previous dichotomy theorem for higher domain Holant problems [22] (see Theorem 5.1). The important difference is that the present work is for general domain size \(\kappa \ge 3\), while the previous result is for domain size \(\kappa = 3\). When restricted to domain size 3, the result in [22] assumes that all unary functions are available, while this dichotomy does not assume that; however, it does assume domain invariance. Dichotomy theorems for an arbitrary domain size are generally difficult to prove. The Feder-Vardi conjecture for decision constraint satisfaction problems (CSPs) is still open [32]. It was a major achievement to prove this conjecture for domain size 3 [6]. The #CSP dichotomy was proved after a long series of papers [4, 5, 7–9, 11, 15, 16, 24, 26, 28, 35].
Our proof of Theorem 1.2 has many components, and a number of new ideas are introduced in this proof. We discuss some of these ideas and give an outline of our proof in Sect. 2. In Sect. 3, we review basic terminology and define the notion of a succinct signature. Section 4 contains our proof of Theorem 1.1 about edge coloring. In Sect. 5, we discuss the tractable cases of Theorem 1.2. In Sect. 6, we extend our main proof technique of polynomial interpolation. Then in Sects. 7, 8, and 9, we develop our hardness proof and tie everything together in Sect. 10.
2 Proof outline and techniques
As usual, the difficult part of a dichotomy theorem is to carve out exactly the tractable problems in the class and prove all the rest \({\#\mathrm {P}}\)-hard. A dichotomy theorem for Holant problems has the additional difficulty that some tractable problems are only shown to be tractable under a holographic transformation, which can make the appearance of the problem rather unexpected. For example, we show in Sect. 5 that the problem \({\text {Holant}}(\langle 3 - 4 i, 1, -1 + 2 i \rangle )\) on domain size 4 is tractable. Despite its appearance, this problem is intimately connected with a tractable graph homomorphism problem defined by the Hadamard matrix \(\left[ \begin{matrix} 1 &{} 1 &{} 1 &{} 1 \\ 1 &{} -1 &{} 1 &{} -1 \\ 1 &{} 1 &{} -1 &{} -1 \\ 1 &{} -1 &{} -1 &{} 1 \end{matrix}\right] \). In order to understand all problems in a Holant problem class, we must deal with such problems. Dichotomy theorems for graph homomorphisms and for #CSP do not have to deal with as varied a class of such problems, since they implicitly assume that all Equality functions are available and must be preserved. This restricts the possible transformations.
After isolating a set of tractable problems, our \({\#\mathrm {P}}\)-hardness results in both Theorem 1.1 and Theorem 1.2 are obtained by reducing from evaluations of the Tutte polynomial over planar graphs. A dichotomy is known for such problems (Theorem 4.1).
The chromatic polynomial, a specialization of the Tutte polynomial (Proposition 4.10), is concerned with vertex colorings. On domain size \(\kappa \), one starting point of our hardness proofs is the chromatic polynomial, which contains the problem of counting vertex colorings using at most \(\kappa \) colors. By the planar dichotomy for the Tutte polynomial, this problem is \({\#\mathrm {P}}\)-hard for all \(\kappa \ge 3\).
Another starting point for our hardness reductions is the evaluation of the Tutte polynomial at an integer diagonal point (x, x), which is \({\#\mathrm {P}}\)-hard for all \(x \ge 3\) by the same planar Tutte dichotomy. These are new starting places for reductions involving Holant problems. These problems were known to have a so-called state-sum expression (Lemma 4.3), which is a sum over weighted Eulerian partitions. This sum is not over the original planar graph but over its directed medial graph, which is always a planar 4-regular graph (Fig. 4). We show that this state-sum expression is naturally expressed as a Holant problem with a particular quaternary constraint function (Lemma 4.6).
Below we highlight some of our proof techniques.
Interpolation within an orthogonal subspace We develop the ability to interpolate when faced with some nontrivial null spaces inherently present in interpolation constructions. In any construction involving an initial signature and a recurrence matrix, it is possible that the initial signature is orthogonal to some row eigenvectors of the recurrence matrix. Previous interpolation results always attempt to find a construction that avoids this. In the present work, this avoidance seems impossible. In Sect. 6, we prove an interpolation result that can succeed in this situation to the greatest extent possible. We prove that one can interpolate any signature, provided that it is orthogonal to the same set of row eigenvectors, and the relevant eigenvalues satisfy a lattice condition (Lemma 6.6).
Satisfy lattice condition via Galois theory A key requirement for this interpolation to succeed is the lattice condition (Definition 6.3), which involves the roots of the characteristic polynomial of the recurrence matrix. We use Galois theory to prove that our constructions satisfy this condition. If a polynomial has a large Galois group, such as \(S_n\) or \(A_n\), and its roots do not all have the same complex norm, then we show that its roots satisfy the lattice condition (Lemma 6.5).
Effective Siegel’s theorem via Puiseux series We need to determine the Galois groups for an infinite family of polynomials, one for each domain size. If these polynomials are irreducible, then we can show they all have the full symmetric group as their Galois group and hence fulfill the lattice condition. We suspect that these polynomials are all irreducible but are unable to prove it.
A necessary condition for irreducibility is the absence of any linear factor. This infinite family of polynomials, as a single bivariate polynomial in \((x, \kappa )\), defines an algebraic curve, which has genus 3. By a wellknown theorem of Siegel [52], there are only a finite number of integer values of \(\kappa \) for which the corresponding polynomial has a linear factor. However, this theorem and others like it are not effective in general. There are some effective versions of Siegel’s theorem that can be applied to the algebraic curve, but the best general effective bound is over \(10^{20,000}\) [61] and hence cannot be checked in practice. Instead, we use Puiseux series to show that this algebraic curve has exactly five explicitly listed integer solutions (Lemma 7.6).
Eigenvalue shifted triples For a pair of eigenvalues, the lattice condition is equivalent to the statement that the ratio of these eigenvalues is not a root of unity. A sufficient condition is that the eigenvalues have distinct complex norms. We prove three results, each of which is a different way to satisfy this sufficient condition. Chief among them is the technique we call an Eigenvalue Shifted Triple (EST). These generalize the technique of Eigenvalue Shifted Pairs from [43]. In an EST, we have three recurrence matrices, each of which differs from the other two by a nonzero additive multiple of the identity matrix. Provided these two multiples are linearly independent over \({\mathbb {R}}\), we show that at least one of these matrices has eigenvalues with distinct complex norms (Lemma 9.10). (Determining which one succeeds may be difficult, but we need not know which.)
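The mechanism behind an EST is elementary: adding a multiple of the identity shifts every eigenvalue by exactly that multiple while leaving eigenvectors unchanged, so the three matrices have spectra that are translates of one another. A minimal numeric illustration (with a hypothetical 2-by-2 matrix of our own choosing, not one of the paper's recurrence matrices):

```python
import math

def eig2(m):
    """Eigenvalues of a real 2x2 matrix with real spectrum, sorted."""
    (a, b), (c, d) = m
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)
    return sorted(((tr - disc) / 2, (tr + disc) / 2))

M = [[2.0, 1.0], [1.0, 2.0]]  # eigenvalues 1 and 3
for shift in (0.0, 0.5, -2.0):
    # M + shift * I has eigenvalues (1 + shift) and (3 + shift)
    Ms = [[M[0][0] + shift, M[0][1]],
          [M[1][0], M[1][1] + shift]]
    assert eig2(Ms) == [x + shift for x in eig2(M)]
print("adding shift * I shifts both eigenvalues by shift")
```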
E Pluribus Unum When the ratio of a pair of eigenvalues is a root of unity, it is a challenge to effectively use this failure condition. Direct application of this cyclotomic condition is often of limited use. We introduce an approach that uses this cyclotomic condition effectively. A direct recursive construction involving these two eigenvalues only creates a finite number of different signatures. We reuse all of these signatures in a multitude of new interpolation constructions (Lemma 9.3), one of which we hope will succeed. If the eigenvalues in all of these constructions also satisfy a cyclotomic condition, then we obtain a more useful condition than any of the previous cyclotomic conditions. This idea generalizes the anti-gadget technique [17], which only reuses the “last” of these signatures.
Local holographic transformation One reason to obtain all succinct binary signatures is for use in the gadget construction known as a local holographic transformation (Fig. 11). This construction mimics the effect of a holographic transformation applied on a single signature. In particular, using this construction, we attempt to obtain a succinct ternary signature of the form \(\langle a,b,b \rangle \), where \(a \not = b\) (Lemma 7.1). This signature turns out to have some magical properties in the Bobby Fischer gadget, which we discuss next.
Bobby Fischer gadget Typically, any combinatorial construction for higher domain Holant problems produces very intimidating-looking expressions that are nearly impossible to analyze. In our case, it seems necessary to consider a construction that has to satisfy multiple requirements involving at least nine polynomials. However, we are able to combine the signature \(\langle a,b,b \rangle \), where \(a \not = b\), with a succinct binary signature of our choice in a special construction that we call the Bobby Fischer gadget (Fig. 9). This gadget is able to satisfy seven conditions using just one degree of freedom (Lemma 4.18). This ability to satisfy a multitude of constraints simultaneously in one magic stroke reminds us of some unfathomably brilliant moves by Bobby Fischer, the chess genius extraordinaire.
3 Preliminaries
3.1 Problems and definitions
The framework of Holant problems is defined for functions mapping any \([\kappa ]^n \rightarrow R\) for a finite \(\kappa \) and some commutative semiring R. In this paper, we investigate some complex-weighted \({\text {Holant}}\) problems on domain size \(\kappa \ge 3\). A constraint function, or signature, of arity n maps from \([\kappa ]^n \rightarrow {\mathbb {C}}\). For consideration of models of computation, functions take complex algebraic numbers.
Graphs (called multigraphs in Sect. 1) may have selfloops and parallel edges. A graph without selfloops or parallel edges is a simple graph. A signature grid \(\Omega = (G, \pi )\) of \({\text {Holant}}({\mathcal {F}})\) consists of a graph \(G = (V,E)\), where \(\pi \) assigns to each vertex \(v \in V\) and its incident edges some \(\,f_v \in {\mathcal {F}}\) and its input variables. We say \(\Omega \) is a planar signature grid if G is planar, where the variables of \(\,f_v\) are ordered counterclockwise. The Holant problem on instance \(\Omega \) is to evaluate \({\text {Holant}}(\Omega ; {\mathcal {F}}) = \sum _{\sigma } \prod _{v \in V} \,f_v(\sigma \mid _{E(v)})\), a sum over all edge assignments \(\sigma : E \rightarrow [\kappa ]\), where E(v) denotes the incident edges of v and \(\sigma \mid _{E(v)}\) denotes the restriction of \(\sigma \) to E(v).
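For small instances, the Holant sum can be evaluated directly from this definition. The sketch below (illustrative and exponential-time, with names of our own choosing) computes the Holant of \(K_4\) on domain size 3 under the ternary Equality signature, which forces all edges to take the same value, and under \({\text {AD}}_{3,3}\), which recovers the count of edge 3-colorings:

```python
from itertools import product

def holant(edges, vertex_fns, k):
    """Brute-force Holant: sum over all sigma: E -> [k] of the product over
    vertices v of f_v applied to the values on v's incident edges."""
    incident = {}
    for e, (u, v) in enumerate(edges):
        incident.setdefault(u, []).append(e)
        incident.setdefault(v, []).append(e)
    total = 0
    for sigma in product(range(k), repeat=len(edges)):
        term = 1
        for v, f in vertex_fns.items():
            term *= f(*(sigma[e] for e in incident[v]))
        total += term
    return total

k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
eq3 = lambda x, y, z: 1 if x == y == z else 0          # ternary Equality
ad3 = lambda x, y, z: 1 if len({x, y, z}) == 3 else 0  # AllDistinct_{3,3}

print(holant(k4, {v: eq3 for v in range(4)}, 3))  # 3: all edges one color
print(holant(k4, {v: ad3 for v in range(4)}, 3))  # 6: edge 3-colorings of K4
```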
A function \(\,f_v\) can be represented by listing its values in lexicographical order as in a truth table, which is a vector in \({\mathbb {C}}^{\kappa ^{\deg (v)}}\), or as a tensor in \(({\mathbb {C}}^{\kappa })^{\otimes \deg (v)}\). In this paper, we consider symmetric signatures; an example is the Equality signature \(=_r\) of arity r. Sometimes we represent f as a matrix \(M_f\) that we call its signature matrix, which has row index \((x_1, \cdots , x_t)\) and column index \((x_n, \cdots , x_{t+1})\) (in reverse order) for some t that will be clear from context.
A Holant problem is parametrized by a set of signatures.
Definition 3.1
The problem \({\text {Holant}}({\mathcal {F}})\) is defined as follows:

Input: A signature grid \(\Omega = (G, \pi )\);

Output: \({\text {Holant}}(\Omega ; {\mathcal {F}})\).
The problem \({\text {PlHolant}}({\mathcal {F}})\) is defined similarly using a planar signature grid.
A signature f of arity n is degenerate if there exist unary signatures \(u_j \in {\mathbb {C}}^\kappa \) (\(1 \le j \le n\)) such that \(f = u_1 \otimes \cdots \otimes u_n\). A symmetric degenerate signature has the form \(u^{\otimes n}\). Such a signature may equivalently be replaced by n copies of the corresponding unary signature. Replacing a signature \(f \in {\mathcal {F}}\) by a constant multiple cf, where \(c \ne 0\), does not change the complexity of \({\text {Holant}}({\mathcal {F}})\). It introduces a global nonzero factor to \({\text {Holant}}(\Omega ; {\mathcal {F}})\).
We allow \({\mathcal {F}}\) to be an infinite set. For \({\text {Holant}}({\mathcal {F}})\) to be tractable, the problem must be computable in polynomial time even when the description of the signatures in the input \(\Omega \) is included in the input size. In contrast, we say \({\text {Holant}}({\mathcal {F}})\) is \({\#\mathrm {P}}\)-hard if there exists a finite subset of \({\mathcal {F}}\) for which the problem is \({\#\mathrm {P}}\)-hard. The same definitions apply for \({\text {PlHolant}}({\mathcal {F}})\) when \(\Omega \) is a planar signature grid. We say a signature set \({\mathcal {F}}\) is tractable (resp. \({\#\mathrm {P}}\)-hard) if the corresponding counting problem \({\text {Holant}}({\mathcal {F}})\) is tractable (resp. \({\#\mathrm {P}}\)-hard). We say \({\mathcal {F}}\) is tractable (resp. \({\#\mathrm {P}}\)-hard) for planar problems if \({\text {PlHolant}}({\mathcal {F}})\) is tractable (resp. \({\#\mathrm {P}}\)-hard). Similarly for a signature f, we say f is tractable (resp. \({\#\mathrm {P}}\)-hard) if \(\{f\}\) is.
We follow the usual conventions about polynomial-time Turing reduction \(\le _T\) and polynomial-time Turing equivalence \(\equiv _T\). We use \(I_n\) and \(J_n\) to denote the n-by-n identity matrix and the n-by-n matrix of all 1’s, respectively.
3.2 Holographic reduction
To introduce the idea of holographic reductions, it is convenient to consider bipartite graphs. For a general graph, we can always transform it into a bipartite graph while preserving the Holant value, as follows. For each edge in the graph, we replace it by a path of length two. (This operation is called the 2-stretch of the graph and yields the edge-vertex incidence graph.) Each new vertex is assigned the binary Equality signature \(=_2\).
We use \({\text {Holant}}({\mathcal {F}}\mid {\mathcal {G}})\) to denote the Holant problem on bipartite graphs \(H = (U,V,E)\), where each vertex in U or V is assigned a signature in \({\mathcal {F}}\) or \({\mathcal {G}}\), respectively. Signatures in \({\mathcal {F}}\) are considered as row vectors (or covariant tensors); signatures in \({\mathcal {G}}\) are considered as column vectors (or contravariant tensors) [25]. Similarly, \({\text {PlHolant}}({\mathcal {F}}\mid {\mathcal {G}})\) denotes the Holant problem over signature grids with a planar bipartite graph.
For a \(\kappa \)-by-\(\kappa \) matrix T and a signature set \({\mathcal {F}}\), define \(T {\mathcal {F}} = \{g \mid \exists f \in {\mathcal {F}}\) of arity \(n,~g = T^{\otimes n} f\}\); similarly for \({\mathcal {F}} T\). Whenever we write \(T^{\otimes n} f\) or \(T {\mathcal {F}}\), we view the signatures as column vectors; similarly for \(f T^{\otimes n} \) or \({\mathcal {F}} T\) as row vectors.
Let T be an invertible \(\kappa \)-by-\(\kappa \) matrix. The holographic transformation defined by T is the following operation: given a signature grid \(\Omega = (H, \pi )\) of \({\text {Holant}}({\mathcal {F}}\mid {\mathcal {G}})\), for the same bipartite graph H, we get a new grid \(\Omega ' = (H, \pi ')\) of \({\text {Holant}}({\mathcal {F}} T\mid T^{-1} {\mathcal {G}})\) by replacing each signature in \({\mathcal {F}}\) or \({\mathcal {G}}\) with the corresponding signature in \({\mathcal {F}} T\) or \(T^{-1} {\mathcal {G}}\). Valiant’s Holant Theorem [57] (see also [13]) is easily generalized to domain size \(\kappa \ge 3\).
Theorem 3.2
Suppose \(\kappa \ge 3\) is the domain size. If \(T \in {\mathbb {C}}^{\kappa \times \kappa }\) is an invertible matrix, then \({\text {Holant}}(\Omega ; {\mathcal {F}} \mid {\mathcal {G}}) = {\text {Holant}}(\Omega '; {\mathcal {F}} T \mid T^{-1} {\mathcal {G}})\).
Therefore, an invertible holographic transformation does not change the complexity of the Holant problem in the bipartite setting. Furthermore, there is a special kind of holographic transformation, the orthogonal transformation, that preserves the binary equality and thus can be used freely in the standard setting. For \(\kappa = 2\), this first appeared in [18] as Theorem 2.2.
Theorem 3.3
Suppose \(\kappa \ge 3\) is the domain size. If \(T \in {\mathbb {C}}^{\kappa \times \kappa }\) is an orthogonal matrix (i.e., \(T T^{\texttt {T}} = I_\kappa \)), then \({\text {Holant}}(\Omega ; {\mathcal {F}}) = {\text {Holant}}(\Omega '; T {\mathcal {F}})\).
Since the complexity of a signature is unchanged by a nonzero constant multiple, we also call a transformation T such that \(T T^{\texttt {T}} = \lambda I\) for some \(\lambda \ne 0\) an orthogonal transformation. Such transformations do not change the complexity of a problem.
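Concretely, the binary Equality signature \(=_2\) on domain size \(\kappa \), viewed as a \(\kappa \)-by-\(\kappa \) matrix, is \(I_\kappa \), and a holographic transformation by T sends it to \(T I_\kappa T^{\texttt {T}} = T T^{\texttt {T}}\); orthogonality is exactly the condition that this is (a multiple of) \(=_2\) again. A small numeric check with an illustrative 3-by-3 rotation:

```python
import math

def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

theta = 0.7  # an arbitrary angle; rotation in the first two coordinates
c, s = math.cos(theta), math.sin(theta)
T = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]  # orthogonal, domain size 3
Tt = [list(row) for row in zip(*T)]               # T transposed

eq2 = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]  # =_2 is I_3
transformed = matmul(matmul(T, eq2), Tt)          # T (=_2) T^T = T T^T

assert all(abs(transformed[i][j] - eq2[i][j]) < 1e-12
           for i in range(3) for j in range(3))
print("orthogonal T preserves the binary Equality signature")
```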
3.3 Realization
An \({\mathcal {F}}\)-gate is similar to a signature grid \((G, \pi )\), except that the underlying graph G may have dangling edges, each with one end not incident to any vertex; the dangling edges serve as the input variables of the \({\mathcal {F}}\)-gate, and its signature is obtained by summing over all assignments to the internal edges. An \({\mathcal {F}}\)-gate is planar if the underlying graph G is a planar graph, and the dangling edges, ordered counterclockwise corresponding to the order of the input variables, are in the outer face in a planar embedding. A planar \({\mathcal {F}}\)-gate can be used in a planar signature grid as if it is just a single vertex with the particular signature.
Using the idea of planar \({\mathcal {F}}\)-gates, we can reduce one planar Holant problem to another. Suppose g is the signature of some planar \({\mathcal {F}}\)-gate. Then \({\text {PlHolant}}({\mathcal {F}} \cup \{g\}) \le _T {\text {PlHolant}}({\mathcal {F}})\). The reduction is simple. Given an instance of \({\text {PlHolant}}({\mathcal {F}} \cup \{g\})\), by replacing every appearance of g by the \({\mathcal {F}}\)-gate, we get an instance of \({\text {PlHolant}}({\mathcal {F}})\). Since the signature of the \({\mathcal {F}}\)-gate is g, the Holant values for these two signature grids are identical.
Although our main results are about symmetric signatures (i.e., signatures that are invariant under any permutation of inputs), some of our proofs utilize asymmetric signatures. When a gadget has an asymmetric signature, we place a diamond on the edge corresponding to the first input. The remaining inputs are ordered counterclockwise around the vertex. (See Fig. 5 for an example.)
We note that even for a very simple signature set \({\mathcal {F}}\), the signatures for all \({\mathcal {F}}\)-gates can be quite complicated and expressive.
3.4 Succinct signatures
An arity r signature on domain size \(\kappa \) is fully specified by \(\kappa ^r\) values. However, some special cases can be defined using far fewer values. Consider the signature \({\textsc {AllDistinct}}_{r,\kappa }\) of arity r on domain size \(\kappa \) that outputs 1 when all inputs are distinct and 0 otherwise. We also denote this signature by \({\text {AD}}_{r,\kappa }\). In addition to being symmetric, it is also invariant under any permutation of the \(\kappa \) domain elements. We call the second property domain invariance. The signature of an \({\mathcal {F}}\)-gate in which all signatures in \({\mathcal {F}}\) are domain invariant is itself domain invariant.
Definition 3.4
(Succinct signature) Let \(\tau = (P_1, P_2, \cdots , P_\ell )\) be a partition of \([\kappa ]^r\) listed in some order. We say that f is a succinct signature of type \(\tau \) if f is constant on each \(P_i\). A set \({\mathcal {F}}\) of signatures is of type \(\tau \) if every \(f \in {\mathcal {F}}\) has type \(\tau \). We denote a succinct signature f of type \(\tau \) by \(\langle f(P_1), \cdots , f(P_\ell ) \rangle \), where \(f(P) = f(x)\) for any \(x \in P\).
Furthermore, we may omit 0 entries. If f is a succinct signature of type \(\tau \), we also say f is a succinct signature of type \(\tau '\) with length \(\ell '\), where \(\tau '\) lists \(\ell '\) parts of the partition \(\tau \), and we write f as \(\langle \,f_1, \,f_2, \cdots , \,f_{\ell '} \rangle \), provided that all nonzero values \(f(P_i)\) are listed. When using this notation, we will make it clear which zero entries have been omitted.
For example, a symmetric signature in the Boolean domain (i.e., \(\kappa = 2\)) has been denoted in previous work [14] by \([\,f_0, \,f_1, \cdots , \,f_r]\), where \(\,f_w\) is the output on inputs of Hamming weight w. This corresponds to the succinct signature type \((P_0, P_1, \cdots , P_r)\), where \(P_w\) is the set of inputs of Hamming weight w. A similar succinct signature notation was used for symmetric signatures on domain size 3 [22, p. 1282].
We prove a dichotomy theorem for \({\text {PlHolant}}(f)\) when f is a succinct ternary signature of type \(\tau _3\) on domain size \(\kappa \ge 3\). For \(\kappa \ge 3\), the succinct signature type \(\tau _3 = (P_1, P_2, P_3)\) is a partition of \([\kappa ]^3\) with \(P_i = \{(x,y,z) \in [\kappa ]^3 : |\{x, y, z\}| = i\}\) for \(1 \le i \le 3\). Here \(\{x, y, z\}\) denotes a multiset and \(|\{x, y, z\}|\) denotes the number of distinct elements in it. Succinct signatures of type \(\tau _3\) are exactly the symmetric and domain invariant ternary signatures. In particular, the succinct ternary signature for \({\text {AD}}_{3,\kappa }\) is \(\langle 0,0,1 \rangle \).
We use several other succinct signature types as well. For domain invariant unary signatures, there are only two signatures up to a nonzero scalar. Using the trivial partition that contains all inputs, we denote these two succinct unary signatures as \(\langle 0 \rangle \) and \(\langle 1 \rangle \) and say that they have succinct type \(\tau _1\). We also need a succinct signature type for domain invariant binary signatures. Such signatures are necessarily symmetric. We call their succinct signature type \(\tau _2 = (P_1, P_2)\), where \(P_i = \{(x,y) \in [\kappa ]^2 : |\{x, y\}| = i\}\) for \(1 \le i \le 2\).
We note that the number of succinct signature types for arity r signatures on domain size \(\kappa \) that are both symmetric and domain invariant is the number of partitions of r into at most \(\kappa \) parts. This is the partition function from number theory, which is not to be confused with the partition function from statistical mechanics that has been intensively studied in the complexity theory of counting problems.
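This count is easy to compute from the standard recurrence \(p(r, k) = p(r, k-1) + p(r-k, k)\), which splits the partitions into those with at most \(k-1\) parts and those with exactly k parts (subtract 1 from each part). An illustrative sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def npartitions(r, k):
    """Number of partitions of r into at most k parts."""
    if r == 0:
        return 1
    if r < 0 or k == 0:
        return 0
    # at most k-1 parts, or exactly k parts (reduce every part by 1)
    return npartitions(r, k - 1) + npartitions(r - k, k)

print(npartitions(3, 3))  # 3: the three parts of tau_3 for ternary signatures
print(npartitions(4, 2))  # 3: the partitions 4, 3+1, 2+2
```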
4 Counting edge \(\kappa \)-colorings over planar r-regular graphs
In this section, we show that counting edge \(\kappa \)-colorings over planar r-regular graphs is \({\#\mathrm {P}}\)-hard provided \(\kappa \ge r \ge 3\). When this condition fails to hold, the problem is trivially tractable. There are two cases depending on whether \(\kappa = r\) or not.
4.1 The Case \(\kappa = r\)
When \(\kappa = r\), we reduce from evaluating the Tutte polynomial of a planar graph at the positive integer points on the diagonal \(x = y\). For \(x \ge 3\), evaluating the Tutte polynomial of a planar graph at (x, x) is \({\#\mathrm {P}}\)-hard.
Theorem 4.1
(Theorem 5.1 in [59]) For \(x, y \in {\mathbb {C}}\), evaluating the Tutte polynomial at (x, y) is \({\#\mathrm {P}}\)-hard over planar graphs unless \((x - 1) (y - 1) \in \{1, 2\}\) or \((x,y) \in \{(1,1), (-1,-1), (\omega , \omega ^2), (\omega ^2, \omega )\}\), where \(\omega = e^{2 \pi i / 3}\). In each exceptional case, the computation can be done in polynomial time.
To state the connection with the diagonal of the Tutte polynomial, we need to consider Eulerian subgraphs in directed medial graphs. We say a graph is an Eulerian (di)graph if every vertex has even degree (resp. in-degree equal to out-degree), but connectedness is not required. Now recall the definition of a medial graph and its directed variant.
Definition 4.2
(cf. Section 4 in [30]) For a connected plane graph G (i.e., a planar embedding of a connected planar graph), its medial graph \(G_m\) has a vertex on each edge of G and two vertices in \(G_m\) are joined by an edge for each face of G in which their corresponding edges occur consecutively.
The directed medial graph \(\vec {G}_m\) of G colors the faces of \(G_m\) black or white depending on whether they contain or do not contain, respectively, a vertex of G. Then the edges of the medial graph are directed so that the black face is on the left.
Figures 3 and 4 give examples of a medial graph and a directed medial graph, respectively. Notice that the (directed) medial graph is always a planar 4regular graph.
Building on previous work [1, 29, 49, 58], Ellis-Monaghan gave the following connection with the diagonal of the Tutte polynomial. A monochromatic vertex is a vertex with all its incident edges having the same color.
Lemma 4.3
The Eulerian partitions in \({\mathcal {C}}(\vec {G}_m)\) have the property that the subgraphs induced by each partition do not intersect (or cross over) each other, due to the orientation of the edges in the medial graph. We call the counting problem defined by the sum on the right-hand side of (1) counting weighted Eulerian partitions over planar 4-regular graphs. This problem also has an expression as a Holant problem using a succinct signature. To define this succinct signature, it helps to know the following basic result about edge colorings.
When the number of available colors coincides with the regularity parameter of the graph, the cuts in any coloring satisfy a well-known parity condition. This parity condition follows from a more general parity argument (see (1.2) and the parity argument on page 95 in [54]). We state this simpler parity condition and provide a short proof for completeness.
Lemma 4.4
Proof
Consider two distinct colors i and j. Remove from G all edges not colored i or j. Since every vertex is incident to exactly one edge of each color, the resulting graph is a disjoint union of cycles whose edges alternate between colors i and j. Each such cycle must cross the cut C an even number of times. Therefore, \(c_i \equiv c_j \pmod {2}\). \(\square \)
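As an illustrative sanity check (ours, not part of the formal argument), the parity condition can be verified exhaustively on \(K_4\), the smallest planar 3-regular example: for every proper edge 3-coloring and every cut, the per-color counts of cut edges share the same parity.

```python
from itertools import product, combinations

# Edges of K4, a planar 3-regular graph.
V = range(4)
E = list(combinations(V, 2))  # 6 edges

def is_proper(col):
    # Adjacent edges (sharing a vertex) must receive different colors.
    return all(col[i] != col[j]
               for i, j in combinations(range(len(E)), 2)
               if set(E[i]) & set(E[j]))

colorings = [c for c in product(range(3), repeat=len(E)) if is_proper(c)]
assert len(colorings) == 6  # 3 perfect matchings, colored in 3! ways

# For every cut (S, V \ S) and every coloring, the numbers c_i of cut
# edges in each color class agree mod 2.
for col in colorings:
    for r in range(1, 4):
        for S in combinations(V, r):
            counts = [0, 0, 0]
            for k, (u, v) in enumerate(E):
                if (u in S) != (v in S):
                    counts[col[k]] += 1
            assert len({c % 2 for c in counts}) == 1
print("parity condition verified on K4")
```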
Lemma 4.5
Suppose \(\kappa \ge 3\) is the domain size and let F be a quaternary \(\{{\text {AD}}_{\kappa ,\kappa }\}\)-gate with succinct signature f of type \(\tau _{\text {color}}\). Then \(f(P_0) = 0\).
Proof
Let \(\sigma _0 \in P_0\) be an edge \(\kappa \)-labeling of the external edges of F. Assume for a contradiction that \(\sigma _0\) can be extended to an edge \(\kappa \)-coloring of F. We form a graph G from two copies of F (namely, \(F_1\) and \(F_2\)) by connecting their corresponding external edges. Then G has a coloring \(\sigma \) that extends \(\sigma _0\). Consider the cut \(C = (F_1, F_2)\) in G whose cut set contains exactly those edges labeled by \(\sigma _0\). By Lemma 4.4, the counts of the colors assigned by \(\sigma _0\) must satisfy the parity condition. However, this is a contradiction since no edge \(\kappa \)-labeling in \(P_0\) satisfies the parity condition. \(\square \)
By Lemma 4.5, we denote a quaternary signature f of an \(\{{\text {AD}}_{\kappa ,\kappa }\}\)-gate by the succinct signature \(\langle f(P_{{\begin{matrix} 1 &{} 1 \\ 1 &{} 1 \end{matrix}}}), f(P_{{\begin{matrix} 1 &{} 2 \\ 1 &{} 2 \end{matrix}}}), f(P_{{\begin{matrix} 1 &{} 2 \\ 2 &{} 1 \end{matrix}}}), f(P_{{\begin{matrix} 1 &{} 1 \\ 2 &{} 2 \end{matrix}}}), f(P_{{\begin{matrix} 1 &{} 4 \\ 2 &{} 3 \end{matrix}}}) \rangle \) of type \(\tau _{\text {color}}\), which has the entry for \(P_0\) omitted.^{3} When \(\kappa = 3\), \(P_{{\begin{matrix} 1 &{} 4 \\ 2 &{} 3 \end{matrix}}}\) is empty and we define its entry in the succinct signature to be 0.
Lemma 4.6
Proof
Each \(c \in {\mathcal {C}}(\vec {G}_m)\) is also an edge \(\kappa \)-labeling of \(G_m\). At each vertex \(v \in V(\vec {G}_m)\), the four incident edges are assigned at most two distinct colors by c. If all four edges are assigned the same color, then the constraint f on v contributes a factor of 2 to the total weight. This is given by the value in the first entry of f. Otherwise, there are two different colors, say x and y. Because the orientation at v in \(\vec {G}_m\) is cyclically "in, out, in, out," the coloring around v can only be of the form xxyy or xyyx. These correspond to the second and fourth entries of f. Therefore, every term in the summation on the left-hand side of (2) appears (with the same nonzero weight) in the summation \({\text {PlHolant}}(G_m; f)\).
It is also easy to see that every nonzero term in \({\text {PlHolant}}(G_m; f)\) appears in the sum on the left-hand side of (2) with the same weight of 2 to the power of the number of monochromatic vertices. In particular, any coloring with a vertex that is cyclically colored xyxy for different colors x and y does not contribute because \(f(P_{{\begin{matrix} 1 &{} 2 \\ 2 &{} 1 \end{matrix}}}) = 0\). \(\square \)
Remark
This result shows that this planar Holant problem is at least as hard as computing the Tutte polynomial at the point \((\kappa +1, \kappa +1)\) over planar graphs, which implies \({\#\mathrm {P}}\)-hardness. Of course, they are equally difficult in the sense that both are \({\#\mathrm {P}}\)-complete. In fact, they are more directly related since every 4-regular plane graph is the medial graph of some plane graph.
By Theorem 4.1 and Lemma 4.6, the problem \({\text {PlHolant}}(\langle 2,1,0,1,0 \rangle )\) is \({\#\mathrm {P}}\)-hard. We state this as a corollary.
Corollary 4.7
Suppose \(\kappa \ge 3\) is the domain size. Let \(\langle 2,1,0,1,0 \rangle \) be a succinct quaternary signature of type \(\tau _{\text {color}}\). Then \({\text {PlHolant}}(\langle 2,1,0,1,0 \rangle )\) is \({\#\mathrm {P}}\)-hard.
With this connection established, we can now show that counting edge colorings is \({\#\mathrm {P}}\)-hard over planar regular graphs when the number of colors and the regularity parameter coincide.
Theorem 4.8
#\(\kappa \)-EdgeColoring is \({\#\mathrm {P}}\)-hard over planar \(\kappa \)-regular graphs for all \(\kappa \ge 3\).
Proof
Consider an instance \(\Omega \) of \({\text {PlHolant}}(\langle 2,1,0,1,0 \rangle )\) on domain size \(\kappa \). Suppose \(\langle 2,1,0,1,0 \rangle \) appears n times in \(\Omega \). We construct from \(\Omega \) a sequence of instances \(\Omega _t\) of \({\text {PlHolant}}(f)\) indexed by t, where \(t = 2 s\) with \(s \ge 0\). We obtain \(\Omega _t\) from \(\Omega \) by replacing each occurrence of \(\langle 2,1,0,1,0 \rangle \) with the gadget \(\,f_t\).
As a polynomial in \(x = (\kappa - 1)^t\), \({\text {PlHolant}}(\Omega _t; f)\) is independent of t and has degree at most n with integer coefficients. Using our oracle for \({\text {PlHolant}}(f)\), we can evaluate this polynomial at \(n+1\) distinct points \(x = (\kappa - 1)^{2s}\) for \(0 \le s \le n\). Then via polynomial interpolation, we can recover the coefficients of this polynomial efficiently. Evaluating this polynomial at \(x = \kappa + 1\) (so that \(y = 1\)) gives the value of \({\text {PlHolant}}(\Omega ; \langle 2,1,0,1,0 \rangle )\), as desired. \(\square \)
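The interpolation step can be sketched concretely. In the following minimal illustration (ours, with a hypothetical degree-2 integer polynomial standing in for the oracle values, since an actual \({\text {PlHolant}}\) oracle is not available here), exact Lagrange interpolation recovers the value at the target point \(x = \kappa + 1\) from evaluations at \(x = (\kappa - 1)^{2s}\):

```python
from fractions import Fraction

def interpolate_at(points, target):
    """Exact Lagrange interpolation: given values (x_i, p(x_i)) of a
    polynomial p with deg p <= len(points) - 1 at distinct integer
    points, return p(target)."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= Fraction(target - xj, xi - xj)
        total += term
    return total

# Stand-in for the oracle: a hypothetical degree-2 integer polynomial
# playing the role of PlHolant(Omega_t; f) as a function of x.
kappa, n = 4, 2
p = lambda x: 2*x*x - 3*x + 7
pts = [((kappa - 1)**(2*s), p((kappa - 1)**(2*s))) for s in range(n + 1)]
assert interpolate_at(pts, kappa + 1) == p(kappa + 1)
```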
Remark
For \(\kappa = 3\), the interpolation step is actually unnecessary since the succinct signature of \(\,f_2\) happens to be \(\langle 2,1,0,1,0 \rangle \).
When \(\kappa = 3\), it is easy to extend Theorem 4.8 by further restricting to simple graphs, i.e., graphs without self-loops or parallel edges.
Theorem 4.9
#3-EdgeColoring is \({\#\mathrm {P}}\)-hard over simple planar 3-regular graphs.
Proof
By Theorem 4.8, it suffices to efficiently compute the number of edge 3-colorings of a planar 3-regular graph G that might have self-loops and parallel edges. Furthermore, we can assume that G is connected since the number of edge colorings is multiplicative over connected components. If G contains a self-loop, then there are no edge colorings in G, so assume G contains no self-loops. If G also contains no parallel edges, then G is simple and we are done.
Thus, assume that G contains n vertices and parallel edges between some distinct vertices u and v. If u and v are connected by three edges, then this constitutes the whole graph, which has six edge 3-colorings. Otherwise, u and v are connected by exactly two edges. Then there exist vertices \(u'\) and \(v'\) such that u and \(u'\) are connected by a single edge, v and \(v'\) are connected by a single edge, and \(u' \ne v'\). In any edge 3-coloring of G, it is easy to see that the edges \((u,u')\) and \((v,v')\) must be assigned the same color. By removing u, v, and their incident edges while adding an edge between \(u'\) and \(v'\), we obtain a planar 3-regular graph \(G'\) on \(n - 2\) vertices with half as many edge colorings as G. Then by induction, we can efficiently compute the number of edge 3-colorings in \(G'\). \(\square \)
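This reduction is easy to check on a small instance. The sketch below (ours; it assumes no self-loops, which force a count of zero anyway) counts edge 3-colorings of a multigraph by brute force and confirms that removing a doubled edge as described halves the count:

```python
from itertools import product, combinations

def count_edge_3colorings(edges):
    """Brute-force count of proper edge 3-colorings of a multigraph,
    given as a list of (u, v) pairs with parallel edges repeated."""
    total = 0
    for col in product(range(3), repeat=len(edges)):
        if all(col[i] != col[j]
               for i, j in combinations(range(len(edges)), 2)
               if set(edges[i]) & set(edges[j])):
            total += 1
    return total

# Planar 3-regular multigraph G with a doubled edge between 0 and 1.
G = [(0, 1), (0, 1), (0, 2), (1, 3), (2, 3), (2, 3)]
# The reduction removes 0 and 1 and joins their outside neighbors 2, 3.
G_reduced = [(2, 3), (2, 3), (2, 3)]  # a theta graph: 3! = 6 colorings
assert count_edge_3colorings(G_reduced) == 6
assert count_edge_3colorings(G) == 2 * count_edge_3colorings(G_reduced)
```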
In “Appendix 3”, we give an alternative proof of Theorem 4.8, which uses the interpolation techniques we develop in Sect. 6. The purpose of “Appendix 3” is to show how a recursive construction in an interpolation proof can be used to form a hypothesis about possible invariance properties. One example of an invariance property is that any planar \(\{{\text {AD}}_{\kappa ,\kappa }\}\)gate with a succinct quaternary signature \(\langle a,b,c,d,e \rangle \) of type \(\tau _{\text {color}}\) must satisfy \(a + c = b + d\) (Lemma 13.1).
4.2 The case \(\kappa > r\)
Now we consider \(\kappa > r \ge 3\). This time, we reduce from the problem of counting vertex \(\kappa \)-colorings over planar graphs. This problem is also \({\#\mathrm {P}}\)-hard by the same dichotomy for the Tutte polynomial (Theorem 4.1) since the chromatic polynomial is a specialization of the Tutte polynomial.
Proposition 4.10
The first step in the proof is to interpolate every possible domain-invariant binary signature; such signatures are necessarily symmetric. These signatures have the succinct signature type \(\tau _2\).
Lemma 4.11
Suppose \(\kappa \ge 3\) is the domain size and let \(x,y \in {\mathbb {C}}\). If we assign the succinct binary signature \(\langle x,y \rangle \) of type \(\tau _2\) to every vertex of the recursive construction in Fig. 7, then the corresponding recurrence matrix is \(\left[ {\begin{matrix} x &{} (\kappa - 1) y \\ y &{} x + (\kappa - 2) y \end{matrix}}\right] \) with eigenvalues \(x + (\kappa - 1) y\) and \(x - y\).
Proof
Let \(\,f_\ell \) be the signature of the \(\ell \)th gadget in this construction. To obtain \(\,f_{\ell +1}\) from \(\,f_\ell \), we view \(\,f_\ell \) as a column vector and multiply it by the recurrence matrix \(M = \left[ {\begin{matrix} x &{} (\kappa - 1) y \\ y &{} x + (\kappa - 2) y \end{matrix}}\right] \). In general, we have \(\,f_\ell = M^\ell \,f_0\), where \(\,f_0\) is the initial signature, which is just a single edge and has the succinct binary signature \(\langle 1,0 \rangle \) of type \(\tau _2\). The (column) eigenvectors of M are \(\left[ {\begin{matrix} 1 \\ 1 \end{matrix}}\right] \) and \(\left[ {\begin{matrix} 1 - \kappa \\ 1 \end{matrix}}\right] \) with eigenvalues \(x + (\kappa - 1) y\) and \(x - y\), respectively, as claimed. \(\square \)
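The claimed eigenpairs are mechanical to confirm; the following check (ours, for a few sample values of \(x\), \(y\), and \(\kappa \)) multiplies the recurrence matrix against both eigenvectors:

```python
# Multiply the recurrence matrix M = [[x, (k-1)y], [y, x + (k-2)y]]
# against the two claimed eigenvectors for several sample values.
def matvec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

for k in range(3, 7):
    for x, y in ((2, 5), (-1, 3), (7, 2)):
        M = [[x, (k - 1)*y], [y, x + (k - 2)*y]]
        # eigenvector [1, 1] with eigenvalue x + (k-1)y
        assert matvec(M, [1, 1]) == [x + (k - 1)*y] * 2
        # eigenvector [1-k, 1] with eigenvalue x - y
        assert matvec(M, [1 - k, 1]) == [(x - y)*(1 - k), x - y]
print("eigenpairs verified")
```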
The success of interpolation depends on these eigenvalues and the relationship between the recurrence matrix and the initial signature of the construction. To show that the interpolation succeeds, we use a result from [36], the full version of [37]. This result is about interpolating unary signatures on a Boolean domain for planar Holant problems, but the same proof applies equally well for higher domains using a binary recursive construction (like that in Fig. 7) and a succinct signature type with length 2.
Lemma 4.12
 1.
\(\det (M) \ne 0\);
 2.
\(\det ([s\ M s]) \ne 0\);
 3.
M has infinite order modulo a scalar;
Consider the recursive construction in Fig. 7. To every vertex, we assign the succinct binary signature \(\langle x,y \rangle \). Since the initial signature is \(s = \langle 1,0 \rangle \), the determinant of the matrix \([s\ M s]\) is simply y. In order to interpolate all binary succinct signatures of type \(\tau _2\), we need to satisfy the second condition of Lemma 4.12, which is \(y \ne 0\). However, when \(y = 0\), the recurrence matrix is a scalar multiple of the identity matrix, which implies that the eigenvalues are the same. For two-dimensional interpolation using a matrix with a full basis of eigenvectors, as is the case here, the third condition of Lemma 4.12 is equivalent to the ratio of the eigenvalues not being a root of unity. In particular, the eigenvalues cannot be the same. Therefore, when using the recursive construction in Fig. 7, it suffices to satisfy the first and third conditions of Lemma 4.12. We state this as a corollary.
Corollary 4.13
 1.
\(\det (M) \ne 0\);
 2.
M has infinite order modulo a scalar;
Lemma 4.14
Proof
Let \((n)_k = n (n-1) \cdots (n-k+1)\) be the kth falling power of n. Consider the gadget in Fig. 8. We assign \({\text {AD}}_{r,\kappa }\) to both vertices. The succinct binary signature of type \(\tau _2\) for this gadget is \(f = \langle (\kappa - 1)_{r - 1}, (\kappa - 2)_{r - 1} \rangle \). Up to a nonzero factor of \((\kappa - 2)_{r - 2}\), we have the signature \(f' = \frac{1}{(\kappa - 2)_{r - 2}} f = \langle \kappa - 1, \kappa - r \rangle \).
Consider the recursive construction in Fig. 7. We assign \(f'\) to all vertices. By Lemma 4.11, the eigenvalues of the corresponding recurrence matrix are \((r - 1) > 0\) and \((\kappa - 1) (\kappa - r + 1) > 0\). Thus, M is nonsingular. Furthermore, the eigenvalues are not equal since \(\kappa \not \in \{0,r\}\). Therefore, we are done by Corollary 4.13. \(\square \)
Next we show that \({\text {PlHolant}}({\text {AD}}_{r,\kappa })\) is at least as hard as \({\text {PlHolant}}({\text {AD}}_{3,\kappa })\). To overcome a difficulty when r is even, we apply the following result, which uses the notion of a planar pairing.
Definition 4.15
(Definition 11 in [37]) A planar pairing in a graph \(G = (V, E)\) is a set of edges \(P \subset V \times V\) such that P is a perfect matching in the graph \((V, V \times V)\), and the graph \((V, E \cup P)\) is planar.
Lemma 4.16
(Lemma 12 in [37]) For any planar 3regular graph G, there exists a planar pairing that can be computed in polynomial time.
Lemma 4.17
Proof
By Lemma 4.14, we can assume that \(\langle 1,1 \rangle \) is available. Take \({\text {AD}}_{r,\kappa }\) and first form \(t = \left\lceil \frac{r-4}{2} \right\rceil \) self-loops. Then add a new vertex on each self-loop and assign \(\langle 1,1 \rangle \) to each of these new vertices. Up to a nonzero factor of \((\kappa - 3)_{2t}\), the resulting signature is \({\text {AD}}_{3,\kappa }\) if r is odd and \({\text {AD}}_{4,\kappa }\) if r is even. To reduce from \(r = 3\) to \(r = 4\), we use a planar pairing, which can be efficiently computed by Lemma 4.16. We add a new vertex on each edge in a planar pairing and assign \(\langle 1,1 \rangle \) to each of these new vertices. Then up to a nonzero factor of \(\kappa - 3\), the signature at each vertex of the initial graph is effectively \({\text {AD}}_{3,\kappa }\). \(\square \)
The succinct binary signature \(\langle 1 - \kappa , 1 \rangle \) of type \(\tau _2\) has a special property. Let u be any constant unary signature, which has a succinct signature of type \(\tau _1\). If \(\langle 1 - \kappa , 1 \rangle \) is connected to u, then the resulting unary signature is identically 0.
This observation is the key to what follows. We use it in the next lemma to achieve what would appear to be an impossible task. Specified in full, the requirements would impose multiple conditions on nine separate polynomials attached to whatever construction plays the role of the gadget in Fig. 9. And yet we are able to use just one degree of freedom to make seven of these polynomials vanish, satisfying most of the conditions. Nor are the other two polynomials forgotten: they are nonzero, and their ratio is not a root of unity, which allows interpolation to succeed.
This ability to satisfy a multitude of constraints simultaneously in one magic stroke reminds us of some unfathomably brilliant moves by Bobby Fischer, the chess genius extraordinaire, and so we name this gadget (Fig. 9) the Bobby Fischer gadget.
Lemma 4.18
Proof
Consider the gadget in Fig. 9. We assign \(\langle a,b,b \rangle \) to the circle vertices and \(\langle 1 - \kappa , 1 \rangle \) to the square vertex. This gadget has a succinct quaternary signature of type \(\tau _4\), which has length 9. However, all but two of the entries in this succinct signature must be 0.
To see this, consider an assignment that assigns different values to the two edges incident to the circle vertex on top. Since the assignment to these two edges differs, the signature \(\langle a,b,b \rangle \) contributes a factor of b regardless of the value of its third edge, which is connected to the square vertex assigned \(\langle 1 - \kappa , 1 \rangle \). From the perspective of \(\langle 1 - \kappa , 1 \rangle \), this behavior is equivalent to connecting it to the succinct unary signature \(b \langle 1 \rangle \) of type \(\tau _1\). Thus, the sum over the possible assignments to this third edge is 0. The same argument shows that the two edges incident to the circle vertex on the bottom do not contribute anything to the Holant sum when assigned different values.
Thus, any nonzero contribution to the Holant sum comes from assignments where the top two dangling edges are assigned the same value and the bottom two dangling edges are assigned the same value. There are only two entries that satisfy this requirement in the succinct quaternary signature of type \(\tau _4\) for this gadget, which are the entries for \(P_{{\begin{matrix} 1 &{} 1 \\ 1 &{} 1 \end{matrix}}}\) and \(P_{{\begin{matrix} 1 &{} 1 \\ 2 &{} 2 \end{matrix}}}\). To compute these two entries, we use the following trick. Since the two external edges of each circle vertex must be assigned the same value, we think of them as just a single edge. Then the effective succinct binary signature of type \(\tau _2\) for the circle vertices is just \(\langle a,b \rangle \). Connecting the first \(\langle a,b \rangle \) with \(\langle 1-\kappa , 1 \rangle \) yields \((a-b) \langle 1-\kappa , 1 \rangle \); that is, \(\langle 1-\kappa , 1 \rangle \) behaves like an eigenvector. Connecting the other copy of \(\langle a,b \rangle \) to \((a-b) \langle 1-\kappa , 1 \rangle \) gives \((a-b)^2 \langle 1-\kappa , 1 \rangle \). This computation is expressed via the matrix multiplication \([b J_\kappa + (a - b) I_\kappa ] [J_\kappa - \kappa I_\kappa ] [b J_\kappa + (a - b) I_\kappa ] = (a - b) [J_\kappa - \kappa I_\kappa ] [b J_\kappa + (a - b) I_\kappa ] = (a - b)^2 [J_\kappa - \kappa I_\kappa ]\). Thus, up to a nonzero factor of \((a-b)^2\), the corresponding succinct quaternary signature of type \(\tau _4\) for this gadget is \(f = \langle 1-\kappa ,0,0,0,0,0,1,0,0 \rangle \).
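The matrix identity used here can be confirmed numerically; the sketch below (ours) checks \([b J_\kappa + (a - b) I_\kappa ][J_\kappa - \kappa I_\kappa ][b J_\kappa + (a - b) I_\kappa ] = (a - b)^2 [J_\kappa - \kappa I_\kappa ]\) for sample values:

```python
import numpy as np

# Verify [bJ + (a-b)I][J - kI][bJ + (a-b)I] = (a-b)^2 [J - kI] for a few
# sample values of a, b, and the domain size k.
for k in (3, 4, 5):
    J, I = np.ones((k, k)), np.eye(k)
    for a, b in ((2.0, 5.0), (-1.0, 4.0)):
        A = b*J + (a - b)*I   # binary signature <a,b> as a k x k matrix
        D = J - k*I           # binary signature <1-k,1> as a k x k matrix
        assert np.allclose(A @ D, (a - b) * D)  # the "eigenvector" step
        assert np.allclose(A @ D @ A, (a - b)**2 * D)
print("gadget identity verified")
```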
Consider the recursive construction in Fig. 6. We assign f to all vertices. Let \(\,f_s\) be the signature of the sth gadget in this construction. The seven entries that are 0 in the succinct signature of type \(\tau _4\) for f are also 0 in the succinct signature of type \(\tau _4\) for \(\,f_s\). Thus, we can express \(\,f_s\) via a succinct signature of type \(\tau _4'\) with length 2, defined as follows. The first two parts in \(\tau _4'\) are \(P_{{\begin{matrix} 1 &{} 1 \\ 1 &{} 1 \end{matrix}}}\) and \(P_{{\begin{matrix} 1 &{} 1 \\ 2 &{} 2 \end{matrix}}}\) from the succinct signature type \(\tau _4\). The last part contains all the remaining assignments. Then the succinct signature for \(\,f_s\) of type \(\tau _4'\) is \(M^s \,f_0\), where \(M = \left[ {\begin{matrix} 1-\kappa &{} 0 \\ 0 &{} 1 \end{matrix}}\right] \) and \(\,f_0 = \langle 1,1 \rangle \), which is just the succinct signature of type \(\tau _4'\) for two parallel edges.
Clearly the conditions in Lemma 4.12 hold, so we can interpolate any succinct signature of type \(\tau _4'\). In particular, we can interpolate our target signature \(=_4\), which is \(\langle 1,0 \rangle \) when expressed as a succinct signature of type \(\tau _4'\). \(\square \)
Remark
The nine polynomials mentioned before Lemma 4.18 correspond to the nine entries of some quaternary gadget with a succinct signature of type \(\tau _4\). In light of Lemma 4.14, this gadget might involve many succinct binary signatures \(\langle x,y \rangle \) of type \(\tau _2\) for various choices of \(x,y \in {\mathbb {C}}\). Each distinct binary signature provides an additional degree of freedom to these polynomials. Our construction in Fig. 9 only requires one binary signature \(\langle x,y \rangle \), and we use our one degree of freedom to set \(\frac{x}{y} = 1 - \kappa \).
With the aid of the succinct unary signature \(\langle 1 \rangle \) of type \(\tau _1\) and the succinct binary signature \(\langle 0,1 \rangle \) of type \(\tau _2\), the assumptions in the previous lemma are sufficient to prove \({\#\mathrm {P}}\)hardness.
Corollary 4.19
Suppose \(\kappa \ge 3\) is the domain size and \(a,b \in {\mathbb {C}}\). Let \({\mathcal {F}}\) be a signature set containing the succinct ternary signature \(\langle a, b, b \rangle \) of type \(\tau _3\), the succinct unary signature \(\langle 1 \rangle \) of type \(\tau _1\), and the succinct binary signatures \(\langle 1 - \kappa , 1 \rangle \) and \(\langle 0,1 \rangle \) of type \(\tau _2\). If \(a \ne b\), then \({\text {PlHolant}}({\mathcal {F}})\) is \({\#\mathrm {P}}\)-hard.
Proof
By Lemma 4.18, we have \(=_4\). Connecting \(\langle 1 \rangle \) to \(=_4\) gives \(=_3\). With \(=_3\), we can construct the equality signatures of every arity. Along with the binary disequality signature \(\ne _2\), which is the succinct binary signature \(\langle 0,1 \rangle \) of type \(\tau _2\), we can express the problem of counting the number of vertex \(\kappa \)-colorings over planar graphs. By Proposition 4.10, this is, up to a nonzero factor, the problem of evaluating the Tutte polynomial at \((1 - \kappa , 0)\), which is \({\#\mathrm {P}}\)-hard by Theorem 4.1. \(\square \)
Now we can show that counting edge colorings is \({\#\mathrm {P}}\)-hard over planar regular graphs when there are more colors than the regularity parameter.
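The chromatic specialization of the Tutte polynomial invoked here, \(P(G; \kappa ) = (-1)^{r(G)} \kappa ^{c(G)} T(G; 1 - \kappa , 0)\) with \(r(G)\) the rank and \(c(G)\) the number of connected components, can be illustrated on the triangle, whose Tutte polynomial is \(x^2 + x + y\). A quick check (ours):

```python
from itertools import product

# Check P(G;k) = (-1)^{r(G)} k^{c(G)} T(G; 1-k, 0) on the triangle,
# where T(x,y) = x^2 + x + y, r(G) = 3 - 1 = 2, c(G) = 1.
edges = [(0, 1), (1, 2), (0, 2)]
for k in range(1, 8):
    # brute-force count of proper vertex k-colorings
    proper = sum(all(col[u] != col[v] for u, v in edges)
                 for col in product(range(k), repeat=3))
    x = 1 - k
    assert proper == (-1)**2 * k * (x**2 + x + 0)
print("chromatic specialization verified on the triangle")
```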
Theorem 4.20
#\(\kappa \)-EdgeColoring is \({\#\mathrm {P}}\)-hard over planar r-regular graphs for all \(\kappa > r \ge 3\).
Proof
By Lemma 4.17, it suffices to consider \(r = 3\). By Lemma 4.14, we can assume that any succinct binary signature of type \(\tau _2\) is available.
Consider the gadget in Fig. 10. We assign \({\text {AD}}_{3,\kappa }\) to the circle vertex and \(\langle 3 - \kappa , 1 \rangle \) to the square vertices. By Lemma 11.6, the succinct ternary signature of type \(\tau _3\) for this gadget is \(f = 2 (\kappa - 2) \langle (\kappa - 3)(\kappa - 1), 1, 1 \rangle \).
Now take two edges of \({\text {AD}}_{3,\kappa }\) and connect them to the two edges of \(\langle 1,1 \rangle \). Up to a nonzero factor of \((\kappa - 1) (\kappa - 2)\), this gadget has the succinct unary signature \(\langle 1 \rangle \) of type \(\tau _1\). Then we are done by Corollary 4.19. \(\square \)
5 Tractable problems
In the rest of the paper, we adapt and extend our previous proof techniques to obtain a dichotomy for \({\text {PlHolant}}(\langle a,b,c \rangle )\), where \(\langle a,b,c \rangle \) is a succinct ternary signature of type \(\tau _3\) on domain size \(\kappa \ge 3\), for any \(a,b,c \in {\mathbb {C}}\). In this section, we show how to compute a few of these problems in polynomial time.
5.1 Previous dichotomy theorem
There is only one previous dichotomy theorem for higher domain Holant problems. It is a dichotomy for a single symmetric ternary signature on domain size \(\kappa = 3\) in the framework of Holant\(^*\) problems, which means that all unary signatures are assumed to be freely available.
In Theorem 5.1, the notation \(f ^\frown g\) denotes the signature that results from connecting one edge incident to a vertex assigned the signature f to one edge incident to a vertex assigned the signature g. When f and g are both unary signatures, which are represented by vectors, then the resulting 0-ary signature is just a scalar.
Theorem 5.1
 1.There exist \(\alpha , \beta , \gamma \in {\mathbb {C}}^3\) that are mutually orthogonal (i.e., \(\alpha ^\frown \beta = \alpha ^\frown \gamma = \beta ^\frown \gamma = 0\)) and$$\begin{aligned} f = \alpha ^{\otimes 3} + \beta ^{\otimes 3} + \gamma ^{\otimes 3}; \end{aligned}$$
 2.There exist \(\alpha , \beta _1, \beta _2 \in {\mathbb {C}}^3\) such that \(\alpha ^\frown \beta _1 = \alpha ^\frown \beta _2 = \beta _1 ^\frown \beta _1 = \beta _2 ^\frown \beta _2 = 0\) and$$\begin{aligned} f = \alpha ^{\otimes 3} + \beta _1^{\otimes 3} + \beta _2^{\otimes 3}; \end{aligned}$$
 3.There exist \(\beta , \gamma \in {\mathbb {C}}^3\) and \(\,f_\beta \in ({\mathbb {C}}^{3})^{\otimes 3}\) such that \(\beta \ne {\mathbf {0}}\), \(\beta ^\frown \beta = 0\), \(\,f_\beta ^\frown \beta = {\mathbf {0}}\), and$$\begin{aligned} f = \,f_\beta + \beta ^{\otimes 2} \otimes \gamma + \beta \otimes \gamma \otimes \beta + \gamma \otimes \beta ^{\otimes 2}. \end{aligned}$$
Some domain invariant signatures are tractable by Theorem 5.1.
Corollary 5.2
 1.
\(f = \lambda \langle 1, 0, 0 \rangle = \lambda \left[ (1,0,0)^{\otimes 3} + (0,1,0)^{\otimes 3} + (0,0,1)^{\otimes 3}\right] \);
 2.
\(f = -3 \lambda \langle 5, 2, -4 \rangle = \lambda \left[ (1,-2,-2)^{\otimes 3} + (-2,1,-2)^{\otimes 3} + (-2,-2,1)^{\otimes 3}\right] \);
 3.
\(f = \langle a, b, a \rangle = \frac{a + 2 b}{3} (1,1,1)^{\otimes 3} + \frac{a - b}{3} \left[ (1, \omega , \omega ^2)^{\otimes 3} + (1, \omega ^2, \omega )^{\otimes 3}\right] \),
In Corollary 5.2, form 1 is the ternary equality signature \(=_3\), which is trivially tractable for any domain size. Then form 2 is just form 1 after a holographic transformation by the matrix \(T = \left[ {\begin{matrix} 1 &{}\quad -2 &{}\quad -2 \\ -2 &{}\quad 1 &{}\quad -2 \\ -2 &{}\quad -2 &{}\quad 1 \end{matrix}}\right] \), which is orthogonal after scaling by \(\frac{1}{3}\). This is an example of two problems that must have the same complexity by Theorem 3.3.
The tractability of these two problems for higher domain sizes is stated in the following corollary.
Corollary 5.3
 1.
\(f = \lambda \langle 1,0,0 \rangle \);
 2.
\(f = \lambda T^{\otimes 3} \langle 1,0,0 \rangle = \lambda \kappa \langle \kappa ^2 - 6 \kappa + 4, -2 (\kappa - 2), 4 \rangle \),
Note that \(T = \kappa I_\kappa - 2 J_\kappa \) is an orthogonal matrix after scaling by \(\frac{1}{\kappa }\).
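Both claims are mechanical to verify for small \(\kappa \); the following check (ours) confirms that \(T = \kappa I_\kappa - 2 J_\kappa \) is orthogonal after scaling by \(\frac{1}{\kappa }\) and recomputes the three succinct entries of \(T^{\otimes 3} \langle 1,0,0 \rangle \):

```python
import numpy as np

# T = kI - 2J: check (T/k)(T/k)^T = I, i.e., T T^T = k^2 I, and recompute
# the succinct entries a = f(x,x,x), b = f(x,x,y), c = f(x,y,z) of
# f = T^{tensor 3} applied to the ternary equality =_3.
for k in range(3, 7):
    T = k*np.eye(k, dtype=np.int64) - 2*np.ones((k, k), dtype=np.int64)
    assert np.array_equal(T @ T.T, k*k*np.eye(k, dtype=np.int64))
    f = lambda x, y, z: int(sum(T[x, i]*T[y, i]*T[z, i] for i in range(k)))
    a, b, c = f(0, 0, 0), f(0, 0, 1), f(0, 1, 2)
    assert (a, b, c) == (k*(k*k - 6*k + 4), -2*k*(k - 2), 4*k)
print("orthogonality and succinct entries verified")
```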
5.2 Affine signatures
Let \(\omega \) be a primitive third root of unity. Consider the ternary signature f(x, y, z) with succinct ternary signature \(\langle 1,0,c \rangle \) of type \(\tau _3\) on domain size 3, where \(c^3 = 1\). Its support is an affine subspace of \({\mathbb {Z}}_3\) defined by \(x + y + z = 0\). Furthermore, consider the quadratic polynomial \(q_c(x, y, z) = \lambda _c (x y + x z + y z)\), where \(\lambda _1 = 0\), \(\lambda _\omega = 2\), and \(\lambda _{\omega ^2} = 1\). Then \(\omega ^{q_c(x, y, z)}\) agrees with f when \(x + y + z = 0\). This function f is an example of a ternary domain affine signature.
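The agreement between \(\omega ^{q_c}\) and f on the support can be checked exhaustively over \({\mathbb {Z}}_3^3\); the following sketch (ours) does so for all three values of c:

```python
import cmath
from itertools import product

# Check that w^{q_c(x,y,z)} agrees with f = <1,0,c> on the support
# x + y + z = 0 (mod 3), for c in {1, w, w^2} with the stated lambda_c.
w = cmath.exp(2j * cmath.pi / 3)
lam = {0: 0, 1: 2, 2: 1}  # exponent e of c = w^e  ->  lambda_c
for e, l in lam.items():
    c = w**e
    for x, y, z in product(range(3), repeat=3):
        if (x + y + z) % 3:
            continue
        # On the support the entries are all equal (value 1) or all
        # distinct (value c); exactly two equal is impossible mod 3.
        fval = 1 if x == y == z else c
        assert abs(w**(l*(x*y + x*z + y*z)) - fval) < 1e-9
print("affine representation verified")
```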
Definition 5.4
The ternary domain affine signatures are tractable just as those in the Boolean domain are [10].
Lemma 5.5
Suppose the domain size is 3. Then \({\text {Holant}}({\mathscr {A}})\) is computable in polynomial time.
Proof
Given an instance of \({\text {Holant}}({\mathscr {A}})\), the output can be expressed as the summation of a single function \(F = \chi _{Ax = 0} \cdot e^{\frac{2 \pi i}{3} q(x_1, x_2, \cdots , x_k)}\) since \({\mathscr {A}}\) is closed under multiplication. In polynomial time, we can solve the linear system \(A x = 0\) over \({\mathbb {Z}}_3\) and decide whether it is feasible. If the linear system is infeasible, then F is the identically 0 function, so the output is just 0. Otherwise, we can parametrize the solution set by free variables; the output then becomes an exponential sum of a quadratic polynomial over \({\mathbb {Z}}_3\), which can be evaluated in polynomial time by standard techniques for quadratic Gauss sums, just as in the Boolean domain [10].
\(\square \)
After multiplying the function \(\langle 1,0,c \rangle \) by a scalar, we obtain the succinct ternary signature \(\langle a,0,c \rangle \) of type \(\tau _3\) such that \(a^3 = c^3\). Since undergoing an orthogonal transformation does not change the complexity of the problem by Theorem 3.3, we obtain the following corollary of the previous result.
Corollary 5.6
Suppose the domain size is 3 and \(a,c \in {\mathbb {C}}\). Let \(T \in {\mathbf {O}}_3({\mathbb {C}})\) and let \(\langle a,0,c \rangle \) be a succinct ternary signature of type \(\tau _3\). If \(a^3 = c^3\), then \({\text {Holant}}(T^{\otimes 3} \langle a,0,c \rangle )\) is computable in polynomial time.
For domain size 3, the only orthogonal matrix T such that \(T^{\otimes 3} \langle a,b,c \rangle \) is still a succinct ternary signature of type \(\tau _3\) is \(\pm \frac{1}{3} \left[ {\begin{matrix} 1 &{}\quad -2 &{}\quad -2 \\ -2 &{}\quad 1 &{}\quad -2 \\ -2 &{}\quad -2 &{}\quad 1 \end{matrix}}\right] \). However, the tractability in Corollary 5.6 holds for any orthogonal matrix T.
We are interested in this problem because its tractability implies the tractability of a set of problems defined by a succinct ternary signature of type \(\tau _3\).
Lemma 5.7
Suppose the domain size is 4 and \(\lambda , \mu \in {\mathbb {C}}\). Let \(\langle \mu ^2, 1, \mu \rangle \) be a succinct ternary signature of type \(\tau _3\). If \(\mu = -1 + \varepsilon 2 i\) with \(\varepsilon = \pm 1\), then \({\text {Holant}}(\lambda \langle \mu ^2, 1, \mu \rangle )\) is computable in polynomial time.
Proof
We restate this lemma as a simple corollary for later convenience.
Corollary 5.8
Suppose the domain size is 4 and \(a,b,c \in {\mathbb {C}}\). Let \(\langle a,b,c \rangle \) be a succinct ternary signature of type \(\tau _3\). If \(a + 5 b + 2 c = 0\) and \(5 b^2 + 2 b c + c^2 = 0\), then \({\text {Holant}}(\langle a,b,c \rangle )\) is computable in polynomial time.
Proof
Since \(a = -5 b - 2 c\) and \(b = \frac{1}{5} (-1 \pm 2 i) c\), after scaling by \(\mu = -1 \mp 2 i\), we have \(\mu \langle a,b,c \rangle = c \langle \mu ^2, 1, \mu \rangle \) and are done by Lemma 5.7. \(\square \)
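As a numerical confirmation of this sign bookkeeping (ours, with an arbitrary nonzero test value for c), one can check that the two hypotheses of Corollary 5.8 hold and that scaling by \(\mu \) produces \(c \langle \mu ^2, 1, \mu \rangle \):

```python
# Check the algebra in the proof of Corollary 5.8: with b and a as below,
# both hypotheses of the corollary hold, and scaling <a,b,c> by mu gives
# c<mu^2, 1, mu>. The value c = 3 is an arbitrary nonzero test value.
for e in (1, -1):
    c = 3 + 0j
    b = (-1 + 2j*e) * c / 5
    a = -5*b - 2*c
    assert abs(a + 5*b + 2*c) < 1e-9          # first hypothesis
    assert abs(5*b*b + 2*b*c + c*c) < 1e-9    # second hypothesis
    mu = -1 - 2j*e
    assert abs(mu*a - c*mu**2) < 1e-9         # first entry matches
    assert abs(mu*b - c) < 1e-9               # second entry matches
print("Corollary 5.8 scaling verified")
```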
6 An interpolation result
The goal of this section is to generalize an interpolation result from [21], which we rephrase using our notion of a succinct signature (cf. Lemma 4.12).
Theorem 6.1
 1.
\(\det (M) \ne 0\);
 2.
s is not orthogonal to any row eigenvector of M;
 3.
for all \((i,j,k) \in {\mathbb {Z}}^3 - \{(0,0,0)\}\) with \(i+j+k=0\), we have \(\alpha ^i \beta ^j \gamma ^k \ne 1\);
Our generalization of this result is designed to relax the second condition so that s can be orthogonal to some row eigenvectors of M. Suppose r is a row eigenvector of M, with eigenvalue \(\lambda \), that is orthogonal to s (i.e., the dot product \(r \cdot s\) is 0). Consider \(M^k s\), the kth signature in the infinite sequence defined by M and s. This signature is also orthogonal to r since \(r \cdot M^k s = \lambda ^k r \cdot s = 0\). We do not know of any way of interpolating a signature using this infinite sequence that is not also orthogonal to r. On the other hand, we would like to interpolate those signatures that do satisfy this orthogonality condition. Our interpolation result gives a sufficient condition to achieve this.
We assume our n-by-n matrix M is diagonalizable, i.e., it has n linearly independent (row and column) eigenvectors. We do not assume that M necessarily has n distinct eigenvalues (although this would be a sufficient condition for it to be diagonalizable). The relaxation of the second condition is that, for some positive integer \(\ell \), the initial signature s is not orthogonal to exactly \(\ell \) of these linearly independent row eigenvectors of M. To satisfy this condition, we use a two-step approach. First, we explicitly exhibit \(n - \ell \) linearly independent row eigenvectors of M that are orthogonal to s. Then we use the following lemma to show that the remaining row eigenvectors of M are not orthogonal to s. The justification for this approach is that the eigenvectors orthogonal to s are often simple to express while the eigenvectors not orthogonal to s tend to be more complicated.
Lemma 6.2
For \(n \in {\mathbb {Z}}^+\), let \(s \in {\mathbb {C}}^{n \times 1}\) be a vector and let \(M \in {\mathbb {C}}^{n \times n}\) be a diagonalizable matrix. If \({\text {rank}}([s\ M s\ \ldots \ M^{n-1} s]) \ge \ell \), then for any set of n linearly independent row eigenvectors, s is not orthogonal to at least \(\ell \) of them.
Proof
Since M is diagonalizable, it has n linearly independent eigenvectors. Suppose for a contradiction that there exists a set of n linearly independent row eigenvectors of M such that s is orthogonal to \(t > n - \ell \) of them. Let \(N \in {\mathbb {C}}^{t \times n}\) be the matrix whose t rows are the row eigenvectors of M that are orthogonal to s. Then \(N [s\ M s\ \ldots \ M^{n-1} s]\) is the zero matrix. Since the rows of N are linearly independent, the columns of \([s\ M s\ \ldots \ M^{n-1} s]\) lie in a subspace of dimension \(n - t < \ell \), so \({\text {rank}}([s\ M s\ \ldots \ M^{n-1} s]) < \ell \), a contradiction. \(\square \)
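Lemma 6.2 can be illustrated numerically; in the example below (ours), M is diagonal, so its row eigenvectors are the standard basis vectors, and the Krylov matrix \([s\ M s\ \ldots \ M^{n-1} s]\) has rank equal to the number of basis vectors not orthogonal to s:

```python
import numpy as np

# M is diagonal, so its row eigenvectors are the standard basis vectors
# e_1, ..., e_4. s is orthogonal to e_2 and e_4, hence the Krylov matrix
# has rank 2, matching the number of eigenvectors not orthogonal to s.
n = 4
M = np.diag([2.0, 3.0, 5.0, 7.0])
s = np.array([1.0, 0.0, 4.0, 0.0])
K = np.column_stack([np.linalg.matrix_power(M, k) @ s for k in range(n)])
rank = np.linalg.matrix_rank(K)
not_orth = sum(abs(e @ s) > 1e-9 for e in np.eye(n))
assert rank == 2 and not_orth == rank
print("Krylov rank:", rank)
```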
The third condition of Theorem 6.1 is also known as the lattice condition.
Definition 6.3
Fix some \(\ell \in {\mathbb {N}}\). We say that \(\lambda _1, \lambda _2, \cdots , \lambda _\ell \in {\mathbb {C}} \setminus \{0\}\) satisfy the lattice condition if for all \(x \in {\mathbb {Z}}^\ell \setminus \{{\mathbf {0}}\}\) with \(\sum _{i=1}^\ell x_i = 0\), we have \(\prod _{i=1}^\ell \lambda _i^{x_i} \ne 1\).
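For rational \(\lambda _i\)'s, the lattice condition can be tested by brute force over a bounded box of exponent vectors. The sketch below is ours, not from the paper; it uses exact rational arithmetic, and for algebraic \(\lambda _i\)'s one would need exact algebraic arithmetic instead.

```python
from fractions import Fraction
from itertools import product

def violates_lattice_condition(lams, bound=3):
    # search integer exponent vectors x != 0 with sum(x) == 0 for a
    # relation prod(lam_i ** x_i) == 1, in exact rational arithmetic
    lams = [Fraction(l) for l in lams]
    for x in product(range(-bound, bound + 1), repeat=len(lams)):
        if any(x) and sum(x) == 0:
            value = Fraction(1)
            for lam, e in zip(lams, x):
                value *= lam ** e
            if value == 1:
                return True
    return False

assert violates_lattice_condition([2, 8, 4])      # 2^1 * 8^1 * 4^(-2) = 1
assert not violates_lattice_condition([2, 3, 5])  # multiplicatively independent
```

The search is only conclusive for violations; passing the test up to a bound does not prove the condition holds for all exponent vectors.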
When \(\ell \ge 3\), we use Galois theory to show that the lattice condition is satisfied. The idea is that the lattice condition must hold if the Galois group of the polynomial whose roots are the \(\lambda _i\)'s is large enough. In [21], this technique was used, for the special case \(n = \ell = 3\), to show that the roots of most cubic polynomials satisfy the lattice condition.
Lemma 6.4
(Lemma 5.2 in [21]) Let \(f(x) \in {\mathbb {Q}}[x]\) be an irreducible cubic polynomial. Then the roots of f(x) satisfy the lattice condition iff f(x) is not of the form \(a x^3 + b\) for some \(a,b \in {\mathbb {Q}}\).
In the following lemma, we show that if the Galois group for a polynomial of degree n is one of the two largest possible groups, \(S_n\) or \(A_n\), then its roots satisfy the lattice condition provided these roots do not all have the same complex norm.
Lemma 6.5
Let f be a polynomial of degree \(n \ge 2\) with rational coefficients. If the Galois group of f over \({\mathbb {Q}}\) is \(S_n\) or \(A_n\) and the roots of f do not all have the same complex norm, then the roots of f satisfy the lattice condition.
Proof
Suppose for a contradiction that the roots of f do not satisfy the lattice condition, so some nontrivial multiplicative relation among them, with exponents summing to 0, equals 1. Moving the negative exponents to the other side, we write this relation as \(b_1 \cdots b_s = c_1 \cdots c_t\), where each \(b_i\) and each \(c_i\) is a root of f; let \(a_1, \cdots , a_n\) denote the roots of f ordered by nondecreasing complex norm. If \(n=2\), then \(s=t=1\) and \(b_1 = c_1\). This is a contradiction to the assumption that the roots of f do not all have the same complex norm. Otherwise, assume \(n \ge 3\). If \(s = t = 1\), then \(b_1 = c_1\) again. We apply 3-cycles from \(A_n\) to conclude that all roots of f have the same complex norm, a contradiction. Otherwise, \(s + t > 2\). Without loss of generality, suppose \(s \ge t\), which implies \(s \ge 2\). Pick \(j \in \{0, \cdots , n-s-t\}\) such that \(|a_{j+1}| \le \cdots \le |a_{j+s+t}|\) contains a strict inequality. We permute the roots so that \(b_i = a_{j+i}\) for \(1 \le i \le s\) and \(c_i = a_{j+s+i}\) for \(1 \le i \le t\) (or possibly swapping \(b_1\) and \(b_2\) if necessary to ensure the permutation is in \(A_n\)). Then taking the complex norm of both sides gives a contradiction. \(\square \)
Remark
This result can simplify the interpolation arguments in [21]. Since each of their cubic polynomials is irreducible, the corresponding Galois groups are transitive subgroups of \(S_3\), namely \(S_3\) or \(A_3\) (and in fact, by inspection, they are all \(S_3\)). Then Lemma 4.5 from [44] (the full version of [43]) shows that the roots of these polynomials do not all have the same complex norm. Thus, the roots of all polynomials exhibited in [21] satisfy the lattice condition by Lemma 6.5.
In the current paper, we apply Lemma 6.5 to an infinite family of quintic polynomials that we encounter in Sect. 7. If the polynomials are irreducible, then we are able to apply this lemma. Unfortunately, we are unable to show that all these polynomials are irreducible and thus also have to consider the possible ways in which they could factor. Nevertheless, we are still able to show that all these polynomials satisfy the lattice condition.
To conclude, we state and prove our new interpolation result.
Lemma 6.6
 1.
M is diagonalizable with n linearly independent eigenvectors;
 2.
s is not orthogonal to exactly \(\ell \) of these linearly independent row eigenvectors of M with eigenvalues \(\lambda _1, \cdots , \lambda _\ell \);
 3.
\(\lambda _1, \cdots , \lambda _\ell \) satisfy the lattice condition;
Proof
Let \(\lambda _{1}, \cdots , \lambda _n\) be the n eigenvalues of M, with possible repetition. Since M is diagonalizable, we can write M as \(T \Lambda T^{-1}\), where \(\Lambda \) is the diagonal matrix \(\left[ {\begin{matrix} B_1 &{} {\mathbf {0}} \\ {\mathbf {0}} &{} B_2 \end{matrix}}\right] \) with \(B_1 = {\text {diag}}(\lambda _1, \cdots , \lambda _\ell )\) and \(B_2 = {\text {diag}}(\lambda _{\ell +1}, \cdots , \lambda _n)\). Notice that the columns of T are the column eigenvectors of M and the rows of \(T^{-1}\) are the row eigenvectors of M. Let \({\mathbf {t}}_i\) be the ith column of T and let \(T^{-1} s = [\alpha _1\ \ldots \ \alpha _n]^{\texttt {T}}\). Then \(\alpha _i \ne 0\) for \(1 \le i \le \ell \) and \(\alpha _i = 0\) for \(\ell < i \le n\), since s is not orthogonal to exactly the first \(\ell \) row eigenvectors of M.
Therefore, we can solve for the \(c_y\)’s in polynomial time and compute \({\text {PlHolant}}(\Omega ; {\mathcal {F}} \cup \{f\})\). \(\square \)
Remark
When restricted to \(n = \ell = 3\), this proof is simpler than the one given in [21] for Theorem 6.1 due to our implicit use of a local holographic transformation (i.e., writing f as a linear combination of \({\mathbf {t}}_1', \cdots , {\mathbf {t}}_\ell '\) and expressing \({\text {PlHolant}}(\Omega ; {\mathcal {F}} \cup \{f\})\) in terms of it).
7 Puiseux series, Siegel’s theorem, and Galois theory
We prove our main dichotomy theorem in three stages. This section covers the last stage, which assumes that all succinct binary signatures of type \(\tau _2\) are available. One way we use this assumption is to build the gadget known as a local holographic transformation (see Fig. 11), which is the focus of Sect. 7.1. Then in Sect. 7.2, our efforts are largely spent proving that a certain interpolation succeeds. To that end, we employ Galois theory aided by an effective version of Siegel's theorem for a specific algebraic curve, which is made possible by analyzing Puiseux series expansions.
7.1 Constructing a special ternary signature
We construct one of two special ternary signatures. Either we construct a signature of the form \(\langle a,b,b \rangle \) with \(a \ne b\) and can finish the proof with Corollary 4.19 or we construct \(\langle 3 (\kappa - 1), \kappa - 3, -3 \rangle \). With this latter signature, we can interpolate the weighted Eulerian partition signature.
Lemma 7.1
Proof
The previous proof fails when \({\mathfrak {A}} = 0\) because the set of such signatures is invariant under this type of local holographic transformation. With the exception of a single point, we can use this same gadget construction to reduce between any two of these points.
Lemma 7.2
Proof
If \(x + (\kappa - 1) y = 0\), then \(y = \frac{\sqrt{st}}{\kappa }\). However, plugging this into (9) gives \(\frac{(b-c) [3 s + (\kappa - 3) t]}{\kappa } \ne 0\), so \(x + (\kappa - 1) y\) is indeed nonzero. \(\square \)
Lemma 7.3
Proof
If \(3 b + (\kappa - 3) c = 0\), then up to a nonzero factor of \(-\frac{c}{3}\), \(\langle 3 b - 2 c, b, c \rangle \) is already the desired signature. Otherwise, \(3 b + (\kappa - 3) c \ne 0\). By Lemma 7.2, we have \(\langle 3 s - 2 t, s, t \rangle \) for any \(s,t \in {\mathbb {C}}\) satisfying \(3 s + (\kappa - 3) t \ne 0\).
If \(s = t\), then \(t = \frac{1}{\kappa }\). Plugging this into (10) gives \(1\), so \(s \ne t\). If \(3 s + (\kappa - 3) t = 0\), then \(t = \frac{3}{\kappa }\). Plugging this into (10) gives \(1 - \kappa \ne 0\), so \(3 s + (\kappa - 3) t \ne 0\). \(\square \)
7.2 A dose of an effective Siegel's theorem and Galois theory
It suffices to show that \({\text {PlHolant}}(\langle 3 (\kappa - 1), \kappa - 3, -3 \rangle )\) is \({\#\mathrm {P}}\)-hard for all \(\kappa \ge 3\). The general strategy is to use interpolation. However, proving that this interpolation succeeds presents a significant challenge.
Lemma 7.4
For any integer \(y \ge 1\), the polynomial p(x, y) in x has three distinct real roots and two nonreal complex conjugate roots.
Proof
Therefore, p(x, y) has distinct roots in x for all \(y \ge 1\). Furthermore, with a negative discriminant, p(x, y) has 2s nonreal complex conjugate roots for some odd integer s. Since p(x, y) is a quintic polynomial (in x), the only possibility is \(s = 1\). \(\square \)
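The root count asserted by Lemma 7.4 can be confirmed for a concrete y with a Sturm sequence. The sketch below is our illustration (function names are ours); it counts the distinct real roots of \(q(x) = p(x, 4) = x^5 - 9x^3 - 18x^2 + 12x + 64\) in exact rational arithmetic.

```python
from fractions import Fraction

def rem(a, b):
    # polynomial remainder; coefficient lists, lowest degree first
    a = a[:]
    while len(a) >= len(b):
        if a[-1] == 0:
            a.pop()
            continue
        q = a[-1] / b[-1]
        d = len(a) - len(b)
        for i in range(len(b)):
            a[d + i] -= q * b[i]
        a.pop()
    while a and a[-1] == 0:
        a.pop()
    return a

def sturm_real_roots(p):
    # number of distinct real roots of a squarefree polynomial p,
    # via sign variations of the Sturm chain at -infinity and +infinity
    chain = [p, [i * c for i, c in enumerate(p)][1:]]
    while True:
        r = rem(chain[-2], chain[-1])
        if not r:
            break
        chain.append([-c for c in r])
    def variations(at_minus_inf):
        signs = []
        for q in chain:
            s = 1 if q[-1] > 0 else -1
            if at_minus_inf and (len(q) - 1) % 2 == 1:
                s = -s
            signs.append(s)
        return sum(1 for u, v in zip(signs, signs[1:]) if u != v)
    return variations(True) - variations(False)

# q(x) = p(x, 4) = x^5 - 9x^3 - 18x^2 + 12x + 64, ascending coefficients
q = [Fraction(c) for c in [64, 12, -18, -9, 0, 1]]
assert sturm_real_roots(q) == 3  # three real roots, hence one conjugate pair
```

Since q has degree 5 and three distinct real roots, the remaining two roots form a nonreal complex conjugate pair, as the lemma states.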
We suspect that for any integer \(y \ge 4\), p(x, y) is in fact irreducible over \({\mathbb {Q}}\) as a polynomial in x. When considering y as an indeterminate, the bivariate polynomial p(x, y) is irreducible over \({\mathbb {Q}}\) and the algebraic curve it defines has genus 3, so by Theorem 1.2 in [50], p(x, y) is reducible over \({\mathbb {Q}}\) for at most a finite number of \(y \in {\mathbb {Z}}\). For any integer \(y \ge 4\), if p(x, y) is irreducible over \({\mathbb {Q}}\) as a polynomial in x, then its Galois group is \(S_5\) and its roots satisfy the lattice condition.
Lemma 7.5
For any integer \(y \ge 4\), if p(x, y) is irreducible in \({\mathbb {Q}}[x]\), then the roots of p(x, y) satisfy the lattice condition.
Proof
By Lemma 7.4, p(x, y) has three distinct real roots and two nonreal complex conjugate roots. With three distinct real roots, we know that not all the roots have the same complex norm. It is well known that an irreducible polynomial of prime degree n with exactly two nonreal roots has \(S_n\) as its Galois group over \({\mathbb {Q}}\) (see, for example, Theorem 10.15 in [53]). Then we are done by Lemma 6.5. \(\square \)
It is shown in the next lemma that in fact the five points listed in Sect. 1 are the only integer solutions. In particular, for any integer \(y \ge 4\), p(x, y) does not have a linear factor in \({\mathbb {Z}}[x]\), and hence by Gauss's Lemma, also no linear factor in \({\mathbb {Q}}[x]\). The following proof is essentially due to Aaron Levin [46]. We thank Aaron for suggesting the key auxiliary function \(g_2(x,y) = \frac{y^2}{x} + y - x^2 + 1\), as well as for his permission to include the proof here. We also thank Bjorn Poonen [51], who suggested a similar proof. After the proof, we will explain certain complications that arise in it.
Lemma 7.6
The only integer solutions to \(p(x,y) = 0\) are \((-1,1)\), (0, 0), \((1,-1)\), (1, 2), and (3, 3).
Proof
Clearly these five points are solutions to \(p(x,y) = 0\). For \(a \in {\mathbb {Z}}\) with \(-3< a < 17\), one can directly check that \(p(a,y) = 0\) has no other integer solutions in y.
Now consider the functions \(g_1(x,y) = y - x^2\) and \(g_2(x,y) = \frac{y^2}{x} + y - x^2 + 1\). Whenever \((a,b) \in {\mathbb {Z}}^2\) is a solution to \(p(x,y) = 0\) with \(a \ne 0\), \(g_1(a,b)\) and \(g_2(a,b)\) are integers. However, we show that if \(a \le -3\) or \(a \ge 17\), then either \(g_1(a,b)\) or \(g_2(a,b)\) is not an integer.
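The five solutions themselves are easy to confirm by a direct search over a finite box, using the defining polynomial p from Sect. 1 (the search, of course, does not by itself prove the lemma outside the box):

```python
def p(x, y):
    # the polynomial p(x, y) from Sect. 1
    return (x**5 - 2 * x**3 * y - x**2 * y**2 - x**3
            + x * y**2 + y**3 - 2 * x**2 - x * y)

# all integer zeros of p in the box [-50, 50]^2
solutions = sorted((a, b) for a in range(-50, 51)
                   for b in range(-50, 51) if p(a, b) == 0)
assert solutions == [(-1, 1), (0, 0), (1, -1), (1, 2), (3, 3)]
```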
Remark
We note that the expressions for the \(y_i^+(x)\) and \(y_i^-(x)\) are the truncated or rounded Puiseux series expansions. The reason we discuss \(y_i^+(x)\) and \(y_i^-(x)\) is that we want to prove an absolute bound, instead of the asymptotic bound implied by the O-notation.
By Lemma 7.6, if p(x, y) is reducible over \({\mathbb {Q}}\) as a polynomial in x for some integer \(y \ge 4\), then the only way it can factor is as a product of an irreducible quadratic and an irreducible cubic. The next lemma handles this possibility.
Lemma 7.7
For any integer \(y_0 \ge 4\), if \(p(x,y_0)\) is reducible over \({\mathbb {Q}}\), then the roots of \(p(x,y_0)\) satisfy the lattice condition.
Proof
Let \(q(x) = p(x,y_0)\) for a fixed integer \(y_0 \ge 4\). Suppose that \(q(x) = f(x) g(x)\), where \(f(x), g(x) \in {\mathbb {Q}}[x]\) are monic polynomials of degree at least 1. By Lemma 7.6, the degree of each factor must be at least 2. Then without loss of generality, let f(x) and g(x) be quadratic and cubic polynomials, respectively, both of which are irreducible over \({\mathbb {Q}}\). By Gauss’ Lemma, we can further assume \(f(x), g(x) \in {\mathbb {Z}}[x]\).
We first show that if \(i=j\) and \(k=m=n\), then \(i=j=k=m=n=0\). By (12), we have \((\alpha \beta )^i = (\gamma \delta \epsilon )^k\) and \(2 i = 3 k\). Suppose \(i \ne 0\); then also \(k \ne 0\). We can write \(i = 3 t\) and \(k = 2 t\) for some nonzero \(t \in {\mathbb {Z}}\). Let \(A = \alpha \beta \) and \(B = \gamma \delta \epsilon \). Then, both A and B are integers and \(A B = -y_0^3\). From \(A^{3t} = B^{2t}\), we have \(A^3 = \pm B^2\). Then \(y_0^6 = A^2 B^2 = \pm A^5\), and since \(y_0 > 3\), there is an integer \(s > 1\) such that \(y_0 = s^5\). This implies \(A = \pm s^6\) and \(B = \mp s^9\). Then \(f(x) = x^2 + c_1 x \pm s^6\), \(g(x) = x^3 + c_2' x^2 + c_1' x \pm s^9\), and \(q(x) = x^5 - (2 s^5 + 1) x^3 - (s^{10} + 2) x^2 + s^5 (s^5 - 1) x + s^{15}\). We consider the coefficient of x in \(q(x) = f(x) g(x)\). This gives \(s^{10} - s^5 = \pm c_1' s^6 \pm c_1 s^9\). Since \(s > 1\), there is a prime p such that \(p^u \mid s\) and \(p^{u+1} \nmid s\), for some \(u \ge 1\). But then \(p^{6u}\) divides \(s^{10} - s^5 = s^5 (s^5 - 1)\), and since p is coprime to \(s^5 - 1\), it follows that \(p^{6u} \mid s^5\), contradicting \(p^{u+1} \nmid s\). Hence, \(i=j\) and \(k=m=n\) imply \(i=j=k=m=n=0\).
Now we claim that \(\omega = \alpha / \beta \) is not a root of unity. For a contradiction, suppose that \(\omega \) is a primitive dth root of unity. Since \(\omega \in {\mathbb {Q}}_f\), which is a degree 2 extension over \({\mathbb {Q}}\), we have \(\phi (d) \mid 2\), where \(\phi (\cdot )\) is Euler's totient function. Hence, \(d \in \{1,2,3,4,6\}\). The quadratic polynomial f(x) has the form \(x^2 - (1 + \omega ) \beta x + \omega \beta ^2 \in {\mathbb {Z}}[x]\). Hence, the ratio of these two integer coefficients \(r = \frac{1 + \omega }{\omega \beta } \in {\mathbb {Q}}\). We prove the claim separately according to whether \(r=0\) or not.
If \(r = 0\), then \(\omega = -1\) and \(d = 2\). In this case, f(x) has the form \(x^2 + a\) for some \(a \in {\mathbb {Z}}\). It is easy to check that q(x) has no such polynomial factor in \({\mathbb {Z}}[x]\) unless \(y_0 = 0\). In fact, suppose \(x^2 + a \mid q(x)\) in \({\mathbb {Z}}[x]\). Then \(q(x) = (x^2 + a)(x^3 + bx + c)\) since the coefficient of \(x^4\) in q(x) is 0. Also \(a + b = -(2 y_0 + 1)\), \(c = -(y_0^2 + 2)\), \(a b = y_0 (y_0 - 1)\) and \(a c = y_0^3\). It follows that a and b are the two roots of the quadratic polynomial \(X^2 + (2 y_0 + 1) X + y_0^2 - y_0 \in {\mathbb {Z}}[X]\). Since \(a, b \in {\mathbb {Z}}\), the discriminant \(8 y_0 + 1\) must be a perfect square, and in fact an odd perfect square \((2 z - 1)^2\) for some \(z \in {\mathbb {Z}}\). Thus, \(y_0 = z (z - 1) / 2\). By the quadratic formula, \(a = -y_0 + z - 1\) or \(-y_0 - z\). On the other hand, \(a = a c / c = -y_0^3 / (y_0^2 + 2)\). In both cases, this leads to a polynomial in z in \({\mathbb {Z}}[z]\) that has no integer solutions other than \(z = 0\), which gives \(y_0 = 0\).
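The claim that q(x) has no factor \(x^2 + a\) for an integer \(y_0 \ge 1\) can also be checked mechanically. Reducing q(x) modulo \(x^2 + a\) via \(x^2 \equiv -a\) leaves the remainder \([a^2 + (2y+1)a + y^2 - y]\,x + [a(y^2+2) + y^3]\), so an integer factor \(x^2 + a\) requires both brackets to vanish; the sketch below (our derivation, not from the paper) tests this.

```python
def has_x2_plus_a_factor(y):
    # Remainder of q(x) = p(x, y) modulo x^2 + a (using x^2 = -a) is
    #   [a^2 + (2y+1)a + y^2 - y] * x + [a(y^2+2) + y^3],
    # so an integer factor x^2 + a requires both brackets to vanish.
    if y**3 % (y**2 + 2) != 0:
        return False  # a = -y^3/(y^2+2) would not be an integer
    a = -(y**3 // (y**2 + 2))
    return a * a + (2 * y + 1) * a + y * y - y == 0

# no integer y >= 1 admits a factor x^2 + a, matching the text
assert not any(has_x2_plus_a_factor(y) for y in range(1, 10000))
assert has_x2_plus_a_factor(0)  # q(x) = x^2 (x^3 - x - 2) when y = 0
```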
Now suppose \(r \ne 0\). Plugging r back in f(x), we have \(f(x) = x^2 - (2 + \omega + \omega ^{-1}) r^{-1} x + (2 + \omega + \omega ^{-1}) r^{-2}\). The quantity \(2 + \omega + \omega ^{-1} = 4,1,2,3\) when \(d = 1,3,4,6\), respectively. Since \((2 + \omega + \omega ^{-1}) r^{-2} \in {\mathbb {Z}}\), the rational number \(r^{-1}\) must be an integer when \(d = 3,4,6\) and half an integer when \(d = 1\). In all cases, it is easy to check that a polynomial f(x) of the specified form does not divide q(x) unless \(y_0 = 0\) or \(y_0 = 1\). Thus, we have proved the claim that \(\omega = \alpha / \beta \) is not a root of unity.
Next consider the case that f(x) is irreducible over \({\mathbb {Q}}_g\). Let E be the splitting field of f over \({\mathbb {Q}}_g\). Then, \([E:{\mathbb {Q}}_g] = 2\). Therefore, there exists an automorphism \(\tau \in {\text {Gal}}(E / {\mathbb {Q}}_g)\) that swaps \(\alpha \) and \(\beta \) but fixes \({\mathbb {Q}}_g\) and thus fixes \(\gamma ,\delta ,\epsilon \) pointwise. By applying \(\tau \) to (12), we have \(\alpha ^j \beta ^i = \gamma ^k \delta ^m \epsilon ^n\). Dividing by (12) gives \((\alpha / \beta )^{j-i} = 1\). Since \(\alpha / \beta \) is not a root of unity, we get \(i=j\). Hence, we have \((\alpha \beta )^i = \gamma ^k \delta ^m \epsilon ^n\). The order of \({\text {Gal}}({\mathbb {Q}}_g / {\mathbb {Q}})\) is \([{\mathbb {Q}}_g:{\mathbb {Q}}]\), which is divisible by 3. Thus, \({\text {Gal}}({\mathbb {Q}}_g / {\mathbb {Q}}) \subseteq S_3\) contains an element of order 3, which must act as a 3-cycle on \(\gamma ,\delta ,\epsilon \). Since \(\alpha \beta \in {\mathbb {Q}}\), applying this cyclic permutation gives \((\alpha \beta )^i = \gamma ^m \delta ^n \epsilon ^k\). Therefore, \(\gamma ^{k-m} \delta ^{m-n} \epsilon ^{n-k} = 1\). Notice that \((k-m) + (m-n) + (n-k) = 0\).
It can be directly checked that q(x) is not divisible by any \(x^3 + c \in {\mathbb {Z}}[x]\), and therefore by Lemma 6.4, the roots \(\gamma , \delta , \epsilon \) of the cubic polynomial g(x) satisfy the lattice condition. Therefore, \(k=m=n\). Again, we have shown that \(i=j\) and \(k=m=n\) imply \(i=j=k=m=n=0\).
The last case is when f(x) splits in \({\mathbb {Q}}_g[x]\). Then \({\mathbb {Q}}_f\) is a subfield of \({\mathbb {Q}}_g\), and \(2 = [{\mathbb {Q}}_f: {\mathbb {Q}}] \mid [{\mathbb {Q}}_g: {\mathbb {Q}}]\). Therefore, \([{\mathbb {Q}}_g:{\mathbb {Q}}] = 6\) and \({\text {Gal}}({\mathbb {Q}}_g / {\mathbb {Q}}) = S_3\). Since \({\mathbb {Q}}_f\) is normal over \({\mathbb {Q}}\), being a splitting field of a separable polynomial in characteristic 0, by the fundamental theorem of Galois theory, the corresponding subgroup for \({\mathbb {Q}}_f\) is \({\text {Gal}}({\mathbb {Q}}_g / {\mathbb {Q}}_f)\), which is a normal subgroup of \(S_3\) with index 2. Such a subgroup of \(S_3\) is unique, namely \(A_3\). In particular, the transposition \(\tau '\) that swaps \(\gamma \) and \(\delta \) but fixes \(\epsilon \) is an element in \({\text {Gal}}({\mathbb {Q}}_g / {\mathbb {Q}}) = S_3\) but not in \({\text {Gal}}({\mathbb {Q}}_g / {\mathbb {Q}}_f) = A_3\). This transposition must fix \(\alpha \) and \(\beta \) setwise but not pointwise. Hence, it must swap \(\alpha \) and \(\beta \).
By applying \(\tau '\) to (12), we have \(\alpha ^j \beta ^i = \gamma ^m \delta ^k \epsilon ^n\). Then dividing these two equations gives \((\alpha / \beta )^{i-j} = (\delta / \gamma )^{m-k}\). Similarly, by considering the transposition that switches \(\gamma \) and \(\epsilon \) and fixes \(\delta \), we get \((\alpha / \beta )^{i-j} = (\gamma / \epsilon )^{k-n}\). By combining these two equations, we have \(\gamma ^{n-m} \delta ^{m-k} \epsilon ^{k-n} = 1\). Note that \((n-m) + (m-k) + (k-n) = 0\).
As we noted above, the roots of the irreducible g(x) satisfy the lattice condition, so we conclude that \(k=m=n\). From \((\alpha / \beta )^{i-j} = (\delta / \gamma )^{m-k} = 1\), we get \(i=j\) since \(\alpha / \beta \) is not a root of unity. We conclude that \(i=j=k=m=n=0\), so the roots of q(x) satisfy the lattice condition. \(\square \)
Even though \(p(x,3) = (x - 3) (x^4 + 3 x^3 + 2 x^2 - 5 x - 9)\) is reducible, its roots still satisfy the lattice condition. To show this, we utilize three results: Theorem 7.8, Lemma 7.9, and Lemma 7.10.
The first is a wellknown theorem of Dedekind.
Theorem 7.8
(Theorem 4.37 in [40]) Suppose \(f(x) \in {\mathbb {Z}}[x]\) is a monic polynomial of degree n. For a prime p, let \(f_p(x)\) be the corresponding polynomial in \({\mathbb {Z}}_p[x]\). If \(f_p(x)\) has distinct roots and factors over \({\mathbb {Z}}_p[x]\) as a product of irreducible factors with degrees \(d_1, d_2, \cdots , d_r\), then the Galois group of f over \({\mathbb {Q}}\) contains an element with cycle type \((d_1, d_2, \cdots , d_r)\).
With the second result, we can show that \(x^4 + 3 x^3 + 2 x^2 - 5 x - 9\) has Galois group \(S_4\) over \({\mathbb {Q}}\).
Lemma 7.9
(Lemma on page 98 in [33]) For \(n \ge 2\), let G be a subgroup of \(S_n\). If G is transitive, contains a transposition and contains a pcycle for some prime \(p > n / 2\), then \(G = S_n\).
In the contrapositive, the third result shows that the roots of \(x^4 + 3 x^3 + 2 x^2 - 5 x - 9\) do not all have the same complex norm.
Lemma 7.10
(Lemma D.2 in [17]) If all roots of \(x^4 + a_3 x^3 + a_2 x^2 + a_1 x + a_0 \in {\mathbb {C}}[x]\) have the same complex norm, then \(a_2 a_1^2 = a_3^2 \overline{a_2} a_0\).
Theorem 7.11
The roots of \(p(x,3) = (x - 3) (x^4 + 3 x^3 + 2 x^2 - 5 x - 9)\) satisfy the lattice condition.
Proof
Let \(f(x) = x^4 + 3 x^3 + 2 x^2 - 5 x - 9\) and let \(G_f\) be the Galois group of f over \({\mathbb {Q}}\). We claim that \(G_f = S_4\). As a polynomial over \({\mathbb {Z}}_5\), \(f(x) \equiv x^4 + 3 x^3 + 2 x^2 + 1\) is irreducible, so f(x) is also irreducible over \({\mathbb {Z}}\). By Gauss's Lemma, this implies irreducibility over \({\mathbb {Q}}\). Over \({\mathbb {Z}}_{13}\), f(x) factors into the product of irreducibles \((x^2 + 7) (x + 6) (x + 10)\) and clearly has distinct roots, so by Theorem 7.8, \(G_f\) contains a transposition. Over \({\mathbb {Z}}_3\), f(x) factors into the product of irreducibles \(x (x^3 + 2 x + 1)\) and has distinct roots because its discriminant is \(1 \not \equiv 0 \pmod {3}\), so by Theorem 7.8, \(G_f\) contains a 3-cycle. Then by Lemma 7.9, \(G_f = S_4\).
Thus, it suffices to show that the roots of f(x) satisfy the lattice condition. By the contrapositive of Lemma 7.10, the roots of f(x) do not all have the same complex norm. Then we are done by Lemma 6.5. \(\square \)
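The factorizations used in the proof are straightforward to verify by multiplying out modulo the respective primes; the sketch below (ours, not from the paper) does so, and also checks the necessary condition that f has no root modulo 5.

```python
def polymul_mod(a, b, p):
    # multiply polynomials (ascending coefficient lists) modulo p
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % p
    return out

f = [-9, -5, 2, 3, 1]  # x^4 + 3x^3 + 2x^2 - 5x - 9, lowest degree first

# mod 13: f = (x^2 + 7)(x + 6)(x + 10)
prod13 = polymul_mod(polymul_mod([7, 0, 1], [6, 1], 13), [10, 1], 13)
assert prod13 == [c % 13 for c in f]

# mod 3: f = x (x^3 + 2x + 1)
prod3 = polymul_mod([0, 1], [1, 2, 0, 1], 3)
assert prod3 == [c % 3 for c in f]

# necessary (not sufficient) for irreducibility mod 5: f has no root mod 5
assert all(sum(c * x**i for i, c in enumerate(f)) % 5 != 0 for x in range(5))
```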
From Lemma 7.5, Lemma 7.7, and Theorem 7.11, we obtain the following theorem.
Theorem 7.12
For any integer \(y_0 \ge 3\), the roots of \(p(x,y_0)\) satisfy the lattice condition.
We use Theorem 7.12 to prove Lemma 7.14. We note that the succinct signature type \(\tau _4\) is a refinement of \(\tau _{\text {color}}\), so any succinct signature of type \(\tau _{\text {color}}\) can also be expressed as a succinct signature of type \(\tau _4\). In particular, the succinct signature \(\langle 2,1,0,1,0 \rangle \) of type \(\tau _{\text {color}}\) is written as \(\langle 2,0,1,0,0,0,1,0,0 \rangle \) of type \(\tau _4\). Then the following is a restatement of Corollary 4.7.
Corollary 7.13
Suppose \(\kappa \ge 3\) is the domain size. Let \(\langle 2,0,1,0,0,0,1,0,0 \rangle \) be a succinct quaternary signature of type \(\tau _4\). Then \({\text {PlHolant}}(\langle 2,0,1,0,0,0,1,0,0 \rangle )\) is \({\#\mathrm {P}}\)-hard.
Lemma 7.14
Suppose \(\kappa \ge 4\) is the domain size. Then \({\text {PlHolant}}(\langle 3 (\kappa - 1), \kappa - 3, -3 \rangle )\) is \({\#\mathrm {P}}\)-hard.
Proof
Let \(\langle 2,0,1,0,0,0,1,0,0 \rangle \) be a succinct quaternary signature of type \(\tau _4\). We reduce from \({\text {PlHolant}}(\langle 2,0,1,0,0,0,1,0,0 \rangle )\), which is \({\#\mathrm {P}}\)-hard by Corollary 7.13.
Recurrence matrix for the recursive construction in the proof of Lemma 7.14
\(\left[ \begin{matrix} (\kappa -1) \left( \kappa ^2+9 \kappa -9\right) &{}\quad 12 (\kappa -3) (\kappa -1)^2 &{}\quad (\kappa -3)^2 (\kappa -1) &{}\quad 2 (\kappa -3)^2 (\kappa -2) (\kappa -1) &{}\quad (\kappa -3)^2 (\kappa -1) &{}\quad 2 (\kappa -3)^2 (\kappa -2) (\kappa -1) &{}\quad (\kappa -1) (2 \kappa -3) (4 \kappa -3) &{}\quad 6 (\kappa -3) (\kappa -2) (\kappa -1)^2 &{}\quad (\kappa -3)^3 (\kappa -2) (\kappa -1) \\
3 (\kappa -3) (\kappa -1) &{}\quad 3 \kappa ^3-28 \kappa ^2+60 \kappa -36 &{}\quad (\kappa -3) (2 \kappa -3) &{}\quad 2 (\kappa -3) (\kappa -2) (2 \kappa -3) &{}\quad (\kappa -3) (2 \kappa -3) &{}\quad 2 (\kappa -3) (\kappa -2) (2 \kappa -3) &{}\quad 3 (\kappa -3) (\kappa -1)^2 &{}\quad (\kappa -2) \left( \kappa ^3-14 \kappa ^2+30 \kappa -18\right) &{}\quad (\kappa -3)^2 (\kappa -2) (2 \kappa -3) \\
(2 \kappa -3) (4 \kappa -3) &{}\quad 12 (\kappa -3) (\kappa -1)^2 &{}\quad (\kappa -3)^2 (\kappa -1) &{}\quad 2 (\kappa -3)^2 (\kappa -2) (\kappa -1) &{}\quad (\kappa -3)^2 (\kappa -1) &{}\quad 2 (\kappa -3)^2 (\kappa -2) (\kappa -1) &{}\quad 9 \kappa ^3-26 \kappa ^2+27 \kappa -9 &{}\quad 6 (\kappa -3) (\kappa -2) (\kappa -1)^2 &{}\quad (\kappa -3)^3 (\kappa -2) (\kappa -1) \\
3 (\kappa -3) (\kappa -1) &{}\quad 2 \left( \kappa ^3-14 \kappa ^2+30 \kappa -18\right) &{}\quad (\kappa -3) (2 \kappa -3) &{}\quad 2 (\kappa -3) (\kappa -2) (2 \kappa -3) &{}\quad (\kappa -3) (2 \kappa -3) &{}\quad 2 (\kappa -3) (\kappa -2) (2 \kappa -3) &{}\quad 3 (\kappa -3) (\kappa -1)^2 &{}\quad (\kappa -3) \left( \kappa ^3-12 \kappa ^2+22 \kappa -12\right) &{}\quad (\kappa -3)^2 (\kappa -2) (2 \kappa -3) \\
(\kappa -3)^2 &{}\quad 4 (\kappa -3) (2 \kappa -3) &{}\quad 3 (\kappa -3) &{}\quad 6 (\kappa -3) (\kappa -2) &{}\quad \kappa ^3+3 \kappa -9 &{}\quad 6 (\kappa -3) (\kappa -2) &{}\quad (\kappa -3)^2 (\kappa -1) &{}\quad 2 (\kappa -3) (\kappa -2) (2 \kappa -3) &{}\quad 3 (\kappa -3)^2 (\kappa -2) \\
(\kappa -3)^2 &{}\quad 4 (\kappa -3) (2 \kappa -3) &{}\quad 3 (\kappa -3) &{}\quad 6 (\kappa -3) (\kappa -2) &{}\quad 3 (\kappa -3) &{}\quad \kappa ^3+6 \kappa ^2-30 \kappa +36 &{}\quad (\kappa -3)^2 (\kappa -1) &{}\quad 2 (\kappa -3) (\kappa -2) (2 \kappa -3) &{}\quad 3 (\kappa -3)^2 (\kappa -2) \\
(\kappa -3)^2 &{}\quad 4 (\kappa -3) (2 \kappa -3) &{}\quad \kappa ^3+3 \kappa -9 &{}\quad 6 (\kappa -3) (\kappa -2) &{}\quad 3 (\kappa -3) &{}\quad 6 (\kappa -3) (\kappa -2) &{}\quad (\kappa -3)^2 (\kappa -1) &{}\quad 2 (\kappa -3) (\kappa -2) (2 \kappa -3) &{}\quad 3 (\kappa -3)^2 (\kappa -2) \\
(\kappa -3)^2 &{}\quad 4 (\kappa -3) (2 \kappa -3) &{}\quad 3 (\kappa -3) &{}\quad \kappa ^3+6 \kappa ^2-30 \kappa +36 &{}\quad 3 (\kappa -3) &{}\quad 6 (\kappa -3) (\kappa -2) &{}\quad (\kappa -3)^2 (\kappa -1) &{}\quad 2 (\kappa -3) (\kappa -2) (2 \kappa -3) &{}\quad 3 (\kappa -3)^2 (\kappa -2) \\
(\kappa -3)^2 &{}\quad 4 (\kappa -3) (2 \kappa -3) &{}\quad 3 (\kappa -3) &{}\quad 6 (\kappa -3) (\kappa -2) &{}\quad 3 (\kappa -3) &{}\quad 6 (\kappa -3) (\kappa -2) &{}\quad (\kappa -3)^2 (\kappa -1) &{}\quad 2 (\kappa -3) (\kappa -2) (2 \kappa -3) &{}\quad (2 \kappa -3) \left( 2 \kappa ^2-9 \kappa +18\right) \end{matrix} \right] \)
Therefore, by Lemma 6.6, we can interpolate \(\langle 2, 0, 1, 0, 0, 0, 1, 0, 0 \rangle \), which completes the proof. \(\square \)
Lemma 7.15
Suppose the domain size is 3. Then \({\text {PlHolant}}(\langle 2,0,-1 \rangle )\) is \({\#\mathrm {P}}\)-hard.
Proof
Let \(g = \langle 2,0,1,0,0,0,1,0 \rangle \) be a succinct quaternary signature of type \(\tau _4\). We reduce from \({\text {PlHolant}}(g)\), which is \({\#\mathrm {P}}\)-hard by Corollary 7.13.
Consider the gadget in Fig. 15. The vertices are assigned \(\langle 2,0,-1 \rangle \). Up to a factor of 9, the signature of this gadget is g, as desired. \(\square \)
We summarize this section with the following result. With all succinct binary signatures of type \(\tau _2\) available, as well as the succinct unary signature \(\langle 1 \rangle \) of type \(\tau _1\), \({\text {PlHolant}}\) of any succinct ternary signature \(\langle a,b,c \rangle \) of type \(\tau _3\) satisfying \({\mathfrak {B}} \ne 0\) is \({\#\mathrm {P}}\)-hard.
Lemma 7.16
Suppose \(\kappa \ge 3\) is the domain size and \(a,b,c \in {\mathbb {C}}\). Let \({\mathcal {F}}\) be a signature set containing the succinct ternary signature \(\langle a, b, c \rangle \) of type \(\tau _3\), the succinct unary signature \(\langle 1 \rangle \) of type \(\tau _1\), and the succinct binary signature \(\langle x, y \rangle \) of type \(\tau _2\) for all \(x,y \in {\mathbb {C}}\). If \({\mathfrak {B}} \ne 0\), then \({\text {PlHolant}}({\mathcal {F}})\) is \({\#\mathrm {P}}\)-hard.
Proof
Suppose \({\mathfrak {A}} \ne 0\). By Lemma 7.1, we have a succinct ternary signature \(\langle a', b', b' \rangle \) of type \(\tau _3\) with \(a' \ne b'\). Then we are done by Corollary 4.19.
Otherwise, \({\mathfrak {A}} = 0\). Since \({\mathfrak {B}} \ne 0\), we have \(b \ne c\). By Lemma 7.3, we have \(\langle 3 (\kappa - 1), \kappa - 3, -3 \rangle \). If \(\kappa \ge 4\), then we are done by Lemma 7.14. Otherwise, \(\kappa = 3\) and we are done by Lemma 7.15. \(\square \)
8 Constructing a nonzero unary signature
The primary goal of this section is to construct the succinct unary signature \(\langle 1 \rangle \) of type \(\tau _1\). However, this is not always possible. For example, the succinct ternary signature \(\langle 0,0,1 \rangle = {\text {AD}}_{3,3}\) of type \(\tau _3\) (on domain size 3) cannot construct \(\langle 1 \rangle \). This follows from the parity condition (Lemma 4.4). In such cases, we show that the problem is either computable in polynomial time or \({\#\mathrm {P}}\)-hard without the help of additional signatures.
Lemma 8.1
Proof
Suppose \(a + (\kappa - 1) b \ne 0\). Consider the gadget in Fig. 16a. We assign \(\langle a,b,c \rangle \) to its vertex. By Lemma 11.1, this gadget has the succinct unary signature \(\langle u \rangle \) of type \(\tau _1\), where \(u = a + (\kappa - 1) b\). Since \(u \ne 0\), this signature is equivalent to \(\langle 1 \rangle \).
Otherwise, \(a + (\kappa - 1) b = 0\), and \([2 b + (\kappa - 2) c] [b^2 - 4 b c - (\kappa - 3) c^2] \ne 0\). Consider the gadget in Fig. 16b. We assign \(\langle a,b,c \rangle \) to all three vertices. By Lemma 11.1, this gadget has the succinct unary signature \(\langle u' \rangle \) of type \(\tau _1\), where \(u' = (\kappa - 1) (\kappa - 2) [2 b + (\kappa - 2) c] [b^2 - 4 b c - (\kappa - 3) c^2]\). Since \(u' \ne 0\), this signature is equivalent to \(\langle 1 \rangle \). \(\square \)
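Assuming the gadget in Fig. 16a forms a self-loop on two of the three edges, i.e., \(u(x) = \sum _y f(x,y,y)\), the stated value \(u = a + (\kappa - 1) b\) can be checked directly; the sketch and its function names below are ours, not from the paper.

```python
def ternary(a, b, c, k):
    # domain-invariant ternary signature on domain {0, ..., k-1}:
    # value a if all three inputs agree, b if exactly two agree, c otherwise
    def f(x, y, z):
        if x == y == z:
            return a
        if x == y or y == z or x == z:
            return b
        return c
    return f

def self_loop_unary(f, k):
    # joining two of the three edges (as in Fig. 16a): u(x) = sum_y f(x, y, y)
    return [sum(f(x, y, y) for y in range(k)) for x in range(k)]

a, b, c, k = 5, 2, -1, 4
u = self_loop_unary(ternary(a, b, c, k), k)
assert u == [a + (k - 1) * b] * k  # matches u = a + (kappa - 1) b
```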
One of the failure conditions of Lemma 8.1 is when both \(a + (\kappa - 1) b = 0\) and \(b^2 - 4 b c - (\kappa - 3) c^2 = 0\) hold. In this case, \(\langle a,b,c \rangle = c \langle -(\kappa - 1) (2 \pm \sqrt{\kappa + 1}), 2 \pm \sqrt{\kappa + 1}, 1 \rangle \). If \(c = 0\), then \(a = b = c = 0\) and the signature is trivial. Otherwise, \(c \ne 0\). Then up to a nonzero factor of c, this signature further simplifies to \({\text {AD}}_{3,3}\) by taking the minus sign when \(\kappa = 3\). Just like \({\text {AD}}_{3,3}\), we show (in Lemma 8.2) that all of these signatures are \({\#\mathrm {P}}\)-hard.
Lemma 8.2
Proof
If \(c = 0\), then \(a = b = c = 0\) so the output is always 0. Otherwise, \(c \ne 0\). Up to a nonzero factor of c, \(\langle a,b,c \rangle \) can be written as \(\langle -(\kappa - 1) (2 + \varepsilon \sqrt{\kappa + 1}), 2 + \varepsilon \sqrt{\kappa + 1}, 1 \rangle \) under the given assumptions, where \(\varepsilon = \pm 1\).
The recurrence matrix M, up to a factor of \((\gamma + 1)\), for the recursive construction in the proof of Lemma 8.2
\(\left[ \begin{matrix} (\kappa -1) (\gamma -3) \gamma ^2 &{}\quad 2 (\kappa -2) (\kappa -1) \gamma &{}\quad (\kappa -1) (3 \gamma -1) &{}\quad 2 (\kappa -2) (\kappa -1) \gamma &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\
0 &{}\quad \kappa ^2 (\gamma +1) - 4 \kappa \gamma + 2 (\gamma +1) &{}\quad 0 &{}\quad (\kappa -2) (3 \gamma -1) &{}\quad (\kappa -2) \gamma &{}\quad (\kappa -4) (\kappa -2) \gamma &{}\quad (\kappa -2) \gamma &{}\quad (\kappa -4) (\kappa -2) \gamma &{}\quad 2 (\kappa -2) (\gamma -4) \gamma ^2 \\
3 \gamma -1 &{}\quad 2 (\kappa -2) \gamma &{}\quad \kappa ^2 (\gamma +1) + \kappa (3 \gamma -5) - 7 \gamma + 5 &{}\quad 2 (\kappa -2) \gamma &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\
0 &{}\quad 2 (3 \gamma -1) &{}\quad 0 &{}\quad (\kappa -2) \gamma (\kappa +\gamma +1) &{}\quad 2 \gamma &{}\quad 2 (\kappa -4) \gamma &{}\quad 2 \gamma &{}\quad 2 (\kappa -4) \gamma &{}\quad 4 (\gamma -4) \gamma ^2 \\
0 &{}\quad 2 (\kappa -2) \gamma &{}\quad 0 &{}\quad 2 (\kappa -2) \gamma &{}\quad (\gamma -3) \gamma ^2 &{}\quad 4 (\kappa -2) \gamma &{}\quad 3 \gamma -1 &{}\quad 4 (\kappa -2) \gamma &{}\quad (\kappa -2) (\gamma -4) \gamma (\gamma +1) \\
0 &{}\quad (\kappa -4) \gamma &{}\quad 0 &{}\quad (\kappa -4) \gamma &{}\quad 2 \gamma &{}\quad 2 (\kappa -4) \gamma &{}\quad 2 \gamma &{}\quad \kappa (3 \gamma +1) - 4 (\gamma +1) &{}\quad (\gamma -4) \gamma (\gamma \kappa +\kappa -4) \\
0 &{}\quad 2 (\kappa -2) \gamma &{}\quad 0 &{}\quad 2 (\kappa -2) \gamma &{}\quad 3 \gamma -1 &{}\quad 4 (\kappa -2) \gamma &{}\quad (\gamma -3) \gamma ^2 &{}\quad 4 (\kappa -2) \gamma &{}\quad (\kappa -2) (\gamma -4) \gamma (\gamma +1) \\
0 &{}\quad (\kappa -4) \gamma &{}\quad 0 &{}\quad (\kappa -4) \gamma &{}\quad 2 \gamma &{}\quad \kappa (3 \gamma +1) - 4 (\gamma +1) &{}\quad 2 \gamma &{}\quad 2 (\kappa -4) \gamma &{}\quad (\gamma -4) \gamma (\gamma \kappa +\kappa -4) \\
0 &{}\quad 4 \gamma &{}\quad 0 &{}\quad 4 \gamma &{}\quad \gamma +1 &{}\quad 2 (\gamma \kappa +\kappa -4) &{}\quad \gamma +1 &{}\quad 2 (\gamma \kappa +\kappa -4) &{}\quad \kappa ^2 (\gamma +1) - 2 (\gamma +5) - 2 (5 \gamma -11) \end{matrix} \right] \)
The matrix P whose rows are the row eigenvectors of the matrix in Table 2
\( \left[ \begin{matrix} 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 1 &{}\quad 2 &{}\quad 0 &{}\quad 0 &{}\quad 1 \\
0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 1 &{}\quad 0 &{}\quad 1 &{}\quad 0 \\
0 &{}\quad 1 &{}\quad 0 &{}\quad 1 &{}\quad (\gamma -3) \gamma &{}\quad (\gamma -3) \gamma &{}\quad 0 &{}\quad 0 &{}\quad 0 \\
0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 1 &{}\quad 0 &{}\quad 1 &{}\quad 0 &{}\quad 0 \\
0 &{}\quad (\kappa -2) \gamma &{}\quad 0 &{}\quad (\kappa -2) \gamma &{}\quad 0 &{}\quad (\kappa -2) (\gamma -1) &{}\quad 0 &{}\quad (\kappa -2) (\gamma -1) &{}\quad (\kappa -2) (\gamma -4) (\gamma -1) \gamma \\
0 &{}\quad (\kappa -2) \gamma &{}\quad 0 &{}\quad (\kappa -2) \gamma &{}\quad \gamma -1 &{}\quad (\kappa -2) (\gamma -1) &{}\quad \gamma -1 &{}\quad (\kappa -2) (\gamma -1) &{}\quad 0 \\
0 &{}\quad 2 &{}\quad 0 &{}\quad \kappa -2 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\
1 &{}\quad 0 &{}\quad \kappa -1 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\
(\gamma -3) \gamma &{}\quad \kappa ^2 + \kappa (2 \gamma -7) - 2 (\gamma -5) &{}\quad (\gamma -3) \gamma &{}\quad \kappa ^2 - \kappa (2 \gamma -7) + 2 (\gamma -5) &{}\quad (\gamma -3) \gamma &{}\quad (\kappa -4) (\gamma -3) \gamma &{}\quad (\gamma -3) \gamma &{}\quad (\kappa -4) (\gamma -3) \gamma &{}\quad 2 (\gamma -4) (\gamma -3) \gamma ^2 \end{matrix} \right] \)
Remark
Although the matrices in Table 2 and Table 3 seem large, they are probably the smallest that suffice for this recursive quaternary construction. In fact, for quaternary signatures one would normally expect these matrices to be even larger, since a domain-invariant signature of arity 4 typically has fifteen distinct entries.
The other failure condition of Lemma 8.1 is when both \(a + (\kappa - 1) b = 0\) and \(2 b + (\kappa - 2) c = 0\) hold. In this case, \(\langle a,b,c \rangle = \frac{c}{2} \langle (\kappa - 1) (\kappa - 2), -(\kappa - 2), 2 \rangle \). If this signature is connected to \(\langle 1 \rangle \), then the first entry of the resulting succinct binary signature of type \(\tau _2\) is \((\kappa - 1) (\kappa - 2) \cdot 1 - (\kappa - 2) \cdot (\kappa - 1) = 0\) while the second entry is \(-(\kappa - 2) \cdot 2 + 2 \cdot (\kappa - 2) = 0\). That is, the resulting binary signature is identically 0. This suggests we apply a holographic transformation such that the support of the resulting signature is only on \(\kappa - 1\) of the domain elements.
If \(c = 0\), then \(a = b = c = 0\) and the signature is trivial. Otherwise, \(c \ne 0\). If \(\kappa = 3\), then up to a nonzero factor of c, this signature further simplifies to \(\langle 2,-1,2 \rangle \), which is tractable by case 3 of Corollary 5.2. Otherwise, \(\kappa \ge 4\), and we show the problem is \({\#\mathrm {P}}\)-hard.
Lemma 8.3
Suppose \(\kappa \ge 4\) is the domain size. Let \(f = \langle (\kappa - 1) (\kappa - 2), -(\kappa - 2), 2 \rangle \) be a succinct ternary signature of type \(\tau _3\). Then \({\text {PlHolant}}(f)\) is \({\#\mathrm {P}}\)-hard.
Proof
Consider the matrix \(T = \left[ {\begin{matrix} 1 &{} {\mathbf {1}} \\ {\mathbf {1}} &{} T' \end{matrix}}\right] \in {\mathbb {C}}^{\kappa \times \kappa }\), where \(T' = y J_{\kappa - 1} + (x - y) I_{\kappa - 1}\) with \(x = -\frac{\kappa + \sqrt{\kappa } - 1}{\sqrt{\kappa } + 1}\) and \(y = \frac{1}{\sqrt{\kappa } + 1}\). After scaling by \(\frac{1}{\sqrt{\kappa }}\), we claim that T is an orthogonal matrix.
We apply a holographic transformation by T to the signature f to obtain \(\,\widehat{f} = T^{\otimes 3} f\), which does not change the complexity of the problem by Theorem 3.3. Since the first row of T is a row of all 1’s, the output of \(\,\widehat{f}\) on any input containing the first domain element is 0. When restricted to the remaining \(\kappa  1\) domain elements, \(\,\widehat{f}\) is domain invariant and symmetric, so it can be expressed as a succinct ternary signature of type \(\tau _3\).
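As a sanity check on these claims, the following sketch (assuming NumPy; the choice \(\kappa = 5\) is arbitrary) verifies numerically that \(\frac{1}{\sqrt{\kappa }} T\) is orthogonal and that every entry of \(\widehat{f} = T^{\otimes 3} f\) whose input contains the first domain element vanishes:

```python
import numpy as np

kappa = 5
s = np.sqrt(kappa)
x = -(kappa + s - 1) / (s + 1)
y = 1 / (s + 1)

# Build T = [[1, 1^T], [1, T']] with T' = y*J + (x - y)*I on the last kappa-1 coordinates.
T = np.ones((kappa, kappa))
T[1:, 1:] = y * np.ones((kappa - 1, kappa - 1)) + (x - y) * np.eye(kappa - 1)

# (1/sqrt(kappa)) T should be orthogonal.
assert np.allclose((T / s).T @ (T / s), np.eye(kappa))

# Succinct ternary signature f = <(k-1)(k-2), -(k-2), 2> of type tau_3:
# value depends only on the equality pattern of the three inputs.
a, b, c = (kappa - 1) * (kappa - 2), -(kappa - 2), 2
F = np.empty((kappa,) * 3)
for i in range(kappa):
    for j in range(kappa):
        for k in range(kappa):
            d = len({i, j, k})
            F[i, j, k] = a if d == 1 else (b if d == 2 else c)

# Holographic transformation: fhat = T^{otimes 3} f.
Fhat = np.einsum('ip,jq,kr,pqr->ijk', T, T, T, F)

# Any input containing the first domain element gives output 0.
assert np.allclose(Fhat[0], 0)
assert np.allclose(Fhat[:, 0, :], 0)
assert np.allclose(Fhat[:, :, 0], 0)
```

The vanishing only uses that the first row of T is all 1's together with \(a + (\kappa - 1) b = 0\) and \(2 b + (\kappa - 2) c = 0\), so it is robust to the particular values of x and y.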
Up to a nonzero factor of \(\frac{\kappa ^3}{(\sqrt{\kappa } + 1)^2}\), it can be verified that \(\,\widehat{f} = \langle -(\kappa - 2) (2 + \sqrt{\kappa }), 2 + \sqrt{\kappa }, 1 \rangle \). One way to do this is as follows. We write \(f = \langle a,b,2 \rangle \) and \(T = \left[ {\begin{matrix} 1 &{} {\mathbf {1}} \\ {\mathbf {1}} &{} T' \end{matrix}}\right] \in {\mathbb {C}}^{\kappa \times \kappa }\), where \(T' = y J_{\kappa - 1} + (x - y) I_{\kappa - 1}\). The entries of \(\,\widehat{f}\) are polynomials in \(\kappa \) with coefficients from \({\mathbb {Z}}[a,b,x,y]\). The degree of these polynomials is at most 3 since the arity of f is 3. After computing the entries of \(\,\widehat{f}\) for domain sizes \(3 \le \kappa \le 6\) as elements in \({\mathbb {Z}}[a,b,x,y]\), we interpolate the entries of \(\,\widehat{f}\) as elements in \(({\mathbb {Z}}[a,b,x,y])[\kappa ]\). Then replacing a, b, x, y with their actual values gives the claimed expression for the signature.
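The interpolation step can be illustrated in miniature: a polynomial of degree at most 3 in \(\kappa \) is determined by its values at the four domain sizes \(3 \le \kappa \le 6\). The stand-in cubic below is hypothetical (it is not an actual entry of \(\widehat{f}\)); only the technique is the point:

```python
import numpy as np

# The entries of fhat are polynomials in kappa of degree <= 3, so sampling at
# four domain sizes (kappa = 3, 4, 5, 6) determines them exactly.
def entry(kappa):
    # Hypothetical cubic standing in for one (unknown) entry, for illustration only.
    return kappa**3 - 2 * kappa + 1

samples_k = np.array([3, 4, 5, 6], dtype=float)
samples_v = np.array([entry(k) for k in samples_k])

# Fit the unique cubic through the four samples.
coeffs = np.polyfit(samples_k, samples_v, 3)

# The interpolated polynomial now predicts the entry for every other kappa.
assert np.isclose(np.polyval(coeffs, 7.0), entry(7))
assert np.isclose(np.polyval(coeffs, 11.0), entry(11))
```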
Since \(\kappa \ge 4\), \({\text {PlHolant}}(\widehat{f}\,)\) is \({\#\mathrm {P}}\)-hard by Lemma 8.2, which completes the proof. \(\square \)
At this point, we have achieved the broader goal of this section. For any \(a,b,c \in {\mathbb {C}}\) and domain size \(\kappa \ge 3\), either \({\text {PlHolant}}(\langle a,b,c \rangle )\) is computable in polynomial time, or \({\text {PlHolant}}(\langle a,b,c \rangle )\) is \({\#\mathrm {P}}\)-hard, or we can use \(\langle a,b,c \rangle \) to construct \(\langle 1 \rangle \) (i.e., the reduction \({\text {PlHolant}}(\{\langle a,b,c \rangle , \langle 1 \rangle \}) \le _T {\text {PlHolant}}(\langle a,b,c \rangle )\) holds). However, Lemma 8.3 is easily generalized, and this generalization turns out to be necessary to obtain our dichotomy.
Recall that connecting \(f = \langle (\kappa - 1) (\kappa - 2), -(\kappa - 2), 2 \rangle \) to \(\langle 1 \rangle \) results in an identically 0 signature. This suggests that we consider the more general signature \(\widetilde{f} = \alpha \langle 1 \rangle ^{\otimes 3} + \beta f\) for any \(\alpha \in {\mathbb {C}}\) and any nonzero \(\beta \in {\mathbb {C}}\) since this does not change the complexity (as we argue in Corollary 8.4). For any \(a,b,c \in {\mathbb {C}}\) satisfying \({\mathfrak {B}} = 0\) (cf. (7)), if \(\alpha = \frac{2 b + (\kappa - 2) c}{\kappa }\) and \(\beta = \frac{c - b}{\kappa }\), then \(\widetilde{f} = \langle a,b,c \rangle \). We note that the condition \({\mathfrak {B}} = 0\) can also be written as \((\kappa - 2) (b - c) = b - a\). We now prove a dichotomy for the signature \(\widetilde{f}\).
Corollary 8.4
Suppose \(\kappa \ge 3\) is the domain size and \(a,b,c \in {\mathbb {C}}\). Let \(\langle a,b,c \rangle \) be a succinct ternary signature of type \(\tau _3\). If \({\mathfrak {B}} = 0\), then \({\text {PlHolant}}(\langle a,b,c \rangle )\) is \({\#\mathrm {P}}\)-hard unless \(b = c\) or \(\kappa = 3\), in which case the problem is computable in polynomial time.
Proof
If \(b = c\), then by \({\mathfrak {B}} = 0\) we have \(a = b = c\), which means the signature is degenerate and the problem is trivially tractable. If \(\kappa = 3\), then \(a = c\) and the problem is tractable by case 3 of Corollary 5.2. Otherwise \(b \ne c\) and \(\kappa \ge 4\).
Since \({\mathfrak {B}} = 0\), it can be verified that \(\langle a,b,c \rangle = \frac{2 b + (\kappa - 2) c}{\kappa } \langle 1 \rangle ^{\otimes 3} + \frac{c - b}{\kappa } f\), where \(f = \langle (\kappa - 1) (\kappa - 2), -(\kappa - 2), 2 \rangle \). We show that \({\text {PlHolant}}(\langle a,b,c \rangle )\) is \({\#\mathrm {P}}\)-hard iff \({\text {PlHolant}}(f)\) is. Since \({\text {PlHolant}}(f)\) is \({\#\mathrm {P}}\)-hard by Lemma 8.3, this proves the result.
Let \(G = (V,E)\) be a connected planar 3-regular graph with \(n = |V|\) and \(m = |E|\). We can view \({\text {PlHolant}}(G; \langle a,b,c \rangle )\) as a sum of \(2^n\) Holant computations using the signatures \(\alpha \langle 1 \rangle ^{\otimes 3}\) and \(\beta f\). Each of these Holant computations considers a different assignment of either \(\alpha \langle 1 \rangle ^{\otimes 3}\) or \(\beta f\) to each vertex. Since connecting f to \(\langle 1 \rangle \) gives an identically 0 signature, if any connected signature grid contains both \(\alpha \langle 1 \rangle ^{\otimes 3}\) and \(\beta f\), then that particular Holant computation is 0. This is because a vertex of degree three assigned \(\langle 1 \rangle ^{\otimes 3}\) is equivalent to three vertices of degree one connected to the same three neighboring vertices and each assigned \(\langle 1 \rangle \). There are only two possible assignments that could be nonzero. If all vertices are assigned \(\alpha \langle 1 \rangle ^{\otimes 3}\), then the Holant is \(\alpha ^n \kappa ^m\). Otherwise, all vertices are assigned \(\beta f\) and the Holant is \(\beta ^n {\text {PlHolant}}(G; f)\). Thus, \({\text {PlHolant}}(G; \alpha \langle 1 \rangle ^{\otimes 3} + \beta f) = \alpha ^n \kappa ^m + \beta ^n {\text {PlHolant}}(G; f)\). Since \(\beta \ne 0\), one can solve for either Holant value given the other. \(\square \)
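The two-term identity at the end of this proof can be checked by brute force on a small instance. The sketch below (assuming Python; the theta multigraph on two vertices joined by three parallel edges, \(\kappa = 4\), and the particular \(\alpha , \beta \) are our choices, not from the paper) enumerates all \(\kappa ^m\) edge assignments directly:

```python
from itertools import product

kappa = 4
a0, b0, c0 = (kappa - 1) * (kappa - 2), -(kappa - 2), 2  # f = <6, -2, 2>

def f_val(x, y, z, a, b, c):
    # Succinct ternary signature of type tau_3: value depends on equality pattern.
    d = len({x, y, z})
    return a if d == 1 else (b if d == 2 else c)

def holant_theta(a, b, c):
    # Theta multigraph: n = 2 vertices of degree 3 joined by m = 3 parallel
    # edges; both vertices see the same edge triple (e1, e2, e3).
    return sum(f_val(*e, a, b, c) ** 2 for e in product(range(kappa), repeat=3))

alpha, beta = 1.5, -2.0
n, m = 2, 3

# g = alpha * <1>^{otimes 3} + beta * f adds alpha to every entry of beta*f.
lhs = sum((alpha + beta * f_val(*e, a0, b0, c0)) ** 2
          for e in product(range(kappa), repeat=3))
rhs = alpha ** n * kappa ** m + beta ** n * holant_theta(a0, b0, c0)
assert abs(lhs - rhs) < 1e-9
```

The cross term vanishes precisely because connecting f to \(\langle 1 \rangle \) gives the identically 0 signature.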
9 Interpolating all binary signatures of type \(\tau _2\)
In this section, we show how to interpolate all binary succinct signatures of type \(\tau _2\) in most settings. We use two general techniques to achieve this goal. In the first subsection, we use a generalization of the anti-gadget technique that creates a multitude of gadgets; they are so numerous that one of them is very likely to succeed. In the second subsection, we introduce a new technique called Eigenvalue Shifted Triples (ESTs). These generalize the technique of Eigenvalue Shifted Pairs from [43], and we use ESTs to interpolate binary succinct signatures in cases that the anti-gadget technique cannot handle. There are a few isolated problems for which neither technique works. However, these problems are easily handled separately in Lemma 12.1 in “Appendix 2”.
From Sect. 8, every problem fits into one of three cases: either (1) the problem is tractable, (2) the problem is \({\#\mathrm {P}}\)-hard, or (3) we can construct the succinct unary signature \(\langle 1 \rangle \) of type \(\tau _1\). Thus, many results in this section assume that \(\langle 1 \rangle \) is available.
9.1 E pluribus unum
We use Lemma 4.12 to prove our interpolation results. The main technical difficulty is to satisfy the third condition of Lemma 4.12, which is to prove that some recurrence matrix (that defines a sequence of gadgets) has infinite order up to a scalar. When the matrix has a finite order up to a scalar, we can utilize this failure condition to our advantage by constructing an anti-gadget [17], which is the “last” gadget with a distinct signature (up to a scalar) in the infinite sequence of gadgets. To make sure that we construct a multitude of nontrivial gadgets without cancellation, we put the anti-gadget inside another gadget (contrast the gadget in Fig. 18 with the gadget in Fig. 19b). From among this plethora of gadgets, at least one must succeed under the right conditions.
Although this idea works quite well in that some gadget among those constructed does succeed, we still must prove that one such gadget succeeds in every setting. We aim to exhibit a recurrence matrix whose ratio of eigenvalues is not a root of unity. We consider three related recurrence matrices at once. The next two lemmas consider two similar situations involving the eigenvalues of three such matrices. When applied, these lemmas show that some recurrence matrix must have eigenvalues with distinct complex norms, even though exactly which one among them succeeds may depend on the parameters in a complicated way.
Lemma 9.1
Let \(d_0, d_1, d_2, \Psi \in {\mathbb {C}}\). If \(d_0\), \(d_1\), and \(d_2\) have the same argument but are distinct, then for all \(\rho \in {\mathbb {R}}\), there exists \(i \in \{0,1,2\}\) such that \(|\Psi + d_i| \ne \rho \).
Proof
Assume to the contrary that there exists \(\rho \in {\mathbb {R}}\) such that \(|\Psi + d_i| = \rho \) for every \(i \in \{0,1,2\}\). In the complex plane, consider the circle centered at the origin of radius \(\rho \). Each \(\Psi + d_i\) is a distinct point on this circle as well as a distinct point on a common line through \(\Psi \). However, the line intersects the circle in at most two points, a contradiction. \(\square \)
Lemma 9.2
Let \(d_0, d_1, d_2, \Psi \in {\mathbb {C}}\). If \(d_0\), \(d_1\), and \(d_2\) have the same complex norm but are distinct and \(\Psi \ne 0\), then for all \(\rho \in {\mathbb {R}}\), there exists \(i \in \{0,1,2\}\) such that \(|\Psi + d_i| \ne \rho \).
Proof
Let \(\ell = |d_0|\). Assume to the contrary that there exists \(\rho \in {\mathbb {R}}\) such that \(|\Psi + d_i| = \rho \) for every \(i \in \{0,1,2\}\). In the complex plane, consider the circle centered at the origin of radius \(\rho \) and the circle centered at \(\Psi \) of radius \(\ell \). Since \(\Psi \ne 0\), these circles are distinct. Each \(\Psi + d_i\) is a distinct point on both circles. However, these circles intersect in at most two points, a contradiction. \(\square \)
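These two lemmas are easy to probe numerically. A randomized sanity check of Lemma 9.2 (assuming NumPy; the seed, tolerances, and sample counts are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    # Hypotheses of Lemma 9.2: three random points on a common circle |d_i| = r,
    # shifted by some Psi that is (almost surely) nonzero.
    r = rng.uniform(0.5, 2.0)
    d = r * np.exp(1j * rng.uniform(0, 2 * np.pi, size=3))
    psi = rng.normal() + 1j * rng.normal()
    norms = np.abs(psi + d)
    # Two distinct circles meet in at most two points, so the three shifted
    # points can never all lie on one circle |w| = rho about the origin.
    assert norms.max() - norms.min() > 1e-9
```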
Lemma 9.3
 1.
\(\omega \not \in \{0, \pm 1\}\),
 2.
\({\mathfrak {B}} \ne 0\), and
 3.at least one of the following holds:
 (i)
\({\mathfrak {C}} = 0\) or
 (ii)
\({\mathfrak {C}}^2 = \omega ^{2 \ell } {\mathfrak {B}}^2\) for some \(\ell \in \{0,1\}\) but either \({\mathfrak {C}}^2 \ne {\mathfrak {A}}^2\) or \(\kappa \ne 3\),
We use this lemma to establish that various 2-by-2 recurrence matrices have infinite order modulo scalars. When applied, \(\omega \) will be the ratio of two eigenvalues, one of which is a multiple of \({\mathfrak {B}}\) or \({\mathfrak {B}}^2\) by a nonzero function of \(\kappa \).
Proof of Lemma 9.3
Let \(\Phi = \frac{{\mathfrak {C}}^2}{{\mathfrak {B}}^2}\) and \(\Psi = \frac{(\kappa - 2) {\mathfrak {A}}^2}{{\mathfrak {B}}^2}\). Consider the recursive construction in Fig. 7. After scaling by a nonzero factor of \(\kappa \), we assign \(f = \frac{1}{\kappa } \langle \omega + \kappa - 1, \omega - 1 \rangle \) to every vertex. Let \(\,f_s\) be the succinct binary signature of type \(\tau _2\) for the sth gadget in this construction. We can express \(\,f_s\) as \(M^s \left[ {\begin{matrix} 1 \\ 0 \end{matrix}}\right] \), where \(M = \frac{1}{\kappa } \left[ {\begin{matrix} \omega + \kappa - 1 &{} (\kappa - 1) (\omega - 1) \\ \omega - 1 &{} (\kappa - 1) \omega + 1 \end{matrix}}\right] = \left[ {\begin{matrix} 1 &{} 1 - \kappa \\ 1 &{} 1 \end{matrix}}\right] \left[ {\begin{matrix} \omega &{} 0 \\ 0 &{} 1 \end{matrix}}\right] \left[ {\begin{matrix} 1 &{} 1 - \kappa \\ 1 &{} 1 \end{matrix}}\right] ^{-1}\) by Lemma 4.11. Then \(\,f_s = \frac{1}{\kappa } \langle \omega ^s + \kappa - 1, \omega ^s - 1 \rangle \). The eigenvalues of M are 1 and \(\omega \), so the determinant of M is \(\omega \ne 0\). If \(\omega \) is not a root of unity, then we are done by Corollary 4.13.
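The eigendecomposition of M and the closed form for \(f_s\) can be verified numerically; the sketch below uses NumPy, with \(\kappa = 5\) and a 7th root of unity as arbitrary choices:

```python
import numpy as np

kappa = 5
omega = np.exp(2j * np.pi / 7)  # any omega works; a 7th root of unity here

M = np.array([[omega + kappa - 1, (kappa - 1) * (omega - 1)],
              [omega - 1, (kappa - 1) * omega + 1]]) / kappa
P = np.array([[1, 1 - kappa], [1, 1]])

# M = P diag(omega, 1) P^{-1}, as in Lemma 4.11.
assert np.allclose(M, P @ np.diag([omega, 1]) @ np.linalg.inv(P))

# Hence f_s = M^s [1, 0]^T = (1/kappa) <omega^s + kappa - 1, omega^s - 1>.
for s in range(1, 8):
    fs = np.linalg.matrix_power(M, s) @ np.array([1, 0])
    expected = np.array([omega**s + kappa - 1, omega**s - 1]) / kappa
    assert np.allclose(fs, expected)
```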
 1.
Suppose \(\Phi = 0\). Consider the gadget M(0, 1). The determinant of M(0, 1) is nonzero since \(g(0, 1) \ne 0\) and the ratio of its eigenvalues is not a root of unity because they have distinct complex norms. Thus, we are done by Corollary 4.13.
 2.
Suppose \(\Phi = \omega ^{2 \ell }\) for some \(\ell \in \{0,1\}\). Consider the gadget \(M(n - \ell , n - \ell )\). The determinant of \(M(n - \ell , n - \ell )\) is nonzero since \(g(n - \ell , n - \ell ) \ne 0\) and the ratio of its eigenvalues is not a root of unity because they have distinct complex norms. Thus, we are done by Corollary 4.13.
Suppose \(n \ge 4\) and let \(S_0 = \{(0,0), (1, n-1), (2, n-2)\}\) and \(S_1 = \{(1,1), (2, 0), (3, n-1)\}\). Then \(g(r,s) = 0\) holds for at most one \((r,s) \in S_0 \cup S_1\). In particular, g(r, s) is either nonzero for all \((r,s) \in S_0\) or nonzero for all \((r,s) \in S_1\). Pick \(j \in \{0,1\}\) such that g(r, s) is nonzero for all \((r,s) \in S_j\). By Lemma 9.1 with \(d_i = (\omega ^i + \omega ^{-i}) \omega ^j\) and \(\rho = |\Phi \omega ^{2j} + \kappa - 1|\), there exists some \((r,s) \in S_j\) such that the eigenvalues of M(r, s) have distinct complex norms, so we are done by Corollary 4.13.
 1.
Suppose \(\Phi = 0\). Let \(S_j = \{(0,j), (1,j+1), (2,j+2)\}\). Then \(g(r,s) = 0\) holds for at most one \((r,s) \in S_0 \cup S_1\). In particular, g(r, s) is either nonzero for all \((r,s) \in S_0\) or nonzero for all \((r,s) \in S_1\). Pick \(j \in \{0,1\}\) such that g(r, s) is nonzero for all \((r,s) \in S_j\). By Lemma 9.2 with \(d_i = (1 + \omega ^j) \omega ^i\) and \(\rho = \kappa - 1\), there exists some \((r,s) \in S_j\) such that the eigenvalues of M(r, s) have distinct complex norms, so we are done by Corollary 4.13.
 2.
Suppose \(\Phi = \omega ^{2 \ell }\) for some \(\ell \in \{0,1\}\) but either \({\mathfrak {C}}^2 \ne {\mathfrak {A}}^2\) or \(\kappa \ne 3\). Note that this is equivalent to \(\Phi \ne \Psi \) or \(\kappa \ne 3\). Consider the set \(S = \{(0,0), (0,1), (0,2), (1,1), (1,2), (2,2)\}\). If there exists some \((r,s) \in S\) such that \(g(r,s) \ne 0\) and the eigenvalues of M(r, s) have distinct complex norms, then we are done by Corollary 4.13.
The previous lemma is strong enough to handle the typical case.
Lemma 9.4
 1.
\({\mathfrak {B}} \ne 0\),
 2.
\({\mathfrak {C}} \ne 0\),
 3.
\({\mathfrak {C}}^2 \ne {\mathfrak {B}}^2\), and
 4.
either \({\mathfrak {C}}^2 \ne {\mathfrak {A}}^2\) or \(\kappa \ne 3\),
Proof
By starting the proof with a different gadget, Lemma 9.3 can handle the first three failure conditions. The last two failure conditions require a new idea, Eigenvalue Shifted Triples, which we introduce in Sect. 9.2. In fact, these two cases are equivalent under an orthogonal holographic transformation.
The next lemma considers the failure condition in (14). Note that \({\mathfrak {C}} = {\mathfrak {B}}\) iff the signature can be written as \(\langle 2 a, -(\kappa - 2) c, 2 c \rangle \) up to a factor of 2. The first excluded case in Lemma 9.5 is handled by Corollary 8.4, and the last two excluded cases are tractable by Corollary 5.3.
Lemma 9.5
 1.
\(2 a \ne (\kappa - 1) (\kappa - 2) c\),
 2.
\(4 a \ne (\kappa ^2 - 6 \kappa + 4) c\), and
 3.
\(c \ne 0\),
Proof
Note that when \(2 b = -(\kappa - 2) c\), we have \({\mathfrak {B}} = {\mathfrak {C}} = 2 a - (\kappa - 1) (\kappa - 2) c\) by (14), which is nonzero by condition 1 of the lemma. Let \(\omega _0 = 4 a^2 + (\kappa - 2) [4 a c + (2 \kappa ^2 + \kappa - 2) c^2]\) and assume \(\omega _0 \ne 0\). Then let \(\omega = \frac{{\mathfrak {B}}^2}{\omega _0} \ne 0\). By conditions 2 and 3, it follows that \(\omega \ne 1\). Also we note that when \(2 b = -(\kappa - 2) c\), we have \(2 {\mathfrak {A}} = 2 a + (3 \kappa - 2) c\) and \(2 {\mathfrak {C}} = 2 a - (\kappa - 1) (\kappa - 2) c\). By the same conditions, 2 and 3, we have \({\mathfrak {C}}^2 \ne {\mathfrak {A}}^2\). We further assume that \(\omega \ne -1\), which is equivalent to \(8 a^2 - 4 (\kappa - 2)^2 a c + (\kappa - 2) (\kappa ^3 - 2 \kappa ^2 + 6 \kappa - 4) c^2 \ne 0\).
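The polynomial identities behind the “\(\omega \ne 1\)” and “\(\omega \ne -1\)” claims can be confirmed symbolically (assuming SymPy; B below abbreviates the value \(2 a - (\kappa - 1)(\kappa - 2) c\) quoted above):

```python
import sympy as sp

a, c, k = sp.symbols('a c kappa')
B = 2*a - (k - 1)*(k - 2)*c
w0 = 4*a**2 + (k - 2)*(4*a*c + (2*k**2 + k - 2)*c**2)

# omega = B^2 / w0 equals 1 exactly when 4a = (k^2 - 6k + 4)c (given c != 0):
assert sp.expand(B**2 - w0 - k*(k - 2)*c*((k**2 - 6*k + 4)*c - 4*a)) == 0

# omega = -1 exactly on the quartic-free condition quoted at the end:
target = 8*a**2 - 4*(k - 2)**2*a*c + (k - 2)*(k**3 - 2*k**2 + 6*k - 4)*c**2
assert sp.expand(B**2 + w0 - target) == 0
```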
 1.
If \(\omega _0 = 0\), then \(2 a = \big [{-(\kappa - 2)} \pm i \kappa \sqrt{2 (\kappa - 2)}\big ] c\). Up to a nonzero factor of \(c\), we have \(\frac{1}{c} \langle 2 a, -(\kappa - 2) c, 2 c \rangle = \langle -(\kappa - 2) \pm i \kappa \sqrt{2 (\kappa - 2)}, -(\kappa - 2), 2 \rangle \) and are done by case 1 of Lemma 12.1.
 2.If \(8 a^2 - 4 (\kappa - 2)^2 a c + (\kappa - 2) (\kappa ^3 - 2 \kappa ^2 + 6 \kappa - 4) c^2 = 0\), then \(4 a = \big [(\kappa - 2)^2 \pm i \kappa \sqrt{\kappa ^2 - 4}\big ] c\). Up to a nonzero factor of \(\frac{c}{2}\), we have$$\begin{aligned} \frac{2}{c} \langle 2 a, -(\kappa - 2) c, 2 c \rangle = \langle (\kappa - 2)^2 \pm i \kappa \sqrt{\kappa ^2 - 4}, -2 (\kappa - 2), 4 \rangle \end{aligned}$$and are done by case 2 of Lemma 12.1. \(\square \)
The next lemma considers the failure condition in (15). Note that \({\mathfrak {C}} = {\mathfrak {B}}\) iff the signature can be written as \(\langle -2 (2 \kappa - 3) b - (\kappa - 2)^2 c, 2 b, 2 c \rangle \) up to a factor of 2. The first excluded case in Lemma 9.6 is handled by Corollary 8.4, and the last excluded case is tractable by Corollary 5.8.
Lemma 9.6
 1.
\(2 b \ne -(\kappa - 2) c\) and
 2.
\(\kappa \ne 4\) or \(5 b^2 + 2 b c + c^2 \ne 0\),
Proof
Note that when \(2 a = -2 (2 \kappa - 3) b - (\kappa - 2)^2 c\), we have \({\mathfrak {B}} = {\mathfrak {C}}\) by (15) and \(2 {\mathfrak {B}} = \kappa [2 b + (\kappa - 2) c]\), which is nonzero by condition 1 of the lemma. Let \(\omega _0 = 8 (2 \kappa - 3) b^2 + (\kappa - 2) \left[ 8 (\kappa - 3) b c + (\kappa ^2 - 6 \kappa + 12) c^2\right] \) and assume \(\omega _0 \ne 0\). Then let \(\omega = \frac{\kappa [2 b + (\kappa - 2) c]^2}{\omega _0}\). By condition 1, \(\omega \ne 0\). It can be shown that \(\kappa [2 b + (\kappa - 2) c]^2 = \omega _0\) is equivalent to \((b - c) [3 b + (\kappa - 3) c] = 0\). Thus, assume \(b \ne c\) and \(3 b \ne -(\kappa - 3) c\). Then \(\omega \ne 1\). Also we note that when \(2 a = -2 (2 \kappa - 3) b - (\kappa - 2)^2 c\), we have \(2 {\mathfrak {A}} = \kappa [4 b + (\kappa - 4) c]\) and \(2 {\mathfrak {C}} = \kappa [2 b + (\kappa - 2) c]\). By the same assumptions, \(b \ne c\) and \(3 b \ne -(\kappa - 3) c\), we have \({\mathfrak {C}}^2 \ne {\mathfrak {A}}^2\). Further assume that \(\omega \ne -1\), which is equivalent to \(2 (5 \kappa - 6) b^2 + (\kappa - 2) [6 (\kappa - 2) b c + (\kappa ^2 - 4 \kappa + 6) c^2] \ne 0\).
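The claimed equivalence with \((b - c)[3 b + (\kappa - 3) c] = 0\) is a polynomial identity that can be confirmed symbolically (assuming SymPy):

```python
import sympy as sp

b, c, k = sp.symbols('b c kappa')
w0 = 8*(2*k - 3)*b**2 + (k - 2)*(8*(k - 3)*b*c + (k**2 - 6*k + 12)*c**2)
lhs = k*(2*b + (k - 2)*c)**2 - w0

# k[2b + (k-2)c]^2 = w0 holds exactly when (b - c)(3b + (k-3)c) = 0:
assert sp.expand(lhs + 4*(k - 2)*(b - c)*(3*b + (k - 3)*c)) == 0
```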
 1.If \(\omega _0 = 0\), then we have \(4 (2 \kappa - 3) b = \big [{-2} (\kappa - 3) (\kappa - 2) \pm i \kappa \sqrt{2 (\kappa - 2)}\big ] c\) but \(\kappa \ne 4\) by condition 2 since otherwise \(\omega _0 = 8 (5 b^2 + 2 b c + c^2) \ne 0\). Up to a nonzero factor of \(\frac{c}{2 (2 \kappa - 3)}\),$$\begin{aligned}&amp;\frac{2 (2 \kappa - 3)}{c} \langle -2 (2 \kappa - 3) b - (\kappa - 2)^2 c, 2 b, 2 c \rangle \\&amp;\quad = \left\langle -(2 \kappa - 3) \big [2 (\kappa - 2) \pm i \kappa \sqrt{2 (\kappa - 2)}\big ], \right. \\&amp;\qquad \left. -2 (\kappa - 3) (\kappa - 2) \pm i \kappa \sqrt{2 (\kappa - 2)}, \, 4 (2 \kappa - 3) \right\rangle \end{aligned}$$and are done by case 3 of Lemma 12.1.
 2.
If \(b = c\), then up to a nonzero factor of c, we have \(\frac{1}{c} \langle -2 (2 \kappa - 3) b - (\kappa - 2)^2 c, 2 b, 2 c \rangle = \langle -\kappa ^2 + 2, 2, 2 \rangle \) and are done by case 4 of Lemma 12.1.
 3.
If \(3 b = -(\kappa - 3) c\), then up to a nonzero factor of \(\frac{c}{3}\), we have \(\frac{3}{c} \langle -2 (2 \kappa - 3) b - (\kappa - 2)^2 c, 2 b, 2 c \rangle = \langle \kappa ^2 - 6 \kappa + 6, -2 (\kappa - 3), 6 \rangle \) and are done by case 5 of Lemma 12.1.
 4.If \(2 (5 \kappa - 6) b^2 + (\kappa - 2) [6 (\kappa - 2) b c + (\kappa ^2 - 4 \kappa + 6) c^2] = 0\), then \(2 (5 \kappa - 6) b = \big [{-3} (\kappa - 2)^2 \pm i \kappa \sqrt{\kappa ^2 - 4}\big ] c\). Up to a nonzero factor of \(\frac{c}{5 \kappa - 6}\),$$\begin{aligned}&amp;\frac{5 \kappa - 6}{c} \langle -2 (2 \kappa - 3) b - (\kappa - 2)^2 c, 2 b, 2 c \rangle \\&amp;\quad = \left\langle (\kappa - 3) (\kappa - 2)^2 \mp i \kappa (2 \kappa - 3) \sqrt{\kappa ^2 - 4},\right. \\&amp;\qquad \left. -3 (\kappa - 2)^2 \pm i \kappa \sqrt{\kappa ^2 - 4}, \, 2 (5 \kappa - 6) \right\rangle \end{aligned}$$and are done by case 6 of Lemma 12.1. \(\square \)
The next lemma considers the failure condition in (16). Note that \({\mathfrak {C}} = 0\) iff the signature can be written as \(\langle -3 (\kappa - 1) b - (\kappa - 1) (\kappa - 2) c, b, c \rangle \). The excluded case in Lemma 9.7 is handled by Corollary 8.4.
Lemma 9.7
Proof
Note that when \(a = -3 (\kappa - 1) b - (\kappa - 2) (\kappa - 1) c\), we have \({\mathfrak {C}} = 0\) and \(2 {\mathfrak {B}} = \kappa [2 b + (\kappa - 2) c]\), which is nonzero by assumption. Let \(\omega _0 = (9 \kappa - 10) b^2 + (\kappa - 2) [2 (3 \kappa - 5) b c + (\kappa ^2 - 4 \kappa + 5) c^2]\) and assume \(\omega _0 \ne 0\). Then let \(\omega = \frac{(\kappa - 1) [2 b + (\kappa - 2) c]^2}{\omega _0}\). By assumption, \(\omega \ne 0\). Assume \(\omega \ne 1\), which is equivalent to \((5 \kappa - 6) b^2 + (\kappa - 3) (\kappa - 2) (2 b - c) c \ne 0\). Further assume \(\omega \ne -1\), which is equivalent to \((13 \kappa - 14) b^2 + (\kappa - 2) [2 (5 \kappa - 7) b c + (2 \kappa ^2 - 7 \kappa + 7) c^2] \ne 0\).
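As in the previous proofs, the quoted \(\omega = \pm 1\) conditions are polynomial identities; a symbolic confirmation (assuming SymPy):

```python
import sympy as sp

b, c, k = sp.symbols('b c kappa')
w0 = (9*k - 10)*b**2 + (k - 2)*(2*(3*k - 5)*b*c + (k**2 - 4*k + 5)*c**2)

# omega = (k-1)[2b+(k-2)c]^2 / w0 equals 1 exactly on the first quadratic:
lhs1 = (k - 1)*(2*b + (k - 2)*c)**2 - w0
assert sp.expand(lhs1 + (5*k - 6)*b**2 + (k - 3)*(k - 2)*(2*b - c)*c) == 0

# and equals -1 exactly on the second quadratic:
lhs2 = (k - 1)*(2*b + (k - 2)*c)**2 + w0
target = (13*k - 14)*b**2 + (k - 2)*(2*(5*k - 7)*b*c + (2*k**2 - 7*k + 7)*c**2)
assert sp.expand(lhs2 - target) == 0
```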
 1.If \(\omega _0 = 0\), then \((9 \kappa - 10) b = \big [{-(\kappa - 2)} (3 \kappa - 5) \pm i \kappa \sqrt{2 (\kappa - 2)}\big ] c\). Up to a nonzero factor of \(\frac{c}{9 \kappa - 10}\), we have$$\begin{aligned}&amp;\frac{9 \kappa - 10}{c} \langle -3 (\kappa - 1) b - (\kappa - 1) (\kappa - 2) c, b, c \rangle \\&amp;\quad = \langle -(\kappa - 1) \big [5 (\kappa - 2) \pm 3 i \kappa \sqrt{2 (\kappa - 2)}\big ], \\&amp;\qquad -(\kappa - 2) (3 \kappa - 5) \pm i \kappa \sqrt{2 (\kappa - 2)}, \quad 9 \kappa - 10 \rangle \end{aligned}$$and we are done by case 7 of Lemma 12.1.
 2.If \((5 \kappa - 6) b^2 + (\kappa - 3) (\kappa - 2) (2 b - c) c = 0\), then \((5 \kappa - 6) b = \big [{-(\kappa - 3)} (\kappa - 2) \pm \kappa \sqrt{\kappa ^2 - 5 \kappa + 6}\big ] c\). Up to a nonzero factor of \(\frac{c}{5 \kappa - 6}\), we have$$\begin{aligned}&amp;\frac{5 \kappa - 6}{c} \langle -3 (\kappa - 1) b - (\kappa - 1) (\kappa - 2) c, b, c \rangle \\&amp;\quad = \langle -(\kappa - 1) \left[ (\kappa - 2) (2 \kappa + 3) \pm 3 \kappa \sqrt{\kappa ^2 - 5 \kappa + 6}\right] , \\&amp;\quad -(\kappa - 3) (\kappa - 2) \pm \kappa \sqrt{\kappa ^2 - 5 \kappa + 6}, \quad 5 \kappa - 6 \rangle \end{aligned}$$and are done by case 8 of Lemma 12.1.
 3.If \((13 \kappa - 14) b^2 + (\kappa - 2) [2 (5 \kappa - 7) b c + (2 \kappa ^2 - 7 \kappa + 7) c^2] = 0\), then \((13 \kappa - 14) b = \big [{-(\kappa - 2)} (5 \kappa - 7) \pm i \kappa \sqrt{\kappa ^2 - \kappa - 2}\big ] c\). Up to a nonzero factor of \(\frac{c}{13 \kappa - 14}\), we have$$\begin{aligned}&amp;\frac{13 \kappa - 14}{c} \langle -3 (\kappa - 1) b - (\kappa - 1) (\kappa - 2) c, b, c \rangle \\&amp;\quad = \langle (\kappa - 1) \left[ (\kappa - 2) (2 \kappa - 7) \mp 3 i \kappa \sqrt{\kappa ^2 - \kappa - 2}\right] , \\&amp;\qquad -(\kappa - 2) (5 \kappa - 7) \pm i \kappa \sqrt{\kappa ^2 - \kappa - 2}, ~~~ 13 \kappa - 14 \rangle \end{aligned}$$and are done by case 9 of Lemma 12.1. \(\square \)
9.2 Eigenvalue shifted triples
To handle failure conditions (17) and (18) from Lemma 9.4, we need another technique. We introduce an Eigenvalue Shifted Triple, which extends the concept of an Eigenvalue Shifted Pair.
Definition 9.8
(Definition 4.6 in [43]) A pair of nonsingular matrices \(M, M' \in {\mathbb {C}}^{2 \times 2}\) is called an Eigenvalue Shifted Pair if \(M' = M + \delta I\) for some nonzero \(\delta \in {\mathbb {C}}\), and M has distinct eigenvalues.
Eigenvalue Shifted Pairs were used in [43] to show that interpolation succeeds in most cases since these matrices correspond to some recursive gadget constructions and at least one of them usually has eigenvalues with distinct complex norms. In [43], it is shown that the interpolation succeeds unless the variables in question take real values. Then other techniques were developed to handle the real case. We use Eigenvalue Shifted Pairs in a stronger way. We exhibit three matrices such that any two form an Eigenvalue Shifted Pair. Provided that these shifts are linearly independent over \({\mathbb {R}}\), this is enough to show that interpolation succeeds for both real and complex settings of the variables. We call this an Eigenvalue Shifted Triple.
Definition 9.9
A trio of nonsingular matrices \(M_0, M_1, M_2 \in {\mathbb {C}}^{2 \times 2}\) is called an Eigenvalue Shifted Triple (EST) if \(M_0\) has distinct eigenvalues and there exist nonzero \(\delta _1, \delta _2 \in {\mathbb {C}}\) satisfying \(\frac{\delta _1}{\delta _2} \not \in {\mathbb {R}}\) such that \(M_1 = M_0 + \delta _1 I\), and \(M_2 = M_0 + \delta _2 I\).
We note that if \(M_0\), \(M_1\), and \(M_2\) form an Eigenvalue Shifted Triple, then any permutation of the matrices is also an Eigenvalue Shifted Triple.
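The point of an EST is that at least one of the three matrices must have eigenvalues of distinct complex norms; this is made precise by Lemma 9.10 below. A randomized sanity check of that property at the eigenvalue level (assuming NumPy; the sampling scheme and tolerance are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(500):
    # alpha != beta play the role of the eigenvalues of M_0; the shifts
    # delta1, delta2 have a non-real ratio, as in the definition of an EST.
    alpha = rng.normal() + 1j * rng.normal()
    beta = alpha + rng.normal() + 1j * rng.normal()
    delta1 = rng.normal() + 1j * rng.normal()
    delta2 = delta1 * (rng.normal() + 1j * rng.uniform(0.1, 2.0))
    # At least one of the three shifted pairs has eigenvalues of distinct norms.
    assert any(abs(abs(alpha + d) - abs(beta + d)) > 1e-9
               for d in (0, delta1, delta2))
```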
The proof of the next lemma is similar to the proof of Lemma 4.7 in [44], the full version of [43].
Lemma 9.10
Suppose \(\alpha , \beta , \delta _1, \delta _2 \in {\mathbb {C}}\). If \(\alpha \ne \beta \), \(\delta _1, \delta _2 \ne 0\), and \(\frac{\delta _1}{\delta _2} \not \in {\mathbb {R}}\), then \(|\alpha | \ne |\beta |\) or \(|\alpha + \delta _1| \ne |\beta + \delta _1|\) or \(|\alpha + \delta _2| \ne |\beta + \delta _2|\).
Proof
The next lemma considers the failure condition in (17), which is \(\kappa = 3\) and \(b = 0\), so the signature has the form \(\langle a,0,c \rangle \). If \(a = 0\), then the problem is already \({\#\mathrm {P}}\)-hard by Theorem 4.8. If \(c = 0\), then the problem is tractable by case 1 of Corollary 5.2. If \(a^3 = c^3\), then the problem is tractable by Corollary 5.6.
Lemma 9.11
Proof
Assume \(2 a + c \ne 0\) and let \(\omega = \frac{a^2 + 2 c^2}{c (2 a + c)}\). Assume \(a^2 + 2 c^2 \ne 0\) so that \(\omega \ne 0\). Further assume \(a^2 + 2 a c + 3 c^2 \ne 0\) so that \(\omega ^2 \ne 1\) as well as \(a^2 + a c + 7 c^2 \ne 0\) so that \(\omega ^3 \ne 1\). Note that these conclusions also require \(a \ne c\) and \(a^3 \ne c^3\), respectively.
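The factorizations used here — \(\omega = 1\) forces \(a = c\), \(\omega = -1\) forces \(a^2 + 2 a c + 3 c^2 = 0\), and \(\omega ^2 + \omega + 1 = 0\) forces \((a^2 + a c + 7 c^2)(a^2 + a c + c^2) = 0\) — can be confirmed symbolically (assuming SymPy):

```python
import sympy as sp

a, c = sp.symbols('a c')
num, den = a**2 + 2*c**2, c*(2*a + c)   # omega = num / den

# omega = 1:  num - den = (a - c)^2, so omega = 1 forces a = c.
assert sp.expand(num - den - (a - c)**2) == 0

# omega = -1: num + den = a^2 + 2ac + 3c^2.
assert sp.expand(num + den - (a**2 + 2*a*c + 3*c**2)) == 0

# omega^2 + omega + 1 = 0 (i.e. omega^3 = 1 with omega != 1):
prod = (a**2 + a*c + 7*c**2) * (a**2 + a*c + c**2)
assert sp.expand(num**2 + num*den + den**2 - prod) == 0
```

The second factor \(a^2 + a c + c^2\) is excluded by \(a^3 \ne c^3\), which is why the proof only needs to assume \(a^2 + a c + 7 c^2 \ne 0\) in addition.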
Otherwise, suppose \(\omega \) is a primitive root of unity of order n. By assumption, \(n \ge 4\). Now consider the recursive construction in Fig. 7. We assign \(\,f_s\) to every vertex, where \(s \ge 0\) is a parameter of our choice. Let \(g_t(s)\) be the signature of the tth gadget in this recursive construction when using \(\,f_s\). Then \(g_1(s) = \,f_s\) and \(g_t(s) = (N(s))^t \left[ {\begin{matrix} 1 \\ 0 \end{matrix}}\right] \), where \(N(s) = \left[ {\begin{matrix} \omega ^s &{} 2 z \\ z &{} \omega ^s + z \end{matrix}}\right] \).
By Lemma 4.11, the eigenvalues of N(s) are \(\omega ^s + 2 z\) and \(\omega ^s - z\), which means the determinant of N(s) is \((\omega ^s + 2 z) (\omega ^s - z)\). Each eigenvalue can vanish for at most one value of \(s \in {\mathbb {Z}}_n\) since both eigenvalues are linear polynomials in \(\omega ^s\) that are not identically 0. Furthermore, at least one of the eigenvalues never vanishes for all \(s \in {\mathbb {Z}}_n\) since otherwise \(1 = |z| = \frac{1}{2}\).
Thus, at most one matrix among N(0), N(1), N(2), and N(3) can be singular. Pick distinct \(j, k, \ell \in \{0,1,2,3\}\) such that N(j), N(k), and \(N(\ell )\) are nonsingular. To finish the proof, we show that N(j), N(k), and \(N(\ell )\) form an Eigenvalue Shifted Triple. Then by Lemma 9.10, at least one of the matrices has eigenvalues with distinct complex norms, so we are done by Corollary 4.13.
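The eigenvalue formula for N(s) and the count of singular matrices can be checked numerically (assuming NumPy; \(\omega = i\), i.e. \(n = 4\), and the value of the nonzero parameter z are arbitrary choices):

```python
import numpy as np

omega = 1j               # a primitive 4th root of unity (n = 4)
z = 0.3 + 0.7j           # any nonzero z

count_singular = 0
for s in range(4):
    N = np.array([[omega**s, 2 * z], [z, omega**s + z]])
    eig = np.linalg.eigvals(N)
    # Eigenvalues are omega^s + 2z and omega^s - z (Lemma 4.11).
    assert np.allclose(sorted(eig, key=lambda w: w.real),
                       sorted([omega**s + 2 * z, omega**s - z],
                              key=lambda w: w.real))
    if np.isclose(np.linalg.det(N), 0):
        count_singular += 1

# At most one of N(0), ..., N(3) can be singular.
assert count_singular <= 1
```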
 1.
If \(2 a + c = 0\), then up to a nonzero factor of a, we have \(\frac{1}{a} \langle a,0,c \rangle = \langle 1, 0, -2 \rangle \) and are done by case 10 of Lemma 12.1.
 2.
If \(a^2 + 2 c^2 = 0\), then \(a = \pm i \sqrt{2} c\). Up to a nonzero factor of c, we have \(\frac{1}{c} \langle a,0,c \rangle = \langle \pm i \sqrt{2}, 0, 1 \rangle \) and are done by case 11 of Lemma 12.1.
 3.
If \(a^2 + 2 a c + 3 c^2 = 0\), then \(a = c (-1 \pm i \sqrt{2})\). Up to a nonzero factor of c, we have \(\frac{1}{c} \langle a,0,c \rangle = \langle -1 \pm i \sqrt{2}, 0, 1 \rangle \) and are done by case 12 of Lemma 12.1.
 4.
If \(a^2 + a c + 7 c^2 = 0\), then \(2 a = c (-1 \pm 3 i \sqrt{3})\). Up to a nonzero factor of \(\frac{c}{2}\), we have \(\frac{2}{c} \langle a,0,c \rangle = \langle -1 \pm 3 i \sqrt{3}, 0, 2 \rangle \) and are done by case 13 of Lemma 12.1. \(\square \)
The next lemma considers the failure condition in (18). Since this failure condition is just a holographic transformation of the failure condition in (17), the excluded cases in this lemma are handled exactly as those preceding Lemma 9.11.
Lemma 9.12
Proof
By Lemma 11.6 with \(x = 1\) and \(y = -2\), we have \(\hat{b} = 0\). Thus, after a holographic transformation by T, we are in the case covered by Lemma 9.11. Since T is orthogonal after scaling by \(\frac{1}{3}\), the complexity of these problems is unchanged by Theorem 3.3. \(\square \)
We summarize this section with the following lemma.
Corollary 9.13

\({\mathfrak {B}} = 0\) or

there exist \(\lambda \in {\mathbb {C}}\) and \(T \in \left\{ I_\kappa , \kappa I_\kappa - 2 J_\kappa \right\} \) such that$$\begin{aligned} \langle a,b,c \rangle = {\left\{ \begin{array}{ll} T^{\otimes 3} \lambda \langle 1,0,0 \rangle , {\text { or}}\\ T^{\otimes 3} \lambda \langle 0,0,1 \rangle {\text { and }} \kappa = 3, {\text { or}}\\ T^{\otimes 3} \lambda \langle 1,0,\omega \rangle {\text { and }} \kappa = 3 {\text { where }} \omega ^3 = 1, {\text { or}}\\ T^{\otimes 3} \lambda \langle \mu ^2,1,\mu \rangle {\text { and }} \kappa = 4 {\text { where }} \mu = 1 \pm 2 i. \end{array}\right. } \end{aligned}$$
10 The main dichotomy
Now we can prove our main dichotomy theorem.
Theorem 10.1
 1.
\(a = b = c\);
 2.
\(a = c\) and \(\kappa = 3\);
 3.
\(\langle a,b,c \rangle = T^{\otimes 3} \lambda \langle 1,0,0 \rangle \);
 4.
\(\langle a,b,c \rangle = T^{\otimes 3} \lambda \langle 1,0,\omega \rangle \) and \(\kappa = 3\) where \(\omega ^3 = 1\);
 5.
\(\langle a,b,c \rangle = T^{\otimes 3} \lambda \langle \mu ^2,1,\mu \rangle \) and \(\kappa = 4\) where \(\mu = 1 \pm 2 i\);
Proof
The signature in case 1 is degenerate, which is trivially tractable. Case 2 is tractable by case 3 of Corollary 5.2. Case 3 is tractable by Corollary 5.3. Case 4 is tractable by Corollary 5.6. Case 5 is tractable by Lemma 5.7.
Otherwise, \(\langle a,b,c \rangle \) is none of these tractable cases. If \({\mathfrak {B}} = 0\), then we are done by Corollary 8.4, so assume that \({\mathfrak {B}} \ne 0\). If \(a + (\kappa - 1) b = 0\) and \(b^2 - 4 b c - (\kappa - 3) c^2 = 0\), then we are done by Lemma 8.2, so assume that \(a + (\kappa - 1) b \ne 0\) or \(b^2 - 4 b c - (\kappa - 3) c^2 \ne 0\).
If \(a + (\kappa - 1) b \ne 0\), then we have the succinct unary signature \(\langle 1 \rangle \) of type \(\tau _1\) by Lemma 8.1. Otherwise, \(a + (\kappa - 1) b = 0\) and \(b^2 - 4 b c - (\kappa - 3) c^2 \ne 0\). Since \({\mathfrak {B}} \ne 0\), we have \(2 b + (\kappa - 2) c \ne 0\). Then again we have \(\langle 1 \rangle \) by Lemma 8.1. Thus, in either case, we have \(\langle 1 \rangle \).
By Corollary 9.13, we have all binary succinct signatures \(\langle x,y \rangle \) for any \(x,y \in {\mathbb {C}}\). Then we are done by Lemma 7.16. \(\square \)
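To make the tractability casework concrete, here is a hedged Python sketch (the function names, the numerical tolerance, and the brute-force tensor computation are ours, not the paper's) that tests whether a succinct ternary signature \(\langle a,b,c \rangle \) matches one of the five tractable cases of Theorem 10.1, checking cases 3–5 up to the scalar \(\lambda \) for both choices of \(T\):

```python
import cmath
from itertools import product

TOL = 1e-6

def transform(kappa, gen):
    """Apply T^{(x)3} with T = kappa*I - 2*J to the succinct ternary
    signature gen = <a, b, c> (value a if all three inputs are equal,
    b if exactly two are equal, c if all are distinct), by brute force
    over all kappa^3 input triples."""
    a, b, c = gen
    def f(x, y, z):
        if x == y == z:
            return a
        if x == y or y == z or x == z:
            return b
        return c
    T = [[(kappa - 2) if i == j else -2 for j in range(kappa)]
         for i in range(kappa)]
    def fp(i, j, k):
        return sum(T[i][x] * T[j][y] * T[k][z] * f(x, y, z)
                   for x, y, z in product(range(kappa), repeat=3))
    return (fp(0, 0, 0), fp(0, 0, 1), fp(0, 1, 2))

def proportional(u, v):
    """u = lambda * v for some scalar lambda iff every 2x2 minor of the
    2x3 matrix with rows u, v vanishes."""
    return all(abs(u[i] * v[j] - u[j] * v[i]) < TOL
               for i in range(3) for j in range(3))

def in_tractable_cases(kappa, sig):
    """Return True iff <a,b,c> = sig matches one of the five tractable
    cases of Theorem 10.1 for domain size kappa."""
    a, b, c = sig
    if abs(a - b) < TOL and abs(b - c) < TOL:   # case 1: a = b = c
        return True
    if kappa == 3 and abs(a - c) < TOL:         # case 2: a = c and kappa = 3
        return True
    gens = [(1, 0, 0)]                          # case 3
    if kappa == 3:                              # case 4: <1,0,omega>, omega^3 = 1
        gens += [(1, 0, cmath.exp(2j * cmath.pi * n / 3)) for n in range(3)]
    if kappa == 4:                              # case 5: <mu^2,1,mu>, mu = 1 +/- 2i
        gens += [(mu * mu, 1, mu) for mu in (1 + 2j, 1 - 2j)]
    # Cases 3-5 hold up to a scalar lambda, for T = I or T = kappa*I - 2*J.
    return any(proportional(sig, g) or proportional(sig, transform(kappa, g))
               for g in gens)
```

For instance, at \(\kappa = 3\) the transform of \(\langle 1,0,0 \rangle \) under \(T = 3 I_3 - 2 J_3\) works out to \(\langle -15,-6,12 \rangle \), which the checker accepts under case 3.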
Acknowledgements
We thank Joanna Ellis-Monaghan for bringing [30] to our attention. We are grateful to Mingji Xia, who discussed an early version of this work with us. We are very grateful to Bjorn Poonen and especially Aaron Levin for sharing their expertise on Runge’s method, in particular for the auxiliary function \(g_2(x, y)\) in the proof of Lemma 7.6. We thank William Whistler for helpful discussions on a draft of this work. We also thank the anonymous referees for their helpful comments. All authors were supported by NSF CCF-1217549. The second author was also supported by a Simons Award for Graduate Students in Theoretical Computer Science from the Simons Foundation. The third author was also supported by a Cisco Systems Distinguished Graduate Fellowship.
Vizing’s theorem is for simple graphs. In Holant problems, as well as in counting problems such as graph homomorphism or #CSP, one typically considers multigraphs (i.e., self-loops and parallel edges are allowed). However, our hardness result for counting edge 3-colorings over planar 3-regular multigraphs also holds for simple graphs (Theorem 4.9).
If \(\kappa > 4\), then Lemma 4.4 further implies that these signatures are also 0 on \(P_{{\begin{matrix} 1 &{} 4 \\ 2 &{} 3 \end{matrix}}}\). However, when \(\kappa = 4\), this value might be nonzero; the \({\text {AD}}_{4,4}\) signature is an example. Rather than rely on this \(\kappa \)-dependent observation in our proof, we only construct gadgets whose signatures are 0 on \(P_{{\begin{matrix} 1 &{} 4 \\ 2 &{} 3 \end{matrix}}}\) for every value of \(\kappa \).
Declarations
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
 Arratia, R., Bollobás, B., Sorkin, G.B.: The interlace polynomial: a new graph polynomial. In: SODA, pp. 237–245. Society for Industrial and Applied Mathematics (2000)
 Borgs, C., Chayes, J., Lovász, L., Sós, V.T., Vesztergombi, K.: Counting graph homomorphisms. In: Klazar, M., Kratochvíl, J., Loebl, M., Matoušek, J., Valtr, P., Thomas, R. (eds.) Topics in Discrete Mathematics, volume 26 of Algorithms and Combinatorics, pp. 315–371. Springer, Berlin (2006)
 Brylawski, T., Oxley, J.: The Tutte polynomial and its applications. In: White, N. (ed.) Matroid Applications, pp. 123–225. Cambridge University Press, Cambridge (1992)
 Bulatov, A., Dyer, M., Goldberg, L.A., Jalsenius, M., Richerby, D.: The complexity of weighted Boolean #CSP with mixed signs. Theor. Comput. Sci. 410(38–40), 3949–3961 (2009)
 Bulatov, A., Grohe, M.: The complexity of partition functions. Theor. Comput. Sci. 348(2), 148–186 (2005)
 Bulatov, A.A.: A dichotomy theorem for constraint satisfaction problems on a 3-element set. J. ACM 53(1), 66–120 (2006)
 Bulatov, A.A.: The complexity of the counting constraint satisfaction problem. J. ACM 60(5), 34:1–34:41 (2013)
 Bulatov, A.A., Dalmau, V.: Towards a dichotomy theorem for the counting constraint satisfaction problem. Inform. Comput. 205(5), 651–678 (2007)
 Cai, J.-Y., Chen, X.: Complexity of counting CSP with complex weights. In: STOC, pp. 909–920. ACM (2012)
 Cai, J.-Y., Chen, X., Lipton, R.J., Lu, P.: On tractable exponential sums. In: FAW, pp. 148–159. Springer, Berlin (2010)
 Cai, J.-Y., Chen, X., Lu, P.: Non-negatively weighted #CSP: an effective complexity dichotomy. In: CCC, pp. 45–54. IEEE Computer Society (2011)
 Cai, J.-Y., Chen, X., Lu, P.: Graph homomorphisms with complex values: a dichotomy theorem. SIAM J. Comput. 42(3), 924–1029 (2013)
 Cai, J.-Y., Choudhary, V.: Valiant’s Holant theorem and matchgate tensors. Theor. Comput. Sci. 384(1), 22–32 (2007)
 Cai, J.-Y., Guo, H., Williams, T.: A complete dichotomy rises from the capture of vanishing signatures (extended abstract). In: STOC, pp. 635–644. ACM (2013)
 Cai, J.-Y., Huang, S., Lu, P.: From Holant to #CSP and back: dichotomy for Holant\(^c\) problems. Algorithmica 64(3), 511–533 (2012)
 Cai, J.-Y., Kowalczyk, M.: Spin systems on \(k\)-regular graphs with complex edge functions. Theor. Comput. Sci. 461, 2–16 (2012)
 Cai, J.-Y., Kowalczyk, M., Williams, T.: Gadgets and anti-gadgets leading to a complexity dichotomy. In: ITCS, pp. 452–467. ACM (2012)
 Cai, J.-Y., Lu, P., Xia, M.: Holant problems and counting CSP. In: STOC, pp. 715–724. ACM (2009)
 Cai, J.-Y., Lu, P., Xia, M.: Holographic algorithms with matchgates capture precisely tractable planar #CSP. In: FOCS, pp. 427–436. IEEE Computer Society (2010)
 Cai, J.-Y., Lu, P., Xia, M.: Computational complexity of Holant problems. SIAM J. Comput. 40(4), 1101–1132 (2011)
 Cai, J.-Y., Lu, P., Xia, M.: Holographic reduction, interpolation and hardness. Comput. Complex. 21(4), 573–604 (2012)
 Cai, J.-Y., Lu, P., Xia, M.: Dichotomy for Holant* problems with domain size 3. In: SODA, pp. 1278–1295. SIAM (2013)
 Cai, J.-Y., Lu, P., Xia, M.: Holographic algorithms by Fibonacci gates. Linear Algebra Appl. 438(2), 690–707 (2013)
 Cai, J.-Y., Lu, P., Xia, M.: The complexity of complex weighted Boolean #CSP. J. Comput. Syst. Sci. 80(1), 217–236 (2014)
 Dodson, C.T.J., Poston, T.: Tensor Geometry. Graduate Texts in Mathematics, vol. 130, 2nd edn. Springer, Berlin (1991)
 Dyer, M., Goldberg, L.A., Jerrum, M.: The complexity of weighted Boolean #CSP. SIAM J. Comput. 38(5), 1970–1986 (2009)
 Dyer, M., Greenhill, C.: The complexity of counting graph homomorphisms. Random Struct. Algorithms 17(3–4), 260–289 (2000)
 Dyer, M., Richerby, D.: On the complexity of #CSP. In: STOC, pp. 725–734. ACM (2010)
 Ellis-Monaghan, J.A.: New results for the Martin polynomial. J. Comb. Theory Ser. B 74(2), 326–352 (1998)
 Ellis-Monaghan, J.A.: Identities for circuit partition polynomials, with applications to the Tutte polynomial. Adv. Appl. Math. 32(1–2), 188–197 (2004)
 Faltings, G.: Endlichkeitssätze für abelsche Varietäten über Zahlkörpern. Invent. Math. 73(3), 349–366 (1983)
 Feder, T., Vardi, M.Y.: The computational structure of monotone monadic SNP and constraint satisfaction: a study through Datalog and group theory. SIAM J. Comput. 28(1), 57–104 (1998)
 Gallagher, P.X.: The large sieve and probabilistic Galois theory. In: Proc. Symp. Pure Math., volume 24 of Analytic Number Theory, pp. 91–101. American Mathematical Society (1973)
 Goldberg, L.A., Grohe, M., Jerrum, M., Thurley, M.: A complexity dichotomy for partition functions with mixed signs. SIAM J. Comput. 39(7), 3336–3402 (2010)
 Guo, H., Huang, S., Lu, P., Xia, M.: The complexity of weighted Boolean #CSP modulo \(k\). In: STACS, pp. 249–260. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik (2011)
 Guo, H., Williams, T.: The complexity of planar Boolean #CSP with complex weights. CoRR, abs/1212.2284 (2012)
 Guo, H., Williams, T.: The complexity of planar Boolean #CSP with complex weights. In: ICALP, pp. 516–527. Springer, Berlin (2013)
 Håstad, J.: Tensor rank is NP-complete. J. Algorithms 11(4), 644–654 (1990)
 Holyer, I.: The NP-completeness of edge-coloring. SIAM J. Comput. 10(4), 718–720 (1981)
 Jacobson, N.: Basic Algebra I, 2nd edn. W. H. Freeman & Co., San Francisco (1985)
 Joshi, A.W.: Matrices and Tensors in Physics, revised 3rd edn. New Age International (1995)
 Forney Jr., G.D.: Codes on graphs: normal realizations. IEEE Trans. Inf. Theory 47(2), 520–548 (2001)
 Kowalczyk, M., Cai, J.-Y.: Holant problems for regular graphs with complex edge functions. In: STACS, pp. 525–536. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik (2010)
 Kowalczyk, M., Cai, J.-Y.: Holant problems for regular graphs with complex edge functions. CoRR, arXiv:1001.0464 (2010)
 Leven, D., Galil, Z.: NP completeness of finding the chromatic index of regular graphs. J. Algorithms 4(1), 35–44 (1983)
 Levin, A.: Private communication (2013)
 Loeliger, H.-A.: An introduction to factor graphs. IEEE Signal Process. Mag. 21(1), 28–41 (2004)
 Markov, I.L., Shi, Y.: Simulating quantum computation by contracting tensor networks. SIAM J. Comput. 38(3), 963–981 (2008)
 Martin, P.: Enumérations eulériennes dans les multigraphes et invariants de Tutte-Grothendieck. PhD thesis, Joseph Fourier University (1977). http://tel.archives-ouvertes.fr/tel-00287330
 Müller, P.: Hilbert’s irreducibility theorem for prime degree and general polynomials. Israel J. Math. 109(1), 319–337 (1999)
 Poonen, B.: Private communication (2013)
 Siegel, C.L.: Über einige Anwendungen diophantischer Approximationen. Abh. Preuss. Akad. Wiss. Phys.-Math. Kl., pp. 41–69 (1929)
 Stewart, I.: Galois Theory, 3rd edn. Chapman & Hall/CRC Mathematics Series. Taylor & Francis, London (2003)
 Stiebitz, M., Scheide, D., Toft, B., Favrholdt, L.M.: Graph Edge Coloring: Vizing’s Theorem and Goldberg’s Conjecture. Wiley, New York (2012)
 Tait, P.: Remarks on the colourings of maps. Proc. R. Soc. Edinb. 10, 729 (1880)
 Valiant, L.G.: Quantum circuits that can be simulated classically in polynomial time. SIAM J. Comput. 31(4), 1229–1254 (2002)
 Valiant, L.G.: Holographic algorithms. SIAM J. Comput. 37(5), 1565–1594 (2008)
 Las Vergnas, M.: Le polynôme de Martin d’un graphe eulérien. Ann. Discrete Math. 17, 397–411 (1983)
 Vertigan, D.: The computational complexity of Tutte invariants for planar graphs. SIAM J. Comput. 35(3), 690–712 (2005)
 Vizing, V.G.: Critical graphs with given chromatic class. Metody Diskret. Analiz. 5, 9–17 (1965)
 Walsh, P.G.: A quantitative version of Runge’s theorem on Diophantine equations. Acta Arith. 62(2), 157–172 (1992)
 Welsh, D.: Complexity: Knots, Colourings and Counting. London Mathematical Society Lecture Note Series. Cambridge University Press, Cambridge (1993)