Comparison of Greedy Angle Decompositions: Harmonic, Dyadic, and General Sequences
Victor Geere
March 2026
Abstract
In a companion paper [Geere, 2026a] we introduced the greedy harmonic decomposition of angles, in which a target \(\theta \in [0,\pi]\) is expressed as a sub-sum of the harmonic angles \(\alpha_n = \pi/(n+2)\), and studied the correlation kernel of the resulting selection indicators. An open question posed there asks how the correlation structure depends on the choice of angle sequence. In this paper we give a systematic answer. We define a general greedy decomposition for any positive, decreasing, divergent sequence \((\alpha_n)\) and compare three canonical families: the harmonic sequence \(\alpha_n^{(\mathrm{H})} = \pi/(n+2)\), the dyadic (binary) sequence \(\alpha_n^{(\mathrm{D})} = \pi/2^{n+1}\), and the power-law family \(\alpha_n^{(p)} = \pi/(n+2)^p\) for \(p \in (0,1)\). We prove that the dyadic decomposition admits an explicit closed-form threshold sequence and a product-structure correlation kernel, in sharp contrast with the harmonic case, where thresholds obey a complex recursion. For general decreasing divergent sequences, we establish universal properties—monotonicity of selection, existence of thresholds, positive semi-definiteness of the kernel—and isolate the features that distinguish different sequences: convergence rate, threshold regularity, and spectral decay of the kernel. The power-law interpolation between harmonic (\(p=1\)) and slowly-decreasing (\(p \to 0^+\)) regimes reveals a phase transition in threshold complexity at \(p = 1\).
1. General Framework
- (Decreasing) \(\alpha_0 \geq \alpha_1 \geq \alpha_2 \geq \cdots > 0\).
- (Divergent sum) \(\sum_{n=0}^{\infty} \alpha_n = +\infty\).
- (Bounded start) \(\alpha_0 \leq \pi\).
- \(\delta_n(\theta)\) denotes the selection indicator at index \(n\) for target \(\theta\).
- \(\theta_n^*\) denotes the threshold angle at which \(\delta_n\) jumps from 0 to 1.
- \(K_\alpha(n,m)\) denotes the correlation kernel, where the subscript \(\alpha\) indicates the angle sequence.
- \(D_\alpha(N,\theta) = \sum_{n=0}^{N} \delta_n(\theta)\) counts the number of selected indices up to \(N\).
- \(S_N = \sum_{n=0}^{N} \alpha_n\) denotes the partial sum of the angle sequence.
- Harmonic: \(\alpha_n^{(\mathrm{H})} = \pi/(n+2)\). Partial sums \(S_N^{(\mathrm{H})} = \pi(H_{N+2} - 1) \sim \pi\ln N\). This is the sequence studied in [Geere, 2026a].
- Dyadic: \(\alpha_n^{(\mathrm{D})} = \pi/2^{n+1}\). Partial sums \(S_N^{(\mathrm{D})} = \pi(1 - 2^{-(N+1)}) \to \pi\). This sequence has convergent partial sums, so it is not an angle sequence in the sense of Definition 1.1. We include it because it is the natural binary decomposition, but note that it can only represent angles in \([0,\pi)\) and cannot approximate \(\theta = \pi\) exactly.
- Power-law: \(\alpha_n^{(p)} = \pi/(n+2)^p\) for \(p \in (0,1)\). Partial sums \(S_N^{(p)} \sim \pi N^{1-p}/(1-p)\). This interpolates between slowly-decreasing (\(p \to 0^+\)) and the harmonic boundary (\(p \to 1^-\)).
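The greedy rule implicit in Definition 1.1 (select \(\alpha_n\) whenever it does not exceed the current residual) can be sketched in a few lines; the function and variable names below are ours, not from the companion paper, and exact ties may be lost to floating point.

```python
import math

def greedy(theta, alpha):
    """Greedy decomposition: delta_n = 1 iff alpha[n] fits in the current residual."""
    r, deltas = theta, []
    for a in alpha:
        if r >= a:              # exact ties may be missed in floating point
            deltas.append(1)
            r -= a
        else:
            deltas.append(0)
    return deltas, r

N, theta = 50, 2.0
harmonic = [math.pi / (n + 2) for n in range(N)]
dyadic = [math.pi / 2 ** (n + 1) for n in range(N)]
power_half = [math.pi / (n + 2) ** 0.5 for n in range(N)]

d_h, r_h = greedy(theta, harmonic)
d_d, r_d = greedy(theta, dyadic)
d_p, r_p = greedy(theta, power_half)
print("harmonic :", d_h[:8], round(r_h, 6))
print("dyadic   :", d_d[:8], round(r_d, 6))
print("power 1/2:", d_p[:8], round(r_p, 6))
```

By construction the selected angles plus the residual always reconstitute the target exactly, whichever family is used.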
2. The Dyadic Decomposition
The dyadic (binary) decomposition is the most classical and best-understood angle decomposition. We develop its properties in full to provide a baseline for comparison with the harmonic case.
For \(\theta = \pi\): at step 0, \(r_0 = \pi \geq \pi/2\), so \(\delta_0 = 1\) and \(r_1 = \pi/2\). At step 1, \(r_1 = \pi/2 \geq \pi/4\), so \(\delta_1 = 1\) and \(r_2 = \pi/4\). By induction, \(\delta_n = 1\) and \(r_{n+1} = \pi/2^{n+1}\) for all \(n\).
The smallest \(\theta\) for which \(\delta_n^{(\mathrm{D})}(\theta) = 1\) is achieved when all earlier digits \(\delta_0 = \cdots = \delta_{n-1} = 0\) and the residual at step \(n\) is exactly \(\alpha_n^{(\mathrm{D})}\). This gives \(\theta_n^{*(\mathrm{D})} = \alpha_n^{(\mathrm{D})} = \pi/2^{n+1}\).
To verify: if \(\theta = \pi/2^{n+1}\), then for all \(k < n\), \(\alpha_k^{(\mathrm{D})} = \pi/2^{k+1} > \pi/2^{n+1} = \theta \geq r_k\), so \(\delta_k = 0\) and the residual remains \(\theta\). At step \(n\), \(r_n = \theta = \pi/2^{n+1} = \alpha_n^{(\mathrm{D})}\), so \(\delta_n = 1\).
However, if we instead centre by the constant \(1/2\), the digits become exactly uncorrelated in the \(L^2([0,\pi])\) inner product, since the binary digits of a uniform random variable in \([0,1]\) are independent. We record this observation for comparison in Section 5.
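The binary-expansion interpretation (Theorem 2.2) can be spot-checked numerically: the greedy dyadic selections coincide with the binary digits of \(\theta/\pi\). A minimal sketch (function names are ours):

```python
import math
import random

def greedy_dyadic(theta, N):
    """Greedy selections against alpha_n = pi / 2^(n+1)."""
    r, out = theta, []
    for n in range(N):
        a = math.pi / 2 ** (n + 1)
        if r >= a:
            out.append(1)
            r -= a
        else:
            out.append(0)
    return out

def binary_digits(x, N):
    """First N binary digits of x in [0, 1)."""
    out = []
    for _ in range(N):
        x *= 2
        d = int(x)
        out.append(d)
        x -= d
    return out

random.seed(0)
ok = all(
    greedy_dyadic(t, 20) == binary_digits(t / math.pi, 20)
    for t in (random.uniform(0, math.pi) for _ in range(200))
)
print(ok)
```

Twenty digits and two hundred random targets keep the check comfortably inside floating-point accuracy.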
3. Greedy Decompositions for General Sequences
We now establish the properties that hold universally for any angle sequence satisfying Definition 1.1, and identify the features that depend on the specific choice of sequence.
- Convergence. For every \(\theta \in [0,\pi]\), the accumulated angle \(\theta_N \to \theta\) as \(N \to \infty\).
- Selection monotonicity. For each fixed \(n\), the map \(\theta \mapsto \delta_n(\theta)\) is non-decreasing.
- Threshold existence. For each \(n\), there exists \(\theta_n^* \in [0,\pi]\) such that \(\delta_n(\theta) = \mathbf{1}[\theta \geq \theta_n^*]\).
- Threshold lower bound. \(\theta_n^* \geq \alpha_n\) for all \(n\).
- Positive semi-definite kernel. The matrix \((K_\alpha(n,m))_{n,m}\) is positive semi-definite for any centring of the form \(\delta_n(\theta) - f(\theta)\) with \(f\) measurable.
(a) The residual bound \(r_N < \alpha_{N-1}\) follows by induction from the greedy rule exactly as in [Geere, 2026a, Lemma 3.1]: if \(\delta_{N-1} = 0\), then \(r_N = r_{N-1} < \alpha_{N-1}\) directly from the rule; if \(\delta_{N-1} = 1\), then \(r_N = r_{N-1} - \alpha_{N-1} < \alpha_{N-2} - \alpha_{N-1} \leq \alpha_{N-1}\), using the inductive bound \(r_{N-1} < \alpha_{N-2}\) and the ratio bound \(\alpha_{N-2} \leq 2\alpha_{N-1}\), which holds for all three canonical families. Since \(\alpha_n \to 0\) (automatic when the sum converges, and immediate by inspection for the harmonic and power-law families), \(r_N \to 0\).
(b)–(c) The proof of [Geere, 2026a, Lemma 4.1] uses only the decreasing property of \((\alpha_n)\) and the greedy rule; it applies verbatim to any decreasing sequence.
(d) At the threshold \(\theta = \theta_n^*\), we have \(\theta_n^* = \alpha_n + \theta_n(\theta_n^*)\), where \(\theta_n(\theta) = \sum_{k < n} \delta_k(\theta)\,\alpha_k\) denotes the angle accumulated from the earlier selections, and \(\theta_n(\theta_n^*) \geq 0\).
(e) The quadratic form \(\sum_{n,m} a_n\bar{a}_m K_\alpha(n,m) = \int_0^\pi |\sum_n a_n[\delta_n(\theta) - f(\theta)]|^2\,d\theta \geq 0\).
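Property (e) can be illustrated numerically by assembling the kernel as a quadrature Gram matrix; the sketch below (all names ours) uses the pointwise greedy selections for the harmonic family and checks that no eigenvalue is negative beyond rounding error.

```python
import math
import numpy as np

def greedy_deltas(theta, alpha):
    r, out = theta, []
    for a in alpha:
        if r >= a:
            out.append(1.0)
            r -= a
        else:
            out.append(0.0)
    return out

N, M = 12, 4000                                   # kernel size, quadrature points
alpha = [math.pi / (n + 2) for n in range(N)]     # harmonic angles
thetas = (np.arange(M) + 0.5) * math.pi / M       # midpoint rule on [0, pi]
F = np.array([greedy_deltas(t, alpha) for t in thetas])
F -= (thetas / math.pi)[:, None]                  # centring: delta_n(theta) - theta/pi
K = F.T @ F * (math.pi / M)                       # quadrature approximation of K_alpha
eigs = np.linalg.eigvalsh(K)
print(eigs.min())                                 # non-negative up to rounding
```

Since `K` is literally a Gram matrix of the centred columns, positive semi-definiteness holds by construction, mirroring the integral argument in (e).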
The residual bound \(r_N \leq \alpha_{N-1}\) gives the worst-case reconstruction rate \(\rho(N) = \alpha_{N-1}\) for each family:
- Harmonic: \(\rho_{\mathrm{H}}(N) = \pi/(N+1) = O(1/N)\).
- Dyadic: \(\rho_{\mathrm{D}}(N) = \pi/2^N = O(2^{-N})\).
- Power-law: \(\rho_p(N) = \pi/(N+1)^p = O(N^{-p})\) for \(p \in (0,1)\).
Writing \(d_\alpha(N,\theta) = D_\alpha(N,\theta)/(N+1)\) for the selection density:
- Harmonic: \(d_{\mathrm{H}}(N,\theta) \sim (\theta/\pi)\ln(N+2)/(N+1) \to 0\). The selection count grows logarithmically.
- Dyadic: \(d_{\mathrm{D}}(N,\theta) \to \theta/\pi \in (0,1)\). The selection density converges to a positive constant.
- Power-law (\(p \in (0,1)\)): \(d_p(N,\theta) \sim (\theta/\pi)(1-p)N^{p-1} \to 0\). The selection count grows as \(\Theta(N^{p})\), intermediate between the harmonic and dyadic cases.
Harmonic: \(\theta_N \approx \pi \sum_{n : \delta_n = 1} 1/(n+2)\). Since the selected terms contribute \(\theta/\pi\) of the harmonic sum asymptotically (cf. [Geere, 2026a, Lemma 3.3]), \(D_{\mathrm{H}}(N,\theta) \sim (\theta/\pi)\ln N\).
Dyadic: Given the binary expansion interpretation (Theorem 2.2), the density of 1-digits in the binary expansion of \(\theta/\pi\) depends on the specific value of \(\theta\). However, by the equidistribution of binary digits for Lebesgue-typical \(\theta\), the density is approximately \(1/2\) for typical targets. More precisely, \(D_{\mathrm{D}}(N,\theta)/(N+1)\) converges to the asymptotic density of 1-digits in the binary expansion of \(\theta/\pi\), which exists and equals \(1/2\) for Lebesgue-a.e. \(\theta\) (by the normal number theorem).
Power-law: \(\theta_N \approx \pi \sum_{n : \delta_n = 1}(n+2)^{-p}\). Since the full partial sum grows as \(\pi N^{1-p}/(1-p)\) and the selected terms accumulate to \(\theta\), the selection count satisfies \(D_p(N,\theta) \cdot \bar{\alpha}(N) \sim \theta\), where \(\bar{\alpha}(N) = S_N/(N+1) \sim \pi N^{-p}/(1-p)\) is the average term size up to index \(N\). This gives \(D_p(N,\theta) \sim (\theta/\pi)(1-p)N^{p}\), so \(d_p(N,\theta) \sim (\theta/\pi)(1-p)N^{p-1} \to 0\).
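These heuristics concern the \(N \to \infty\) regime; at moderate \(N\) the counts are small outside the dyadic case. A finite-\(N\) sketch (names ours) that simply tallies \(D_\alpha(N,\theta)\) for one target:

```python
import math

def selection_count(theta, alpha):
    """Number of greedy selections and the final residual."""
    r, count = theta, 0
    for a in alpha:
        if r >= a:
            count += 1
            r -= a
    return count, r

theta, N = 2.0, 400
families = {
    "harmonic": [math.pi / (n + 2) for n in range(N)],
    "dyadic": [math.pi / 2 ** (n + 1) for n in range(N)],
    "p=1/2": [math.pi / math.sqrt(n + 2) for n in range(N)],
}
counts = {name: selection_count(theta, alpha) for name, alpha in families.items()}
for name, (count, r) in counts.items():
    print(name, count, r)
```

The dyadic run selects roughly half of the first 400 indices (the 1-digits of \(2/\pi\)), while the harmonic and \(p = 1/2\) runs make only a handful of selections at this scale.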
4. Threshold Structure Comparison
The threshold angles \(\theta_n^*\) encode the combinatorial complexity of a greedy decomposition. We compare their structure across the three canonical sequences.
- Dyadic: \(\theta_n^{*(\mathrm{D})} = \pi/2^{n+1}\). The thresholds are strictly decreasing, explicitly computable, and satisfy \(\theta_n^{*(\mathrm{D})} = \alpha_n^{(\mathrm{D})}\) (the threshold equals the angle). No interaction between indices.
- Harmonic: \(\theta_n^{*(\mathrm{H})} = \alpha_n^{(\mathrm{H})} + \theta_n(\theta_n^{*(\mathrm{H})})\). The thresholds satisfy a coupled recursion, are non-monotone in \(n\), and satisfy \(\theta_n^{*(\mathrm{H})} \geq \alpha_n^{(\mathrm{H})} = \pi/(n+2)\) with equality holding only when no earlier index is selected at \(\theta = \theta_n^{*(\mathrm{H})}\).
- Power-law (\(p \in (0,1)\)): \(\theta_n^{*(p)} = \alpha_n^{(p)} + \theta_n(\theta_n^{*(p)})\). The thresholds exhibit the same coupled recursion as the harmonic case, with non-monotonicity and interaction between indices. However, since the terms \(\alpha_n^{(p)} = \pi/(n+2)^p\) decrease more slowly than the harmonic terms, the accumulated angle \(\theta_n(\theta_n^{*(p)})\) is smaller relative to \(\alpha_n^{(p)}\) for large \(n\), so \(\theta_n^{*(p)}/\alpha_n^{(p)} \to 1\) more rapidly.
(a) Proved in Lemma 2.3. The key is that the dyadic terms decrease geometrically: \(\alpha_k^{(\mathrm{D})} = 2\alpha_{k+1}^{(\mathrm{D})}\), so at the threshold \(\theta = \pi/2^{n+1}\), all earlier terms exceed \(\theta\) and are not selected. Thus \(\theta_n(\theta_n^{*(\mathrm{D})}) = 0\).
(b) Established in [Geere, 2026a, Sections 4–5]. The non-monotonicity arises because the harmonic terms decrease polynomially: \(\alpha_k^{(\mathrm{H})}/\alpha_{k+1}^{(\mathrm{H})} = (k+3)/(k+2) \to 1\), so at the threshold of a later index, some earlier indices may already be selected, adding to the accumulated angle and pushing the threshold higher than \(\alpha_n^{(\mathrm{H})}\) alone.
(c) For the power-law sequence, the ratio \(\alpha_k^{(p)}/\alpha_{k+1}^{(p)} = ((k+3)/(k+2))^p \to 1\) as \(k \to \infty\), similarly to the harmonic case. The number of earlier indices selected at the threshold is of order \(D_p(n, \theta_n^{*(p)})\), which (by Lemma 3.4) grows sub-linearly. The accumulated angle from these selections is bounded by \(\theta_n^{*(p)}\), creating a self-consistent equation. For \(p < 1\), the terms \(\alpha_k^{(p)}\) decrease more slowly, so fewer selections are needed to accumulate a given angle, and the interaction effect is weaker.
- Dyadic: the excess ratio \(\varepsilon_n = \theta_n^*/\alpha_n - 1\) satisfies \(\varepsilon_n^{(\mathrm{D})} = 0\) for all \(n\): every threshold equals the angle exactly.
- Harmonic: The set \(\{n : \varepsilon_n^{(\mathrm{H})} > 0\}\) has positive density. Moreover, \(\varepsilon_n^{(\mathrm{H})}\) can be arbitrarily large: for any \(C > 0\), there exist indices \(n\) with \(\varepsilon_n^{(\mathrm{H})} > C\).
- Power-law (\(p \in (0,1)\)): For each fixed \(p < 1\), the set \(\{n : \varepsilon_n^{(p)} > C\}\) has density tending to zero as \(C \to \infty\), and the decay is faster for smaller \(p\).
(a) Immediate from Lemma 2.3.
(b) For the harmonic sequence, at the threshold \(\theta_n^{*(\mathrm{H})}\), the accumulated angle from earlier selections is \(\theta_n(\theta_n^{*(\mathrm{H})}) = \sum_{k < n,\, \theta_k^{*(\mathrm{H})} \leq \theta_n^{*(\mathrm{H})}} \alpha_k^{(\mathrm{H})}\). When \(\theta_n^{*(\mathrm{H})}\) is not too small, many earlier indices with small angles (large \(k\)) satisfy \(\theta_k^{*(\mathrm{H})} \leq \theta_n^{*(\mathrm{H})}\) and are therefore selected, contributing a sum that can be much larger than \(\alpha_n^{(\mathrm{H})}\). For instance, the threshold \(\theta_0^{*(\mathrm{H})} = \pi/2\) has \(\varepsilon_0^{(\mathrm{H})} = 0\), but for any index \(n\) whose threshold happens to be near \(\pi/2\), many earlier small-angle terms are selected, producing a large excess.
(c) For the power-law sequence with \(p < 1\), the terms decrease more slowly, so at the threshold of index \(n\), fewer earlier indices have angles smaller than \(\theta_n^{*(p)}\) (since \(\alpha_k^{(p)} = \pi/(k+2)^p\) is larger for a given \(k\) compared to the harmonic case). The accumulated angle from the selected earlier indices grows more slowly relative to \(\alpha_n^{(p)}\), bounding the excess ratio.
- Dyadic (most regular): monotone, explicit, no interaction \((\varepsilon_n = 0)\).
- Power-law \(p \in (0,1)\) (intermediate): non-monotone, coupled recursion, but bounded excess ratios for most indices.
- Harmonic \((p = 1)\) (most complex): non-monotone, coupled recursion, unbounded excess ratios.
- Dyadic: \(\alpha_n/\alpha_{n-1} = 1/2 \not\to 0\), but the thresholds are still monotone because \(\varepsilon_n = 0\) for all \(n\). The geometric decay is fast enough to prevent any earlier selection at the threshold.
- Harmonic: \(\alpha_n/\alpha_{n-1} = (n+1)/(n+2) \to 1\), so the thresholds are non-monotone.
- Super-geometric sequences (e.g., \(\alpha_n = \pi/n!\)): the thresholds are eventually strictly decreasing.
For the dyadic case: \(\alpha_n/\alpha_{n-1} = 1/2\), but \(\theta_n^{*(\mathrm{D})} = \pi/2^{n+1}\) while \(\alpha_{n-1}^{(\mathrm{D})} = \pi/2^n > \theta_n^{*(\mathrm{D})}\) for all \(n\), so no earlier term is selected. The threshold equals \(\alpha_n\) despite the ratio not tending to zero, because the geometric structure ensures exact separation.
5. Correlation Kernel Comparison
We now compare the correlation kernels \(K_\alpha(n,m)\) for the three canonical sequences. Recall from [Geere, 2026a, Definition 4.4] that \[ K_\alpha(n,m) = \int_0^\pi \!\left[\delta_n(\theta) - \frac{\theta}{\pi}\right]\!\left[\delta_m(\theta) - \frac{\theta}{\pi}\right]d\theta. \]
For the diagonal, substituting \(\theta_n^* = \pi/2^{n+1}\):
\[ K_{\mathrm{D}}(n,n) = \frac{(\pi/2^{n+1})^3 + (\pi(1 - 2^{-(n+1)}))^3}{3\pi^2} = \frac{\pi}{3}\left(\frac{1}{2^{3(n+1)}} + \left(1 - \frac{1}{2^{n+1}}\right)^3\right). \]

| Property | Dyadic \(K_{\mathrm{D}}\) | Harmonic \(K_{\mathrm{H}}\) |
|---|---|---|
| Diagonal values | \(K_{\mathrm{D}}(n,n) \to \pi/3\) | \(K_{\mathrm{H}}(n,n) \in [\pi/12, \pi/3]\), oscillating |
| Off-diagonal decay | None: \(K_{\mathrm{D}}(n,m)\) tends to a nonzero constant as \(|n-m| \to \infty\) | Complex: depends on threshold spacing, non-monotone |
| Threshold separation | \(|\theta_n^* - \theta_m^*| = \pi|2^{-(n+1)} - 2^{-(m+1)}|\) | Non-monotone, complex spacing |
| Constant-centred kernel | Diagonal (Proposition 2.7) | Dense, non-diagonal |
| Kernel rank (finite truncation) | Full rank for any \(N\) | Full rank for any \(N\) |
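The diagonal formula can be sanity-checked by quadrature, using the step form \(\delta_n(\theta) = \mathbf{1}[\theta \geq \pi/2^{n+1}]\) substituted above (function names ours):

```python
import math

def kd_diag_exact(n):
    """Closed form: [t^3 + (pi - t)^3] / (3 pi^2) with t = pi / 2^(n+1)."""
    t = math.pi / 2 ** (n + 1)
    return (t ** 3 + (math.pi - t) ** 3) / (3 * math.pi ** 2)

def kd_diag_quad(n, M=100_000):
    """Midpoint-rule quadrature of the defining integral for K_D(n, n)."""
    t = math.pi / 2 ** (n + 1)
    h = math.pi / M
    total = 0.0
    for i in range(M):
        theta = (i + 0.5) * h
        f = (1.0 if theta >= t else 0.0) - theta / math.pi
        total += f * f * h
    return total

print([round(kd_diag_exact(n), 6) for n in range(4)])
```

The \(n = 0\) value is \(\pi/12\), and the diagonal climbs toward \(\pi/3\) as the threshold shrinks, matching the table entry above.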
6. Convergence Rate Comparison
The convergence rate of \(\sin(\theta_N)\) to \(\sin(\theta)\) depends on how quickly the residual \(r_N\) tends to zero. We compare the three families and derive the reconstruction error bounds.
- Dyadic: \(|\sin\theta - \sin\theta_N| \leq \pi/2^{N+1}\). Exponential convergence.
- Harmonic: \(|\sin\theta - \sin\theta_N| \leq \pi/(N+2)\). Polynomial convergence, rate \(O(1/N)\).
- Power-law (\(p \in (0,1)\)): \(|\sin\theta - \sin\theta_N| \leq \pi/(N+1)^p\). Polynomial convergence, rate \(O(N^{-p})\).
- The dyadic decomposition converges exponentially fast but requires exponentially many bits of precision in \(\theta\) to determine each selection. The thresholds are completely regular (monotone, explicit).
- The harmonic decomposition converges polynomially (\(O(1/N)\)) but has complex, non-monotone thresholds with unbounded excess ratios. Its combinatorial structure is much richer.
- The power-law decomposition with \(p < 1\) converges even more slowly (\(O(N^{-p})\)) and has simpler thresholds than the harmonic case (bounded excess ratios for most indices).
- Dyadic: \(N_{\mathrm{D}}(\varepsilon) = \lceil \log_2(\pi/\varepsilon) \rceil = O(\log(1/\varepsilon))\).
- Harmonic: \(N_{\mathrm{H}}(\varepsilon) = \lceil \pi/\varepsilon \rceil - 1 = O(1/\varepsilon)\).
- Power-law: \(N_p(\varepsilon) = \lceil (\pi/\varepsilon)^{1/p} \rceil - 1 = O(\varepsilon^{-1/p})\).
- Dyadic: \(\pi/2^N = \varepsilon \implies N = \log_2(\pi/\varepsilon)\).
- Harmonic: \(\pi/(N+1) = \varepsilon \implies N = \pi/\varepsilon - 1\).
- Power-law: \(\pi/(N+1)^p = \varepsilon \implies N = (\pi/\varepsilon)^{1/p} - 1\).
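The efficiency formulas above translate directly into code; a sketch (names ours) that also confirms each computed \(N(\varepsilon)\) drives the corresponding rate bound below \(\varepsilon\):

```python
import math

def n_dyadic(eps):
    # pi / 2^N <= eps  =>  N = ceil(log2(pi / eps))
    return math.ceil(math.log2(math.pi / eps))

def n_harmonic(eps):
    # pi / (N + 1) <= eps  =>  N = ceil(pi / eps) - 1
    return math.ceil(math.pi / eps) - 1

def n_power(eps, p):
    # pi / (N + 1)^p <= eps  =>  N = ceil((pi / eps)^(1/p)) - 1
    return math.ceil((math.pi / eps) ** (1.0 / p)) - 1

eps = 0.01
print(n_dyadic(eps), n_harmonic(eps), n_power(eps, 0.5))
```

At \(\varepsilon = 0.01\) the dyadic decomposition needs only 9 terms, the harmonic one a few hundred, and the \(p = 1/2\) family nearly \(10^5\), in line with the \(O(\log(1/\varepsilon))\), \(O(1/\varepsilon)\), \(O(\varepsilon^{-1/p})\) scalings.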
7. Spectral Properties
The positive semi-definite kernel matrices \((K_\alpha(n,m))_{n,m=0}^{N}\) have spectra (sets of eigenvalues) whose structure encodes global features of the decomposition. We compare the spectral properties of the dyadic and harmonic kernels.
- The constant-centred kernel \(\widetilde{K}_{\mathrm{D}}\) is \((\pi/4) I\), so all eigenvalues equal \(\pi/4\). The spectral measure is a single atom at \(\pi/4\).
- The linearly-centred kernel \(K_{\mathrm{D}}\) is a low-rank perturbation of the constant-centred one: all but at most two eigenvalues of the \((N+1) \times (N+1)\) matrix \((K_{\mathrm{D}}(n,m))_{n,m=0}^{N}\) equal \(\pi/4\), with at most two outlier eigenvalues carrying the excess trace implied by \(K_{\mathrm{D}}(n,n) \to \pi/3\).
(a) Immediate from Proposition 2.7.
(b) The linearly-centred kernel can be written as \(K_{\mathrm{D}} = \widetilde{K}_{\mathrm{D}} + E\), where \(E\) is the correction matrix arising from changing the centring from the constant \(1/2\) to the linear \(\theta/\pi\). Writing \(g(\theta) = 1/2 - \theta/\pi\) and \(a_n = \int_0^\pi [\delta_n(\theta) - 1/2]\,g(\theta)\,d\theta\), we have \(E(n,m) = a_n + a_m + \|g\|_2^2\) with \(\|g\|_2^2 = \pi/12\); in particular \(E\) has rank at most two. By Weyl's inequality, the eigenvalues of \(K_{\mathrm{D}}\) differ from those of \(\widetilde{K}_{\mathrm{D}} = (\pi/4)I\) by at most \(\|E\|_{\mathrm{op}}\), and since \(E\) has rank at most two, all but at most two eigenvalues of \(K_{\mathrm{D}}\) equal \(\pi/4\) exactly.
- The eigenvalues of \((K_{\mathrm{H}}(n,m))_{n,m=0}^{N}\) are all positive (the matrix is strictly positive definite for any finite \(N\)).
- The largest eigenvalue is \(O(N)\) (at most linear growth), while the smallest eigenvalue is \(\Theta(1)\) (bounded above and bounded away from zero).
- The spectral distribution does not concentrate at a single point, reflecting the non-trivial off-diagonal structure.
(a) Strict positive definiteness follows from the linear independence of the step functions \(\delta_n(\theta) - \theta/\pi\) in \(L^2([0,\pi])\). These are linearly independent because the step functions \(\mathbf{1}[\theta \geq \theta_n^*]\) have distinct jump points (the thresholds are all distinct, since each \(\theta_n^*\) depends on all previous selections in a way that generically produces distinct values).
(b) The trace is \(\sum_{n=0}^{N} K_{\mathrm{H}}(n,n) = \Theta(N)\), since each diagonal entry is \(\Theta(1)\), so the average eigenvalue is \(\Theta(1)\). Because the matrix is positive semi-definite, the largest eigenvalue is bounded above by the trace, giving \(O(N)\).
(c) The non-concentration follows from the non-trivial correlation between different indices: unlike the dyadic case, \(K_{\mathrm{H}}(n,m)\) is not zero or negligible for many pairs \((n,m)\), so the kernel matrix is not close to a scalar times the identity.
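A finite-\(N\) numerical sketch of claims (a)–(b), again via a quadrature Gram matrix with pointwise greedy selections (names ours; finite-size effects are substantial at this scale, so only the robust inequalities are checked):

```python
import math
import numpy as np

def greedy_deltas(theta, alpha):
    r, out = theta, []
    for a in alpha:
        if r >= a:
            out.append(1.0)
            r -= a
        else:
            out.append(0.0)
    return out

N, M = 16, 6000
alpha = [math.pi / (n + 2) for n in range(N)]
thetas = (np.arange(M) + 0.5) * math.pi / M
F = np.array([greedy_deltas(t, alpha) for t in thetas]) - (thetas / math.pi)[:, None]
K = F.T @ F * (math.pi / M)                     # harmonic kernel, linear centring
eigs = np.linalg.eigvalsh(K)
trace = float(np.trace(K))
print("lambda_min:", eigs.min(), "lambda_max:", eigs.max(), "trace:", trace)
```

The Gram construction guarantees non-negative eigenvalues up to rounding, and the largest eigenvalue is dominated by the trace, as in the proof of (b).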
| Spectral property | Dyadic (constant-centred) | Dyadic (linear-centred) | Harmonic (linear-centred) |
|---|---|---|---|
| Eigenvalue spread | All equal (\(\pi/4\)) | All but at most two equal \(\pi/4\), outliers up to \(\Theta(N)\) | Spread over \([\Omega(1), O(N)]\) |
| Condition number | 1 | \(\Omega(N)\) | \(O(N)\) |
| Trace growth | \(\Theta(N)\) | \(\Theta(N)\) | \(\Theta(N)\) |
| Off-diagonal mass | Zero | \(\Theta(N)\) per row | \(\Theta(1)\) per row |
8. Worked Examples and Special Sequences
8.1. First three thresholds
We compute the first three thresholds for each canonical sequence to illustrate the structural differences.
- \(\alpha_0^{(\mathrm{D})} = \pi/2\), \(\theta_0^{*(\mathrm{D})} = \pi/2\).
- \(\alpha_1^{(\mathrm{D})} = \pi/4\), \(\theta_1^{*(\mathrm{D})} = \pi/4\).
- \(\alpha_2^{(\mathrm{D})} = \pi/8\), \(\theta_2^{*(\mathrm{D})} = \pi/8\).
- \(\alpha_0^{(\mathrm{H})} = \pi/2\). At \(\theta = \pi/2\): no previous selections, residual \(= \pi/2 = \alpha_0\). So \(\theta_0^{*(\mathrm{H})} = \pi/2\).
- \(\alpha_1^{(\mathrm{H})} = \pi/3\). At \(\theta = \pi/3\): since \(\pi/3 < \pi/2 = \theta_0^{*(\mathrm{H})}\), index 0 is not selected. Residual \(= \pi/3 = \alpha_1\). So \(\theta_1^{*(\mathrm{H})} = \pi/3\). (Note: \(\theta_1^{*(\mathrm{H})} < \theta_0^{*(\mathrm{H})}\).)
- \(\alpha_2^{(\mathrm{H})} = \pi/4\). At \(\theta = \pi/4\): since \(\pi/4 < \pi/3 = \theta_1^{*(\mathrm{H})}\), neither index 0 nor 1 is selected. Residual \(= \pi/4 = \alpha_2\). So \(\theta_2^{*(\mathrm{H})} = \pi/4\).
- \(\alpha_0^{(1/2)} = \pi/\sqrt{2} \approx 2.221\). Since \(\pi/\sqrt{2} < \pi\), the bounded-start condition of Definition 1.1 is satisfied, and no renormalisation is needed.
- \(\theta_0^{*(1/2)} = \pi/\sqrt{2}\). At \(\theta = \pi/\sqrt{2}\): residual \(= \pi/\sqrt{2} = \alpha_0\). Selected.
- \(\alpha_1^{(1/2)} = \pi/\sqrt{3} \approx 1.814\). At \(\theta = \pi/\sqrt{3}\): since \(\pi/\sqrt{3} < \pi/\sqrt{2}\), index 0 is not selected. Residual \(= \pi/\sqrt{3} = \alpha_1\). So \(\theta_1^{*(1/2)} = \pi/\sqrt{3}\).
- \(\alpha_2^{(1/2)} = \pi/2\). At \(\theta = \pi/2\): since \(\pi/2 < \pi/\sqrt{3}\), neither index is selected. Residual \(= \pi/2 = \alpha_2\). So \(\theta_2^{*(1/2)} = \pi/2\).
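The first thresholds computed above (harmonic shown; the other families are analogous) can be cross-checked by a grid search for the smallest target at which each index is selected; `min_selection_angle` is our illustrative name.

```python
import math

def greedy_deltas(theta, alpha):
    r, out = theta, []
    for a in alpha:
        if r >= a:
            out.append(1)
            r -= a
        else:
            out.append(0)
    return out

def min_selection_angle(n, alpha, grid=20000):
    """Smallest grid target theta at which index n is selected."""
    for i in range(grid + 1):
        theta = i * math.pi / grid
        if greedy_deltas(theta, alpha)[n] == 1:
            return theta
    return None

harmonic = [math.pi / (k + 2) for k in range(6)]
approx = [min_selection_angle(n, harmonic) for n in range(3)]
print([round(a, 6) for a in approx])
```

The scan recovers \(\pi/2\), \(\pi/3\), \(\pi/4\) to within the grid spacing \(\pi/20000\), matching the worked values.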
8.2. Interaction effects in the harmonic case
- \(\theta_0^* = \pi/2 \approx 1.571\)
- \(\theta_1^* = \pi/3 \approx 1.047\)
- \(\theta_2^* = \pi/4 \approx 0.785\)
- \(\theta_3^* = \pi/5 \approx 0.628\)
- \(\theta_4^* = \pi/6 \approx 0.524\)
At \(n = 5\), \(\alpha_5 = \pi/7 \approx 0.449\). At \(\theta = \pi/7\): still below all previous thresholds, so \(\theta_5^* = \pi/7\).
The interaction effects become significant for larger \(n\), when the target angle \(\theta_n^*\) is large enough for some earlier small-angle terms to be selected. The first instance of a threshold exceeding the simple bound occurs when the greedy selections at the threshold create a nonzero accumulated angle.
8.3. A rapidly-decreasing sequence
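A numerical sketch for the super-geometric sequence \(\alpha_n = \pi/n!\) from Section 4 (a convergent-sum sequence, hence outside Definition 1.1, like the dyadic case; function names ours): for a typical target the residual stalls at a positive value, so rapid decay trades representability for threshold regularity.

```python
import math

def greedy_residuals(theta, alpha):
    """Residual after processing each index under the greedy rule."""
    r, res = theta, []
    for a in alpha:
        if r >= a:
            r -= a
        res.append(r)
    return res

# super-geometric sequence alpha_n = pi / n!  (convergent sum, cf. Section 4)
factorial_seq = [math.pi / math.factorial(n) for n in range(12)]
res = greedy_residuals(2.0, factorial_seq)
print([round(x, 6) for x in res])
```

For \(\theta = 2.0\) the residual freezes near \(0.267\): the tail \(\sum_{k \geq n} \pi/k!\) is soon smaller than the remaining gap, so no later term can be selected.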
9. Discussion
9.1. Summary of the comparison
The following table summarises the main comparison axes:
| Feature | Dyadic | Harmonic | Power-law \((p \in (0,1))\) |
|---|---|---|---|
| Sum convergence | \(\sum \alpha_n = \pi\) (convergent) | \(\sum \alpha_n = \infty\) | \(\sum \alpha_n = \infty\) |
| Reconstruction rate | \(O(2^{-N})\) | \(O(1/N)\) | \(O(N^{-p})\) |
| Efficiency \(N(\varepsilon)\) | \(O(\log(1/\varepsilon))\) | \(O(1/\varepsilon)\) | \(O(\varepsilon^{-1/p})\) |
| Selection density | \(\to \theta/\pi\) (const) | \(\sim (\theta/\pi)\frac{\ln N}{N} \to 0\) | \(\sim (\theta/\pi) N^{p-1} \to 0\) |
| Threshold sequence | Monotone, explicit | Non-monotone, recursive | Non-monotone, recursive |
| Excess ratio \(\varepsilon_n\) | 0 for all \(n\) | Unbounded | Bounded for most \(n\) |
| Kernel (constant-centred) | Diagonal | Dense | Dense |
| Kernel (linear-centred) | Near-diagonal | Dense, oscillatory | Dense |
| Spectral condition number | \(O(1)\) | \(O(N)\) | Intermediate |
9.2. The harmonic boundary
A recurring theme is that the harmonic sequence \(\alpha_n = \pi/(n+2)\) sits at a critical boundary:
- It is the borderline case for divergence of \(\sum \alpha_n\) among the power-law family \(\alpha_n = \pi/(n+2)^p\): the sum diverges for \(p \leq 1\) and converges for \(p > 1\).
- The threshold complexity is maximal at \(p = 1\): the excess ratios \(\varepsilon_n = \theta_n^*/\alpha_n - 1\) are unbounded for the harmonic sequence but bounded (for most \(n\)) for \(p < 1\).
- The convergence rate \(O(1/N)\) is the fastest polynomial rate in the power-law family that is achieved by a divergent-sum sequence.
This boundary character makes the harmonic decomposition the most interesting from a structural standpoint: it has the richest combinatorial features while still providing useful convergence.
9.3. The role of centring
The choice of centring function in the correlation kernel has a significant effect on the structure:
- Constant centring (\(\delta_n - \bar\delta_n\)) yields a diagonal kernel for the dyadic decomposition (Proposition 2.7) but a dense kernel for the harmonic decomposition. This is because the dyadic digits are independent (as a family of random variables under Lebesgue measure), while the harmonic greedy selections are not.
- Linear centring (\(\delta_n - \theta/\pi\)) yields a non-diagonal kernel even for the dyadic decomposition (Theorem 5.1), because the linear function \(\theta/\pi\) is not the conditional mean of \(\delta_n\) given the other digits. However, linear centring is the natural choice for measuring how the selection count deviates from the "expected" count \(\theta/\pi\), and it is the centring used in [Geere, 2026a].
For the harmonic decomposition, both centrings produce dense kernels, and neither reveals a diagonal structure. This is a genuine structural feature of the harmonic case, not an artefact of the centring choice.
9.4. Open questions
This comparison suggests several further questions:
- Optimal angle sequences. Among all angle sequences with a given convergence rate \(\rho(N)\), which one has the simplest threshold structure (e.g., minimises the total excess \(\sum_{n=0}^{N} \varepsilon_n\))? Is there a trade-off between threshold simplicity and other desirable properties?
- Transition at \(p = 1\). The passage from \(p < 1\) (bounded excess, simpler thresholds) to \(p = 1\) (unbounded excess, complex thresholds) is qualitative. What is the quantitative behaviour of the excess ratios \(\varepsilon_n^{(p)}\) as \(p \to 1^-\)? Is there a scaling limit?
- Natural centring for general sequences. For the dyadic decomposition, the natural centring is the constant \(1/2\) (which diagonalises the kernel). For a general angle sequence, is there a centring function \(f_\alpha(\theta)\) that minimises the off-diagonal mass of the kernel \(K_\alpha\)?
- Non-greedy decompositions. All decompositions considered here are greedy. How do non-greedy selection rules (e.g., alternating, random, or optimisation-based) compare in terms of threshold structure and kernel properties?
References
- Geere, V. (2026a). On the Correlation Structure of a Greedy Harmonic Decomposition of the Sine Function. [link]
- Geere, V. (2020). Geometric Sine Construction. [link]
- Geere, V. (2026b). A Harmonic Reconstruction of the Sine Function and Its Relation to the Riemann Zeta Function. [link]
- Graham, R.L. (1964). On a conjecture of Erdős in additive number theory. Acta Arithmetica, 10, 63–70.
- Hardy, G.H. & Wright, E.M. (2008). An Introduction to the Theory of Numbers. 6th ed., Oxford University Press.
- Borel, É. (1909). Les probabilités dénombrables et leurs applications arithmétiques. Rend. Circ. Mat. Palermo, 27, 247–271.