Exact Threshold Computation for the Greedy Harmonic Decomposition
Victor Geere
March 2026
Table of Contents
- Abstract
- 1. Introduction
- 2. Notation and Recollections
- 3. The Threshold Recursion
- 4. Exact Computation for Small Indices
- 5. The Selection Tree and Interval Partition
- 6. Fixed-Point Characterisation of Thresholds
- 7. Asymptotic Analysis of the Rescaled Thresholds
- 8. Computational Complexity of the Threshold Sequence
- 9. Consequences for the Correlation Kernel
- 10. Discussion and Open Questions
- References
Abstract
We investigate the exact computation of the threshold angles \(\theta_n^*\) in the greedy harmonic decomposition of [Geere, 2026]. For each index \(n \geq 0\), the threshold \(\theta_n^*\) is the minimal target angle \(\theta \in [0,\pi]\) at which the greedy algorithm first selects the harmonic angle \(\alpha_n = \pi/(n+2)\). We derive an explicit recursive formula expressing \(\theta_n^*\) in terms of all preceding thresholds, via the combinatorial structure of a binary selection tree whose nodes correspond to the \(2^n\) possible selection histories through the first \(n\) steps. The recursion takes the form of a fixed-point equation: \(\theta_n^* = \alpha_n + \Phi_n(\theta_n^*)\), where \(\Phi_n\) is a piecewise-linear, non-decreasing function determined by the thresholds \(\theta_0^*, \ldots, \theta_{n-1}^*\). We prove that this equation has a unique solution, computable in \(O(n)\) arithmetic operations, and that for the harmonic sequence the recursion collapses completely: \(\theta_n^* = \alpha_n\) for every \(n\), so the rescaled thresholds \(\tau_n = (n+2)\theta_n^*/\pi\) equal \(1\) identically. We verify this exactly for \(n \leq 10\) as rational multiples of \(\pi\), develop the fixed-point theory for general angle sequences (constant sequences, for instance, produce linearly growing thresholds), and identify strict monotonicity of the angle sequence as the mechanism behind the collapse. These results yield explicit formulas for the correlation kernel \(K(n,m)\) and resolve several questions left open in [Geere, 2026].
1. Introduction
In [Geere, 2026], a greedy algorithm was introduced that decomposes a target angle \(\theta \in [0,\pi]\) into a sub-sum of the harmonic angles \(\alpha_n = \pi/(n+2)\). The algorithm produces binary selection indicators \(\delta_n(\theta) \in \{0,1\}\), and the selection monotonicity theorem ([Geere, 2026, Lemma 4.1]) shows that each \(\delta_n(\theta)\) is a step function of \(\theta\) with a well-defined threshold \(\theta_n^*\): the indicator satisfies \(\delta_n(\theta) = 0\) for \(\theta < \theta_n^*\) and \(\delta_n(\theta) = 1\) for \(\theta \geq \theta_n^*\).
The threshold sequence \((\theta_n^*)_{n \geq 0}\) is the fundamental combinatorial invariant of the decomposition. The correlation kernel \(K(n,m)\) is determined entirely by the thresholds ([Geere, 2026, Lemmas 4.6–4.7]), and the spectral properties of the kernel matrix depend on their distribution. Yet the thresholds themselves are defined only implicitly through the greedy recursion, and the first open question posed in [Geere, 2026] asks whether a closed-form or efficient recursive formula exists.
The difficulty is that \(\theta_n^*\) depends on the accumulated angle at step \(n\) when the target is exactly \(\theta_n^*\), which in turn depends on which earlier indices were selected—and those selections depend on the thresholds \(\theta_0^*, \ldots, \theta_{n-1}^*\). This creates a recursive dependency of exponentially growing combinatorial complexity: the selection history at step \(n\) is one of \(2^n\) possible binary strings, and the threshold \(\theta_n^*\) must be self-consistent with the history it induces.
In this paper we resolve this dependency explicitly. We show that the \(2^n\) possible histories collapse into a tractable structure via the ordering of thresholds, and we derive a recursive formula of polynomial (not exponential) complexity. The key insight is that the accumulated angle \(\Phi_n(\theta)\) at step \(n\), viewed as a function of the target \(\theta\), is piecewise-linear with breakpoints at the thresholds \(\theta_0^*, \ldots, \theta_{n-1}^*\). The threshold \(\theta_n^*\) is then the unique solution of the scalar equation \(\theta_n^* - \Phi_n(\theta_n^*) = \alpha_n\), which can be solved in \(O(n)\) arithmetic operations given all prior thresholds.
2. Notation and Recollections
3. The Threshold Recursion
The core of this paper is the derivation of an explicit formula for \(\Phi_n(\theta)\) in terms of the prior thresholds.
- (a) \(\Phi_n(0) = 0\).
- (b) \(\Phi_n(\pi) = \sum_{k=0}^{n-1}\alpha_k = \pi(H_{n+1} - 1)\).
- (c) At each breakpoint \(\theta_k^*\) (for \(k < n\)), the function \(\Phi_n\) has a jump discontinuity: it increases by \(\alpha_k = \pi/(k+2)\) as \(\theta\) crosses \(\theta_k^*\) from below (coinciding thresholds contribute additively to a single jump).
This is a non-decreasing step function (hence piecewise constant, which is a special case of piecewise linear) with jumps of size \(\alpha_k\) at points \(\theta_k^*\). Parts (a)–(c) follow directly.
By the threshold equation \(\theta_n^* = \alpha_n + \Phi_n(\theta_n^*)\) and Lemma 3.1:
\[ \Phi_n(\theta_n^*) = \sum_{k=0}^{n-1}\alpha_k\,\mathbf{1}[\theta_k^* \leq \theta_n^*] = \sum_{\substack{k=0 \\ \theta_k^* \leq \theta_n^*}}^{n-1}\alpha_k. \]
Substituting gives the stated formula.
The threshold equation \(\theta_n^* = \alpha_n + \sum_{k:\theta_k^* \leq \theta_n^*}\alpha_k\) can be interpreted as follows: starting with \(\alpha_n\), we add in the angles \(\alpha_k\) for prior thresholds below our running total, in order of increasing threshold.
Existence: Starting with \(A_0 = \alpha_n\), if \(A_0 < \theta_{\sigma(0)}^*\) then no prior threshold is below our estimate and \(\theta_n^* = A_0 = \alpha_n\). Otherwise, \(\theta_{\sigma(0)}^* \leq A_0\), so index \(\sigma(0)\) should be included, giving \(A_1 = \alpha_n + \alpha_{\sigma(0)}\). Continue: if \(A_1 < \theta_{\sigma(1)}^*\), stop with \(\theta_n^* = A_1\); otherwise include \(\sigma(1)\) and form \(A_2\), and so on. The process terminates after at most \(n\) steps, since each of the \(n\) prior indices is considered exactly once.
Uniqueness: Suppose stopping indices \(j_1 < j_2\) both satisfy the criterion. Stopping at \(j_1\) means \(A_{j_1} < \theta_{\sigma(j_1)}^*\). But reaching step \(j_2 > j_1\) requires that index \(\sigma(j_1)\) was included, i.e., \(\theta_{\sigma(j_1)}^* \leq A_{j_1}\), a contradiction. Thus the stopping index, and hence the value \(\theta_n^*\), is unique.
- Sort the prior thresholds: obtain the permutation \(\sigma\) with \(\theta_{\sigma(0)}^* \leq \cdots \leq \theta_{\sigma(n-1)}^*\).
- Set \(A \leftarrow \alpha_n = \pi/(n+2)\).
- For \(j = 0, 1, \ldots, n-1\):
- If \(A < \theta_{\sigma(j)}^*\), return \(\theta_n^* = A\).
- Else, set \(A \leftarrow A + \alpha_{\sigma(j)}\).
- Return \(\theta_n^* = A\).
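Algorithm 3.5 translates directly into code. The Python sketch below is ours, not from [Geere, 2026]; it represents each prior index by a pair (threshold, angle) so that the sorted scan can add the correct angle \(\beta_{\sigma(j)}\) at each inclusion.

```python
import math

def next_threshold(beta_n, priors):
    """One step of Algorithm 3.5: resolve theta_n* = beta_n + (sum of the
    angles of prior indices whose thresholds lie below the estimate).

    priors is a list of (theta_k_star, beta_k) pairs for k < n."""
    A = beta_n                               # initial estimate A_0
    for theta_k, beta_k in sorted(priors):   # non-decreasing threshold order
        if A < theta_k:                      # first threshold above estimate:
            return A                         # no further index can be included
        A += beta_k                          # include index k
    return A                                 # all prior indices included

def threshold_sequence(betas):
    """Compute theta_0*, ..., theta_N* for the angle sequence betas."""
    priors, thresholds = [], []
    for b in betas:
        t = next_threshold(b, priors)
        thresholds.append(t)
        priors.append((t, b))
    return thresholds

# For strictly decreasing angles the scan stops at the very first check,
# so the harmonic thresholds are the angles themselves.
harmonic_thresholds = threshold_sequence([math.pi / (n + 2) for n in range(20)])
# A constant sequence (here c = 0.25) exercises the full recursion instead.
constant_thresholds = threshold_sequence([0.25] * 10)
```

The two demo calls illustrate the extremes discussed later: early termination for strictly decreasing angles, and full inclusion of every prior index for constant angles.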
4. Exact Computation for Small Indices
We now apply the recursion to compute exact threshold values for the first several indices, verifying against and extending the results in [Geere, 2026, Lemma 5.1].
| \(n\) | \(\alpha_n = \pi/(n+2)\) | \(\theta_n^*\) | \(\tau_n = \theta_n^*/\alpha_n\) | \(S_n\) (prior indices included) |
|---|---|---|---|---|
| 0 | \(\pi/2\) | \(\pi/2\) | 1 | \(\emptyset\) |
| 1 | \(\pi/3\) | \(\pi/3\) | 1 | \(\emptyset\) |
| 2 | \(\pi/4\) | \(\pi/4\) | 1 | \(\emptyset\) |
| 3 | \(\pi/5\) | \(\pi/5\) | 1 | \(\emptyset\) |
| 4 | \(\pi/6\) | \(\pi/6\) | 1 | \(\emptyset\) |
| 5 | \(\pi/7\) | \(\pi/7\) | 1 | \(\emptyset\) |
| 6 | \(\pi/8\) | \(\pi/8\) | 1 | \(\emptyset\) |
| 7 | \(\pi/9\) | \(\pi/9\) | 1 | \(\emptyset\) |
| 8 | \(\pi/10\) | \(\pi/10\) | 1 | \(\emptyset\) |
| 9 | \(\pi/11\) | \(\pi/11\) | 1 | \(\emptyset\) |
| 10 | \(\pi/12\) | \(\pi/12\) | 1 | \(\emptyset\) |
We apply Algorithm 3.5 sequentially.
\(n=0\): No prior thresholds. \(S_0 = \emptyset\). \(\theta_0^* = \alpha_0 = \pi/2\). ✓
\(n=1\): Prior thresholds: \(\theta_0^* = \pi/2\). Start with \(A = \alpha_1 = \pi/3\). Check: is \(\pi/3 < \theta_0^* = \pi/2\)? Yes. So \(\theta_1^* = \pi/3\), \(S_1 = \emptyset\). ✓
\(n=2\): Prior thresholds sorted: \(\theta_1^* = \pi/3 \leq \theta_0^* = \pi/2\). Start with \(A = \pi/4\). Check: is \(\pi/4 < \theta_1^* = \pi/3\)? Yes. So \(\theta_2^* = \pi/4\), \(S_2 = \emptyset\). ✓
General pattern for small \(n\): The harmonic angles \(\alpha_n = \pi/(n+2)\) are strictly decreasing. When we start Algorithm 3.5 at step \(n\) with \(A = \alpha_n = \pi/(n+2)\), the smallest prior threshold is \(\theta_{n-1}^*\). If \(\theta_{n-1}^* \geq \alpha_{n-1} = \pi/(n+1) > \pi/(n+2) = \alpha_n = A\), then the algorithm stops immediately with \(\theta_n^* = \alpha_n\) and \(S_n = \emptyset\). The stopping condition is that every prior threshold satisfies \(\theta_k^* > \alpha_n\); granted \(\theta_k^* \geq \alpha_k\), this follows from \(\alpha_k = \pi/(k+2) > \pi/(n+2) = \alpha_n\), which holds strictly for every \(k < n\).
More carefully: the condition for \(\theta_n^* = \alpha_n\) (i.e., \(S_n = \emptyset\)) is that \(\alpha_n < \theta_k^*\) for all \(k < n\). By induction, if \(\theta_k^* = \alpha_k\) for all \(k < n\), then we need \(\alpha_n < \alpha_k\) for all \(k < n\), i.e., \(\pi/(n+2) < \pi/(k+2)\) for \(k = 0, \ldots, n-1\). This holds since \(n+2 > k+2\). So \(\theta_n^* = \alpha_n\) and \(\tau_n = 1\) for all \(n\) in this initial regime.
This analysis assumed \(\theta_k^* = \alpha_k\) for all prior \(k\), and the assumption is self-sustaining: by induction, \(\theta_n^* = \alpha_n\) for all \(n \geq 0\). This is a striking result that contradicts the expectation (hinted at in [Geere, 2026, Remark 5.4]) that the thresholds might deviate from \(\alpha_n\). We record it as the following theorem.
We prove by strong induction on \(n\).
Base case. \(\theta_0^* = \alpha_0 = \pi/2\). ✓
Inductive step. Assume \(\theta_k^* = \alpha_k = \pi/(k+2)\) for all \(k < n\). By Algorithm 3.5, we start with \(A = \alpha_n = \pi/(n+2)\) and must check whether \(A < \theta_{\sigma(0)}^*\), where \(\sigma\) sorts the prior thresholds in non-decreasing order. The smallest prior threshold is \(\theta_{n-1}^* = \alpha_{n-1} = \pi/(n+1)\) (since the prior thresholds, by inductive hypothesis, are exactly the harmonic angles \(\alpha_0 > \alpha_1 > \cdots > \alpha_{n-1}\), so the sorted order reverses the indices).
We have \(A = \pi/(n+2) < \pi/(n+1) = \theta_{\sigma(0)}^*\). The algorithm stops immediately: \(\theta_n^* = \pi/(n+2) = \alpha_n\). The set \(S_n = \emptyset\). ✓
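Since each \(\theta_n^*\) is a rational multiple of \(\pi\), the induction can also be checked mechanically by running Algorithm 3.5 on the coefficients of \(\pi\) in exact arithmetic. The helper below is our own illustrative sketch:

```python
from fractions import Fraction

def threshold_coeffs(N):
    """theta_n*/pi for n = 0..N, via Algorithm 3.5 on exact rationals.

    The harmonic angle alpha_n = pi/(n+2) is represented by Fraction(1, n+2)."""
    priors, out = [], []        # priors: (threshold/pi, alpha/pi) pairs
    for n in range(N + 1):
        A = Fraction(1, n + 2)  # initial estimate alpha_n / pi
        for t, a in sorted(priors):
            if A < t:           # smallest prior threshold already above A
                break
            A += a
        out.append(A)
        priors.append((A, Fraction(1, n + 2)))
    return out

coeffs = threshold_coeffs(10)   # matches the table: 1/2, 1/3, ..., 1/12
```

The output agrees entry for entry with the table of Section 4.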
5. The Selection Tree and Interval Partition
The threshold identity of Theorem 4.3 induces a clean partition of the target interval \([0,\pi]\) into selection regions. We now describe this partition explicitly.
We must verify that the selection at step \(n\) is well-defined and agrees with the threshold criterion, accounting for all prior accumulations. It is not sufficient to check thresholds alone; we must verify that the residual \(r_n(\theta) = \theta - \Phi_n(\theta)\) satisfies \(r_n(\theta) \geq \alpha_n\) precisely when \(\theta \geq \alpha_n\).
Let \(n_0(\theta) = \lceil\pi/\theta - 2\rceil\) be the first selected index. For \(n < n_0(\theta)\), no selection occurs: \(\delta_n(\theta) = 0\) and \(\Phi_n(\theta) = 0\). At step \(n_0\), the residual is \(r_{n_0}(\theta) = \theta \geq \alpha_{n_0} = \pi/(n_0+2)\) (by definition of \(n_0\)). So \(\delta_{n_0}(\theta) = 1\) and \(\Phi_{n_0+1}(\theta) = \alpha_{n_0}\).
For \(n > n_0\), we must track the residual through all prior selections. We have:
\[ \Phi_{n}(\theta) = \sum_{k=n_0}^{n-1}\alpha_k = \pi\sum_{k=n_0}^{n-1}\frac{1}{k+2} = \pi(H_{n+1} - H_{n_0+1}). \]
The residual is \(r_n(\theta) = \theta - \pi(H_{n+1} - H_{n_0+1})\). We need \(r_n(\theta) \geq \alpha_n = \pi/(n+2)\), i.e.,
\[ \theta \geq \pi(H_{n+1} - H_{n_0+1}) + \frac{\pi}{n+2} = \pi(H_{n+2} - H_{n_0+1}). \]
Here we have used the hypothesis that all indices from \(n_0\) through \(n-1\) were selected, so the residual at step \(n\) is \(\theta - \pi(H_{n+1} - H_{n_0+1})\). As long as this residual remains at least \(\alpha_n\), selection continues.
More precisely, we need to verify that the greedy algorithm never "skips" an index after \(n_0\). If all indices from \(n_0\) through \(n-1\) are selected, the residual at step \(n\) is:
\[ r_n(\theta) = \theta - \sum_{k=n_0}^{n-1}\frac{\pi}{k+2}. \]
By the greedy residual bound [Geere, 2026, Lemma 3.1], \(0 \leq r_n(\theta) \leq \alpha_{n-1} = \pi/(n+1)\). Index \(n\) is selected precisely when \(r_n \geq \alpha_n = \pi/(n+2)\), i.e., when \(r_n\) lies in the non-empty interval \([\pi/(n+2), \pi/(n+1)]\). Since the sum \(\sum_{k=n_0}^{n-1}\pi/(k+2)\) approaches \(\theta\) as \(n \to \infty\), there will eventually be a step where \(r_n < \alpha_n\), at which point index \(n\) is skipped and the residual carries forward unchanged to the next step.
The claim that all indices \(n \geq n_0\) are selected is therefore an oversimplification. The actual selection pattern after the first selection involves a complex interplay of selected and skipped indices, governed by the greedy residual dynamics. The correct statement is only that the threshold for each index is \(\theta_n^* = \alpha_n\); the selection pattern for a given \(\theta\) involves additional structure beyond the thresholds.
However, this apparent contradiction is resolved by noting that the threshold \(\theta_n^*\) already accounts for the accumulated angle: \(\theta_n^*\) is the minimal \(\theta\) such that the greedy residual at step \(n\) is \(\geq \alpha_n\), not just the target itself. The proof of Theorem 4.3 shows that at \(\theta = \alpha_n\), the accumulated angle is zero (no prior index is selected at this target), so the residual equals the target. For larger \(\theta\), prior indices may also be selected, but selection monotonicity ([Geere, 2026, Lemma 4.1]) guarantees that once the residual exceeds \(\alpha_n\) at \(\theta = \theta_n^*\), it continues to do so for all larger \(\theta\), regardless of the prior selection pattern.
However, not all \(2^n\) histories are realisable. The greedy constraint ensures that the history is determined by \(\theta\): there is no choice. Since each threshold \(\theta_n^*\) depends on the prior accumulated angle (which is determined by \(\theta\)), and the thresholds are simply \(\alpha_n\), the branching at each depth creates a partition of \([0,\pi]\) into at most \(n+1\) intervals (not \(2^n\)).
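The greedy dynamics of this section are easy to simulate. The sketch below is our own code (the target \(\theta = 1\) is an arbitrary choice); it checks the formula \(n_0(\theta) = \lceil\pi/\theta - 2\rceil\) for the first selected index and the residual bound \(0 \leq r_n \leq \alpha_{n-1}\):

```python
import math

def greedy_select(theta, N):
    """Run the greedy decomposition for N steps: select index n exactly
    when the current residual is at least alpha_n = pi/(n+2)."""
    r, picks = theta, []
    for n in range(N):
        alpha = math.pi / (n + 2)
        if r >= alpha:        # residual large enough: select index n
            picks.append(n)
            r -= alpha
        # otherwise skip; the residual carries forward unchanged
    return picks, r

theta = 1.0
picks, residual = greedy_select(theta, 1000)
n0 = math.ceil(math.pi / theta - 2)   # predicted first selected index
```

For \(\theta = 1\) the first selected index is \(n_0 = 2\) (since \(\pi/3 > 1 \geq \pi/4\)), and after 1000 steps the residual is below \(\alpha_{999} = \pi/1001\), as the residual bound requires.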
6. Fixed-Point Characterisation of Thresholds
Although the threshold identity \(\theta_n^* = \alpha_n\) resolves the recursion trivially in the case of the standard harmonic angles, we develop the fixed-point theory in generality, as it applies to any decreasing sequence \((\alpha_n)\) with divergent sum.
- Repeated harmonic: \(\beta_{2n} = \beta_{2n+1} = \pi/(n+2)\). Here ties create non-trivial thresholds.
- Perturbed harmonic: \(\beta_n = \pi/(n+2) + \varepsilon_n\) with small perturbations that destroy strict monotonicity.
- Constant sequences: \(\beta_n = c\). Here every threshold beyond the first strictly exceeds \(c\), and the full recursive structure of Algorithm 3.5 is needed.
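The repeated-harmonic example already exhibits non-trivial behaviour, and Algorithm 3.5 makes it concrete. In the sketch below (our own code, run in exact rational arithmetic on coefficients of \(\pi\)), the tie \(\beta_0 = \beta_1 = \pi/2\) forces \(\theta_1^* = \pi\), and \(\theta_3^*\) even exceeds \(\pi\), so index 3 is never selected for targets in \([0,\pi]\):

```python
from fractions import Fraction

def repeated_harmonic_coeffs(N):
    """theta_n*/pi for the repeated harmonic angles
    beta_{2n} = beta_{2n+1} = pi/(n+2), via Algorithm 3.5."""
    betas = [Fraction(1, n // 2 + 2) for n in range(N + 1)]
    priors, out = [], []
    for b in betas:
        A = b
        for t, a in sorted(priors):
            if A < t:         # first prior threshold above the estimate
                break
            A += a            # include this prior index
        out.append(A)
        priors.append((A, b))
    return out

coeffs = repeated_harmonic_coeffs(3)  # [1/2, 1, 1/3, 5/3] (times pi)
```

Already at \(n = 1\) the rescaled threshold is \(\tau_1 = \theta_1^*/\beta_1 = 2\), confirming that ties destroy the trivial structure of the strictly decreasing case.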
7. Asymptotic Analysis of the Rescaled Thresholds
With the threshold identity established for the harmonic sequence, we turn to the behaviour of the rescaled thresholds in more general settings and the sensitivity of the threshold to perturbations.
- \(\theta_0^* = c\).
- \(\theta_1^* = 2c\) (since \(\theta_0^* = c \leq A = c\), index 0 is included: \(S_1 = \{0\}\)).
- \(\theta_2^* = 3c\) (Algorithm 3.5 starts with \(A = c\), includes index 0 since \(\theta_0^* = c \leq c\), giving \(A = 2c\), then includes index 1 since \(\theta_1^* = 2c \leq 2c\), giving \(A = 3c\)).
\(n=0\): \(\theta_0^* = c\). Sorted priors: empty. \(\theta_0^* = c\).
\(n=1\): Sorted priors: \(\theta_0^* = c\). Start with \(A = c\). Index 0 is included iff \(\theta_0^* \leq A\), i.e., \(c \leq c\): yes. So \(A = c + c = 2c\). No more priors. \(\theta_1^* = 2c\).
\(n=2\): Sorted priors: \(\theta_0^* = c, \theta_1^* = 2c\). \(A = c\). Check: \(\theta_0^* = c \leq c\)? Yes. \(A = 2c\). Check: \(\theta_1^* = 2c \leq 2c\)? Yes. \(A = 3c\). Return \(\theta_2^* = 3c\).
\(n=3\): Sorted priors: \(c, 2c, 3c\). \(A=c\). Include \(\theta_0^*\): \(A = 2c\). Include \(\theta_1^*\): \(A = 3c\). Include \(\theta_2^*\): \(A = 4c\). Return \(\theta_3^* = 4c\).
By induction, \(\theta_n^* = (n+1)c\) for all \(n\). The rescaled threshold is \(\tau_n = \theta_n^*/c = n+1\), growing linearly.
| Angle sequence \(\beta_n\) | Decay | \(\tau_n = \theta_n^*/\beta_n\) | Threshold growth |
|---|---|---|---|
| \(\pi/(n+2)\) (harmonic) | \(O(1/n)\) | \(1\) | \(\theta_n^* = O(1/n)\), decreasing |
| \(c\) (constant) | \(O(1)\) | \(n+1\) | \(\theta_n^* = (n+1)c\), increasing |
| \(\pi/\sqrt{n+2}\) | \(O(1/\sqrt{n})\) | \(1\) | \(\theta_n^* = \pi/\sqrt{n+2}\), decreasing (strictly decreasing, so the threshold identity applies) |
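The regimes in the table can be probed numerically with Algorithm 3.5. A sketch (our own helper; the constant \(c = 0.25\) is an arbitrary choice) computing the rescaled threshold \(\tau_n = \theta_n^*/\beta_n\) for each sequence:

```python
import math

def rescaled_thresholds(betas):
    """tau_n = theta_n*/beta_n via Algorithm 3.5."""
    priors, taus = [], []
    for b in betas:
        A = b
        for t, a in sorted(priors):
            if A < t:         # no further prior index can be included
                break
            A += a
        priors.append((A, b))
        taus.append(A / b)
    return taus

N = 30
harmonic = rescaled_thresholds([math.pi / (n + 2) for n in range(N)])
constant = rescaled_thresholds([0.25] * N)
sqrtseq = rescaled_thresholds([math.pi / math.sqrt(n + 2) for n in range(N)])
```

In our runs both strictly decreasing sequences give \(\tau_n = 1\) identically, while the constant sequence gives \(\tau_n = n + 1\), growing linearly.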
8. Computational Complexity of the Threshold Sequence
- Each threshold computation terminates in \(O(1)\) steps (the algorithm stops at the first check).
- The entire threshold sequence \(\theta_0^*, \ldots, \theta_N^*\) is computed in \(O(N)\) total time.
- The result \(\theta_n^* = \pi/(n+2)\) can be evaluated in \(O(1)\) time without any recursion.
- Algorithm 3.5 computes \(\theta_n^*[\beta]\) in \(O(n)\) time (given sorted prior thresholds).
- The entire sequence \(\theta_0^*, \ldots, \theta_N^*\) is computed in \(O(N^2)\) time.
- Maintaining the sorted thresholds in a balanced binary search tree, the total time reduces to \(O(N\log N + \sum_{n \leq N}|S_n|) = O(N\log N + N \cdot M)\), where \(M\) is the maximum number of prior thresholds included at any step (\(M \leq N\)).
(a) Algorithm 3.5 iterates through at most \(n\) sorted prior thresholds, each requiring an \(O(1)\) comparison and addition.
(b) Summing over \(n = 0, \ldots, N\): \(\sum_{n=0}^{N}O(n) = O(N^2)\).
(c) The balanced tree supports insertion of each new threshold in \(O(\log n)\) time, and an in-order traversal visits the prior thresholds in sorted order, so step \(n\) costs \(O(\log n + |S_n|)\).
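The \(O(1)\)-per-step versus \(O(n)\)-per-step contrast can be made concrete by counting threshold comparisons (our own instrumentation; the per-step re-sort is for clarity, and is exactly what the balanced-tree variant avoids):

```python
import math

def count_comparisons(betas):
    """Total threshold comparisons made by Algorithm 3.5 over the sequence."""
    priors, comparisons = [], 0
    for b in betas:
        A = b
        for t, a in sorted(priors):
            comparisons += 1
            if A < t:        # early exit: the harmonic case stops here
                break
            A += a
        priors.append((A, b))
    return comparisons

N = 100
harmonic_cost = count_comparisons([math.pi / (n + 2) for n in range(N)])  # N - 1
constant_cost = count_comparisons([0.25] * N)                             # N(N-1)/2
```

The harmonic run makes exactly one comparison per step (none at step 0), while the constant run exhausts every prior at every step.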
9. Consequences for the Correlation Kernel
The threshold identity \(\theta_n^* = \alpha_n = \pi/(n+2)\) permits exact, explicit computation of the correlation kernel defined in [Geere, 2026, Definition 4.4].
Hence \(K(n,n) = (\pi/3)(1 - 3/(n+2) + 3/(n+2)^2)\). As \(n \to \infty\), this tends to \(\pi/3\).
Simplifying with \(H_{N+2} = \ln(N+2) + \gamma + O(1/N)\) gives the stated asymptotic.
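The expansion \(H_{N+2} = \ln(N+2) + \gamma + O(1/N)\) invoked here is standard; a quick numerical check (the sharper error estimate \(\approx 1/(2(N+2))\) is a well-known refinement, not from the source):

```python
import math

GAMMA = 0.5772156649015329   # Euler-Mascheroni constant

def harmonic_number(m):
    """H_m = 1 + 1/2 + ... + 1/m."""
    return sum(1.0 / k for k in range(1, m + 1))

N = 1000
err = harmonic_number(N + 2) - math.log(N + 2) - GAMMA
# err is about 1/(2(N+2)), comfortably within the O(1/N) error term
```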
10. Discussion and Open Questions
10.1. Summary of results
We have established the following:
- Threshold identity (Theorem 4.3): For the harmonic angle sequence \(\alpha_n = \pi/(n+2)\), the threshold for each index is simply \(\theta_n^* = \alpha_n = \pi/(n+2)\). The rescaled threshold is \(\tau_n = 1\) for all \(n\).
- Recursive algorithm (Algorithm 3.5): The threshold recursion, while implicit in general, can be resolved by Algorithm 3.5 in \(O(n)\) time per threshold. For the harmonic sequence, the algorithm terminates in \(O(1)\) per step, giving a total of \(O(N)\) for the first \(N\) thresholds.
- Fixed-point framework (Theorem 6.2): The threshold equation \(\theta_n^* = \beta_n + \sum_{k:\theta_k^* \leq \theta_n^*}\beta_k\) has a unique solution for general positive angle sequences \((\beta_n)\), computable by Algorithm 3.5.
- Explicit kernel (Theorems 9.1, 9.3): The correlation kernel \(K(n,m)\) is given in closed form as a rational expression in \(n\), \(m\), and \(\pi\). The diagonal entries approach \(\pi/3\) as \(n \to \infty\), and the trace grows linearly with a logarithmic correction.
10.2. Retrospective on the open question
The open question in [Geere, 2026, § 7.2, Question 1] asked whether a closed-form expression or efficient recursive formula exists for the threshold \(\theta_n^*\). The answer turns out to be strikingly simple: the threshold is \(\theta_n^* = \pi/(n+2)\), identical to the harmonic angle itself. This is a consequence of strict monotonicity of the harmonic sequence and requires no deep arithmetic or combinatorial analysis.
The surprise is not that the answer is simple, but that the simplicity was obscured by the apparent complexity of the recursion. The threshold for index \(n\) depends on all prior thresholds, which depend on all prior-prior thresholds, creating an exponentially branching dependency tree. The resolution—that this tree collapses completely because strict monotonicity ensures no prior index is ever included at the critical point—is an instance of a broader principle: greedy algorithms on strictly decreasing sequences have trivial threshold structure.
10.3. Open questions
- Non-trivial angle sequences. For which angle sequences \((\beta_n)\) does the threshold sequence exhibit genuinely complex behaviour? We have shown that strict monotonicity forces \(\tau_n = 1\). Does eventual non-strict monotonicity (i.e., \(\beta_n = \beta_{n+1}\) for some \(n\)) always produce \(\tau_n > 1\)? What is the distribution of \(\tau_n\) for natural non-monotone sequences such as \(\beta_n = \pi/(n+2) + (-1)^n\varepsilon/(n+2)^2\)?
- Spectral analysis of the explicit kernel. Given the closed-form kernel \(K(n,m)\) of Theorem 9.3, what are the eigenvalues of the truncated kernel matrix \((K(n,m))_{0 \leq n,m \leq N}\)? The matrix has entries approaching \(\pi/3\) as \(n, m \to \infty\), suggesting a dominant eigenvalue of order \(N\) (from the near-constant part) and a spectrum of corrections. The precise decay rate of eigenvalues is an open question with implications for the weighted residual analysis of [Geere, 2026, Remark 6.5].
- Greedy residual dynamics. The threshold identity tells us when each index is first selected, but the full greedy dynamics for a given target \(\theta\)—which indices are selected and which are skipped after the first selection—remains combinatorially rich. Can the selection pattern \(\{\delta_n(\theta)\}_{n \geq 0}\) be characterised in terms of the continued fraction or Egyptian fraction expansion of \(\theta/\pi\)?
- Connection to the Sylvester–Fibonacci sequence. The greedy algorithm for Egyptian fractions (the Sylvester–Fibonacci sequence) produces denominators \(a_1, a_2, \ldots\) such that \(\sum 1/a_k\) converges to the target. Our algorithm is analogous but operates on angles. Is there a formal correspondence between the two, and do the threshold results for harmonic angles transfer to results about denominators in greedy Egyptian fraction expansions?
- Optimality of the harmonic sequence. Among all angle sequences \((\beta_n)\) with \(\beta_n \to 0\) and \(\sum \beta_n = \infty\), the harmonic sequence achieves convergence rate \(O(1/N)\) and trivial threshold structure. Is the harmonic sequence optimal in some precise sense—e.g., does it minimise the trace of the kernel, or maximise the convergence rate, among all sequences with a given sum rate?
References
- Geere, V. (2020). Geometric Sine Construction. [link]
- Geere, V. (2026). On the Correlation Structure of a Greedy Harmonic Decomposition of the Sine Function. [link]
- Geere, V. (2026). A Harmonic Reconstruction of the Sine Function and Its Relation to the Riemann Zeta Function. [link]
- Erdős, P. & Graham, R.L. (1980). Old and New Problems and Results in Combinatorial Number Theory. Monographies de L'Enseignement Mathématique, 28.
- Graham, R.L. (1964). On a conjecture of Erdős in additive number theory. Acta Arithmetica, 10, 63–70.
- Hardy, G.H. & Wright, E.M. (2008). An Introduction to the Theory of Numbers. 6th ed., Oxford University Press.
- Sylvester, J.J. (1880). On a point in the theory of vulgar fractions. American Journal of Mathematics, 3(4), 332–335.