Harmonic Sine Reconstruction and the Riemann Hypothesis:
Rigorous Foundations and Critical Corrections

Version 3 — Corrected

A Development of the Geometric-Analytic Pathway
Logic by Grok, Maths by Claude, Tinkering by Victor Geere
March 09, 2026

Critical revision note (v2 → v3). This version identifies and corrects three fundamental errors present in v1 and v2:
  1. Selection monotonicity is false (v2 Lemma 5.1): The map \(\theta \mapsto \delta_n(\theta)\) is not monotone. Counterexample: \(\delta_2(\pi/4) = 1\) but \(\delta_2(\pi/3) = 0\), with \(\pi/4 < \pi/3\). The selection regions are unions of disjoint intervals, not single intervals.
  2. The Zeta Bridge identity is false (v2 Proposition 4.3): \(\Phi(s,\pi) \neq \zeta(s)-1\). At \(\theta = \pi\), the greedy algorithm selects only indices \(\{0, 1, 4\}\) (corresponding to \(\pi/2 + \pi/3 + \pi/6 = \pi\)), not all indices. Hence \(\Phi(s,\pi) = 2^{-s} + 3^{-s} + 6^{-s}\).
  3. The selection density formula is false (v2 Lemma 3.3): \(D(N,\theta)\) does not grow as \(x\ln N\). For rational \(\theta/\pi\), the algorithm terminates finitely with \(D(N,\theta) = O(1)\), corresponding to the Egyptian fraction decomposition of \(\theta/\pi\).
As a consequence, the proof strategy of v2 (Sections 5–10) does not yield a proof of the Riemann Hypothesis. This document presents what is rigorously established, provides the detailed corrections, and identifies the precise obstructions to completing the approach.

Abstract

We present a rigorous analysis of the harmonic sine reconstruction and its relationship to the Riemann zeta function, correcting fundamental errors present in Versions 1 and 2 of this work. The harmonic reconstruction decomposes a target angle \(\theta \in [0,\pi]\) into a greedy sub-sum of harmonic angles \(\alpha_n = \pi/(n+2)\), converging to \(\sin(\theta)\) at rate \(O(1/N)\). We prove the threshold identity \(\theta_n^* = \alpha_n\) (the minimal target for which index \(n\) is first selected), but demonstrate by explicit computation that the selection indicators \(\delta_n(\theta)\) are not monotone in \(\theta\), invalidating the kernel framework of v2. We show that the claimed identity \(\Phi(s,\pi) = \zeta(s)-1\) is false: at \(\theta = \pi\), the greedy algorithm selects only finitely many indices, giving an Egyptian fraction decomposition \(\pi/2 + \pi/3 + \pi/6 = \pi\) rather than a Dirichlet series. We identify the precise obstructions preventing this approach from establishing the Riemann Hypothesis and discuss possible reformulations.


1. Notation and Preliminaries

Definition 1.1 (Standard notation). Throughout this paper, \(s = \sigma + it\) denotes a complex variable with \(\sigma = \operatorname{Re}(s)\); \(\zeta(s) = \sum_{n \geq 1} n^{-s}\) for \(\sigma > 1\), extended by analytic continuation; and \(\Theta\) denotes the Heaviside step function with the convention \(\Theta(x) = 1\) for \(x \geq 0\) and \(\Theta(x) = 0\) for \(x < 0\), so that equality at a threshold counts as selection.
Definition 1.2 (Functional equation). The completed zeta function \[ \xi(s) = \frac{1}{2}s(s-1)\pi^{-s/2}\Gamma(s/2)\zeta(s) \] satisfies \(\xi(s) = \xi(1-s)\), implying that non-trivial zeros are symmetric about \(\sigma = 1/2\).

2. The Harmonic Sine Reconstruction

Definition 2.1 (Harmonic angle sequence). Define the harmonic angles \[ \alpha_n = \frac{\pi}{n+2}, \quad n = 0,1,2,\ldots \] These are strictly decreasing: \(\alpha_0 = \pi/2 > \alpha_1 = \pi/3 > \alpha_2 = \pi/4 > \cdots\), with \(\alpha_n \to 0\).
Definition 2.2 (Greedy selection algorithm). For a target angle \(\theta \in [0,\pi]\), define recursively: \[ \begin{align} \theta_0 &= 0, \quad s_0 = 0, \quad c_0 = 1, \\[6pt] \delta_n(\theta) &= \Theta\!\bigl(\theta - \theta_n - \alpha_n\bigr) \in \{0,1\}, \\[6pt] \theta_{n+1} &= \theta_n + \delta_n\,\alpha_n, \\[6pt] s_{n+1} &= \begin{cases} s_n\cos(\alpha_n) + c_n\sin(\alpha_n) & \text{if } \delta_n = 1, \\ s_n & \text{if } \delta_n = 0, \end{cases} \\[6pt] c_{n+1} &= \begin{cases} c_n\cos(\alpha_n) - s_n\sin(\alpha_n) & \text{if } \delta_n = 1, \\ c_n & \text{if } \delta_n = 0. \end{cases} \end{align} \] Here \(\theta_n = \theta_n(\theta)\) denotes the accumulated angle after the first \(n\) steps, and \(\delta_n(\theta) = 1\) means the greedy algorithm selects index \(n\), i.e., the residual \(r_n(\theta) = \theta - \theta_n(\theta)\) satisfies \(r_n \geq \alpha_n\).
Theorem 2.3 (Harmonic reconstruction of sine). For any \(\theta \in [0,\pi]\), the sequence \((s_n)\) from Definition 2.2 satisfies \[ \sin(\theta) = \lim_{N\to\infty} s_N, \] and the invariant \(s_n = \sin(\theta_n)\), \(c_n = \cos(\theta_n)\) holds for all \(n \geq 0\).

Invariant. By induction. Base: \(s_0 = \sin(0) = 0\), \(c_0 = \cos(0) = 1\). If \(\delta_n = 0\): \(\theta_{n+1} = \theta_n\), so the invariant persists trivially. If \(\delta_n = 1\): \(\theta_{n+1} = \theta_n + \alpha_n\), and the update rule is exactly the sine and cosine addition formulas, giving \(s_{n+1} = \sin(\theta_{n+1})\), \(c_{n+1} = \cos(\theta_{n+1})\).

Convergence. The residual \(r_n(\theta) = \theta - \theta_n(\theta)\) satisfies \(r_n \geq 0\) (the algorithm never overshoots) and is non-increasing. We prove \(r_{n+1} < \alpha_n = \pi/(n+2)\) for all \(n \geq 1\), with \(r_1 \leq \alpha_0\) in the base case:
  1. If \(\delta_n = 0\): the selection rule gives \(r_n < \alpha_n\), and \(r_{n+1} = r_n < \alpha_n\).
  2. If \(\delta_n = 1\): \(r_{n+1} = r_n - \alpha_n\). Inductively \(r_n \leq \alpha_{n-1}\), so \(r_{n+1} \leq \alpha_{n-1} - \alpha_n = \pi/[(n+1)(n+2)] < \alpha_n\); in the base case \(n = 0\), \(r_1 = \theta - \alpha_0 \leq \pi - \pi/2 = \alpha_0\).
Thus \(r_{N+1} < \pi/(N+2) \to 0\), and by 1-Lipschitz continuity of \(\sin\):

\[ |s_N - \sin\theta| = |\sin\theta_N - \sin\theta| \leq |\theta_N - \theta| = r_N \leq \frac{\pi}{N+1} \to 0. \]
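As a numerical sanity check (a sketch, not part of the formal development; the function name is ours), the recursion of Definition 2.2 can be implemented directly, and both the invariant \(s_n = \sin(\theta_n)\) and the bound of Theorem 3.2 verified:

```python
import math

def greedy_sine(theta, N):
    """Definition 2.2: greedily accumulate harmonic angles alpha_n = pi/(n+2)
    whenever the residual theta - theta_n is at least alpha_n, tracking
    (s_n, c_n) = (sin, cos) of the accumulated angle via the addition formulas."""
    acc, s, c = 0.0, 0.0, 1.0            # theta_0 = 0, s_0 = 0, c_0 = 1
    for n in range(N):
        a = math.pi / (n + 2)
        if theta - acc >= a:             # delta_n(theta) = 1
            sa, ca = math.sin(a), math.cos(a)
            s, c = s * ca + c * sa, c * ca - s * sa
            acc += a
    return s, acc

theta = 2.0                              # an arbitrary target in [0, pi]
for N in (10, 100, 1000):
    s_N, acc = greedy_sine(theta, N)
    assert abs(s_N - math.sin(theta)) <= math.pi / (N + 1)   # Theorem 3.2
    assert abs(s_N - math.sin(acc)) < 1e-12                  # invariant s_n = sin(theta_n)
```

The invariant check confirms that the \((s_n, c_n)\) updates track \((\sin\theta_n, \cos\theta_n)\) without ever evaluating \(\sin(\theta)\) itself.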

3. Convergence of the Harmonic Reconstruction

Lemma 3.1 (Greedy residual bound). For all \(N \geq 0\): \[ 0 \leq r_{N+1}(\theta) \leq \frac{\pi}{N+2} = \alpha_N, \] with strict inequality on the right for all \(N \geq 1\); equality occurs only for \(N = 0\), \(\theta = \pi\).
Proved as part of the convergence argument above: \(r_{n+1} \leq \alpha_n\) inductively, whether \(\delta_n = 0\) or \(\delta_n = 1\). The key step for \(\delta_n = 1\) with \(n \geq 1\) uses \(r_n \leq \alpha_{n-1}\) and \(\alpha_{n-1} - \alpha_n = \pi/[(n+1)(n+2)] < \alpha_n\); at \(n = 0\), \(r_1 = \theta - \alpha_0 \leq \pi/2\) directly.
Theorem 3.2 (Quantitative convergence rate). For all \(\theta \in [0,\pi]\) and \(N \geq 0\): \[ \left|\sin(\theta) - s_N\right| \leq \frac{\pi}{N+1}. \]
From Lemma 3.1: \(|\theta - \theta_N| = r_N \leq \pi/(N+1)\). Then \(|\sin\theta - \sin\theta_N| \leq |\theta - \theta_N| \leq \pi/(N+1)\).
Lemma 3.3 (Finite termination for rational targets). If \(\theta/\pi = p/q\) is rational (in lowest terms), then the greedy algorithm terminates in finitely many steps: there exists \(N_0 = N_0(\theta)\) such that \(r_N = 0\) for all \(N \geq N_0\), and \(\delta_n(\theta) = 0\) for all \(n \geq N_0\).
The accumulated angle after each selection is a sum of distinct terms \(\pi/(n_k + 2)\), hence a rational multiple of \(\pi\). Write the residual in lowest terms as \(r_N = (a/b)\pi\). Apart from the single possible pair of consecutive selections (indices 0 and 1, which occurs only at \(\theta = \pi\)), the greedy step subtracts the largest admissible unit fraction \(1/d\) with \(d = \lceil b/a \rceil\), and the new numerator satisfies \(0 \leq ad - b < a\) because \(d < b/a + 1\). The numerators therefore form a strictly decreasing sequence of non-negative integers, which reaches \(0\) in finitely many steps. (This is the classical termination proof for the Fibonacci–Sylvester greedy algorithm for Egyptian fractions.)
Example 3.4 (Egyptian fractions at \(\theta = \pi\)). At \(\theta = \pi\) (i.e., \(x = \theta/\pi = 1\)): step 0 selects \(\alpha_0 = \pi/2\) (residual \(\pi/2\)); step 1 selects \(\alpha_1 = \pi/3\) (residual \(\pi/6\)); steps 2 and 3 are skipped since \(\pi/6 < \pi/4\) and \(\pi/6 < \pi/5\); step 4 selects \(\alpha_4 = \pi/6\) (residual \(0\)). So \(\pi = \pi/2 + \pi/3 + \pi/6\), equivalently \(1 = 1/2 + 1/3 + 1/6\). Exactly three indices are selected: \(\{0, 1, 4\}\). For all \(n \geq 5\): \(\delta_n(\pi) = 0\).
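The finite termination for rational targets can be checked in exact rational arithmetic (a sketch with helper names of our choosing; targets and angles are stored as fractions of \(\pi\)):

```python
from fractions import Fraction

def greedy_indices(x, n_max=100):
    """Run the greedy selection on a rational target x = theta/pi with exact
    arithmetic; returns the selected indices and the final residual."""
    r, chosen = Fraction(x), []
    for n in range(n_max):
        a = Fraction(1, n + 2)           # alpha_n / pi
        if r >= a:
            r -= a
            chosen.append(n)
        if r == 0:
            break
    return chosen, r

idx, rem = greedy_indices(Fraction(1))   # theta = pi
assert idx == [0, 1, 4] and rem == 0     # pi = pi/2 + pi/3 + pi/6
idx2, rem2 = greedy_indices(Fraction(5, 6))
assert idx2 == [0, 1] and rem2 == 0      # 5pi/6 = pi/2 + pi/3
```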

4. The Threshold Identity

Definition 4.1 (Threshold angle). For each \(n \geq 0\), define the threshold angle \[ \theta_n^* = \inf\{\theta \in [0,\pi] : \delta_n(\theta) = 1\}, \] the minimal target angle at which the greedy algorithm first selects index \(n\).
Theorem 4.2 (Threshold identity; cf. Geere, 2026). For all \(n \geq 0\): \[ \theta_n^* = \alpha_n = \frac{\pi}{n+2}. \]
By strong induction on \(n\).

Base case. At step \(n = 0\), there are no prior selections, so \(\theta_0(\theta) = 0\) and the residual is \(r_0 = \theta\). The selection criterion \(\delta_0(\theta) = 1\) iff \(\theta \geq \alpha_0 = \pi/2\). Hence \(\theta_0^* = \pi/2 = \alpha_0\). ✓

Inductive step. Assume \(\theta_k^* = \alpha_k = \pi/(k+2)\) for all \(k < n\). Consider the target \(\theta = \alpha_n = \pi/(n+2)\). Since \(\alpha_n < \alpha_k\) for all \(k < n\) (because \(n+2 > k+2\)), we have \(\theta = \alpha_n < \theta_k^* = \alpha_k\) for all \(k < n\). Therefore, at target \(\theta = \alpha_n\), no prior index \(k < n\) is selected: by induction on \(k\), the accumulated angle remains \(0\), so the residual at step \(k\) is \(\alpha_n < \alpha_k\) and \(\delta_k(\alpha_n) = 0\). The accumulated angle at step \(n\) is therefore \(\theta_n(\alpha_n) = 0\), the residual is \(r_n = \alpha_n - 0 = \alpha_n \geq \alpha_n\), and \(\delta_n(\alpha_n) = 1\).

For \(\theta < \alpha_n\): since \(\theta < \alpha_k\) for all \(k \leq n\), no index \(k \leq n\) is selected at target \(\theta\). In particular, \(\delta_n(\theta) = 0\).

Therefore \(\theta_n^* = \alpha_n = \pi/(n+2)\). ✓

Remark 4.3 (Threshold vs. selection). The threshold identity establishes that for each \(n\), the first \(\theta\) at which \(\delta_n(\theta) = 1\) is \(\theta_n^* = \alpha_n\). At this threshold, no earlier index is selected, so the residual equals the target. However, as shown in Section 5, the set \(\{\theta : \delta_n(\theta) = 1\}\) is not the interval \([\alpha_n, \pi]\). It is a union of disjoint sub-intervals.
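The threshold identity can be spot-checked in exact arithmetic (the helper is ours; targets are \(x = \theta/\pi\), so \(\alpha_n\) corresponds to \(1/(n+2)\)):

```python
from fractions import Fraction

def delta(x, n):
    """delta_n at target x = theta/pi, by running the greedy recursion
    through index n (selection when the residual is >= 1/(k+2))."""
    r = Fraction(x)
    for k in range(n + 1):
        a = Fraction(1, k + 2)
        if k == n:
            return int(r >= a)
        if r >= a:
            r -= a

for n in range(20):
    thr = Fraction(1, n + 2)                         # alpha_n / pi
    assert delta(thr, n) == 1                        # selected at theta = alpha_n ...
    assert delta(thr - Fraction(1, 10**9), n) == 0   # ... but not just below it
```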

5. Selection Structure: Failure of Monotonicity

Versions 1 and 2 of this work, as well as Geere (2026, Lemma 4.1), claimed that the selection indicators \(\delta_n(\theta)\) are non-decreasing in \(\theta\). This claim is false.

Error 5.1 (Selection monotonicity is false). The claim that \(\theta \mapsto \delta_n(\theta)\) is non-decreasing for each fixed \(n\) is false. The following counterexample demonstrates this for \(n = 1\) and \(n = 2\).
Proposition 5.2 (Counterexample to monotonicity).
  1. \(\delta_1(\pi/3) = 1\) but \(\delta_1(\pi/2) = 0\). Since \(\pi/3 < \pi/2\), this violates monotonicity for \(n = 1\).
  2. \(\delta_2(\pi/4) = 1\) but \(\delta_2(\pi/3) = 0\). Since \(\pi/4 < \pi/3\), this violates monotonicity for \(n = 2\).

Part (a). At \(\theta = \pi/3\): since \(\pi/3 < \pi/2 = \alpha_0\), \(\delta_0 = 0\) and \(\theta_1 = 0\); the residual is \(r_1 = \pi/3 \geq \alpha_1 = \pi/3\), so \(\delta_1(\pi/3) = 1\).

At \(\theta = \pi/2\): since \(\pi/2 \geq \alpha_0\), \(\delta_0 = 1\) and \(\theta_1 = \pi/2\); the residual is \(r_1 = 0 < \alpha_1 = \pi/3\), so \(\delta_1(\pi/2) = 0\).

So \(\delta_1(\pi/3) = 1 > 0 = \delta_1(\pi/2)\), with \(\pi/3 < \pi/2\).

Part (b). At \(\theta = \pi/4\): since \(\pi/4 < \alpha_0\) and \(\pi/4 < \alpha_1\), neither index 0 nor 1 is selected, so \(\theta_2 = 0\); the residual is \(r_2 = \pi/4 \geq \alpha_2 = \pi/4\), so \(\delta_2(\pi/4) = 1\).

At \(\theta = \pi/3\): \(\delta_0 = 0\) but \(\delta_1 = 1\), so \(\theta_2 = \pi/3\); the residual is \(r_2 = 0 < \alpha_2 = \pi/4\), so \(\delta_2(\pi/3) = 0\).

So \(\delta_2(\pi/4) = 1 > 0 = \delta_2(\pi/3)\), with \(\pi/4 < \pi/3\).
Proposition 5.3 (Mechanism of non-monotonicity). The failure of monotonicity occurs because increasing \(\theta\) can cause a prior index \(k < n\) to become selected (since \(\theta\) crosses the threshold \(\alpha_k\)), which increases the accumulated angle \(\theta_n(\theta)\) by \(\alpha_k\). When \(\alpha_k\) is larger than the increase \(\theta_2 - \theta_1\), the residual at step \(n\) decreases: \[ r_n(\theta_2) = \theta_2 - \theta_n(\theta_2) < \theta_1 - \theta_n(\theta_1) = r_n(\theta_1), \] even though \(\theta_2 > \theta_1\). This causes \(\delta_n(\theta_2) = 0\) despite \(\delta_n(\theta_1) = 1\).
In the counterexample for \(n = 2\): at \(\theta_1 = \pi/4\), the accumulated angle is \(\theta_2(\pi/4) = 0\) (neither \(\delta_0\) nor \(\delta_1\) fires). At \(\theta_2 = \pi/3\), the accumulated angle is \(\theta_2(\pi/3) = \pi/3\) (since \(\delta_1 = 1\), adding \(\alpha_1 = \pi/3\)). The residuals are: \[ r_2(\pi/4) = \pi/4 - 0 = \pi/4 \geq \alpha_2 = \pi/4, \qquad r_2(\pi/3) = \pi/3 - \pi/3 = 0 < \alpha_2 = \pi/4. \] The accumulated angle grew by \(\pi/3\) while the target grew by only \(\pi/3 - \pi/4 = \pi/12\). The net residual change is \(-\pi/4\), flipping the selection.
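The counterexamples of Proposition 5.2 are easy to confirm mechanically (a small exact-arithmetic helper of ours, restated so the snippet is self-contained):

```python
from fractions import Fraction

def delta(x, n):
    """delta_n at target x = theta/pi (greedy recursion through index n)."""
    r = Fraction(x)
    for k in range(n + 1):
        a = Fraction(1, k + 2)
        if k == n:
            return int(r >= a)
        if r >= a:
            r -= a

# Part (a): delta_1 drops from 1 to 0 as theta increases from pi/3 to pi/2.
assert delta(Fraction(1, 3), 1) == 1 and delta(Fraction(1, 2), 1) == 0
# Part (b): delta_2 drops from 1 to 0 as theta increases from pi/4 to pi/3.
assert delta(Fraction(1, 4), 2) == 1 and delta(Fraction(1, 3), 2) == 0
```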
Error 5.4 (Flaw in the v2 proof of monotonicity). The proof of Lemma 5.1 in v2 (and Lemma 4.1 in Geere, 2026) claims: "If \(\delta_n(\theta_1) = 1\), then... the net effect is \(r_n(\theta_2) \geq \alpha_n\), giving \(\delta_n(\theta_2) = 1\)." This is a non-sequitur. The argument correctly establishes that the accumulated angle \(\theta_n(\theta)\) is non-decreasing in \(\theta\), but this does not imply that the residual \(r_n(\theta) = \theta - \theta_n(\theta)\) is non-decreasing. The accumulated angle can increase faster than the target when large early angles \(\alpha_k\) are newly selected.
Lemma 5.5 (Accumulated angle monotonicity). The accumulated angle \(\theta_n(\theta)\) is non-decreasing in \(\theta\) for each fixed \(n\). That is, \(\theta_1 \leq \theta_2\) implies \(\theta_n(\theta_1) \leq \theta_n(\theta_2)\) for all \(n \geq 0\).

Step 1 (Local structure at selection boundaries). Fix \(m\) and consider \(\theta_m(\theta)\) as a function of \(\theta\). The selection pattern \(\theta \mapsto (\delta_0(\theta), \ldots, \delta_{m-1}(\theta))\) is piecewise constant with finitely many pieces (by induction on \(m\): on each interval where \(\delta_0, \ldots, \delta_{m-2}\) are constant, \(r_{m-1}(\theta) = \theta - \mathrm{const}\) crosses \(\alpha_{m-1}\) at most once). Hence \(\theta_m\) is a step function, and it suffices to show that every jump is upward.

Let \(\theta^*\) be a discontinuity point of the pattern and let \(k\) be the first index whose indicator changes there. On a neighbourhood of \(\theta^*\) containing no other discontinuity, the indicators \(\delta_0, \ldots, \delta_{k-1}\) are constant, so the accumulated angle through step \(k\) is a fixed value \(A\), and the residual \(r_k(\theta) = \theta - A\) increases continuously with \(\theta\). The change at \(\theta^*\) is therefore \(\delta_k\) switching from \(0\) to \(1\) as \(r_k\) reaches \(\alpha_k\), i.e., \(\theta^* = A + \alpha_k\).

Step 2 (Upward jumps). For \(\theta\) just below \(\theta^*\), the accumulated angle never exceeds the target, so \(\theta_m(\theta) \leq \theta < \theta^* = A + \alpha_k\). For \(\theta \geq \theta^*\) in the neighbourhood, index \(k\) is selected, so \(\theta_m(\theta) \geq A + \alpha_k\). Thus \(\theta_m\) jumps upward at every discontinuity and is constant in between, hence non-decreasing: \(\theta_1 \leq \theta_2\) implies \(\theta_m(\theta_1) \leq \theta_m(\theta_2)\).


6. Exact Selection Regions

Since selection monotonicity fails, we must characterise the actual sets \(\{\theta \in [0,\pi] : \delta_n(\theta) = 1\}\). These are unions of disjoint intervals that can be computed recursively.

Theorem 6.1 (Selection regions for small \(n\)). The selection regions are:
  1. \(\delta_0(\theta) = 1\) iff \(\theta \in [\pi/2,\, \pi]\). (One interval.)
  2. \(\delta_1(\theta) = 1\) iff \(\theta \in [\pi/3,\, \pi/2) \cup [5\pi/6,\, \pi]\). (Two intervals.)
  3. \(\delta_2(\theta) = 1\) iff \(\theta \in [\pi/4,\, \pi/3) \cup [3\pi/4,\, 5\pi/6)\). (Two intervals.)
  4. \(\delta_3(\theta) = 1\) iff \(\theta \in [\pi/5,\, \pi/4) \cup [7\pi/10,\, 3\pi/4)\). (Two intervals.)
  5. \(\delta_4(\theta) = 1\) iff \(\theta \in [\pi/6,\, \pi/5) \cup [2\pi/3,\, 7\pi/10) \cup \{\pi\}\). (Two intervals plus the isolated point \(\pi\).)

(a) At step 0, \(\theta_0 = 0\) for all \(\theta\). So \(\delta_0(\theta) = \Theta(\theta - \pi/2)\), giving the interval \([\pi/2, \pi]\).

(b) At step 1, the accumulated angle is \(\theta_1(\theta) = \delta_0(\theta)\,\alpha_0\). We consider two cases:
  1. \(\delta_0 = 0\) (\(\theta < \pi/2\)): \(r_1 = \theta\), so \(\delta_1 = 1\) iff \(\theta \geq \alpha_1 = \pi/3\), giving \([\pi/3, \pi/2)\).
  2. \(\delta_0 = 1\) (\(\theta \geq \pi/2\)): \(r_1 = \theta - \pi/2\), so \(\delta_1 = 1\) iff \(\theta \geq \pi/2 + \pi/3 = 5\pi/6\), giving \([5\pi/6, \pi]\).

Total: \([\pi/3, \pi/2) \cup [5\pi/6, \pi]\).

(c) At step 2, the accumulated angle is \(\delta_0\,\alpha_0 + \delta_1\,\alpha_1\). Four cases based on \((\delta_0, \delta_1)\):
  1. \((0,0)\), i.e., \(\theta < \pi/3\): \(r_2 = \theta \geq \pi/4\) iff \(\theta \in [\pi/4, \pi/3)\).
  2. \((0,1)\), i.e., \(\theta \in [\pi/3, \pi/2)\): \(r_2 = \theta - \pi/3 < \pi/6 < \pi/4\); never selected.
  3. \((1,0)\), i.e., \(\theta \in [\pi/2, 5\pi/6)\): \(r_2 = \theta - \pi/2 \geq \pi/4\) iff \(\theta \in [3\pi/4, 5\pi/6)\).
  4. \((1,1)\), i.e., \(\theta \in [5\pi/6, \pi]\): \(r_2 = \theta - 5\pi/6 \leq \pi/6 < \pi/4\); never selected.

Total: \([\pi/4, \pi/3) \cup [3\pi/4, 5\pi/6)\).

Parts (d) and (e) follow by the same recursive case analysis, which becomes increasingly complex as the number of prior selection patterns grows.
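The case analysis can be checked exhaustively on a rational grid (helper functions are ours); here is the verification of part (c), with \(x = \theta/\pi\):

```python
from fractions import Fraction

def delta(x, n):
    """delta_n at target x = theta/pi (greedy recursion through index n)."""
    r = Fraction(x)
    for k in range(n + 1):
        a = Fraction(1, k + 2)
        if k == n:
            return int(r >= a)
        if r >= a:
            r -= a

# Theorem 6.1(c): delta_2 = 1 exactly on [1/4, 1/3) ∪ [3/4, 5/6) in units of pi.
def in_region(x):
    return Fraction(1, 4) <= x < Fraction(1, 3) or Fraction(3, 4) <= x < Fraction(5, 6)

grid = [Fraction(j, 840) for j in range(841)]      # step 1/840 over [0, 1]
assert all(delta(x, 2) == int(in_region(x)) for x in grid)
```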

Observation 6.2 (Self-similar interval structure). For small \(n \geq 1\), the selection region of \(\delta_n\) consists of two intervals, but additional windows (and point masses at Egyptian-fraction coincidences, such as \(\{\pi\}\) for \(\delta_4\)) appear as \(n\) grows and more prior selection patterns become available; for example, \(\delta_5\) acquires a third window \([\pi/3 + \pi/7,\, \pi/2)\) following a \(\delta_1\) selection. Each full window has width \(\alpha_{n-1} - \alpha_n = \pi/[(n+1)(n+2)]\), reflecting the narrow range \([\alpha_n, \alpha_{n-1})\) in which the residual must fall for index \(n\) to fire. The first two windows are separated by approximately \(\pi/2\), reflecting the phase shift from the selection of \(\delta_0\) at \(\theta = \pi/2\).
Proposition 6.3 (Measure of selection regions). For each \(n \geq 0\), define \(\mu_n = |\{\theta \in [0,\pi] : \delta_n(\theta) = 1\}|\) (Lebesgue measure). Each full selection window at index \(n \geq 1\) has width \[ \alpha_{n-1} - \alpha_n = \frac{\pi}{n+1} - \frac{\pi}{n+2} = \frac{\pi}{(n+1)(n+2)}, \] so \(\mu_n = m_n\,\pi/[(n+1)(n+2)]\) up to boundary truncations, where \(m_n\) is the number of windows, and \(\mu_n \to 0\) as \(n \to \infty\).
The window width is the length of the residual range \([\alpha_n, \alpha_{n-1})\) on each branch where the accumulated angle is locally constant, as in the examples of Theorem 6.1. For the first few values: \[ \mu_0 = \pi/2, \quad \mu_1 = (\pi/2 - \pi/3) + (\pi - 5\pi/6) = \pi/6 + \pi/6 = \pi/3, \] \[ \mu_2 = (\pi/3 - \pi/4) + (5\pi/6 - 3\pi/4) = \pi/12 + \pi/12 = \pi/6. \] Every \(\theta \in (0,\pi)\) is selected by at least one index: since \(\alpha_n \to 0\), the first \(n\) with \(\alpha_n \leq \theta\) fires if none before it has. In fact \(\sum_{n}\mu_n = \int_0^\pi D(\infty,\theta)\,d\theta\) diverges, because almost every \(\theta\) (every irrational multiple of \(\pi\)) is selected infinitely often; the divergence is extremely slow, consistent with the \(O(\log\log N)\) density of Proposition 8.3, and forces the window count \(m_n\) to grow without bound.
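The computed values \(\mu_0 = \pi/2\), \(\mu_1 = \pi/3\), \(\mu_2 = \pi/6\) can be recovered exactly on a rational grid, since these selection regions are finite unions of half-open intervals whose endpoints are multiples of the grid step (helper names are ours):

```python
from fractions import Fraction

def delta(x, n):
    """delta_n at target x = theta/pi (greedy recursion through index n)."""
    r = Fraction(x)
    for k in range(n + 1):
        a = Fraction(1, k + 2)
        if k == n:
            return int(r >= a)
        if r >= a:
            r -= a

def mu_over_pi(n, denom=2520):
    """Fraction of [0, 1) (in units x = theta/pi) on which delta_n = 1;
    exact here because all region endpoints are multiples of 1/denom."""
    hits = sum(delta(Fraction(j, denom), n) for j in range(denom))
    return Fraction(hits, denom)

assert mu_over_pi(0) == Fraction(1, 2)
assert mu_over_pi(1) == Fraction(1, 3)
assert mu_over_pi(2) == Fraction(1, 6)
```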

7. Greedy Dirichlet Sub-Sums and the Splitting Identity

Definition 7.1 (Greedy Dirichlet sub-sum). For \(s = \sigma + it \in \mathbb{C}\) with \(\sigma > 1\) and \(\theta \in [0,\pi]\), define \[ \Phi(s,\theta) = \sum_{n=0}^{\infty}\frac{\delta_n(\theta)}{(n+2)^s}, \] where the sum is over all selected indices. For rational \(\theta/\pi\), this is a finite sum (Lemma 3.3).
Error 7.2 (The Zeta Bridge is false). Versions 1 and 2 claimed: \[ \Phi(s,\pi) = \zeta(s) - 1 \qquad \text{(FALSE)}. \] The correct value is \[ \Phi(s,\pi) = 2^{-s} + 3^{-s} + 6^{-s}, \] since the greedy algorithm at \(\theta = \pi\) selects exactly \(\{0, 1, 4\}\), corresponding to denominators \(\{2, 3, 6\}\). At \(s = 2\): \(\Phi(2,\pi) = 1/4 + 1/9 + 1/36 = 14/36 = 7/18 \approx 0.389\), whereas \(\zeta(2) - 1 = \pi^2/6 - 1 \approx 0.645\).

The error in v2's proof of Proposition 4.3 is the claim: "When \(\theta = \pi\), the greedy algorithm selects every \(\delta_n = 1\) (since the residual \(r_n\) always exceeds \(\alpha_n\))." In fact, after selecting \(\alpha_0 = \pi/2\) and \(\alpha_1 = \pi/3\), the residual is \(\pi/6\), which is less than \(\alpha_2 = \pi/4\), so index 2 is not selected.
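The corrected value is trivial to verify numerically (exact arithmetic for \(\Phi(2,\pi)\), float comparison against \(\zeta(2) - 1 = \pi^2/6 - 1\); variable names are ours):

```python
import math
from fractions import Fraction

# Greedy selection at theta = pi picks indices {0, 1, 4}, i.e. denominators {2, 3, 6}.
phi_2_pi = sum(Fraction(1, d) ** 2 for d in (2, 3, 6))
assert phi_2_pi == Fraction(7, 18)                   # ≈ 0.389

zeta2_minus_1 = math.pi ** 2 / 6 - 1                 # ≈ 0.645
assert abs(float(phi_2_pi) - zeta2_minus_1) > 0.2    # the v2 identity fails badly
```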

Definition 7.3 (Complement sum). Define the complementary Dirichlet sub-sum \[ \Omega(s,\theta) = \sum_{n=0}^{\infty}\frac{1 - \delta_n(\theta)}{(n+2)^s}, \] summing over non-selected indices.
Proposition 7.4 (Splitting identity). For \(\sigma > 1\) and any \(\theta \in [0,\pi]\): \[ \Phi(s,\theta) + \Omega(s,\theta) = \sum_{n=0}^{\infty}(n+2)^{-s} = \zeta(s) - 1. \] This is a tautological partition of the Dirichlet series for \(\zeta(s) - 1\) into selected and non-selected sub-sums.
For each \(n\): \(\delta_n(\theta) + (1 - \delta_n(\theta)) = 1\). Summing over \(n\): \[ \Phi(s,\theta) + \Omega(s,\theta) = \sum_{n=0}^{\infty}\frac{\delta_n + (1-\delta_n)}{(n+2)^s} = \sum_{n=0}^{\infty}(n+2)^{-s} = \zeta(s) - 1. \]
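The splitting identity can be sanity-checked on truncations (a floating-point sketch; the function name is ours, and the target below uses the irrational value \(x = 1/\sqrt{2}\)):

```python
import math

def split(x, s, N):
    """Partial sums of Phi, Omega, and of zeta(s) - 1, truncated at index N,
    with the greedy selection run in floating point for target theta = x*pi."""
    r, phi, omega, full = x * math.pi, 0.0, 0.0, 0.0
    for n in range(N):
        a = math.pi / (n + 2)
        term = (n + 2) ** (-s)
        full += term
        if r >= a:                 # index selected: contributes to Phi
            r -= a
            phi += term
        else:                      # not selected: contributes to Omega
            omega += term
    return phi, omega, full

phi, omega, full = split(2 ** -0.5, 2.0, 5000)
assert abs(phi + omega - full) < 1e-12              # splitting identity, truncated
assert abs(full - (math.pi ** 2 / 6 - 1)) < 1e-3    # tail of zeta(2) - 1 beyond N
```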
Lemma 7.5 (Structure of boundary values).
  1. \(\Phi(s, 0) = 0\) (no index is selected for \(\theta = 0\)), so \(\Omega(s, 0) = \zeta(s) - 1\).
  2. \(\Phi(s, \pi) = 2^{-s} + 3^{-s} + 6^{-s}\) (three terms), so \(\Omega(s, \pi) = \zeta(s) - 1 - 2^{-s} - 3^{-s} - 6^{-s}\).
  3. For \(\theta/\pi\) irrational, \(\Phi(s,\theta)\) is an infinite Dirichlet sub-series (infinitely many indices selected), convergent for \(\sigma > 1\).

(a) At \(\theta = 0\), no selection occurs since \(\alpha_n > 0 = \theta\) for all \(n\).

(b) Computed in Example 3.4: only \(\{0, 1, 4\}\) selected, giving denominators \(\{2, 3, 6\}\).

(c) For irrational \(\theta/\pi\), the accumulated angle \(\theta_N = \sum_{k \in S_N}\alpha_k\) is always a rational multiple of \(\pi\), so it can never equal \(\theta\) exactly. Hence \(r_N > 0\) for all \(N\), and \(r_N \to 0\) by Theorem 3.2. Since \(\alpha_n \to 0\), for any \(N\) there is a smallest index \(m \geq N\) with \(\alpha_m \leq r_N\), and the algorithm selects it (the residual does not change between selections). Thus the selections never stop, and infinitely many indices are selected.

Key observation. The splitting identity \(\Phi(s,\theta) + \Omega(s,\theta) = \zeta(s) - 1\) is correct but does not give a parametrisation from \(0\) to \(\zeta(s)-1\) as \(\theta\) varies from \(0\) to \(\pi\). At \(\theta = \pi\), the sub-sum \(\Phi(s,\pi)\) captures only three terms, not the full series. The map \(\theta \mapsto \Phi(s,\theta)\) is not continuous (it jumps as \(\theta\) crosses selection boundaries) and does not interpolate monotonically between the boundary values.

8. Selection Density: Corrected Analysis

Error 8.1 (Selection density formula is false). Version 2 (Lemma 3.3) claimed: "For \(\theta = \pi x\) with \(x \in (0,1)\), \(D(N,\theta) = x \ln(N+2) + O(1)\)." This is false. For rational \(x = p/q\), the algorithm terminates finitely and \(D(N,\theta) = O(1)\) (bounded constant independent of \(N\)).

Numerical verification at \(\theta = \pi\) (\(x = 1\)): \(D(N,\pi) = 3\) for all \(N \geq 5\), not \(\ln(N+2) + O(1)\).

The error in the v2 proof is the claim that "the greedy algorithm selects approximately a fraction \(x\) of terms." In fact, the greedy algorithm is an Egyptian fraction algorithm that terminates finitely for rational targets, selecting only \(O(1)\) terms.

Proposition 8.2 (Corrected density for rational targets). If \(\theta/\pi = p/q \in \mathbb{Q}\), then \(D(N,\theta) = K\) for all \(N\) sufficiently large, where \(K = K(p,q)\) is the number of terms in the greedy Egyptian fraction decomposition of \(p/q\). This satisfies \(K \leq p\), since the numerators of the successive residuals strictly decrease under the Fibonacci–Sylvester greedy step (Lemma 3.3).
Proposition 8.3 (Density for irrational targets). If \(\theta/\pi\) is irrational, then \(D(N,\theta) \to \infty\) as \(N \to \infty\), but the growth rate is far slower than \(\ln N\). The selected indices correspond to the denominators of an infinite Egyptian fraction representation, which grow at least quadratically at each step (analogous to Sylvester's sequence): if the selected indices are \(n_1 < n_2 < \cdots\), then eventually \(n_{k+1} + 2 > (n_k + 1)(n_k + 2)\), giving \[ D(N,\theta) = O(\log\log N). \]

After a selection at index \(n_k\) (with index \(n_k - 1\) unselected, which holds for all but possibly the first step), the residual satisfies \(r < \alpha_{n_k - 1} = \pi/(n_k + 1)\), so after subtracting \(\alpha_{n_k}\) the new residual is \(r' < \pi/(n_k+1) - \pi/(n_k+2) = \pi/[(n_k+1)(n_k+2)]\). The next selected index \(n_{k+1}\) satisfies \(\alpha_{n_{k+1}} \leq r'\), i.e., \(\pi/(n_{k+1}+2) \leq r'\), giving \(n_{k+1} + 2 > (n_k+1)(n_k+2)\). Quadratic growth of the denominators gives doubly exponential size, \(n_k + 2 \geq c^{2^k}\) for some \(c > 1\), so \(k = O(\log\log n_k)\) and hence \(D(N,\theta) = O(\log\log N)\).
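The rapid denominator growth can be observed directly. Since scanning indices one at a time is infeasible once denominators square, the snippet below (ours) uses the closed form \(d = \max(\lceil 1/r \rceil,\ d_{\mathrm{prev}} + 1)\) for the next selected denominator, working in units \(x = \theta/\pi\):

```python
from fractions import Fraction

def greedy_denominators(x, count):
    """First `count` greedy denominators d_k = n_k + 2 for target x = theta/pi.
    The next selected denominator is the smallest d with 1/d <= r that exceeds
    the previous one, i.e. d = max(ceil(1/r), prev + 1)."""
    r, prev, dens = Fraction(x), 1, []
    while len(dens) < count and r > 0:
        d = max(-(-r.denominator // r.numerator), prev + 1)   # ceil(1/r)
        dens.append(d)
        r -= Fraction(1, d)
        prev = d
    return dens

assert greedy_denominators(Fraction(1), 9) == [2, 3, 6]       # theta = pi
dens = greedy_denominators(Fraction(113, 355), 9)             # x close to 1/pi
assert dens == [4, 15, 609, 864780]                           # rational: terminates
for a, b in zip(dens, dens[1:]):
    assert b > a * (a - 1)        # at-least-quadratic growth of denominators
```

Four terms already push the denominators to \(864780\); an irrational target continues this doubly exponential escalation forever.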


9. Critical Gap Analysis: Why the v2 Strategy Fails

We now systematically examine how the three errors identified above propagate through the v2 argument, identifying which components can be salvaged and which are irreparably broken.

9.1. Cascade of errors from selection monotonicity

Gap 9.1 (Kernel framework collapses). The correlation kernel \(K(n,m) = \int_0^\pi [\delta_n(\theta) - \theta/\pi][\delta_m(\theta) - \theta/\pi]\,d\theta\) is still well-defined (as an integral of measurable functions). However, v2's computation of \(K(n,m)\) via step-function integrals at threshold angles (Lemmas 5.5–5.6) is invalid because \(\delta_n(\theta)\) is not the step function \(\mathbf{1}[\theta \geq \theta_n^*]\).

With the correct multi-interval structure from Section 6, the kernel can in principle be recomputed, but the resulting formulas are far more complex than those in v2, and the structural properties (positive semi-definiteness, asymptotic behaviour) require a new analysis.

Gap 9.2 (Coherence defect framework weakened). The normalised residual \(R(s,\theta) = \Phi(s,\theta) - (\theta/\pi)\Phi(s,\pi)\) is still well-defined, but since \(\Phi(s,\pi) \neq \zeta(s) - 1\), the boundary condition at \(\theta = \pi\) is: \[ R(s,\pi) = \Phi(s,\pi) - \Phi(s,\pi) = 0, \] still zero. But at a zero \(\rho\) of \(\zeta\): \[ R(\rho,\theta) = \Phi(\rho,\theta) - (\theta/\pi)\Phi(\rho,\pi), \] where \(\Phi(\rho,\pi) = 2^{-\rho} + 3^{-\rho} + 6^{-\rho} \neq -1\). So the key identity \(R(\rho,\theta) = \Phi(\rho,\theta) + \theta/\pi\) (v2 Lemma 6.2) no longer holds. The coherence defect no longer directly relates to the zero structure of \(\zeta(s)\).

9.2. Cascade of errors from the false Zeta Bridge

Gap 9.3 (No direct connection between \(\Phi\) and \(\zeta\) zeros). Since \(\Phi(s,\pi) \neq \zeta(s) - 1\), the function \(\Phi(s,\theta)\) does not interpolate between 0 and \(\zeta(s) - 1\). There is no direct mechanism by which the behaviour of \(\Phi\) at a zero \(\rho\) of \(\zeta\) encodes the location of \(\rho\). The zeros of \(\zeta\) are properties of the full Dirichlet series, not of the greedy sub-sum.
Gap 9.4 (Functional equation constraint is vacuous). The functional equation \(\zeta(s) = \chi(s)\zeta(1-s)\) constrains the full series, not partial sub-sums. Since \(\Phi(s,\pi) \neq \zeta(s) - 1\), there is no functional equation for \(\Phi(s,\theta)\) and hence no regularity constraint on \(R(\rho,\theta)\). Proposition 9.3 of v2 is unsupported.

9.3. Cascade from the density error

Gap 9.5 (Partial summation arguments fail). Lemma 4.2 of v2 (convergence of \(\Phi\) for \(\sigma > 0\)) used partial summation with \(D(N) = x\ln N + O(1)\). Since the true density is \(O(1)\) for rational targets (or \(O(\log\log N)\) for irrational targets), the partial summation gives much simpler convergence: for rational targets, \(\Phi(s,\theta)\) is a finite sum converging trivially for all \(s\). For irrational targets, the series converges absolutely for \(\sigma > 0\) by comparison with \(\sum n^{-\sigma}\) over super-exponentially growing indices.

9.4. Summary: what is rigorously established

Result | Status | Section
Harmonic reconstruction of sine | ✓ Correct | §2–3
Convergence rate \(O(1/N)\) | ✓ Correct | §3
Finite termination for rational targets | ✓ Correct (new in v3) | §3
Threshold identity \(\theta_n^* = \alpha_n\) | ✓ Correct | §4
Accumulated angle monotonicity | ✓ Correct | §5
Selection indicator monotonicity | ✗ False | §5
Multi-interval selection regions | ✓ Correct (new in v3) | §6
Splitting identity \(\Phi + \Omega = \zeta - 1\) | ✓ Correct | §7
Zeta Bridge \(\Phi(s,\pi) = \zeta(s) - 1\) | ✗ False | §7
Selection density \(D(N) \sim x\ln N\) | ✗ False | §8
Kernel computation (step-function integrals) | ✗ Invalid (depends on monotonicity) | §9
Coherence defect bounds (Propositions 8.1, 8.4) | ✗ Invalid (depends on kernel + Zeta Bridge) | §9
Functional equation constraint (Proposition 9.3) | ✗ Invalid (no functional equation for sub-sums) | §9
Main theorem (RH via contradiction) | Not established | §9

10. Possible Reformulations

The harmonic sine reconstruction produces a beautiful deterministic geometric algorithm with interesting connections to Egyptian fractions and unit-circle combinatorics. We identify three directions that might restore a viable link to \(\zeta\)-function theory.

10.1. Direct truncation parametrisation

Instead of the greedy selection, one could directly parametrise sub-sums of \(\zeta(s) - 1\) by truncation level: \[ \Psi(s, T) = \sum_{n=2}^{\lfloor T \rfloor} n^{-s}, \qquad T \geq 2. \] Then \(\Psi(s, T) \to \zeta(s) - 1\) as \(T \to \infty\) (for \(\sigma > 1\)), and the partial sums DO interpolate between 0 and \(\zeta(s) - 1\). The approximate functional equation provides well-understood estimates for \(\Psi(s, T)\). However, this is standard Dirichlet series theory with no new geometric content, and the connection to sine reconstruction is lost.
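A minimal numerical sketch of this truncation parametrisation (function name ours), showing \(\Psi(2,T) \to \zeta(2) - 1\) with the elementary tail bound \(\sum_{n > T} n^{-2} < 1/T\):

```python
import math

def psi(s, T):
    """Truncated Dirichlet series Psi(s, T) = sum_{n=2}^{floor(T)} n^(-s)."""
    return sum(n ** (-s) for n in range(2, int(T) + 1))

target = math.pi ** 2 / 6 - 1                    # zeta(2) - 1
for T in (10, 100, 1000):
    assert abs(psi(2.0, T) - target) < 1.0 / T   # tail bound: sum_{n>T} n^(-2) < 1/T
```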

10.2. Modified greedy algorithm with full coverage

The failure of the Zeta Bridge stems from the greedy algorithm's termination after finitely many selections. One could modify the algorithm to always select additional indices even after the residual reaches zero. For instance, define indicators \(\tilde{\delta}_n(\theta) = 1\) for all \(n \geq 0\), ignoring the greedy constraint. Then \(\tilde{\Phi}(s,\pi) = \zeta(s) - 1\) trivially, but the connection to the sine reconstruction (which is inherently greedy) is severed.

Alternatively, one could use a "perturbed greedy" algorithm that selects index \(n\) with probability depending on the residual, ensuring infinitely many selections even for rational targets. However, introducing randomness would require probabilistic methods and lose the deterministic character of the construction.

10.3. Egyptian fraction connection

The most natural mathematical content of the harmonic greedy decomposition is its connection to Egyptian fractions. For rational \(x = p/q\), the algorithm produces a representation \(p/q = 1/(n_1+2) + 1/(n_2+2) + \cdots + 1/(n_K+2)\) using the greedy (Fibonacci-Sylvester) strategy. The selected denominators \(\{n_k + 2\}\) carry number-theoretic information about \(p/q\).

One could study the "Egyptian fraction Dirichlet series" \(\Phi_{EF}(s, p/q) = \sum_{k=1}^{K}(n_k + 2)^{-s}\) and its behaviour as \(p/q\) varies. This is an interesting object in its own right (related to the Erdős-Straus conjecture and the distribution of Egyptian fraction denominators), but its connection to the Riemann zeta function, if any, is indirect and unexplored.

10.4. The splitting identity perspective

The splitting identity \(\Phi(s,\theta) + \Omega(s,\theta) = \zeta(s) - 1\) provides a genuine decomposition of \(\zeta(s) - 1\) into two sub-sums parametrised by \(\theta\). At a zero \(\rho\) of \(\zeta\): \[ \Phi(\rho,\theta) + \Omega(\rho,\theta) = -1, \] so \(\Omega(\rho,\theta) = -1 - \Phi(\rho,\theta)\). This identity constrains how the zero condition distributes across the selected and non-selected sub-sums, but it applies to any partition of the integers, not specifically to the greedy partition. The challenge is to identify a property unique to the greedy partition that provides information about \(\rho\).


11. Discussion

11.1. Comparison with v2

v2 Claim | v3 Status | Root Cause of Error
Selection monotonicity (Lemma 5.1) | False (Proposition 5.2) | Proof conflates accumulated-angle monotonicity with indicator monotonicity
Zeta Bridge (Proposition 4.3) | False (Error 7.2) | Greedy algorithm terminates finitely at \(\theta = \pi\); not all terms selected
Density \(D(N) \sim x\ln N\) (Lemma 3.3) | False (Error 8.1) | Algorithm gives Egyptian fraction decomposition with \(O(1)\) terms for rational targets
Kernel \(K(n,m)\) via step functions | Invalid | Based on false monotonicity
Coherence defect bounds | Invalid | Based on false kernel and false Zeta Bridge
Functional equation constraint | Invalid | No functional equation for sub-sums
Proof of RH via contradiction | Not established | All key ingredients are invalid

11.2. What went wrong: the monotonicity fallacy

The central error is the claim that the greedy selection indicators are monotone in the target angle. This seemed plausible because the accumulated angle IS monotone, and the proof in v2 attempted to derive indicator monotonicity from this. However, the residual \(r_n(\theta) = \theta - \theta_n(\theta)\) can decrease when \(\theta\) increases, because newly triggered earlier selections (at large \(\alpha_k\)) can consume more residual than the target increase provides.

This is a consequence of the greedy nature of the algorithm: it processes indices in order \(0, 1, 2, \ldots\), and the harmonic angles \(\alpha_0 > \alpha_1 > \alpha_2 > \cdots\) are decreasing. When the target \(\theta\) increases past a large threshold \(\alpha_k\) (e.g., past \(\pi/2\)), the algorithm selects the large angle \(\alpha_k\), potentially overshooting the "budget" allocated to later indices. This creates "shadow zones" where later indices are deselected despite the larger target.

11.3. What went wrong: the Zeta Bridge

The false identity \(\Phi(s,\pi) = \zeta(s) - 1\) arose from the incorrect belief that the greedy algorithm selects all indices at \(\theta = \pi\). In fact, the algorithm finds the Egyptian fraction decomposition \(1 = 1/2 + 1/3 + 1/6\) and terminates with zero residual after selecting only three indices. The connection between the harmonic sine reconstruction (which sums three terms to get \(\sin(\pi) = 0\)) and the Riemann zeta function (which sums all terms) does not exist in the way v2 claimed.

11.4. The role of Egyptian fractions

The greedy harmonic decomposition is fundamentally an Egyptian fraction algorithm, not a Dirichlet series construction. For rational targets, it produces the Fibonacci-Sylvester Egyptian fraction representation. For irrational targets, it produces an infinite Egyptian fraction expansion with super-exponentially growing denominators. The connection to the zeta function, if it exists, must pass through the number theory of Egyptian fractions rather than through Dirichlet series manipulation.

11.5. Lessons on mathematical rigour

The progression from v1 to v3 illustrates the importance of verifying even "obvious" claims with concrete examples. The monotonicity of the accumulated angle (\(\theta_n(\theta)\) non-decreasing in \(\theta\)) is a true and elegant result, but the leap to indicator monotonicity (\(\delta_n(\theta)\) non-decreasing) requires an additional step that fails. Similarly, the identity \(\sum \alpha_n = \infty\) does not imply that the greedy algorithm selects infinitely many terms at \(\theta = \pi\), because the algorithm can reach the target exactly with finitely many terms.

As noted in Geere (2026, "On Mathematical Truth"): rigour requires exhaustive deductive reasoning, not inductive pattern-matching. Each step must be verified in its own right, especially when the inductive argument involves a subtle interplay of quantities (like accumulated angle vs. residual) that move in the same direction globally but can diverge locally.


References