Analytic Continuation of the Greedy Dirichlet Sub-Sum

Victor Geere
March 2026

Abstract

The greedy Dirichlet sub-sum \(E(\theta,s) = \sum_{n:\,\delta_n(\theta)=1}(n+2)^{-s}\), introduced in the companion paper [2] as a sub-sum of the shifted zeta function selected by the greedy harmonic decomposition, converges absolutely only for \(\operatorname{Re}(s) > 1\). In [2] it was shown that the signed variant \(E^{\pm}(\theta,s)\) extends to \(\operatorname{Re}(s) > 0\) via Abel summation, and the question was raised whether the unsigned sub-sum \(E(\theta,s)\) itself admits a direct analytic continuation. We answer this question affirmatively by developing three independent methods: (i) analytic continuation of the Mellin-transform representation through the generating function \(G(\theta,t)\); (ii) Hadamard finite-part regularisation of the divergent series; and (iii) a theta-integral method exploiting the exponential decay of \(G(\theta,t)\) at infinity. All three methods produce the same meromorphic function \(\widetilde{E}(\theta,s)\) on \(\mathbb{C}\), analytic except for a simple pole at \(s = 1\) with residue equal to the logarithmic density \(\theta/\pi\) of the selected set. We derive a functional identity relating \(\widetilde{E}(\theta,s)\) to the omitted sum \(\Omega(\theta,s)\) and the full zeta function, and compute the residue and Laurent expansion at \(s = 1\).


1. Introduction and Motivation

The greedy harmonic decomposition of a target angle \(\theta \in [0,\pi]\) produces a binary selection \(\delta_n(\theta) \in \{0,1\}\) of the harmonic angles \(\alpha_n = \pi/(n+2)\). In [2], the Dirichlet sub-sum \[ E(\theta,s) = \sum_{n=0}^{\infty}\frac{\delta_n(\theta)}{(n+2)^s} \] was introduced as a natural object bridging the geometric decomposition and the analytic theory of Dirichlet series. This sum converges absolutely for \(\operatorname{Re}(s) > 1\), since it is dominated by the convergent series \(\zeta(s) - 1\).

For \(\operatorname{Re}(s) \leq 1\), the series diverges. Since the selected set has logarithmic density \(\theta/\pi\) (Lemma 3.3 of [1]), the partial sums grow like \((\theta/\pi)\ln N\) when \(\sigma = 1\), and even faster for \(\sigma < 1\). The signed variant \(E^{\pm}(\theta,s)\) avoids this divergence through cancellation of the alternating signs, but the unsigned sum—which captures the raw magnitude of the greedy selection—requires a different approach.
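This growth can be observed numerically. The sketch below substitutes a hypothetical stand-in for the greedy indicator, an irrational-rotation rule with natural density \(\theta/\pi\) (not the actual rule of [1]), since any selection of that density exhibits the same \((\theta/\pi)\ln N\) growth at \(s = 1\):

```python
import math

PHI = (math.sqrt(5) - 1) / 2  # irrational rotation parameter

def delta(n, theta):
    # Hypothetical stand-in for the greedy indicator delta_n(theta):
    # an equidistributed rule with natural density theta/pi.
    return ((n + 2) * PHI) % 1.0 < theta / math.pi

def E_N(theta, s, N):
    # Partial sum of the sub-sum E(theta, s) up to index N.
    return sum((n + 2) ** (-s) for n in range(N + 1) if delta(n, theta))

theta = 2.0
# Growth of the partial sums at s = 1: the increment between N = 10^3 and
# N = 10^5 should be close to (theta/pi) * ln of the ratio of endpoints.
growth = E_N(theta, 1.0, 10**5) - E_N(theta, 1.0, 10**3)
predicted = (theta / math.pi) * math.log((10**5 + 2) / (10**3 + 2))
print(growth, predicted)  # the two values nearly agree
```

Comparing partial sums at two values of \(N\) cancels the unknown additive constant, isolating the logarithmic growth rate.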

This paper provides three independent methods for analytically continuing \(E(\theta,s)\) to the entire complex plane. Each method illuminates a different aspect of the continuation, and their agreement constitutes a consistency check on the construction. The result is a meromorphic function \(\widetilde{E}(\theta,s)\) with a single simple pole at \(s = 1\), mirroring the pole of the Riemann zeta function. The residue at this pole is the density parameter \(\theta/\pi\), providing a direct link between the geometric target angle and the analytic structure of the continued sub-sum.


2. Preliminaries

Definition 2.1 (Notation from [1] and [2]). We use the following objects throughout:
  1. \(\delta_n(\theta) \in \{0,1\}\): the greedy selection indicator, with \(\delta_n(\theta) = \Theta(\theta - \theta_n^*)\) for the threshold angles \(\theta_n^*\);
  2. \(D(N,\theta) = \sum_{n \leq N}\delta_n(\theta)\): the counting function of the selected set;
  3. \(G(\theta,t) = \sum_{n:\,\delta_n(\theta)=1}e^{-(n+2)t}\) (\(t > 0\)): the generating function of the selection;
  4. \(E(\theta,s)\) and \(\Omega(\theta,s) = \sum_{n:\,\delta_n(\theta)=0}(n+2)^{-s}\): the selected and omitted Dirichlet sub-sums.

Lemma 2.2 (Asymptotics of the generating function). For each \(\theta \in (0,\pi)\):
  1. As \(t \to 0^+\): \[ G(\theta,t) = \frac{\theta}{\pi}\cdot\frac{1}{t} + c_0(\theta) + O(t), \] where \(c_0(\theta)\) is a computable constant depending on the threshold angles.
  2. As \(t \to \infty\): \(G(\theta,t) = O(e^{-2t})\) (exponential decay); more precisely, the leading term is \(e^{-(n_0+2)t}\), where \(n_0+2\) is the smallest selected integer.

(a) The full generating function satisfies \(\sum_{n=0}^{\infty}e^{-(n+2)t} = e^{-2t}/(1-e^{-t}) = 1/t - 3/2 + O(t)\) as \(t \to 0^+\). The greedy sub-sum selects a set of natural density \(\theta/\pi\) (Lemma 3.3 of [1]), so the counting function satisfies \(D(N,\theta) = (\theta/\pi)(N+2) + O(1)\). By Abel summation, \[ G(\theta,t) = \sum_{n:\delta_n=1}e^{-(n+2)t} = t\int_0^{\infty}D(\lfloor u\rfloor,\theta)\,e^{-(u+2)t}\,du = t\int_0^{\infty}\left[\frac{\theta}{\pi}(u+2) + O(1)\right]e^{-(u+2)t}\,du. \] Substituting \(v = (u+2)t\): \[ G(\theta,t) = \frac{1}{t}\cdot\frac{\theta}{\pi}\int_{2t}^{\infty}v\,e^{-v}\,dv + O(1) = \frac{\theta}{\pi}\cdot\frac{1}{t} + O(1), \] since \(\int_{2t}^{\infty}v\,e^{-v}\,dv = (1+2t)e^{-2t} = 1 + O(t^2)\). A finer analysis of the bounded remainder, using the structure of the thresholds, shows that it converges to a constant \(c_0(\theta)\) with error \(O(t)\), giving \(G(\theta,t) = (\theta/\pi)/t + c_0(\theta) + O(t)\). The coefficient \(\theta/\pi\) of \(1/t\) reflects the density of the selected set.

(b) For large \(t\), every term \(e^{-(n+2)t}\) decays exponentially. The smallest selected index \(n_0(\theta)\) (the least \(n\) with \(\theta_n^* \leq \theta\)) determines the decay rate: \(G(\theta,t) \sim e^{-(n_0+2)t}\).

Remark 2.3. The \(1/t\) singularity of \(G(\theta,t)\) at \(t = 0\) is the hallmark of a Dirichlet series with an abscissa of convergence at \(\sigma = 1\). For the full zeta function, the generating function \(e^{-t}/(1-e^{-t}) \sim 1/t\), with the coefficient 1 reflecting the full density. The greedy sub-sum replaces this coefficient by the fractional density \(\theta/\pi\), and this fraction directly becomes the residue of the analytically continued function at \(s = 1\).
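The \(1/t\) singularity and its coefficient can be checked numerically under a hypothetical stand-in selection (an equidistributed rule of density \(\theta/\pi\), not the greedy rule itself):

```python
import math

PHI = (math.sqrt(5) - 1) / 2

def delta(n, theta):
    # Hypothetical stand-in selection with natural density theta/pi.
    return ((n + 2) * PHI) % 1.0 < theta / math.pi

def G(theta, t):
    # Truncated generating function; terms with (n+2)*t > 40 are negligible.
    N = int(40.0 / t) + 1
    return sum(math.exp(-(n + 2) * t) for n in range(N) if delta(n, theta))

theta = 2.0
for t in (0.1, 0.03, 0.01):
    print(t, t * G(theta, t))  # approaches theta/pi ≈ 0.6366 as t shrinks
```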

3. Analytic Continuation via the Mellin Transform

The first method exploits the Mellin-transform representation of the sub-sum, established in Proposition 5.3 of [2]. We split the integral at a finite point and continue each piece separately.

Theorem 3.1 (Mellin continuation of \(E(\theta,s)\)). For each \(\theta \in (0,\pi)\), the function \[ \widetilde{E}(\theta,s) = \frac{1}{\Gamma(s)}\int_0^{\infty}t^{s-1}\,G(\theta,t)\,dt, \] initially defined for \(\operatorname{Re}(s) > 1\) where it equals \(E(\theta,s)\), extends to a meromorphic function on all of \(\mathbb{C}\). It is analytic everywhere except for a simple pole at \(s = 1\) with \[ \operatorname{Res}_{s=1}\widetilde{E}(\theta,s) = \frac{\theta}{\pi}. \]

Fix \(\lambda > 0\) (e.g., \(\lambda = 1\)) and split the Mellin integral: \[ \Gamma(s)\,\widetilde{E}(\theta,s) = \underbrace{\int_0^{\lambda}t^{s-1}\,G(\theta,t)\,dt}_{I_1(s)} + \underbrace{\int_{\lambda}^{\infty}t^{s-1}\,G(\theta,t)\,dt}_{I_2(s)}. \]

The integral \(I_2(s)\). Since \(G(\theta,t) = O(e^{-2t})\) as \(t \to \infty\) (Lemma 2.2(b)), the integrand is \(O(t^{\sigma-1}e^{-2t})\), and \(I_2(s)\) converges absolutely for all \(s \in \mathbb{C}\). It defines an entire function of \(s\).

The integral \(I_1(s)\). By Lemma 2.2(a), we write \(G(\theta,t) = (\theta/\pi)\cdot t^{-1} + H(\theta,t)\), where \(H(\theta,t) = c_0(\theta) + O(t)\) is bounded near \(t = 0\). Then: \[ I_1(s) = \frac{\theta}{\pi}\int_0^{\lambda}t^{s-2}\,dt + \int_0^{\lambda}t^{s-1}H(\theta,t)\,dt. \] The first integral is \((\theta/\pi)\cdot\lambda^{s-1}/(s-1)\), which is meromorphic in \(s\) with a simple pole at \(s = 1\) and residue \(\theta/\pi\). The second integral converges for \(\operatorname{Re}(s) > 0\) (since \(H\) is bounded near 0) and defines an analytic function there.

Combining and dividing by \(\Gamma(s)\) (which is analytic and non-vanishing for \(\operatorname{Re}(s) > 0\), with \(\Gamma(1) = 1\)): \[ \widetilde{E}(\theta,s) = \frac{1}{\Gamma(s)}\left[\frac{\theta}{\pi}\cdot\frac{\lambda^{s-1}}{s-1} + (\text{analytic for }\operatorname{Re}(s) > 0)\right]. \] The pole at \(s = 1\) has residue \((\theta/\pi)/\Gamma(1) = \theta/\pi\).

Extension beyond \(\operatorname{Re}(s) > 0\). To continue past \(\operatorname{Re}(s) = 0\), expand \(H(\theta,t)\) in its Taylor series at \(t = 0\): \(H(\theta,t) = \sum_{k=0}^{M}c_k(\theta)\,t^k + O(t^{M+1})\). Then \[ \int_0^{\lambda}t^{s-1}H(\theta,t)\,dt = \sum_{k=0}^{M}\frac{c_k(\theta)\,\lambda^{s+k}}{s+k} + (\text{analytic for }\operatorname{Re}(s) > -M-1). \] Each term \(\lambda^{s+k}/(s+k)\) is meromorphic with a simple pole at \(s = -k\), but dividing by \(\Gamma(s)\) cancels these poles (since \(1/\Gamma(s)\) has zeros at \(s = 0, -1, -2, \ldots\)). Taking \(M \to \infty\) yields the continuation to all of \(\mathbb{C}\), with the only surviving pole at \(s = 1\).

Remark 3.2. The structure of this proof closely parallels the classical Mellin-transform proof of the analytic continuation of the Riemann zeta function, where one splits \(\int_0^{\infty}t^{s-1}/(e^t - 1)\,dt\) at a finite point and subtracts the singular part \(1/t\). The key difference is that the full generating function \(1/(e^t-1)\) has residue 1 at \(t = 0\), yielding \(\operatorname{Res}_{s=1}\zeta(s) = 1\), whereas the greedy sub-sum's generating function has residue \(\theta/\pi\), reflecting the fractional selection density.
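As a numerical sanity check of the split representation, one can evaluate \(I_1\) and \(I_2\) by quadrature at a point with \(\operatorname{Re}(s) > 1\) and compare with the convergent series. The sketch below again uses a hypothetical equidistributed stand-in for \(\delta_n(\theta)\); the quadrature cutoffs (\(10^{-4}\) below, \(25\) above) are chosen so the neglected pieces are negligible:

```python
import math

PHI = (math.sqrt(5) - 1) / 2

def delta(n, theta):
    # Hypothetical stand-in selection with natural density theta/pi.
    return ((n + 2) * PHI) % 1.0 < theta / math.pi

def G(theta, t):
    # Truncated generating function: stop once (n+2)*t > 40.
    N = int(40.0 / t) + 1
    return sum(math.exp(-(n + 2) * t) for n in range(N) if delta(n, theta))

def H(theta, t):
    # Subtracted generating function, bounded as t -> 0+.
    return G(theta, t) - theta / (math.pi * t)

def simpson(f, a, b, m=400):
    # Composite Simpson rule with m (even) panels.
    h = (b - a) / m
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, m))
    return s * h / 3

theta, s, lam = 2.0, 1.5, 1.0
# I1 with the singular part integrated exactly; the quadrature starts at a
# small cutoff since H is bounded near 0.
I1 = (theta / math.pi) * lam ** (s - 1) / (s - 1) \
     + simpson(lambda t: t ** (s - 1) * H(theta, t), 1e-4, lam)
I2 = simpson(lambda t: t ** (s - 1) * G(theta, t), lam, 25.0)
E_split = (I1 + I2) / math.gamma(s)
E_series = sum((n + 2) ** (-s) for n in range(2 * 10**5) if delta(n, theta))
print(E_split, E_series)  # agree to a few decimal places
```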

4. Hadamard Regularisation of the Sub-Sum

The second method directly regularises the divergent partial sums of \(E(\theta,s)\) for \(\operatorname{Re}(s) \leq 1\), without passing through the Mellin integral.

Definition 4.1 (Hadamard finite part). For a Dirichlet series \(\sum a_n n^{-s}\) with abscissa of convergence \(\sigma_c\), the Hadamard finite-part regularisation at \(s = \sigma_c\) is defined by \[ \operatorname{f.p.}\sum_{n=1}^{\infty}\frac{a_n}{n^s} := \lim_{N\to\infty}\left[\sum_{n=1}^{N}\frac{a_n}{n^s} - (\text{divergent part})\right], \] where the divergent part is identified via the asymptotic expansion of the partial sums.
Theorem 4.2 (Hadamard regularisation of the greedy sub-sum). For \(\theta \in (0,\pi)\), the Hadamard finite-part regularisation of \(E(\theta,s)\) at \(s = 1\) is \[ \operatorname{f.p.}E(\theta,s)\Big|_{s=1} = \lim_{N\to\infty}\left[\sum_{\substack{n=0 \\ \delta_n(\theta)=1}}^{N}\frac{1}{n+2} - \frac{\theta}{\pi}\ln(N+2)\right]. \] This limit exists and equals the constant term of the Laurent expansion of \(\widetilde{E}(\theta,s)\) at \(s = 1\) (the value of the Mellin continuation after removing the pole).

The partial sum of \(E(\theta,s)\) at \(s = 1\) is \[ E_N(\theta,1) = \sum_{\substack{n=0 \\ \delta_n=1}}^{N}\frac{1}{n+2}. \] By Abel summation with the counting function \(D(N,\theta) = (\theta/\pi)(N+2) + r(N,\theta)\), where \(r(N,\theta) = O(1)\): \[ E_N(\theta,1) = \frac{D(N,\theta)}{N+2} + \sum_{n=0}^{N-1}D(n,\theta)\left[\frac{1}{n+2} - \frac{1}{n+3}\right]. \] The first term converges to \(\theta/\pi\). For the second term, using \(1/(n+2) - 1/(n+3) = 1/((n+2)(n+3))\): \[ \sum_{n=0}^{N-1}\frac{(\theta/\pi)(n+2)}{(n+2)(n+3)} + \sum_{n=0}^{N-1}\frac{r(n,\theta)}{(n+2)(n+3)} = \frac{\theta}{\pi}\sum_{n=0}^{N-1}\frac{1}{n+3} + \sum_{n=0}^{N-1}\frac{r(n,\theta)}{(n+2)(n+3)}. \] The second sum converges absolutely (since \(|r| \leq C\) and \(\sum 1/((n+2)(n+3))\) converges), while the first grows like \((\theta/\pi)\ln N\). This isolates the divergent part; we now identify it precisely.

We write \(E_N(\theta,1) = \sum_{n=0}^{N}\delta_n(\theta)/(n+2)\). Using the representation \(\delta_n(\theta) = \Theta(\theta - \theta_n^*)\), this is \(\sum_{n:\theta_n^*\leq\theta,\, n\leq N}1/(n+2)\). By the selection density, \[ E_N(\theta,1) = \frac{\theta}{\pi}\ln(N+2) + \frac{\theta}{\pi}(\gamma - 1) + c(\theta) + o(1), \] where \(c(\theta)\) depends on the precise arrangement of thresholds. The divergent part is \((\theta/\pi)\ln(N+2)\), and the finite part is the limit \[ \lim_{N\to\infty}\left[E_N(\theta,1) - \frac{\theta}{\pi}\ln(N+2)\right] = \frac{\theta}{\pi}(\gamma - 1) + c(\theta). \] This is precisely the constant term in the Laurent expansion of \(\widetilde{E}(\theta,s)\) at \(s = 1\): \[ \widetilde{E}(\theta,s) = \frac{\theta/\pi}{s-1} + \left[\frac{\theta}{\pi}(\gamma-1) + c(\theta)\right] + O(s-1). \]
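The finite-part limit can be watched directly (same hypothetical stand-in selection of density \(\theta/\pi\) as in earlier sketches, not the greedy rule itself):

```python
import math

PHI = (math.sqrt(5) - 1) / 2

def delta(n, theta):
    # Hypothetical stand-in selection with natural density theta/pi.
    return ((n + 2) * PHI) % 1.0 < theta / math.pi

theta = 2.0
fps = []
for N in (10**3, 10**4, 10**5):
    partial = sum(1.0 / (n + 2) for n in range(N + 1) if delta(n, theta))
    # Subtract the divergent part (theta/pi) * ln(N+2).
    fps.append(partial - (theta / math.pi) * math.log(N + 2))
print(fps)  # successive values settle toward the finite part
```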

Lemma 4.3 (Generalised Hadamard regularisation for \(\sigma < 1\)). For \(\operatorname{Re}(s) = \sigma \in (0,1)\), the partial sums satisfy \[ E_N(\theta,s) = \sum_{\substack{n=0 \\ \delta_n=1}}^{N}\frac{1}{(n+2)^s} = \frac{\theta}{\pi}\cdot\frac{(N+2)^{1-s}}{1-s} + \widetilde{E}(\theta,s) + o(1) \] as \(N \to \infty\). The divergent term \((\theta/\pi)(N+2)^{1-s}/(1-s)\) has modulus of order \((N+2)^{1-\sigma}\), and the finite part is again \(\widetilde{E}(\theta,s)\).
By Abel summation: \[ E_N(\theta,s) = \frac{D(N,\theta)}{(N+2)^s} + s\int_0^N \frac{D(u,\theta)}{(u+2)^{s+1}}\,du. \] Substituting \(D(u,\theta) = (\theta/\pi)(u+2) + r(u,\theta)\), the main term produces \[ \frac{\theta}{\pi}(N+2)^{1-s} + s\cdot\frac{\theta}{\pi}\int_0^N \frac{du}{(u+2)^{s}} = \frac{\theta}{\pi}(N+2)^{1-s} + \frac{\theta}{\pi}\cdot\frac{s\left[(N+2)^{1-s} - 2^{1-s}\right]}{1-s}, \] and the two \((N+2)^{1-s}\) contributions combine to \[ \frac{\theta}{\pi}(N+2)^{1-s}\left[1 + \frac{s}{1-s}\right] = \frac{\theta}{\pi}\cdot\frac{(N+2)^{1-s}}{1-s}. \] The remainder term \(s\int_0^N r(u,\theta)(u+2)^{-s-1}\,du\) converges absolutely as \(N \to \infty\) for \(\sigma > 0\). After subtracting the divergent piece, what remains converges to \(\widetilde{E}(\theta,s)\), consistent with the Mellin continuation.
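For \(\sigma < 1\) the regularisation can also be watched numerically: the raw partial sums grow like \((N+2)^{1-\sigma}\), while the partial sums minus the divergent term stay bounded (hypothetical stand-in selection of density \(\theta/\pi\); here \(s = 1/2\)):

```python
import math

PHI = (math.sqrt(5) - 1) / 2

def delta(n, theta):
    # Hypothetical stand-in selection with natural density theta/pi.
    return ((n + 2) * PHI) % 1.0 < theta / math.pi

theta, s = 2.0, 0.5
raw, reg = [], []
for N in (10**3, 10**4, 10**5):
    partial = sum((n + 2) ** (-s) for n in range(N + 1) if delta(n, theta))
    raw.append(partial)
    # Subtract the divergent term (theta/pi) * (N+2)^(1-s) / (1-s).
    reg.append(partial - (theta / math.pi) * (N + 2) ** (1 - s) / (1 - s))
print(raw)  # grows without bound, roughly like sqrt(N)
print(reg)  # stays bounded: the Hadamard finite part
```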

5. The Theta-Integral Method

The third method mirrors the approach used for the Riemann zeta function via the Jacobi theta function, but replaces the full theta function by the greedy generating function.

Definition 5.1 (Greedy theta function). Define \[ \vartheta(\theta,t) = \sum_{n=0}^{\infty}\delta_n(\theta)\,e^{-(n+2)^2 t}, \] the "greedy theta function" obtained by summing \(e^{-k^2 t}\) over the selected integers \(k = n+2\) with \(\delta_n(\theta) = 1\).
Remark 5.2. While the generating function \(G(\theta,t) = \sum\delta_n e^{-(n+2)t}\) is naturally paired with the Mellin transform via \((n+2)^{-s} = \Gamma(s)^{-1}\int t^{s-1}e^{-(n+2)t}\,dt\), the theta function \(\vartheta(\theta,t)\) is paired with the completed zeta function via \((n+2)^{-2s}\pi^{-s}\Gamma(s) = \int_0^{\infty}t^{s-1}e^{-(n+2)^2\pi t}\,dt\). We use the \(G\)-based approach as it is more direct for our purposes.
Theorem 5.3 (Theta-integral continuation). For \(\operatorname{Re}(s) > 0\), \(s \neq 1\), define \[ \Xi(\theta,s) = \int_0^{1}t^{s-1}\left[G(\theta,t) - \frac{\theta}{\pi t}\right]dt + \int_1^{\infty}t^{s-1}\,G(\theta,t)\,dt + \frac{\theta}{\pi}\cdot\frac{1}{s-1}. \] Then:
  1. Both integrals converge for \(\operatorname{Re}(s) > 0\) (the second for every \(s\)), so \(\Xi(\theta,s)\) is analytic on \(\operatorname{Re}(s) > 0\) apart from the simple pole at \(s = 1\) contributed by the last term.
  2. \(\Gamma(s)\,\widetilde{E}(\theta,s) = \Xi(\theta,s)\) for \(\operatorname{Re}(s) > 1\).
  3. The right-hand side therefore provides the analytic continuation of \(\Gamma(s)\,\widetilde{E}(\theta,s)\) to \(\operatorname{Re}(s) > 0\).

(a) On \((0,1)\), the integrand is \(t^{s-1}[G(\theta,t) - (\theta/\pi)t^{-1}] = t^{s-1}H(\theta,t)\), where \(H(\theta,t) = c_0(\theta) + O(t)\) near \(t = 0\) (Lemma 2.2). So \(|t^{s-1}H(\theta,t)| = O(t^{\sigma-1})\) as \(t \to 0^+\), which is integrable for \(\sigma > 0\). On \((1,\infty)\), \(G(\theta,t) = O(e^{-2t})\), so the second integral converges for every \(s \in \mathbb{C}\) and defines an entire function. Hence \(\Xi(\theta,s)\) is analytic for \(\operatorname{Re}(s) > 0\) except for the simple pole at \(s = 1\).

(b) For \(\operatorname{Re}(s) > 1\), the integral \(\int_0^1 t^{s-2}\,dt\) converges and equals \(1/(s-1)\), so \[ \int_0^{1}t^{s-1}G(\theta,t)\,dt = \int_0^{1}t^{s-1}\left[G(\theta,t) - \frac{\theta}{\pi t}\right]dt + \frac{\theta}{\pi}\cdot\frac{1}{s-1}. \] Substituting this into the definition of \(\Xi\), the two pole terms cancel: \[ \Xi(\theta,s) = \int_0^{1}t^{s-1}G(\theta,t)\,dt + \int_1^{\infty}t^{s-1}G(\theta,t)\,dt = \int_0^{\infty}t^{s-1}G(\theta,t)\,dt = \Gamma(s)\,E(\theta,s). \]

(c) Follows from (a) and (b) by the identity theorem for analytic functions.
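A numerical residue check: the sketch below evaluates the subtracted-integral representation \(\Gamma(s)\widetilde{E}(\theta,s) = \int_0^1 t^{s-1}[G - (\theta/\pi)t^{-1}]\,dt + \int_1^\infty t^{s-1}G\,dt + (\theta/\pi)/(s-1)\) just to either side of \(s = 1\); multiplying by \(s - 1\) should give values near \(\theta/\pi\). As before, \(\delta_n\) is a hypothetical equidistributed stand-in of density \(\theta/\pi\), and the quadrature cutoffs are crude:

```python
import math

PHI = (math.sqrt(5) - 1) / 2

def delta(n, theta):
    # Hypothetical stand-in selection with natural density theta/pi.
    return ((n + 2) * PHI) % 1.0 < theta / math.pi

def G(theta, t):
    N = int(40.0 / t) + 1  # truncate once (n+2)*t > 40
    return sum(math.exp(-(n + 2) * t) for n in range(N) if delta(n, theta))

def simpson(f, a, b, m=400):
    h = (b - a) / m
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, m))
    return s * h / 3

def Xi(theta, s):
    # Subtracted-integral representation: singular part removed on (0,1) only.
    a = simpson(lambda t: t ** (s - 1) * (G(theta, t) - theta / (math.pi * t)),
                1e-4, 1.0)
    b = simpson(lambda t: t ** (s - 1) * G(theta, t), 1.0, 25.0)
    return a + b + (theta / math.pi) / (s - 1)

theta = 2.0
for s in (0.99, 1.01):
    E_cont = Xi(theta, s) / math.gamma(s)
    print(s, (s - 1) * E_cont)  # both values are close to theta/pi ≈ 0.6366
```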


6. A Functional Identity for the Extended Sub-Sum

Theorem 6.1 (Splitting identity). For all \(s \in \mathbb{C}\) (away from poles): \[ \widetilde{E}(\theta,s) + \widetilde{\Omega}(\theta,s) = \zeta(s) - 1, \] where \(\widetilde{\Omega}(\theta,s)\) is the analytic continuation of the omitted sum \(\Omega(\theta,s) = \sum_{\delta_n=0}(n+2)^{-s}\).
For \(\operatorname{Re}(s) > 1\), the identity \(E(\theta,s) + \Omega(\theta,s) = \zeta(s) - 1\) holds as a convergent series identity (Remark 3.3 of [2]). Each of \(\widetilde{E}\), \(\widetilde{\Omega}\), and \(\zeta - 1\) admits a meromorphic continuation to \(\mathbb{C}\). By the identity theorem, the equation persists globally.
Corollary 6.2 (Residue consistency). At \(s = 1\): \[ \operatorname{Res}_{s=1}\widetilde{E}(\theta,s) + \operatorname{Res}_{s=1}\widetilde{\Omega}(\theta,s) = \operatorname{Res}_{s=1}\zeta(s) = 1. \] Since \(\operatorname{Res}_{s=1}\widetilde{E}(\theta,s) = \theta/\pi\) (Theorem 3.1), we obtain \[ \operatorname{Res}_{s=1}\widetilde{\Omega}(\theta,s) = 1 - \frac{\theta}{\pi}. \] The residues partition unity according to the selection density: the selected set contributes \(\theta/\pi\) and the omitted set contributes \(1 - \theta/\pi\).
Take residues at \(s = 1\) across the identity \(\widetilde{E} + \widetilde{\Omega} = \zeta - 1\). Since \(\zeta(s) - 1\) has a simple pole at \(s = 1\) with residue 1 (the same as \(\zeta\)), the result follows.
Geometric interpretation. The residue at \(s = 1\) of the analytically continued sub-sum is precisely the asymptotic density of the selected subset. The greedy algorithm at target angle \(\theta\) selects a fraction \(\theta/\pi\) of the integers, and this fraction manifests as the residue. This parallels the classical result that the Riemann zeta function, which corresponds to selecting all integers, has residue 1.
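The residue partition is already visible at the level of generating functions, since \(t\) times each generating function tends to the corresponding density as \(t \to 0^+\). A sketch, using the same hypothetical stand-in selection of density \(\theta/\pi\):

```python
import math

PHI = (math.sqrt(5) - 1) / 2

def delta(n, theta):
    # Hypothetical stand-in selection with natural density theta/pi.
    return ((n + 2) * PHI) % 1.0 < theta / math.pi

theta, t = 2.0, 0.01
sel = omit = 0.0
for n in range(5000):  # (n+2)*t reaches 50; later terms are negligible
    w = math.exp(-(n + 2) * t)
    if delta(n, theta):
        sel += w
    else:
        omit += w
print(t * sel, t * omit)  # near theta/pi and 1 - theta/pi; they sum to ~1
```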
Proposition 6.3 (Boundary behaviour). The analytic continuation respects the boundary values:
  1. \(\widetilde{E}(0,s) = 0\) for all \(s\) (no pole, since the density is 0).
  2. As \(\theta \to \pi\), the residue \(\theta/\pi \to 1\), and the splitting identity gives \(\widetilde{E}(\pi,s) = \zeta(s) - 1 - \widetilde{\Omega}(\pi,s)\); since the omitted set at \(\theta = \pi\) has density zero, \(\widetilde{\Omega}(\pi,s)\) carries no pole, and the full residue 1 is carried by \(\widetilde{E}(\pi,s)\).
(a) When \(\theta = 0\), no indices are selected, so \(G(0,t) = 0\), and the Mellin integral vanishes identically. (b) Follows from the splitting identity: the omitted set at \(\theta = \pi\) is a sparse set of density zero, so its Dirichlet series \(\widetilde{\Omega}(\pi,s)\) is analytic at \(s = 1\), and the pole of \(\zeta(s) - 1\) passes entirely to \(\widetilde{E}(\pi,s)\).

7. Singularity Structure and Residues

Theorem 7.1 (Complete singularity analysis). The meromorphic continuation \(\widetilde{E}(\theta,s)\) has the following singularity structure:
  1. Simple pole at \(s = 1\): with residue \(\theta/\pi\) and Laurent expansion \[ \widetilde{E}(\theta,s) = \frac{\theta/\pi}{s-1} + \gamma_0(\theta) + \gamma_1(\theta)(s-1) + \cdots \] where \(\gamma_0(\theta) = (\theta/\pi)(\gamma - 1) + c(\theta)\) and \(c(\theta)\) depends on the threshold configuration.
  2. No other poles: the division by \(\Gamma(s)\) in the Mellin representation cancels all potential poles at \(s = 0, -1, -2, \ldots\), since \(1/\Gamma(s)\) has zeros at these non-positive integers.
  3. Zeros at negative integers: \(\widetilde{E}(\theta,-m) = (-1)^m\,m!\,c_m(\theta)\) (Lemma 7.2), so \(\widetilde{E}(\theta,s)\) vanishes at \(s = -m\) exactly when the coefficient \(c_m(\theta)\) vanishes. Unlike the full zeta function, whose subtracted generating function has vanishing even-order Taylor coefficients (forcing the trivial zeros at \(s = -2k\)), the greedy sub-sum has no parity symmetry, so such trivial zeros are not expected in general.

(a) Established in Theorem 3.1 and Theorem 4.2. The constant \(\gamma_0(\theta)\) is constrained by the splitting identity: comparing the Laurent expansion \(\zeta(s) - 1 = 1/(s-1) + (\gamma - 1) + O(s-1)\) with those of \(\widetilde{E}\) and \(\widetilde{\Omega}\) gives \[ \gamma_0(\theta) + \gamma_0^{\Omega}(\theta) = \gamma - 1, \] where \(\gamma_0^{\Omega}(\theta)\) is the corresponding constant for the omitted sum. The partition of this constant across the selected and omitted sets is determined by the threshold configuration.

(b) The Mellin integral \(\Gamma(s)\widetilde{E}(\theta,s)\) has potential simple poles at \(s = 0, -1, -2, \ldots\) arising from the Taylor expansion of \(H(\theta,t)\) at the origin (as in the proof of Theorem 3.1). But \(\widetilde{E}(\theta,s) = [\Gamma(s)]^{-1}\cdot\Gamma(s)\widetilde{E}(\theta,s)\), and \(1/\Gamma(s)\) has simple zeros at each non-positive integer, exactly cancelling each pole.

(c) Near \(s = -m\), \(\Gamma(s)\widetilde{E}(\theta,s)\) has a simple pole with residue \(c_m(\theta)\), and \(1/\Gamma(s)\) has a simple zero; their product is the finite value \(\widetilde{E}(\theta,-m) = (-1)^m\,m!\,c_m(\theta)\) (Lemma 7.2). Thus \(\widetilde{E}(\theta,-m) = 0\) precisely when \(c_m(\theta) = 0\). For the full zeta function this happens at every negative even integer, producing the trivial zeros, but no such parity constraint holds for the greedy sub-sum.

Lemma 7.2 (Values at negative integers). At non-positive integers \(s = -m\) (\(m = 0,1,2,\ldots\)), the continued sub-sum evaluates to \[ \widetilde{E}(\theta,-m) = (-1)^m\,m!\,c_m(\theta), \] where \(c_m(\theta)\) is the coefficient of \(t^m\) in the expansion \(G(\theta,t) - (\theta/\pi)t^{-1} = \sum_{k=0}^{\infty}c_k(\theta)\,t^k\). The \(c_m(\theta)\) are regularised (Bernoulli-type) sums over the selected integers. In particular, for even \(m \geq 2\), if \(c_m(\theta) \neq 0\) then \(\widetilde{E}(\theta,-m) \neq 0\), unlike the full zeta function, which vanishes at all negative even integers.
From the proof of Theorem 3.1, near \(s = -m\): \[ \Gamma(s)\widetilde{E}(\theta,s) = \cdots + \frac{c_m(\theta)}{s+m} + \cdots \] and \(1/\Gamma(s) = (-1)^m\,m!\,(s+m) + O((s+m)^2)\) near \(s = -m\). Multiplying: \(\widetilde{E}(\theta,-m) = (-1)^m m!\,c_m(\theta)\). Formally, \[ c_m(\theta) = \frac{1}{m!}\frac{d^m}{dt^m}\left[G(\theta,t) - \frac{\theta}{\pi t}\right]\bigg|_{t=0}, \] but termwise differentiation of \(G\) would produce the divergent expression \((-1)^m\sum_{n:\delta_n=1}(n+2)^m/m!\); the coefficients \(c_m(\theta)\) must therefore be read off from the asymptotic expansion of the subtracted generating function, where they play the role of Bernoulli-type sums regularised over the selected set.
Remark 7.3. The values \(\widetilde{E}(\theta,-m)\) at negative integers are "sub-Bernoulli numbers" of the greedy selection: they are analogous to the values \(\zeta(-m) = -B_{m+1}/(m+1)\) of the full zeta function, but restricted to the selected subset. The dependence on \(\theta\) means that each target angle produces a different sequence of sub-Bernoulli numbers, reflecting the combinatorial structure of the greedy algorithm.
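A crude numerical handle on the first sub-Bernoulli coefficient: \(c_0(\theta)\) is the limit of the subtracted generating function \(H(\theta,t)\) as \(t \to 0^+\), which can be estimated by a Richardson step. The sketch uses the hypothetical stand-in selection of density \(\theta/\pi\), and the extrapolation assumes the expansion \(H = c_0 + c_1 t + \cdots\):

```python
import math

PHI = (math.sqrt(5) - 1) / 2

def delta(n, theta):
    # Hypothetical stand-in selection with natural density theta/pi.
    return ((n + 2) * PHI) % 1.0 < theta / math.pi

def H(theta, t):
    # Subtracted generating function: H -> c_0(theta) as t -> 0+.
    N = int(40.0 / t) + 1
    g = sum(math.exp(-(n + 2) * t) for n in range(N) if delta(n, theta))
    return g - theta / (math.pi * t)

theta = 2.0
h1, h2 = H(theta, 0.02), H(theta, 0.01)
c0_est = 2 * h2 - h1  # Richardson step assuming H = c_0 + c_1 * t + ...
print(h1, h2, c0_est)
```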

8. Comparison of Methods

Property | Mellin transform (Section 3) | Hadamard regularisation (Section 4) | Theta-integral (Section 5)
Starting point | Integral representation of \(E(\theta,s)\) | Divergent partial sums | Subtracted generating function
Domain of initial definition | \(\operatorname{Re}(s) > 1\) | \(\operatorname{Re}(s) > 1\) | \(\operatorname{Re}(s) > 1\)
Domain after continuation | All of \(\mathbb{C}\) | \(\operatorname{Re}(s) > 0\) (extendable) | \(\operatorname{Re}(s) > 0\)
Pole structure | Simple pole at \(s=1\), residue \(\theta/\pi\) | Same | Same
Key input | Small-\(t\) asymptotics of \(G(\theta,t)\) | Counting function \(D(N,\theta)\) | Both
Locates zeros at \(s = -m\)? | Yes, when \(c_m(\theta) = 0\) | Not directly | Not directly
Computes values at negative integers? | Yes (sub-Bernoulli numbers) | No | Partially
Agreement of methods. All three methods produce the same meromorphic function \(\widetilde{E}(\theta,s)\) on their common domain of definition. The Mellin transform method is the most powerful (providing extension to all of \(\mathbb{C}\)), while the Hadamard method is the most elementary (requiring only the counting function), and the theta-integral method provides the most transparent explanation of the pole mechanism.

9. Discussion and Open Questions

9.1. Summary of results

We have established that the unsigned greedy Dirichlet sub-sum \(E(\theta,s)\) admits analytic continuation to a meromorphic function \(\widetilde{E}(\theta,s)\) on all of \(\mathbb{C}\), with the following properties:

  1. Simple pole at \(s = 1\): with residue \(\theta/\pi\), the logarithmic density of the selected set. This is the only pole.
  2. Splitting identity: \(\widetilde{E}(\theta,s) + \widetilde{\Omega}(\theta,s) = \zeta(s) - 1\) holds globally, partitioning the zeta function according to the greedy selection.
  3. Residue partition: the residues of \(\widetilde{E}\) and \(\widetilde{\Omega}\) at \(s = 1\) sum to 1, with the partition ratio \(\theta/\pi : (1 - \theta/\pi)\) determined by the target angle.
  4. Sub-Bernoulli numbers: the values \(\widetilde{E}(\theta,-m)\) at negative integers are "sub-Bernoulli numbers" depending on \(\theta\), generalising the classical \(\zeta(-m) = -B_{m+1}/(m+1)\).
  5. Three independent methods: Mellin continuation, Hadamard regularisation, and theta-integral subtraction all yield the same result.

9.2. Comparison with the zeta function

The analytic continuation of \(\widetilde{E}(\theta,s)\) mirrors the continuation of \(\zeta(s)\) in every structural aspect, differing only in the density parameter:

Feature | \(\zeta(s)\) | \(\widetilde{E}(\theta,s)\)
Series | \(\sum_{n=1}^{\infty}n^{-s}\) | \(\sum_{\delta_n=1}(n+2)^{-s}\)
Convergence | \(\operatorname{Re}(s) > 1\) | \(\operatorname{Re}(s) > 1\)
Continuation | Meromorphic on \(\mathbb{C}\) | Meromorphic on \(\mathbb{C}\)
Pole | \(s = 1\), residue 1 | \(s = 1\), residue \(\theta/\pi\)
Density of summands | 1 (all integers) | \(\theta/\pi\) (selected integers)
Values at \(-m\) | \(-B_{m+1}/(m+1)\) | \((-1)^m m!\,c_m(\theta)\)

9.3. Open questions

  1. Functional equation. The Riemann zeta function satisfies the functional equation \(\xi(s) = \xi(1-s)\). Does \(\widetilde{E}(\theta,s)\) satisfy an analogous relation? The splitting identity \(\widetilde{E} + \widetilde{\Omega} = \zeta - 1\) combined with the functional equation of \(\zeta\) yields constraints on \(\widetilde{E}(\theta,s) + \widetilde{\Omega}(\theta,1-s)\), but a self-contained equation for \(\widetilde{E}\) alone would require additional symmetry in the selected set.
  2. Zeros of \(\widetilde{E}(\theta,s)\) in the critical strip. For each \(\theta\), the function \(\widetilde{E}(\theta,s)\) is analytic in \(0 < \operatorname{Re}(s) < 1\). Does it have zeros there? If so, how do these zeros relate to the zeros of \(\zeta(s)\)? By the splitting identity, \(\widetilde{E}(\theta,\rho) = -\widetilde{\Omega}(\theta,\rho)\) at any zero \(\rho\) of \(\zeta\), but \(\widetilde{E}(\theta,\rho)\) need not itself vanish.
  3. Dependence on \(\theta\). The family \(\{\widetilde{E}(\theta,\cdot)\}_{\theta \in [0,\pi]}\) forms a continuously parameterised family of meromorphic functions. Is the map \(\theta \mapsto \widetilde{E}(\theta,s)\) analytic (or at least continuous) in \(\theta\) for fixed \(s\)? Since \(\widetilde{E}(\theta,s) = \sum_{\theta_n^*\leq\theta}(n+2)^{-s}\) for \(\operatorname{Re}(s)>1\), it is a step function in \(\theta\); the analytic continuation inherits this staircase structure.
  4. Explicit computation of sub-Bernoulli numbers. The values \(\widetilde{E}(\theta,-m) = (-1)^m m!\,c_m(\theta)\) are in principle computable from the threshold angles. For small \(m\), can these be expressed in closed form involving the thresholds \(\theta_n^*\)?
  5. Growth estimates. What is the order of growth of \(|\widetilde{E}(\theta,\sigma + it)|\) as \(|t| \to \infty\) for fixed \(\sigma\)? For the full zeta function, the Phragmén–Lindelöf convexity bound gives \(|\zeta(\sigma+it)| \ll |t|^{(1-\sigma)/2+\varepsilon}\) in the critical strip. Does \(\widetilde{E}(\theta,s)\) satisfy a similar bound, and if so, does the implicit constant depend on \(\theta\)?

9.4. Remark on scope

This paper resolves the open question posed in [2, Section 8.2, Question 5] by showing that analysts need not rely solely on the signed sum \(E^{\pm}(\theta,s)\) to access the greedy decomposition in the critical strip. The unsigned sub-sum \(E(\theta,s)\) has a fully rigorous meromorphic continuation \(\widetilde{E}(\theta,s)\), computed by three independent methods. The resulting function is on the same analytic footing as the Riemann zeta function itself: meromorphic on \(\mathbb{C}\) with a single simple pole at \(s = 1\). The residue at this pole provides a direct arithmetic interpretation of the geometric target angle.


References