
The prediction error for millisecond pulsar timing residuals is bounded below by stochastic processes (spin noise, the gravitational wave background, interstellar medium turbulence) that preclude deterministic prediction to better than 50 nanoseconds at horizons exceeding 1 hour, establishing a hard physical limit on pre-epoch key derivation in pulsar-anchored time-lock encryption.

Computer Science · Apr 12, 2026 · Evaluation Score: 42%

Adversarial Debate Score

42% survival rate under critique

Model Critiques

openai: The hypothesis is falsifiable and grounded in known stochastic processes affecting pulsar timing, but the cited papers do not directly support its astrophysical or cryptographic claims; moreover, ongoing improvements in instrumentation and modeling may challenge the asserted 50 ns "hard limit," s...
grok: The hypothesis is falsifiable in principle through empirical testing of pulsar timing residuals, but it lacks direct support from the provided papers, which focus on unrelated quantum and biological topics rather than pulsar timing or time-lock encryption. Additionally, obvious counterarguments e...
mistral: The hypothesis is falsifiable and grounded in known stochastic processes, but the 50 ns limit lacks direct empirical support in the cited papers, and deterministic modeling advances could challenge it.
anthropic: The hypothesis makes specific, technically-detailed claims about pulsar timing noise floors and cryptographic implications, but none of the provided papers are remotely relevant to millisecond pulsars, pulsar timing arrays, interstellar medium physics, or time-lock encryption — making it impossib...

Formal Verification

Z3 logical consistency: ✅ Consistent

Z3 checks whether the hypothesis is internally consistent, not whether it is empirically true.

Experimental Validation Package

This discovery has a Claude-generated validation package with a full experimental design.

Precise Hypothesis

For millisecond pulsars (MSPs) with spin periods P < 10 ms and timing baselines > 1 year, the root-mean-square (RMS) timing residual prediction error σ_pred satisfies σ_pred ≥ 50 ns for all prediction horizons Δt > 3600 s (1 hour), due to irreducible stochastic contributions from: (1) spin noise with power spectral density S_spin(f) ∝ f^{-α}, α ∈ [2,6]; (2) gravitational wave background (GWB) strain h_c ~ 10^{-15} at f = 1/yr; and (3) interstellar medium (ISM) dispersion measure (DM) variations with ΔDM_RMS ~ 10^{-4} to 10^{-3} pc cm^{-3} yr^{-1}. This 50 ns floor constitutes a hard physical limit preventing deterministic pre-epoch key derivation in any pulsar-anchored time-lock cryptographic scheme with security parameter λ ≥ 128 bits.
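
For scale, the ISM term alone can exceed the claimed floor: the standard cold-plasma dispersion delay is t_DM ≈ 4.149 ms × (DM / pc cm^{-3}) × (f / GHz)^{-2}, so even the low end of the quoted ΔDM_RMS shifts 1.4 GHz TOAs by roughly 200 ns per year of accumulated DM drift. A quick check (the dispersion constant is standard; the chosen numbers are illustrative):

K_DM = 4.149e-3   # s GHz^2 pc^-1 cm^3 (standard dispersion constant)
dDM  = 1e-4       # pc cm^-3, low end of the quoted ΔDM_RMS range
f    = 1.4        # GHz
dt_ns = K_DM * dDM / f**2 * 1e9
print(f"{dt_ns:.0f} ns")  # ~212 ns, i.e. > 4x the 50 ns floor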

Disproof criteria:
  1. STRONG DISPROOF: Demonstration that ≥ 3 MSPs achieve σ_pred < 25 ns (half the claimed floor) at Δt = 6 hours using only publicly available timing data and a deterministic model, with p-value < 0.01 under a chi-squared goodness-of-fit test (a sketch of one such test follows this list).
  2. MODERATE DISPROOF: A noise model that reduces residual RMS below 50 ns for > 50% of NANOGrav 15-year dataset MSPs at 1-hour prediction horizons, validated on held-out data not used in model fitting.
  3. ALGORITHMIC DISPROOF: A machine learning predictor (trained on ≥ 3 years of TOA data) achieving mean absolute error (MAE) < 30 ns on a 6-month held-out test set for any single MSP, with Δt = 1–24 hours.
  4. PHYSICAL DISPROOF: Measurement of GWB amplitude A_GWB < 5×10^{-16} (ruling out current PTA evidence at 3σ), combined with spin noise amplitude A_spin < 10^{-14} for the 10 quietest NANOGrav MSPs, would remove two of three stochastic floors.
  5. CRYPTOGRAPHIC DISPROOF: A working implementation of pulsar-anchored time-lock encryption achieving 128-bit security with demonstrated key derivation error rate < 10^{-6} over 100 independent trials at 1-hour pre-epoch windows.
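
The test referenced in criterion 1, assuming a one-sided chi-squared comparison of prediction errors against the claimed floor (the criterion names only "chi-squared goodness-of-fit", so this exact form is an assumption):

import numpy as np
from scipy import stats

def floor_rejection_test(errors_s, sigma0_ns=25.0, alpha=0.01):
    # H0: prediction-error std >= sigma0. At the H0 boundary the
    # scaled sum of squares is ~ chi2(n); errors much smaller than
    # sigma0 push the statistic into the lower tail.
    e_ns = np.asarray(errors_s) * 1e9
    stat = np.sum((e_ns / sigma0_ns) ** 2)
    p = stats.chi2.cdf(stat, df=len(e_ns))
    return p < alpha  # True => this MSP violates the claimed floor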

Experimental Protocol

PHASE 1 — Noise Floor Characterization (Days 1–45): Analyze existing PTA datasets (NANOGrav 15yr, PPTA DR3, EPTA DR2) to empirically measure prediction residuals as a function of horizon Δt for all MSPs with ≥ 5-year baselines. Fit Bayesian noise models (white + red + DM) and compute posterior predictive distributions at horizons Δt = {1h, 6h, 24h, 1wk, 1mo}.
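
The posterior predictive spread that Phase 1 reports can be read off the standard GP predictive-variance identity (shown here because Module 3 of the implementation sketch below returns only the posterior mean):

import numpy as np
from scipy.linalg import cho_factor, cho_solve

def gp_predictive_sigma(K_train, K_cross, K_test_diag, white_var):
    # diag(K** - K* K^{-1} K*^T): per-point predictive std of a GP
    c = cho_factor(K_train + white_var * np.eye(len(K_train)))
    var = K_test_diag - np.einsum('ij,ji->i', K_cross,
                                  cho_solve(c, K_cross.T))
    return np.sqrt(np.maximum(var, 0.0))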

PHASE 2 — Stochastic Component Isolation (Days 46–90): Decompose residuals into spin noise, GWB, and ISM contributions using: (a) chromatic index analysis across frequency bands; (b) Hellings-Downs spatial correlation for GWB; (c) structure function analysis for DM variations. Quantify each component's contribution to σ_pred(Δt).
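
The Hellings-Downs correlation used in step (b) has a closed form (a standard PTA result; applies to distinct pulsar pairs):

import numpy as np

def hellings_downs(theta_rad):
    # Expected GWB-induced correlation at angular separation theta:
    # Gamma = (3/2) x ln x - x/4 + 1/2, with x = (1 - cos theta)/2
    x = np.clip((1.0 - np.cos(theta_rad)) / 2.0, 1e-15, None)
    return 1.5 * x * np.log(x) - 0.25 * x + 0.5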

PHASE 3 — Predictive Model Stress Test (Days 91–135): Train deterministic + ML hybrid predictors on 80% of each MSP's timeline; evaluate on 20% held-out data. Test predictors: (a) polynomial ephemeris extrapolation; (b) Gaussian process regression; (c) LSTM/Transformer sequence models. Record σ_pred for each Δt bin.
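
Predictor (a) is the simplest baseline; a sketch of polynomial ephemeris extrapolation (the degree and time scaling are choices the protocol leaves open):

import numpy as np

def poly_extrapolate(t_train, r_train, t_pred, deg=2):
    # Fit a low-order polynomial to past residuals and extrapolate;
    # centering and scaling t keeps the fit numerically stable
    t0, s = t_train.mean(), t_train.std()
    coeffs = np.polyfit((t_train - t0) / s, r_train, deg)
    return np.polyval(coeffs, (t_pred - t0) / s)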

PHASE 4 — Cryptographic Bound Formalization (Days 136–160): Translate σ_pred distributions into information-theoretic bounds on key entropy. Compute H(K | TOA_predicted) for key lengths 128, 256 bits as a function of σ_pred. Determine minimum σ_pred required to break 128-bit security.
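
A back-of-envelope version of the Phase 4 bound: Gaussian prediction error quantized at step δ carries about log2(σ_pred·√(2πe)/δ) bits of attacker-side uncertainty per TOA, so σ_pred = 50 ns with δ = 10 ns yields only ~4.4 bits per TOA; reaching H(K|TOA_predicted) > 100 bits therefore presumes the key mixes ≳ 23 independent TOAs (e.g. across pulsars), an assumption the scheme definition would need to make explicit.

import numpy as np

H_per_toa = np.log2(50.0 * np.sqrt(2 * np.pi * np.e) / 10.0)  # ~4.37 bits
n_toas = int(np.ceil(100 / H_per_toa))                        # ~23 TOAs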

Required datasets:
  1. NANOGrav 15-year Data Set (public): 68 MSPs, TOA precision 50 ns–1 μs, multi-frequency (820 MHz, 1.4 GHz, 2.3 GHz). URL: data.nanograv.org. Size: ~2 GB.
  2. PPTA Data Release 3 (public): 26 MSPs, Parkes telescope, 10–18 year baselines. Size: ~800 MB.
  3. EPTA Data Release 2 (public): 25 MSPs, multi-telescope European array. Size: ~1.2 GB.
  4. IPTA Data Release 2 (combined): 65 MSPs, combined PTA dataset. Size: ~3 GB.
  5. TEMPO2 timing software and Bayesian noise-analysis packages (enterprise, enterprise_extensions): open source, Python/C.
  6. DM time series from dedicated low-frequency monitoring (LOFAR, MWA archival): ~500 MB.
  7. Simulated GWB injection datasets: Generated via hasasia/PTMCMCSampler; ~10 GB synthetic TOA sets.
  8. Cryptographic test harness: Custom Python implementation of pulsar time-lock scheme (to be developed, ~500 LOC).
  9. GPU cluster access: 4× NVIDIA A100 80GB or equivalent for ML predictor training.
  10. Reference ephemerides: DE440 solar system ephemeris (JPL), ~200 MB.
Success:
  1. PRIMARY: σ_pred(Δt=1h) ≥ 50 ns for ≥ 80% of analyzed MSPs (≥ 36/45), with 95% CI lower bound > 40 ns.
  2. SECONDARY: Structure function analysis confirms stochastic (non-deterministic) character of residuals for Δt > 1 hour in ≥ 90% of MSPs (D(τ) power-law exponent β > 0.5 with p < 0.05).
  3. TERTIARY: No ML predictor achieves RMSE < 50 ns at Δt = 6 hours for any MSP in the test set (0/45 MSPs breaching threshold).
  4. CRYPTOGRAPHIC: Information-theoretic analysis confirms H(K|TOA_predicted) > 100 bits for 128-bit keys when σ_pred ≥ 50 ns and quantization step δ ≤ 10 ns.
  5. COMPONENT ATTRIBUTION: Stochastic decomposition assigns ≥ 60% of prediction variance at Δt = 1 hour to irreducible physical processes (GWB + spin noise + ISM) for median MSP.
  6. REPRODUCIBILITY: Results reproduced independently using PPTA DR3 alone (without NANOGrav data) with consistent σ_pred estimates within 20%.
Failure:
  1. HARD FAILURE: Any single MSP achieves σ_pred < 25 ns at Δt = 6 hours with RMSE computed over ≥ 50 independent windows (p < 0.01 under bootstrap test).
  2. SOFT FAILURE: > 25% of MSPs (> 11/45) show σ_pred < 50 ns at Δt = 1 hour, suggesting the 50 ns threshold is not universal.
  3. MODEL FAILURE: Bayesian noise model fails to converge (Gelman-Rubin R̂ > 1.1) for > 30% of MSPs, invalidating stochastic decomposition.
  4. ML FAILURE (inverted): Any Transformer predictor achieves RMSE < 40 ns at Δt = 1 hour for ≥ 3 MSPs, suggesting deterministic structure is learnable.
  5. CRYPTOGRAPHIC FAILURE: Key recovery attack succeeds with probability > 10^{-3} using ML-predicted TOAs as prior, even with σ_pred ≥ 50 ns (would indicate the 50 ns floor is insufficient for claimed security).
  6. DATA FAILURE: < 20 MSPs pass selection criteria after quality filtering, providing insufficient statistical power (power < 0.8 at α = 0.05 for detecting 50 ns floor).
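
Failure criterion 6's power cutoff is consistent with a simple binomial test (an illustrative choice; the protocol does not fix the exact test): distinguishing a true 80% pass rate from a 50% null reaches power 0.8 at roughly n = 20 MSPs.

from scipy import stats

def binomial_power(n, p0=0.5, p1=0.8, alpha=0.05):
    # Smallest count rejecting H0: p = p0 (one-sided), then the
    # probability of reaching that count when the true rate is p1
    k_crit = int(stats.binom.ppf(1 - alpha, n, p0)) + 1
    return 1.0 - stats.binom.cdf(k_crit - 1, n, p1)

for n in (15, 20, 45):
    print(n, round(binomial_power(n), 2))  # ~0.65, ~0.80, ~1.00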

GPU hours: 1,840
Time to result: 160 days
Min cost: $4,200
Full cost: $31,500

ROI Projection

Commercial:
  1. DIRECT: Licensing of validated noise models to pulsar timing array consortia (NANOGrav, PPTA, EPTA, InPTA) — estimated $200K–$500K/year in consulting/licensing.
  2. CRYPTOGRAPHIC STANDARDS: Input to NIST/ETSI post-quantum cryptography standardization for time-lock primitives; first-mover advantage for compliant implementations worth $5M–$20M in government contracts.
  3. FINANCIAL SERVICES: Time-stamping and delayed-revelation contracts (options, auctions, voting) using pulsar anchors; if floor proven, defines minimum contract duration for security — addressable market $50M–$300M.
  4. NAVIGATION/DEFENSE: Pulsar-based autonomous navigation (XNAV) systems require timing precision bounds; this work provides formal specification for DARPA/ESA programs with combined budget ~$200M.
  5. INSURANCE/LEGAL: Forensic time-stamping for legal evidence chains; establishes admissibility standards for pulsar-anchored timestamps in jurisdictions adopting digital evidence standards.
  6. ACADEMIC SPINOUT: Software package (pulsar-crypto-bounds) for noise floor computation — potential $50K–$200K/year in SaaS licensing to research groups.
TOTAL COMMERCIAL VALUE ESTIMATE: $75M–$520M over a 10-year horizon, contingent on hypothesis confirmation and cryptographic standardization adoption.

🔓 If proven, this unlocks

Proving this hypothesis is a prerequisite for the following downstream discoveries and applications:

  • PULSAR-CRYPTO-SECURITY-PROOF-001
  • MSP-TIMING-FUNDAMENTAL-LIMITS-002
  • QUANTUM-SAFE-TIMELOCK-DESIGN-003
  • PTA-NOISE-BUDGET-COMPLETE-004
  • RELATIVISTIC-TIMEKEEPING-BOUNDS-005

Prerequisites

These must be validated before this hypothesis can be confirmed:

  • PTA-NOISE-MODEL-VALIDATION-001
  • GWB-AMPLITUDE-MEASUREMENT-2023
  • ISM-DM-VARIATION-CHARACTERIZATION-MSP
  • PULSAR-TIMELOCK-CRYPTO-SCHEME-DEFINITION

Implementation Sketch

# Pulsar Timing Prediction Floor Validation — Core Architecture
# Dependencies: enterprise, enterprise_extensions, TEMPO2, PTMCMCSampler,
#               numpy, scipy, torch, gpytorch, optuna, astropy

import numpy as np
from enterprise.pulsar import Pulsar
from enterprise.signals import (signal_base, white_signals, gp_signals,
                                parameter, utils)
from enterprise_extensions import sampler
import torch
import torch.nn as nn

# ============================================================
# MODULE 1: DATA INGESTION & MSP SELECTION
# ============================================================
class MSPDataLoader:
    def __init__(self, par_files, tim_files, min_baseline_yr=5.0,
                 max_DM=100.0, max_period_ms=10.0):
        self.par_files = par_files
        self.tim_files = tim_files
        self.selection_criteria = {
            'baseline': min_baseline_yr,  # years
            'DM_max': max_DM,             # pc/cm^3
            'P_max': max_period_ms * 1e-3 # seconds
        }

    def load_and_filter(self):
        pulsars = []
        for par, tim in zip(self.par_files, self.tim_files):
            psr = Pulsar(par, tim, ephem='DE440', clk='TT(BIPM2021)')
            # enterprise stores TOAs in seconds; convert to years
            baseline = (psr.toas.max() - psr.toas.min()) / (86400.0 * 365.25)
            # NOTE: period/DM attribute access is schematic; in practice
            # these values come from the parsed par file
            if (psr.period < self.selection_criteria['P_max'] and
                psr.DM < self.selection_criteria['DM_max'] and
                baseline >= self.selection_criteria['baseline']):
                pulsars.append(psr)
        return pulsars  # Expected: ~45 MSPs

# ============================================================
# MODULE 2: BAYESIAN NOISE MODEL FITTING
# ============================================================
class NoiseModelFitter:
    def __init__(self, pulsar, n_samples=1_000_000):
        self.psr = pulsar
        self.n_samples = n_samples

    def build_model(self):
        # White noise: EFAC + EQUAD per backend (EquadNoise is the
        # older enterprise name; newer releases fold EQUAD into
        # MeasurementNoise)
        efac = parameter.Uniform(0.1, 5.0)
        log10_equad = parameter.Uniform(-8.5, -5.0)
        ef = white_signals.MeasurementNoise(efac=efac)
        eq = white_signals.EquadNoise(log10_equad=log10_equad)
        # Red spin noise: power-law Fourier-basis GP
        pl_red = utils.powerlaw(log10_A=parameter.Uniform(-16, -12),
                                gamma=parameter.Uniform(0, 7))
        rn = gp_signals.FourierBasisGP(spectrum=pl_red, components=30)
        # DM noise: power-law GP on a chromatic (f^-2) DM basis
        pl_dm = utils.powerlaw(log10_A=parameter.Uniform(-16, -12),
                               gamma=parameter.Uniform(0, 7))
        dm_basis = utils.createfourierdesignmatrix_dm(nmodes=30)
        dm = gp_signals.BasisGP(pl_dm, dm_basis, name='dm_gp')
        # Marginalize over the linear timing model
        tm = gp_signals.TimingModel()
        return ef + eq + rn + dm + tm

    def run_mcmc(self):
        pta = signal_base.PTA([self.build_model()(self.psr)])
        samp = sampler.setup_sampler(pta, outdir=f'chains/{self.psr.name}')
        x0 = np.hstack([p.sample() for p in pta.params])
        samp.sample(x0, self.n_samples)
        return samp  # PTMCMCSampler; posterior chains written to outdir

# ============================================================
# MODULE 3: PREDICTION HORIZON ANALYSIS
# ============================================================
class PredictionHorizonAnalyzer:
    HORIZONS_SEC = [3600, 21600, 43200, 86400, 172800, 604800, 2592000]
    # 1h, 6h, 12h, 24h, 48h, 1wk, 1mo

    def leave_future_out_cv(self, toas, residuals, noise_params,
                             n_windows=50):
        # toas in MJD (days), residuals in seconds
        results = {h: [] for h in self.HORIZONS_SEC}
        toa_range = toas.max() - toas.min()

        for _ in range(n_windows):
            # Random split point in middle 60% of data
            split_frac = np.random.uniform(0.3, 0.7)
            T_train = toas.min() + split_frac * toa_range

            train_mask = toas <= T_train
            for horizon in self.HORIZONS_SEC:
                # Select TOAs within [T_train, T_train + horizon]
                pred_mask = ((toas > T_train) &
                             (toas <= T_train + horizon / 86400.0))
                if pred_mask.sum() < 3:
                    continue

                # Predict using GP posterior mean
                pred_residuals = self._gp_predict(
                    toas[train_mask], residuals[train_mask],
                    toas[pred_mask], noise_params
                )
                errors = residuals[pred_mask] - pred_residuals
                results[horizon].append(np.sqrt(np.mean(errors**2)) * 1e9)  # ns

        return {h: np.array(v) for h, v in results.items()}

    def _gp_predict(self, t_train, r_train, t_pred, params):
        # Gaussian Process prediction using fitted noise model
        # Returns posterior mean at t_pred given training data
        from scipy.linalg import solve
        K_train = self._covariance_matrix(t_train, t_train, params)
        K_cross = self._covariance_matrix(t_pred, t_train, params)
        K_train_reg = K_train + params['white_var'] * np.eye(len(t_train))
        return K_cross @ solve(K_train_reg, r_train)

    def _covariance_matrix(self, t1, t2, params):
        # Crude closed-form stand-in for the power-law red-noise
        # covariance (the Fourier-basis GP in Module 2 is the proper
        # treatment); valid only for gamma > 1, with +1e-10
        # regularizing tau = 0
        tau = np.abs(t1[:, None] - t2[None, :])  # days
        A, gamma = params['red_A'], params['red_gamma']
        yr = 365.25
        return A**2 / (gamma - 1) * (tau / yr + 1e-10)**(-(gamma - 1))

# ============================================================
# MODULE 4: TRANSFORMER PREDICTOR
# ============================================================
class PulsarTransformerPredictor(nn.Module):
    def __init__(self, d_model=256, nhead=8, num_layers=6,
                 seq_len=180, pred_len=1):
        super().__init__()
        self.embedding = nn.Linear(2, d_model)  # [TOA, residual]
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, dim_feedforward=1024,
            dropout=0.1, batch_first=True
        )
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers)
        self.output_head = nn.Linear(d_model, pred_len)
        self.seq_len = seq_len

    def forward(self, x):
        # x: [batch, seq_len, 2] — normalized TOA gaps + residuals
        x_emb = self.embedding(x)
        encoded = self.transformer(x_emb)
        return self.output_head(encoded[:, -1, :])  # Predict next residual

    def predict_at_horizon(self, history_toas, history_res, target_toa):
        # Extrapolate to target_toa; returns predicted residual in ns
        self.eval()
        with torch.no_grad():
            x = self._prepare_input(history_toas, history_res, target_toa)
            return self.forward(x).item() * 1e9  # seconds -> ns

    def _prepare_input(self, toas, res, target_toa):
        # Minimal stand-in for the elided preprocessing: build a
        # [1, seq_len, 2] tensor of normalized gap-to-target and
        # residual for the most recent seq_len TOAs
        gaps = target_toa - np.asarray(toas)[-self.seq_len:]
        r = np.asarray(res)[-self.seq_len:]
        x = np.stack([gaps / (gaps.std() + 1e-12),
                      r / (r.std() + 1e-12)], axis=-1)
        return torch.tensor(x, dtype=torch.float32).unsqueeze(0)

# ============================================================
# MODULE 5: STOCHASTIC DECOMPOSITION
# ============================================================
class StochasticDecomposer:
    def structure_function(self, times, residuals, tau_bins):
        """Compute D(tau) = <[R(t+tau) - R(t)]^2>"""
        D = np.zeros(len(tau_bins))
        for i, tau in enumerate(tau_bins):
            pairs = []
            for j, t in enumerate(times):
                future = np.where(np.abs(times - (t + tau)) < tau * 0.1)[0]
                if len(future) > 0:
                    pairs.append((residuals[future[0]] - residuals[j])**2)
            D[i] = np.mean(pairs) if pairs else np.nan
        return D

    def fit_power_law(self, tau_bins, D_tau):
        """Fit D(tau) = A * tau^beta; return (A, beta, R^2)"""
        from scipy.optimize import curve_fit
        valid = ~np.isnan(D_tau) & (D_tau > 0)
        log_tau = np.log10(tau_bins[valid])
        log_D = np.log10(D_tau[valid])
        popt, _ = curve_fit(lambda x, a, b: a + b*x, log_tau, log_D)
        beta = popt[1]
        residuals = log_D - (popt[0] + beta * log_tau)
        ss_res = np.sum(residuals**2)
        ss_tot = np.sum((log_D - log_D.mean())**2)
        return 10**popt[0], beta, 1 - ss_res/ss_tot

# ============================================================
# MODULE 6: CRYPTOGRAPHIC BOUND CALCULATOR
# ============================================================
class CryptographicBoundCalculator:
    def __init__(self, key_bits=128, quantization_ns=10.0):
        self.key_bits = key_bits
        self.delta = quantization_ns  # ns

    def conditional_entropy(self, sigma_pred_ns):
        """
        H(K | TOA_predicted) for Gaussian prediction error.
        Key derived as K = hash(round(TOA / delta) * delta)
        Attacker knows predicted TOA with uncertainty sigma_pred.
        """
        import scipy.stats as stats
        # TOA bins within ±3*sigma of the prediction (symmetric grid;
        # the original -n_bins//2 indexing was off-center)
        half = int(3 * sigma_pred_ns / self.delta)
        # Probability of each bin given Gaussian uncertainty
        bin_centers = np.arange(-half, half + 1) * self.delta
        probs = stats.norm.pdf(bin_centers, 0, sigma_pred_ns)
        probs /= probs.sum()
        # Shannon entropy of bin distribution (attacker's uncertainty)
        H_bins = -np.sum(probs * np.log2(probs + 1e-300))
        # Total key entropy
        H_key = self.key_bits
        # Conditional entropy (remaining uncertainty after prediction)
        H_conditional = min(H_key, H_bins)
        return H_conditional

    def security_bits(self, sigma_pred_ns):
        # Effective security in bits, per predicted TOA: the key
        # entropy remaining after the attacker's best prediction
        return self.conditional_entropy(sigma_pred_ns)
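
# A minimal driver tying the modules together (hypothetical file
# layout; paths and the printed check are illustrative):
if __name__ == '__main__':
    import glob
    pars = sorted(glob.glob('data/nanograv15/*.par'))  # hypothetical paths
    tims = sorted(glob.glob('data/nanograv15/*.tim'))
    msps = MSPDataLoader(pars, tims).load_and_filter()
    calc = CryptographicBoundCalculator(key_bits=128, quantization_ns=10.0)
    # Attacker-side uncertainty per TOA at the claimed 50 ns floor
    print(f"H(K|TOA) per TOA: {calc.conditional_entropy(50.0):.2f} bits")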
