solver.press

The timing residuals of millisecond pulsar arrays, as measured by IPTA-grade radio telescopes, contain sufficient Shannon entropy density (≥256 bits per observation epoch from a 3-pulsar array) to serve as a cryptographically secure key derivation source, with residual unpredictability arising from gravitational wave background noise and spin noise that is irreducible by any known forecasting method.

Computer Science · Apr 12, 2026 · Evaluation Score: 33%

Adversarial Debate Score

33% survival rate under critique

Model Critiques

openai: The hypothesis is falsifiable but is not well-supported by the provided literature, which does not address pulsar timing, entropy estimates, or cryptographic unpredictability; further, the assumption that noise components are irreducible and sufficient for cryptographic security is highly questio...
grok: The hypothesis is falsifiable through empirical testing of entropy density in pulsar timing residuals, but it lacks direct support from the provided papers, which focus on unrelated quantum and biological topics rather than pulsar timing or cryptographic applications. Additionally, counterargumen...
mistral: The hypothesis is ambitious and intriguing, with a clear falsifiability criterion (entropy density measurement), but it lacks direct empirical support from the provided papers and faces significant counterarguments (e.g., potential predictability of noise sources, cryptographic assumptions about ...
anthropic: The hypothesis is technically falsifiable in principle, but none of the provided papers have any relevance to millisecond pulsars, timing residuals, gravitational wave backgrounds, or cryptographic entropy — making it completely unsupported by the cited literature; additionally, the specific quan...

Supporting Research Papers

Formal Verification

Z3 logical consistency: ✅ Consistent

Z3 checks whether the hypothesis is internally consistent, not whether it is empirically true.

Experimental Validation Package

This discovery has a Claude-generated validation package with a full experimental design.

Precise Hypothesis

A minimum 3-pulsar array observed with IPTA-grade radio telescopes (sensitivity ≥ 10 μJy, timing precision ≤ 100 ns RMS) produces timing residuals per observation epoch that contain ≥ 256 bits of Shannon entropy, where that entropy is sourced from gravitational wave background (GWB) noise and pulsar spin noise components that are provably irreducible by any polynomial-time forecasting algorithm, making the residuals suitable as input to a cryptographically secure pseudorandom number generator (CSPRNG) or key derivation function (KDF) per NIST SP 800-90B standards.
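The arithmetic behind the 256-bit threshold can be sanity-checked against the conservative 0.5 bits/sample min-entropy floor cited in the disproof criteria. The figures below are illustrative only and assume zero cross-pulsar mutual information:

```python
# Back-of-envelope check of the 256-bit-per-epoch threshold.
# Assumes the 0.5 bits/sample min-entropy floor from disproof criterion 1
# and zero cross-pulsar mutual information; all figures are illustrative.
N_PULSARS = 3
H_MIN_PER_SAMPLE = 0.5   # bits of min-entropy per 8-bit quantized sample
TARGET_BITS = 256        # required per observation epoch

total_samples = TARGET_BITS / H_MIN_PER_SAMPLE   # samples needed per epoch
per_pulsar = total_samples / N_PULSARS           # evenly split across the array

print(f"samples needed per epoch: {total_samples:.0f}")   # 512
print(f"samples per pulsar:       {per_pulsar:.0f}")      # 171
```

At realistic PTA cadences (one TOA set per pulsar per epoch, not hundreds), this is where the hypothesis is most exposed; the entropy budget depends heavily on how "epoch" and "sample" are operationalized.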

Disproof criteria:
  1. QUANTITATIVE ENTROPY FAILURE: Shannon entropy density measured via NIST SP 800-90B IID tests falls below 0.5 bits/sample across ≥ 90% of epochs from a 3-pulsar array, yielding < 256 bits total per epoch
  2. PREDICTABILITY: A forecasting model (e.g., Gaussian process with known noise parameters) achieves residual prediction accuracy reducing effective entropy below 128 bits/epoch with probability > 0.01
  3. CORRELATION STRUCTURE: Cross-pulsar residual correlations (Hellings-Downs or otherwise) allow an adversary to reduce entropy by > 50% using known GWB correlation models
  4. DETERMINISTIC NOISE DOMINANCE: Instrumental systematics, dispersion measure variations, or solar wind effects account for > 80% of residual variance, leaving < 20% from irreducible stochastic sources
  5. NIST FAILURE: Residual bitstreams fail ≥ 3 of 15 NIST SP 800-22 statistical randomness tests at p < 0.01 significance after standard whitening
  6. REPRODUCIBILITY ATTACK: Any published algorithm can reconstruct > 1% of residual bits from publicly available ephemeris and noise models with probability > 0.05
  7. ENTROPY RATE COLLAPSE: Min-entropy (H_min) per epoch falls below 128 bits, disqualifying use as a NIST-compliant entropy source
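Criteria 1 and 7 both turn on per-sample min-entropy. A minimal sketch of the most-common-value (MCV) estimator from NIST SP 800-90B (Section 6.3.1), which upper-bounds the most frequent symbol's probability at 99% confidence before taking -log2; this is a simplified stand-in for the full reference tool, not a replacement for it:

```python
import numpy as np

def mcv_min_entropy(samples: np.ndarray) -> float:
    """Most-common-value min-entropy estimate (NIST SP 800-90B, Sec. 6.3.1).

    Upper-bounds the probability of the most frequent 8-bit symbol at 99%
    confidence (z = 2.576), then returns H_min = -log2(p_upper) in bits/sample.
    """
    n = len(samples)
    counts = np.bincount(samples.astype(np.uint8), minlength=256)
    p_hat = counts.max() / n
    # One-sided 99% upper confidence bound on the most-common-value probability
    p_upper = min(1.0, p_hat + 2.576 * np.sqrt(p_hat * (1 - p_hat) / (n - 1)))
    return -np.log2(p_upper)

# Uniform random bytes should score close to, but below, 8 bits/sample
rng = np.random.default_rng(0)
h = mcv_min_entropy(rng.integers(0, 256, 100_000, dtype=np.uint8))
print(f"{h:.2f} bits/sample")
```

On real residuals the `ea_non_iid` reference tool, not this sketch, would be authoritative; MCV is only one of the ten 90B non-IID estimators, and the tool reports the minimum over all of them.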

Experimental Protocol

PHASE 1 (Weeks 1–4): Archival data entropy audit using existing IPTA Data Release 2 (DR2) and NANOGrav 15-yr datasets
PHASE 2 (Weeks 5–12): Noise decomposition and entropy attribution (GWB vs. spin noise vs. instrumental)
PHASE 3 (Weeks 13–20): Adversarial predictability testing using state-of-the-art GP forecasting
PHASE 4 (Weeks 21–28): NIST SP 800-90B compliance testing on extracted bitstreams
PHASE 5 (Weeks 29–36): Prototype KDF implementation and end-to-end security evaluation

Required datasets:
  1. IPTA Data Release 2 (DR2): Publicly available at ipta4gw.org; 65 MSPs, multi-telescope, ~30-year baseline; PRIMARY
  2. NANOGrav 15-year Data Set: 67 MSPs, timing residuals, noise models; doi:10.3847/2041-8213/acdac6; PRIMARY
  3. EPTA DR2: European Pulsar Timing Array second data release; 25 MSPs; SUPPLEMENTARY
  4. PPTA DR3: Parkes PTA third data release; 30 MSPs; SUPPLEMENTARY
  5. NIST SP 800-90B reference implementation: github.com/usnistgov/SP800-90B_EntropyAssessment; REQUIRED TOOL
  6. NIST SP 800-22 test suite: Statistical randomness tests; REQUIRED TOOL
  7. Enterprise noise modeling software: github.com/nanograv/enterprise; REQUIRED
  8. TEMPO2 pulsar timing package: hobbs.github.io/tempo2; REQUIRED
  9. Simulated GWB realizations: Generated via hasasia or PTMCMCSampler for null hypothesis testing; GENERATED
  10. Synthetic MSP timing data: Generated via libstempo with known entropy budget for ground-truth validation; GENERATED
Success:
  1. PRIMARY: H_min ≥ 256 bits per epoch from 3-pulsar array in ≥ 80% of observed epochs (p < 0.001 vs. null)
  2. SECONDARY: H_Shannon ≥ 512 bits per epoch (factor-of-2 margin above threshold)
  3. ADVERSARIAL: GP/LSTM/Transformer forecasting reduces effective entropy by < 50% (H_min post-prediction ≥ 128 bits)
  4. NIST COMPLIANCE: ≥ 13/15 NIST SP 800-22 tests passed at p > 0.01 for whitened bitstream
  5. NIST 800-90B: Min-entropy estimate ≥ 0.5 bits/sample for 8-bit quantized residuals (the floor set by disproof criterion 1)
  6. INDEPENDENCE: Mutual information between any two pulsars < 32 bits per epoch (< 12.5% of threshold)
  7. KDF SECURITY: Generated 256-bit keys pass AES distinguisher test (p > 0.05, indistinguishable from uniform)
  8. REPRODUCIBILITY: Results replicated on ≥ 2 independent PTA datasets (e.g., NANOGrav + EPTA)
Failure:
  1. H_min < 128 bits per epoch in > 50% of epochs from 3-pulsar array
  2. Any single adversarial model reduces entropy below 128 bits/epoch
  3. ≥ 3 of 15 NIST SP 800-22 tests fail at p < 0.01
  4. Instrumental systematics account for > 60% of residual variance (entropy not from astrophysical sources)
  5. Cross-pulsar mutual information > 128 bits/epoch (residuals too correlated to sum independently)
  6. Entropy estimate not reproducible across NANOGrav and EPTA datasets (discrepancy > 2×)
  7. NIST SP 800-90B min-entropy < 0.25 bits/sample after whitening
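Both criteria sets bound cross-pulsar mutual information. Module 3 of the implementation sketch names sklearn's k-NN estimator for this; as a self-contained illustration, a binned plug-in estimator (upward-biased at finite sample sizes, with synthetic residuals standing in for real timing data) might look like:

```python
import numpy as np

def binned_mi_bits(x: np.ndarray, y: np.ndarray, bins: int = 16) -> float:
    """Plug-in mutual information between two residual series, in bits.

    Upward-biased for finite samples; the pipeline's k-NN estimator is the
    intended production tool. Illustrative only.
    """
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal over x
    py = pxy.sum(axis=0, keepdims=True)   # marginal over y
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
a, b = rng.normal(size=4000), rng.normal(size=4000)
shared = rng.normal(size=4000)  # stand-in for a common GWB-like component
mi_indep = binned_mi_bits(a, b)             # near zero (plus estimator bias)
mi_corr = binned_mi_bits(a + shared, b + shared)  # inflated by the shared term
print(f"independent: {mi_indep:.3f} bits, correlated: {mi_corr:.3f} bits")
```

A Hellings-Downs-correlated GWB guarantees some shared signal across pulsars, so the relevant question is whether the residual MI stays under the 32-bit budget after the correlated component is modeled out.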

GPU hours: 420
Time to result: 180 days
Min cost: $4,200
Full cost: $31,500

ROI Projection

Commercial:
  1. HARDWARE SECURITY MODULES (HSMs): Integration of pulsar-derived entropy into HSM seed generation for financial institutions; market size ~$1.2B globally; pulsar entropy could command premium pricing as "cosmic randomness" for high-assurance applications.
  2. CERTIFICATE AUTHORITIES: Root CA key generation using pulsar entropy; 5 major CAs × $500K licensing = $2.5M/yr potential.
  3. BLOCKCHAIN/WEB3: Verifiable random function (VRF) based on publicly observable pulsar data; on-chain entropy oracle market estimated $50M–$200M.
  4. NATIONAL STANDARDS: Potential inclusion in NIST SP 800-90 series as approved entropy source; regulatory value to compliant vendors estimated $10M–$50M.
  5. SPACE COMMUNICATIONS: Deep-space cryptographic key generation using onboard pulsar timing receivers; NASA/ESA mission value $5M–$20M per mission.
  6. INSURANCE/ACTUARIAL: Entropy source for Monte Carlo risk modeling with certified unpredictability; financial services market $100M+.

🔓 If proven, this unlocks

Proving this hypothesis is a prerequisite for the following downstream discoveries and applications:

  • pulsar-based-TRNG-hardware-implementation-101
  • distributed-PTA-entropy-beacon-102
  • GWB-entropy-rate-vs-redshift-103
  • quantum-gravity-entropy-floor-104
  • space-based-cryptographic-timing-infrastructure-105

Prerequisites

These must be validated before this hypothesis can be confirmed:

  • IPTA-DR2-noise-model-validation-001
  • NANOGrav-15yr-GWB-characterization-002
  • NIST-SP800-90B-astrophysical-source-precedent-003

Implementation Sketch

# Pulsar Entropy Extraction Pipeline (PEEP) — Architecture Sketch

# === MODULE 1: DATA INGESTION ===
class PTADataLoader:
    def load_ipta_dr2(self, pulsars: list[str]) -> dict[str, TimingResiduals]:
        # Load .tim and .par files via libstempo/TEMPO2
        # Returns: {pulsar_name: TimingResiduals(epochs, residuals_ns, uncertainties_ns)}
        pass
    
    def select_top_pulsars(self, n=3, criterion='timing_rms') -> list[str]:
        # Rank by RMS timing residual, return top-n
        # Target: J0437-4715 (~30ns), J1909-3744 (~50ns), J1713+0747 (~70ns)
        pass

# === MODULE 2: NOISE DECOMPOSITION ===
class EnterpriseNoiseModel:
    def fit_noise_model(self, residuals: TimingResiduals) -> NoiseComponents:
        # Enterprise PTMCMC: white noise + red noise + DM + GWB
        # Returns variance fractions: {gwb: 0.xx, spin: 0.xx, dm: 0.xx, white: 0.xx}
        # Runtime: ~48 CPU-hours per pulsar
        pass
    
    def extract_stochastic_residuals(self, residuals, noise_model) -> np.ndarray:
        # Subtract deterministic timing model
        # Return: residuals_stochastic [n_epochs] in nanoseconds
        pass

# === MODULE 3: ENTROPY ESTIMATION ===
class EntropyEstimator:
    def quantize_residuals(self, residuals_ns: np.ndarray, 
                           n_bits: int = 8) -> np.ndarray:
        # Uniform quantization to n_bits integers
        # Scale: [-5*sigma, +5*sigma] -> [0, 2^n_bits - 1]
        levels = 2 ** n_bits
        scaled = ((residuals_ns - residuals_ns.mean()) /
                  (10 * residuals_ns.std()) * levels + levels // 2)
        return np.clip(scaled.astype(int), 0, levels - 1).astype(np.uint8)
    
    def estimate_shannon_entropy(self, samples: np.ndarray) -> float:
        # H = -sum(p_i * log2(p_i)) in bits
        counts = np.bincount(samples, minlength=256)
        probs = counts[counts > 0] / len(samples)
        return -np.sum(probs * np.log2(probs))
    
    def estimate_min_entropy(self, samples: np.ndarray) -> float:
        # H_min = -log2(max(p_i))
        counts = np.bincount(samples, minlength=256)
        p_max = counts.max() / len(samples)
        return -np.log2(p_max)
    
    def nist_800_90b_assessment(self, samples: np.ndarray) -> dict:
        # Call NIST reference implementation via subprocess
        # Returns: {min_entropy_estimate, iid_result, restart_test_result}
        import subprocess
        samples.astype(np.uint8).tofile('samples.bin')  # tool reads raw bytes from disk
        result = subprocess.run(
            ['./ea_non_iid', '-v', 'samples.bin'],
            capture_output=True, text=True
        )
        return parse_nist_output(result.stdout)  # output parser defined elsewhere
    
    def compute_epoch_entropy(self, psr_residuals: dict[str, np.ndarray],
                               epoch_idx: int) -> float:
        # Per-epoch entropy: sum individual - mutual information
        H_individual = []
        for psr, res in psr_residuals.items():
            window = res[max(0, epoch_idx-10):epoch_idx+10]  # 20-epoch window
            H_individual.append(self.estimate_min_entropy(
                self.quantize_residuals(window)))
        
        # Mutual information via k-NN estimator (sklearn)
        MI_pairs = compute_pairwise_MI(psr_residuals, epoch_idx, k=5)
        
        H_total = sum(H_individual) - sum(MI_pairs)
        return H_total  # in bits

# === MODULE 4: ADVERSARIAL TESTING ===
class AdversarialForecaster:
    def gp_forecast(self, residuals: np.ndarray, 
                    train_frac: float = 0.8) -> np.ndarray:
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, Matern, WhiteKernel
        kernel = RBF() + Matern(nu=1.5) + WhiteKernel()
        gpr = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=10)
        n_train = int(len(residuals) * train_frac)
        t = np.arange(len(residuals)).reshape(-1, 1)
        gpr.fit(t[:n_train], residuals[:n_train])
        predictions = gpr.predict(t[n_train:])
        return residuals[n_train:] - predictions  # prediction residuals
    
    def lstm_forecast(self, residuals: np.ndarray) -> np.ndarray:
        # PyTorch LSTM: 2 layers, 128 hidden units, seq_len=50
        # Returns: prediction residuals on held-out 20%
        pass
    
    def entropy_after_prediction(self, prediction_residuals: np.ndarray) -> float:
        est = EntropyEstimator()
        return est.estimate_min_entropy(est.quantize_residuals(prediction_residuals))

# === MODULE 5: KDF PROTOTYPE ===
class PulsarKDF:
    def extract_bits(self, residuals_ns: np.ndarray, 
                     method: str = 'hash') -> bytes:
        if method == 'hash':
            # SHA3-256 of 64-byte blocks of quantized residuals
            import hashlib
            quantized = EntropyEstimator().quantize_residuals(residuals_ns)
            blocks = [quantized[i:i+64].tobytes() 
                      for i in range(0, len(quantized) - 63, 64)]  # include final full block
            return b''.join(hashlib.sha3_256(b).digest() for b in blocks)
        elif method == 'lsb':
            # Least significant bits of quantized residuals
            quantized = EntropyEstimator().quantize_residuals(residuals_ns)
            return np.packbits(quantized & 0x01).tobytes()
    
    def derive_key(self, pulsar_residuals: dict[str, np.ndarray],
                   epoch_idx: int, key_length: int = 32) -> bytes:
        # HKDF (RFC 5869) with SHA3-256; the `hkdf` package defaults to
        # SHA-512, so the hash is passed explicitly
        import hashlib
        import hkdf
        ikm = b''.join(
            self.extract_bits(res[epoch_idx-20:epoch_idx], method='hash')
            for res in pulsar_residuals.values()
        )
        salt = f"pulsar-entropy-epoch-{epoch_idx}".encode()
        info = b"cryptographic-key-v1"
        prk = hkdf.hkdf_extract(salt, ikm, hash=hashlib.sha3_256)
        return hkdf.hkdf_expand(prk, info, key_length, hash=hashlib.sha3_256)

# === MODULE 6: MAIN PIPELINE ===
def run_validation_pipeline():
    loader = PTADataLoader()
    pulsars = loader.select_top_pulsars(n=3)
    data = loader.load_ipta_dr2(pulsars)
    
    noise_model = EnterpriseNoiseModel()
    stochastic_residuals = {}
    for psr in pulsars:
        nm = noise_model.fit_noise_model(data[psr])
        stochastic_residuals[psr] = noise_model.extract_stochastic_residuals(
            data[psr], nm)
    
    estimator = EntropyEstimator()
    epoch_entropies = []
    for epoch in range(50, len(list(stochastic_residuals.values())[0])):
        H = estimator.compute_epoch_entropy(stochastic_residuals, epoch)
        epoch_entropies.append(H)
    
    # Report: fraction of epochs with H >= 256 bits
    success_rate = np.mean(np.array(epoch_entropies) >= 256)
    print(f"Epochs with H_min >= 256 bits: {success_rate:.1%}")
    print(f"Median H_min per epoch: {np.median(epoch_entropies):.1f} bits")
    
    # NIST compliance
    all_residuals = np.concatenate(list(stochastic_residuals.values()))
    nist_result = estimator.nist_800_90b_assessment(
        estimator.quantize_residuals(all_residuals))
    print(f"NIST 800-90B min-entropy: {nist_result['min_entropy']:.3f} bits/sample")
    
    return epoch_entropies, nist_result

# === ABORT LOGIC ===
def check_abort_conditions(epoch_entropies_so_far: list) -> bool:
    if len(epoch_entropies_so_far) < 20:
        return False
    median_H = np.median(epoch_entropies_so_far)
    if median_H < 64:  # Less than 25% of threshold
        print("ABORT: Median entropy < 64 bits. Hypothesis likely false.")
        return True
    return False
Abort checkpoints:
  1. CHECKPOINT 1 (Day 14, end of Phase 1): If median H_min across all available epochs < 64 bits for best single pulsar, abort. Estimated cost saved: $25,000. Trigger: H_min(best_pulsar) < 64 bits in > 70% of epochs.
  2. CHECKPOINT 2 (Day 28, noise decomposition complete): If GWB + spin noise variance fraction < 10% of total residual variance (instrumental noise dominates), abort. Trigger: Var(GWB+spin)/Var(total) < 0.10. Cost saved: $20,000.
  3. CHECKPOINT 3 (Day 56, adversarial testing): If GP forecaster reduces entropy below 64 bits/epoch, abort full KDF development. Trigger: H_min(post-GP) < 64 bits in > 50% of epochs. Cost saved: $15,000.
  4. CHECKPOINT 4 (Day 84, NIST 800-90B preliminary): If NIST min-entropy estimate < 0.25 bits/sample on whitened residuals, abort NIST compliance phase. Trigger: NIST_H_min < 0.25 bits/sample. Cost saved: $10,000.
  5. CHECKPOINT 5 (Day 120, cross-dataset replication): If entropy estimates from NANOGrav and EPTA datasets differ by > 3× (not reproducible), abort KDF prototype. Trigger: |H_NANOGrav - H_EPTA| / min(H_NANOGrav, H_EPTA) > 2.0. Cost saved: $8,000.
  6. CHECKPOINT 6 (Day 150, KDF security evaluation): If AES distinguisher test rejects key uniformity at p < 0.001, abort commercial development pathway. Trigger: Distinguisher advantage > 0.01. Cost saved: $5,000.
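Checkpoint 6's AES distinguisher is the heavyweight test; the flavor of the SP 800-22 checks gated at checkpoints 4 through 6 can be illustrated with the simplest member of the battery, the frequency (monobit) test. Passing it at p ≥ 0.01 is necessary but nowhere near sufficient:

```python
import math
import numpy as np

def monobit_p_value(bits: np.ndarray) -> float:
    """NIST SP 800-22 frequency (monobit) test.

    bits: array of 0/1 values. A stream indistinguishable from uniform
    should yield p >= 0.01; a heavily biased stream gives p near zero.
    """
    n = len(bits)
    s_obs = abs(2 * int(bits.sum()) - n) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

balanced = np.tile(np.array([0, 1]), 5000)                     # perfectly balanced stream
biased = (np.random.default_rng(0).random(10_000) < 0.6).astype(int)  # 60/40 bias
print(f"balanced p = {monobit_p_value(balanced):.2f}")   # 1.00
print(f"biased   p = {monobit_p_value(biased):.2e}")     # effectively zero
```

The full battery (runs, spectral, serial, approximate-entropy tests, etc.) probes structure this single statistic cannot see, which is why the success criteria require ≥ 13 of 15 tests to pass, not just this one.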

Source

AegisMind Research