Comprehensive cryptographic and economic security review
The audit covered the OmniSync on-chain programs and the off-chain ZK proof generation pipeline. All Solana programs were reviewed both in source and in their compiled Anchor BPF form, with particular focus on the Proof-of-Computation verifier, stake/slashing mechanics, and reward distribution logic.
| Component | File / Module | Lines | Focus |
|---|---|---|---|
| PoC Verifier | programs/poc-verifier/src/lib.rs | ~920 | ZK proof validation, Groth16 pairing checks |
| Stake Manager | programs/stake-manager/src/lib.rs | ~680 | Stake lock/unlock, slashing invariants |
| Reward Router | programs/reward-router/src/lib.rs | ~430 | Epoch reward calculation, distribution |
| Registry | programs/node-registry/src/lib.rs | ~310 | Node registration, metadata updates |
| Proof Generator | off-chain/zkp/generator.ts | ~560 | Client-side ZK circuit, witness computation |
| Economic Model | tokenomics/model.py | ~220 | Inflation schedule, token emission |
The Groth16 verifier accepted proof elements in non-canonical form. BN254 point coordinates are elements of a prime field with a fixed modulus, and the deserializer tolerated byte encodings that were not fully reduced, so a single curve point admitted multiple valid byte representations. Without a canonical-encoding check, a valid proof (π_A, π_B, π_C) could be "randomized" post-generation into a distinct but equally valid byte sequence. An attacker who observed a provider's valid PoC submission could generate a malleated variant and submit it first, or replay it in a separate epoch, to claim rewards for work they did not perform.
```rust
// Verify Groth16 proof elements
pub fn verify_proof(proof: &ProofData, vk: &VerifyingKey) -> bool {
    let pi_a = G1Affine::from_compressed(&proof.pi_a);
    let pi_b = G2Affine::from_compressed(&proof.pi_b);
    let pi_c = G1Affine::from_compressed(&proof.pi_c);

    // No uniqueness / canonical check
    if pi_a.is_none() || pi_b.is_none() || pi_c.is_none() {
        return false;
    }
    groth16_verify(vk, &pi_a.unwrap(), &pi_b.unwrap(), &pi_c.unwrap())
}
```
```rust
pub fn verify_proof(proof: &ProofData, vk: &VerifyingKey) -> Result<()> {
    let pi_a = G1Affine::from_compressed_unchecked(&proof.pi_a)
        .filter(|p| p.is_on_curve() && p.is_torsion_free())
        .ok_or(ErrorCode::InvalidProofElement)?;
    let pi_b = G2Affine::from_compressed_unchecked(&proof.pi_b)
        .filter(|p| p.is_on_curve() && p.is_torsion_free())
        .ok_or(ErrorCode::InvalidProofElement)?;
    let pi_c = G1Affine::from_compressed_unchecked(&proof.pi_c)
        .filter(|p| p.is_on_curve() && p.is_torsion_free())
        .ok_or(ErrorCode::InvalidProofElement)?;

    // Canonical check: re-serialize and compare
    require!(pi_a.to_compressed() == proof.pi_a, ErrorCode::NonCanonicalProof);
    require!(pi_b.to_compressed() == proof.pi_b, ErrorCode::NonCanonicalProof);
    require!(pi_c.to_compressed() == proof.pi_c, ErrorCode::NonCanonicalProof);

    groth16_verify(vk, &pi_a, &pi_b, &pi_c)
}
```
Enforce canonical encoding for all curve points before verification. Serialize the deserialized point back and compare byte-for-byte with the input. Additionally, track submitted proof hashes in a Solana account to prevent epoch-level replay. Both fixes were applied and verified in the final commit.
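The replay-tracking half of this remediation can be sketched in dependency-free Rust. The on-chain version stores accepted proof hashes in a Solana account; the in-memory set and toy hash below are stand-ins for that account and a cryptographic hash (ProofLedger and toy_hash are hypothetical names, not the protocol's actual types):

```rust
use std::collections::HashSet;

/// Models the replay-protection invariant: once a proof's byte hash has
/// been recorded, any resubmission of the same bytes is rejected, in this
/// epoch or any later one.
struct ProofLedger {
    seen: HashSet<[u8; 32]>,
}

impl ProofLedger {
    fn new() -> Self {
        Self { seen: HashSet::new() }
    }

    /// Records the proof and returns true if it was not seen before;
    /// returns false for any duplicate submission.
    fn record(&mut self, proof_bytes: &[u8]) -> bool {
        self.seen.insert(toy_hash(proof_bytes))
    }
}

/// Toy FNV-style digest padded to 32 bytes so the sketch stays
/// dependency-free; production code would use SHA-256 or similar.
fn toy_hash(bytes: &[u8]) -> [u8; 32] {
    let mut acc: u64 = 0xcbf2_9ce4_8422_2325;
    for &b in bytes {
        acc ^= u64::from(b);
        acc = acc.wrapping_mul(0x0100_0000_01b3);
    }
    let mut out = [0u8; 32];
    out[..8].copy_from_slice(&acc.to_le_bytes());
    out
}
```

Note that byte-level deduplication is only sound once canonical encoding is enforced: without it, a malleated re-encoding of the same proof would hash to a different value and slip past the ledger.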
The epoch reward calculation accumulated per-node GPU-hours in a u64. With a large number of nodes each contributing a large work unit, the intermediate product total_gpu_hours * base_rate_per_hour could exceed u64::MAX (≈1.84 × 10¹⁹). In Rust release builds, integer overflow silently wraps modulo 2⁶⁴ rather than panicking, so the computed reward total would be drastically smaller than the true value, deflating that epoch's distribution and corrupting downstream per-node accounting.
```rust
let total_reward = total_gpu_hours * BASE_RATE_PER_HOUR; // silent wrap possible
let per_node = total_reward / active_node_count;
```
```rust
let total_reward = total_gpu_hours
    .checked_mul(BASE_RATE_PER_HOUR)
    .ok_or(ErrorCode::RewardOverflow)?;
let per_node = total_reward
    .checked_div(active_node_count)
    .ok_or(ErrorCode::ZeroNodeCount)?;
```
Use Rust's checked_mul / checked_div for all arithmetic in reward and accounting paths. Consider enabling overflow-checks = true in the release profile during development and test runs to surface these issues earlier.
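The wrap-versus-checked behavior is easy to demonstrate in isolation. The constant below is an illustrative magnitude, not the protocol's actual rate:

```rust
/// Hypothetical rate chosen so that a plausible workload overflows u64.
const BASE_RATE_PER_HOUR: u64 = 5_000_000_000;

/// Mirrors default release-build arithmetic: overflow wraps modulo 2^64.
fn unchecked_total(total_gpu_hours: u64) -> u64 {
    total_gpu_hours.wrapping_mul(BASE_RATE_PER_HOUR)
}

/// The remediated form: overflow surfaces as None instead of wrapping.
fn checked_total(total_gpu_hours: u64) -> Option<u64> {
    total_gpu_hours.checked_mul(BASE_RATE_PER_HOUR)
}
```

At 10 billion GPU-hours the true product (5 × 10¹⁹) exceeds u64::MAX, so the unchecked form silently wraps to roughly 1.31 × 10¹⁹ while the checked form reports the overflow to the caller.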
The governance-controlled slashing_threshold_bps parameter (the fraction of stake, in basis points, slashed on a violation) accepted the full u16 range, including 0, 10000, and even values above 10000 that are meaningless as a percentage. A value of 0 would disable slashing entirely, removing the economic disincentive for misbehavior. A value of 10000 (100%) would slash an operator's entire stake on a single infraction, which could be weaponized through governance to mass-eject legitimate operators.
```rust
pub fn set_slash_params(ctx: Context<Admin>, threshold_bps: u16) -> Result<()> {
    ctx.accounts.config.slashing_threshold_bps = threshold_bps;
    Ok(())
}
```
```rust
const MIN_SLASH_BPS: u16 = 100;  // 1% minimum
const MAX_SLASH_BPS: u16 = 5000; // 50% maximum

pub fn set_slash_params(ctx: Context<Admin>, threshold_bps: u16) -> Result<()> {
    require!(
        threshold_bps >= MIN_SLASH_BPS && threshold_bps <= MAX_SLASH_BPS,
        ErrorCode::InvalidSlashThreshold
    );
    ctx.accounts.config.slashing_threshold_bps = threshold_bps;
    Ok(())
}
```
Enforce sensible bounds on all governance parameters that have security implications. Consider adding a time-lock delay to governance parameter changes so the community has time to respond before changes take effect.
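One way such a time-lock can work is sketched below, modeled off-chain in plain Rust. The SlashConfig type, field names, and 48-hour delay are illustrative assumptions, not the team's actual implementation: a change is staged with an activation timestamp and can only be applied once the delay has elapsed.

```rust
/// Hypothetical governance delay; the real value is a protocol decision.
const TIMELOCK_SECS: i64 = 48 * 60 * 60;

struct SlashConfig {
    slashing_threshold_bps: u16,
    pending_threshold_bps: Option<u16>,
    pending_effective_at: i64,
}

impl SlashConfig {
    /// Stage a new threshold; it cannot take effect before now + TIMELOCK_SECS.
    /// (The bounds check from the remediation above would also apply here.)
    fn propose(&mut self, now: i64, threshold_bps: u16) {
        self.pending_threshold_bps = Some(threshold_bps);
        self.pending_effective_at = now + TIMELOCK_SECS;
    }

    /// Apply the staged value only after the delay has elapsed; returns
    /// whether the live config actually changed.
    fn execute(&mut self, now: i64) -> bool {
        match self.pending_threshold_bps {
            Some(bps) if now >= self.pending_effective_at => {
                self.slashing_threshold_bps = bps;
                self.pending_threshold_bps = None;
                true
            }
            _ => false,
        }
    }
}
```

The delay window is what gives operators and token holders time to exit or contest a hostile parameter change before it takes effect.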
A number of core security properties were confirmed as correctly implemented throughout the codebase.
Zellic's engagement combined automated tooling with deep manual review of cryptographic and economic security properties, areas where automated tools have limited coverage.
The OmniSync Protocol demonstrates a thoughtful approach to security-critical design. The Proof-of-Computation system is architecturally sound, with the ZK circuit correctly encoding the necessary bindings to prevent reward fraud. The two issues identified at Medium severity (proof malleability and reward overflow) were both promptly addressed with correct, idiomatic Rust fixes.
The one Low severity finding regarding slash parameter bounds was also remediated and the team additionally implemented a governance time-lock, going beyond the minimum fix — a positive indicator of the team's security posture.
Zellic found no evidence of backdoors, privileged upgrade paths without governance controls, or economic design flaws that could destabilize the protocol at scale. The codebase is suitable for mainnet deployment with the above remediations applied.