WAVIS ships two Python packages:
  • wavis-fhe — local FHE engine (Rust core via PyO3). Generates keys, encrypts, evaluates gates locally. Apache 2.0.
  • wavis — REST client for api.wavis.xyz. Manages keys/compute/webhooks through the cloud API. Apache 2.0.
You can use either independently or both together.

Install

pip install wavis-fhe       # local FHE engine
pip install wavis           # REST client
pip install wavis-fhe[cuda] # local engine with CUDA acceleration (BYO GPU)
See Installation for system requirements.

wavis_fhe — local engine

keygen()

Generate a fresh TFHE keypair.
import wavis_fhe as wv

keys = wv.keygen()                     # default: fast_128
keys = wv.keygen("standard_128")       # explicit preset
keys = wv.keygen("conservative_128")   # max safety margin
keys = wv.keygen(seed=42)              # deterministic for tests
Returns: TfheKeys object holding (secret_key, public_key, eval_key). Time: ~5 seconds first call, <1 ms for cached presets.

keygen_gpu() (BYO GPU)

keys = wv.keygen_gpu()                  # uses default CUDA device
keys = wv.keygen_gpu(device=0)          # explicit device index
keys = wv.keygen_gpu("standard_128", device=1)
Requires: pip install wavis-fhe[cuda] and an NVIDIA GPU with CUDA 11.8+. GPU keygen is ~3× faster than CPU. Subsequent gate evaluations on the returned TfheKeysGpu use the GPU automatically.

TfheKeys.encrypt(bit)

ct = keys.encrypt(True)
ct = keys.encrypt(False)
Returns: Ciphertext. Each ciphertext is ~2 KB. Encryption is constant-time.

TfheKeys.encrypt_batch(bits)

cts = keys.encrypt_batch([True, False, True, True])
# returns list[Ciphertext]
Faster than calling encrypt() in a loop because it amortizes randomness sampling.

TfheKeys.decrypt(ct)

bit = keys.decrypt(ct)   # → bool
Returns: bool — the recovered plaintext bit. Decryption requires the secret key, which only the TfheKeys instance holds. Decrypting a Ciphertext with the wrong key yields an arbitrary bit rather than an error (the scheme provides IND-CPA security; CCA security is out of scope for FHE, since homomorphic ciphertexts are malleable by design).

Boolean gates

All gate methods follow the same shape: take 1–3 ciphertexts, return one ciphertext. Each call performs one bootstrap (resets noise to a fixed level), so circuit depth is unlimited.
ct_out = keys.nand(ct_a, ct_b)   # NAND
ct_out = keys.and_(ct_a, ct_b)   # AND  (note trailing underscore — `and` is a Python keyword)
ct_out = keys.or_(ct_a, ct_b)    # OR
ct_out = keys.xor(ct_a, ct_b)    # XOR
ct_out = keys.nor(ct_a, ct_b)    # NOR
ct_out = keys.xnor(ct_a, ct_b)   # XNOR
ct_out = keys.not_(ct_a)         # NOT  (unary)
ct_out = keys.mux(sel, d1, d0)   # MUX(sel ? d1 : d0)
Gate                  Bootstrap ops   Latency (fast_128 CPU)
NAND, AND, OR, NOR    1               14 ms
XOR, XNOR             4               56 ms
NOT                   0               <1 µs (free, no bootstrap)
MUX                   3               42 ms
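Because every gate bootstraps, circuits compose to any depth. As an illustration, a one-bit full adder is two XORs, two ANDs, and an OR; the plaintext model below mirrors the encrypted calls you would make (keys.xor, keys.and_, keys.or_), shown on bools as a reference only:

```python
def full_adder(a: bool, b: bool, cin: bool) -> tuple[bool, bool]:
    """Plaintext reference for the encrypted circuit: replace each
    operator with the corresponding keys.xor / keys.and_ / keys.or_
    call on ciphertexts."""
    s1 = a ^ b          # keys.xor(ct_a, ct_b)
    total = s1 ^ cin    # keys.xor(s1, ct_cin)
    c1 = a & b          # keys.and_(ct_a, ct_b)
    c2 = s1 & cin       # keys.and_(s1, ct_cin)
    cout = c1 | c2      # keys.or_(c1, c2)
    return total, cout

# Exhaustive check against integer addition
for a in (False, True):
    for b in (False, True):
        for cin in (False, True):
            total, cout = full_adder(a, b, cin)
            assert int(total) + 2 * int(cout) == int(a) + int(b) + int(cin)
```

With the table above, the encrypted version costs 2×56 + 2×14 + 14 ≈ 154 ms per bit on fast_128 CPU.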

TfheKeys.batch_nand(pairs)

pairs = [(keys.encrypt(False), keys.encrypt(True)) for _ in range(128)]
results = keys.batch_nand(pairs)   # list[Ciphertext]
On GPU, batches of ≥32 amortize the bootstrap-key DRAM read and run at ~5.2 ms/gate (vs. 48 ms single). On CPU, batch is equivalent to serial.

TfheKeys.eval_key_bytes()

ek_bytes = keys.eval_key_bytes()    # bytes, ~40 MB for standard_128
Returns the serialized evaluation key. Safe to upload to a server — it cannot be used to recover the secret key. See Server Mode.

Ciphertext serialization

ct_bytes = ct.to_bytes()              # bytes
ct = wv.ciphertext_from_bytes(ct_bytes)  # round-trip
For network transport, base64-encode:
import base64
ct_b64 = base64.b64encode(ct.to_bytes()).decode()
ct = wv.ciphertext_from_bytes(base64.b64decode(ct_b64))

wavis — REST client

The REST client handles auth, retries, and JSON serialization for api.wavis.xyz.

Setup

from wavis.sdk.client import WAVISClient

client = WAVISClient(
    api_key="wvs_live_...",          # or set WAVIS_API_KEY env var
    base_url="https://api.wavis.xyz", # default
    timeout_ms=30_000,                # default
)
If api_key is omitted, the client reads the WAVIS_API_KEY env variable.

client.health.check()

health = client.health.check()
print(health["status"])    # "healthy"
print(health["dependencies"])  # [{"name": "redis", ...}, ...]

client.keys.* — CKKS key management

# Generate keys server-side (CKKS)
key = client.keys.generate(
    poly_degree=8192,
    scale_bits=40,
    security_level="128-bit",
    max_depth=10,
)
print(key["key_id"])

# List
keys_list = client.keys.list(limit=20)

# Delete
client.keys.delete(key["key_id"])

client.compute.* — CKKS compute

result = client.compute.run({
    "key_id": "key_...",
    "operation": "add",
    "ciphertext_ids": ["ct_a", "ct_b"],
})
print(result["compute_time_ms"])
print(result["noise_budget_remaining"])

# Async / cancellable workflows
status = client.compute.status(result["job_id"])
client.compute.cancel(result["job_id"])
Operations: add, multiply, add-plain, multiply-plain, negate, rescale, rotate, matmul, poly-eval, bootstrap. See Compute Reference for full schemas.
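Long-running operations such as bootstrap fit a simple polling loop over client.compute.status. A minimal sketch — it assumes the status payload carries a "status" field that settles at "completed", "failed", or "cancelled" (check the Compute Reference for the actual field names):

```python
import time

def wait_for_job(get_status, job_id, poll_s=0.5, timeout_s=300.0):
    """Poll `get_status(job_id)` until the job settles or times out.

    `get_status` is any callable returning a dict with a "status" key,
    e.g. client.compute.status. The terminal state names used here are
    assumptions, not confirmed API values.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = get_status(job_id)
        if status.get("status") in ("completed", "failed", "cancelled"):
            return status
        time.sleep(poll_s)
    raise TimeoutError(f"job {job_id} did not settle within {timeout_s}s")
```

Usage: final = wait_for_job(client.compute.status, result["job_id"]).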

client.webhooks.*

wh = client.webhooks.register({
    "url": "https://example.com/webhook",
    "events": ["compute_complete", "usage_alert"],
    "secret": "your-32-char-shared-secret-here",
})

# List, update, delete
client.webhooks.list()
client.webhooks.update(wh["webhook_id"], {"events": ["limit_reached"]})
client.webhooks.delete(wh["webhook_id"])

# Inspect deliveries / replay failures
deliveries = client.webhooks.deliveries(wh["webhook_id"], limit=50)
client.webhooks.replay(wh["webhook_id"], delivery_id="...")
See Webhooks Reference for event payloads and HMAC verification.
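Webhook deliveries are signed with the shared secret you register. The sketch below shows a standard HMAC-SHA256 check over the raw request body; the header name and signature encoding are assumptions here — the Webhooks Reference documents the actual scheme:

```python
import hmac
import hashlib

def verify_signature(secret: str, body: bytes, signature_hex: str) -> bool:
    """Recompute HMAC-SHA256 over the raw request body and compare in
    constant time. `signature_hex` would come from a request header
    (exact header name: see Webhooks Reference)."""
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)
```

Always verify against the raw bytes, not re-serialized JSON — re-serialization can reorder keys and break the signature.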

client.dashboard.usage()

usage = client.dashboard.usage()
print(usage["operations_total"])
print(usage["estimated_cost_usd"])

Error handling

from wavis.sdk.client import WAVISApiError

try:
    client.keys.generate(poly_degree=64, security_level="128-bit")
except WAVISApiError as e:
    print(e.status_code)    # 400
    print(e.error_code)     # "INSUFFICIENT_SECURITY"
    print(e.message)        # human-readable explanation
All non-2xx responses raise WAVISApiError with the parsed error body. The client retries idempotent requests (GET, DELETE) up to 3 times on 5xx responses with exponential backoff.
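Because the built-in retries cover only idempotent verbs, POSTs such as compute.run need their own retry policy if you want one. A minimal sketch, assuming only that the raised error exposes a status_code attribute as shown above:

```python
import time

def with_retries(call, max_attempts=3, base_delay_s=0.5,
                 retryable=(502, 503, 504)):
    """Retry `call()` on retryable HTTP status codes with exponential
    backoff. `call` is any zero-arg callable whose failures carry a
    `status_code` attribute, e.g. one that raises WAVISApiError."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception as e:
            code = getattr(e, "status_code", None)
            if code not in retryable or attempt == max_attempts - 1:
                raise
            time.sleep(base_delay_s * (2 ** attempt))
```

Usage: with_retries(lambda: client.compute.run(payload)). Only do this for operations you know are safe to resubmit.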

High-level wavis API (CKKS arithmetic)

For numerical workloads, the wavis package exposes a NumPy-like API:
import wavis

wavis.api_key = "wvs_live_..."

# Session encapsulates a key + active ciphertexts
session = wavis.create_session(scheme="rns-ckks", poly_degree=8192)

# Encrypt vectors
a_enc = session.encrypt([1.0, 2.0, 3.0, 4.0])
b_enc = session.encrypt([0.5, 0.5, 0.5, 0.5])

# Operate
sum_enc = wavis.add(a_enc, b_enc)
prod_enc = wavis.multiply(a_enc, b_enc)
poly_enc = wavis.poly_eval(a_enc, coeffs=[1, 0.5, -0.1])  # 1 + 0.5x - 0.1x²

# Decrypt
result = session.decrypt(sum_enc)
print(result)   # [1.5, 2.5, 3.5, 4.5]
Function                     Description
wavis.add(a, b)              Element-wise addition
wavis.multiply(a, b)         Element-wise multiplication
wavis.scale(a, c)            Multiply by plaintext constant
wavis.dot(a, b)              Dot product (scalar result)
wavis.matmul(M, v)           Matrix-vector multiplication
wavis.rotate(a, k)           Cyclic rotation by k slots
wavis.poly_eval(a, coeffs)   Polynomial evaluation
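In CKKS, reductions like dot products are commonly built from the same slot primitives listed above: an element-wise multiply followed by a log₂(n) rotate-and-add fold. The plaintext model below illustrates that pattern (how wavis.dot is actually implemented server-side is not documented here):

```python
def rotate(v, k):
    """Cyclic left rotation by k slots — plaintext analogue of wavis.rotate."""
    k %= len(v)
    return v[k:] + v[:k]

def dot_by_rotation(a, b):
    """Element-wise multiply, then halve the active slot count each step
    by adding a rotated copy of the vector; the sum lands in slot 0."""
    assert len(a) == len(b) and len(a) & (len(a) - 1) == 0, "power-of-two slots"
    acc = [x * y for x, y in zip(a, b)]
    step = len(acc) // 2
    while step >= 1:
        rotated = rotate(acc, step)
        acc = [x + y for x, y in zip(acc, rotated)]
        step //= 2
    return acc[0]
```

The fold uses log₂(n) rotations instead of n-1 additions, which matters when each rotation is a homomorphic operation with its own cost.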

Concurrency

The WAVISClient is thread-safe and connection-pooled (up to 32 concurrent connections per client). For multi-process workloads, instantiate one client per process.
import asyncio
from wavis.sdk.client_async import AsyncWAVISClient  # if your version ships async

async def main():
    async with AsyncWAVISClient(api_key=...) as client:
        results = await asyncio.gather(*[
            client.compute.run({...}) for _ in range(64)
        ])
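Firing 64 requests at once can saturate the 32-connection pool. A bounded-concurrency wrapper with asyncio.Semaphore keeps bursts in check; this sketch is generic over any coroutine factory, including calls on an AsyncWAVISClient if your version ships one:

```python
import asyncio

async def run_bounded(coro_fns, limit=8):
    """Run zero-arg coroutine factories with at most `limit` in flight,
    preserving result order. Keeps a burst of jobs from exhausting the
    client's connection pool."""
    sem = asyncio.Semaphore(limit)

    async def guarded(fn):
        async with sem:
            return await fn()

    return await asyncio.gather(*(guarded(fn) for fn in coro_fns))
```

Usage inside main(): results = await run_bounded(jobs, limit=16), where jobs is a list of zero-arg callables wrapping client.compute.run.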

Next Steps

Server Mode Example

Full FHE-blind workflow end-to-end

GPU Batch Example

Local GPU acceleration step-by-step