When running Geneva UDFs on Ray, your code is serialized locally and executed on remote workers. If the worker environment differs from your local environment, you may encounter subtle and difficult-to-debug errors.
Example environment mismatch errors
| Symptom | Likely Cause |
|---|---|
| `TypeError: Enum.__new__() missing 1 required positional argument` | attrs version mismatch |
| `TypeError: Can't instantiate abstract class` | Package structure differences |
| `ArrowInvalid: cannot cast` / serialization errors | NumPy 1.x vs 2.x mismatch |
| `ModuleNotFoundError` on workers | Package only installed locally |
| Model loading failures | PyTorch version mismatch |
| Permission denied errors | Missing API keys in environment variables |
These issues are notoriously difficult to debug because the error messages often don’t indicate the root cause.
Geneva provides a diagnostic tool to compare your local environment against Ray workers.
If you are encountering a hang or an exception, use the following diagnostic workflow to resolve the problem:
1. Run the diagnostic tool programmatically or via the CLI.
2. Check the PACKAGES and ENV VARS output sections for mismatches.
3. Identify critical packages: numpy, torch, pyarrow, attrs, pydantic (a quick local check is sketched after this list).
4. Identify inconsistent environment variables: `AWS_*`, `GOOGLE_APPLICATION_CREDENTIALS`.
5. Fix with a manifest for quick testing:

   ```python
   from geneva.manifest.builder import GenevaManifestBuilder

   manifest = GenevaManifestBuilder.create("fix").pip(["numpy==1.26.4"]).build()
   ```

6. Optional: build a custom image for production (if using KubeRay).
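A minimal local check of the critical package versions named in step 3, using only the Python standard library (compare its output against the tool's remote snapshot):

```python
# Print local versions of the packages most prone to mismatches.
from importlib.metadata import PackageNotFoundError, version

for pkg in ["numpy", "torch", "pyarrow", "attrs", "pydantic"]:
    try:
        print(f"{pkg}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed locally")
```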
Programmatic Usage
```python
from geneva.runners.ray.compare_env import compare_ray_environments

# Compare and print (requires the Geneva context to be initialized
# via `with db.context(..)`)
result = compare_ray_environments()

# Compare environments, filtering environment variables by prefix
result = compare_ray_environments(env_prefix="PY")
```
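For example, a minimal end-to-end sketch, assuming the database URI and cluster name used elsewhere on this page:

```python
import geneva
from geneva.runners.ray.compare_env import compare_ray_environments

# Hypothetical database URI and cluster name; substitute your own.
conn = geneva.connect("s3://my-bucket/my-db")
with conn.context(cluster="my-cluster"):
    # compare_ray_environments() must run inside an active Geneva context.
    result = compare_ray_environments()
```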
CLI Usage
```bash
# Connect to existing Ray cluster
python -m geneva.runners.ray.compare_env

# Start new local Ray cluster
python -m geneva.runners.ray.compare_env --address local

# Filter env vars by prefix
python -m geneva.runners.ray.compare_env --env-prefix RAY

# Show full JSON snapshots
python -m geneva.runners.ray.compare_env --show-all

# Skip sys.path comparison
python -m geneva.runners.ray.compare_env --no-sys-path
```
Understanding the Output
The tool outputs several sections to help you identify mismatches.
Python and Platform
Shows the Python version and OS information for both environments:
```text
=== PYTHON / PLATFORM ===
Local:
  Python: 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0]
  Impl : CPython
  Exec : /home/user/.venv/bin/python
  OS   : Linux 5.15.0-generic (x86_64)
Remote:
  Python: 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0]
  Impl : CPython
  Exec : /home/ray/anaconda3/bin/python
  OS   : Linux 5.4.0-aws (x86_64)
```
Watch for different Python versions or different OS types (macOS local vs Linux remote).
Architecture Mismatch (macOS to Linux)
If you see different OS types (e.g., Darwin locally vs Linux remotely), compiled extensions may fail with ModuleNotFoundError or segfaults.
Solution: Run Geneva from the same OS/architecture as your cluster (typically Linux x86_64). Use a Linux VM, container, or remote development environment.
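As a preflight check, you can verify your local platform matches what you expect on the cluster before submitting work. A minimal sketch using only the standard library (the expected values are assumptions; adjust to your cluster):

```python
import platform

# Most Ray clusters run Linux on x86_64; change these to match yours.
expected_os, expected_arch = "Linux", "x86_64"

local_os, local_arch = platform.system(), platform.machine()
if (local_os, local_arch) != (expected_os, expected_arch):
    print(
        f"Warning: local platform is {local_os}/{local_arch}, "
        f"cluster expects {expected_os}/{expected_arch}; "
        "compiled extensions may not be portable."
    )
```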
Environment Variables
Environment variables present in only one environment:
```text
=== ENV VARS: keys only in LOCAL ===
+ CONDA_PREFIX
+ VIRTUAL_ENV

=== ENV VARS: keys only in REMOTE ===
+ RAY_ADDRESS
+ KUBERNETES_SERVICE_HOST
```
Missing `AWS_*` or `GOOGLE_APPLICATION_CREDENTIALS` can cause storage authentication failures.
Passing Environment Variables to Workers
If critical environment variables are missing on workers, you can pass them via the manifest or cluster configuration.
Option 1: Via Manifest
```python
from geneva.manifest.builder import GenevaManifestBuilder
import os

manifest = (
    GenevaManifestBuilder.create("my-manifest")
    .env({
        "AWS_ACCESS_KEY_ID": os.environ["AWS_ACCESS_KEY_ID"],
        "AWS_SECRET_ACCESS_KEY": os.environ["AWS_SECRET_ACCESS_KEY"],
        "MY_API_KEY": os.environ["MY_API_KEY"],
    })
    .build()
)
```
Option 2: Via Cluster Configuration
```python
from geneva.cluster.builder import GenevaClusterBuilder
import os

cluster = (
    GenevaClusterBuilder.create("my-cluster")
    .ray_init_kwargs({
        "runtime_env": {
            "env_vars": {
                "AWS_ACCESS_KEY_ID": os.environ["AWS_ACCESS_KEY_ID"],
                "AWS_SECRET_ACCESS_KEY": os.environ["AWS_SECRET_ACCESS_KEY"],
            }
        }
    })
    .build()
)
```
Avoid hardcoding secrets. Use os.environ to pass values from your local environment, or use a secrets manager in production.
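For production, one option is to fetch credentials from a secrets manager at submission time rather than from your shell. A sketch using AWS Secrets Manager (assumes boto3 is installed and a secret named `my-app/worker-keys` holds a JSON object mapping variable names to values; both are hypothetical):

```python
import json

import boto3
from geneva.manifest.builder import GenevaManifestBuilder

# Fetch a JSON payload of env-var names -> values from Secrets Manager.
client = boto3.client("secretsmanager")
payload = json.loads(
    client.get_secret_value(SecretId="my-app/worker-keys")["SecretString"]
)

manifest = (
    GenevaManifestBuilder.create("my-manifest")
    .env(payload)
    .build()
)
```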
Packages
The tool shows version mismatches and packages only present in one environment:
```text
=== PACKAGES: version mismatches ===
* numpy: local=1.26.4 remote=2.2.6
* torch: local=2.0.1 remote=2.8.0+cpu
* attrs: local=23.2.0 remote=24.2.0
* pyarrow: local=14.0.1 remote=17.0.0

=== PACKAGES: only in LOCAL ===
+ my-custom-package
+ dev-tools

=== PACKAGES: only in REMOTE ===
+ kuberay-client
```
Watch for major version differences (NumPy 1.x vs 2.x) and PyTorch version mismatches.
Common Package Issues
| Issue | Symptoms | Fix |
|---|---|---|
| NumPy 1.x vs 2.x | `ArrowInvalid`, `ValueError: cannot convert`, serialization failures | Pin `numpy==1.26.4` |
| PyTorch mismatch | Model loading failures, CUDA errors, unexpected inference results | Pin to matching torch version |
| attrs mismatch | `TypeError: Enum.__new__() missing 1 required positional argument` | Pin attrs to local version |
| Missing package | `ModuleNotFoundError: No module named 'xyz'` | Add package to manifest |
Fixing Package Mismatches
Option 1: Manifest pip Dependencies
Specify packages in a Geneva manifest for a quick fix:
```python
import geneva
from geneva.manifest.builder import GenevaManifestBuilder

manifest = (
    GenevaManifestBuilder.create("my-manifest")
    .pip([
        "numpy==1.26.4",
        "torch==2.0.1",
        "attrs==23.2.0",
    ])
    .build()
)

# Then use with db.context()
conn = geneva.connect("s3://my-bucket/my-db")
conn.define_manifest("my-manifest", manifest)
with conn.context(cluster="my-cluster", manifest="my-manifest"):
    conn.open_table("my-table").backfill("my-column")
```
Pros: Quick, reusable across sessions, stored with your database.
Cons: Slower startup (downloads packages), may not work for complex dependencies.
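Since the manifest is stored with your database, a later session can reuse it by name without redefining it; a brief sketch under that assumption:

```python
import geneva

# New session: the manifest defined earlier is referenced by name,
# so the pinned packages apply without calling define_manifest again.
conn = geneva.connect("s3://my-bucket/my-db")
with conn.context(cluster="my-cluster", manifest="my-manifest"):
    conn.open_table("my-table").backfill("my-column")
```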
Option 2: Custom Ray Worker Image
For KubeRay deployments, build a custom worker image:
```dockerfile
# Dockerfile.ray-worker
FROM rayproject/ray:2.30.0-py311

# Install exact versions
RUN pip install \
    numpy==1.26.4 \
    torch==2.0.1 \
    attrs==23.2.0 \
    geneva==0.8.0

# Copy any custom packages
COPY ./my_udfs /app/my_udfs
```
Then reference in RayCluster spec:
```yaml
spec:
  workerGroupSpecs:
    - template:
        spec:
          containers:
            - image: your-registry/ray-worker:latest
```
Pros: Fastest startup, reproducible.
Cons: Requires image build/push workflow.
Option 3: Conda Environment
Use a conda environment on workers via the cluster builder:
```python
from geneva.cluster.builder import GenevaClusterBuilder

cluster = (
    GenevaClusterBuilder.create("my-cluster")
    .ray_init_kwargs({
        "runtime_env": {"conda": "environment.yml"}
    })
    .build()
)
```
Or specify conda channels and dependencies inline:
```python
cluster = (
    GenevaClusterBuilder.create("my-cluster")
    .ray_init_kwargs({
        "runtime_env": {
            "conda": {
                "channels": ["conda-forge"],
                "dependencies": [
                    "python=3.10",
                    "ffmpeg<8",
                    "torchvision=0.22.1",
                ],
            },
            "config": {"eager_install": True},
        }
    })
    .build()
)
```
Pros: Best for complex dependencies with native libraries (ffmpeg, CUDA).
Cons: Slower environment creation, requires conda on workers.
API Reference
For detailed API documentation on the environment comparison functions, see the Geneva Diagnostics API Reference.