Farming Cluster
The farming cluster is designed for larger-scale farmers, addressing various challenges associated with scaling up. Essentially, the cluster consists of four distinct components:
- Controller
- Cache
- Farmer
- Plotter
Benefits
- Bandwidth Efficiency: Centralized storage of the piece cache conserves bandwidth.
- Remote Compute Capability: Multiple PCs can contribute their CPU power for plotting (and eventually GPUs), without needing to store the plots locally.
- Redundancy: Running multiple computers for each process enhances redundancy.
- Additional Space: Moving the piece cache to a central location frees up roughly 1% of each SSD for larger plots.
Core Messaging Technology: NATS.io
At the core of this process is a third-party software called NATS.io, which is used for communication between farmer processes. The simplest way to install NATS.io is via Docker.
To start NATS, create a configuration file with the following content:
max_payload = 2MB
Save this as nats.config and start the NATS server with Docker:
docker run \
--name nats \
--restart unless-stopped \
--publish 4222:4222 \
--volume ./nats.config:/nats.config:ro \
nats -c /nats.config
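If you manage containers with Docker Compose, the same container can be described in a compose file. This is an equivalent sketch of the docker run command above; the file layout is illustrative:

```yaml
# docker-compose.yml — same NATS container as the docker run command above
services:
  nats:
    image: nats
    restart: unless-stopped
    ports:
      - "4222:4222"          # client port used by the farmer components
    volumes:
      - ./nats.config:/nats.config:ro
    command: ["-c", "/nats.config"]
```

Start it with docker compose up -d from the directory containing both files.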
Component Configuration
Each of the four components requires a few additional parameters. All of them need the cluster subcommand, the URL of the NATS server, and the name of the specific component being run.
Controller
The controller should be the first component to run. It connects to the node, so the node-rpc-url
must be included.
subspace-farmer cluster --nats-server nats://<NATS_IP>:4222 \
controller \
--base-path /path/to/controller-dir \
--node-rpc-url ws://<NODE_IP>:<NODE_PORT>
Replace <NATS_IP> with your NATS server IP address, and <NODE_IP>:<NODE_PORT> with your node’s IP address and port. Specify the working directory with --base-path.
The controller logs farm connections, disconnections, and piece cache sync progress. Optional connection-related options include --in-connections, --out-connections, --pending-in-connections, and --pending-out-connections.
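Putting this together, a controller invocation that also sets explicit connection limits might look like the following. The numeric values here are purely illustrative, not recommendations:

```shell
# Controller with optional connection limits (example values only)
subspace-farmer cluster --nats-server nats://<NATS_IP>:4222 \
    controller \
    --base-path /path/to/controller-dir \
    --node-rpc-url ws://<NODE_IP>:<NODE_PORT> \
    --out-connections 8 \
    --pending-out-connections 8
```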
Cache
Next, run the cache component. Although you can run multiple cache processes to distribute the load, only one is required.
subspace-farmer cluster --nats-server nats://<NATS_IP>:4222 \
cache \
path=/path/to/cache,size=SIZE
Replace <NATS_IP> with your NATS server IP address. Provide the cache file path with path= and specify the cache size with size=. While an SSD is recommended, a hard disk can also be used. Based on the current state of Gemini 3h, 200 GB is a good cache size.