Tips & Tricks
Additional Tips
Welcome to the Additional Tips section! Whether you're a seasoned farmer or just starting out with the Autonomys Network, these tips and tricks are designed to enhance your experience and efficiency. Here, we delve into practical advice and lesser-known techniques to help you fine-tune your farming setup and navigate common challenges with ease. From configuring your environment to managing background processes, these insights are tailored to ensure your Farmer operates smoothly and effectively. Let's dive in and explore how you can get the most out of your Autonomys Network Farmer.
Switching to a new snapshot from older/different versions of Autonomys
Unless specifically mentioned by the Development team, you should NOT have to wipe your configuration on new releases.
In general, you should be able to download the latest release and restart the Node & Farmer with the same commands you used for the prior version, with no errors.
There are some cases where version updates will cause issues with your Node & Farmer, and you may have to wipe your node, typically when errors occur. If you run into any issues, you can always check our Forums or hop into our Discord Server to ask for help.
Wipe
If you were running a node previously, and want to switch to a new network, please perform these steps and then follow the guide again:
# Replace `FARMER_FILE_NAME` with the name of the farmer file you downloaded from releases
./FARMER_FILE_NAME wipe PATH_TO_FARM
# Replace `NODE_FILE_NAME` with the name of the node file you downloaded from releases
./NODE_FILE_NAME wipe NODE_DATA_PATH
Now follow the installation guide from the beginning.
Utilizing Multiple Disks
To maximize storage capabilities, you can use multiple disks directly. This is often more efficient than relying on RAID configurations:
Example:
./FARMER_FILE_NAME farm --reward-address WALLET_ADDRESS \
path=/media/ssd1,size=100GiB \
path=/media/ssd2,size=10T \
path=/media/ssd3,size=10T
Optimizing DSN Syncing
The DSN can be a nuanced topic. To better understand our Decentralized Storage Network, please refer to this guide from our Academy. The following node flags affect DSN sync behavior:
- --out-peers
- --in-peers
- --dsn-target-connections
- --dsn-pending-in-connections
- --dsn-in-connections
Recommended Parameters
The default parameters are set with the capabilities of common consumer modem/routers in mind. Adjusting certain parameters could enhance DSN sync performance by increasing parallelism. However, if you decide to increase them significantly, ensure that your modem/router is performant enough to handle the increased traffic.
Node:
- --dsn-out-connections
- --dsn-pending-out-connections
Farmer (increasing the values of these parameters could also increase plotting speed):
- --out-connections
- --pending-out-connections
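As a sketch only, here is how these flags could be passed on the command line. The connection counts below are illustrative, not recommendations, and [YOUR_USUAL_NODE_OPTIONS] stands in for whatever options you normally start the node with:
# Node: allow more outgoing DSN connections (illustrative values)
./NODE_FILE_NAME [YOUR_USUAL_NODE_OPTIONS] --dsn-out-connections 50 --dsn-pending-out-connections 50
# Farmer: allow more outgoing connections to help plotting speed (illustrative values)
./FARMER_FILE_NAME farm --reward-address WALLET_ADDRESS path=/media/ssd1,size=100GiB \
--out-connections 50 --pending-out-connections 50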
Help
Both the node and the farmer have a variety of flags and parameters. To see a full list, append the --help flag to either the node or the farmer command.
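For example:
./NODE_FILE_NAME --help
./FARMER_FILE_NAME farm --help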
Some Additional Tips for Node, Farmer and Docker
Plotting concurrency (CPU only)
During plotting, there are both parallel and sequential parts of the table generation. By generating several tables simultaneously, we can overlap the sequential parts with parallel parts, thus improving CPU utilization. While generating tables for all records requires significant RAM, producing tables for only a few records at a time offers an optimal balance between CPU and RAM usage.
We added the --cpu-record-encoding-concurrency option to override the default behavior, which allocates one record for every two cores but does not exceed eight in parallel. According to our internal testing with P-cores, E-cores, and combinations of P+E cores, this default appears to achieve peak performance.
If you prefer the previous behavior, or if your RAM usage becomes too high, you can set --cpu-record-encoding-concurrency to 1. You may also experiment with setting it to 2, 3, etc., which may yield better results depending on your specific CPU/RAM configuration.
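A minimal sketch of how the flag is passed, reusing the placeholder path and size from the earlier example:
# Limit record encoding concurrency to 1 to lower RAM usage
./FARMER_FILE_NAME farm --reward-address WALLET_ADDRESS \
--cpu-record-encoding-concurrency 1 \
path=/media/ssd1,size=100GiB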
Create Missing Farms
A farmer has the option to specify whether the system should automatically create missing farms upon startup. If you provide a path to a plot that isn't found, the system will generate a new one. However, this may not be desirable if a drive fails to mount properly.
By default, this option is set to true, but you can override it by adding --create false. Setting this flag to false after establishing your plots can prevent unintentional file writes to the wrong drive.
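A minimal sketch (placeholder path and size) of starting the farmer without auto-creating missing farms:
# Fail on startup instead of creating a new farm if this plot is missing
./FARMER_FILE_NAME farm --reward-address WALLET_ADDRESS \
--create false \
path=/media/ssd1,size=100GiB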
Record Chunks Reading Mode
Upon startup, each farm will benchmark the performance of every plot to identify the most efficient proving method, yielding either ConcurrentChunks or WholeSector results. If you already know the optimal method for your setup, you can specify it as an argument for each farm to save benchmarking time.
For example, you can include record-chunks-mode= after defining the path and size, assigning the desired value, e.g., path=/mnt/subspace1,size=100G,record-chunks-mode=ConcurrentChunks. If you do not provide this parameter, the system will benchmark each disk on startup to identify the most efficient method.
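A sketch of a full farm command using the example value above:
# Skip the startup benchmark by fixing the proving method for this farm
./FARMER_FILE_NAME farm --reward-address WALLET_ADDRESS \
path=/mnt/subspace1,size=100G,record-chunks-mode=ConcurrentChunks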
Benchmarking Your Farmer
Benchmarking helps you test the auditing speed of your farmer against different versions of the Autonomys Network.
./subspace-farmer benchmark audit /path/to/your/plot
Replace /path/to/your/plot with the actual path to your plot. This will provide you with metrics and insights into auditing performance.
At the moment, the Autonomys farmer supports the rayon implementation, which opens files as many times as there are farming threads and, for each thread, uses a dedicated file handle.
To interpret the results: typically, you assess throughput to determine the maximum auditable size across any number of disks. Both CPU and disk performance influence this metric.
To read more about audit benchmarking, read this forum post.
There is a second command available that helps you determine how much time your farmer has left after auditing to provide a proof.
./subspace-farmer benchmark prove /path/to/your/plot
Each farmer has ~4 seconds to audit and prove, so depending on how fast auditing is, the remaining time will be used for proving. Proving performance doesn't depend on the plot size.
If an environment does not have enough time for the proving step, a message such as this is logged by the farmer:
Proving for solution skipped due to farming time limit slot=6723846 sector_index=46
To read more about prove benchmarking, read this forum post.
Scrubbing Your Farmer
In certain situations, especially when the farmer terminates unexpectedly or encounters an error, it might fail to restart correctly. The scrub command assists in resolving such issues by cleaning or resetting the specified plot.
./subspace-farmer scrub /path/to/your/plot
Be sure to replace /path/to/your/plot with your actual plot path. Use this command cautiously, as it modifies the plot state to recover from errors.
Specify Exact CPU Cores for Plotting/Replotting (CPU only)
This option will override any custom logic that the subspace farmer might otherwise use.
You can specify the plotting CPU cores by adding --cpu-plotting-cores, followed by the desired core list. Cores should be listed as comma-separated values, with whitespace used to separate different thread pools or encoding instances. For example, --cpu-plotting-cores 0,1 2,3 will result in two sectors being encoded simultaneously, each using a pair of CPU cores.
Similarly, you can customize the replotting CPU cores using --cpu-replotting-cores, followed by arguments similar to those in the --cpu-plotting-cores example above. Note that setting --cpu-plotting-cores requires --cpu-replotting-cores to be configured with the same number of CPU core groups. If the --cpu-replotting-cores setting is omitted, the farmer will default to using the same thread pool as for plotting.
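A sketch combining both flags (core numbers, path, and size are placeholders):
# Encode two sectors at a time, each pinned to a pair of cores; replot on the same core groups
./FARMER_FILE_NAME farm --reward-address WALLET_ADDRESS \
--cpu-plotting-cores 0,1 2,3 \
--cpu-replotting-cores 0,1 2,3 \
path=/media/ssd1,size=100GiB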
Snap Sync
Snap Sync works by jumping to near the end of the chain, allowing you to sync to the current block more quickly. Once the initial jump is complete, it behaves like a full sync. In fact, if you haven't synced for more than a few days, it might be quicker to remove the node database and let Snap Sync quickly sync to the most current block.
There are cases where you might not want to use Snap Sync. To perform a full sync, you can either omit the sync parameter or use --sync full. This is useful if you want to run an RPC node for public use, become an operator, or run an indexer. In those cases, you would need to run an archival node, not just a pruned node, and an archival node requires a full sync.
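As a sketch, the flag is simply appended to the command you normally use to start the node:
# Force a full sync instead of Snap Sync
./NODE_FILE_NAME [YOUR_USUAL_NODE_OPTIONS] --sync full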
Docker Wipe
In the case of a Docker setup, run docker compose down -v (and manually delete custom directories if you have specified them).
Now follow the installation guide.
If you are having issues running a node or farmer for Autonomys, feel free to join our Discord, or, even better, take a look at our forums to review existing questions or post a new one if needed!
---
Timekeepers
Gemini-3g introduces Proof-of-Time, and a new, optional role has been added to the node. Timekeepers help run PoT to ensure the security of the network. Timekeeping requires a high-performance CPU core but can be undertaken by any node runner. You can enable timekeeping with the following flags:
- --timekeeper: To enable timekeeping on the node.
- --timekeeper-cpu-cores: To specify which cores the Timekeeper will run on.
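A sketch of enabling the role. The core values below are illustrative, and the exact core-list syntax accepted by your version is worth confirming via --help:
# Run this node as a Timekeeper, pinning PoT work to specific CPU cores
./NODE_FILE_NAME [YOUR_USUAL_NODE_OPTIONS] --timekeeper --timekeeper-cpu-cores 0,1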
Read more about Timekeepers
Common Command Examples
For both the node and the farmer, here are some frequently used commands:
- Display farm information:
./FARMER_FILE_NAME info PATH_TO_FARM
- Scrub the farm for errors:
./FARMER_FILE_NAME scrub PATH_TO_FARM
- Erase all farmer-related data:
./FARMER_FILE_NAME wipe PATH_TO_FARM
- Erase all node-related data:
./NODE_FILE_NAME wipe NODE_DATA_PATH
---
NUMA Support
NUMA support will benefit farmers with large, powerful CPUs and lots of RAM available. The required RAM amount depends on the processor and the number of threads.
Previously, plotting/replotting thread pools were created for each farm separately, even though only a configured number of them could be used at a time (by default, just one). With the introduction of NUMA support, the farmer application has a thread pool manager that creates the necessary number of thread pools, which are then allocated to the farms currently plotting/replotting. When a thread pool is created, it is assigned to a set of CPU cores and will only be able to use those cores. Pinning doesn't map threads to cores 1:1; instead, the OS is free to move threads between cores, but only within the CPU cores allocated to the thread pool. This ensures that plotting for a particular sector only happens on a particular CPU/NUMA node.
Enabling NUMA on Windows/Linux machines
On Linux and Windows, the farmer will detect NUMA systems and create a number of thread pools corresponding to the number of NUMA nodes. This means the default behavior will change for large CPUs and consume more memory as a result, but this can be reverted to the previous behavior with CLI options if desired. To use NUMA, you need to enable it in the BIOS of your motherboard for the farmer to know it exists. This option is present in motherboards for Threadripper/EPYC processors but might exist in others too. If you don't enable it, both the OS and the farmer will think you have a single UMA processor and will not be able to apply these optimizations.
To read more about NUMA support and the benefits it provides for large CPUs read this forum post.
Changing number of open files limit (Linux)
Changing the number of open files limit on Linux is sometimes necessary for both system and application performance tuning. Applications that manage file sharing or distribution, including peer-to-peer systems, may require opening numerous connections to different peers simultaneously. A higher open files limit allows these applications to maintain more connections, improving data transfer rates and system efficiency.
This can also help to mitigate the Too Many Open Files error.
You can use the ulimit command to change the limit for open files temporarily. For example, ulimit -n 2048 will set the limit to 2048. This change is reverted after logging out or rebooting.
For a permanent change, you have two approaches:
- To modify the kernel parameter fs.file-max, use the sysctl command: sysctl -w fs.file-max=500000.
- To set limits for individual users, edit the /etc/security/limits.conf file. You can specify soft and hard limits for the number of open files: username soft nofile 10000 for the soft limit and username hard nofile 30000 for the hard limit.
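A consolidated sketch of the commands above (replace username with your actual user; the numeric limits are the same illustrative values used above):
# Temporary: applies to the current shell session only
ulimit -n 2048
# Permanent, system-wide kernel limit
sudo sysctl -w fs.file-max=500000
# Permanent, per-user: add these lines to /etc/security/limits.conf
# username soft nofile 10000
# username hard nofile 30000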