Benchmark

Describes the benchmarking framework you can use to estimate the computational resources required to execute the functions in the runtime logic.

Substrate and FRAME provide a flexible framework for developing custom logic for your blockchain. This flexibility enables you to design complex and interactive pallets and implement sophisticated runtime logic. However, determining the appropriate weight to assign to the functions in your pallets can be a difficult task. Benchmarking enables you to measure the time it takes to execute different functions in the runtime under different conditions. If you use benchmarking to assign accurate weights to function calls, you can prevent your blockchain from becoming overloaded and unable to produce blocks, or from being vulnerable to denial-of-service (DoS) attacks by malicious actors.

Why benchmark a pallet

It is important to understand the computational resources required to execute different functions—including runtime functions like on_initialize and verify_unsigned—to keep the runtime safe and to enable the runtime to include or exclude transactions based on the resources available.

The ability to include or exclude transactions based on available resources ensures that the runtime can continue to produce and import blocks without service interruptions. For example, if you have a function call that requires particularly intensive computation, executing the call might exceed the maximum time allowed for producing or importing a block, disrupting the block handling process or stopping blockchain progress altogether. Benchmarking helps you validate that the execution time required for different functions is within reasonable boundaries.

Similarly, a malicious user might attempt to disrupt network service by repeatedly executing a function call that requires intensive computation, or one whose declared cost doesn't accurately reflect the computation it requires. If the cost of executing a function call doesn't reflect the computation involved, there's no incentive to deter a malicious user from attacking the network. Because benchmarking helps you evaluate the weight associated with executing transactions, it also helps you determine appropriate transaction fees. Based on your benchmarks, you can set fees that represent the resources consumed by executing specific calls on the blockchain.

Developing a linear model

At a high level, benchmarking requires you to perform the following steps:

  • Write custom benchmarking logic that executes a specific code path for a function.

  • Execute the benchmark logic in the WebAssembly execution environment on a specific set of hardware and with a specific runtime configuration.

  • Execute the benchmark logic across a controlled range of possible values that might affect the execution time a function requires.

  • Execute the benchmark multiple times for each component in a function to isolate and remove outliers.

From the results generated by executing the benchmark logic, the benchmarking tool creates a linear model of the function across all of its components. The linear model for a function enables you to estimate how long it takes to execute a specific code path and to make informed decisions without actually spending any significant resources at runtime. Benchmarking assumes all transactions have linear complexity because higher-complexity functions are considered dangerous to the runtime: the weight of such functions may explode as the runtime state or input grows.
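
For example, if a call removes n items from a list in storage, benchmarking it across a range of values of n lets the framework fit a model of the form weight(n) = a + b × n, where a captures the call's fixed overhead and b the measured per-item cost.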

Benchmarking and weight

As discussed in Transactions, weights, and fees, Substrate-based chains use the concept of weight to represent the time it takes to execute the transactions in a block. The time required to execute any particular call in a transaction depends on several factors, including the following:

  • Computational complexity

  • Storage complexity

  • Database read and write operations required

  • Hardware used

To calculate an appropriate weight for a transaction, you can use benchmark parameters to measure the time it takes to execute the function calls on different hardware, using different variable values, and repeated multiple times. You can then use the results of the benchmarking tests to establish an approximate worst case weight to represent the resources required to execute each function call and each code path. Fees are then based on the worst case weight. If the actual call performs better than the worst case, the weight is adjusted and any excess fees can be returned.
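
As a minimal sketch of this refund mechanism, a dispatchable can declare its benchmarked worst-case weight up front and then report the weight it actually consumed. The do_something call, its WeightInfo function, and the MAX_ITEMS bound below are hypothetical names used for illustration:

// A sketch only: `do_something`, its benchmarked weight function, and
// `MAX_ITEMS` are hypothetical names.
#[pallet::weight(T::WeightInfo::do_something(MAX_ITEMS))]
pub fn do_something(origin: OriginFor<T>, items: u32) -> DispatchResultWithPostInfo {
	ensure_signed(origin)?;
	// ... perform the work for `items` elements ...
	// Report the weight actually consumed; the difference between this
	// and the declared worst case is refunded to the caller.
	Ok(Some(T::WeightInfo::do_something(items)).into())
}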

Because weight is a generic unit of measurement based on computation time for a specific physical machine, the weight of any function can change based on the specific hardware used for benchmarking.

By modeling the expected weight of each runtime function, the blockchain is able to calculate how many transactions or system level calls it can execute within a certain period of time.

Within FRAME, each function call that can be dispatched must have a #[weight] annotation that can return the expected weight for the worst case scenario execution of that function given its inputs. The benchmarking framework automatically generates a file with those formulas for you.

Benchmarking tools

The end-to-end benchmarking pipeline is disabled by default when compiling a node. If you want to run benchmarks, you need to compile a node with the runtime-benchmarks Rust feature flag.

Writing benchmarks

Writing a runtime benchmark is similar to writing a unit test for your pallet. Like unit tests, benchmarks must execute specific logical paths in your code. In unit tests, you check the code for specific success and failure results. For benchmarks, you want to execute the most computationally intensive path.

In writing benchmarks, you should consider the specific conditions—such as storage or runtime state—that might affect the complexity of the function. For example, if triggering more iterations in a for loop increases the number of database read and write operations, you should set up a benchmark that triggers this condition to get a more accurate representation of how the function would perform.

If a function executes different code paths depending on user input or other conditions, you might not know which path is the most computationally intensive. To help you see where complexity in the code might become unmanageable, you should create a benchmark for each possible execution path. The benchmarks can help you identify places in the code where you might want to enforce boundaries—for example, by limiting the number of elements in a vector or limiting the number of iterations in a for loop—to control how users interact with your pallet.
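
For illustration, a minimal benchmarking.rs module might look like the following. This is a sketch that assumes a pallet shaped like the node template's example pallet, with a do_something call and a Something storage item; adjust the names to match your own pallet:

#![cfg(feature = "runtime-benchmarks")]

use super::*;
use frame_benchmarking::{benchmarks, whitelisted_caller};
use frame_system::RawOrigin;

benchmarks! {
	// Benchmark `do_something` across a range of input values so that
	// the framework can fit a linear model over the component `s`.
	do_something {
		let s in 0 .. 100;
		let caller: T::AccountId = whitelisted_caller();
	}: _(RawOrigin::Signed(caller), s)
	verify {
		// Optional verify block: check the final runtime state.
		assert_eq!(Something::<T>::get(), Some(s));
	}
}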

Testing benchmarks

You can test benchmarks using the same mock runtime that you created for unit testing your pallet. The benchmarking macro you use in your benchmarking.rs module automatically generates test functions for you. For example:

fn test_benchmark_[benchmark_name]<T>() -> Result<(), &'static str>

You can add the benchmark functions to a unit test and ensure that the result of the function is Ok(()).
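
For example, assuming the do_something benchmark sketched earlier and the mock runtime from your pallet's unit tests, a test along these lines executes the generated function (module names depend on your project layout):

#[cfg(test)]
mod tests {
	use super::*;
	use crate::mock::{new_test_ext, Test};
	use frame_support::assert_ok;

	#[test]
	fn benchmarks_execute() {
		new_test_ext().execute_with(|| {
			// The `benchmarks!` macro generates this test function.
			assert_ok!(test_benchmark_do_something::<Test>());
		});
	}
}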

Verify blocks

In general, you only need to check that a benchmark returned Ok(()) because that result indicates that the function was executed successfully. However, you can optionally include a verify block with your benchmarks if you want to verify any final conditions, such as the final state of your runtime. The additional verify blocks don't affect the results of your final benchmarking process.

Run the unit tests with benchmarks

To run the benchmarking tests, you need to specify the package to test and enable the runtime-benchmarks feature. For example, you can test the benchmarks for the Balances pallet by running the following command:

cargo test --package pallet-balances --features runtime-benchmarks

Adding benchmarks

Assuming there are already some benchmarks set up on your node, you just need to add the pallet to the define_benchmarks! macro:

#[cfg(feature = "runtime-benchmarks")]
mod benches {
	define_benchmarks!(
		[frame_benchmarking, BaselineBench::<Runtime>]
		[pallet_assets, Assets]
		[pallet_babe, Babe]
		...
		[pallet_mycustom, MyCustom]
		...
	);
}

After you have added your pallet, compile your node binary with the runtime-benchmarks feature flag. For example:

cd bin/node/cli
cargo build --profile=production --features runtime-benchmarks

The production profile applies various compiler optimizations that significantly slow down compilation. If you are just testing things out and don't need final numbers, use the --release command-line option instead of the production profile.
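
For example, while iterating you might build with:

cargo build --release --features runtime-benchmarks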

Running benchmarks

After you have compiled a node binary with benchmarks enabled, you need to execute the benchmarks. If you used the production profile to compile the node, you can list the available benchmarks by running the following command:

./target/production/node-template benchmark pallet --list

Benchmark all functions in all pallets

To execute all benchmarks for the runtime, you can run a command similar to the following:

./target/production/node-template benchmark pallet \
    --chain dev \
    --execution=wasm \
    --wasm-execution=compiled \
    --pallet "*" \
    --extrinsic "*" \
    --steps 50 \
    --repeat 20 \
    --output pallets/all-weight.rs

This command creates an output file—in this case, a file named all-weight.rs—that implements the WeightInfo trait for your runtime.
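
To put the generated implementation to use, you typically point each pallet's WeightInfo associated type at it in the runtime configuration. The following is a sketch that assumes the generated code is exposed through a weights module in your runtime; the module path is hypothetical:

impl pallet_balances::Config for Runtime {
	// ... other associated types elided ...
	type WeightInfo = weights::pallet_balances::WeightInfo<Runtime>;
}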

Benchmark a specific function in a pallet

To execute the benchmark for a specific function in a specific pallet, you can run a command similar to the following:

./target/production/node-template benchmark pallet \
    --chain dev \
    --execution=wasm \
    --wasm-execution=compiled \
    --pallet pallet_balances \
    --extrinsic transfer \
    --steps 50 \
    --repeat 20 \
    --output pallets/transfer-weight.rs

This command creates an output file for the selected pallet—for example, transfer-weight.rs—that implements the WeightInfo trait for the pallet_balances pallet.
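
The generated file has roughly the following shape. This sketch uses an illustrative constant rather than real measurements, and the exact style of the generated code varies between Substrate versions:

use core::marker::PhantomData;
use frame_support::weights::Weight;

// Sketch of generated output; the base value is illustrative only.
pub struct WeightInfo<T>(PhantomData<T>);
impl<T: frame_system::Config> pallet_balances::WeightInfo for WeightInfo<T> {
	fn transfer() -> Weight {
		// Base execution time plus one database read and one write.
		Weight::from_ref_time(70_000_000)
			.saturating_add(T::DbWeight::get().reads(1))
			.saturating_add(T::DbWeight::get().writes(1))
	}
	// ... remaining trait methods elided ...
}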

Use a template to format benchmarks

The benchmarking command-line interface uses a Handlebars template to format the final output file. You can optionally pass the --template command-line option to specify a custom template instead of the default. Within the template, you have access to all the data provided by the TemplateData struct in the benchmarking command-line interface.

There are some custom Handlebars helpers included with the output generation:

  • underscore: Add an underscore to every 3rd character from the right of a string. Primarily to be used for delimiting large numbers.

  • join: Join an array of strings into a space-separated string for the template. Primarily to be used for joining all the arguments passed to the CLI.

To get a full list of benchmark subcommands, run:

./target/production/node-template benchmark --help

To get a full list of available options for the benchmark pallet subcommand, run:

./target/production/node-template benchmark pallet --help

Where to go next

The benchmarking framework provides tools that help you add, test, run, and analyze benchmarks for the functions in the runtime. The benchmarking tools that help you determine the time it takes to execute function calls include the following:

  • Benchmark macros to help you write, test, and add runtime benchmarks.

  • Linear regression analysis functions for processing benchmark data.

  • A command-line interface (CLI) to enable you to execute benchmarks on your node.

You can find examples of end-to-end benchmarks in all of the prebuilt FRAME pallets.

The benchmarks included with each pallet are not automatically added to your node. To execute these benchmarks, you need to implement the frame_benchmarking::Benchmark trait. You can see an example of how to do this in the Substrate node.

You might also find the following resources helpful:

  • Frame-benchmarking

  • Substrate Seminar: Benchmarking Your Substrate Pallet

  • How-to: Add benchmarks

  • Command reference: node-template benchmark