
Benchmark

Demonstrates how to use the benchmarking framework to estimate execution requirements for a pallet.


Last updated 1 year ago

This guide illustrates how to write a simple benchmark for a pallet, test the benchmark, and run commands to generate realistic estimates about the execution time required for the functions in a pallet. This guide does not cover how to use the benchmarking results to update transaction weights.

Add benchmarking to the pallet

  1. Open the file for your pallet in a text editor.

  2. Add the frame-benchmarking crate to the [dependencies] for the pallet using the same version and branch as the other dependencies in the pallet.

    For example:

    frame-benchmarking = { version = "4.0.0-dev", default-features = false, git = "https://github.com/paritytech/polkadot-sdk.git", branch = "polkadot-v1.0.0", optional = true }
  3. Add runtime-benchmarks to the list of [features] for the pallet.

    For example:

    [features]
    runtime-benchmarks = ["frame-benchmarking/runtime-benchmarks"]
  4. Add frame-benchmarking/std to the list of std features for the pallet.

    For example:

    std = [
       ...
       "frame-benchmarking/std",
       ...
    ]
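Taken together, the relevant sections of the pallet's Cargo.toml might look like the following sketch. The version and branch are illustrative and should match the other dependencies in your pallet:

```toml
[dependencies]
# -- snip: existing pallet dependencies --
frame-benchmarking = { version = "4.0.0-dev", default-features = false, git = "https://github.com/paritytech/polkadot-sdk.git", branch = "polkadot-v1.0.0", optional = true }

[features]
default = ["std"]
runtime-benchmarks = ["frame-benchmarking/runtime-benchmarks"]
std = [
    # -- snip: existing std features --
    "frame-benchmarking/std",
]
```

Note that frame-benchmarking is optional, so it is only compiled when the runtime-benchmarks feature is enabled.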

Add a benchmarking module

  1. Create a new text file—for example, benchmarking.rs—in the src folder for your pallet.

  2. Open the benchmarking.rs file in a text editor and create a Rust module that defines benchmarks for your pallet. Declare the module in the pallet's src/lib.rs with #[cfg(feature = "runtime-benchmarks")] mod benchmarking; so that it is only compiled when benchmarking is enabled.

    You can use the benchmarking.rs for any prebuilt pallet as an example of what to include in the Rust module. In general, the module should include code similar to the following:

    #![cfg(feature = "runtime-benchmarks")]
    
    use crate::*;
    use frame_benchmarking::{benchmarks, whitelisted_caller};
    use frame_system::RawOrigin;
    
    benchmarks! {
       // Add individual benchmarks here
       benchmark_name {
          /* code to set the initial state */
       }: {
          /* code to test the function benchmarked */
       }
       verify {
          /* optional verification */
       }
    }
  3. Write individual benchmarks to test the most computationally expensive paths for the functions in the pallet.

    The benchmarking macro automatically generates a test function for each benchmark you include in the benchmarking module. For example, the macro creates test functions similar to the following:

    fn test_benchmark_[benchmark_name]<T>() -> Result<(), &'static str>

    For example, the following benchmark tests the set_dummy function with a variable input:

    benchmarks! {
      set_dummy_benchmark {
        // Benchmark setup phase
        let b in 1 .. 1000;
      }: set_dummy(RawOrigin::Root, b.into()) // Execution phase
      verify {
        // Optional verification phase
        assert_eq!(Pallet::<T>::dummy(), Some(b.into()))
      }
    }

    In this sample code:

    • The name of the benchmark is set_dummy_benchmark.

    • The variable b stores input that is used to test the execution time of the set_dummy function.

    • The value of b varies from 1 to 1,000, so the benchmark test runs repeatedly with different input values to measure how execution time scales across the range.
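From the timings measured at different values of b, the benchmarking framework fits a linear model to estimate a base weight plus a per-unit cost for the component. The following plain-Rust sketch illustrates that fitting step with an ordinary least-squares regression; it is a simplified, hypothetical illustration, not the actual frame-benchmarking code:

```rust
// Simplified least-squares fit: given (component value, measured time) pairs,
// estimate the base cost (intercept) and the per-unit cost (slope).
fn fit_linear(samples: &[(f64, f64)]) -> (f64, f64) {
    let n = samples.len() as f64;
    let sum_x: f64 = samples.iter().map(|(x, _)| x).sum();
    let sum_y: f64 = samples.iter().map(|(_, y)| y).sum();
    let sum_xy: f64 = samples.iter().map(|(x, y)| x * y).sum();
    let sum_xx: f64 = samples.iter().map(|(x, _)| x * x).sum();
    let slope = (n * sum_xy - sum_x * sum_y) / (n * sum_xx - sum_x * sum_x);
    let intercept = (sum_y - slope * sum_x) / n;
    (intercept, slope)
}

fn main() {
    // Hypothetical measurements where time = 100 + 2 * b (in nanoseconds).
    let samples: Vec<(f64, f64)> = (1..=5)
        .map(|b| (b as f64, 100.0 + 2.0 * b as f64))
        .collect();
    let (base, per_unit) = fit_linear(&samples);
    // Recovers base = 100.0 and per_unit = 2.0 for these samples.
    println!("base = {:.1}, per_unit = {:.1}", base, per_unit);
}
```

The estimated intercept becomes the fixed portion of the extrinsic's weight, and the slope becomes the weight charged per unit of the component.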

Test the benchmarks

After you have added benchmarks to the benchmarks! macro in the benchmarking module for your pallet, you can use a mock runtime to do unit testing and ensure that the test functions for your benchmarks return Ok(()) as a result.

  1. Open the benchmarking.rs benchmarking module in a text editor.

  2. Add the impl_benchmark_test_suite! macro to the bottom of your benchmarking module:

    impl_benchmark_test_suite!(
      MyPallet,
      crate::mock::new_test_ext(),
      crate::mock::Test,
    );

    The impl_benchmark_test_suite! macro takes the following input:

    • The Pallet struct generated by your pallet, in this example MyPallet.

    • A function that generates a test genesis storage, new_test_ext().

    • The full mock runtime struct, Test.

    This is the same information you use to set up a mock runtime for unit testing. If all benchmark tests pass in the mock runtime test environment, it's likely that they will work when you run the benchmarks in the actual runtime.

  3. Execute the benchmark unit tests generated for your pallet in a mock runtime by running a command similar to the following for a pallet named pallet-mycustom:

    cargo test --package pallet-mycustom --features runtime-benchmarks
  4. Verify the test results.

For example:

running 4 tests
test mock::__construct_runtime_integrity_test::runtime_integrity_tests ... ok
test tests::it_works_for_default_value ... ok
test tests::correct_error_for_none_value ... ok
test benchmarking::bench_do_something ... ok

test result: ok. 4 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

Add benchmarking to the runtime

After you have added benchmarking to your pallet, you must also update the runtime to include the pallet and the benchmarks for the pallet.

  1. Open the Cargo.toml file for your runtime in a text editor.

  2. Add your pallet to the list of [dependencies] for the runtime:

    pallet-mycustom = { default-features = false, path = "../pallets/pallet-mycustom"}
  3. Update the [features] for the runtime to include the runtime-benchmarks for your pallet:

    [features]
    runtime-benchmarks = [
      ...
      'pallet-mycustom/runtime-benchmarks'
      ...
    ]
  4. Update the std features for the runtime to include your pallet:

     std = [
      # -- snip --
      'pallet-mycustom/std'
    ]
  5. Add the configuration trait for your pallet to the runtime.

  6. Add the pallet to the construct_runtime! macro.

  7. Add your pallet to the define_benchmarks! macro in the runtime-benchmarks feature.

    #[cfg(feature = "runtime-benchmarks")]
    mod benches {
        define_benchmarks!(
          [frame_benchmarking, BaselineBench::<Runtime>]
          [pallet_assets, Assets]
          [pallet_babe, Babe]
          ...
          [pallet_mycustom, MyPallet]
          ...
        );
    }
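Steps 5 and 6 have no snippets above; for a pallet named pallet-mycustom they typically look similar to the following sketch. The associated types shown are illustrative and depend on what the pallet's Config trait actually requires:

```rust
// In runtime/src/lib.rs — implement the pallet's configuration trait.
impl pallet_mycustom::Config for Runtime {
    type RuntimeEvent = RuntimeEvent;
    // ... any other associated types the pallet's Config trait requires
}

// Register the pallet in construct_runtime! so it becomes part of the runtime.
construct_runtime!(
    pub struct Runtime {
        System: frame_system,
        // -- snip: other pallets --
        MyPallet: pallet_mycustom,
    }
);
```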

Run your benchmarks

After you update the runtime, you are ready to compile it with the runtime-benchmarks feature enabled and start the benchmarking analysis for your pallet.

  1. Build your project with the runtime-benchmarks feature enabled by running the following command:

    cargo build --package node-template --release --features runtime-benchmarks
  2. Review the command-line options for the node benchmark pallet subcommand:

    ./target/release/node-template benchmark pallet --help

    The benchmark pallet subcommand supports several command-line options that can help you automate your benchmarking. For example, you can set the --steps and --repeat command-line options to execute function calls multiple times with different values.

  3. Start benchmarking for your pallet by running a command similar to the following:

    ./target/release/node-template benchmark pallet \
     --chain dev \
     --pallet pallet_mycustom \
     --extrinsic '*' \
     --steps 20 \
     --repeat 10 \
     --output pallets/pallet-mycustom/src/weights.rs
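For intuition about the command-line options: --steps controls how many evenly spaced values are sampled from each component range, and --repeat controls how many times each sampled value is measured. The following plain-Rust sketch of the sampling is a hypothetical illustration, not the actual CLI code:

```rust
// Evenly sample `steps` values across a component range [low, high].
fn sample_component(low: u32, high: u32, steps: u32) -> Vec<u32> {
    (0..steps)
        .map(|i| low + (high - low) * i / (steps - 1).max(1))
        .collect()
}

fn main() {
    // With `let b in 1 .. 1000` and --steps 20, roughly 20 evenly spaced
    // values between 1 and 1000 are benchmarked, each repeated --repeat times.
    let points = sample_component(1, 1000, 20);
    println!("{:?}", points);
}
```

More steps give the linear fit more data points across the range; more repeats reduce measurement noise at each point.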

Examples

You can use the benchmarking.rs and weights.rs files for any prebuilt pallet to learn more about benchmarking different types of functions.

The benchmarking module for pallet-example-basic provides a few simple sample benchmarks:

  • Example pallet: Benchmarks

  • Example pallet: Weights

  • Balances pallet: Benchmarks

  • Balances pallet: Weights

If you need more details about adding a pallet to the runtime, see Add a pallet to the runtime or Import a pallet.

The benchmark pallet command creates a weights.rs file in the specified directory. For information about how to configure your pallet to use those weights, see Use custom weights.