The NEAR Protocol Specification
NEAR Protocol is a scalable blockchain protocol.
For an overview of the NEAR Protocol, read the following documents in numerical order.
Standards
Standards such as the Fungible Token Standard can be found on the Standards page.
Terminology
Abstraction definitions
Chain
A chain is replication machinery that provides a way for any type of state to be replicated across the network, and for the network to reach consensus on that state.
Primitives
Accounts
Account ID
NEAR Protocol has an account name system. An account ID is similar to a username. Account IDs have to follow a set of rules.
Account ID Rules
- minimum length is 2
- maximum length is 64
- An Account ID consists of Account ID parts separated by `.`
- An Account ID part consists of lowercase alphanumeric symbols separated by either `_` or `-`
- An Account ID that is 64 characters long and consists of lowercase hex characters is a specific, implicit account ID
Account names are similar to domain names. A top-level account (TLA) like `near`, `com` or `eth` can only be created by the `registrar` account (see the next section for more details). Only `near` can create `alice.near`, and only `alice.near` can create `app.alice.near`, and so on. Note that `near` can NOT create `app.alice.near` directly.

Additionally, there is an implicit account creation path. Account IDs that are 64 characters long can only be created with an `AccessKey` that matches the account ID via `hex` derivation. This allows anyone to create a new key pair locally; the sender of funds to this account ID is the one who actually creates the account.
Regex for a full account ID, without checking for length:
`^(([a-z\d]+[\-_])*[a-z\d]+\.)*([a-z\d]+[\-_])*[a-z\d]+$`
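The full validity check (the regex above plus the length bounds) can be sketched in Python; the function name here is illustrative, not part of any NEAR library:

```python
import re

# Regex from the spec; length must be checked separately.
ACCOUNT_ID_RE = re.compile(r"^(([a-z\d]+[\-_])*[a-z\d]+\.)*([a-z\d]+[\-_])*[a-z\d]+$")

def is_valid_account_id(account_id: str) -> bool:
    """Check both the length bounds (2..64) and the character rules."""
    return 2 <= len(account_id) <= 64 and ACCOUNT_ID_RE.match(account_id) is not None

# Examples from the spec:
assert is_valid_account_id("bowen.google.com")
assert is_valid_account_id("max_99.near")
assert not is_valid_account_id("a")        # too short
assert not is_valid_account_id("bo__wen")  # two separators in a row
assert not is_valid_account_id("near.")    # suffix dot separator
```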
Top Level Accounts
| Name | Value |
|---|---|
| REGISTRAR_ACCOUNT_ID | `registrar` |
| MIN_ALLOWED_TOP_LEVEL_ACCOUNT_LENGTH | 32 |
Top-level account names (TLAs) are very valuable as they provide a root of trust and discoverability for companies, applications and users.
To allow fair access to them, top-level account names that are shorter than `MIN_ALLOWED_TOP_LEVEL_ACCOUNT_LENGTH` characters are going to be auctioned off.
Specifically, only the `REGISTRAR_ACCOUNT_ID` account can create new top-level accounts that are shorter than `MIN_ALLOWED_TOP_LEVEL_ACCOUNT_LENGTH` characters. `REGISTRAR_ACCOUNT_ID` implements a standard Account Naming (link TODO) interface to allow the creation of new accounts.
```python
def action_create_account(predecessor_id, account_id):
    """Called on CreateAccount action in receipt."""
    if len(account_id) < MIN_ALLOWED_TOP_LEVEL_ACCOUNT_LENGTH and predecessor_id != REGISTRAR_ACCOUNT_ID:
        raise CreateAccountOnlyByRegistrar(account_id, REGISTRAR_ACCOUNT_ID, predecessor_id)
    # Otherwise, create account with given `account_id`.
```
Note: we are not going to deploy the `registrar` auction at launch; instead, the Foundation will be allowed to deploy it after the initial launch. The link to the details of the auction will be added here in the next spec release post-MainNet.
Examples
Valid accounts:

```
ok
bowen
ek-2
ek.near
com
google.com
bowen.google.com
near
illia.cheap-accounts.near
max_99.near
100
near2019
over.9000
a.bro
// Valid, but can't be created, because "a" is too short
bro.a
```
Invalid accounts:

```
not ok // Whitespace characters are not allowed
a // Too short
100- // Suffix separator
bo__wen // Two separators in a row
_illia // Prefix separator
.near // Prefix dot separator
near. // Suffix dot separator
a..near // Two dot separators in a row
$$$ // Non-alphanumeric characters are not allowed
WAT // Non-lowercase characters are not allowed
me@google.com // @ is not allowed (it was allowed in the past)
// TOO LONG:
abcdefghijklmnopqrstuvwxyz.abcdefghijklmnopqrstuvwxyz.abcdefghijklmnopqrstuvwxyz
```
Implicit account IDs
Implicit accounts work similarly to Bitcoin/Ethereum accounts. They allow you to reserve an account ID before it's created by generating an ED25519 key pair locally. The public key of this key pair maps to the account ID: the account ID is the lowercase hex representation of the public key. An ED25519 public key is 32 bytes, which maps to a 64-character account ID.
Example: the public key `BGCCDDHfysuuVnaNVtEhhqeT4k9Muyem3Kpgq2U1m9HX` (base58) maps to the account ID `98793cd91a3f870fb126f66285808c7e094afcfc4eda8a970f6648cdf0dbd6de`.
The corresponding secret key allows you to sign transactions on behalf of this account once it's created on chain.
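The derivation above can be sketched in Python. The base58 decoder below is a minimal hand-rolled implementation for illustration, not a NEAR library API:

```python
import binascii

# Bitcoin-style base58 alphabet (the encoding NEAR uses for public keys).
B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58decode(s: str) -> bytes:
    """Minimal base58 decoder: interpret the string as a big base-58 number."""
    n = 0
    for ch in s:
        n = n * 58 + B58_ALPHABET.index(ch)
    raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
    # Each leading '1' in base58 encodes a leading zero byte.
    pad = len(s) - len(s.lstrip("1"))
    return b"\x00" * pad + raw

def implicit_account_id(public_key_b58: str) -> str:
    """Map a 32-byte ED25519 public key to its 64-character implicit account ID."""
    pk = b58decode(public_key_b58)
    assert len(pk) == 32
    return binascii.hexlify(pk).decode()

# The example from the spec:
account = implicit_account_id("BGCCDDHfysuuVnaNVtEhhqeT4k9Muyem3Kpgq2U1m9HX")
assert account == "98793cd91a3f870fb126f66285808c7e094afcfc4eda8a970f6648cdf0dbd6de"
```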
Implicit account creation
An account with an implicit account ID can only be created by sending a transaction/receipt with a single `Transfer` action to the implicit account ID receiver:

- The account will be created with the given account ID.
- The account will have a new full access key with the ED25519-curve public key of `decode_hex(account_id)` and nonce `0`.
- The account balance will have the transferred amount deposited to it.

Such an account can not be created using the `CreateAccount` action, to avoid the account being hijacked by someone without the corresponding private key.
Once an implicit account is created it acts as a regular account until it's deleted.
Account
Data for a single account is collocated in one shard. The account data consists of the following:
- Balance
- Locked balance (for staking)
- Code of the contract
- Key-value storage of the contract. Stored in an ordered trie
- Access Keys
- Postponed ActionReceipts
- Received DataReceipts
Balances
Total account balance consists of unlocked balance and locked balance.
Unlocked balance is tokens that the account can use for transaction fees, transfers, staking and other operations.
Locked balance is the tokens that are currently in use for staking, either as a validator or to become a validator. Locked balance may become unlocked at the beginning of an epoch. See [Staking] for details.
Contracts
A contract (AKA smart contract) is a program in WebAssembly that belongs to a specific account. When an account is created, it doesn't have a contract. A contract has to be explicitly deployed, either by the account owner or during the account creation. A contract can be executed by anyone who calls a method on your account. A contract has access to the storage on your account.
Storage
Every account has its own storage. It's a persistent key-value trie. Keys are ordered in lexicographical order. The storage can only be modified by the contract on the account. The current Runtime implementation only allows your account's contract to read from the storage, but this might change in the future, and other accounts' contracts may be able to read from your storage.
NOTE: Accounts are charged recurrent rent for the total storage. This includes storage of the account itself, contract code, contract storage and all access keys.
Access Keys
An access key grants access to an account. Each access key on the account is identified by a unique public key. This public key is used to validate signatures of transactions. Each access key contains a unique nonce to differentiate or order transactions signed with this access key.

Each access key has a permission associated with it. The permission can be one of two types:
- Full permission. It grants full access to the account.
- Function call permission. It grants access to only issue function call transactions.
See [Access Keys] for more details.
Access Keys
An access key provides access to a particular account. Each access key belongs to some account and is identified by a unique (within the account) public key. Access keys are stored as `account_id,public_key` pairs in the trie state. An account can have from zero to many access keys.
```rust
pub struct AccessKey {
    /// The nonce for this access key.
    /// NOTE: In some cases the access key needs to be recreated. If the new access key reuses the
    /// same public key, the nonce of the new access key should be equal to the nonce of the old
    /// access key. It's required to avoid replaying old transactions again.
    pub nonce: Nonce,
    /// Defines permissions for this access key.
    pub permission: AccessKeyPermission,
}
```
There are currently 2 types of `AccessKeyPermission` in NEAR: `FullAccess` and `FunctionCall`. `FullAccess` grants permission to issue any action on the account, like `DeployContract`, `Transfer` of tokens to another account, calling functions with `FunctionCall`, `Stake`, and even deleting the account with `DeleteAccountAction`. `FullAccess` also allows managing access keys. `AccessKeyPermission::FunctionCall` is limited to making contract calls only.
```rust
pub enum AccessKeyPermission {
    FunctionCall(FunctionCallPermission),
    FullAccess,
}
```
AccessKeyPermission::FunctionCall
Grants limited permission to make `FunctionCall` actions to a specified `receiver_id` and to methods of that particular contract, with a limit on the allowed balance to spend.
```rust
pub struct FunctionCallPermission {
    /// Allowance is a balance limit to use by this access key to pay for function call gas and
    /// transaction fees. When this access key is used, both account balance and the allowance is
    /// decreased by the same value.
    /// `None` means unlimited allowance.
    /// NOTE: To change or increase the allowance, the old access key needs to be deleted and a new
    /// access key should be created.
    pub allowance: Option<Balance>,
    /// The access key only allows transactions with the given receiver's account id.
    pub receiver_id: AccountId,
    /// A list of method names that can be used. The access key only allows transactions with the
    /// function call of one of the given method names.
    /// Empty list means any method name can be used.
    pub method_names: Vec<String>,
}
```
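A hypothetical sketch of how a runtime could validate a function call against a `FunctionCallPermission`. The names and the checking function here are illustrative, not the actual nearcore implementation:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FunctionCallPermission:
    allowance: Optional[int]   # None means unlimited allowance
    receiver_id: str
    method_names: List[str]    # empty list means any method is allowed

def is_call_allowed(perm: FunctionCallPermission,
                    receiver_id: str, method_name: str, cost: int) -> bool:
    """Check the receiver, the method name, and the remaining allowance."""
    if receiver_id != perm.receiver_id:
        return False
    if perm.method_names and method_name not in perm.method_names:
        return False
    if perm.allowance is not None and cost > perm.allowance:
        return False
    return True

perm = FunctionCallPermission(allowance=10**24, receiver_id="app.alice.near",
                              method_names=["get_status"])
assert is_call_allowed(perm, "app.alice.near", "get_status", 10**23)
assert not is_call_allowed(perm, "bob.near", "get_status", 10**23)
```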
Account without access keys
If an account has no access keys attached, it has no owner who can run transactions on its behalf. However, if such an account has a contract deployed, it can still be invoked by other accounts and contracts.
Transaction
Architecture
Near node consists roughly of a blockchain layer and a runtime layer. These layers are designed to be independent from each other: the blockchain layer can in theory support runtime that processes transactions differently, has a different virtual machine (e.g. RISC-V), has different fees; on the other hand the runtime is oblivious to where the transactions are coming from. It is not aware whether the blockchain it runs on is sharded, what consensus it uses, and whether it runs as part of a blockchain at all.
The blockchain layer and the runtime layer share the following components and invariants:
Transactions and Receipts
Transactions and receipts are a fundamental concept in NEAR Protocol. Transactions represent actions requested by the blockchain user, e.g. send assets, create an account, execute a method, etc. Receipts, on the other hand, are an internal structure; think of a receipt as a message used inside a message-passing system.
Transactions are created outside the Near Protocol node, by the user who sends them via RPC or network communication. Receipts are created by the runtime from transactions or as the result of processing other receipts.
The blockchain layer cannot create or process transactions and receipts; it can only manipulate them by passing them around and feeding them to a runtime.
Account-Based System
Similar to Ethereum, NEAR Protocol is an account-based system, which means that each blockchain user is roughly associated with one or several accounts (there are exceptions, though, when users share an account and are separated through access keys).
The runtime is essentially a complex set of rules on what to do with accounts based on the information from the transactions and the receipts. It is therefore deeply aware of the concept of account.
The blockchain layer, however, is mostly aware of accounts through the trie (see below) and the validators (see below). Outside these two, it does not operate on accounts directly.
Assume every account belongs to its own shard
Every account at NEAR belongs to some shard. All the information related to this account also belongs to the same shard. The information includes:
- Balance
- Locked balance (for staking)
- Code of the contract
- Key-value storage of the contract
- All Access Keys
The Runtime assumes this is the only information available for contract execution. While other accounts may belong to the same shard, the Runtime never uses or provides them during contract execution. We can therefore assume that every account belongs to its own shard, so there is no reason to intentionally try to collocate accounts.
Trie
Near Protocol is a stateful blockchain -- there is a state associated with each account and the user actions performed through transactions mutate that state. The state then is stored as a trie, and both the blockchain layer and the runtime layer are aware of this technical detail.
The blockchain layer manipulates the trie directly. It partitions the trie between the shards to distribute the load. It synchronizes the trie between the nodes, and eventually it is responsible for maintaining the consistency of the trie between the nodes through its consensus mechanism and other game-theoretic methods.
The runtime layer is also aware that the storage that it uses to perform the operations on is a trie. In general it does not have to know this technical detail and in theory we could have abstracted out the trie as a generic key-value storage. However, we allow some trie-specific operations that we expose to the smart contract developers so that they utilize Near Protocol to its maximum efficiency.
Tokens and gas
Even though tokens are a fundamental concept of the blockchain, they are neatly encapsulated inside the runtime layer together with gas, fees, and rewards.
The only way the blockchain layer is aware of tokens and gas is through the computation of the exchange rate and the inflation, which is based strictly on the block production mechanics.
Validators
Both the blockchain layer and the runtime layer are aware of a special group of participants who are responsible for maintaining the integrity of the Near Protocol. These participants are associated with the accounts and are rewarded accordingly. The reward part is what the runtime layer is aware of, while everything around the orchestration of the validators is inside the blockchain layer.
Blockchain Layer Concepts
Interestingly, the following concepts are for the blockchain layer only and the runtime layer is not aware of them:
- Sharding -- the runtime layer does not know that it is being used in a sharded blockchain, e.g. it does not know that the trie it works on is only a part of the overall blockchain state;
- Blocks or chunks -- the runtime does not know that the receipts that it processes constitute a chunk and that the output receipts will be used in other chunks. From the runtime perspective it consumes and outputs batches of transactions and receipts;
- Consensus -- the runtime does not know how consistency of the state is maintained;
- Communication -- the runtime doesn't know anything about the current network topology. A receipt only has a `receiver_id` (a recipient account) and knows nothing about the destination shard, so it is the responsibility of the blockchain layer to route each receipt.
Runtime Layer Concepts
- Fees and rewards -- fees and rewards are neatly encapsulated in the runtime layer. The blockchain layer, however, has indirect knowledge of them through the computation of the tokens-to-gas exchange rate and the inflation.
Chain Specification
Consensus
Definitions and notation
For the purpose of maintaining consensus, transactions are grouped into blocks. There is a single preconfigured block \(G\) called genesis block. Every block except \(G\) has a link pointing to the previous block \(\operatorname{prev}(B)\), where \(B\) is the block, and \(G\) is reachable from every block by following those links (that is, there are no cycles).
The links between blocks give rise to a partial order: for blocks \(A\) and \(B\), \(A < B\) means that \(A \ne B\) and \(A\) is reachable from \(B\) by following links to previous blocks, and \(A \le B\) means that \(A < B\) or \(A = B\). The relations \(>\) and \(\ge\) are defined as the reflected versions of \(<\) and \(\le\), respectively. Finally, \(A \sim B\) means that either \(A < B\), \(A = B\) or \(A > B\), and \(A \nsim B\) means the opposite.
A chain \(\operatorname{chain}(T)\) is a set of blocks reachable from block \(T\), which is called its tip. That is, \(\operatorname{chain}(T) = \{B \mid B \le T\}\). For any blocks \(A\) and \(B\), there is a chain that both \(A\) and \(B\) belong to iff \(A \sim B\). In this case, \(A\) and \(B\) are said to be on the same chain.
Each block has an integer height \(\operatorname{h}(B)\). It is guaranteed that block heights are monotonic (that is, for any block \(B \ne G\), \(\operatorname{h}(B) > \operatorname{h}(\operatorname{prev}(B))\)), but they need not be consecutive. Also, \(\operatorname{h}(G)\) may not be zero. Each node keeps track of a valid block with the largest height it knows about, which is called its head.
Blocks are grouped into epochs. In a chain, the set of blocks that belongs to some epoch forms a contiguous range: if blocks \(A\) and \(B\) such that \(A < B\) belong to the same epoch, then every block \(X\) such that \(A < X < B\) also belongs to that epoch. Epochs can be identified by sequential indices: \(G\) belongs to an epoch with index \(0\), and for every other block \(B\), the index of its epoch is either the same as that of \(\operatorname{prev}(B)\), or one greater.
Each epoch is associated with a set of block producers that are validating blocks in that epoch, as well as an assignment of block heights to block producers that are responsible for producing a block at that height. A block producer responsible for producing a block at height \(h\) is called block proposer at \(h\). This information (the set and the assignment) for an epoch with index \(i \ge 2\) is determined by the last block of the epoch with index \(i-2\). For epochs with indices \(0\) and \(1\), this information is preconfigured. Therefore, if two chains share the last block of some epoch, they will have the same set and the same assignment for the next two epochs, but not necessarily for any epoch after that.
The consensus protocol defines a notion of finality. Informally, if a block \(B\) is final, any future final blocks may only be built on top of \(B\). Therefore, transactions in \(B\) and preceding blocks are never going to be reversed. Finality is not a function of a block itself, rather, a block may be final or not final in some chain it is a member of. Specifically, \(\operatorname{final}(B, T)\), where \(B \le T\), means that \(B\) is final in \(\operatorname{chain}(T)\). A block that is final in a chain is final in all of its extensions: specifically, if \(\operatorname{final}(B, T)\) is true, then \(\operatorname{final}(B, T')\) is also true for all \(T' \ge T\).
Data structures
The fields in the Block header relevant to the consensus process are:
```rust
struct BlockHeader {
    ...
    prev_hash: BlockHash,
    height: BlockHeight,
    epoch_id: EpochId,
    last_final_block_hash: BlockHash,
    approvals: Vec<Option<Signature>>,
    ...
}
```
Block producers in a particular epoch exchange many kinds of messages. The two kinds relevant to the consensus are Blocks and Approvals. The approval contains the following fields:
```rust
enum ApprovalInner {
    Endorsement(BlockHash),
    Skip(BlockHeight),
}

struct Approval {
    inner: ApprovalInner,
    target_height: BlockHeight,
    signature: Signature,
    account_id: AccountId,
}
```
Where the parameter of the `Endorsement` is the hash of the approved block, the parameter of the `Skip` is the height of the approved block, `target_height` is the specific height at which the approval can be used (an approval with a particular `target_height` can only be included in the `approvals` of a block that has `height = target_height`), `account_id` is the account of the block producer who created the approval, and `signature` is their signature on the tuple `(inner, target_height)`.
Approvals Requirements
Every block \(B\) except the genesis block must logically contain approvals of the form described in the next paragraph from block producers whose cumulative stake exceeds \(^2\!/_3\) of the total stake in the current epoch, and, in the specific conditions described in the epoch switches section, also approvals of the same form from block producers whose cumulative stake exceeds \(^2\!/_3\) of the total stake in the next epoch.
The approval logically included in the block must be an `Endorsement` with the hash of \(\operatorname{prev}(B)\) if and only if \(\operatorname{h}(B) = \operatorname{h}(\operatorname{prev}(B))+1\); otherwise it must be a `Skip` with the height of \(\operatorname{prev}(B)\). See this section below for details on why the endorsements must contain the hash of the previous block and skips must contain the height.

Note that since each approval that is logically stored in the block is the same for each block producer (except for the `account_id` of the sender and the `signature`), it is redundant to store the full approvals. Instead, physically, we only store the signatures of the approvals. The specific way they are stored is the following: we first fetch the ordered set of block producers from the current epoch. If the block is on the epoch boundary and also needs to include approvals from the next epoch (see epoch switches), we append the new accounts from the next epoch:
```python
def get_accounts_for_block_ordered(h, prev_block):
    cur_epoch = get_next_block_epoch(prev_block)
    next_epoch = get_next_block_next_epoch(prev_block)
    account_ids = get_epoch_block_producers_ordered(cur_epoch)
    if next_block_needs_approvals_from_next_epoch(prev_block):
        for account_id in get_epoch_block_producers_ordered(next_epoch):
            if account_id not in account_ids:
                account_ids.append(account_id)
    return account_ids
```
The block then contains a vector of optional signatures of the same or smaller size than the resulting set of `account_ids`, with each element being `None` if the approval for that account is absent, or the signature on the approval message if it is present. It is easy to show that the actual approvals that were signed by the block producers can be reconstructed from the information available in the block, and thus the signatures can be verified. If the vector of signatures is shorter than the length of `account_ids`, the remaining signatures are assumed to be `None`.
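The reconstruction described above can be sketched as follows. This is a minimal illustration assuming the ordered `account_ids` produced by a helper like `get_accounts_for_block_ordered`; the tuple representation of an approval is a stand-in for the real structures:

```python
from typing import List, Optional

def reconstruct_approvals(account_ids: List[str],
                          signatures: List[Optional[bytes]],
                          prev_hash: bytes, prev_height: int,
                          height: int) -> List[tuple]:
    """Pair each ordered block producer with its (optional) signature on the
    single approval message that this block implies."""
    # The approval content is the same for everyone: an Endorsement of the
    # previous block's hash when the heights are consecutive, otherwise a
    # Skip of the previous block's height.
    if height == prev_height + 1:
        inner = ("Endorsement", prev_hash)
    else:
        inner = ("Skip", prev_height)
    # A signature vector shorter than account_ids is padded with None.
    padded = signatures + [None] * (len(account_ids) - len(signatures))
    return [(account_id, inner, height, sig)
            for account_id, sig in zip(account_ids, padded)]

approvals = reconstruct_approvals(["alice.near", "bob.near", "carol.near"],
                                  [b"sig-a", None],
                                  prev_hash=b"h", prev_height=10, height=11)
assert approvals[0] == ("alice.near", ("Endorsement", b"h"), 11, b"sig-a")
assert approvals[2][3] is None  # missing signatures are padded with None
```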
Messages
On receipt of the approval message the participant just stores it in the collection of approval messages.
```python
def on_approval(self, approval):
    self.approvals.append(approval)
```
Whenever a participant receives a block, the operations relevant to the consensus include updating the `head` and initiating a timer to start sending the approvals on the block to the block producers at the consecutive `target_height`s. The timer delays depend on the height of the last final block, so that information is also persisted.
```python
def on_block(self, block):
    header = block.header
    if header.height <= self.head_height:
        return
    last_final_block = store.get_block(header.last_final_block_hash)
    self.head_height = header.height
    self.head_hash = block.hash()
    self.largest_final_height = last_final_block.height
    self.timer_height = self.head_height + 1
    self.timer_started = time.time()
    self.endorsement_pending = True
```
The timer needs to be checked periodically and contains the following logic:

```python
def get_delay(n):
    return min(MAX_DELAY, MIN_DELAY + DELAY_STEP * (n - 2))

def process_timer(self):
    now = time.time()
    skip_delay = get_delay(self.timer_height - self.largest_final_height)
    if self.endorsement_pending and now > self.timer_started + ENDORSEMENT_DELAY:
        if self.head_height >= self.largest_target_height:
            self.largest_target_height = self.head_height + 1
            self.send_approval(self.head_height + 1)
        self.endorsement_pending = False
    if now > self.timer_started + skip_delay:
        assert not self.endorsement_pending
        self.largest_target_height = max(self.largest_target_height, self.timer_height + 1)
        self.send_approval(self.timer_height + 1)
        self.timer_started = now
        self.timer_height += 1
```
```python
def send_approval(self, target_height):
    if target_height == self.head_height + 1:
        inner = Endorsement(self.head_hash)
    else:
        inner = Skip(self.head_height)
    approval = Approval(inner, target_height)
    send(approval, to_whom=get_block_proposer(self.head_hash, target_height))
```
Where `get_block_proposer` returns the next block proposer given the previous block and the height of the next block.

It is also necessary that `ENDORSEMENT_DELAY < MIN_DELAY`. Moreover, while not necessary for correctness, we require that `ENDORSEMENT_DELAY * 2 <= MIN_DELAY`.
Block Production
We first define a convenience function to fetch approvals that can be included in a block at a particular height:
```python
def get_approvals(self, target_height):
    return [approval for approval
                     in self.approvals
                     if approval.target_height == target_height and
                        (isinstance(approval.inner, Skip) and approval.prev_height == self.head_height or
                         isinstance(approval.inner, Endorsement) and approval.prev_hash == self.head_hash)]
```
A block producer assigned to a particular height produces a block at that height whenever `get_approvals` returns approvals from block producers whose stake collectively exceeds \(^2\!/_3\) of the total stake.
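The 2/3-stake check can be sketched as follows. This is a minimal illustration; `stakes` is an assumed input mapping block producer accounts to their stake, not a nearcore API:

```python
from typing import Dict, List

def can_produce_block(approvers: List[str], stakes: Dict[str, int]) -> bool:
    """True when the approvers' cumulative stake strictly exceeds 2/3 of the
    total stake. `approvers` is a list of approving block producer accounts."""
    total = sum(stakes.values())
    approved = sum(stakes[a] for a in set(approvers))
    # Integer arithmetic avoids floating-point edge cases at the threshold.
    return 3 * approved > 2 * total

stakes = {"alice.near": 40, "bob.near": 35, "carol.near": 25}
assert can_produce_block(["alice.near", "bob.near"], stakes)        # 75 of 100
assert not can_produce_block(["alice.near", "carol.near"], stakes)  # 65 of 100
```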
Finality condition
A block \(B\) is final in \(\operatorname{chain}(T)\), where \(T \ge B\), when either \(B = G\) or there is a block \(X \le T\) such that \(B = \operatorname{prev}(\operatorname{prev}(X))\) and \(\operatorname{h}(X) = \operatorname{h}(\operatorname{prev}(X))+1 = \operatorname{h}(B)+2\). That is, either \(B\) is the genesis block, or \(\operatorname{chain}(T)\) includes at least two blocks on top of \(B\), and these three blocks (\(B\) and the two following blocks) have consecutive heights.
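The finality condition can be checked directly from the definition. The `Block` structure here is a hypothetical minimal stand-in with only `height` and `prev` fields:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Block:
    height: int
    prev: Optional["Block"]  # None only for the genesis block G

def is_final(b: Block, tip: Block) -> bool:
    """True when `b` is final in chain(tip): either `b` is genesis, or some
    block X <= tip satisfies b = prev(prev(X)) with three consecutive heights."""
    if b.prev is None:  # the genesis block is always final
        return True
    x = tip
    while x is not None:
        if (x.prev is not None and x.prev.prev is b
                and x.height == x.prev.height + 1 == b.height + 2):
            return True
        x = x.prev
    return False

g = Block(0, None)
b = Block(5, g)
c = Block(6, b)
d = Block(7, c)
assert is_final(b, d)      # heights 5, 6, 7 are consecutive
assert not is_final(b, c)  # only one block on top of b
```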
Epoch switches
There's a parameter \(epoch\_length \ge 3\) that defines the minimum length of an epoch. Suppose that a particular epoch \(e\_cur\) started at height \(h\), and say the next epoch will be \(e\_next\). Say \(\operatorname{BP}(e)\) is a set of block producers in epoch \(e\). Say \(\operatorname{last\_final}(T)\) is the highest final block in \(\operatorname{chain}(T)\). The following are the rules of what blocks contain approvals from what block producers, and belong to what epoch.
- Any block \(B\) with \(\operatorname{h}(\operatorname{prev}(B)) < h+epoch\_length-3\) is in the epoch \(e\_cur\) and must have approvals from more than \(^2\!/_3\) of \(\operatorname{BP}(e\_cur)\) (stake-weighted).
- Any block \(B\) with \(\operatorname{h}(\operatorname{prev}(B)) \ge h+epoch\_length-3\) for which \(\operatorname{h}(\operatorname{last\_final}(\operatorname{prev}(B))) < h+epoch\_length-3\) is in the epoch \(e\_cur\) and must logically include approvals from both more than \(^2\!/_3\) of \(\operatorname{BP}(e\_cur)\) and more than \(^2\!/_3\) of \(\operatorname{BP}(e\_next)\) (both stake-weighted).
- The first block \(B\) with \(\operatorname{h}(\operatorname{last\_final}(\operatorname{prev}(B))) \ge h+epoch\_length-3\) is in the epoch \(e\_next\) and must logically include approvals from more than \(^2\!/_3\) of \(\operatorname{BP}(e\_next)\) (stake-weighted).
(see the definition of logically including approvals in approval requirements)
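The three rules above can be sketched as a selector that, given the relevant heights, returns which block producer sets must approve the next block. This is an illustrative simplification (it returns epoch labels rather than account sets, and does not track the "first block" transition explicitly):

```python
def required_approvals(prev_height: int, last_final_height: int,
                       epoch_start: int, epoch_length: int) -> list:
    """Which block producer sets must approve the next block, per the three
    epoch-switch rules. prev_height is h(prev(B)); last_final_height is
    h(last_final(prev(B))); epoch_start is the height h where e_cur began."""
    threshold = epoch_start + epoch_length - 3
    if prev_height < threshold:
        return ["e_cur"]            # rule 1: safely inside the current epoch
    if last_final_height < threshold:
        return ["e_cur", "e_next"]  # rule 2: epoch boundary, both sets needed
    return ["e_next"]               # rule 3: the block is in the next epoch

# With epoch_start=100 and epoch_length=10, the threshold height is 107.
assert required_approvals(105, 103, 100, 10) == ["e_cur"]
assert required_approvals(108, 105, 100, 10) == ["e_cur", "e_next"]
assert required_approvals(109, 107, 100, 10) == ["e_next"]
```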
Safety
Note that with the implementation above an honest block producer can never produce two endorsements with the same `prev_height` (call this condition conflicting endorsements), nor can they produce a skip message `s` and an endorsement `e` such that `s.prev_height < e.prev_height and s.target_height >= e.target_height` (call this condition conflicting skip and endorsement).
Theorem Suppose that there are blocks \(B_1\), \(B_2\), \(T_1\) and \(T_2\) such that \(B_1 \nsim B_2\), \(\operatorname{final}(B_1, T_1)\) and \(\operatorname{final}(B_2, T_2)\). Then, more than \(^1\!/_3\) of the block producers in some epoch must have signed either conflicting endorsements or a conflicting skip and endorsement.
Proof Without loss of generality, we can assume that these blocks are chosen such that their heights are smallest possible. Specifically, we can assume that \(\operatorname{h}(T_1) = \operatorname{h}(B_1)+2\) and \(\operatorname{h}(T_2) = \operatorname{h}(B_2)+2\). Also, letting \(B_c\) be the highest block that is an ancestor of both \(B_1\) and \(B_2\), we can assume that there is no block \(X\) such that \(\operatorname{final}(X, T_1)\) and \(B_c < X < B_1\) or \(\operatorname{final}(X, T_2)\) and \(B_c < X < B_2\).
Lemma There is an epoch \(E\) such that all blocks \(X\) with \(B_c < X \le T_1\) or \(B_c < X \le T_2\) include approvals from more than \(^2\!/_3\) of the block producers in \(E\).
Proof There are two cases.
Case 1: Blocks \(B_c\), \(T_1\) and \(T_2\) are all in the same epoch. Because the set of blocks in a given epoch in a given chain is a contiguous range, all blocks between them (specifically, all blocks \(X\) such that \(B_c < X < T_1\) or \(B_c < X < T_2\)) are also in the same epoch, so all those blocks include approvals from more than \(^2\!/_3\) of the block producers in that epoch.
Case 2: Blocks \(B_c\), \(T_1\) and \(T_2\) are not all in the same epoch. Suppose that \(B_c\) and \(T_1\) are in different epochs. Let \(E\) be the epoch of \(T_1\) and \(E_p\) be the preceding epoch (\(T_1\) cannot be in the same epoch as the genesis block). Let \(R\) and \(S\) be the first and the last block of \(E_p\) in \(\operatorname{chain}(T_1)\). Then, there must exist a block \(F\) in epoch \(E_p\) such that \(\operatorname{h}(F)+2 = \operatorname{h}(S) < \operatorname{h}(T_1)\). Because \(\operatorname{h}(F) < \operatorname{h}(T_1)-2\), we have \(F < B_1\), and since there are no final blocks \(X\) such that \(B_c < X < B_1\), we conclude that \(F \le B_c\). Because there are no epochs between \(E\) and \(E_p\), we conclude that \(B_c\) is in epoch \(E_p\). Also, \(\operatorname{h}(B_c) \ge \operatorname{h}(F) \ge \operatorname{h}(R)+epoch\_length-3\). Thus, any block after \(B_c\) and until the end of \(E\) must include approvals from more than \(^2\!/_3\) of the block producers in \(E\). Applying the same argument to \(\operatorname{chain}(T_2)\), we can determine that \(T_2\) is either in \(E\) or \(E_p\), and in both cases all blocks \(X\) such that \(B_c < X \le T_2\) include approvals from more than \(^2\!/_3\) of block producers in \(E\) (the set of block producers in \(E\) is the same in \(\operatorname{chain}(T_1)\) and \(\operatorname{chain}(T_2)\) because the last block of the epoch preceding \(E_p\), if any, is before \(B_c\) and thus is shared by both chains). The case where \(B_c\) and \(T_1\) are in the same epoch, but \(B_c\) and \(T_2\) are in different epochs is handled similarly. Thus, the lemma is proven.
Now back to the theorem. Without loss of generality, assume that \(\operatorname{h}(B_1) \le \operatorname{h}(B_2)\). On the one hand, if \(\operatorname{chain}(T_2)\) doesn't include a block at height \(\operatorname{h}(B_1)\), then the first block at height greater than \(\operatorname{h}(B_1)\) must include skips from more than \(^2\!/_3\) of the block producers in \(E\) which conflict with endorsements in \(\operatorname{prev}(T_1)\), therefore, more than \(^1\!/_3\) of the block producers in \(E\) must have signed conflicting skip and endorsement. Similarly, if \(\operatorname{chain}(T_2)\) doesn't include a block at height \(\operatorname{h}(B_1)+1\), more than \(^1\!/_3\) of the block producers in \(E\) signed both an endorsement in \(T_1\) and a skip in the first block in \(\operatorname{chain}(T_2)\) at height greater than \(\operatorname{h}(T_1)\). On the other hand, if \(\operatorname{chain}(T_2)\) includes both a block at height \(\operatorname{h}(B_1)\) and a block at height \(\operatorname{h}(B_1)+1\), the latter must include endorsements for the former, which conflict with endorsements for \(B_1\). Therefore, more than \(^1\!/_3\) of the block producers in \(E\) must have signed conflicting endorsements. Thus, the theorem is proven.
Liveness
See the proof of liveness in near.ai/doomslug. The consensus in this section differs in that it requires two consecutive blocks with endorsements. The proof in the linked paper trivially extends, by observing that once the delay is sufficiently long for an honest block producer to collect enough endorsements, the next block producer ought to have enough time to collect all the endorsements too.
Approval condition
The approval condition above:

> Any valid block must logically include approvals from block producers whose cumulative stake exceeds 2/3 of the total stake in the epoch. For a block `B` and its previous block `B'`, each approval in `B` must be an `Endorsement` with the hash of `B'` if and only if `B.height == B'.height + 1`; otherwise it must be a `Skip` with the height of `B'`.

is more complex than desired, and it is tempting to unify the two conditions. Unfortunately, they cannot be unified.
It is critical that for endorsements each approval has the `prev_hash` equal to the hash of the previous block, because otherwise the safety proof above doesn't work: in the second case the endorsements in `B1` and `Bx` could be the very same approvals.
It is critical that for the skip messages we do not require the hashes in the approvals to match the hash of the previous block, because otherwise a malicious actor can create two blocks at the same height and distribute them such that half of the block producers have one as their head, and the other half has the other. The two halves of the block producers will be sending skip messages with different `prev_hash` but the same `prev_height` to the future block producers, and if there were a requirement that the `prev_hash` in the skip matches exactly the `prev_hash` of the block, no block producer would be able to create their blocks.
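The two-branch condition can be sketched as a small check. This is a sketch only; the function name and the tuple encoding of approvals are illustrative, not taken from the reference client:

```python
def is_valid_approval(approval, block_height, prev_hash, prev_height):
    """Checks one approval in a block at `block_height` whose previous
    block has hash `prev_hash` and height `prev_height`."""
    kind, payload = approval  # ("Endorsement", hash) or ("Skip", height)
    if block_height == prev_height + 1:
        # Consecutive heights: must be an Endorsement of the previous hash.
        return kind == "Endorsement" and payload == prev_hash
    # Non-consecutive heights: must be a Skip carrying the previous height.
    return kind == "Skip" and payload == prev_height
```

Note that the skip branch deliberately checks only the height, not the hash, for the reason explained above.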
Upgradability
This part of the specification describes the specifics of upgrading the protocol, and touches on a few different parts of the system.
Three different levels of upgradability are:
- Updating without any changes to underlying data structures or protocol;
- Updating when underlying data structures changed (config, database or something else internal to the node and probably client specific);
- Updating with protocol changes that all validating nodes must adjust to.
Versioning
There are two different important versions:

- Version of the binary defines its internal data structures / database and configs. This version is client specific and doesn't need to match between nodes.
- Version of the protocol, defining the "language" the nodes are speaking.
```rust
/// Latest version of protocol that this binary can work with.
type ProtocolVersion = u32;
```
Client versioning
Clients should follow semantic versioning. Specifically:
- MAJOR version defines protocol releases.
- MINOR version defines changes that are client specific but require a database migration, change of config or something similar. This includes client-specific features. The client should execute migrations on start, by detecting that the information on disk was produced by a previous version and auto-migrating it to the new one.
- PATCH version defines bug fixes, which should not require migrations or protocol changes.
Clients can define how the current version of data is stored and how migrations are applied. The general recommendation is to store the version in the database and, on binary start, check the version of the database and perform the required migrations.
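A minimal sketch of this recommended flow: the client stores its data version in the database and applies migrations one step at a time on start. The `DB_VERSION` key, the migration table, and the dict-backed storage are all hypothetical, for illustration only:

```python
CURRENT_DB_VERSION = 3

# One migration per version bump, applied in order. Each entry upgrades
# the on-disk data from version N to version N + 1.
MIGRATIONS = {
    1: lambda db: db.update(schema="v2"),
    2: lambda db: db.update(schema="v3"),
}

def migrate_on_start(db):
    """Detect the stored version and auto-migrate to the current one."""
    version = db.get("DB_VERSION", 1)
    while version < CURRENT_DB_VERSION:
        MIGRATIONS[version](db)
        version += 1
        db["DB_VERSION"] = version  # persist progress after each step
```

Persisting the version after each step means a crash mid-migration resumes from the last completed step rather than re-running everything.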
Protocol Upgrade
Generally, we handle data structure upgradability via an enum wrapper around it. See the `BlockHeader` structure for an example.
Versioned data structures
Given we expect many data structures to change or get updated as protocol evolves, a few changes are required to support that.
The major one is adding backward compatible `Versioned` data structures like this one:
```rust
enum VersionedBlockHeader {
    BlockHeaderV1(BlockHeaderV1),
    /// Current version, where `BlockHeader` is used internally for all operations.
    BlockHeaderV2(BlockHeader),
}
```
Where `VersionedBlockHeader` will be stored on disk and sent over the wire.

This allows encoding and decoding old versions (up to 256, given the https://borsh.io specification). If some data structure gets more than 256 versions, the oldest versions can probably be retired and their tags reused.

Internally the current version is used. Previous versions either match interfaces / traits that are defined by different components or are up-casted into the next version (save for hash validation).
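For illustration, here is a sketch of how a reader of such a structure might dispatch on the leading borsh enum tag (borsh encodes the enum variant index as a single leading byte, which is where the 256-version limit comes from). The parsers and the up-cast helper are placeholders, not real client code:

```python
def parse_block_header_v1(body):      # placeholder parser
    return {"version": 1, "raw": body}

def parse_block_header_v2(body):      # placeholder parser
    return {"version": 2, "raw": body}

def upgrade_v1_to_v2(header):
    # Up-cast: fill fields added in V2 with defaults, keep the rest.
    return {"version": 2, "raw": header["raw"]}

def decode_versioned_block_header(data):
    """Dispatch on the borsh enum tag (first byte) and up-cast old
    versions so the rest of the code only sees the current version."""
    tag, body = data[0], data[1:]
    if tag == 0:
        return upgrade_v1_to_v2(parse_block_header_v1(body))
    if tag == 1:
        return parse_block_header_v2(body)
    raise ValueError("unknown BlockHeader version tag: %d" % tag)
```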
Consensus
| Name | Value |
| --- | --- |
| `PROTOCOL_UPGRADE_BLOCK_THRESHOLD` | 80% |
| `PROTOCOL_UPGRADE_NUM_EPOCHS` | 2 |
Validators will indicate the version they are running via a new field in the block header:

```rust
/// Add `version` into block header.
struct BlockHeaderInnerRest {
    ...
    /// Latest version that current producing node binary is running on.
    version: ProtocolVersion,
}
```
The condition to switch to the next protocol version is based on the % of stake that, `PROTOCOL_UPGRADE_NUM_EPOCHS` epochs prior, indicated switching to the next version:
```python
def next_epoch_protocol_version(last_block):
    """Determines next epoch's protocol version given last block."""
    epoch_info = epoch_manager.get_epoch_info(last_block)
    # Find the epoch that decides if the version should change by walking back.
    for _ in range(PROTOCOL_UPGRADE_NUM_EPOCHS):
        epoch_info = epoch_manager.prev_epoch(epoch_info)
        # Stop if this is the first epoch.
        if epoch_info.prev_epoch_id == GENESIS_EPOCH_ID:
            break
    versions = collections.defaultdict(int)
    # Iterate over all blocks in the deciding epoch and collect the latest version for each validator.
    authors = {}
    for block in epoch_info:
        author_id = epoch_manager.get_block_producer(block.header.height)
        if author_id not in authors:
            authors[author_id] = block.header.rest.version
    # Weight versions with the stake of each validator.
    for author in authors:
        versions[authors[author]] += epoch_manager.validators[author].stake
    (version, stake) = max(versions.items(), key=lambda x: x[1])
    if stake > PROTOCOL_UPGRADE_BLOCK_THRESHOLD * epoch_info.total_block_producer_stake:
        return version
    # Otherwise return the version that was used in that deciding epoch.
    return epoch_info.version
```
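A self-contained illustration of the stake-weighted decision, with made-up validators and stakes; it mirrors only the tally at the end of the pseudocode above, with the epoch walking omitted:

```python
PROTOCOL_UPGRADE_BLOCK_THRESHOLD = 0.8

def decide_version(advertised_versions, stakes, current_version):
    """advertised_versions: {validator: version}; stakes: {validator: stake}."""
    votes = {}
    for validator, version in advertised_versions.items():
        votes[version] = votes.get(version, 0) + stakes[validator]
    total_stake = sum(stakes.values())
    version, stake = max(votes.items(), key=lambda kv: kv[1])
    # The new version wins only with *strictly* more than 80% of the stake.
    if stake > PROTOCOL_UPGRADE_BLOCK_THRESHOLD * total_stake:
        return version
    return current_version
```

With exactly 80% of the stake on version 42 the epoch stays on version 41, since the threshold must be strictly exceeded; with 85% it switches.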
Transactions in the Blockchain Layer
A client creates a transaction, computes the transaction hash and signs this hash to get a signed transaction. Now this signed transaction can be sent to a node.
When a node receives a new signed transaction, it validates the transaction (if the node tracks the shard) and gossips it to the peers. Eventually, the valid transaction is added to a transaction pool.
Every validating node has its own transaction pool. The transaction pool maintains transactions that were not yet discarded and not yet included into the chain.
Before producing a chunk, transactions are ordered and validated again. This is done to produce chunks with only valid transactions.
Transaction ordering
The transaction pool groups transactions by a pair of `(signer_id, signer_public_key)`. The `signer_id` is the account ID of the user who signed the transaction, and the `signer_public_key` is the public key of the account's access key that was used to sign the transactions.
Transactions within a group are not ordered.
The valid order of the transactions in a chunk is the following:
- transactions are ordered in batches.
- within a batch, all transactions should have different keys.
- the set of transaction keys in each subsequent batch should be a sub-set of the keys from the previous batch.
- transactions with the same key should be ordered in strictly increasing order of their corresponding nonces.
Note:
- the order within a batch is undefined. Each node should use a unique secret seed for that ordering to prevent users from finding the lowest keys to gain an advantage on every node.
The transaction pool provides a draining structure that allows pulling transactions in the proper order.
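The grouping and draining behavior described above can be sketched as a toy model (this models the data layout only, not the actual pool implementation):

```python
from collections import defaultdict

class TransactionPool:
    """Toy pool: keys transactions by (signer_id, signer_public_key);
    draining a group yields nonces from smallest to largest."""

    def __init__(self):
        # (signer_id, signer_public_key) -> list of nonces
        self.transactions = defaultdict(list)

    def insert(self, signer_id, signer_public_key, nonce):
        self.transactions[(signer_id, signer_public_key)].append(nonce)

    def drain_group(self, key):
        # Remove the group from the pool and yield its transactions
        # in increasing nonce order, deleting each one as it is returned.
        group = sorted(self.transactions.pop(key, []), reverse=True)
        while group:
            yield group.pop()  # smallest remaining nonce first
```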
Transaction validation
The transaction validation happens twice, once before adding to the transaction pool, next before adding to a chunk.
Before adding to a transaction pool
This is done to quickly filter out transactions that have an invalid signature or are invalid on the latest state.
Before adding to a chunk
A chunk producer has to create a chunk with valid and ordered transactions up to some limits. One limit is the maximum number of transactions, another is the total gas burnt for transactions.
To order and filter transactions, the chunk producer gets a pool iterator and passes it to the runtime adapter. The runtime adapter pulls transactions one by one. The valid transactions are added to the result; invalid transactions are discarded. Once the limit is reached, all the remaining transactions from the iterator are returned back to the pool.
Pool iterator
Pool Iterator is a trait that iterates over transaction groups until all transaction groups are empty. The Pool Iterator returns a mutable reference to a transaction group that implements a draining iterator. The draining iterator is like a normal iterator, but it removes the returned entity from the group. It pulls transactions from the group in order from the smallest nonce to the largest.
The pool iterator and the draining iterators for transaction groups allow the runtime adapter to create the proper order. For every transaction group, the runtime adapter keeps pulling transactions until a valid transaction is found. If the transaction group becomes empty, then it's skipped.
The runtime adapter may implement the following code to pull all valid transactions:
```rust
let mut valid_transactions = vec![];
let mut pool_iter = pool.pool_iterator();
while let Some(group_iter) = pool_iter.next() {
    while let Some(tx) = group_iter.next() {
        if is_valid(tx) {
            valid_transactions.push(tx);
            break;
        }
    }
}
valid_transactions
```
Transaction ordering example using pool iterator.
Let's say:

- account IDs are uppercase letters (`"A"`, `"B"`, `"C"` ...)
- public keys are lowercase letters (`"a"`, `"b"`, `"c"` ...)
- nonces are numbers (`1`, `2`, `3` ...)
A pool might have the following groups of transactions in its hashmap:

```
transactions: {
    ("A", "a") -> [1, 3, 2, 1, 2]
    ("B", "b") -> [13, 14]
    ("C", "d") -> [7]
    ("A", "c") -> [5, 2, 3]
}
```
There are 3 accounts (`"A"`, `"B"`, `"C"`). Account `"A"` used 2 public keys (`"a"`, `"c"`). The other accounts used 1 public key each.
Transactions within each group may have repeated nonces while in the pool.
That's because the pool doesn't filter transactions with the same nonce, only transactions with the same hash.
For this example, let's say that transactions are valid if the nonce is even and strictly greater than the previous nonce for the same key.
Initialization
When `.pool_iterator()` is called, a new `PoolIteratorWrapper` is created. It holds a mutable reference to the pool, so the pool can't be modified outside of this iterator. The wrapper looks like this:
```
pool: {
    transactions: {
        ("A", "a") -> [1, 3, 2, 1, 2]
        ("B", "b") -> [13, 14]
        ("C", "d") -> [7]
        ("A", "c") -> [5, 2, 3]
    }
}
sorted_groups: [],
```
`sorted_groups` is a queue of transaction groups that have already been sorted and pulled from the pool.
Transaction #1
The first group to be selected is for key `("A", "a")`. The pool iterator sorts the transactions by nonce and returns a mutable reference to the group. The sorted nonces are: `[1, 1, 2, 2, 3]`. The runtime adapter pulls `1`, then `1`, and then `2`. Both transactions with nonce `1` are invalid because of the odd nonce.

The transaction with nonce `2` is added to the list of valid transactions.
The transaction group is dropped and the pool iterator wrapper becomes the following:
```
pool: {
    transactions: {
        ("B", "b") -> [13, 14]
        ("C", "d") -> [7]
        ("A", "c") -> [5, 2, 3]
    }
}
sorted_groups: [
    ("A", "a") -> [2, 3]
],
```
Transaction #2
The next group is for key `("B", "b")`. The pool iterator sorts the transactions by nonce and returns a mutable reference to the group. The sorted nonces are: `[13, 14]`. The runtime adapter pulls `13`, then `14`. The transaction with nonce `13` is invalid because of the odd nonce.

The transaction with nonce `14` is added to the list of valid transactions.
The transaction group is dropped; since it's empty, the pool iterator removes it completely:
```
pool: {
    transactions: {
        ("C", "d") -> [7]
        ("A", "c") -> [5, 2, 3]
    }
}
sorted_groups: [
    ("A", "a") -> [2, 3]
],
```
Transaction #3
The next group is for key `("C", "d")`. The pool iterator sorts the transactions by nonce and returns a mutable reference to the group. The sorted nonces are: `[7]`. The runtime adapter pulls `7`. The transaction with nonce `7` is invalid because of the odd nonce.

No valid transaction is added for this group.

The transaction group is dropped; since it's empty, the pool iterator removes it completely:
```
pool: {
    transactions: {
        ("A", "c") -> [5, 2, 3]
    }
}
sorted_groups: [
    ("A", "a") -> [2, 3]
],
```
The next group is for key `("A", "c")`. The pool iterator sorts the transactions by nonce and returns a mutable reference to the group. The sorted nonces are: `[2, 3, 5]`. The runtime adapter pulls `2`. It's a valid transaction, so it's added to the list of valid transactions.

The mutable reference to the transaction group is dropped. The group is not empty, so the pool iterator moves it to `sorted_groups`:
```
pool: {
    transactions: { }
}
sorted_groups: [
    ("A", "a") -> [2, 3]
    ("A", "c") -> [3, 5]
],
```
Transaction #4
The next group is pulled not from the pool, but from `sorted_groups`. The key is `("A", "a")`. It's already sorted, so the iterator returns a mutable reference. The nonces are: `[2, 3]`. The runtime adapter pulls `2`, then pulls `3`.

The transaction with nonce `2` is invalid, because we've already pulled transaction #1 from this group and it had nonce `2`. The new nonce has to be larger than the previous nonce, so this transaction is invalid.

The transaction with nonce `3` is invalid because of the odd nonce.
No valid transaction is added for this group.

The transaction group is dropped; since it's empty, the pool iterator removes it completely:
```
pool: {
    transactions: { }
}
sorted_groups: [
    ("A", "c") -> [3, 5]
],
```
The next group is for key `("A", "c")`, with nonces `[3, 5]`. The runtime adapter pulls `3`, then pulls `5`. Both transactions are invalid, because the nonces are odd. No transactions are added.
The transaction group is dropped, the pool iterator wrapper becomes empty:
```
pool: {
    transactions: { }
}
sorted_groups: [ ],
```
When the runtime adapter tries to pull the next group, the pool iterator returns `None`, so the runtime adapter drops the iterator.
Dropping iterator
If the iterator was not fully drained and some transactions remained, they are reinserted back into the pool.
Chunk Transactions
Transactions that were pulled from the pool:
```
// First batch
("A", "a", 1),
("A", "a", 1),
("A", "a", 2),
("B", "b", 13),
("B", "b", 14),
("C", "d", 7),
("A", "c", 2),
// Next batch
("A", "a", 2),
("A", "a", 3),
("A", "c", 3),
("A", "c", 5),
```
The valid transactions are:
```
("A", "a", 2),
("B", "b", 14),
("A", "c", 2),
```
In total there were only 3 valid transactions, and they all ended up in a single batch.
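The whole walkthrough can be reproduced with a small simulation of the pool iterator. This is a sketch that hard-codes the example's validity rule and the first-pass/`sorted_groups` order; it is not the reference implementation:

```python
def pull_valid_transactions(transactions, is_valid):
    """transactions: {(account_id, public_key): [nonces]}.
    The first pass walks the pool groups; partially drained groups are
    queued again, which models the `sorted_groups` queue."""
    queue = [(key, sorted(nonces, reverse=True))
             for key, nonces in transactions.items()]
    valid, last_nonce = [], {}
    while queue:
        key, nonces = queue.pop(0)
        while nonces:
            nonce = nonces.pop()  # smallest remaining nonce first
            # Valid only if it passes the example rule and strictly
            # increases over the last accepted nonce for this key.
            if is_valid(nonce) and nonce > last_nonce.get(key, -1):
                valid.append((key, nonce))
                last_nonce[key] = nonce
                break
        if nonces:
            queue.append((key, nonces))  # remainder goes to sorted_groups

    return valid

pool = {
    ("A", "a"): [1, 3, 2, 1, 2],
    ("B", "b"): [13, 14],
    ("C", "d"): [7],
    ("A", "c"): [5, 2, 3],
}
result = pull_valid_transactions(pool, lambda n: n % 2 == 0)
# result == [(("A", "a"), 2), (("B", "b"), 14), (("A", "c"), 2)]
```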
Order validation
Other validators need to check the order of transactions in the produced chunk. It can be done in linear time, using a greedy algorithm.
To select the first batch we need to iterate over transactions one by one until we see a transaction with a key that we've already included in the first batch. This transaction belongs to the next batch.

Now all transactions in the N+1 batch should have a corresponding transaction with the same key in the N batch. If there is no transaction with the same key in the N batch, then the order is invalid.
We also enforce the order within the sequence of transactions for the same key: their nonces should be in strictly increasing order.
Here is the algorithm that validates the order:
```rust
fn validate_order(txs: &Vec<Transaction>) -> bool {
    let mut nonces: HashMap<Key, Nonce> = HashMap::new();
    let mut batches: HashMap<Key, usize> = HashMap::new();
    let mut current_batch = 1;

    for tx in txs {
        let key = tx.key();

        // Verifying nonce
        let nonce = tx.nonce();
        if let Some(last_nonce) = nonces.get(&key) {
            if nonce <= *last_nonce {
                // Nonces should increase.
                return false;
            }
        }
        nonces.insert(key, nonce);

        // Verifying batch
        if let Some(last_batch) = batches.get(&key) {
            if *last_batch == current_batch {
                current_batch += 1;
            } else if *last_batch < current_batch - 1 {
                // This key was skipped in the previous batch.
                return false;
            }
        } else {
            if current_batch > 1 {
                // Not in the first batch.
                return false;
            }
        }
        batches.insert(key, current_batch);
    }
    true
}
```
Light Client
The state of the light client is defined by:

- `BlockHeaderInnerLiteView` for the current head (which contains `height`, `epoch_id`, `next_epoch_id`, `prev_state_root`, `outcome_root`, `timestamp`, the hash of the block producers set for the next epoch `next_bp_hash`, and the merkle root of all the block hashes `block_merkle_root`);
- The set of block producers for the current and next epochs.
The `epoch_id` refers to the epoch to which the block that is the current known head belongs, and `next_epoch_id` is the epoch that will follow.
Light clients operate by periodically fetching instances of `LightClientBlockView` via a particular RPC end-point described below.
The light client doesn't need to receive a `LightClientBlockView` for all the blocks. Having the `LightClientBlockView` for block `B` is sufficient to be able to verify any statement about state or outcomes in any block in the ancestry of `B` (including `B` itself). In particular, having the `LightClientBlockView` for the head is sufficient to locally verify any statement about state or outcomes in any block on the canonical chain.

However, to verify the validity of a particular `LightClientBlockView`, the light client must have verified a `LightClientBlockView` for at least one block in the preceding epoch, thus to sync to the head the light client will have to fetch and verify one `LightClientBlockView` per epoch passed.
Validating Light Client Block Views
```rust
pub enum ApprovalInner {
    Endorsement(CryptoHash),
    Skip(BlockHeight)
}

pub struct ValidatorStakeView {
    pub account_id: AccountId,
    pub public_key: PublicKey,
    pub stake: Balance,
}

pub struct BlockHeaderInnerLiteView {
    pub height: BlockHeight,
    pub epoch_id: CryptoHash,
    pub next_epoch_id: CryptoHash,
    pub prev_state_root: CryptoHash,
    pub outcome_root: CryptoHash,
    pub timestamp: u64,
    pub next_bp_hash: CryptoHash,
    pub block_merkle_root: CryptoHash,
}

pub struct LightClientBlockLiteView {
    pub prev_block_hash: CryptoHash,
    pub inner_rest_hash: CryptoHash,
    pub inner_lite: BlockHeaderInnerLiteView,
}

pub struct LightClientBlockView {
    pub prev_block_hash: CryptoHash,
    pub next_block_inner_hash: CryptoHash,
    pub inner_lite: BlockHeaderInnerLiteView,
    pub inner_rest_hash: CryptoHash,
    pub next_bps: Option<Vec<ValidatorStakeView>>,
    pub approvals_after_next: Vec<Option<Signature>>,
}
```
Recall that the hash of the block is
```
sha256(concat(
    sha256(concat(
        sha256(borsh(inner_lite)),
        sha256(borsh(inner_rest))
    )),
    prev_hash
))
```
The fields `prev_block_hash`, `next_block_inner_hash` and `inner_rest_hash` are used to reconstruct the hashes of the current and next block, and the approvals that will be signed, in the following way (where `block_view` is an instance of `LightClientBlockView`):
```python
def reconstruct_light_client_block_view_fields(block_view):
    current_block_hash = sha256(concat(
        sha256(concat(
            sha256(borsh(block_view.inner_lite)),
            block_view.inner_rest_hash,
        )),
        block_view.prev_block_hash
    ))

    next_block_hash = sha256(concat(
        block_view.next_block_inner_hash,
        current_block_hash
    ))

    approval_message = concat(
        borsh(ApprovalInner::Endorsement(next_block_hash)),
        little_endian(block_view.inner_lite.height + 2)
    )

    return (current_block_hash, next_block_hash, approval_message)
```
The light client updates its head with the information from `LightClientBlockView` iff:

1. The height of the block is higher than the height of the current head;
2. The epoch of the block is equal to the `epoch_id` or `next_epoch_id` known for the current head;
3. If the epoch of the block is equal to the `next_epoch_id` of the head, then `next_bps` is not `None`;
4. `approvals_after_next` contains valid signatures on `approval_message` from the block producers of the corresponding epoch (see next section);
5. The signatures present in `approvals_after_next` correspond to more than 2/3 of the total stake (see next section);
6. If `next_bps` is not none, `sha256(borsh(next_bps))` corresponds to the `next_bp_hash` in `inner_lite`.
```python
def validate_and_update_head(block_view):
    global head
    global epoch_block_producers_map

    current_block_hash, next_block_hash, approval_message = reconstruct_light_client_block_view_fields(block_view)

    # (1)
    if block_view.inner_lite.height <= head.inner_lite.height:
        return False

    # (2)
    if block_view.inner_lite.epoch_id not in [head.inner_lite.epoch_id, head.inner_lite.next_epoch_id]:
        return False

    # (3)
    if block_view.inner_lite.epoch_id == head.inner_lite.next_epoch_id and block_view.next_bps is None:
        return False

    # (4) and (5)
    total_stake = 0
    approved_stake = 0

    epoch_block_producers = epoch_block_producers_map[block_view.inner_lite.epoch_id]
    for maybe_signature, block_producer in zip(block_view.approvals_after_next, epoch_block_producers):
        total_stake += block_producer.stake

        if maybe_signature is None:
            continue

        approved_stake += block_producer.stake
        if not verify_signature(
            public_key=block_producer.public_key,
            signature=maybe_signature,
            message=approval_message,
        ):
            return False

    threshold = total_stake * 2 // 3
    if approved_stake <= threshold:
        return False

    # (6)
    if block_view.next_bps is not None:
        if sha256(borsh(block_view.next_bps)) != block_view.inner_lite.next_bp_hash:
            return False

        epoch_block_producers_map[block_view.inner_lite.next_epoch_id] = block_view.next_bps

    head = block_view
```
Signature verification
To simplify the protocol we require that the next block and the block after next are both in the same epoch as the block that the `LightClientBlockView` corresponds to. It is guaranteed that each epoch has at least one final block for which the next two blocks that build on top of it are in the same epoch.
By construction, by the time the `LightClientBlockView` is being validated, the block producers set for its epoch is known. Specifically, when the first light client block view of the previous epoch was processed, due to (3) above `next_bps` was not `None`, and due to (6) above it corresponded to the `next_bp_hash` in the block header.
The sum of all the stakes of `next_bps` in the previous epoch is the `total_stake` referred to in (5) above.
The signatures in the `LightClientBlockView::approvals_after_next` are signatures on `approval_message`. The `i`-th signature in `approvals_after_next`, if present, must validate against the `i`-th public key in `next_bps` from the previous epoch. `approvals_after_next` can contain fewer elements than `next_bps` in the previous epoch.

`approvals_after_next` can also contain more signatures than the length of `next_bps` in the previous epoch. This is due to the fact that, as per the consensus specification, the last blocks in each epoch contain signatures from both the block producers of the current epoch and the next epoch. The trailing signatures can be safely ignored by the light client implementation.
Proof Verification
Transaction Outcome Proofs
To verify that a transaction or receipt happened on chain, a light client can request a proof through RPC by providing an `id`, which is of type

```rust
pub enum TransactionOrReceiptId {
    Transaction { hash: CryptoHash, sender: AccountId },
    Receipt { id: CryptoHash, receiver: AccountId },
}
```
and the block hash of the light client head. The RPC will return the following struct
```rust
pub struct RpcLightClientExecutionProofResponse {
    /// Proof of execution outcome
    pub outcome_proof: ExecutionOutcomeWithIdView,
    /// Proof of shard execution outcome root
    pub outcome_root_proof: MerklePath,
    /// A light weight representation of block that contains the outcome root
    pub block_header_lite: LightClientBlockLiteView,
    /// Proof of the existence of the block in the block merkle tree,
    /// which consists of blocks up to the light client head
    pub block_proof: MerklePath,
}
```
which includes everything that a light client needs to prove the execution outcome of the given transaction or receipt.
Here `ExecutionOutcomeWithIdView` is

```rust
pub struct ExecutionOutcomeWithIdView {
    /// Proof of the execution outcome
    pub proof: MerklePath,
    /// Block hash of the block that contains the outcome root
    pub block_hash: CryptoHash,
    /// Id of the execution (transaction or receipt)
    pub id: CryptoHash,
    /// The actual outcome
    pub outcome: ExecutionOutcomeView,
}
```
The proof verification can be broken down into two steps, execution outcome root verification and block merkle root verification.
Execution Outcome Root Verification
If the outcome root of the transaction or receipt is included in block `H`, then `outcome_proof` includes the block hash of `H`, as well as the merkle proof of the execution outcome in its given shard. The outcome root in `H` can be reconstructed by

```python
shard_outcome_root = compute_root(sha256(borsh(execution_outcome)), outcome_proof.proof)
block_outcome_root = compute_root(sha256(borsh(shard_outcome_root)), outcome_root_proof)
```

This outcome root must match the outcome root in `block_header_lite.inner_lite`.
Block Merkle Root Verification
Recall that the block hash can be computed from `LightClientBlockLiteView` by

```
sha256(concat(
    sha256(concat(
        sha256(borsh(inner_lite)),
        sha256(borsh(inner_rest))
    )),
    prev_hash
))
```
The expected block merkle root can be computed by
```python
block_hash = compute_block_hash(block_header_lite)
block_merkle_root = compute_root(block_hash, block_proof)
```
which must match the block merkle root in the light client block of the light client head.
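For illustration, `compute_root` can be sketched as a fold over the merkle path. The direction encoding and the plain concatenation before hashing are assumptions made for this sketch, not the client's exact serialization:

```python
import hashlib

def sha256(data):
    return hashlib.sha256(data).digest()

def compute_root(item_hash, merkle_path):
    """Fold the item hash with each (sibling_hash, direction) step of the
    merkle path, combining on the correct side at every level."""
    result = item_hash
    for sibling_hash, direction in merkle_path:
        if direction == "Left":   # the sibling is the left child
            result = sha256(sibling_hash + result)
        else:                     # the sibling is the right child
            result = sha256(result + sibling_hash)
    return result
```

For a two-leaf tree with leaves `a` and `b`, the proof for `a` is the single step `(hash(b), "Right")`, and folding it reproduces `sha256(hash(a) + hash(b))`.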
RPC end-points
Light Client Block
There's a single end-point that full nodes expose that light clients can use to fetch new `LightClientBlockView`s:
```bash
http post http://127.0.0.1:3030/ jsonrpc=2.0 method=next_light_client_block params:="[<last known hash>]" id="dontcare"
```
The RPC returns the `LightClientBlock` for the block as far into the future from the last known hash as possible for the light client to still accept it. Specifically, it either returns the last final block of the next epoch, or the last final known block. If there's no newer final block than the one the light client knows about, the RPC returns an empty result.
A standalone light client would bootstrap by requesting next blocks until it receives an empty result, and then periodically request the next light client block.
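Such a bootstrap loop might look like the following sketch. The `fetch_next` callback stands in for the `next_light_client_block` RPC call, the `"hash"` field name is hypothetical (a real client derives the new head hash from the block view), and `validate_and_update_head` is the check defined earlier in this section:

```python
def bootstrap(fetch_next, head_hash, validate_and_update_head):
    """Request next light client blocks until an empty result is returned."""
    while True:
        block_view = fetch_next(head_hash)
        if not block_view:
            # Empty result: no newer final block is known, we are synced.
            return head_hash
        if not validate_and_update_head(block_view):
            raise ValueError("invalid light client block")
        head_hash = block_view["hash"]  # hypothetical field, for brevity
```

After this loop returns, the client switches to requesting the next light client block periodically from the returned head.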
A smart contract-based light client that enables a bridge to NEAR on a different blockchain naturally cannot request blocks itself. Instead external oracles query the next light client block from one of the full nodes, and submit it to the light client smart contract. The smart contract-based light client performs the same checks described above, so the oracle doesn't need to be trusted.
Light Client Proof
The following RPC end-point returns the `RpcLightClientExecutionProofResponse` that a light client needs for verifying execution outcomes.
For a transaction execution outcome, the RPC is
```bash
http post http://127.0.0.1:3030/ jsonrpc=2.0 method=EXPERIMENTAL_light_client_proof params:="{"type": "transaction", "transaction_hash": <transaction_hash>, "sender_id": <sender_id>, "light_client_head": <light_client_head>}" id="dontcare"
```
For a receipt execution outcome, the RPC is
```bash
http post http://127.0.0.1:3030/ jsonrpc=2.0 method=EXPERIMENTAL_light_client_proof params:="{"type": "receipt", "receipt_id": <receipt_id>, "receiver_id": <receiver_id>, "light_client_head": <light_client_head>}" id="dontcare"
```
Runtime Specification
See:
Runtime
The runtime layer is used to execute smart contracts and other actions created by the users and to preserve the state between the executions. It can be described from three different angles: going step-by-step through various scenarios, describing the components of the runtime, and describing the functions that the runtime performs.
Scenarios
- Financial transaction -- we examine what happens when the runtime needs to process a simple financial transaction;
- Cross-contract call -- the scenario when the user calls a contract that in turn calls another contract.
Components
The components of the runtime can be described through the crates:
- `near-vm-logic` -- describes the interface that the smart contract uses to interact with the blockchain. Encapsulates the behavior of the blockchain visible to the smart contract, e.g. fee rules, storage access rules, promise rules;
- `near-vm-runner` crate -- a wrapper around Wasmer that does the actual execution of the smart contract code. It exposes the interface provided by `near-vm-logic` to the smart contract;
- `runtime` crate -- encapsulates the logic of how transactions and receipts should be handled. If it encounters a smart contract call within a transaction or a receipt it calls `near-vm-runner`; for all other actions, like account creation, it processes them in-place.
The utility crates are:
- `near-runtime-fees` -- a convenience crate that encapsulates the configuration of fees. We might get rid of it later;
- `near-vm-errors` -- contains the hierarchy of errors that can occur during transaction or receipt processing;
- `near-vm-runner-standalone` -- a runnable tool that allows running the runtime without the blockchain, e.g. for integration testing of L2 projects;
- `runtime-params-estimator` -- benchmarks the runtime and generates the config with the fees.
Separately from the components, we describe the Bindings Specification, which is an important part of the runtime that specifies the functions that the smart contract can call from its host -- the runtime. The specification is defined in `near-vm-logic`, but it is exposed to the smart contract in `near-vm-runner`.
Functions
- Receipt consumption and production
- Fees
- Virtual machine
- Verification
Function Call
In this section we explain how the `FunctionCall` action execution works, what the inputs are and what the outputs are. Suppose the runtime received the following `ActionReceipt`:
```rust
ActionReceipt {
    id: "A1",
    signer_id: "alice",
    signer_public_key: "6934...e248",
    receiver_id: "dex",
    predecessor_id: "alice",
    input_data_ids: [],
    output_data_receivers: [],
    actions: [FunctionCall {
        gas: 100000,
        deposit: 100000u128,
        method_name: "exchange",
        args: "{arg1, arg2, ...}",
        ...
    }],
}
```
input_data_ids to PromiseResult's
`ActionReceipt.input_data_ids` must be satisfied before execution (see Receipt Matching). Each of `ActionReceipt.input_data_ids` will be converted to a `PromiseResult::Successful(Vec<u8>)` if `data_id.data` is `Some(Vec<u8>)`; otherwise, if `data_id.data` is `None`, the promise will be `PromiseResult::Failed`.
Input
The `FunctionCall` executes in the `receiver_id` account environment.

- a vector of Promise Results which can be accessed by a `promise_result` import (Promises API `promise_result`)
- the original Transaction `signer_id` and `signer_public_key`, and data from the ActionReceipt (e.g. `method_name`, `args`, `predecessor_id`, `deposit`, `prepaid_gas` (which is `gas` in FunctionCall))
- general blockchain data (e.g. `block_index`, `block_timestamp`)
- read data from the account storage
A full list of the data available for the contract can be found in Context API and Trie
Execution
First of all, the runtime prepares the Wasm binary to be executed:

- loads the contract code from the `receiver_id` account storage
- deserializes and validates the `code` Wasm binary (see `prepare::prepare_contract`)
- injects the gas counting function `gas`, which will charge gas at the beginning of each code block
- instantiates the Bindings Spec with the binary and calls the `FunctionCall.method_name` exported function
During execution, the VM does the following:

- counts burnt gas on execution
- counts used gas (which is `burnt gas` + gas attached to the newly created receipts)
- counts how the account's storage usage increased by the call
- collects logs produced by the contract
- sets the return data
- creates new receipts through the Promises API
Output
The output of the `FunctionCall`:

- storage updates - changes to the account trie storage which will be applied on a successful call
- `burnt_gas` - irreversible amount of gas which was spent on computations
- `used_gas` - includes `burnt_gas` and gas attached to the new `ActionReceipt`s created during the method execution. In case of failure, the created `ActionReceipt`s are not going to be sent, thus the account will pay only for `burnt_gas`
- `balance` - unspent account balance (account balance could be spent on deposits of newly created `FunctionCall`s or `TransferAction`s to other contracts)
- `storage_usage` - storage usage after the ActionReceipt application
- `logs` - during contract execution, utf8/16 string log records could be created. Logs are not persistent currently.
- `new_receipts` - new `ActionReceipt`s created during the execution. These receipts are going to be sent to the respective `receiver_id`s (see the Receipt Matching explanation)
- result - could be either `ReturnData::Value(Vec<u8>)` or `ReturnData::ReceiptIndex(u64)`
Value Result
If the applied `ActionReceipt` contains `output_data_receivers`, the runtime creates a `DataReceipt` for each `data_id` and `receiver_id` pair, with `data` equal to the returned value. Eventually, these `DataReceipt`s will be delivered to the corresponding receivers.
ReceiptIndex Result
A successful result might not return a Value, but instead generate a number of new `ActionReceipt`s. One example could be a callback. In this case, we assume the new receipt will send its Value Result to the `output_data_receivers` of the current `ActionReceipt`.
Errors
As with other actions, errors can be divided into two categories: validation errors and execution errors.
Validation Error
- If there is zero gas attached to the function call, a

```rust
/// The attached amount of gas in a FunctionCall action has to be a positive number.
FunctionCallZeroAttachedGas,
```

error will be returned.
- If the length of the method name to be called exceeds `max_length_method_name`, a genesis parameter whose current value is `256`, a

```rust
/// The length of the method name exceeded the limit in a Function Call action.
FunctionCallMethodNameLengthExceeded { length: u64, limit: u64 }
```

error is returned.
- If the length of the arguments to the function call exceeds `max_arguments_length`, a genesis parameter whose current value is `4194304` (4MB), a

```rust
/// The length of the arguments exceeded the limit in a Function Call action.
FunctionCallArgumentsLengthExceeded { length: u64, limit: u64 }
```

error is returned.
Execution Error
There can be three types of errors returned when applying a function call action: `FunctionCallError`, `ExternalError`, and `StorageError`.

`FunctionCallError` includes everything related to the execution of the wasm binary, from compiling wasm to native code to traps that occur while executing the compiled binary. More specifically, it includes the following errors:

```rust
pub enum FunctionCallError {
    /// Wasm compilation error
    CompilationError(CompilationError),
    /// Wasm binary env link error
    LinkError {
        msg: String,
    },
    /// Import/export resolve error
    MethodResolveError(MethodResolveError),
    /// A trap happened during execution of a binary
    WasmTrap(WasmTrap),
    WasmUnknownError,
    HostError(HostError),
}
```
- `CompilationError` includes errors that can occur during the compilation of the wasm binary.
- `LinkError` is returned when the wasmer runtime is unable to link the wasm module with the provided imports.
- `MethodResolveError` occurs when the method in the action cannot be found in the contract code.
- A `WasmTrap` error happens when a trap occurs during the execution of the binary. Traps here include:

```rust
pub enum WasmTrap {
    /// An `unreachable` opcode was executed.
    Unreachable,
    /// Call indirect incorrect signature trap.
    IncorrectCallIndirectSignature,
    /// Memory out of bounds trap.
    MemoryOutOfBounds,
    /// Call indirect out of bounds trap.
    CallIndirectOOB,
    /// An arithmetic exception, e.g. divided by zero.
    IllegalArithmetic,
    /// Misaligned atomic access trap.
    MisalignedAtomicAccess,
    /// Breakpoint trap.
    BreakpointTrap,
    /// Stack overflow.
    StackOverflow,
    /// Generic trap.
    GenericTrap,
}
```
- `WasmUnknownError` occurs when something inside wasmer goes wrong.
- `HostError` includes errors that might be returned during the execution of a host function. Those errors are:

```rust
pub enum HostError {
    /// String encoding is bad UTF-16 sequence
    BadUTF16,
    /// String encoding is bad UTF-8 sequence
    BadUTF8,
    /// Exceeded the prepaid gas
    GasExceeded,
    /// Exceeded the maximum amount of gas allowed to burn per contract
    GasLimitExceeded,
    /// Exceeded the account balance
    BalanceExceeded,
    /// Tried to call an empty method name
    EmptyMethodName,
    /// Smart contract panicked
    GuestPanic { panic_msg: String },
    /// IntegerOverflow happened during a contract execution
    IntegerOverflow,
    /// `promise_idx` does not correspond to existing promises
    InvalidPromiseIndex { promise_idx: u64 },
    /// Actions can only be appended to non-joint promise.
    CannotAppendActionToJointPromise,
    /// Returning joint promise is currently prohibited
    CannotReturnJointPromise,
    /// Accessed invalid promise result index
    InvalidPromiseResultIndex { result_idx: u64 },
    /// Accessed invalid register id
    InvalidRegisterId { register_id: u64 },
    /// Iterator `iterator_index` was invalidated after its creation by performing a mutable operation on trie
    IteratorWasInvalidated { iterator_index: u64 },
    /// Accessed memory outside the bounds
    MemoryAccessViolation,
    /// VM Logic returned an invalid receipt index
    InvalidReceiptIndex { receipt_index: u64 },
    /// Iterator index `iterator_index` does not exist
    InvalidIteratorIndex { iterator_index: u64 },
    /// VM Logic returned an invalid account id
    InvalidAccountId,
    /// VM Logic returned an invalid method name
    InvalidMethodName,
    /// VM Logic provided an invalid public key
    InvalidPublicKey,
    /// `method_name` is not allowed in view calls
    ProhibitedInView { method_name: String },
    /// The total number of logs will exceed the limit.
    NumberOfLogsExceeded { limit: u64 },
    /// The storage key length exceeded the limit.
    KeyLengthExceeded { length: u64, limit: u64 },
    /// The storage value length exceeded the limit.
    ValueLengthExceeded { length: u64, limit: u64 },
    /// The total log length exceeded the limit.
    TotalLogLengthExceeded { length: u64, limit: u64 },
    /// The maximum number of promises within a FunctionCall exceeded the limit.
    NumberPromisesExceeded { number_of_promises: u64, limit: u64 },
    /// The maximum number of input data dependencies exceeded the limit.
    NumberInputDataDependenciesExceeded { number_of_input_data_dependencies: u64, limit: u64 },
    /// The returned value length exceeded the limit.
    ReturnedValueLengthExceeded { length: u64, limit: u64 },
    /// The contract size for DeployContract action exceeded the limit.
    ContractSizeExceeded { size: u64, limit: u64 },
    /// The host function was deprecated.
    Deprecated { method_name: String },
}
```
- `ExternalError` includes errors that occur during execution inside `External`, which is an interface between the runtime and the rest of the system. The possible errors are:

```rust
pub enum ExternalError {
    /// Unexpected error which is typically related to the node storage corruption.
    /// It's possible the input state is invalid or malicious.
    StorageError(StorageError),
    /// Error when accessing validator information. Happens inside epoch manager.
    ValidatorError(EpochError),
}
```
- `StorageError` occurs when state or storage is corrupted.
Transactions
A transaction in Near is a list of actions and additional information:
```rust
pub struct Transaction {
    /// An account on which behalf transaction is signed
    pub signer_id: AccountId,
    /// An access key which was used to sign a transaction
    pub public_key: PublicKey,
    /// Nonce is used to determine order of transaction in the pool.
    /// It increments for a combination of `signer_id` and `public_key`
    pub nonce: Nonce,
    /// Receiver account for this transaction
    pub receiver_id: AccountId,
    /// The hash of the block in the blockchain on top of which the given transaction is valid
    pub block_hash: CryptoHash,
    /// A list of actions to be applied
    pub actions: Vec<Action>,
}
```
Signed Transaction
`SignedTransaction` is what the node receives from a wallet through the JSON-RPC endpoint; it is then routed to the shard where the `receiver_id` account lives. The signature proves ownership of the corresponding `public_key` (which is an AccessKey for a particular account) as well as the authenticity of the transaction itself.

```rust
pub struct SignedTransaction {
    pub transaction: Transaction,
    /// A signature of a hash of the Borsh-serialized Transaction
    pub signature: Signature,
}
```

Take a look at some scenarios of how a transaction can be applied.
Batched Transaction
A `Transaction` can contain a list of actions. When there is more than one action in a transaction, we refer to such a
transaction as a batched transaction. When such a transaction is applied, it is equivalent to applying each of the actions
separately, except:
- After processing a `CreateAccount` action, the rest of the actions are applied on behalf of the account that was just created. This allows one to, in one transaction, create an account, deploy a contract to the account, and call some initialization function on the contract.
- A `DeleteAccount` action, if present, must be the last action in the transaction.
The number of actions in one transaction is limited by VMLimitConfig::max_actions_per_receipt
, the current value of which
is 100.
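As an illustration, the action list of a batched transaction that creates an account, deploys a contract, and calls an init method could be built as in the following python-like sketch (the dictionary encoding of actions and the `new` method name are illustrative assumptions, not the runtime's actual types):

```python
def create_deploy_and_init(code: bytes) -> list:
    """Build the action list for "create + deploy + init" in one transaction."""
    return [
        # Everything after CreateAccount runs on behalf of the new account.
        {"CreateAccount": {}},
        {"DeployContract": {"code": code}},
        {"FunctionCall": {
            "method_name": "new",      # hypothetical init method
            "args": b"{}",
            "gas": 30_000_000_000_000,
            "deposit": 0,
        }},
    ]
```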
Transaction Validation and Errors
When a transaction is received, various checks are performed to ensure its validity. This section lists the checks and the errors potentially returned when they fail.
Basic Validation
Basic validation of a transaction can be done without the state. Such validation includes
- Whether `signer_id` is valid. If not, a

```rust
/// TX signer_id is not in a valid format or not satisfy requirements see `near_core::primitives::utils::is_valid_account_id`
InvalidSignerId { signer_id: AccountId },
```

error is returned.
- Whether `receiver_id` is valid. If not, a

```rust
/// TX receiver_id is not in a valid format or not satisfy requirements see `near_core::primitives::utils::is_valid_account_id`
InvalidReceiverId { receiver_id: AccountId },
```

error is returned.
- Whether `signature` is signed by `public_key`. If not, a

```rust
/// TX signature is not valid
InvalidSignature
```

error is returned.
- Whether the number of actions included in the transaction is no greater than `max_actions_per_receipt`. If not, a

```rust
/// The number of actions exceeded the given limit.
TotalNumberOfActionsExceeded { total_number_of_actions: u64, limit: u64 }
```

error is returned.
- Among the actions in the transaction, whether `DeleteAccount`, if present, is the last action. If not, a

```rust
/// The delete action must be a final action in transaction
DeleteActionMustBeFinal
```

error is returned.
- Whether the total prepaid gas does not exceed `max_total_prepaid_gas`. If not, a

```rust
/// The total prepaid gas (for all given actions) exceeded the limit.
TotalPrepaidGasExceeded { total_prepaid_gas: Gas, limit: Gas }
```

error is returned.
- Whether each action included is valid. Details of such check can be found in action.
Validation With State
After the basic validation is done, we check the transaction against the current state to perform further validation. This includes:
- Whether `signer_id` exists. If not, a

```rust
/// TX signer_id is not found in a storage
SignerDoesNotExist { signer_id: AccountId },
```

error is returned.
- Whether the transaction nonce is greater than the existing nonce on the access key. If not, a

```rust
/// Transaction nonce must be account[access_key].nonce + 1
InvalidNonce { tx_nonce: Nonce, ak_nonce: Nonce },
```

error is returned.
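The nonce rule amounts to a strict comparison against the nonce stored on the access key, which can be sketched as:

```python
def is_valid_nonce(tx_nonce: int, ak_nonce: int) -> bool:
    # The transaction nonce must be strictly greater than the nonce
    # currently stored on the access key.
    return tx_nonce > ak_nonce
```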
- If the `signer_id` account has enough balance to cover the cost of the transaction. If not, a

```rust
/// Account does not have enough balance to cover TX cost
NotEnoughBalance {
    signer_id: AccountId,
    balance: Balance,
    cost: Balance,
}
```

error is returned.
- If the transaction is signed by a function call access key and the access key does not have enough allowance to cover the cost of the transaction, a

```rust
/// Access Key does not have enough allowance to cover transaction cost
NotEnoughAllowance {
    account_id: AccountId,
    public_key: PublicKey,
    allowance: Balance,
    cost: Balance,
}
```

error is returned.
- If the `signer_id` account does not have enough balance to cover its storage after paying for the cost of the transaction, a

```rust
/// Signer account doesn't have enough balance after transaction.
LackBalanceForState {
    /// An account which doesn't have enough balance to cover storage.
    signer_id: AccountId,
    /// Required balance to cover the state.
    amount: Balance,
}
```

error is returned.
- If a transaction is signed by a function call access key, the following errors are possible:
  - `InvalidAccessKeyError::RequiresFullAccess` if the transaction contains more than one action or if the only action it contains is not a `FunctionCall` action.
  - `InvalidAccessKeyError::DepositWithFunctionCall` if the function call action has a nonzero `deposit`.

```rust
/// Transaction `receiver_id` doesn't match the access key receiver_id
InvalidAccessKeyError::ReceiverMismatch { tx_receiver: AccountId, ak_receiver: AccountId },
```

is returned when the transaction's `receiver_id` does not match the `receiver_id` of the access key.

```rust
/// Transaction method name isn't allowed by the access key
InvalidAccessKeyError::MethodNameMismatch { method_name: String },
```

is returned if the name of the method that the transaction tries to call is not allowed by the access key.
Actions
There are several action types in Near:

```rust
pub enum Action {
    CreateAccount(CreateAccountAction),
    DeployContract(DeployContractAction),
    FunctionCall(FunctionCallAction),
    Transfer(TransferAction),
    Stake(StakeAction),
    AddKey(AddKeyAction),
    DeleteKey(DeleteKeyAction),
    DeleteAccount(DeleteAccountAction),
}
```
Each transaction consists of a list of actions to be performed on the `receiver_id`
side. Since transactions are first
converted to receipts when they are processed, we will mostly concern ourselves with actions in the context of receipt
processing.
For the following actions, `predecessor_id` and `receiver_id` are required to be equal:
- `DeployContract`
- `Stake`
- `AddKey`
- `DeleteKey`
- `DeleteAccount`

NOTE: if the first action in the action list is `CreateAccount`, `predecessor_id` becomes `receiver_id` for the rest of the actions until `DeleteAccount`. This is what allows another account to act on the newly created account.
CreateAccountAction
```rust
pub struct CreateAccountAction {}
```
If `receiver_id` has length == 64, this account id is considered to be `hex(public_key)`, meaning the creation of the account only succeeds if it is followed up with an `AddKey(public_key)` action.
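The hex derivation can be sketched as follows, assuming the implicit account ID is the lowercase hex encoding of the 32 raw bytes of the public key:

```python
def implicit_account_id(public_key: bytes) -> str:
    """Derive the 64-character implicit account ID from 32 raw key bytes."""
    assert len(public_key) == 32
    return public_key.hex()  # lowercase hex, 2 characters per byte
```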
Outcome:
- creates an account with `id` = `receiver_id`
- sets the Account `storage_usage` to `account_cost` (genesis config)
Errors
Execution Error:
- If the action tries to create a top-level account whose length is no greater than 32 characters, and `predecessor_id` is not `registrar_account_id`, which is defined by the protocol, the following error will be returned:

```rust
/// A top-level account ID can only be created by registrar.
CreateAccountOnlyByRegistrar {
    account_id: AccountId,
    registrar_account_id: AccountId,
    predecessor_id: AccountId,
}
```
- If the action tries to create an account that is neither a top-level account nor a subaccount of `predecessor_id`, the following error will be returned:

```rust
/// A newly created account must be under a namespace of the creator account
CreateAccountNotAllowed { account_id: AccountId, predecessor_id: AccountId },
```
DeployContractAction
```rust
pub struct DeployContractAction {
    pub code: Vec<u8>,
}
```
Outcome:
- sets the contract code for account
Errors
Validation Error:
- If the length of `code` exceeds `max_contract_size`, which is a genesis parameter, the following error will be returned:

```rust
/// The size of the contract code exceeded the limit in a DeployContract action.
ContractSizeExceeded { size: u64, limit: u64 },
```
Execution Error:
- If state or storage is corrupted, it may return a `StorageError`.
FunctionCallAction
```rust
pub struct FunctionCallAction {
    /// Name of exported Wasm function
    pub method_name: String,
    /// Serialized arguments
    pub args: Vec<u8>,
    /// Prepaid gas (gas_limit) for a function call
    pub gas: Gas,
    /// Amount of tokens to transfer to a receiver_id
    pub deposit: Balance,
}
```
Calls a method of a particular contract. See details.
TransferAction
```rust
pub struct TransferAction {
    /// Amount of tokens to transfer to a receiver_id
    pub deposit: Balance,
}
```
Outcome:
- transfers the amount specified in `deposit` from `predecessor_id` to the `receiver_id` account
Errors
Execution Error:
- If the deposit amount plus the existing amount on the receiver account exceeds `u128::MAX`, a `StorageInconsistentState("Account balance integer overflow")` error will be returned.
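The overflow check amounts to a bounds check on 128-bit addition, which can be sketched as:

```python
U128_MAX = 2 ** 128 - 1


def apply_deposit(receiver_balance: int, deposit: int) -> int:
    """Add a transfer deposit to the receiver balance, rejecting u128 overflow."""
    new_balance = receiver_balance + deposit
    if new_balance > U128_MAX:
        raise OverflowError("Account balance integer overflow")
    return new_balance
```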
StakeAction
```rust
pub struct StakeAction {
    // Amount of tokens to stake
    pub stake: Balance,
    // This public key is a public key of the validator node
    pub public_key: PublicKey,
}
```
Outcome:
- A validator proposal that contains the staking public key and the staking amount is generated and will be included in the next block.
Errors
Validation Error:
- If the `public_key` is not a ristretto-compatible ed25519 key, the following error will be returned:

```rust
/// An attempt to stake with a public key that is not convertible to ristretto.
UnsuitableStakingKey { public_key: PublicKey },
```
Execution Error:
- If an account has not staked but tries to unstake, the following error will be returned:

```rust
/// Account is not yet staked, but tries to unstake
TriesToUnstake { account_id: AccountId },
```

- If an account tries to stake more than the amount of tokens it has, the following error will be returned:

```rust
/// The account doesn't have enough balance to increase the stake.
TriesToStake {
    account_id: AccountId,
    stake: Balance,
    locked: Balance,
    balance: Balance,
}
```

- If the staked amount is below the minimum stake threshold, the following error will be returned:

```rust
InsufficientStake {
    account_id: AccountId,
    stake: Balance,
    minimum_stake: Balance,
}
```
The minimum stake is determined by last_epoch_seat_price / minimum_stake_divisor
where last_epoch_seat_price
is the
seat price determined at the end of last epoch and minimum_stake_divisor
is a genesis config parameter and its current
value is 10.
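For example, with the current divisor of 10, a last-epoch seat price of 1,000,000 yields a minimum stake of 100,000:

```python
def minimum_stake(last_epoch_seat_price: int, minimum_stake_divisor: int = 10) -> int:
    # minimum_stake_divisor is a genesis config parameter (currently 10).
    return last_epoch_seat_price // minimum_stake_divisor
```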
AddKeyAction
```rust
pub struct AddKeyAction {
    pub public_key: PublicKey,
    pub access_key: AccessKey,
}
```
Outcome:
- Adds a new AccessKey to the receiver's account and associates it with the provided `public_key`.
Errors:
Validation Error:
If the access key is of type `FunctionCallPermission`, the following errors can happen:
- If `receiver_id` in `access_key` is not a valid account id, the following error will be returned:

```rust
/// Invalid account ID.
InvalidAccountId { account_id: AccountId },
```
- If the length of some method name exceeds `max_length_method_name`, which is a genesis parameter (current value is 256), the following error will be returned:

```rust
/// The length of some method name exceeded the limit in a Add Key action.
AddKeyMethodNameLengthExceeded { length: u64, limit: u64 },
```
- If the sum of the lengths of the method names (with 1 extra character for every method name) exceeds `max_number_bytes_method_names`, which is a genesis parameter (current value is 2000), the following error will be returned:

```rust
/// The total number of bytes of the method names exceeded the limit in a Add Key action.
AddKeyMethodNamesNumberOfBytesExceeded { total_number_of_bytes: u64, limit: u64 }
```
Execution Error:
- If an account tries to add an access key with a given public key, but an existing access key with this public key already exists, the following error will be returned:

```rust
/// The public key is already used for an existing access key
AddKeyAlreadyExists { account_id: AccountId, public_key: PublicKey }
```
- If state or storage is corrupted, a `StorageError` will be returned.
DeleteKeyAction
```rust
pub struct DeleteKeyAction {
    pub public_key: PublicKey,
}
```
Outcome:
- Deletes the AccessKey associated with `public_key`.
Errors
Execution Error:
- When an account tries to delete an access key that doesn't exist, the following error is returned:

```rust
/// Account tries to remove an access key that doesn't exist
DeleteKeyDoesNotExist { account_id: AccountId, public_key: PublicKey }
```
- A `StorageError` is returned if state or storage is corrupted.
DeleteAccountAction
```rust
pub struct DeleteAccountAction {
    /// The remaining account balance will be transferred to the AccountId below
    pub beneficiary_id: AccountId,
}
```
Outcomes:
- The account, as well as all the data stored under the account, is deleted, and the tokens are transferred to `beneficiary_id`.
Errors
Validation Error
- If `beneficiary_id` is not a valid account id, the following error will be returned:

```rust
/// Invalid account ID.
InvalidAccountId { account_id: AccountId },
```
- If this action is not the last action in the action list of a receipt, the following error will be returned:

```rust
/// The delete action must be a final action in transaction
DeleteActionMustBeFinal
```
- If the account still has locked balance due to staking, the following error will be returned:

```rust
/// Account is staking and can not be deleted
DeleteAccountStaking { account_id: AccountId }
```
Execution Error:
- If state or storage is corrupted, a `StorageError` is returned.
Receipt
All cross-contract communication in Near happens through Receipts (we assume that each account lives in its own shard). Receipts are stateful in the sense that they serve not only as messages between accounts but can also be stored in the account storage to await DataReceipts.
Each receipt has a `predecessor_id` (who sent it) and a `receiver_id` (the current account).
Receipts are one of 2 types: action receipts or data receipts.
Data Receipts are receipts that contain some data for some `ActionReceipt` with the same `receiver_id`.
Data Receipts have 2 fields: the unique data identifier `data_id` and `data`, the received result.
`data` is an `Option` field and it indicates whether the result was a success or a failure. If it's `Some`, the remote execution was successful and it contains the vector of bytes of the result.
Each `ActionReceipt` also contains fields related to data:
- `input_data_ids` - a vector of input data with the `data_id`s required for the execution of this receipt
- `output_data_receivers` - a vector of output data receivers. It indicates where to send outgoing data. Each `DataReceiver` consists of `data_id` and `receiver_id` for routing
Before any action receipt is executed, all input data dependencies need to be satisfied, which means all corresponding data receipts have to be received. If any of the data dependencies are missing, the action receipt is postponed until all missing data dependencies arrive.
Because the Chain and the Runtime guarantee that no receipts are missing, we can rely on every action receipt being executed eventually (see the Receipt Matching explanation).
Each `Receipt` has the following fields:

predecessor_id
- type: `AccountId`

The account_id which issued the receipt. In case of a gas or deposit refund, the account ID is `system`.

receiver_id
- type: `AccountId`

The destination account_id.

receipt_id
- type: `CryptoHash`

A unique id for the receipt.

receipt
- type: `ActionReceipt | DataReceipt`
There are 2 types of Receipts: ActionReceipt and DataReceipt. An ActionReceipt is a request to apply actions, while a DataReceipt is the result of applying those actions.
ActionReceipt
`ActionReceipt` represents a request to apply actions on the `receiver_id` side. It can be derived as a result of a `Transaction` execution or of another `ActionReceipt` being processed. An `ActionReceipt` consists of the following fields:

signer_id
- type: `AccountId`

An account_id which signed the original transaction. In case of a deposit refund, the account ID is `system`.

signer_public_key
- type: `PublicKey`

The public key of an AccessKey which was used to sign the original transaction. In case of a deposit refund, the public key is empty (all bytes are 0).

gas_price
- type: `u128`

The gas price which was set in the block where the original transaction has been applied.

output_data_receivers
- type: `[DataReceiver { data_id: CryptoHash, receiver_id: AccountId }]`

If the smart contract finishes its execution with some value (not a Promise), the runtime creates a `DataReceipt` for each of the `output_data_receivers`.

input_data_ids
- type: `[CryptoHash]`

`input_data_ids` are the receipt data dependencies. `input_data_ids` correspond to `DataReceipt.data_id`.

actions
- type: `FunctionCall | TransferAction | StakeAction | AddKeyAction | DeleteKeyAction | CreateAccountAction | DeleteAccountAction`
DataReceipt
A DataReceipt represents the final result of some contract execution.

data_id
- type: `CryptoHash`

A unique DataReceipt identifier.

data
- type: `Option([u8])`

The associated data in bytes. `None` indicates an error during execution.
Creating Receipt
Receipts can be generated during the execution of a SignedTransaction (see example) or during the application of some `ActionReceipt` which contains a `FunctionCall` action. The result of the `FunctionCall` could be either a Value Result or a ReceiptIndex Result (see above).
Receipt Matching
The runtime doesn't expect Receipts to arrive in any particular order. Each Receipt is processed individually. The goal of the Receipt Matching process is to match all `ActionReceipt`s to the corresponding `DataReceipt`s.
Processing ActionReceipt
For each incoming `ActionReceipt`, the runtime checks whether we have all the `DataReceipt`s (defined as `ActionReceipt.input_data_ids`) required for execution. If all the required `DataReceipt`s are already in the storage, the runtime can apply this `ActionReceipt` immediately. Otherwise we save this receipt as a Postponed ActionReceipt. We also save the Pending DataReceipts Count and a link from each pending `DataReceipt` to the Postponed `ActionReceipt`. Now the runtime waits for all the missing `DataReceipt`s before applying the Postponed `ActionReceipt`.
Postponed ActionReceipt
A Receipt which the runtime stores until all the designated `DataReceipt`s arrive.
- key = `account_id`, `receipt_id`
- value = `[u8]`

Where `account_id` is `Receipt.receiver_id`, `receipt_id` is `Receipt.receipt_id`, and the value is the serialized `Receipt` (whose type must be ActionReceipt).
Pending DataReceipt Count
A counter which counts pending `DataReceipt`s for a Postponed Receipt. It is initially set to the length of missing `input_data_ids` of the incoming `ActionReceipt`, and is decremented with every new `DataReceipt` received:
- key = `account_id`, `receipt_id`
- value = `u32`

Where `account_id` is an AccountId, `receipt_id` is a CryptoHash, and the value is an integer.
Pending DataReceipt for Postponed ActionReceipt
We index each pending `DataReceipt` so that when a new `DataReceipt` arrives we can find which Postponed Receipt it belongs to:
- key = `account_id`, `data_id`
- value = `receipt_id`
Processing DataReceipt
Received DataReceipt
First of all, the runtime saves the incoming `DataReceipt` to the storage as:
- key = `account_id`, `data_id`
- value = `[u8]`

Where `account_id` is `Receipt.receiver_id`, `data_id` is `DataReceipt.data_id`, and the value is `DataReceipt.data` (which is typically a serialized result of the call to a particular contract).
Next, the runtime checks if any Postponed `ActionReceipt` awaits this `DataReceipt` by querying the Pending DataReceipt to Postponed Receipt index. If there is no postponed `receipt_id` yet, we do nothing else. If there is a postponed `receipt_id`, we do the following:
- decrement the `Pending Data Count` for the postponed `receipt_id`
- remove the found link from the `Pending DataReceipt` to the `Postponed ActionReceipt`

If the `Pending DataReceipt Count` is now 0, it means all the `Receipt.input_data_ids` are in storage and the runtime can safely apply the Postponed Receipt and remove it from the store.
Case 1: Call to multiple contracts and await responses
Suppose the runtime got the following `ActionReceipt`:
# Non-relevant fields are omitted.
Receipt{
receiver_id: "alice",
receipt_id: "693406"
receipt: ActionReceipt {
input_data_ids: []
}
}
If execution returns Result::Value
Suppose the runtime got the following `ActionReceipt` (we use python-like pseudocode):
# Non-relevant fields are omitted.
Receipt{
receiver_id: "alice",
receipt_id: "5e73d4"
receipt: ActionReceipt {
input_data_ids: ["e5fa44", "7448d8"]
}
}
We can't apply this receipt right away: there are missing DataReceipts with IDs ["e5fa44", "7448d8"]. The runtime does the following:
postponed_receipts["alice,5e73d4"] = borsh_serialize(
Receipt{
receiver_id: "alice",
receipt_id: "5e73d4"
receipt: ActionReceipt {
input_data_ids: ["e5fa44", "7448d8"]
}
}
)
pending_data_receipt_store["alice,e5fa44"] = "5e73d4"
pending_data_receipt_store["alice,7448d8"] = "5e73d4"
pending_data_receipt_count["alice,5e73d4"] = 2
Note: the subsequent Receipts could arrive in the current block or the next one; that's why we save the Postponed ActionReceipt in the storage.
Then the first Pending DataReceipt arrives:
# Non-relevant fields are omitted.
Receipt {
receiver_id: "alice",
receipt: DataReceipt {
data_id: "e5fa44",
data: "some data for alice",
}
}
data_receipts["alice,e5fa44"] = borsh_serialize(Receipt{
    receiver_id: "alice",
    receipt: DataReceipt {
        data_id: "e5fa44",
        data: "some data for alice",
    }
})
pending_data_receipt_count["alice,5e73d4"] = 1
del pending_data_receipt_store["alice,e5fa44"]
And finally the last Pending DataReceipt
arrives:
# Non-relevant fields are omitted.
Receipt{
receiver_id: "alice",
receipt: DataReceipt {
data_id: "7448d8",
data: "some more data for alice",
}
}
data_receipts["alice,7448d8"] = borsh_serialize(Receipt{
    receiver_id: "alice",
    receipt: DataReceipt {
        data_id: "7448d8",
        data: "some more data for alice",
    }
})
postponed_receipt_id = pending_data_receipt_store["alice,7448d8"]
postponed_receipt = postponed_receipts["alice," + postponed_receipt_id]
del postponed_receipts["alice," + postponed_receipt_id]
del pending_data_receipt_count["alice,5e73d4"]
del pending_data_receipt_store["alice,7448d8"]
apply_receipt(postponed_receipt)
Receipt Validation Error
Some postprocessing validation is done after an action receipt is applied. The validation includes:
- Whether the generated receipts are valid. A generated receipt can be invalid if, for example, a function call generates a receipt to call another function on some other contract, but the contract name is invalid. Here there are mainly two types of errors:
  - The account id is invalid. If the receiver id of the receipt is invalid, a

```rust
/// The `receiver_id` of a Receipt is not valid.
InvalidReceiverId { account_id: AccountId },
```

  error is returned.
  - Some action is invalid. The errors returned here are the same as the validation errors mentioned in actions.
- Whether the account still has enough balance to pay for storage. If, for example, the execution of one function call action leads to receipts that require a transfer to be generated as a result, the account may no longer have enough balance after the transferred amount is deducted. In this case, a

```rust
/// ActionReceipt can't be completed, because the remaining balance will not be enough to cover storage.
LackBalanceForState {
    /// An account which needs balance
    account_id: AccountId,
    /// Balance required to complete an action.
    amount: Balance,
},
```

error is returned.
Refunds
When the execution of a receipt fails, or some amount of prepaid gas is left unused after a function call, the Runtime generates refund receipts.
There are 2 types of refunds:
- Refunds for the failed receipt for attached deposits. Let's call them deposit refunds.
- Refunds for the unused gas and fees. Let's call them gas refunds.
Refunds are identified by having `predecessor_id == "system"`. They don't cost any fees to generate and don't produce burnt gas.
If the execution of a refund fails, the refund amount is burnt.
The refund receipt is an ActionReceipt
that consists of a single action Transfer
with the deposit
amount of the refund.
Deposit Refunds
Deposit refunds are generated when an action receipt fails to execute. All attached deposit amounts are summed together and sent as a refund to the `predecessor_id`, because only the predecessor can attach deposits.
Deposit refunds have the following fields in the `ActionReceipt`:
- `signer_id` is `system`
- `signer_public_key` is an ED25519 key with data equal to 32 bytes of `0`
Gas Refunds
Gas refunds are generated when a receipt uses less gas than the attached amount.
If the receipt execution succeeded, the refunded gas amount is equal to `prepaid_gas + execution_gas - used_gas`.
If the receipt execution failed, the refunded gas amount is equal to `prepaid_gas + execution_gas - burnt_gas`.
The difference between `burnt_gas` and `used_gas` is that `used_gas` also includes the fees and the prepaid gas of newly generated receipts, e.g. from cross-contract calls in function call actions.
The gas amount is then converted to tokens by multiplying it by the gas price at which the original transaction was generated.
Gas refunds have the following fields in the `ActionReceipt`:
- `signer_id` is the actual `signer_id` from the receipt that generates this refund.
- `signer_public_key` is the `signer_public_key` from the receipt that generates this refund.
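Put together, the refund computation can be sketched as follows (illustrative pseudocode in the style of the runtime sketches above, not the actual implementation; the function name and parameters are assumptions):

```python
def gas_refund_amount(prepaid_gas, execution_gas, burnt_gas, used_gas,
                      gas_price, success):
    """Token amount refunded for unused gas, per the formulas above."""
    if success:
        refund_gas = prepaid_gas + execution_gas - used_gas
    else:
        refund_gas = prepaid_gas + execution_gas - burnt_gas
    # The gas amount is converted to tokens at the gas price of the
    # original transaction.
    return refund_gas * gas_price
```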
Access Key Allowance refunds
When an account uses a restricted access key with `FunctionCallPermission`, it may have a limited allowance.
The allowance was charged for the full amount of the receipt fees, including the full prepaid gas.
To refund the allowance, we distinguish between deposit refunds and gas refunds using the `signer_id` in the action receipt.
If `signer_id == receiver_id && predecessor_id == "system"`, it's a gas refund and the runtime should try to refund the allowance.
Note that it's not always possible to refund the allowance, because the access key can be deleted between the moment the transaction was issued and the moment the gas refund arrives. In this case we refund the allowance on a best-effort basis, which means:
- the access key on the `signer_id` account with the public key `signer_public_key` should exist
- the access key permission should be `FunctionCallPermission`
- the allowance should be set to `Some` limited value, instead of an unlimited allowance (`None`)
- the runtime uses a saturating add to increase the allowance, to avoid overflows
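The best-effort rules above can be modeled in a short sketch (an illustrative model; the access-key shape and names are assumptions, not the actual runtime types):

```python
MAX_ALLOWANCE = 2**128 - 1  # u128::MAX, the saturation bound

def refund_allowance(access_key, refund_amount):
    """Best-effort allowance refund following the rules above."""
    if access_key is None:
        return  # the key was deleted before the refund arrived
    if access_key.get("permission") != "FunctionCallPermission":
        return
    allowance = access_key.get("allowance")
    if allowance is None:
        return  # unlimited allowance: nothing to refund
    # Saturating add, to avoid overflow.
    access_key["allowance"] = min(allowance + refund_amount, MAX_ALLOWANCE)
```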
Runtime Fees
Runtime fees are measured in Gas. Gas price will be discussed separately.
When a transaction is converted into a receipt, the signer account is charged for the full cost of the transaction. This cost consists of extra attached gas, attached deposits and the transaction fee.
The total transaction fee is the sum of the following:
- A fee for creation of the receipt
- A fee for every action
Every `Fee` consists of 3 values measured in gas:
- `send_sir` and `send_not_sir` - the gas burned when the action is being created to be sent to a receiver. `send_sir` is used when `current_account_id == receiver_id` (`current_account_id` is the `signer_id` for a signed transaction); `send_not_sir` is used when `current_account_id != receiver_id`.
- `execution` - the gas burned when the action is being executed on the receiver's account.
Burning gas is different from charging gas:
- Burnt gas is not refunded.
- Charged gas can potentially be refunded in case the execution stopped earlier and the remaining actions are not going to be executed. So the charged gas for the remaining actions can be refunded.
Receipt creation cost
There are 2 types of receipts:
- Action receipts ActionReceipt
- Data receipts DataReceipt
A transaction is converted into an ActionReceipt. Data receipts are used for data dependencies and will be discussed separately.
The `Fee` for action receipt creation is described in the config `action_receipt_creation_config`.
Example: when a signed transaction is being converted into a receipt, the gas for `action_receipt_creation_config.send` is burned immediately, while the gas for `action_receipt_creation_config.execution` is only charged, but not burned. It will be burned when the newly created receipt is executed on the receiver's account.
Fees for actions
Every `Action` has a corresponding `Fee` (or fees) described in the config `action_creation_config`.
Similar to the receipt creation cost, the `send` gas is burned when an action is added to a receipt to be sent, and the `execution` gas is only charged, but not burned.
Fees are either a base fee or a fee per byte of some data within the action.
Here is the list of actions and their corresponding fees:
- CreateAccount uses the base fee `create_account_cost`.
- DeployContract uses the sum of the following fees:
  - the base fee `deploy_contract_cost`
  - the fee per byte of the contract code to be deployed, `deploy_contract_cost_per_byte`. To compute the number of bytes for a deploy contract action `deploy_contract_action`, use `deploy_contract_action.code.len()`.
- FunctionCall uses the sum of the following fees:
  - the base fee `function_call_cost`
  - the fee per byte of the method name string and per byte of the arguments, `function_call_cost_per_byte`. To compute the number of bytes for a function call action `function_call_action`, use `function_call_action.method_name.as_bytes().len() + function_call_action.args.len()`.
- Transfer uses one of the following fees:
  - if the `receiver_id` is an Implicit Account ID, then a sum of base fees is used:
    - the create account base fee `create_account_cost`
    - the transfer base fee `transfer_cost`
    - the add full access key base fee `add_key_cost.full_access_cost`
  - if the `receiver_id` is NOT an Implicit Account ID, then only the transfer base fee `transfer_cost` is used.
- Stake uses the base fee `stake_cost`.
- AddKey uses one of the following fees:
  - if the access key is `AccessKeyPermission::FullAccess`, the base fee `add_key_cost.full_access_cost` is used.
  - if the access key is `AccessKeyPermission::FunctionCall`, the sum of the following fees is used:
    - the add function call permission access key base fee `add_key_cost.function_call_cost`
    - the fee per byte of method names, with an extra byte for every method, `add_key_cost.function_call_cost_per_byte`. To compute the number of bytes for `function_call_permission`, use `function_call_permission.method_names.iter().map(|name| name.as_bytes().len() as u64 + 1).sum::<u64>()`.
- DeleteKey uses the base fee `delete_key_cost`.
- DeleteAccount uses the base fee `delete_account_cost`.
Example
Let's say we have the following transaction:
Transaction {
    signer_id: "alice.near",
    public_key: "2onVGYTFwyaGetWckywk92ngBiZeNpBeEjuzSznEdhRE",
    nonce: 23,
    receiver_id: "lockup.alice.near",
    block_hash: "3CwEMonK6MmKgjKePiFYgydbAvxhhqCPHKuDMnUcGGTK",
    actions: [
        Action::CreateAccount(CreateAccountAction {}),
        Action::Transfer(TransferAction {
            deposit: 100000000000000000000000000,
        }),
        Action::DeployContract(DeployContractAction {
            code: vec![/*<...128000 bytes...>*/],
        }),
        Action::FunctionCall(FunctionCallAction {
            method_name: "new",
            args: b"{\"owner_id\": \"alice.near\"}".to_vec(),
            gas: 25000000000000,
            deposit: 0,
        }),
    ],
}
It has `signer_id != receiver_id`, so it will use `send_not_sir` for send fees.
It contains 4 actions, 2 of which require computing a number of bytes.
We assume `code` in `DeployContractAction` contains `128000` bytes, and `FunctionCallAction` has a `method_name` of length `3` and `args` of length `26`, for a total of `29`.
First, let's compute the amount that will be burned immediately for sending the receipt.
burnt_gas = \
config.action_receipt_creation_config.send_not_sir + \
config.action_creation_config.create_account_cost.send_not_sir + \
config.action_creation_config.transfer_cost.send_not_sir + \
config.action_creation_config.deploy_contract_cost.send_not_sir + \
128000 * config.action_creation_config.deploy_contract_cost_per_byte.send_not_sir + \
config.action_creation_config.function_call_cost.send_not_sir + \
29 * config.action_creation_config.function_call_cost_per_byte.send_not_sir
Now, using `burnt_gas`, we can calculate the total transaction fee:
total_transaction_fee = burnt_gas + \
config.action_receipt_creation_config.execution + \
config.action_creation_config.create_account_cost.execution + \
config.action_creation_config.transfer_cost.execution + \
config.action_creation_config.deploy_contract_cost.execution + \
128000 * config.action_creation_config.deploy_contract_cost_per_byte.execution + \
config.action_creation_config.function_call_cost.execution + \
29 * config.action_creation_config.function_call_cost_per_byte.execution
This `total_transaction_fee` is the amount of gas required to create a new receipt from the transaction.
NOTE: There are extra amounts required to prepay for the deposit in `TransferAction` and for the gas in `FunctionCallAction`, but these are not part of the total transaction fee.
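The two sums above can be checked with a small script. The fee numbers below are made up purely for illustration; the real values live in the runtime config:

```python
def fee(send_not_sir, execution):
    return {"send_not_sir": send_not_sir, "execution": execution}

# Hypothetical config values, NOT the real protocol fees.
config = {
    "action_receipt_creation_config": fee(100, 100),
    "create_account_cost": fee(50, 50),
    "transfer_cost": fee(20, 20),
    "deploy_contract_cost": fee(300, 300),
    "deploy_contract_cost_per_byte": fee(1, 1),
    "function_call_cost": fee(200, 200),
    "function_call_cost_per_byte": fee(1, 1),
}

BASE_FEES = ["action_receipt_creation_config", "create_account_cost",
             "transfer_cost", "deploy_contract_cost", "function_call_cost"]

def transaction_fees(config, code_len, call_bytes):
    """Returns (burnt_gas, total_transaction_fee) for the example transaction."""
    def total(kind):
        return (sum(config[k][kind] for k in BASE_FEES)
                + code_len * config["deploy_contract_cost_per_byte"][kind]
                + call_bytes * config["function_call_cost_per_byte"][kind])
    burnt_gas = total("send_not_sir")
    return burnt_gas, burnt_gas + total("execution")
```

With `code_len = 128000` and `call_bytes = 29` as in the example, `transaction_fees` reproduces both sums.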
Scenarios
In the following sections we go over the common scenarios that the runtime takes care of.
Financial Transaction
Suppose Alice wants to transfer 100 tokens to Bob. In this case we are talking about native Near Protocol tokens, as opposed to user-defined tokens implemented through a smart contract. There are several ways this can be done:
- Direct transfer through a transaction containing transfer action;
- Alice calling a smart contract that in turn creates a financial transaction towards Bob.
In this section we are talking about the former simpler scenario.
Pre-requisites
For this to work, both Alice and Bob need to have accounts and access to them through full access keys.
Suppose Alice has account `alice_near` and Bob has account `bob_near`. Also, some time in the past, each of them has created a public-secret key pair, saved the secret key somewhere (e.g. in a wallet application), and created a full access key with the public key for the account.
We also need to assume that both Alice and Bob have some number of tokens on their accounts. Alice needs >100 tokens on the account so that she can transfer 100 tokens to Bob, but Alice and Bob also need some tokens to pay for the rent of their accounts -- which is essentially the cost of the storage occupied by the account in the Near Protocol network.
Creating a transaction
To send the transaction neither Alice nor Bob need to run a node. However, Alice needs a way to create and sign a transaction structure. Suppose Alice uses near-shell or any other third-party tool for that. The tool then creates the following structure:
Transaction {
signer_id: "alice_near",
public_key: "ed25519:32zVgoqtuyRuDvSMZjWQ774kK36UTwuGRZMmPsS6xpMy",
nonce: 57,
receiver_id: "bob_near",
block_hash: "CjNSmWXTWhC3EhRVtqLhRmWMTkRbU96wUACqxMtV1uGf",
actions: vec![
Action::Transfer(TransferAction {deposit: 100} )
],
}
It contains one token transfer action, the id of the account that signs this transaction (`alice_near`), and the account towards which this transaction is addressed (`bob_near`). Alice also uses the public key associated with one of the full access keys of the `alice_near` account.
Additionally, Alice uses a nonce, a unique value that allows Near Protocol to differentiate transactions (in case there are several transfers coming in rapid succession), which should be strictly increasing with each transaction. Unlike in Ethereum, nonces are associated with access keys, as opposed to entire accounts, so several users using the same account through different access keys need not worry about accidentally reusing each other's nonces.
The block hash is used to calculate the transaction "freshness". It is used to make sure the transaction does not get lost (say, somewhere in the network) and then arrive hours, days, or years later, when it is no longer relevant or would be undesirable to execute. The transaction does not need to arrive at a specific block; instead it is required to arrive within a certain number of blocks from the block identified by the `block_hash` (as of 2019-10-27 the constant is 10 blocks). Any transaction arriving outside this threshold is considered invalid.
near-shell or another tool that Alice uses then signs this transaction by computing the hash of the transaction and signing it with the secret key, resulting in a `SignedTransaction` object.
Sending the transaction
To send the transaction, near-shell connects through the RPC to any Near Protocol node and submits it.
If the user wants to wait until the transaction is processed, they can use the `send_tx_commit` JSONRPC method, which waits for the transaction to appear in a block. Otherwise the user can use `send_tx_async`.
Transaction to receipt
We skip the details on how the transaction arrives to be processed by the runtime, since it is a part of the blockchain layer
discussion.
We consider the moment where `SignedTransaction` is getting passed to `Runtime::apply` of the `runtime` crate.
`Runtime::apply` immediately passes the transaction to `Runtime::process_transaction`, which in turn does the following:
- Verifies that the transaction is valid;
- Applies initial reversible and irreversible charges to the `alice_near` account;
- Creates a receipt with the same set of actions directed towards `bob_near`.
The first two items are performed inside the `Runtime::verify_and_charge_transaction` method. Specifically, it does the following checks:
- Verifies that `alice_near` and `bob_near` are syntactically valid account ids;
- Verifies that the signature of the transaction is correct based on the transaction hash and the attached public key;
- Retrieves the latest state of the `alice_near` account, and simultaneously checks that it exists;
- Retrieves the state of the access key that `alice_near` used to sign the transaction;
- Checks that the transaction nonce is greater than the nonce of the latest transaction executed with that access key;
- Checks whether the account that signed the transaction is the same as the account that receives it. In our case the sender (`alice_near`) and the receiver (`bob_near`) are not the same. We apply different fees if the receiver and sender are the same account;
alice_near
account; - Computes how much gas we need to spend to convert this transaction to a receipt;
- Computes how much balance we need to subtract from
alice_near
, in this case it is 100 tokens; - Deducts the tokens and the gas from
alice_near
balance, using the current gas price; - Checks whether after all these operations account has enough balance to passively pay for the rent for the next several blocks (an economical constant defined by Near Protocol). Otherwise account will be open for an immediate deletion, which we do not want;
- Updates the
alice_near
account with the new balance and the used access key with the new nonce; - Computes how much reward should be paid to the validators from the burnt gas.
If any of the above operations fail, all of the changes will be reverted.
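The nonce and balance checks from the list above can be condensed into a toy sketch (hypothetical types and names; the real `Runtime::verify_and_charge_transaction` does much more, including signature and account-id validation):

```python
from dataclasses import dataclass

@dataclass
class Account:
    balance: int

@dataclass
class AccessKey:
    nonce: int

def verify_and_charge(account, key, tx_nonce, tx_cost, rent_reserve):
    """Charge the signer account, enforcing the nonce and balance rules above."""
    if tx_nonce <= key.nonce:
        raise ValueError("InvalidNonce")  # nonces must strictly increase
    if account.balance < tx_cost + rent_reserve:
        # the account must keep enough to passively pay rent afterwards
        raise ValueError("NotEnoughBalance")
    account.balance -= tx_cost
    key.nonce = tx_nonce
```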
Processing receipt
The receipt created in the previous section will eventually arrive at the runtime on the shard that hosts the `bob_near` account.
Again, it will be processed by Runtime::apply
which will immediately call Runtime::process_receipt
.
It will check that this receipt does not have data dependencies (which is only the case of function calls) and will then call Runtime::apply_action_receipt
on TransferAction
.
Runtime::apply_action_receipt
will perform the following checks:
- Retrieves the state of
bob_near
account, if it still exists (it is possible that Bob has deleted his account concurrently with the transfer transaction); - Applies the rent to Bob's account;
- Computes the cost of processing a receipt and a transfer action;
- Checks if `bob_near` still exists, and if it does, deposits the transferred tokens;
- Computes how much reward should be paid to the validators from the burnt gas.
Cross-Contract Call
This guide assumes that you have read the Financial Transaction section.
Suppose Alice is calling a function `reserve_trip(city: String, date: u64)`
on a smart contract deployed to a travel_agency
account which in turn calls reserve(date: u64)
on a smart contract deployed to a hotel_near
account and attaches
a callback to method hotel_reservation_complete(date: u64)
on travel_agency
.
Pre-requisites
It is possible for Alice to call the `travel_agency` in several different ways.
In the simplest scenario Alice has an account alice_near
and she has a full access key.
She then composes the following transaction that calls the travel_agency
:
Transaction {
signer_id: "alice_near",
public_key: "ed25519:32zVgoqtuyRuDvSMZjWQ774kK36UTwuGRZMmPsS6xpMy",
nonce: 57,
receiver_id: "travel_agency",
block_hash: "CjNSmWXTWhC3EhRVtqLhRmWMTkRbU96wUACqxMtV1uGf",
actions: vec![
Action::FunctionCall(FunctionCallAction {
method_name: "reserve_trip",
args: "{\"city\": \"Venice\", \"date\": 20191201}",
gas: 1000000,
tokens: 100,
})
],
}
Here the public key corresponds to the full access key of alice_near
account. All other fields in Transaction
were
discussed in the Financial Transaction section. The FunctionCallAction
action describes how
the contract should be called. The receiver_id
field in Transaction
already establishes what contract should be executed,
FunctionCallAction
merely describes how it should be executed. Interestingly, the arguments are just a blob of bytes; it is up to the contract developer what serialization format they choose for their arguments. In this example, the contract developer has chosen to use JSON, and so the tool that Alice uses to compose this transaction is expected to use JSON too to pass the arguments. `gas` declares how much gas `alice_near` has prepaid for dynamically calculated fees of the smart contract executions and other actions that this transaction may spawn. The `tokens` is the amount `alice_near` attaches to be deposited to whatever smart contract it is calling. Notice, `gas` and `tokens` are in different units of measurement.
Now, consider a slightly more complex scenario. In this scenario Alice uses a restricted access key to call the function.
That is the permission of the access key is not AccessKeyPermission::FullAccess
but is instead: AccessKeyPermission::FunctionCall(FunctionCallPermission)
where
FunctionCallPermission {
allowance: Some(3000),
receiver_id: "travel_agency",
method_names: [ "reserve_trip", "cancel_trip" ]
}
This scenario might arise when someone, e.g. Alice's parent, has given Alice restricted access to the `alice_near` account by creating an access key that can be used strictly for trip management.
This access key allows up to 3000
tokens to be spent (which includes token transfers and payments for gas), it can
be only used to call travel_agency
and it can be only used with the reserve_trip
and cancel_trip
methods.
The way the runtime treats this case is almost exactly the same as the previous one, the only difference being how it verifies the signature on the signed transaction, and that it also checks that the allowance is not exceeded.
Finally, in the last scenario, Alice does not have an account (or the existence of `alice_near` is irrelevant). However, Alice has a full or restricted access key directly on the `travel_agency` account. In that case `signer_id == receiver_id` in the `Transaction` object, and the runtime will convert the transaction to the first receipt and apply that receipt in the same block.
This section will focus on the first scenario, since the other two are the same with some minor differences.
Transaction to receipt
The process of converting transaction to receipt is very similar to the Financial Transaction with several key points to note:
- Since Alice attaches 100 tokens to the function call, we subtract them from
alice_near
upon converting transaction to receipt, similar to the regular financial transaction; - Since we are attaching 1000000 prepaid gas, we will not only subtract the gas costs of processing the receipt from
alice_near
, but will also purchase 1000000 gas using the current gas price.
Processing the reserve_trip
receipt
The receipt created on the shard that hosts `alice_near` will eventually arrive at the shard hosting the `travel_agency` account.
It will be processed in Runtime::apply
which will check that receipt does not have data dependencies (which is the case because
this function call is not a callback) and will call Runtime::apply_action_receipt
.
At this point receipt processing is similar to receipt processing from the Financial Transaction
section, with one difference that we will also call action_function_call
which will do the following:
- Retrieve the Wasm code of the smart contract (either from the database or from the cache);
- Initialize the runtime context through `VMContext` and create `RuntimeExt`, which provides access to the trie when the smart contract calls the storage API. Specifically, the `"{\"city\": \"Venice\", \"date\": 20191201}"` arguments will be set in `VMContext`.
near_vm_runner::run
which does the following:- Inject gas, stack, and other kinds of metering;
- Verify that Wasm code does not use floats;
- Checks that bindings API functions that the smart contract is trying to call are actually those provided by
near_vm_logic
; - Compiles Wasm code into the native binary;
- Calls
reserve_trip
on the smart contract.- During the execution of the smart contract it will at some point call
promise_create
andpromise_then
, which will call method onRuntimeExt
that will record that two promises were created and that the second one should wait on the first one. Specifically,promise_create
will callRuntimeExt::create_receipt(vec![], "hotel_near")
returning0
and thenRuntimeExt::create_receipt(vec![0], "travel_agency")
;
- During the execution of the smart contract it will at some point call
- `action_function_call` then collects receipts from `VMContext` along with the execution result, logs, and information about used gas;
- `apply_action_receipt` then goes over the collected receipts from each action and returns them at the end of `Runtime::apply` together with other receipts.
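The promise bookkeeping described above can be modeled minimally (an illustrative toy model of the receipt recording done by `RuntimeExt`, not its real signature or types):

```python
class PromiseTracker:
    """Toy model of the receipt/dependency recording done by RuntimeExt."""

    def __init__(self):
        self.receipts = []  # list of (dependency_indices, receiver_id)

    def create_receipt(self, dependencies, receiver_id):
        """Record a receipt and return its promise index."""
        self.receipts.append((dependencies, receiver_id))
        return len(self.receipts) - 1

# Mirrors the reserve_trip example: promise_create, then promise_then.
ext = PromiseTracker()
first = ext.create_receipt([], "hotel_near")           # promise index 0
second = ext.create_receipt([first], "travel_agency")  # waits on the first
```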
Processing the reserve
receipt
This receipt will have output_data_receivers
with one element corresponding to the receipt that calls hotel_reservation_complete
,
which will tell the runtime that it should create DataReceipt
and send it towards travel_agency
once the execution of reserve(date: u64)
is complete.
The rest of the smart contract execution is similar to the above.
Processing the hotel_reservation_complete
receipt
Upon receiving the hotel_reservation_complete
receipt the runtime will notice that its input_data_ids
is not empty
which means that it cannot be executed until reserve
receipt is complete. It will store the receipt in the trie together
with the counter of how many DataReceipt
it is waiting on.
It will not call the Wasm smart contract at this point.
Processing the DataReceipt
Once the runtime receives the DataReceipt
it takes the receipt with hotel_reservation_complete
function call
and executes it following the same execution steps as with the reserve_trip
receipt.
Components
Here is the high-level diagram of various runtime components, including some blockchain layer components.
Runtime crate
The Runtime crate encapsulates the logic of how transactions and receipts should be handled. If it encounters a smart contract call within a transaction or a receipt, it calls `near-vm-runner`; all other actions, like account creation, it processes in-place.
Runtime class
The main entry point of the `Runtime` is the `apply` method.
It applies new signed transactions and incoming receipts for some chunk/shard on top of
given trie and the given state root.
If the validator accounts update is provided, it updates the validator accounts.
All new signed transactions should be valid and already verified by the chunk producer.
If any transaction is invalid, the method returns an InvalidTxError
.
In case of success, the method returns ApplyResult
that contains the new state root, trie changes,
new outgoing receipts, stats for validators (e.g. total rent paid by all the affected accounts),
execution outcomes.
Apply arguments
It takes the following arguments:
trie: Arc<Trie>
- the trie that contains the latest state.root: CryptoHash
- the hash of the state root in the trie.validator_accounts_update: &Option<ValidatorAccountsUpdate>
- optional field that contains updates for validator accounts. It's provided at the beginning of the epoch or when someone is slashed.apply_state: &ApplyState
- contains block index and timestamp, epoch length, gas price and gas limit.prev_receipts: &[Receipt]
- the list of incoming receipts, from the previous block.transactions: &[SignedTransaction]
- the list of new signed transactions.
Apply logic
The execution consists of the following stages:
- Snapshot the initial state.
- Apply validator accounts update, if available.
- Convert new signed transactions into the receipts.
- Process receipts.
- Check that incoming and outgoing balances match.
- Finalize trie update.
- Return
ApplyResult
.
Validator accounts update
Validator accounts are accounts that staked some tokens to become validators. The validator accounts update usually happens when the current chunk is the first chunk of the epoch. It also happens when there is a challenge in the current block with one of the participants belonging to the current shard.
This update distributes validator rewards, returns locked tokens, and may slash some accounts out of their stake.
Signed Transaction conversion
New signed transactions are provided by the chunk producer in the chunk. These transactions should be ordered and already validated. The Runtime does validation again for the following reasons:
- to charge accounts for transaction fees, transfer balances, prepaid gas and account rents;
- to create new receipts;
- to compute burnt gas;
- to validate transactions again, in case the chunk producer was malicious.
If the transaction has the same `signer_id` and `receiver_id`, then the new receipt is added to the list of new local receipts; otherwise it's added to the list of new outgoing receipts.
Receipt processing
Receipts are processed one by one in the following order:
- Previously delayed receipts from the state.
- New local receipts.
- New incoming receipts.
After each processed receipt, we compare total gas burnt (so far) with the gas limit. When the total gas burnt reaches or exceeds the gas limit, the processing stops. The remaining receipts are considered delayed and stored into the state.
Delayed receipts
Delayed receipts are stored as a persistent queue in the state. Initially, the first unprocessed index and the next available index are initialized to 0. When a new delayed receipt is added, it's written under the next available index into the state, and the next available index is incremented by 1. When a delayed receipt is processed, it's read from the state using the first unprocessed index, and the first unprocessed index is incremented. At the end of receipt processing, all remaining local and incoming receipts are considered delayed and stored to the state in their respective order. If we changed the indices during receipt processing, then the delayed receipt indices are stored to the state as well.
Receipt processing algorithm
The receipt processing algorithm is the following:
- Read indices from the state or initialize with zeros.
- While the first unprocessed index is less than the next available index, do the following:
  - If the total burnt gas is at least the gas limit, break.
  - Read the receipt at the first unprocessed index.
  - Remove the receipt from the state.
  - Increment the first unprocessed index.
  - Process the receipt.
  - Add the new burnt gas to the total burnt gas.
  - Remember that the delayed queue indices have changed.
- Process the new local receipts and then the new incoming receipts:
  - If the total burnt gas is less than the gas limit:
    - Process the receipt.
    - Add the new burnt gas to the total burnt gas.
  - Else:
    - Store the receipt under the next available index.
    - Increment the next available index.
    - Remember that the delayed queue indices have changed.
- If the delayed queue indices have changed, store the new indices to the state.
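The algorithm above can be sketched in runnable pseudocode (the trie state is modeled as a plain dict and the names are assumptions, not the actual runtime code):

```python
def process_receipts(state, local_receipts, incoming_receipts, gas_limit,
                     apply_receipt):
    """Process receipts with a persistent delayed queue, per the steps above.

    `apply_receipt` processes a single receipt and returns the gas it burnt.
    """
    # 1. Read indices from the state or initialize with zeros.
    first = state.get("first_unprocessed", 0)
    nxt = state.get("next_available", 0)
    total_burnt = 0
    # 2. Drain previously delayed receipts while gas remains.
    while first < nxt:
        if total_burnt >= gas_limit:
            break
        receipt = state.pop(("delayed", first))
        first += 1
        total_burnt += apply_receipt(receipt)
    # 3. New local receipts, then new incoming receipts; delay the rest.
    for receipt in list(local_receipts) + list(incoming_receipts):
        if total_burnt < gas_limit:
            total_burnt += apply_receipt(receipt)
        else:
            state[("delayed", nxt)] = receipt
            nxt += 1
    # 4. Store the (possibly changed) indices back to the state.
    state["first_unprocessed"], state["next_available"] = first, nxt
    return total_burnt
```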
Balance checker
Balance checker computes the total incoming balance and the total outgoing balance.
The total incoming balance consists of the following:
- Incoming validator rewards from validator accounts update.
- Sum of the initial accounts balances for all affected accounts. We compute it using the snapshot of the initial state.
- Incoming receipts balances. The prepaid fees and gas multiplied by their gas prices, plus the attached balances from transfers and function calls. Refunds are considered free of charge in terms of fees, but still have attached deposits.
- Balances for the processed delayed receipts.
- Initial balances for the postponed receipts. Postponed receipts are receipts from the previous blocks that were processed but not executed. They are action receipts with some expected incoming data, usually a callback on top of an awaited promise. When the expected data arrives later than the action receipt, the action receipt is postponed. Note that data receipts cost 0, because they are completely prepaid when issued.
The total outgoing balance consists of the following:
- Sum of the final accounts balance for all affected accounts.
- Outgoing receipts balances.
- New delayed receipts. Local and incoming receipts that were not processed this time.
- Final balances for the postponed receipts.
- Total rent paid by all affected accounts.
- Total new validator rewards. It's computed from the total gas burnt rewards.
- Total balance burnt. In case the balance is burnt for some reason (e.g. the account was deleted during the refund), it's accounted for here.
- Total balance slashed. In case a validator is slashed for some reason, the balance is accounted for here.
When you sum up incoming balances and outgoing balances, they should match. If they don't match, we throw an error.
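The check itself reduces to an equality over the component sums (a minimal sketch; the component names below are hypothetical stand-ins for the items listed above):

```python
def check_balance(incoming, outgoing):
    """Raise if the incoming and outgoing component sums don't match."""
    if sum(incoming.values()) != sum(outgoing.values()):
        raise RuntimeError("BalanceMismatchError")

# Toy example: 100 tokens move from an account into an outgoing receipt.
check_balance(
    incoming={"initial_accounts": 1000, "incoming_receipts": 0},
    outgoing={"final_accounts": 900, "outgoing_receipts": 100},
)
```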
Bindings Specification
This is the low-level interface available to the smart contracts. It consists of the functions that the host (represented by Wasmer inside near-vm-runner) exposes to the guest (the smart contract compiled to Wasm).
Due to Wasm restrictions the methods operate only with primitive types, like u64
.
Also for all functions in the bindings specification the following is true:
- Method execution could result in
MemoryAccessViolation
error if one of the following happens:- The method causes host to read a piece of memory from the guest but it points outside the guest's memory;
- The guest causes host to read from the register, but register id is invalid.
A bindings function call may result in an error. This error causes execution of the smart contract to be terminated and the error message to be written into the logs of the transaction that caused the execution. Many bindings functions can throw specialized error messages, but there is also a list of error messages that can be thrown by almost any function:
- `IntegerOverflow` -- happens when the guest passes some data to the host, but when the host tries to apply an arithmetic operation to it, it causes overflow or underflow;
- `GasExceeded` -- happens when an operation performed by the guest uses more gas than the remaining prepaid gas;
- `GasLimitExceeded` -- happens when the execution uses more gas than allowed by the global limit imposed in the economics config;
- `StorageError` -- happens when a method fails to do some operation on the trie.
The following binding methods cannot be invoked in a view call:
signer_account_id
signer_account_pk
predecessor_account_id
attached_deposit
prepaid_gas
used_gas
promise_create
promise_then
promise_and
promise_batch_create
promise_batch_then
promise_batch_action_create_account
promise_batch_action_deploy_account
promise_batch_action_function_call
promise_batch_action_transfer
promise_batch_action_stake
promise_batch_action_add_key_with_full_access
promise_batch_action_add_key_with_function_call
promise_batch_action_delete_key
promise_batch_action_delete_account
promise_results_count
promise_result
promise_return
If they are invoked the smart contract execution will panic with `ProhibitedInView(<method name>)`.
Registers API
Registers allow a host function to return data into a buffer located inside the host, as opposed to a buffer located on the guest. A special operation can be used to copy the content of that buffer into the guest. Memory pointers can then be used to point either to the memory on the guest or the memory on the host, see below. Benefits:
- We can have functions that return values that are not necessarily used, e.g. inserting a key-value pair into a trie can also return the preempted old value, which might not necessarily be used. Previously, if we returned something we would have to pass the blob from the host into the guest, even if it is not used;
- We can pass blobs of data between host functions without going through the guest, e.g. we can remove a value from storage and insert it under a different key;
- It makes the API cleaner, because we don't need to pass `buffer_len` and `buffer_ptr` as arguments to other functions;
- It allows merging certain functions together, see `storage_iter_next`;
- This is consistent with other APIs that were created for high performance, e.g. allegedly Ewasm has implemented SNARK-like computations in Wasm by exposing a bignum library through a stack-like interface to the guest. The guest can then manipulate a stack of 256-bit numbers that is located on the host.
Host → host blob passing
The registers can be used to pass blobs between host functions. For any function that takes a pair of arguments `*_len: u64, *_ptr: u64`, this pair points to a region of memory either on the guest or the host:

- If `*_len != u64::MAX` it points to the memory on the guest;
- If `*_len == u64::MAX` it points to the memory under the register `*_ptr` on the host.

For example:

`storage_write(u64::MAX, 0, u64::MAX, 1, 2)` -- inserts a key-value pair into storage, where the key is read from register 0, the value is read from register 1, and the result is saved to register 2.
Note, if some function takes `register_id` then it means this function can copy some data into this register. If `register_id == u64::MAX` then the copying does not happen. This allows some micro-optimizations in the future.

Note, we allow multiple registers on the host, identified with a `u64` number. The guest does not have to use them in order and can, for instance, save some blob in register `5000` and another value in register `1`.
Specification
```rust
read_register(register_id: u64, ptr: u64)
```
Writes the entire content of the register `register_id` into the memory of the guest starting at `ptr`.

Panics

- If the content extends outside the memory allocated to the guest. In Wasmer, it returns a `MemoryAccessViolation` error message;
- If `register_id` points to an unused register, returns an `InvalidRegisterId` error message.
Undefined Behavior
- If the content of the register extends outside the preallocated memory on the host side, or the pointer points to a wrong location, this function will overwrite memory that it is not supposed to overwrite, causing undefined behavior.
```rust
register_len(register_id: u64) -> u64
```
Returns the size of the blob stored in the given register.

Normal operation

- If the register is used, returns the size, which can potentially be zero;
- If the register is not used, returns `u64::MAX`.
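The register semantics above can be sketched with a minimal host-side mock. This is only an illustration of the specified behavior, not the actual near-vm-runner implementation; the `Registers` type and its methods are invented for this sketch:

```rust
use std::collections::HashMap;

/// Illustrative mock of the host-side registers.
struct Registers {
    regs: HashMap<u64, Vec<u8>>,
}

impl Registers {
    fn new() -> Self {
        Registers { regs: HashMap::new() }
    }

    /// Host writes a blob into a register; register_id == u64::MAX disables copying.
    fn write(&mut self, register_id: u64, data: &[u8]) {
        if register_id != u64::MAX {
            self.regs.insert(register_id, data.to_vec());
        }
    }

    /// register_len: size of the stored blob, or u64::MAX if the register is unused.
    fn register_len(&self, register_id: u64) -> u64 {
        match self.regs.get(&register_id) {
            Some(blob) => blob.len() as u64,
            None => u64::MAX,
        }
    }

    /// read_register: copy the register content into guest memory starting at `ptr`.
    fn read_register(&self, register_id: u64, guest_mem: &mut [u8], ptr: usize) -> Result<(), &'static str> {
        let blob = self.regs.get(&register_id).ok_or("InvalidRegisterId")?;
        if ptr + blob.len() > guest_mem.len() {
            return Err("MemoryAccessViolation");
        }
        guest_mem[ptr..ptr + blob.len()].copy_from_slice(blob);
        Ok(())
    }
}

fn main() {
    let mut host = Registers::new();
    let mut guest_mem = vec![0u8; 16];

    // Registers need not be used in order: register 5000 before register 1.
    host.write(5000, b"blob");
    host.write(1, b"x");

    assert_eq!(host.register_len(5000), 4);
    assert_eq!(host.register_len(2), u64::MAX); // unused register

    host.read_register(5000, &mut guest_mem, 0).unwrap();
    assert_eq!(&guest_mem[0..4], b"blob");

    // Reading past the guest memory fails, analogous to MemoryAccessViolation.
    assert!(host.read_register(5000, &mut guest_mem, 14).is_err());
    println!("register mock ok");
}
```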
Trie API
Here we provide a specification of the trie API. After this NEP is merged, the cases where our current implementation does not follow the specification are considered bugs that need to be fixed.
```rust
storage_write(key_len: u64, key_ptr: u64, value_len: u64, value_ptr: u64, register_id: u64) -> u64
```
Writes a key-value pair into storage.

Normal operation

- If the key is not in use, it inserts the key-value pair and does not modify the register;
- If the key is in use, it inserts the key-value pair and copies the old value into `register_id`.

Returns

- If the key was not used, returns `0`;
- If the key was used, returns `1`.

Panics

- If `key_len + key_ptr` or `value_len + value_ptr` exceeds the memory container or points to an unused register, it panics with `MemoryAccessViolation`. (When we say that something panics with the given error we mean that we use the Wasmer API to create this error and terminate the execution of the VM. For mocks of the host that would only cause a non-name panic.);
- If returning the preempted value into the register exceeds the memory container, it panics with `MemoryAccessViolation`.
Current bugs

- The `External::storage_set` trait method can return an error which is then converted to a generic non-descriptive `StorageUpdateError`; here, however, the actual implementation does not return an error at all, see;
- Does not return into the registers.
```rust
storage_read(key_len: u64, key_ptr: u64, register_id: u64) -> u64
```
Reads the value stored under the given key.

Normal operation

- If the key is used, copies the content of the value into `register_id`, even if the content is zero bytes;
- If the key is not present, does not modify the register.

Returns

- If the key was not present, returns `0`;
- If the key was present, returns `1`.

Panics

- If `key_len + key_ptr` exceeds the memory container or points to an unused register, it panics with `MemoryAccessViolation`;
- If returning the value into the register exceeds the memory container, it panics with `MemoryAccessViolation`.

Current bugs

- This function currently does not exist.
```rust
storage_remove(key_len: u64, key_ptr: u64, register_id: u64) -> u64
```
Removes the value stored under the given key.

Normal operation

Very similar to `storage_read`:

- If the key is used, removes the key-value pair from the trie and copies the content of the value into `register_id`, even if the content is zero bytes;
- If the key is not present, does not modify the register.

Returns

- If the key was not present, returns `0`;
- If the key was present, returns `1`.

Panics

- If `key_len + key_ptr` exceeds the memory container or points to an unused register, it panics with `MemoryAccessViolation`;
- If the registers exceed the memory limit, panics with `MemoryAccessViolation`;
- If returning the preempted value into the register exceeds the memory container, it panics with `MemoryAccessViolation`.

Current bugs

- Does not return into the registers.
```rust
storage_has_key(key_len: u64, key_ptr: u64) -> u64
```
Checks if there is a key-value pair with the given key.

Normal operation

- If the key is used, returns `1`, even if the value is zero bytes;
- Otherwise returns `0`.

Panics

- If `key_len + key_ptr` exceeds the memory container, it panics with `MemoryAccessViolation`.
```rust
storage_iter_prefix(prefix_len: u64, prefix_ptr: u64) -> u64
```
DEPRECATED, calling it will result in a `HostError::Deprecated` error.

Creates an iterator object inside the host. Returns an identifier that uniquely differentiates the given iterator from other iterators that can be simultaneously created.

Normal operation

- It iterates over the keys that have the provided prefix. The order of iteration is defined by the lexicographic order of the bytes in the keys. If there are no keys, it creates an empty iterator, see below on empty iterators.

Panics

- If `prefix_len + prefix_ptr` exceeds the memory container, it panics with `MemoryAccessViolation`.
```rust
storage_iter_range(start_len: u64, start_ptr: u64, end_len: u64, end_ptr: u64) -> u64
```
DEPRECATED, calling it will result in a `HostError::Deprecated` error.

Similarly to `storage_iter_prefix`, creates an iterator object inside the host.

Normal operation

Unless lexicographically `start < end`, it creates an empty iterator. Iterates over all key-value pairs such that the keys are between `start` and `end`, where `start` is inclusive and `end` is exclusive. Note, this definition allows the `start` or `end` keys to not actually exist in the given trie.

Panics

- If `start_len + start_ptr` or `end_len + end_ptr` exceeds the memory container or points to an unused register, it panics with `MemoryAccessViolation`.
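The start-inclusive / end-exclusive semantics above are the standard lexicographic byte-key range. They can be illustrated with a `BTreeMap` (an analogy only, not the trie implementation):

```rust
use std::collections::BTreeMap;

fn main() {
    // Keys are ordered by the lexicographic order of their bytes.
    let mut trie: BTreeMap<Vec<u8>, Vec<u8>> = BTreeMap::new();
    trie.insert(b"a".to_vec(), b"1".to_vec());
    trie.insert(b"aa".to_vec(), b"2".to_vec());
    trie.insert(b"b".to_vec(), b"3".to_vec());

    // start is inclusive, end is exclusive; neither bound has to exist as a key.
    let (start, end) = (b"a".to_vec(), b"b".to_vec());
    let keys: Vec<&[u8]> = trie.range(start..end).map(|(k, _)| k.as_slice()).collect();
    assert_eq!(keys, vec![&b"a"[..], &b"aa"[..]]); // "b" itself is excluded

    // Unless lexicographically start < end, the iterator is empty.
    // (BTreeMap::range panics on a reversed range, so guard explicitly.)
    let (start, end) = (b"b".to_vec(), b"a".to_vec());
    let count = if start < end { trie.range(start..end).count() } else { 0 };
    assert_eq!(count, 0);
    println!("range semantics ok");
}
```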
```rust
storage_iter_next(iterator_id: u64, key_register_id: u64, value_register_id: u64) -> u64
```
DEPRECATED, calling it will result in a `HostError::Deprecated` error.

Advances the iterator and saves the next key and value in the registers.

Normal operation

- If the iterator is not empty (after calling next it points to a key-value pair), copies the key into `key_register_id` and the value into `value_register_id` and returns `1`;
- If the iterator is empty, returns `0`.

This allows us to iterate over keys that have zero bytes stored in their values.

Panics

- If `key_register_id == value_register_id`, panics with `MemoryAccessViolation`;
- If the registers exceed the memory limit, panics with `MemoryAccessViolation`;
- If `iterator_id` does not correspond to an existing iterator, panics with `InvalidIteratorId`;
- If between the creation of the iterator and calling `storage_iter_next` any modification to storage was done through `storage_write` or `storage_remove`, the iterator is invalidated and the error message is `IteratorWasInvalidated`.

Current bugs

- Not implemented; currently we have `storage_iter_next` and `data_read` + `DATA_TYPE_STORAGE_ITER` that together fulfill the purpose, but have unspecified behavior.
Promises API
```rust
promise_create(account_id_len: u64, account_id_ptr: u64, method_name_len: u64, method_name_ptr: u64, arguments_len: u64, arguments_ptr: u64, amount_ptr: u64, gas: u64) -> u64
```
Creates a promise that will execute a method on the given account with the given arguments and attaches the given amount. `amount_ptr` points to a slice of bytes representing a `u128`.

Panics

- If `account_id_len + account_id_ptr` or `method_name_len + method_name_ptr` or `arguments_len + arguments_ptr` or `amount_ptr + 16` points outside the memory of the guest or host, panics with `MemoryAccessViolation`;
- If called in a view function, panics with `ProhibitedInView`.

Returns

- Index of the new promise that uniquely identifies it within the current execution of the method.
```rust
promise_then(promise_idx: u64, account_id_len: u64, account_id_ptr: u64, method_name_len: u64, method_name_ptr: u64, arguments_len: u64, arguments_ptr: u64, amount_ptr: u64, gas: u64) -> u64
```
Attaches a callback that is executed after the promise pointed to by `promise_idx` is complete.

Panics

- If `promise_idx` does not correspond to an existing promise, panics with `InvalidPromiseIndex`;
- If `account_id_len + account_id_ptr` or `method_name_len + method_name_ptr` or `arguments_len + arguments_ptr` or `amount_ptr + 16` points outside the memory of the guest or host, panics with `MemoryAccessViolation`;
- If called in a view function, panics with `ProhibitedInView`.

Returns

- Index of the new promise that uniquely identifies it within the current execution of the method.
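The promise indices returned by `promise_create` and `promise_then` are just positions within the current execution. A hypothetical bookkeeping sketch (the types, account names, and method names below are invented, not the runtime's actual structures):

```rust
/// Illustrative bookkeeping for promise indices.
enum Promise {
    Call { account_id: String, method: String },
    Callback { after: u64, account_id: String, method: String },
}

struct Promises(Vec<Promise>);

impl Promises {
    fn promise_create(&mut self, account_id: &str, method: &str) -> u64 {
        self.0.push(Promise::Call { account_id: account_id.into(), method: method.into() });
        (self.0.len() - 1) as u64 // index within the current execution
    }

    fn promise_then(&mut self, promise_idx: u64, account_id: &str, method: &str) -> u64 {
        // InvalidPromiseIndex if the index does not exist yet.
        assert!((promise_idx as usize) < self.0.len(), "InvalidPromiseIndex");
        self.0.push(Promise::Callback { after: promise_idx, account_id: account_id.into(), method: method.into() });
        (self.0.len() - 1) as u64
    }
}

fn main() {
    let mut p = Promises(Vec::new());
    let a = p.promise_create("token.alice.near", "ft_transfer");
    let b = p.promise_then(a, "app.alice.near", "on_transfer");
    assert_eq!((a, b), (0, 1)); // indices are unique within this execution
    println!("promise indices ok");
}
```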
```rust
promise_and(promise_idx_ptr: u64, promise_idx_count: u64) -> u64
```
Creates a new promise which completes when all the promises passed as arguments complete. Cannot be used with registers. `promise_idx_ptr` points to an array of `u64` elements, with `promise_idx_count` denoting the number of elements. The array contains the indices of the promises that need to be waited on jointly.

Panics

- If `promise_idx_ptr + 8 * promise_idx_count` extends outside the guest memory, panics with `MemoryAccessViolation`;
- If any of the promises in the array do not correspond to existing promises, panics with `InvalidPromiseIndex`;
- If called in a view function, panics with `ProhibitedInView`.

Returns

- Index of the new promise that uniquely identifies it within the current execution of the method.
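The array passed to `promise_and` is a packed sequence of `u64` indices in guest memory; since Wasm memory is little-endian, a guest would lay it out as below (a sketch of the encoding only):

```rust
fn main() {
    // Promise indices to be joined.
    let indices: [u64; 3] = [0, 1, 4];

    // Guest-side buffer: promise_idx_count * 8 bytes of little-endian u64s.
    let mut buf = Vec::with_capacity(indices.len() * 8);
    for idx in indices {
        buf.extend_from_slice(&idx.to_le_bytes());
    }
    // promise_idx_ptr + 8 * promise_idx_count must stay inside guest memory.
    assert_eq!(buf.len(), 8 * indices.len());

    // The host reads the indices back out of guest memory:
    let decoded: Vec<u64> = buf
        .chunks_exact(8)
        .map(|c| u64::from_le_bytes(c.try_into().unwrap()))
        .collect();
    assert_eq!(decoded, vec![0, 1, 4]);
    println!("promise_and layout ok");
}
```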
```rust
promise_results_count() -> u64
```
If the current function is invoked by a callback we can access the execution results of the promises that caused the callback. This function returns the number of complete and incomplete callbacks.

Note, we are only going to have incomplete callbacks once we have the `promise_or` combinator.

Normal execution

- If there is only one callback, `promise_results_count()` returns `1`;
- If there are multiple callbacks (e.g. created through `promise_and`), `promise_results_count()` returns their number;
- If the function was not called through a callback, `promise_results_count()` returns `0`.

Panics

- If called in a view function, panics with `ProhibitedInView`.
```rust
promise_result(result_idx: u64, register_id: u64) -> u64
```
If the current function is invoked by a callback we can access the execution results of the promises that caused the callback. This function returns the result in blob format and places it into the register.

Normal execution

- If the promise result is complete and successful, copies its blob into the register;
- If the promise result is complete and failed, or is incomplete, keeps the register unused.

Returns

- If the promise result is not complete, returns `0`;
- If the promise result is complete and successful, returns `1`;
- If the promise result is complete and failed, returns `2`.

Panics

- If `result_idx` does not correspond to an existing result, panics with `InvalidResultIndex`;
- If copying the blob exhausts the memory limit, panics with `MemoryAccessViolation`;
- If called in a view function, panics with `ProhibitedInView`.

Current bugs

- We currently have two separate functions to check for result completion and copy it.
```rust
promise_return(promise_idx: u64)
```
When the promise `promise_idx` finishes executing, its result is considered to be the result of the current function.

Panics

- If `promise_idx` does not correspond to an existing promise, panics with `InvalidPromiseIndex`.

Current bugs

- The current name `return_promise` is inconsistent with the naming convention of the Promise API.
```rust
promise_batch_create(account_id_len: u64, account_id_ptr: u64) -> u64
```
Creates a new promise towards the given `account_id` without any actions attached to it.

Panics

- If `account_id_len + account_id_ptr` points outside the memory of the guest or host, panics with `MemoryAccessViolation`;
- If called in a view function, panics with `ProhibitedInView`.

Returns

- Index of the new promise that uniquely identifies it within the current execution of the method.
```rust
promise_batch_then(promise_idx: u64, account_id_len: u64, account_id_ptr: u64) -> u64
```
Attaches a new empty promise that is executed after the promise pointed to by `promise_idx` is complete.

Panics

- If `promise_idx` does not correspond to an existing promise, panics with `InvalidPromiseIndex`;
- If `account_id_len + account_id_ptr` points outside the memory of the guest or host, panics with `MemoryAccessViolation`;
- If called in a view function, panics with `ProhibitedInView`.

Returns

- Index of the new promise that uniquely identifies it within the current execution of the method.
```rust
promise_batch_action_create_account(promise_idx: u64)
```
Appends a `CreateAccount` action to the batch of actions for the promise pointed to by `promise_idx`.

Details for the action: https://github.com/nearprotocol/NEPs/pull/8/files#diff-15b6752ec7d78e7b85b8c7de4a19cbd4R48

Panics

- If `promise_idx` does not correspond to an existing promise, panics with `InvalidPromiseIndex`;
- If the promise pointed to by `promise_idx` is an ephemeral promise created by `promise_and`;
- If called in a view function, panics with `ProhibitedInView`.
```rust
promise_batch_action_deploy_contract(promise_idx: u64, code_len: u64, code_ptr: u64)
```
Appends a `DeployContract` action to the batch of actions for the promise pointed to by `promise_idx`.

Details for the action: https://github.com/nearprotocol/NEPs/pull/8/files#diff-15b6752ec7d78e7b85b8c7de4a19cbd4R49

Panics

- If `promise_idx` does not correspond to an existing promise, panics with `InvalidPromiseIndex`;
- If the promise pointed to by `promise_idx` is an ephemeral promise created by `promise_and`;
- If `code_len + code_ptr` points outside the memory of the guest or host, panics with `MemoryAccessViolation`;
- If called in a view function, panics with `ProhibitedInView`.
```rust
promise_batch_action_function_call(promise_idx: u64, method_name_len: u64, method_name_ptr: u64, arguments_len: u64, arguments_ptr: u64, amount_ptr: u64, gas: u64)
```
Appends a `FunctionCall` action to the batch of actions for the promise pointed to by `promise_idx`.

Details for the action: https://github.com/nearprotocol/NEPs/pull/8/files#diff-15b6752ec7d78e7b85b8c7de4a19cbd4R50

NOTE: Calling `promise_batch_create` and then `promise_batch_action_function_call` will produce the same promise as calling `promise_create` directly.

Panics

- If `promise_idx` does not correspond to an existing promise, panics with `InvalidPromiseIndex`;
- If the promise pointed to by `promise_idx` is an ephemeral promise created by `promise_and`;
- If `method_name_len + method_name_ptr` or `arguments_len + arguments_ptr` or `amount_ptr + 16` points outside the memory of the guest or host, panics with `MemoryAccessViolation`;
- If called in a view function, panics with `ProhibitedInView`.
```rust
promise_batch_action_transfer(promise_idx: u64, amount_ptr: u64)
```
Appends a `Transfer` action to the batch of actions for the promise pointed to by `promise_idx`.

Details for the action: https://github.com/nearprotocol/NEPs/pull/8/files#diff-15b6752ec7d78e7b85b8c7de4a19cbd4R51

Panics

- If `promise_idx` does not correspond to an existing promise, panics with `InvalidPromiseIndex`;
- If the promise pointed to by `promise_idx` is an ephemeral promise created by `promise_and`;
- If `amount_ptr + 16` points outside the memory of the guest or host, panics with `MemoryAccessViolation`;
- If called in a view function, panics with `ProhibitedInView`.
```rust
promise_batch_action_stake(promise_idx: u64, amount_ptr: u64, bls_public_key_len: u64, bls_public_key_ptr: u64)
```
Appends a `Stake` action to the batch of actions for the promise pointed to by `promise_idx`.

Details for the action: https://github.com/nearprotocol/NEPs/pull/8/files#diff-15b6752ec7d78e7b85b8c7de4a19cbd4R52

Panics

- If `promise_idx` does not correspond to an existing promise, panics with `InvalidPromiseIndex`;
- If the promise pointed to by `promise_idx` is an ephemeral promise created by `promise_and`;
- If the given BLS public key is not a valid BLS public key (e.g. wrong length), panics with `InvalidPublicKey`;
- If `amount_ptr + 16` or `bls_public_key_len + bls_public_key_ptr` points outside the memory of the guest or host, panics with `MemoryAccessViolation`;
- If called in a view function, panics with `ProhibitedInView`.
```rust
promise_batch_action_add_key_with_full_access(promise_idx: u64, public_key_len: u64, public_key_ptr: u64, nonce: u64)
```
Appends an `AddKey` action to the batch of actions for the promise pointed to by `promise_idx`.

Details for the action: https://github.com/nearprotocol/NEPs/pull/8/files#diff-15b6752ec7d78e7b85b8c7de4a19cbd4R54

The access key will have `FullAccess` permission, details: https://github.com/nearprotocol/NEPs/blob/master/text/0005-access-keys.md#guide-level-explanation

Panics

- If `promise_idx` does not correspond to an existing promise, panics with `InvalidPromiseIndex`;
- If the promise pointed to by `promise_idx` is an ephemeral promise created by `promise_and`;
- If the given public key is not a valid public key (e.g. wrong length), panics with `InvalidPublicKey`;
- If `public_key_len + public_key_ptr` points outside the memory of the guest or host, panics with `MemoryAccessViolation`;
- If called in a view function, panics with `ProhibitedInView`.
```rust
promise_batch_action_add_key_with_function_call(promise_idx: u64, public_key_len: u64, public_key_ptr: u64, nonce: u64, allowance_ptr: u64, receiver_id_len: u64, receiver_id_ptr: u64, method_names_len: u64, method_names_ptr: u64)
```
Appends an `AddKey` action to the batch of actions for the promise pointed to by `promise_idx`.

Details for the action: https://github.com/nearprotocol/NEPs/pull/8/files#diff-156752ec7d78e7b85b8c7de4a19cbd4R54

The access key will have `FunctionCall` permission, details: https://github.com/nearprotocol/NEPs/blob/master/text/0005-access-keys.md#guide-level-explanation

- If the `allowance` value (not the pointer) is `0`, the allowance is set to `None` (which means unlimited allowance); a positive value represents a `Some(...)` allowance.
- The given `method_names` is a UTF-8 string with `,` used as a separator. The VM will split the given string into a vector of strings.
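The allowance and `method_names` encodings above can be sketched as follows (the helper names are illustrative, not the runtime's):

```rust
/// allowance == 0 is interpreted as None (unlimited); a positive value as Some(value).
fn decode_allowance(raw: u128) -> Option<u128> {
    if raw == 0 { None } else { Some(raw) }
}

/// method_names is a UTF-8 string with ',' as the separator.
fn split_method_names(raw: &str) -> Vec<String> {
    raw.split(',').map(|s| s.to_string()).collect()
}

fn main() {
    assert_eq!(decode_allowance(0), None); // unlimited allowance
    assert_eq!(decode_allowance(250), Some(250));
    assert_eq!(split_method_names("get,set"), vec!["get", "set"]);
    println!("add_key encoding ok");
}
```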
Panics

- If `promise_idx` does not correspond to an existing promise, panics with `InvalidPromiseIndex`;
- If the promise pointed to by `promise_idx` is an ephemeral promise created by `promise_and`;
- If the given public key is not a valid public key (e.g. wrong length), panics with `InvalidPublicKey`;
- If `method_names` is not a valid UTF-8 string, fails with `BadUTF8`;
- If `public_key_len + public_key_ptr`, `allowance_ptr + 16`, `receiver_id_len + receiver_id_ptr` or `method_names_len + method_names_ptr` points outside the memory of the guest or host, panics with `MemoryAccessViolation`;
- If called in a view function, panics with `ProhibitedInView`.
```rust
promise_batch_action_delete_key(promise_idx: u64, public_key_len: u64, public_key_ptr: u64)
```
Appends a `DeleteKey` action to the batch of actions for the promise pointed to by `promise_idx`.

Details for the action: https://github.com/nearprotocol/NEPs/pull/8/files#diff-15b6752ec7d78e7b85b8c7de4a19cbd4R55

Panics

- If `promise_idx` does not correspond to an existing promise, panics with `InvalidPromiseIndex`;
- If the promise pointed to by `promise_idx` is an ephemeral promise created by `promise_and`;
- If the given public key is not a valid public key (e.g. wrong length), panics with `InvalidPublicKey`;
- If `public_key_len + public_key_ptr` points outside the memory of the guest or host, panics with `MemoryAccessViolation`;
- If called in a view function, panics with `ProhibitedInView`.
```rust
promise_batch_action_delete_account(promise_idx: u64, beneficiary_id_len: u64, beneficiary_id_ptr: u64)
```
Appends a `DeleteAccount` action to the batch of actions for the promise pointed to by `promise_idx`.

The action is used to delete an account. It can be performed on a newly created account, on your own account, or on an account with insufficient funds to pay rent. Takes `beneficiary_id` to indicate where to send the remaining funds.

Panics

- If `promise_idx` does not correspond to an existing promise, panics with `InvalidPromiseIndex`;
- If the promise pointed to by `promise_idx` is an ephemeral promise created by `promise_and`;
- If `beneficiary_id_len + beneficiary_id_ptr` points outside the memory of the guest or host, panics with `MemoryAccessViolation`;
- If called in a view function, panics with `ProhibitedInView`.
Context API
Context API mostly provides read-only functions that access current information about the blockchain: the accounts involved (the account that originally initiated the chain of cross-contract calls, the immediate account that called the current one, and the account of the current contract), and other important information like storage usage.

Many of the below functions are currently implemented through `data_read`, which allows reading generic context data. However, there is no reason to have `data_read` instead of the specific functions:

- `data_read` does not solve forward compatibility. If later we want to add another context function, e.g. `executed_operations`, we can just declare it as a new function, instead of encoding it as `DATA_TYPE_EXECUTED_OPERATIONS = 42` which is passed as the first argument to `data_read`;
- `data_read` does not help with renaming. If later we decide to rename `signer_account_id` to `originator_id`, one could argue that contracts that rely on `data_read` would not break, while contracts relying on `signer_account_id()` would. However, a name change often means a change of semantics, which means the contracts using this function are no longer safe to execute anyway.

However, there is one reason to not have `data_read` -- it makes the API more human-like, which is the general direction Wasm APIs like WASI are moving towards.
```rust
current_account_id(register_id: u64)
```
Saves the account id of the contract that we currently execute into the register.

Panics

- If the registers exceed the memory limit, panics with `MemoryAccessViolation`.
```rust
signer_account_id(register_id: u64)
```
All contract calls are a result of some transaction that was signed by some account using some access key and submitted into the memory pool (either through the wallet using RPC or by a node itself). This function returns the id of that account.

Normal operation

- Saves the bytes of the signer account id into the register.

Panics

- If the registers exceed the memory limit, panics with `MemoryAccessViolation`;
- If called in a view function, panics with `ProhibitedInView`.

Current bugs

- Currently we conflate `originator_id` and `sender_id` in our code base.
```rust
signer_account_pk(register_id: u64)
```
Saves the public key of the access key that was used by the signer into the register. In rare situations a smart contract might want to know the exact access key that was used to send the original transaction, e.g. to increase the allowance or manipulate the public key.

Panics

- If the registers exceed the memory limit, panics with `MemoryAccessViolation`;
- If called in a view function, panics with `ProhibitedInView`.

Current bugs

- Not implemented.
```rust
predecessor_account_id(register_id: u64)
```
All contract calls are a result of a receipt; this receipt might be created by a transaction that invokes a function on the contract, or by another contract as a result of a cross-contract call.

Normal operation

- Saves the bytes of the predecessor account id into the register.

Panics

- If the registers exceed the memory limit, panics with `MemoryAccessViolation`;
- If called in a view function, panics with `ProhibitedInView`.

Current bugs

- Not implemented.
```rust
input(register_id: u64)
```
Reads the input to the contract call into the register. Input is expected to be in JSON format.

Normal operation

- If input is provided, saves the bytes (potentially zero) of input into the register;
- If input is not provided, does not modify the register.

Returns

- If input was not provided, returns `0`;
- If input was provided, returns `1`; if input is zero bytes, returns `1` too.

Panics

- If the registers exceed the memory limit, panics with `MemoryAccessViolation`.

Current bugs

- Implemented as part of `data_read`. However, there is no reason to have one unified function like `data_read` that can be used to read all the different types of context data.
```rust
block_index() -> u64
```
Returns the current block height from genesis.

```rust
block_timestamp() -> u64
```
Returns the current block timestamp (number of non-leap-nanoseconds since January 1, 1970 0:00:00 UTC).

```rust
epoch_height() -> u64
```
Returns the current epoch height from genesis.
```rust
storage_usage() -> u64
```
Returns the number of bytes used by the contract if it was saved to the trie as of the invocation. This includes:

- The data written with `storage_*` functions during the current and previous executions;
- The bytes needed to store the account protobuf and the access keys of the given account.
Economics API
Accounts own a certain balance, and each transaction and each receipt have a certain amount of balance and prepaid gas attached to them.

During the contract execution, the contract has access to the following `u128` values:

- `account_balance` -- the balance attached to the given account. This includes the `attached_deposit` that was attached to the transaction;
- `attached_deposit` -- the balance that was attached to the call, which will be immediately deposited before the contract execution starts;
- `prepaid_gas` -- the tokens attached to the call that can be used to pay for the gas;
- `used_gas` -- the gas that was already burnt during the contract execution and attached to promises (cannot exceed `prepaid_gas`).

If contract execution fails, `prepaid_gas - used_gas` is refunded back to `signer_account_id` and `attached_deposit` is refunded back to `predecessor_account_id`.
The following spec is the same for all functions:

```rust
account_balance(balance_ptr: u64)
attached_deposit(balance_ptr: u64)
```
-- writes the value into the `u128` variable pointed to by `balance_ptr`.

Panics

- If `balance_ptr + 16` points outside the memory of the guest, panics with `MemoryAccessViolation`;
- If called in a view function, panics with `ProhibitedInView`.

Current bugs

- Use a different name;
```rust
prepaid_gas() -> u64
used_gas() -> u64
```
Panics

- If called in a view function, panics with `ProhibitedInView`.
Math API
```rust
random_seed(register_id: u64)
```
Returns a random seed that can be used for pseudo-random number generation in a deterministic way.

Panics

- If the size of the registers exceeds the set limit, panics with `MemoryAccessViolation`.
```rust
sha256(value_len: u64, value_ptr: u64, register_id: u64)
```
Hashes the given sequence of bytes using sha256 and returns the result into `register_id`.

Panics

- If `value_len + value_ptr` points outside the memory, or the registers use more memory than the limit, panics with `MemoryAccessViolation`.

```rust
keccak256(value_len: u64, value_ptr: u64, register_id: u64)
```
Hashes the given sequence of bytes using keccak256 and returns the result into `register_id`.

Panics

- If `value_len + value_ptr` points outside the memory, or the registers use more memory than the limit, panics with `MemoryAccessViolation`.

```rust
keccak512(value_len: u64, value_ptr: u64, register_id: u64)
```
Hashes the given sequence of bytes using keccak512 and returns the result into `register_id`.

Panics

- If `value_len + value_ptr` points outside the memory, or the registers use more memory than the limit, panics with `MemoryAccessViolation`.
Miscellaneous API
```rust
value_return(value_len: u64, value_ptr: u64)
```
Sets the blob of data as the return value of the contract.

Panics

- If `value_len + value_ptr` exceeds the memory container or points to an unused register, panics with `MemoryAccessViolation`.
```rust
panic()
```
Terminates the execution of the program with the panic `GuestPanic("explicit guest panic")`.

```rust
panic_utf8(len: u64, ptr: u64)
```
Terminates the execution of the program with the panic `GuestPanic(s)`, where `s` is the given UTF-8 encoded string.

Normal behavior

If `len == u64::MAX` then the string is treated as null-terminated with the character `'\0'`.

Panics

- If the string extends outside the memory of the guest, panics with `MemoryAccessViolation`;
- If the string is not UTF-8, returns `BadUtf8`;
- If the string length without the null-termination symbol is larger than `config.max_log_len`, returns `BadUtf8`.
```rust
log_utf8(len: u64, ptr: u64)
```
Logs the UTF-8 encoded string.

Normal behavior

If `len == u64::MAX` then the string is treated as null-terminated with the character `'\0'`.

Panics

- If the string extends outside the memory of the guest, panics with `MemoryAccessViolation`;
- If the string is not UTF-8, returns `BadUtf8`;
- If the string length without the null-termination symbol is larger than `config.max_log_len`, returns `BadUtf8`.
```rust
log_utf16(len: u64, ptr: u64)
```
Logs the UTF-16 encoded string. `len` is the number of bytes in the string. See https://stackoverflow.com/a/5923961 which explains that null termination is not defined through the encoding.

Normal behavior

If `len == u64::MAX` then the string is treated as null-terminated with the two-byte sequence `0x00 0x00`.

Panics

- If the string extends outside the memory of the guest, panics with `MemoryAccessViolation`.
```rust
abort(msg_ptr: u32, filename_ptr: u32, line: u32, col: u32)
```
Special import kept for compatibility with AssemblyScript contracts. Not called by smart contracts directly, but instead called by the code generated by AssemblyScript.
Future Improvements
In the future, some of the registers could live on the guest. For instance, a guest can tell the host that it has some pre-allocated memory that it wants to be used for the register, e.g.
set_guest_register(register_id: u64, register_ptr: u64, max_register_size: u64)
will assign register_id to a span of memory on the guest. The host then also knows the size of that buffer on the guest
and can throw a panic if an attempted copy exceeds the guest register size.
GenesisConfig
protocol_version
type: u32
Protocol version that this genesis works with.
genesis_time
type: DateTime
Official time of blockchain start.
chain_id
type: String
ID of the blockchain. This must be unique for every blockchain. If your testnet blockchains do not have unique chain IDs, you will have a bad time.
num_block_producers
type: u32
Number of block producer seats at genesis.
block_producers_per_shard
type: [ValidatorId]
Defines number of shards and number of validators per each shard at genesis.
avg_fisherman_per_shard
type: [ValidatorId]
Expected number of fisherman per shard.
dynamic_resharding
type: bool
Enable dynamic re-sharding.
epoch_length
type: BlockIndex
Epoch length counted in blocks.
gas_limit
type: Gas
Initial gas limit for a block.
gas_price
type: Balance
Initial gas price.
block_producer_kickout_threshold
type: u8
Criterion for kicking out block producers (this is a number between 0 and 100)
chunk_producer_kickout_threshold
type: u8
Criterion for kicking out chunk producers (this is a number between 0 and 100)
gas_price_adjustment_rate
type: Fraction
Gas price adjustment rate
runtime_config
type: RuntimeConfig
Runtime configuration (mostly economics constants).
validators
type: [AccountInfo]
List of initial validators.
records
type: Vec<StateRecord>
Records in storage at genesis (get split into shards at genesis creation).
transaction_validity_period
type: u64
Number of blocks for which a given transaction is valid
developer_reward_percentage
type: Fraction
Developer reward percentage.
protocol_reward_percentage
type: Fraction
Protocol treasury percentage.
max_inflation_rate
type: Fraction
Maximum inflation on the total supply every epoch.
total_supply
type: Balance
Total supply of tokens at genesis.
num_blocks_per_year
type: u64
Expected number of blocks per year
protocol_treasury_account
type: AccountId
Protocol treasury account
protocol economics
For the specific economic specs, refer to Economics Section.
RuntimeConfig
The structure that holds the parameters of the runtime, mostly economics.
storage_cost_byte_per_block
type: Balance
The cost to store one byte of storage per block.
poke_threshold
type: BlockIndex
The minimum number of blocks of storage rent an account has to maintain to prevent forced deletion.
transaction_costs
type: RuntimeFeesConfig
Costs of different actions that need to be performed when sending and processing transaction and receipts.
wasm_config
type: VMConfig
Config of wasm operations.
account_length_baseline_cost_per_block
type: Balance
The baseline cost to store account_id of short length per block.
The original formula in NEP#0006 is 1,000 / (3 ^ (account_id.length - 2)) for the cost per year.
This value represents the 1,000 above, adjusted to a per-block cost.
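The quoted NEP#0006 formula can be made concrete as below. Dividing the per-year cost by num_blocks_per_year (from GenesisConfig) to get the per-block value is an assumption for illustration, and the units of the result are left abstract.

```python
def account_length_cost_per_block(account_id: str, blocks_per_year: int) -> float:
    """Illustrative NEP#0006 account-length cost: 1,000 / (3 ^ (len - 2)) per year,
    converted to a per-block cost by dividing by the expected blocks per year."""
    cost_per_year = 1_000 / (3 ** (len(account_id) - 2))
    return cost_per_year / blocks_per_year
```

For a 2-character account ID the yearly cost is the full 1,000; each extra character divides it by 3.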
RuntimeFeesConfig
Economic parameters for runtime
action_receipt_creation_config
type: Fee
Describes the cost of creating an action receipt, ActionReceipt
, excluding the actual cost
of actions.
data_receipt_creation_config
type: DataReceiptCreationConfig
Describes the cost of creating a data receipt, DataReceipt
.
action_creation_config
type: ActionCreationConfig
Describes the cost of creating a certain action, Action
. Includes all variants.
storage_usage_config
type: StorageUsageConfig
Describes fees for storage rent
burnt_gas_reward
type: Fraction
Fraction of the burnt gas to reward to the contract account for execution.
AccessKeyCreationConfig
Describes the cost of creating an access key.
full_access_cost
type: Fee Base cost of creating a full-access access key.
function_call_cost
type: Fee Base cost of creating an access-key restricted to specific functions.
function_call_cost_per_byte
type: Fee Cost per byte of method_names of creating a restricted access-key.
ActionCreationConfig
Describes the cost of creating a specific action, Action. Includes all variants.
create_account_cost
type: Fee
Base cost of creating an account.
deploy_contract_cost
type: Fee
Base cost of deploying a contract.
deploy_contract_cost_per_byte
type: Fee
Cost per byte of deploying a contract.
function_call_cost
type: Fee
Base cost of calling a function.
function_call_cost_per_byte
type: Fee
Cost per byte of method name and arguments of calling a function.
transfer_cost
type: Fee
Base cost of making a transfer.
NOTE: If the account ID is an implicit account ID (64-length hex account ID), then the cost of the transfer fee
will be transfer_cost + create_account_cost + add_key_cost.full_access_cost
.
This is needed to account for the implicit account creation costs.
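The note above can be sketched as follows. The implicit-account check follows the account ID rules (64 lowercase hex characters); passing the three fee values as plain integers is a simplification of the typed config structs, so treat this as an illustrative model rather than the runtime's code.

```python
def transfer_fee(receiver_id: str, transfer_cost: int,
                 create_account_cost: int, full_access_cost: int) -> int:
    """Total transfer fee, adding implicit account creation costs when the
    receiver is a 64-character lowercase hex (implicit) account ID."""
    is_implicit = (len(receiver_id) == 64
                   and all(c in "0123456789abcdef" for c in receiver_id))
    if is_implicit:
        # Transfer may create the account and add a full-access key.
        return transfer_cost + create_account_cost + full_access_cost
    return transfer_cost
```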
stake_cost
type: Fee
Base cost of staking.
add_key_cost:
type: AccessKeyCreationConfig Base cost of adding a key.
delete_key_cost
type: Fee
Base cost of deleting a key.
delete_account_cost
type: Fee
Base cost of deleting an account.
DataReceiptCreationConfig
Describes the cost of creating a data receipt, DataReceipt
.
base_cost
type: Fee Base cost of creating a data receipt.
cost_per_byte
type: Fee Additional cost per byte sent.
StorageUsageConfig
Describes cost of storage per block
account_cost
type: Gas Base storage usage for an account
data_record_cost
type: Gas Base cost for a k/v record
key_cost_per_byte
type: Gas Cost per byte of key
value_cost_per_byte
type: Gas Cost per byte of value
code_cost_per_byte
type: Gas Cost per byte of contract code
Fee
Costs associated with an object that can only be sent over the network (and executed by the receiver).
send_sir
Fee for sending an object from the sender to itself, guaranteeing that it does not leave the shard.
send_not_sir
Fee for sending an object potentially across the shards.
execution
Fee for executing the object.
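A sketch of how the three Fee components combine for a single object: the send cost depends on whether the sender is the receiver ("sir"), and execution is always charged. The dict representation is an assumption for illustration; the runtime uses a typed struct.

```python
def total_fee(fee: dict, sender_is_receiver: bool) -> int:
    """Total cost of sending and executing one object under a Fee config."""
    # "sir" = sender is receiver: the object stays on the sender's shard.
    send = fee["send_sir"] if sender_is_receiver else fee["send_not_sir"]
    return send + fee["execution"]
```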
Fraction
numerator
type: u64
denominator
type: u64
VMConfig
Config of wasm operations.
ext_costs:
type: ExtCostsConfig
Costs for runtime externals
grow_mem_cost
type: u32
Gas cost of growing memory by a single page.
regular_op_cost
type: u32
Gas cost of a regular operation.
max_gas_burnt
type: Gas
Max amount of gas that can be used, excluding gas attached to promises.
max_stack_height
type: u32
The maximum height the stack is allowed to grow to.
initial_memory_pages
type: u32
The initial number of memory pages a contract starts with.
max_memory_pages
type: u32
The maximum number of memory pages a contract is allowed to have.
registers_memory_limit
type: u64
Limit of memory used by registers.
max_register_size
type: u64
Maximum number of bytes that can be stored in a single register.
max_number_registers
type: u64
Maximum number of registers that can be used simultaneously.
max_number_logs
type: u64
Maximum number of log entries.
max_log_len
type: u64
Maximum length of a single log, in bytes.
ExtCostsConfig
base
type: Gas
Base cost for calling a host function.
read_memory_base
type: Gas
Base cost for guest memory read
read_memory_byte
type: Gas
Cost for guest memory read
write_memory_base
type: Gas
Base cost for guest memory write
write_memory_byte
type: Gas
Cost for guest memory write per byte
read_register_base
type: Gas
Base cost for reading from register
read_register_byte
type: Gas
Cost for reading byte from register
write_register_base
type: Gas
Base cost for writing into register
write_register_byte
type: Gas
Cost for writing byte into register
utf8_decoding_base
type: Gas
Base cost of decoding utf8.
utf8_decoding_byte
type: Gas
Cost per byte of decoding utf8.
utf16_decoding_base
type: Gas
Base cost of decoding utf16.
utf16_decoding_byte
type: Gas
Cost per byte of decoding utf16.
sha256_base
type: Gas
Cost of getting sha256 base
sha256_byte
type: Gas
Cost of getting sha256 per byte
keccak256_base
type: Gas
Cost of getting keccak256 base
keccak256_byte
type: Gas
Cost of getting keccak256 per byte
keccak512_base
type: Gas
Cost of getting keccak512 base
keccak512_byte
type: Gas
Cost of getting keccak512 per byte
log_base
type: Gas
Cost for calling logging.
log_byte
type: Gas
Cost for logging per byte
Storage API
storage_write_base
type: Gas
Storage trie write key base cost
storage_write_key_byte
type: Gas
Storage trie write key per byte cost
storage_write_value_byte
type: Gas
Storage trie write value per byte cost
storage_write_evicted_byte
type: Gas
Storage trie write cost per byte of evicted value.
storage_read_base
type: Gas
Storage trie read key base cost
storage_read_key_byte
type: Gas
Storage trie read key per byte cost
storage_read_value_byte
type: Gas
Storage trie read value per byte cost
storage_remove_base
type: Gas
Remove key from trie base cost
storage_remove_key_byte
type: Gas
Remove key from trie per byte cost
storage_remove_ret_value_byte
type: Gas
Remove key from trie ret value byte cost
storage_has_key_base
type: Gas
Storage trie check for key existence cost base
storage_has_key_byte
type: Gas
Storage trie check for key existence per key byte
storage_iter_create_prefix_base
type: Gas
Create trie prefix iterator cost base
storage_iter_create_prefix_byte
type: Gas
Create trie prefix iterator cost per byte.
storage_iter_create_range_base
type: Gas
Create trie range iterator cost base
storage_iter_create_from_byte
type: Gas
Create trie range iterator cost per byte of from key.
storage_iter_create_to_byte
type: Gas
Create trie range iterator cost per byte of to key.
storage_iter_next_base
type: Gas
Trie iterator per key base cost
storage_iter_next_key_byte
type: Gas
Trie iterator next key byte cost
storage_iter_next_value_byte
type: Gas
Trie iterator next value byte cost
touching_trie_node
type: Gas
Cost per touched trie node
Promise API
promise_and_base
type: Gas
Cost for calling promise_and
promise_and_per_promise
type: Gas
Cost for calling promise_and for each promise
promise_return
type: Gas
Cost for calling promise_return
StateRecord
type: Enum
Enum that describes one of the records in the state storage.
Account
type: Unnamed struct
Record that contains account information for a given account ID.
account_id
type: AccountId
The account ID of the account.
account
type: Account
The account structure. Serialized to JSON. U128 types are serialized to strings.
Data
type: Unnamed struct
Record that contains key-value data record for a contract at the given account ID.
account_id
type: AccountId
The account ID of the contract that contains this data record.
data_key
type: Vec<u8>
Data Key serialized in Base64 format.
NOTE: Key doesn't contain the data separator.
value
type: Vec<u8>
Value serialized in Base64 format.
Contract
type: Unnamed struct
Record that contains a contract code for a given account ID.
account_id
type: AccountId
The account ID of the account that has the contract.
code
type: Vec<u8>
WASM Binary contract code serialized in Base64 format.
AccessKey
type: Unnamed struct
Record that contains an access key for a given account ID.
account_id
type: AccountId
The account ID of the access key owner.
public_key
type: [PublicKey]
The public key for the access key in JSON-friendly string format. E.g. ed25519:5JFfXMziKaotyFM1t4hfzuwh8GZMYCiKHfqw1gTEWMYT
access_key
type: AccessKey
The access key serialized in JSON format.
PostponedReceipt
type: Box<Receipt>
Record that contains a receipt that was postponed on a shard (e.g. it's waiting for incoming data).
The receipt is in JSON-friendly format. The receipt can only be an ActionReceipt
.
NOTE: Box is used to decrease fixed size of the entire enum.
ReceivedData
type: Unnamed struct
Record that contains information about received data for some action receipt that is not yet received or processed, for a given account ID.
The data is received using DataReceipt
before. See Receipts for details.
account_id
type: AccountId
The account ID of the receiver of the data.
data_id
type: [CryptoHash]
Data ID of the data in base58 format.
data
type: Option<Vec<u8>>
Optional data encoded as base64 format or null in JSON.
DelayedReceipt
type: Box<Receipt>
Record that contains a receipt that was delayed on a shard. It means the shard was overwhelmed with receipts and it processes receipts from backlog. The receipt is in JSON-friendly format. See Delayed Receipts for details.
NOTE: Box is used to decrease fixed size of the entire enum.
Economics
This is under heavy development
Units
Name | Value |
---|---|
yoctoNEAR | smallest indivisible amount of the native currency NEAR. |
NEAR | 10**24 yoctoNEAR |
block | smallest on-chain unit of time |
gas | unit to measure usage of blockchain |
General Parameters
Name | Value |
---|---|
INITIAL_SUPPLY | 10**33 yoctoNEAR |
MIN_GAS_PRICE | 10**5 yoctoNEAR |
REWARD_PCT_PER_YEAR | 0.05 |
EPOCH_LENGTH | 43,200 blocks |
EPOCHS_A_YEAR | 730 epochs |
INITIAL_MAX_STORAGE | 10 * 2**40 bytes == 10 TB |
TREASURY_PCT | 0.1 |
TREASURY_ACCOUNT_ID | treasury |
CONTRACT_PCT | 0.3 |
INVALID_STATE_SLASH_PCT | 0.05 |
ADJ_FEE | 0.001 |
TOTAL_SEATS | 100 |
ONLINE_THRESHOLD_MIN | 0.9 |
ONLINE_THRESHOLD_MAX | 0.99 |
BLOCK_PRODUCER_KICKOUT_THRESHOLD | 0.9 |
CHUNK_PRODUCER_KICKOUT_THRESHOLD | 0.6 |
General Variables
Name | Description | Initial value |
---|---|---|
totalSupply[t] | Total supply of NEAR at given epoch[t] | INITIAL_SUPPLY |
gasPrice[t] | The cost of 1 unit of gas in NEAR tokens (see Transaction Fees section below) | MIN_GAS_PRICE |
storageAmountPerByte[t] | keeping constant, INITIAL_SUPPLY / INITIAL_MAX_STORAGE | ~9.09 * 10**19 yoctoNEAR |
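The constant in the last row can be checked directly from the parameters above. Integer division (rounding down) is an assumption here, matching the document's note that final results are rounded down.

```python
# Derive storageAmountPerByte from the General Parameters table.
INITIAL_SUPPLY = 10**33           # yoctoNEAR
INITIAL_MAX_STORAGE = 10 * 2**40  # bytes (10 TB)

storage_amount_per_byte = INITIAL_SUPPLY // INITIAL_MAX_STORAGE
# ~9.09 * 10**19 yoctoNEAR per byte, as in the table.
assert 9.0 * 10**19 < storage_amount_per_byte < 9.2 * 10**19
```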
Issuance
The protocol sets a ceiling for the maximum issuance of tokens, and dynamically decreases this issuance depending on the amount of total fees in the system.
Name | Description |
---|---|
reward[t] | totalSupply[t] * ((1 + REWARD_PCT_PER_YEAR ) ** (1/EPOCHS_A_YEAR ) - 1 ) |
epochFee[t] | sum([(1 - DEVELOPER_PCT_PER_YEAR) * block.txFee + block.stateFee for block in epoch[t]]) |
issuance[t] | The amount of token issued at a certain epoch[t], issuance[t] = reward[t] - epochFee[t] |
Where totalSupply[t] is the total number of tokens in the system at a given time t.
If epochFee[t] > reward[t], the issuance is negative, and thus totalSupply[t] decreases in the given epoch.
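The three rows above can be combined into a small model. Floats stand in for the protocol's rational arithmetic here (with a final round-down), so results are approximate; the constants come from General Parameters.

```python
REWARD_PCT_PER_YEAR = 0.05
EPOCHS_A_YEAR = 730

def epoch_reward(total_supply: int) -> int:
    """reward[t]: per-epoch reward ceiling derived from the yearly rate."""
    return int(total_supply * ((1 + REWARD_PCT_PER_YEAR) ** (1 / EPOCHS_A_YEAR) - 1))

def issuance(total_supply: int, epoch_fee: int) -> int:
    """issuance[t] = reward[t] - epochFee[t]; negative when fees exceed the
    reward ceiling, which shrinks the total supply."""
    return epoch_reward(total_supply) - epoch_fee
```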
Transaction Fees
Before inclusion, each transaction must buy enough gas to cover the cost of bandwidth and execution.
Gas unifies execution and bytes of bandwidth usage of the blockchain. Each WASM instruction or pre-compiled function gets assigned an amount of gas based on measurements on a common-denominator computer. The same goes for weighting the used bandwidth based on general unified costs. For specific gas mapping numbers see ???.
Gas is priced dynamically in NEAR tokens. At each block t, we update gasPrice[t] = gasPrice[t - 1] * (1 + (gasUsed[t - 1] / gasLimit[t - 1] - 0.5) * ADJ_FEE).
Where gasUsed[t] = sum([sum([gas(tx) for tx in chunk]) for chunk in block[t]]).
gasLimit[t] is defined as gasLimit[t] = gasLimit[t - 1] + validatorGasDiff[t - 1], where validatorGasDiff is a parameter with which each chunk producer can either increase or decrease the gas limit based on how long it took to execute the previous chunk. validatorGasDiff[t] can only be within ±0.1% of gasLimit[t], and only if gasUsed[t - 1] > 0.9 * gasLimit[t - 1].
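The price update can be sketched as below, assuming the adjustment is applied as a multiplier around 1 so that the price is unchanged at exactly half-full blocks, and assuming the result is clamped at MIN_GAS_PRICE from General Parameters (the clamp is this sketch's assumption).

```python
ADJ_FEE = 0.001
MIN_GAS_PRICE = 10**5  # yoctoNEAR

def next_gas_price(gas_price: float, gas_used: int, gas_limit: int) -> float:
    """gasPrice[t] from gasPrice[t-1]: rises when blocks are more than half
    full, falls when less, unchanged at exactly half-full blocks."""
    new_price = gas_price * (1 + (gas_used / gas_limit - 0.5) * ADJ_FEE)
    return max(new_price, MIN_GAS_PRICE)
```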
State Stake
The amount of NEAR on an account represents the right of that account to take up a portion of the blockchain's overall global state. Transactions fail if the account doesn't have enough balance to cover the storage required for the given account.
def check_storage_cost(account):
    # Compute requiredAmount given the size of the account.
    requiredAmount = sizeOf(account) * storageAmountPerByte
    return Ok() if account.amount + account.locked >= requiredAmount else Error(requiredAmount)

# Check when a transaction is received to verify that it is valid.
def verify_transaction(tx, signer_account):
    # ...
    # Update the signer's account with the amount it will have after executing this tx.
    update_post_amount(signer_account, tx)
    result = check_storage_cost(signer_account)
    # Fail unless there is enough balance OR the account is being deleted by its owner.
    if not result.ok() and DeleteAccount(tx.signer_id) not in tx.actions:
        raise LackBalanceForState(signer_id=tx.signer_id, amount=result.err())

# After the account is touched / changed, check that it still has enough balance to cover its storage.
def on_account_change(block_height, account):
    # ... execute transaction / receipt changes ...
    # Validate the post-condition and revert if it fails.
    result = check_storage_cost(account)
    if not result.ok():
        raise LackBalanceForState(signer_id=account.id, amount=result.err())
Where sizeOf(account) includes the size of account_id, the account structure, and the size of all the data stored under the account.
An account can end up with not enough balance in case it gets slashed. The account will then become unusable, as all originating transactions will fail (including deletion). The only way to recover it in this case is by sending extra funds from a different account.
Validators
NEAR validators provide their resources in exchange for a reward epochReward[t], where [t] represents the considered epoch.
Name | Description |
---|---|
epochReward[t] | = coinbaseReward[t] + epochFee[t] |
coinbaseReward[t] | The maximum inflation per epoch[t], as a function of REWARD_PCT_PER_YEAR / EPOCHS_A_YEAR |
Validator Selection
Name | Description |
---|---|
proposals: Proposal[] | The array of all new staking transactions that have happened during the epoch (if one account has multiple only last one is used) |
current_validators | The array of all existing validators during the epoch |
epoch[T] | The epoch when validator[v] is selected from the proposals auction array |
seat_price | The minimum stake needed to become validator in epoch[T] |
stake[v] | The amount in NEAR tokens staked by validator[v] during the auction at the end of epoch[T-2], minus INCLUSION_FEE |
shard[v] | The shard is randomly assigned to validator[v] at epoch[T-1], such that its node can download and sync with its state |
num_allocated_seats[v] | Number of seats assigned to validator[v], calculated from stake[v]/seatPrice |
validatorAssignments | The resulting ordered array of all proposals with a stake higher than seatPrice |
struct Proposal {
    account_id: AccountId,
    stake: Balance,
    public_key: PublicKey,
}
During the epoch, the outcomes of staking transactions are collected in the form of Proposals.
At the end of every epoch T, the following algorithm is executed to determine validators for epoch T + 2:
- For every validator in current_validators, determine num_blocks_produced and num_chunks_produced based on what they produced during the epoch.
- Remove validators for whom num_blocks_produced < num_blocks_expected * BLOCK_PRODUCER_KICKOUT_THRESHOLD or num_chunks_produced < num_chunks_expected * CHUNK_PRODUCER_KICKOUT_THRESHOLD.
- Add validators from proposals; if a validator is also in current_validators, the considered stake of the proposal is 0 if proposal.stake == 0 else proposal.stake + reward[proposal.account_id].
- Find the seat price seat_price = findSeatPrice(current_validators - kickedout_validators + proposals, num_seats), where each validator gets floor(stake[v] / seat_price) seats and seat_price is the highest integer such that the total number of seats is at least num_seats.
- Filter validators and proposals to only those with a stake greater than or equal to the seat price.
- For every validator, replicate them by the number of seats they get: floor(stake[v] / seat_price).
- Randomly shuffle (TODO: define random number sampler) with the seed from randomness generated on the last block of the current epoch (via VRF(block_producer.private_key, block_hash)).
- Cut off all seats which are over the num_seats needed.
- Use this set for block producers, and a shifting window over it as chunk producers.
def findSeatPrice(stakes, num_seats):
    """Find the seat price given a set of stakes and the number of seats required.

    The seat price is the highest integer number such that if you sum `floor(stakes[i] / seat_price)` it is at least `num_seats`.
    """
    stakes = sorted(stakes)
    total_stakes = sum(stakes)
    assert total_stakes >= num_seats, "Total stakes should be above number of seats"
    left, right = 1, total_stakes + 1
    while True:
        if left == right - 1:
            return left
        mid = (left + right) // 2
        seats = 0
        found = False
        for stake in stakes:
            seats += stake // mid
            if seats >= num_seats:
                left = mid
                found = True
                break
        if not found:
            right = mid
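The steps that follow the seat price computation (filtering, seat replication, and the cut-off) can be sketched as below. The dict-of-stakes representation and the deterministic ordering are assumptions of this sketch; the random shuffle is omitted since the sampler is still marked TODO above.

```python
def assign_seats(stakes: dict, seat_price: int, num_seats: int) -> list:
    """Replicate each validator by floor(stake / seat_price) seats, then cut
    off everything beyond num_seats (shuffling step omitted)."""
    seats = []
    for account_id, stake in sorted(stakes.items()):
        # Validators with stake below the seat price contribute zero seats,
        # which also implements the filtering step.
        seats.extend([account_id] * (stake // seat_price))
    return seats[:num_seats]
```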
Rewards Calculation
Name | Value |
---|---|
epochFee[t] | sum([(1 - DEVELOPER_PCT_PER_YEAR) * txFee[i]]) , where [i] represents any considered block within the epoch[t] |
Note: all calculations are done in Rational numbers first with final result converted into integer with rounding down.
Total reward every epoch t
is equal to:
reward[t] = totalSupply * ((1 + REWARD_PCT_PER_YEAR) ** (1 / EPOCHS_A_YEAR) - 1)
Uptime of a specific validator is computed:
pct_online[t][j] = (num_produced_blocks[t][j] / expected_produced_blocks[t][j] + num_produced_chunks[t][j] / expected_produced_chunks[t][j]) / 2
if pct_online[t][j] > ONLINE_THRESHOLD_MIN:
    uptime[t][j] = min(1., (pct_online[t][j] - ONLINE_THRESHOLD_MIN) / (ONLINE_THRESHOLD_MAX - ONLINE_THRESHOLD_MIN))
else:
    uptime[t][j] = 0.
Where expected_produced_blocks and expected_produced_chunks are the numbers of blocks and chunks respectively that are expected to be produced by the given validator j in the epoch t.
The specific validator[t][j] reward for epoch t is then proportional to this validator's fraction of the total stake:
validatorReward[t][j] = (uptime[t][j] * stake[t][j] * reward[t]) / total_stake[t]
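The uptime and reward formulas above can be written as a checkable sketch, with thresholds taken from General Parameters; floats stand in for the protocol's rational arithmetic.

```python
ONLINE_THRESHOLD_MIN = 0.9
ONLINE_THRESHOLD_MAX = 0.99

def uptime(produced_blocks, expected_blocks, produced_chunks, expected_chunks):
    """Validator uptime: 0 below the minimum online threshold, scaled linearly
    between the min and max thresholds, capped at 1."""
    pct = (produced_blocks / expected_blocks
           + produced_chunks / expected_chunks) / 2
    if pct <= ONLINE_THRESHOLD_MIN:
        return 0.0
    return min(1.0, (pct - ONLINE_THRESHOLD_MIN)
                    / (ONLINE_THRESHOLD_MAX - ONLINE_THRESHOLD_MIN))

def validator_reward(up, stake, total_reward, total_stake):
    """validatorReward[t][j] = uptime * stake * reward / total_stake."""
    return up * stake * total_reward / total_stake
```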
Slashing
ChunkProofs
# Check that the chunk is invalid, because the proofs in the header don't match the body.
def chunk_proofs_condition(chunk):
    # TODO

# At the end of the epoch, run update validators and
# determine how much to slash validators.
def end_of_epoch_update_validators(validators):
    # ...
    for validator in validators:
        if validator.is_slashed:
            validator.stake -= INVALID_STATE_SLASH_PCT * validator.stake
ChunkState
# Check that the chunk header post state root is invalid,
# because the execution of the previous chunk doesn't lead to it.
def chunk_state_condition(prev_chunk, prev_state, chunk_header):
    # TODO

# At the end of the epoch, run update validators and
# determine how much to slash validators.
def end_of_epoch(..., validators):
    # ...
    for validator in validators:
        if validator.is_slashed:
            validator.stake -= INVALID_STATE_SLASH_PCT * validator.stake
Protocol Treasury
The treasury account TREASURY_ACCOUNT_ID receives a fraction of the reward every epoch t:
# At the end of the epoch, update the treasury
def end_of_epoch(..., reward):
    # ...
    accounts[TREASURY_ACCOUNT_ID].amount += TREASURY_PCT * reward
Standards
Fungible Token (NEP-21)
Version 0.2.0
Summary
A standard interface for fungible tokens allowing for ownership, escrow and transfer, specifically targeting third-party marketplace integration.
Changelog
0.2.0
- Introduce storage deposits. Make every change method payable (able to receive attached deposits with function calls). It requires the caller to attach enough deposit to cover a potential storage increase. See core-contracts/#47
- Replace set_allowance with inc_allowance and dec_allowance to address the issue of allowance front-running. See core-contracts/#49
- Validate owner_id account ID. See core-contracts/#54
- Enforce that the new_owner_id is different from the current owner_id for transfer. See core-contracts/#55
Motivation
NEAR Protocol uses an asynchronous sharded Runtime. This means the following:
- Storage for different contracts and accounts can be located on the different shards.
- Two contracts can be executed at the same time in different shards.
While this increases the transaction throughput linearly with the number of shards, it also creates some challenges for cross-contract development. For example, if one contract wants to query some information from the state of another contract (e.g. the current balance), by the time the first contract receives the balance, the real balance can change. This means that in an async system, a contract can't rely on the state of another contract and assume it's not going to change.
Instead, the contract can rely on a temporary partial lock of the state with a callback to act or unlock, but this requires careful engineering to avoid deadlocks. In this standard we're trying to avoid enforcing locks, since most actions can still be completed without locks by transferring ownership to an escrow account.
Prior art:
- ERC-20 standard
- NEP#4 NEAR NFT standard: nearprotocol/neps#4
- For latest lock proposals see Safes (#26)
Guide-level explanation
We should be able to do the following:
- Initialize contract once. The given total supply will be owned by the given account ID.
- Get the total supply.
- Transfer tokens to a new user.
- Set a given allowance for an escrow account ID.
- Escrow will be able to transfer up to this allowance from your account.
- Get current balance for a given account ID.
- Transfer tokens from one user to another.
- Get the current allowance for an escrow account on behalf of the balance owner. This should only be used in the UI, since a contract shouldn't rely on this temporary information.
There are a few concepts in the scenarios above:
- Total supply. It's the total number of tokens in circulation.
- Balance owner. An account ID that owns some amount of tokens.
- Balance. Some amount of tokens.
- Transfer. Action that moves some amount from one account to another account.
- Escrow. A different account from the balance owner who has permission to use some amount of tokens.
- Allowance. The amount of tokens an escrow account can use on behalf of the account owner.
Note that the precision is not part of the default standard, since it's not required to perform actions. The minimum value is always 1 token.
The standard acknowledges NEAR's storage staking model and accounts for the difference in storage that can be introduced by actions on this contract. Since multiple users use the contract, the contract has to account for a potential storage increase. Thus every change method of the contract that can change the amount of storage must be payable. See the reference implementation for storage deposits and refunds.
Simple transfer
Alice wants to send 5 wBTC tokens to Bob.
Assumptions
- The wBTC token contract is wbtc.
- Alice's account is alice.
- Bob's account is bob.
- The precision on the wBTC contract is 10^8.
- The 5 tokens is 5 * 10^8 or as a number is 500000000.
High-level explanation
Alice needs to issue one transaction to wBTC contract to transfer 5 tokens (multiplied by precision) to Bob.
Technical calls
- alice calls wbtc::transfer({"new_owner_id": "bob", "amount": "500000000"}).
Token deposit to a contract
Alice wants to deposit 1000 DAI tokens to a compound interest contract to earn extra tokens.
Assumptions
- The DAI token contract is dai.
- Alice's account is alice.
- The compound interest contract is compound.
- The precision on the DAI contract is 10^18.
- The 1000 tokens is 1000 * 10^18 or as a number is 1000000000000000000000.
- The compound contract can work with multiple token types.
High-level explanation
Alice needs to issue 2 transactions. The first one to dai, to set an allowance for compound to be able to withdraw tokens from alice.
The second transaction is to compound, to start the deposit process. Compound will check that the DAI tokens are supported and will try to withdraw the desired amount of DAI from alice.
- If the transfer succeeded, compound can increase local ownership for alice to 1000 DAI.
- If the transfer fails, compound doesn't need to do anything in the current example, but maybe it can notify alice of the unsuccessful transfer.
Technical calls
- alice calls dai::set_allowance({"escrow_account_id": "compound", "allowance": "1000000000000000000000"}).
- alice calls compound::deposit({"token_contract": "dai", "amount": "1000000000000000000000"}). During the deposit call, compound does the following:
  - makes the async call dai::transfer_from({"owner_id": "alice", "new_owner_id": "compound", "amount": "1000000000000000000000"}).
  - attaches the callback compound::on_transfer({"owner_id": "alice", "token_contract": "dai", "amount": "1000000000000000000000"}).
Multi-token swap on DEX
Charlie wants to exchange his wLTC to wBTC on decentralized exchange contract. Alex wants to buy wLTC and has 80 wBTC.
Assumptions
- The wLTC token contract is wltc.
- The wBTC token contract is wbtc.
- The DEX contract is dex.
- Charlie's account is charlie.
- Alex's account is alex.
- The precision on both token contracts is 10^8.
- The amount of wLTC Alex wants is 9001 tokens, which is 9001 * 10^8 or as a number is 900100000000.
- The 80 wBTC tokens is 80 * 10^8 or as a number is 8000000000.
- Charlie has 1000000 wLTC tokens, which is 1000000 * 10^8 or as a number is 100000000000000.
- The DEX contract already has an open order to sell 80 wBTC tokens by alex towards 9001 wLTC.
- Without a Safes implementation, the DEX has to act as an escrow and hold the funds of both users before it can do an exchange.
High-level explanation
Let's first set up an open order by Alex on the DEX. It's similar to the Token deposit to a contract example above.
- Alex sets an allowance on wBTC to DEX.
- Alex calls deposit on DEX for wBTC.
- Alex calls DEX to make a new sell order.
Then Charlie comes and decides to fulfill the order by selling his wLTC to Alex on the DEX. Charlie calls the DEX:
- Charlie sets the allowance on wLTC to DEX.
- Charlie calls deposit on DEX for wLTC.
- Charlie then calls DEX to take the order from Alex.
When called, DEX makes 2 async transfer calls to exchange the corresponding tokens:
- DEX calls wLTC to transfer tokens from DEX to Alex.
- DEX calls wBTC to transfer tokens from DEX to Charlie.
Technical calls
- alex calls wbtc::set_allowance({"escrow_account_id": "dex", "allowance": "8000000000"}).
- alex calls dex::deposit({"token": "wbtc", "amount": "8000000000"}).
  - dex calls wbtc::transfer_from({"owner_id": "alex", "new_owner_id": "dex", "amount": "8000000000"}).
- alex calls dex::trade({"have": "wbtc", "have_amount": "8000000000", "want": "wltc", "want_amount": "900100000000"}).
- charlie calls wltc::set_allowance({"escrow_account_id": "dex", "allowance": "100000000000000"}).
- charlie calls dex::deposit({"token": "wltc", "amount": "100000000000000"}).
  - dex calls wltc::transfer_from({"owner_id": "charlie", "new_owner_id": "dex", "amount": "100000000000000"}).
- charlie calls dex::trade({"have": "wltc", "have_amount": "900100000000", "want": "wbtc", "want_amount": "8000000000"}).
  - dex calls wbtc::transfer({"new_owner_id": "charlie", "amount": "8000000000"}).
  - dex calls wltc::transfer({"new_owner_id": "alex", "amount": "900100000000"}).
Reference-level explanation
The full implementation in Rust can be found here: fungible-token
NOTES:
- All amounts, balances and allowances are limited by U128 (max value 2**128 - 1).
- The token standard uses JSON for serialization of arguments and results.
- Amounts in arguments and results are serialized as Base-10 strings, e.g. "100". This is done to avoid the JSON limitation of a max integer value of 2**53.
- The contract tracks the change in storage before and after the call. If the storage increases, the contract requires the caller of the contract to attach enough deposit to the function call to cover the storage cost. This is done to prevent a denial of service attack on the contract by taking all available storage. Because the gas cost of adding a new escrow account is cheap, many escrow allowances could be added until the contract runs out of storage. If the storage decreases, the contract will issue a refund for the cost of the released storage. The unused tokens from the attached deposit are also refunded, so it's safe to attach more deposit than required.
- To prevent the deployed contract from being modified or deleted, it should not have any access keys on its account.
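The storage-accounting rule above can be sketched as plain Rust. The constant and the helper name are illustrative, not part of the standard; an on-chain contract would read `env::storage_usage()` and `env::attached_deposit()` instead of taking them as parameters.

```rust
// Illustrative storage accounting: charge the caller for storage growth,
// refund on shrinkage. The byte price is an example value, not the real one.
const STORAGE_PRICE_PER_BYTE: u128 = 100_000_000_000_000_000_000;

/// Returns how much of `attached_deposit` to refund, panicking if the
/// deposit does not cover the storage the call added.
fn settle_storage(storage_before: u64, storage_after: u64, attached_deposit: u128) -> u128 {
    if storage_after > storage_before {
        let cost = (storage_after - storage_before) as u128 * STORAGE_PRICE_PER_BYTE;
        assert!(attached_deposit >= cost, "attach more deposit to cover storage");
        attached_deposit - cost // refund the unused part of the deposit
    } else {
        let released = (storage_before - storage_after) as u128 * STORAGE_PRICE_PER_BYTE;
        attached_deposit + released // refund deposit plus released storage cost
    }
}

fn main() {
    // Call added 100 bytes; the caller attached a generous deposit.
    let refund = settle_storage(1_000, 1_100, 150 * STORAGE_PRICE_PER_BYTE);
    assert_eq!(refund, 50 * STORAGE_PRICE_PER_BYTE);
    // Call released 40 bytes; the whole deposit plus released cost comes back.
    let refund = settle_storage(1_000, 960, 0);
    assert_eq!(refund, 40 * STORAGE_PRICE_PER_BYTE);
    println!("ok");
}
```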
Interface:
```rust
/******************/
/* CHANGE METHODS */
/******************/

/// Increments the `allowance` for `escrow_account_id` by `amount` on the account of the caller
/// of this contract (`predecessor_id`) who is the balance owner.
/// Requirements:
/// * Caller of the method has to attach deposit enough to cover storage difference at the
///   fixed storage price defined in the contract.
#[payable]
pub fn inc_allowance(&mut self, escrow_account_id: AccountId, amount: U128);

/// Decrements the `allowance` for `escrow_account_id` by `amount` on the account of the caller
/// of this contract (`predecessor_id`) who is the balance owner.
/// Requirements:
/// * Caller of the method has to attach deposit enough to cover storage difference at the
///   fixed storage price defined in the contract.
#[payable]
pub fn dec_allowance(&mut self, escrow_account_id: AccountId, amount: U128);

/// Transfers the `amount` of tokens from `owner_id` to the `new_owner_id`.
/// Requirements:
/// * `amount` should be a positive integer.
/// * `owner_id` should have a balance on the account greater or equal than the transfer `amount`.
/// * If this function is called by an escrow account (`owner_id != predecessor_account_id`),
///   then the allowance of the caller of the function (`predecessor_account_id`) on
///   the account of `owner_id` should be greater or equal than the transfer `amount`.
/// * Caller of the method has to attach deposit enough to cover storage difference at the
///   fixed storage price defined in the contract.
#[payable]
pub fn transfer_from(&mut self, owner_id: AccountId, new_owner_id: AccountId, amount: U128);

/// Transfers `amount` of tokens from the caller of the contract (`predecessor_id`) to
/// `new_owner_id`.
/// Acts the same way as `transfer_from` with `owner_id` equal to the caller of the contract
/// (`predecessor_id`).
/// Requirements:
/// * Caller of the method has to attach deposit enough to cover storage difference at the
///   fixed storage price defined in the contract.
#[payable]
pub fn transfer(&mut self, new_owner_id: AccountId, amount: U128);

/****************/
/* VIEW METHODS */
/****************/

/// Returns total supply of tokens.
pub fn get_total_supply(&self) -> U128;

/// Returns balance of the `owner_id` account.
pub fn get_balance(&self, owner_id: AccountId) -> U128;

/// Returns current allowance of `escrow_account_id` for the account of `owner_id`.
///
/// NOTE: Other contracts should not rely on this information, because by the moment a contract
/// receives this information, the allowance may already be changed by the owner.
/// So this method should only be used on the front-end to see the current allowance.
pub fn get_allowance(&self, owner_id: AccountId, escrow_account_id: AccountId) -> U128;
```
Drawbacks
- Current interface doesn't have minting, precision (decimals), or naming. These should be added as extensions, e.g. a Precision extension.
- It's not possible to exchange tokens without transferring them to escrow first.
- It's not possible to transfer tokens to a contract with a single transaction without setting the allowance first. It would become possible if we introduced a `transfer_with` function that transfers tokens and calls the escrow contract. It would need to handle the result of the execution, and contracts would have to be aware of this API.
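The hypothetical `transfer_with` flow from the last drawback can be modeled with a closure standing in for the receiving contract's callback. Nothing here is part of the standard: the function name comes from the drawback above, and the rollback-on-rejection behavior is one possible design, sketched for illustration.

```rust
use std::collections::HashMap;

// Hypothetical single-transaction transfer: move tokens, notify the receiver,
// and roll the transfer back if the receiver rejects it. Illustrative only.
fn transfer_with<F>(
    balances: &mut HashMap<String, u128>,
    from: &str,
    to: &str,
    amount: u128,
    on_receive: F, // stands in for the cross-contract call to the receiver
) -> bool
where
    F: FnOnce() -> bool,
{
    let b = balances.get_mut(from).expect("no balance");
    assert!(*b >= amount, "balance too small");
    *b -= amount;
    *balances.entry(to.to_string()).or_insert(0) += amount;
    if on_receive() {
        true
    } else {
        // Receiver rejected the transfer: undo it.
        *balances.get_mut(to).unwrap() -= amount;
        *balances.get_mut(from).unwrap() += amount;
        false
    }
}

fn main() {
    let mut balances = HashMap::from([("alex".to_string(), 100u128)]);
    // Receiver accepts: tokens move.
    assert!(transfer_with(&mut balances, "alex", "dex", 60, || true));
    assert_eq!(balances["alex"], 40);
    // Receiver rejects: the transfer is rolled back.
    assert!(!transfer_with(&mut balances, "alex", "dex", 40, || false));
    assert_eq!(balances["alex"], 40);
    assert_eq!(balances["dex"], 60);
    println!("ok");
}
```

On-chain, the "handle result of the execution" step is the hard part: the callback runs asynchronously in a later receipt, so the rollback must be done in a promise callback rather than inline as in this sketch.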
Future possibilities
- Support for multiple token types
- Minting and burning
- Precision, naming and short token name.