Synchronous Wasm submodule execution

The following is a proposal to extend the NEAR runtime with the ability to dynamically and synchronously execute Wasm bytecode submodules, other than the main deployed contract, in the context of the same account.

Motivation

The Aurora smart contract represents a whole ecosystem of EVM contracts synchronously executed in the context of the same account: aurora.

If EVM contracts were transpiled into Wasm and natively executed by NEAR nodes, costs could be reduced by up to 1500x. This would, however, require a mechanism to execute Wasm code that is not part of the deployed contract.

Requirements

  1. It MUST be possible to deploy multiple Wasm submodules to an account.
  2. Any submodule MUST be executable synchronously in the context of the account contract (i.e., allow synchronous access to storage and host functions).
    a. Control flow MUST return to the main contract once the submodule completes successfully.
    b. It MUST be allowed to call multiple submodules during a single execution of the main contract.
  3. The overhead of loading and calling a submodule SHOULD be as small as possible.
  4. The submodules SHOULD be able to import functions from the account contract.
  5. Submodules SHOULD be indexed by binary keys.
  6. The number of submodules in an account SHOULD be allowed to grow without limit.
    a. At least a million submodules SHOULD be supported.

Discussion

Reference

Links to the previous discussion:

  1. Synchronous Contracts
  2. Running WASM in WASM
  3. Native EVM
  4. On EVM support in NEAR
  5. EVM Precompile
  6. Nearcore discussion. Category EVM

Thanks for starting this discussion @marcelo! Some initial questions that came up during our discussions in person.

  • Should the submodules have access to host functions? All of them, a subset, etc.? Would an access control system make sense here if the submodules are not trusted and you do not want them to execute arbitrary host functions?

Hey @marcelo. Thank you for the proposal. What is our current confidence that EVM->Wasm transpilation is possible, especially if we want to ensure the safety of such a procedure? All the projects I’ve heard of that tried to do this have not succeeded yet.

Trying to give some thought on potential implementation approaches that satisfy the requirements here.

Taking stock of the outstanding WebAssembly proposals, the component model seems like the closest fit, but not quite: for example, it does not really permit dynamically instantiating components at runtime, at least not yet, and even then the meaning of “Runtime dynamic linking” in that document is somewhat distinct from dynamic linking as I understand it.

I do think, however, that there is not much else necessary on top of the component model proposal to satisfy the requirements given here. It might be enough to add some native function to instantiate a component module at runtime (a la dlopen), returning a list of funcrefs that are exported by the component.
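
To make this concrete, here is a rough sketch of how such an interface might look from a Rust contract. The host functions submodule_instantiate and submodule_invoke, their signatures, and the handle/index scheme are all invented for illustration; nothing like this exists in nearcore today.

```rust
// Hypothetical host interface for dlopen-style instantiation of a component
// deployed under a binary key. All names and signatures are illustrative.
extern "C" {
    // Instantiates the component stored under `key` and returns a handle.
    fn submodule_instantiate(key_len: u64, key_ptr: u64) -> u64;
    // Invokes one of the component's exported funcrefs by index,
    // passing an opaque argument buffer; returns a status code.
    fn submodule_invoke(handle: u64, export_index: u64,
                        arg_len: u64, arg_ptr: u64) -> u64;
}

pub fn call_submodule(key: &[u8], export_index: u64, arg: &[u8]) -> u64 {
    unsafe {
        let handle = submodule_instantiate(key.len() as u64, key.as_ptr() as u64);
        submodule_invoke(handle, export_index, arg.len() as u64, arg.as_ptr() as u64)
    }
}
```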


I wonder if this is actually required. I’ll admit I haven’t pondered this too much, but my intuition is that being able to deploy code to {submodule}.aurora.near or some such “sub-account” would be sufficient, would avoid the need to implement another storage mechanism, would properly participate in the usual permissions scheme, etc.

The requirement is somewhat ambiguous, but I wouldn’t hold my breath too much on it being possible to instantiate a component any quicker than a typical contract. If it were possible to achieve this, the contracts themselves would promptly move towards using the same mechanism, so the instantiation speed would largely remain at parity for the most part.

Do you mean through the WebAssembly import statement specifically? Would any other mechanism be applicable here? For example, would it be valid to instead specify that the interface between the contract and the submodule/component/etc. is a table of funcrefs to specific functions that the instantiating contract takes care to set up?

There is an ongoing effort to build such a transpiler. Here is the work-in-progress project:

Early benchmarks indicate a 15x reduction in NEAR gas cost using a simple approach (converting opcodes directly into equivalent NEAR Wasm snippets). We are optimistic about what could be achieved by following this approach.
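
As a toy illustration of the opcode-to-snippet idea (not the transpiler’s actual output), the EVM ADD opcode could map to a small piece of code operating on an in-memory stack. Real EVM words are 256-bit; a 64-bit word is used here only to keep the sketch short.

```rust
// Toy sketch of the "one EVM opcode -> one Wasm snippet" approach.
// A real transpiler would use 256-bit words and emit Wasm directly.
struct EvmStack {
    items: Vec<u64>,
}

impl EvmStack {
    // Snippet corresponding to the ADD opcode (0x01): pop two words,
    // push their wrapping sum.
    fn op_add(&mut self) {
        let a = self.items.pop().expect("stack underflow");
        let b = self.items.pop().expect("stack underflow");
        self.items.push(a.wrapping_add(b));
    }
}
```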

We have been thinking about this. There are potential problems in several parts of the pipeline: the generated code needs to be equivalent to the EVM bytecode source, and the code deployed on-chain must be the same as the code generated. We don’t have a definite solution for these yet, but we are aware they need to be handled, and we consider them solvable. I won’t go into the details here because I think this is not directly related to the main goal of this post.

I don’t think this is mandatory. The main reason is that if the submodules are able to import functions from the main contract (point 4), then it will be the responsibility of the main contract to expose access for relevant host functions to its submodules.

The real requirement is to allow us to execute these submodules synchronously within the same context as the main contract. I could conceive of an implementation in which this is done using sub-accounts, but that seems to me like it could be more work on the protocol/chain level. For example, requiring synchronous access means that there would have to be a way for the chain to know that these specific sub-accounts (which are really submodules) belong in the same shard as the main account. This also raises an interesting question of whether the submodule code should be externally callable as well (e.g. should bob.near be allowed to (asynchronously) call {submodule}.alice.near?). Personally I think it is simpler to ignore all those complexities altogether and to solve the issue of storing these submodules by having them be part of the existing contract storage (i.e. trie).

Yes, no problem. We discussed in person that this cost will be on the order of 1 or 2 Tgas, and I think that is fine for us. I think it is unlikely that we will have hundreds of submodule calls in a single receipt. If this does become a bottleneck in the future then we can ask the hard questions about caching which could potentially lower this cost.

I was thinking about the literal import statement, but I think it is ok to have a different implementation in the “spirit of import”, if that is easier. The key feature we want here is for submodules to not necessarily be self-contained; they can make function calls where the function is not defined in the submodule but in the parent instead. And such calls should be no more expensive than a typical Wasm function call. The reason for this ask is that we can then have all the heavy EVM logic in the main Aurora Engine contract rather than duplicating it into every submodule. This will cut down significantly on the amount of code we store.
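
For concreteness, here is a rough sketch of how a submodule might consume such an import from Rust. The import module name "parent" and the function engine_keccak256 are invented for illustration; the real set of parent exports would be defined by the Aurora Engine.

```rust
// Hypothetical view from inside a submodule: a function that is not defined
// locally is imported from the parent module, so the heavy EVM logic lives
// in one place instead of being duplicated into every submodule.
#[link(wasm_import_module = "parent")]
extern "C" {
    // Resolved against an export of the main contract at instantiation time.
    fn engine_keccak256(input_len: u64, input_ptr: u64, output_ptr: u64);
}

pub fn keccak256(input: &[u8]) -> [u8; 32] {
    let mut out = [0u8; 32];
    unsafe {
        engine_keccak256(input.len() as u64, input.as_ptr() as u64,
                         out.as_mut_ptr() as u64);
    }
    out
}
```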

Something relevant I discovered today and wanted to make a note of in order to not forget. WebAssembly/tool-conventions/DynamicLinking.

There are complexities around shared state alignment and access control in allowing the container contract to execute arbitrary Wasm code. We are basically delegating these problems to the smart contract developers, and they are already having difficulties working with our storage model. Aurora has already solved these problems specifically for the EVM, but we also need to think about other developers who might want to use this feature in the future.

Preventing sharding from splitting state across shards according to some criteria is not new; it is already done for internal contract state, which is what the Aurora EVM relies on.

The problem is that arbitrary Wasm execution actually requires a new account model, while subaccounts are already known and familiar to developers.

This is actually a good argument for the proposal. But it is more of an Ethereum-style way of thinking, where we expect all access control and versioning logic to be encapsulated inside a smart contract instead of being supported at the protocol level, for flexibility reasons. We know now that this leads to a more complex developer experience, e.g. see the upgradability patterns for Ethereum.

Yes, I agree the existing account model is familiar; however, it does not meet our use-case. We need the submodules to interact with the same state as the main contract because otherwise the migration path from a “normal” EVM contract on Aurora to an optimized Wasm one is quite complex (we would need to copy the state into the subaccount).

I also agree that if this feature has no access controls whatsoever, then it puts a burden on developers in terms of using it safely. What if the access control measure for submodule state access were to allow it to use only a subtrie of the main contract storage (i.e. all keys it accesses must start with a specified prefix)? This would still allow Aurora’s use-case, while making it possible for other developers to fully separate the main account storage from the submodules if they wish.
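
A minimal sketch of how the runtime could enforce such a prefix rule on every storage key a submodule touches; the structure and names below are assumptions, not nearcore code.

```rust
// Sketch of prefix-based storage isolation for submodules: the runtime checks
// every storage key a submodule reads or writes against the prefix declared
// when the submodule was deployed. Names are illustrative only.
struct SubmoduleStorageGuard {
    allowed_prefix: Vec<u8>,
}

impl SubmoduleStorageGuard {
    fn check(&self, key: &[u8]) -> Result<(), &'static str> {
        if key.starts_with(&self.allowed_prefix) {
            Ok(())
        } else {
            Err("submodule attempted to access storage outside its prefix")
        }
    }
}

// Aurora's case could then be expressed as an empty prefix (full access to the
// account's trie), while other developers could isolate each submodule under
// a dedicated prefix such as b"submodule/<key>/".
```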

As for access to main contract functions, maybe it makes sense to follow the same kind of pattern as for Function Call Access Keys, where an explicit list is given of functions the submodule can call from the main contract.

Doing it in this way probably requires creating a new Action type in the runtime, Deploy Submodule, where this access control information is specified.
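
Hypothetically, such an action could bundle the code with the access-control information discussed above; the field names and types below are guesses for illustration, not a concrete nearcore design.

```rust
// Hypothetical shape of a new "Deploy Submodule" runtime action.
pub struct DeploySubmoduleAction {
    /// Binary key under which the submodule is stored (requirement 5).
    pub key: Vec<u8>,
    /// The Wasm bytecode of the submodule.
    pub code: Vec<u8>,
    /// Storage-key prefix the submodule may read and write
    /// (an empty prefix could mean full access to the account's trie).
    pub storage_prefix: Vec<u8>,
    /// Main-contract exports the submodule may import, in the spirit of
    /// Function Call Access Keys.
    pub allowed_parent_functions: Vec<String>,
}
```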

Some thoughts on access control:

Implementation-wise, one approach to most of the access control is to say that a subcontract doesn’t have access to anything in the parent contract, save for one generic callback. That is, require that the Wasm interface of a submodule is exactly two functions: an exported no-argument main, and an imported “buffer in/buffer out” callback. That way, the parent contract gets full flexibility to proxy (or not proxy) various host functions using arbitrary runtime logic, at the cost of (a) extra, very dynamic indirection at runtime and (b) more complex SDKs for submodules. And of course there’s the meta problem that the parent contract needs to implement such proxying correctly.
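
Sketched from the submodule’s side in Rust, the interface being described might look roughly like the following; the import module name and the callback signature are invented for illustration.

```rust
// Hypothetical minimal interface of a "shared-nothing" submodule: one exported
// no-argument entry point and one imported generic buffer-in/buffer-out
// callback into the parent. The submodule has no direct host-function access.
#[link(wasm_import_module = "parent")]
extern "C" {
    // The single callback through which the parent proxies (or refuses to
    // proxy) host functionality; `request` is an opaque buffer the parent
    // interprets, and the return value is a status code.
    fn callback(request_len: u64, request_ptr: u64) -> u64;
}

#[no_mangle]
pub extern "C" fn main() {
    // All interaction with storage, promises, etc. has to go through `callback`.
    let request = b"example-opaque-request";
    unsafe {
        callback(request.len() as u64, request.as_ptr() as u64);
    }
}
```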

Two isolation aspects which are not solved by this approach:

  • what happens if the subcontract hits a runtime error? Does it bring down the parent contract?
  • what happens if the subcontract runs out of gas? Can the parent limit the amount of gas allocated to the subcontract?

subcontract doesn’t have access to anything in the parent contract, save for one generic callback

One thing I like about this proposal is that it also solves the dynamic linking problem essentially for free. The host (NEAR runtime) becomes a message-passing interface between two otherwise independently running modules, and since they run independently, no dynamic linking is required.

One thing I worry about in this proposal is what the overhead would be if there is frequent communication between a submodule and the main contract (say hundreds of calls to the callback in a single execution). Maybe this question could only be answered with a prototype implementation though.

As for the other questions: I think it makes sense in this “shared-nothing” model for the submodule to be fully sandboxed from the main contract. This means that the parent contract receives an error back when the submodule crashes, and can choose to handle it however it wants (presumably the default would be to pass the error along and crash itself). I think it also makes sense for the parent to set a gas limit on the submodule and to have the option of handling an out-of-gas error gracefully (as with any other submodule error).
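
A minimal sketch of what the parent-side call could look like under that model, with an explicit gas limit and recoverable errors; every name here is hypothetical, and the host call itself is stubbed out.

```rust
// Hypothetical parent-side API for the fully sandboxed model: the parent sets
// a gas limit per submodule call and decides how to handle failures.
enum SubmoduleError {
    Trap,     // the submodule hit a runtime error
    OutOfGas, // the submodule exhausted its allotted gas
}

// Stand-in for a host function that would execute the submodule stored under
// `_key` with `_input`, charging at most `_gas_limit` gas.
fn call_submodule_sandboxed(
    _key: &[u8],
    _input: &[u8],
    _gas_limit: u64,
) -> Result<Vec<u8>, SubmoduleError> {
    unimplemented!("host call not available in this sketch")
}

fn parent_entry_point() {
    // 5 Tgas is an arbitrary example limit.
    match call_submodule_sandboxed(b"my-submodule", b"payload", 5_000_000_000_000) {
        Ok(output) => {
            // Continue synchronously with the submodule's result.
            let _ = output;
        }
        Err(SubmoduleError::OutOfGas) => {
            // Handle gracefully, e.g. fall back to a cheaper code path.
        }
        Err(SubmoduleError::Trap) => {
            // Default behaviour: propagate the failure to the caller.
        }
    }
}
```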