[Discussion] Synchronous contracts

Disclaimer. I cannot claim to be entirely impartial in this discussion, since I lean towards the “Wasm suspension” and “Wasm after Wasm” solutions while disliking both of them, but I have tried to write this post objectively.

Background

It is pretty obvious that contract development with synchronous/blocking cross-contract calls is easier than with asynchronous calls. The reasons are the same as in regular non-blockchain development – one does not need to think about non-atomicity, synchronization primitives, callbacks, deadlocks, race conditions, etc. Unfortunately, just as in non-blockchain development, the path to scalable applications lies through merging concurrency with the business logic. The NEAR blockchain is scalable by design, and so its contract development experience has embraced the inevitable asynchrony from the first days. Even though asynchronous smart contract development is likely to be widely adopted in the future, forcing it on all NEAR developers creates an adoption barrier which inhibits our mission of making DevX as smooth as possible. Specifically, async contracts are uncommon in most of the blockchain world, which makes existing contract development patterns non-transferrable to NEAR. Additionally, even in the web2 world, typical developers do not need to understand most of the intricacies of concurrency. It is clear that NEAR has to allow contract development without intricate knowledge of concurrency.

Developer Needs

Concurrency works against several developer needs. We need to separate them, since the proposed solutions might focus on some of the needs more than on others.

Transaction Atomicity

Developers want to know that when their code interacts with multiple contracts in a single transaction, it is always “all or nothing”: if one contract fails, then so should all others, including those that have already been executed. There are two reasons to care about atomicity:

  • Security – asset operations should only be performed when “everything goes well”. Taking care of asset recovery when a transaction fails midway introduces risk, because it is more error-prone and adds potential abuse angles. The same goes for operations on any critical data, not just assets;
  • Business logic complexity. For async contracts, assuming that any contract can fail, the number of cases that the developer needs to deal with is, in the worst case, exponential in the number of contracts involved, while for sync contracts this number is always one. Savvy non-blockchain developers have learned to drastically reduce the number of cases to linear by using concurrency architectural patterns like lock hierarchies. Unfortunately, these methods are very complex, and the number of cases is still higher than one.

Code Simplicity

Code gets rapidly more complex when the developer needs to deal with callbacks, channels, locks, actors, etc. It is a blessing that in many languages developers can just add async/await keywords to their synchronous code and magically make it scalable. Unfortunately, it is not a universal solution:

  • It does not provide atomicity by itself;
  • It does not help with resource re-entrancy, e.g. we would still sometimes need to lock the balance before a transfer;
  • Mindless usage can lead to very unscalable applications, e.g. overly aggressive resource guards/locks;
  • Sometimes the developer is forced to understand the internals of async/await, which are quite complex.

Therefore, concurrency leads to code complexity, except in the cases where mindless async/await can be used.
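To make the first bullet concrete, here is a minimal Rust sketch (all names invented; no NEAR SDK involved) of why sequential awaited calls give no atomicity on their own: the first transfer commits even though the overall operation fails.

```rust
use std::collections::HashMap;

// Toy ledger standing in for two contracts' state. Everything here is
// illustrative; nothing is real NEAR SDK API.
struct Ledger {
    balances: HashMap<&'static str, i64>,
}

impl Ledger {
    fn new() -> Self {
        let mut balances = HashMap::new();
        balances.insert("alice", 100);
        balances.insert("bob", 0);
        balances.insert("carol", 0);
        Self { balances }
    }

    // Stands in for an `await`ed cross-contract transfer: it commits its
    // effect immediately, before the caller knows whether later calls succeed.
    fn transfer(&mut self, from: &'static str, to: &'static str, amount: i64) -> Result<(), String> {
        if self.balances[from] < amount {
            return Err(format!("{from} has insufficient funds"));
        }
        *self.balances.get_mut(from).unwrap() -= amount;
        *self.balances.get_mut(to).unwrap() += amount;
        Ok(())
    }
}

// Two sequential "awaited" calls: if the second fails, the first is NOT
// rolled back -- async/await alone gives no atomicity.
fn pay_two(ledger: &mut Ledger) -> Result<(), String> {
    ledger.transfer("alice", "bob", 60)?;
    ledger.transfer("alice", "carol", 60)?; // fails: only 40 left
    Ok(())
}
```

The developer is left to write the compensation logic (refunding bob) by hand, which is exactly the error-prone recovery path described under the security bullet.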

Interoperability

Synchronous/blocking contracts are highly interoperable. Most of the time you only need to know the interface of another contract to interop with it. With async contracts, you also need to know the state of the other contract, and even understand its internal state machine. Moreover, an interface can be statically enforced, which prevents incorrectly constructed interop from being deployed, while a state machine interface cannot.

General Direction

A large number of web2 developers do not use explicit concurrency in their code today, except for decorating some of it with async/await keywords. And there is a smaller group of developers with a deep understanding of concurrency who take care of the projects that require it. NEAR developers, on the other hand, are almost universally required to work with concurrency via callbacks and state machines. The NEAR developer experience needs to match the distribution of this skillset among web2 developers for seamless developer adoption. Therefore, NEAR needs one or more solutions that allow a large number of developers to build on NEAR without working with concurrency, while still reserving tools for concurrency experts to build highly scalable applications.

Proposed Solutions

Fortunately, there are multiple ways to implement a synchronous experience in an asynchronous environment. We split the proposed solutions into several categories:

  • Emulation of a sync environment on top of the async environment;
  • Extensions of the async environment with sync functionality.

Emulations

SDK lock

When a contract has the #[lockable] decorator, the SDK automatically creates a lock for the contract that prevents it from being interacted with until the callback to this contract is resolved. All cross-contract calls are forced to have callbacks, and if there are multiple cross-contract calls, their callbacks are merged into one using the promise_and host function. The contract fails if it is in the “locked” state and is called by anything other than the callback. If any of the callbacks indicate contract failure, the state is reverted in the callback. The problem of callbacks failing because they run out of gas is solved by always writing state into temporary storage first, which gets committed on the callback or abandoned when the callback times out.

Also, instead of locking the entire state of the contract, the lock might cover a subset of keys predefined by the developer, e.g. #[lockable(key1, key2, ...)]. Carefully implemented contracts might be able to run in parallel if they store user data under non-overlapping keys.
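As an illustration, here is a minimal sketch of the state machine such a #[lockable] macro might expand to, with a plain HashMap standing in for contract storage. All names are hypothetical; this is not the SDK's actual expansion.

```rust
use std::collections::HashMap;

// Illustrative model of what a #[lockable] expansion could maintain:
// real storage, a buffer of tentative writes, and a lock flag.
#[derive(Default)]
struct LockableContract {
    storage: HashMap<String, String>,
    pending: HashMap<String, String>, // tentative writes, committed on callback
    locked: bool,
}

impl LockableContract {
    // Entry-point guard: every public method would start with this check.
    fn begin_call(&mut self) -> Result<(), &'static str> {
        if self.locked {
            return Err("contract is locked until the pending callback resolves");
        }
        Ok(())
    }

    // Making a cross-contract call locks the contract; writes made so far
    // in this call go to temporary storage, not the real state.
    fn cross_contract_call(&mut self, key: &str, value: &str) {
        self.pending.insert(key.to_string(), value.to_string());
        self.locked = true;
    }

    // The merged promise_and callback either commits or abandons the writes.
    fn on_callback(&mut self, all_succeeded: bool) {
        if all_succeeded {
            self.storage.extend(self.pending.drain());
        } else {
            self.pending.clear(); // revert
        }
        self.locked = false;
    }
}
```

The key property: between `cross_contract_call` and `on_callback`, every other entry point fails fast, which is exactly the "contract will fail if it is in locked state" behavior described above.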

Pros:

  • Partially addresses the atomicity need. In the chain of contract calls A->B->C->D, A and B are reverted if C or D fails, while C and D are not reverted if A or B fail on callbacks;
  • Does not require protocol changes;
  • Can be deprecated from the SDK while old dApps continue working, which means we can iterate on its design as much as we want;
  • Works for all contracts, including those on different shards;

Cons:

  • Does not address simplicity and interoperability needs;
  • It is a gray box for developers – they still need to understand at a surface level what is happening;
  • Off-chain code needs to differentiate transactions that fail because of the lock;
  • During congestion, many transactions will consume gas but time out on their callbacks;

Wasm suspension

We add a host function that exits the contract, records its Wasm memory and state diff into temporary storage, and waits for a certain callback to be resolved, upon which it resumes Wasm execution. On the SDK side, developers use a simple blocking interface to another contract, which behind the scenes performs the suspension and promises/callbacks. When the contract interacts with state key-values, it needs to grab either a read or a write reference, which gets dropped on the callback. If another transaction tries to grab a write reference to a key that has a live read or write reference, it will fail.

To partially address the situation when a transaction arrives at the wrong time and tries to grab a write reference while a suspended contract holds a reference to the same key, instead of failing this transaction immediately we have two options:

  • Put the receipt back into the delayed receipt queue, but deduct the burned gas;
  • Require such contracts to declare the keys they are going to touch so that we don’t schedule them for execution while a competing reference is held by a suspended contract. This will potentially create livelocks, though.
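A toy model of the per-key reference tracking described above (types and names are hypothetical; the real bookkeeping would live in the runtime): multiple readers may coexist across suspension points, a writer needs exclusivity, and references are released when the suspended contract's callback resolves.

```rust
use std::collections::HashMap;

// Hypothetical per-key reference state held while a contract is suspended.
#[derive(Clone, Copy, PartialEq, Debug)]
enum KeyRef {
    Read(u32), // number of live readers
    Write,
}

#[derive(Default)]
struct RefTable {
    refs: HashMap<String, KeyRef>,
}

impl RefTable {
    // Multiple readers may coexist; a live writer excludes everyone.
    fn try_read(&mut self, key: &str) -> bool {
        match self.refs.get_mut(key) {
            None => { self.refs.insert(key.to_string(), KeyRef::Read(1)); true }
            Some(KeyRef::Read(n)) => { *n += 1; true }
            Some(KeyRef::Write) => false,
        }
    }

    // A writer needs exclusive access; on conflict the caller either fails
    // the transaction or puts the receipt back into the delayed queue.
    fn try_write(&mut self, key: &str) -> bool {
        if self.refs.contains_key(key) {
            return false;
        }
        self.refs.insert(key.to_string(), KeyRef::Write);
        true
    }

    // Called when the suspended contract's callback resolves.
    fn release(&mut self, key: &str) {
        let remove = match self.refs.get_mut(key) {
            Some(KeyRef::Read(n)) if *n > 1 => { *n -= 1; false }
            Some(_) => true,
            None => false,
        };
        if remove {
            self.refs.remove(key);
        }
    }
}
```

This is essentially multi-reader/single-writer locking, except that the "critical section" spans a suspension point, which is why conflicting transactions must fail or be delayed rather than block.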

Pros:

  • Addresses atomicity, simplicity, and interoperability needs, since it will look exactly like regular sync function calls;
  • Works for all contracts, including those on different shards;
  • Only changes the Contract Runtime; the Transaction Runtime is unchanged, except for taking care of the state diff;
  • Productionizing Wasm suspension tech might be overall beneficial for improving quality and understanding of our Wasm VM;

Cons:

  • Requires production-quality Wasm suspension tech;
  • Modifies Contract Runtime;
  • Will cost >2x more gas to write into the state and suspend Wasm execution;
  • Off-chain code needs to differentiate transactions that fail because they attempted to grab a reference at a wrong time;

Notes

Because they don’t use a scheduler, both approaches force us to make an unfortunate choice:

  • Either some transactions will fail because they arrive at the wrong time;
  • Or the design will allow some form of livelock.

Extensions

Wasm in Wasm

The idea, suggested by @illia , is to add a host function that allows calling another contract, in a blocking way, without terminating the current contract. This, however, requires the contracts to be guaranteed to be on the same shard. A potential solution is an enclave that can host multiple contracts with different states and cannot be split by resharding. An additional feature of such an enclave is an access control system for state and contract code, which allows the code and state of one contract to be accessed by another contract.

We can choose to either preserve the existing cross-contract API and require people to still use promises/callbacks in the SDK, or add a new host function that allows a classic blocking call.
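As a toy model of the blocking variant (all names hypothetical; real contracts would be Wasm modules, not closures), a host function like this would run the callee to completion while the caller stays live, returning the result in-line:

```rust
use std::collections::HashMap;

// Contracts in the same enclave, modeled as plain function pointers that
// can themselves make further blocking calls through the enclave.
type Contract = fn(&mut Enclave, i64) -> i64;

#[derive(Default)]
struct Enclave {
    contracts: HashMap<&'static str, Contract>,
}

impl Enclave {
    fn deploy(&mut self, name: &'static str, code: Contract) {
        self.contracts.insert(name, code);
    }

    // The hypothetical blocking host function: the caller's "module" stays
    // live on the stack while the callee executes, which is exactly the
    // nesting that increases stack depth in the real implementation.
    fn wasm_call(&mut self, name: &str, arg: i64) -> i64 {
        let code = self.contracts[name];
        code(self, arg)
    }
}
```

Note how nested calls stack up: a callee can call `wasm_call` again, so the runtime must keep multiple module instances alive at once, which is the resource-interference concern listed in the cons below.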

Pros:

  • Atomicity and Interoperability;
  • Simplicity – depending on cross-contract API;
  • No new Wasm tech;

Cons:

  • Might create complex resource interference and removes some optimizations. During transaction execution we would need to have multiple Wasmer modules running at the same time. Each Wasm module requires resource allocation, which makes it difficult to argue whether shards would be forced to compete for resources. Also, we won’t be able to do performance optimizations like module pre-loading or reusing one Wasm memory for all contracts, since we won’t know ahead of time which Wasm modules need to be loaded;
  • Breaks “no collocation incentives invariant”, see below;
  • Requires defining a new type of collocated state with an access control system, which will increase the complexity of the runtime;
  • Does not work for all contracts;

Wasm after Wasm

Similar to “Wasm in Wasm”, we could introduce an enclave with the new type of state and access control system. However, in this solution we allow contracts to schedule a contract call in the regular way but indicate that it needs to be executed immediately after the current contract, in the same block. This will fail for contract calls outside the enclave. However, it can be extended to work with all types of receipts, not just contract calls. The benefit is that we still execute one contract at a time, and from the perspective of systems like Explorer/DevConsole there will be no modification to the receipts.
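A minimal sketch of the scheduling rule, assuming a hypothetical `same_block` flag on receipts: flagged receipts jump ahead of everything already queued so they run immediately after the current contract, while regular receipts are queued as usual.

```rust
use std::collections::VecDeque;

// Toy receipt with the hypothetical "execute in the same block" flag.
#[derive(Debug, PartialEq)]
struct Receipt {
    contract: &'static str,
    same_block: bool,
}

// Contracts are still executed one at a time, in queue order; the flag only
// changes where the new receipt lands in the queue.
fn schedule(queue: &mut VecDeque<Receipt>, receipt: Receipt) {
    if receipt.same_block {
        queue.push_front(receipt); // runs next, before anything already queued
    } else {
        queue.push_back(receipt); // regular async receipt
    }
}
```

Because execution stays strictly sequential, systems that consume receipts (Explorer/DevConsole) see an ordinary stream of receipts, just with different ordering.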

Pros:

  • Atomicity;
  • No new Wasm tech;

Cons:

  • Does not fix contract development simplicity or interoperability;
  • Requires defining a new type of collocated state with an access control system, which will increase the complexity of the runtime;
  • Breaks “no collocation incentives invariant”, see below;
  • Does not work for all contracts;

Intra-Shard Scheduler

We could allow one or all shards to run multiple runtimes concurrently, which would allow executing multiple cross-contract calls within the same block, as long as they fit into the attached gas limit. Unfortunately, this requires developing a complex scheduling mechanism and requiring transactions to be annotated with the key-values they will touch during execution. Since contracts would need guaranteed collocation on the same shard, we would also need to implement some form of enclave or pinning, similar to Wasm in/after Wasm.

Pros:

  • Atomicity;
  • Increases single chunk capacity;
  • No new Wasm tech;

Cons:

  • Requires scheduler and transaction annotations;
  • Does not fix contract development simplicity or interoperability;
  • Increases hardware requirements;
  • Requires defining a new type of collocated state with an access control system, which will increase the complexity of the runtime;
  • Breaks “no collocation incentives invariant”, see below;
  • Does not work for all contracts;

Notes

Runtime Invariant

The current NEAR runtime was designed with an important invariant in mind – there is no financial or other incentive to collocate with some smart contract on a shard. The way it processes transactions and receipts explicitly enforces this invariant. This allows two things:

  • Future-proofing. We can indefinitely iterate on protocol designs without being afraid of breaking some contracts apart. This allows aggressive strategies around resharding and shard size reductions, which can lead to performance optimizations and/or reductions in hardware requirements;
  • No perverse incentives. We decided to completely eliminate the need to think of scenarios where projects would fight to be collocated with some popular project, like an exchange. The game-theoretic scenarios might get quite complex, and we don’t want to deal with this extra complexity.

The Wasm-in-Wasm, Wasm-after-Wasm, and Intra-Shard Scheduler solutions require contracts to be collocated together, which breaks this invariant.

Pinning and Enclave

A potentially more lightweight alternative to an enclave with an access control system is allowing contracts to “pin” themselves to another contract upon deployment. Implementation-wise it is easier to do, since we only need to position the contract under the state trie of the other contract. However, it would still require modifications to the Runtime code, and it would still break the invariant. On the positive side, we could later roll out an access control system incrementally on top of it.

Comparison

| Qualities | SDK lock | Wasm suspension | Wasm in Wasm | Wasm after Wasm | Intra-Shard Scheduler |
| --- | --- | --- | --- | --- | --- |
| Implementation Complexity | High | Medium | High | Medium | Very High |
| Atomicity | :white_check_mark:* | :white_check_mark: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Code Simplicity | :x: | :white_check_mark: | :white_check_mark:/:x: | :x: | :x: |
| Interoperability | :x: | :white_check_mark: | :white_check_mark:/:x: | :x: | :x: |
| No Transaction Runtime Modifications | :white_check_mark: | :white_check_mark:* | :x: | :x: | :x: |
| No Contract Runtime Modifications | :white_check_mark: | :x: | :x: | :white_check_mark: | :white_check_mark: |
| No New Transaction Failures | :x: | :x: | :white_check_mark: | :white_check_mark: | :white_check_mark: |
| Works for All Contracts | :white_check_mark: | :white_check_mark: | :x: | :x: | :white_check_mark: |
| No Contract Code Modifications | :x: | :x: | :x:/:white_check_mark: | :white_check_mark:* | :white_check_mark: |
| Preserves Runtime Invariant | :white_check_mark: | :white_check_mark: | :x: | :x: | :x: |
| Increases Shard Capacity | :x: | :x: | :x: | :x: | :white_check_mark: |
| Does Not Increase Hardware Requirements | :white_check_mark: | :white_check_mark: | :x:* | :white_check_mark: | :x: |

Legend:

  • Asterisk means “almost”.
  • :white_check_mark:/:x: – depends on whether we keep the Promise API or add a blocking API for the Wasm in Wasm solution.

Call for Action

Before we select a solution (or several), turn it into a NEP, and enter the quality control process, please contribute your thoughts and ideas in any form in this thread.


Thanks for the detailed list of options and the comparison.

I would like to make a few points:

  • We already have a runtime invariant failure due to Aurora. Given how JS contracts are done via a VM, I’m expecting more JS containers to be launched, which allow new JS code to be side-loaded into an existing contract. So I would generally start thinking of NEAR’s contracts more as a “chain”.
  • The reason why I liked WASM in WASM is the simplicity of implementation, so I’m surprised by the high complexity rating. I understand that some weird interplay can come in around performance, but the idea was to just implement a basic host function and that’s all – everything else moves into the container contract. I actually think we should prototype this and see the real performance of this approach.
  • “WASM after WASM” combined with a basic “WASM suspension” would give the code simplicity of “WASM in WASM”’s blocking calls. WASM after WASM would indeed be a more proper implementation of WASM in WASM, although way more complex.

Thanks for the feedback! It got me thinking, and it looks like we might not need to do anything besides building more language SDKs that work like enclaves. Here is how I arrived at it:

Why do we need to use enclaves

@illia you can skip this section.

In short, unless we do optimistic execution or do fundamental research on inter-chain schedulers, we are forced to use enclaves.

Optimistic execution (e.g. SDK lock or Wasm suspension that can fail if a transaction arrives at the wrong time) is not great, because even if some off-chain code can automatically re-execute the transaction, this unpredictable behavior will still be visible to developers, which is very unpleasant. The only way to guarantee that execution finishes once it has started is to have it fully done under a lock or by a scheduler. If we want execution to be shard-agnostic, then the lock/scheduler needs to span all shards. A lock spanning all shards will likely interfere with NEAR’s scalability, while a scheduler that works across shards will require significant research.

So if we don’t want optimistic execution, the lock/scheduler will need to work inside individual shards only. Unfortunately, we cannot define the scope of the lock/scheduler through shard boundaries, because they will be dynamically adjusted through resharding, and the definition of a shard boundary is too low-level to be exposed to developers. That means something needs to declare which contracts share a common lock/scheduler. Selecting contracts for sharing a lock/scheduler requires understanding the business logic, which means it needs to be done by the developers.

Let’s call a setup where contracts share a lock/scheduler and can collectively execute together “an enclave”.

Contract-level enclaves leave no use cases for protocol-level enclaves

In short, contract-level enclaves solve all the issues protocol-level enclaves can solve, except for developers using compiled languages like Rust.

Right now I can think of several ways we could design an enclave:

  • Protocol-level enclaves (Wasm in Wasm or Wasm after Wasm):
    • Nested contracts with a separate access-control system between contracts’ states (the original Wasm in Wasm idea);
    • Nested contracts without an access-control system. What I called “pinning” contracts to each other, which just prevents them from being moved into different shards;
  • Contract-level enclaves:
    • An EVM-style large smart contract that hosts contracts written in an interpretable language and emulates a nested state trie at the contract level.

It is likely that we will end up with multiple language-specific enclaves as we develop SDKs for interpretable languages like JS and Python. Given that these languages will also be used for introducing developers to the NEAR blockchain, they would benefit from allowing developers to use blocking synchronous calls, so they will naturally become contract-level enclaves. We can go as far as teaching JS/Python developers to use sync calls by default and async contract calls as an advanced feature.

Assuming that we are going to have lots of contract-level enclaves, I am struggling to come up with a scenario where protocol-level enclaves would be useful. It is unlikely that we will be able to have a solution that allows moving contracts in, out, and between enclaves, because it would be too complex. So if someone deploys a popular contract (e.g. an exchange), it won’t matter where it lives, since all contracts from all enclaves (except maybe the one that hosts it) would need to use async to interact with it. So the only use case where protocol-level enclaves would be useful is Rust/C/C++/Kotlin developers who want to interact synchronously with other Rust/C/C++/Kotlin developers. That might not be a broad user group, since a lot of DeFi will exist on EVM and lots of web2 tools might exist in JS/Python enclaves. Plus, many Rust/C/C++/Kotlin developers might actually be comfortable enough with async.

Assuming the above use case is important enough, we can allow an SDK-based enclave for Rust/C/C++/Kotlin by having an execute_wasm host function, but without the special protocol-level state layout or access-control system. Basically, since we cannot interpret Rust/C/C++/Kotlin directly, we can run them in Wasm form, similar to how an EVM contract runs compiled Solidity instead of Solidity itself.

An updated proposed solution

So I suggest we focus on building language-specific SDK-based enclaves for languages like JS and Python. We would also do research with the community to understand whether there is a need for Rust/C/C++/Kotlin enclaves, and if there is, introduce the execute_wasm host function without the special state layout or access-control system first.

A short note on implementation complexity

The problem with the runtime is that we need to keep its code extremely simple, or otherwise we are going to spend most of our time debugging fees and why they don’t correspond to the execution time (we already have multiple people working on just that); we also need to keep the spec simple, because even with the current spec we used to have hard-to-find bugs. So nesting Wasm modules instead of executing them sequentially is more complex because it increases the stack depth. Also, extending the state structure and adding an access control system complicates the runtime, because right now state is the least diagnosable part of the runtime.


Have you had a look at Binaryen’s Asyncify (Pause and Resume WebAssembly with Binaryen’s Asyncify)?

It very much addresses what is needed to implement Wasm suspension (which I think sounds like a good idea here). It’s a Binaryen pass, so it should be possible to use for any Wasm binary.

We also use this in wasm-git. The code inside the wasm can be written as synchronous code (which was essential for being able to port a library like libgit2 to wasm), while the consuming JavaScript client can call the wasm code with async/await.


I would recommend going for the lowest-hanging fruit first, and I believe that in the long term multiple of these approaches may become real. Without deep research, I suspect the lowest-hanging fruit is #[lockable] and other uses of Rust features to improve the developer experience, instead of manually managing state rollbacks.

While web2 developers may not be familiar with concurrent programming, smart contract programming demands much more from the developer in the first place. It is fair to assume that developers need to study and hone their skills in any case before they can do smart contract programming. Thus, I believe the tools that give developers who are experienced in the environment the fastest impact should go first – especially tools that make reasoning about and auditing these code paths easier, e.g. some sort of special static analyzer if Rust macros cannot pull it off alone.


Do we have a small model example we can evaluate solutions against? Some sort of litmus test for atomicity would be useful.


When discussing suspension/async, it’s important to realize that what we want here is different from typical async/await use cases. With async/await (or threads) the process is suspended, but alive, at the suspension point. For cross-contract calls, my understanding is that the computer can literally be rebooted while one is .awaiting some promise. So, Asyncify can only be a part of the solution; we still need a runtime bit which would save the contract’s address space to disk and load it back.


At first glance, WASM suspension feels like something prohibitively expensive – to suspend WASM, we need to save the contract’s memory, and that is 64MB today.


Another good async pattern to keep in mind is history replay. Rather than saving/restoring 64 megabytes of memory, you can store just the inputs that the contract read, and restore the state of the contract by replaying it. This pattern is actually used rather successfully by some web2 devs – this is how forms in Django work. Here’s an example of the U in CRUD in Django:

import datetime

from django.http import HttpResponseRedirect
from django.shortcuts import get_object_or_404, render
from django.urls import reverse

# App-specific imports; module paths assumed for completeness:
from catalog.forms import RenewBookForm
from catalog.models import BookInstance

def renew_book_librarian(request, pk):
    book_instance = get_object_or_404(BookInstance, pk=pk)

    if request.method == 'POST':
        form = RenewBookForm(request.POST)
        if form.is_valid():
            book_instance.due_back = form.cleaned_data['renewal_date']
            book_instance.save()
            return HttpResponseRedirect(reverse('all-borrowed'))
    else: # 'GET'
        proposed_renewal_date = datetime.date.today() + datetime.timedelta(weeks=3)
        form = RenewBookForm(initial={'renewal_date': proposed_renewal_date})

    context = { 'form': form, 'book_instance': book_instance }
    return render(request, 'catalog/book_renew_librarian.html', context)

This view handles both GET and POST. The asynchronous flow is that the browser first sends a GET request for a specific book and gets the form back. Some time later, the user fills in the form, clicks Submit, and the browser hits the same URL with POST. Note how the book_instance = get_object_or_404(BookInstance, pk=pk) statement is re-executed, which creates a form of optimistic locking – if the book was deleted in the meantime, the request fails.

So, I think something like #[locked] can actually be enhanced to give good, blocking ergonomics at the SDK level. Essentially, rather than the runtime saving the literal memory of the contract, the SDK can save the results of all host function calls and “replay” them when the promise is resolved. Combined with something like Asyncify, I think this can even avoid the replay. Basically, rather than the runtime saving the state of the contract, the WASM can save the state of its own local variables to storage itself.
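The record/replay idea can be sketched as follows (a hypothetical SDK-side helper; all names invented): host-call results are recorded on the first run and replayed after the promise resolves, so re-execution deterministically reaches the same suspension point without performing real host calls again.

```rust
// Hypothetical SDK-side log of host-call results. On the first execution
// every host call is performed and its result recorded; on resume the method
// re-runs from the top and recorded results are replayed in order.
#[derive(Default)]
struct HostLog {
    recorded: Vec<i64>,
    cursor: usize,
}

impl HostLog {
    // Wraps a host call: replay a recorded result if we have one,
    // otherwise perform the call and record its result.
    fn host_call(&mut self, perform: impl FnOnce() -> i64) -> i64 {
        if self.cursor < self.recorded.len() {
            let v = self.recorded[self.cursor];
            self.cursor += 1;
            v
        } else {
            let v = perform();
            self.recorded.push(v);
            self.cursor += 1;
            v
        }
    }

    // Called on resume, before re-executing the method from the top.
    fn rewind(&mut self) {
        self.cursor = 0;
    }
}
```

This only works if the contract method is deterministic given the host-call results, which is exactly the property the replay pattern relies on in the Django example above.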

:thinking: actually, I think we can save contract memory from within the contract itself today? We can do something like:

// Pseudocode in terms of NEAR host functions: copy the whole linear memory
// into a register, then persist the register contents under a storage key.
fn save_memory() {
    let memory_start = 0;
    let memory_len = 16 * 1024 * 1024; // 16MB
    write_register(REG_ID, memory_len, memory_start);
    storage_write(STATE_KEY, REG_ID);
}

fn restore_memory() {
    let memory_start = 0;
    let memory_len = 16 * 1024 * 1024; // 16MB
    storage_read(STATE_KEY, REG_ID);
    read_register(REG_ID, memory_len, memory_start);
}

Hi @nearmax , @illia , and the rest of the team!

I just started as a senior software engineer on the Education team at NEAR Foundation, and I’m glad Jacob Lindahl shared this post with me.

Although I don’t have opinions yet on the details of what you’re grappling with here, I just wanted to say hi and that I’m impressed with your thinking here and agree that these “developer needs” are worth considering (so it’s great that this discussion is happening).

Part of the discussion feels similar to trade-offs I’ve heard about Rust vs AssemblyScript (in an effort to be accessible to more developers).

I’m eager to learn more in the coming days and explore how I can help expand NEAR’s ecosystem! :muscle:

Asyncify works very much like the history replay in your Django form example, so there should be no need to store the entire contract’s memory. Storing the Asyncify stack after unwinding should be enough, and then just restoring it before rewinding.

When calling a wasm function with Asyncify, it will return when it reaches the first async call from within the wasm. At that point we will save the stack, and we can even shut down that wasm instance (and the computer).

We will then call our remote service, and after we get the result, we will create a new wasm instance for our first contract, restore the stack, and call the function without input parameters this time. When the wasm now encounters the async call, it should receive the result of the remote service, and when the wasm exits it contains the final result.

I’ve modified the example from the Asyncify blog to demonstrate separate Wasm instances and just storing/restoring the stack. See the actual flow in the bottom async function. Not very pretty, but I hope you get the basic idea:

import binaryen from 'binaryen';

// Create a module from text
// It adds the two first parameters, then calls the remote service which adds again (see below), and finally add the 3rd parameter
const ir = binaryen.parseText(`
    (module
        (memory 1 1)
        (import "env" "remoteService" (func $remoteService (param i32) (result i32)))
        (export "memory" (memory 0))
        (export "main" (func $main))
        (func $main (param $0 i32) (param $1 i32) (param $2 i32) (result i32)            
            (i32.add 
                (local.get $2)
                (call $remoteService
                    (i32.add (local.get $0) (local.get $1))
                )
            )
        )
    )
`);

// A remote service that adds to the input value
async function remoteService(inputValue) {
    console.log('in remote service');
    await new Promise(r => setTimeout(r,50));
    return inputValue + 16;
}

// Run the Asyncify pass, with (minor) optimizations.
binaryen.setOptimizeLevel(1);
ir.runPasses(['asyncify']);

// Get a WebAssembly binary and compile it to an instance.
const binary = ir.emitBinary();
const compiled = new WebAssembly.Module(binary);

function runWasmInstance(param1,param2,param3,stackData,remoteResult) {
    let firstResult;
    const instance = new WebAssembly.Instance(compiled, {
        env: {
            remoteService: function (remoteServiceInput) {
                if (!stackData) {
                    // We are called in order to start a remote execution/unwind.
                    console.log('pause...');
                    // Fill in the data structure. The first value has the stack location,
                    // which for simplicity we can start right after the data structure itself.
                    view[DATA_ADDR >> 2] = DATA_ADDR + 8;
                    // The end of the stack will not be reached here anyhow.
                    view[DATA_ADDR + 4 >> 2] = 1024;
                    wasmExports.asyncify_start_unwind(DATA_ADDR);

                    firstResult = remoteServiceInput;
                } else {
                    // We are called as part of a resume/rewind.
                    console.log('...resume');
                    wasmExports.asyncify_stop_rewind();
                    return remoteResult;
                }
            }
        }
    });
    const wasmExports = instance.exports;
    const view = new Int32Array(wasmExports.memory.buffer);

    // Global state for running the program.
    const DATA_ADDR = 16; // Where the unwind/rewind data structure will live.

    // Run the program. When it pauses control flow gets to here, as the
    // stack has unwound.
    if (stackData) {
        console.log('starting to rewind the stack');
        const stackView = new Uint8Array(wasmExports.memory.buffer);
        stackView.set(new Uint8Array(stackData), DATA_ADDR);

        wasmExports.asyncify_start_rewind(DATA_ADDR);
        // The code is now ready to rewind; to start the process, enter the
        // first function that should be on the call stack.
        const finalResult = wasmExports.main();
        return {result: finalResult};
    } else {
        wasmExports.main(param1, param2, param3);
        console.log('stack unwound');
        wasmExports.asyncify_stop_unwind();
        return {
            result: firstResult,
            stack: wasmExports.memory.buffer.slice(DATA_ADDR, view[DATA_ADDR >> 2])
        };
    }
}

(async function() {
    try {
        // First time give the input params
        const {result: firstResult, stack: stackData} = runWasmInstance(3,5,8);
        console.log('Result from first part of wasm', firstResult);

        // Call the remote service
        const remoteResult = await remoteService(firstResult);
        console.log('Result from remote service', remoteResult);

        // When resuming we don't need the input parameters, just the stack data and the remote service result
        const {result: finalResult} = runWasmInstance(null,null,null,stackData,remoteResult);
        console.log('Final result after resuming wasm', finalResult);
    } catch(e) {
        console.error(e);
    }
})();

Yeah…

And the cool bit, I think, is that the part where we restore the state can presumably be done from within the contract itself, as it has an API for “load data from the database into memory”.

That is, in terms of the table in the OP, it seems that WASM suspension can be implemented without modifications to the contract runtime.


yeah, even better when you’re able to load data from within the contract itself.

when it comes to globals, changes in these might need to be tracked ( https://twitter.com/kripken/status/1479225903280721921?s=20 ), but I think this is not so much the case with smart contracts.

and great if this can be implemented without changes in the contract runtime.

Just want to mention there is also the Cytonic idea from @alexatnear that essentially builds a synchronous execution layer on the smart contract level without modifications to the protocol itself.

I wouldn’t necessarily downplay the importance of catering to these group of developers. After all, we still have large defi projects developed natively, such as Ref finance and Metapool. The Metapool incident that happened last year was, I believe, related to deadlock caused by asynchronous execution. With more defi projects building on NEAR natively, it seems important to me that we also polish the developer experience for rust contract developers.