Minimalist async evaluation framework for R.

Lightweight parallel code execution and distributed computing.

Designed for simplicity, a ‘mirai’ evaluates an R expression asynchronously, on local or network resources, resolving automatically upon completion.

mirai() returns a ‘mirai’ object immediately. ‘mirai’ (未来 みらい) is Japanese for ‘future’.

Efficient scheduling over fast inter-process communications or secure TLS connections over TCP/IP, built on ‘nanonext’ and ‘NNG’ (Nanomsg Next Gen).

{mirai} has a tiny pure R code base, relying solely on nanonext, a high-performance binding for the ‘NNG’ (Nanomsg Next Gen) C library with zero package dependencies.


Install the latest release from CRAN:

install.packages("mirai")

or the development version from rOpenSci R-universe:

install.packages("mirai", repos = "")

Quick Start

Use mirai() to evaluate an expression asynchronously in a separate, clean R process.

A ‘mirai’ object is returned immediately.


m <- mirai(
  {
    res <- rnorm(x) + y ^ 2
    res / rev(res)
  },
  x = 11,
  y = runif(1)
)
#> < mirai >
#>  - $data for evaluated result

Above, all specified name = value pairs are passed through to the ‘mirai’.
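As a minimal sketch of this passing mechanism (the values here are illustrative, not from the example above):

```r
library(mirai)

# each name = value pair becomes a variable in the mirai's environment
m <- mirai(x + y, x = 1, y = 2)

# wait for the result and collect it
call_mirai(m)$data
#> [1] 3
```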

The ‘mirai’ yields an ‘unresolved’ logical NA whilst the async operation is ongoing.

m$data
#> 'unresolved' logi NA

To check whether a mirai has resolved:

unresolved(m)
#> [1] FALSE

Upon completion, the ‘mirai’ resolves automatically to the evaluated result.

m$data
#>  [1]  -0.04026068  -1.92115491   0.17933997   0.69404292   0.01749486
#>  [6]   1.00000000  57.15965086   1.44083309   5.57600189  -0.52052023
#> [11] -24.83812992

Alternatively, explicitly call and wait for the result using call_mirai().

call_mirai(m)$data
#>  [1]  -0.04026068  -1.92115491   0.17933997   0.69404292   0.01749486
#>  [6]   1.00000000  57.15965086   1.44083309   5.57600189  -0.52052023
#> [11] -24.83812992


See the mirai vignette for full package functionality.

Key topics include:

  • Example use cases

  • Local daemons - persistent background processes

  • Distributed computing - remote daemons

  • Secure TLS connections

  • Serialization - registering custom functions

This may be accessed within R by:

vignette("mirai", package = "mirai")

Use with Parallel and Foreach

{mirai} provides an alternative communications backend for R’s base ‘parallel’ package.

cl <- make_cluster(4)
#> < miraiCluster >
#>  - cluster ID: `0`
#>  - nodes: 4
#>  - active: TRUE

make_cluster() creates a ‘miraiCluster’, fully compatible with ‘parallel’ functions such as parLapply() or clusterApply().
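For instance, a minimal sketch using the base ‘parallel’ API (the cluster size and inputs are illustrative):

```r
library(mirai)

cl <- make_cluster(2)

# standard 'parallel' API calls work on a 'miraiCluster'
parallel::parLapply(cl, 1:4, function(x) x^2)

stop_cluster(cl)
```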

A ‘miraiCluster’ may also be registered for use with the foreach package by doParallel.
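A sketch of such a registration, assuming the foreach and doParallel packages are installed:

```r
library(foreach)

cl <- mirai::make_cluster(2)

# register the miraiCluster as the parallel backend for foreach
doParallel::registerDoParallel(cl)

# %dopar% now dispatches iterations over the miraiCluster
foreach(i = 1:4, .combine = c) %dopar% i^2

mirai::stop_cluster(cl)
```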

This functionality fulfils a request from R-Core at R Project Sprint 2023.

Use with Crew and Targets

The crew package is a distributed worker-launcher extending {mirai} to different distributed computing platforms, from traditional clusters to cloud services.

crew.cluster is a plug-in that enables mirai-based workflows on traditional high-performance computing clusters using:

  • LSF
  • SGE

targets, a Make-like pipeline tool for statistics and data science, has integrated and adopted crew as its default recommended high-performance computing backend.

Use with Shiny and Plumber

{mirai} serves as a backend for enterprise asynchronous shiny or plumber applications.

A ‘mirai’ may be used interchangeably with a ‘promise’ by using the promise pipe %...>%, or explicitly via promises::as.promise(), allowing side-effects to be performed upon asynchronous resolution of a ‘mirai’.

The following example outputs “hello” to the console after one second when the ‘mirai’ resolves.


p <- mirai({Sys.sleep(1); "hello"}) %...>% cat()
#> <Promise [pending]>

Alternatively, crew provides an interface that facilitates deploying {mirai} for shiny.

Use with Torch

The custom serialization interface in {mirai} is accessed via the serialization() function.

In the case of torch, this would involve making the following call once at the start of your session:

serialization(refhook = list(torch:::torch_serialize, torch::torch_load))
#> [ mirai ] serialization functions registered
  • Note that torch_serialize() is available via ::: since torch v0.9.0, and will be exported in v0.12.0.

This allows tensors, including models, optimizers etc. to be used seamlessly across local and remote processes like any other R object.

For more details, please refer to the relevant vignette chapter.


We would like to thank in particular:

William Landau, for being instrumental in shaping development of the package, from initiating the original request for persistent daemons, through to orchestrating robustness testing for the high performance computing requirements of crew and targets.

Henrik Bengtsson, for valuable and incisive insights leading to the interface accepting broader usage patterns.

Luke Tierney, R Core, for introducing R’s implementation of L’Ecuyer-CMRG streams, used to ensure statistical independence in parallel processing.

Daniel Falbel, for discussion around an efficient solution to serialization and transmission of ‘torch’ tensors.


mirai website:
mirai on CRAN:

Listed in CRAN Task View:
- High Performance Computing:

nanonext website:
nanonext on CRAN:

NNG website:


Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.