Why is enquo + !! preferable to substitute + eval


In the following example, why should we favour using f1 over f2? Is it more efficient in some sense? For someone used to base R, it seems more natural to use the "substitute + eval" option.

library(dplyr)

d = data.frame(x = 1:5,
               y = rnorm(5))

# using enquo + !!
f1 = function(mydata, myvar) {
  m = enquo(myvar)
  mydata %>%
    mutate(two_y = 2 * !!m)
}

# using substitute + eval    
f2 = function(mydata, myvar) {
  m = substitute(myvar)
  mydata %>%
    mutate(two_y = 2 * eval(m))
}

all.equal(d %>% f1(y), d %>% f2(y)) # TRUE

In other words, and beyond this particular example, my question is: can I get away with programming dplyr's NSE functions using good ol' base R tools like substitute + eval, or do I really need to learn to love all those rlang functions because there is a benefit to it (speed, clarity, compositionality, ...)?


I want to give an answer that is independent of dplyr, because there is a very clear advantage to using enquo over substitute. Both look in the calling environment of a function to identify the expression that was given to that function. The difference is that substitute() does it only once, while !!enquo() will correctly walk up the entire calling stack.

Consider a simple function that uses substitute():

f <- function( myExpr ) {
  eval( substitute(myExpr), list(a=2, b=3) )
}

f(a+b)   # 5
f(a*b)   # 6

This functionality breaks as soon as the call is nested inside another function, because substitute() only looks one level up the call stack: inside f, it now captures the literal expression substitute(myExpr) rather than a + b:

g <- function( myExpr ) {
  val <- f( substitute(myExpr) )
  ## Do some stuff
  val
}

g(a+b)
# myExpr     <-- OOPS

Now consider the same functions re-written using enquo():

library( rlang )

f2 <- function( myExpr ) {
  eval_tidy( enquo(myExpr), list(a=2, b=3) )
}

g2 <- function( myExpr ) {
  val <- f2( !!enquo(myExpr) )
  val
}

g2( a+b )    # 5
g2( b/a )    # 1.5

And that is why enquo() + !! is preferable to substitute() + eval(). dplyr simply takes full advantage of this property to build a coherent set of NSE functions.

UPDATE: rlang 0.4.0 introduced a new operator {{ (pronounced "curly curly"), which is effectively a shorthand for !!enquo(). This allows us to simplify the definition of g2 to

g2 <- function( myExpr ) {
  val <- f2( {{myExpr}} )
  val
}
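For instance, the f1 from the question can be written without any explicit rlang calls. A minimal sketch, assuming dplyr is loaded and rlang >= 0.4 is installed (f1_curly is a hypothetical name):

# {{ myvar }} is equivalent to !!enquo(myvar)
f1_curly <- function(mydata, myvar) {
  mydata %>%
    mutate(two_y = 2 * {{ myvar }})
}

d %>% f1_curly(y)   # same result as d %>% f1(y)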



enquo() and !! also allow you to program with other dplyr verbs such as group_by and select. I'm not sure whether substitute and eval can do that. Take a look at this example, where I modify your data frame a little bit:

library(dplyr)

set.seed(1234)
d = data.frame(x = c(1, 1, 2, 2, 3),
               y = rnorm(5),
               z = runif(5))

# select, group_by & create a new output name based on input supplied
my_summarise <- function(df, group_var, select_var) {

  group_var <- enquo(group_var)
  select_var <- enquo(select_var)

  # create new name
  mean_name <- paste0("mean_", quo_name(select_var))

  df %>%
    select(!!select_var, !!group_var) %>% 
    group_by(!!group_var) %>%
    summarise(!!mean_name := mean(!!select_var))
}

my_summarise(d, x, z)

# A tibble: 3 x 2
      x mean_z
  <dbl>  <dbl>
1    1.  0.619
2    2.  0.603
3    3.  0.292
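With a recent rlang/dplyr, the same function can be sketched without enquo() or quo_name(), using {{ }} together with the glue-style naming that := supports. This is only a sketch under those version assumptions; my_summarise_curly is a hypothetical name:

my_summarise_curly <- function(df, group_var, select_var) {
  df %>%
    select({{ select_var }}, {{ group_var }}) %>%
    group_by({{ group_var }}) %>%
    # "mean_{{ select_var }}" expands to e.g. "mean_z"
    summarise("mean_{{ select_var }}" := mean({{ select_var }}))
}

my_summarise_curly(d, x, z)   # same output as my_summarise(d, x, z)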

Edit: enquos & !!! also make it easier to capture a list of variables

# example
grouping_vars <- quos(x, y)
d %>%
  group_by(!!!grouping_vars) %>%
  summarise(mean_z = mean(z))

# A tibble: 5 x 3
# Groups:   x [?]
      x      y mean_z
  <dbl>  <dbl>  <dbl>
1    1. -1.21   0.694
2    1.  0.277  0.545
3    2. -2.35   0.923
4    2.  1.08   0.283
5    3.  0.429  0.292
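A hypothetical variant of the same idea: when the grouping columns start out as character strings, rlang::syms() turns them into a list of symbols that can be spliced in exactly the same way:

# build the list of symbols from strings instead of quos()
grouping_vars <- rlang::syms(c("x", "y"))

d %>%
  group_by(!!!grouping_vars) %>%
  summarise(mean_z = mean(z))   # same output as above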


# in a function
my_summarise2 <- function(df, select_var, ...) {

  group_var <- enquos(...)
  select_var <- enquo(select_var)

  # create new name
  mean_name <- paste0("mean_", quo_name(select_var))

  df %>%
    select(!!select_var, !!!group_var) %>% 
    group_by(!!!group_var) %>%
    summarise(!!mean_name := mean(!!select_var))
}

my_summarise2(d, z, x, y)

# A tibble: 5 x 3
# Groups:   x [?]
      x      y mean_z
  <dbl>  <dbl>  <dbl>
1    1. -1.21   0.694
2    1.  0.277  0.545
3    2. -2.35   0.923
4    2.  1.08   0.283
5    3.  0.429  0.292
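As a side note, when the grouping variables are only forwarded to other tidy-eval verbs, the dots can simply be passed along without enquos()/!!! at all. A minimal sketch (my_summarise2b is a hypothetical name):

my_summarise2b <- function(df, select_var, ...) {
  select_var <- enquo(select_var)
  mean_name  <- paste0("mean_", quo_name(select_var))

  df %>%
    select(!!select_var, ...) %>%   # dots are forwarded as-is
    group_by(...) %>%
    summarise(!!mean_name := mean(!!select_var))
}

my_summarise2b(d, z, x, y)   # same output as my_summarise2(d, z, x, y)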

Credit: Programming with dplyr



Imagine there is a different x you want to multiply:

> x <- 3
> f1(d, !!x)
  x            y two_y
1 1 -2.488894875     6
2 2 -1.133517746     6
3 3 -1.024834108     6
4 4  0.730537366     6
5 5 -1.325431756     6

vs without the !!:

> f1(d, x)
  x            y two_y
1 1 -2.488894875     2
2 2 -1.133517746     4
3 3 -1.024834108     6
4 4  0.730537366     8
5 5 -1.325431756    10

!! gives you more control over scoping than substitute: you can decide whether myvar should refer to a variable in the calling environment or to a column of the data frame. With substitute, only the second behaviour comes easily.
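For example, the same mechanism lets you pass a column name stored as a string by first converting it to a symbol. A small sketch using rlang::sym() (col is a hypothetical variable):

col <- "y"
f1(d, !!rlang::sym(col))   # behaves like f1(d, y)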



To add some nuance, these things are not necessarily that complex in base R.

It is important to remember to use eval.parent() where relevant, so that substituted arguments are evaluated in the right environment; if you use eval.parent() properly, the expressions in nested calls will find their way. If you don't, you might discover environment hell :).

The base toolbox I use is made of quote(), substitute(), bquote(), as.call(), and do.call() (the latter being especially useful in combination with substitute()).

Without going into detail, here is how to solve the cases presented by @Artem and @Tung in base R, without any tidy evaluation, and then the last example once more, not using quo()/enquo() but still benefiting from splicing and unquoting (!!! and !!).

We'll see that splicing and unquoting make the code nicer (but require the functions to support them!), and that in the present cases using quosures doesn't improve things dramatically (though it arguably still helps).

solving Artem's case with base R
f0 <- function( myExpr ) {
  eval(substitute(myExpr), list(a=2, b=3))
}

g0 <- function( myExpr ) {
  val <- eval.parent(substitute(f0(myExpr)))
  val
}

f0(a+b)
#> [1] 5
g0(a+b)
#> [1] 5
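To see why eval.parent() matters, here is a hypothetical counterpart of g0() that forwards the expression the naive way; it runs into the same problem as @Artem's g():

g0_bad <- function( myExpr ) {
  f0( substitute(myExpr) )   # substitute() only reaches one level up
}

g0_bad(a+b)
#> myExpr                     # the bare symbol comes back, not 5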
solving Tung's 1st case with base R
my_summarise0 <- function(df, group_var, select_var) {

  group_var  <- substitute(group_var)
  select_var <- substitute(select_var)

  # create new name
  mean_name <- paste0("mean_", as.character(select_var))

  eval.parent(substitute(
  df %>%
    select(select_var, group_var) %>% 
    group_by(group_var) %>%
    summarise(mean_name := mean(select_var))))
}

library(dplyr)
set.seed(1234)
d = data.frame(x = c(1, 1, 2, 2, 3),
               y = rnorm(5),
               z = runif(5))
my_summarise0(d, x, z)
#> # A tibble: 3 x 2
#>       x mean_z
#>   <dbl>  <dbl>
#> 1     1  0.619
#> 2     2  0.603
#> 3     3  0.292
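For the curious, this is roughly the call that my_summarise0() builds before handing it to eval.parent() (a hypothetical peek that reproduces the substitution by hand):

substitute(
  df %>%
    select(select_var, group_var) %>%
    group_by(group_var) %>%
    summarise(mean_name := mean(select_var)),
  list(df = quote(d), group_var = quote(x),
       select_var = quote(z), mean_name = "mean_z")
)
#> d %>% select(z, x) %>% group_by(x) %>% summarise("mean_z" := mean(z))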
solving Tung's 2nd case with base R
grouping_vars <- c(quote(x), quote(y))
eval(as.call(c(quote(group_by), quote(d), grouping_vars))) %>%
  summarise(mean_z = mean(z))
#> # A tibble: 5 x 3
#> # Groups:   x [3]
#>       x      y mean_z
#>   <dbl>  <dbl>  <dbl>
#> 1     1 -1.21   0.694
#> 2     1  0.277  0.545
#> 3     2 -2.35   0.923
#> 4     2  1.08   0.283
#> 5     3  0.429  0.292
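The as.call() trick is easier to follow once you look at the call it constructs (a hypothetical peek):

as.call(c(quote(group_by), quote(d), grouping_vars))
#> group_by(d, x, y)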

in a function:

my_summarise02 <- function(df, select_var, ...) {

  group_var  <- eval(substitute(alist(...)))
  select_var <- substitute(select_var)

  # create new name
  mean_name <- paste0("mean_", as.character(select_var))

  df %>%
    {eval(as.call(c(quote(select),quote(.), select_var, group_var)))} %>% 
    {eval(as.call(c(quote(group_by),quote(.), group_var)))} %>%
    {eval(bquote(summarise(.,.(mean_name) := mean(.(select_var)))))}
}

my_summarise02(d, z, x, y)
#> # A tibble: 5 x 3
#> # Groups:   x [3]
#>       x      y mean_z
#>   <dbl>  <dbl>  <dbl>
#> 1     1 -1.21   0.694
#> 2     1  0.277  0.545
#> 3     2 -2.35   0.923
#> 4     2  1.08   0.283
#> 5     3  0.429  0.292

solving Tung's 2nd case with base R but using !! and !!!
grouping_vars <- c(quote(x), quote(y))

d %>%
  group_by(!!!grouping_vars) %>%
  summarise(mean_z = mean(z))
#> # A tibble: 5 x 3
#> # Groups:   x [3]
#>       x      y mean_z
#>   <dbl>  <dbl>  <dbl>
#> 1     1 -1.21   0.694
#> 2     1  0.277  0.545
#> 3     2 -2.35   0.923
#> 4     2  1.08   0.283
#> 5     3  0.429  0.292

in a function:

my_summarise03 <- function(df, select_var, ...) {

  group_var  <- eval(substitute(alist(...)))
  select_var <- substitute(select_var)

  # create new name
  mean_name <- paste0("mean_", as.character(select_var))

  df %>%
    select(!!select_var, !!!group_var) %>% 
    group_by(!!!group_var) %>%
    summarise(!!mean_name := mean(!!select_var))
}

my_summarise03(d, z, x, y)
#> # A tibble: 5 x 3
#> # Groups:   x [3]
#>       x      y mean_z
#>   <dbl>  <dbl>  <dbl>
#> 1     1 -1.21   0.694
#> 2     1  0.277  0.545
#> 3     2 -2.35   0.923
#> 4     2  1.08   0.283
#> 5     3  0.429  0.292
