Access Hive using the sparklyr package?

library(sparklyr)
library(dplyr)

# Point sparklyr at the cluster's Spark installation and connect through YARN
home <- "/usr/hdp/current/spark-client"
sc <- spark_connect(master = "yarn-client", spark_home = home, version = "1.6.2")

# Read a CSV file from HDFS into a Spark DataFrame
readFromSpark <- spark_read_csv(sc, name = "test", path = "hdfs://hostname/user/test.csv", header = TRUE)

I can already access HDFS successfully using sparklyr, but how do I access a Hive table (or run Hive commands) through sparklyr? I need to store this data frame in a Hive table.

AFAIK, sparklyr doesn't have a function to create a database/table directly, but you can use DBI to create one:

library(DBI)

# dbExecute runs the statement and returns the number of rows affected
iris_preview <- dbExecute(sc, "CREATE EXTERNAL TABLE...")
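For illustration, a fleshed-out statement might look like the sketch below. The database name, table name, columns, and HDFS location are all hypothetical placeholders, and the column layout has to match the actual format of your files:

# Hypothetical schema and LOCATION; adjust to match your data
dbExecute(sc, "
  CREATE EXTERNAL TABLE IF NOT EXISTS mydb.test_ext (
    id INT,
    name STRING
  )
  ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
  STORED AS TEXTFILE
  LOCATION 'hdfs://hostname/user/test'
")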


You can try spark_write_table:

    spark_write_table(readFromSpark, '<database_name>.readFromSpark', mode = 'overwrite')

If you also need to create the schema first, you can use the DBI package:

    # Create the schema, then make it the active database for this connection
    dbSendQuery(sc, "CREATE SCHEMA IF NOT EXISTS xyz")
    tbl_change_db(sc, "xyz")
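
Putting the two together, a minimal end-to-end sketch might look like this (the schema name xyz is a placeholder, and in_schema() comes from dbplyr):

    library(sparklyr)
    library(dplyr)
    library(dbplyr)
    library(DBI)

    # Create the target schema if it doesn't exist yet
    dbSendQuery(sc, "CREATE SCHEMA IF NOT EXISTS xyz")

    # Persist the Spark DataFrame as a Hive table inside that schema
    spark_write_table(readFromSpark, "xyz.readFromSpark", mode = "overwrite")

    # Lazily reference the Hive table back through dplyr
    hive_tbl <- tbl(sc, in_schema("xyz", "readFromSpark"))
    head(hive_tbl)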


This is how I achieve it:

The setup:

# RxSpark, rxSetComputeContext, and RxHiveData come from RevoScaleR
# (Microsoft R Server), used here alongside sparklyr
cc <- RxSpark(nameNode = hdfs_host(myADL))
rxSetComputeContext(cc)

myXDFname <- 'something'
hivTbl <- RxHiveData(table = myXDFname)

sc <- spark_connect(master = 'yarn-client')

# Cache the Hive table into Spark memory and get a dplyr reference to it
tbl_cache(sc, myXDFname)
mytbl <- tbl(sc, myXDFname)

Now do something with it:

mytbl %>% head

mytbl %>% 
   filter(rlike(<txt col>, pattern)) %>%
   group_by(something) %>%
   tally() %>%
   collect() %>% # this is important: it pulls the summary into local R memory
   ggplot(., aes(...)) + # note: ggplot2 layers are added with +, not %>%
   geom_triforce(...)
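
As a concrete (entirely hypothetical) version of that pipeline, suppose the table has a text column comment and a grouping column category; every name and pattern below is made up for illustration:

library(dplyr)
library(ggplot2)

counts <- mytbl %>%
  filter(rlike(comment, "error")) %>%  # regex match, translated to Spark SQL RLIKE
  group_by(category) %>%
  tally() %>%
  collect()                            # bring the small summary table into local R

ggplot(counts, aes(x = category, y = n)) +
  geom_col()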


Comments
  • Try this: df_tbl <- copy_to(sc, readFromSpark, "yourTableName")
  • @JaimeCaffarel I don't want to keep that df as df_tbl. I want to save readFromSpark into a Hive table, so I need to create the database and table first; then I can put readFromSpark into Hive.
  • Great, but how do you put an sdf that's been registered, or one that's been cached using tbl_cache, into Hive as the EXTERNAL TABLE you're suggesting here? I don't see any instructions for creating a table from a local object that sparklyr and dplyr can manipulate in memory.