value toDF is not a member of org.apache.spark.rdd.RDD

Exception:

val people = sc.textFile("resources/people.txt").map(_.split(",")).map(p => Person(p(0), p(1).trim.toInt)).toDF()
value toDF is not a member of org.apache.spark.rdd.RDD[Person]

Here is the TestApp.scala file:

package main.scala

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
import org.apache.spark.sql.SQLContext


case class Record1(k: Int, v: String)


object RDDToDataFramesWithCaseClasses {

    def main(args: Array[String]) {
        val conf = new SparkConf().setAppName("Simple Spark SQL Application With RDD To DF")

        // sc is an existing SparkContext.
        val sc = new SparkContext(conf)

        val sqlContext = new SQLContext(sc)

        // this is used to implicitly convert an RDD to a DataFrame.
        import sqlContext.implicits._

        // Define the schema using a case class.
        // Note: Case classes in Scala 2.10 can support only up to 22 fields. To work around this limit,
        // you can use custom classes that implement the Product interface.
        case class Person(name: String, age: Int)

        // Create an RDD of Person objects and register it as a table.
        val people = sc.textFile("resources/people.txt").map(_.split(",")).map(p => Person(p(0), p(1).trim.toInt)).toDF() 
        people.registerTempTable("people")

        // SQL statements can be run by using the sql methods provided by sqlContext.
        val teenagers = sqlContext.sql("SELECT name, age FROM people WHERE age >= 13 AND age <= 19")

        // The results of SQL queries are DataFrames and support all the normal RDD operations.
        // The columns of a row in the result can be accessed by field index:
        teenagers.map(t => "Name: " + t(0)).collect().foreach(println)

        // or by field name:
        teenagers.map(t => "Name: " + t.getAs[String]("name")).collect().foreach(println)

        // row.getValuesMap[T] retrieves multiple columns at once into a Map[String, T]

        teenagers.map(_.getValuesMap[Any](List("name", "age"))).collect().foreach(println)

        // Map("name" -> "Justin", "age" -> 19)

    }
}

And the SBT file:

name := "SparkScalaRDBMS"
version := "1.0"
scalaVersion := "2.11.7"
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.5.1"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "1.5.1"

Now I found the reason: you should define the case class in the object but outside of the main function. Look at the placement sketch below.
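
In other words, the layout that compiles looks roughly like this (a placement sketch only, reusing the question's Person case class):

object RDDToDataFramesWithCaseClasses {
    // the case class lives inside the object, NOT inside main
    case class Person(name: String, age: Int)

    def main(args: Array[String]) {
        // create the SparkContext and SQLContext here, import sqlContext.implicits._,
        // and then .toDF() on an RDD[Person] compiles
    }
}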

OK, I finally fixed the issue. Two things needed to be done:

  1. Import implicits: Note that this should be done only after an instance of org.apache.spark.sql.SQLContext is created. It should be written as:

    val sqlContext= new org.apache.spark.sql.SQLContext(sc)

    import sqlContext.implicits._

  2. Move the case class outside of the method: the case class you use to define the schema of the DataFrame should be declared outside of the method that needs it. You can read more about it here: https://issues.scala-lang.org/browse/SI-6649 (a combined sketch of both fixes follows below).
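
Putting both fixes together, here is a minimal corrected sketch of the file from the question (Spark 1.5.x assumed; the SQL query part is trimmed to the essentials):

package main.scala

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// defined at the top level, outside of any method
case class Person(name: String, age: Int)

object RDDToDataFramesWithCaseClasses {
    def main(args: Array[String]) {
        val conf = new SparkConf().setAppName("RDD To DF")
        val sc = new SparkContext(conf)
        val sqlContext = new SQLContext(sc)
        import sqlContext.implicits._   // only after the SQLContext instance exists

        val people = sc.textFile("resources/people.txt")
            .map(_.split(","))
            .map(p => Person(p(0), p(1).trim.toInt))
            .toDF()                     // RDD[Person] now picks up toDF from the implicits

        people.registerTempTable("people")
        sqlContext.sql("SELECT name, age FROM people WHERE age >= 13 AND age <= 19").show()
    }
}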


In Spark 2, you need to import the implicits from the SparkSession:

import org.apache.spark.sql.SparkSession
val spark = SparkSession.builder().appName(appName).getOrCreate()
import spark.implicits._

See the Spark documentation for more options when creating the SparkSession.
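
For a fuller picture, here is a Spark 2.x sketch that mirrors the question's code (the object name is just illustrative; createOrReplaceTempView is the Spark 2 replacement for registerTempTable):

import org.apache.spark.sql.SparkSession

// still defined outside of any method
case class Person(name: String, age: Int)

object RDDToDataFramesWithSparkSession {
    def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("RDD To DF").getOrCreate()
        import spark.implicits._        // brings toDF() into scope for RDD[Person]

        val people = spark.sparkContext
            .textFile("resources/people.txt")
            .map(_.split(","))
            .map(p => Person(p(0), p(1).trim.toInt))
            .toDF()

        people.createOrReplaceTempView("people")
        spark.sql("SELECT name, age FROM people WHERE age BETWEEN 13 AND 19").show()
    }
}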


There are two problems with your code:

  1. You need import sqlContext.implicits._ for Spark 1.x, or import spark.implicits._ if you are using Spark 2.0 or above (see the sketch after this list).

  2. Secondly, case class Record1(k: Int, v: String) needs to be declared inside the object but outside of def main(args: Array[String]) { ... }.
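
For reference, the two version-specific setups look roughly like this (a sketch; the variable names are just the conventional ones):

// Spark 1.x
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.implicits._

// Spark 2.x and later
val spark = org.apache.spark.sql.SparkSession.builder().getOrCreate()
import spark.implicits._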


For example, in spark-shell:

scala> case class Employee(id: Int, name: String, age: Int)
defined class Employee
scala> val sqlContext= new org.apache.spark.sql.SQLContext(sc)
warning: there was one deprecation warning; re-run with -deprecation for details
sqlContext: org.apache.spark.sql.SQLContext = org.apache.spark.sql.SQLContext@1f94e3a

scala> import sqlContext.implicits._
import sqlContext.implicits._
scala> var empl1 = empl.map(_.split(",")).map(e => Employee(e(0).trim.toInt, e(1), e(2).trim.toInt)).toDF
empl1: org.apache.spark.sql.DataFrame = [id: int, name: string ... 1 more field]
scala> val allrecords = sqlContext.sql("SELECT * FROM employee")
allrecords: org.apache.spark.sql.DataFrame = [id: int, name: string ... 1 more field]

scala> allrecords.show();
+----+--------+---+
|  id|    name|age|
+----+--------+---+
|1201|  satish| 25|
|1202| krishna| 28|
|1203|   amith| 39|
|1204|   javed| 23|
|1205|  prudvi| 23|
+----+--------+---+
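
Note that the transcript never shows where empl comes from or how the employee table is registered; assuming the data is a comma-separated text file, those missing steps would look roughly like this (the file name is an assumption):

scala> val empl = sc.textFile("employee.txt")   // assumed input lines like "1201,satish,25"
scala> empl1.registerTempTable("employee")      // needed before "SELECT * FROM employee" works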


Ran into this issue when running Spark in a Scala worksheet. Basically, you can't use toDF() in those circumstances due to the nature of worksheets. Instead, use spark.createDataFrame.
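
A minimal sketch of that approach, assuming a Spark 2.x session and a case class declared at the top level of the worksheet:

import org.apache.spark.sql.SparkSession

case class Person(name: String, age: Int)

val spark = SparkSession.builder().appName("worksheet").master("local[*]").getOrCreate()
val peopleRDD = spark.sparkContext.parallelize(Seq(Person("Justin", 19), Person("Andy", 30)))
val peopleDF = spark.createDataFrame(peopleRDD)   // no implicits import needed, unlike toDF()
peopleDF.show()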


Comments
  • Did you run import sqlContext.implicits._ in the REPL (spark-shell)?
  • @WoodChopper: yes, I did, but the same error keeps coming.
  • @AshishAggarwal Please edit the code (at least part of the code is duplicated) and remove the stray whitespace to make this question at least remotely readable.
  • My issue was # 2. Thanks for that solution.
  • Moving the case class outside the main method could help!
  • Your #2 is the one I missed (again).