Add troubleshooting section in README
alexarchambault committed Aug 16, 2018
1 parent 475a3f0 commit 155a27f
Showing 1 changed file with 20 additions and 1 deletion: README.md
@@ -15,7 +15,8 @@ Run [spark](https://spark.apache.org/) calculations from [Ammonite](http://ammonite.io/)
1. [Syncing dependencies](#syncing-dependencies)
3. [Using with standalone cluster](#using-with-standalone-cluster)
4. [Using with YARN cluster](#using-with-yarn-cluster)
- 5. [Missing](#missing)
+ 5. [Troubleshooting](#troubleshooting)
+ 6. [Missing](#missing)



@@ -101,6 +102,24 @@ Ensure the configuration directory of the cluster is set in `HADOOP_CONF_DIR` or `YARN_CONF_DIR`.

Before raising issues, make sure you are aware of everything that needs to be set up to get a working spark-shell from a Spark distribution, and that all of it is passed, in one way or another, to the SparkSession created from Ammonite.
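
For example, settings you would pass to spark-shell via `--conf` can be set on the session builder from Ammonite. A minimal sketch, assuming a YARN cluster; the dependency version and executor settings below are illustrative, not prescriptive:

```scala
@ import $ivy.`sh.almond::ammonite-spark:0.1.0` // illustrative version

@ import org.apache.spark.sql._

@ val spark = {
    AmmoniteSparkSession.builder()
      .master("yarn") // picks up the cluster configuration from HADOOP_CONF_DIR / YARN_CONF_DIR
      .config("spark.executor.instances", "4") // hypothetical values: mirror whatever
      .config("spark.executor.memory", "2g")   // your working spark-shell uses
      .getOrCreate()
  }
```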

## Troubleshooting

### Getting `org.apache.spark.sql.AnalysisException` when calling `.toDS`

Call `org.apache.spark.sql.catalyst.encoders.OuterScopes.addOuterScope(this)` on the same line as the definition of each case class involved, like so:
```scala
@ import spark.implicits._
import spark.implicits._

@ org.apache.spark.sql.catalyst.encoders.OuterScopes.addOuterScope(this); case class Foo(id: String, value: Int)
defined class Foo

@ val ds = List(Foo("Alice", 42), Foo("Bob", 43)).toDS
ds: Dataset[Foo] = [id: string, value: int]
```
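
The same workaround applies when several case classes are involved: each definition needs the call on its own line. A hypothetical transcript along the same lines (`Account` and `Transfer` are made-up examples):

```scala
@ org.apache.spark.sql.catalyst.encoders.OuterScopes.addOuterScope(this); case class Account(owner: String, balance: Double)
defined class Account

@ org.apache.spark.sql.catalyst.encoders.OuterScopes.addOuterScope(this); case class Transfer(from: String, to: String, amount: Double)
defined class Transfer

@ val transfers = List(Transfer("Alice", "Bob", 10.0)).toDS
transfers: Dataset[Transfer] = [from: string, to: string ... 1 more field]
```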

(This call should likely be added automatically in the future.)

## Missing

Local clusters, Mesos, and Kubernetes aren't supported yet.
