How to import pyspark in Anaconda

Question

I am trying to import and use pyspark with Anaconda.

After installing Spark and setting the $SPARK_HOME variable, I tried:

$ pip install pyspark

This won't work (of course), because I discovered that I need to tell Python to look for pyspark under $SPARK_HOME/python/. The problem is that to do that I need to set $PYTHONPATH, and Anaconda doesn't use that environment variable.
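Before the conda package existed, the usual workaround was to extend sys.path at runtime instead of relying on $PYTHONPATH. A minimal sketch, assuming $SPARK_HOME is set and the bundled py4j zip sits under $SPARK_HOME/python/lib/ (the exact zip filename varies by Spark version):

import glob
import os
import sys

# Point Python at the pyspark sources shipped with Spark itself
spark_home = os.environ["SPARK_HOME"]
sys.path.insert(0, os.path.join(spark_home, "python"))

# pyspark also needs the bundled py4j bridge; glob because the
# version number in the filename depends on the Spark release
py4j = glob.glob(os.path.join(spark_home, "python", "lib", "py4j-*-src.zip"))
sys.path.insert(0, py4j[0])

import pyspark  # should now resolve against $SPARK_HOME/python/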

I tried copying the contents of $SPARK_HOME/python/ into ANACONDA_HOME/lib/python2.7/site-packages/, but that didn't work either.

Is there any solution for using pyspark with Anaconda?

Answer

This may have only become possible recently, but I used the following and it worked perfectly. After this, I am able to 'import pyspark as ps' and use it with no problems.

conda install -c conda-forge pyspark
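To check that the conda-forge install works, a quick smoke test can be run from the same environment. This is a minimal sketch; the local[2] master and the sample job are arbitrary choices, not part of the original answer:

import pyspark as ps

# Start a local SparkContext with two worker threads
sc = ps.SparkContext(master="local[2]", appName="smoke-test")

# Run a trivial job: sum the integers 0..9
print(sc.parallelize(range(10)).sum())  # 45

sc.stop()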


