How do I set up the required configuration from scratch to run Spark on YARN?

Date: 2016-11-23 10:48:29

Tags: hadoop apache-spark yarn hpc

I am new to running Spark on YARN and have been trying it out on an HPC cluster. Since I cannot figure out how to set up the configuration correctly, I cannot connect to YARN successfully.

I am using Spark 1.4.1 and have downloaded Hadoop 2.6 into a local folder on the cluster. My logs are as follows:

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/11/22 18:35:59 INFO SparkContext: Running Spark version 1.4.1
16/11/22 18:36:00 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/11/22 18:36:00 WARN SparkConf: In Spark 1.0 and later spark.local.dir will be overridden by the value set by the cluster manager (via SPARK_LOCAL_DIRS in mesos/standalone and LOCAL_DIRS in YARN).
16/11/22 18:36:00 INFO SecurityManager: Changing view acls to:  
16/11/22 18:36:00 INFO SecurityManager: Changing modify acls to:  
16/11/22 18:36:00 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set( ); users with modify permissions: Set( )
16/11/22 18:36:01 INFO Slf4jLogger: Slf4jLogger started
16/11/22 18:36:01 INFO Remoting: Starting remoting
16/11/22 18:36:01 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@172.23.9.106:35138]
16/11/22 18:36:01 INFO Utils: Successfully started service 'sparkDriver' on port 35138.
16/11/22 18:36:01 INFO SparkEnv: Registering MapOutputTracker
16/11/22 18:36:01 INFO SparkEnv: Registering BlockManagerMaster
16/11/22 18:36:01 INFO DiskBlockManager: Created local directory at /local/tmp/spark-c335cd9f-7719-4ffd-942c-2bbcc543e3c8/blockmgr-2057f8aa-e074-4910-a1fa-3fc9c73ad1f1
16/11/22 18:36:01 INFO MemoryStore: MemoryStore started with capacity 16.6 GB
16/11/22 18:36:01 INFO HttpFileServer: HTTP File server directory is /local/tmp/spark-c335cd9f-7719-4ffd-942c-2bbcc543e3c8/httpd-63876231-4681-4e75-bfd9-f46edf02087c
16/11/22 18:36:01 INFO HttpServer: Starting HTTP Server
16/11/22 18:36:01 INFO Utils: Successfully started service 'HTTP file server' on port 46754.
16/11/22 18:36:01 INFO SparkEnv: Registering OutputCommitCoordinator
16/11/22 18:36:01 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/11/22 18:36:01 INFO SparkUI: Started SparkUI at http://172.23.9.106:4040
16/11/22 18:36:02 INFO SparkContext: Added JAR file:/hard-mounts/user/311/ /cosine-lsh.jar at http://172.23.9.106:46754/jars/cosine-lsh.jar with timestamp 1479836162928
16/11/22 18:36:03 INFO RMProxy: Connecting to ResourceManager at /127.0.0.1:8032
16/11/22 18:36:04 INFO Client: Retrying connect to server: localhost/127.0.0.1:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/11/22 18:36:05 INFO Client: Retrying connect to server: localhost/127.0.0.1:8032. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/11/22 18:36:06 INFO Client: Retrying connect to server: localhost/127.0.0.1:8032. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/11/22 18:36:07 INFO Client: Retrying connect to server: localhost/127.0.0.1:8032. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/11/22 18:36:08 INFO Client: Retrying connect to server: localhost/127.0.0.1:8032. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/11/22 18:36:09 INFO Client: Retrying connect to server: localhost/127.0.0.1:8032. Already tried 5 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/11/22 18:36:10 INFO Client: Retrying connect to server: localhost/127.0.0.1:8032. Already tried 6 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/11/22 18:36:11 INFO Client: Retrying connect to server: localhost/127.0.0.1:8032. Already tried 7 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
16/11/22 18:36:12 INFO Client: Retrying connect to server: localhost/127.0.0.1:8032. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)

As the log shows, the client keeps retrying localhost/127.0.0.1:8032, which is YARN's default ResourceManager address, so my configuration does not seem to be picked up. These are the Spark configuration lines in my application code:

val conf = new SparkConf()
      .setAppName("LSH-Cosine")
      .setMaster("yarn-client") // local[*] when testing locally
      .set("spark.driver.maxResultSize", "0") // 0 = no limit on results collected to the driver
      .set("spark.local.dir", "/local/tmp") // scratch dir; overridden by LOCAL_DIRS when running on YARN

I run my code, packaged in a jar, via a bash (PBS) script, as follows:

#!/bin/bash -l

#PBS -l walltime=00:45:00

# Load the cluster's Spark and Hadoop modules
module load Spark/1.4.1
module load Hadoop

cd $PBS_O_WORKDIR

# Point the Spark client at the Hadoop/YARN configuration directory
export HADOOP_CONF_DIR=/vsc-hard-mounts/user/311/31182/hadoop-2.6.0-src/hadoop-yarn-project/hadoop-yarn/conf

spark-submit \
  --class com.soundcloud.lsh.MainCerebro \
  --master yarn-client \
  --num-executors 50 \
  --driver-memory 32g \
  --executor-memory 32g \
  --executor-cores 2 \
  cosine-lsh.jar
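
One thing I am not sure about: the Spark on YARN documentation says the client locates the ResourceManager through the configuration directory given by HADOOP_CONF_DIR or YARN_CONF_DIR, not through setMaster. In case spark-submit reads the latter variable, I also considered adding the following to the script (a sketch on my part; I have not verified that it makes a difference on my cluster):

# Assumption: mirror the same configuration directory into YARN_CONF_DIR,
# in case spark-submit reads it instead of HADOOP_CONF_DIR
export YARN_CONF_DIR=$HADOOP_CONF_DIR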

For HADOOP_CONF_DIR itself, I pointed it at the conf folder found inside the Hadoop 2.6 sources that I downloaded to my local folder. I also changed yarn-site.xml; a sketch of the kind of entry I mean follows below. I have tried all of the above based on related Stack Overflow posts, but since many of them have no clear answer, I wanted to post this question.
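
For reference, this is the kind of yarn-site.xml entry I understand the client needs so that it stops falling back to port 8032 on localhost. The hostname and port below are placeholders, not my actual cluster settings:

<configuration>
  <!-- Placeholder values: replace with the real ResourceManager host and port.
       Without these, clients fall back to the default address, which is why
       my log shows retries against localhost/127.0.0.1:8032. -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>resourcemanager.example.com</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>resourcemanager.example.com:8032</value>
  </property>
</configuration>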

Thanks in advance.

0 Answers:

No answers