I have a very simple piece of test code whose purpose is to read a Java properties file passed via --files and print the value of one of its keys.
I have the properties file testprop.prop:
name:aiman
location:india
I am submitting it with a spark-submit command like:
spark-submit --class org.main.ReadLocalFile --master yarn --deploy-mode cluster --queue orion --files /path/to/testprop.prop#testprop.prop spark_cluster_file_read-0.0.1-SNAPSHOT-jar-with-dependencies.jar testprop.prop
I have passed the file using --files /path/to/file/testprop.prop#testprop.prop, and I pass testprop.prop as the program argument so that FileInputStream can read it.
My code is:
package org.main;

import java.io.FileInputStream;
import java.io.InputStream;
import java.util.Properties;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.spark.sql.SparkSession;

public class ReadLocalFile {

    public static void main(String args[]) throws Exception {
        SparkSession spark = SparkSession.builder().getOrCreate();
        String filename = args[0];
        Properties prop = new Properties();
        InputStream in = null;

        try {
            in = new FileInputStream(filename);
            prop.load(in);
        }
        catch (Exception e) {
            e.printStackTrace();
            System.out.println("=========Exception Thrown============");
            System.exit(1);
        }

        System.out.println("====================Value: " + prop.getProperty("name"));
    }
}
The job runs to completion but shows no output. Since the read is wrapped in a try-catch, the expected output is either the value of the name key or a FileNotFoundException.
The generated logs are:
19/07/03 11:32:52 INFO O: Set a new configuration for the first time.
19/07/03 11:32:52 INFO d: Method not implemented in this version of Hadoop: org.apache.hadoop.fs.FileSystem$Statistics.getBytesReadLocalHost
19/07/03 11:32:52 INFO deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
19/07/03 11:32:52 INFO u: Scheduling statistics report every 2000 millisecs
19/07/03 11:32:52 INFO RequestHedgingRMFailoverProxyProvider: Looking for the active RM in [rm1, rm2]...
19/07/03 11:32:53 INFO RequestHedgingRMFailoverProxyProvider: Found active RM [rm2]
19/07/03 11:32:53 INFO Client: Requesting a new application from cluster with 24 NodeManagers
19/07/03 11:32:53 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (102400 MB per container)
19/07/03 11:32:53 INFO Client: Will allocate AM container, with 1408 MB memory including 384 MB overhead
19/07/03 11:32:53 INFO Client: Setting up container launch context for our AM
19/07/03 11:32:53 INFO Client: Setting up the launch environment for our AM container
19/07/03 11:32:53 INFO Client: Preparing resources for our AM container
19/07/03 11:32:53 INFO HadoopFSCredentialProvider: getting token for: hdfs://clustername/user/serviceuser
19/07/03 11:32:53 INFO DFSClient: Created HDFS_DELEGATION_TOKEN token 6977007 for serviceuser on ha-hdfs:clustername
19/07/03 11:32:55 INFO metastore: Trying to connect to metastore with URI thrift://XX.XX.XX.XX:9083
19/07/03 11:32:55 INFO metastore: Connected to metastore.
19/07/03 11:32:56 INFO HiveCredentialProvider: Get Token from hive metastore: Kind: HIVE_DELEGATION_TOKEN, Service: , Ident: 00 1a 65 62 64 70 62 75 73 73 40 43 41 42 4c 45 2e 43 4f 4d 43 41 53 54 2e 43 4f 4d 04 68 69 76 65 00 8a 01 6b b7 9b e6 e3 8a 01 6b db a8 6a e3 8e 9e e1 8e 02 f2
19/07/03 11:32:56 INFO Client: Use hdfs cache file as spark.yarn.archive for HDP, hdfsCacheFile:hdfs://clustername/hdp/apps/2.6.3.20-2/spark2/spark2-hdp-yarn-archive.tar.gz
19/07/03 11:32:56 INFO Client: Source and destination file systems are the same. Not copying hdfs://clustername/hdp/apps/2.6.3.20-2/spark2/spark2-hdp-yarn-archive.tar.gz
19/07/03 11:32:56 INFO Client: Uploading resource file:/home/serviceuser/aiman/spark_cluster_file_read-0.0.1-SNAPSHOT-jar-with-dependencies.jar -> hdfs://clustername/user/serviceuser/.sparkStaging/application_1561094073414_101648/spark_cluster_file_read-0.0.1-SNAPSHOT-jar-with-dependencies.jar
19/07/03 11:32:57 INFO Client: Uploading resource file:/home/serviceuser/aiman/testprop.prop#testprop.prop -> hdfs://clustername/user/serviceuser/.sparkStaging/application_1561094073414_101648/testprop.prop
19/07/03 11:32:57 INFO Client: Uploading resource file:/tmp/spark-02d69650-9fb9-4f5e-9947-d8fa629323f4/__spark_conf__3111084457019278305.zip -> hdfs://clustername/user/serviceuser/.sparkStaging/application_1561094073414_101648/__spark_conf__.zip
19/07/03 11:32:57 INFO SecurityManager: Changing view acls to: serviceuser
19/07/03 11:32:57 INFO SecurityManager: Changing modify acls to: serviceuser
19/07/03 11:32:57 INFO SecurityManager: Changing view acls groups to:
19/07/03 11:32:57 INFO SecurityManager: Changing modify acls groups to:
19/07/03 11:32:57 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(serviceuser); groups with view permissions: Set(); users with modify permissions: Set(serviceuser); groups with modify permissions: Set()
19/07/03 11:32:57 INFO Client: Submitting application application_1561094073414_101648 to ResourceManager
19/07/03 11:32:57 INFO YarnClientImpl: Submitted application application_1561094073414_101648
19/07/03 11:32:58 INFO Client: Application report for application_1561094073414_101648 (state: ACCEPTED)
19/07/03 11:32:58 INFO Client:
client token: Token { kind: YARN_CLIENT_TOKEN, service: }
diagnostics: AM container is launched, waiting for AM container to Register with RM
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: orion
start time: 1562153577621
final status: UNDEFINED
tracking URL: http://XX.XX.XX.XX:8088/proxy/application_1561094073414_101648/
user: serviceuser
19/07/03 11:32:59 INFO Client: Application report for application_1561094073414_101648 (state: ACCEPTED)
19/07/03 11:33:00 INFO Client: Application report for application_1561094073414_101648 (state: ACCEPTED)
19/07/03 11:33:01 INFO Client: Application report for application_1561094073414_101648 (state: ACCEPTED)
19/07/03 11:33:02 INFO Client: Application report for application_1561094073414_101648 (state: ACCEPTED)
19/07/03 11:33:03 INFO Client: Application report for application_1561094073414_101648 (state: ACCEPTED)
19/07/03 11:33:04 INFO Client: Application report for application_1561094073414_101648 (state: ACCEPTED)
19/07/03 11:33:05 INFO Client: Application report for application_1561094073414_101648 (state: RUNNING)
19/07/03 11:33:05 INFO Client:
client token: Token { kind: YARN_CLIENT_TOKEN, service: }
diagnostics: N/A
ApplicationMaster host: XX.XX.XX.XX
ApplicationMaster RPC port: 0
queue: orion
start time: 1562153577621
final status: UNDEFINED
tracking URL: http://XX.XX.XX.XX:8088/proxy/application_1561094073414_101648/
user: serviceuser
19/07/03 11:33:06 INFO Client: Application report for application_1561094073414_101648 (state: RUNNING)
19/07/03 11:33:07 INFO Client: Application report for application_1561094073414_101648 (state: RUNNING)
19/07/03 11:33:08 INFO Client: Application report for application_1561094073414_101648 (state: RUNNING)
19/07/03 11:33:09 INFO Client: Application report for application_1561094073414_101648 (state: RUNNING)
19/07/03 11:33:10 INFO Client: Application report for application_1561094073414_101648 (state: RUNNING)
19/07/03 11:33:11 INFO Client: Application report for application_1561094073414_101648 (state: RUNNING)
19/07/03 11:33:12 INFO Client: Application report for application_1561094073414_101648 (state: FINISHED)
19/07/03 11:33:12 INFO Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: XX.XX.XX.XX
ApplicationMaster RPC port: 0
queue: orion
start time: 1562153577621
final status: SUCCEEDED
tracking URL: http://XX.XX.XX.XX:8088/proxy/application_1561094073414_101648/
user: serviceuser
19/07/03 11:33:12 INFO ShutdownHookManager: Shutdown hook called
19/07/03 11:33:12 INFO ShutdownHookManager: Deleting directory /tmp/spark-02d69650-9fb9-4f5e-9947-d8fa629323f4
Please tell me where I am going wrong, or what I am missing.
Is it that the output cannot be seen in cluster mode? What should I do to get the output on the console?
Answer 0 (score: 1):
The --files option copies your prop file to the executor nodes. Here is how you can read the file:
import java.util.Properties
import scala.io.Source

def readProperties(propertiesPath: String): Properties = {
  // The file shipped with --files is looked up as a classpath resource.
  val url = getClass.getResource("/" + propertiesPath)
  assert(url != null, s"Could not create URL to read $propertiesPath properties file")
  val source = Source.fromURL(url)
  val properties = new Properties
  properties.load(source.bufferedReader)
  properties
}
Your call should then look like this. Read it into a val:
val myProp = readProperties(args(0))
and to access a property you can use something like:
val getnamefromProp = myProp.getProperty("name")
If that does not work, try Source.fromFile(path) as an alternative.
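For reference, here is a minimal sketch of that Source.fromFile alternative. It assumes the file distributed with --files is available in the container's local working directory under the alias name you pass as args(0); the helper name readPropertiesFromFile is just for illustration.

import java.util.Properties
import scala.io.Source

def readPropertiesFromFile(path: String): Properties = {
  // Open the file directly from the container's local working directory.
  val source = Source.fromFile(path)
  try {
    val properties = new Properties
    // Properties.load accepts a Reader, so hand it the source's buffered reader.
    properties.load(source.bufferedReader)
    properties
  } finally {
    // Always release the underlying file handle.
    source.close()
  }
}

// Hypothetical usage, with the alias passed as the first program argument:
// val myProp = readPropertiesFromFile(args(0))
// println(myProp.getProperty("name"))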