I am trying to learn SparkCLR by processing a text file and running a Spark SQL query on it, using the following Sample:
[Sample]
internal static void MyDataFrameSample()
{
    // Schema for the tab-separated input: tagname, time, value, confidence, mode
    var schemaTagValues = new StructType(new List<StructField>
    {
        new StructField("tagname", new StringType()),
        new StructField("time", new LongType()),
        new StructField("value", new DoubleType()),
        new StructField("confidence", new IntegerType()),
        new StructField("mode", new IntegerType())
    });

    // Split each line on tabs; each field is boxed as object into the row array
    var rddTagValues1 = SparkCLRSamples.SparkContext.TextFile(SparkCLRSamples.Configuration.GetInputDataPath(myDataFile))
        .Map(r => r.Split('\t')
            .Select(s => (object)s).ToArray());

    var dataFrameTagValues = GetSqlContext().CreateDataFrame(rddTagValues1, schemaTagValues);
    dataFrameTagValues.RegisterTempTable("tagvalues");

    //var qualityFilteredDataFrame = GetSqlContext().Sql("SELECT tagname, value, time FROM tagvalues where confidence > 85");
    var qualityFilteredDataFrame = GetSqlContext().Sql("SELECT * FROM tagvalues");

    var data = qualityFilteredDataFrame.Collect();
    var filteredCount = qualityFilteredDataFrame.Count();
    Console.WriteLine("Filter By = 'confidence', RowsCount={0}", filteredCount);
}
But this keeps failing with the following error:
[2016-01-13 08:56:28,593] [8] [ERROR] [Microsoft.Spark.CSharp.Interop.Ipc.JvmBridge] - JVM method execution failed: Static method collectAndServe failed for class org.apache.spark.api.python.PythonRDD when called with 1 parameters ([Index=1, Type=JvmObjectReference, Value=19], )
[2016-01-13 08:56:28,593] [8] [ERROR] [Microsoft.Spark.CSharp.Interop.Ipc.JvmBridge] -
*******************************************************************************************************************************
at Microsoft.Spark.CSharp.Interop.Ipc.JvmBridge.CallJavaMethod(Boolean isStatic, Object classNameOrJvmObjectReference, String methodName, Object[] parameters) in d:\SparkCLR\csharp\Adapter\Microsoft.Spark.CSharp\Interop\Ipc\JvmBridge.cs:line 91
*******************************************************************************************************************************
My text file looks like this:
10PC1008.AA 130908762000000000 7.059829 100 0
10PC1008.AA 130908762050000000 7.060376 100 0
10PC1008.AA 130908762100000000 7.059613 100 0
10PC1008.BB 130908762150000000 7.059134 100 0
10PC1008.BB 130908762200000000 7.060124 100 0
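As an aside, note that after Split every field in the row is still a string, while the schema declares long, double, and int columns. Below is a sketch of a Map step that parses each field into its declared type; whether CreateDataFrame requires this conversion is an assumption, not something this thread confirms:

var rddTagValues1 = SparkCLRSamples.SparkContext.TextFile(SparkCLRSamples.Configuration.GetInputDataPath(myDataFile))
    .Map(r =>
    {
        var f = r.Split('\t');
        // Parse each field into the type declared in schemaTagValues
        return new object[]
        {
            f[0],               // tagname (string)
            long.Parse(f[1]),   // time
            double.Parse(f[2]), // value
            int.Parse(f[3]),    // confidence
            int.Parse(f[4])     // mode
        };
    });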
Is there something wrong with the way I am using it?
Edit 1
I have set the following in my sample project properties:
My user environment variables are as follows (not sure whether this matters):
In the SparkCLRWorker log I can see that it fails to load the assembly:
[2016-01-14 08:37:01,865] [1] [ERROR] [Microsoft.Spark.CSharp.Worker] - System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation.
---> System.IO.FileNotFoundException: Could not load file or assembly 'SparkCLRSamples, Version=1.5.2.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. The system cannot find the file specified.
at System.Reflection.RuntimeAssembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, RuntimeAssembly locationHint, StackCrawlMark& stackMark, IntPtr pPrivHostBinder, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks)
at System.Reflection.RuntimeAssembly.InternalLoadAssemblyName(AssemblyName assemblyRef, Evidence assemblySecurity, RuntimeAssembly reqAssembly, StackCrawlMark& stackMark, IntPtr pPrivHostBinder, Boolean throwOnFileNotFound, Boolean forIntrospection, Boolean suppressSecurityChecks)
at System.Reflection.RuntimeAssembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, IntPtr pPrivHostBinder, Boolean forIntrospection)
at System.Reflection.RuntimeAssembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection)
at System.Reflection.Assembly.Load(String assemblyString)
at System.Runtime.Serialization.FormatterServices.LoadAssemblyFromString(String assemblyName)
at System.Reflection.MemberInfoSerializationHolder..ctor(SerializationInfo info, StreamingContext context)
--- End of inner exception stack trace ---
at System.RuntimeMethodHandle.SerializationInvoke(IRuntimeMethodInfo method, Object target, SerializationInfo info, StreamingContext& context)
at System.Runtime.Serialization.ObjectManager.CompleteISerializableObject(Object obj, SerializationInfo info, StreamingContext context)
at System.Runtime.Serialization.ObjectManager.FixupSpecialObject(ObjectHolder holder)
at System.Runtime.Serialization.ObjectManager.DoFixups()
at System.Runtime.Serialization.Formatters.Binary.ObjectReader.Deserialize(HeaderHandler handler, __BinaryParser serParser, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage)
at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Deserialize(Stream serializationStream, HeaderHandler handler, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage)
at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Deserialize(Stream serializationStream)
at Microsoft.Spark.CSharp.Worker.Main(String[] args) in d:\SparkCLR\csharp\Worker\Microsoft.Spark.CSharp\Worker.cs:line 149
Answer 0 (score: 0)
Have you specified the sample data location and copied your source text file to that location? If not, you can refer to the samples usage documentation and set the sample data location with the [--data | sparkclr.sampledata.loc] parameter.
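For example, an invocation along these lines (the paths are hypothetical, and whether the flag is passed as --data or as the sparkclr.sampledata.loc setting is covered in the samples usage doc):

    REM Copy the input text file into C:\SparkCLRData first, then point the samples runner at it
    SparkCLRSamples.exe --data C:\SparkCLRData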
Answer 1 (score: 0)
Try explicitly setting the [--temp | spark.local.dir] option when running your code (see sampleusage.md for details on the supported parameters). The SparkCLR worker executable is downloaded into this directory during execution. If you use the default temp directory, the worker executable may get quarantined by your antivirus software, which can mistake it for a malicious program downloaded by your browser. Overriding the default with c:\temp\SparkCLRTemp will help avoid that problem.
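A hypothetical invocation with the temp directory overridden (the flag name is taken from the bracketed form above; the paths are placeholders):

    REM Use a temp directory that antivirus software is less likely to police
    SparkCLRSamples.exe --temp C:\temp\SparkCLRTemp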
If setting the temp directory does not help, please share the full list of command-line arguments you are using when launching your SparkCLR driver code.
Answer 2 (score: 0)
Here is how you change the port number; I hope this helps.
Add the following to App.config.
For completeness, you must also add the tag that specifies the CSharpWorker path, as shown in the sketch below.
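The answer's original snippet did not survive the copy. A minimal sketch of the two appSettings entries, assuming the standard Mobius debug-mode keys (CSharpBackendPortNumber and CSharpWorkerPath) and a hypothetical worker path:

<configuration>
  <appSettings>
    <!-- Port printed by CSharpBackend when launched in debug mode -->
    <add key="CSharpBackendPortNumber" value="5567"/>
    <!-- Full path to CSharpWorker.exe (hypothetical; point this at your build output) -->
    <add key="CSharpWorkerPath" value="C:\Mobius\csharp\Worker\Microsoft.Spark.CSharp\bin\Debug\CSharpWorker.exe"/>
  </appSettings>
</configuration>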
Note that for this to work in debug mode, you should first run the command shown below from a command line, from the %SPARKCLR_HOME%\scripts directory (under the Mobius home directory).
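The command itself was lost from this copy of the answer; based on the usual Mobius debug-mode workflow, it is presumably the debug form of sparkclr-submit (an assumption, not recovered from the original):

    REM Starts CSharpBackend in debug mode and prints the port it listens on
    sparkclr-submit.cmd debug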
This will give you a message containing the port number:
[CSharpRunner.main] Port number used by CSharpBackend is 5567
[CSharpRunner.main] Backend running debug mode. Press enter to exit