F#: How to avoid getting "[FS0988] Main module of program is empty: nothing will happen when it is run" in a tests project?

Asked: 2019-06-13 12:40:11

Tags: compilation f# compiler-warnings

I have a .NET solution containing the following projects:

  • Console: console app, .NET Core 2.2
  • Domain: .NET Standard 2.0
  • Domain.Tests: console app, .NET Core 2.2 (xUnit)
  • Infrastructure: .NET Standard 2.0

My Domain.Tests.fsproj is defined as:

<Project Sdk="Microsoft.NET.Sdk">

    <PropertyGroup>
        <IsPackable>false</IsPackable>
        <GenerateProgramFile>false</GenerateProgramFile>
        <TargetFramework>netcoreapp2.2</TargetFramework>
    </PropertyGroup>

    <ItemGroup>
        <PackageReference Include="FsCheck" Version="3.0.0-alpha4" />
        <PackageReference Include="Microsoft.NET.Test.Sdk" Version="15.9.0" />
        <PackageReference Include="xunit" Version="2.4.0" />
        <PackageReference Include="xunit.runner.visualstudio" Version="2.4.0" />
    </ItemGroup>

    <ItemGroup>
      <Compile Include="Dsl.fs" />
      <Compile Include="OpenAccountTests.fs" />
      <Compile Include="CloseAccountTests.fs" />
      <Compile Include="DepositCashTests.fs" />
      <Compile Include="WithdrawCashTests.fs" />
      <Compile Include="WireMoneyTests.fs" />
      <Compile Include="RequestAddressChangeTests.fs" />
      <Compile Include="RequestEmailChangeTests.fs" />
      <Compile Include="RequestPhoneNumberChangeTests.fs" />
      <Compile Include="ValidateAddressChangeTests.fs" />
      <Compile Include="ValidateEmailChangeTests.fs" />
      <Compile Include="ValidatePhoneNumberChangeTests.fs" />
    </ItemGroup>

    <ItemGroup>
      <ProjectReference Include="..\Domain\Domain.fsproj" />
    </ItemGroup>

</Project>

But when I compile the solution, I get the following warning:

ValidatePhoneNumberChangeTests.fs(102,35): [FS0988] Main module of program is empty: nothing will happen when it is run

I checked that answer on SO, and adding do () at the end of the last file of Domain.Tests (ValidatePhoneNumberChangeTests.fs) had no effect.
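
For reference, that workaround amounts to ending the last compiled file with a bare top-level do binding; a minimal sketch, assuming the module is named after the file (the test shown is a placeholder, not from the question):

module ValidatePhoneNumberChangeTests

open Xunit

[<Fact>]
let ``placeholder test`` () =
    Assert.True(true)

// Bare top-level statement suggested by the linked answer;
// it did not silence FS0988 in this case
do ()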

What can I do to get rid of this warning?

2 answers:

Answer 0 (score: 4)

<TargetFramework>netcoreapp2.2</TargetFramework>

This designates the Domain.Tests project as an executable.

If you only need it as a class library, change it to

<TargetFramework>netstandard2.0</TargetFramework>
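
In the fsproj above, that change would look like this (a sketch; note that test runners generally execute tests through a runnable framework target, so verify your tests are still discovered and run after this change):

<PropertyGroup>
    <IsPackable>false</IsPackable>
    <GenerateProgramFile>false</GenerateProgramFile>
    <!-- Library target: no entry point is expected, so FS0988 goes away -->
    <TargetFramework>netstandard2.0</TargetFramework>
</PropertyGroup>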

If you just want to get rid of the warning, you can add a main method, either at the end of ValidatePhoneNumberChangeTests.fs or in a Program.fs placed last in the compilation order:

// Dummy entry point: never invoked by the test runner, it only satisfies FS0988
[<EntryPoint>]
let main argv =
    0

Answer 1 (score: 3)

@rmunn was on the right track in the comments. <OutputType> defaults to Library when the TargetFramework is netstandardXX or net4XX, and defaults to Exe when the TargetFramework is netcoreappXX.

Setting <OutputType>Library</OutputType> is IMO the best way to fix this, rather than adding an entry point that will never be called.
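
Concretely, that means keeping the netcoreapp2.2 target and declaring the output type explicitly; a sketch of the adjusted PropertyGroup from the question's fsproj:

<PropertyGroup>
    <IsPackable>false</IsPackable>
    <GenerateProgramFile>false</GenerateProgramFile>
    <TargetFramework>netcoreapp2.2</TargetFramework>
    <!-- Explicit Library output overrides the Exe default for netcoreapp,
         so the compiler no longer expects an entry point -->
    <OutputType>Library</OutputType>
</PropertyGroup>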