Multiple instances when using the "host" network mode?

Time: 2019-07-03 11:43:14

Tags: apache docker networking containers host

Is it possible to run multiple apache-server instances when using "host" networking, the way it is possible with "bridge" networking and port mapping?

Or do the other instances running alongside the "host"-network instance have to be configured to listen on a different port (instead of 80), since that port may already be in use?

1 Answer:

Answer 0: (score: 0)

Anything that runs with host networking uses the host's network. There is no isolation between that container, other host-networked containers, and processes running directly on the host. If you run Apache on the host and also have two host-networked Apache containers, and they all try to bind to 0.0.0.0 port 80, they will conflict. You need to resolve this with application-specific configuration; there is no concept of port mapping in host networking mode.
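
To make the conflict concrete, here is a minimal sketch using the official httpd image; the image tag, container names, and the mounted httpd-8080.conf file are illustrative assumptions, not from the original answer:

    # The first host-networked container binds 0.0.0.0:80 directly on the host.
    docker run -d --name apache1 --network host httpd:2.4

    # A second host-networked container attempts the same bind and Apache
    # exits with "could not bind to address 0.0.0.0:80"; a -p flag would not
    # help, because port mapping is ignored in host mode.
    docker run -d --name apache2 --network host httpd:2.4

    # The only workaround is application-level configuration, e.g. mounting
    # a config file (hypothetical httpd-8080.conf) whose Listen directive
    # says 8080 instead of 80:
    docker run -d --name apache2 --network host \
      -v "$PWD/httpd-8080.conf:/usr/local/apache2/conf/httpd.conf:ro" \
      httpd:2.4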

For simple HTTP/TCP services especially, you should almost never need host networking. If you use the standard bridge network, applications in containers do not conflict with each other or with host processes, and you can remap ports to whatever is convenient without having to reconfigure the application.
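
As a hedged sketch of that setup (container names and host ports chosen arbitrarily), two containers can both listen on port 80 internally while the bridge network maps them to different host ports:

    # Each container binds port 80 inside its own network namespace;
    # the -p flag maps it to any free port on the host.
    docker run -d --name web1 -p 8080:80 httpd:2.4
    docker run -d --name web2 -p 8081:80 httpd:2.4

    # Both instances are reachable without touching Apache's configuration:
    curl http://localhost:8080/
    curl http://localhost:8081/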