How would I do this? I was able to pass an XML string array using putExtra, but I don't know the syntax for doing the same with an integer array that holds a bunch of @drawable references.
Here's how I reference the string array:
//fills route detail image view with xml array of images
final TypedArray image = getResources().obtainTypedArray(R.array.routeImage);
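Since the array holds @drawable references, the TypedArray can be converted into a plain int[] of resource IDs, which is something an Intent extra can carry. A minimal sketch (only R.array.routeImage comes from the code above; the variable names are illustrative):

```java
// Convert the XML array of @drawable references into an int[] of resource IDs.
TypedArray images = getResources().obtainTypedArray(R.array.routeImage);
int[] imageIds = new int[images.length()];
for (int i = 0; i < images.length(); i++) {
    // getResourceId returns the entry's resource ID, or the default (0) if
    // the entry is not a resource reference
    imageIds[i] = images.getResourceId(i, 0);
}
images.recycle(); // TypedArrays must be recycled after use
```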
Here's how I currently use putExtra with the string array in my MainActivity.java:
routeListView.setOnItemClickListener(
    new AdapterView.OnItemClickListener() {
        @Override
        public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
            String route = values[position];
            // loop through each route list item
            for (int i = 0; i < values.length; i++) {
                if (route.equals(values[i])) {
                    Intent intent = new Intent(view.getContext(), RouteDetails.class);
                    intent.putExtra("route", routeDetail[i]);
                    startActivity(intent);
                }
            }
        }
    }
);
And here's the getExtra in my routeDetail.java:
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_route_details);
//TextView for route details
final TextView routeDetailsView = (TextView) findViewById(R.id.routeDetailsView);
routeDetailsView.setText(getIntent().getExtras().getString("route"));
//ImageView for route details
routeImage = (ImageView) findViewById(R.id.routeImage);
//routeImage.setImageResource(R.drawable.birdsboroareamap);
Answer (score: 0)
First store the image's resource ID in an int (imageId below), then pass it with the intent:
intent.putExtra("imageResourceId", imageId);
Then, in the receiving activity, read the extra back and set it on the ImageView:
routeImage.setImageResource(getIntent().getIntExtra("imageResourceId", 0));
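If you want to pass the whole array rather than a single ID, Intent extras also support int[] directly. A sketch of the round trip, assuming an int[] imageIds of resource IDs has been built on the sending side (the extra key and variable names are illustrative):

```java
// Sending side (MainActivity): attach the whole array of drawable resource IDs.
Intent intent = new Intent(view.getContext(), RouteDetails.class);
intent.putExtra("route", routeDetail[i]);
intent.putExtra("imageResourceIds", imageIds); // putExtra has an int[] overload
startActivity(intent);

// Receiving side (RouteDetails.onCreate): read the array back and use an entry.
int[] imageIds = getIntent().getIntArrayExtra("imageResourceIds");
if (imageIds != null && imageIds.length > position) {
    routeImage.setImageResource(imageIds[position]);
}
```

Note that getIntArrayExtra returns null when the extra is missing, so the null check matters.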