Getting the updated user from Firebase Authentication

Asked: 2017-01-05 15:28:11

Tags: android firebase firebase-authentication

I am currently working with Firebase, using com.google.firebase:firebase-auth:10.0.1.

Creating users and signing in works like a charm.

Now I want to add an option for users to change their displayName and profile image.

In Activity A, I call this method on a button click:

   public void updateFirebaseUser(String newDisplayName, Uri newPhotoUri) {
      final FirebaseAuth auth = FirebaseAuth.getInstance();
      final FirebaseUser user = auth.getCurrentUser();
      // Nothing to do if nobody is signed in or no new value was supplied.
      if (user == null || (TextUtils.isEmpty(newDisplayName) && newPhotoUri == null)) {
         return;
      }
      UserProfileChangeRequest.Builder builder = new UserProfileChangeRequest.Builder();

      // Only set the fields that actually changed.
      if (!TextUtils.isEmpty(newDisplayName)) {
         builder.setDisplayName(newDisplayName);
      }
      if (newPhotoUri != null) {
         builder.setPhotoUri(newPhotoUri);
      }
      UserProfileChangeRequest request = builder.build();
      user.updateProfile(request)
            .addOnCompleteListener(new OnCompleteListener<Void>() {
               @Override
               public void onComplete(@NonNull Task<Void> task) {
                  Toast.makeText(context, "profile update success = " + task.isSuccessful(),
                        Toast.LENGTH_SHORT)
                        .show();
               }
            });
   }

Everything seems to work, since the Toast on screen shows "profile update success = true".

Later in the app I want to display the user information, so I call fillUserViews() in Activity B, but it still shows my old data:

   private void fillUserViews(NavigationView navigationView) {
      View headerView = navigationView.getHeaderView(0);
      FirebaseAuth auth = FirebaseAuth.getInstance();
      FirebaseUser user = auth.getCurrentUser();

      // Fill the navigation drawer header with the locally cached user data.
      ImageView avatarView = (ImageView) headerView.findViewById(R.id.img_avatar);
      picasso.load(user.getPhotoUrl())
            .into(avatarView);

      TextView userName = (TextView) headerView.findViewById(R.id.txt_name);
      userName.setText(user.getDisplayName());

      TextView userMail = (TextView) headerView.findViewById(R.id.txt_mail);
      userMail.setText(user.getEmail());
   }

Restarting the app doesn't help either. But if I sign out and sign back in, my user data is updated.

I think this has to do with Firebase's internal caching. That's why I added the line auth.getCurrentUser().reload(); right after showing the Toast.
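
For reference, reload() itself returns a Task, so a bare fire-and-forget call after the Toast gives no guarantee about when the cached user is actually refreshed. Below is a minimal sketch of what chaining the reload onto the update could look like with the Tasks API that ships with firebase-auth; it is an illustration of the idea, not what the code above does:

   // Sketch: chain reload() onto updateProfile() so the cached
   // FirebaseUser is refreshed before anything reads it again.
   user.updateProfile(request)
         .continueWithTask(new Continuation<Void, Task<Void>>() {
            @Override
            public Task<Void> then(@NonNull Task<Void> task) throws Exception {
               // This continuation runs whether updateProfile() succeeded
               // or not; rethrowing keeps a failed update visible downstream.
               if (!task.isSuccessful()) {
                  throw task.getException();
               }
               return user.reload();
            }
         })
         .addOnCompleteListener(new OnCompleteListener<Void>() {
            @Override
            public void onComplete(@NonNull Task<Void> task) {
               // Only here is the locally cached user known to be refreshed.
               Toast.makeText(context, "profile refreshed = " + task.isSuccessful(),
                     Toast.LENGTH_SHORT).show();
            }
         });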

Is there a way to force an update request to Firebase? I couldn't find anything about this in the documentation.

Thanks in advance!

2 Answers:

Answer 0 (score: 7)

Wuhuuuu! I found the solution:

I reload the user object when I actually need it, and update my views once the request has succeeded!

   private void fillUserViews(NavigationView navigationView) {
      final View headerView = navigationView.getHeaderView(0);
      final FirebaseAuth auth = FirebaseAuth.getInstance();
      // Force a refresh of the cached FirebaseUser, and bind the views
      // only once the reload has succeeded.
      auth.getCurrentUser()
            .reload()
            .addOnSuccessListener(new OnSuccessListener<Void>() {

               @Override
               public void onSuccess(Void aVoid) {
                  FirebaseUser user = auth.getCurrentUser();

                  ImageView avatarView = (ImageView) headerView.findViewById(R.id.img_avatar);
                  picasso.load(user.getPhotoUrl())
                        .into(avatarView);
                  TextView userName = (TextView) headerView.findViewById(R.id.txt_name);
                  userName.setText(user.getDisplayName());
               }
            });
   }
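
Note that this only covers the success path. reload() can also fail (no network, a revoked token, and so on), so it may be worth attaching a failure listener as well; a minimal sketch using the same Tasks API, with illustrative logging:

   auth.getCurrentUser()
         .reload()
         .addOnSuccessListener(new OnSuccessListener<Void>() {
            @Override
            public void onSuccess(Void aVoid) {
               // ...bind the views exactly as above...
            }
         })
         .addOnFailureListener(new OnFailureListener() {
            @Override
            public void onFailure(@NonNull Exception e) {
               // Keep showing the stale data and record why the refresh failed.
               Log.w("fillUserViews", "reload() failed", e);
            }
         });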

Answer 1 (score: 0)

I suggest you additionally save the username and profile image for each user in the Firebase Database, so that your views are updated as soon as the data changes in the database, provided the right listeners are set up.
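
A minimal sketch of that pattern, assuming the firebase-database dependency is added; the users/{uid} path and the field names are illustrative choices, not anything Firebase mandates:

   // Mirror the profile into the Realtime Database on every update...
   DatabaseReference userRef = FirebaseDatabase.getInstance()
         .getReference("users")
         .child(user.getUid());
   userRef.child("displayName").setValue(newDisplayName);
   userRef.child("photoUrl").setValue(newPhotoUri.toString());

   // ...and listen for changes wherever the views live (e.g. Activity B):
   userRef.addValueEventListener(new ValueEventListener() {
      @Override
      public void onDataChange(DataSnapshot snapshot) {
         String name = snapshot.child("displayName").getValue(String.class);
         String photoUrl = snapshot.child("photoUrl").getValue(String.class);
         // Update the NavigationView header here.
      }

      @Override
      public void onCancelled(DatabaseError error) {
         // Fired e.g. when security rules deny the read.
      }
   });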
