Execute the same code in the catch clause for all methods of a class

Date: 2018-01-05 11:02:13

Tags: java exception-handling

I have a class with a lot of methods. Whenever the data is not ready, every one of these methods throws an exception. In that case I want to retry the method after a certain interval, so I need to add retry logic in the catch block, and the same logic has to be added to every method.

Is there an approach/pattern that lets all the catch clauses run the same logic without copy-pasting it?

One option I can think of is writing my own exception class, throwing it from each method, and performing the retry logic from inside that exception class.

Is there a better way?

class MyClass {
    public void method1() {
        try {
            //do some logic
        } catch (Exception e) {
            //retry logic
            //existing exception handling logic
        }
    }

    public void method2() {
        try {
            //do some logic
        } catch (Exception e) {
            //retry logic
            //existing exception handling logic
        }
    }

    public void method3() {
        try {
            //do some logic
        } catch (Exception e) {
            //retry logic
            //existing exception handling logic
        }
    }
}

Edit:

class MyClass {
    public void method1(int a, int b) {
        try {
            //do some logic
        } catch (Exception e) {
            Object args[] = {a, b};
            executeLater("method1", args);
            //retry logic
            //existing exception handling logic
        }
    }

    public void method2() {
        try {
            //do some logic
        } catch (Exception e) {
            Object args[] = null;
            executeLater("method2", args);
            //retry logic
            //existing exception handling logic
        }
    }

    public void method3(String abcd, int a) {
        try {
            //do some logic
        } catch (Exception e) {
            Object args[] = {abcd, a};
            executeLater("method3", args);
            //retry logic
            //existing exception handling logic
        }
    }

    public boolean executeLater(String methodName, Object args[]) {
        //Execute given method with the supplied args
        return true;
    }
}

Added code to show what I would be doing in each catch clause.
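
For reference, one possible way to fill in executeLater is reflection plus a timer. This is only a sketch: it assumes the method names in MyClass are unique (no overloads), and the 10-second delay is just an example interval.

//Sketch only: meant to live inside MyClass
public boolean executeLater(String methodName, Object args[]) {
    for (java.lang.reflect.Method m : getClass().getMethods()) {
        if (m.getName().equals(methodName)) {
            final java.lang.reflect.Method target = m;
            new java.util.Timer().schedule(new java.util.TimerTask() {
                @Override
                public void run() {
                    try {
                        //re-invoke the failed method; its own catch block reschedules if it fails again
                        target.invoke(MyClass.this, args);
                    } catch (Exception e) {
                        //reflection problems (bad arguments, access) end up here
                    }
                }
            }, 10_000); //example interval: retry after 10 seconds
            return true;
        }
    }
    return false; //no method with that name
}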

2 answers:

Answer 0 (score: 0)

This might give you an idea: keep trying to call doProcess until it no longer throws an exception, and wait 10 seconds whenever it does.
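
A minimal sketch of that loop (only doProcess and the 10-second delay come from the description above; the method name and everything else is illustrative):

public void processWithRetry() {
    boolean done = false;
    while (!done) {
        try {
            doProcess();          //the work that fails while the data is not ready
            done = true;          //success, stop retrying
        } catch (Exception e) {
            //existing exception handling logic
            try {
                Thread.sleep(10_000); //wait 10 seconds before the next attempt
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
                return;           //give up if the waiting thread is interrupted
            }
        }
    }
}

Note that this blocks the calling thread until doProcess succeeds.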

Answer 1 (score: 0)

Well, you could extract the whole content of the catch block into a method and call that method, but that only works if your retry logic does not depend on the specific method, and every method would still need its own try-catch.
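
For illustration, a quick sketch of that extraction (handleFailure is just an example name):

private void handleFailure(Exception e) {
    //retry scheduling + existing exception handling logic, shared by all methods
}

public void method1() {
    try {
        //do some logic
    } catch (Exception e) {
        handleFailure(e);   //every catch block delegates to the shared handler
    }
}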

Instead, use functional programming to shorten it:

import java.util.Timer;
import java.util.TimerTask;

public class Playground
{
    public static void main(String[] args)
    {
        new Playground().method2(1, 2);
        new Playground().method1();
    }

    public void method1()
    {
        tryAndTryAgain(() -> {
            // logic 1
            System.out.println("no params");
            throw new RuntimeException();
        });
    }

    public void method2(int a, int b)
    {
        tryAndTryAgain(() -> {
            // logic 2
            System.out.println(a + " " + b);
            throw new RuntimeException();
        });
    }

    public static void tryAndTryAgain(Runnable tryThis)
    {
        try
        {
            tryThis.run();
        }
        catch (Exception e)
        {
            new Timer().schedule(new TimerTask()
            {
                @Override
                public void run()
                {
                    tryAndTryAgain(tryThis);
                }
            }, 1000);
            // existing exception handling logic
        }
    }
}

The exact structure depends on your concrete implementation, but it should give you an idea of how to build it. The nice part is that all of those methods can focus on the business logic, while the retry logic and exception handling live in one utility method. That utility method does not even need to know anything about the parameters or the methods involved, because all of the business logic is wrapped up in the Runnable.