How to make JavaScript functions run synchronously

Date: 2017-04-12 15:14:52

Tags: javascript jquery ajax asp.net-mvc

In my ASP.NET MVC application, clicking a button calls the following JavaScript:
function OnButtonClick(s, e, startUrl, progressUrl) {
    Fetch(progressUrl);
    ImportUpdate(startUrl);
}

Fetch and ImportUpdate are jQuery AJAX calls to controller actions.

function Fetch(progressUrl) {
    positionDate = ReportingPositionDate.GetDate().toDateString();
    $.ajax({
        type: 'POST',
        url: "@Url.Action("BloombergFet", "ImportData")",
        data: JSON.stringify({ positionDate: positionDate }),
        dataType: "text",
        contentType: "application/json; charset=utf-8",
        beforeSend: function () { lpBloomberg.Show(); },
        success: function (msg) {
            ImportSuccessMessage.SetText(msg);
            lpBloomberg.Hide();
            lpImport.Show();
        },
        error: function (xhr, textStatus, errorThrown) {
            lpBloomberg.Hide();
        }
    });
}

function ImportUpdate(progressUrl) {
    positionDate = ReportingPositionDate.GetDate().toDateString();
    myProgressBar.Show(); // note: "myProgressBar.Show;" does nothing without the parentheses
    $.ajax({
        type: 'POST',
        url: "@Url.Action("ProcessImportRecord", "ImportData")",
        data: JSON.stringify({ positionDate: positionDate }),
        dataType: "text",
        contentType: "application/json; charset=utf-8",
        beforeSend: function () { lpImport.Show(); },
        success: function (msg) {
            ImportDataGridView.PerformCallback();
            ImportSuccessMessage.SetVisible(true);
            ImportSuccessMessage.SetText(msg);
            lpImport.Hide();
        },
        error: function (xhr, textStatus, errorThrown) {
            ImportErrorMessage.SetVisible(true);
            ImportErrorMessage.SetText(xhr.statusText);
        }
    });
}

At the moment, Fetch(progressUrl) and ImportUpdate(progressUrl) are called at the same time. I want Fetch(progressUrl) to finish first and only then have ImportUpdate run.

How can I achieve this? Any help is appreciated.

2 answers:

Answer 0 (score: 3)

Call the second function, ImportUpdate(progressUrl), inside the success block of the first function, Fetch(progressUrl), like so:

function Fetch(progressUrl) {
    positionDate = ReportingPositionDate.GetDate().toDateString();
    $.ajax({
        type: 'POST',
        url: "@Url.Action("BloombergFet", "ImportData")",
        data: JSON.stringify({ positionDate: positionDate }),
        dataType: "text",
        contentType: "application/json; charset=utf-8",
        beforeSend: function () { lpBloomberg.Show(); },
        success: function (msg) {
            ImportSuccessMessage.SetText(msg);
            lpBloomberg.Hide();
            lpImport.Show();

            // Place the call to ImportUpdate here (the function's parameter
            // is progressUrl; startUrl is not in scope inside Fetch)
            ImportUpdate(progressUrl);
        },
        error: function (xhr, textStatus, errorThrown) {
            lpBloomberg.Hide();
        }
    });
}
However, as James pointed out, if you want to call ImportUpdate after every call to Fetch, it makes more sense to combine them, unless you also call ImportUpdate independently elsewhere without calling Fetch first.

BTW, the callback Kevin B. was probably referring to is used with the jQuery .post() function, which you can use like this:

So rather than putting the function call in the success callback of the current Fetch function, you put it in the .done callback, like so:

// Assign handlers immediately after making the request,
// and remember the jqxhr object for this request
var jqxhr = $.post( "example.php", function() {
  alert( "success" );
})
  .done(function() {
    alert( "second success" );
  })
  .fail(function() {
    alert( "error" );
  })
  .always(function() {
    alert( "finished" );
});

// Perform other work here ...

// Set another completion function for the request above
jqxhr.always(function() {
  alert( "second finished" );
});
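The same ordering guarantee can be expressed without jQuery at all. The sketch below uses plain Promises; fakeRequest, fetchStep, and importUpdateStep are made-up stand-ins (not from the question) for the AJAX calls, with a timer in place of a real network round trip:

```javascript
// Minimal sketch of "run the second step only after the first completes",
// using plain Promises. fakeRequest is a made-up stand-in for an AJAX call.
function fakeRequest(name) {
  return new Promise(function (resolve) {
    setTimeout(function () { resolve(name + ":ok"); }, 10);
  });
}

var order = [];

function fetchStep() {
  return fakeRequest("Fetch").then(function (msg) {
    order.push(msg); // runs first
  });
}

function importUpdateStep() {
  return fakeRequest("ImportUpdate").then(function (msg) {
    order.push(msg); // runs only after fetchStep has resolved
  });
}

// Chaining: importUpdateStep is not started until fetchStep completes.
var done = fetchStep().then(importUpdateStep);
```

Since $.ajax also returns a promise-like jqXHR object, the same chaining style carries over to the question's code: return the jqXHR from Fetch and attach ImportUpdate with .then or .done.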

Answer 1 (score: 1)

Put the call to the second function inside the success callback function of the first.
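For completeness, the same sequencing can also be written with async/await (ES2017+, so not available in every browser at the time of the question). This is a sketch, and request is a made-up stand-in for an AJAX call rather than anything from the original code:

```javascript
// Sketch: async/await version of "start ImportUpdate only after Fetch
// completes". request() is a made-up stand-in for an AJAX call.
function request(name) {
  return new Promise(function (resolve) {
    setTimeout(function () { resolve(name + " done"); }, 10);
  });
}

async function onButtonClick() {
  var log = [];
  log.push(await request("Fetch"));        // completes first
  log.push(await request("ImportUpdate")); // starts only afterwards
  return log;
}

var result = onButtonClick();
```

Each await suspends the function until that promise resolves, so the two requests run strictly one after the other instead of in parallel.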