SAS: Improve efficiency of a cross join

Time: 2019-03-19 14:37:42

Tags: sas proc-sql cross-join

In my project, I am combining three unique input sources to generate one score. Imagine this formula:

Integrated score = weight_1 * Score_1 + weight_2 * Score_2 + weight_3 * Score_3

So, to do this, I have utilised the following code:

    DATA w_matrix_t;
     /*Create a row count to identify the model weight combination*/
         RETAIN model_combination;
         model_combination = 0;

         DO n_1 = 0 TO 100 BY 1;
               DO n_2 = 0 TO 100 BY 1;
                    IF (100 - n_1 - n_2) ge 0 AND (100 - n_1 - n_2) le 100 THEN DO;
                         n_3 = 100 - n_1 - n_2;
                         model_combination+1;
                         output;
                    END;
               END;
         END;
    RUN;

    DATA w_matrix;
        SET w_matrix_t;
        w_1 = n_1/100;
        w_2 = n_2/100;
        w_3 = n_3/100;
        /*Drop the old variables*/
        DROP n_1 n_2 n_3;
    RUN;
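For completeness, the two weight-generation steps above can be collapsed into a single DATA step; a minimal equivalent sketch, where the inner loop bound replaces the IF check:

    DATA w_matrix;
        /*Generate the fractional weights directly; n_2 <= 100 - n_1 replaces the IF test*/
        model_combination = 0;
        DO n_1 = 0 TO 100;
            DO n_2 = 0 TO (100 - n_1);
                model_combination + 1;
                w_1 = n_1/100;
                w_2 = n_2/100;
                w_3 = (100 - n_1 - n_2)/100;
                OUTPUT;
            END;
        END;
        DROP n_1 n_2;
    RUN;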

PROC SQL; 
    CREATE TABLE weights_added AS 
    SELECT
          w.model_combination
        , w.w_1
        , w.w_2
        , w.w_3
        , fit.name
        , fit.logsalary
        , (
            w.w_1*fit.crhits    + 
            w.w_2*fit.natbat    + 
            w.w_3*fit.nbb
        )                               AS y_hat_int
    FROM
        work.w_matrix                   AS w
    CROSS JOIN
        sashelp.baseball                AS fit
    ORDER BY
        model_combination;
QUIT;

My question is: is there a more efficient way of making this join? The purpose is to create a large table that contains the entire sashelp.baseball data set duplicated for every combination of weights.

In my live data, I have three input sources of 46,000 observations each, and that cross join takes one hour. I also have three input sources of 465,000 observations each; I imagine those will take a very long time.

The reason I do it this way is that I calculate my Somers' D using PROC FREQ with BY-group processing (BY model_combination).
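For reference, a minimal sketch of that step, assuming a hypothetical binary target outcome_flag (not part of my real data description); the SMDCR keyword on the OUTPUT statement requests Somers' D C|R:

    PROC FREQ DATA=weights_added NOPRINT;
        BY model_combination;
        /*outcome_flag is a hypothetical binary target, used here only for illustration*/
        TABLES outcome_flag * y_hat_int / MEASURES;
        OUTPUT OUT=somers_d SMDCR;
    RUN;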

1 Answer:

Answer 0: (score: 2)

5,000 copies of a 500,000-row table would be a rather large table, with 2.5 billion rows.

Here is an example of DATA step stacking: one copy of have is output for each row of the weights data set. The example uses SET weights to process each weight (via the implicit loop) and SET have with POINT= / OUTPUT inside an explicit inner loop. The inner loop copies the data while computing the weighted sum.

data have;
  set sashelp.baseball (obs=200);  * keep it small for demonstration;
run;

data weights (keep=comboId w1 w2 w3);
  do i = 0 to 100;
    do j = 0 to 100;
      if (i + j) <= 100 then do;
        comboId + 1;
        w1 = i / 100;
        w2 = j / 100;
        w3 = (100 - i - j) / 100;
        output;
      end;
    end;
  end;
run;


data want (keep=comboid w1-w3 name logsalary y_hat_int);
  /* Outer explicit loop: read each weight combination from WEIGHTS */
  do while (not endOfWeights);
    set weights end = endOfWeights;
    /* Inner explicit loop: copy every row of HAVE for this weight combination */
    do row = 1 to RowsInHave;
      set have (keep=name logsalary crhits natbat nbb) nobs = RowsInHave point = row;
      y_hat_int = w1 * crhits + w2 * natbat + w3 * nbb;
      output;
    end;
  end;
  stop;  /* required with POINT=; ends the step after one pass */
run;

proc freq data=want noprint;
  by comboId;
  table y_hat_int / out=freqout;
  format y_hat_int 4.;  * bin the scores by their formatted (integer) value;
run;

proc contents data=want;
run;

Off the cuff, a single table containing 5,151 copies of a 200-row extract from baseball is nominally 72.7 MB, so expect 5,151 copies of a 465K-row table to have ~2.4 billion rows and take ~170 GB of disk. On a spinning disk at 7200 rpm, expect a best case of 20 minutes just for the writing, and probably more.
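The arithmetic behind that estimate, as a rough sanity check (the ~70 bytes-per-row figure is derived here from the 72.7 MB demo table; it is an assumption, not a number from the original answer):

data _null_;
  * ~70 bytes per observation, derived from the 72.7MB / (5,151 x 200 row) demo table;
  bytes_per_row = 72.7e6 / (5151 * 200);
  * 5,151 weight combinations times 465,000 rows is ~2.4 billion rows;
  rows = 5151 * 465000;
  * roughly 170 GB total;
  gigabytes = rows * bytes_per_row / 1e9;
  put bytes_per_row= 8.1 rows= comma15. gigabytes= comma8.1;
run;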