First of all, I'd like to thank the whole community - I've learned a great deal just from reading through the forums.
I have a question about grouping by variables and would appreciate some help.
I have a dataset with several columns. I want to group the data in such a way that, within each group, the mean of the first column, Banding, is as close to 100% as possible. For example, suppose the data looks like this:
Banding Gender Country Type BirthYear Salary
220.9% Male Canada alpha 1962 7,779.15
112.2% Male Canada alpha 1946 1,355.64
80.8% Male Canada alpha 1959 24,535.52
83.7% Male Canada alpha 1943 3,961.32
112.6% Male Canada alpha 1965 17,388.12
146.2% Male Canada beta 1943 2,915.33
54.8% Male Canada beta 1949 5,005.50
138.6% Male Canada beta 1949 17,297.12
141.5% Male Canada beta 1942 494.52
137.0% Male Canada beta 1943 2,054.52
54.0% Male UStates alpha 1940 208.56
62.1% Male UStates alpha 1946 1,216.68
19.5% Male UStates alpha 1960 5,589.45
134.6% Male UStates alpha 1959 5,928.50
39.6% Male UStates alpha 1952 4,486.02
149.5% Male UStates beta 1954 3,427.36
95.6% Male UStates beta 1940 313.10
113.7% Male UStates beta 1942 927.00
120.4% Male UStates beta 1954 3,408.36
170.7% Male UStates beta 1937 606.60
88.1% Male Canada alpha 1941 727.67
201.1% Male Canada alpha 1946 1,715.88
347.3% Male Canada alpha 1969 1,438.92
380.3% Male Canada alpha 1941 282.60
506.2% Male Canada alpha 1942 1,167.48
418.7% Female Canada beta 1943 934.40
109.0% Female Canada beta 1952 4,831.43
223.7% Female Canada beta 1953 2,161.06
193.8% Female Canada beta 1954 5,119.91
83.9% Female Canada beta 1963 14,716.20
76.3% Female UStates alpha 1960 6,255.56
241.6% Female UStates alpha 1944 1,567.68
79.9% Female UStates alpha 1942 622.77
42.8% Female UStates alpha 1952 2,149.20
78.0% Female UStates alpha 1951 2,689.20
65.7% Female UStates beta 1951 11,721.19
179.7% Female UStates beta 1923 1,362.00
136.0% Female UStates beta 1945 528.48
74.1% Female UStates beta 1966 25,290.89
127.1% Female UStates beta 1963 7,451.59
19.2% Female Canada alpha 1942 2,070.19
116.2% Female Canada alpha 1936 298.66
118.6% Female Canada alpha 1958 428.28
108.1% Female Canada alpha 1954 3,610.08
99.1% Female Canada alpha 1943 519.48
135.9% Female UStates beta 1940 63.96
144.2% Female UStates beta 1968 23,851.96
119.3% Female UStates beta 1936 1,376.76
112.9% Female UStates beta 1951 2,527.56
129.0% Female UStates beta 1949 1,061.88
I would like to get output that looks like the second table below. In that table, the procedure has decided that BirthYear is not a significant variable, and it has also binned Salary so that the 'Banding' mean of each bin is as close to 100 as possible. Using every variable is not essential, but it would be nice to have at least a minimum number of samples in each grouping.
At the moment I use a series of pivot tables in Excel and a CART analysis in R to get the banding close to 100. This takes a lot of trial and error and a great deal of time (the real dataset contains many variables and over 50,000 rows). A rough sketch of that rpart-based attempt is shown after the table below.
Gender Country Type Salary Banding
Male Canada and Ustates Alpha <1000 112.5
Male Canada and Ustates Alpha 1000-4000 117
Male Canada and Ustates Alpha >4000 108
Male Canada and Ustates Beta <1000 110
Male Canada and Ustates Beta 1000-4000 98
Male Canada and Ustates Beta >4000 97
Female Canada Alpha <1000 100
Female Canada Alpha 1000-4000 115
Female Canada Alpha >4000 117.5
Female Canada Beta <1000 118
Female Canada Beta 1000-4000 110
Female Canada Beta >4000 115
Female Ustates Alpha <1000 102
Female Ustates Alpha 1000-4000 99
Female Ustates Alpha >4000 101
Female Ustates Beta <1000 116
Female Ustates Beta 1000-4000 102
Female Ustates Beta >4000 98
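For reference, a minimal sketch of the kind of rpart banding I have been trying looks roughly like this. It assumes the example data above is already loaded in a data.frame called dat, and the cp/minbucket settings are only illustrative, not tuned:

library(rpart)
library(dplyr)

# Banding comes in as strings like "220.9%", so strip the "%" and convert
dat$Banding <- as.numeric(gsub("%", "", dat$Banding))

# Fit a regression tree on Banding; its terminal nodes become candidate groups
fit <- rpart(Banding ~ Gender + Country + Type + BirthYear + Salary,
             data = dat,
             control = rpart.control(cp = 0.01, minbucket = 5))

# Assign each row to its terminal node and check how close each group's
# mean Banding is to 100
dat$group <- fit$where
dat %>%
  group_by(group) %>%
  summarize(avg_banding = mean(Banding), n = n())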
Thanks, everyone. Any help is appreciated.
Happy coding.
Answer (score: 1)
You should think about exactly what it is you want to optimize, and then define it mathematically. What constraints do you have, and how much weight do you give to each objective, and so on? That is what will let you find the grouping that is optimal for your purposes, rather than just any grouping.
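For instance, one possible formalization (just a sketch; it happens to be the cost the random-search code below minimizes, with $m$ standing in for a minimum group size) is to choose an assignment of rows to groups $G_1, \dots, G_K$ that solves

$$\min_{G_1,\dots,G_K} \sum_{k=1}^{K} \left( \frac{1}{|G_k|} \sum_{i \in G_k} \mathrm{Banding}_i - 100 \right)^2 \quad \text{subject to} \quad |G_k| \ge m.$$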
Here is one approach using random search:
library(dplyr)

# dat is assumed to be a data.frame holding the example data above.
# Banding is stored as strings like "220.9%", so strip the "%" and convert.
dat$Banding <- gsub("\\%", "", dat$Banding) %>% as.numeric
band_vals <- matrix(dat$Banding, ncol = 1)

max_groups <- 20       # largest number of groups to try
min_groups <- 10       # smallest number of groups to try
min_group_size <- 2    # reject assignments with any group smaller than this
iters <- 100000

cost_vector <- rep(NA, iters)
best_cost <- Inf
# Number of groups to use in each iteration, drawn at random up front
n_groups <- sample(min_groups:max_groups, size = iters, replace = TRUE)

for (iter in 1:iters) {
  set.seed(iter)  # makes each iteration reproducible
  # Randomly assign every row to one of n_groups[iter] groups
  x <- sample(n_groups[iter], size = nrow(dat), replace = TRUE)
  if (any(table(x) < min_group_size)) next

  # Averaging matrix: row k holds weights 1/|group k| for the rows in group k,
  # so x_mat %*% band_vals gives each group's mean Banding
  x_mat <- matrix(0, nrow = n_groups[iter], ncol = nrow(dat))
  for (i in 1:length(x)) {
    x_mat[x[i], i] <- 1 / sum(x == x[i])
  }

  # Cost: sum of squared deviations of the group means from 100
  cost <- sum(((x_mat %*% band_vals) - 100)^2)
  if (cost < best_cost) {
    best_cost <- cost
    best_x <- x
  }
  cost_vector[iter] <- best_cost  # track the best cost seen so far
}

dat$group <- best_x
plot(na.omit(cost_vector), type = "l")
dat %>% group_by(group) %>% summarize(avg_banding = mean(Banding), n = n())
group avg_banding n
<int> <dbl> <int>
1 1 114 2
2 2 153 6
3 3 114 4
4 4 120 3
5 5 170 10
6 6 154 2
7 7 138 2
8 8 57.6 2
9 9 100 2
10 10 119 3
11 11 134 3
12 12 176 6
13 13 127 3
14 14 95.8 2
Cost over time: