TensorFlow / Keras: How do I get meaningful loss values from my generalized dice loss function?

Asked: 2019-06-21 07:09:38

Tags: python tensorflow keras loss-function

I am trying to perform semantic segmentation with a generalized dice loss function in the Keras API of TensorFlow 1.10 (using Python):


def generalized_dice_loss(onehots_true, logits):
    smooth = tf.constant(1e-17)
    # Not all of my pixels contain ground truth; I filter those pixels out, which
    # results in [num_gt_pixels, num_classes]-shaped labels and logits.
    onehots_true, logits = mask(onehots_true, logits)
    probabilities = tf.nn.softmax(logits)
    weights = 1.0 / (tf.reduce_sum(onehots_true, axis=0)**2)
    # Is this the correct way of dealing with inf values (the results of zero divisions)?
    weights = tf.clip_by_value(weights, 1e-17, 1.0 - 1e-7)
    numerator = tf.reduce_sum(onehots_true * probabilities, axis=0)
    numerator = tf.reduce_sum(weights * numerator)
    denominator = tf.reduce_sum(onehots_true + probabilities, axis=0)
    denominator = tf.reduce_sum(weights * denominator)
    loss = 1.0 - 2.0 * (numerator + smooth) / (denominator + smooth)
    return loss

However, I am struggling to get any meaningful loss value that is not always 1.

After computing the initial weights (one per class), they contain many inf values from zero divisions, since usually only a small subset of all classes appears in a sample image. I therefore clip the weights to the range [1e-17, 1 - 1e-7] (is this a good idea?), after which they look like this:

tf.Tensor(
[4.89021e-05 2.21410e-10 5.43187e-11 1.00000e+00 1.00000e+00 4.23855e-07
 5.87461e-09 3.13044e-09 2.95369e-07 1.00000e+00 1.00000e+00 2.22499e-05
 1.00000e+00 1.73611e-03 9.47212e-10 1.12608e-05 2.77563e-09 1.00926e-08
 7.74787e-10 1.00000e+00 1.34570e-07], shape=(21,), dtype=float32)

These seem fine to me, even though they are small. The numerators (tf.reduce_sum(onehots_true * probabilities, axis=0), before weighting) look like this:

tf.Tensor(
[3.42069e+01 0.00000e+00 9.43506e+03 7.88478e+01 1.50554e-02 0.00000e+00
 1.22765e+01 4.36149e-01 1.75026e+02 0.00000e+00 2.33183e+02 1.81064e-01
 0.00000e+00 1.60128e+02 1.48867e+04 0.00000e+00 3.87697e+00 4.49753e+02
 5.87062e+01 0.00000e+00 0.00000e+00], shape=(21,), dtype=float32)

These also seem reasonable, since they are essentially each label's size multiplied by how certain the network is about it (which can be low at the start of training). The denominators (tf.reduce_sum(onehots_true + probabilities, axis=0), before weighting) look fine as well. They are large, but that is to be expected, because each pixel's class probabilities sum to 1, so the denominators should add up to roughly the number of ground-truth pixels.

However, summing the numerators gives a very small total (~0.001, though it is occasionally in the single-digit range), whereas the denominator sums are huge. As a result, my final loss is exclusively 1 (tf.Tensor(1.0, shape=(), dtype=float32)), or something very close to it. How can I mitigate this effect and obtain stable gradients? I implemented pretty much the exact dice loss formula. What am I missing here?
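The saturation described above can be reproduced with a small numerical sketch (all numbers here are made-up toy values, and plain NumPy stands in for TensorFlow; this is not the poster's actual data). When only a few classes are present, the clipped weights of the absent classes are ~1 while present classes get weights of order 1/count², so the weighted denominator is dominated by absent classes, whose numerator terms are exactly zero:

```python
import numpy as np

# Toy setup: 100 ground-truth pixels, 3 classes, only class 0 present.
num_pixels, num_classes = 100, 3
onehots_true = np.zeros((num_pixels, num_classes))
onehots_true[:, 0] = 1.0

# An untrained network spreads probability roughly uniformly over classes.
probabilities = np.full((num_pixels, num_classes), 1.0 / num_classes)

counts = onehots_true.sum(axis=0)              # [100., 0., 0.]
with np.errstate(divide="ignore"):
    weights = 1.0 / counts**2                  # [1e-4, inf, inf]
weights = np.clip(weights, 1e-17, 1.0 - 1e-7)  # infs become ~1, as in the question

numerator = (weights * (onehots_true * probabilities).sum(axis=0)).sum()
denominator = (weights * (onehots_true + probabilities).sum(axis=0)).sum()
weighted_loss = 1.0 - 2.0 * numerator / denominator

# The same quantities without the per-class weights:
unweighted_loss = 1.0 - (2.0 * (onehots_true * probabilities).sum()
                         / (onehots_true + probabilities).sum())

print(weighted_loss)    # ~0.9999: absent classes dominate the denominator
print(unweighted_loss)  # ~0.667: a usable starting value with a gradient
```

In this toy case the weighted loss is pinned at ~0.9999 regardless of how the present class is predicted, which matches the behavior reported in the question.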

1 Answer:

Answer 0 (score: 0):

Apparently I need to omit the weights to get a workable loss function. I have no idea why the weights cannot be used, or what adding them would contribute. Follow-up question: https://stats.stackexchange.com/questions/414107/why-are-weights-being-used-in-generalized-dice-loss-and-why-cant-i
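For reference, dropping the weights from the question's function leaves a plain (unweighted) dice loss along these lines (a sketch in NumPy rather than TF 1.10, keeping the question's `smooth` constant):

```python
import numpy as np

def dice_loss_unweighted(onehots_true, probabilities, smooth=1e-17):
    """Dice loss as in the question, but without the 1/count**2 class weights."""
    numerator = (onehots_true * probabilities).sum()
    denominator = (onehots_true + probabilities).sum()
    return 1.0 - 2.0 * (numerator + smooth) / (denominator + smooth)

# A perfect one-hot prediction drives the loss to (almost exactly) 0...
perfect = np.eye(3)[[0, 1, 2, 0]]             # 4 pixels, 3 classes
print(dice_loss_unweighted(perfect, perfect))  # ~0.0

# ...while a uniform prediction leaves it well above 0.
uniform = np.full_like(perfect, 1.0 / 3.0)
print(dice_loss_unweighted(perfect, uniform))  # ~0.667
```

Unlike the weighted version, this loss moves smoothly between ~1 (bad predictions) and 0 (perfect predictions), which is consistent with the answer's observation that removing the weights makes the loss viable.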