Sort an object's keys by their values

Date: 2020-04-21 13:05:05

Tags: javascript arrays sorting

I need to sort an object's keys by their values, and I am using this code to do it:

function compare(a, b) { return dict[a] - dict[b] }
Object.keys(dict).sort(compare)

It works when every value is different, but if two keys have the same value it leaves them in the order they appear in the object. I want those ties sorted alphabetically instead, and I don't know how to do that.

dict = {a:1, d:4, c:2, b:4, e:5, f:3} should become:

{a:1, c:2, f:3, b:4, d:4, e:5 }

But instead I get:

{a:1, c:2, f:3, d:4, b:4, e:5 }

2 answers:

Answer 0 (score: 2)

You can change the compareFunction to use localeCompare:

dict[a] - dict[b] || a.localeCompare(b)

If dict[a] - dict[b] returns 0, the || falls through to the next expression and sorts the keys alphabetically.

Here is a snippet:

const dict = {a:1, d:4, c:2, b:4, e:5, f:3}

function compare(a, b) {
  return dict[a] - dict[b] || a.localeCompare(b)
}

const sorted = Object.keys(dict)
                      .sort(compare)
                      .reduce((acc, k) => (acc[k] = dict[k], acc), {})

console.log( sorted )
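The reduce rebuilds the object in the sorted key order, so sorted logs as {a: 1, c: 2, f: 3, b: 4, d: 4, e: 5}, matching the desired result (string keys keep their insertion order in modern engines).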

Answer 1 (score: 1)

So compare the keys themselves whenever the difference between their values is zero.
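A minimal sketch of that idea, assuming a plain string comparison on the keys is acceptable instead of localeCompare:

const dict = { a: 1, d: 4, c: 2, b: 4, e: 5, f: 3 }

const sortedKeys = Object.keys(dict).sort((a, b) => {
  const diff = dict[a] - dict[b]
  // values are equal: fall back to comparing the keys alphabetically
  if (diff === 0) return a < b ? -1 : a > b ? 1 : 0
  return diff
})

console.log(sortedKeys) // ["a", "c", "f", "b", "d", "e"]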
