How do I remove a class with jQuery?

Date: 2016-08-24 11:02:04

Tags: jquery show-hide removeclass

How do I remove the hidden class so that the ribbon is displayed?

HTML

<div id="alwaysInStockRibbon" class="ribbon-wrapper-productpage hidden">

CSS

.hidden {
    display: none !important;
    visibility: hidden !important;
}

I tried the following, without success.

jQuery

$(".hidden").remove();
$(".hidden").removeClass();

https://api.jquery.com/remove/

https://api.jquery.com/removeClass/


5 Answers:

Answer 0 (score: 2)

You need to pass the class name as an argument to removeClass so that it is removed from the matched set. From the docs:

"Remove a single class, multiple classes, or all classes from each element in the set of matched elements."

 $(".hidden").removeClass('hidden');
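
Under the hood, removeClass("hidden") simply drops that token from the element's class attribute, leaving the others intact. A plain-JavaScript sketch of that behavior (removeClassFrom is an illustrative helper for this answer, not jQuery's internal API):

```javascript
// Illustrative helper, not jQuery's API: remove one class token from a
// class attribute string, leaving the other classes intact.
function removeClassFrom(classAttr, name) {
  return classAttr
    .split(/\s+/)
    .filter((token) => token && token !== name)
    .join(" ");
}

// The ribbon's class attribute from the question, before and after:
console.log(removeClassFrom("ribbon-wrapper-productpage hidden", "hidden"));
// → "ribbon-wrapper-productpage"
```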

Answer 1 (score: 1)

$("#alwaysInStockRibbon").removeClass('hidden');

Answer 2 (score: 1)

You have to tell it which class you want to remove.

$(".hidden").removeClass("hidden");

Answer 3 (score: 1)

Try this: the removeClass method requires the name of the class to remove. To remove several classes at once, pass a space-separated list of class names.

$('.hidden').removeClass("hidden");

Answer 4 (score: -2)

Or you can do the following to show the ribbon:
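
The snippet originally posted with this answer is not recoverable, so what follows is a hypothetical reconstruction, not the poster's code. In a browser with jQuery loaded it would amount to `$("#alwaysInStockRibbon").removeClass("hidden").show()` (the element id comes from the question's HTML). It is modeled below on a stand-in element object so it runs outside a browser:

```javascript
// Hypothetical sketch, not the original answer's code. Note that .show()
// alone cannot reveal the ribbon: the .hidden rules carry !important,
// which wins over the inline style .show() sets, so the class has to be
// removed first. "ribbon" stands in for the real DOM element.
const ribbon = {
  className: "ribbon-wrapper-productpage hidden", // from the question's HTML
  style: { display: "" },                         // inline styles
};

// Equivalent of removeClass("hidden"): drop the token from the attribute.
const classes = new Set(ribbon.className.split(/\s+/).filter(Boolean));
classes.delete("hidden");
ribbon.className = [...classes].join(" ");

// Equivalent of .show(): clear any inline display:none left by .hide().
ribbon.style.display = "";

console.log(ribbon.className); // → "ribbon-wrapper-productpage"
```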
