Disable Owl Carousel at specific viewport widths

Asked: 2015-01-31 12:34:41

Tags: javascript jquery css carousel owl-carousel

I am currently using Owl Carousel to display a gallery on desktop/laptop-sized devices. On smaller devices, however, I would like to disable the plugin and stack the images on top of each other.

Is this possible? I know you can adjust Owl Carousel to show a certain number of items at given breakpoints, but I want to be able to turn it off completely and strip out all the generated divs and so on.

Here is the pen I am currently working with: http://codepen.io/abbasinho/pen/razXdN

HTML:

<div id="carousel">
  <div class="carousel-item" style="background-image: url(http://owlgraphic.com/owlcarousel/demos/assets/owl1.jpg);">
  </div>
  <div class="carousel-item" style="background-image: url(http://owlgraphic.com/owlcarousel/demos/assets/owl2.jpg);">
  </div>
  <div class="carousel-item" style="background-image: url(http://owlgraphic.com/owlcarousel/demos/assets/owl3.jpg);">
  </div>
</div>

CSS:

#carousel {
  width: 100%;
  height: 500px;
}

.carousel-item {
  width: 100%;
  height: 500px;
  background-size: cover;
  background-repeat: no-repeat;
  background-position: center center;
}

jQuery:

$("#carousel").owlCarousel({
      navigation : true,
      slideSpeed : 300,
      paginationSpeed : 400,
      singleItem: true
});

Any help is, as always, much appreciated!

5 answers:

Answer 0 (score: 19)

I needed to disable the plugin when the screen was wider than 854px; you can of course change the breakpoint to suit your needs. Here is my solution:

  1. Check the viewport width before initializing the plugin.
  2. If the width is right for the plugin, initialize it. Otherwise add an 'off' class to mark that the carousel is not initialized (or has been destroyed).
  3. On resize:
    3.1. If the width is right for the plugin and it has not been initialized yet, or was destroyed earlier (I use the 'off' class to track this), initialize it.
    3.2. If the width is wrong for the plugin and it is currently active, destroy it with Owl's destroy.owl.carousel trigger event, then unwrap the leftover 'owl-stage-outer' wrapper element.
    $(function() {
        var owl = $('.owl-carousel'),
            owlOptions = {
                loop: false,
                margin: 10,
                responsive: {
                    0: {
                        items: 2
                    }
                }
            };
    
        if ( $(window).width() < 854 ) {
            var owlActive = owl.owlCarousel(owlOptions);
        } else {
            owl.addClass('off');
        }
    
        $(window).resize(function() {
            if ( $(window).width() < 854 ) {
                if ( $('.owl-carousel').hasClass('off') ) {
                    var owlActive = owl.owlCarousel(owlOptions);
                    owl.removeClass('off');
                }
            } else {
                if ( !$('.owl-carousel').hasClass('off') ) {
                    owl.addClass('off').trigger('destroy.owl.carousel');
                    owl.find('.owl-stage-outer').children(':eq(0)').unwrap();
                }
            }
        });
    });
    

And a bit of CSS to display the disabled Owl elements:

    .owl-carousel.off {
        display: block;
    }
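
One refinement worth noting: the resize handler above runs on every resize event, which can fire dozens of times per second while the window edge is being dragged. A small debounce wrapper (a generic sketch, not part of Owl's API; the 150ms wait is an arbitrary choice) keeps the width check from re-running constantly:

```javascript
// Generic debounce helper: returns a wrapped function that only runs
// after `wait` ms have passed with no further calls.
function debounce(fn, wait) {
    var timer = null;
    return function () {
        var ctx = this, args = arguments;
        clearTimeout(timer);
        timer = setTimeout(function () {
            fn.apply(ctx, args);
        }, wait);
    };
}
```

Usage in the browser would then be something like `$(window).resize(debounce(checkWidth, 150));`, where `checkWidth` holds the init/destroy logic from the resize handler above.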
    

Answer 1 (score: 7)

It is easier to use a CSS-based solution:

@media screen and (max-width: 992px) {
  .owl-item.cloned{
    display: none !important;
  }
  .owl-stage{
    transform:none !important;
    transition: none !important;
    width: auto !important;
  }
  .owl-item{
    width: auto !important;
  }
}

Answer 2 (score: 4)

mcmimik's answer is correct, but I had to make one change to get it working for me.

In this function:

    $(window).resize(function () {
        if ($(window).width() < 641) {
            if ($('.owl-carousel').hasClass('off')) {
                var owlActive = owl.owlCarousel(owlOptions);
                owl.removeClass('off');
            }
        } else {
            if (!$('.owl-carousel').hasClass('off')) {
                owl.addClass('off').trigger('destroy.owl.carousel');
                owl.find('.owl-stage-outer').children(':eq(0)').unwrap();
            }
        }
    });

swap owl.addClass('off').trigger('destroy.owl.carousel'); for owl.addClass('off').data("owlCarousel").destroy();, giving:

    $(window).resize(function () {
        if ($(window).width() < 641) {
            if ($('.owl-carousel').hasClass('off')) {
                var owlActive = owl.owlCarousel(owlOptions);
                owl.removeClass('off');
            }
        } else {
            if (!$('.owl-carousel').hasClass('off')) {
                owl.addClass('off').data("owlCarousel").destroy();
                owl.find('.owl-stage-outer').children(':eq(0)').unwrap();
            }
        }
    });
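
As an alternative to checking $(window).width() on every resize, the same enable/disable decision can be driven by window.matchMedia, which only fires when the breakpoint is actually crossed. The sketch below factors the decision into a plain function; initCarousel and destroyCarousel are hypothetical wrappers around the init/destroy calls shown in the answers above, and the 641px breakpoint mirrors this answer:

```javascript
// Dispatch on a MediaQueryList-like object: init the carousel when the
// narrow-viewport query matches, destroy it otherwise.
function onBreakpointChange(mq, initCarousel, destroyCarousel) {
    if (mq.matches) {
        initCarousel();    // viewport is narrow enough for the carousel
    } else {
        destroyCarousel(); // viewport is wide: stack the images instead
    }
}

// Browser usage (sketch):
// var mq = window.matchMedia('(max-width: 641px)');
// mq.addListener(function () { onBreakpointChange(mq, init, destroy); });
// onBreakpointChange(mq, init, destroy); // run once on page load
```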

Answer 3 (score: 3)


Working fiddle: https://jsfiddle.net/j6qk4pq8/187/

Answer 4 (score: 0)

Simple: use jQuery. Add a class to the carousel's container div, e.g. <div id="carousel" class="is_hidden">, along with some CSS such as .is_hidden { display: none; }. Then use jQuery to add or remove that class depending on the window width.
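
The approach described above can be sketched as follows. The is_hidden class comes from the answer; the 768px breakpoint and the shouldHide helper name are assumptions for illustration:

```javascript
// Pure helper: should the container get the hiding class at this width?
// (Keeping the decision separate from the DOM work makes it easy to test.)
function shouldHide(width, breakpoint) {
    return width < breakpoint;
}

// Browser usage (sketch, jQuery):
// $(window).on('resize', function () {
//     $('#carousel').toggleClass('is_hidden',
//         shouldHide($(window).width(), 768));
// }).trigger('resize'); // run once on load as well
```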