How to target landscape and portrait orientations on mobile devices

Date: 2013-10-02 10:29:06

Tags: css twitter-bootstrap media-queries stylesheet

I'm using Bootstrap to build a client's site, and so far I've failed to target landscape and portrait orientations on mobile devices so that I can add some orientation-specific styles to each viewport. How do I target portrait and landscape in my mobile styles? I need certain styles at the 320px breakpoint and others at the 480px breakpoint, and with my current media queries this isn't working. At the moment my stylesheet contains the following:

/* portrait phones */
@media only screen and (max-width: 320px) and (orientation:portrait) {
    /* Styles */
}

/* landscape phones */
@media only screen and (min-width: 321px) and (orientation:landscape) {
    /* Styles */
}

I've put styles in for landscape, but I don't think they're being picked up. Every time I make a change and then refresh my iPhone, I see no difference. I'm guessing my media queries are wrong? If there's a better way to target mobile states, I'd really appreciate any help.

2 answers:

Answer 0 (score: 1)

I eventually managed to solve this by adding a max-width to my 321px media query, which let me target both the landscape and portrait mobile orientations. I also found the following in my header, which seemed to be causing the problem; after removing it, I was able to target the mobile breakpoints I needed:

initial-scale=1
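The answer doesn't show the corrected queries themselves, but based on the fix it describes, adding a max-width to the 321px query would look something like this (the 480px upper bound is an assumption taken from the breakpoints mentioned in the question):

```css
/* portrait phones */
@media only screen and (max-width: 320px) and (orientation: portrait) {
    /* Styles */
}

/* landscape phones: with a max-width added, this query no longer
   also matches tablets and desktops (480px is an assumed upper bound) */
@media only screen and (min-width: 321px) and (max-width: 480px) and (orientation: landscape) {
    /* Styles */
}
```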

Answer 1 (score: 0)

Try using:

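One common approach, sketched here with the 480px breakpoint assumed from the question (not necessarily the answerer's exact code), is to pair a device-width condition with the `orientation` feature so each query fires only on phones in the matching orientation:

```css
/* a minimal sketch, assuming phones top out at 480px device width */
@media only screen and (max-device-width: 480px) and (orientation: portrait) {
    /* portrait-only styles */
}

@media only screen and (max-device-width: 480px) and (orientation: landscape) {
    /* landscape-only styles */
}
```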