Run an interactive script at startup and display it on the monitor attached to the default tty

Date: 2017-01-10 19:49:24

Tags: linux bash startup tty

I have configured CentOS 6 to log in automatically on every boot.

I modified /etc/init/tty.conf to achieve this, and it works well.

Contents of /etc/init/tty.conf:

stop on runlevel [S016]

respawn
instance $TTY
#exec /sbin/mingetty $TTY
exec /sbin/mingetty --autologin root $TTY
usage 'tty TTY=/dev/ttyX  - where X is console id'
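As background (my note, not part of the original question): with Upstart, the instance $TTY stanza turns tty into a per-console instance job, so one copy runs for each console the system starts. You can inspect the running instances with initctl; the /dev/tty2 key below is only an example.

# List running Upstart jobs; each console shows up as its own
# instance of the tty job, e.g. "tty (/dev/tty2) start/running".
initctl list | grep '^tty'

# Query a single instance by its TTY key (example device).
initctl status tty TTY=/dev/tty2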

Then I configured ~/.bash_profile to run the script. See its contents below.

# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH


echo "This is one time" >/tmp/one.txt

As you can see above, I echo text into the file /tmp/one.txt, and the expected text should appear in that file only once. But for some reason the script is executed 3 times.

If I watch the file with tail -f, the output shows that the script executed 3 times:

tail -f /netboot/tmp/one.txt
This is one time
tail: /netboot/tmp/one.txt: file truncated
This is one time
tail: /netboot/tmp/one.txt: file truncated
This is one time
tail: /netboot/tmp/one.txt: file truncated
This is one time
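The repeated "file truncated" messages appear because the > redirection rewrites the file on every login. A minimal diagnostic sketch (my suggestion, not from the original post) is to append with >> instead and record which terminal each run came from:

# Hypothetical diagnostic line for ~/.bash_profile: append (>>) instead
# of truncating (>), and log the terminal and time of each invocation.
echo "$(date '+%F %T') profile ran on $(tty)" >> /tmp/one.txt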

What can I do to stop it from executing multiple times? I just want it to run once.

Thanks for reading this post.

1 Answer:

Answer 0: (score: 0)

I had to remove the respawn and instance $TTY lines from the /etc/init/tty.conf file.

Before the fix, /etc/init/tty.conf looked like this:

stop on runlevel [S016]

respawn
instance $TTY
#exec /sbin/mingetty $TTY
exec /sbin/mingetty --autologin root $TTY
usage 'tty TTY=/dev/ttyX  - where X is console id'

After the fix, /etc/init/tty.conf looks like this:

stop on runlevel [S016]

exec /sbin/mingetty --autologin root $TTY
usage 'tty TTY=/dev/ttyX  - where X is console id'
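After editing an Upstart job file you can apply the change without rebooting. These are standard Upstart commands; the TTY=/dev/tty1 key is an example, and you should run them from a different console than the one you restart:

# Re-read the job files under /etc/init, then restart one console instance.
initctl reload-configuration
initctl stop tty TTY=/dev/tty1
initctl start tty TTY=/dev/tty1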

This solved the problem I explained above.
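If you ever need to keep respawn (for example, so that logging out brings the login prompt back), an alternative is to guard the one-time work inside ~/.bash_profile itself. A minimal sketch, assuming /dev/shm is a tmpfs emptied at each boot (the CentOS 6 default); the flag file name is made up for illustration:

# Run the one-time work only on the first autologin after boot.
FLAG=/dev/shm/.one-time-done   # hypothetical flag file, lost on reboot
if [ ! -e "$FLAG" ]; then
    touch "$FLAG"
    echo "This is one time" > /tmp/one.txt
fi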