Adding a while loop to my script causes an input error

Time: 2016-07-17 18:05:46

Tags: python python-3.x

I'm writing some code to play Hangman (Python 3.5.2). I want my code to run forever, e.g. using while 1 < 2:, but I started getting errors on statements that worked fine without the while. Here is my code:

with open('dictionary.txt') as f:
  words = f.read().splitlines()
alphabet = 'abcdefghijklmnopqrstuvwxyz'
words2 = ''
alphabetCount = []
guesses = []
input = input('Input: ')
for x in range(0, len(words)):
  valid = True
  if len(input) == len(words[x]):
    for y in range(-1, len(input)-1):
      if input[y] != words[x][y] and input[y] != '_':
        valid = False
    if valid:
      words2 = words2 + (words[x])
for x in range(0, 26):
  alphabetCount.append(0)
for x in range(0, len(words2)):
  alphabetCount[alphabet.index(words2[x])] = alphabetCount[alphabet.index(words2[x])] + 1
for z in range(0, 26):
  if max(alphabetCount) != 0 and (alphabet[alphabetCount.index(max(alphabetCount))]) not in input:
    guesses.append(alphabet[alphabetCount.index(max(alphabetCount))])
  alphabetCount[alphabetCount.index(max(alphabetCount))] = 0
print (guesses)

Basically, I want to loop it like this:

while 1 < 2:
  with open('dictionary.txt') as f:
    words = f.read().splitlines()
  alphabet = 'abcdefghijklmnopqrstuvwxyz'
  words2 = ''
  alphabetCount = []
  guesses = []
  input = input('Input: ')
  for x in range(0, len(words)):
    valid = True
    if len(input) == len(words[x]):
      for y in range(-1, len(input)-1):
        if input[y] != words[x][y] and input[y] != '_':
          valid = False
      if valid:
        words2 = words2 + (words[x])
  for x in range(0, 26):
    alphabetCount.append(0)
  for x in range(0, len(words2)):
    alphabetCount[alphabet.index(words2[x])] = alphabetCount[alphabet.index(words2[x])] + 1
  for z in range(0, 26):
    if max(alphabetCount) != 0 and (alphabet[alphabetCount.index(max(alphabetCount))]) not in input:
      guesses.append(alphabet[alphabetCount.index(max(alphabetCount))])
    alphabetCount[alphabetCount.index(max(alphabetCount))] = 0
  print (guesses)

1 Answer:

Answer 0 (score: 1)

The problem occurs because you mask the built-in function input() with an assignment:

input = input('Input: ')

For a single iteration this works fine, because you never call input = input('Input: ') again. Doing it more than once uses the masked value of input, which in your case is of type str; calling a str value raises a TypeError.
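This can be reproduced in isolation. A minimal sketch (not from the original post; the string 'he__o' is just an illustrative stand-in for what the first iteration might assign):

```python
# After the first loop iteration, the name "input" is rebound to a string,
# shadowing the built-in function. The second iteration's "call" then fails.
input = 'he__o'               # what input = input('Input: ') leaves behind
try:
    input = input('Input: ')  # second iteration: a str is not callable
except TypeError as e:
    print(e)                  # prints: 'str' object is not callable
```

The first iteration succeeds only because the name lookup finds the built-in; every later lookup finds the string bound by the previous iteration.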

Change the name to something else, for example:

user_input = input('Input: ')

To get rid of the TypeError, make sure you change the name throughout your code.

Remember never to mask user-defined names with built-in function names, or you will get strange behavior like this.
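As a hedged sketch of how the top of the loop might look after the rename (user_input is an illustrative name, and the interactive input() call and dictionary file are replaced with hard-coded stand-ins so the sketch runs on its own):

```python
# Sketch: with the variable renamed, the built-in input() is never shadowed,
# so the loop body can run any number of times.
words = ['hello', 'hexes', 'world']   # stand-in for dictionary.txt

while 1 < 2:
    user_input = 'he___'              # in the real script: input('Input: ')
    words2 = ''
    for word in words:
        # Same matching rule as the original: every guessed character must
        # either equal the dictionary character or be the blank '_'.
        if len(user_input) == len(word):
            if all(g == c or g == '_' for g, c in zip(user_input, word)):
                words2 += word
    print(words2)                     # prints: hellohexes
    break                             # break only so this sketch terminates
```

The matching condition here is the contrapositive of the original `if input[y] != words[x][y] and input[y] != '_': valid = False`, so the behavior is unchanged; only the variable name differs.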