I'm new to Python. I want to find the occurrences of Python keywords (['def', 'in', 'if', ...]) in the code below. However, any keyword found inside a string constant must be ignored. How can I count keyword occurrences without counting the keywords that appear inside strings?
def grade(result):
    '''
    if if (<--- example to test if the word "if" will be ignored in the counts)
    :param result: none
    :return: none
    '''
    if result >= 80:
        grade = "HD"
    elif 70 <= result:
        grade = "DI"
    elif 60 <= result:
        grade = "CR"
    elif 50 <= result:
        grade = "PA"
    else:
        # else (ignore this word)
        grade = "NN"
    return grade

result = float(raw_input("Enter a final result: "))
while result < 0 or result > 100:
    print "Invalid result. Result must be between 0 and 100."
    result = float(raw_input("Re-enter final result: "))
print "The corresponding grade is", grade(result)
Answer 0 (score: 2)
Use the tokenize, keyword, and collections modules.
tokenize.generate_tokens(readline)

The generate_tokens() generator requires one argument, readline, which must be a callable object that provides the same interface as the readline() method of built-in file objects (see the section on File Objects). Each call to the function should return one line of input as a string. Alternatively, readline may be a callable object that signals completion by raising StopIteration.

The generator produces 5-tuples with these members: the token type; the token string; a 2-tuple (srow, scol) of ints specifying the row and column where the token begins in the source; a 2-tuple (erow, ecol) of ints specifying the row and column where the token ends in the source; and the line on which the token was found. The line passed (the last tuple item) is the logical line; continuation lines are included.

New in version 2.2.
import tokenize

with open('source.py') as f:
    print list(tokenize.generate_tokens(f.readline))
Partial output:
[(1, 'def', (1, 0), (1, 3), 'def grade(result):\n'),
(1, 'grade', (1, 4), (1, 9), 'def grade(result):\n'),
(51, '(', (1, 9), (1, 10), 'def grade(result):\n'),
(1, 'result', (1, 10), (1, 16), 'def grade(result):\n'),
(51, ')', (1, 16), (1, 17), 'def grade(result):\n'),
(51, ':', (1, 17), (1, 18), 'def grade(result):\n'),
(4, '\n', (1, 18), (1, 19), 'def grade(result):\n'),
(5, ' ', (2, 0), (2, 4), " '''\n"),
(3,
'\'\'\'\n if if (<--- example to test if the word "if" will be ignored in the counts)\n :param result: none\n :return:none\n \'\'\'',
(2, 4),
(6, 7),
' \'\'\'\n if if (<--- example to test if the word "if" will be ignored in the counts)\n :param result: none\n :return:none\n \'\'\'\n'),
(4, '\n', (6, 7), (6, 8), " '''\n"),
(54, '\n', (7, 0), (7, 1), '\n'),
(1, 'if', (8, 4), (8, 6), ' if result >= 80:\n'),
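The integer at the start of each tuple is the numeric token type. If you want the symbolic names behind those numbers, the standard token module exposes a mapping (a small sketch; the exact numeric values can vary between Python versions, so look them up by name rather than hard-coding them):

```python
import token

# tok_name maps numeric token types back to their symbolic names,
# e.g. NAME for identifiers/keywords and STRING for string literals.
print(token.tok_name[token.NAME])    # NAME
print(token.tok_name[token.STRING])  # STRING
```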
You can retrieve the list of keywords from the keyword module:
import keyword
print keyword.kwlist
print keyword.iskeyword('def')
Putting it all together with collections.Counter:
import tokenize
import keyword
import collections

with open('source.py') as f:
    # tokens is a lazy generator, so it must be consumed before the file closes
    tokens = (token for _, token, _, _, _ in tokenize.generate_tokens(f.readline))
    c = collections.Counter(token for token in tokens if keyword.iskeyword(token))

print c  # Counter({'elif': 3, 'print': 2, 'return': 1, 'else': 1, 'while': 1, 'or': 1, 'def': 1, 'if': 1})
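The answer above is Python 2. For Python 3, the same approach works with print() as a function; here is a minimal self-contained sketch (the source string below is an assumed example rather than the questioner's file, and io.StringIO stands in for an open file):

```python
import collections
import io
import keyword
import tokenize

source = '''\
def grade(result):
    """if if inside a docstring should be ignored"""
    if result >= 80:
        return "HD"
    return "NN"
'''

# generate_tokens() accepts any readline callable over text,
# so a StringIO over the source string works like a file.
tokens = tokenize.generate_tokens(io.StringIO(source).readline)

# String literals arrive as single STRING tokens (quotes included),
# so iskeyword() is never true for words inside them.
counts = collections.Counter(
    tok.string for tok in tokens if keyword.iskeyword(tok.string)
)
print(counts)  # e.g. Counter({'return': 2, 'def': 1, 'if': 1})
```

The "if if" inside the docstring is not counted, because the whole docstring is one STRING token rather than a series of NAME tokens.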