Hi, I'm writing a program that counts how many times each word appears in a file and then prints a list of the words whose count is between 800 and 1000, sorted by count. I'm stuck on keeping a counter that checks whether the current word matches the next one until a new word appears. In main I try to open the file, read it word by word, and call sort inside the while loop to sort the vector. Then, in a for loop, I go through all the words and increment the count if the first word equals the second. I don't think this is the right way to maintain a counter.
Here is the code:
#include <string>
#include <iostream>
#include <fstream>
#include <vector>
#include <algorithm>
#include <set>

using namespace std;

vector<string> lines;
vector<int> second;
set<string> words;
multiset<string> multiwords;

void readLines(const char *filename)
{
    string line;
    ifstream infile;
    infile.open(filename);
    if (!infile)
    {
        cerr << filename << " cannot open" << endl;
        return;
    }
    getline(infile, line);
    while (!infile.eof())
    {
        lines.push_back(line);
        getline(infile, line);
    }
    infile.close();
}

int binary_search(vector<string> &v, int size, int value)
{
    int from = 0;
    int to = size - 1;
    while (from <= to)
    {
        int mid = (from + to) / 2;
        int mid_count = multiwords.count(v[mid]);
        if (value == mid_count)
            return mid;
        if (value < mid_count) to = mid - 1;
        else from = mid + 1;
    }
    return from;
}

int main()
{
    vector<string> words;
    string x;
    ifstream inFile;
    int count = 0;
    inFile.open("bible.txt");
    if (!inFile)
    {
        cout << "Unable to open file";
        exit(1);
    }
    while (inFile >> x){
        sort(words.begin(), words.end());
    }
    for(int i = 0;i < second.size();i++)
    {
        if(x == x+1)
        {
            count++;
        }
        else
            return;
    }
    inFile.close();
}
Answer 0 (score: 3)
Heh. I know that just showing you the solution straight away doesn't really help you much. Still, I went through your code and saw a lot of unused and confusing things. Here is what I would do instead:
#include <algorithm>
#include <fstream>
#include <functional>
#include <iostream>
#include <iterator>
#include <map>
#include <string>
#include <vector>

using namespace std;

// types
typedef std::pair<string, size_t> frequency_t;
typedef std::vector<frequency_t> words_t;

// predicates
static bool byDescendingFrequency(const frequency_t& a, const frequency_t& b)
{ return a.second > b.second; }

const struct isGTE // greater than or equal
{
    size_t inclusive_threshold;
    bool operator()(const frequency_t& record) const
    { return record.second >= inclusive_threshold; }
} over1000 = { 1001 }, // threshold 1001, i.e. count > 1000
  over800  = { 800 };

int main()
{
    words_t words;
    {
        map<string, size_t> tally;
        ifstream inFile("bible.txt");
        string s;
        while (inFile >> s)
            tally[s]++;

        // drop everything counted more than 1000 times, copy the rest into 'words'
        remove_copy_if(tally.begin(), tally.end(),
                       back_inserter(words), over1000);
    }

    // move the entries with count >= 800 to the front; 'end' marks the cut-off
    words_t::iterator begin = words.begin(),
                      end   = partition(begin, words.end(), over800);

    std::sort(begin, end, &byDescendingFrequency);

    for (words_t::const_iterator it = begin; it != end; it++)
        cout << it->second << "\t" << it->first << endl;
}
993 because
981 men
967 day
954 over
953 God,
910 she
895 among
894 these
886 did
873 put
868 thine
864 hand
853 great
847 sons
846 brought
845 down
819 you,
811 so
995 tuum
993 filius
993 nec
966 suum
949 meum
930 sum
919 suis
907 contra
902 dicens
879 tui
872 quid
865 Domine
863 Hierusalem
859 suam
839 suo
835 ipse
825 omnis
811 erant
802 se
Running this takes about 1.12 s for the two files together, but only 0.355 s after the map<> is replaced with an unordered (hash) map.
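For reference, that swap only touches the tallying container, because the entries are partitioned and sorted by frequency afterwards anyway. A minimal sketch, assuming C++11's std::unordered_map and the frequency_t/words_t typedefs from the code above:

#include <unordered_map>   // C++11 (TR1 offers a similar container pre-C++11)

// ... inside main(), replacing only the tallying block shown above:
{
    unordered_map<string, size_t> tally;   // hash map: no ordering by key, cheaper lookups
    ifstream inFile("bible.txt");
    string s;
    while (inFile >> s)
        tally[s]++;                        // same counting loop as before

    // iteration order is now unspecified, which is fine here: the entries
    // get partitioned and sorted by frequency afterwards, not by key
    remove_copy_if(tally.begin(), tally.end(),
                   back_inserter(words), over1000);
}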
Answer 1 (score: 2)
A more efficient approach can be implemented with a single map<string, int> of occurrence counts: read the words one by one and increment the counter in m[word]. Once all the words have been processed, iterate over the map and, for every word whose count falls in the given range, add it to a multimap<int, string>. Finally, dump the contents of the multimap, which will already be sorted by occurrence count and alphabetically...
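A minimal sketch of that approach (the bible.txt file name and the 800–1000 bounds come from the question; names such as m and byCount are just illustrative):

#include <fstream>
#include <iostream>
#include <map>
#include <string>
#include <utility>

int main()
{
    std::ifstream in("bible.txt");
    std::map<std::string, int> m;               // word -> occurrence count
    std::string word;
    while (in >> word)
        ++m[word];                              // count each word as it is read

    // collect the words in range, keyed by count so they come out sorted
    std::multimap<int, std::string> byCount;
    for (std::map<std::string, int>::const_iterator it = m.begin(); it != m.end(); ++it)
        if (it->second >= 800 && it->second <= 1000)
            byCount.insert(std::make_pair(it->second, it->first));

    // dump: ordered by count; equal counts appear in insertion (alphabetical) order
    for (std::multimap<int, std::string>::const_iterator it = byCount.begin();
         it != byCount.end(); ++it)
        std::cout << it->first << "\t" << it->second << "\n";
}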
Answer 2 (score: 2)
One solution could be to define a letter_only locale, so that punctuation coming from the stream is ignored and only valid "English" letters are read from the input stream. That way the stream treats "ways", "ways." and "ways!" as the same word "ways", because it skips punctuation characters such as "." and "!".
#include <algorithm>
#include <locale>
#include <vector>

struct letter_only: std::ctype<char>
{
    letter_only(): std::ctype<char>(get_table()) {}

    static std::ctype_base::mask const* get_table()
    {
        // classify everything as whitespace by default...
        static std::vector<std::ctype_base::mask>
            rc(std::ctype<char>::table_size, std::ctype_base::space);

        // ...except the upper- and lower-case letters (two fills, so the
        // characters between 'Z' and 'a' are not accidentally marked alpha)
        std::fill(&rc['A'], &rc['Z' + 1], std::ctype_base::alpha);
        std::fill(&rc['a'], &rc['z' + 1], std::ctype_base::alpha);
        return &rc[0];
    }
};
Then use it like this:
#include <cctype>
#include <fstream>
#include <iostream>
#include <iterator>
#include <map>
#include <string>

int main()
{
    std::map<std::string, int> wordCount;
    std::ifstream input;

    // read only English letters; everything else is classified as whitespace
    input.imbue(std::locale(std::locale(), new letter_only()));
    input.open("filename.txt");

    std::string word;
    while (input >> word)
    {
        std::string uppercase_word;              // fresh string each iteration
        std::transform(word.begin(),
                       word.end(),
                       std::back_inserter(uppercase_word),
                       (int(&)(int))std::toupper); // the cast is needed to pick the right overload
        ++wordCount[uppercase_word];
    }

    for (std::map<std::string, int>::iterator it = wordCount.begin();
         it != wordCount.end();
         ++it)
    {
        std::cout << "word = " << it->first
                  << " : count = " << it->second << std::endl;
    }
}
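To tie this back to the original question, the printing loop above could be replaced with a filter for the 800–1000 range, reusing the multimap idea from the previous answer. The bounds and the multimap step are an illustrative addition, not part of this answer's code:

    // in place of the printing loop: keep only counts in [800, 1000],
    // keyed by count so the output comes out sorted by frequency
    std::multimap<int, std::string> inRange;
    for (std::map<std::string, int>::iterator it = wordCount.begin();
         it != wordCount.end(); ++it)
    {
        if (it->second >= 800 && it->second <= 1000)
            inRange.insert(std::make_pair(it->second, it->first)); // make_pair lives in <utility>
    }

    for (std::multimap<int, std::string>::iterator it = inRange.begin();
         it != inRange.end(); ++it)
        std::cout << it->first << "\t" << it->second << std::endl;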
Answer 3 (score: 0)
Just for fun, I put together a solution in C++0x style using Boost MultiIndex.
Without the auto keyword (type inference), this style would be really clumsy.
Because the container maintains indexes both by word and by frequency at all times, there is no need to remove, partition or sort the word list: it is simply all there.
Compile and run with:
g++ --std=c++0x -O3 test.cpp -o test
curl ftp://ftp.funet.fi/pub/doc/bible/texts/english/av.tar.gz |
tar xzO | sed 's/^[ 0-9:]\+//' > bible.txt
time ./test
#include <boost/foreach.hpp>
#include <boost/lambda/bind.hpp>
#include <boost/lambda/lambda.hpp>
#include <boost/multi_index_container.hpp>
#include <boost/multi_index/ordered_index.hpp>
#include <boost/multi_index/member.hpp>
#include <fstream>
#include <iostream>
#include <string>

using namespace std;

struct entry
{
    string word;
    size_t freq;

    void increment() { freq++; }
};

struct byword {}; // TAG
struct byfreq {}; // TAG

int main()
{
    using ::boost::lambda::_1;
    using namespace ::boost::multi_index;

    // container indexed both alphabetically (by word) and by frequency
    multi_index_container<entry, indexed_by< // sequenced<>,
        ordered_unique    <tag<byword>, member<entry,string,&entry::word> >, // alphabetically
        ordered_non_unique<tag<byfreq>, member<entry,size_t,&entry::freq> >  // by frequency
    > > tally;

    ifstream inFile("bible.txt");
    string s;
    while (inFile >> s)
    {
        auto& lookup = tally.get<byword>();
        auto it = lookup.find(s);

        if (lookup.end() != it)
            lookup.modify(it, boost::lambda::bind(&entry::increment, _1)); // bump the count in place
        else
            lookup.insert({s, 1});
    }

    // the frequency index is already ordered, so the [800, 1000] range comes out sorted
    BOOST_FOREACH(auto e, tally.get<byfreq>().range(800 <= _1, _1 <= 1000))
        cout << e.freq << "\t" << e.word << endl;
}
Note the use of an entry type instead of std::pair (for obvious reasons). Also note that this is slower than my earlier code: it maintains the index by frequency during the insertion phase. That is not strictly necessary, but it makes extracting the [800, 1000] range more efficient:
tally.get<byfreq>().range(800 <= _1, _1 <= 1000)
is already an ordered multiset of frequencies. So the actual speed/memory trade-off may well favour this version, especially when documents are large and contain few repeated words (alas, a property known not to hold for a Bible corpus, unless someone translates it into Newspeak).