I'm trying to create a simple chatbot using a Markov chain. I can successfully build the dictionary from the patterns in the input text, but I can't figure out how to use it to generate sentences.
import java.text.BreakIterator;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

final class MarkovChain {

    private static final BreakIterator sentenceIterator = BreakIterator.getSentenceInstance();
    private static final BreakIterator wordIterator = BreakIterator.getWordInstance();
    private static final Map<String, List<String>> dictionary = new TreeMap<>();

    public static void addDictionary(String string) {
        string = string.toLowerCase().trim();
        for (final String sentence : splitSentences(string)) {
            String lastWord = null, lastLastWord = null;
            for (final String word : splitWords(sentence)) {
                if (lastLastWord != null) {
                    final String key = lastLastWord + ' ' + lastWord;
                    List<String> value = dictionary.get(key);
                    if (value == null)
                        value = new ArrayList<>();
                    value.add(word);
                    dictionary.put(key, value);
                }
                lastLastWord = lastWord;
                lastWord = word;
            }
        }
    }

    private static List<String> splitSentences(final String string) {
        sentenceIterator.setText(string);
        final List<String> sentences = new ArrayList<>();
        for (int start = sentenceIterator.first(), end = sentenceIterator.next();
                end != BreakIterator.DONE;
                start = end, end = sentenceIterator.next()) {
            sentences.add(string.substring(start, end).trim());
        }
        return sentences;
    }

    private static List<String> splitWords(final String string) {
        wordIterator.setText(string);
        final List<String> words = new ArrayList<>();
        for (int start = wordIterator.first(), end = wordIterator.next();
                end != BreakIterator.DONE;
                start = end, end = wordIterator.next()) {
            String word = string.substring(start, end).trim();
            if (word.length() > 0 && Character.isLetterOrDigit(word.charAt(0)))
                words.add(word);
        }
        return words;
    }
}
How can I generate sentences from this dictionary?
Answer (score: 1):
Here is how I would change the code to generate sentences. I added a Map<String, List<String>> singleWords that maps a previous word to the list of possible next words, and I fill this map in the same loop that iterates over the words of a sentence. Additionally, I add dots on both sides of each sentence's word list, so that the special states "before the first word" and "after the last word" are registered (see addDots(...)).
import java.nio.charset.Charset;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.text.BreakIterator;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Random;
import java.util.TreeMap;

final class MarkovChain {

    private static final BreakIterator sentenceIterator = BreakIterator.getSentenceInstance();
    private static final BreakIterator wordIterator = BreakIterator.getWordInstance();
    // maps a single previous word to the list of words that have followed it;
    // the key "." is used to pick the first word of a sentence
    private static final Map<String, List<String>> singleWords = new TreeMap<>();
    // maps a two-word prefix ("w1 w2") to the list of words that have followed it
    private static final Map<String, List<String>> dictionary = new TreeMap<>();

    public static void main(String[] args) throws Exception {
        String text = new String(Files.readAllBytes(Paths.get("text.txt")), Charset.defaultCharset());
        addDictionary(text);
        StringBuilder output = new StringBuilder();
        generateSentence(singleWords, dictionary, output, 5);
        System.out.println(output.toString());
    }

    public static void addDictionary(String string) {
        string = string.toLowerCase().trim();
        for (final String sentence : splitSentences(string)) {
            String lastWord = null, lastLastWord = null;
            for (final String word : addDots(splitWords(sentence))) {
                if (lastLastWord != null) {
                    final String key = lastLastWord + ' ' + lastWord;
                    List<String> value = dictionary.get(key);
                    if (value == null)
                        value = new ArrayList<>();
                    value.add(word);
                    dictionary.put(key, value);
                }
                if (lastWord != null) {
                    final String key = lastWord;
                    List<String> value = singleWords.get(key);
                    if (value == null)
                        value = new ArrayList<>();
                    value.add(word);
                    singleWords.put(key, value);
                }
                lastLastWord = lastWord;
                lastWord = word;
            }
        }
    }

    private static List<String> splitSentences(final String string) {
        sentenceIterator.setText(string);
        final List<String> sentences = new ArrayList<>();
        for (int start = sentenceIterator.first(), end = sentenceIterator.next();
                end != BreakIterator.DONE;
                start = end, end = sentenceIterator.next()) {
            sentences.add(string.substring(start, end).trim());
        }
        return sentences;
    }

    private static List<String> splitWords(final String string) {
        wordIterator.setText(string);
        final List<String> words = new ArrayList<>();
        for (int start = wordIterator.first(), end = wordIterator.next();
                end != BreakIterator.DONE;
                start = end, end = wordIterator.next()) {
            String word = string.substring(start, end).trim();
            if (word.length() > 0 && Character.isLetterOrDigit(word.charAt(0)))
                words.add(word);
        }
        return words;
    }

    // surround each sentence with "." so the chain has explicit start and end states
    private static List<String> addDots(List<String> words) {
        words.add(0, ".");
        words.add(".");
        return words;
    }

    public static void generateSentence(Map<String, List<String>> singleWords,
            Map<String, List<String>> dictionary, StringBuilder target, int count) {
        Random r = new Random();
        for (int i = 0; i < count; i++) {
            // start in the "before the first word" state and pick a first word
            String w1 = ".";
            String w2 = pickRandom(singleWords.get(w1), r);
            while (w2 != null) {
                target.append(w2).append(" ");
                if (w2.equals("."))
                    break; // reached the end-of-sentence state
                String w3 = pickRandom(dictionary.get(w1 + " " + w2), r);
                w1 = w2;
                w2 = w3;
            }
            target.append("\n");
        }
    }

    private static String pickRandom(List<String> alternatives, Random r) {
        // returns null when there is no known continuation, which ends the sentence
        if (alternatives == null || alternatives.isEmpty())
            return null;
        return alternatives.get(r.nextInt(alternatives.size()));
    }
}
I should mention that this approach is not optimized. If I needed it to be more efficient, I would count the occurrences of each word in the dictionary map and normalize the counts at the end to produce frequencies: something like Map<String, Map<String, Double>> dictionary, where the inner map points each word to its frequency. That would also require a different way of picking words than the one I used in my example.
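For illustration, here is a minimal sketch of that frequency-based idea. The class and helper names (FrequencyChain, addCount, normalize, pickWeighted) are hypothetical placeholders of mine, not part of the code above; the point is to accumulate counts per prefix, normalize them into frequencies once the text has been read, and then pick the next word by walking the cumulative distribution.

import java.util.Map;
import java.util.Random;
import java.util.TreeMap;

// Sketch only: frequency-based variant of the dictionary described above.
final class FrequencyChain {

    // two-word prefix -> (next word -> relative frequency)
    private static final Map<String, Map<String, Double>> dictionary = new TreeMap<>();

    // count one observed transition while reading the text
    static void addCount(String key, String word) {
        dictionary.computeIfAbsent(key, k -> new TreeMap<>())
                  .merge(word, 1.0, Double::sum);
    }

    // after all text has been added, turn raw counts into frequencies that sum to 1 per prefix
    static void normalize() {
        for (Map<String, Double> counts : dictionary.values()) {
            double total = 0;
            for (double c : counts.values())
                total += c;
            for (Map.Entry<String, Double> e : counts.entrySet())
                e.setValue(e.getValue() / total);
        }
    }

    // weighted pick: draw a uniform number in [0,1) and walk the cumulative distribution
    static String pickWeighted(String key, Random r) {
        Map<String, Double> frequencies = dictionary.get(key);
        if (frequencies == null || frequencies.isEmpty())
            return null;
        double roll = r.nextDouble(), cumulative = 0;
        String last = null;
        for (Map.Entry<String, Double> e : frequencies.entrySet()) {
            cumulative += e.getValue();
            last = e.getKey();
            if (roll < cumulative)
                return e.getKey();
        }
        return last; // fall back to the last entry in case of rounding error
    }

    public static void main(String[] args) {
        // toy example: record three observed continuations for the same prefix
        addCount("the quick", "brown");
        addCount("the quick", "brown");
        addCount("the quick", "red");
        normalize(); // "brown" -> 2/3, "red" -> 1/3
        System.out.println(pickWeighted("the quick", new Random()));
    }
}

The sampling distribution is the same as with the List<String> version (which is weighted simply by storing duplicates), but only one Double is stored per distinct follow-up word instead of one list entry per occurrence, which is where the savings come from.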