Query to fetch data from one table based on information in another

Time: 2016-03-09 18:48:19

Tags: mysql sql sql-server database

I have two tables, post and share, where a post has many shares. Given a userId (the owner who created the post), I want to fetch every row from the post table created by that user, and also check the share table for the same userid: if a post was shared with that user by someone else, I need to fetch that post's data as well. If either condition is true, the row should be returned.

In short: return a row from the post table if it was either created by the user (owner) or shared with that user via the share table.

Example:

Table name: post

id(pk) postname userid
    1  abc       10
    2  xxx       10
    3  yyy       11
    4  zzz       12
    5  bbb       13 

Table name: share

id postid(fk) userid
1   3           10
2   4           10
3   3           11
4   1           12

Expected output: example lookup for user ID 10

 id postname userid
  1  abc       10  // created by user 10 (owner)
  2  xxx       10  // created by user 10 (owner)
  3  yyy       11  // shared with user 10 by another user
  4  zzz       12  // shared with user 10 by another user

2 answers:

Answer 0 (score: 1)

You may want to print created_by and shared_by in two different columns, since the example output seems a bit confusing: a single userid column does not show whether a row appears because user 10 created the post or because it was shared with user 10.
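The answer's actual query did not survive in this capture, so the following is only a sketch of the two-column idea it describes. The created_by / shared_by column names come from the answer; the query shape (LEFT JOIN plus DISTINCT) and the use of SQLite with the question's sample data are assumptions.

```python
import sqlite3

# Sketch only: the answer's original query is not preserved here.
# created_by / shared_by aliases come from the answer; everything else is assumed.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE post  (id INTEGER PRIMARY KEY, postname TEXT, userid INTEGER);
CREATE TABLE share (id INTEGER PRIMARY KEY,
                    postid INTEGER REFERENCES post(id), userid INTEGER);
INSERT INTO post  VALUES (1,'abc',10),(2,'xxx',10),(3,'yyy',11),(4,'zzz',12),(5,'bbb',13);
INSERT INTO share VALUES (1,3,10),(2,4,10),(3,3,11),(4,1,12);
""")

rows = conn.execute("""
    SELECT DISTINCT p.id, p.postname,
           p.userid AS created_by,   -- owner of the post
           s.userid AS shared_by     -- non-NULL only when shared with user 10
    FROM post p
    LEFT JOIN share s ON s.postid = p.id AND s.userid = ?
    WHERE p.userid = ? OR s.userid = ?
    ORDER BY p.id
""", (10, 10, 10)).fetchall()

for row in rows:
    print(row)
```

With the sample data, posts 1 and 2 come back with shared_by as NULL (user 10 owns them), while posts 3 and 4 carry shared_by = 10, making the reason each row appears explicit.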

Answer 1 (score: 0)

It finally clicked what you are looking for:

select distinct p.id, p.postname, p.userid
from post p
left join share s on s.postid = p.id  -- left join, so posts with no share rows still match on ownership
where p.userid = 10 or s.userid = 10  -- distinct avoids duplicates when a post has several shares
order by p.id

If a post was created by the user, or shared with that user, it is returned: the query lists every post relevant to a particular user (ID 10).
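As a runnable check against the question's sample data (sqlite3 stands in for MySQL here): a LEFT JOIN is needed so posts with no rows in share, like post 2, still appear, and DISTINCT removes duplicates when a post matches several share rows.

```python
import sqlite3

# Verify the expected output for user 10 against the question's sample data.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE post  (id INTEGER PRIMARY KEY, postname TEXT, userid INTEGER);
CREATE TABLE share (id INTEGER PRIMARY KEY,
                    postid INTEGER REFERENCES post(id), userid INTEGER);
INSERT INTO post  VALUES (1,'abc',10),(2,'xxx',10),(3,'yyy',11),(4,'zzz',12),(5,'bbb',13);
INSERT INTO share VALUES (1,3,10),(2,4,10),(3,3,11),(4,1,12);
""")

rows = conn.execute("""
    SELECT DISTINCT p.id, p.postname, p.userid
    FROM post p
    LEFT JOIN share s ON s.postid = p.id   -- LEFT JOIN keeps un-shared posts
    WHERE p.userid = ? OR s.userid = ?     -- owner OR shared-with
    ORDER BY p.id
""", (10, 10)).fetchall()

print(rows)  # [(1, 'abc', 10), (2, 'xxx', 10), (3, 'yyy', 11), (4, 'zzz', 12)]
```

Post 5 (owned by user 13, never shared with user 10) is correctly excluded.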