How can I make bolts execute one after another when each bolt takes data from the same spout?

Time: 2016-01-29 07:54:28

Tags: database apache-storm

I take data from a spout, and each bolt inserts its mapped fields into a different table in my database. The tables have constraints: in my test schema there are two tables, user_details and My_details, and the constraint requires the user row to be inserted first; only after that may a row be inserted into My_details. When I run the topology, only the user table gets populated, because the constraint only allows PsqlBolt's insert to succeed, while PsqlBolt1 throws an exception saying the user id was not found. So I put a Thread.sleep(1000) in PsqlBolt1, and with that both bolts work. But when I apply the same trick across many bolts (12), the accumulated wait time grows and bolt execution fails with a tuple-timeout error. How can I make the user fields get inserted first, and only then let PsqlBolt1 start inserting?

My topology class:

public class Topology {

    protected static final String JDBC_CONF = "jdbc.conf";
    protected static final String TABLE_NAME = "users";
    protected static final String SELECT_QUERY =
            "select dept_name from department, user_department " +
            "where department.dept_id = user_department.dept_id and user_department.user_id = ?";

    public static void main(String[] args) throws Exception {
        String argument = args[0];

        TopologyBuilder builder = new TopologyBuilder();

        Map map = Maps.newHashMap();
        map.put("dataSourceClassName", "org.postgresql.ds.PGSimpleDataSource");
        map.put("dataSource.url", "jdbc:postgresql://localhost:5432/twitter_analysis?user=postgres");
        ConnectionProvider cp = new MyConnectionProvider(map);

        // First bolt: inserts into user_details
        List<Column> schemaColumns = Lists.newArrayList(
                new Column("user_id", Types.INTEGER),
                new Column("user_name", Types.VARCHAR),
                new Column("create_date", Types.TIMESTAMP));
        JdbcMapper mapper = new SimpleJdbcMapper(schemaColumns);
        PsqlBolt userPersistanceBolt = new PsqlBolt(cp, mapper)
                .withInsertQuery("insert into user_details (id, user_name, created_timestamp) values (?,?,?)");

        builder.setSpout("myspout", new UserSpout(), 1);
        builder.setBolt("Psql_Bolt", userPersistanceBolt, 1).shuffleGrouping("myspout");

        // Second bolt: inserts into My_details
        List<Column> schemaColumns1 = Lists.newArrayList(
                new Column("my_id", Types.INTEGER),
                new Column("my_name", Types.VARCHAR));
        JdbcMapper mapper1 = new SimpleJdbcMapper(schemaColumns1);
        PsqlBolt1 userPersistanceBolt1 = new PsqlBolt1(cp, mapper1)
                .withInsertQuery("insert into My_details (my_id, my_name) values (?,?)");

        builder.setBolt("Psql_Bolt1", userPersistanceBolt1, 1).shuffleGrouping("myspout");

        Config conf = new Config();
        conf.put(JDBC_CONF, map);
        conf.setDebug(true);
        conf.setNumWorkers(3);

        if (argument.equalsIgnoreCase("runLocally")) {
            System.out.println("Running topology locally...");
            LocalCluster cluster = new LocalCluster();
            cluster.submitTopology("Twitter Test Storm-postgresql", conf, builder.createTopology());
        } else {
            System.out.println("Running topology on cluster...");
            StormSubmitter.submitTopology("Topology_psql", conf, builder.createTopology());
        }
    }
}

My bolt: PsqlBolt1

public class PsqlBolt1 extends AbstractJdbcBolt {
    private static final Logger LOG = Logger.getLogger(PsqlBolt1.class);

    private String tableName;
    private String insertQuery;
    private JdbcMapper jdbcMapper;

    public PsqlBolt1(ConnectionProvider connectionProvider, JdbcMapper jdbcMapper) {
        super(connectionProvider);
        this.jdbcMapper = jdbcMapper;
    }

    public PsqlBolt1 withInsertQuery(String insertQuery) {
        this.insertQuery = insertQuery;
        System.out.println("query passed.....");
        return this;
    }

    @Override
    public void prepare(Map map, TopologyContext topologyContext, OutputCollector collector) {
        super.prepare(map, topologyContext, collector);
        if (StringUtils.isBlank(tableName) && StringUtils.isBlank(insertQuery)) {
            throw new IllegalArgumentException("You must supply either a tableName or an insert query.");
        }
    }

    @Override
    public void execute(Tuple tuple) {
        try {
            // Workaround: wait so PsqlBolt's user_details insert runs first.
            Thread.sleep(1000);
            List<Column> columns = jdbcMapper.getColumns(tuple);
            List<List<Column>> columnLists = new ArrayList<List<Column>>();
            columnLists.add(columns);
            if (!StringUtils.isBlank(tableName)) {
                this.jdbcClient.insert(this.tableName, columnLists);
            } else {
                this.jdbcClient.executeInsertQuery(this.insertQuery, columnLists);
            }
            this.collector.ack(tuple);
        } catch (Exception e) {
            this.collector.reportError(e);
            this.collector.fail(tuple);
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
    }
}

PsqlBolt:

public class PsqlBolt extends AbstractJdbcBolt {
    private static final Logger LOG = Logger.getLogger(PsqlBolt.class);

    private String tableName;
    private String insertQuery;
    private JdbcMapper jdbcMapper;

    public PsqlBolt(ConnectionProvider connectionProvider, JdbcMapper jdbcMapper) {
        super(connectionProvider);
        this.jdbcMapper = jdbcMapper;
    }

    public PsqlBolt withTableName(String tableName) {
        this.tableName = tableName;
        return this;
    }

    public PsqlBolt withInsertQuery(String insertQuery) {
        this.insertQuery = insertQuery;
        System.out.println("query passed.....");
        return this;
    }

    @Override
    public void prepare(Map map, TopologyContext topologyContext, OutputCollector collector) {
        super.prepare(map, topologyContext, collector);
        if (StringUtils.isBlank(tableName) && StringUtils.isBlank(insertQuery)) {
            throw new IllegalArgumentException("You must supply either a tableName or an insert query.");
        }
    }

    @Override
    public void execute(Tuple tuple) {
        try {
            List<Column> columns = jdbcMapper.getColumns(tuple);
            List<List<Column>> columnLists = new ArrayList<List<Column>>();
            columnLists.add(columns);
            if (!StringUtils.isBlank(tableName)) {
                this.jdbcClient.insert(this.tableName, columnLists);
            } else {
                this.jdbcClient.executeInsertQuery(this.insertQuery, columnLists);
            }
            this.collector.ack(tuple);
        } catch (Exception e) {
            this.collector.reportError(e);
            this.collector.fail(tuple);
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
    }
}

When I apply the same approach to many bolts, my topology turns red (wait state) in the Storm UI. (screenshot: topology view in the Storm UI)

These are the bolt wait times: the first bolt has no sleep, I put a 1-second sleep in the second bolt, and all the remaining bolts sleep for 2 seconds. (screenshot: bolt latency stats in the Storm UI)

How can I replace the sleeps and still get the job done? Or will the problem be solved if I increase the number of supervisors?

2 Answers:

Answer 0 (score: 1)

You can restructure your topology so that the spout sends a message M to bolt 1. Bolt 1 performs its action on this message and, only if that action succeeds, forwards the same message on to bolt 2. That way there is a strict ordering between the actions.
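A minimal sketch of that chaining pattern, with Storm stripped out so it runs standalone: the in-memory collections and the `userBoltExecute`/`detailsBoltExecute` methods are hypothetical stand-ins for the real bolts and JDBC inserts, and the downstream call stands in for emitting the tuple to the next bolt.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ChainedInsertSketch {
    // In-memory stand-ins for the two constrained tables.
    static final Set<Integer> userDetails = new HashSet<>();
    static final List<Integer> myDetails = new ArrayList<>();

    // "Bolt 1": insert the user row, then forward the same message downstream.
    static void userBoltExecute(int userId, String name) {
        userDetails.add(userId);      // insert into user_details
        detailsBoltExecute(userId);   // forward to "bolt 2" only after success
    }

    // "Bolt 2": the foreign-key check now always passes, no sleep needed.
    static void detailsBoltExecute(int userId) {
        if (!userDetails.contains(userId)) {
            throw new IllegalStateException("user id not found: " + userId);
        }
        myDetails.add(userId);        // insert into My_details
    }

    public static void main(String[] args) {
        userBoltExecute(1, "alice");
        userBoltExecute(2, "bob");
        System.out.println("details rows: " + myDetails.size());
    }
}
```

In real Storm code, bolt 1 would call `collector.emit(tuple, values)` after its insert succeeds and declare an output stream that bolt 2 subscribes to, instead of bolt 2 also reading from the spout.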

Answer 1 (score: 1)

I was missing the point that a bolt should perform a distinct function on the tuples. I was writing different insert queries in separate bolts that all did the same thing, inserting tuples coming from the spout, so I realized there was no real difference between my bolts. So instead of using one bolt per insert query, I implemented all the insert queries in a single bolt: after mapping all the fields, I simply issue the series of insert queries one after another, in the order I want.
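A standalone sketch of that consolidation: one "bolt" handles each tuple and issues the parent insert before the child insert, so the foreign-key constraint is always satisfied without any sleeps. The in-memory collections are hypothetical stand-ins for the real JDBC inserts.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class SingleBoltSketch {
    public static void main(String[] args) {
        Set<Integer> userDetails = new HashSet<>();
        List<Integer> myDetails = new ArrayList<>();

        // One execute() per tuple: parent row first, child row second.
        int[] userIds = {1, 2, 3};
        for (int id : userIds) {
            userDetails.add(id);                  // insert into user_details
            if (!userDetails.contains(id)) {      // the FK check can no longer fail
                throw new IllegalStateException("constraint violated for id " + id);
            }
            myDetails.add(id);                    // insert into My_details
        }
        System.out.println("inserted " + myDetails.size() + " detail rows");
    }
}
```

Running both statements inside one `execute()` (ideally in a single transaction) also means a failure leaves no orphaned parent row before the tuple is failed and replayed.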
