Is it possible to open two persistent connections to different remote MySQL servers using Java SQL (JDBC)?

Posted: 2012-04-10 18:06:06

Tags: java mysql jdbc persistent-connection

I'm trying to run code that creates a table on a remote MySQL server by referencing tables that live on a different MySQL server. The server where I'm creating the table has space constraints, and the referenced tables are very large, so they have to stay on a separate remote server.

I'm trying to find a way to keep persistent connections to both databases open at the same time (using JDBC), so I don't have to keep buffering small batches of rows... I want to be able to reference the data directly.

E.g. database A contains the data I'm referencing, and database B is where I'm creating the new table. Say the table I'm referencing in database A has 1,000,000 rows. Rather than opening a connection to database A, buffering 10,000 rows, closing that connection, opening a connection to database B, writing to it, clearing my buffer, and repeating...

...I'd like to hold a persistent connection to database A, so that every write to database B can reference the data in database A.

Is this possible? I've tried several approaches (mostly creating new Connection objects and only refreshing them when the connection drops), and I can't seem to get the idea to work.

Has anyone done something similar with JDBC? If so, I'd appreciate it if you could point me in the right direction or tell me how you managed to get it working.

4 Answers:

Answer 0 (score: 1):

You could create the data in database A and copy it over to database B using replication.

Alternatively, it sounds like you're implementing some kind of queue. I once built a data-copying program in Java that used the built-in implementations of the Queue interface: one thread read from database A and filled the queue, and another thread read from the queue and wrote to database B. I could try to dig up the classes I used, if that would be useful?

Edit:

Here's the code, tweaked slightly for posting. I haven't included the configuration classes, but it should give you an idea of how the queue classes are used:

package test;

import java.io.File;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

/**
 * This class implements a JDBC bridge between databases, allowing data to be
 * copied from one place to another.
 * <p>This implementation is threaded, as it uses a {@link BlockingQueue} to pass
 * data between a producer and a consumer.
 */
public class DBBridge {

    public static void main( String[] args ) {

        Adaptor fromAdaptor = null;
        Adaptor toAdaptor = null;

        BridgeConfig config = null;

        try {
            /* BridgeConfig is essentially a wrapper around the Simple XML serialisation library.
             * http://simple.sourceforge.net/
             */
            config = BridgeConfig.loadConfig( new File( "db-bridge.xml" ) );
        }
        catch ( Exception e ) {
            System.err.println( "Failed to read or parse db-bridge.xml: " + e.getLocalizedMessage() );
            System.exit( 1 );
        }

        BlockingQueue<Object> b = new ArrayBlockingQueue<Object>( config.getQueueSize() );

        try {
            HashMap<String, DatabaseConfig> dbs = config.getDbs();

            System.err.println( "Configured DBs" );

            final String sourceName = config.getSource();
            final String destinationName = config.getDestination();

            if ( !dbs.containsKey( sourceName ) ) {
                System.err.println( sourceName + " is not a configured database connection" );
                System.exit( 1 );
            }

            if ( !dbs.containsKey( destinationName ) ) {
                System.err.println( destinationName + " is not a configured database connection" );
                System.exit( 1 );
            }

            DatabaseConfig sourceConfig = dbs.get( sourceName );
            DatabaseConfig destinationConfig = dbs.get( destinationName );

            try {
                /*
                 * Both adaptors must be created before attempting a connection,
                 * as otherwise I've seen DriverManager pick the wrong driver!
                 */
                fromAdaptor = AdaptorFactory.buildAdaptor( sourceConfig, sourceConfig );
                toAdaptor = AdaptorFactory.buildAdaptor( destinationConfig, destinationConfig );

                System.err.println( "Connecting to " + sourceName );
                fromAdaptor.connect();

                System.err.println( "Connecting to " + destinationName );
                toAdaptor.connect();

                /* We'll send our updates to the destination explicitly */
                toAdaptor.getConn().setAutoCommit( false );
            }
            catch ( SQLException e ) {
                System.err.println();
                System.err.println( "Failed to create and configure adaptors" );
                e.printStackTrace();
                System.exit( 1 );
            }
            catch ( ClassNotFoundException e ) {
                System.err.println( "Failed to load JDBC driver due to error: " + e.getLocalizedMessage() );
                System.exit( 1 );
            }

            DataProducer producer = null;
            DataConsumer consumer = null;

            try {
                producer = new DataProducer( config, fromAdaptor, b );
                consumer = new DataConsumer( config, toAdaptor, b );
            }
            catch ( SQLException e ) {
                System.err.println();
                System.err.println( "Failed to create and configure data producer or consumer" );
                e.printStackTrace();
                System.exit( 1 );
            }

            consumer.start();
            producer.start();
        }
        catch ( Exception e ) {
            e.printStackTrace();
        }
    }

    public static class DataProducer extends DataLogger {

        private BridgeConfig config;
        private Adaptor adaptor;
        private BlockingQueue<Object> queue;


        public DataProducer(BridgeConfig c, Adaptor a, BlockingQueue<Object> bq) {
            super( "Producer" );
            this.config = c;
            this.adaptor = a;
            this.queue = bq;
        }


        @Override
        public void run() {
            /* The tables to copy are listed in BridgeConfig */
            for ( Table table : this.config.getManifest() ) {

                PreparedStatement stmt = null;
                ResultSet rs = null;

                try {
                    String sql = table.buildSourceSelect();
                    this.log( "executing: " + sql );
                    stmt = this.adaptor.getConn().prepareStatement( sql );

                    stmt.execute();

                    rs = stmt.getResultSet();

                    ResultSetMetaData meta = rs.getMetaData();

                    /* Notify consumer that a new table is to be processed */
                    this.queue.put( table );
                    this.queue.put( meta );

                    final int columnCount = meta.getColumnCount();

                    while ( rs.next() ) {
                        ArrayList<Object> a = new ArrayList<Object>( columnCount );

                        for ( int i = 0; i < columnCount; i++ ) {
                            a.add( rs.getObject( i + 1 ) );
                        }

                        this.queue.put( a );
                    }
                }
                catch ( InterruptedException ex ) {
                    ex.printStackTrace();
                }
                catch ( SQLException e ) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }

                try {
                    /* refresh the connection */
                    /* Can't remember why I added this line - maybe the other
                     * end kept closing the connection. */
                    this.adaptor.reconnect();
                }
                catch ( SQLException e ) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
            }

            try {
                /* Use an object of a specific type to "poison" the queue
                 * and instruct the consumer to terminate. */
                this.log( "putting finished object into queue" );
                this.queue.put( new QueueFinished() );

                this.adaptor.close();
            }
            catch ( InterruptedException e ) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
            catch ( SQLException e ) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        }

    }

    /* Superclass for producer and consumer */
    public static abstract class DataLogger extends Thread {

        private String prefix;


        public DataLogger(String p) {
            this.prefix = p;
        }


        protected void log( String s ) {
            System.err.printf( "%d %s: %s%n", System.currentTimeMillis(), this.prefix, s );
        }


        protected void log() {
            System.err.println();
        }
    }

    public static class DataConsumer extends DataLogger {

        private BridgeConfig config;
        private Adaptor adaptor;
        private BlockingQueue<Object> queue;
        private int currentRowNumber = 0;
        private int currentBatchSize = 0;
        private long tableStartTs = -1;


        public DataConsumer(BridgeConfig c, Adaptor a, BlockingQueue<Object> bq) throws SQLException {

            super( "Consumer" );

            this.config = c;
            this.adaptor = a;
            this.queue = bq;

            /* We'll send our updates to the destination explicitly */
            this.adaptor.getConn().setAutoCommit( false );
        }


        public void printThroughput() {
            double duration = ( System.currentTimeMillis() - this.tableStartTs ) / 1000.0;
            long rowsPerSec = Math.round( this.currentRowNumber / duration );
            this.log( String.format( "%d rows processed, %d rows/s", this.currentRowNumber, rowsPerSec ) );
        }


        @Override
        public void run() {

            this.log( "running" );

            Table currentTable = null;
            ResultSetMetaData meta = null;

            int columnCount = -1;

            PreparedStatement stmt = null;

            while ( true ) {
                try {
                    Object o = this.queue.take();

                    if ( o instanceof Table ) {
                        currentTable = (Table) o;

                        this.log( "processing" + currentTable );

                        if ( this.currentBatchSize > 0 ) {
                            /* Commit outstanding rows from previous table */

                            this.adaptor.getConn().commit();

                            this.printThroughput();
                            this.currentBatchSize = 0;
                        }

                        /* refresh the connection */
                        this.adaptor.reconnect();
                        this.adaptor.getConn().setAutoCommit( false );

                        /*
                         * Arguably, there's no need to flush the commit buffer
                         * after every table, but I like it because it feels
                         * tidy.
                         */
                        this.currentBatchSize = 0;
                        this.currentRowNumber = 0;

                        if ( currentTable.isTruncate() ) {
                            this.log( "truncating " + currentTable );
                            stmt = this.adaptor.getConn().prepareStatement( "TRUNCATE TABLE " + currentTable );
                            stmt.execute();
                        }

                        this.tableStartTs = System.currentTimeMillis();
                    }
                    else if ( o instanceof ResultSetMetaData ) {

                        this.log( "received metadata for " + currentTable );

                        meta = (ResultSetMetaData) o;
                        columnCount = meta.getColumnCount();

                        String sql = currentTable.buildDestinationInsert( columnCount );
                        stmt = this.adaptor.getConn().prepareStatement( sql );
                    }
                    else if ( o instanceof ArrayList ) {

                        ArrayList<?> a = (ArrayList<?>) o;

                        /* One counter for ArrayList access, one for JDBC access */
                        for ( int i = 0, j = 1; i < columnCount; i++, j++ ) {

                            try {
                                stmt.setObject( j, a.get( i ), meta.getColumnType( j ) );
                            }
                            catch ( SQLException e ) {
                                /* Sometimes data in a shonky remote system
                                 * is rejected by a more sane destination
                                 * system. Translate this data into
                                 * something that will fit. */
                                if ( e.getMessage().contains( "Only dates between" ) ) {

                                    if ( meta.isNullable( j ) == ResultSetMetaData.columnNullable ) {
                                        this.log( "Casting bad data to null: " + a.get( i ) );
                                        stmt.setObject( j, null, meta.getColumnType( j ) );
                                    }
                                    else {
                                        this.log( "Casting bad data to 0000-01-01: " + a.get( i ) );
                                        stmt.setObject( j, new java.sql.Date( -64376208000L ), meta.getColumnType( j ) );
                                    }
                                }
                                else {
                                    throw e;
                                }
                            }
                        }

                        stmt.execute();

                        this.currentBatchSize++;
                        this.currentRowNumber++;

                        if ( this.currentBatchSize == this.config.getBatchSize() ) {
                            /*
                             * We've reached our non-committed limit. Send the
                             * requests to the destination server.
                             */

                            this.adaptor.getConn().commit();

                            this.printThroughput();
                            this.currentBatchSize = 0;
                        }
                    }
                    else if ( o instanceof QueueFinished ) {
                        if ( this.currentBatchSize > 0 ) {
                            /* Commit outstanding rows from previous table */

                            this.adaptor.getConn().commit();

                            this.printThroughput();

                            this.log();
                            this.log( "completed" );
                        }

                        /* Exit while loop */
                        break;
                    }
                    else {
                        throw new RuntimeException( "Unexpected obeject in queue: " + o.getClass() );
                    }
                }
                catch ( InterruptedException e ) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
                catch ( SQLException e ) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
            }

            try {
                this.adaptor.close();
            }
            catch ( SQLException e ) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        }
    }

    public static final class QueueFinished {
        /*
         * This only exists as a completely type-safe value in "instanceof"
         * expressions
         */
    }
}

Answer 1 (score: 0):

I've done this before, and I'd suggest you do what I did: pull the data you need from DB A and write it out to one or more files as a 'set' of SQL statements. When I did it, I had to split the output into about 10 files because of a limit on the size of file that could be loaded into DB B.
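
As a rough illustration of that approach (not the original program), the sketch below reads from the source and writes chunked files of INSERT statements; each file can then be loaded into DB B with the mysql command-line client. The connection details, table and column names are placeholders, and the string escaping is deliberately naive:

import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DumpToSqlFiles {

    public static void main( String[] args ) throws Exception {
        /* Placeholder connection details for DB A, the large read-only source */
        Connection con = DriverManager.getConnection(
                "jdbc:mysql://db-a.example.com/source_db", "user", "password" );

        final int rowsPerFile = 100000; /* split so each file stays small enough to load into DB B */
        int fileIndex = 0;
        int rowsInFile = 0;
        PrintWriter out = new PrintWriter( "dump_" + fileIndex + ".sql", "UTF-8" );

        Statement stmt = con.createStatement();
        ResultSet rs = stmt.executeQuery( "SELECT id, name FROM source_table" );

        while ( rs.next() ) {
            if ( rowsInFile == rowsPerFile ) {
                /* Start a new file once the current chunk is full */
                out.close();
                fileIndex++;
                rowsInFile = 0;
                out = new PrintWriter( "dump_" + fileIndex + ".sql", "UTF-8" );
            }
            /* Deliberately naive escaping - real data needs proper quoting */
            String name = rs.getString( "name" ).replace( "'", "''" );
            out.printf( "INSERT INTO target_table (id, name) VALUES (%d, '%s');%n",
                    rs.getLong( "id" ), name );
            rowsInFile++;
        }

        out.close();
        rs.close();
        stmt.close();
        con.close();
    }
}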

Answer 2 (score: 0):

In a program I wrote for work a while back, I had two simultaneous connections open. Without giving the actual code away, you're going to want something like:

public void initialize() {

    // Fill in the JDBC URLs, driver class names and credentials for the two servers
    String dbUrl, dbUrl2, dbClass, dbClass2, user, user2, password, password2;
    Connection con, con2;
    Statement stmt, stmt2;
    ResultSet rs, rs2;

    try {
        Class.forName(dbClass);
        // One connection per server, both held open at the same time
        con = DriverManager.getConnection(dbUrl, user, password);
        con2 = DriverManager.getConnection(dbUrl2, user2, password2);
        stmt = con.createStatement();
        stmt2 = con2.createStatement();
    } catch (ClassNotFoundException e) {
        e.printStackTrace();
    } catch (SQLException e) {
        e.printStackTrace();
    }
}

Then, once you have both connections up and running:

rs = stmt.executeQuery("query");
rs2 = stmt2.executeQuery("second query");
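
Continuing that sketch, the result set from the first connection can be streamed straight into the second one; the table and column names below are placeholders, not from the question:

/* Placeholder sketch: stream rows selected on connection A (rs) into database B
 * over the second connection (con2). Table and column names are hypothetical. */
PreparedStatement insert = con2.prepareStatement(
        "INSERT INTO new_table (id, value) VALUES (?, ?)" );

int pending = 0;
while ( rs.next() ) {
    insert.setLong( 1, rs.getLong( "id" ) );
    insert.setString( 2, rs.getString( "value" ) );
    insert.addBatch();

    if ( ++pending == 1000 ) {   /* flush in batches instead of buffering everything */
        insert.executeBatch();
        pending = 0;
    }
}
if ( pending > 0 ) {
    insert.executeBatch();
}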

I don't know how to solve your particular problem, but this kind of code may put a fair load on your system (assuming you're not on a high-end personal/company machine) and could take a while to run. It should be enough to at least get you started; I'd post more if I could, but unfortunately mocking up a fuller version is a bit too involved. Good luck!

Answer 3 (score: 0):

I think you'd be best served by two separate connections, one for reading and one for writing, passing the data through your Java application with a small buffer.

A more involved but potentially more elegant solution is to use a FEDERATED table. It makes a table on a remote server appear to be local: queries are passed through to the remote server and the results come back. You have to be careful with indexes or it can be very slow, but it might work for what you're trying to do.

http://dev.mysql.com/doc/refman/5.5/en/federated-description.html
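
For illustration only (this is not taken from the manual page above), a federated table could be set up over JDBC roughly like this. The connection string, credentials, table names and column definitions are placeholders, and the federated table's columns have to match the remote table:

/* Hypothetical sketch: in database B, create a FEDERATED table that points at
 * the big table in database A. A single connection to B can then read the
 * "remote" data and write the new table. All names and credentials are placeholders. */
Connection conB = DriverManager.getConnection(
        "jdbc:mysql://db-b.example.com/target_db", "user", "password" );

Statement stmt = conB.createStatement();
stmt.executeUpdate(
        "CREATE TABLE big_table_remote ("
      + "  id BIGINT NOT NULL,"
      + "  value VARCHAR(255),"
      + "  PRIMARY KEY (id)"
      + ") ENGINE=FEDERATED "
      + "CONNECTION='mysql://user:password@db-a.example.com:3306/source_db/big_table'" );

/* From B's point of view the remote table now looks local, so the copy can be
 * expressed in SQL without buffering rows through the application at all. */
stmt.executeUpdate(
        "INSERT INTO new_table (id, value) "
      + "SELECT id, value FROM big_table_remote" );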