Shell script to extract content from a large SQL file

Date: 2019-05-17 07:12:44

Tags: sql shell

I have a script like this:

#!/bin/bash

today=$(date +%Y%m%d)
DB_FILE="pgalldump_${today}.out"
echo "$DB_FILE"

if [ ! -f "$DB_FILE" ] || [ ! -r "$DB_FILE" ]; then
    echo "error: $DB_FILE not found or not readable" >&2
    exit 2
fi

# Find every "\connect <dbname>" line in the dump, together with its line number.
egrep -n '\\connect ' "$DB_FILE" | while read -r LINE
do
    echo "$LINE"

    DB_NAME=$(echo "$LINE" | awk '{print $2}')
    STARTING_LINE_NUMBER=$(echo "$LINE" | cut -d: -f1)

    # The database's contents start on the line after "\connect".
    STARTING_LINE_NUMBER=$((STARTING_LINE_NUMBER + 1))

    # Number of lines up to and including "PostgreSQL database dump complete".
    TOTAL_LINES=$(tail -n +"$STARTING_LINE_NUMBER" "$DB_FILE" | \
        egrep -n -m 1 'PostgreSQL database dump complete' | \
        cut -d: -f1)

    echo "$TOTAL_LINES"
    tail -n +"$STARTING_LINE_NUMBER" "$DB_FILE" | head -n "$TOTAL_LINES" > "/backup/$DB_NAME.sql"
done

dump.out contains the SQL dumps of 100 databases. The shell script above creates a separate SQL file for each database. Can this process be made faster?
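One direction that might help (an untested sketch, not a verified fix): the loop above re-reads the whole dump with tail and egrep once per database, so a single pass with awk could instead write every per-database file in one read of the file. It assumes the same "\connect <dbname>" and "PostgreSQL database dump complete" markers and the same /backup output directory used above:

#!/bin/bash
# Single-pass split: read the dump once and write each database's section
# as it is encountered, instead of re-scanning the file per database.
today=$(date +%Y%m%d)
DB_FILE="pgalldump_${today}.out"

awk '
    # A "\connect <dbname>" line starts a new section: open its output file.
    /^\\connect / {
        if (out != "") close(out)
        db  = $2
        out = "/backup/" db ".sql"
        next                      # the \connect line itself is not copied
    }
    # Copy lines of the current database, including the final
    # "PostgreSQL database dump complete" marker, then stop writing
    # until the next \connect line.
    out != "" {
        print > out
        if (/PostgreSQL database dump complete/) {
            close(out)
            out = ""
        }
    }
' "$DB_FILE"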

0 Answers:

No answers yet