Replace a string with custom values and iterate over all occurrences

Date: 2018-11-09 23:13:26

Tags: python perl awk sed

I'm trying to replace some text in a file using a sed, perl, awk or python script. I've tried several things but can't seem to work it out.

I have the following in a text file called data.txt:

&st=ALPHA&type=rec&uniId=JIM&acceptCode=123&drainNel=supp&
&st=ALPHA&type=rec&uniId=JIM&acceptCode=167&drainNel=supp&
&st=ALPHA&type=rec&uniId=SARA&acceptCode=231&drainNel=ured&
&st=ALPHA&type=rec&uniId=SARA&acceptCode=344&drainNel=iris&
&st=ALPHA&type=rec&uniId=SARA&acceptCode=349&drainNel=iris&
&st=ALPHA&type=rec&uniId=DAVE&acceptCode=201&drainNel=teef&

1) The script will take an input argument in the form of a number, e.g.: 10000

2) I want to replace every ALPHA with a number, starting from the given argument and incrementing by 100 for each following line that has the same uniId. Whenever the uniId changes, the number is incremented by 5000 instead.

3) For all lines with the same uniId, I want to replace every acceptCode with the st value of the first line of that group.

./script 10000

... still confused? OK, the end result would look like this:

&st=10000&type=rec&uniId=JIM&acceptCode=10000&drainNel=supp&
&st=10100&type=rec&uniId=JIM&acceptCode=10000&drainNel=supp&
&st=15100&type=rec&uniId=SARA&acceptCode=15100&drainNel=ured&
&st=15200&type=rec&uniId=SARA&acceptCode=15100&drainNel=iris&
&st=15300&type=rec&uniId=SARA&acceptCode=15100&drainNel=iris&
&st=20300&type=rec&uniId=DAVE&acceptCode=20300&drainNel=teef&

This ^ should be applied to the file data.txt itself, not just printed to the screen.
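For reference, here is how I understand the counter arithmetic behind that expected output (starting from 10000 and walking the rows in order):

JIM  (first row)      st = 10000                 acceptCode = 10000
JIM  (same uniId)     st = 10000 + 100  = 10100  acceptCode = 10000
SARA (uniId changed)  st = 10100 + 5000 = 15100  acceptCode = 15100
SARA (same uniId)     st = 15100 + 100  = 15200  acceptCode = 15100
SARA (same uniId)     st = 15200 + 100  = 15300  acceptCode = 15100
DAVE (uniId changed)  st = 15300 + 5000 = 20300  acceptCode = 20300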

3 Answers:

Answer 0 (score: 1)

OK, here's one way to do it with awk (wrapped in a shell script for convenience, since it's a bit much for a one-liner):

#!/bin/sh
# Usage:
# $./transform.sh [STARTCOUNT] < data.txt > temp.txt
# $ mv -f temp.txt data.txt
awk  -F '&' -v "cnt=${1:-10000}" -v 'OFS=&' \
   'NR == 1 { ac = cnt; uni = $4; }                # first line: acceptCode = start count, remember uniId
    NR > 1 && $4 == uni { cnt += 100 }             # same uniId as the current group: bump counter by 100
    $4 != uni { cnt += 5000; ac = cnt; uni = $4 }  # uniId changed: jump by 5000, reset acceptCode and group
    { $2 = "st=" cnt; $5 = "acceptCode=" ac; print }'

Running this against a file holding your sample input:

$ ./transform.sh 10000 < data.txt
&st=10000&type=rec&uniId=JIM&acceptCode=10000&drainNel=supp&
&st=10100&type=rec&uniId=JIM&acceptCode=10000&drainNel=supp&
&st=15100&type=rec&uniId=SARA&acceptCode=15100&drainNel=ured&
&st=15200&type=rec&uniId=SARA&acceptCode=15100&drainNel=iris&
&st=15300&type=rec&uniId=SARA&acceptCode=15100&drainNel=iris&
&st=20300&type=rec&uniId=DAVE&acceptCode=20300&drainNel=teef&

And a perl version that edits the input file in place:

#!/usr/bin/perl -ani -F'&'
# -a autosplits each input line into @F on the -F'&' pattern, -n loops over
# the input without printing automatically, and -i edits the data file in place.
# Usage:
# $ ./transform.pl COUNT datafile
use warnings;
use strict;
use English;

our ($count, $ac, $uni);

BEGIN {
  $count = shift @ARGV;
  die "Missing count argument" unless defined $count and $count =~ /^\d+$/;
  $ac = $count;
  $uni = "";
  $OFS = '&';
}

if ($NR == 1) {
  $uni = $F[3];
} elsif ($uni ne $F[3]) {
  $count += 5000;
  $ac = $count;
  $uni = $F[3];
} else {
  $count += 100;
}
$F[1] = "st=$count";
$F[4] = "acceptCode=$ac";
print @F;

Running it on the sample input:

$ ./transform.pl 10000 data.txt
$ cat data.txt
&st=10000&type=rec&uniId=JIM&acceptCode=10000&drainNel=supp&
&st=10100&type=rec&uniId=JIM&acceptCode=10000&drainNel=supp&
&st=15100&type=rec&uniId=SARA&acceptCode=15100&drainNel=ured&
&st=15200&type=rec&uniId=SARA&acceptCode=15100&drainNel=iris&
&st=15300&type=rec&uniId=SARA&acceptCode=15100&drainNel=iris&
&st=20300&type=rec&uniId=DAVE&acceptCode=20300&drainNel=teef&

Answer 1 (score: 1)

Some assumptions

Your requirement 2), "I want to replace all the ALPHA text with the given long number arg, incrementing by 100 if the uniId is the same and by 5000 if it is different", together with your sample output, requires the input data to be sorted on the uniId field. If the file is not sorted, the per-uniId 100 increments and the 5000 increments will not produce the desired starting values.

The increment scheme also assumes that no single uniId has enough records for its 100-step increments to run into the next 5000 range, i.e. into the block that would be assigned to a newly identified uniId.
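If the data is not already grouped by uniId, a minimal pre-grouping pass along the following lines could be run first. This is only a sketch: the data.grouped.txt output name and the regex-based uniId lookup are assumptions of mine, not part of the script below.

import re

# Read all rows and bucket them by their uniId value, preserving the order
# in which each uniId first appears.
with open("data.txt") as fin:
    rows = fin.readlines()

order = []    # uniId values in order of first appearance
groups = {}   # uniId -> rows carrying that uniId
for row in rows:
    match = re.search(r"uniId=([^&]*)", row)
    uid = match.group(1) if match else ""
    if uid not in groups:
        order.append(uid)
        groups[uid] = []
    groups[uid].append(row)

# Write the rows back out, grouped by uniId, for the main script to consume.
with open("data.grouped.txt", "w") as fout:
    for uid in order:
        fout.writelines(groups[uid])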

#!/usr/bin/env python3

from collections import OrderedDict
import csv
import sys

class TrackingVars(object):
    """
    The TrackingVars class manages the business logic for maintaining the
    st field counters and the acceptCode values for each uniId
    """

    def __init__(self, long_number):
        self.uniId_table = {}
        self.running_counter = long_number

    def __initial_value__(self):
        """
        The first encounter for a uniId will have st = acctCode
        """
        retval = (self.running_counter, self.running_counter)
        return retval

    def get_uniId(self, id):
        """
        A convenience method for returning uniId tracking values
        """
        curval, original_value = self.uniId_table.get(id, self.__initial_value__())
        return (curval, original_value)

    def track(self, uniId):
        """
        curval = original_value when a new uniId is encountered.
        If the uniId is known, simply increment curval by 100
        if the uniId is new and there is at least 1 key in the
        tracking table increment curval by 5000
        always update tracking variables
        """
        curval, original_value = self.get_uniId(uniId)
        if uniId in self.uniId_table.keys():
            curval = curval + 100
        else:
            if self.uniId_table:
                curval = curval + 5000
                original_value = curval
        self.running_counter = curval
        retval = (curval, original_value)
        self.uniId_table[uniId] = retval
        return retval


def data_lines(filename):
    """
    Read file as input delimited by &
    """
    with open(filename, "r", newline=None) as fin:
        csvin = csv.reader(fin, delimiter="&")
        for row in csvin:
            yield row

def transform_data_line(line):
    """
    Transform a data row into key, value pairs.
    The leading and trailing & have no valid key, value pairs.
    """
    head = ("head", None)
    tail = ("tail", None)
    items = [head]
    for field in line[1:-1]:
        key, value = field.split("=")
        items.append([key, value])
    retval = OrderedDict(items)
    retval["tail"] = tail
    return retval

def process_data_line(record, text_to_replace, tracking_vars):
    """
    if st value is ALPHA update record with tracking variables
    """
    st = record.get("st")
    if st is not None:
        if st == text_to_replace:
            uniId = record.get("uniId")
            curval, original_value = tracking_vars.track(uniId)
            record["st"] = curval
            record["acceptCode"] = original_value
    return record


def process_file():
    """
    Get the long number from the command line input.
    Initialize the tracking variables.
    Process each row of the file.
    """
    long_number = sys.argv[1]
    tracking_vars = TrackingVars(int(long_number))
    for row in data_lines("data.txt"):
        record = transform_data_line(row)
        retval = process_data_line(record, "ALPHA", tracking_vars)
        yield retval


def write(iter_in, filename_out):
    """
    Write each row from the iterator to the csv.
    make sure the first and last fields are empty.
    """
    with open(filename_out, "w", newline=None) as fout:
        csvout = csv.writer(fout, delimiter="&")
        for row in iter_in:
            encoded_row = ["{0}={1}".format(k, v) for k, v in row.items()]
            encoded_row[0]=""
            encoded_row[-1]=""
            csvout.writerow(encoded_row)

if __name__ == "__main__":
    write(process_file(), "data.new.txt")
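Assuming the script above is saved as, say, transform.py (the answer does not name the file), it reads data.txt from the current directory and takes the long number as its only argument:

$ python3 transform.py 10000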

Output

$ cat data.new.txt

&st=10000&type=rec&uniId=JIM&acceptCode=10000&drainNel=supp&
&st=10100&type=rec&uniId=JIM&acceptCode=10000&drainNel=supp&
&st=15100&type=rec&uniId=SARA&acceptCode=15100&drainNel=ured&
&st=15200&type=rec&uniId=SARA&acceptCode=15100&drainNel=iris&
&st=15300&type=rec&uniId=SARA&acceptCode=15100&drainNel=iris&
&st=20300&type=rec&uniId=DAVE&acceptCode=20300&drainNel=teef&

Conclusion

Only you know the business rules behind the incrementing numbering scheme. However, the control break on uniId, with st values that depend on how far the previous uniId's increments got, seems problematic to me. If every newly encountered uniId instead started on a fresh 5000 boundary (e.g. 15000, 20000, 25000 and so on), unsorted files could be handled; a rough sketch of that idea follows.
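In that alternative (an assumption on my part, not something the question asks for), each newly seen uniId would get the next 5000 boundary as its base, and rows within a uniId would step by 100 from that base, so input order would no longer matter:

def alt_bases(uni_ids, start=10000):
    # Each newly seen uniId gets base = start + 5000 * (number of uniIds seen so far);
    # rows within a uniId step by 100 from that base.
    seen = {}   # uniId -> [base, rows seen so far]
    out = []
    for uid in uni_ids:
        if uid not in seen:
            seen[uid] = [start + 5000 * len(seen), 0]
        base, n = seen[uid]
        out.append((uid, base + 100 * n, base))   # (uniId, st, acceptCode)
        seen[uid][1] += 1
    return out

# e.g. alt_bases(["JIM", "SARA", "JIM", "DAVE"]) returns
# [('JIM', 10000, 10000), ('SARA', 15000, 15000),
#  ('JIM', 10100, 10000), ('DAVE', 20000, 20000)]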

PS

I like the awk and perl answers. They are simple and to the point, and they answer the question exactly as asked. Now all we need is a sed example :)

Answer 2 (score: 0)

More efficient control, with just one line of gnu awk:

awk -F\& -v i=10000 -v OFS=\& '{ if (NR==1) { ac=i; u=$4 } else { if ($4==u) i+=100; else { i+=5000; ac=i; u=$4 } }; $2="st=" i; $5=gensub(/[0-9]+/, ac, 1, $5); print }' data.txt

This accepts any string in the 5th field, since gensub only replaces the digits in acceptCode. Thanks Shawn.