For each pandas groupby, enumerate a string column and convert it into a counter dictionary

Asked: 2018-08-21 18:50:31

Tags: python pandas dictionary enumerate

I am trying to automatically build a networkx graph from any input pandas dataframe.

The dataframe looks like this:

  FeatureID       BC         chrom       pos        ftm_call
  1_1_1           GCTATT     12          25398138   NRAS_3
  1_1_1           GCCTAT     12          25398160   NRAS_3
  1_1_1           GCCTAT     12          25398073   NRAS_3
  1_1_1           GATCCT     12          25398128   NRAS_3
  1_1_1           GATCCT     12          25398107   NRAS_3

Here is the algorithm I need to work out:

  • Group by FeatureID
  • For each FeatureID, select the graph whose "name" attribute matches the ftm_call
  • For each row in the group, enumerate the BC column, starting at the value in the pos column
  • For each letter in BC, check whether that letter has already been seen at that position in the graph; if not, add it with a weight of 1. If it is already there, add 1 to its weight
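The enumeration step in the list above maps cleanly onto Python's built-in `enumerate`, which accepts a `start` argument; a minimal sketch with the sample values from the first row:

```python
# Pair each letter of the BC string with a genomic position,
# starting the count at the row's pos value.
bc = "GCTATT"
pos = 25398138

pairs = list(enumerate(bc, start=pos))
print(pairs)
# each tuple is (genomic position, nucleotide)
```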

Here is what I have so far:

import pandas as pd
import numpy as np
import networkx as nx
from collections import defaultdict

# read in test basecalls
hamming_df = pd.read_csv("./test_data.txt", sep="\t")
hamming_df = hamming_df[["FeatureID", "BC", "chrom", "pos"]]

# initiate graphs 
G = nx.DiGraph(name="G")
KRAS = nx.DiGraph(name="KRAS")
NRAS_3 = nx.DiGraph(name="NRAS_3")

# list of reference graphs
ref_graph_list = [G, KRAS, NRAS_3]

def add_basecalls(row):
    basecall = row.BC.astype(str)
    target = row.name[1]
    pos = row["pos"]
    chrom = row["chrom"]

    # initialize counter dictionary
    d = defaultdict()

    # select graph that matches ftm call
    graph = [f for f in ref_graph_list if f.graph["name"] == target]

stuff = hamming_df.groupby(["FeatureID", "ftm_call"])  
stuff.apply(add_basecalls)

However, instead of pulling each barcode out as a string and enumerating it, this pulls the whole column out as a Series, and I'm stuck.
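The reason for the Series is that GroupBy.apply passes each whole group (a DataFrame) to the function, not one row at a time; a small demo with hypothetical data illustrating what the function actually receives:

```python
import pandas as pd

# GroupBy.apply hands the function each *group* as a DataFrame,
# so .BC inside the function is a whole column (a Series),
# not a single row's string.
df = pd.DataFrame({"FeatureID": ["1_1_1", "1_1_1"],
                   "BC": ["GCTATT", "GCCTAT"]})

def bc_type(grp):
    return type(grp.BC).__name__

result = df.groupby("FeatureID").apply(bc_type)
print(result)
```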

The desired output is a graph containing the following information, shown here with dummy counts for the first BC "GCTATT":

FeatureID    chrom    pos         Nucleotide    Weight
1_1_1        12       25398138       G            10
1_1_1        12       25398138       C            22
1_1_1        12       25398139       T            12
1_1_1        12       25398140       A            15
1_1_1        12       25398141       T            18
1_1_1        12       25398142       T            22                     
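If the graph stores one node per (chrom, position, nucleotide) with a weight attribute, the table above can be recovered by flattening the node data; a sketch, where the tuple node key and the "weight" attribute name are my assumptions, not from the question:

```python
import pandas as pd
import networkx as nx

# Assumed layout: each node is a (chrom, pos, nucleotide) tuple
# carrying a "weight" attribute.
g = nx.DiGraph(name="NRAS_3")
g.add_node((12, 25398138, "G"), weight=10)
g.add_node((12, 25398139, "C"), weight=22)

# Flatten the node data into the tabular form shown above.
records = [{"chrom": c, "pos": p, "Nucleotide": n, "Weight": d["weight"]}
           for (c, p, n), d in g.nodes(data=True)]
table = pd.DataFrame(records)
print(table)
```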

Thanks!

1 answer:

Answer 0 (score: 1)

You probably need another apply with axis=1 to process each group's rows:

import pandas as pd
import numpy as np
import networkx as nx
from collections import defaultdict

# initiate graphs
GRAPHS = {"G": nx.DiGraph(name="G"),
          "KRAS": nx.DiGraph(name="KRAS"),
          "NRAS_3": nx.DiGraph(name="NRAS_3"), # notice that test_data.txt has "NRAS_3" not "KRAS_3"
     }

WEIGHT_DICT = defaultdict()

def update_weight_for_row(row, target_graph):
    pos = row["pos"]
    chrom = row["chrom"]
    for letter in row.BC:
        print(letter)
        # now you have access to letters in BC per row
        # and can update graph weights as desired

def add_basecalls(grp):
    # select graph that matches ftm_call
    target = grp.name[1]
    target_graph = GRAPHS[target]
    grp.apply(lambda row: update_weight_for_row(row, target_graph), axis=1)

# read in test basecalls
hamming_df = pd.read_csv("./test_data.txt", sep="\t")
hamming_df2 = hamming_df[["FeatureID", "BC", "chrom", "pos"]]  # Why is this line needed?
stuff = hamming_df.groupby(["FeatureID", "ftm_call"])  
stuff.apply(lambda grp: add_basecalls(grp))
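The answer leaves the body of update_weight_for_row as a print; a hedged sketch of how it could do the actual weight bookkeeping described in the question, assuming (chrom, position, nucleotide) tuples as node keys and a "weight" node attribute (both are my choices, not part of the answer):

```python
import networkx as nx

def update_weight_for_row(row, target_graph):
    """Add 1 to the weight of each (chrom, position, nucleotide) node,
    creating it with weight 1 the first time it is seen."""
    chrom = row["chrom"]
    # enumerate(BC, start=pos) pairs each letter with its genomic position
    for position, letter in enumerate(row["BC"], start=row["pos"]):
        node = (chrom, position, letter)
        if target_graph.has_node(node):
            target_graph.nodes[node]["weight"] += 1
        else:
            target_graph.add_node(node, weight=1)

# quick demonstration on one row's worth of data
graph = nx.DiGraph(name="NRAS_3")
row = {"chrom": 12, "pos": 25398138, "BC": "GCTATT"}
update_weight_for_row(row, graph)
update_weight_for_row(row, graph)
```

Because the function only indexes row by column name, it works the same whether it receives a pandas row from apply(axis=1) or a plain dict, which makes it easy to test in isolation.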