I am trying to automatically build a networkx graph for any pandas dataframe that comes in as input.
The dataframe looks like this:
FeatureID BC chrom pos ftm_call
1_1_1 GCTATT 12 25398138 NRAS_3
1_1_1 GCCTAT 12 25398160 NRAS_3
1_1_1 GCCTAT 12 25398073 NRAS_3
1_1_1 GATCCT 12 25398128 NRAS_3
1_1_1 GATCCT 12 25398107 NRAS_3
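For reference, the sample rows above can be reconstructed directly as a DataFrame (a minimal sketch; in the question the real data is read from test_data.txt):

```python
import pandas as pd

# Minimal reconstruction of the sample rows shown above
df = pd.DataFrame({
    "FeatureID": ["1_1_1"] * 5,
    "BC": ["GCTATT", "GCCTAT", "GCCTAT", "GATCCT", "GATCCT"],
    "chrom": [12] * 5,
    "pos": [25398138, 25398160, 25398073, 25398128, 25398107],
    "ftm_call": ["NRAS_3"] * 5,
})
print(df.shape)  # (5, 5)
```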
Here is the algorithm I need to sort out:
This is what I have so far:
import pandas as pd
import numpy as np
import networkx as nx
from collections import defaultdict
# read in test basecalls
hamming_df = pd.read_csv("./test_data.txt", sep="\t")
hamming_df = hamming_df[["FeatureID", "BC", "chrom", "pos"]]
# initiate graphs
G = nx.DiGraph(name="G")
KRAS = nx.DiGraph(name="KRAS")
NRAS_3 = nx.DiGraph(name="NRAS_3")
# list of reference graphs
ref_graph_list = [G, KRAS, NRAS_3]
def add_basecalls(row):
    basecall = row.BC.astype(str)
    target = row.name[1]
    pos = row["pos"]
    chrom = row["chrom"]
    # initialize counter dictionary
    d = defaultdict()
    # select graph that matches ftm call
    graph = [f for f in ref_graph_list if f.graph["name"] == target]
stuff = hamming_df.groupby(["FeatureID", "ftm_call"])
stuff.apply(add_basecalls)
However, this is not pulling the barcodes out as strings; it just enumerates them and pulls them out as a Series, and I am stuck.
The desired output is a graph containing the following information, where an example for the first BC "GCTATT" is shown with dummy counts:
FeatureID chrom pos Nucleotide Weight
1_1_1 12 25398138 G 10
1_1_1 12 25398138 C 22
1_1_1 12 25398139 T 12
1_1_1 12 25398140 A 15
1_1_1 12 25398141 T 18
1_1_1 12 25398142 T 22
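The per-base rows above can be derived by expanding each barcode string into one row per nucleotide. A minimal sketch (assuming each base occupies consecutive genomic positions starting at pos; the helper name `expand_barcode` is mine, not from the question):

```python
def expand_barcode(feature_id, chrom, pos, bc):
    """Expand a barcode string into one (nucleotide, position) record per base."""
    return [
        {"FeatureID": feature_id, "chrom": chrom,
         "pos": pos + offset, "Nucleotide": base}
        for offset, base in enumerate(bc)
    ]

rows = expand_barcode("1_1_1", 12, 25398138, "GCTATT")
print(rows[0])  # first base "G" at the starting position
```

The weights would then come from counting how often each (position, nucleotide) pair is observed across barcodes.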
Thanks!

Answer (score: 1)
You probably need another `apply` with `axis=1` to parse the rows of each group:
import pandas as pd
import numpy as np
import networkx as nx
from collections import defaultdict
# initiate graphs
GRAPHS = {"G": nx.DiGraph(name="G"),
          "KRAS": nx.DiGraph(name="KRAS"),
          "NRAS_3": nx.DiGraph(name="NRAS_3"),  # notice that test_data.txt has "NRAS_3" not "KRAS_3"
          }
WEIGHT_DICT = defaultdict()
def update_weight_for_row(row, target_graph):
    pos = row["pos"]
    chrom = row["chrom"]
    for letter in row.BC:
        print(letter)
        # now you have access to letters in BC per row
        # and can update graph weights as desired

def add_basecalls(grp):
    # select graph that matches ftm_call
    target = grp.name[1]
    target_graph = GRAPHS[target]
    grp.apply(lambda row: update_weight_for_row(row, target_graph), axis=1)
# read in test basecalls
hamming_df = pd.read_csv("./test_data.txt", sep="\t")
hamming_df2 = hamming_df[["FeatureID", "BC", "chrom", "pos"]] # Why is this line needed?
stuff = hamming_df.groupby(["FeatureID", "ftm_call"])
stuff.apply(lambda grp: add_basecalls(grp))
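One way to fill in the `update_weight_for_row` stub is to store one node per (chrom, position, nucleotide) and bump a weight attribute each time that base is observed. A hedged sketch (the node scheme and weight-increment logic are assumptions, not part of the original answer):

```python
import networkx as nx

def update_weight_for_row(row, target_graph):
    chrom, pos = row["chrom"], row["pos"]
    # one node per (chrom, position, nucleotide); increment its weight on each observation
    for offset, base in enumerate(row["BC"]):
        node = (chrom, pos + offset, base)
        if target_graph.has_node(node):
            target_graph.nodes[node]["weight"] += 1
        else:
            target_graph.add_node(node, weight=1)

# tiny usage example; a plain dict stands in for a DataFrame row here
g = nx.DiGraph(name="NRAS_3")
update_weight_for_row({"chrom": 12, "pos": 25398138, "BC": "GCTATT"}, g)
update_weight_for_row({"chrom": 12, "pos": 25398138, "BC": "GCTATT"}, g)
print(g.nodes[(12, 25398138, "G")]["weight"])  # 2
```

Edges between consecutive positions could be added the same way inside the loop if a path-style graph is wanted rather than bare nodes.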