Isolating a kernel module to a specific core using cpusets

Date: 2016-03-29 15:39:06

Tags: linux linux-kernel linux-device-driver cpuset

From user space we can use cpusets to isolate a specific core in the system and run only one specific process on that core.

I'm trying to do the same thing from a kernel module, i.e. I want the module to execute on an isolated core. In other words: how can I use cpusets inside a kernel module?

Including linux/cpuset.h in my kernel module does not work. So, I have a module like this:

#include <linux/module.h>
#include <linux/cpuset.h>

...
#ifdef CONFIG_CPUSETS
    printk(KERN_INFO "cpusets is enabled!");
#endif
cpuset_init(); // this function is declared in cpuset.h
...

When I try to load this module I get (in dmesg) the message cpusets is enabled!, but I also get the message Unknown symbol cpuset_init (err 0).

Similarly, I tried to use sched_setaffinity from linux/sched.h to move all running processes to a specific core and then run my module on the isolated core. I got the same kind of error: Unknown symbol sched_setaffinity (err 0). I guess I get "Unknown symbol" because these functions are not exported with EXPORT_SYMBOL in the kernel. So I then tried to call the sys_sched_setaffinity syscall (based on this question), but again got the same message: Unknown symbol sys_sched_setaffinity (err 0).

Also, I'm not looking for a solution that uses isolcpus, which is set at boot time. I want to load the module and only then have the isolation happen.

  • (To be more precise, I want the module's kernel thread to execute on an isolated core. I know I can use affinity to bind a thread to a specific core, but that does not guarantee that the core will be isolated from other processes.)
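
For reference, the plain affinity approach mentioned in the last point, binding a module's kernel thread to one core without isolating it, might look roughly like the sketch below. This is not part of the original question; kthread_bind is the in-kernel way to pin a kernel thread, and the CPU number 2 is only an example value.

#include <linux/module.h>
#include <linux/kthread.h>
#include <linux/delay.h>
#include <linux/err.h>

static struct task_struct *worker;

/* Simple thread body: loops until the module asks it to stop. */
static int worker_fn(void *data)
{
        while (!kthread_should_stop()) {
                /* per-core work would go here */
                msleep(1000);
        }
        return 0;
}

static int __init pin_thread_init(void)
{
        worker = kthread_create(worker_fn, NULL, "pinned_worker");
        if (IS_ERR(worker))
                return PTR_ERR(worker);
        kthread_bind(worker, 2);   /* bind to CPU 2 (example value) */
        wake_up_process(worker);
        return 0;
}

static void __exit pin_thread_exit(void)
{
        kthread_stop(worker);
}

module_init(pin_thread_init);
module_exit(pin_thread_exit);
MODULE_LICENSE("GPL");

This pins the thread to CPU 2, but, as the question notes, other tasks can still be scheduled on that core.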

4 answers:

Answer 0 (score: 9):

  

  "So I want the module to execute on an isolated core."

  

  "... actually isolate a specific core in our system and run only one specific process on that core."

Here is working source code that compiles and was tested on a Debian box running kernel 3.16. I'll first describe how to load and unload it and what the parameter we pass to it means.

All the source can be found on github here...

https://github.com/harryjackson/doc/tree/master/linux/kernel/toy/toy

Build and load the module...

make
insmod toy.ko param_cpu_id=2

To unload the module use

rmmod toy

I did not use modprobe because it needs some configuration etc. The parameter we pass to the toy kernel module is the CPU we want to isolate. None of the device operations that get called will run unless they are executing on that CPU.

Once the module is loaded you can find it at

/dev/toy

A simple operation like
cat /dev/toy

creates events that the kernel module catches and produces some output for. You can see the output using dmesg.
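
If you want those operations to be triggered from a particular CPU, pin the calling process first (for example by running cat under taskset). The small user-space test below does the same thing in C; it is only a sketch, not part of the original answer, and it assumes the module was loaded with param_cpu_id=2.

/* pin_and_read.c - sketch of a user-space test, not from the original answer.
 * Pins itself to CPU 2, then touches /dev/toy so toy_open/read/release run there. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    cpu_set_t set;
    char buf[16];
    int fd;

    CPU_ZERO(&set);
    CPU_SET(2, &set);                       /* same CPU as param_cpu_id=2 */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }

    fd = open("/dev/toy", O_RDONLY);        /* toy_open runs on CPU 2 */
    if (fd < 0) {
        perror("open /dev/toy");
        return 1;
    }
    read(fd, buf, sizeof(buf));             /* toy_read */
    close(fd);                              /* toy_release */
    return 0;
}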

The source code...

#include <linux/module.h>
#include <linux/fs.h>
#include <linux/miscdevice.h>
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Harry");
MODULE_DESCRIPTION("toy kernel module");
MODULE_VERSION("0.1"); 
#define  DEVICE_NAME "toy"
#define  CLASS_NAME  "toy"

static int    param_cpu_id;
module_param(param_cpu_id    , int, (S_IRUSR | S_IRGRP | S_IROTH));
MODULE_PARM_DESC(param_cpu_id, "CPU ID that operations run on");

//static void    bar(void *arg);
//static void    foo(void *cpu);
static int     toy_open(   struct inode *inodep, struct file *fp);
static ssize_t toy_read(   struct file *fp     , char *buffer, size_t len, loff_t * offset);
static ssize_t toy_write(  struct file *fp     , const char *buffer, size_t len, loff_t *);
static int     toy_release(struct inode *inodep, struct file *fp);

static struct file_operations toy_fops = {
  .owner = THIS_MODULE,
  .open = toy_open,
  .read = toy_read,
  .write = toy_write,
  .release = toy_release,
};

static struct miscdevice toy_device = {
  .minor = MISC_DYNAMIC_MINOR,
  .name = "toy",
  .fops = &toy_fops
};

//static int CPU_IDS[64] = {0};
static int toy_open(struct inode *inodep, struct file *filep) {
  int this_cpu = get_cpu();
  printk(KERN_INFO "open: called on CPU:%d\n", this_cpu);
  if(this_cpu == param_cpu_id) {
    printk(KERN_INFO "open: is on requested CPU: %d\n", smp_processor_id());
  }
  else {
    printk(KERN_INFO "open: not on requested CPU:%d\n", smp_processor_id());
  }
  put_cpu();
  return 0;
}
static ssize_t toy_read(struct file *filep, char *buffer, size_t len, loff_t *offset){
  int this_cpu = get_cpu();
  printk(KERN_INFO "read: called on CPU:%d\n", this_cpu);
  if(this_cpu == param_cpu_id) {
    printk(KERN_INFO "read: is on requested CPU: %d\n", smp_processor_id());
  }
  else {
    printk(KERN_INFO "read: not on requested CPU:%d\n", smp_processor_id());
  }
  put_cpu();
  return 0;
}
static ssize_t toy_write(struct file *filep, const char *buffer, size_t len, loff_t *offset){
  int this_cpu = get_cpu();
  printk(KERN_INFO "write called on CPU:%d\n", this_cpu);
  if(this_cpu == param_cpu_id) {
    printk(KERN_INFO "write: is on requested CPU: %d\n", smp_processor_id());
  }
  else {
    printk(KERN_INFO "write: not on requested CPU:%d\n", smp_processor_id());
  }
  put_cpu();
  return 0;
}
static int toy_release(struct inode *inodep, struct file *filep){
  int this_cpu = get_cpu();
  printk(KERN_INFO "release called on CPU:%d\n", this_cpu);
  if(this_cpu == param_cpu_id) {
    printk(KERN_INFO "release: is on requested CPU: %d\n", smp_processor_id());
  }
  else {
    printk(KERN_INFO "release: not on requested CPU:%d\n", smp_processor_id());
  }
  put_cpu();
  return 0;
}

static int __init toy_init(void) {
  int cpu_id;
  if(param_cpu_id < 0 || param_cpu_id > 4) {
    printk(KERN_INFO "toy: unable to load module without cpu parameter\n");
    return -1;
  }
  printk(KERN_INFO "toy: loading to device driver, param_cpu_id: %d\n", param_cpu_id);
  //preempt_disable(); // See notes below
  cpu_id = get_cpu();
  printk(KERN_INFO "toy init called and running on CPU: %d\n", cpu_id);
  misc_register(&toy_device);
  //preempt_enable(); // See notes below
  put_cpu();
  //smp_call_function_single(1,foo,(void *)(uintptr_t) 1,1);
  return 0;
}

static void __exit toy_exit(void) {
    misc_deregister(&toy_device);
    printk(KERN_INFO "toy exit called\n");
}

module_init(toy_init);
module_exit(toy_exit); 

The code above contains both of the things you asked about, i.e. isolating a CPU and having init run on the isolated core.

In init, get_cpu disables preemption, i.e. anything after it will not be preempted by the kernel and will keep running on one core. Note this was done on a 3.16 kernel; your mileage may vary depending on your kernel version, but I think these APIs have been around for a long time.

This is the Makefile...

obj-m += toy.o

all:
    make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
    make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean

Notes. get_cpu is declared in linux/smp.h as

#define get_cpu()   ({ preempt_disable(); smp_processor_id(); })
#define put_cpu()   preempt_enable()

so you don't actually need to call preempt_disable before calling get_cpu. The get_cpu call is a wrapper around the following sequence of calls...

preempt_count_inc();
barrier();

and put_cpu really does this...

barrier();
if (unlikely(preempt_count_dec_and_test())) {
  __preempt_schedule();
}   

Using the above you should be able to do whatever you want. Almost all of it came from the following sources.

Google for ... smp_call_function_single

Linux Kernel Development, the book by Robert Love.

http://derekmolloy.ie/writing-a-linux-kernel-module-part-2-a-character-device/

https://github.com/vsinitsyn/reverse/blob/master/reverse.c
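
For what it's worth, the commented-out smp_call_function_single() call in toy_init above needs a callback like foo, which the module only declares. A minimal sketch of that path (my filling-in, not part of the original code) could look like this:

#include <linux/smp.h>

/* Runs on the target CPU in IPI-like context: keep it short and do not sleep. */
static void foo(void *info)
{
        pr_info("foo: running on CPU %d (arg %lu)\n",
                smp_processor_id(), (unsigned long)info);
}

/* Called from toy_init; the last argument (1) means wait for foo to finish. */
static int run_on_target_cpu(void)
{
        return smp_call_function_single(param_cpu_id, foo,
                                        (void *)(uintptr_t)param_cpu_id, 1);
}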

Answer 1 (score: 2):

You pointed out the problem yourself:

  

  "I guess I get 'Unknown symbol' because these functions do not have EXPORT_SYMBOL in the kernel."

I think that's the key point of your problem. I see you are including linux/cpuset.h, the file where functions such as cpuset_init are declared. However, both during compilation and when using the nm command, we can see indications that this function is not available:

Compilation:

root@hectorvp-pc:/home/hectorvp/cpuset/cpuset_try# make
make -C /lib/modules/3.19.0-31-generic/build M=/home/hectorvp/cpuset/cpuset_try modules 
make[1]: Entering directory '/usr/src/linux-headers-3.19.0-31-generic'
  CC [M]  /home/hectorvp/cpuset/cpuset_try/cpuset_try.o
  Building modules, stage 2. 
  MODPOST 1 modules 
  WARNING: "cpuset_init" [/home/hectorvp/cpuset/cpuset_try/cpuset_try.ko] undefined!
  CC      /home/hectorvp/cpuset/cpuset_try/cpuset_try.mod.o
  LD [M]  /home/hectorvp/cpuset/cpuset_try/cpuset_try.ko
make[1]: Leaving directory '/usr/src/linux-headers-3.19.0-31-generic'

See the WARNING: "cpuset_init" [...] undefined!. Using nm:

root@hectorvp-pc:/home/hectorvp/cpuset/cpuset_try# nm cpuset_try.ko
0000000000000030 T cleanup_module
                 U cpuset_init
                 U __fentry__
0000000000000000 T init_module
000000000000002f r __module_depends
                 U printk
0000000000000000 D __this_module
0000000000000000 r __UNIQUE_ID_license0
000000000000000c r __UNIQUE_ID_srcversion1
0000000000000038 r __UNIQUE_ID_vermagic0
0000000000000000 r ____versions

(Note: U stands for 'undefined')

However, when I explore the kernel's symbols I see the following:

root@hectorvp-pc:/home/hectorvp/cpuset/cpuset_try# cat /proc/kallsyms | grep cpuset_init
ffffffff8110dc40 T cpuset_init_current_mems_allowed
ffffffff81d722ae T cpuset_init
ffffffff81d72342 T cpuset_init_smp

I see the symbol is there, but it is not available through /lib/modules/$(uname -r)/build/Module.symvers. So you were right.

After some further investigation I found where it is actually defined:

http://lxr.free-electrons.com/source/kernel/cpuset.c#L2101

This is the function you would need to call, since it is only available in kernel space; it cannot be reached from user space.

The workaround I found to let the module call this symbol is reported in the second answer to this question. Note that you no longer need to include linux/cpuset.h:

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>
//#include <linux/cpuset.h>
#include <linux/kallsyms.h>


int init_module(void)
{
        static void (*cpuset_init_p)(void);
        cpuset_init_p = (void*) kallsyms_lookup_name("cpuset_init");
        printk(KERN_INFO "Starting ...\n");
        #ifdef CONFIG_CPUSETS
            printk(KERN_INFO "cpusets is enabled!");
        #endif
        (*cpuset_init_p)();
        /* 
         * A non 0 return means init_module failed; module can't be loaded. 
         */
        return 0;
}

void cleanup_module(void)
{
        printk(KERN_INFO "Ending ...\n");
}

MODULE_LICENSE("GPL");

I compiled it successfully and loaded it with insmod. Below is the output I got in dmesg:

[ 1713.738925] Starting ...
[ 1713.738929] cpusets is enabled!
[ 1713.738943] kernel tried to execute NX-protected page - exploit attempt? (uid: 0)
[ 1713.739042] BUG: unable to handle kernel paging request at ffffffff81d7237b
[ 1713.739074] IP: [<ffffffff81d7237b>] cpuset_init+0x0/0x94
[ 1713.739102] PGD 1c16067 PUD 1c17063 PMD 30bc74063 PTE 8000000001d72163
[ 1713.739136] Oops: 0011 [#1] SMP 
[ 1713.739153] Modules linked in: cpuset_try(OE+) xt_conntrack ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 xt_addrtype iptable_filter ip_tables x_tables nf_nat nf_conntrack br_netfilter bridge stp llc pci_stub vboxpci(OE) vboxnetadp(OE) vboxnetflt(OE) vboxdrv(OE) aufs binfmt_misc cfg80211 nls_iso8859_1 snd_hda_codec_hdmi snd_hda_codec_realtek intel_rapl snd_hda_codec_generic iosf_mbi snd_hda_intel x86_pkg_temp_thermal intel_powerclamp snd_hda_controller snd_hda_codec snd_hwdep coretemp kvm_intel amdkfd kvm snd_pcm snd_seq_midi snd_seq_midi_event amd_iommu_v2 snd_rawmidi radeon snd_seq crct10dif_pclmul crc32_pclmul snd_seq_device aesni_intel ttm aes_x86_64 drm_kms_helper drm snd_timer i2c_algo_bit dcdbas mei_me lrw gf128mul mei snd glue_helper ablk_helper
[ 1713.739533]  cryptd soundcore shpchp lpc_ich serio_raw 8250_fintek mac_hid video parport_pc ppdev lp parport autofs4 hid_generic usbhid hid e1000e ahci psmouse ptp libahci pps_core
[ 1713.739628] CPU: 2 PID: 24679 Comm: insmod Tainted: G           OE  3.19.0-56-generic #62-Ubuntu
[ 1713.739663] Hardware name: Dell Inc. OptiPlex 9020/0PC5F7, BIOS A03 09/17/2013
[ 1713.739693] task: ffff8800d29f09d0 ti: ffff88009177c000 task.ti: ffff88009177c000
[ 1713.739723] RIP: 0010:[<ffffffff81d7237b>]  [<ffffffff81d7237b>] cpuset_init+0x0/0x94
[ 1713.739757] RSP: 0018:ffff88009177fd10  EFLAGS: 00010292
[ 1713.739779] RAX: 0000000000000013 RBX: ffffffff81c1a080 RCX: 0000000000000013
[ 1713.739808] RDX: 000000000000c928 RSI: 0000000000000246 RDI: 0000000000000246
[ 1713.739836] RBP: ffff88009177fd18 R08: 000000000000000a R09: 00000000000003db
[ 1713.739865] R10: 0000000000000092 R11: 00000000000003db R12: ffff8800ad1aaee0
[ 1713.739893] R13: 0000000000000000 R14: ffffffffc0947000 R15: ffff88009177fef8
[ 1713.739923] FS:  00007fbf45be8700(0000) GS:ffff88031dd00000(0000) knlGS:0000000000000000
[ 1713.739955] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1713.739979] CR2: ffffffff81d7237b CR3: 00000000a3733000 CR4: 00000000001407e0
[ 1713.740007] Stack:
[ 1713.740016]  ffffffffc094703e ffff88009177fd98 ffffffff81002148 0000000000000001
[ 1713.740052]  0000000000000001 ffff8802479de200 0000000000000001 ffff88009177fd78
[ 1713.740087]  ffffffff811d79e9 ffffffff810fb058 0000000000000018 ffffffffc0949000
[ 1713.740122] Call Trace:
[ 1713.740137]  [<ffffffffc094703e>] ? init_module+0x3e/0x50 [cpuset_try]
[ 1713.740175]  [<ffffffff81002148>] do_one_initcall+0xd8/0x210
[ 1713.740190]  [<ffffffff811d79e9>] ? kmem_cache_alloc_trace+0x189/0x200
[ 1713.740207]  [<ffffffff810fb058>] ? load_module+0x15b8/0x1d00
[ 1713.740222]  [<ffffffff810fb092>] load_module+0x15f2/0x1d00
[ 1713.740236]  [<ffffffff810f6850>] ? store_uevent+0x40/0x40
[ 1713.740250]  [<ffffffff810fb916>] SyS_finit_module+0x86/0xb0
[ 1713.740265]  [<ffffffff817ce10d>] system_call_fastpath+0x16/0x1b
[ 1713.740280] Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0c 53 58 31 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <00> 00 00 00 00 1c 00 00 00 c0 92 2c 7d c0 92 2c 7d a0 fc 69 ee 
[ 1713.740398] RIP  [<ffffffff81d7237b>] cpuset_init+0x0/0x94
[ 1713.740413]  RSP <ffff88009177fd10>
[ 1713.740421] CR2: ffffffff81d7237b
[ 1713.746177] ---[ end trace 25614103c0658b94 ]---

Despite the error (the oops is most likely because cpuset_init is an __init function, so its code is discarded after boot and calling it later jumps into freed, NX-protected memory), I'd still say I have answered your initial question:

  

  "How can I use cpusets in a kernel module?"

Probably not in the most elegant way, since I'm not an expert at all. You'll need to take it from here.

Regards

Answer 2 (score: 0):

Have you tried using a work_struct together with

struct workqueue_attrs {
        cpumask_var_t           cpumask;        /* allowed CPUs */
};

First you should isolate the cpu (e.g. cpu 0x1) via

setenv bootargs isolcpus=\"0x1\"

and next

#include <linux/module.h>
#include <linux/workqueue.h>

struct lkm_sample {
        struct work_struct       lkm_work_struct;
        struct workqueue_struct *lkm_wq_struct;
        ...
};
static struct lkm_sample lkm_smpl;

static void work(struct work_struct *work)
{
        struct lkm_sample *tmp = container_of(work, struct lkm_sample, lkm_work_struct);
        ....
        return;
}

static int __init lkm_init(void)
{
        // see: https://lwn.net/Articles/540999/
        lkm_smpl.lkm_wq_struct = create_singlethread_workqueue("you_wq_name");
        INIT_WORK(&lkm_smpl.lkm_work_struct, work);
        return 0;
}
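
The snippet above creates the workqueue and initialises the work item but never queues it. One simple way to actually run work() on a chosen CPU is schedule_work_on(), sketched below. This is my addition, not part of the original answer; CPU 1 just matches the isolcpus example above, and isolcpus is not required by schedule_work_on itself.

#include <linux/module.h>
#include <linux/workqueue.h>
#include <linux/smp.h>

static struct work_struct my_work;

static void work(struct work_struct *w)
{
        pr_info("work: running on CPU %d\n", smp_processor_id());
}

static int __init lkm_init(void)
{
        INIT_WORK(&my_work, work);
        /* Queue on CPU 1's per-CPU kworker, so work() executes there. */
        schedule_work_on(1, &my_work);
        return 0;
}

static void __exit lkm_exit(void)
{
        flush_work(&my_work);   /* make sure work() finished before unload */
}

module_init(lkm_init);
module_exit(lkm_exit);
MODULE_LICENSE("GPL");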

If you want to start the lkm itself (i.e. run its __init) on the isolated cpu:

  1. setenv bootargs isolcpus=\"0x1\"

  2. insmod helper_module.ko, which uses call_usermodehelper_setup:

     struct subprocess_info *call_usermodehelper_setup(char *path,
             char **argv, /* e.g. taskset 0x00000001 helper_application */
             char **envp, gfp_t gfp_mask,
             int (*init)(struct subprocess_info *info, struct cred *new),
             void (*cleanup)(struct subprocess_info *info),
             void *data);

     Using this helper kernel module, it should run a user-space program (helper_application) through taskset, and the mask should come from isolcpus. The helper module should only run its __init function() and return -1, because it has only one task: running the userspace app on the isolated cpu. (A rough sketch of such a helper module is shown after this list.)

  3. Next, the user-space helper application should simply do insmod goal_module.ko; goal_module will then start on the same isolated cpu.

  4. Use a workqueue to keep the isolated module running on the isolated cpu.
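
A rough sketch of the helper module from step 2 is below. It uses call_usermodehelper(), which wraps the call_usermodehelper_setup/exec pair shown above; the taskset path, the 0x1 mask and the helper_application path are placeholders I made up, not values from the answer.

#include <linux/module.h>
#include <linux/kmod.h>

MODULE_LICENSE("GPL");

/* Placeholder paths and arguments: adjust to your system. */
static char *helper_argv[] = {
        "/usr/bin/taskset", "0x1", "/usr/local/bin/helper_application", NULL
};
static char *helper_envp[] = {
        "HOME=/", "PATH=/sbin:/bin:/usr/sbin:/usr/bin", NULL
};

static int __init helper_init(void)
{
        int ret;

        /* Run "taskset 0x1 helper_application" and wait for it to exit. */
        ret = call_usermodehelper(helper_argv[0], helper_argv,
                                  helper_envp, UMH_WAIT_PROC);
        pr_info("helper: call_usermodehelper returned %d\n", ret);

        /* Fail the load on purpose, as suggested in step 2, so this
         * throw-away helper module never stays resident. */
        return -1;
}
module_init(helper_init);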

Answer 3 (score: 0):

Using on_each_cpu() and filtering for the desired CPU works. on_each_cpu() runs the function on every online CPU; the check inside func makes only the target CPU print.

targetcpu.c

#include <linux/init.h>
#include <linux/module.h>
#include <linux/kernel.h>

const static int TARGET_CPU = 4;

static void func(void *info){
    int cpu = get_cpu();
    if(cpu == TARGET_CPU){
        printk("on target cpu: %d\n", cpu);
    }
    put_cpu();
}

int init_module(void) {
    printk("enter\n");
    on_each_cpu(func, NULL, 1);
    return 0;
}

void cleanup_module(void) {
    printk("exit\n");
}

Makefile

obj-m += targetcpu.o

all:
    make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
    make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean