E:\Research\ELK\filebeat-6.2.3-windows-x86_64>filebeat --setup -e
2018-03-24T22:58:39.660+0530 INFO instance/beat.go:468 Home path: [E:\Research\ELK\filebeat-6.2.3-windows-x86_64] Config path: [E:\Research\ELK\filebeat-6.2.3-windows-x86_64] Data path: [E:\Research\ELK\filebeat-6.2.3-windows-x86_64\data] Logs path: [E:\Research\ELK\filebeat-6.2.3-windows-x86_64\logs]
2018-03-24T22:58:39.661+0530 INFO instance/beat.go:475 Beat UUID: f818bcc0-25bb-4545-bcd4-3523366a4c0e
2018-03-24T22:58:39.662+0530 INFO instance/beat.go:213 Setup Beat: filebeat; Version: 6.2.3
2018-03-24T22:58:39.662+0530 INFO elasticsearch/client.go:145 Elasticsearch url: http://localhost:9200
2018-03-24T22:58:39.665+0530 INFO pipeline/module.go:76 Beat name: DESKTOP-J932HJH
2018-03-24T22:58:39.666+0530 INFO [monitoring] log/log.go:97 Starting metrics logging every 30s
2018-03-24T22:58:39.666+0530 INFO elasticsearch/client.go:145 Elasticsearch url: http://localhost:9200
2018-03-24T22:58:39.672+0530 INFO elasticsearch/client.go:690 Connected to Elasticsearch version 6.2.3
2018-03-24T22:58:39.672+0530 INFO kibana/client.go:69 Kibana url: http://localhost:5601
2018-03-24T22:59:08.882+0530 INFO instance/beat.go:583 Kibana dashboards successfully loaded.
2018-03-24T22:59:08.882+0530 INFO elasticsearch/client.go:145 Elasticsearch url: http://localhost:9200
2018-03-24T22:59:08.885+0530 INFO elasticsearch/client.go:690 Connected to Elasticsearch version 6.2.3
2018-03-24T22:59:08.888+0530 INFO instance/beat.go:301 filebeat start running.
2018-03-24T22:59:08.888+0530 INFO registrar/registrar.go:108 Loading registrar data from E:\Research\ELK\filebeat-6.2.3-windows-x86_64\data\registry
2018-03-24T22:59:08.888+0530 INFO registrar/registrar.go:119 States Loaded from registrar: 5
2018-03-24T22:59:08.888+0530 INFO crawler/crawler.go:48 Loading Prospectors: 1
2018-03-24T22:59:08.889+0530 INFO log/prospector.go:111 Configured paths: [E:\Research\ELK\elasticsearch-6.2.3\logs\*.log]
2018-03-24T22:59:08.890+0530 INFO log/harvester.go:216 Harvester started for file: E:\Research\ELK\elasticsearch-6.2.3\logs\elasticsearch.log
2018-03-24T22:59:08.892+0530 ERROR fileset/factory.go:69 Error creating prospector: No paths were defined for prospector accessing config
2018-03-24T22:59:08.892+0530 INFO crawler/crawler.go:109 Stopping Crawler
2018-03-24T22:59:08.893+0530 INFO crawler/crawler.go:119 Stopping 1 prospectors
2018-03-24T22:59:08.897+0530 INFO log/prospector.go:410 Scan aborted because prospector stopped.
2018-03-24T22:59:08.897+0530 INFO log/harvester.go:216 Harvester started for file: E:\Research\ELK\elasticsearch-6.2.3\logs\elasticsearch_deprecation.log
2018-03-24T22:59:08.897+0530 INFO prospector/prospector.go:121 Prospector ticker stopped
2018-03-24T22:59:08.898+0530 INFO prospector/prospector.go:138 Stopping Prospector: 18361622063543553778
2018-03-24T22:59:08.898+0530 INFO log/harvester.go:237 Reader was closed: E:\Research\ELK\elasticsearch-6.2.3\logs\elasticsearch.log. Closing.
2018-03-24T22:59:08.898+0530 INFO crawler/crawler.go:135 Crawler stopped
2018-03-24T22:59:08.899+0530 INFO registrar/registrar.go:210 Stopping Registrar
2018-03-24T22:59:08.908+0530 INFO registrar/registrar.go:165 Ending Registrar
2018-03-24T22:59:08.910+0530 INFO instance/beat.go:308 filebeat stopped.
2018-03-24T22:59:08.948+0530 INFO [monitoring] log/log.go:132 Total non-zero metrics
2018-03-24T22:59:08.948+0530 INFO [monitoring] log/log.go:133 Uptime: 29.3387858s
2018-03-24T22:59:08.949+0530 INFO [monitoring] log/log.go:110 Stopping metrics logging.
2018-03-24T22:59:08.950+0530 ERROR instance/beat.go:667 Exiting: No paths were defined for prospector accessing config
Exiting: No paths were defined for prospector accessing config
I have an array of structs containing nested struct pointers allocated with malloc. I want to write the array to a binary file and later load it back into the same structure, overwriting the existing records. This is my array of structs:
struct viaje {
    char *identificador;
    char *ciudadDestino;
    char *hotel;
    int numeroNoches;
    char *tipoTransporte;
    float precioAlojamiento;
    float precioDesplazamiento;
};

struct cliente {
    char *dni;
    char *nombre;
    char *apellidos;
    char *direccion;
    int totalViajes;
    struct viaje *viajes;
} *clientes;
I am trying to write it to the binary file like this:
for (i = 0; i < MAX_TAM_CLIENTES; i++) {
    fwrite(&clientes[i], sizeof(struct cliente)-(sizeof(struct viaje)*MAX_TAM_VIAJES_CLIENTE), 1, fp_guardarCargarEstado);
    for (j = 0; j < clientes[i].totalViajes; j++) {
        fwrite(&clientes[i].viajes[j], sizeof(struct viaje), 1, fp_guardarCargarEstado);
    }
}
where MAX_TAM_CLIENTES is a define giving the maximum size of the clientes array, and MAX_TAM_VIAJES_CLIENTE is a define giving the maximum number of viajes one client can hold. I then try to load the binary data back like this:
for (i = 0; i < MAX_TAM_CLIENTES; i++) {
    clientes = (struct cliente *)realloc(clientes, (totalClientes+1)*sizeof(struct cliente));
    clientes[totalClientes].dni = (char *)malloc((MAX_TAM_DNI+1)*sizeof(char));
    clientes[totalClientes].nombre = (char *)malloc((MAX_TAM_NOMBRE+1)*sizeof(char));
    clientes[totalClientes].apellidos = (char *)malloc((MAX_TAM_APELLIDOS+1)*sizeof(char));
    clientes[totalClientes].direccion = (char *)malloc((MAX_TAM_DIRECCION+1)*sizeof(char));
    fread(&clientes[i], sizeof(struct cliente)-(sizeof(struct viaje)*MAX_TAM_VIAJES_CLIENTE), 1, fp_guardarCargarEstado);
    for (j = 0; j < clientes[i].totalViajes; j++) {
        clientes[i].viajes = (struct viaje *)realloc(clientes[i].viajes, (j+1)*sizeof(struct viaje));
        clientes[i].viajes[j].identificador = (char *)malloc((MAX_TAM_IDENTIFICADOR+1)*sizeof(char));
        clientes[i].viajes[j].ciudadDestino = (char *)malloc((MAX_TAM_CIUDAD_DESTINO+1)*sizeof(char));
        clientes[i].viajes[j].hotel = (char *)malloc((MAX_TAM_HOTEL+1)*sizeof(char));
        clientes[i].viajes[j].tipoTransporte = (char *)malloc((MAX_TAM_TIPO_TRANSPORTE+1)*sizeof(char));
        fread(&clientes[i].viajes[j], sizeof(struct viaje), 1, fp_guardarCargarEstado);
    }
}
where the MAX_TAM_* constants are defines giving the maximum size of each field. I don't know whether the data is even being saved correctly, because when I try to load it the program crashes and no data is loaded. I can't see what could be going wrong there. Any ideas? Thanks.
Answer 0 (score: 1)
You cannot save the full contents of these structures with fwrite and read them back with fread, because they contain pointers to the strings rather than the actual characters. There are two ways to achieve your goal:
1. Write the fields out as text, separated by , (or some other delimiter), and parse that text back into the structures when loading.
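A minimal sketch of this text approach for one viaje, assuming a hypothetical field-size limit MAX_CAMPO and the struct layout from the question (the dup_cadena helper is mine, not from the original code):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_CAMPO 64  /* hypothetical upper bound on one text field */

struct viaje {
    char *identificador;
    char *ciudadDestino;
    char *hotel;
    int numeroNoches;
    char *tipoTransporte;
    float precioAlojamiento;
    float precioDesplazamiento;
};

/* Copy a string into freshly malloc'd storage. */
static char *dup_cadena(const char *s) {
    char *p = malloc(strlen(s) + 1);
    if (p) strcpy(p, s);
    return p;
}

/* Write one viaje as a single comma-separated text line. */
static void escribir_viaje_texto(FILE *fp, const struct viaje *v) {
    fprintf(fp, "%s,%s,%s,%d,%s,%f,%f\n",
            v->identificador, v->ciudadDestino, v->hotel,
            v->numeroNoches, v->tipoTransporte,
            v->precioAlojamiento, v->precioDesplazamiento);
}

/* Parse one line back, allocating new storage for each string field.
   Returns 1 on success, 0 on EOF or malformed input. */
static int leer_viaje_texto(FILE *fp, struct viaje *v) {
    char id[MAX_CAMPO], ciudad[MAX_CAMPO], hotel[MAX_CAMPO], transp[MAX_CAMPO];
    if (fscanf(fp, "%63[^,],%63[^,],%63[^,],%d,%63[^,],%f,%f\n",
               id, ciudad, hotel, &v->numeroNoches, transp,
               &v->precioAlojamiento, &v->precioDesplazamiento) != 7)
        return 0;
    v->identificador = dup_cadena(id);
    v->ciudadDestino = dup_cadena(ciudad);
    v->hotel = dup_cadena(hotel);
    v->tipoTransporte = dup_cadena(transp);
    return 1;
}
```

The same pattern would extend to struct cliente: write totalViajes as one of the text fields, then that many viaje lines, and reverse the process on load. Fields that may themselves contain a comma would need escaping or a different delimiter.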
2. Make the string fields fixed-size arrays of char instead of pointers to allocated storage. If you open the files as binary, you should be able to write the structures with fwrite and read them back with fread. Note that this latter method is far less flexible than the text method: the files will not be portable to platforms where the representation of types such as int differs. The struct viaje *viajes; member must also be replaced by an actual embedded array of structs containing no pointers.
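A sketch of this fixed-size-array variant (some cliente fields omitted for brevity; the MAX_* limits are hypothetical stand-ins for the question's MAX_TAM_* defines):

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical limits, standing in for the MAX_TAM_* defines. */
#define MAX_ID          15
#define MAX_CIUDAD      31
#define MAX_HOTEL       31
#define MAX_TRANSPORTE  15
#define MAX_DNI         9
#define MAX_NOMBRE      31
#define MAX_VIAJES      10

struct viaje_fijo {
    char identificador[MAX_ID + 1];
    char ciudadDestino[MAX_CIUDAD + 1];
    char hotel[MAX_HOTEL + 1];
    int numeroNoches;
    char tipoTransporte[MAX_TRANSPORTE + 1];
    float precioAlojamiento;
    float precioDesplazamiento;
};

struct cliente_fijo {
    char dni[MAX_DNI + 1];
    char nombre[MAX_NOMBRE + 1];
    int totalViajes;
    struct viaje_fijo viajes[MAX_VIAJES];  /* embedded array, no pointers */
};

/* With no pointers inside, one call saves or restores a whole record. */
static int guardar_cliente(FILE *fp, const struct cliente_fijo *c) {
    return fwrite(c, sizeof *c, 1, fp) == 1;
}

static int cargar_cliente(FILE *fp, struct cliente_fijo *c) {
    return fread(c, sizeof *c, 1, fp) == 1;
}
```

Files written this way embed the native padding and type layout of one compiler/platform, so they are only safely readable by the same build; that is the flexibility cost mentioned above.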