Postgres performance

Time: 2013-09-30 15:01:36

Tags: python django postgresql twisted postgis

I need help with PostgreSQL performance.

I have tuned the postgresql configuration file for my server, but it is still slow and CPU usage climbs above 120%.

I don't know how to solve this. I have searched Google for more information, but it wasn't enough; I have also tried VACUUM/autovacuum statements and reindexing the database, but it remains slow.

My application is a GPS listener that inserts more than 6,000 records per minute through a TCP server built on Python Twisted; that part works fine. The problem appears when I try to follow the GPS devices on a map in real time: every 6 seconds my Django app queries the database through a stored procedure that returns the last position of each device, and with more than 50 devices the query becomes slow and CPU usage goes above 120%.

The Django app connects directly to the Postgres database, while the TCP listener for the devices connects to the database through pgbouncer (one connection per thread). I am not putting my Django web app behind pgbouncer because I don't want to risk crashing the GPS device connections on pgbouncer.

Can you help me get better performance?

I am attaching my stored procedure, my configuration files, and my CPU and memory information.

Stored procedure

CREATE OR REPLACE FUNCTION gps_get_live_location (
    _imeis varchar(8)
)
RETURNS TABLE (
    imei varchar,
    device_id integer,
    date_time_process timestamp with time zone, 
    latitude double precision, 
    longitude double precision, 
    course smallint, 
    speed smallint, 
    mileage integer,
    gps_signal smallint,
    gsm_signal smallint,
    alarm_status boolean,
    gsm_status boolean,
    vehicle_status boolean,
    alarm_over_speed boolean,
    other text,
    address varchar
) AS $func$
DECLARE 
    arr varchar[];
BEGIN
    arr := regexp_split_to_array(_imeis, E'\\s+');
    FOR i IN 1..array_length(arr, 1) LOOP
        RETURN QUERY 
        SELECT 
        gpstracking_device_tracks.imei,
        gpstracking_device_tracks.device_id, 
        gpstracking_device_tracks.date_time_process,
        gpstracking_device_tracks.latitude,
        gpstracking_device_tracks.longitude,
        gpstracking_device_tracks.course,
        gpstracking_device_tracks.speed,
        gpstracking_device_tracks.mileage,
        gpstracking_device_tracks.gps_signal,
        gpstracking_device_tracks.gsm_signal,
        gpstracking_device_tracks.alarm_status,
        gpstracking_device_tracks.gps_status,
        gpstracking_device_tracks.vehicle_status,
        gpstracking_device_tracks.alarm_over_speed,
        gpstracking_device_tracks.other,
        gpstracking_device_tracks.address
        FROM gpstracking_device_tracks
        WHERE gpstracking_device_tracks.imei = arr[i]::VARCHAR
        AND gpstracking_device_tracks.date_time_process >= date_trunc('hour', now()) 
        AND gpstracking_device_tracks.date_time_process <= NOW()
        ORDER BY gpstracking_device_tracks.date_time_process DESC
        LIMIT 1;
    END LOOP;
    RETURN;
END;
$func$ 
LANGUAGE plpgsql VOLATILE SECURITY DEFINER;
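
For reference, the function is called with all the IMEIs in one whitespace-separated string; a minimal example call (the IMEI values below are made-up placeholders) would be:

-- Hypothetical example call; the IMEIs are placeholder values.
SELECT imei, date_time_process, latitude, longitude
FROM gps_get_live_location('356938035643809 358673013795895');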

$ cat /etc/sysctl.conf

kernel.shmmax = 6871947673
kernel.shmall = 6871947673
fs.file-max = 4194304

$ cat /etc/postgresql/9.1/main/postgresql.conf

data_directory = '/var/lib/postgresql/9.1/main'         # use data in another directory
hba_file = '/etc/postgresql/9.1/main/pg_hba.conf'       # host-based authentication file
ident_file = '/etc/postgresql/9.1/main/pg_ident.conf'   # ident configuration file
external_pid_file = '/var/run/postgresql/9.1-main.pid'          # write an extra PID file
listen_addresses = 'localhost'          # what IP address(es) to listen on;
port = 5432                             # (change requires restart)
max_connections = 80                    # (change requires restart)
superuser_reserved_connections = 3      # (change requires restart)
unix_socket_directory = '/var/run/postgresql'           # (change requires restart)
#unix_socket_group = ''                 # (change requires restart)
#unix_socket_permissions = 0777         # begin with 0 to use octal notation
#bonjour = off                          # advertise server via Bonjour
#bonjour_name = ''                      # defaults to the computer name
ssl = true                              # (change requires restart)
#ssl_ciphers = 'ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH'      # allowed SSL ciphers
#ssl_renegotiation_limit = 512MB        # amount of data between renegotiations
#password_encryption = on
#db_user_namespace = off
#krb_server_keyfile = ''
#krb_srvname = 'postgres'               # (Kerberos only)
#krb_caseins_users = off
#tcp_keepalives_idle = 0                # TCP_KEEPIDLE, in seconds;
#tcp_keepalives_interval = 0            # TCP_KEEPINTVL, in seconds;
#tcp_keepalives_count = 0               # TCP_KEEPCNT;
# shared_buffers = 4096MB                       # min 128kB
temp_buffers = 16MB                     # min 800kB
# work_mem = 80MB                               # min 64kB
# maintenance_work_mem = 2048MB         # min 1MB
max_stack_depth = 4MB                   # min 100kB
#max_files_per_process = 1000           # min 25
#vacuum_cost_delay = 0ms                # 0-100 milliseconds
#vacuum_cost_page_hit = 1               # 0-10000 credits
#vacuum_cost_page_miss = 10             # 0-10000 credits
#vacuum_cost_page_dirty = 20            # 0-10000 credits
#vacuum_cost_limit = 200                # 1-10000 credits
#bgwriter_delay = 200ms                 # 10-10000ms between rounds
#bgwriter_lru_maxpages = 100            # 0-1000 max buffers written/round
#bgwriter_lru_multiplier = 2.0          # 0-10.0 multipler on buffers scanned/round
#effective_io_concurrency = 1           # 1-1000. 0 disables prefetching
#wal_level = minimal                    # minimal, archive, or hot_standby
#fsync = on                             # turns forced synchronization on or off
#synchronous_commit = on                # synchronization level; on, off, or local
#wal_sync_method = fsync                # the default is the first option
#full_page_writes = on                  # recover from partial page writes
#wal_buffers = -1                       # min 32kB, -1 sets based on shared_buffers
#wal_writer_delay = 200ms               # 1-10000 milliseconds
#commit_delay = 0                       # range 0-100000, in microseconds
#commit_siblings = 5                    # range 1-1000
# checkpoint_segments = 64              # in logfile segments, min 1, 16MB each
checkpoint_timeout = 5min               # range 30s-1h
# checkpoint_completion_target = 0.5    # checkpoint target duration, 0.0 - 1.0
#checkpoint_warning = 30s               # 0 disables
#archive_mode = off             # allows archiving to be done
#archive_command = ''           # command to use to archive a logfile segment
#archive_timeout = 0            # force a logfile segment switch after this
#max_wal_senders = 0            # max number of walsender processes
#wal_sender_delay = 1s          # walsender cycle time, 1-10000 milliseconds
#wal_keep_segments = 0          # in logfile segments, 16MB each; 0 disables
#vacuum_defer_cleanup_age = 0   # number of xacts by which cleanup is delayed
#replication_timeout = 60s      # in milliseconds; 0 disables
#synchronous_standby_names = '' # standby servers that provide sync rep
#hot_standby = off                      # "on" allows queries during recovery
#max_standby_archive_delay = 30s        # max delay before canceling queries
#max_standby_streaming_delay = 30s      # max delay before canceling queries
#wal_receiver_status_interval = 10s     # send replies at least this often
#hot_standby_feedback = off             # send info from standby to prevent
#enable_bitmapscan = on
#enable_hashagg = on
#enable_hashjoin = on
#enable_indexscan = on
#enable_material = on
#enable_mergejoin = on
#enable_nestloop = on
#enable_seqscan = on
#enable_sort = on
#enable_tidscan = on
#seq_page_cost = 1.0                    # measured on an arbitrary scale
#random_page_cost = 4.0                 # same scale as above
cpu_tuple_cost = 0.01                   # same scale as above
cpu_index_tuple_cost = 0.005            # same scale as above
#cpu_operator_cost = 0.0025             # same scale as above
# effective_cache_size = 8192MB
#geqo = on
#geqo_threshold = 12
#geqo_effort = 5                        # range 1-10
#geqo_pool_size = 0                     # selects default based on effort
#geqo_generations = 0                   # selects default based on effort
#geqo_selection_bias = 2.0              # range 1.5-2.0
#geqo_seed = 0.0                        # range 0.0-1.0
#default_statistics_target = 100        # range 1-10000
#constraint_exclusion = partition       # on, off, or partition
#cursor_tuple_fraction = 0.1            # range 0.0-1.0
#from_collapse_limit = 8
#join_collapse_limit = 8                # 1 disables collapsing of explicit
#log_destination = 'stderr'             # Valid values are combinations of
#logging_collector = off                # Enable capturing of stderr and csvlog
# These are only used if logging_collector is on:
#log_directory = 'pg_log'               # directory where log files are written,
#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'        # log file name pattern,
#log_file_mode = 0600                   # creation mode for log files,
#log_truncate_on_rotation = off         # If on, an existing log file with the
#log_rotation_age = 1d                  # Automatic rotation of logfiles will
#log_rotation_size = 10MB               # Automatic rotation of logfiles will
#syslog_facility = 'LOCAL0'
#syslog_ident = 'postgres'
#silent_mode = off                      # Run server silently.
#client_min_messages = notice           # values in order of decreasing detail:
#log_min_messages = warning             # values in order of decreasing detail:
#log_min_error_statement = error        # values in order of decreasing detail:
#log_min_duration_statement = -1        # -1 is disabled, 0 logs all statements
#debug_print_parse = off
#debug_print_rewritten = off
#debug_print_plan = off
#debug_pretty_print = on
#log_checkpoints = off
#log_connections = off
#log_disconnections = off
#log_duration = off
#log_error_verbosity = default          # terse, default, or verbose messages
#log_hostname = off
log_line_prefix = '%t '                 # special values:
#log_lock_waits = off                   # log lock waits >= deadlock_timeout
#log_statement = 'none'                 # none, ddl, mod, all
#log_temp_files = -1                    # log temporary files equal or larger
#log_timezone = '(defaults to server environment setting)'
#track_activities = on
#track_counts = on
#track_functions = none                 # none, pl, all
#track_activity_query_size = 1024       # (change requires restart)
#update_process_title = on
#stats_temp_directory = 'pg_stat_tmp'
#log_parser_stats = off
#log_planner_stats = off
#log_executor_stats = off
#log_statement_stats = off
#autovacuum = on                        # Enable autovacuum subprocess?  'on'
#log_autovacuum_min_duration = -1       # -1 disables, 0 logs all actions and
#autovacuum_max_workers = 3             # max number of autovacuum subprocesses
#autovacuum_naptime = 1min              # time between autovacuum runs
#autovacuum_vacuum_threshold = 50       # min number of row updates before
#autovacuum_analyze_threshold = 50      # min number of row updates before
#autovacuum_vacuum_scale_factor = 0.2   # fraction of table size before vacuum
#autovacuum_analyze_scale_factor = 0.1  # fraction of table size before analyze
#autovacuum_freeze_max_age = 200000000  # maximum XID age before forced vacuum
#autovacuum_vacuum_cost_delay = 20ms    # default vacuum cost delay for
#autovacuum_vacuum_cost_limit = -1      # default vacuum cost limit for
#search_path = '"$user",public'         # schema names
#default_tablespace = ''                # a tablespace name, '' uses the default
#temp_tablespaces = ''                  # a list of tablespace names, '' uses
#check_function_bodies = on
#default_transaction_isolation = 'read committed'
#default_transaction_read_only = off
#default_transaction_deferrable = off
#session_replication_role = 'origin'
#statement_timeout = 0                  # in milliseconds, 0 is disabled
#vacuum_freeze_min_age = 50000000
#vacuum_freeze_table_age = 150000000
#bytea_output = 'hex'                   # hex, escape
#xmlbinary = 'base64'
#xmloption = 'content'
datestyle = 'iso, mdy'
#intervalstyle = 'postgres'
#timezone = '(defaults to server environment setting)'
#timezone_abbreviations = 'Default'     # Select the set of available time zone
#extra_float_digits = 0                 # min -15, max 3
#client_encoding = sql_ascii            # actually, defaults to database
lc_messages = 'en_US.UTF-8'                     # locale for system error message
lc_monetary = 'en_US.UTF-8'                     # locale for monetary formatting
lc_numeric = 'en_US.UTF-8'                      # locale for number formatting
lc_time = 'en_US.UTF-8'                         # locale for time formatting
default_text_search_config = 'pg_catalog.english'
#dynamic_library_path = '$libdir'
#local_preload_libraries = ''
#deadlock_timeout = 1s
#max_locks_per_transaction = 64         # min 10
#max_pred_locks_per_transaction = 64    # min 10
#array_nulls = on
#backslash_quote = safe_encoding        # on, off, or safe_encoding
#default_with_oids = off
#escape_string_warning = on
#lo_compat_privileges = off
#quote_all_identifiers = off
#sql_inheritance = on
#standard_conforming_strings = on
#synchronize_seqscans = on
#transform_null_equals = off
#exit_on_error = off                            # terminate session on any error?
#restart_after_crash = on                       # reinitialize after backend crash?
#custom_variable_classes = ''           # list of custom variable class names
default_statistics_target = 50 # pgtune wizard 2013-09-24
maintenance_work_mem = 960MB # pgtune wizard 2013-09-24
constraint_exclusion = on # pgtune wizard 2013-09-24
checkpoint_completion_target = 0.9 # pgtune wizard 2013-09-24
effective_cache_size = 11GB # pgtune wizard 2013-09-24
work_mem = 96MB # pgtune wizard 2013-09-24
wal_buffers = 8MB # pgtune wizard 2013-09-24
checkpoint_segments = 16 # pgtune wizard 2013-09-24
shared_buffers = 3840MB # pgtune wizard 2013-09-24

$ cat /etc/pgbouncer/pgbouncer.ini

[databases]
anfitrion = host=127.0.0.1 port=5432 dbname=**** user=**** password=**** client_encoding=UNICODE datestyle=ISO connect_query='SELECT 1'

[pgbouncer]
logfile = /var/log/postgresql/pgbouncer.log
pidfile = /var/run/postgresql/pgbouncer.pid
listen_addr = 127.0.0.1
listen_port = 6432
unix_socket_dir = /var/run/postgresql
auth_type = trust
auth_file = /etc/pgbouncer/userlist.txt
;admin_users = user2, someadmin, otheradmin
;stats_users = stats, root
pool_mode = statement
server_reset_query = DISCARD ALL
;ignore_startup_parameters = extra_float_digits
;server_check_query = select 1
;server_check_delay = 30
; total number of clients that can connect
max_client_conn = 1000
default_pool_size = 80
;reserve_pool_size = 5
;reserve_pool_timeout = 3
;log_connections = 1
;log_disconnections = 1
;log_pooler_errors = 1
;server_round_robin = 0
;server_lifetime = 1200
;server_idle_timeout = 60
;server_connect_timeout = 15
;server_login_retry = 15
;query_timeout = 0
;query_wait_timeout = 0
;client_idle_timeout = 0
;client_login_timeout = 60
;autodb_idle_timeout = 3600
;pkt_buf = 2048
;listen_backlog = 128
;tcp_defer_accept = 0
;tcp_socket_buffer = 0
;tcp_keepalive = 1
;tcp_keepcnt = 0
;tcp_keepidle = 0
;tcp_keepintvl = 0
;dns_max_ttl = 15
;dns_zone_check_period = 0

$ free -h

             total       used       free     shared    buffers     cached
Mem:           15G        11G       4.1G         0B       263M        10G
-/+ buffers/cache:       1.2G        14G
Swap:          30G         0B        30G

$ cat /proc/cpuinfo

processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 58
model name      : Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz
stepping        : 9
microcode       : 0x15
cpu MHz         : 3101.000
cache size      : 8192 KB
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 4
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms
bogomips        : 6186.05
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:
processor       : 1
vendor_id       : GenuineIntel
cpu family      : 6
model           : 58
model name      : Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz
stepping        : 9
microcode       : 0x15
cpu MHz         : 3101.000
cache size      : 8192 KB
physical id     : 0
siblings        : 4
core id         : 1
cpu cores       : 4
apicid          : 2
initial apicid  : 2
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms
bogomips        : 6185.65
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:
processor       : 2
vendor_id       : GenuineIntel
cpu family      : 6
model           : 58
model name      : Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz
stepping        : 9
microcode       : 0x15
cpu MHz         : 3101.000
cache size      : 8192 KB
physical id     : 0
siblings        : 4
core id         : 2
cpu cores       : 4
apicid          : 4
initial apicid  : 4
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms
bogomips        : 6185.66
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:

1 Answer:

Answer 0 (score: 1)

FOR i IN 1..array_length(arr, 1) LOOP

Never do this. Your stored procedure just builds the result set iteratively; in effect, you are running one separate query per target device. Instead, use ANY together with ROW_NUMBER():

arr := regexp_split_to_array(_imeis, E'\\s+');
RETURN QUERY 
    SELECT * FROM (
        SELECT 
        ROW_NUMBER() OVER (PARTITION BY dt.imei ORDER BY dt.date_time_process DESC) AS rnumber
        --other fields here
        FROM gpstracking_device_tracks dt --added alias
        WHERE dt.imei = ANY(arr)
        AND dt.date_time_process >= date_trunc('hour', now()) 
        AND dt.date_time_process <= NOW()
    ) ranked --the derived table needs an alias in Postgres
    WHERE rnumber = 1;

I don't have Postgres and your database at hand to verify that this solution is correct, but I think you should be able to debug it into shape. At the very least, it is a direction to follow.

For more details, see the window functions docs and the row and array comparisons docs.
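
To make the direction concrete, here is an untested sketch of the full rewritten function, assuming the same table and columns as your original procedure (I can't check it against your schema, so treat it only as a starting point):

CREATE OR REPLACE FUNCTION gps_get_live_location (
    _imeis varchar(8)
)
RETURNS TABLE (
    imei varchar,
    device_id integer,
    date_time_process timestamp with time zone,
    latitude double precision,
    longitude double precision,
    course smallint,
    speed smallint,
    mileage integer,
    gps_signal smallint,
    gsm_signal smallint,
    alarm_status boolean,
    gsm_status boolean,
    vehicle_status boolean,
    alarm_over_speed boolean,
    other text,
    address varchar
) AS $func$
DECLARE
    arr varchar[];
BEGIN
    arr := regexp_split_to_array(_imeis, E'\\s+');
    RETURN QUERY
    SELECT t.imei, t.device_id, t.date_time_process, t.latitude, t.longitude,
           t.course, t.speed, t.mileage, t.gps_signal, t.gsm_signal,
           t.alarm_status,
           t.gps_status,          -- kept as gps_status to match your original SELECT
           t.vehicle_status, t.alarm_over_speed, t.other, t.address
    FROM (
        SELECT dt.*,
               ROW_NUMBER() OVER (PARTITION BY dt.imei
                                  ORDER BY dt.date_time_process DESC) AS rnumber
        FROM gpstracking_device_tracks dt
        WHERE dt.imei = ANY(arr)
          AND dt.date_time_process >= date_trunc('hour', now())
          AND dt.date_time_process <= now()
    ) t
    WHERE t.rnumber = 1;
END;
$func$
LANGUAGE plpgsql VOLATILE SECURITY DEFINER;

This does one pass over the rows of the current hour for all requested IMEIs and keeps only the newest row per IMEI, instead of issuing one query per device.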

To further improve responsiveness, consider fetching only location updates. Instead of fetching the whole history every 6 seconds, fetch it once at page load / app init / whatever, and from then on only poll the server for updates. The inner query would look like:

    SELECT 
        ROW_NUMBER() OVER(PARTITION BY dt.imei ORDER BY dt.date_time_process DESC) as rnumber
        --other fields here
    FROM gpstracking_device_tracks dt --added alias
    WHERE dt.imei = ANY(arr)
    AND dt.date_time_process >= @last_poll_timestamp
    AND dt.date_time_process <= NOW()
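
The @last_poll_timestamp above is just a placeholder (Postgres itself has no @variables); in practice you would bind the client's last poll time as an ordinary query parameter. An untested sketch of the full polling statement, where $1 is the whitespace-separated IMEI list and $2 is the last poll time:

-- Untested sketch: $1 = whitespace-separated IMEIs,
-- $2 = timestamp of the client's previous poll.
SELECT *
FROM (
    SELECT dt.*,
           ROW_NUMBER() OVER (PARTITION BY dt.imei
                              ORDER BY dt.date_time_process DESC) AS rnumber
    FROM gpstracking_device_tracks dt
    WHERE dt.imei = ANY(regexp_split_to_array($1, E'\\s+'))
      AND dt.date_time_process >= $2
      AND dt.date_time_process <= now()
) ranked
WHERE rnumber = 1;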