MySQL table must contain 80 to 100 rows

Time: 2018-08-26 15:23:27

Tags: mysql triggers mariadb

For a school assignment I need a MySQL table that contains between 80 and 100 rows (inclusive). A cron job inserts one row into the table every 2 hours. I wanted to do this with a trigger, but apparently that is not possible, because the table is locked by the INSERT statement. This is the trigger I used:

<?php

return [

/*
|--------------------------------------------------------------------------
| Default Queue Connection Name
|--------------------------------------------------------------------------
|
| Laravel's queue API supports an assortment of back-ends via a single
| API, giving you convenient access to each back-end using the same
| syntax for every one. Here you may define a default connection.
|
*/

'default' => env('QUEUE_DRIVER', 'database'),

/*
|--------------------------------------------------------------------------
| Queue Connections
|--------------------------------------------------------------------------
|
| Here you may configure the connection information for each server that
| is used by your application. A default configuration has been added
| for each back-end shipped with Laravel. You are free to add more.
|
| Drivers: "sync", "database", "beanstalkd", "sqs", "redis", "null"
|
*/

'connections' => [

    'sync' => [
        'driver' => 'sync',
    ],

    'database' => [
        'driver' => 'database',
        'table' => 'jobs',
        'queue' => 'default',
        'retry_after' => 90,
    ],

    'beanstalkd' => [
        'driver' => 'beanstalkd',
        'host' => 'localhost',
        'queue' => 'default',
        'retry_after' => 90,
    ],

    'sqs' => [
        'driver' => 'sqs',
        'key' => env('SQS_KEY', 'your-public-key'),
        'secret' => env('SQS_SECRET', 'your-secret-key'),
        'prefix' => env('SQS_PREFIX', 'https://sqs.us-east-1.amazonaws.com/your-account-id'),
        'queue' => env('SQS_QUEUE', 'your-queue-name'),
        'region' => env('SQS_REGION', 'us-east-1'),
    ],

    'redis' => [
        'driver' => 'redis',
        'connection' => 'default',
        'queue' => 'default',
        'retry_after' => 90,
        'block_for' => null,
    ],

],

/*
|--------------------------------------------------------------------------
| Failed Queue Jobs
|--------------------------------------------------------------------------
|
| These options configure the behavior of failed queue job logging so you
| can control which database and table are used to store the jobs that
| have failed. You may change them to any database / table you wish.
|
*/

'failed' => [
    'database' => env('DB_CONNECTION', 'mysql'),
    'table' => 'failed_jobs',
],

];

The tricky part is that there are also manual inserts into the table, which is why I can't just check periodically (with a cron job or a MySQL event) whether the table has too many rows: at some point in between there could already be too many. Ideally, the oldest row would be deleted every time a new row is added.

Is there any other way I can do this?
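To illustrate why the trigger route fails: MySQL does not let a trigger on a table modify that same table, so a cleanup trigger of the kind sketched below (the table name log and its id column are taken from the answers that follow; the exact body is an assumption) can be created, but every subsequent INSERT then aborts with error 1442, "Can't update table 'log' in stored function/trigger because it is already used by statement which invoked this stored function/trigger".

DELIMITER $$

CREATE TRIGGER trim_log AFTER INSERT ON log
FOR EACH ROW
BEGIN
    -- Creation succeeds, but firing this trigger fails with error 1442:
    -- a trigger may not modify the table it is defined on.
    DELETE FROM log
    ORDER BY id ASC
    LIMIT 1;
END
$$

DELIMITER ;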

2 answers:

Answer 0: (score: 1)

You can't do this with a trigger. I would do it with a transaction invoked from the application code.

BEGIN;
INSERT INTO log ...;
SELECT id INTO @id FROM log ORDER BY id DESC LIMIT 1 OFFSET 100;
DELETE FROM log WHERE id <= @id;
COMMIT;

If the table has 100 rows or fewer, the SELECT assigns NULL to @id and the DELETE removes no rows.

If the table has more than 100 rows, the SELECT assigns to @id the id of the row that is 101st counting back from the newest. The DELETE then removes that row and all older rows, leaving exactly 100. This multi-step approach is more reliable than deleting by id arithmetic alone (for example id < MAX(id) - 100), because auto-increment ids are not guaranteed to be consecutive.

If more than one client does this at the same time, there is a race condition. That is, two concurrent sessions may both be between their SELECT and their DELETE at the same moment. This can temporarily leave more than 100 rows in the table, at least until the next time a row is inserted and the excess rows are deleted.
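If the concurrent sessions matter in practice, one possible way to serialize the trim (a sketch, not part of the original answer) is to wrap the transaction in MySQL's named advisory locks, GET_LOCK() and RELEASE_LOCK(); the lock name 'log_trim' and the column list in the INSERT are placeholders:

SELECT GET_LOCK('log_trim', 10);   -- wait up to 10 seconds for the advisory lock

START TRANSACTION;
INSERT INTO log (message) VALUES ('new entry');                   -- placeholder columns/values
SELECT id INTO @id FROM log ORDER BY id DESC LIMIT 1 OFFSET 100;  -- 101st-newest row, or NULL
DELETE FROM log WHERE id <= @id;                                  -- no-op when @id is NULL
COMMIT;

SELECT RELEASE_LOCK('log_trim');   -- allow the next session to run its trim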

Answer 1: (score: 0)

It seems that what I was looking for is simply not possible, so instead I am using a MySQL event that deletes the redundant rows once an hour.

The code looks like this:

Event check_for_redundant_rows

SET GLOBAL event_scheduler = ON;

DELIMITER $$

CREATE EVENT check_for_redundant_rows
ON SCHEDULE EVERY 1 HOUR
STARTS '2018-01-01 00:00:00'
DO
CALL delete_redundant_rows();
$$

DELIMITER ;

Procedure delete_redundant_rows()

DELIMITER $$

CREATE PROCEDURE delete_redundant_rows()
BEGIN
    DECLARE cnt INT DEFAULT 0;

    -- Count the rows currently in the table.
    SELECT COUNT(id)
    INTO cnt
    FROM log;

    -- Number of rows above the 80-row floor.
    SET cnt = cnt - 80;

    -- Only delete when the table actually holds more than 80 rows,
    -- so that LIMIT never receives a negative value.
    IF cnt > 0 THEN
        DELETE FROM log
        ORDER BY id ASC
        LIMIT cnt;
    END IF;
END
$$

DELIMITER ;
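
As a quick sanity check (assuming the log table above), the cleanup can be run by hand and the event scheduler inspected:

-- Run the cleanup manually and check the row count afterwards.
CALL delete_redundant_rows();
SELECT COUNT(*) FROM log;

-- Confirm the event is registered and the scheduler is running.
SHOW EVENTS;
SHOW VARIABLES LIKE 'event_scheduler';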