How to make multiple where clauses and only get the first one

Time: 2019-06-17 07:22:35

Tags: php mysql laravel collections eloquent

I am trying to build a where clause but failing. I want to check whether the user's created_at matches the current date, and a second clause should check whether there are entries for that user id. I am developing a fitness app in which a user can track how many kilometers he has run; instead of creating a new entry in the database table every time, new kilometers should simply be added to the existing entry. My problem is the if clause: the variable $hasUserEntries is not equal to null even though there are no entries in the database table. It is empty.

I tried using get() instead of first(). The problem is that I can't use Carbon::today() there, or maybe it's that I would need three values in the where clause, because I can't compare against the date alone (YYYY-MM-DD). In the created_at clause I checked with a hard-coded DateTime together with first(), and that works, but I probably don't need to explain why hard-coding the date is not a good solution.
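For what it's worth, the date-only comparison can be expressed without a hard-coded value; a minimal sketch, assuming the Laravel version in use has the query builder's whereDate():

$hasUserEntries = DB::table('track_dailies')
    ->where('user_id', \Auth::user()->id)                        // compare against the id column
    ->whereDate('created_at', Carbon::today()->toDateString())   // match only the YYYY-MM-DD part
    ->first();                                                   // null when nothing matches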

I searched on Stack Overflow and most of the answers I found deal with created_at. That's fine, but why does my get() end up in the else branch when, from my point of view, the database is empty (null) and this check should fire:

if($hasUserEntries==null)

The expected result is that the if statement fires, because it should be triggered whenever the database table is empty, the user id does not exist, or the date in created_at differs from the current date. When that condition is == null, I want to create a new row there. This is the method in question:

public function add_km_to_progressbar(Request $request, TrackKM $trackKM)
{
    $track_dailies = new TrackDaily();
    $track_dailies->user_id = \Auth::user()->id;

    $amount_runned_km = $request->input('amount_runned_km');
    $amount_runned_km = (int)$amount_runned_km;

    $track_dailies->amount = (int)$amount_runned_km;
    $track_dailies->type = 1;

    $check_requirements = [
        'user_id' => \Auth::user()->id,
        'created_at' => 'Carbon::today()'
    ];

    $hasUserEntries = DB::table('track_dailies')
        ->where('user_id', '=', \Auth::user())
        ->where('created_at', '>=', Carbon::today())
        ->get();

    if ($hasUserEntries == null) {
        return dd('does not work');
    } else {
        return dd('does work');
    }
}

Actual result: the else branch runs even though the database table is empty, so if ($hasUserEntries == null) never fires. It looks like the value stored in $hasUserEntries has nothing to do with what is actually in the database.
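That impression matches the query builder's return types: get() always returns a Collection, which is empty (but not null) when there are no rows, while first() returns null when nothing matches. A minimal sketch of the two checks, assuming the same query as above:

$entries = DB::table('track_dailies')
    ->where('user_id', \Auth::user()->id)          // note: the id, not the whole user model
    ->where('created_at', '>=', Carbon::today())
    ->get();                                       // empty Collection when there are no rows, never null

$entry = DB::table('track_dailies')
    ->where('user_id', \Auth::user()->id)
    ->where('created_at', '>=', Carbon::today())
    ->first();                                     // null when there is no matching row

if ($entries->isEmpty()) {                         // or: if ($entry === null)
    // no entry yet -> create a new row
}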

I hope you can help me.

3 answers:

Answer 0 (score: 1)

I think what you should do is check whether the record exists in the database before you continue ...

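One way to express that check, as a minimal sketch assuming the query builder's exists() method and the table from the question:

$hasEntryForToday = DB::table('track_dailies')
    ->where('user_id', \Auth::user()->id)
    ->whereDate('created_at', Carbon::today()->toDateString())
    ->exists();                      // true only if at least one matching row exists

if (! $hasEntryForToday) {
    // no entry yet for this user today -> create a new TrackDaily row
}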

With that, it won't throw an error!

Answer 1 (score: 0)

If you want, try the following:

if ($hasUserEntries->isEmpty())   // get() returns a Collection, so use its isEmpty() method (PHP has no is_empty() function)

Answer 2 (score: 0)

Keep in mind that it is impossible for the user_id record to be null, because a record first has to exist inside the db.
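Building on that, the create-or-add flow described in the question could look roughly like this; a minimal sketch, assuming TrackDaily is an Eloquent model over track_dailies and that one row per user per day is intended:

$track_daily = TrackDaily::where('user_id', \Auth::user()->id)
    ->whereDate('created_at', Carbon::today()->toDateString())
    ->first();                                    // null when there is no row for today

if ($track_daily === null) {
    // no entry for this user today -> start a new one
    $track_daily = new TrackDaily();
    $track_daily->user_id = \Auth::user()->id;
    $track_daily->type = 1;
    $track_daily->amount = 0;
}

// add the newly submitted kilometres to the existing (or fresh) row
$track_daily->amount += (int) $request->input('amount_runned_km');
$track_daily->save();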