MySQL Query Failed: Column 'candidate_ID' in field list is ambiguous?

Asked: 2016-10-26 19:12:34

Tags: mysql

I'm running into the following error, but I can't figure out what is causing it:

MySQL Query Failed: Column 'candidate_ID' in field list is ambiguous

SELECT SQL_CALC_FOUND_ROWS
    candidate.candidate_id AS candidateID,
    candidate.candidate_id AS exportID,
    candidate.is_hot AS isHot,
    candidate.date_modified AS dateModifiedSort,
    candidate.date_created AS dateCreatedSort,
    candidate_ID AS candidateID,
    candidate.first_name AS firstName,
    candidate.last_name AS lastName,
    extra_field0.value AS extra_field_value0,
    candidate.city AS city,
    candidate.desired_pay AS desiredPay,
    candidate.email1 AS email1,
    candidate.phone_cell AS phoneCell,
    DATE_FORMAT(candidate.date_modified, '%d-%m-%y') AS dateModified,
    IF(candidate_joborder_submitted.candidate_joborder_id, 1, 0) AS submitted,
    IF(attachment_id, 1, 0) AS attachmentPresent
FROM
    candidate
LEFT JOIN extra_field AS extra_field0
    ON candidate.candidate_id = extra_field0.data_item_id
    AND extra_field0.field_name = 'Job Title'
    AND extra_field0.data_item_type = 100
LEFT JOIN attachment
    ON candidate.candidate_id = attachment.data_item_id
    AND attachment.data_item_type = 100
LEFT JOIN candidate_joborder AS candidate_joborder_submitted
    ON candidate_joborder_submitted.candidate_id = candidate.candidate_id
    AND candidate_joborder_submitted.status >= 400
    AND candidate_joborder_submitted.site_id = 1
    AND candidate_joborder_submitted.status != 650
LEFT JOIN saved_list_entry
    ON saved_list_entry.data_item_type = 100
    AND saved_list_entry.data_item_id = candidate.candidate_id
    AND saved_list_entry.site_id = 1
WHERE
    candidate.site_id = 1
GROUP BY candidate.candidate_id
ORDER BY dateModifiedSort DESC
LIMIT 0, 15

Any help would be greatly appreciated.

3 Answers:

Answer 0 (Score: 1)

You have candidate_ID AS candidateID without a table name. Since candidate_ID exists in two different tables, you have to specify the table name:

candidate.candidate_ID AS candidateID

That avoids the ambiguity.
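If you want to confirm which of the joined tables actually define the column, a minimal sketch (table names taken from the query above; based on the join conditions, candidate and candidate_joborder both appear to carry candidate_id) is to list the column on each candidate table:

-- Any table that returns a row here defines candidate_id,
-- and every such table is a potential source of the ambiguity.
SHOW COLUMNS FROM candidate LIKE 'candidate_id';
SHOW COLUMNS FROM candidate_joborder LIKE 'candidate_id';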

Answer 1 (Score: 1)

You have an unqualified candidate_ID as your sixth select item.

candidate_ID AS candidateID,

should be

candidate.candidate_ID as candidateID

Since you have already defined candidateID using candidate.candidate_id, I would suggest removing "candidate_ID AS candidateID" from the query entirely, as shown in the sketch below.
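A sketch of the start of the select list with the duplicate line dropped; only the first few columns are shown, and the rest of the query stays exactly as it was:

SELECT SQL_CALC_FOUND_ROWS
    candidate.candidate_id AS candidateID,
    candidate.candidate_id AS exportID,
    candidate.is_hot AS isHot,
    candidate.date_modified AS dateModifiedSort,
    candidate.date_created AS dateCreatedSort,
    candidate.first_name AS firstName,
    ...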

Answer 2 (Score: 0)

If you look closely, the error is telling you exactly what is wrong:

MySQL Query Failed: Column 'candidate_ID' in field list is ambiguous

In the field list you need to specify the table for candidate_ID, the same way you do for the other fields.


It is ambiguous because candidate_ID exists in more than one of the tables in your query.
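As a minimal sketch of the same problem with two hypothetical tables (names invented for illustration only), any column that exists in more than one joined table must be qualified with a table name:

-- Both tables define an id column, so a bare id in the field list is ambiguous
CREATE TABLE a (id INT, name VARCHAR(50));
CREATE TABLE b (id INT, a_id INT);

-- Fails: Column 'id' in field list is ambiguous
SELECT id FROM a LEFT JOIN b ON b.a_id = a.id;

-- Works: the table prefix removes the ambiguity
SELECT a.id FROM a LEFT JOIN b ON b.a_id = a.id;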