I have a case where I only need the first 10,000 rows.
What is strange is that SELECT TOP 10,000 versus just stopping the reader
at 10,000 is not consistent as to which is fastest.
It is never different by more than 10%, but is there a best approach?
I suspect the Reader gets blocks of rows at a time, and if row 10,000 falls early in a block, that read would take longer.
But what I am seeing is strange to me.
It seems to take longer to close the reader after 10,000 rows are read via a SELECT with no TOP, as if it still has to clear the remaining rows.
My numbers don't vary enough to be conclusive.
Is there a best approach to reading the first 10,000 rows?
I am reading sorted, with (NOLOCK), and only two Int32 columns, in case that makes a difference.
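
For reference, here is a minimal sketch of the "just stop at 10,000" version I am timing; the connection string and the table/column names are placeholders, not my real ones:

```csharp
using System.Data.SqlClient;

class ReadFirst10000
{
    static void Main()
    {
        // Placeholder connection string and table/column names.
        const string connectionString = "Server=.;Database=MyDb;Integrated Security=true";
        const string sql = "SELECT KeyId, Value FROM dbo.MyTable WITH (NOLOCK) ORDER BY KeyId";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                int count = 0;
                // The query itself is unbounded; I just stop reading after 10,000 rows.
                while (count < 10000 && reader.Read())
                {
                    int key = reader.GetInt32(0);
                    int value = reader.GetInt32(1);
                    count++;
                }
            } // Disposing the reader here is where the extra time seems to go.
        }
    }
}
```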
Answer 0: (score: 1)
When you request TOP 10000 rows,
the optimizer takes note of that and produces that amount as efficiently as possible (given its limitations in reasoning and information).
So you are likely getting different plans in the two cases. Normally, TOP 10000
will be more efficient, but not specifying it might give you a better-performing plan by coincidence. A random change causing a better plan is not unheard of.
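
To illustrate, this is the kind of statement I mean; the table and column names are only placeholders matching the sketch in the question, but the point is that TOP (10000) hands the optimizer an explicit row goal:

```csharp
// Same placeholder table/column names as the sketch in the question.
// TOP (10000) tells the optimizer up front that only 10,000 rows are needed,
// so the whole plan can be built around that row goal.
const string sql =
    "SELECT TOP (10000) KeyId, Value " +
    "FROM dbo.MyTable WITH (NOLOCK) ORDER BY KeyId";
```

Note that "the first 10,000 rows" is only well defined with an ORDER BY; without one, TOP can return any 10,000 rows.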
Note that there are problems associated with closing a reader early. I can't find the link right now and I don't remember the details, but I think in some circumstances closing the reader simply fetched the rest of the result set and discarded it instead of cancelling the query.
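
If you do stop early without a TOP, one thing worth trying (a sketch only, using the same placeholder names as above, and not something I have benchmarked) is cancelling the command before the reader is disposed, so that closing it does not have to pull the remaining rows across the wire:

```csharp
using System.Data.SqlClient;

// Same placeholder connection string and names as above.
using (var connection = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true"))
using (var command = new SqlCommand(
    "SELECT KeyId, Value FROM dbo.MyTable WITH (NOLOCK) ORDER BY KeyId", connection))
{
    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        int count = 0;
        while (count < 10000 && reader.Read())
        {
            // Process reader.GetInt32(0) and reader.GetInt32(1) here.
            count++;
        }

        // Ask the server to stop the query before the reader is disposed,
        // so Close/Dispose does not have to fetch and discard the remaining rows.
        command.Cancel();
    }
}
```

Whether Cancel() actually helps depends on the provider and the plan, so measure it, but it at least targets the "still has to clear the rows" behaviour you describe.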