Friday, March 30, 2012

Populate arraylist from a database

Hi

I'm pretty new to this. How do I connect to my database and put all the values from one column into an ArrayList?

Any help and an easy example would be nice.

Thanks in advance

Richard

Check out this sample from ASPNet101.com:

http://aspnet101.com/aspnet101/aspnet/codesample.aspx?code=ddlArrayList

Thanks, I'll give it a go.

Hi, I get an error:

An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections.

Here's my code:

<%@ Page Language="VB" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<%@ Import Namespace="System.Data" %>
<%@ Import Namespace="System.Data.SqlClient" %>
<html>
<head>
<meta name="GENERATOR" content="ASP Express 2.1">
<title>Untitled</title>
<script language="VB" runat="server">
Dim MyArrayList As ArrayList

Sub Page_Load(Source As Object, E As EventArgs)
    MyArrayList = New ArrayList
    If Not Page.IsPostBack Then
        Dim strConn As String = "server=YourServer;uid=UID;pwd=PWD;database=Holiday"
        Dim MySQL As String = "SELECT startDate FROM holidayRequest"
        Dim MyConn As New SqlConnection(strConn)
        Dim objDR As SqlDataReader
        Dim Cmd As New SqlCommand(MySQL, MyConn)
        MyConn.Open()
        objDR = Cmd.ExecuteReader(System.Data.CommandBehavior.CloseConnection)
        While objDR.Read()
            MyArrayList.Add(objDR("startDate"))
        End While
        objDR.Close()
        ddl1.DataSource = MyArrayList
        ddl1.DataBind()
    End If
End Sub
</script>
</head>
<body>
<form id="form1" runat="server">
<asp:DropDownList id="ddl1" runat="server" />
</form>
</body>
</html>

The database I'm connecting to is on my computer and is called Holiday.mdf, the table I'm trying to get to is holidayRequest, and from that I'm trying to get the startDate out.

Please help; I'm new to this and have probably made a stupid error.

Make sure your connection string is pointing to an available SQL instance on your computer. If you're using a default SQL instance, it's OK to refer to it using server=YourServer; if it is a named SQL instance, you can set server to something like YourServer\SQLInstanceName.
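
For reference, here is roughly what the two connection string forms look like with the Holiday database from the post above (the server and instance names are placeholders; a local SQL Server 2005 Express install, for example, typically shows up as the named instance .\SQLEXPRESS):

server=YourServer;uid=UID;pwd=PWD;database=Holiday
server=YourServer\SQLInstanceName;uid=UID;pwd=PWD;database=Holiday

If the name points at an instance that isn't there or isn't running, you typically get exactly the "error establishing a connection" message quoted above.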

populate access database from a TableCollection

Hi,

I need to ask how I can populate an Access database's tables directly from the tables in a TableCollection that already exists in my application.

Thanks for any help,

Aya.

You can use either DTS or a linked server in this case to capture the information.

Thanks so much for the reply, but can I get some explanation about how to use DTS?

Thanks in advance,

Aya.


Hi,

If you choose DTS, you'll need to make a DTS package; here is a related article:

http://www.devarticles.com/c/a/ASP/Creating-DTS-Packages-With-SQL-Server-2000/


The TableCollection in my case is the collection of tables contained in the DataSet.

Can I use DTS to copy it directly to an Access database?

Thanks,

Aya.


Populate a Table with Stored Proc.

I am looking to populate a Schedule table with information from two other tables. I am able to populate it row by row, but I have created tables that should provide all necessary information for me to be able to automatically populate a "generic" schedule for a few weeks or more at a time.

The schedule table contains:
(pk) schedule_id, start_datetime, end_datetime, shift_employee, shift_position

A DaysOff table contains:
(pk) emp_id, dayoff_1, dayoff_2 <-- the days off are entered in day-of-week (1-7) form

A CalendarDays table contains:
(pk) date, calendar_dow <-- dow contains the day of week number (as above) for each day until 2010.

My main question is how to put all of this information together and have SQL populate the rows with data based on days off. Any suggestions?

Nate (nate.borland@westecnow.com) writes:

Quote:

Originally Posted by

I am looking to populate a Schedule table with information from two
other tables. I am able to populate it row by row, but I have created
tables that should provide all necessary information for me to be able
to automatically populate a "generic" schedule for a few weeks or more
at a time.
>
The schedule table contains:
(pk) schedule_id, start_datetime, end_datetime, shift_employee,
shift_position
>
>
A DaysOff table contains:
(pk) emp_id, dayoff_1, dayoff_2 <-- the days off are entered in day
of
week (1-7) form
>
>
A CalendarDays table contains:
(pk) date, calendar_dow <-- dow contains the day of week number (as
above) for each day until 2010.
>
>
My main question is how to put all of this information together and
have SQL populate the rows with data based on days off. Any
suggestions?


Just as a reminder, in case you are getting old and don't remember
what you did yesterday, you posted this question yesterday as well,
and I replied by asking some questions, and Plamen Ratchev suggested
some queries. I suggest that you go to Google news and find the old
thread and review our replies.

--
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx

Erland Sommarskog wrote:

Quote:

Originally Posted by

Nate (nate.borland@.westecnow.com) writes:

Quote:

Originally Posted by

>I am looking to populate a Schedule table with information from two
>other tables. I am able to populate it row by row, but I have created
>tables that should provide all necessary information for me to be able
>to automatically populate a "generic" schedule for a few weeks or more
>at a time.
>>
>The schedule table contains:
>(pk) schedule_id, start_datetime, end_datetime, shift_employee,
>shift_position
>>
>>
>A DaysOff table contains:
>(pk) emp_id, dayoff_1, dayoff_2 <-- the days off are entered in day
>of
>week (1-7) form
>>
>>
>A CalendarDays table contains:
>(pk) date, calendar_dow <-- dow contains the day of week number (as
>above) for each day until 2010.
>>
>>
>My main question is how to put all of this information together and
>have SQL populate the rows with data based on days off. Any
>suggestions?


>
Just as a reminder, in case you are getting old and don't remember
what you did yesterday, you posted this question yesterday as well,
and I replied by asking some questions, and Plamen Ratchev suggested
some queries. I suggest that you go Google news and find the old
thread and review our replies.
>
>


Hmm. I'm getting old and I don't do daft things like that!
Now, what was I doing before I read this?...
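
For readers landing here without that earlier thread, a minimal sketch of the general shape of such a query: cross join the calendar against the employees and filter out each employee's days off. Everything not named in the post above is an illustrative assumption: the date range, the 09:00-17:00 shift times, the 'generic' position value, and schedule_id being an identity column.

INSERT INTO Schedule (start_datetime, end_datetime, shift_employee, shift_position)
SELECT
    DATEADD(hour, 9, CONVERT(datetime, cd.[date])),    -- assumed 09:00 shift start
    DATEADD(hour, 17, CONVERT(datetime, cd.[date])),   -- assumed 17:00 shift end
    d.emp_id,
    'generic'                                           -- assumed placeholder position
FROM CalendarDays AS cd
CROSS JOIN DaysOff AS d
WHERE cd.[date] >= '20070101' AND cd.[date] < '20070122'   -- the few weeks being generated
  AND cd.calendar_dow NOT IN (d.dayoff_1, d.dayoff_2)      -- skip that employee's days off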

Populate a table with a guid

How do I populate a table with existing data with a new guid field? I tried creating the field, entering NewID() as the default, and setting isrowguid to Yes, but it just gives me errors that the field cannot be null because it already is null. Do I have to create a blank table with the new id, then dump the data into it, then rename the tables? I have a lot of relationships that will break if I have to do that. Thanks for your help.

Stop using Enterprise Manager for this stuff.
CREATE TABLE foo
(
a INT
)
GO
-- a couple of existing rows to simulate the already-populated table
INSERT foo SELECT 1
INSERT foo SELECT 2
-- add the new column as nullable, then give it a default for future inserts
ALTER TABLE foo ADD guid UNIQUEIDENTIFIER
GO
ALTER TABLE foo ADD CONSTRAINT guidDefault DEFAULT(NEWID()) FOR guid
GO
-- backfill the rows that already existed
UPDATE foo SET guid = NEWID()
GO
SELECT * FROM foo
GO
-- new rows pick up the default automatically
INSERT foo(a) SELECT 3
GO
SELECT * FROM foo
GO
DROP TABLE foo
GO
"eagle" <eagle@.yahoo.com> wrote in message
news:uR4rkB3pFHA.156@.TK2MSFTNGP11.phx.gbl...
> How do I populate a table with existing data with a new guid field? I
> tried creating the field, entering NewID() as the default, and isrowguid
> Yes but it just gives me errors that the field cannot be null because it
> already is null. Do i have to create a blank table with the new id, then
> dump the data into it, then rename the tables? I have a lot of
> relationships taht will break if I have to do that. Thanks for your help.
>

Never mind, updating the table with SET newfield = NEWID() did the trick.
"eagle" <eagle@.yahoo.com> wrote in message
news:uR4rkB3pFHA.156@.TK2MSFTNGP11.phx.gbl...
> How do I populate a table with existing data with a new guid field? I
> tried creating the field, entering NewID() as the default, and isrowguid
> Yes but it just gives me errors that the field cannot be null because it
> already is null. Do i have to create a blank table with the new id, then
> dump the data into it, then rename the tables? I have a lot of
> relationships taht will break if I have to do that. Thanks for your help.
>

Hi,
You can set the column to allow nulls and then do a simple update statement
to update the column
You don't need to set isrowguid if you don't need it.
--
kind regards
Greg O
Need to document your databases? Use the first and still the best, AGS SQL
Scribe
http://www.ag-software.com
"eagle" <eagle@.yahoo.com> wrote in message
news:uR4rkB3pFHA.156@.TK2MSFTNGP11.phx.gbl...
> How do I populate a table with existing data with a new guid field? I
> tried creating the field, entering NewID() as the default, and isrowguid
> Yes but it just gives me errors that the field cannot be null because it
> already is null. Do i have to create a blank table with the new id, then
> dump the data into it, then rename the tables? I have a lot of
> relationships taht will break if I have to do that. Thanks for your help.
>
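
One step the script above stops short of, as a minimal sketch and only if the column really should become mandatory: once every existing row has been backfilled (and before the demo's DROP TABLE, of course), the column can be tightened and, if desired, marked as the table's rowguid column.

ALTER TABLE foo ALTER COLUMN guid UNIQUEIDENTIFIER NOT NULL
GO
ALTER TABLE foo ALTER COLUMN guid ADD ROWGUIDCOL
GO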

Populate field with value from last row

Does anyone know how
-- during an insert --
to automatically populate a field with the same data as the field in the
last inserted row?
Thanks.

rmg66 wrote:
> Does anyone know how
> -- during an insert --
> to automatically populate a field with the same data as the field in
> the last inserted row?
>
First, you need a way to identify the last inserted row. Is there an
InsertionDate column?
Microsoft MVP -- ASP/ASP.NET
Please reply to the newsgroup. The email account listed in my From
header is my spam trap, so I don't check it very often. You will get a
quicker response by posting to the newsgroup.

"rmg66" <rgwathney__xXx__primepro.com> wrote in message
news:%23ozT1swBGHA.208@.tk2msftngp13.phx.gbl...
> Does anyone know how
> -- during an insert --
> to automatically populate a field with the same data as the field in the
> last inserted row?
> Thanks.
SQL Server has no concept of a "last inserted row".
If you have a column on the table that is populated with the date and time
the row was inserted, you could query that row. But it's not impossible that
2 or more rows can be inserted at exactly the same time.
Also, in your case, what happens if the value later changes in that last
inserted row?
If you tell us what the logic is behind what you're trying to do, someone
here could perhaps suggest a better solution.

> Does anyone know how
> -- during an insert --
> to automatically populate a field with the same data as the field in the
> last inserted row?
Since a table is an unordered set of rows, how do you define "last"?
And why not keep this "last" value in another table?

Does anyone know why
-- during an insert --
you would want to do this.
I mean, if you are doing a SELECT INTO or an INSERT ... SELECT then you can
choose the data you want; if you are doing simple insert statements, then you
already know what the data is.
Post some DDL and maybe we can help.
"rmg66" wrote:

> Does anyone know how
> -- during an insert --
> to automatically populate a field with the same data as the field in the
> last inserted row?
> Thanks.
>
>

You could use an insert trigger on the table.
If a value is specified, add it as an extended property to the table.
If the value is not specified, look up the extended property on the table.
John
"Bob Barrows [MVP]" <reb01501@.NOyahoo.SPAMcom> wrote in message
news:OOr62uwBGHA.412@.TK2MSFTNGP15.phx.gbl...
> rmg66 wrote:
> First, you need a way to identify the last inserted row. is there an
> InsertionDate column?
> --
> Microsoft MVP -- ASP/ASP.NET
> Please reply to the newsgroup. The email account listed in my From
> header is my spam trap, so I don't check it very often. You will get a
> quicker response by posting to the newsgroup.
>

John Kendrick wrote:
> You could use a insert trigger on the table.
>
Why are you replying to me?
I'm not the OP ;-)

> If value is specified add it as an extended property to the table.
> If the value is not specified lookup the extended property on the
> table.
Which gets us back to:
Microsoft MVP -- ASP/ASP.NET
Please reply to the newsgroup. The email account listed in my From
header is my spam trap, so I don't check it very often. You will get a
quicker response by posting to the newsgroup.

> If value is specified add it as an extended property to the table.
> If the value is not specified lookup the extended property on the table.
What does an extended property have to do with data in the table?

You could create an extended property for each column that should hold the last
inserted record's value.
When the insert trigger fires, it will look up and use that last column value.
The extended property is an alternative way to get the last record's value
without querying the table. Otherwise you would need some way to find the
latest record, like MAX(datetime) as Bob mentioned.
John
"Aaron Bertrand [SQL Server MVP]" <ten.xoc@.dnartreb.noraa> wrote in message
news:%23S$7lDxBGHA.3840@.TK2MSFTNGP15.phx.gbl...
> What does an extended property have to do with data in the table?
>

>> You could create an extended property for each column that should hold the last inserted value.
Can you please elaborate on this approach a bit? Perhaps an example would be
helpful.
Anith
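
Since no example made it into the thread, here is a minimal sketch of the extended-property mechanics John describes, using SQL Server 2005 syntax; the property name, the values, and the dbo.Widget table are illustrative only. An insert trigger could call the update step after each insert and read the property back when no value is supplied.

-- record the "last" value as an extended property on the table
EXEC sys.sp_addextendedproperty
     @name = N'LastValue', @value = N'abc',
     @level0type = N'SCHEMA', @level0name = N'dbo',
     @level1type = N'TABLE',  @level1name = N'Widget'
GO
-- read it back without querying the table's data
SELECT value
FROM sys.fn_listextendedproperty(N'LastValue', N'SCHEMA', N'dbo',
                                 N'TABLE', N'Widget', NULL, NULL)
GO
-- replace it after a subsequent insert
EXEC sys.sp_updateextendedproperty
     @name = N'LastValue', @value = N'def',
     @level0type = N'SCHEMA', @level0name = N'dbo',
     @level1type = N'TABLE',  @level1name = N'Widget'
GO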

pop up requirement

How do we click on anything within the reports, pass the parameters to another report, and also make the drill-through report open in a new page?

See this page for information on report interactivity, including jumping to other reports or URLs: http://msdn2.microsoft.com/en-us/library/ms345258(sql.90).aspx

If you need to have the report open in a new window, then you will need to use the Jump to URL method with JavaScript, like the following:

="javascript:void(window.open('" & Globals!ReportServerUrl & "?/ReportName&rs:Command=Render&rs:Format=Html4.0&Parameter1=" & Fields!Field1.Value & "','_blank'))"

Poorly written assembly and CLR

As the CLR is hosted in the SQL OS, how will the SQL OS handle a situation where I have a badly written .NET assembly that is chewing up a lot of memory and other system resources on the server?

1. Won’t it directly affect the SQL OS and the overall performance of the server?

2. The bad assembly is also hosted in the same app domain as other assemblies; as all assemblies owned by the same user are hosted in the same app domain, will this not affect other .NET assemblies running in the same AppDomain?

Thanks!

1. Whether it will affect the overall performance of the server depends on whether or not the server has enough memory and resources to handle however much it is chewing up :)

2. Yes, other assemblies in the same AppDomain could be affected; but you can control this to some degree by splitting things up into different AppDomains by using different owners...

But let's get down to the more important question: if this assembly is so badly written, and using so much memory, why are you hosting it in-process? Wouldn't this be a better candidate for the application tier? The situation you describe is really no different than a badly written or improperly implemented T-SQL stored procedure. If you have something eating up all of the RAM and processor time on your server, perhaps you need to take a hard look at it from an architectural point of view.

--
Adam Machanic
Pro SQL Server 2005, available now
http://www.apress.com/book/bookDisplay.html?bID=457
--

<Yedu@discussions.microsoft.com> wrote in message news:79d20459-8f1e-435f-97d9-9c8423852d11@discussions.microsoft.com...
> As the CLR is hosted in the SQL OS. How will the SQL OS handle a situation wherein I have a badly written .net assembly which is chewing up lot of memory and other system resources of the server.
> 1. Won't it directly affect the SQL OS and the overall performance of the server?
> 2. The bad assembly is also hosted in the same app domain as other assemblies; as all assemblies owned by same user are hosted in the same app domain, will this approach not affect other .net assemblies running in the same Appdomain?
> Thanks!

Adam,
On point 1, is it true that SQL OS punishes threads/assemblies that are performing poorly? How does this happen? On what criteria does SQL OS decide this?

On Point 2 I agree with you.

Thanks

Most of the memory for the CLR comes from outside the SQL Server buffer pool. If the system runs out of memory, your CLR routine would get an OOM exception, SQL Server would abort the thread, and consequently the memory would be freed.

For CPU, yes, SQL Server punishes threads that do not yield the scheduler in a reasonable amount of time. It forces them to yield and puts them at the end of the scheduler's queue.

Thanks,
-Vineet.
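
To make Adam's earlier point about isolation concrete, a minimal sketch; the user names, file paths, and permission set are illustrative assumptions. Assemblies created under different owners load into separate AppDomains, so a misbehaving assembly is less likely to take the others down with it.

CREATE USER RiskyOwner WITHOUT LOGIN
CREATE USER StableOwner WITHOUT LOGIN
GO
CREATE ASSEMBLY RiskyAssembly
    AUTHORIZATION RiskyOwner
    FROM 'C:\clr\RiskyAssembly.dll'
    WITH PERMISSION_SET = SAFE
GO
CREATE ASSEMBLY StableAssembly
    AUTHORIZATION StableOwner
    FROM 'C:\clr\StableAssembly.dll'
    WITH PERMISSION_SET = SAFE
GO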

Poorly Performing Unicode Query

The main performance issue is the "N" prefix in front of the strings in the WHERE clause. I get a 20x-45x performance boost by removing it, but unfortunately my reporting tool doesn't give me that option.

Is there anything I can do on the DB end to solve this? The DB is not Unicode.

tia,

paul

SELECT
    MAX(Dim9s.FunctionDescription) [Dim9s_FunctionDescription],
    MAX(Dim9s.BU_Description) [Dim9s_BU_Description],
    SUM(convert(decimal(18,2), ixvTransAx.GLS_AMOUNTMST)) [GLS_AMOUNTMST]
FROM ixvLedgerBalanceDimTransAx AS ixvTransAx
JOIN ixvDim9s AS Dim9s ON (ixvTransAx.Dim9Id = Dim9s.Id)
LEFT OUTER JOIN ixvAccounts AS Accounts ON (ixvTransAx.AccountId = Accounts.Id)
JOIN ixvCompany AS CompanyId ON (ixvTransAx.CompanyId = CompanyId.Id)
WHERE 1=1
  AND ((1=1)) AND ((Dim9s.Client = 'M99') OR Dim9s.Client IS NULL)
  AND ((1=1)) AND ((Accounts.Client = 'M99') OR Accounts.Client IS NULL)
  AND ((Accounts.Code BETWEEN N' 5000' AND N' 6796') AND (Accounts.ACCOUNTPLTYPE != N'^AC_Total^'))
  AND ((Dim9s.BU_Description = N'PI/Dwights') AND (Dim9s.OrganizationalViewDescription = N'2006 Current'))
  AND ((1=1))
  AND (CompanyId.Code = N'56')
  AND ((ixvTransAx.TransDate >= '2006-3-1') AND (ixvTransAx.TransDate < '2006-4-1'))
  AND NOT (ixvTransAx.GLS_AMOUNTMST IS NULL)
GROUP BY Dim9s.FunctionDescription, Dim9s.BU_Description

Hi,

I recommend that you post your question on the Transact-SQL forum. The Documentation forum is usually not the best spot for code-specific questions.

Regards,

Gail
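
For what it's worth, the usual explanation (not spelled out in the thread) is data-type precedence: an nvarchar literal outranks a varchar column, so the column side gets implicitly converted. A minimal repro sketch with illustrative table and index names:

CREATE TABLE dbo.AccountsDemo (Code varchar(10) NOT NULL)
CREATE INDEX IX_AccountsDemo_Code ON dbo.AccountsDemo (Code)
GO
-- varchar literal: the index on Code can be used for a seek
SELECT Code FROM dbo.AccountsDemo WHERE Code = ' 5000'
-- nvarchar literal: Code is converted to nvarchar, which (especially with a
-- SQL collation) tends to force a scan instead of a seek
SELECT Code FROM dbo.AccountsDemo WHERE Code = N' 5000'
GO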

Poor SQL Server Performance on Windows 2003 Server

I'm having a problem with SQL Server 2000 SP4 running on Windows 2003 Server
SP1. The performance is just awful.
I'm running a benchmark that creates a table and inserts 10000 records. This
is an ODBC application written in C++.
The server is a Quad 2.8G XEON with 2G RAM. On this server it takes 46
seconds to run the application that inserts the records.
On my laptop which is a 2.0G Pentium M with 1G RAM the same application
takes 7 seconds. My laptop has WinXP rather than Win2003 Server.
Now if you're not a very astute reader of benchmark results the bottom line
is that it takes 6 times longer to run this test on a high spec server
machine than on my laptop.
This is what I've done so far to try and fix this:
1. Try various configurations of processors and memory in the SQL
server configuration.
2. Reinstall SQL server.
3. Install Win2003 SP1.
4. Investigate hardware conflicts on the machine.
5. Benchmark disk performance. It's about twice as fast as the disk in
my laptop for both sequential and burst access.
So does anyone have any idea why the performance on this high spec server
would be so bad?
Thanks,
Colin
A likely cause is that the server does not have disk write caching on the
SQL Server log but your laptop does. Assuming each insert is an individual
transaction, 10000 log I/Os are required. At a typical rate of 200 physical
disk I/Os per second, this comes to about 50 seconds.
A decent disk controller with write caching (and importantly battery backup)
ought to address your performance issue.
Hope this helps.
Dan Guzman
SQL Server MVP
"Colin Blair" <col_blair@.nospam.hotmail.com> wrote in message
news:eDIS4Ye1FHA.2212@.TK2MSFTNGP15.phx.gbl...
> I'm having a problem with SQL Server 2000 SP4 running on Windows 2003
> Server SP1. The performance is just awful.
>
> I'm running a benchmark that creates a table and inserts 10000 records.
> This is an ODBC application written in C++.
>
> The server is a Quad 2.8G XEON with 2G RAM. On this server it takes 46
> seconds to run the application that inserts the records.
>
> On my laptop which is a 2.0G Pentium M with 1G RAM the same application
> takes 7 seconds. My laptop has WinXP rather than Win2003 Server.
>
> Now if you're not a very astute reader of benchmark results the bottom
> line is that it takes 6 times longer to run this test on a high spec
> server machine than on my laptop.
>
> This is what I've done so far to try and fix this:
>
> 1. Try various configurations of processors and memory in the SQL
> server configuration.
> 2. Reinstall SQL server.
> 3. Install Win2003 SP1.
> 4. Investigate hardware conflicts on the machine.
> 5. Benchmark disk performance. It's about twice as fast as the disk
> in my laptop for both sequential and burst access.
>
> So does anyone have any idea why the performance on this high spec server
> would be so bad?
>
> Thanks,
> Colin
>
Thanks for your reply, Dan.
I checked the disk write cache option in the drive policies in the Win2003
device manager and it was enabled.
I also checked the SCSI card setup and that was fine too.
I should have also mentioned that the number of rows per transaction is
10000. So there is basically only 1 transaction.
Cheers,
Colin.
"Dan Guzman" <guzmanda@.nospam-online.sbcglobal.net> wrote in message
news:uL$WGhe1FHA.556@.TK2MSFTNGP12.phx.gbl...
>A likely cause is that the server does not have disk write caching on the
>SQL Server log but your laptop does. Assuming each insert is an individual
>transaction, 10000 log I/Os are required. At a typical rate of 200
>physical disk I/Os per second, this comes to about 50 seconds.
> A decent disk controller with write caching (and importantly battery
> backup) ought to address your performance issue.
> --
> Hope this helps.
> Dan Guzman
> SQL Server MVP
> "Colin Blair" <col_blair@.nospam.hotmail.com> wrote in message
> news:eDIS4Ye1FHA.2212@.TK2MSFTNGP15.phx.gbl...
>
What kind of drive is this in the Win2003 Server? Is it a drive array or
just a drive? Where are the log files located? A typical drive in a laptop
has more cache than one you normally find in a server. It can be several
times more in fact and this can make a huge difference when you have lots of
writes such as this. You also want the log drive on a physically separate
(not just logically separate) drive from the data file. Even though the cache may be
enabled it may not be 100% write back. Normally they come as a 50/50 or
60/40 split between read and write cache. You can get a big performance
boost on writes by changing it to 100% write back.
Andrew J. Kelly SQL MVP
"Colin Blair" <col_blair@.nospam.hotmail.com> wrote in message
news:OauuqrD2FHA.2072@.TK2MSFTNGP14.phx.gbl...
> Thanks for your reply Dan.
> I checked the disk write cache option in the drive policies in the Win2003
> device manager and it was enabled.
> I also checked the SCSI card setup and that was fine too.
> I should have also mentioned that the number of rows per transaction is
> 10000. So there is basically only 1 transaction.
> Cheers,
> Colin.
> "Dan Guzman" <guzmanda@.nospam-online.sbcglobal.net> wrote in message
> news:uL$WGhe1FHA.556@.TK2MSFTNGP12.phx.gbl...
>
It seems a bit suspect that your throughput is as if each insert was in an
individual transaction. The log i/o during the COMMIT is synchronous so
your app will wait for completion. How are you doing the inserts?
Hope this helps.
Dan Guzman
SQL Server MVP
"Colin Blair" <col_blair@.nospam.hotmail.com> wrote in message
news:OauuqrD2FHA.2072@.TK2MSFTNGP14.phx.gbl...
> Thanks for your reply Dan.
> I checked the disk write cache option in the drive policies in the Win2003
> device manager and it was enabled.
> I also checked the SCSI card setup and that was fine too.
> I should have also mentioned that the number of rows per transaction is
> 10000. So there is basically only 1 transaction.
> Cheers,
> Colin.
> "Dan Guzman" <guzmanda@.nospam-online.sbcglobal.net> wrote in message
> news:uL$WGhe1FHA.556@.TK2MSFTNGP12.phx.gbl...
>

Poor SQL Server Performance

Hello,
I have a big problem with the performance of SQL Server 2000 on an IBM
eSeries 335, with one Intel Xeon 2,4 GHz CPU and 1 GB RAM. As OS I tried:
Windows Server 2003
Windows Server 2000
Windows 2000 Professional
Inserting 6400 lines takes around 28 seconds. It doesn't matter which OS
SQL Server 2000 is installed on. My desktop system with
Windows XP Professional, Windows 2000 Professional and SQL Server 2000
needs just 8 seconds for the same data.
I suppose SQL Server 2000 may have a problem with the hyperthreading
of the Xeon CPU. I have configured SQL Server 2000 to use only one
CPU, but it has no effect on the performance of SQL Server 2000.
Is there any possibility to disable the hyperthreading within the OS?
Hope somebody can help.
Cu
Thomas M.

There are a lot of factors in how long it takes to do something like an
insert. What is the client and how are you actually doing the inserts?
What is the DDL of the table like and a typical row look like? Do you have
your log file on a separate RAID 1 from anything else?
--
Andrew J. Kelly
SQL Server MVP
"Thomas Meinke" <thomas.meinke@.arcord.de> wrote in message
news:1hhs9ollnxc5x$.dzb5as2qlncf.dlg@.40tude.net...
> Hello,
> I have a big problem with the performance of SQL Server 2000 on an IBM
> eSeries 335, with one Intel Xeon 2,4 GHz CPU and 1 GB RAM. As OS I tried:
> Windows Server 2003
> Windows Server 2000
> Windows 2000 Professional
> To insert of 6400 lines needs around 28 seconds. It doesn`t matter on
> witch OS the SQL Server 2000 is installed. My desktop system with
> Windows XP Professional, Windows 2000 Professional and SQL Server 2000
> needs for the same datas just 8 seconds.
> I supose the SQL Server 2000 may have a problem with the hyperthreading
> of the Xeon CPU. I have configured the SQL Server 2000 to use only one
> CPU but it has no effects to the performance of the SQL Server 2000.
> Is there any possibility to disable the hyperthreading within the OS?
> Hope somebody an help.
> Cu
> Thomas M.

On Fri, 31 Oct 2003 07:51:12 -0500, Andrew J. Kelly wrote:
> There are a lot of factors in how long it takes to do something like an
> insert. What is the client and how are you actually doing the inserts?
> What is the DDL of the table like and a typical row look like? Do you have
> your log file on a separate RAID 1 from anything else?
Hello,
the client is a .NET application that uses the SQL Server ADO.NET provider. The
inserts are made by calling a stored procedure.
Here is the SQL create syntax for the stored procedure:
CREATE PROCEDURE sp_teilenummer_insert
    @tln_pk_tln_vchar VARCHAR(10),
    @tln_tind_vchar VARCHAR(2),
    @tln_bng_vchar VARCHAR(40),
    @wgp_fk_wgpid_vchar VARCHAR(10),
    @tln_tart_vchar VARCHAR(20),
    @tln_tstat_vchar VARCHAR(2)
AS
INSERT INTO tbl_liste_teilenummer(
    tln_pk_tln_vchar,
    tln_tind_vchar,
    tln_bng_vchar,
    wgp_fk_wgpid_vchar,
    tln_tart_vchar,
    tln_tstat_vchar
)
VALUES (
    @tln_pk_tln_vchar,
    @tln_tind_vchar,
    @tln_bng_vchar,
    @wgp_fk_wgpid_vchar,
    @tln_tart_vchar,
    @tln_tstat_vchar
)
The amazing thing is that the same software configuration on my development
machine is so much faster than on the server machine, which has much better
hardware. My development machine is an Intel Pentium III with 1 GHz and 512
MB RAM. It is the same application with the same data, but it is painfully slow on
the server machine.
Cu
Thomas M.

Isn't it simply because you add network overhead? If you insert row by row
you add 6400 network roundtrips to what you might have tested locally on
your XP machine.
--
regards,
Mario
http://www.sqlinternals.com
"Thomas Meinke" <thomas.meinke@.arcord.de> wrote in message
news:1107459esxrhn.f67d4a2opmrw.dlg@.40tude.net...
> On Fri, 31 Oct 2003 07:51:12 -0500, Andrew J. Kelly wrote:
> > There are a lot of factors in how long it takes to do something like an
> > insert. What is the client and how are you actually doing the inserts?
> > What is the DDL of the table like and a typical row look like? Do you
have
> > your log file on a separate RAID 1 from anything else?
> Hello,
> the client is a .Net-Application, that uses the sql ADO.NET drivers. The
> inserts are made by calling a stored procedure.
> Here is the sql create syntax for the stored procedure:
> CREATE PROCEDURE sp_teilenummer_insert
> @.tln_pk_tln_vchar VARCHAR(10),
> @.tln_tind_vchar VARCHAR(2),
> @.tln_bng_vchar VARCHAR(40),
> @.wgp_fk_wgpid_vchar VARCHAR(10),
> @.tln_tart_vchar VARCHAR(20),
> @.tln_tstat_vchar VARCHAR(2)
> AS
> INSERT INTO tbl_liste_teilenummer(
> tln_pk_tln_vchar,
> tln_tind_vchar,
> tln_bng_vchar,
> wgp_fk_wgpid_vchar,
> tln_tart_vchar,
> tln_tstat_vchar
> )
> VALUES (@.tln_pk_tln_vchar,
> @.tln_tind_vchar,
> @.tln_bng_vchar,
> @.wgp_fk_wgpid_vchar,
> @.tln_tart_vchar,
> @.tln_tstat_vchar
> )
> The amazing thing is that the same software configuration on my
development
> machine is so much faster than the server machine with much better
> hardware. My development machine is an Intel Pentium III with 1 GHz and
512
> MB RAM. It is the same application with the same datas but its poor slow
on
> the server machine.
> Cu
> Thomas M.

Are you comparing like with like? I.e., it sounds like in your development
case your client and the db are on the same machine, whereas for production
the client and the db are on different machines?
--
Niall Litchfield
Oracle DBA
Audit Commission Uk
"Thomas Meinke" <thomas.meinke@.arcord.de> wrote in message
news:1107459esxrhn.f67d4a2opmrw.dlg@.40tude.net...
> On Fri, 31 Oct 2003 07:51:12 -0500, Andrew J. Kelly wrote:
> > There are a lot of factors in how long it takes to do something like an
> > insert. What is the client and how are you actually doing the inserts?
> > What is the DDL of the table like and a typical row look like? Do you
have
> > your log file on a separate RAID 1 from anything else?
> Hello,
> the client is a .Net-Application, that uses the sql ADO.NET drivers. The
> inserts are made by calling a stored procedure.
> Here is the sql create syntax for the stored procedure:
> CREATE PROCEDURE sp_teilenummer_insert
> @.tln_pk_tln_vchar VARCHAR(10),
> @.tln_tind_vchar VARCHAR(2),
> @.tln_bng_vchar VARCHAR(40),
> @.wgp_fk_wgpid_vchar VARCHAR(10),
> @.tln_tart_vchar VARCHAR(20),
> @.tln_tstat_vchar VARCHAR(2)
> AS
> INSERT INTO tbl_liste_teilenummer(
> tln_pk_tln_vchar,
> tln_tind_vchar,
> tln_bng_vchar,
> wgp_fk_wgpid_vchar,
> tln_tart_vchar,
> tln_tstat_vchar
> )
> VALUES (@.tln_pk_tln_vchar,
> @.tln_tind_vchar,
> @.tln_bng_vchar,
> @.wgp_fk_wgpid_vchar,
> @.tln_tart_vchar,
> @.tln_tstat_vchar
> )
> The amazing thing is that the same software configuration on my
development
> machine is so much faster than the server machine with much better
> hardware. My development machine is an Intel Pentium III with 1 GHz and
512
> MB RAM. It is the same application with the same datas but its poor slow
on
> the server machine.
> Cu
> Thomas M.

On Fri, 31 Oct 2003 14:16:04 +0100, Mario wrote:
> Isn't it simply because you add network overhead? If you insert row by row
> you add 6400 network roundtrips to what you might have tested locally on
> your XP machine.
Hello,
I don't think so, because I have installed the .NET application on the
server to test it, and the application needs the same time to insert the
data as it does over the network.
I tried to insert the data into another SQL Server 2000 instance on another
developer machine over the network, and it needs the same time as it does
locally on my machine.
Cu
Thomas M.

6,400 inserts in 28 sec, or about 228 inserts/sec, sounds like
the log disk being the bottleneck to me.
For each insert, SQL Server is waiting for the previous insert's
log write to complete.
In Computer Management -> Disk Management, right-click on
the physical disk (disk 0, etc.), go to the Policies tab, and
try various settings of write cache enabled, advanced performance, etc.
You can do this while your insert test is running; be sure
to monitor disk writes/sec.
>--Original Message--
> Hello,
> I have a big problem with the performance of SQL Server 2000 on an IBM
> eSeries 335, with one Intel Xeon 2,4 GHz CPU and 1 GB RAM. As OS I tried:
> Windows Server 2003
> Windows Server 2000
> Windows 2000 Professional
> To insert of 6400 lines needs around 28 seconds. It doesn`t matter on
> witch OS the SQL Server 2000 is installed. My desktop system with
> Windows XP Professional, Windows 2000 Professional and SQL Server 2000
> needs for the same datas just 8 seconds.
> I supose the SQL Server 2000 may have a problem with the hyperthreading
> of the Xeon CPU. I have configured the SQL Server 2000 to use only one
> CPU but it has no effects to the performance of the SQL Server 2000.
> Is there any possibility to disable the hyperthreading within the OS?
> Hope somebody an help.
> Cu
> Thomas M.
>.

If you are the only one on the test machine, the following might show where
you 'lose' time:
If your test program is logged on: select cpu from sysprocesses.
Do a dbcc sqlperf(waitstats, clear).
Do your test run.
Now do a dbcc sqlperf(waitstats).
Check sysprocesses.cpu for the spid that ran the test run.
The overall resource consumption (cpu + wait + sigwait) should be about the
elapsed time of your test run.
If you do the same on the 'fast' host, you should see a difference
somewhere: either in cpu consumption or in one of the waitstats.
(A SQL Profiler run might be easier to do, but it doesn't show wait
statistics, which can reveal your current problem.)
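
Putting those steps into runnable form, a minimal sketch; spid 53 stands in for whatever session id the test application actually uses:

SELECT cpu FROM master..sysprocesses WHERE spid = 53   -- note the CPU counter before the run
DBCC SQLPERF(waitstats, clear)                         -- reset the wait statistics
-- ... run the insert test from the application here ...
DBCC SQLPERF(waitstats)                                -- report the accumulated waits
SELECT cpu FROM master..sysprocesses WHERE spid = 53   -- CPU used by the test session
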
--
regards,
Mario
http://www.sqlinternals.com
"Thomas Meinke" <thomas.meinke@.arcord.de> wrote in message
news:1uzrvpac6ff4d$.1un4vediy6zql$.dlg@.40tude.net...
> On Fri, 31 Oct 2003 14:16:04 +0100, Mario wrote:
> > Isn't it simply because you add network overhead? If you insert row by
row
> > you add 6400 network roundtrips to what you might have tested locally on
> > your XP machine.
> Hello,
> I don`t think so, because I have installed the .NET Application on the
> server to test it. And the application needs the same time to insert the
> data as over the network.
> I tried to insert the datas into another SQL Server 2000 on another
> developer machine over network and it needs the same time as locally on
> my machine.
> Cu
> Thomas M.|||"joe chang" <anonymous@.discussions.microsoft.com> wrote in message
news:0b1801c39fb9$ce6c8350$a401280a@.phx.gbl...
> 6,400 inserts in 28 sec for 228 inserts/sec sounds like
> the log disk bottleneck to me.
> for each insert, sql svr is waiting for the prev insert
> log write to complete.
> In Computer Management -> Disk Management, right click on
> the physical disk (disk 0, etc) go to the Policies tab,
> try various settings of write cache enabled, adv. perf etc.
> you can do this while your insert test is running, be sure
> to monitor disk writes/sec
>
Good thought. To further reduce log write waits, wrap all of your inserts
in a single transaction.
This will allow you to defer the log writes until the end and do one large
write instead of 6,400 small ones.
David

The information gets written to the log regardless of whether it's in a user
defined transaction or not. There can be a certain amount of overhead
associated with 6400 separate log entries vs 1 but the info is still all
there. You have to be careful with a general statement (unfortunately we
have to do it at times) as that since it can depend on a lot of factors as
to which would be faster. I do agree that the log seems a likely place to
start though. Thomas, is it on a separate disk?
--
Andrew J. Kelly
SQL Server MVP
"David Browne" <davidbaxterbrowne no potted meat@.hotmail.com> wrote in
message news:u4jBJL8nDHA.2068@.TK2MSFTNGP09.phx.gbl...
> "joe chang" <anonymous@.discussions.microsoft.com> wrote in message
> news:0b1801c39fb9$ce6c8350$a401280a@.phx.gbl...
> > 6,400 inserts in 28 sec for 228 inserts/sec sounds like
> > the log disk bottleneck to me.
> > for each insert, sql svr is waiting for the prev insert
> > log write to complete.
> > In Computer Management -> Disk Management, right click on
> > the physical disk (disk 0, etc) go to the Policies tab,
> > try various settings of write cache enabled, adv. perf etc.
> > you can do this while your insert test is running, be sure
> > to monitor disk writes/sec
> >
> >
> Good thought. To further reduce log write waits, wrap all of your inserts
> in a single transaction.
> This will allow you to defer the log writes until the end and do one large
> write instead of 6,400 small ones.
> David
>|||"Andrew J. Kelly" <sqlmvpnooospam@.shadhawk.com> wrote in message
news:OCbW13%23nDHA.1764@.tk2msftngp13.phx.gbl...
> The information gets written to the log regardless of if it's in a user
> defined transaction or not. There can be a certain amount of overhead
> associated with 6400 separate log entries vs 1 but the info is still all
> there.
Yes, but the log is flushed to disk after each of the 6400 inserts unless
they are all in a transaction. With a transaction, the log is flushed on
its normal schedule, or at the latest when the transaction is committed.
David
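
A minimal sketch of the difference being discussed here; the table name and loop are illustrative (the real test calls a stored procedure from ADO.NET, but the log behaviour is the same):

CREATE TABLE dbo.Bench (a int NOT NULL)
GO
-- autocommit: every INSERT is its own transaction and waits for its own log flush
DECLARE @i int
SET @i = 0
WHILE @i < 6400
BEGIN
    INSERT dbo.Bench (a) VALUES (@i)
    SET @i = @i + 1
END
GO
-- one explicit transaction around the same work: far fewer synchronous log waits
BEGIN TRAN
DECLARE @j int
SET @j = 0
WHILE @j < 6400
BEGIN
    INSERT dbo.Bench (a) VALUES (@j)
    SET @j = @j + 1
END
COMMIT
GO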

Poor Server Performance When Running Data Driven Subscriptions

We are seeing poor server performance when running simple data driven
subscriptions. The subscription references a simple view that provides
parameter data. When we run the subscription, we see disk usage and memory
usage increase dramatically on the server, and the server becomes very slow
(almost to the point where other activity on the server stops).
Has anyone seen this behavior and does anyone have any recommendations?

We are experiencing the same performance issue; our server gets 100% busy
when we run subscriptions, and some subscriptions never get completed.

Poor query performance with: set statistics time on (SQL2K5)

Hi,
I try this query:
--
use AdventureWorks
go
select c.*,o.* from sales.salesorderheader o join sales.customer c
on o.customerid=c.customerid
go
set statistics time on
go
-- try the query again
select c.*,o.* from sales.salesorderheader o join sales.customer c
on o.customerid=c.customerid
--
the second query after setting "statistics time" is very slow on my
computer. I'd like to know if anybody else has the same problem?
Thanks in advance,
Leila

I tested it on 2005 SP1 using Management Studio. I ran it as given,
and with the order reversed. I also added select getdate() lines to
give a measure of elapsed time. Results were returned to a grid.
With statistics time on the query took a bit more than twice as long
to run.
Roy Harvey
Beacon Falls, CT
On Wed, 3 May 2006 03:39:17 +0430, "Leila" <Leilas@.hotpop.com> wrote:

>Hi,
>I try this query:
>--
>use AdventureWorks
>go
>select c.*,o.* from sales.salesorderheader o join sales.customer c
>on o.customerid=c.customerid
>go
>set statistics time on
>go
>-- try the query again
>select c.*,o.* from sales.salesorderheader o join sales.customer c
>on o.customerid=c.customerid
>--
>the second query after setting "statistics time" is very slow on my
>computer. I'd like to know if anybody else has the same problem?
>Thanks in advance,
>Leila

Leila and Roy,
I can also reproduce this behavior. I don't have an answer
or workaround, but I'd suggest one of you file it as a bug
at http://lab.msdn.microsoft.com/productfeedback/
Steve Kass
Drew University
Leila wrote:

>Hi,
>I try this query:
>--
>use AdventureWorks
>go
>select c.*,o.* from sales.salesorderheader o join sales.customer c
>on o.customerid=c.customerid
>go
>set statistics time on
>go
>-- try the query again
>select c.*,o.* from sales.salesorderheader o join sales.customer c
>on o.customerid=c.customerid
>--
>the second query after setting "statistics time" is very slow on my
>computer. I'd like to know if anybody else has the same problem?
>Thanks in advance,
>Leila
>
>
>|||On Wed, 03 May 2006 00:00:42 -0400, Steve Kass <skass@.drew.edu> wrote:
>Leila and Roy,
>I can also reproduce this behavior. I don't have an answer
>or workaround, but I'd suggest one of you file it as a bug
>at http://lab.msdn.microsoft.com/productfeedback/
>Steve Kass
>Drew University
Yikes.
Just for the record, I've been using statistics io on a lot over the
last year or so, and on SQL2000 it generally had a small impact, maybe
5 to 10 percent.
Josh
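
For reference, a rough version of the GETDATE() timing Roy mentions, run against the same AdventureWorks query; it gives elapsed time without turning STATISTICS TIME on:

use AdventureWorks
go
DECLARE @start datetime
SET @start = GETDATE()

select c.*,o.* from sales.salesorderheader o join sales.customer c
on o.customerid=c.customerid

-- elapsed time in milliseconds
SELECT DATEDIFF(ms, @start, GETDATE()) AS elapsed_ms
go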

poor performance using sql 2005 with access 2003 application

we have run an access 2003 application using SQL 2005 server
and the data transfer rate is extremely slow.
we are using an ODBC connection
and the access application works very fast with SQL 2000.

we have tested the compatibility of the application with SQL 2005 using the Microsoft wizard and it had no problems.
even when i run a simple application on the sql 2005 server, it runs very slowly.

i am using the MDAC ODBC client component and the SQL Native Client one,
which eliminates the possibility of a network problem.

what could be the problem?

Most likely the problem is improper indexing on the SQL Server side. Typically the driver used is not going to have a huge impact on overall performance. The simplest way to verify whether you have index issues is to run your application against SQL 2005 and capture a SQL Profiler trace of the activity. Then take the Profiler trace and run it through the "Database Engine Tuning Advisor" tool that comes with SQL 2005 and it will analyze the trace and make recommendations as to what indexes to build.
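
If you want a quick first look before setting up a Profiler trace, SQL 2005 also exposes missing-index DMVs. This is not the Profiler/Tuning Advisor route described above, just a rough shortcut that lists the candidate indexes the optimizer has wished for since the last restart:

SELECT TOP 10
    d.statement AS table_name,
    d.equality_columns,
    d.inequality_columns,
    d.included_columns,
    s.user_seeks,
    s.avg_user_impact
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g
    ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s
    ON s.group_handle = g.index_group_handle
ORDER BY s.user_seeks * s.avg_user_impact DESC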

Poor plan choice

Hello,
A couple days ago one of our queries suddenly started to perform
abysmally. The query is pretty straightforward - it joins several
tables all on foreign keys and includes a GROUP BY with COUNT(*). I
looked over the query plan and it looked a little odd so I tried
cutting the query down to see where the issue might be. I eventually
came up with the following:
SELECT COUNT(*)
FROM Table1 T1
INNER JOIN Table2 T2 ON T2.table2_id = T1.table2_id
WHERE T1.my_date = '2004-11-18'
The table and column names have been changed to protect the innocent,
but that is the exact format of the tables. Table1 has about 35M
records. Table2 has about 6.5M records. For the date in question,
Table1 has about 165K records.
There is a non-clustered index on T1.my_date and there is a clustered
index on T2.table2_id.
The query plan for this simple query does an index seek on T1.my_date
as I expected then it does a bookmark lookup (presumably because it
needs T1.table2_id). It then includes parallelism, a hash, and then a
bitmap creation. Meanwhile, it does an index scan using an index on
Table2 that includes a single column that isn't even mentioned in the
query(?!?!). It then uses parallelism and does a hash match/inner
join.
I've done UPDATE STATISTICS using WITH FULLSCAN for both tables and
I've done a DBCC CHECKTABLE on both tables. Neither had any effect. I
also tried to force the query to use the clustered index for Table2.
For the simple query above it doesn't seem to help performance as the
clustered index scan has a very large cost to it (I'm not sure that I
entirely understand why). In the original query it helps substantially
though. Instead of joining the 6.5M records to a lookup table first it
joins it to Table1 first, which cuts down the number of records to the
165K before going about with other joins.
What I'm looking for is any advice on other things that I can look at
or any ideas on why SQL Server might be making these kinds of choices.
I would have thought that the simple query above would have performed
much better than it is currently (~30-35 seconds). I realize that
there has to be a bookmark lookup, but I was still expecting a quick
response from the server based on the indexes.
Because of the table sizes, etc. I don't expect anyone to reproduce my
results, so please don't ask me to provide DDL for all of the tables
involved. If you have some ideas or even just guesses great, if not
then that's ok too.
Thanks,
-Tom.
Thomas, see inline
"Thomas R. Hummel" wrote:
> Hello,
> A couple days ago one of our queries suddenly started to perform
> abyssmally. The query is pretty straightforward - it joins several
> tables all on foreign keys and includes a GROUP BY with COUNT(*). I
> looked over the query plan and it looked a little odd so I tried
> cutting the query down to see where the issue might be. I eventually
> came up with the following:
> SELECT COUNT(*)
> FROM Table1 T1
> INNER JOIN Table2 T2 ON T2.table2_id = T1.table2_id
> WHERE T1.my_date = '2004-11-18'
> The table and column names have been changed to protect the innocent,
> but that is the exact format of the tables. Table1 has about 35M
> records. Table2 has about 6.5M records. For the date in question,
> Table1 has about 165K records.
> There is a non-clustered index on T1.my_date and there is a clustered
> index on T2.table2_id.
Consider adding a nonclustered index on T1(my_date,table2_id). This will
prevent the (quite expensive) bookmark lookups.
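For example (the index name is chosen arbitrarily):

CREATE NONCLUSTERED INDEX IX_Table1_my_date_table2_id
ON Table1 (my_date, table2_id)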

> The query plan for this simple query does an index seek on T1.my_date
> as I expected then it does a bookmark lookup (presumably because it
> needs T1.table2_id). It then includes parallelism, a hash, and then a
> bitmap creation. Meanwhile, it does an index scan using an index on
> Table2 that includes a single column that isn't even mentioned in the
> query(?!?!). It then uses parallelism and does a hash match/inner
> join.
Apparently SQL-Server estimates that the parallel plan will be faster.
If you expect differently, then you could add the hint OPTION (MAXDOP 1)
to force the serial plan.
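For the pared-down query that would look like:

SELECT COUNT(*)
FROM Table1 T1
INNER JOIN Table2 T2 ON T2.table2_id = T1.table2_id
WHERE T1.my_date = '2004-11-18'
OPTION (MAXDOP 1)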
Since the index on T2(table2_id) is clustered it is very wide at the
page level. In this case, SQL-Server estimates that it is faster to scan
a nonclustered index of table T2 (which also includes the clustered
index key) than it is to seek (or partially scan) the clustered index
for the estimated rows of the query.
Hope this helps,
Gert-Jan

> I've done UPDATE STATISTICS using WITH FULLSCAN for both tables and
> I've done a DBCC CHECKTABLE on both tables. Neither had any effect. I
> also tried to force the query to use the clustered index for Table2.
> For the simple query above it doesn't seem to help performance as the
> clustered index scan has a very large cost to it (I'm not sure that I
> entirely understand why). In the original query it helps substantially
> though. Instead of joining the 6.5M records to a lookup table first it
> joins it to Table1 first, which cuts down the number of records to the
> 165K before going about with other joins.
> What I'm looking for is any advice on other things that I can look at
> or any ideas on why SQL Server might be making these kinds of choices.
> I would have thought that the simple query above would have performed
> much better than it is currently (~30-35 seconds). I realize that
> there has to be a bookmark lookup, but I was still expecting a quick
> response from the server based on the indexes.
> Because of the table sizes, etc. I don't expect anyone to reproduce my
> results, so please don't ask me to provide DDL for all of the tables
> involved. If you have some ideas or even just guesses great, if not
> then that's ok too.
> Thanks,
> -Tom.
|||I fully agree. OPTION (MAXDOP 1) should resolve the problem. In my
experience, UPDATE STATISTICS would temporarily fix it. An index hint or
query hint would also force the right plan (not a best practice though). If
it's from a stored procedure, WITH RECOMPILE would also fix it (not a best
practice).
Gary
SELECT COUNT(*)
> FROM Table1 T1
> INNER JOIN Table2 T2 ON T2.table2_id = T1.table2_id
> WHERE T1.my_date = '2004-11-18'
"Gert-Jan Strik" <sorry@.toomuchspamalready.nl> wrote in message
news:419E4013.6DBC44D@.toomuchspamalready.nl...
> Thomas, see inline
> "Thomas R. Hummel" wrote:
> Consider adding a nonclustered index on T1(my_date,table2_id). This will
> prevent the (quite expensive) bookmark lookups.
>
> Apparently SQL-Server estimates that the parallel plan will be faster.
> If you expect differently, then you could add the hint OPTION (MAXDOP 1)
> to force the serial plan.
> Since the index on T2(table2_id) is clustered it is very wide at the
> page level. In this case, SQL-Server estimates that it is faster to scan
> a nonclustered index of table T2 (which also includes the clustered
> index key) than it is to seek (or partially scan) the clustered index
> for the estimated rows of the query.
> Hope this helps,
> Gert-Jan
|||Thanks for the suggestions. I had thought of the nonclustered index,
but while that would help with the pared down query that I came up
with, it wouldn't help with the underlying query because that one
involves a lot of additional columns. I'm still considering a covering
index, but I'm not sure why it should be necessary. Due to the number
of columns in the query as well as the number of rows in the table,
I'm a little hesitant to create a covering index.
I may try changing the clustered index for Table1. Right now it is on
an identity column (not my design...) that is also being used as a
surrogate primary key. It makes more sense to me to have that on the
date column since most reports run off of that date column and use
date ranges. This should prevent bookmark lookups for these large
groups of rows as well. When looking up by the primary key it is
usually to grab one row anyway.
Thanks!
-Tom.
Gert-Jan Strik <sorry@.toomuchspamalready.nl> wrote in message news:<419E4013.6DBC44D@.toomuchspamalready.nl>...
> Thomas, see inline
> "Thomas R. Hummel" wrote:
> Consider adding a nonclustered index on T1(my_date,table2_id). This will
> prevent the (quite expensive) bookmark lookups.
>
> Apparently SQL-Server estimates that the parallel plan will be faster.
> If you expect differently, then you could add the hint OPTION (MAXDOP 1)
> to force the serial plan.
> Since the index on T2(table2_id) is clustered it is very wide at the
> page level. In this case, SQL-Server estimates that it is faster to scan
> a nonclustered index of table T2 (which also includes the clustered
> index key) than it is to seek (or partially scan) the clustered index
> for the estimated rows of the query.
> Hope this helps,
> Gert-Jan
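
A sketch of the clustered-index change Tom is considering (constraint, column and index names are hypothetical, and any foreign keys referencing the primary key would need to be dropped and recreated around these statements):

-- drop the existing clustered primary key on the identity column
ALTER TABLE Table1 DROP CONSTRAINT PK_Table1

-- put the primary key back as a nonclustered index
ALTER TABLE Table1 ADD CONSTRAINT PK_Table1 PRIMARY KEY NONCLUSTERED (table1_id)

-- cluster the table on the date column used by the range reports
CREATE CLUSTERED INDEX IX_Table1_my_date ON Table1 (my_date)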
