
Friday, March 30, 2012

Poorly written assembly and CLR

The CLR is hosted inside the SQL OS.

How will the SQL OS handle a situation where I have a badly written .NET assembly that is chewing up a lot of memory and other system resources on the server?

1. Won’t it directly affect the SQL OS and the overall performance of the server?

2. The bad assembly is also hosted in the same AppDomain as other assemblies (all assemblies owned by the same user are hosted in the same AppDomain). Won't this affect other .NET assemblies running in the same AppDomain?

Thanks!
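As background to point 2: assemblies are grouped into AppDomains by database and by owning user, so a suspect assembly can be isolated by giving it its own owner. A rough sketch; every name and path below is hypothetical:

```sql
-- All names/paths below are invented, for illustration only.
-- Assemblies sharing a database and owner share an AppDomain,
-- so a separate owner isolates the suspect assembly.
CREATE USER risky_owner WITHOUT LOGIN;
CREATE USER stable_owner WITHOUT LOGIN;

CREATE ASSEMBLY RiskyAsm
    AUTHORIZATION risky_owner
    FROM 'C:\clr\RiskyAsm.dll'
    WITH PERMISSION_SET = SAFE;   -- loads into risky_owner's AppDomain

CREATE ASSEMBLY StableAsm
    AUTHORIZATION stable_owner
    FROM 'C:\clr\StableAsm.dll'
    WITH PERMISSION_SET = SAFE;   -- loads into a separate AppDomain
```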

1. Whether it will affect the overall performance of the server depends on whether or not the server has enough memory and resources to handle however much it is chewing up :)

2. Yes, other assemblies in the same AppDomain could be affected; but you can control this to some degree by splitting things up into different AppDomains by using different owners.

But let's get down to the more important question: if this assembly is so badly written, and using so much memory, why are you hosting it in-process? Wouldn't this be a better candidate for the application tier? The situation you describe is really no different than a badly written or improperly implemented T-SQL stored procedure. If you have something eating up all of the RAM and processor time on your server, perhaps you need to take a hard look at it from an architectural point of view.

-- Adam Machanic
Pro SQL Server 2005, available now
http://www.apress.com/book/bookDisplay.html?bID=457|||Adam,
On point 1, is it true that the SQL OS punishes threads/assemblies that are performing poorly? How does this happen? On what criteria does the SQL OS decide this?

On point 2, I agree with you.

Thanks|||

Most of the memory for the CLR comes from outside the SQL Server buffer pool. If the system runs out of memory, your CLR routine would get an OOM exception; SQL Server would abort the thread, and the memory would consequently be freed.

For CPU, yes: SQL Server punishes threads that do not yield the scheduler within a reasonable amount of time. It forces them to yield and puts them at the end of the scheduler's runnable queue.

Thanks,
-Vineet.
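One way to watch this is via the memory-clerk DMV (a hypothetical monitoring query; the DMV exists in SQL Server 2005 and later, though its column layout varies by version):

```sql
-- Shows the memory clerks tracking CLR allocations, which live
-- largely outside the buffer pool. SELECT * keeps the query
-- portable across versions with different column layouts.
SELECT *
FROM sys.dm_os_memory_clerks
WHERE type LIKE '%SQLCLR%';
```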

Poor Performing Query

Here is the situation: I have a process that auto-generates reports and e-mails them to end users (no, we are not allowed to use SQL Mail due to security issues and the fact that we are a Lotus shop). The process runs the query, saves the data to a text file and then mails the file as an attachment.

If the particular report to be run has multiple recipients, we make multiple passes (i.e., we run the same query multiple times and send each recipient a "personalized" version of the report).

The issue is that one of my auto reports fails for just the first recipient. Subsequent recipients all receive the report normally, but the first recipient consistently fails to receive the data.

When I run the T-SQL multiple times in Query analyzer, I get the following results:

Pass 1: 87 seconds
Pass 2: 2 seconds
Pass 3: 3 seconds

I know that the process that we wrote (which is wrapped in a DLL) uses an ADO connection/command timeout setting of 60 seconds (both properties are set to the same value).

The SQL is not a stored proc. When I run the Index tuning wizard, no additional indexes are recommended.

I'm struggling to determine the next step.

1. I am loath to increase the command timeout setting, since I have already done that once (from 30 seconds to one minute). I'm pretty sure this would only defer final resolution.

2. I suppose I could create a stored proc (so that it doesn't have to recompile the execution plan each time), but that would make this one report different from the hundreds of other reports that my super users have created; it would also mean that my super users would not be able to control the layout of the report without coming to me.

3. I have a feeling I should be using query hints or some such, but I am completely unfamiliar with these optimization tools.

Is there another approach that I am overlooking?

Regards,

Hugh Scott|||If these are similar then I would guess that for the first call it has to read the data from disk. For subsequent calls the data is in memory, so it is much quicker.

Maybe put in a dummy call first?|||Yes, the calls are identical. I agree with you that on subsequent passes, the data is being read from memory. What has me puzzled is what I can do about it. I have considered putting in a dummy call first (as you suggested), but I wanted to try something a little more orthodox first.

Do you think that adding memory to the server would help? The server currently has 2.6 GB of memory. The database itself is a little over 30 GB now. I have run performance monitor on the server. While pages/sec does spike during this operation, it usually averages around 4.5. It does not seem sufficiently spiky to warrant $4,000 for additional memory.

Some more information:

There are three tables involved:

table a left join view b left join table c

Table a: 5.4 million rows (~ 4 GB)
View b: 1.3 million rows (~ 2.3 GB)
Table c: 15 rows

View b uses the UNION operator and thus cannot be indexed.
Table a is indexed six ways from Sunday.

Thanks again,

Hugh Scott

Originally posted by nigelrivett
If these are similar then I would guess that for the first call it has to read the data from disk. For subsequent calls the data is in memory so is much quicker.

Maybe put in a dummy call first?|||What's your UNION view code? UNION queries are very handy, and I've used them many times myself, but just as often I've seen them used unnecessarily. Perhaps there is a more efficient method of combining your view code directly in your procedure code.

blindman|||UNION views can be indexed. Also, have you tried looking at the output with SET STATISTICS IO ON when doing a SELECT from just that view, using a WHERE clause that would match your JOIN?|||Views can be indexed, but unless you use a clustered index the index must be recreated each time the view is called, and you lose the benefit of indexes on the underlying tables.

Clustered indexes on views can greatly boost performance, but they cannot be used on UNION queries. From Books Online:

A view must meet these requirements before you can create a clustered index on it:
...
The SELECT statement in the view cannot contain these Transact-SQL syntax elements:
...
UNION operator.

blindman|||blindman: thanks for the reminder. UNION is not allowed in indexed views. But you're also referring to non-clustered indexes. I haven't found any info that would suggest that a non-clustered index on a view needs to be rebuilt each time the view is called. Can you enlighten me here?

hmscott: adding memory will not resolve 87-second processing time on the first call. Adding indexes to base tables may.|||Normally the results of a view are not stored in the database, but if a clustered index is created on the view then the results are stored just like a table, and the values in the view are updated whenever the values in the underlying table are updated. (This of course adds overhead to processing changes on the underlying tables.) The clustered index is necessary in order to update the values on the view.

Long story short, if you don't use a clustered index then the results are not stored and thus any indexing must be recreated each time.

Honestly, I don't know whether non-clustered indexes are maintained on views that also have clustered indexes. I think I also read somewhere that you can't create an index on a view unless it also has a clustered index, so maybe UNION queries can't have indexes at all.

It's also possible that the optimizer might be smart enough to apply filters from the procedure to the underlying tables prior to creating the UNION view when it is called, but this probably depends on a lot of factors.

I've just seen too many instances when a UNION view was used instead of a more appropriate WHERE clause criteria.

blindman
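blindman's last point can be illustrated with a sketch (the table, view, and column names here are invented):

```sql
-- Invented schema, for illustration only.
-- A UNION view splitting one table by status:
CREATE VIEW dbo.CurrentAndArchivedOrders AS
    SELECT OrderID, Amount FROM dbo.Orders WHERE Status = 'C'
    UNION ALL
    SELECT OrderID, Amount FROM dbo.Orders WHERE Status = 'A';
GO

-- Often the same rows come straight from the base table, where the
-- optimizer can use the table's indexes directly:
SELECT OrderID, Amount
FROM dbo.Orders
WHERE Status IN ('C', 'A');
```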

Friday, March 9, 2012

Please suggest!

Hi,
While using Reporting Services, I have come across the following situation,
where the grouping criteria for the report are defined by the user via an ASPX
page.
My problem is that I am not sure how I can send the grouping preferences to
the Reporting Service (I can't send this data as part of the report parameters,
as there is no limit on the number of groups permitted).
Thanks,
Raj|||Raj,
Reports are cheap, and take up no space at all. So make as many reports as
you need to satisfy your grouping options. On the ASPX page, have the user
select the grouping options, and you will then call the appropriate report by
name.
rwiethorn
"Raj" wrote:
> Hi,
> While using Reporting services, I have come across the following situation
> where the grouping cirteria for the report is defined by the user using ASPX
> page.
> My problem is, I am not sure how I can send the the grouping preferences to
> the Reporting service(I can't send this data as a part of Report parameters
> as there is no limit on the number of groups permitted).
> Thanks,
> Raj|||Thanks for the reply, rwiethorn.
The solution that you have suggested may not be too practical in my
application, as the grouping is quite dynamic and it's impossible to create all
the reports at design time.
Thanks again,
Raj
"rwiethorn" wrote:
> Raj,
> Reports are cheap, and take up no space at all. So make as many reports as
> you need that will satisfy your grouping options. On the ASPX have the user
> select the grouping options, and your will the call the appropriate report by
> name.
>
> rwiethorn
>
> "Raj" wrote:
> > Hi,
> >
> > While using Reporting services, I have come across the following situation
> > where the grouping cirteria for the report is defined by the user using ASPX
> > page.
> >
> > My problem is, I am not sure how I can send the the grouping preferences to
> > the Reporting service(I can't send this data as a part of Report parameters
> > as there is no limit on the number of groups permitted).
> >
> > Thanks,
> > Raj

Wednesday, March 7, 2012

Please provide me with a solution

I am a developer who codes for both the front end and the back end.

Here is the situation. I have two databases, sqlnts1 and sqlnts2. I need to update certain tables in sqlnts1, then certain tables in sqlnts2, and then go back to sqlnts1 to update a few more tables. This is the program logic.

When I try executing the stored procedure (written for SQL Server 2000), I am given the following message:

"[OLE/DB provider returned message: Cannot start more transactions on this session.]"

To correct this problem, the only option I have is to SET XACT_ABORT ON. But with that option on, I cannot handle the errors in code; that is, the option lets the server handle the system errors.

I want to handle the errors programmatically using @@ERROR.

Also, when I try to commit the transaction (started before updating sqlnts1) at the end of the stored procedure, it gives me the following error:

"No transaction or savepoint of that name was found".

To work around this problem, I SET XACT_ABORT ON before making updates to sqlnts2 and set it back to OFF after finishing the updates to sqlnts2.

This way it works, but I am sure that if an error occurs while updating the sqlnts2 server tables, it is going to create problems. I want to handle them myself in the code.

Please help.

Here is the program flow. I tried beginning a distributed transaction too. The case of the transaction name is also checked, and the BEGIN TRAN and COMMIT/ROLLBACK TRAN statements are ensured to use the same case.

BEGIN DISTRIBUTED TRANSACTION abc / BEGIN TRAN abc

Update tables in sqlnts1

SET XACT_ABORT ON

Update tables in sqlnts2

SET XACT_ABORT OFF

Update tables in sqlnts1

COMMIT TRAN abc / ROLLBACK TRAN abc
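A sketch of that flow with explicit error checks, SQL Server 2000 vintage (no TRY/CATCH). The linked-server, database, table, and column names are placeholders; note that most OLE DB providers require XACT_ABORT ON for remote data modification inside a transaction:

```sql
-- All object names are placeholders.
SET XACT_ABORT ON;  -- remote DML in a transaction generally requires this

BEGIN DISTRIBUTED TRANSACTION;

UPDATE dbo.Table1 SET Flag = 1 WHERE BatchID = 10;              -- on sqlnts1
IF @@ERROR <> 0 GOTO err;

UPDATE sqlnts2.MyDb.dbo.Table2 SET Flag = 1 WHERE BatchID = 10; -- on sqlnts2
IF @@ERROR <> 0 GOTO err;

UPDATE dbo.Table3 SET Flag = 1 WHERE BatchID = 10;              -- sqlnts1 again
IF @@ERROR <> 0 GOTO err;

COMMIT TRANSACTION;
RETURN;

err:
ROLLBACK TRANSACTION;
```

Caveat: with XACT_ABORT ON, most runtime errors abort the batch before the @@ERROR check runs, which is exactly the tension described above; the checks still catch errors that do not abort the batch.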

Thanks,
Subha|||Are these 2 DBs on the same server or on different servers?|||sqlnts1 and sqlnts2 are two different servers.

I look forward to hearing from you.
Thanks,
Subha

Originally posted by sbaru
Are these 2 DBs on the same server or different servers?|||What is the level of nested transactions you are going into?
Find the max value of @@TRANCOUNT at which the error
"[OLE/DB provider returned message: Cannot start more transactions on this session.]"
occurs.|||I have only one transaction open. I don't open multiple transactions.
@@TRANCOUNT is only 1.

Originally posted by Enigma
What is the level of nested transactions you are going into?
Find the max value of @@TRANCOUNT at which the error
"[OLE/DB provider returned message: Cannot start more transactions on this session.]"
occurs.

Please provide me with a solution

Crosspost

http://www.dbforums.com/t970872.html|||I could not see any reply posted by enigma?
