Every morning I have 7-10 identical messages in the error log:
1. Configuration option 'allow updates' changed from 1 to 0. Run the RECONFIGURE statement to install.
2. Error: 15457, Severity: 0, State: 1
3. Configuration option 'allow updates' changed from 0 to 1. Run the RECONFIGURE statement to install.
It is a standby server with custom log shipping and a DTS package transferring logins every 15 minutes.
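For context, this is the pattern that typically writes those message pairs: login-transfer scripts often flip 'allow updates' around direct system-table writes, and each sp_configure change logs one 15457 entry. A minimal sketch, assuming (not knowing) that the DTS package does something like this:

EXEC sp_configure 'allow updates', 1
RECONFIGURE WITH OVERRIDE
-- ... direct updates to system tables, e.g. fixing up login SIDs ...
EXEC sp_configure 'allow updates', 0
RECONFIGURE WITH OVERRIDE
-- each sp_configure call above logs a 15457 'Configuration option changed' message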
Hi everyone. In my SQL Server Management Studio Express, on start-up it shows the server type option, but greyed out, so the value is fixed to Database Engine. (I'm trying to work on a SQL Server Compact Edition database through SSMS; that's why I'm trying to get this to change.) Besides, after I connect I go to the Object Explorer, expand the server node, and go to Replication. When I expand Replication, I get the "Local Subscriptions" option, but nothing for Publication. (I want to work on merge replication; that's why I desperately need Publication to work.) Am I missing something here? I did not install SQL Server separately; I only have what comes bundled with the Visual Studio 2005 setup.
I have a project that consists of a SQL db with an Access front end as the user interface. Here is the structure of the table on which this question is based:
Code Block
create table #IncomeAndExpenseData (
    recordID nvarchar(5) NOT NULL,
    itemID int NOT NULL,
    itemvalue decimal(18, 2) NULL,
    monthitemvalue decimal(18, 2) NULL
)

The itemvalue field is where the user enters his/her numbers via Access. There is an IncomeAndExpenseCodes table as well, which holds item information, including the itemID and entry unit of measure. Some itemIDs have an entry unit of measure of $/mo, while others are entered in terms of $/yr, others in %/yr.
For itemvalues of itemIDs with entry units of measure that are not $/mo, a stored procedure performs calculations which convert them into numbers that have a unit of measure of $/mo, and updates IncomeAndExpenseData, putting these numbers in the monthitemvalue field. This stored procedure is written to only calculate values for monthitemvalue fields which are null, in order to avoid recalculating every single row in the table.
If the user edits the itemvalue field, a trigger on IncomeAndExpenseData sets monthitemvalue to null so the stored procedure recalculates it for the changed rows. However, it appears this trigger is also setting monthitemvalue to null after the stored procedure updates the IncomeAndExpenseData table with the recalculated monthitemvalues, thus wiping out the answers.
How do I write a trigger that sets the monthitemvalue to null only when the user edits the itemvalue field, not when the stored procedure puts the recalculated monthitemvalue into the IncomeAndExpenseData table?
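One common pattern, offered as a sketch (assuming recordID plus itemID uniquely identify a row, which is an assumption on my part): make the trigger act only when itemvalue itself actually changed, by comparing the inserted and deleted pseudo-tables. Since the stored procedure updates only monthitemvalue, IF UPDATE(itemvalue) is false for its statement and nothing gets nulled:

CREATE TRIGGER trg_IncomeAndExpenseData_upd
ON IncomeAndExpenseData
AFTER UPDATE
AS
SET NOCOUNT ON
IF UPDATE(itemvalue)   -- true only when the statement touched the itemvalue column
BEGIN
    UPDATE d
    SET monthitemvalue = NULL
    FROM IncomeAndExpenseData d
    JOIN inserted i ON i.recordID = d.recordID AND i.itemID = d.itemID
    JOIN deleted x ON x.recordID = d.recordID AND x.itemID = d.itemID
    -- only rows where the value really changed (NULL-to-value counts as a change)
    WHERE i.itemvalue <> x.itemvalue
       OR (i.itemvalue IS NULL AND x.itemvalue IS NOT NULL)
       OR (i.itemvalue IS NOT NULL AND x.itemvalue IS NULL)
END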
So I started a new job recently and have noticed a few strange configurations. Typically I would never mess with the min memory per query option or the index create memory option, because I just haven't seen any need to. My typical thought is that if it isn't broke... They have been modified on every single server in my environment.
From Books Online:
• This option is an advanced option and should be changed only by an experienced database administrator or certified SQL Server technician.
• The index create memory option is self-configuring and usually works without requiring adjustment. However, if you experience difficulties creating indexes, consider increasing the value of this option from its run value.
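If it helps, here is a hedged way to inspect the options and, if that turns out to be the right call, restore the documented defaults (1024 KB for min memory per query; 0 = self-configuring for index create memory). Worth asking whoever changed them first, of course:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
-- inspect the current config_value / run_value
EXEC sp_configure 'min memory per query (KB)'
EXEC sp_configure 'index create memory (KB)'
-- restore the documented defaults, if that is the decision
EXEC sp_configure 'min memory per query (KB)', 1024
EXEC sp_configure 'index create memory (KB)', 0
RECONFIGURE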
Hi... I have data that I am getting through a DBF file, and I am dumping that data to a SQL Server. Then, after scrubbing it, I take the data from the SQL Server and put it into the production database. Right now my stored procedure handles a single plan only, but now there may be two or more plans together in the same SQL Server database which I need to scrub, and then update each plan if it already exists or insert it if it doesn't...
This is my sproc:

ALTER PROCEDURE [dbo].[usp_Import_Plan]
    @ClientId int,
    @UserId int = NULL,
    @HistoryId int,
    @ShowStatus bit = 0 -- Indicates whether status messages should be returned during the import.
AS
SET NOCOUNT ON

DECLARE @Count int, @Sproc varchar(50), @Status varchar(200), @TotalCount int

SET @Sproc = OBJECT_NAME(@@ProcId)

SET @Status = 'Updating plan information in Plan table.'
UPDATE cp
SET PlanName = c.PlanName1, Description = c.PlanName2
FROM Statements..Plan cp
JOIN ( SELECT DISTINCT PlanId, PlanName1, PlanName2 FROM Census ) c
    ON cp.CPlanId = c.PlanId
WHERE cp.ClientId = @ClientId
  AND ( IsNull(cp.PlanName,'') <> IsNull(c.PlanName1,'')
     OR IsNull(cp.Description,'') <> IsNull(c.PlanName2,'') )

SET @Count = @@ROWCOUNT
IF @Count > 0
BEGIN
    SET @Status = 'Updated ' + Cast(@Count AS varchar(10)) + ' record(s) in ClientPlan.'
END
ELSE
BEGIN
    SET @Status = 'No records were updated in Plan.'
END

SET @Status = 'Adding plan information to Plan table.'
INSERT INTO Statements..Plan ( ClientId, ClientPlanId, UserId, PlanName, Description )
SELECT DISTINCT @ClientId, CPlanId, @UserId, PlanName1, PlanName2
FROM Census
WHERE PlanId NOT IN ( SELECT DISTINCT CPlanId
                      FROM Statements..Plan
                      WHERE ClientId = @ClientId AND ClientPlanId IS NOT NULL )

SET @Count = @@ROWCOUNT
IF @Count > 0
BEGIN
    SET @Status = 'Added ' + Cast(@Count AS varchar(10)) + ' record(s) to Plan.'
END
ELSE
BEGIN
    SET @Status = 'No information was added to Plan.'
END

SET NOCOUNT OFF
So how do I do multiple inserts and updates using this stored procedure...
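One observation: the UPDATE and INSERT above already operate on every plan in Census for the given client, so the set-based version may need nothing more than verifying the joins. If per-plan control is needed (status messages, error handling per plan), a cursor over the distinct plan IDs is one sketch; the column names are taken from the proc above, but treat this as an assumption rather than drop-in code:

DECLARE @PlanId int
DECLARE plan_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT DISTINCT PlanId FROM Census
OPEN plan_cur
FETCH NEXT FROM plan_cur INTO @PlanId
WHILE @@FETCH_STATUS = 0
BEGIN
    -- run the UPDATE/INSERT pair with an extra filter, e.g.
    --   ... AND c.PlanId = @PlanId
    -- and emit a per-plan status message here
    FETCH NEXT FROM plan_cur INTO @PlanId
END
CLOSE plan_cur
DEALLOCATE plan_cur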
We think we're having performance problems, and among the areas of investigation is the tempdb database. Since it resets itself after SQL is restarted, is there a way to find out how big it has grown in the past? Does leaving it at the default size cause a performance hit? Right now it's 8.75 MB, with 7.38 MB available, which sounds pretty harmless.
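There is no built-in record of how big tempdb has been in the past (it is recreated at startup), but you can snapshot its current size on a schedule and build your own history; a minimal sketch:

USE tempdb
EXEC sp_spaceused
-- or file-level detail (size is in 8 KB pages)
SELECT name, size * 8 / 1024.0 AS size_mb
FROM sysfiles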
Everywhere I read, it states that running SQL Profiler can affect the performance of your SQL Server. My question is: how much of an impact will it really make? Will I see a 1% degradation in performance? 5%? 50%? I haven't been able to find a good answer. We have had SQL Profiler running all day long for almost 3 years, and the databases are still humming.
Is it the amount of data you are requesting from the trace that affects performance? There are some compliance tools out there (Idera Compliance Manager, IPLocks, etc.) that run a Profiler trace to get data. There are other DBAs in my organization who don't want to use them because "Profiler traces will degrade my SQL Server performance". How true is this really?
Any help I can get would be extremely appreciated.
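One data point worth testing: the Profiler GUI streams every event to the client and is heavier than a server-side trace writing to a local file, which is generally what such compliance tools use. A minimal server-side sketch (event 10 = RPC:Completed, column 13 = Duration; the file path is an assumption):

DECLARE @traceid int, @maxfilesize bigint, @on bit
SET @maxfilesize = 50   -- MB per trace file
SET @on = 1
EXEC sp_trace_create @traceid OUTPUT, 0, N'D:\traces\perf_test', @maxfilesize, NULL
EXEC sp_trace_setevent @traceid, 10, 13, @on   -- capture Duration for RPC:Completed
EXEC sp_trace_setstatus @traceid, 1            -- start the trace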
Does multiplication by 1 affect query performance? I have a stored procedure that converts results to another unit if required. In alternative 1 below, the results are returned with a separate SELECT statement if no conversion is necessary; in other words, no multiplication by a conversion factor is required. However, the code is not very nice, since I need to repeat the SELECT statement again for the case where a conversion is required, this time including the conversion factor.

Alternative 2 uses cleaner-looking code. The conversion factor is set to 1 if no conversion is required, and a single SELECT statement is used to return the data. The @factor variable is defined as a float. I would rather use alternative 2, but I wonder if there is any performance penalty when no conversion is required, since the results are always multiplied by @factor? Or can SQL Server somehow understand that @factor = 1 and no multiplication is required?

--- Alternative 1: ---

IF @fromunit_sid = @tounit_sid
    -- Return unconverted results
    SELECT ISNULL(ls_totalWaterConsumption, 0) AS ls_totalWaterConsumption,
           ls_theoreticalWaterConsumption AS ls_theoretical_WaterConsumption,
           ls_totalWaterConsumption - ls_theoreticalWaterConsumption AS ls_extra_WaterConsumption
    FROM Results
    WHERE scenario_id = @scenario_id
ELSE
BEGIN
    -- Get conversion factor
    EXEC getConversionFactor @fromunit_sid, @tounit_sid, @factor OUTPUT
    -- Get the converted results
    SELECT ISNULL(ls_totalWaterConsumption * @factor, 0) AS ls_totalWaterConsumption,
           ls_theoreticalWaterConsumption * @factor AS ls_theoretical_WaterConsumption,
           (ls_totalWaterConsumption - ls_theoreticalWaterConsumption) * @factor AS ls_extra_WaterConsumption
    FROM Results
    WHERE scenario_id = @scenario_id
END

--- Alternative 2: ---

IF @fromunit_sid = @tounit_sid
    SET @factor = 1
ELSE
    -- Get conversion factor
    EXEC getConversionFactor @fromunit_sid, @tounit_sid, @factor OUTPUT

-- Get the converted results
SELECT ISNULL(ls_totalWaterConsumption * @factor, 0) AS ls_totalWaterConsumption,
       ls_theoreticalWaterConsumption * @factor AS ls_theoretical_WaterConsumption,
       (ls_totalWaterConsumption - ls_theoreticalWaterConsumption) * @factor AS ls_extra_WaterConsumption
FROM Results
WHERE scenario_id = @scenario_id

And another question: is using an IF statement considerably faster than making a call to another stored procedure? In alternative 2 above I use an IF statement to check if @fromunit_sid = @tounit_sid and set @factor accordingly. But in fact the procedure getConversionFactor that I'm calling does exactly the same thing: if I pass in identical from- and to-values, it simply returns 1, so I could omit the IF statement completely and just use alternative 3. But is it slower?

--- Alternative 3: ---

-- Get conversion factor
EXEC getConversionFactor @fromunit_sid, @tounit_sid, @factor OUTPUT
Hi, I wrote the following code in one stored procedure called p_bcp_all, and then scheduled it to run overnight. What if the first two bcp commands were successful but the third one failed: is the whole procedure going to fail? Also, what if the first one failed: is the rest of the code going to be executed, with the bcp process going for the second and the third table? Thanks for your advice.
Regards, Ali
create procedure p_bcp_all
as
    Exec master..xp_cmdshell "bcp servername..tblone in d:\tblone.txt /fd:\formatfile\tblone.fmt /Sservername /Usa /Ppassword /b250000 /a8000"
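To the question itself: each statement runs independently, so a failed bcp does not stop the procedure by itself, and the remaining xp_cmdshell calls still execute. xp_cmdshell does return 0 on success and 1 on failure, so each step can be checked; a hedged sketch (file and server names copied from above, so treat them as assumptions):

DECLARE @rc int
EXEC @rc = master..xp_cmdshell 'bcp servername..tblone in d:\tblone.txt /fd:\formatfile\tblone.fmt /Sservername /Usa /Ppassword'
IF @rc <> 0
BEGIN
    RAISERROR ('bcp load of tblone failed.', 16, 1)
    RETURN   -- or log the failure and continue to the next table
END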
I have a db over which I have little control of most of its makeup because of the vendor-supplied tools. We currently have over 700 tables and 19,000 columns. Has anyone seen a problem or saturation point with these kinds of numbers? The database delivered to the clients will be from 2-50 GB depending on the site. I can probably throw hardware at problems, but if anyone has been down this road, any suggestions are appreciated.
I'm using a unique index on a table with IGNORE_DUP_KEY to get around using DISTINCT in a stored procedure. In Query Analyzer, when I issue the stored procedure, you get a message and also the data in the grid. In a stored procedure call initiated from MS Access, the message means I have to add additional client-side programming to work around it. I know that duplicate keys are ignored; that is why I've created the table with this feature (unique index in combination with IGNORE_DUP_KEY). So what's with the message? If I delete error message 3604, will I still get an error message?
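For reference, message 3604 ('Duplicate key was ignored.') is informational rather than an error, which is why Query Analyzer shows both the message and the grid. A sketch of the setup being described, with assumed table and column names:

-- assumed names, for illustration only
CREATE UNIQUE INDEX ux_MyTable_KeyCol
ON dbo.MyTable (KeyCol)
WITH IGNORE_DUP_KEY
-- SQL Server 2005+ spelling: WITH (IGNORE_DUP_KEY = ON)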
I'm considering adding domain integrity checks to some of my database table items. How does adding such constraints affect SQL Server performance? For example, I have a simple constraint that restricts a couple of columns to having values within the values assigned in my application by an enumeration:

(([Condition] >= 0 and [Condition] <= 3) and ([Type] >= 0 and [Type] <= 2))

This enforces domain integrity for two enumerations having values 0, 1, 2, 3 and 0, 1, 2 in the application. Is this an efficient way of performing such checks? What are the pitfalls of domain integrity checking?

Thanks
Robin
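For concreteness, this is how such a constraint is typically attached (the table name is hypothetical). Simple range CHECK constraints like these are evaluated only on INSERT and on UPDATE of the affected columns, so they are generally cheap:

ALTER TABLE dbo.MyTable   -- hypothetical table name
ADD CONSTRAINT CK_MyTable_Condition CHECK ([Condition] >= 0 AND [Condition] <= 3),
    CONSTRAINT CK_MyTable_Type CHECK ([Type] >= 0 AND [Type] <= 2)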
And in this table, I have found an index on Name1-Name4.
I don't know why the SQL below is very slow:
select Name1, sum(C1), ...., Sum(C100)
from (
    select Name1, Name2, sum(C1) as C1, ...., Sum(C100) as C100
    from (
        select Name1, Name2, Name3, sum(C1) as C1, ...., Sum(C100) as C100
        from (
            select Name1, Name2, Name3, Name4, C1, ...., C100
            from My_Table
            group by Name1, Name2, Name3, Name4
        ) as T
        group by Name1, Name2, Name3
    ) as T
    group by Name1, Name2
) as T
group by Name1
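For what it's worth: since SUM can be re-aggregated, the three levels of derived tables return the same totals as grouping once, so (assuming the intent really is just totals per Name1) the whole thing should collapse to a single pass. Worth comparing the plan of this sketch against the original:

SELECT Name1, SUM(C1) AS C1, /* ... */ SUM(C100) AS C100
FROM My_Table
GROUP BY Name1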
In the Flat File Source properties window there's a Preview node; when we check that node, there's an option to skip a given number of data rows. Does it affect the result?
Hi, I have a column in a table which tells me whether tax should be calculated on a price. This is stored in a column called taxIncluded. I have a 'price before tax' column, e.g. grossPrice. I want to calculate the price after tax, e.g. netPrice, if the taxIncluded column is set to 1. How do I form my SQL statement to test whether taxIncluded is set to 1 and therefore add the tax at, say, 20%?
e.g.

id   name       grossPrice   taxIncluded   netPrice   <--- calculated within SQL
--   --------   ----------   -----------   --------
1    Product1   10.00        1             12.00
2    Product2   20.00        0             20.00
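A CASE expression handles this; a minimal sketch, assuming a flat 20% rate and a hypothetical table name Products:

SELECT id,
       name,
       grossPrice,
       taxIncluded,
       CASE WHEN taxIncluded = 1
            THEN grossPrice * 1.20   -- add 20% tax
            ELSE grossPrice
       END AS netPrice
FROM dbo.Products   -- hypothetical table name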
I'm using MS SQL7 as database software, but now I plan to upgrade SQL7 to SQL2000. I would like to know: is there any effect on the old database already in use with SQL7 after I upgrade to MSSQL2000?
I had a report (.rdl) using SQL Reporting Services in SQL Server 2005, where it was running quite well. I have just installed SQL Server 2005 SP2. After that, when I run the report through the report viewer, the result contains the first record of the dataset repeated for all other records. If I run the dataset, the results are correct, but if I preview it, the first record alone repeats for all other records.
I feel that SP2 might have caused this rendering problem. Any suggestions, please?
Does Service Pack 2 affect client tools? In other words, on a client machine with just the tools installed (Management Studio, BIDS, etc.), do I need to run the SP2 package as well?
I am developing a report that should be localized dynamically. The report will have both Arabic and Western users. Therefore I need to use the direction property of some single text boxes, which works fine. But when setting the direction property to RTL for a table, it has no effect! You would assume that the whole table's content would be "reversed" and also that the text in the table would run from right to left. But no.
When viewing the report in IE, if you right-click and choose "View Source", you can see that the table does not have a direction style set, even though I set the property on the table in design mode.
I am going crazy over this issue! Could this be an MS bug?
I really need some help here...
PS. I have tried numerous combinations of the different International properties to get the direction property to work - with no luck.
Could a simple update statement on a user database ever cause space usage in tempdb? Assume the update statement fires no triggers and uses no temp tables.
I.e.:

USE DatabaseA
UPDATE TableX SET col1 = X
The reason I ask is that tempdb filled up, and the only thing I could see running at that time was the update statement.
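Short answer: yes, it can. Even without triggers or temp tables, an update may use tempdb for internal sort or hash worktables, for example while maintaining indexes. On SQL Server 2005 and later (an assumption about the version in play), you can see who is allocating tempdb pages; a sketch:

SELECT session_id,
       user_objects_alloc_page_count,
       internal_objects_alloc_page_count
FROM sys.dm_db_task_space_usage
WHERE user_objects_alloc_page_count > 0
   OR internal_objects_alloc_page_count > 0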
Hi, below is sample data incoming from a source that cannot be changed. Please ignore the mishandling of zero-length strings. Obviously it is not insurmountable - I am just interested in why it is happening, because I cannot explain it.

DECLARE @t TABLE(the_data CHAR(73))
SET DATEFORMAT dmy
SET NOCOUNT ON

INSERT INTO @t
SELECT ' 11'+SPACE(5)+'1649KN889001 2'+space(10)+'0'+space(10)+'08 01 2002'+space(10)+'04 10 2002'
UNION ALL
SELECT ' 11'+SPACE(5)+'1649KN889001 2'+space(10)+'109 08 2004'+space(20)+'21 07 2005'
UNION ALL
SELECT ' 11 13026721XX198734 1'+space(10)+'0'+space(10)+'XXXXXXXXXX'+space(10)+'09 01 2003'

SELECT CAST(REPLACE(REPLACE(date1_text,' ','/'),'XXXXXXXXXX',NULL) AS SMALLDATETIME) AS date_1_prob,
       CAST(REPLACE(REPLACE(date1_text,' ','/'),'XXXXXXXXXX','') AS SMALLDATETIME) AS date_1_ok_ish,
       CAST(NULLIF(REPLACE(date1_text,' ','/'),'XXXXXXXXXX') AS SMALLDATETIME) AS date_1_fine,
       date1_text
FROM
-- derived table - selecting relevant substring
(SELECT LTRIM(RTRIM(SUBSTRING(the_data, 44, 10))) AS date1_text FROM @t) AS der_t

date_1_prob             date_1_ok_ish           date_1_fine             date1_text
----------------------- ----------------------- ----------------------- ----------
NULL                    2002-01-08 00:00:00     2002-01-08 00:00:00     08 01 2002
NULL                    1900-01-01 00:00:00     1900-01-01 00:00:00
NULL                    1900-01-01 00:00:00     NULL                    XXXXXXXXXX

Can anyone explain the result in the first row, first column? Thanks
We have an Access application using Jet. I added some new indexes yesterday, and now they are being blamed for poor Access application performance. I then dropped the new indexes. The poor performance continued until the Access application was re-linked to the SS2000 database; then things returned to normal.
Question: does Access/Jet persist SQL Server schema info in its MDB (or elsewhere)? I am told that the MDB is copied from a share to the local PC, where it grows during its use. Some people are saying the MDB persists schema info about the SS2000 schema, which influences how Jet accesses the SS2000 database. Is that true? Is there a link where I can read about this? I am a DBA and am not an Access developer...
With SQL2005 SP2, we are seeing that when auto stats run on one or more indexes of a large table (1.5M rows), the stored proc using that table immediately starts acting as if the query plan is no longer any good. This causes a drastic slowdown in response time and a corresponding increase in table reads to get the data. E.g., the next execution of the procedure after the auto stats kick in goes from 355 reads to 755,000 reads (as depicted by Profiler). Generally, there are about 25 people using the DB at any one time. They connect through a mid-tier VB component.
I tried adding WITH RECOMPILE to the stored proc in question, but that caused almost all executions to run at the higher number. I thought that the WITH RECOMPILE hint would create a new query plan for each execution of the procedure and that plan would be the latest and greatest. Perhaps it did, but most users got stuck with the higher number of reads anyway. After taking the hint out, everyone went back to getting the 335 number and quick response times.
What we are wrestling with is that when those auto stats hit, it really messes up everyone until we manually recompile the procedure. Daily we delete all records in the table that are over 45 days old, so the table stays pretty much the same size. We also set the recompile flag to cause a new plan to be generated that will reflect the smaller amount of data. Should we also run a stats update before recompiling the procedure? Profiler has been very helpful in capturing what is going on, so I think I have a good handle on that. However, I don't understand why WITH RECOMPILE produced a messed-up plan for everyone. The compile itself seems to take only 1 ms when done from the query screen.
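On the 'stats update before recompiling' question: updating statistics with a full scan and then flushing the plans that reference the table is a common sequence; a hedged sketch with an assumed table name:

UPDATE STATISTICS dbo.BigTable WITH FULLSCAN   -- hypothetical table name
EXEC sp_recompile 'dbo.BigTable'               -- marks all dependent procs for recompile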
In a recent attempt to keep the size of my transaction log files down, I altered the schedule of my SQL Server log backups from running every 15 minutes from 07:00 to 19:00 to running every 15 minutes around the clock. My company also uses a Dell AppAssure application to take backups. The backups are of the entire drives, so I don't think this will affect the size of my SQL log files, but I did notice that AppAssure has a tick box to truncate the SQL logs, so it made me wonder whether it could affect the size of my log file. The AppAssure backups currently run every 15 minutes from 07:00 to 19:00. I'm wondering if I would be able to maintain my log files at a smaller size if I ran this every 15 minutes from 07:00 to 07:00.
With synonyms, I was encouraged to separate my db into several smaller dbs, like base, dynamic, static, and security. Now I am trying to use mirroring, and I see it may cause a problem; I think I need to mirror all of them to another server. My question is: when the server is down, will all dbs switch to the mirror server at the same time? One can manually set which db is the principal, but in my case it will not work if the principal server for all four dbs is not the same.
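To the failover part of the question: database mirroring fails over one database at a time; there is no built-in grouped failover, so each principal has to be switched explicitly (or scripted together). A sketch, using the four database names from above:

-- run on the current principal of each database
ALTER DATABASE [base] SET PARTNER FAILOVER
ALTER DATABASE [dynamic] SET PARTNER FAILOVER
ALTER DATABASE [static] SET PARTNER FAILOVER
ALTER DATABASE [security] SET PARTNER FAILOVER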
Recently we added a new table to our SQL2000 database specifically to store scanned-in images of documents. This new table contains a PK field, a couple of datetime fields, a couple of char(1) fields, and one 'image' field.
Before adding this table, the database size was approx 6GB. Six months after adding this new table, the database has grown to 18GB; 11GB of this is due to the scanned-in images.
Would this new table affect SQL performance with regard to accessing other data in the database that is unrelated to the new table?
If so, would moving this new table into its own database be recommended?
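Before deciding, it may be worth confirming exactly how much of the 18GB the image table accounts for; a quick check with a hypothetical table name:

EXEC sp_spaceused 'dbo.ScannedDocuments'   -- reports rows, reserved, data, index_size, unused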
Hi, we have a SAN for our SQL Server, and all of our DB backup copies point to one of the SAN volumes (e.g. T). We are moving the backup copies from this SAN volume to a remote server for restoring the backups. We are getting a lot of timeouts during this time. Could that copy process be causing the production timeouts? Thanks,
For some filegroups, we are turning read-write back on periodically to trickle some data into an archive. This lasts a few seconds, and then we turn read-only back on. What is the impact of doing this while a query is running on the archive?
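As a sketch of the mechanics (database and filegroup names assumed): depending on the version, switching a filegroup between read-only and read-write may require exclusive access to the database, so a query running against the archive can block the switch, or be blocked by it, rather than failing silently - worth testing on your build:

-- open the archive filegroup for the trickle load
ALTER DATABASE ArchiveDB MODIFY FILEGROUP ArchiveFG READ_WRITE
-- ... trickle inserts into the archive tables ...
-- return it to read-only
ALTER DATABASE ArchiveDB MODIFY FILEGROUP ArchiveFG READ_ONLY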