Reusing A Generated Column To Avoid Over Processing
Oct 22, 2007
Hi,
I'm constructing a query that performs a lot of datetime
calculations to generate columns.
All of those operations depend on a base calculation that is performed
in the query, whose result is stored in a returned column.
I want to find a way of reusing this generated column, to avoid
re-running that calculation for each of the other operations, because
this query will be used in a critical application, and every saving
counts.
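One common pattern (a minimal sketch; the Orders table, OrderDate column, and the derived expressions are hypothetical stand-ins for the poster's schema) is to name the base calculation once in a CROSS APPLY and reuse the alias:

-- A minimal sketch; dbo.Orders, OrderDate and the derived columns are
-- hypothetical. CROSS APPLY names the base datetime calculation once so
-- the other columns can reuse the alias instead of repeating the expression.
SELECT o.OrderID,
       base.AgeInDays,
       base.AgeInDays / 7  AS AgeInWeeks,
       base.AgeInDays % 7  AS DaysIntoWeek
FROM dbo.Orders AS o
CROSS APPLY (SELECT DATEDIFF(day, o.OrderDate, GETDATE()) AS AgeInDays) AS base;

The optimizer may still inline the expression, but the source states it only once, and the same trick works with a derived table or CTE.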
I'm trying to test some queries in Query Analyzer without reusing the already-cached query plan. I know that there is a way to avoid that, but I can't remember it right now. Another option would be to restart the MS SQL service, but I don't want to do that. Any thoughts...?
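The commands the poster is probably thinking of are these; both act server-wide, so run them only on a test box:

-- Clear cached data and plans so timings reflect a cold server.
CHECKPOINT;             -- flush dirty pages so the buffer drop is complete
DBCC DROPCLEANBUFFERS;  -- empty the data cache, forcing cold reads
DBCC FREEPROCCACHE;     -- discard all cached query plans, forcing recompiles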
Begin
    Truncate table A

    Insert into A (Col1, Col2, Col3, ...)
    Select Value1, Value2, Value3, ...
    From Table B
End
The insert query takes approximately 3.5 minutes to execute. What happens is that the table is truncated immediately, so there are no rows in the table for those 3.5 minutes.
How can I avoid this gap, where the table is empty for that period of time during the job execution? The table could be locked, but that doesn't seem like the best solution.
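One common workaround, sketched here under the assumption that table A can be swapped atomically, is to load a staging copy and rename it into place, so readers always see either the old rows or the new ones:

-- A sketch, assuming a staging table A_Staging with the same schema as A
-- and a source table named TableB.
BEGIN TRAN;
    TRUNCATE TABLE A_Staging;
    INSERT INTO A_Staging (Col1, Col2, Col3)
    SELECT Value1, Value2, Value3 FROM TableB;

    -- swap the freshly loaded table into place
    EXEC sp_rename 'A', 'A_Old';
    EXEC sp_rename 'A_Staging', 'A';
    EXEC sp_rename 'A_Old', 'A_Staging';
COMMIT;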
use dbWebsiteLO
SELECT A.vehicleref, A.manufacturer, A.cvehicle_shortmodtext, A.derivative,
       min(a.ch) as price, A.capid, B.studioimages,
       (SELECT term
        FROM vwAllMatrixWithLombardAndShortModel
        WHERE vehicleref = a.vehicleref and ch = price)
FROM vwAllMatrixWithLombardAndShortModel A
LEFT OUTER JOIN dbPubMatrix..tblCarImages B on A.capid = b.capid
WHERE a.source <> 'LOM' AND a.type = 'car'
GROUP BY a.vehicleref, a.manufacturer, a.cvehicle_shortmodtext, a.derivative,
         a.capid, b.studioimages
I get: Invalid column name 'price'. I'm trying to reference the "min(a.ch) as price" alias.
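A column alias defined in a SELECT list isn't visible elsewhere at the same query level, which is why 'price' can't be found. One fix (a sketch built from the query above) is to do the grouping in a derived table and reference the alias from outside:

-- A sketch: compute the aggregate in a derived table so the alias "price"
-- is a real column name by the time the correlated subquery uses it.
use dbWebsiteLO
SELECT g.vehicleref, g.manufacturer, g.cvehicle_shortmodtext, g.derivative,
       g.price, g.capid, g.studioimages,
       (SELECT term
        FROM vwAllMatrixWithLombardAndShortModel
        WHERE vehicleref = g.vehicleref AND ch = g.price) AS term
FROM (SELECT A.vehicleref, A.manufacturer, A.cvehicle_shortmodtext,
             A.derivative, MIN(A.ch) AS price, A.capid, B.studioimages
      FROM vwAllMatrixWithLombardAndShortModel A
      LEFT OUTER JOIN dbPubMatrix..tblCarImages B ON A.capid = B.capid
      WHERE A.source <> 'LOM' AND A.type = 'car'
      GROUP BY A.vehicleref, A.manufacturer, A.cvehicle_shortmodtext,
               A.derivative, A.capid, B.studioimages) AS g;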
Hi, with some help today I was able to get my stored procedure running and have the results emailed to me. However, this is how it's showing up:

Accounting_Year  WK_IN_FYEAR  Location  GL_Account  Col            Data           Difference
---------------  -----------  --------  ----------  -------------  -------------  ----------
2007             49           Test1     500-001     -2587872.0200  -2587872.0200  .0000
2007             49           Test2     500-001     -3344713.5000  -3344713.5000  .0000
2007             49           Test3     500-001

Is there any way to line them up side by side properly? When I have two columns selected the format comes out OK. Thanks for all the help again! Here is the sp:

CREATE PROCEDURE [dbo].[spEmailVariance]
    (@SubjectLine as varchar(500),
     @EmailRecipient VARCHAR(100))
AS
DECLARE @strBody varchar(5000)
SET @SubjectLine = 'Weekly Flash Update'
SET @strBody = 'Select statement'
EXEC master.dbo.xp_sendmail
    @recipients = 'XX@XXXX.com',
    @subject = @SubjectLine,
    @query = @strbody
RETURN
GO
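A likely cause, assuming the misalignment comes from xp_sendmail wrapping query output at its default line width of 80 characters, can be addressed by widening the output or attaching it as a file:

-- A sketch: xp_sendmail wraps result lines at 80 characters by default;
-- raising @width keeps each row on one line. @attach_results = 'TRUE' is an
-- alternative that sends the result set as an attachment instead.
EXEC master.dbo.xp_sendmail
    @recipients = 'XX@XXXX.com',
    @subject    = @SubjectLine,
    @query      = @strBody,
    @width      = 250;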
My question: Is it okay to drop all the auto-generated column statistics? (For the following scenario.)
- I am cleaning up unnecessary objects (tables, unused indexes, overlapping statistics, etc.) from our databases and found more than 1,400 auto-generated column statistics on one database (let's call it A).
- Database A used to be our reporting database, but for the last several years we have been using database B for reporting. DB A has all the historical data, while DB B only has valid records.
- We update all the column statistics with a full scan nightly on database A, and it is taking almost 2.5 hours. Now I want to drop all the "unnecessary" statistics that were created when DB A was the reporting database and are no longer in use. There is no way that I know of to find the creation date of a column statistic, and the statistics' "last update date" is of no use because of our nightly job. So I was thinking of dropping all the auto-generated column statistics and letting SQL Server create what it needs from now on.
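If you decide to go that way, something like this sketch generates the DROP statements for just the auto-created statistics, which are flagged in sys.stats (and conventionally named _WA_Sys_*):

-- A sketch: script DROP STATISTICS for every auto-created column statistic
-- on user tables. Review the generated statements before running them.
SELECT 'DROP STATISTICS '
       + QUOTENAME(OBJECT_SCHEMA_NAME(s.[object_id])) + '.'
       + QUOTENAME(OBJECT_NAME(s.[object_id])) + '.'
       + QUOTENAME(s.name)
FROM sys.stats AS s
WHERE s.auto_created = 1
  AND OBJECTPROPERTY(s.[object_id], 'IsUserTable') = 1;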
Hi there, newbie here. I'm building a web application that allows for tagging of items, using ASP.NET 2.0, C# and SQL Server. I have my USERS, ITEMS and TAGS separated out into three tables, with an intersection table to connect them. Imagine a user has found an item they are interested in and is about to tag it. They type their tag into a textbox and hit Enter. Here's what I want to do: I want to search the TagText column in my TAGS table to see if the chosen tag is already in the table. If it is, the existing entry will be used in the new relationship the user is creating; thus I avoid inserting a duplicate value in this column and save space. If the value is not already in the column, a new entry will be created. Here's where I'm up to: I can type a tag into a textbox and then feed it to a query, which returns any matches to a GridView control. Now I'm stuck... I imagine I have to use an "if else" scenario, but I'm unsure of the code I'll have to use. I also think that maybe ADO.NET could help me here, but I have not yet delved into this. Can anyone give me a few pointers to help me along? Cheers!
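One way to push the "use existing or create new" decision into the database (a sketch; only TAGS and TagText come from the post, while the TagID identity column and the procedure name are assumptions):

-- A sketch of a get-or-create procedure; TAGS/TagText are from the post,
-- TagID and the procedure name are hypothetical.
CREATE PROCEDURE dbo.GetOrCreateTag
    @TagText nvarchar(100),
    @TagID   int OUTPUT
AS
BEGIN
    SELECT @TagID = TagID FROM TAGS WHERE TagText = @TagText;

    IF @TagID IS NULL
    BEGIN
        INSERT INTO TAGS (TagText) VALUES (@TagText);
        SET @TagID = SCOPE_IDENTITY();   -- identity of the row just inserted
    END
END

The returned @TagID can then be used directly in the INSERT into the intersection table, so no duplicate TagText rows are ever created.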
I have a column bar chart which displays values for each month. As per the requirement, I am displaying the column values by selecting the "Show labels" option, but I see that a few values overlap the column bars.
Hello friends, I need a suggestion. I am currently working on a reporting website that generates reports, and I need to store all the reports in the database.
I usually go with row-wise processing, as it can be easily controlled, but the problem is that there will be a lot of reports, an estimated 30,000 rows per month, and I'm not sure whether SQL Server can hold more than 2 billion rows per table.
Hi all (I am using SQL Server 2005). I have created a new 'CUSTOMERS' table with a column 'CustomerID' as an identity column. Now, the problem I find is that when I delete a particular record, its identity value is automatically reused for the new record I insert later! I do not want to re-use an already-used identity value; I just want the latest CustomerID to be higher than all the previous ones. Is there any way to do this? Thanking you in advance, Tomy
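One possible remedy, assuming the identity seed has somehow fallen behind the highest existing CustomerID (for example after a reseed or a restore), is to push the seed up to the current maximum:

-- A sketch: reseed the identity to the current maximum CustomerID so the
-- next insert always receives a higher value.
DECLARE @maxID int;
SELECT @maxID = MAX(CustomerID) FROM CUSTOMERS;
DBCC CHECKIDENT ('CUSTOMERS', RESEED, @maxID);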
select distinct
    case
        when LastStatusMessageIDName = 'Program completed with success'
            then 'Office 2013 SP1 Installed Successfully'
        when LastExecutionResult = '2013'
            then 'Machine Does not have Office 2013'
        when LastExecutionResult = '17023'
            then 'User cancelled installation'
        when LastExecutionResult = '17302'
            then 'Application failed due to low disk space.'
[Code] .....
The output for the given query is below. Here I want to see only one row for that comment value in my list, and the count should be the sum of all rows whose comment is "Application will be installed once machine is online":
Comment                                               Machine Name
----------------------------------------------------  ------------
Application will be Installed once machine is Online             4
Application will be Installed once machine is Online            12
Application will be Installed once machine is Online            42
Application will be Installed once machine is Online           120
Machine Does not have Office 2013                               25
User cancelled installation                                     32
Application failed due to low disk space                        41
Office 2013 SP1 already Exist                                   60
I need the output like below, with each comment on a single line:
Application will be Installed once machine is Online           178
Machine Does not have Office 2013                               25
User cancelled installation                                     32
Application failed due to low disk space                        41
Office 2013 SP1 already Exist                                   60
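One way to get there (a sketch; dbo.SourceTable and the MachineCount column are hypothetical stand-ins, the CASE is the one from the post) is to compute the comment once in a derived table and then group on it:

-- A sketch: derive the comment once, then aggregate.
SELECT t.Comment, SUM(t.MachineCount) AS [Machine Name]
FROM (
    SELECT MachineCount,
           CASE
               WHEN LastStatusMessageIDName = 'Program completed with success'
                   THEN 'Office 2013 SP1 Installed Successfully'
               WHEN LastExecutionResult = '2013'
                   THEN 'Machine Does not have Office 2013'
               WHEN LastExecutionResult = '17023'
                   THEN 'User cancelled installation'
               WHEN LastExecutionResult = '17302'
                   THEN 'Application failed due to low disk space.'
               ELSE 'Application will be Installed once machine is Online'
           END AS Comment
    FROM dbo.SourceTable
) AS t
GROUP BY t.Comment;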
I have a question about indexed and non-indexed PERSISTED computed columns on SQL Server 2005. Is it a bug?
First, my version of SQL Server is Microsoft SQL Server 2005 - 9.00.3186.00 (Intel X86) Aug 11 2007 03:13:58 Copyright (c) 1988-2005 Microsoft Corporation Developer Edition on Windows NT 5.1 (Build 2600: Service Pack 2)
Now I create two tables and try four select queries:
SET ANSI_NULLS ON
SET ANSI_PADDING ON
SET ANSI_WARNINGS ON
SET ARITHABORT ON
SET CONCAT_NULL_YIELDS_NULL ON
SET NUMERIC_ROUNDABORT OFF
SET QUOTED_IDENTIFIER ON
GO
create table t1 (id int primary key, id_bigint as cast(id as bigint))
GO
create table t2 (id int primary key, id_bigint as cast(id as bigint) persisted)
GO
select * from t1 -- (1) Clustered index scan with two Compute Scalars
GO
select * from t2 -- (2) Clustered index scan with one Compute Scalar
GO
create index IX_t2 on t2 (id_bigint)
GO
select * from t2 -- (3) Index scan with one Compute Scalar
GO
select * from t2 where id_bigint = 0 -- (4) Index seek with one Compute Scalar
GO
drop table t1
GO
drop table t2
GO
SET ANSI_PADDING OFF
1. I don't understand why access to a computed column raises the scalar computation two times.
2. I don't understand why access to a persisted computed column raises any scalar computation at all.
3. I don't understand why access to a persisted computed column through an index requires any scalar computations.
Can anyone from the Microsoft SQL Server team tell me about this? Is it a BUG, or do I misunderstand the meaning of the "PERSISTED" keyword?
-- Thanks in advance. WBR, Roman S. Golubin grominc[at]gmail.com
I have to implement a complex algorithm that processes each row and each column. I have a bunch of complex steps, at the end of which a table gets populated with the processed columns.
My question is whether it is possible and feasible to do this kind of processing using CLR integration, or whether I should stick to T-SQL.
One of the steps of the per-column processing is as follows:
1) For each column, find the successive invalid values from the start of the column. Invalid value = any value < 0.
2) Find the invalid-value depth of each column (the number of successive invalid values from the start).
3) If, after these invalid values, there is a valid value and then another invalid value, replace the invalid value with the valid value; i.e., replace an invalid value only if it has a valid value above it.
4) Find the column with the maximum invalid-value depth and delete that many rows from the top of the table.
Here's an example. Suppose there are two columns, colA and colB. The columns can have different data types, e.g. decimal, int, string, etc.; for simplicity, colA and colB are ints here. RowID keeps track of the row number. colA starts out as 0, -5, -3, 1, 4, -9, 5, 8 for RowIDs 1 through 8.
Step 1) Successive invalid values from the start = 0, -5, -3.
Step 2) Invalid-value depth = 3 (because there are 3 rows from step 1).
Step 3) 0, -5, -3 do not have any valid value above them, but -9 has a valid value, 4, above it, so replace -9 with 4.
So colA after the algorithm will look as follows:

RowID  ColA
-----------
1       0
2      -5
3      -3
4       1
5       4
6       4   (replaced -9 with 4)
7       5
8       8
Now do the next column, colB:

RowID  ColB
-----------
1      -6
2       0
3       0
4      -7
5       4
6       8
7      -5
8      -8
Step 1) Successive invalid values from the start = -6, 0, 0, -7.
Step 2) Depth of invalid values = 4.
Step 3) The next invalid value, -5, occurs at RowID 7 and has a valid value, 8, above it. Replace -5 with the previous valid value, i.e. 8.
RowID 8 has the invalid value -8. Its previous invalid value (-5) got replaced by a valid value, 8, so replace RowID 8 as well with the value of RowID 7, i.e. 8.
Output at the end of these steps:

RowID  ColB
-----------
1      -6
2       0
3       0
4      -7
5       4
6       8
7       8   (replaced -5 with 8)
8       8   (replaced -8 with 8)
Step 4) Get the maximum invalid-value depth. In this case colB had depth = 4, which is greater than colA's depth of 3, so delete 4 rows from the beginning of the table. The output will be:
RowID  colA                      colB
------------------------------------------------------
5      4                         4
6      4  (replaced -9 with 4)   8
7      5                         8  (replaced -5 with 8)
8      8                         8  (replaced -8 with 8)
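For what it's worth, steps 1-3 are expressible in set-based T-SQL. Here is a sketch for a single int column, assuming a work table dbo.Work(RowID int, ColA int) and treating 0 as invalid, as the worked example does (the post's definition says < 0):

-- Steps 1-2 (a sketch): the invalid-value depth is the count of leading rows
-- with no valid (> 0) value at or before them.
SELECT COUNT(*) AS InvalidDepthA
FROM dbo.Work AS w
WHERE NOT EXISTS (SELECT 1 FROM dbo.Work AS v
                  WHERE v.RowID <= w.RowID AND v.ColA > 0);

-- Step 3 (a sketch): replace each invalid value that has a valid value above
-- it with the nearest valid value above it.
UPDATE w
SET ColA = (SELECT TOP 1 v.ColA
            FROM dbo.Work AS v
            WHERE v.RowID < w.RowID AND v.ColA > 0
            ORDER BY v.RowID DESC)
FROM dbo.Work AS w
WHERE w.ColA <= 0
  AND EXISTS (SELECT 1 FROM dbo.Work AS v
              WHERE v.RowID < w.RowID AND v.ColA > 0);

Whether this beats a CLR row-by-row pass is something only testing will tell, but set-based statements like these usually scale better than per-row loops.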
I have built a package in which I use a Derived Column transformation to create a new set of columns and then pass them on to a target transformation.
The issue I am now facing is that a certain number of records (16 rows) come from the source and are processed up to the Derived Column transformation, but after that, no records get processed past the derived column transformation.
The package status shows Success, but no records are being written to the target table.
I have to implement a complex algorithm that processes each row and each column. I have a bunch of complex steps, at the end of which a table gets populated with the processed columns.
How do I do this using CLR integration?
I am working on an SSIS package. I want error records to be redirected to a different table. Natively, the package passes the Error Code and the Error Column (a number identifying the column). I found a script to get the error description, but I can't find an equivalent to get the column name.
I have to implement a complex algorithm that processes each row and each column. I have a bunch of complex steps, at the end of which a table gets populated with the processed columns.
My question is: what is the best way to do this, CLR integration or T-SQL? I would also appreciate any ideas on how to go about either approach.
I tried Beta 1 of Service Pack 1 for .NET 3.5. If I try to add an entity (and save it), I get the exception "No support for server-generated keys and server-generated values".
How can I add entities to my SQL CE database?
I tried giving the id column (the primary key) in the database an identity, and another time no identity, only a primary key; neither worked. I always get the same error.
What do I have to change to make SaveChanges() succeed?
Hi all, I am accessing one database a bunch of different times all throughout my code, in various functions and different web pages. Is there a way to create a SqlConnection that I can access all the time, instead of constantly hardcoding which database to go to? I've tried putting the info in another file and just including it where I want the database to open, but I can't use <!-- #INCLUDE --> inside of the server scripts. Can anyone help?
Every time my ASP.NET app needs to open a connection, it tries to establish a new connection with the MSSQL server. I've already set the max pool size property in the connection string. After that, my app raises a "time out" error saying it couldn't obtain a connection from the pool. The problem is that I have a lot of idle connections. With Enterprise Manager I can see the status of the connections; they're all the same, "awaiting command". How can I reuse these connections? I know that the connection string must be the same for all connections, and it is; I've set it in the web.config file. If I remove the max pool size property from the connection string I get a lot, I mean A LOT, of connections to the SQL server. Any ideas?
I want to open and close the SQL connection only once and use it in every function, without opening or closing the connection in each class file, in ASP.NET 2003. How can this be done?
I have this SP below, and I am trying to reuse the value returned by the Dateofplanningdate column so that I don't have to repeat the code for each additional column I create. I have tried temp tables and derived tables with no luck.
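Since the SP itself didn't come through, here is only the general shape of a CTE-based fix (a sketch; the table, columns, and the Dateofplanningdate calculation are all hypothetical stand-ins): compute the value once in the CTE, then derive the extra columns from the alias.

-- A sketch; dbo.Plans, OrderDate, LeadTimeDays and the derived columns are
-- hypothetical. The point is that Dateofplanningdate is computed only once.
WITH base AS (
    SELECT p.PlanID,
           DATEADD(day, p.LeadTimeDays, p.OrderDate) AS Dateofplanningdate
    FROM dbo.Plans AS p
)
SELECT PlanID,
       Dateofplanningdate,
       DATEADD(day, 7, Dateofplanningdate)   AS ReviewDate,
       DATENAME(weekday, Dateofplanningdate) AS PlanningWeekday
FROM base;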
The approach I tried was looking up an existing dialog in sys.conversation_endpoints.
However, on doing a scale test, I found that the non-blocking behavior I was hoping for wasn't happening, even though I was giving each spid its own dialog by using a conversation_group_id tied to the spid. I found that the following SQL was blocked by a transaction that contains a BEGIN DIALOG. This suggests the locking on conversation_endpoints is too excessive.
select top 1 conversation_handle
from sys.conversation_endpoints ce
join sys.services s on s.service_id = ce.service_id
join sys.service_contracts c on c.service_contract_id = ce.service_contract_id
where s.name = 'jobStats'
and ce.far_service = 'jobStats'
and (ce.far_broker_instance = @targetBroker OR @targetBroker = 'CURRENT DATABASE')
and ce.state IN ('SO','CO')
and ce.is_initiator = 1
and (ce.conversation_group_id = @conversation_group_id )--or @conversation_group_id is null)
In the Package Configurations wizard, I am trying to edit an existing configuration using the Edit button. In the configuration filter, I get a list of several filters (the filters which were used for other packages). When I try to reuse one of those filters, it forces me to set a new value, and when I go back to the SQL Server tables, I see the old value has been erased.
Can I not reuse an existing filter? Do I need to use new filters for every new package?
I have a replicated table that has a trigger attached to it. The trigger fires off a Service Broker message for inserts. Originally, for every insert I would begin a conversation, send, and end the conversation when the target sent an end conversation. Since the replication process is only using a single spid, I would like to reuse one conversation. The following is what I have for the send procedure in the initiator: I check conversation_endpoints for any open conversation; if it's null, I start a new conversation and send, else I just send on the existing conversation. Is there anything wrong with this code? What could cause the conversation on the initiator to be null if I never end the conversation on the initiator side? Thanks.
DECLARE @dialog_handle uniqueidentifier
select @dialog_handle = conversation_handle from sys.conversation_endpoints where state = 'CO'
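For reference, a sketch of the complete reuse-or-create pattern described above; the service, contract, and message-type names are hypothetical placeholders:

-- A sketch of the send procedure's core.
DECLARE @dialog_handle uniqueidentifier, @msgBody xml;
SET @msgBody = N'<RowInserted/>';

SELECT @dialog_handle = conversation_handle
FROM sys.conversation_endpoints
WHERE state = 'CO';          -- an existing open ("conversing") dialog

IF @dialog_handle IS NULL
    BEGIN DIALOG CONVERSATION @dialog_handle
        FROM SERVICE [InitiatorService]
        TO SERVICE 'TargetService'
        ON CONTRACT [InsertContract]
        WITH ENCRYPTION = OFF;

SEND ON CONVERSATION @dialog_handle
    MESSAGE TYPE [InsertMessage] (@msgBody);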
Is it possible to reuse a Lookup component that is configured with full caching?
My requirement is as follows....
An input file has two columns, CurrentLocation and PreviousLocation. In the data flow, the values of these two columns need to be replaced with values from a lookup table called "Location".
In my package I have added two Lookup components, which replace the values of CurrentLocation and PreviousLocation with the values available in the table "Location". Is there any way to reuse the cache of the first Lookup component for the second column as well?
Hi! I'm wondering why my sys.conversation_endpoints table inserts a new row for each message I send, even when I reuse conversations. When I send the first message, I get the first row in sys.conversation_endpoints, with a uniqueidentifier for the conversation_handle. This uniqueidentifier is then saved in a table, which I query the next time I send a message in order to reuse the dialog conversation. But even though it looks like the uniqueidentifier is reused, I still get a new row for every message I send, with a different conversation_handle. This happens in both the target and initiator DBs.
I've tried to understand this, but I can't.
Also, for the moment I don't end conversations, but as I understand it, this shouldn't matter.
Also, the messages successfully arrive at the target, and sys.transmission_queue is empty in both databases. Neither queue has any error messages in it.
I currently have multiple (parent and child) packages using the same config file. The config file has entries for connections to a number of systems, and not all of them are used by the child packages. Hence, my child package throws an error when it tries to configure itself using the same config file, because it can't find the extra connections in its connection collection.
Does anyone have any ideas on the best way to resolve this? Are multiple config files (one for each connection) the only way?
I have a real heartache with runtime parameter interrogation on my DB. Sure, I get the latest and greatest, and sure, I don't have to type in all those lovely parameter types... but the hit I take on performance, making no fewer than 3 DB round trips for each SqlAdapter, is unreasonable!
So... I like the idea of maybe calling it once for all my stored procs on application startup, and then saving the result in a cache object.
My problem is that I can't see where you can even serialize a SqlParametersCollection, or even, for that matter, assign it to a Command object. Can you cache a Command object?
LOL
I think I may just have to write some generic routine for creating and populating my command objects based on a key (type) and then use that to fetch my command.Update, command.Insert and command.
I would like to use the new AsynchBlock to do the fetching of the stored proc parameters and then just pull them from the cache object... and put a file watch on it so that if the DBs change my params, it re-pulls them again.
*nice*.....
Then I get the best of both worlds: caching, and no parameter writing...