Hi,
I have a large table, rrsn_security_t, with more than half a million rows. I do a complete update of all the rows in the table using the following query:
UPDATE a
SET
cusip = b.fmr_cusip,
master_issuer_num = b.mstr_isr_cusip,
ticker = b.fmr_symb,
description = b.fmr_name,
prim_exchange_code = c.exchange_key,
shares_otstndng = d.amount,
iv_type = b.iv_typ,
active = CASE WHEN b.deact_date IS NULL THEN 'Y' ELSE 'N' END
FROM rrsn_security_t a
INNER JOIN ref_security_t b
    ON a.security_id = b.fmr_cusip
LEFT OUTER JOIN shares_outstanding_feed_t d
    ON b.fmr_cusip = SUBSTRING(d.fmr_cusip, 1, 9)
    AND d.fmr_type = 'OUTS'
LEFT OUTER JOIN rrs_exchange_t c
    ON b.dft_exch_cd = c.exchange_id
WHERE b.fmr_cusip NOT IN (SELECT security_id FROM rrsn_scrty_ovrrd_in_effect_t)
This is part of a daily batch load and a DTS package. The table has one clustered index and three nonclustered indexes, two of which are covering indexes. The update touches all of the columns that the nonclustered indexes are built on.
The problem is that when I run the update, the transaction log grows to more than a GB and the statement takes almost an hour. Without the indexes it takes around 300 MB and 7 minutes.
I am not comfortable with the idea of dropping and recreating the indexes, since that should not be necessary in SQL Server 7.0, though it was in previous versions of SQL Server.
Also, with the indexes in place, the query plan shows a Table Spool/Eager Spool (to optimize rewinds) accounting for 50% of the query cost.
Could anyone help me with how I should deal with this situation?
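If the log growth itself is the main pain point, one option (only a sketch, using SET ROWCOUNT since this is SQL Server 7.0, with an arbitrary 50,000-row batch size) is to run the update in batches and only touch rows whose values actually change, so each transaction stays small and the log has a chance to truncate between batches:
-- Sketch: batch the update so each transaction stays small (SQL Server 7.0 syntax).
-- Only two columns are shown; the remaining SET clauses and joins from the
-- original statement would go in the same places.
SET ROWCOUNT 50000                      -- arbitrary batch size
WHILE 1 = 1
BEGIN
    UPDATE a
    SET cusip  = b.fmr_cusip,
        ticker = b.fmr_symb
    FROM rrsn_security_t a
    INNER JOIN ref_security_t b
        ON a.security_id = b.fmr_cusip
    WHERE (a.cusip <> b.fmr_cusip OR a.ticker <> b.fmr_symb)   -- skip unchanged rows (NULLs would need explicit handling)
      AND b.fmr_cusip NOT IN (SELECT security_id FROM rrsn_scrty_ovrrd_in_effect_t)

    IF @@ROWCOUNT = 0 BREAK              -- stop when nothing is left to change
END
SET ROWCOUNT 0                           -- restore normal row limits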
I have created a table from another table where I specified that one of the fields, a number field, is sorted in ascending order, and have NOT specified that it is to be an indexed field. There are 10 million records, numbered from 1 to 10,000,000 exactly.
Now, if I query that table, asking to return records 1-1,000 from that non indexed number field that I sorted in ascending order (where number field <= 1,000) , will it run as fast as if it were indexed?
In other words, does SQL know somehow that these records are sorted in ascending order and so will not do a full table scan, stopping at 1,000 to return my data set?
Or is there no way for SQL to know this, and only specifying an indexed field lets SQL know the data is in some order, so it doesn't have to do the full scan?
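For what it's worth, SQL Server keeps no memory of the order rows happened to be inserted in, so without an index that predicate is a full scan. A sketch of the index that would let it seek instead (the table and column names are placeholders for yours):
-- Hypothetical names; an index is what tells the optimizer the data is ordered.
CREATE INDEX IX_BigTable_NumberField ON dbo.BigTable (NumberField)

SELECT NumberField
FROM dbo.BigTable
WHERE NumberField <= 1000      -- with the index this becomes a range seek, not a scan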
I'm a newbie to SQL looking for some how-to help. I have an existing DB, and within a certain table I have created a new column via Management Studio, but need help with the following:
I need to make the new column unique and indexed, but cannot see anywhere in the Management Studio interface to do it.
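In case it helps, the same thing can be done in T-SQL; a sketch, assuming the table is dbo.MyTable and the new column is NewCol (both names are placeholders):
-- Option 1: a UNIQUE constraint (SQL Server creates a unique index behind it)
ALTER TABLE dbo.MyTable ADD CONSTRAINT UQ_MyTable_NewCol UNIQUE (NewCol)

-- Option 2: an explicit unique nonclustered index
CREATE UNIQUE NONCLUSTERED INDEX IX_MyTable_NewCol ON dbo.MyTable (NewCol)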
I am trying to use an indexed view to allow aggregations to be generated more quickly in my test data warehouse. The fact table I am creating the indexed view on has a partitioned clustered columnstore index.
I have created a view with the following code:
ALTER VIEW dbo.FactView WITH SCHEMABINDING
AS
SELECT local_date_key, meter_key, unit_key, read_type_key,
       SUM(ISNULL(read_value, 0))        AS [s_read_value],
       SUM(ISNULL(cost, 0))              AS [s_cost],
       SUM(ISNULL(easy_target_value, 0)) AS [s_easy_target_value],
       SUM(ISNULL(hard_target_value, 0)) AS [s_hard_target_value],
       SUM(ISNULL(read_value, 0))        AS [a_read_value],
       SUM(ISNULL(temperature, 0))       AS [a_temp],
       SUM(ISNULL(co2, 0))               AS [s_co2],
       SUM(ISNULL(easy_target_co2, 0))   AS [s_easy_target_co2],
       SUM(ISNULL(hard_target_co2, 0))   AS [s_hard_target_co2],
       SUM(ISNULL(temp1, 0))             AS [a_temp1],
       SUM(ISNULL(temp2, 0))             AS [a_temp2],
       SUM(ISNULL(volume, 0))            AS [s_volume],
       COUNT_BIG(*)                      AS [freq]
FROM dbo.FactConsumptionPart
GROUP BY local_date_key, read_type_key, meter_key, unit_key
I then created an index on the view as follows:
create unique clustered index IDX_FV on factview (local_date_key, read_type_key, meter_key, unit_key)
I then followed this up by running some large calculations that require the aggregation functionality on the main fact table, grouping by the clustered index columns and returning only averages and sums that are available in the view. However, the optimizer still uses the underlying table to perform the aggregations rather than the view I created. Running an equivalent query against the view directly takes about 75% less time than using the fact table. My understanding was that in SQL Server Enterprise or Developer edition (I am using Developer edition) a query against the fact table should automatically use the indexed view. What might I be missing, such that the query does not use the indexed view?
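One thing worth trying as a sanity check (a sketch, not a diagnosis): reference the view directly with the NOEXPAND hint, which forces the optimizer to read the indexed view even when automatic view matching does not kick in. The date filter below is purely hypothetical.
-- Sketch: force use of the indexed view; column list abbreviated.
SELECT local_date_key, meter_key, s_read_value, s_cost, freq
FROM dbo.FactView WITH (NOEXPAND)
WHERE local_date_key BETWEEN 20180101 AND 20180131   -- hypothetical filter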
I have a student table like this: studentid, schoolid, previousschoolid, gradelevel.
I would like to load this table every day from the student system.
During the year a student can change schoolid; whenever there is a change, I want to put the current record's schoolid into the previousschoolid column and set schoolid to the new schoolid from the student system.
My merge statement is something like the one below:
MERGE INTO student st
USING (SELECT * FROM InputStudent) AS ins
    ON st.id = ins.studentid
WHEN MATCHED THEN UPDATE
    SET st.schoolid = ins.schoolid,
        st.previouschoolid = CASE WHEN st.schoolid <> ins.schoolid
                                  THEN st.schoolid
                                  ELSE st.previouschoolid END,
        st.grade_level = ins.grade_level;
My question is: since schoolid is set in the first line of the SET list, will the second line still see what the previous schoolid was?
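For reference, in SQL Server every column reference on the right-hand side of a SET assignment (in UPDATE and in MERGE's WHEN MATCHED THEN UPDATE) evaluates to the value before the update, so the order of the assignments does not matter. A tiny self-contained sketch that demonstrates it:
-- Sketch: all right-hand sides see the pre-update values, so assignment order
-- in the SET list does not matter.
DECLARE @t TABLE (schoolid int, previousschoolid int)
INSERT INTO @t VALUES (10, NULL)

UPDATE @t
SET schoolid = 20,
    previousschoolid = CASE WHEN schoolid <> 20 THEN schoolid ELSE previousschoolid END

SELECT * FROM @t   -- returns schoolid = 20, previousschoolid = 10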
I'm using SQL Server 7. I have an invoice table. The invoice table has a datetime column called InvoiceDate. The InvoiceDate column contains the following date format: 5/3/00 I would like to use the InvoiceDate column to update the char (6) column called zInvoiceDate as a formatted date field like yymmdd.
The following syntax did not work: SET zInvoiceDate = Convert([ARInvoiceID],GetDate()12)
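For reference, CONVERT takes the target type first, the value second, and the style number last; style 12 produces yymmdd. A sketch against the invoice table (I'm assuming it is called Invoice):
-- Sketch: style 12 of CONVERT formats a datetime as yymmdd (e.g. 5/3/00 -> '000503').
UPDATE Invoice
SET zInvoiceDate = CONVERT(char(6), InvoiceDate, 12)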
When a user clicks a button, an sql query is fired which increments the view_count value by one and calculates a new percentage value from this. The query to update the percentage value doesn't work, here's the query:
UPDATE [statistics] SET percentage = follow_count / view_count * 100 WHERE (stat_id = 15)
This code worked fine with MySQL, but since migrating to MSSql it doesn't seem to work. The data type of the percentage column is: decimal(5, 2)
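One likely culprit (a guess, but a common one after moving to SQL Server): if follow_count and view_count are integer columns, the division is done in integer arithmetic and truncates to 0 before the multiplication. A sketch of the usual fix:
-- Sketch: force decimal arithmetic so the division keeps its fraction.
UPDATE [statistics]
SET percentage = CAST(follow_count AS decimal(9,2)) / view_count * 100
WHERE stat_id = 15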
I have a problem I've been searching on all day, but I can't find an answer anywhere; maybe someone here can help. What I want to do is give a column in a table the same value as another column from the same table. For example: table Requests. A request has a RelatedRequestId which links another request to it. I want the date from the linked request copied into the date of the master request, because all the master requests' dates are empty and I want them to have the date from the linked request.
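A sketch of one way to do it, assuming the columns are called RequestId, RelatedRequestId and RequestDate (adjust to your actual names):
-- Sketch: copy the date from the linked request onto the master request.
UPDATE m
SET m.RequestDate = l.RequestDate
FROM Requests AS m
INNER JOIN Requests AS l
    ON l.RequestId = m.RelatedRequestId
WHERE m.RequestDate IS NULL          -- only fill the empty master dates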
Hi, I have a table with three columns, as below.
Table name: exp
No (int), name (char), refno (int)
I have data as below:
No name refno
1 a
2 b
3 c
I need to update refno with the No values, so I wrote a query as below:
update exp set refno = (select no from exp)
When I run the query I get the error: "Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <=, >, >= or when the subquery is used as an expression."
I need to update one column with another column's value. What is the correct query for this?
Thanks, Mani
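Since both columns live on the same row, no subquery is needed at all; a sketch:
-- Sketch: copy the value of No into refno on every row of exp.
UPDATE exp
SET refno = No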
In the properties of a table in MS Access there is an Indexed setting. If someone had set that to "Yes (Duplicates OK)", how can one implement the same thing in MSSQL?
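For what it's worth, Access's "Indexed: Yes (Duplicates OK)" is simply a non-unique index, which in SQL Server would look something like this (table and column names are placeholders):
-- Sketch: a plain (non-unique) nonclustered index allows duplicate values.
CREATE NONCLUSTERED INDEX IX_MyTable_MyColumn ON dbo.MyTable (MyColumn)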
I need to update the value in a column by adding to the value already in that column.
For example, the value in the column is already 5 and I need to add 8 to it, i.e. 5 + 8 = 13. The final value in the column should be 13. How do I go about doing this?
UPDATE Table 1 SET ColumnName +=8 ?
Thanks in advance. really at a loss and very urgent.
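For reference, the += shorthand only exists from SQL Server 2008 onwards (and the table name cannot contain a stray space, as in "Table 1"); the long-hand form below works on any version, reusing your Table1/ColumnName names:
-- Sketch: add 8 to whatever is already stored in the column.
-- Add a WHERE clause if only some rows should change.
UPDATE Table1
SET ColumnName = ColumnName + 8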
I'm quite new to SQL and wondered if someone could help. I'm pretty good at writing reports but I now need to do an update.
Basically I need to update a column in table1 where column1 in table4 = 'Y'; however, to get to table4 I think I need to link all 4 tables (table4 is the last table). The tables link based on IDs.
Tried a few things such as
UPDATE table1 SET column1 = '' WHERE column1 IN ( SELECT column1 FROM table4 WHERE column1 = Y)
Now, to get to table4 I need some other join conditions to link the tables together, just as I would if I were writing a SELECT statement.
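A sketch of the join-style UPDATE, with made-up key column names (id2, id3, id4) standing in for whatever actually links your four tables:
-- Sketch: join through the intermediate tables and update only the rows
-- whose related table4 row has column1 = 'Y'. Key names are hypothetical.
UPDATE t1
SET t1.column1 = ''
FROM table1 AS t1
INNER JOIN table2 AS t2 ON t2.id2 = t1.id2
INNER JOIN table3 AS t3 ON t3.id3 = t2.id3
INNER JOIN table4 AS t4 ON t4.id4 = t3.id4
WHERE t4.column1 = 'Y'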
I have a table with a column called data. In that column is a value like: <settings><myVarOne>valueOne</myVarOne><myVarTwo>valueTwo</myVarTwo></settings>
All I'd like to do, is update all the myVarOne values. So the new value for data would be: <settings><myVarOne>newValueHere</myVarOne><myVarTwo>valueTwo</myVarTwo></settings>
This will likely be SQL2000 not SQL2005 but it would be useful to know for both.
I've looked at OPENXML but all the examples seem keen on using sp_xml_preparedocument and then OPENXML needs the @idoc so I'm thinking there is something else.
If someone can point me in the right direction that would be extremely helpful as I haven't found anything that makes sense to me. UpdateGrams seems very overblown for manipulation like this when OPENXML is sooo very close to being correct.
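For SQL 2000, where the column is just a string, plain string functions may be enough; a sketch (MyTable is a placeholder, and it assumes exactly one <myVarOne> element per row). On SQL 2005, if the column were of the xml data type, the modify() method's "replace value of" would be the more natural route.
-- Sketch (SQL 2000 style): overwrite the text between <myVarOne> and </myVarOne>.
UPDATE MyTable
SET data = STUFF(
        data,
        CHARINDEX('<myVarOne>', data) + LEN('<myVarOne>'),
        CHARINDEX('</myVarOne>', data) - CHARINDEX('<myVarOne>', data) - LEN('<myVarOne>'),
        'newValueHere')
WHERE CHARINDEX('<myVarOne>', data) > 0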
I have a table with three fields that are marked as indexed fields with unique keys. I want to remove one of the fields. When I do this and hit the rebuild button -- I get an error, that there is a duplicate field, so it will not just let me remove the one field and leave the other two?? How can I get around this?
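If the three columns are covered by a single unique index or constraint, one way around the designer (a sketch only; all names are placeholders, and the new index will still fail if the remaining two columns contain duplicate value pairs):
-- Sketch: replace the three-column unique index with one on the remaining two columns.
-- If it is a UNIQUE constraint rather than an index, use ALTER TABLE ... DROP CONSTRAINT instead.
DROP INDEX UQ_MyTable_ThreeCols ON dbo.MyTable
CREATE UNIQUE INDEX UQ_MyTable_TwoCols ON dbo.MyTable (ColA, ColB)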
I am trying to create an indexed view but cannot seem to get it right.
CREATE VIEW dbo.D_Policy_View WITH SCHEMABINDING
AS
SELECT Policy_ID, Environment_Code, CoB, Sub_CoB, Policy_No, Version_No
FROM dbo.D_Policy
WHERE Policy_ID IN (SELECT MAX(Policy_ID)
                    FROM dbo.D_Policy
                    GROUP BY Environment_Code, CoB, Policy_No, Sub_CoB)
I have read in BOL that MAX is not allowed, but I don't know of any other way to get the latest record.
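Since MAX rules the view out for indexing, one alternative (just a sketch, not necessarily the right answer for your workload) is to skip the indexed view and instead give the base table an index whose key order lets MAX(Policy_ID) per group be answered cheaply:
-- Sketch: an index ordered by the grouping columns with Policy_ID last lets the
-- MAX per group be resolved from the index rather than a full table scan.
CREATE NONCLUSTERED INDEX IX_D_Policy_MaxLookup
ON dbo.D_Policy (Environment_Code, CoB, Sub_CoB, Policy_No, Policy_ID)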
Hello, I'm currently performance tuning a table with 100 million rows in it (about 18 GB of data) and would like to know:
1. Is the table too large to be performance tuned? Is it better to just redesign the schema?
2. Can techniques such as indexed views really help me with tuning such a table?
3. How long would it take to create a clustered or nonclustered index on a varchar column (for instance) on a table with 100 million rows? (I know this is a function of hardware as well; let's assume I'm using a fairly maxed-out DL 360, i.e. dual processor with 4 GB of memory.)
I am looking for a little insight. I am using an SQL Server database created by a third party vendor. There are certain columns in a given table that I query for quite often. To speed things up, I created an indexed view.
Now I can no longer insert into the base table. Attempting an insert causes a SQL error stating that the system properties ARITHABORT and NUMERIC_ROUNDABORT are incorrect. If I remove the index from my view, the inserts work just fine.
Can somebody provide some insight as to why this happens and how I might be able to correct it (keep in mind that the DB was setup by a third party, so I cannot change too much of the underlying setup without possibly compromising their functionality).
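For reference, data modifications against a table underneath an indexed view require a specific set of session options. One workaround (a sketch, assuming you control the connection doing the inserts and don't want to touch the vendor's objects) is simply to set them explicitly before the INSERT; the table and column below are hypothetical.
-- Sketch: the SET options an indexed view requires for data modification.
SET ANSI_NULLS ON
SET ANSI_PADDING ON
SET ANSI_WARNINGS ON
SET ARITHABORT ON
SET CONCAT_NULL_YIELDS_NULL ON
SET QUOTED_IDENTIFIER ON
SET NUMERIC_ROUNDABORT OFF

INSERT INTO dbo.BaseTable (Col1) VALUES ('x')   -- hypothetical insert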
I have a problem trying to create an indexed view on SQL Server 2000. There are multiple databases, one of which stores system wide data. I would like to create an indexed view on the system wide data for each of the site databases.
Is it possible to create an indexed view on data in another database?
I am trying to create an indexed view, but because I am using a MAX function, I get the error
Cannot create index on view "dbo.View" because it uses aggregate "MAX". Consider eliminating the aggregate, not indexing the view, or using alternate aggregates. For example, for AVG substitute SUM and COUNT_BIG, or for COUNT, substitute COUNT_BIG.
Am totally stuck on how I can replace the MAX function.
Any help would be appreciated.
SET ANSI_NULLS ON
GO
SET ANSI_PADDING ON
GO
SET ANSI_WARNINGS ON
GO
SET CONCAT_NULL_YIELDS_NULL ON
GO
SET NUMERIC_ROUNDABORT OFF
GO
SET QUOTED_IDENTIFIER ON
GO
SET ARITHABORT ON
GO
CREATE VIEW [dbo].[View] WITH SCHEMABINDING AS
SELECT TOP 100 PERCENT MAX(js_id) AS job_event, job_id
FROM dbo.JobEvent
GROUP BY job_id
ORDER BY job_event
GO
CREATE UNIQUE CLUSTERED INDEX IX_VMaxJobEvent ON View (job_id)
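There is no direct substitute for MAX in an indexed view the way SUM/COUNT_BIG substitute for AVG. One workaround, sketched below, is to drop the indexed-view idea for this query and support it with an index on the base table instead, so the grouped MAX is cheap anyway:
-- Sketch: with this index, the grouped MAX below becomes a narrow index operation.
CREATE NONCLUSTERED INDEX IX_JobEvent_JobId_JsId ON dbo.JobEvent (job_id, js_id)

SELECT job_id, MAX(js_id) AS job_event
FROM dbo.JobEvent
GROUP BY job_id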
snehalata writes: "I create the view as follows:
CREATE VIEW Data WITH SCHEMABINDING
AS
SELECT A.PartitionID, FundID, ReportDate, ForeignTaxWithheld, DomDividendIncome, RGainShortTerm, RGainLongTerm, NewIssueRGainShortTerm,
NewIssueRGainLongTerm, ChgUnrealizedGain, ReplaceTax, TotIncomeBefFee, TotalIncome, EndingNetCapital, EndingRedemptionUnits,
BeginRedemptionAmount, EndingRedemptionAmount, EndingUnits, InterestOverseas, ExpenseOverseas, OrdIncome,
ReallocationExpense, BeginRedemptionFee, EndingRedFee, BeginGrossCapital, EndingGrossCapital, GPFees, FixedExpense, MergerCost,
SellingCommission, GrossRoR, NAV, GAV, GPMgmtFee, IMMgmtFee, GPIncentivefee, IMIncentivefee, NetRoR, MonthCounter,
BegUnits, BegAddUnits, BegAddAmount, EndAddAmount, GrossRealizedGain, BrokerCommission,
NetRealizedGain, OperatingExpense, OffsellExp, OrgExp, USObligationIncome, FixIncomeIntrIncome,
CapitalGain, SellingFee, SellingMgmtFeeMidQtr, SyndicateCost, BeginNetCapital, DomesticDividendExp, FixedIncomeIntrExp
FROM dbo.vPart1 A
LEFT JOIN dbo.vPart2 B ON A.PartitionID = B.PartitionID
Then if I create the index as follows:
SET ANSI_NULLS ON
SET ANSI_PADDING ON
SET ANSI_WARNINGS ON
SET ARITHABORT ON
SET CONCAT_NULL_YIELDS_NULL ON
SET QUOTED_IDENTIFIER ON
SET NUMERIC_ROUNDABORT OFF
create unique clustered index ind1 on Data (FundID, ReportDate)
it gives me the following error: Cannot index the view 'MonthliesTest2.4.dbo.Data'. It contains one or more disallowed constructs."
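Two of the usual causes of that message are that indexed views cannot contain OUTER JOINs and cannot reference other views, only base tables with two-part names. A sketch of the general shape that can be indexed, where dbo.Part1 and dbo.Part2 are hypothetical stand-ins for the tables behind vPart1 and vPart2 (note that switching to INNER JOIN changes which rows appear, so it only works if that is acceptable):
-- Sketch: base tables only, two-part names, INNER JOIN rather than LEFT JOIN.
CREATE VIEW dbo.Data WITH SCHEMABINDING
AS
SELECT A.PartitionID, A.FundID, A.ReportDate, B.TotalIncome
FROM dbo.Part1 AS A
INNER JOIN dbo.Part2 AS B
    ON A.PartitionID = B.PartitionID
GO
CREATE UNIQUE CLUSTERED INDEX ind1 ON dbo.Data (FundID, ReportDate)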
Dear experts, I've been working for an ERP solutions company as a DBA.
We have around 1200 tables as well as 650 views.
We are not using clustered indexes on views. Would using clustered indexes boost performance? The ERP is a web-based application, so modifications are made on a regular basis. Is it a good idea to implement clustered indexes on these views?
Please guide me in this regard.
Thank you very much.
Vinod Even you learn 1%, Learn it with 100% confidence.
I am trying to index a large number of PDF files using SQL Server Full Text indexing, and am running into an issue where about 1% of the documents are not being indexed. I looked in the SQL Full Text logs and the following error appears thousands of times: Error '0x80043651: msftesql should reprocess this document in an isolated fashion to confirm the error.' occurred during full-text index population for table or indexed view '[DocumentWarehouse].[dbo].[Document2006_tbl]' (table or indexed view ID '485576768', database ID '5'), full-text key value 0x00E32429. Attempt will be made to reindex it.
The component 'MSFTE.DLL' reported error while indexing. Component path 'Y:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Binn\MSFTE.DLL'. Occasionally this warning appears: Warning: No appropriate filter was found during full-text index population for table or indexed view '[DocumentWarehouse].[dbo].[Document2006_tbl]' (table or indexed view ID '5', database ID '485576768'), full-text key value 0x00FE8A91. Some columns of the row were not indexed. The problem isn't that certain PDF documents can't be indexed, because when you reinsert a doc that wasn't indexed, it does get indexed. My version of SQL Server is 9.00.2153.00 running on Windows Server 2003.
Is it possible to update from one table to another? Please examine my code here:
UPDATE tStaffDir
SET tStaffDir.ft_prevemp = ISNULL(tStaffDir_PrevEmp.PrevEmp01, ' ') + ' ' +
    ISNULL(tStaffDir_PrevEmp.PrevEmp02, ' ') + ' ' +
    ISNULL(tStaffDir_PrevEmp.PrevEmp03, ' ') + ' ' +
    ISNULL(tStaffDir_PrevEmp.PrevEmp04, ' ') + ' ' +
    ISNULL(tStaffDir_PrevEmp.PrevEmp05, ' ')
WHERE tStaffDir_PrevEmp.ID = tStaffDir.ID
I am trying to concatenate the columns from tStaffDir_PrevEmp into tStaffDir, but I get an error where tStaffDir_PrevEmp is recognised as a column and not a table. Please advise how this can be done. Many thanks.
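The error comes from referencing tStaffDir_PrevEmp without ever bringing it into the statement; in SQL Server the second table has to appear in a FROM clause. A sketch of the usual pattern:
-- Sketch: join the source table in via FROM so its columns can be referenced.
UPDATE s
SET s.ft_prevemp = ISNULL(p.PrevEmp01, ' ') + ' ' +
                   ISNULL(p.PrevEmp02, ' ') + ' ' +
                   ISNULL(p.PrevEmp03, ' ') + ' ' +
                   ISNULL(p.PrevEmp04, ' ') + ' ' +
                   ISNULL(p.PrevEmp05, ' ')
FROM tStaffDir AS s
INNER JOIN tStaffDir_PrevEmp AS p
    ON p.ID = s.ID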
The problem is with the @NUMERY parameter. In the code-behind I set:
Dim dr As GridViewRow
Dim numery As String
numery = ""
For Each dr In GridView1.Rows
    Dim numeros As Label = dr.Cells(0).Controls(1)
    numery += numeros.Text & ", "
Next
numery = numery.TrimEnd(", ")
SqlDataSource1.UpdateParameters("NUMERY").DefaultValue = numery
'so numery will look like this: 123, 65465, 54616, 56465
The update command looks like this:
UpdateCommand="UPDATE slon SET mrowka = @MROWKA WHERE (NUMER IN (@NUMERY))"
<UpdateParameters>
<asp:Parameter Name="MROWKA" />
<asp:Parameter Name="NUMERY" />
</UpdateParameters>
And because of @NUMERY I get the error: Error converting data type nvarchar to numeric. How should I pass "123, 65465, 54616, 56465" as a parameter for this query?
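The underlying issue is that IN (@NUMERY) treats the whole comma-separated string as a single value and then tries to convert it to numeric. One low-tech workaround, sketched below, is to do the membership test with string functions instead; another is to build the SQL text dynamically in the code-behind.
-- Sketch: treat @NUMERY as a delimited list and test membership with CHARINDEX.
-- Assumes the list is formatted like '123, 65465, 54616, 56465'.
UPDATE slon
SET mrowka = @MROWKA
WHERE CHARINDEX(',' + CAST(NUMER AS varchar(20)) + ',',
                ',' + REPLACE(@NUMERY, ' ', '') + ',') > 0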
I'm trying to update a datetime column from another datetime column. However, I just want the date transferred to the new column without the time. Any ideas? Thanks for your help.
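A sketch of the common trick for stripping the time while keeping the column a datetime (the table and column names are placeholders):
-- Sketch: DATEDIFF/DATEADD round the value down to midnight of the same day.
UPDATE dbo.MyTable
SET NewDateCol = DATEADD(day, DATEDIFF(day, 0, OldDateCol), 0)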
I have a table in which I have to update a particular column (receipt code) based on another column of the same table (receipt number). I have to do calculations in order to generate the correct receipt code, and I have to do this on every row of the table. How can I do this? Will this update have to be in a loop or something?
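Not necessarily a loop: as long as the receipt code can be written as an expression of the receipt number, a single set-based UPDATE touches every row at once. A sketch with a made-up table name and a purely illustrative calculation:
-- Sketch: derive receipt_code from receipt_number for every row in one statement.
-- The CASE expression here is hypothetical; substitute your real calculation.
UPDATE dbo.Receipts
SET receipt_code = CASE
                       WHEN receipt_number < 100000
                           THEN 'A' + RIGHT('00000' + CAST(receipt_number AS varchar(10)), 5)
                       ELSE 'B' + CAST(receipt_number AS varchar(10))
                   END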