I have a fairly large view that pulls data from tables and other views, uses user-defined functions, and uses CASE statements. This view is taking a long time to load.
I was under the impression that SQL Server keeps views up to date, so selecting data from them is as fast as selecting from a table. It now seems like SQL Server rebuilds the view every time something accesses it.
Can someone please tell me more about this? I am now having to rewrite everything :-(
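For reference, a standard view is just a stored query: its definition is expanded into the calling statement each time, so nothing is precomputed. If the query qualifies, an indexed view can persist the results instead. A minimal sketch with hypothetical table and column names, assuming the indexed-view requirements are met (SCHEMABINDING, deterministic expressions, no references to other views or nondeterministic UDFs, non-null column under SUM, correct SET options):

-- Hypothetical sketch: materializing a view so its results are stored like a table.
-- Assumes dbo.Orders exists and OrderTotal is NOT NULL.
CREATE VIEW dbo.vw_OrderTotals
WITH SCHEMABINDING
AS
SELECT o.CustomerID,
       COUNT_BIG(*)      AS OrderCount,   -- COUNT_BIG(*) is required when GROUP BY is used
       SUM(o.OrderTotal) AS TotalSpent
FROM dbo.Orders AS o
GROUP BY o.CustomerID;
GO
-- The unique clustered index is what actually materializes the view.
CREATE UNIQUE CLUSTERED INDEX IX_vw_OrderTotals
    ON dbo.vw_OrderTotals (CustomerID);

Because the original view references other views and UDFs, it would likely need restructuring before this approach could apply.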
I have a procedure that creates a large dynamic view over several tables. The view is a union view of up to 15 tables. The table names are all <name>_DDMM, where name is the standard table name and DDMM is the day and month of the table's data. The tables are created by software supplied by another company, so I cannot guarantee that the tables will always have exactly the same fields or number of fields. Sometimes the company adds more fields to the tables in their updates. So I have to include the field names in the SQL EXEC command that creates the view. This makes for a very long EXEC command, and depending on the number of tables it needs to include, it can require upwards of a 16,000-character string. Obviously that won't fit in a single variable, so I had to break up the variable in order to create the procedure. However, I'm wondering if there isn't a better method than creating three different varchar(8000) variables and having overflow write to the next variable in line. Especially if the number of tables needs to be expanded, it could be a problem. Is there a better way to run a CREATE VIEW EXEC command on a large number of characters?
EDIT: Changed the title to read Procedurally generating a large view.
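For what it's worth, a hedged sketch of two common workarounds (hypothetical view and table names; the two options are alternatives, not meant to run back to back): on SQL Server 2005 and later a single (MAX) variable can hold the whole statement, and on SQL Server 2000 EXEC() accepts a concatenation of variables, so no single variable ever has to hold the full text.

-- Option 1 (SQL Server 2005 and later): one nvarchar(max) variable can exceed 8000 characters.
DECLARE @sql nvarchar(max);
SET @sql = N'CREATE VIEW dbo.vw_AllDays AS SELECT * FROM dbo.Data_0101';
-- ... keep appending pieces: SET @sql = @sql + N' UNION ALL SELECT * FROM dbo.Data_0201'; ...
EXEC (@sql);

-- Option 2 (SQL Server 2000): EXEC() concatenates variables itself, so the statement only has
-- to be split cleanly across the varchar(8000) pieces.
DECLARE @sql1 varchar(8000), @sql2 varchar(8000), @sql3 varchar(8000);
SET @sql1 = 'CREATE VIEW dbo.vw_AllDays AS SELECT * FROM dbo.Data_0101';
SET @sql2 = ' UNION ALL SELECT * FROM dbo.Data_0201';
SET @sql3 = ' UNION ALL SELECT * FROM dbo.Data_0301';
EXEC (@sql1 + @sql2 + @sql3);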
Hi, our database has a very large number of objects. We have a naming convention by module, subproject, etc., but, for example, when we need to open a specific table it still takes time to find it. If we could create custom folders under the Tables folder or the Stored Procedures folder, it would be easier to find an object. We could create subfolders by module and subproject and classify our objects with these folders. Will the next version, SQL Server 2008, support this kind of functionality?
I have a simple query that joins a largish fact table (3 million rows) to a view that returns 120 rows. The SKEY in the view is returned via a scalar function. The view returns instantly if queried on its own; however, when it is joined to the fact table in the simple query below, the result is a query execution plan that runs forever. Interestingly, if I change the INNER JOIN to a LEFT OUTER JOIN, the query returns the matched results almost instantly.
SELECT Dimension.Age_Band.[10_Year_Age_Band], COUNT(*)
FROM Fact.APC_Episodes
INNER JOIN Dimension.Age_Band
    ON Fact.APC_Episodes.AGE_BAND_SKEY = Age_Band.AGE_BAND_SKEY
GROUP BY Dimension.Age_Band.[10_Year_Age_Band]
I know that joining to a view using a column generated by a scalar function is not a good recipe for performance. I also know that I could fix this by populating a physical table from the view first, as I have already tested that, though I am hoping not to have to go down that route.
Why does a LEFT OUTER JOIN work when an INNER JOIN does not, and is there any way I can get the query optimizer to generate an execution plan that works?
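As an aside, a hedged sketch of two low-effort things to try before materializing the view permanently (no guarantee either helps, since it depends on how the optimizer treats the scalar function inside the view): a join hint that forces the 120-row view to be hashed once, and a session-scoped temp table, which is essentially the physical-table workaround already mentioned but without a permanent object.

-- Sketch 1: same query with a join hint (which also fixes the join order).
SELECT Dimension.Age_Band.[10_Year_Age_Band], COUNT(*)
FROM Fact.APC_Episodes
INNER HASH JOIN Dimension.Age_Band
    ON Fact.APC_Episodes.AGE_BAND_SKEY = Age_Band.AGE_BAND_SKEY
GROUP BY Dimension.Age_Band.[10_Year_Age_Band];

-- Sketch 2: spool the small view into a temp table so the scalar function runs only 120 times.
SELECT AGE_BAND_SKEY, [10_Year_Age_Band]
INTO #Age_Band
FROM Dimension.Age_Band;

SELECT ab.[10_Year_Age_Band], COUNT(*)
FROM Fact.APC_Episodes AS f
INNER JOIN #Age_Band AS ab
    ON f.AGE_BAND_SKEY = ab.AGE_BAND_SKEY
GROUP BY ab.[10_Year_Age_Band];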
I am studying indexes and keys. I have a table whose first column is loaded with fixed-width data, which is later parsed by a view into separate columns according to the fixed-width specification.
Example: column A contains the fixed-width string (name, phone, cost of house, zip code, county, state, country), which a view will later split apart; column B is the source filename of the data load (varchar(256)).
a. Would there be a benefit to adding a clustered or nonclustered index (if so, which, and a pointer on why)?
b. Is there a benefit to making one of these two columns a primary key (millions of records), or to adding a third new column as a PK?
c. The view parses the data in column A so it ends up looking more like name, phone, cost of house, zip code, county, state, country, each having its own column.
Are there any pros/cons of adding indexes (if so, which) to the view instead of the table, or to both, once the data is parsed?
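A hedged sketch of one common arrangement for a staging table like this (hypothetical names; certainly not the only valid answer): a narrow surrogate identity column as the clustered primary key, plus a nonclustered index on the filename column for load lookups. Indexing the parsing view itself would require it to be schema-bound and to meet the indexed-view rules.

-- Hypothetical staging table: identity PK, clustered, so inserts append at the end
-- instead of splitting pages on a wide varchar key.
CREATE TABLE dbo.StagingRaw
(
    RawID      bigint IDENTITY(1,1) NOT NULL,
    RawLine    varchar(500) NOT NULL,    -- column A: the fixed-width string
    SourceFile varchar(256) NOT NULL,    -- column B: file the row was loaded from
    CONSTRAINT PK_StagingRaw PRIMARY KEY CLUSTERED (RawID)
);

-- Nonclustered index to support "give me all rows from file X" style queries.
CREATE NONCLUSTERED INDEX IX_StagingRaw_SourceFile
    ON dbo.StagingRaw (SourceFile);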
I am looking to create a constraint on a table that allows multiple NULLs but requires all non-NULL values to be unique. I found the following script, http://www.windowsitpro.com/Files/0.../Listing_01.txt, that works fine, but the following line

CREATE UNIQUE CLUSTERED INDEX idx1 ON v_multinulls(a)

appears to use indexed views. I have run this on a copy of SQL Server Standard Edition and this line works fine. I was under the impression that you could only create indexed views on SQL Server Enterprise Edition?
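For context, a minimal sketch of that pattern (hypothetical table name, reusing the view/index names from the listing): the view selects only the non-NULL rows, so the unique clustered index on the view enforces uniqueness among them while the base table still accepts any number of NULLs. As far as I understand, creating an indexed view is allowed on any edition; Enterprise Edition is only needed for the optimizer to use the view automatically without the NOEXPAND hint.

-- Hedged sketch of the "unique except NULLs" pattern.
CREATE TABLE dbo.t_multinulls (a int NULL);
GO
CREATE VIEW dbo.v_multinulls
WITH SCHEMABINDING
AS
SELECT a
FROM dbo.t_multinulls
WHERE a IS NOT NULL;   -- NULL rows never reach the view, so they are never compared
GO
-- Duplicate non-NULL values now fail the insert; duplicate NULLs are still allowed.
CREATE UNIQUE CLUSTERED INDEX idx1 ON dbo.v_multinulls(a);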
Write a CREATE VIEW statement that defines a view named Invoice Basic that returns three columns: VendorName, InvoiceNumber, and InvoiceTotal. Then, write a SELECT statement that returns all of the columns in the view, sorted by VendorName, where the first letter of the vendor name is N, O, or P.
This is what I have so far:
CREATE VIEW InvoiceBasic AS
SELECT VendorName, InvoiceNumber, InvoiceTotal
FROM Vendors
    JOIN Invoices ON Vendors.VendorID = Invoices.VendorID
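A hedged sketch of the second half of the exercise (assuming the view above compiles and the column names are as shown): the SELECT that filters on the first letter of the vendor name and sorts the result.

-- Query the view, keep vendors starting with N, O, or P, sort by name.
SELECT *
FROM InvoiceBasic
WHERE LEFT(VendorName, 1) IN ('N', 'O', 'P')   -- or: WHERE VendorName LIKE '[NOP]%'
ORDER BY VendorName;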
I created a query which makes use of a temp table, and I need the results to be displayed in a view. Unfortunately, views do not support temp tables, as far as I know, so I put my code in a stored procedure, with the hope that I could call it from a view...
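Calling a stored procedure from a view isn't supported directly, so one common workaround, sketched below with hypothetical names, is a multi-statement table-valued function: it can use a table variable where the original query used a temp table, and a view (or any SELECT) can reference it.

-- Hedged sketch: temp-table logic moved into a table-valued function.
CREATE FUNCTION dbo.fn_MyResults ()
RETURNS @result TABLE (ID int, Amount money)
AS
BEGIN
    DECLARE @work TABLE (ID int, Amount money);     -- stands in for the old #temp table

    INSERT INTO @work (ID, Amount)
    SELECT ID, Amount FROM dbo.SourceTable;         -- hypothetical source query

    INSERT INTO @result (ID, Amount)
    SELECT ID, SUM(Amount) FROM @work GROUP BY ID;  -- whatever the temp-table step did

    RETURN;
END
GO
CREATE VIEW dbo.vw_MyResults AS
SELECT ID, Amount FROM dbo.fn_MyResults();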
I compared the view's query plan with the plan I get if I run the same statement from the view definition, and I get different results. The view's plan is more expensive and runs longer. The view contains 4 inner joins, and statistics are updated for all tables. Any ideas?
I had given one of our developers CREATE VIEW permissions, but he also wants to modify views that are not owned by him; they are owned by dbo.
I ran a Profiler trace and determined that when he tries to modify a view using the query designer in SQL Enterprise Manager, or right-clicks the view in SQLem and goes to Properties, it performs an ALTER VIEW. It does the same for dbo in a trace (an ALTER VIEW). With both methods he gets a call failed and a permission error saying he doesn't have CREATE VIEW permission because the object is owned by dbo.
If it is doing an ALTER VIEW, how can I set permissions for that, and why does it give a CREATE VIEW error when it's really doing an ALTER VIEW? Very confusing.
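As far as I know, SQL Server 2000 has no separately grantable ALTER VIEW permission, which is presumably why the error is reported in terms of CREATE VIEW; altering an object owned by dbo generally requires membership in a role such as db_ddladmin or db_owner. A hedged sketch (hypothetical user name), assuming broad DDL rights are acceptable:

-- Let the developer alter objects in the database by adding him to db_ddladmin.
-- Note this grants DDL rights on all objects, not just views, so it may be broader than wanted.
EXEC sp_addrolemember N'db_ddladmin', N'DevUserName';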
CREATE VIEW dbo.vwFeat
AS
SELECT dbo.Lk_Feat.Descr, dbo.Lk_Feat.Price, dbo.Lk_Feat.Code, dbo.SubFeat.SubNmbr
FROM dbo.Lk_Feat
INNER JOIN dbo.SubFeat ON dbo.Lk_Feat.Idf = dbo.SubFeat.Idt
Whenever I open the view in SQL Enterprise Manager to edit it by adding or removing a field, it inserts Expr1, Expr2, ... aliases, and I don't want that. The result I get is:
SELECT dbo.Lk_Feat.Descr AS Expr1, dbo.Lk_Feat.Price AS Expr2, dbo.Lk_Feat.Code AS Expr3, dbo.SubFeat.SubNmbr AS Expr4
FROM dbo.Lk_Feat
INNER JOIN dbo.SubFeat ON dbo.Lk_Feat.Idf = dbo.SubFeat.Idt
I don't want Enterprise Manager to generate the Expr aliases, since I use the real field names in my application. Thanks for the help.
I was looking through our vendor's views, searching for something I needed for our data warehouse, and I came across something I do not understand: I found a view that lists data when I use it in T-SQL; however, when I try to use the statement after modifying the view (via MS SQL Server Management Studio), I cannot execute the statement. I get: The column prefix 'dbo.tbl_5001_NumericAudit' does not match with a table name or alias name used in the query. Upon closer inspection, I found two ONs for the inner join, which I don't think is correct. So, how can the view work, but not the SQL that defines the view? SQL Server 2000, up-to-date patches:

SELECT dbo.tbl_5001_NumericAudit.aEventID, dbo.tbl_5001_NumericAudit.nParentEventID,
       dbo.tbl_5001_NumericAudit.nUserID, dbo.tbl_5001_NumericAudit.nColumnID,
       dbo.tbl_5001_NumericAudit.nKeyID, dbo.tbl_5001_NumericAudit.dChangeTime,
       CAST(dbo.tbl_5001_NumericAudit.vToValue AS nVarchar(512)) AS vToValue,
       dbo.tbl_5001_NumericAudit.nChangeMode, dbo.tbl_5001_NumericAudit.tChildEventText,
       CASE WHEN nConstraintType = 3 THEN 5 ELSE tblColumnMain.nDataType END AS nDataType,
       dbo.tbl_5001_NumericAudit.nID,
       CAST(dbo.tbl_5001_NumericAudit.vFromValue AS nVarchar(512)) AS vFromValue
FROM dbo.tbl_5001_NumericAudit WITH (NOLOCK)
LEFT OUTER JOIN dbo.tblColumnMain WITH (NOLOCK)
INNER JOIN
-- Poster's comment: here is the double ON --
dbo.tblCustomField WITH (NOLOCK)
    ON dbo.tblColumnMain.aColumnID = dbo.tbl_5001_NumericAudit.nColumnID
    ON dbo.tbl_5001_NumericAudit.nColumnID = dbo.tblCustomField.nColumnID
LEFT OUTER JOIN dbo.tblConstraint WITH (NOLOCK)
    ON dbo.tblCustomField.nConstraintID = dbo.tblConstraint.aConstraintID
    AND (dbo.tblConstraint.nConstraintType = 4 OR dbo.tblConstraint.nConstraintType = 9 OR dbo.tblConstraint.nConstraintType = 3)
UNION ALL
SELECT aEventID, nParentEventID, nUserID, nColumnID, nKeyID, dChangeTime,
       CAST(CAST(vToValue AS decimal(19, 6)) AS nVarchar(512)) AS vToValue,
       nChangeMode, tChildEventText, 5 AS nDataType, nID,
       CAST(CAST(vFromValue AS decimal(19, 6)) AS nVarchar(512)) AS vFromValue
FROM dbo.tbl_5001_FloatAudit WITH (NOLOCK)
UNION ALL
SELECT aEventID, nParentEventID, nUserID, nColumnID, nKeyID, dChangeTime,
       CAST(vToValue AS nVarchar(512)) AS vToValue, nChangeMode,
       tChildEventText, 2 AS nDataType, nID,
       CAST(vFromValue AS nVarchar(512)) AS vFromValue
FROM dbo.tbl_5001_StringAudit WITH (NOLOCK)
UNION ALL
SELECT aEventID, nParentEventID, nUserID, nColumnID, nKeyID, dChangeTime,
       CONVERT(nVarchar(512), vToValue, 121) AS vToValue, nChangeMode,
       tChildEventText, 3 AS nDataType, nID,
       CONVERT(nVarchar(512), vFromValue, 121) AS vFromValue
FROM dbo.tbl_5001_DateAudit WITH (NOLOCK)
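For readers puzzled by the double ON: T-SQL allows nested joins, where ON clauses are matched to joins from the innermost outward, so the pattern is itself legal syntax, as illustrated below with throwaway tables. Whether that fully explains the view compiling while the pasted statement does not, I can't say for certain.

-- Hedged illustration: B INNER JOIN C is joined as a unit to A, and the second ON
-- belongs to the outer LEFT JOIN.
CREATE TABLE #A (id int); CREATE TABLE #B (id int); CREATE TABLE #C (id int);

SELECT *
FROM #A
LEFT OUTER JOIN #B
    INNER JOIN #C ON #B.id = #C.id      -- inner ON: pairs B with C
    ON #A.id = #B.id;                   -- outer ON: pairs A with the (B join C) result

-- Roughly the same thing written with explicit grouping:
SELECT *
FROM #A
LEFT OUTER JOIN (#B INNER JOIN #C ON #B.id = #C.id)
    ON #A.id = #B.id;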
What happens if you have a website and the hard drive on your server is, say, 250GB, and then the database exceeds that and grows to 300GB in size? How would you spread your database across two different hard drives? Thanks.
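A hedged sketch (hypothetical database, logical file name, and path): SQL Server lets a database span multiple files on different drives, so one option is to add a secondary data file on the second drive; new allocations then spread across the files.

-- Add a second data file on another drive so the database can keep growing there.
ALTER DATABASE MyShopDB
ADD FILE
(
    NAME = MyShopDB_Data2,                       -- logical file name (hypothetical)
    FILENAME = 'E:\SQLData\MyShopDB_Data2.ndf',  -- path on the second drive (hypothetical)
    SIZE = 10240MB,
    FILEGROWTH = 1024MB
);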
We currently have a data warehouse running on SQL 7.0, SP2. One of our primary fact tables now has well over 155 million rows in it. The table is not very wide, as it only contains 17 columns, most of which are defined as integers. The entire database is only 20 GB.
The issue is that the loads from the staging table to this fact table have significantly deteriorated over the last month or so, dropping from over 400 transactions per second to around 85. We drop all the indexes on the fact table before we load the data into it.
Are there issues with a manageable table size in SQL 7.0 that we need to be concerned about? And should we consider partitioning the table into several smaller tables and joining them with a "union all" view?
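For reference, a minimal sketch of the UNION ALL partitioned-view idea (hypothetical table and column names; on SQL Server 7.0 such a view is read-only and the optimizer's partition elimination is more limited than in SQL 2000):

-- Member tables with a CHECK constraint on the partitioning column,
-- exposed through a single UNION ALL view.
CREATE TABLE dbo.FactSales_199901
(
    SaleDate datetime NOT NULL CHECK (SaleDate >= '19990101' AND SaleDate < '19990201'),
    StoreID  int      NOT NULL,
    Amount   money    NOT NULL
);
CREATE TABLE dbo.FactSales_199902
(
    SaleDate datetime NOT NULL CHECK (SaleDate >= '19990201' AND SaleDate < '19990301'),
    StoreID  int      NOT NULL,
    Amount   money    NOT NULL
);
GO
CREATE VIEW dbo.FactSales AS
SELECT SaleDate, StoreID, Amount FROM dbo.FactSales_199901
UNION ALL
SELECT SaleDate, StoreID, Amount FROM dbo.FactSales_199902;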
I really need to get this performance issue resolved, as our IT support vendor is pushing us to port the data warehouse to UDB because they tell us that SQL server is not scalable enough to handle this volume of data.
Hi, anyone who is administering a pretty big database, not less than 30 GB, with an average of about 2 million rows or more per table, please share your experience with maintenance of such a DB. Especially I'm interested in:
1) Index maintenance (when and how: just regular DBCC, a maintenance plan, or some script to split the job in two, and so on).
2) Removing unused space from the DB (not major).
The server works 24x7, and it's a transactional environment. SQL 7 SP3 on a cluster.
When I run the SP to rebuild all the indexes, it takes about 2-3 hours to determine the objects with fragmentation (less than 80%) and actually rebuild them, and during this process the users experience performance problems (especially for update/insert). It looks like I need to change the plan or strategy for doing this. Any thoughts appreciated!
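A hedged sketch of a lighter-touch approach (table name hypothetical): measure fragmentation with DBCC SHOWCONTIG first and rebuild only the worst tables with DBCC DBREINDEX, a few tables per night, rather than touching everything in one window. (On SQL Server 2000 and later, DBCC INDEXDEFRAG is an online alternative, but it is not available on SQL 7.)

-- Check fragmentation for one table, then rebuild only if it looks bad.
DECLARE @id int;
SET @id = OBJECT_ID('dbo.Orders');       -- hypothetical table
DBCC SHOWCONTIG (@id);                   -- inspect Scan Density / Logical Fragmentation

-- Rebuild all indexes on that one table (offline; schedule only the worst offenders each night).
DBCC DBREINDEX ('dbo.Orders', '', 90);   -- '' = all indexes, 90 = example fill factor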
I have inherited some databases with extremely large log files. I tried to truncate the transaction log, but it did not work. Can somebody please tell me how to truncate these log files?
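A hedged sketch of the usual sequence on older versions (hypothetical database and logical file names; note that truncating the log this way breaks the log backup chain, so a full backup should follow):

USE MyDatabase;   -- hypothetical database name
GO
BACKUP LOG MyDatabase WITH TRUNCATE_ONLY;   -- SQL 2000/2005 syntax; removed in SQL 2008 and later
GO
-- Find the log's logical file name with sp_helpfile, then shrink the physical file, e.g. to 500 MB.
DBCC SHRINKFILE (MyDatabase_Log, 500);
GO
-- Take a full backup afterwards so the backup chain is valid again.
BACKUP DATABASE MyDatabase TO DISK = 'D:\Backups\MyDatabase.bak';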
I would like to know whether MS SQL Server 2000 can handle a large database, around 28 GB in size. Currently we are using Sybase, but we want to migrate to SQL Server 2000. Are there any performance issues?
OS: Windows 2003 Standard. SQL: 2000 Standard SP4. 2 CPUs, 4 GB memory.
We have a table that is approximately 155 GB with 450 million rows: 90 days' worth of data, normalized pretty tight. Obviously this beast is too much to handle (reporting, maintenance), so we are partitioning the data off into weekly tables. I will never need to store more than 90 days' worth of data, so at the end of each week we will truncate the oldest week's table and change the constraint on it to handle the next week. Anyway, we have done this and it is working well in two test environments with a lot less data.
I need to migrate the data from the one 450-million-row table into the 13 weekly tables that will approximately make up the 90 days' worth of data. I hoped to BCP out the data and BULK INSERT it (fast mode). But even one week (approximately 35 million rows) takes forever, and I don't even see it start to output. I have changed the batch size to be larger. Is there a better/faster way to do this, or am I just going to have to wait? I'm open to any suggestions.
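For comparison, two hedged sketches (hypothetical names, paths, and date column; both assume the target weekly table has no indexes during the load):

-- Sketch 1: skip the flat file and copy one week with a single INSERT...SELECT filtered by the
-- week's date range. Note: on SQL 2000 this is fully logged, so the log must be able to absorb
-- roughly 35 million rows per week.
INSERT INTO dbo.Fact_Week01
SELECT *
FROM dbo.Fact_All
WHERE EventDate >= '20070101' AND EventDate < '20070108';

-- Sketch 2: BCP out one week at a time in native format, then load with TABLOCK and a batch size;
-- with no indexes on the target and bulk-logged/simple recovery this load can be minimally logged.
-- bcp "SELECT * FROM MyDB.dbo.Fact_All WHERE EventDate >= '20070101' AND EventDate < '20070108'" queryout E:\week01.dat -n -S MyServer -T
BULK INSERT dbo.Fact_Week01
FROM 'E:\week01.dat'
WITH (DATAFILETYPE = 'native', TABLOCK, BATCHSIZE = 100000);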
Note: the hardware is all RAID 1+0, reading from one array and writing to a different array. Performance just seems too slow, but I'm not sure what to expect from something this big.
We have many installations of our shopping cart database. One specific database is huge, now about 25 GB, compared to the others that range from 20 to 75 MB. The server this one resides on has three other instances of the same database that are normal size.
In a particular table in the large database there are 9,700 rows taking 380 MB; the same table in a normal DB has 162,000 rows and takes 6 MB. The tables are identical and the indexes are the same.
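A hedged sketch of how I'd start diagnosing that table (hypothetical table name): compare the space and fragmentation numbers directly, then try a rebuild and see how much space comes back.

-- See where the 380 MB is going (data vs. index vs. unused space).
EXEC sp_spaceused 'dbo.CartItems', @updateusage = 'true';

-- Check fragmentation / page density.
DBCC SHOWCONTIG ('dbo.CartItems');

-- Rebuild the indexes; if the bloat is fragmentation or low page density, this reclaims it.
DBCC DBREINDEX ('dbo.CartItems', '', 0);   -- 0 = keep the original fill factor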
I have a large DB, around 250 GB, in which 70% of the space is occupied by one single table. This is not a high-transaction DB; it has only a few users.
Backup is taking 2:30 hours!
Is there a way I can reduce this time?
In desperation, I am thinking of doing the following:
1. Detach the DB (off-peak hours)
2. Copy the .mdf and .ldf files (the backup)
3. Attach the DB
Any suggestion ?
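Two hedged alternatives before resorting to detach/copy/attach (hypothetical names and paths): stripe the full backup across several files on different drives so the write throughput isn't bottlenecked on one disk, and/or take full backups less often with differential backups in between.

-- Stripe the full backup across multiple files (ideally on separate physical drives).
BACKUP DATABASE BigDB
TO  DISK = 'E:\Backup\BigDB_1.bak',
    DISK = 'F:\Backup\BigDB_2.bak',
    DISK = 'G:\Backup\BigDB_3.bak';

-- Weekly full + daily differential, so the long backup only runs once a week.
BACKUP DATABASE BigDB TO DISK = 'E:\Backup\BigDB_diff.bak' WITH DIFFERENTIAL;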
------------------------ I think, therefore I am - Rene Descartes
In my job I often come across the following scenario:
A client rings up and says they have run out of server space because the SQL 2000 transaction log has consumed all of the space, or a very large portion of it.
What is the correct procedure for resolving this ASAP when working with full recovery mode SQL 2000 databases? I ask because other guys within my company all do different things, sometimes with good results and sometimes with bad results.
The procedure I use is the following:
METHOD 1
1. Back up both the database and the transaction log.
2. Right-click the database and select Detach, which from my understanding is a clean detach method that ensures uncommitted transactions are committed to the database.
3. Rename the old transaction log to .ldfOLD.
4. Reattach the database, which creates a new transaction log.
METHOD 2
I don't use this method, but I'm pretty sure it's risky; if possible, can someone provide me with the reasons why?
1. Change the database from full to simple recovery mode, shrink the logs, and then change it back to full mode.
Basically, what I am asking is: what is the fastest and most effective way to sort out the above issue while keeping the ability to roll back successfully?
PLEASE don't comment on why the transaction log is so big, as I don't want to look into that now. All I am asking is what is the most effective method to shrink the log down and save space.
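For comparison, a hedged sketch of the sequence that keeps the recovery chain intact (hypothetical database/file names and paths): back the log up to disk, which frees the space inside the log file, then shrink the physical file. Both Method 1 (detaching and discarding the log) and Method 2 (switching to simple) break the log chain, so point-in-time recovery is lost until the next full backup.

-- Free and shrink the log without breaking the full-recovery backup chain.
USE FullModeDB;
GO
BACKUP LOG FullModeDB TO DISK = 'E:\Backup\FullModeDB_log.trn';   -- empties the active log
GO
DBCC SHRINKFILE (FullModeDB_Log, 1000);   -- shrink the physical log file to ~1000 MB
GO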
I am developing an application that has a table with lots of records (network traffic), but the data is summarized every so often to create summary records (old records are deleted). The problem is that I have a PK based on an auto-increment ID (int) that will run out of numbers. However, this ID is not referenced anywhere (it is not a foreign key in another table, it is not used for deletion, and there are no updates to this table whatsoever).
So my possibilities are:
1. Reseed the ID when it is about to run out.
2. Make the ID bigint.
3. Remove the ID and change the PK to two other fields.
4. Remove the ID and go without a PK.
I am leaning toward option 4, because I do not see the need for a PK, but I understand that it is quite out of the norm, so I would like to hear from other people (I do not have much experience with DBs).
I also like option 3. I already have an index on one of the other fields (time).
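Hedged sketches of options 1 and 2 (hypothetical table/column names), since they are nearly one-liners: option 2 is the least disruptive if an extra 4 bytes per row is acceptable, while option 1 only works because the old rows are deleted and the ID is never referenced elsewhere.

-- Option 2 sketch: widen the identity column to bigint (the PK constraint must be dropped
-- and recreated around the change; shown in outline form).
ALTER TABLE dbo.TrafficLog DROP CONSTRAINT PK_TrafficLog;
ALTER TABLE dbo.TrafficLog ALTER COLUMN LogID bigint NOT NULL;
ALTER TABLE dbo.TrafficLog ADD CONSTRAINT PK_TrafficLog PRIMARY KEY (LogID);

-- Option 1 sketch: reseed the identity back to the start once the old rows are gone.
DBCC CHECKIDENT ('dbo.TrafficLog', RESEED, 0);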
I have a table which has over 450,000 records in it. I have now split this into 4 tables so each has around 100,000 records in it, but I'm still having the problem of the data being returned really slowly.
What I need to do with this data is group it by a code and show the total for each code for every month of the year (this is currently based on one column, selecting the data accordingly). I have created views and put some indexes on my tables, but the results are still being returned slowly. Does anyone have any suggestions for how I can speed this up?
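One hedged idea (hypothetical table/column names; assumes the totals can be expressed with SUM/COUNT_BIG, the summed column is NOT NULL, and the view can be schema-bound): an indexed view that pre-aggregates by code and month, so the monthly report reads a few hundred pre-summed rows instead of scanning 450,000.

-- Pre-aggregated, indexed view over the detail table.
CREATE VIEW dbo.vw_MonthlyTotals
WITH SCHEMABINDING
AS
SELECT Code,
       YEAR(TransDate)  AS TransYear,
       MONTH(TransDate) AS TransMonth,
       SUM(Amount)      AS TotalAmount,   -- assumes Amount is NOT NULL
       COUNT_BIG(*)     AS RowCnt         -- required for indexed views with GROUP BY
FROM dbo.Transactions
GROUP BY Code, YEAR(TransDate), MONTH(TransDate);
GO
CREATE UNIQUE CLUSTERED INDEX IX_vw_MonthlyTotals
    ON dbo.vw_MonthlyTotals (Code, TransYear, TransMonth);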
I'm trying to export data from an SQL table using classic ASP. In the past, I have created tab-delimited text files using code similar to the following:
RSb.Open "SELECT * FROM tblData1 WHERE ID=1 ORDER BY txtField1", con, 3, 3 Do Until RSb.EOF Response.Write RSb("txtField1")&vbTAB Response.Write RSb("txtField2")&vbTAB Response.Write RSb("txtField3")&vbTAB Response.Write vbCRLF
RSb.MoveNext Loop RSb.Close
This always worked fine in the past because the tables were small. Now the tables are exporting 100,000 records and I'm getting errors. I'm setting the page timeout and the SQL connection timeout, but I'm still getting errors. I'm not exactly sure what's happening, but the export stops after exporting about 5 MB. It varies where it stops, so I'm thinking it's timing out somewhere.
Is there a better way to do this? Possibly an export function in MS SQL that would export faster?
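One hedged option that takes the export out of the ASP page entirely (hypothetical server, database, and path names): have SQL Server write the tab-delimited file itself with the bcp utility in queryout mode, which is built for this volume, and then just link to or stream the finished file.

-- Export the query straight to a tab-delimited file with bcp (tab is bcp's default field terminator).
-- Run from a command prompt or a scheduled job; -c = character mode, -T = trusted connection.
-- bcp "SELECT txtField1, txtField2, txtField3 FROM MyDB.dbo.tblData1 WHERE ID = 1 ORDER BY txtField1" queryout "E:\Exports\data1.txt" -c -S MyServer -T

-- The same command can be kicked off from T-SQL if xp_cmdshell is enabled (often disabled by policy):
EXEC master..xp_cmdshell 'bcp "SELECT txtField1, txtField2, txtField3 FROM MyDB.dbo.tblData1 WHERE ID = 1 ORDER BY txtField1" queryout "E:\Exports\data1.txt" -c -S MyServer -T';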
If I use BCP to export a very large table, will that table be blocked for writes during the export process? I don't want to prevent users from accessing that table during the bcp process. Thank you, TFD.
Dear experts, what is the best way to do a large insert WITHOUT having direct access to the machine SQL Server is running on? For example, imagine I want to insert something like 20,000 records. If I had access to the server, I could BULK INSERT into a temp table and then insert into the destination table. But if I can't create a file on the server to use for BULK INSERT, what is the next best alternative to doing lots of single-record INSERT statements? Thanks, -Emin
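A hedged sketch of the usual middle ground (hypothetical names): keep the individual INSERT statements but send them in batches wrapped in explicit transactions with SET NOCOUNT ON, which removes most of the per-statement log-flush and round-trip overhead; 20,000 rows is usually quite manageable this way. Note also that BULK INSERT can read from a UNC path the SQL Server service account can reach, so a file physically on the server is not strictly required.

-- Batch the inserts so each transaction commits once per ~1000 rows instead of once per row.
SET NOCOUNT ON;
BEGIN TRAN;
INSERT INTO dbo.TargetTable (Col1, Col2) VALUES (1, 'a');
INSERT INTO dbo.TargetTable (Col1, Col2) VALUES (2, 'b');
-- ... roughly 1000 INSERTs per batch, generated by the client ...
COMMIT TRAN;

-- Or, if a network share is reachable by the server's service account:
BULK INSERT dbo.TargetTable
FROM '\\fileserver\share\load.txt'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n');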
Hi all, my question is: what are the best practices for administering large DBs? (My coworker is the DB administrator. I'm more of the developer, but slowly being sucked in.) My main concern is that we have some DBs that take approximately 3 hours a night just to rebuild the indexes. I know that with MSSQL 2000 I can use partitioned views to break out the table(s) into smaller databases and tables. But we also have an older server that runs MSSQL 7. Lastly, how do you handle drive space issues? Do you spread out the DB across multiple MDF files on different drives? Thanks in advance.
Hi, I have one column which contains large data, more than 30,000 characters. When I exported the data to an Excel file, some data in this column showed as #######. These # characters are causing a problem in the SSIS package. If I delete that data (####) then the SSIS package works fine. How do I solve this problem?