I have a CTE query against a table with 32K rows that runs fine on SQL Server 2008 R2. I am running it on 2014 Standard Edition against the same data, and it runs very slowly. Looking at the execution plan, I think I see what's contributing to the slowness.
Note that the "actual number of rows" is some 351M...how is this possible?
the query:
declare @amts table (
    claim int,
    allowed decimal(12,2),
    copay decimal(12,2),
    deductible decimal(12,2),
    coins decimal(12,2)
);

with unpaid (claimID) as (
    select claimID
    from claim
    where amt + copay + disct + mm + ded = 0
)
insert @amts
select lineID,
       sum(rc),
       sum(copay),
       sum(deduct),
       case when sum(mm) > 0 and (sum(mm) < sum(mmamt)) then sum(mm) else 0 end
from claimln
where status is null
  and lineID not in (select claimID from unpaid)
group by lineID
It's as if some massively recursive process is going on?
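One thing worth trying, as a sketch rather than a guaranteed fix: the new cardinality estimator in 2014 can badly mis-cost the NOT IN against the CTE, and materializing the CTE into a temp table (or rewriting as NOT EXISTS) hands the optimizer a concrete row count. Same tables and columns as the query above:

select claimID
into #unpaid
from claim
where amt + copay + disct + mm + ded = 0;

insert @amts
select lineID, sum(rc), sum(copay), sum(deduct),
       case when sum(mm) > 0 and sum(mm) < sum(mmamt) then sum(mm) else 0 end
from claimln
where status is null
  and not exists (select 1 from #unpaid u where u.claimID = claimln.lineID)
group by lineID;

Running the original query with OPTION (QUERYTRACEON 9481), which falls back to the legacy estimator, is another quick test of whether the 2014 estimator is the culprit.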
In one of our forthcoming projects, with ASP.NET/C#/MS SQL Server, we have to deal with a Business table having about 15 million records. We want to know which methodologies we should adopt, from both a front-end and a back-end perspective, so the site gives optimised performance. Also, in place of a dedicated server, the hosting company provides MSDE (which comes with .NET). Will this create any problem for this project, which has such a huge table? Should we go for some advanced database technique, such as clustering, splitting tables, etc.?
The following are the fields that the business table contains:
ID, Category ID (which comes from a Category table, each business is under a category), BusinessName, SignupDate, Address1, Address2, Phone Number, Hours Of Operation, Years in Business, LicenseNumber, DiscountCoupon, Website
I need to alter a table (expand a column from varchar(10) to varchar(255)) and the table has 200 million rows. Please suggest the best and fastest method to achieve this. The database is on SQL Server 7.0.
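For what it's worth, a minimal sketch (worth verifying on 7.0, where ALTER COLUMN first appeared): widening a varchar doesn't rewrite existing rows, so the statement itself should be cheap. Table and column names are hypothetical; restate the column's current NULLability, since ALTER COLUMN otherwise resets it to the default.

-- match the column's existing NULL / NOT NULL setting
ALTER TABLE dbo.MyTable ALTER COLUMN MyCol varchar(255) NOT NULL;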
Hello, I want to ask about the backup time cost of a huge table (a table with terabytes of records). Can anyone help me estimate roughly how long it will take?
I'm developing a Windows Mobile application which uses RDA Pull to retrieve data from a SQL Server 2005 database to the PDA. Please see the example:
Code Snippet
using (SqlCeEngine engine = new SqlCeEngine(connStr))
The sqlcesa30.dll cannot connect to the SQL Server database.
In the sqlcesa30.log I then found the following line:
Code Snippet
2007/04/17 10:43:31 Thread=1EE30 RSCB=16 Command=PULL Hr=80040E4D Login failed for user 'test'. 18456
The user 'test' is a member of the db_owner, db_datareader and public roles for the Demo database, and in SQL Server Management Studio I'm able to log in to the Demo database using the 'test' user's credentials and run the select command on 'mytable'.
So, what's wrong? Why can't the sqlcesa30.dll process log in to the Demo database, when from another application using the SAME connection string it works?
I have a table with 52 million rows which resides in the Primary filegroup of my database. Because of the huge number of rows, performance has degraded badly, and I would like to break the table into parts.
Can anyone suggest the steps for doing this, and how many parts should be made? The table is named Account_Transactions and contains policy information in an insurance database.
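In case it helps, a minimal sketch of table partitioning (Enterprise Edition, 2005 or later), assuming a hypothetical TransactionDate column and yearly boundaries; DROP_EXISTING assumes a clustered index of that name already exists:

CREATE PARTITION FUNCTION pfByYear (datetime)
AS RANGE RIGHT FOR VALUES ('20120101', '20130101', '20140101');

CREATE PARTITION SCHEME psByYear
AS PARTITION pfByYear ALL TO ([PRIMARY]);

-- move the table onto the scheme by rebuilding its clustered index
CREATE CLUSTERED INDEX CIX_Account_Transactions
ON dbo.Account_Transactions (TransactionDate)
WITH (DROP_EXISTING = ON)
ON psByYear (TransactionDate);

The number of parts usually follows how you query and purge the data (e.g. one partition per year or per month) rather than any fixed rule.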
I have a table with about 80 columns and 400 million records. Each column has different responses that I need to get frequencies for: I need counts of each response from all the columns. I have a query that does it, but it will run forever. What is the best way to do this?
My starting query:
select res, sum(cnt)
from (
    select col1 res, count(*) as cnt from table1 with (nolock) group by col1
    union all
    select col2 res, count(*) as cnt from table1 with (nolock) group by col2
    ........................
    select col80 res, count(*) as cnt from table1 with (nolock) group by col80
) a
group by res
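A sketch of an alternative that scans the table once instead of 80 times, using UNPIVOT; note that all the unpivoted columns must share a compatible type, and rows where a column is NULL are skipped:

select res, count(*) as cnt
from (
    select col1, col2, /* ...list all 80... */ col80
    from table1 with (nolock)
) src
unpivot (res for colname in (col1, col2, /* ...list all 80... */ col80)) u
group by res;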
How do I get the variable in the cursor's SET statement to resolve to the column's value instead of updating the temp table with the column name itself? I want it to pull a date, not the column name stored in the variable.
create table #temptable (
    columname varchar(150),
    columnheader varchar(150),
    earliestdate varchar(120),
    mostrecentdate varchar(120)
)

insert into #temptable
SELECT ColumnName, headername, '', ''
FROM eddsdbo.[ArtifactViewField]
WHERE ItemListType = 'DateTime' AND ArtifactTypeID = 10

--column name
declare @cname varchar(30)
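A variable only ever holds the column's name as a string; to read that column's values you have to build the statement as dynamic SQL. A sketch continuing the code above, where eddsdbo.Document is a hypothetical stand-in for the table that actually holds the date columns:

declare @cname sysname, @sql nvarchar(max);

declare c cursor local fast_forward for
    select columname from #temptable;
open c;
fetch next from c into @cname;
while @@fetch_status = 0
begin
    -- build the statement so @cname is treated as a column, not a string literal
    set @sql = N'update #temptable
        set earliestdate   = (select convert(varchar(120), min(' + quotename(@cname) + N')) from eddsdbo.Document),
            mostrecentdate = (select convert(varchar(120), max(' + quotename(@cname) + N')) from eddsdbo.Document)
        where columname = @col;';
    exec sp_executesql @sql, N'@col varchar(150)', @col = @cname;
    fetch next from c into @cname;
end
close c;
deallocate c;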
Hi, my DB size (right-click on the DB name, Data Files tab, Space Allocated field) was 10914 MB.
I deleted a huge table (1.2 million records x 15 columns). I checked the DB size again and it didn't change. Shouldn't it decrease, because I deleted a huge table?
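For context, a sketch: deleting or dropping a table frees pages inside the data file but never shrinks the file itself, so Space Allocated stays put until you shrink (worth doing sparingly, since shrinking fragments indexes). MyDb is a hypothetical name:

EXEC sp_spaceused;              -- shows allocated vs. actually used space
DBCC SHRINKDATABASE (N'MyDb');  -- returns unused space to the OS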
I have a huge table with a 4-column primary key on it. I need to delete data from this table (approx. 5.6 million records to be deleted). It takes a hell of a lot of time to delete with a normal query. Can someone please suggest a better way? Any help will be appreciated.
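A sketch of the usual workaround on SQL Server 2005 or later, with hypothetical names: delete in small batches so each transaction stays short and the log never balloons.

while 1 = 1
begin
    delete top (10000) from dbo.BigTable
    where SomeFilter = 'X';   -- whatever identifies the 5.6M rows

    if @@rowcount = 0 break;
end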
We have a large test database with millions of records for more than one company site code. Sometimes we want to refresh the data of that database for one or more site codes.
In order to do that, I have to delete all records for the site code we want to refresh from the test database first, then copy a new set of data over from the production database. Since we refresh data based on the site code, I have to use the DELETE command instead of TRUNCATE.
Since this is a huge database with thousands of tables and millions of records per table, I have performance issues with the DELETE command. So what would be the best way to delete a large number of records without writing all of it to the database log file?
FYI: the recovery model of this database is Simple.
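Every DELETE is logged regardless of recovery model, but under Simple recovery the log space becomes reusable after each checkpoint, so batching keeps the log file small. A sketch with hypothetical names:

declare @siteCode varchar(10);
set @siteCode = 'ABC';   -- the site code being refreshed

while 1 = 1
begin
    delete top (50000) from dbo.SomeTable
    where SiteCode = @siteCode;

    if @@rowcount = 0 break;

    checkpoint;   -- under SIMPLE recovery, lets the log space be reused
end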
Question A: I need to truncate a table; it has 21 million rows and a size of 14 GB.
1 - How do I find out whether this table is referenced by a FOREIGN KEY?
2 - Does it participate in an indexed view?
3 - Is it being published using transactional replication or merge replication?
Question B: How do I safely truncate that table?
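A minimal sketch of the three checks in Question A, using SQL Server 2008+ catalog views and a hypothetical table name dbo.MyBigTable:

-- 1) foreign keys that reference the table (any hit blocks TRUNCATE)
select name
from sys.foreign_keys
where referenced_object_id = object_id('dbo.MyBigTable');

-- 2) schema-bound (indexed) views that reference it
select v.name
from sys.views v
join sys.indexes i on i.object_id = v.object_id
join sys.sql_expression_dependencies d on d.referencing_id = v.object_id
where d.referenced_id = object_id('dbo.MyBigTable');

-- 3) replication flags on the table
select name, is_published, is_merge_published, is_replicated
from sys.tables
where object_id = object_id('dbo.MyBigTable');

If all three come back empty, TRUNCATE TABLE dbo.MyBigTable should be safe to run; it takes a brief schema-modification lock but writes minimal log.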
I have the following question, and I would like to hear what you think about it, and whether there is a better solution for "my problem". Here is the question: I have a huge table with 60GB of data (image files). The problem always happens when I try to ALTER the structure of the table. For example, I change a field from char(3) to char(4); SQL Server then performs the "alter table" command, which must be something similar to "insert into the new table + drop the actual table", and for that I need about 60GB of space for my LOG file, and it takes hours to complete the operation. Is this the only way to alter a single field in my table? I would like to hear your opinions. Thanks, Alberto
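One common workaround, sketched with hypothetical names for SQL Server 2005 or later: add the new column as NULLable (a metadata-only change), backfill it in batches so no single transaction bloats the log, then swap the columns.

alter table dbo.Images add NewField char(4) null;   -- metadata-only

while 1 = 1
begin
    update top (10000) dbo.Images
    set NewField = OldField
    where NewField is null;   -- assumes OldField itself is never null

    if @@rowcount = 0 break;
end

alter table dbo.Images drop column OldField;
exec sp_rename 'dbo.Images.NewField', 'OldField', 'COLUMN';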
I have a table (SQL Server 2000) which has 14 cost columns for each record, and now, due to a new requirement, I have 2 taxes which need to be applied using two more fields called Share1 and Share2, e.g. Sales Tax = 10%, Use Tax = 10%, Share1 = 60%, Share2 = 40%.
So Sales Tax Amt (A) = Cost1 * Share1 * Sales Tax, and Use Tax Amt (B) = Cost1 * Share2 * Use Tax.
The same calculation applies to all the costs, and then total cost with Sales Tax = Cost1 + A, Cost2 + A, and so on; total cost with Use Tax = Cost1 + B, Cost2 + B, etc.
So around 14 new fields are required to save the Sales Tax amount for each cost, and another 14 new fields each to store Cost with Sales Tax and Cost with Use Tax. That increases the table size. Some of these fields might be used for reports.
I was wondering which is the better approach out of the 4 below:
1) Calculate these fields dynamically while displaying them on the user interface and don't save them in the DB (when making reports, again calculate these fields dynamically and show them), or
2) Add new formula field columns to the database table to save each field, which would make the table bigger, but reporting becomes easier, or
3) Add only those columns to the database on which reports need to be made, and calculate the rest of the fields dynamically on screen, or
4) Create a view just for reports, calculate values dynamically in the UI, and don't add any computed values to the table.
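For options 2 and 4, a minimal sketch with hypothetical names, showing one of the 14 costs (both forms work on SQL Server 2000); computed columns store nothing on disk there, they are evaluated on read, and the view leaves the base table untouched:

-- option 2 variant: computed columns
alter table dbo.Costs add
    SalesTaxAmt1      as (Cost1 * 0.60 * 0.10),
    Cost1WithSalesTax as (Cost1 + Cost1 * 0.60 * 0.10);
go

-- option 4: a reporting view that derives everything on the fly
create view dbo.CostsForReports as
select Cost1,
       Cost1 * 0.60 * 0.10         as SalesTaxAmt1,
       Cost1 + Cost1 * 0.60 * 0.10 as Cost1WithSalesTax,
       Cost1 * 0.40 * 0.10         as UseTaxAmt1,
       Cost1 + Cost1 * 0.40 * 0.10 as Cost1WithUseTax
from dbo.Costs;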
I want to append a column to the transaction table (60 million records in it).
Our transaction table is being used in production, but I have a very small window of time.
Instead of ALTER TABLE (if we use ALTER by taking a backup of the table and reprocessing it, it will take more time), is there any way to append the column to the transaction table?
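For reference, a sketch with hypothetical names: if the new column is NULLable with no default, ALTER TABLE ... ADD is a metadata-only change that completes almost instantly regardless of row count; existing rows are not touched.

alter table dbo.TransactionTable add NewColumn int null;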
I have recently moved jobs and have come across an unusual database setup that is not ideal for the task at hand.
I need to query table A, which will return one and only one table name as a result. There are various table names in table A, but the query will always return a unique result (from tables B-G, for example).
My problem now is: how do I run my next query against the table returned by my first query? I need to be able to reference the value.
Making sense?
The ideal scenario would be if tables B-G were combined into a single table with a unique identifier, but this architecture cannot be changed for numerous reasons and a workaround needs to be established. Your help is much appreciated.
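A minimal sketch of the usual workaround, dynamic SQL; the filter on TableA is left as a placeholder, and QUOTENAME guards against the returned name breaking the statement:

declare @tbl sysname, @sql nvarchar(max);

select @tbl = TableName
from dbo.TableA
where ...;   -- your unique criteria

set @sql = N'select * from dbo.' + quotename(@tbl) + N';';
exec sp_executesql @sql;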
Dear all, I'm trying to set up an RDA connection between the Pocket PC emulator (standalone version 1.0) and my laptop, which runs Windows Vista Home Premium. To start up RDA I first need to connect from my emulated device to my laptop, which runs IIS 7.0 (complete installation, with compatibility toward IIS 6.0). Connecting to http://localhost from my laptop works fine.
I start the emulator (Pocket PC with Windows Mobile Professional) and cradle the emulated device. The connection starts and I'm able to browse the content of the emulated device from my PC, or synchronize it. Then I run Internet Explorer on the emulator and try to connect to http://mycomputername (which is correct), but I receive a message saying that it cannot connect to the page I was looking for because the connection was lost. Any suggestion on how to solve this issue? Kind regards and many thanks in advance, Cristian
It seems that now I can connect to http://mycomputername and I'm able to browse the net with my Pocket PC emulator. However, when I attempt to connect to http://mycomputername/subfolder/sqlcesa30.dll (the path is correct), I receive the following error message: "The page cannot be displayed or downloaded because the connection was lost. Check the connection and try later", which is the same message I was receiving before.
I have a table (named table1) with 20 million rows. It takes around 11 minutes to apply the primary key to this table. Some tables have over 100 million rows, so based on that timing, if my calculations are correct it will take close to an hour to apply the primary key to a table with around 100 million rows.
My current solution is to create another table (named table2) with no indexes or primary keys, pump over only about 5 days' worth of data, then apply the primary key. Then a script will gradually populate table2 with the rest of the data; by gradually I mean inserting something like 100k rows per hour. Keep in mind that table2 is heavily updated with new records.
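A sketch of that gradual backfill loop, with hypothetical column names and the assumption that id is the key, so each pass only copies rows table2 doesn't have yet:

while 1 = 1
begin
    insert into dbo.table2 (id, col1, col2)
    select top (100000) s.id, s.col1, s.col2
    from dbo.table1 s
    where not exists (select 1 from dbo.table2 t where t.id = s.id)
    order by s.id;

    if @@rowcount = 0 break;

    waitfor delay '01:00:00';   -- one batch per hour, per the plan above
end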
declare @error int, @rowcount int

select @rowcount = COUNT(1) FROM STG_BCDR;

while @rowcount > 0
begin
    BEGIN TRAN Deletion
[code]....
In the code above I try to delete records batch by batch to avoid locking the BCDR table; the total in this BCDR table is 40,000 records. However, when I look at the execution plan, the BCDR table still shows a clustered index scan, which means the locking still happens.
If I change the delete top (5000) to delete top (5), then there is a clustered index seek, which is good. The problem is that each batch then only deletes 5 records, which means it will take a very long time to remove all the data.
How do I handle this situation so I can delete this huge amount of data without table locking? If table locking happens, other users will not be able to access the table at the same time.
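Two things usually help here; a sketch, assuming a hypothetical filter column on the delete. An index supporting the predicate lets the TOP delete seek instead of scan, and a batch size below the roughly 5,000-lock threshold avoids escalation to a table lock:

-- a supporting index turns the per-batch scan into a seek
create index IX_BCDR_FilterCol on dbo.BCDR (FilterCol);

declare @target int;
set @target = 1;   -- hypothetical filter value

while 1 = 1
begin
    -- stay under the ~5,000-lock threshold that triggers table-level escalation
    delete top (4000) from dbo.BCDR
    where FilterCol = @target;

    if @@rowcount = 0 break;
end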
I need to copy a large amount of data from one table and insert it into another table.
The design of the destination table is exactly the same as the source table, except that it has one extra field. Can I copy all rows in one table (that match given criteria) into another table in a single SQL statement, allowing for the extra field?
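Yes; a sketch with hypothetical names, where the extra column simply gets a constant (or any expression) in the select list:

insert into dbo.Destination (col1, col2, col3, ExtraCol)
select col1, col2, col3, 'some value'
from dbo.Source
where SomeCriteria = 1;   -- your given criteria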
My dev environment: VS2005, SQLServer Mobile, SQLServer 2005 server db, PPC windows app.
Having a problem with pull. First pull works fine. I request an error table from the pull. When I need to repull, I drop the table, but if I try to programmatically delete the error table, I get the message that the table has restricted DDL operations allowed. Apparently Drop is not allowed. Sometimes dropping the main table deletes the error table, sometimes it doesn't. If I try to repull and the error table is there, the pull fails.
I've tried "flushing" my cache by closing my connection and reopening, but that doesn't work.
I am currently implementing RDA Pulls and Pushes. Both worked fine for me, except when I try to pull a certain table twice. I read that in order to pull the table a second time I must drop it on the client.
My original approach was to use a select statement with a WHERE criterion in the pull statement (e.g. SELECT * from tblPhonebook where Pulled = 0), then set Pulled to 1 and pull again later.
My understanding of RDA was that I could use the WHERE criterion to filter the data on the server side and simply append that data on the client side.
Can I simply append data in some way or do I really have to drop the table on the client side every time I pull?
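One workaround, sketched under the assumption that your SQL Server CE version supports INSERT ... SELECT: pull each batch into a throwaway staging table (the pull is the usual RDA call, just with the staging name as the local table), append it to a permanent table on the device, and drop the staging table. The permanent table is never RDA-tracked, so it never blocks a re-pull. Names and columns are hypothetical:

-- on the device, after pulling into tblPhonebook_Staging:
insert into tblPhonebook (ID, Name, Phone)
select ID, Name, Phone from tblPhonebook_Staging;

drop table tblPhonebook_Staging;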
Please help. (I am using ASP, VB, SQL.) I have a table with office address information and its ID. There could be a lot of offices in one city, but I would like to display only the unique cities that start with certain names, along with their IDs.
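A minimal sketch with hypothetical table and column names; since several offices share a city, you have to pick one representative ID per city (here, the smallest):

select City, min(OfficeID) as OfficeID
from Offices
where City like 'San%'   -- hypothetical prefix filter
group by City
order by City;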
I am building an RDA process between SQL Server and SQL Server CE. When we use the method 'SqlCeRemoteDataAccess.Pull()', the data is pulled down from SQL Server to the CE database and a new table is created with some additional columns. As I only intend to pull the data and never push it back to SQL Server, pulling the data down a second time gives an error saying the table already exists. I wonder whether I can pull the data into an existing table every time I do the RDA?
I'm wondering if there is a single statement I can write to pull my data. Let's say my Order table has one field for userId and one field for supervisorId (among other fields), both of which are foreign keys into the Users table, where name, address, etc. are stored. What I'd like to do is pull all the rows from Order and have a join that pulls the user name and supervisor name from the Users table all in one go. Right now I pull all the Orders with just the user name joined, and then go back over the objects to add the supervisor name with a separate query.
The reason I'd like to do this is to simplify the objects I'm passing to the GridView by doing a single fetch instead of multiple fetches. I'm using SQL Server, .NET 2.0 and VS.NET 2005.
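A minimal sketch, joining the Users table twice under two aliases; table and column names are hypothetical:

select o.OrderId,
       u.Name as UserName,
       s.Name as SupervisorName
from Orders o
join Users u on u.UserId = o.UserId
join Users s on s.UserId = o.SupervisorId;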