I inherited a system which has an index on a set of columns that allows more than 900 bytes of data in the key. We know one of the fields can be shortened to bring the potential key size below 900 bytes.
The problem is that the table is about 120 million rows, and the index currently on that column is used for seeks about 2.5 million times a day.
At its simplest, I want to drop the existing index, alter the column to shrink the varchar size, and then rebuild the index on the newly shortened column.
On a smaller, less used table, I might just do this outside of business hours and call it a day, but I'm concerned that this will take a long time and block a lot of operations.
1) IIRC, shrinking a column, unlike widening it, is much more expensive, even if there are no values that would actually end up truncated. Is this right?
2) I did a few tests on some other smaller (2+ million row) tables and was still able to select data out of the table. I don't think this covered all the read scenarios, but are there known scenarios which simply would not work during an index build?
3) I haven't yet tried DML operations against the table while it's doing either the column alteration or an index build. What scenarios would or would not be blocked?
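For reference, here is a sketch of the sequence I have in mind (object and column names are placeholders, and the ONLINE rebuild option assumes Enterprise Edition):

DROP INDEX IX_WideKey ON dbo.BigTable;

ALTER TABLE dbo.BigTable
    ALTER COLUMN WideColumn VARCHAR(400) NOT NULL;   -- shrinking from something wider, e.g. VARCHAR(900)

CREATE INDEX IX_WideKey
    ON dbo.BigTable (WideColumn, OtherColumn)
    WITH (ONLINE = ON);                              -- Enterprise Edition only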
How can we monitor all tables in all databases and send notifications to the team? Is there a way to find the number of rows and the size of a table as of last month and work out the growth % now?
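As a sketch of the kind of snapshot that could be collected on a schedule (the idea being to insert the result into a history table, run it across databases with a loop or sp_MSforeachdb, and compare month over month):

SELECT  DB_NAME()                         AS database_name,
        OBJECT_SCHEMA_NAME(ps.object_id)  AS schema_name,
        OBJECT_NAME(ps.object_id)         AS table_name,
        SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS row_count,
        SUM(ps.reserved_page_count) * 8   AS reserved_kb,
        GETDATE()                         AS captured_at
FROM    sys.dm_db_partition_stats AS ps
JOIN    sys.tables AS t ON t.object_id = ps.object_id
GROUP BY ps.object_id;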
There are some more columns with NVARCHAR(MAX) and other INT data types. Anyway, I know a page is 8 KB in size. How do I find out how much space a single row takes with the above data types? And if users add 5,000 rows per day, how do I figure out how much the table will grow?
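A rough worked example, assuming (just for illustration) four INT columns and one NVARCHAR(MAX) column averaging 500 characters: 4 x 4 = 16 bytes for the INTs, about 500 x 2 = 1,000 bytes for the NVARCHAR(MAX) value (stored in-row while it fits), plus roughly 10-20 bytes of row overhead, so call it ~1 KB per row. At 5,000 rows per day that is about 5 MB per day, or roughly 150 MB per month, before indexes. Running sp_spaceused against the table after loading a representative sample gives a more reliable average row size than hand arithmetic.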
My prod server (default instance only) has TempDB configured with a 1024 MB data file and a 200 MB log file. When I run DBCC SQLPERF(LOGSPACE) it shows around 45% 'log space used' most of the time. There is nothing going on in the instance when I run sp_WhoIsActive or select * from sys.sysprocesses where dbid = 2.
So my questions are: is it normal to see log space used around 45%? How can I find out what caused the tempdb log space to grow to 45%? And is there anything I should do about it?
select * from sys.master_files -- the size column here is 1024 for the .mdf and 64 for the .ldf
select * from tempdb.sys.database_files -- the size column here is 3576 for the .mdf and 224 for the .ldf
Why are they different and not the same? Do the size columns in these two views represent different values for tempdb?
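For what it's worth, the size column in both views is reported in 8 KB pages, and for tempdb sys.master_files holds the configured size that tempdb is recreated with at startup, while tempdb.sys.database_files shows the current size after any autogrowth. A quick comparison in MB:

-- both views report size in 8 KB pages; convert to MB to compare
select name, size * 8 / 1024 as configured_size_mb from sys.master_files where database_id = 2;
select name, size * 8 / 1024 as current_size_mb from tempdb.sys.database_files;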
I've never worked with the XML data type in SQL Server, although I know it's been there for a few iterations of SQL Server. Now I've got a situation in which I might store some configuration data as XML, since that's the way it comes. (We had thought about storing the data in a VARCHAR(MAX) field.)
The first question is: does the XML data type have a size limitation? For example, do you declare something like:
ConfigFile XML(1000) NULL
Or is it just something like this:
ConfigFile XML NULL
The second question is about persisting the data to a file. As the name I chose for the column suggests, we want to save the data from a configuration file into a SQL Server database. How do we go about doing that? We'll be developing a C# application; it will read and write the data both from the SQL table and from the user's local hard drive.
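In case it helps frame the question, here is a minimal sketch of the second form (names are placeholders), assuming the same 2 GB per-value cap that VARCHAR(MAX) has also applies to XML:

CREATE TABLE dbo.AppConfig
(
    ConfigID   INT IDENTITY(1,1) PRIMARY KEY,
    ConfigFile XML NULL            -- no (n) size specifier
);

INSERT INTO dbo.AppConfig (ConfigFile)
VALUES (N'<settings><logLevel>Info</logLevel></settings>');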
I've got two databases on the same server and replicate some tables from one database to another. The replication is configured not to drop the table if it exists, but to delete the data based on the filter if one exists. There are two tables on the subscriber that have some extra columns.
I get a "field size too large" error when trying to replicate them. Is there a workaround that doesn't require making the publisher and subscriber tables identical in schema?
We have installed a SQL Server 2008 R2 SP1 instance and it hosts SharePoint 2010 databases.
We have 2 dedicated drives for tempdb on the SAN with 50 GB of space. Both the tempdb data and log files were created with the default size. I would like to presize them.
What are the best values to start with?
U: -> Tempdbdata, holding the tempdb.mdf file
V: -> Tempdblog, holding the templog.ldf file
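A sketch of the kind of presizing I had in mind (the default logical file names tempdev/templog are assumed, and the 20 GB data / 10 GB log starting sizes are just a guess against the 50 GB drives, not a recommendation):

ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 20480MB, FILEGROWTH = 1024MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 10240MB, FILEGROWTH = 1024MB);
-- additional equally sized data files (one per core, up to 8, is a common guideline)
-- would be added with ALTER DATABASE tempdb ADD FILE ...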
My backups fail sometimes. My database is around 400 GB and we take the backup to a remote server. The free space shown on the disk is 600 GB, but the full database backup runs for more than 10 hours and then fails. The failure reason is that there is not enough space on the disk.
What could be the possible reasons for the failure? The backup server has more free space than the database size, so why does the job fail with that message? I've noticed the same thing happen a couple of times in the past.
Is there any way to find out how big the backup file will be before we run the backup job?
i.e. if we run a full backup of the test1 database now, how large a .bak file will it generate for test1?
Currently we are using a maintenance plan with compression (2008 R2), and the database has TDE enabled.
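A sketch of one way to get a ballpark: previous backup sizes for the same database are recorded in msdb, so the history gives a reasonable estimate of the next full backup (compressed_backup_size is available from 2008 onward):

SELECT TOP (10)
       bs.database_name,
       bs.backup_finish_date,
       bs.backup_size / 1048576.0            AS backup_size_mb,
       bs.compressed_backup_size / 1048576.0 AS compressed_size_mb
FROM   msdb.dbo.backupset AS bs
WHERE  bs.database_name = 'test1'
  AND  bs.type = 'D'                 -- full backups only
ORDER BY bs.backup_finish_date DESC;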
What is a good starting point for understanding, for a specific database, what the maximum number of VLFs should be so that it does not cause long startup or backup times?
Also, is there a calculation I can use to identify the best growth setting to configure for each database?
I'm seeing the message below in the error log and am curious to know what changes (right-sizing/growth) should be made. At the moment the log file growth increment is set to 100 MB (refer: [URL] ....)
Database BizTalkMsgBoxDb has more than 1000 virtual log files which is excessive. Too many virtual log files can cause long startup and backup times. Consider shrinking the log and using a different growth increment to reduce the number of virtual log files.
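For context, this is the kind of change I was thinking of, assuming the default logical log file name BizTalkMsgBoxDb_log and that an 8 GB log with 1 GB growth would fit the workload (both assumptions):

-- each row returned is one VLF, so the row count is the VLF count
DBCC LOGINFO ('BizTalkMsgBoxDb');

USE BizTalkMsgBoxDb;
DBCC SHRINKFILE (N'BizTalkMsgBoxDb_log', 1);     -- shrink the log as far as possible first
ALTER DATABASE BizTalkMsgBoxDb
    MODIFY FILE (NAME = N'BizTalkMsgBoxDb_log', SIZE = 8192MB, FILEGROWTH = 1024MB);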
I'm inserting from TempAccrual into VacationAccrual. It works nicely; however, if I run this script again it will insert the same values into VacationAccrual again. How do I prevent that? If there is a small change in one of the columns in TempAccrual, then the insert should be allowed. Here is my query:
INSERT INTO vacationaccrual (empno, accrued_vacation, accrued_sick_effective_date, accrued_sick, import_date)
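A minimal sketch of one way to make the insert repeatable (assuming empno identifies a row and dbo.TempAccrual is the source, and ignoring NULL handling for brevity): only insert rows whose values don't already exist in the target.

INSERT INTO dbo.vacationaccrual
    (empno, accrued_vacation, accrued_sick_effective_date, accrued_sick, import_date)
SELECT t.empno, t.accrued_vacation, t.accrued_sick_effective_date, t.accrued_sick, t.import_date
FROM   dbo.TempAccrual AS t
WHERE  NOT EXISTS
(
    SELECT 1
    FROM dbo.vacationaccrual AS v
    WHERE v.empno = t.empno
      AND v.accrued_vacation = t.accrued_vacation
      AND v.accrued_sick_effective_date = t.accrued_sick_effective_date
      AND v.accrued_sick = t.accrued_sick
);

A MERGE statement would also work if existing rows should be updated rather than re-inserted.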
I created an inventory table with a few columns, say ServerName, Version, PatchingDetails, etc.
I want to track changes to the table.
Let's say people are asked to modify the base table, and I want a complete capture of what was modified and of the user (SYSTEM_USER) who actually modified it.
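A minimal sketch of a trigger-based approach (all table and column names here are placeholders for illustration):

CREATE TABLE dbo.ServerInventory_Audit
(
    AuditID    INT IDENTITY(1,1) PRIMARY KEY,
    ServerName VARCHAR(128),
    OldVersion VARCHAR(50),
    NewVersion VARCHAR(50),
    ModifiedBy SYSNAME  NOT NULL DEFAULT SYSTEM_USER,
    ModifiedAt DATETIME NOT NULL DEFAULT GETDATE()
);
GO
CREATE TRIGGER trg_ServerInventory_Audit
ON dbo.ServerInventory
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.ServerInventory_Audit (ServerName, OldVersion, NewVersion)
    SELECT d.ServerName, d.Version, i.Version
    FROM   deleted  AS d
    JOIN   inserted AS i ON i.ServerName = d.ServerName;
END;

Change Data Capture or SQL Server Audit would be heavier-weight alternatives to a hand-rolled trigger.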
I am trying to insert bulk data into a main table from a staging table in SQL Server 2012. If any error occurs, the whole operation is rolled back. I don't want that to happen. I want to know which records have a problem, and the rest should still be inserted.
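One sketch of the idea, with made-up table and column names: pre-filter the rows that would fail a known rule into an error table, then insert only the clean rows, so one bad record no longer rolls back everything.

-- rows failing a validation rule go to an error table
INSERT INTO dbo.StagingErrors (ID, Amount, ErrorReason)
SELECT s.ID, s.Amount, 'Amount is NULL'
FROM   dbo.Staging AS s
WHERE  s.Amount IS NULL;

-- only the remaining rows go to the main table
INSERT INTO dbo.MainTable (ID, Amount)
SELECT s.ID, s.Amount
FROM   dbo.Staging AS s
WHERE  s.Amount IS NOT NULL;

For errors that can't be pre-checked (constraint or conversion failures), a row-by-row loop with TRY...CATCH, or an SSIS data flow with an error output, achieves the same effect at the cost of speed.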
I have a scenario where I have to update a table with a date when there are new records in another table.
For example:
I load an ODS table with data from a file in SSIS. The file has CustomerID and other columns.
Now, when there is a new record for any CustomerID in the ODS table, I need to update the dbo table with the most recent record for that CustomerID (i.e. update the date column in dbo for that CustomerID). I also need to include an identifier that relates back to the ODS table. How do I do this?
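A sketch with made-up names (ods.Customer as the source, dbo.Customer as the target, LastLoadDate as the date column, OdsCustomerKey as the identifier back to the ODS row):

UPDATE d
SET    d.LastLoadDate   = s.LoadDate,
       d.OdsCustomerKey = s.OdsCustomerKey
FROM   dbo.Customer AS d
JOIN  (SELECT CustomerID, OdsCustomerKey, LoadDate,
              ROW_NUMBER() OVER (PARTITION BY CustomerID ORDER BY LoadDate DESC) AS rn
       FROM ods.Customer) AS s
       ON s.CustomerID = d.CustomerID
      AND s.rn = 1                         -- most recent ODS record per CustomerID
WHERE  s.LoadDate > d.LastLoadDate;        -- only when the ODS row is newer

This could run as an Execute SQL Task at the end of the SSIS load, or be rewritten as a MERGE if new CustomerIDs also need to be inserted into dbo.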
I have two tables, 'Cal_date' and 'RPT_Invoice_Shipped'. Table Cal_date has columns month_no, start_date and end_date. Table RPT_Invoice_Shipped has columns Day_No, Date, Div_code, Total_Invoiced, Shipped_Value, Line_Shipped, Unit_Shipped, Transaction_Date.
I am using the insert statement below to insert data into the RPT_Invoice_Shipped table.
insert into [Global_Report_Staging].[dbo].[RPT_Invoice_Shipped]
    (Day_No, Date, Div_code, Total_Invoiced, Transaction_Date)
select ,                                   -- Day_No still missing here
    CONVERT(DATE, Getdate()) as Date,
    LTRIM(RTRIM(div_Code)),
    sum(tot_Net_Amt) as Total_Invoiced,
    (dateadd(day, -1, convert(date, getdate())))
from [Global_Report_Staging].[dbo].[STG_Shipped_Invoiced]
WHERE CONVERT(DATE, Created_date) = CONVERT(DATE, Getdate())
group by div_code
When inserting into the Day_No column of RPT_Invoice_Shipped I have to use the formula (Transaction_Date - start_date + 1), where Transaction_Date comes from STG_Shipped_Invoiced and start_date comes from Cal_date. I was using DATEPART(mm, Transaction_Date) to get the month_no, and that month_no can be joined to month_no in Cal_date to fetch start_date, so that start_date can be used in the formula (Transaction_Date - start_date + 1).
But I am having difficulty fitting this into the query above. How can I achieve this?
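A sketch of how the join might be folded in, assuming Cal_date also lives in [Global_Report_Staging].[dbo] and that the subtraction means a difference in days (both assumptions):

insert into [Global_Report_Staging].[dbo].[RPT_Invoice_Shipped]
    (Day_No, Date, Div_code, Total_Invoiced, Transaction_Date)
select
    DATEDIFF(day, c.start_date, dateadd(day, -1, convert(date, getdate()))) + 1 as Day_No,
    CONVERT(DATE, GETDATE()) as Date,
    LTRIM(RTRIM(s.div_Code)),
    SUM(s.tot_Net_Amt) as Total_Invoiced,
    dateadd(day, -1, convert(date, getdate())) as Transaction_Date
from [Global_Report_Staging].[dbo].[STG_Shipped_Invoiced] as s
join [Global_Report_Staging].[dbo].[Cal_date] as c
    on c.month_no = DATEPART(month, dateadd(day, -1, convert(date, getdate())))
where CONVERT(DATE, s.Created_date) = CONVERT(DATE, GETDATE())
group by s.div_Code, c.start_date;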
I am trying to replicate table A into table B within the same server and the same database.
I am getting an error message when trying to configure the subscription: You have selected the Publisher as a Subscriber and entered a subscription database that is the same as the publishing database. Select another subscription database.
I'm building a network monitoring app where, basically, I run checks on servers every few minutes and log the data to a table. Naturally the table can get big quite quickly. What I want is to be able to overwrite the table data at the start of each new day. Alternatively, roll the data up into daily or weekly summaries and then overwrite the detail data. How do I do this?
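A sketch of the roll-up-then-purge pattern, with made-up table and column names, run from a SQL Agent job scheduled just after midnight:

-- aggregate yesterday's (and any older) detail rows into a daily summary table
INSERT INTO dbo.CheckResultsDaily (CheckDate, ServerName, ChecksRun, ChecksFailed)
SELECT CAST(CheckTime AS date),
       ServerName,
       COUNT(*),
       SUM(CASE WHEN Passed = 0 THEN 1 ELSE 0 END)
FROM   dbo.CheckResults
WHERE  CheckTime < CAST(GETDATE() AS date)        -- everything before today
GROUP BY CAST(CheckTime AS date), ServerName;

-- then purge the detail rows that were just summarised
DELETE FROM dbo.CheckResults
WHERE  CheckTime < CAST(GETDATE() AS date);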
I have a PeopleSoft process in SQL Server that takes a long time to finish. What I am looking for is: is there any way to find out the fragmentation of the table so I can defragment it, or is there any way to pre-allocate space for a particular table? A couple of users run the process at the same time, but because SQL Server takes table/page locks, it holds them and only releases them after the job is done. Can I make it use row-level locks by executing the sp_tableoption procedure? I would appreciate all your suggestions.
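For the fragmentation part, a sketch (the PeopleSoft table name is a placeholder); the second statement is the usual way to steer the engine away from page locks on an index, rather than sp_tableoption:

-- check fragmentation for one table
SELECT i.name AS index_name,
       ps.avg_fragmentation_in_percent,
       ps.page_count
FROM   sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.PS_SOME_TABLE'), NULL, NULL, 'LIMITED') AS ps
JOIN   sys.indexes AS i
  ON   i.object_id = ps.object_id AND i.index_id = ps.index_id;

-- lock granularity is chosen by the engine, but page locks can be disallowed per index
ALTER INDEX ALL ON dbo.PS_SOME_TABLE SET (ALLOW_PAGE_LOCKS = OFF);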
Same issue in VS 2005. Using a setup project (or ClickOnce) there seems to be a limit to how large a content file can be. I'm cross-posting in the setup and ClickOnce forums, but thought that other SQL CE developers may have had this issue. I cannot find a reference to this in the online help, and found no answer here. I have some SQL CE databases that are marked as 'content' in a project. One is less than a MB; the other is over 600 MB. The larger one will always break the setup build if I leave it marked as content. It also breaks a ClickOnce publish build.
How does one use the MS Visual Studio tools for setup when a file size is too big?
What is the max file size limitation?
I need to deploy with ClickOnce, but find it difficult to determine the directory my large file will need to be copied to afterwards, since a ClickOnce install uses such a confusing install path. So, if the workaround is to run a script post-install, discovering the path the file should be copied to would be a problem.
I have a SharePoint content database in SQL 2008 R2 (WSS_Content) that is 230 GB in size, but 40% of it is empty space. This is because we removed a large amount of old content from SharePoint. The log file is fine. I have 60 GB left on the drive that hosts the database files. I would like to shrink the data file to get disk space back. I found that under the file properties, the WSS_Content data file's initial size is 228702 MB (220 GB or so).
When I try to do a shrink file (data file) from Management Studio, I see the 60 GB of free drive space keep dropping, so I have to kill the process. What should I do to reduce this data file?
Why does it keep using up all the free space on the drive when I try to shrink the data file?
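For reference, the approach I was going to try next, as a sketch: shrink in small increments rather than in one big jump (the logical file name WSS_Content and the step sizes are assumptions; check sys.database_files for the real name):

USE [WSS_Content];
DBCC SHRINKFILE (N'WSS_Content', 225000);   -- target size in MB; repeat, lowering a few GB at a time
DBCC SHRINKFILE (N'WSS_Content', 220000);
-- ... continue stepping down to the final target, then rebuild the most fragmented indexes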