On my server, the C drive is 34 GB and tempdb is currently 22 GB, which is filling the drive. How can I reduce it? I don't want to move tempdb to another drive; I am only looking for a way to reduce its size.
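A minimal sketch, assuming the default logical file names (confirm them first with SELECT name, size FROM tempdb..sysfiles):

    USE tempdb;
    DBCC SHRINKFILE (tempdev, 1024);   -- data file, target size in MB
    DBCC SHRINKFILE (templog, 256);    -- log file
    -- If in-use pages keep the shrink from releasing space, a service
    -- restart recreates tempdb at its configured initial size.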
We are thinking about buying new hard drives to improve SQL Server performance. Currently tempdb runs on a dedicated RAID 0 array of three 136 GB, 10,000 RPM drives. When running a large bulk insert within an SSIS package into 15 destination tables, we see high values for Avg. and Current Disk Read Queue Length (above 3000) on the drive holding tempdb. No other program or swap file uses this RAID 0 array. Can anyone tell me if it is worth replacing the current three drives with four 33 GB, 15,000 RPM drives? How much impact would that have on the Avg. and Current Read Queue Length, and would it improve the time SQL Server needs to bulk insert the data?
"tempdb is skipped. You cannot run a query that requires tempdb"?
We're running a .Net web application with a SQL Server 2000 backend, and we get the error intermittently. Restarting the SQL Server service seems to fix it, as it causes tempdb to be rebuilt, but this isn't a long term solution. Any direction or hints would be greatly appreciated. Thanks! - Mike
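One quick diagnostic, valid on SQL Server 2000, is to check whether tempdb is being marked suspect or offline when the error appears:

    SELECT DATABASEPROPERTYEX('tempdb', 'Status');
    SELECT name, status FROM master..sysdatabases WHERE name = 'tempdb';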
Hello! I have a question about the SqlDataSource object. If I make a SqlDataSource with the SQL statement "SELECT COUNT(id) AS recordCount FROM tblCategory", how do I get the recordCount value into a variable? I'm writing in C#.

    <asp:SqlDataSource ID="SqlDataSource" runat="server"
        ConnectionString="<%$ ConnectionStrings:ConnectionString %>"
        SelectCommand="SELECT COUNT(id) AS recordCount FROM [tblCategory]">
    </asp:SqlDataSource>
I need to reduce the size of a db from its original allocated size of 2.0 GB to 1.0 GB, so that I can allocate more space to another db. How can I do this? Thanks, Ravi.
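A hedged sketch, with hypothetical names (MyDb, and a logical data file name taken from sp_helpfile): shrink the data file to a 1024 MB target:

    USE MyDb;
    EXEC sp_helpfile;                    -- find the logical data file name
    DBCC SHRINKFILE (MyDb_Data, 1024);   -- target size in MB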
Basically, what I'm doing is storing answers to questions in a survey. I have two ways I can organize the table:
1. Just having a table with lots of columns - one for each question
2. A table with just three columns: SurveyID, QuestionID, QuestionAnswer
In the second case, the primary key would of course be SurveyID and QuestionID combined.
I don't fully know the pros and cons of the two approaches, and both look like they would work. Right now I'm using option 1, but I keep wondering if option 2 might be better. Whenever I change the questions, I currently have to drop and recreate the table (altering it is too much effort), and I know option 2 would avoid that. By the way, the questions themselves are stored in an XML file, if that matters. Anyhow, once the survey is in use, the questions shouldn't change any further. And there's also a lot I don't know about (how performance is affected, for example). For concreteness, a sketch of option 2's table is below.
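A sketch of option 2, with hypothetical column types - one row per answer, so changing the questions never requires dropping or altering the table:

    CREATE TABLE SurveyAnswer (
        SurveyID       INT           NOT NULL,
        QuestionID     INT           NOT NULL,
        QuestionAnswer NVARCHAR(500) NULL,
        CONSTRAINT PK_SurveyAnswer PRIMARY KEY (SurveyID, QuestionID)
    );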
I have a search function, which searches across many tables.
It's a pretty heavy sproc, and I'm wondering, in general, what are the best ways to reduce deadlocks? It's used a fair bit, and although I haven't noticed problems with it myself, a decent number of deadlocks are definitely showing up in the log files.
I've always assumed this is something really difficult, and avoided it like the plague.
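One common mitigation, assuming SQL Server 2005 or later (the database name is hypothetical): turn on row-versioned reads so the read-heavy search sproc stops blocking, and deadlocking with, writers:

    ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;
    -- Pre-2005 alternative inside the proc, if dirty reads are acceptable:
    -- SELECT ... FROM SomeTable WITH (NOLOCK) WHERE ...

Accessing tables in a consistent order across procedures and keeping transactions short also reduce deadlock frequency.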
I'm new to SQL Server maintenance, and I need some disk space. My database has some tables holding seven-year-old data that I can get rid of without any consequences. Could you please tell me the best course of action? I was thinking about:
1. Dropping those tables.
2. Recreating them again.
3. Running ShrinkDB on the entire DB.
However, this solution does not sound right or elegant to me. Could you please help me with that? Thanks.
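If the old data really can go, a hedged sketch (table and database names are hypothetical): truncate instead of dropping and recreating, then shrink with a free-space target:

    TRUNCATE TABLE OldHistoryTable;   -- fast, minimally logged, keeps the table
    DBCC SHRINKDATABASE (MyDb, 10);   -- shrink, leaving 10% free space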
I am running MS SQL 2005 Express and get 2-4 hacker attacks per day trying to log in as "sa" - sometimes at 37 attempts per second; one attack continued for 4 days.
Is there some setting in MS SQL 2005 to reduce this?
Can you recommend a good firewall against DDoS attacks?
Is there some legal action I can take against these people? I have their IPs; most are from the US and Canada.
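On the SQL Server side, one hedged option (SQL Server 2005 syntax; the new login name is made up) is to rename and disable 'sa' so dictionary attacks against that account cannot succeed, and use a different sysadmin login yourself:

    ALTER LOGIN sa WITH NAME = [sa_renamed_xyz];   -- pick your own name
    ALTER LOGIN [sa_renamed_xyz] DISABLE;

Moving the instance off the default port 1433 and firewalling it to known IPs also cuts most of this noise.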
The objective is to configure this on a dev machine. Selecting New Database User is the easy part, as the ***ASPNET user can be selected from the drop-down list box. What follows, and what to do next, is a myriad of choices. What needs to be done next? Is there a step-by-step document somewhere you can refer to?
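For a SQL 2000-era dev box, the usual next steps look roughly like this; the machine name, database name, and role choices are assumptions to adjust for your app:

    EXEC sp_grantlogin 'MACHINENAME\ASPNET';      -- server login
    USE MyAppDb;
    EXEC sp_grantdbaccess 'MACHINENAME\ASPNET';   -- database user
    EXEC sp_addrolemember 'db_datareader', 'MACHINENAME\ASPNET';
    EXEC sp_addrolemember 'db_datawriter', 'MACHINENAME\ASPNET';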
I'm working with SQL Server 7.0 and have created a database with objects and data in it. The database is 300 MB, and I only need about 50. How can I reduce my database to 50 MB? Is it possible? Any advice will be greatly appreciated.
My database's transaction log has grown to 1.7 GB. Can I reduce its size? I have tried shrinking the database, set the truncate-on-checkpoint option, and taken a backup afterwards, but nothing helps. Please advise.
I am using SQL 7, SP1 on NT 4. The .LDF file has grown to 1.1 GB; I ran DBCC SQLPERF(LOGSPACE), and the used portion of the log is 2%. When I run DBCC SHRINKDATABASE and DBCC SHRINKFILE, the log file does not shrink. How do I get the inactive virtual log files released back to the system? Is there a way to tell whether all the virtual log files are active and therefore preventing the file from shrinking? Any help is greatly appreciated.
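A hedged SQL 7/2000-era sketch (the logical log file name is hypothetical): force log truncation, then shrink; the active VLF has to cycle back toward the start of the file before SHRINKFILE can release space from the end:

    BACKUP LOG MyDb WITH TRUNCATE_ONLY;   -- note: breaks the log backup chain
    DBCC SHRINKFILE (MyDb_Log, 100);      -- target size in MB
    DBCC LOGINFO;                         -- rows with Status = 2 are active VLFs

If the active VLF sits at the end of the file, run some small transactions and repeat the shrink.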
I had a database comprised of different filegroups and log files spread across different hard drives. I have recently upgraded the database to SQL 7.0 on a RAID 10 volume, and I would like to consolidate all the filegroups and files, as well as the various log files, into one primary data file and one log file. How do I do that? Thanks in advance.
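A hedged sketch for folding one secondary file into the primary (MyDb and MyDb_Data2 are hypothetical; list the real names with sp_helpfile), repeated for each extra file:

    USE MyDb;
    DBCC SHRINKFILE (MyDb_Data2, EMPTYFILE);      -- migrate its pages out
    ALTER DATABASE MyDb REMOVE FILE MyDb_Data2;   -- drop the emptied file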
Can anyone suggest a method to reduce the size of one of our log devices? The DB was initially set up at 500 MB with a log size of 1 GB (a typo by the client). We would like to reduce the log to 100 MB if possible. Our environment is SQL Server 6.5 with Service Pack 3.
I am running SQL 2000 (EV) on NT 4.0. My problem is that the transaction log (*.ldf) file has reached 3 GB, and I want to reduce it to 2 GB. Can anyone help me with this?
Note: I have already backed up the transaction log with the truncate-log-after-backup option, but the size did not go down.
I'm trying to do a full-text search that also returns the adjacent words in the result, like when you do a Google search and it returns the paragraph containing the searched phrase.
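SQL Server's full-text engine does not return snippets by itself. A crude sketch, where the Docs(DocId, Body NVARCHAR) table and the fixed snippet width are assumptions: rank with CONTAINSTABLE and cut the excerpt out around the first hit:

    DECLARE @phrase NVARCHAR(100);
    SET @phrase = N'"searched phrase"';
    SELECT d.DocId,
           SUBSTRING(d.Body,
                     CASE WHEN CHARINDEX(N'searched phrase', d.Body) > 60
                          THEN CHARINDEX(N'searched phrase', d.Body) - 60
                          ELSE 1 END,
                     150) AS Snippet      -- ~60 chars either side of the hit
    FROM CONTAINSTABLE(Docs, Body, @phrase) ft
    JOIN Docs d ON d.DocId = ft.[KEY];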
IF EXISTS (SELECT name FROM sysobjects
           WHERE name = N'Fn_Get_Consensus_Curve_41_Data'
             AND type IN ('P', 'IF', 'TF', 'FN'))
    DROP FUNCTION [dbo].Fn_Get_Consensus_Curve_41_Data
GO
DECLARE @p_ENTITYID INT
DECLARE @p_CUSTOMERID INT
DECLARE @p_Login_Type INT
DECLARE @p_Result_Status INT

SET @p_Login_Type = (SELECT dbo.GET_USER_LOGIN_TYPE_ID(@p_UserID))

IF @p_Login_Type = 1 AND NOT (@p_CustId IS NULL OR @p_CustId = '')
    SET @p_Result_Status = 1
ELSE IF @p_Login_Type > 1
    SET @p_Result_Status = 2
ELSE
    SET @p_Result_Status = 0
IF @p_Result_Status > 0   -- user is valid and supplied enough parameters
BEGIN
    IF @p_Result_Status = 1   -- user is a trader and supplied a customer id
    BEGIN
        DECLARE Cur_Fetch_Curve_Cust_Data CURSOR FOR
            SELECT DISTINCT CustomerID
            FROM PricesRR PRR
            WHERE CONVERT(NVARCHAR, MatchDate, 101) = CONVERT(NVARCHAR, @p_Match_Date, 101)
              AND Sector_Id = @p_Sector_Id
              AND Location_Code = @p_Location_Code
              AND CustomerID = @p_CustId
              -- AND CustomerID <> 0
              -- AND CustomerID NOT IN (0, -1, -2, -3, -100, -200)
              AND CustomerID NOT IN (SELECT CustomerId FROM Fn_Get_PricesRR_Not_To_Include_Cust_Id('V'))
              AND ISNULL(PRR.Record_Last_Action, 'N') <> 'D'
              AND Version = dbo.GET_PRICESRR_MAX_VERSION(@p_Location_Code, @p_Sector_Id, @p_Match_Date, PRR.EntityID, @p_CustId, PRR.Date)

        DECLARE Cur_Fetch_Curve_Entity_Data CURSOR FOR
            SELECT DISTINCT EntityID
            FROM PricesRR PRR
            WHERE CONVERT(NVARCHAR, MatchDate, 101) = CONVERT(NVARCHAR, @p_Match_Date, 101)
              AND Sector_Id = @p_Sector_Id
              AND Location_Code = @p_Location_Code
              AND EntityID IN (SELECT DISTINCT Entity_Id
                               FROM Fn_Get_Allowed_Entity_List(@p_Location_Code, @p_Sector_Id, @p_Match_Date, @p_UserID))
              AND ISNULL(PRR.Record_Last_Action, 'N') <> 'D'
              AND Version = dbo.GET_PRICESRR_MAX_VERSION(@p_Location_Code, @p_Sector_Id, @p_Match_Date, PRR.EntityID, @p_CustId, PRR.Date)
    END
    ELSE IF @p_Result_Status = 2   -- user is above trader level (broker or higher)
    BEGIN
        DECLARE Cur_Fetch_Curve_Cust_Data CURSOR FOR
            SELECT DISTINCT CustomerID
            FROM PricesRR PRR
            WHERE CONVERT(NVARCHAR, MatchDate, 101) = CONVERT(NVARCHAR, @p_Match_Date, 101)
              AND Sector_Id = @p_Sector_Id
              AND Location_Code = @p_Location_Code
              -- AND CustomerID <> 0
              -- AND CustomerID NOT IN (0, -1, -2, -3, -100, -200)
              AND CustomerID NOT IN (SELECT CustomerId FROM Fn_Get_PricesRR_Not_To_Include_Cust_Id('V'))
              AND ISNULL(PRR.Record_Last_Action, 'N') <> 'D'
              -- AND Version = dbo.GET_PRICESRR_MAX_VERSION(@p_Location_Code, @p_Sector_Id, @p_Match_Date, PRR.EntityID, @p_CustId, PRR.Date)

        DECLARE Cur_Fetch_Curve_Entity_Data CURSOR FOR
            SELECT DISTINCT EntityID
            FROM PricesRR PRR
            WHERE CONVERT(NVARCHAR, MatchDate, 101) = CONVERT(NVARCHAR, @p_Match_Date, 101)
              AND Sector_Id = @p_Sector_Id
              AND Location_Code = @p_Location_Code
              AND ISNULL(PRR.Record_Last_Action, 'N') <> 'D'
              -- AND Version = dbo.GET_PRICESRR_MAX_VERSION(@p_Location_Code, @p_Sector_Id, @p_Match_Date, PRR.EntityID, @p_CustId, PRR.Date)
    END

    DELETE FROM @Temp_Curve_Submission_Data

    -----------------------------------------------
    OPEN Cur_Fetch_Curve_Cust_Data
    FETCH NEXT FROM Cur_Fetch_Curve_Cust_Data INTO @p_CUSTOMERID
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -----------------------------------------------
        OPEN Cur_Fetch_Curve_Entity_Data
        FETCH NEXT FROM Cur_Fetch_Curve_Entity_Data INTO @p_ENTITYID
        WHILE @@FETCH_STATUS = 0
        BEGIN
I'm using SQL 2000 and have a database with an 86 GB MDF and a 56 GB LDF. I shrank the LDF and it was reduced to 32 MB; however, I did not do anything to my MDF file, yet its size has been reduced to 28 GB. I just want to check: is this correct? Why is the MDF size reduced when I only shrank my LDF? Hope someone can help. Thanks.
The log of my DB is around 2 GB, and the database uses the FULL recovery option. I want to reduce the log file size because it takes up a lot of space. I do a full database backup, then back up the transaction log as well, both with the option "clear inactive entries from transaction log" checked. But after the backups, the database log is still 2 GB. What should I do to reduce the log file size? Should I use:

    DUMP TRAN databaseName WITH NO_LOG
    DBCC SHRINKDATABASE (databaseName, 30)

Is that safe to use on a production server? Peter CCH
Hi, trying to get a grip on the "join" thing :) Up until now, I always used this kind of method:

    SELECT t1.a FROM t1 WHERE t1.b IN (SELECT t2.ab FROM t2 WHERE t2.b = 0)

How can this be accomplished using joins? And if you have the time, please explain the "bits" :) Thank you. -- TH
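A sketch of the equivalent join form; the DISTINCT guards against the row duplication a join can introduce where IN cannot:

    SELECT DISTINCT t1.a
    FROM t1
    INNER JOIN t2
            ON t2.ab = t1.b   -- pair each t1 row with its matching t2 rows
           AND t2.b = 0       -- the subquery's filter moves into the join

The IN form only asks "does at least one matching t2 row exist?", while the join pairs every matching row - hence the DISTINCT.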
Hi, my database size has grown out of control and I need help with the following issues (I am very new to databases). I am storing financial tick data in one of the tables, and after two months the database has grown to 30 GB. I do not need a permanent record of this tick data after it has been processed, and I tried to remove all rows from the table (delete from Tickdata), but SQL does not take kindly to removing millions of rows and the operation seems to time out. The only solution I could come up with was to drop the table. Secondly, after managing to clear out these tables, I noticed that the database size is still 30 GB, despite 29 GB being available. Is there any way to reduce the size of the database from 30 GB? I tried the shrink database option but it does not do anything. Any ideas? Thanks.
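Two hedged sketches (the TickDate column is a hypothetical name). If no rows need to survive, TRUNCATE avoids the huge logged delete; otherwise delete in batches so each transaction stays small (SQL 2000 syntax):

    TRUNCATE TABLE Tickdata;   -- removes all rows, minimally logged, very fast

    -- or, batched delete of processed rows only:
    SET ROWCOUNT 10000;
    WHILE 1 = 1
    BEGIN
        DELETE FROM Tickdata WHERE TickDate < '20070101';
        IF @@ROWCOUNT = 0 BREAK;
    END
    SET ROWCOUNT 0;

Afterwards, DBCC SHRINKFILE on the data file should be able to hand the 29 GB of free space back to the OS.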
I have a table with a column whose values look like '123 345 678 143 648'. What I need to do is take each code value and insert it as a new record in another table. If I run Select substring(column_name,1,3) from table, it is very fast (a fraction of a second). But since I need to take each code, and the number of codes per record may vary, I am using a WHILE loop to walk through them: I declared a variable @i, and my select statement is now Select substring(column_name,@i,3) from table. Interestingly, this select statement now takes almost 2 minutes per iteration.
Why is it like this? Is there any way I can reduce the time each iteration takes?
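One likely culprit is that the WHILE loop rescans the whole table on every pass. A set-based split against a numbers (tally) table does it in one statement; here MyTable, CodeTable, and the fixed '3 characters plus 1 space' layout are all assumptions:

    -- Numbers(n) holds 1, 2, 3, ... up to the longest string length.
    INSERT INTO CodeTable (code)
    SELECT SUBSTRING(t.column_name, n.n, 3)
    FROM MyTable t
    JOIN Numbers n
      ON n.n <= LEN(t.column_name)
     AND (n.n - 1) % 4 = 0;   -- code positions 1, 5, 9, ...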
In a Data Flow Task with several OLE DB Destinations, how can I execute a stored procedure 'last', before this DFT finishes?
This stored procedure saves metadata (task name, row counts, etc.) about the DFT, and I don't want to add an Execute SQL Task after the DFT in the Control Flow.
I want to write a quick one-time query to create a new table based on an existing table. The idea is to make the new table more efficient by reducing the number of records - see the example below.
Before:

    id1  id2  country
     1    2   US
     3    4   AU
     5    6   US
     7    8   US
     9   10   PE
    11   12   PE
    13   14   US
    15   16   US
    17   18   US
    19   20   US
    21   22   US
After:

    id1  id2  country
     1    2   US
     3    4   AU
     5    8   US
     9   12   PE
    13   22   US
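A gaps-and-islands sketch, assuming SQL Server 2005 or later, a source table named Ranges, and contiguous ids (each row's id1 = the previous row's id2 + 1):

    WITH grp AS (
        SELECT id1, id2, country,
               ROW_NUMBER() OVER (ORDER BY id1)
             - ROW_NUMBER() OVER (PARTITION BY country ORDER BY id1) AS island
        FROM Ranges
    )
    SELECT MIN(id1) AS id1, MAX(id2) AS id2, country
    INTO RangesCompressed
    FROM grp
    GROUP BY country, island;

Consecutive rows with the same country get the same (country, island) pair, so each run collapses to a single row.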
I have the following problem: I need a database design with a variable number of customers, a variable number of products, and a price per product per customer. My solution looks like this:
Table Customer: CustomerID ...
Table Products: ProductID ...
Table Prices: CustomerID ProductID Price
Now my question: is it possible to get a price list with one customer and all his prices in one row? E.g.:
CustomerID ; PriceProduct1 ; PriceProduct2 ; ....
So, one price column per product and one row per customer? Can I do something like that with a SQL statement, view, or stored procedure, so that the number of columns in the result depends on the number of products and is "dynamic" - meaning that when I add a new product to the products table, a new price column is appended to the result?
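A dynamic cross-tab sketch, assuming SQL Server 2005+ and the three tables above; the column list is rebuilt from Products on every run, so a new product automatically adds a column:

    DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX);

    SELECT @cols = STUFF((SELECT ',' + QUOTENAME(CAST(ProductID AS NVARCHAR(10)))
                          FROM Products
                          ORDER BY ProductID
                          FOR XML PATH('')), 1, 1, '');

    SET @sql = N'SELECT CustomerID, ' + @cols + N'
                 FROM (SELECT CustomerID, ProductID, Price FROM Prices) p
                 PIVOT (MAX(Price) FOR ProductID IN (' + @cols + N')) pv;';

    EXEC sp_executesql @sql;

Each output column is headed by its ProductID; customers with no price for a product get NULL.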