I'm setting up SQL Server 7.0 as follows:
12 GB initial data load;
heavy-transaction OLTP;
4 processors;
RAID 5;
disk space NOT an issue.
I'm trying to figure out what a reasonable initial size for both the transaction log and tempdb should be. I was thinking 100 MB for each. Is this reasonable, or could either afford to be larger?
Is there a percentage of the .mdf file that is recognized as a general rule of thumb for sizing the transaction log and/or tempdb?
Also, is there any value in assigning filegroups in a RAID 5 setup?
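For what it's worth, initial sizes can also be set explicitly and grown later; a minimal sketch (the database name, logical file names, and sizes are hypothetical placeholders, not recommendations):

    -- Set an explicit initial size for the log and for tempdb's data file.
    ALTER DATABASE MyOLTPDb
    MODIFY FILE (NAME = MyOLTPDb_Log, SIZE = 500MB)

    ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, SIZE = 200MB)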
I've officially done everything. I've uninstalled everything I can think of. I've cleaned the registry until it sparkles. I've removed all directories having anything to do with VS or SQL. I've even run the uninstall tool multiple times and been told everything is OK. However, SQL Server Express STILL tells me there are "beta versions" of stuff on my machine. Do you have any suggestions, or should I just give up in disgust?!?!?!
I have a query that searches through a 4-million-record table. The data is fed from UNIX flat files, so it is not in an optimal format for searching. So I created some views that massage the data, and then indexed them. I select from and join the original table with the view, with the NOEXPAND hint on the view. My question is: is this theory right? If I put the criteria in the join's ON clause, will it make the join cheaper than if I put it in the WHERE clause?
Example (any difference?):

    SELECT stuff1, stuff2
    FROM UglyData u
    INNER JOIN MassageTable m ON m.RecNumber LIKE '112%' AND u.ID = m.ID

versus

    SELECT stuff1, stuff2
    FROM UglyData u
    INNER JOIN MassageTable m ON u.ID = m.ID
    WHERE m.RecNumber LIKE '112%'
Does column order really matter for the Query Optimizer to pick an index?

Case 1: Say my CUSTOMER table has one composite index containing FirstName and LastName, with FirstName listed before LastName. Does the order of FirstName and LastName in the WHERE clause of a SELECT statement matter for the Query Optimizer to utilize the index?

Statement 1: SELECT * FROM CUSTOMER WHERE FirstName = 'John' AND LastName = 'Smith'
Statement 2: SELECT * FROM CUSTOMER WHERE LastName = 'Smith' AND FirstName = 'John'

Will both statements 1 and 2 use the composite index, or only statement 1?

Case 2: Say my CUSTOMER table has two single-column indexes: one on FirstName and another on LastName. For statements 1 and 2 above, which index will the Query Optimizer pick, or will it use both? How does the QO choose an index?

I have read a couple of books; some say column order matters and some say it doesn't. Which should I go with? I'm kind of confused.
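For reference, the two cases can be reproduced with something like the following sketch (index names invented):

    -- Case 1: one composite index, FirstName listed first.
    CREATE INDEX IX_Customer_First_Last ON CUSTOMER (FirstName, LastName)

    -- Case 2: two single-column indexes instead.
    -- CREATE INDEX IX_Customer_First ON CUSTOMER (FirstName)
    -- CREATE INDEX IX_Customer_Last  ON CUSTOMER (LastName)

    -- The textual order of equality predicates in WHERE does not affect the
    -- optimizer; these two produce the same plan against the composite index.
    SELECT * FROM CUSTOMER WHERE FirstName = 'John' AND LastName = 'Smith'
    SELECT * FROM CUSTOMER WHERE LastName = 'Smith' AND FirstName = 'John'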
I have a question I need to ask. Since I'm new to SQL Server, I did a bit of experimenting with SQL Server 2005 Express. I made a simple Access database file and migrated it to SQL Server, and it was a success, which is good. Now I have a real Access database file which I built for my client. Since he uses SQL Server 2000 Service Pack 4, I was wondering if the version difference would matter a lot.
There are two fields:
A1 nvarchar(30)
A2 nvarchar(800)
I know an nvarchar field is variable length, so if I store a string mystring = 'abc' in field A1 or in field A2, I think they use the same disk space. So I think it's always a good way to define a big-length nvarchar field such as A3 nvarchar(4000) for any length of string, because they always use the same disk space. Is that right?
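The storage premise is easy to check; a quick sketch (variable names invented):

    -- 'abc' stored as nvarchar uses 6 bytes (2 bytes per character)
    -- regardless of the declared maximum length.
    DECLARE @a1 nvarchar(30), @a3 nvarchar(4000)
    SET @a1 = N'abc'
    SET @a3 = N'abc'
    SELECT DATALENGTH(@a1) AS a1_bytes,  -- 6
           DATALENGTH(@a3) AS a3_bytes   -- 6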
I am trying to resize a database's initial log file from 500 MB to 2 MB. I'm using:

    ALTER DATABASE <DBNAME> MODIFY FILE (NAME = <DBLOGFILENAME>, SIZE = 2)

And I'm getting "MODIFY FILE failed. Specified size is less than current size." I tried going into the database properties and setting the log file to 2 MB, but it doesn't keep the changes.
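Note that ALTER DATABASE ... MODIFY FILE can only increase a file's size; shrinking below the current size is done with DBCC SHRINKFILE. A minimal sketch, keeping the placeholder name:

    -- Target size is in MB; only free space in the log can be released,
    -- so a log backup or truncation may be needed first.
    DBCC SHRINKFILE (<DBLOGFILENAME>, 2)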
Does column order matter when creating a table? For example, should NOT NULL columns always come before NULL columns? Should the most frequently used columns always be near the top? What about text, ntext, and image data types? Should they always appear near the end of the column order?
I have two tables that I am trying to join, and both have similar columns that I am trying to use. One is Date and the other is LOGdate. Date's format is "01/01/2000" but LOGdate's is "01/01/2000 12:41:00 AM/PM". Can I join these two tables? If yes, how do I get rid of the excess time on LOGdate?
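One common approach is to strip the time portion when comparing; a sketch, assuming hypothetical tables DayTable and LogTable with datetime columns (CONVERT style 101 renders mm/dd/yyyy with no time):

    SELECT d.*, l.*
    FROM DayTable d
    INNER JOIN LogTable l
        ON CONVERT(varchar(10), l.LOGdate, 101) = CONVERT(varchar(10), d.[Date], 101)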
SELECT b.isin, a.market, a.ccy, a.stock_mkt,
       MAX(a.year), MAX(a.month), MAX(a.day)
FROM market_table a
INNER JOIN isin_table b ON a.id = b.id
WHERE a.isin_closing_date = 0
  AND a.market IN ('XFFA', 'XFFA')
  AND b.isin_type = 'I'
  AND b.isin = 'DEX'
GROUP BY b.isin, a.market, a.ccy, a.stock_mkt
What's needed is to get the earliest record at all times, no matter the currency.
I would like to create a database that keeps track of our company's subject matter experts. I have roughed out some of the tables. I would appreciate any feedback on whether this is the right approach and whether there might be any issues when I start writing a front end (probably VB 2005).
What makes this interesting (at least for me) is that a subject has an owner and at least 1 "expert", possibly up to 3. Here is what I am thinking for tables:
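For illustration only, a hypothetical version of such a design might look like the sketch below (every name is invented; the 1-to-3 expert limit is left to the front end or a trigger):

    CREATE TABLE Person (
        PersonID  int IDENTITY PRIMARY KEY,
        FullName  varchar(100) NOT NULL
    )

    CREATE TABLE Subject (
        SubjectID int IDENTITY PRIMARY KEY,
        Title     varchar(100) NOT NULL,
        OwnerID   int NOT NULL REFERENCES Person (PersonID)
    )

    -- One row per expert assigned to a subject.
    CREATE TABLE SubjectExpert (
        SubjectID int NOT NULL REFERENCES Subject (SubjectID),
        PersonID  int NOT NULL REFERENCES Person (PersonID),
        PRIMARY KEY (SubjectID, PersonID)
    )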
If I join Table1 to Table2 with a WHERE condition, is it the same as if I joined Table2 to Table1, considering that the sizes of the tables are different? Let's assume Table2 is much bigger than Table1. I've never used MERGE, HASH joins, etc.; do any of these help in this scenario? Thank you
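For context, this is what explicit join hints look like in T-SQL; a sketch using the Table1/Table2 names from the question (the ID join column is assumed):

    -- Force a hash join; without a hint the optimizer chooses the join
    -- algorithm and input order itself, typically hashing the smaller input.
    SELECT t1.*, t2.*
    FROM Table1 t1
    INNER HASH JOIN Table2 t2 ON t1.ID = t2.ID

    -- The same hint can also be applied at the query level:
    -- SELECT ... FROM Table1 t1 JOIN Table2 t2 ON t1.ID = t2.ID OPTION (HASH JOIN)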
As I am creating the non-clustered indexes for the tables, I don't quite understand how much it really matters whether I put the columns in the index key columns or into the included columns of the index.
I am really confused about that, and I am looking forward to hearing from you. Thank you very much again for your advice and help.
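For reference, the two placements look like this sketch (table and column names are illustrative): key columns participate in the index sort order and can be seeked on, while included columns are carried only at the leaf level so the index can cover a query.

    -- OrderDate is a key column, so the index supports seeks and sorts on it;
    -- CustomerName just rides along so a query selecting it can be answered
    -- from the index alone (a "covering" index).
    CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
    ON Orders (OrderDate)
    INCLUDE (CustomerName)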
Sometimes we have some pretty serious CPU spikes, and I would like to be able to pinpoint the exact issue. I have been able, through the DMVs, to associate a session with an operating system thread. That's great if I have a batch that's causing the problem with a non-NULL sql_handle. However, how do I further narrow down what is going on with the background Service Broker tasks using the DMVs? Or am I just at the blind mercy of SQL Server at this point? What can I query to know what the heck is going on?
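A sketch of one way to walk from an OS thread down to its session and any broker activation procedure (my understanding of the DMV relationships; verify on your build):

    -- Map OS threads -> workers -> tasks -> sessions, then pick up any
    -- broker activation procedure tied to the session.
    SELECT th.os_thread_id,
           ta.session_id,
           r.command,
           bat.procedure_name
    FROM sys.dm_os_threads th
    JOIN sys.dm_os_workers w  ON w.thread_address = th.thread_address
    JOIN sys.dm_os_tasks ta   ON ta.worker_address = w.worker_address
    LEFT JOIN sys.dm_exec_requests r            ON r.session_id = ta.session_id
    LEFT JOIN sys.dm_broker_activated_tasks bat ON bat.spid = ta.session_id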
I installed SQL 2005 a while back. Then I recently found out my file system was FAT32 (I don't understand why the hardware people did this...) and I had to convert to NTFS. Naturally the SQL service no longer worked, so I uninstalled in order to reinstall. Now I can't reinstall; I keep getting this message:
native_error=5039, msg=[Microsoft][SQL Native Client][SQL Server]MODIFY FILE failed. Specified size is less than current size.
Hi folks, can anyone enlighten me here? I'm trying to use a SPROC which, when supplied with an int, looks up the table and returns certain columns from it. I'm using a SqlCommand; here's my codebehind:

    SqlCommand dataSource = new SqlCommand("retrieveData", new SqlConnection(dbConnString));
    dataSource.CommandType = CommandType.StoredProcedure;
    dataSource.Parameters.AddWithValue("id", poid);
    dataSource.Parameters.AddWithValue("title", title).Direction = ParameterDirection.Output;
    dataSource.Parameters.AddWithValue("creator", creator).Direction = ParameterDirection.Output;
    dataSource.Parameters.AddWithValue("assignee", assignee).Direction = ParameterDirection.Output;
    etc, etc...

And the SPROC:

    set ANSI_NULLS ON
    set QUOTED_IDENTIFIER ON
    GO
    ALTER PROCEDURE [dbo].[retrieveData]
        @id int,
        @title varchar(50) OUTPUT,
        @creator varchar(50) OUTPUT,
        @assignee varchar(50) OUTPUT,
        @contact varchar(50) OUTPUT,
        @deliveryCost numeric(18,2) OUTPUT,
        @totalCost numeric(18,2) OUTPUT,
        @status tinyint OUTPUT,
        @project smallint OUTPUT,
        @supplier smallint OUTPUT,
        @creationDateTime datetime OUTPUT,
        @amendedDateTime datetime OUTPUT,
        @locked bit OUTPUT
    AS
        /** SET NOCOUNT ON; **/
        SELECT [title] AS [@title],
               [datetime] AS [@creationDateTime],
               [creator] AS [@creator],
               [assignee] AS [@assignee],
               [supplier] AS [@supplier],
               [contact] AS [@contact],
               [delivery_cost] AS [@deliveryCost],
               [total_cost] AS [@totalCost],
               [amended_timestamp] AS [@amendedDateTime],
               [locked] AS [@locked]
        FROM purchase_orders
        WHERE [id] = @id;

The id being passed in is definitely not null, and is set to the value of an item I know exists. The resulting error is:
    Exception Details: System.InvalidOperationException: String[1]: the Size property has an invalid size of 0.
    Line 63: retrievePODetails.Connection.Open();
    Line 64: retrievePODetails.ExecuteNonQuery();
    [InvalidOperationException: String[1]: the Size property has an invalid size of 0.]
       System.Data.SqlClient.SqlParameter.Validate(Int32 index) +717091
    ...

Can anyone see anything I'm missing? Thanks, Ally
Using C#, SQL Server 2005, and ASP.NET 2 in a web app, I've tried removing the size from parameters of type NCHAR, NVARCHAR, and VARCHAR. I'd rather just send a string and let the size of the parameter in the SP truncate any extra chars if need be. I began getting the error below, and eventually realized it happened only with output parameters, as in the code snippet below.

    String[3]: the Size property has an invalid size of 0.

    par = new SqlParameter("@BusinessEntity", SqlDbType.NVarChar);
    par.Direction = ParameterDirection.Output;
    cmd.Parameters.Add(par);
    cmd.ExecuteNonQuery();

What's the logic behind this? Is there any way around it other than either finding out what the size should be, or assigning a size larger than would ever be needed?
Thanks, Mike Thomas
I have one database, test, with one .mdf and one .ldf file. The .mdf file size is 100 MB. For some reason I removed all the tables from that .mdf file and transferred them into a new secondary file, so all the tables have moved into the secondary file. Now I want to reduce the first .mdf file from 100 MB to 50 MB. Is that possible? It's showing 90 MB free. Please reply.
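With the tables moved to the secondary file, the usual next step is shrinking the primary file; a sketch (the logical name 'test_data' is a guess; sp_helpfile shows the real one):

    USE test
    EXEC sp_helpfile                 -- look up the .mdf's logical file name
    DBCC SHRINKFILE (test_data, 50)  -- target size is in MB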
I am getting an error running a stored procedure using the ExecuteNonQuery method. The stored procedure has an OUTPUT parameter, and the ExecuteNonQuery statement is called using SqlHelper. Error: String[18]: the Size property has an invalid size of 0.
Just wanted to know what a general rule of thumb is when determining log file space against a database's data file. We allow the data file for our database to grow 10%, unlimited. We do not allow our log file to autogrow, due to a specific and poorly written process (which we are three months into removing) that can balloon the log file size. Should it be 10% of the data file, i.e. if the data file size is 800 MB, the log file should be 80 MB? I realize there are a myriad of factors that go into file size, but a general starting point would be nice. Thanks, Jeff
Hi, I use this script that shows me the size of each table and sums all the table sizes.
    SELECT X.[name],
           REPLACE(CONVERT(varchar, CONVERT(money, X.[rows]), 1), '.00', '') AS [rows],
           REPLACE(CONVERT(varchar, CONVERT(money, X.[reserved]), 1), '.00', '') AS [reserved],
           REPLACE(CONVERT(varchar, CONVERT(money, X.[data]), 1), '.00', '') AS [data],
           REPLACE(CONVERT(varchar, CONVERT(money, X.[index_size]), 1), '.00', '') AS [index_size],
           REPLACE(CONVERT(varchar, CONVERT(money, X.[unused]), 1), '.00', '') AS [unused]
    FROM (SELECT CAST(object_name(id) AS varchar(50)) AS [name],
                 SUM(CASE WHEN indid < 2 THEN CONVERT(bigint, [rows]) END) AS [rows],
                 SUM(CONVERT(bigint, reserved)) * 8 AS reserved,
                 SUM(CONVERT(bigint, dpages)) * 8 AS data,
                 SUM(CONVERT(bigint, used) - CONVERT(bigint, dpages)) * 8 AS index_size,
                 SUM(CONVERT(bigint, reserved) - CONVERT(bigint, used)) * 8 AS unused
          FROM sysindexes WITH (NOLOCK)
          WHERE sysindexes.indid IN (0, 1, 255)
            AND sysindexes.id > 100
            AND object_name(sysindexes.id) <> 'dtproperties'
          GROUP BY sysindexes.id WITH ROLLUP) AS X
    ORDER BY X.[name]
The problem is that the sum of all the tables is not the same as the size when I make a full database backup. For example, when I run this query against my database I see a sum of 111,899 KB, which is about 111 MB, but when I do a full backup of that database, the size of the full backup is 1.5 GB. Why is that, and where does this size come from?
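For comparison, the database-level numbers can be pulled alongside the per-table sums; a quick sketch (a full backup contains all used pages in the database, not just user-table data):

    EXEC sp_spaceused        -- overall database size and unallocated space
    DBCC SQLPERF(LOGSPACE)   -- how much of each log is actually in use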
We have an application with a replicated environment set up on SQL Server 2012. Users have a replica on their machines, and they replicate to the master database. It has 3 subscriptions subscribed to the publications on the master DB.
1) We set up a replica (which uses SQL Server 2012) on a machine with no SQL Server on it. After the initial synchronization (using the replmerge tool), the mdf file has grown to 33 GB and the ldf to 41 GB. I went to SQL Server Management Studio, right-clicked, and checked the properties of the local database: the overall size is around 84 GB with little free space available.
2) We set up a replica (which uses SQL Server 2012) on a machine with SQL Server 2008 on it. After the initial synchronization (using the replmerge tool), the mdf file has grown to 49 GB and the ldf to 41 GB. In SQL Server Management Studio, the properties of the local database show an overall size of around 90 GB with 16 GB of free space available.
3) We set up a replica (which uses SQL Server 2012) on a machine with SQL Server 2012 on it. We dropped the local database, recreated it, and did the initial synchronization using the replmerge tool. The mdf file has grown to 49 GB and the ldf to 41 GB. In SQL Server Management Studio, the properties of the local database show an overall size of around 90 GB with 16 GB of free space available.
Why is it allocating the space differently? This is affecting our initial replica setup times.
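To see where the allocation differs, something like this run in each replica database may help (a sketch; size columns are stored in 8 KB pages):

    -- Allocated vs. actually used space per file of the current database.
    SELECT name,
           size * 8 / 1024                            AS allocated_mb,
           FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024 AS used_mb
    FROM sys.database_files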
Hi, I have a problem importing data from SQL Server 2000 'text' columns to SQL Server 2005 nvarchar(max) columns. I get the following error when encountering a transfer of any column that matches the above. The error is copied below.
Any help on this greatly appreciated...
ERROR : errorCode=-1071636471 description=An OLE DB error has occurred. Error code: 0x80004005.An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80004005 Description: "Unicode data is odd byte size for column 3. Should be even byte size.". helpFile=dtsmsg.rll helpContext=0 idofInterfaceWithError={8BDFE893-E9D8-4D23-9739-DA807BCDC2AC} (Microsoft.SqlServer.DtsTransferProvider)
Hi, I am using

    exec sp_helpdb
    go
    dbcc sqlperf(logspace)

for getting the database size and log size. Does this give the correct database size and log size, or is there any other way to get them by means of Query Analyzer?
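Another option in Query Analyzer is reading the file catalog directly; a sketch (size is stored in 8 KB pages):

    -- One row per file (data and log) of the current database.
    SELECT name, filename, size * 8 / 1024 AS size_mb
    FROM sysfiles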
I have a SQL Server database that is showing 2853.44 MB in size, but when I export the data into MS Access the size is less than 1 MB. Can anyone tell me how to reduce the size of my SQL Server database so that it's less than 15 MB? Thanks in advance! Rob
I have a log file that is approximately 50 GB. I backed up just the log, and the file size of the .bak is 192 GB. Why is this? Shouldn't it be closer to 50 GB?
Normally I wouldn't let the log grow this much, but we are in the process of getting a new server up and running and don't have backups going yet. They are working on getting that up and running this week.
So I did a log backup to get back some log space for now, but was concerned when I saw the size of the .bak file.
When I view the media contents of the backup device, it shows one transaction log backup with a size of 192 GB.
What is up with this? I know in SQL 2000 the log backup files were never this big; they were about the size of the log itself.
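One way to sanity-check what actually went into the device is the backup history that SQL Server records in msdb; a sketch:

    -- Recent log backups as recorded by SQL Server, in MB.
    SELECT database_name, backup_start_date,
           backup_size / 1048576.0 AS backup_mb
    FROM msdb.dbo.backupset
    WHERE type = 'L'
    ORDER BY backup_start_date DESC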