Performance Of NVARCHAR(n) For Large Strings
Jul 25, 2007
What are the performance or storage implications of using a large value for NVARCHAR? For example, if I specify an NVARCHAR(4000) column just to cope with the rare case of strings that long, but have a table full of strings 255 characters long, is the performance identical to specifying an NVARCHAR(255) column? Is there any reason NOT to specify NVARCHAR(4000) on everything, i.e., does the query optimizer use the declared length?
I know that row length, and how many rows fit in a page (8,192 bytes in SQL Server), affects performance, but my understanding is that NVARCHAR only stores the characters needed, so it wouldn't be affected unless there actually were longer strings. I wanted to know if there are any other considerations to using a large value here.
Also, how does NTEXT compare to using NVARCHAR? I noticed the documentation says NTEXT is stored off-row on separate text pages, with only a 16-byte pointer in the row (or up to 256 bytes kept in-row when the 'text in row' option is set), which sounds different from how a 4000-character NVARCHAR would be stored?
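For illustration, a minimal sketch (table and values hypothetical) showing that storage tracks the actual string, not the declared maximum:
CREATE TABLE dbo.WidthTest (Short255 NVARCHAR(255), Wide4000 NVARCHAR(4000));
INSERT INTO dbo.WidthTest VALUES (REPLICATE(N'x', 255), REPLICATE(N'x', 255));
-- Both columns report 510 bytes (255 characters at 2 bytes each), regardless of declared width.
SELECT DATALENGTH(Short255) AS ShortBytes, DATALENGTH(Wide4000) AS WideBytes FROM dbo.WidthTest;
The declared length is still not completely free: the engine estimates row size (and, in later versions, sort memory grants) from it, which is one reason not to declare NVARCHAR(4000) everywhere by default.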
View 1 Replies
Aug 2, 2006
We've got an app that is in final testing, so we can't go around forcing SqlDbType.VarChar everywhere. Is there any way to make SqlParameter default to varchar when you don't explicitly set a type for a string value? The nvarchar default is bad for our db index usage: the columns are all varchar, and the nvarchar parameter forces a CONVERT_IMPLICIT that ruins performance.
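The server-side effect is easy to reproduce; a minimal sketch (hypothetical table, and whether the seek degrades to a scan depends on the collation):
CREATE TABLE dbo.Customers (Name VARCHAR(50) NOT NULL);
CREATE INDEX IX_Customers_Name ON dbo.Customers (Name);
-- Matches the column type: a straightforward index seek.
SELECT Name FROM dbo.Customers WHERE Name = 'smith';
-- What a default SqlParameter sends: nvarchar outranks varchar in type precedence,
-- so the column side gets the CONVERT_IMPLICIT, and on SQL collations that means a scan.
SELECT Name FROM dbo.Customers WHERE Name = N'smith';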
View 1 Replies
Jul 23, 2005
Hi all, I need to store data in about 104 columns. This is problematic with MS SQL, since it doesn't support rows over 8 KB in total size. Most of the columns are of type NVARCHAR(255), which means we can't have more than 8060/(255*2), about 15, columns of this type. With a row length of more than 8 KB, SQL gives a warning that any rows over that amount will be truncated.
So far I'm seeing two possible solutions to this problem:
1. Split the data into multiple tables with the same ID column across all tables, and then join them in SELECT statements.
2. Use NTEXT instead of NVARCHAR. NTEXT's in-row length is 16 bytes because it contains a pointer to the actual value stored somewhere else. However, NTEXT doesn't support regular indexing, only searching through a Full-Text Index catalog. In this case I'll need to use "WHERE CONTAINS(columnName, 'sometext')" to perform searches, which is bearable.
I'm inclined toward #2. However, I haven't used full-text indices before and don't know their limitations. Will I run into problems with NTEXT? Is there a better solution? Thanks. -Oleg.
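For what it's worth, option 1 can stay manageable with one shared key and a view hiding the join; a minimal sketch with hypothetical names:
CREATE TABLE dbo.Wide_Part1 (
    ID INT NOT NULL PRIMARY KEY,
    Col001 NVARCHAR(255),
    Col014 NVARCHAR(255)  -- ...and the columns in between
);
CREATE TABLE dbo.Wide_Part2 (
    ID INT NOT NULL PRIMARY KEY REFERENCES dbo.Wide_Part1 (ID),
    Col015 NVARCHAR(255),
    Col028 NVARCHAR(255)  -- ...and so on for the remaining columns
);
GO
CREATE VIEW dbo.WideRow AS
SELECT p1.ID, p1.Col001, p1.Col014, p2.Col015, p2.Col028
FROM dbo.Wide_Part1 AS p1
JOIN dbo.Wide_Part2 AS p2 ON p2.ID = p1.ID;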
View 7 Replies
Aug 20, 2001
We have a few stored procedures that use the NVARCHAR datatype; this was not an issue on SQL Server 7.0, but on 2000 it becomes a big one.
For example, a query that runs for 3 minutes on SQL Server 2000 runs in 2 seconds once NVARCHAR is replaced with VARCHAR.
The biggest challenge I have deals with tables and user-defined NVARCHAR datatypes that have been bound to the tables.
How can I alter those without data corruption?
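If the user-defined types are plain aliases for NVARCHAR, one hedged approach (a sketch only, with hypothetical names, best rehearsed on a restored copy first) is to retype each bound column in place; ALTER COLUMN preserves the data:
-- Drop indexes/constraints that reference the column first, then:
ALTER TABLE dbo.MyTable ALTER COLUMN Description VARCHAR(200) NOT NULL
-- Caution: NVARCHAR to VARCHAR maps characters outside the database code page
-- to '?', so scan the data for non-ASCII values before converting.
-- Afterwards, recreate the dropped objects and rebind any defaults or rules
-- the old user-defined datatype carried.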
View 2 Replies
Jan 28, 2004
Hi all
I have a table that contains an ntext column for storing values up to a couple of Mb in size.
However, I estimate that 95% of the values stored in this ntext field will fit into an nvarchar(4000) field.
Is it worth me having both fields in the table?
i.e., for rows where the value is under 4,000 characters I would store it in the nvarchar column; otherwise I would use the ntext column.
Can anyone confirm whether this technique would improve performance, given that ntext values are stored separately from the rest of the table data?
A colleague of mine is an Oracle DBA, and he mentioned this technique is fairly commonly adopted in the Oracle world.
Thanks
Matt
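For reference, a minimal sketch of the hybrid layout described above (names hypothetical):
CREATE TABLE dbo.Documents (
    DocID     INT NOT NULL PRIMARY KEY,
    ShortBody NVARCHAR(4000) NULL,  -- populated when the value fits (the ~95% case)
    LongBody  NTEXT NULL            -- populated only for oversized values
);
-- Write side: the application (or an INSTEAD OF trigger) routes by the value's length.
-- Read side: fetch both columns and use whichever is non-NULL; the common short
-- values then stay on ordinary in-row pages instead of separate text pages.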
View 1 Replies
Feb 11, 2005
Am I right in assuming that when I have a column where all fields contain NULL, this does not increase the total data storage size of my database? Also, what kind of impact would it have on performance?
And what if I inserted "" into the varchar columns? I would think the increase in size would be marginal?
The reason I'm asking is that I want to reuse an existing table and its stored procedures for another purpose, but I only need half of the columns. Reusing them would significantly simplify application development.
View 3 Replies
Jan 25, 2008
Hi gurus, I'm creating a web application that will have a large number of tables (between 10,000 and 20,000). This is done for the sake of scalability, as tables will be moved to different database servers as the application grows, and also for performance (smaller indexes). I'm worried, though, about how having a large number of tables could affect the performance of SQL Server, as the application will start on a single database server. I tried to find resources on this on the internet but couldn't find any.
I would really appreciate if you can give me some advice and if you have any good links that would be great...
Waleed Eissa
http://www.waleedeissa.com
View 9 Replies
Dec 5, 2007
Hi,
I have a table with over 61 million records and a clustered index on an identity column (the primary key). Simple count queries take minutes to execute on this table (e.g., select count(1) from table1). I checked the statistics on the primary key, and the histogram showed the 39-millionth record as the RANGE_HI_KEY. I updated the statistics on this column and re-queried, but it still took at least 5 minutes to return the count of records in the table. There were no users on the table when I queried, and inserts into this table work fine. I have other tables in my database with 41 million records that have no such issues. Can anyone point me to the problem areas in such scenarios?
Thanks,
Harish
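If an approximate count is acceptable, the allocation metadata already holds it and returns instantly, which is also a handy cross-check while diagnosing the slow scan (a sketch using the table name from the post):
-- SQL Server 2005:
SELECT SUM(row_count) AS ApproxRows
FROM sys.dm_db_partition_stats
WHERE object_id = OBJECT_ID('dbo.table1') AND index_id IN (0, 1);
-- SQL Server 2000:
SELECT rows FROM sysindexes WHERE id = OBJECT_ID('dbo.table1') AND indid < 2;
A real COUNT still scans the narrowest index end to end, so on a 61-million-row table the usual suspects are a fragmented or very wide clustered index and cold-cache I/O rather than statistics.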
View 6 Replies
Aug 15, 2007
We have a table that is 800 GB. We are planning to rebuild the clustered index on this table onto a different filegroup. The new filegroup and the files associated with it will sit on a SAN with a 1.5 TB allocation. Does anyone have suggestions about how many files to associate with the filegroup to provide optimal performance? Apparently we could have 3 LUNs (500 GB each), so would one file on each LUN provide more performance than a single file on one LUN?
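A sketch of the three-files-on-three-LUNs layout (all names, paths and sizes hypothetical). SQL Server fills the files of a filegroup proportionally, so the rebuilt index's extents, and therefore its I/O, spread across all three LUNs:
ALTER DATABASE BigDB ADD FILEGROUP FG_Clustered;
ALTER DATABASE BigDB ADD FILE (NAME = fg_cl_1, FILENAME = 'E:\data\fg_cl_1.ndf', SIZE = 400GB) TO FILEGROUP FG_Clustered;
ALTER DATABASE BigDB ADD FILE (NAME = fg_cl_2, FILENAME = 'F:\data\fg_cl_2.ndf', SIZE = 400GB) TO FILEGROUP FG_Clustered;
ALTER DATABASE BigDB ADD FILE (NAME = fg_cl_3, FILENAME = 'G:\data\fg_cl_3.ndf', SIZE = 400GB) TO FILEGROUP FG_Clustered;
-- Rebuilding the clustered index moves the whole table to the new filegroup:
CREATE UNIQUE CLUSTERED INDEX PK_BigTable ON dbo.BigTable (ID)
    WITH (DROP_EXISTING = ON) ON FG_Clustered;
Whether three files beat one on the same SAN depends on how the LUNs map to physical spindles; if they share the same disks, the split buys manageability more than speed.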
View 1 Replies
Mar 10, 2004
I have a large number of databases (150) on my SQL Server, and when using Enterprise Manager to do administrative tasks like backup, restore, etc., it takes 1.5 hours to open the Databases folder. The server is a 2x P4 with 3 GB RAM. Any ideas on how to manage this number of dbs on the same server and instance of SQL?
Cheers!
View 2 Replies
Jul 23, 2005
Hi, I am using SQL 2000 and have a table that contains more than 2 million rows of data (and growing). Right now I have encountered two problems:
1) Sometimes when I try to query against this table, I get a SQL command timeout. I did more testing with Query Analyzer and found that the same queries would not always take about the same time to execute. Could anyone please tell me what would affect the speed of a query, and which factor is the most important? (I can think of open connections, the server's CPU/memory...)
2) I am not sure whether 2 million rows is considered a lot or not; however, it is starting to take 5-10 seconds to finish some simple queries. I am wondering what the best practices are for handling this amount of data while keeping decent performance.
Thank you, Charlie Chang [Charlies224@hotmail.com]
View 5 Replies
Dec 19, 2006
Hi,
I'm testing mirroring.
1) I have a dedicated NIC for mirroring (100 Mb).
There is no issue with the network (a 25 MB file transfers in 2.5 seconds).
2) I'm issuing the following simple command:
Select * into dbo.Table2 from dbo.Table1
3) The size of the table is 25 MB.
In async mode it takes 3-4 seconds (as if it runs only locally).
In sync mode it takes 25-29 seconds!
Why? Is this normal?
Is there any configuration I can change?
View 5 Replies
May 10, 2006
Hello everybody,
I've got a little problem which I've been trying to solve for 1-2 years, and I hoped it would go away with SQL 2005, but that wasn't the case :(.
Situation:
I've just bought a new Server containing:
SQL 2005
64-bit environment
4 GB RAM
2x AMD Opteron 2 GHz processors (dual core)
2x RAID Controllers (RAID 1) containing
1.1 System
1.2 Data
2.1 Transaction Logs
I've created a full-text table containing all the search terms I need to search.
Table build:
RecID - int - Primary Key
SrcID - varchar(30)
ArticleID - int - referring to an original table
SearchField - varchar(150) - Containing the search terms
timestamp - timestamp field
Fulltext index:
RecID as Primary Key
SearchField as indexed field - Wordbreaker: Neutral (containing several languages), Accent sensitivity off
Now I've got different tables imported here, resulting in a table size of ~13 million rows.
There is no problem with performance on this catalog if I search a term which isn't contained in more than 200-300 records, but if I search for a term which could occur in 200,000 or more it gets extremely slow.
On the slow query the first records come in after no time, but up to 60 seconds pass before the query finishes.
The problem is that I have to sort by a ranking value which is stored externally, so I need all results in order to sort them...
current (debugging) query:
SELECT ArticleID FROM fullTextTable AS ft INNER JOIN CONTAINSTABLE(FullTextCatalog,SearchField,'"term*"') AS ftRes ON ftRes.[KEY]=ft.idEntry
Now if I check the performance monitor: as soon as I run the query, the 'Avg. Disk Read Queue Length' counter on disk D (SQL data files) jumps to the top until the query has finished.
There is almost no read/write activity on C: where the full-text catalog is stored...
If I rerun the query after it has finished once successfully, it takes 1-2 seconds; it would be nice to get that result in the first place :).
Does anybody know a workaround to this problem?
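Two hedged observations. The cold-versus-warm timing (60 seconds on first run, 1-2 seconds after) says the dominant cost is physical I/O against the data files, so more RAM for the buffer cache or faster data disks attack the root cause. And if a bounded candidate set is ever acceptable, CONTAINSTABLE accepts an optional top_n_by_rank argument that caps the rows the full-text engine hands back; a sketch using the names from the posted query:
SELECT ArticleID
FROM fullTextTable AS ft
INNER JOIN CONTAINSTABLE(FullTextCatalog, SearchField, '"term*"', 5000) AS ftRes
    ON ftRes.[KEY] = ft.idEntry
Since the ranking lives outside SQL Server, the cap only helps if a full-text rank cutoff is an acceptable pre-filter before the external sort.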
View 9 Replies
Jul 20, 2005
I am having performance issues with a SQL query in Access. My query is accessing and joining several tables (one very large one). The tables are linked via ODBC, and the client submits the query to a server several states away. It appears the query is retrieving gigabytes of data from the table and processing the joins on the client. Is there a way to perform more of the work on the server, thereby minimizing the amount of extraneous table data moving across the network and improving performance (woefully slow, about 6 hours)?
View 3 Replies
Dec 5, 2007
I have a query that joins two large partitioned tables and depending on the values in the where clause, I can get dramatically different performance results.
The first query completed in around 7s and has 47,000 logical reads.
select mo.monitor_id,
mo.site_id,
mo.testtime,
sum(mo.NumBytes),
sum(mo.DNSTime),
sum(mo.ConnectTime),
sum(mo.FirstByteTime),
sum(mo.ContentTime),
sum(mo.RelocTime)
from monitor_raw mr(nolock), monitor_object mo(nolock)
where mr.monitor_id in (5339, 5341, 5342, 943842, 943866)
and mr.testtime between 'Oct 31 2007 3:00:00:000PM' and 'Nov 30 2007 3:00:00:000PM'
and mo.returncode = 200
and mr.site_id in (101,102,105,109,110,112,115,117,119,122,126,151,132,139,129,135,121,138,143,142,159,148,128,171,176,177,178,111,113,116,118,120,127,133,131,130,174,179,185,205,200,202,203,204,210,211,208,209,212,213,216,199,214,224,225,229,230,232,235,241,245,247,250,254,261,267,264,265,266,268,269)
and mr.escalationlevel = 0
and mr.monitor_id = mo.monitor_id
and mr.testtime = mo.testtime
and mr.site_id = mo.site_id group by mo.monitor_id, mo.site_id, mo.testtime
The second query takes 188s to complete and has 1.8m logical reads. The only difference between the two is the value of the monitor_ids in the where clause.
select mo.monitor_id,
mo.site_id,
mo.testtime,
sum(mo.NumBytes),
sum(mo.DNSTime),
sum(mo.ConnectTime),
sum(mo.FirstByteTime),
sum(mo.ContentTime),
sum(mo.RelocTime)
from monitor_raw mr(nolock), monitor_object mo(nolock)
where mr.monitor_id in (152682, 5339, 5341, 5342, 268080)
and mr.testtime between 'Oct 31 2007 3:00:00:000PM' and 'Nov 30 2007 3:00:00:000PM'
and mo.returncode = 200
and mr.site_id in (101,102,105,109,110,112,115,117,119,122,126,151,132,139,129,135,121,138,143,142,159,148,128,171,176,177,178,111,113,116,118,120,127,133,131,130,174,179,185,205,200,202,203,204,210,211,208,209,212,213,216,199,214,224,225,229,230,232,235,241,245,247,250,254,261,267,264,265,266,268,269)
and mr.escalationlevel = 0
and mr.monitor_id = mo.monitor_id
and mr.testtime = mo.testtime
and mr.site_id = mo.site_id group by mo.monitor_id, mo.site_id, mo.testtime
The two tables have clustered indexes on monitor_id, testtime and site_id. Comparing the execution plans, I can see why there is such a difference in performance. The second query performs a clustered index seek on the monitor_object table from the lowest monitor_id, testtime & site_id through the highest monitor_id, testtime & site_id. The first query performs a clustered index seek where the monitor_id, testtime and site_id equal the values from the monitor_raw table.
My question is, how can I force the second query to use the same execution plan as the first so that I can get better performance?
One possible workaround that I could use is to execute five individual queries, one for each monitor_id and then union the results together but this would require significant code changes to my stored procs.
Thanks,
Tim
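One hedged option short of five separate queries: a join hint pins the strategy the fast plan chose, so the engine seeks into monitor_object once per qualifying monitor_raw row instead of range-scanning. A sketch of the change (only the FROM/JOIN portion shown; all other predicates and the GROUP BY stay exactly as before):
select mo.monitor_id, mo.site_id, mo.testtime, sum(mo.NumBytes), ...
from monitor_raw mr(nolock)
inner loop join monitor_object mo(nolock)
    on mr.monitor_id = mo.monitor_id
    and mr.testtime = mo.testtime
    and mr.site_id = mo.site_id
where mr.monitor_id in (152682, 5339, 5341, 5342, 268080)
...
group by mo.monitor_id, mo.site_id, mo.testtime
A gentler variant is appending OPTION (LOOP JOIN) to the original query, which constrains the join type without also freezing the join order the way an inline hint does.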
View 5 Replies
Mar 5, 2008
Dear all,
I'm using SQL Server 2005 Standard Edition.
I have the following stored procedure that is executed against two tables, RecordedCalls and RecordedCallsTags.
The table RecordedCalls has more than 10,000,000 records and RecordedCallsTags has about 7,500,000 records.
The lines marked in baby blue below are dynamic (a dynamic WHERE statement) and vary every time this stored procedure is executed; the condition may contain 7 columns, or 10, or 2, etc.
Now I want to create non-clustered indexes on the columns used in the WHERE statement, but the DTA suggests different indexing whenever the WHERE statement changes.
So what is the right way to create the indexes: one index on all the columns at once, or separate indexes on each column? Sometimes the DTA suggests 5 columns together when I'm using 5 conditions. I can't accumulate all the possible indexes, since the WHERE statement always varies from situation to situation. Below is the SP:
CREATE TABLE #tempLookups (ID int identity(0,1),Code NVARCHAR(100),NameE NVARCHAR(500),NameA NVARCHAR(500))
CREATE TABLE #tempTable (ID int identity(0,1),TypesCount INT,CallsType NVARCHAR(50))
INSERT INTO #tempLookups SELECT Code, NameE, NameA FROM lookups WHERE [Type] = 'CALLTYPES' ORDER BY Ordering ASC
INSERT INTO #tempTable SELECT COUNT(DISTINCT(RecordedCalls.ID)) As TypesCount,RecordedCalls.CallType as CallsType
FROM RecordedCalls LEFT OUTER JOIN RecordedCallsTags ON RecordedCalls.ID = RecordedCallsTags.CallID
WHERE RecordedCalls.ID <= '9369907'
AND (RecordedCalls.CallDate BETWEEN cast ('01 Jan 1910 00:00:00:000' as datetime ) AND cast ( '01 Jan 2210 00:00:00:000' as datetime ))
AND (RecordedCalls.Duration BETWEEN 0 AND 1000000)
AND RecordedCalls.ChannelID NOT IN('62061','62062','62063','62064','64110','64111','64112','64113','64114','69860','69861','69862','69863','69866','69867','69868')
AND RecordedCalls.ServerID NOT IN('2')
AND RecordedCalls.AgentID NOT IN('1000010000')
AND (RecordedCallsTags.TagID is null OR RecordedCallsTags.TagID NOT IN('100','200'))
AND RecordedCalls.IsDeleted='false'
GROUP BY RecordedCalls.CallType
SELECT IsNull(#tempTable.TypesCount, 0) AS TypesCount, CASE('English')
WHEN 'Arabic' THEN #tempLookups.NameA
ELSE #tempLookups.NameE
END AS CallsType FROM
#tempTable RIGHT OUTER JOIN #tempLookups ON #tempTable.CallsType = #tempLookups.Code
DROP TABLE #tempLookups
DROP TABLE #tempTable
Thanks all,
Tayseer
Any suggestions on how to create efficient indexes?
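A hedged middle ground instead of one DTA index per WHERE variant: key a single nonclustered index on the columns present in every variant (equality predicates first, then the range), and carry the rest as included columns so most variants are at least covered. A sketch using the columns from the procedure above (SQL 2005 syntax; pick key columns from your own most common filters):
CREATE NONCLUSTERED INDEX IX_RecordedCalls_Search
ON dbo.RecordedCalls (IsDeleted, CallDate)
INCLUDE (CallType, Duration, ChannelID, ServerID, AgentID);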
View 2 Replies
Jul 20, 2007
Recently we added a new table into our SQL2000 database specifically to store scanned in images of documents. This new table contains a PK field, a couple of datetime fields, a couple of char(1) fields and one 'image' field.
Before adding this table, the database size was approx 6GB. Six months after adding this new table, the database has grown to 18GB - 11GB of this is due to the scanned in images.
Would this new table affect SQL performance with regard to accessing other data in the database that has nothing to do with the new table?
If so, would moving this new table into its own database be recommended?
Thanks
Rod
View 1 Replies
Aug 25, 2004
Hi,
We currently have a fairly new SQL Server 2000 db (currently about 18 MB in size) as the backend to an application (Navision). Performance seems to be below what it should be.
The db is increasing quite rapidly in size, with a lot of data scheduled to be loaded onto the db, and also more and more shops and users coming onto the system with a lot more transactions going into the db.
The initial setup has the database file properties set to "Automatically grow file" by 30%, with unrestricted file growth.
The server the db sits on is high spec and has very large disk space.
Because the database will be expanding a lot, it will regularly reach its maximum space allocation and then perform a 30% increase in size, which I guess affects performance quite a bit.
Is it best to set the initial size of the db a lot bigger in the first place, as we have large disk space available, and also to set the growth increment bigger?
Any advice on best performance would be much appreciated.
Regards,
David
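For what it's worth, a sketch of that change (the logical file names are hypothetical; check yours with sp_helpfile): pre-size the files once, then grow by a fixed amount so each growth event stays small and predictable. ALTER DATABASE changes one file property per statement:
ALTER DATABASE NavisionDB MODIFY FILE (NAME = NavisionDB_Data, SIZE = 20480MB);
ALTER DATABASE NavisionDB MODIFY FILE (NAME = NavisionDB_Data, FILEGROWTH = 512MB);
-- repeat for the log file, e.g. SIZE = 4096MB and FILEGROWTH = 256MB
Percentage growth compounds as the file gets bigger, and SQL 2000 zero-initializes every new chunk, so a 30% grow on a large file can stall transactions noticeably; a fixed increment keeps that pause bounded.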
View 1 Replies
Mar 12, 2008
Hi all
I have a large log table with a large amount of data (one month only). If I run a query like SELECT * FROM <table_name>, the server goes very, very slow.
Because of the large amount of data the system slows down.
Can somebody help me with suggestions on how to get good performance?
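A hedged sketch of the two usual fixes (table and column names hypothetical): index the time column, then ask only for the rows and columns you need instead of SELECT *:
CREATE INDEX IX_LogTable_LogTime ON dbo.LogTable (LogTime);
SELECT TOP 1000 LogTime, Severity, MessageText
FROM dbo.LogTable
WHERE LogTime >= '20080301' AND LogTime < '20080311'
ORDER BY LogTime DESC;
The date-range predicate lets the new index seek straight to the slice of the month being examined instead of scanning the whole table.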
View 4 Replies
Dec 12, 2007
Hi
We are using the SQL Server 2005 Full Text Service. The data is not huge, but it is the kind of data where each record is small and there are a large number of records. There are 35 million records now, with 11 GB of data and about 1.6 GB of FT catalog on the table, and this is expected to grow to at least 10 times this size. The issue is that FTS takes a long time to return results when the number of hits (rows) returned is large; for some searches it takes a very long time, while full-text queries for less common words against the same data and catalog return promptly. The nature of the problem doesn't allow us to return only top results; we need all the results. So it's not about the size of the data but the number of results returned from FT (as the catalog is inverted). The machine is a dual processor with 4 GB RAM.
I am considering splitting the table, and hence the catalog, and using multiple servers to do full-text searches in smaller catalogs. Is there any other way this issue can be solved?
If splitting is the only way, can you give me an idea of a statistical/standard limit on the number of search results / catalog size at which FTS still gives good results?
Thanks in advance
View 1 Replies
Jul 4, 2006
Hi,
We are processing 6,000,000 rows (a 2 GB file) available in a flat file and loading them into database tables using OLE DB destination components. In the data pipeline of an SSIS package we have 1 flat file source reader, 7 lookup components (full cache mode), 1 multicast component and 2 OLE DB destinations with the fast load option.
We have observed that the first 1,000,000 rows are processed and loaded into the target tables in just 4 minutes. The second set of 1,000,000 rows is processed in 15 minutes. After this, SSIS takes approximately 8-10 minutes to process each 100,000 rows. We are not able to identify the reason for this unexpected behaviour of SSIS.
We thought that, as the input file size is 2 GB, SSIS was not able to manage it and was slowing down over the course of execution. We split the big input file into 60 small files of about 37 MB each. Then we modified the package by adding a For-Each Loop task to process all 60 small files and load them into the database server sequentially. Even with this approach, data loading slowed down drastically after processing 13 files.
To verify whether the problem was with reading the source file or with a transformation, we replaced the OLE DB destination components with flat file destinations. With flat file destinations the time taken to process rows is very constant: every 8 minutes the package processes 1,000,000 rows and writes them to the destination files. So there is no problem with either the lookup components or the flat file source reader.
We are sure the target database server is in the same state/condition from the start to the end of package execution. The client box on which we are running the package has 1 GB RAM. During package execution the CPU usage is at 30% and PF usage is 580 MB. SP1 is also installed on both client and server.
Does anyone have a clue what is causing the slowdown of the data load over the time of package execution?
View 3 Replies
Feb 19, 2007
I have a whole bunch of bit fields in a SQL database, which makes it a little messy to report on.
I thought a nice idea would be to assign a text string/null value to each bit field and concatenate all of them into a result.
The basic logic goes something like this:
select case when new_accountant = 1 then 'acct/' end +
       case when new_advisor = 1 then 'adv/' end +
       case when new_attorney = 1 then 'atty/' end as String
from new_database
The output would be
Null, acct/, adv/, atty/, acct/adv/, acct/atty/... acct/adv/atty/
So far, nothing I have tried has worked.
Any ideas?
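The likely culprit: with CONCAT_NULL_YIELDS_NULL on (the default), any CASE branch that falls through to NULL turns the entire concatenated string NULL. A sketch that wraps each branch, assuming the column names above:
SELECT ISNULL(CASE WHEN new_accountant = 1 THEN 'acct/' END, '')
     + ISNULL(CASE WHEN new_advisor = 1 THEN 'adv/' END, '')
     + ISNULL(CASE WHEN new_attorney = 1 THEN 'atty/' END, '') AS String
FROM new_database
Rows with no flags set come back as an empty string rather than NULL; wrapping the whole expression in NULLIF(..., '') restores the NULL if that is wanted.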
View 2 Replies
Jan 10, 2008
Hi, I am running the method below, which returns this error: "The parameterized query '(@contactdate nvarchar(4000),@dnbnumber nvarchar(4000),@prospect' expects the parameter '@futureopportunity', which was not supplied". Please help.
Private Shared Sub InsertData(ByVal sourceTable As System.Data.DataTable, ByVal destConnection As SqlConnection)
    ' old method: lots of INSERT statements
    Dim rowscopied As Integer = 0
    ' first, create the insert command that we will call over and over:
    destConnection.Open()
    Using ins As New SqlCommand("INSERT INTO [tblAppointmentDisposition] ([contactdate], [dnbnumber], [prospectname], [businessofficer], [phonemeeting], [followupcalldate2], [phonemeetingappt], [followupcalldate3], [appointmentdate], [appointmentlocation], [appointmentkept], [applicationgenerated], [applicationgenerated2], [applicationgenerated3], [comments], [newaccount], [futureopportunity]) VALUES (@contactdate, @dnbnumber, @prospectname, @businessofficer, @phonemeeting, @followupcalldate2, @phonemeetingappt, @followupcalldate3, @appointmentdate, @appointmentlocation, @appointmentkept, @applicationgenerated, @applicationgenerated2, @applicationgenerated3, @comments, @newaccount, @futureopportunity)", destConnection)
        ins.CommandType = CommandType.Text
        ins.Parameters.Add("@contactdate", SqlDbType.NVarChar)
        ins.Parameters.Add("@dnbnumber", SqlDbType.NVarChar)
        ins.Parameters.Add("@prospectname", SqlDbType.Text)
        ins.Parameters.Add("@businessofficer", SqlDbType.NChar)
        ins.Parameters.Add("@phonemeeting", SqlDbType.NVarChar)
        ins.Parameters.Add("@followupcalldate2", SqlDbType.NVarChar)
        ins.Parameters.Add("@phonemeetingappt", SqlDbType.NVarChar)
        ins.Parameters.Add("@followupcalldate3", SqlDbType.NVarChar)
        ins.Parameters.Add("@appointmentdate", SqlDbType.NVarChar)
        ins.Parameters.Add("@appointmentlocation", SqlDbType.NVarChar)
        ins.Parameters.Add("@appointmentkept", SqlDbType.NVarChar)
        ins.Parameters.Add("@applicationgenerated", SqlDbType.NVarChar)
        ins.Parameters.Add("@applicationgenerated2", SqlDbType.NVarChar)
        ins.Parameters.Add("@applicationgenerated3", SqlDbType.NVarChar)
        ins.Parameters.Add("@comments", SqlDbType.Text)
        ins.Parameters.Add("@newaccount", SqlDbType.NVarChar)
        ins.Parameters.Add("@futureopportunity", SqlDbType.NVarChar)
        ' and now, do the work. Note: there are 17 parameters, so the inner loop
        ' must run 0 To 16; the original 0 To 15 never set @futureopportunity,
        ' which is exactly the error being reported.
        For Each r As DataRow In sourceTable.Rows
            For i As Integer = 0 To 16
                ins.Parameters(i).Value = r(i)
            Next
            ins.ExecuteNonQuery()
            'If System.Threading.Interlocked.Increment(rowscopied) Mod 10000 = 0 Then
            '    Console.WriteLine("-- copied {0} rows.", rowscopied)
            'End If
        Next
    End Using
    destConnection.Close()
End Sub
View 6 Replies
Sep 4, 2007
I have had this question for quite a long time.
It seems the latter one doesn't take any more storage space than the former one, as long as the real string length is less than 10.
Does that mean the latter one doesn't cost anything?
I once heard the difference shows up when they are in memory, but I am not sure of it.
Can anyone explain it and provide some official reference on it?
Thanks.
View 6 Replies
Sep 12, 2004
1. Use the SQL Server Agent service to run the schedule
2. Use a .NET Windows service with timers to call SqlClientConnection
Of the above, which way would be faster and give better performance?
View 2 Replies
Jun 23, 2006
Hello everyone, I have a very complex performance issue with our production database. Here's the scenario: we have a production web server and a development web server, both running SQL Server 2000.
I encountered various performance issues on the production server with a particular query. It would take approximately 22 seconds to return 100 rows, about 0.22 seconds per row. Note: I ran the query in single-user mode. So I tested the query on the development server by taking a backup (.dmp) of the database and moving it onto the dev server. I ran the same query and found that it ran in less than a second.
I took a look at the query execution plans and found that they were exactly the same in both cases. Then I took a look at the various indexes, and again I found no differences in the table indices.
If both databases are identical, I'm assuming the issue is related to some external hardware factor (disk space, memory, etc.) or to OS/software issues such as service packs or SQL Server configuration.
Here's what I've done to rule out some obvious hardware issues on the prod server:
1. Moved all extraneous files to a secondary hard drive to free up space on the primary hard drive. There is 55 GB of free space on the disk.
2. Applied the SQL Server SP4 service pack
3. Defragmented the primary hard drive
4. Applied all Windows Server 2003 updates
Here are the prod server's system specs: 2x Intel Xeon 2.67 GHz, 2 GB total physical memory (815 MB available), Windows Server 2003 SE w/ SP1.
Here are the dev server's system specs: 2x Intel Xeon 2.80 GHz, 2 GB DDR2 SDRAM, Windows Server 2003 SE w/ SP1.
I'm not sure what else to do; the query performance is an order of magnitude different and I can't explain it. To me it is a hardware or operating system related issue. Any ideas would help me greatly! Thanks, Brian T
View 2 Replies
Aug 22, 2002
I have a table that looks like the example below. I need to return the tindex and the entire description on one row. Any clues? I'm drawing a blank.
Thanks for your help.
tindex tdline description
1234 1 Talk to Mr. Cartwright about
1234 2 new issues with patent law. Conferece
1234 3 call to discuss payment of past bills.
I need to see
1234,Talk to Mr. Cartwright about new issues with patent law. Conferece call to discuss payment of past bills.
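On SQL 2000 the usual trick is concatenating into a variable in tdline order; a minimal sketch against the table above (the table name dbo.TimeEntries is a stand-in, and the engine does not strictly guarantee concatenation order with this pattern, though it behaves on simple plans):
DECLARE @desc VARCHAR(8000)
SET @desc = ''
SELECT @desc = @desc + description + ' '
FROM dbo.TimeEntries
WHERE tindex = 1234
ORDER BY tdline
SELECT 1234 AS tindex, RTRIM(@desc) AS description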
View 4 Replies
Jul 18, 2007
Hi everyone.
Is it possible to put in a string value as one of the results? I'm trying to produce a string in the data table if the value is null, so I want to do something like:
iif(somevalue is nothing, 'Other', somevalue)
Thank you in advance.
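If the substitution can live in the dataset query rather than the report expression, plain T-SQL handles it (a sketch; the column and table names are stand-ins):
SELECT ISNULL(somevalue, 'Other') AS somevalue
FROM dbo.SomeTable
If this is a Reporting Services expression, the equivalent test is IsNothing() with double-quoted string literals: =IIF(IsNothing(Fields!somevalue.Value), "Other", Fields!somevalue.Value).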
View 1 Replies
Nov 11, 2006
I am trying to develop a web site. I have a local MS SQL database on my machine.
I am trying to connect to an MS SQL database on a GoDaddy server from the application.
I am trying to understand the connection string and all of its properties.
Here is what I think should be in my web.config file:
<connectionStrings>
  <add name="Personal"
       connectionString="Server=whsql-v12.prod.mesa1.secureserver.net; Database=DB_XX10; User ID=myID; Password=myypassword; Trusted_Connection=False"
       providerName="System.Data.SqlClient" />
  <remove name="LocalSqlServer"/>
</connectionStrings>
Can someone please tell me where I am going wrong? Thanks for your help.
View 7 Replies
Feb 7, 2007
I am using query strings to pass data from web form to web form, and I have two questions. First, if I use an asp:SqlDataSource to fill up a GridView and I have my select command set to a parameter that gets whatever is in the query string, it will not work, because whatever is in the query string gets a ' put in front of and behind it. So if the query string was 5, when it runs the SQL statement it sets my parameter = '5', not just 5, so it won't work. How can I fix this using the asp:SqlDataSource? My aspx code looks like:
<asp:SqlDataSource ID="SqlDataSource1" runat="server" ConnectionString="<%$ ConnectionStrings:Rental PropertiesConnectionString %>"
SelectCommand="SELECT * FROM [APARTMENTS] WHERE ([PROPERITY_ID] = @PROPERITY_ID)">
<SelectParameters>
<asp:QueryStringParameter Name="PROPERITY_ID" QueryStringField="key" Type="Int32" />
</SelectParameters>
</asp:SqlDataSource>
Also, since I have not been able to get around this, I have been writing code in VB.NET to attach a dataset to a GridView and populate it based on the query string. I would do the following in VB.NET to get rid of the ' in front of and behind the query string:
Dim y As String = "'" ' a string holding the single-quote character
key = Request.QueryString("key").Trim(y.ToCharArray())
But now I am doing another project in C# and have re-written the above code in C#, and it would not take the ' out from in front of or behind key. The catch is that C# is case-sensitive, so the method names must be capitalized:
string y = "'";
// trim/tochararray in lowercase will not compile in C#; Trim and ToCharArray are required.
key = Request.QueryString["key"].Trim(y.ToCharArray());
View 3 Replies
Aug 28, 2007
How can I make a sum (concatenation) of the strings in one column of a table?
For example, I have a table like this:
field1 field2
1 abc
1 bcd
2 sdf
2 sdd
I want to get these strings concatenated, grouped by field1.
Thanks
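On SQL Server 2005 the usual approach is FOR XML PATH with an empty element name; a minimal sketch, assuming the table is named dbo.t with the columns shown above:
SELECT field1,
       STUFF((SELECT '+' + field2
              FROM dbo.t AS x
              WHERE x.field1 = t.field1
              ORDER BY x.field2
              FOR XML PATH('')), 1, 1, '') AS field2_concat
FROM dbo.t AS t
GROUP BY field1;
This returns one row per field1 value, e.g. 1 with "abc+bcd" and 2 with "sdd+sdf".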
View 5 Replies