I have a table containing URLs. I want to be able to look up a URL very
fast, so I used an nvarchar column to store the URL and put an index on it
(maybe naively).
Anyway, I bump into:
"The index entry of length 911 bytes for the index 'UQ__URL__1367E606'
exceeds the maximum length of 900 bytes."
What's the best way to handle this? I want the lookup to be fast. The
only thing I could think of was adding an extra column containing a digest
of the URL, and looking up all URLs with the same digest *and* the
same value (which would give either 1 or 0 results).
I am new to MS SQL, so I might be describing a silly solution; basically I
want to map URLs to IDs the fastest way possible.
--
John MexIT: http://johnbokma.com/mexit/
personal page: http://johnbokma.com/
Experienced programmer available: http://castleamber.com/
Happy Customers: http://castleamber.com/testimonials.html
Hi All,

Here's a challenge. If there is a 'one word' answer to this, i.e. the name of a built-in SQL Server function I can use, that would be great! However...

I have a standard 1:m relationship, e.g. tblHeader and tblDetail. I want to create a view of records from 'tblHeader' and within that view have a column (called e.g. DetailTypes) that provides a single comma separated varchar of values from a varchar field (called e.g. DetailType) that appears in related records in the 'tblDetail' table. (hope you understood that :)

e.g. the contents of my 'tblDetail' table could be....

Detail ID | Header ID | DetailType
-----------------------------------
1         | 1         | A
2         | 1         | A
3         | 1         | B
4         | 2         | A
5         | 2         | C
6         | 3         | B

Therefore, the view I want to create should return:

Header ID | DetailTypes
-------------------------------
1         | 2A, 1B
2         | 1A, 1C
3         | 1B

i.e. the first row can be read as "Header 1 has 2 'A' detail records and 1 'B' detail record."

I have created a view, e.g. 'vqryHeaders', which calls a user defined function that takes the HeaderID and opens a cursor on another view, e.g. 'vqryDetailGrouped'. The other view groups the records in tblDetail so that I can get a count of each DetailType for each Header ID. The cursor then loops through the returned records, concatenating the count and detail type into a comma separated string (as shown above).

However, when I run this across 20k records it is soooo sloooooww. I have indexes on the relationship fields and I am using realistically sized varchars - neither made any difference in speed. It is definitely the function that I wrote that slows it down, as the view is lightning fast when I remove my function call.

I can supply source code if necessary, but I think that this is a kind-of generic problem so I don't see the point - yet.

I really hope you can help.

Regards,
Jezz
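For what it's worth, on SQL Server 2005 and later the cursor can be avoided entirely; a hedged, set-based sketch using the common FOR XML PATH concatenation trick, with the table and column names from the post (tblHeader is assumed to have a HeaderID column):

SELECT h.HeaderID,
       -- one correlated subquery per header row builds the 'nA, mB' string;
       -- STUFF(..., 1, 2, '') strips the leading ', '
       STUFF((SELECT ', ' + CAST(COUNT(*) AS varchar(10)) + d.DetailType
              FROM tblDetail AS d
              WHERE d.HeaderID = h.HeaderID
              GROUP BY d.DetailType
              ORDER BY d.DetailType
              FOR XML PATH('')), 1, 2, '') AS DetailTypes
FROM tblHeader AS h;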
I have two tables that I've made from some query subsets. Each table has a varchar field with notes/memos, and I want to concatenate the fields into one long field.
The problem I'm running into is that when I run the query to check the concatenation, the field is truncated maybe 256 chars in.
I tried converting and casting the field as nvarchar(4000), and I've also done the same for the fields in the two tables, but that doesn't seem to help.
I can query for the fields from each table and none of them are truncated by themselves. It only happens after I concatenate them.
I've created a new table and inserted the results into it, but the field in it is also truncated.
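A minimal sketch of casting both sides before concatenating, in case the truncation comes from an intermediate type; the table and column names (t1.Notes, t2.Notes) are illustrative:

SELECT CAST(t1.Notes AS nvarchar(4000))
       + CAST(t2.Notes AS nvarchar(4000)) AS CombinedNotes
FROM Table1 AS t1
JOIN Table2 AS t2 ON t1.ID = t2.ID;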
I need to modify a column in an existing table in my database from varchar(2000) to varchar(max).
This table contains 30 million plus rows and has more than 70 columns.
Now when I run the ALTER command it takes too long (more than 9 minutes), which is not acceptable. Is there any way to reduce this execution time?
Following is the query I am using for this:
ALTER TABLE Receipt ALTER COLUMN CUSTOM VARCHAR(MAX) NULL
Please let me know if you have any suggestions to improve this.
I need to handle this conversion in SSIS and not in Oracle.
The following expression is executed on a datatype of DT_STR with a length of 8000.
SUBSTRING((DT_STR,8000,1252)Column_name,1,8000)
Records longer than 4000 bytes take an error path.
The next expression, with 4000 bytes, works but there is truncation.
SUBSTRING((DT_STR,4000,1252)Column_name,1,4000)
Basically I need to know how to cast a text or ntext into a varchar or nvarchar using SSIS, but I need to capture the first 8000 bytes without truncation.
Is this possible?
Using SSIS, reading from Oracle, I can convert to a text or ntext field, but I am having a hard time going directly to a varchar.
I'm seeing a problem with printing very long strings using the PRINT command on a VARCHAR(MAX) variable. After a certain number of characters the string is truncated... it looks like the limit is around 8,000 characters.
Does anyone know of a solution or a workaround for this?
I have looked far and wide and have not found anything that works to allow me to resolve this issue.
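A hedged workaround sketch: PRINT the varchar(max) value in 8,000-character chunks; the variable name @s and the sample string are illustrative:

DECLARE @s varchar(max);
SET @s = REPLICATE(CAST('x' AS varchar(max)), 20000);  -- sample long string

DECLARE @pos int;
SET @pos = 1;
WHILE @pos <= LEN(@s)
BEGIN
    PRINT SUBSTRING(@s, @pos, 8000);  -- each piece stays under PRINT's limit
    SET @pos = @pos + 8000;
END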
I am moving data from DB2 using the MS OLEDB Provider for DB2. The OLEDB source sees the column of data as DT_TEXT. I set up a destination to SQL Server 2005 and everything looks good until I try to run the package.
I get the error: [OLE DB Source [277]] Error: An OLE DB error has occurred. Error code: 0x80040E21. An OLE DB record is available. Source: "Microsoft DB2 OLE DB Provider" Hresult: 0x80040E21 Description: "Multiple-step OLE DB operation generated errors. Check each OLE DB status value, if available. No work was done.".
[OLE DB Source [277]] Error: Failed to retrieve long data for column "LIST_DATA_RCVD".
[OLE DB Source [277]] Error: There was an error with output column "LIST_DATA_RCVD" (324) on output "OLE DB Source Output" (287). The column status returned was: "DBSTATUS_UNAVAILABLE".
[OLE DB Source [277]] Error: The "output column "LIST_DATA_RCVD" (324)" failed because error code 0xC0209071 occurred, and the error row disposition on "output column "LIST_DATA_RCVD" (324)" specifies failure on error. An error occurred on the specified object of the specified component.
[DTS.Pipeline] Error: The PrimeOutput method on component "OLE DB Source" (277) returned error code 0xC0209029. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing.
Any suggestions on how I can get the large string data in the varchar column in DB2 into the varchar(max) column in SQL Server 2005?
I am trying to create a stored procedure inside the SQL Management Studio console and I keep getting errors. Here's my stored procedure:
CREATE PROCEDURE [dbo].[sqlOutlookSearch]
    -- Add the parameters for the stored procedure here
    @OLIssueID int = NULL,
    @searchString varchar(1000) = NULL
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    -- Insert statements for procedure here
    IF @OLIssueID <> 11111
        SELECT * FROM [OLissue], [Outlook]
        WHERE [OLissue].[issueID] = @OLIssueID
          AND [OLissue].[issueID] = [Outlook].[issueID]
          AND [Outlook].[contents] LIKE + ''%'' + @searchString + ''%''
    ELSE
        SELECT * FROM [Outlook]
        WHERE [Outlook].[contents] LIKE + ''%'' + @searchString + ''%''
END
And the error I kept getting is:
Msg 402, Level 16, State 1, Procedure sqlOutlookSearch, Line 18
The data types varchar and varchar are incompatible in the modulo operator.
Msg 402, Level 16, State 1, Procedure sqlOutlookSearch, Line 21
The data types varchar and varchar are incompatible in the modulo operator.
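The doubled quotes are the culprit: outside a dynamic SQL string, ''%'' parses as an empty string, the modulo operator %, and another empty string, which is exactly what Msg 402 reports. A corrected sketch of the predicate, with single quotes and without the stray + after LIKE:

AND [Outlook].[contents] LIKE '%' + @searchString + '%'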
For the life of me I cannot figure out why SSIS will not convert varchar data. Instead of using the table-to-table method, I wrote a SQL query so that I could transform the ntext datatype to varchar(512), understanding that natively MS is going towards all-Unicode applications.
The source fields from Access are int, int, int and varchar(512). The same is true of the destination within SQL Server 2005. The field 'Answer' is the varchar field in question....
I get the following error
Validating (Error)
Messages
Error 0xc02020f6: Data Flow Task: Column "Answer" cannot convert between unicode and non-unicode string data types. (SQL Server Import and Export Wizard)
Error 0xc004706b: Data Flow Task: "component "Destination - Query" (28)" failed validation and returned validation status "VS_ISBROKEN". (SQL Server Import and Export Wizard)
Error 0xc004700c: Data Flow Task: One or more component failed validation. (SQL Server Import and Export Wizard)
Error 0xc0024107: Data Flow Task: There were errors during task validation. (SQL Server Import and Export Wizard)
DTS used to be a very strong tool, but a simple import such as this is causing me extreme grief and leaving me wondering if SQL 2005 is ready for primetime. FYI, SP1 is installed. I am running this from a workstation and not on the server, if that makes a difference...
I have a table that contains a lot of demographic information. The data is usually small (<20 chars) but occasionally needs to handle large values (250 chars). Right now it's set up for varchar(max) and I don't think I want to do this.
How does varchar(max) store info differently from varchar(250)? Either way, doesn't it have to hold the container information? So the word "Crackers" has 8 characters to it, plus information saying it's 8 characters long, in both cases - meaning it takes up the same amount of space?
Also, my concern will be running queries off of it: does varchar(max) choke up queries because the fields cannot be properly analyzed? Is varchar(250) any better?
Should I just go with char(250) and watch my db size explode?
Usually the data that is 250 characters contains a lot of blank space that is removed using a SPROC, so it's not usually 250 characters for long.
My system has 2 DBs - SQL Server 2000 and DB2, at separate locations. I have a SELECT query which needs to pick up consolidated data from both tables. Also, the schema on DB2 has minor changes when compared with the schema on SQL Server 2000.
While searching on Microsoft I came across the technique of creating a linked server. Would this be possible to implement in my scenario? Also, in this case, would it be advisable to create another view on the DB2 server which maps the DB2 schema to the SQL Server schema format?
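A hedged sketch of the linked-server route; the server name, data source, and the query inside OPENQUERY are illustrative, and DB2OLEDB is the Microsoft OLE DB Provider for DB2 (your installed provider may differ):

EXEC sp_addlinkedserver
     @server = 'DB2LINK',
     @srvproduct = 'DB2',
     @provider = 'DB2OLEDB',
     @datasrc = 'MyDB2DataSource';

-- Query the DB2 side; a view there could present the SQL Server-shaped schema:
SELECT * FROM OPENQUERY(DB2LINK, 'SELECT COL1, COL2 FROM SCHEMA1.TABLE1');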
One or more files listed in the statement could not be found or could not be initialized. (Microsoft SQL Server, Error: 5009)
I accidentally created a log file on my E: drive, but every time I try to delete the log file it keeps returning the same error. Can someone please help me delete the log file?
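A hedged sketch of removing the extra file from the database itself rather than deleting it from disk; MyDB and MyDB_log2 are placeholders for the real database and the file's logical name:

USE MyDB;
EXEC sp_helpfile;  -- lists logical names; find the one for the file on E:
ALTER DATABASE MyDB REMOVE FILE MyDB_log2;  -- only succeeds once the file holds no active log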
Hi, I have over a million records in my DB. What is the best way to get results fast in case I need the details of an employee named, say, "robert"? If I do it normally it will take long; should I use an index, or is there any other good way? Thanks in advance. Cheers
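A minimal sketch of the index route, assuming an Employees table with a Name column (both names illustrative); an index on the searched column lets the lookup seek instead of scanning the million rows:

CREATE INDEX IX_Employees_Name ON dbo.Employees (Name);

SELECT *
FROM dbo.Employees
WHERE Name = 'robert';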
Hello everybody. I have a 40 GB DB running mostly transaction processing. I have set up:
1. full backup 2 times a day (takes 30-40 min)
2. log backup every 15 min
3. custom log shipping
4. we won't use a cluster
Once in a while, because of network or other problems, log shipping fails, so I have to restart log shipping all over, starting from restoring the last full backup of my DB in standby mode. It takes 2-3 hrs just to do this restore!!!
1. So I am asking for advice: is there any way I can bring down the time for the restore?
2. Should differential backups be taken?
3. We will not use a cluster.
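On question 2, differential backups would indeed shorten a re-seed, since only the last full plus the latest differential need restoring before log shipping resumes. A hedged sketch, with database name and paths illustrative:

RESTORE DATABASE MyDB FROM DISK = 'E:\Backups\MyDB_full.bak'
    WITH NORECOVERY;
RESTORE DATABASE MyDB FROM DISK = 'E:\Backups\MyDB_diff.bak'
    WITH STANDBY = 'E:\Backups\MyDB_undo.dat';  -- leaves the DB in standby mode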
Hello all!
For MS SQL 2000: I have a table with > 100 000 rows and I must clean it.

DELETE FROM myTable WHERE Name LIKE 'aser%' AND info IS NULL
DELETE FROM myTable WHERE Name LIKE 'tuyi%' AND Info = 'ok'
DELETE FROM myTable WHERE Name LIKE 'hop%' AND info LIKE 'retro%'
.....

...about 20 DELETE commands. What is the best way to do it? Thank you
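One hedged option is to fold the ~20 statements into a single pass over the table, so it is scanned once rather than twenty times; the three conditions shown are the ones from the post:

DELETE FROM myTable
WHERE (Name LIKE 'aser%' AND info IS NULL)
   OR (Name LIKE 'tuyi%' AND Info = 'ok')
   OR (Name LIKE 'hop%' AND info LIKE 'retro%');
   -- ...plus the remaining conditions, OR'ed in the same way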
Hi everyone, I'm in deep need of help with a very easy query, and I have a few questions to ask. I use MSN: boy22202@hotmail.com. Please, I want to contact anyone who uses SQL Server 2005 who can help me with it.... Thank you
I need to insert data into a temp table in SQL. I have:
CREATE TABLE TMP_X ( doc_name varchar(200) )
--select * from TMP_X
INSERT into TMP_X values ( '...,
but it's saying there isn't a match, and I know why: it's trying to insert all the data as one row, but I need them as separate rows, as I want only 1 column. Is there another INSERT-type function?
If I have a table with one column and I want to insert a few hundred rows of names, I can't use the INSERT statement as that does one row at a time. How can I achieve this?
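A minimal sketch of inserting several names as separate rows into the TMP_X table from the post; the multi-row VALUES form needs SQL Server 2008+, while the UNION ALL form works on older versions too:

-- SQL Server 2008 and later:
INSERT INTO TMP_X (doc_name)
VALUES ('name1'), ('name2'), ('name3');

-- Any version:
INSERT INTO TMP_X (doc_name)
SELECT 'name1' UNION ALL
SELECT 'name2' UNION ALL
SELECT 'name3';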
I have stupidly enough deleted my default DB. That causes Enterprise Manager to be unable to work with my DBs. The default DB I deleted had no other function than being the default DB; I mean it was outdated, and I had other DBs that contained all my important work. They are still running, and I can view a DB-driven site hosted at localhost, even though the default DB no longer exists. I am even able to upload new content or add new users, so this means all my other DBs are fine. I can even see the SQL Server icon in the bottom right corner of my desktop, and it shows the server running.
Now I need to add tables and rework some of my existing tables and stored procedures, but I am not able to do that with Enterprise Manager, due to the lack of a default database.
How do I correct this problem? I have gotten one tip: EXEC sp_defaultdb 'User', 'DB', but I am not sure what to do with this..... I tried to run it from the command line, and put in my username and the DB I would set as default, but nothing happened.
So I need more details; step-by-step guidance will work, as I don't know a whole lot about Enterprise Manager and SQL.
Btw, this is my error in Enterprise Manager:
A connection could not be established to MyComputerVSDOTNET2003
Reason: Cannot open default database. Login failed..
Please verify SQL server is running and check your SQL server registration properties and try again
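A hedged sketch of that tip: sp_defaultdb is run from Query Analyzer (or osql), not the Windows command line; 'YourLoginName' and 'master' below are placeholders - any existing database the login can access will do:

EXEC sp_defaultdb 'YourLoginName', 'master';  -- point the login at a database that still exists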
Hello everybody, please advise: what is the fastest standard method of user-interface access to a SQL database? I am looking for fast display of one master record plus related dependent records, plus fast scrolling through master records with display of dependent records as fast as possible. Perhaps a standard problem with a standard solution? At the current state of matters, I am still much slower than with my old Access97 database.
I noticed this morning that my tempdb grows very fast. I have 26 GB on my hard drive, and all the space was occupied by tempdb; finally the query failed due to 0 space on the hard drive, leaving no room for tempdb to grow. The SELECT query was supposed to bring back about 40,000 rows. I ran this same query on a different server and tempdb there didn't grow even 1 MB. I checked the tempdb options and 'trunc. log on chkpt.' is true.
Why is this problem happening? I have just dbo permission to access all the databases. Do you have any advice regarding this? Thanks, Ravi
Hello everybody. We need to move table T1 from database A to database B on the same server.
The size of table T1 is 15 GB and it has 40,000,000 rows.
Database B was just created and will act as a warehouse.
Could it be done simply by: 1. creating table T1 in db B, then 2. setting db B to simple recovery, 3. INSERT INTO B.dbo.T1 SELECT * FROM A.dbo.T1, and 4. creating all the indexes on table T1 in db B?
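A hedged variation on steps 1-3: with database B in simple recovery, SELECT ... INTO is minimally logged (a plain INSERT ... SELECT is fully logged on SQL Server 2000/2005), which can make a 15 GB copy considerably faster:

ALTER DATABASE B SET RECOVERY SIMPLE;

SELECT *
INTO B.dbo.T1        -- creates the table and copies the rows with minimal logging
FROM A.dbo.T1;

-- then create the indexes on B.dbo.T1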
I have a database about 5 GB in size. Some queries were taking more than 1 minute to complete execution (all of them are stored procedures). Because of that lack of performance, I ran the DBCC DBREINDEX command for each table, executed the sp_updatestats system stored procedure, and finally executed the sp_recompile system stored procedure for each SP in my database.
After all these tasks, queries completed in a matter of a few seconds instead of minutes. Strangely enough, some hours later (about 6 hrs) after normal use (this database belongs to a client/server information system), the problem appeared again: queries started to take too long to complete.
I am assuming that indexes are degrading so fast that they require another reindex, but I am not sure.
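For reference, a sketch of the maintenance run described above, for a single table and procedure; dbo.MyTable and dbo.MySproc are placeholders:

DBCC DBREINDEX ('dbo.MyTable');     -- rebuild the table's indexes
EXEC sp_updatestats;                -- refresh statistics database-wide
EXEC sp_recompile 'dbo.MySproc';    -- force the proc to get a fresh plan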
I insert a record into a table and "later" I update it. I have two fields to capture time information: Created and LastModified. My update is very simple: update .... set ..., [LastModifiedDate] = GetDate() where id = @pId.
Now my problem is that I am seeing the created and last-modified times as the same (in the format 2007-09-05 12:38:42.383)!?
The record has definitely been updated (other fields are populated).
There is a big table with several million records. I am developing a query that retrieves the first rowset that meets a WHERE condition. Any suggestions for a fast query? Thanks a lot.
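A minimal sketch, assuming the filtered column is indexed (table and column names illustrative); TOP lets the engine stop as soon as the requested rows are found instead of materializing the full result:

SELECT TOP 100 *
FROM dbo.BigTable
WHERE SomeColumn = 'some value';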
I have made some stored procedures to check if a user is involved with a certain record. Basically every stored procedure contains the following logic.
Example spCheckClientRelated:
select @res = count(*) from client_role where client_id = @cid and employee_id = @eid
if (@res = 0) begin ... next select end
if (@res = 0) begin ... next select end
....
return @res
end
So far so good. But the final check in spCheckClientRelated tests if a user is related to one of the sales projects for that client.
I already have the spCheckSalesProjectRelated procedure that returns 1 or 0, similar to the example above.
So I want to find an efficient method that selects all the sales_project_id's from the sales_project table where client_id = @cid (at the moment I use, of course, select @sid = sales_project_id from sales_project where client_id = @cid).
And then I have to execute spCheckSalesProjectRelated for each @sid and @eid. This is of course where my problem is located: I don't know how to do a fast check for every selected @sid until spCheckSalesProjectRelated returns 1.
As you can probably determine from my question, SQL is not really my domain and I'm certainly not an expert, but I don't mind reading or looking up some stuff, so even a clue or a direction to look in would be most appreciated.
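A hedged sketch of doing that last check as one set-based query instead of calling spCheckSalesProjectRelated per project. It assumes the project-employee relation lives in a table analogous to client_role, called sales_project_role here, which is a guess:

IF EXISTS (SELECT 1
           FROM sales_project AS sp
           JOIN sales_project_role AS r
             ON r.sales_project_id = sp.sales_project_id
           WHERE sp.client_id = @cid
             AND r.employee_id = @eid)
    SET @res = 1;   -- stops at the first match; no per-project procedure calls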
I have a very puzzling situation with a database. It's an Access 2000 mdb with a SQL 7 back end, with forms bound using ODBC linked tables. At our remote location (accessed via a T1 line) the time it took to go to a record was very slow. The go-to mechanism was a box that the user typed the index value into a combo box, with very simple code attached:

with me.RecordsetClone
    .FindFirst "[Index] = " & me.cboGoTo
    If Not .NoMatch Then
        Me.Bookmark = .Bookmark
    End If
end with

Now, one would say that going to a record is slow because I'm using .FindFirst over a T1 line. And that's what I thought. However, as I was working with the form, commenting out various sections not related to the Go To, I found that the Go To functionality changed, though I didn't modify the code.

Previously, going to a record near the end of the 50,000 record recordset took about 1-2 seconds, but going to a record near the beginning took about 20 seconds. After the form changed, going to any record in the recordset took about 1-2 seconds.

So the question remains: why did it take so long to go to a record near the beginning of the recordset, but not near the end (and the ones in the middle took an amount of time about halfway between the two), and what changed so that now the form is working fine for all records?

I've compared the changed form with the previous copy, and I don't see any differences. I've compared all code in the form module, and I've compared all form properties. The forms are identical as far as I could tell. But something happened as I was commenting/uncommenting code in the form that got rid of the problem with it taking a long time to go to some of the records.

My first thought was that something got recompiled, and now the form is fast. So I went back to the original version and changed some code and recompiled, also did a compact and repair. But it was still slow. I also tried doing an explicit decompile and then recompiled it. But it was still slow.

So this is very frustrating that the form is now working fine, but I can't see anything that's changed. If I don't see why the form is now fast, then there's no reason to believe that it might not at some point go back to being slow again. And then I'd just have to hope that something changes. It would be good to figure this out.

Any ideas as to what might have changed here to cause the form's Go To to be fast would be appreciated.

Thanks,
Neil
I am trying to implement a very fast queue using SQL Server.

The queue table will contain tens of millions of records.

The problem I have is: the more records completed, the slower it gets. I don't want to remove data from the queue because I use the same table to store results. The queue handles concurrent requests.

The status field will contain the following values:
0 = Waiting
1 = Started
2 = Finished

Any help would be greatly appreciated.

Here is a simplified script to demonstrate what has been done.

CREATE TABLE [dbo].[Queue] (
    [ID] [int] IDENTITY (1, 1) NOT NULL ,
    [JobID] [int] NOT NULL ,
    [Status] [tinyint] NOT NULL
) ON [PRIMARY]
GO

CREATE INDEX [Status] ON [dbo].[Queue]([Status]) ON [PRIMARY]
GO

CREATE PROCEDURE dbo.NextItem
    @JobID integer,
    @ID integer output
AS
SELECT TOP 1 @ID = [ID]
FROM Queue WITH (READPAST, XLOCK)
WHERE (Status = 0) AND (JobID = @JobID)
RETURN
GO
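One hedged observation on the script above: NextItem filters on both Status and JobID, but the index covers Status alone, so waiting rows belonging to other jobs must be fetched from the base table and rejected one by one. A composite index matching the full predicate (name illustrative) may be worth trying:

CREATE INDEX IX_Queue_Status_JobID ON dbo.Queue (Status, JobID);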
I seem to remember reading many moons ago about a function where you can retrieve a count of the last recordset you opened.

For example: I've got a stored procedure that returns a recordset using TOP 10, so I only get the top 10 records. I need to know the record count, but I don't want to reuse the SELECT statement because it's quite complex.

Any ideas? What does @@Count do?

Thanks in advance
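There is no @@Count, but @@ROWCOUNT returns the number of rows affected by the previous statement, which sounds like the half-remembered function; a minimal sketch, with the table and ordering illustrative:

SELECT TOP 10 * FROM dbo.MyTable ORDER BY SomeColumn;
SELECT @@ROWCOUNT AS RecordCount;   -- 10 here, or fewer if the table is smaller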