Problem About Calculating Wide-Char Strings' Character Count
Jan 8, 2008
It's like this: I have a temporary table such as
create table temp_table (str varchar(50))
and I have a data table
create table data_table (str varchar(20))
Now I import my data (which contains some corrupted lines) into the temporary table. The strings should all be ANSI characters and no more than 20 characters long, but some bad rows containing wide characters are mixed in. Because of those wide characters, each corrupted string takes more than 20 bytes, yet I can't filter them out: when I use len(str), SQL Server returns the character count instead of the byte count. I thought this only happened with Unicode data types (e.g. nvarchar), but the server behaves the same way on ANSI types. It seems every string function that deals with length behaves like this.
So when I try to run:
insert into data_table select str from temp_table where len(str) <= 20
or
insert into data_table select left(str,20) from temp_table
it always ends up with a string truncation error:
String or binary data would be truncated. The statement has been terminated
So my problem is: how do I get the byte count, rather than the character count, of a string containing wide characters?
I'm using SQL Server 2005 Standard Edition with SP2.
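For what it's worth, a minimal sketch assuming the requirement is that each value fit in 20 bytes: DATALENGTH() returns the storage size in bytes (so a wide character stored in a varchar column counts as 2), while LEN() returns characters.

-- rows that really fit in varchar(20), measured in bytes
insert into data_table
select str
from temp_table
where datalength(str) <= 20

-- the corrupted rows, for inspection
select str, len(str) as char_count, datalength(str) as byte_count
from temp_table
where datalength(str) > 20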
Hi, I have a Visual C++ app that uses ODBC to access a SQL Server database. I do a select to get the value of a binary field and bind a char buffer to that field as follows (the field in the database is binary(16)):
char lpResourceID[32+1];
rc = SQLBindCol(hstmt, 1, SQL_C_CHAR, &lpResourceID, RESOURCE_ID_LEN_PLUS_NULL, &nLen1);
and this works fine. However, while moving the codebase to Unicode I tested the following:
WCHAR lpResourceID[32+1];
rc = SQLBindCol(hstmt, 1, SQL_C_WCHAR, &lpResourceID, RESOURCE_ID_LEN_PLUS_NULL, &nLen1);
but it only returns half the data. Any ideas? I thought this would work fine and I'm not sure why I'm losing data. All ideas welcome. John
A bug in my front end application inserted a new line character into string values that were saved to a varchar field in the database, e.g. "This is a string" was modified to contain an embedded new line character.
1) I want to write a SQL query to cleanse the data by replacing the new line characters in this field (say the field name is reason) with a single space.
2) I'd also like to be able to create a SQL query that allows me to specify such strings in the WHERE clause. e.g. SELECT * FROM table WHERE reason = "this is a string".
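A minimal sketch of both, assuming the column is called reason (as above), the table is named table, and the front end inserted CHAR(13), CHAR(10), or a CR/LF pair:

-- 1) cleanse: replace any embedded CR/LF with a single space
UPDATE [table]
SET reason = REPLACE(REPLACE(REPLACE(reason, CHAR(13) + CHAR(10), ' '), CHAR(13), ' '), CHAR(10), ' ')
WHERE reason LIKE '%' + CHAR(13) + '%'
   OR reason LIKE '%' + CHAR(10) + '%'

-- 2) match such a string in a WHERE clause by building the literal with CHAR();
--    the assumption here is a CR/LF pair between 'a' and 'string'
SELECT * FROM [table]
WHERE reason = 'this is a' + CHAR(13) + CHAR(10) + 'string'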
I have a problem that I'm not sure how to handle. In the application I am working on I will be importing fixed length strings from a CD into the database. Each specific character in the string represents some value.
I can't decide if I should just create a single field of 895 characters or create 895 single character fields. When I need data from the database I won't need all 895 characters of information, I'll only need one or a small subset of values.
Does it really matter which approach I take?
Thanks in advance for any advice/tips you can provide.
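For what it's worth, a minimal sketch of the single-column approach, using a hypothetical table name: SUBSTRING can pull out any position or range on demand, so you don't need 895 columns just to query individual values.

-- hypothetical table holding each fixed-length record as one column
CREATE TABLE dbo.ImportedRecords (RecordID int IDENTITY(1,1) PRIMARY KEY, RawData char(895) NOT NULL)

-- read position 120, and positions 200-204, out of the fixed-length string
SELECT RecordID,
       SUBSTRING(RawData, 120, 1) AS Value120,
       SUBSTRING(RawData, 200, 5) AS Values200To204
FROM dbo.ImportedRecords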
I want to find the LONGEST NAME in the column and I am trying to use:
Select MAX(name) from tablename
SQL 2000 gives the max in alphabetical order, so in my example the MAX is 'john'.
Is there any direct SQL statement to get the MAX name, where MAX refers to the LONGEST in terms of length?
I have been able to do it with two SQL statements, where I first find the longest length and then run a query matching that length. But is there any simpler way?
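A minimal sketch of doing it in a single statement on SQL 2000 (tablename and name are placeholders):

-- longest name (one row, even if several names tie for the length)
SELECT TOP 1 name
FROM tablename
ORDER BY LEN(name) DESC

-- or return every name that ties for the maximum length
SELECT name
FROM tablename
WHERE LEN(name) = (SELECT MAX(LEN(name)) FROM tablename)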
How bad is it to compromise normalization to the third normal form, which requires that all fields depend on nothing but the primary key? Consider the first table you create: users. It has an int primary key, which duplicates the real primary key, the user name. When a user logs in, the user's entry is uniquely identified by its name, which is not the primary key. The fundamental design rule, avoid redundancy, is violated. There should be a VERY serious reason for that.
Usually, design is compromised with redundancy for performance. Here, both copies are stored in one remote database, but integer keys may be located and used faster. Additionally, using short integer keys everywhere instead of long string references may save a lot of storage space (further increasing speed). How serious are these impacts? Am I missing something?
Usually, login names are not allowed to change. You have problems changing primary keys because all the foreign keys must be updated accordingly. Does it mean that most user databases could use character strings as primary keys?
Hello all, is there a quick way to convert a zipcode of type int to a 5-character char value? e.g.
declare @zip int
declare @czip char(5)
select @zip = 2109
select @czip = convert(char, @zip)
select @czip
but with @czip = 02109 instead of 2109? Thanks in advance.
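A minimal sketch of the usual left-padding trick, assuming the zip code always fits in five digits:

declare @zip int
declare @czip char(5)
select @zip = 2109
-- pad with leading zeros, then keep the rightmost five characters
select @czip = right('00000' + convert(varchar(5), @zip), 5)
select @czip   -- 02109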
Hi guys/ladies, I'm still having some trouble formatting a select statement correctly. I am using a SqlDataSource control on an aspx page. It is connecting via an ODBC string to an Informix database. Here is my select statement cut down to the most basic elements:
SELECT comment FROM informix.ipr_stucom WHERE (comment > 70)
The column "comment" contains student grades ranging from 0-100 and the letters I, EE, P, F, etc. Therefore the column is of a char type. This is a problem because I cannot run the above statement without hitting an alpha record and getting the following error: "Character to numeric conversion error". How can I write this statement so that it works in the datasource control and only looks at numeric values, skipping the alpha values? I have tried CASE with CAST and ISNUMERIC, but I don't think I have the formatting correct. I have also used:
WHERE (NOT (comment = ' I' OR comment = ' EE' OR comment = ' NG' OR comment = ' WP' OR comment = ' WF' OR comment = ' P' OR comment = ' F'))
This works but is very clunky and could break if other letters are input in the future. There has to be a better way. I am sorry for my ignorance and thanks again for your help.
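A sketch of the CASE/CAST idea in SQL Server style syntax, purely to illustrate the shape; the database here is Informix, which has no ISNUMERIC and whose pattern matching differs, so the predicate would need translating for that data source:

SELECT comment
FROM informix.ipr_stucom
WHERE CASE WHEN LTRIM(RTRIM(comment)) NOT LIKE '%[^0-9]%'   -- digits only after trimming spaces
                AND LTRIM(RTRIM(comment)) <> ''
           THEN CAST(comment AS INT)
      END > 70

Wrapping the CAST in the CASE keeps it from ever being applied to an alpha value such as ' EE'.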
I want to know how I can write a stored procedure that takes an input parameter (a string) and returns the count of matching words between the input parameter and the value of a column in a table. E.g. I have a table Person with a column address; the stored procedure should match and count the number of words that are common between the person's address and the input parameter string. I am looking forward to any help on this. I am using SQL Server 2005.
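A minimal sketch of one way to do this on SQL Server 2005, assuming a space-delimited address and a hypothetical dbo.Person(PersonID, address) table: the parameter is split into words with a CHARINDEX loop, then each distinct word is checked against the person's address.

CREATE PROCEDURE dbo.CountMatchingWords
    @InputString VARCHAR(500),
    @PersonID    INT
AS
BEGIN
    DECLARE @Words TABLE (Word VARCHAR(100))
    DECLARE @Rest VARCHAR(502), @Pos INT

    -- split the input parameter on spaces into a table of words
    SET @Rest = LTRIM(RTRIM(@InputString)) + ' '
    WHILE LEN(@Rest) > 0
    BEGIN
        SET @Pos = CHARINDEX(' ', @Rest)
        INSERT INTO @Words (Word) VALUES (LEFT(@Rest, @Pos - 1))
        SET @Rest = LTRIM(SUBSTRING(@Rest, @Pos + 1, LEN(@Rest)))
    END

    -- count the distinct parameter words that also appear in the person's address
    SELECT COUNT(DISTINCT w.Word) AS MatchingWords
    FROM @Words AS w
    JOIN dbo.Person AS p ON p.PersonID = @PersonID
    WHERE CHARINDEX(' ' + w.Word + ' ', ' ' + p.address + ' ') > 0
END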
create table data_set (id int primary key, col1 varchar(10))
go
insert into data_set values (1,'a'), (2,'b'),(3,'c'),(4,'d'),(5,'a'),(6,'b'),(7,'e'),(8,'f'),(9,'a'),(10,'a')
select * from data_set
I tried this below
Declare @child_ids int, @col_val varchar(10), @count int
select @child_ids, @col_val, @count, count(col1) as records from data_set group by col1 order by col1
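If the intent is a count of rows per col1 value (my assumption from the attempt above), a plain GROUP BY does it without any variables:

select col1, count(*) as records
from data_set
group by col1
order by col1
-- for the sample rows above this returns a=4, b=2, c=1, d=1, e=1, f=1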
I have a string of characters in my data flow and I need to add a derived column showing the # of times a certain character appears within that string. For example, my string in the data flow is:
NNNNNRJKSURNNNEJNNNN
Now I need to count the number of "N"s in that column. From the example above, I should get the integer 12, and that would be the value of my derived column. Any ideas?
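A minimal sketch of the usual trick: remove the character with REPLACE and compare the lengths. The same idea carries over to an SSIS derived-column expression, since that expression language also has LEN and REPLACE; shown here in T-SQL for clarity.

DECLARE @s VARCHAR(100)
SET @s = 'NNNNNRJKSURNNNEJNNNN'
-- count of 'N' = original length minus the length with every 'N' removed
SELECT LEN(@s) - LEN(REPLACE(@s, 'N', '')) AS NCount   -- 12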
Because of a limitation in a piece of software I'm using, I need to take a large varchar field and force a carriage return/line break into the returned SQL. Allowing for a line size of approximately 50 characters, I thought the approach would be to first find the spaces in the data, so as to not split a line in the middle of a word. That is what I'm trying to achieve.
--===== Simulate a passed parameter
DECLARE @Parameter VARCHAR(8000)
SET @Parameter = (select a_notes from dbo.notestuff as notes where a_id = '1')
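A minimal sketch of one way to continue from the @Parameter variable above: walk the string, break at the last space before each 50-character boundary (hard-break if a stretch has no space at all), and append CHAR(13)+CHAR(10).

DECLARE @Output VARCHAR(8000)
DECLARE @Line VARCHAR(50)
DECLARE @BreakAt INT

SET @Output = ''
WHILE LEN(@Parameter) > 50
BEGIN
    SET @Line = LEFT(@Parameter, 50)
    -- break just before the last space within the first 50 characters
    SET @BreakAt = 50 - CHARINDEX(' ', REVERSE(@Line))
    IF @BreakAt <= 0 OR @BreakAt >= 50
        SET @BreakAt = 50                         -- no usable space: hard break
    SET @Output = @Output + LEFT(@Parameter, @BreakAt) + CHAR(13) + CHAR(10)
    SET @Parameter = LTRIM(SUBSTRING(@Parameter, @BreakAt + 1, LEN(@Parameter)))
END
SET @Output = @Output + @Parameter

SELECT @Output AS WrappedNotes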
I am trying to write a stored procedure that loops through the list of user tables, gets the record count for each one, and writes a record to an audit table with DATE, TABLENAME, RECORDCOUNT. I keep getting the error "Conversion failed when converting date and/or time from character string". Here is the script...
DECLARE @table nvarchar(500)
DECLARE @sql nvarchar(520)
DECLARE CursorSelect CURSOR FOR
select table_name from INFORMATION_SCHEMA.tables where table_name not like 'sys%' order by table_name
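For reference, a minimal sketch of the whole loop, assuming a hypothetical audit table dbo.TableCounts(AuditDate, TableName, RecordCount). The usual cause of the "Conversion failed when converting date and/or time from character string" error in this pattern is concatenating a datetime (e.g. GETDATE() or a date variable) straight into the dynamic SQL string; letting the dynamic INSERT call GETDATE() itself avoids that conversion.

DECLARE @table nvarchar(500)
DECLARE @sql nvarchar(1000)
DECLARE CursorSelect CURSOR FOR
    select table_name from INFORMATION_SCHEMA.tables
    where table_name not like 'sys%' order by table_name
OPEN CursorSelect
FETCH NEXT FROM CursorSelect INTO @table
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'INSERT INTO dbo.TableCounts (AuditDate, TableName, RecordCount) '
             + N'SELECT GETDATE(), ''' + @table + N''', COUNT(*) FROM ' + QUOTENAME(@table)
    EXEC sp_executesql @sql
    FETCH NEXT FROM CursorSelect INTO @table
END
CLOSE CursorSelect
DEALLOCATE CursorSelect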
Situation: in this stored procedure I have to calculate, in some manner, from Font, FontSize, BoldText, ShowBox and the number of characters, how many lines the text will take on a Crystal Report. Have you seen something like this on the web, or do you have any ideas? Measurements (length, width) and character count seem appropriate. How about a function?
( @CompanyID NVARCHAR(36), @DivisionID NVARCHAR(36), @DepartmentID NVARCHAR(36), @ItemID NVARCHAR(36), @OrderNo NVARCHAR(36), @LineNo NVARCHAR(36), @TAllotedQty Numeric, @EmployeeID NVARCHAR(36), @Trndate datetime ) AS BEGIN
By default I am passing a 12-character ItemID as the parameter...
Here I am selecting the ItemID from InventoryLedger; if it is 8 characters then this query should be executed: IF EXISTS(SELECT ItemID FROM InventoryLedger WHERE TransDate=@Trndate AND ItemID=@ItemID AND ILLineNumber =@LineNo AND TransNumber=@OrderNo AND TransactionType='Production' AND CompanyID=@CompanyID AND DivisionID= @DivisionID AND DepartmentID=@DepartmentID)
BEGIN DECLARE @Qty INT
select @Qty =QUANTITY from inventoryledger WHERE TransDate=@Trndate AND ItemID=@ItemID AND ILLineNumber =@LineNo AND TransNumber=@OrderNo AND TransactionType='Production' AND CompanyID=@CompanyID AND DivisionID= @DivisionID AND DepartmentID=@DepartmentID select qtyonhand=qtyonhand+@Qty from InventoryByWareHouse where ItemID=@ItemID END
Here I am selecting the ItemID from InventoryLedger; if it is 12 characters then this query should be executed (both queries are the same): IF EXISTS(SELECT ItemID FROM InventoryLedger WHERE TransDate=@Trndate AND ItemID=@ItemID AND ILLineNumber =@LineNo AND TransNumber=@OrderNo AND TransactionType='Production' AND CompanyID=@CompanyID AND DivisionID= @DivisionID AND DepartmentID=@DepartmentID)
BEGIN DECLARE @Qty INT
select @Qty =QUANTITY from inventoryledger WHERE TransDate=@Trndate AND ItemID=@ItemID AND ILLineNumber =@LineNo AND TransNumber=@OrderNo AND TransactionType='Production' AND CompanyID=@CompanyID AND DivisionID= @DivisionID AND DepartmentID=@DepartmentID select qtyonhand=qtyonhand+@Qty from InventoryByWareHouse where ItemID=@ItemID END
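Since both branches run the same statements, one way to avoid the duplication is to work out which form of the ItemID matches first and then run the block once. A minimal sketch, assuming the 8-character form is simply the first 8 characters of the 12-character parameter and that the last statement is meant to update the on-hand quantity:

DECLARE @MatchItemID NVARCHAR(36)
DECLARE @Qty INT

-- try the full 12-character ItemID first, then fall back to the first 8 characters
IF EXISTS (SELECT 1 FROM InventoryLedger
           WHERE TransDate = @Trndate AND ItemID = @ItemID AND ILLineNumber = @LineNo
             AND TransNumber = @OrderNo AND TransactionType = 'Production'
             AND CompanyID = @CompanyID AND DivisionID = @DivisionID AND DepartmentID = @DepartmentID)
    SET @MatchItemID = @ItemID
ELSE
    SET @MatchItemID = LEFT(@ItemID, 8)

SELECT @Qty = QUANTITY
FROM InventoryLedger
WHERE TransDate = @Trndate AND ItemID = @MatchItemID AND ILLineNumber = @LineNo
  AND TransNumber = @OrderNo AND TransactionType = 'Production'
  AND CompanyID = @CompanyID AND DivisionID = @DivisionID AND DepartmentID = @DepartmentID

UPDATE InventoryByWareHouse
SET qtyonhand = qtyonhand + @Qty
WHERE ItemID = @MatchItemID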
Is a table 45 columns wide too wide? Most of the data is very small (check boxes, one-word fields, etc.). I have one giant form that needs to be filled out and uploaded to the database. I don't want to fragment the table too much because it will be a pain to update it. This table will have to be searched, which is why I am concerned with the width. I am hoping that an index would help out and provide enough performance that I can keep a wide table.
OK, I have been charged with the task of searching a server that houses 50-60 client databases. All databases are identical, created from a prototype. I have discovered an issue whereby one client's DB is missing a required sproc. They now want me to check all DBs to see if any others are missing the sproc. I have a username and password to log in to the server that houses all the DBs. Is there a method for me to query all DBs at once and check for the sproc, or will I have to go through them all manually? (That could become quite time consuming, as there are so many DBs and quite a large number of sprocs in each; the sprocs seem to be listed in a semi-random order in each DB as well.) Thanks all.
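A minimal sketch using the undocumented sp_MSforeachdb procedure ('MyProc' below is a placeholder for the sproc being checked); it prints the name of every database on the server that does not contain the procedure:

EXEC sp_MSforeachdb N'
IF NOT EXISTS (SELECT 1
               FROM [?].sys.objects
               WHERE type = ''P'' AND name = ''MyProc'')
    PRINT ''Missing in: ?'''

System databases (master, msdb, model, tempdb) will show up in the output too, and since sp_MSforeachdb is undocumented it is worth spot-checking a couple of databases by hand.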
Is there any way to create server-wide UDFs? We have a lot of functions that are spread across multiple databases, and it's time consuming going from database to database looking for them. What would be nice is to create functions at the server level that can be accessed from within any database, like the GETDATE() function.
I'm working on a system that is very address-centric and detection of duplicate addresses is very important. As a result we have broken addresses down into many parts (DDL below, but I've left out some reference tables for conciseness), these being state, locality, street, street number, and address. The breakdown is roughly consistent with Australian addressing standards; we're working on finalising this. Because we carry the primary key down each of the levels, this has resulted in our address table having a very wide primary key (around 170 characters). We refer to addresses from a number of other tables and although my instinct is to use this natural key in the other tables, I wonder if we should just put a unique index on the natural key, create a surrogate primary key and use it in the other tables. Any thoughts?

CREATE TABLE dbo.States (
	StateID varchar (3) NOT NULL ,
	StateName varchar (50) NOT NULL ,
	CONSTRAINT PK_AddressStates PRIMARY KEY NONCLUSTERED (StateID)
)

CREATE TABLE dbo.Localities (
	Locality varchar (46) NOT NULL ,
	StateID varchar (3) NOT NULL ,
	Postcode char (4) NOT NULL ,
	CONSTRAINT PK_Localities PRIMARY KEY NONCLUSTERED (Locality, StateID, Postcode),
	CONSTRAINT FK_AddressLocalities_AddressStates FOREIGN KEY (StateID) REFERENCES dbo.States (StateID)
)

CREATE TABLE dbo.Streets (
	StreetName varchar (35) NOT NULL ,
	StreetTypeID varchar (10) NOT NULL ,
	StreetDirectionID varchar (2) NOT NULL ,
	Locality varchar (46) NOT NULL ,
	StateID varchar (3) NOT NULL ,
	Postcode char (4) NOT NULL ,
	CONSTRAINT PK_Streets PRIMARY KEY CLUSTERED (StreetName, StreetTypeID, StreetDirectionID, Locality, StateID, Postcode),
	CONSTRAINT FK_Streets_Localities FOREIGN KEY (Postcode, Locality, StateID) REFERENCES dbo.Localities (Postcode, Locality, StateID)
)

CREATE TABLE dbo.StreetNumbers (
	StreetName varchar (35) NOT NULL ,
	StreetTypeID varchar (10) NOT NULL ,
	StreetDirectionID varchar (2) NOT NULL ,
	Locality varchar (46) NOT NULL ,
	StateID varchar (3) NOT NULL ,
	Postcode char (4) NOT NULL ,
	StreetNumber varchar (15) NOT NULL ,
	BuildingName varchar (100) NOT NULL ,
	CONSTRAINT PK_StreetNumbers PRIMARY KEY CLUSTERED (StreetName, StreetTypeID, StreetDirectionID, Locality, StateID, Postcode, StreetNumber),
	CONSTRAINT FK_StreetNumbers_Streets FOREIGN KEY (StreetName, StreetTypeID, StreetDirectionID, Locality, StateID, Postcode) REFERENCES dbo.Streets (StreetName, StreetTypeID, StreetDirectionID, Locality, StateID, Postcode)
)

CREATE TABLE dbo.Addresses (
	StreetName varchar (35) NOT NULL ,
	StreetTypeID varchar (10) NOT NULL ,
	StreetDirectionID varchar (2) NOT NULL ,
	Locality varchar (46) NOT NULL ,
	StateID varchar (3) NOT NULL ,
	Postcode char (4) NOT NULL ,
	StreetNumber varchar (15) NOT NULL ,
	AddressTypeID varchar (6) NOT NULL ,
	AddressName varchar (20) NOT NULL ,
	CONSTRAINT PK_StreetNumberPrefixes PRIMARY KEY CLUSTERED (StreetName, StreetTypeID, StreetDirectionID, Locality, StateID, Postcode, StreetNumber, AddressTypeID, AddressName),
	CONSTRAINT FK_Addresses_StreetNumbers FOREIGN KEY (StreetName, StreetTypeID, StreetDirectionID, Locality, StateID, Postcode, StreetNumber) REFERENCES dbo.StreetNumbers (StreetName, StreetTypeID, StreetDirectionID, Locality, StateID, Postcode, StreetNumber)
)
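A minimal sketch of the surrogate-key alternative for just the Addresses level, keeping a unique constraint on the natural key so duplicate detection still works (illustrative only; the other levels would be reworked the same way):

CREATE TABLE dbo.Addresses (
	AddressID int IDENTITY(1,1) NOT NULL ,
	StreetName varchar (35) NOT NULL ,
	StreetTypeID varchar (10) NOT NULL ,
	StreetDirectionID varchar (2) NOT NULL ,
	Locality varchar (46) NOT NULL ,
	StateID varchar (3) NOT NULL ,
	Postcode char (4) NOT NULL ,
	StreetNumber varchar (15) NOT NULL ,
	AddressTypeID varchar (6) NOT NULL ,
	AddressName varchar (20) NOT NULL ,
	CONSTRAINT PK_Addresses PRIMARY KEY CLUSTERED (AddressID),
	CONSTRAINT UQ_Addresses_NaturalKey UNIQUE NONCLUSTERED
		(StreetName, StreetTypeID, StreetDirectionID, Locality, StateID, Postcode, StreetNumber, AddressTypeID, AddressName)
)
-- referencing tables then carry only the 4-byte AddressID instead of the ~170-character natural key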
Requirement: I want to create another table based on the column values. For example, I take an employee id and check what value that employee has under the OWNS column in the table. I take only 3 values, and these values should go into the newly created columns (owns1, owns2, owns3); if there is no value for one of these columns it should be loaded with NULL. The result of the modification should look like this:
Note: even though employee id 3 owns more than 3 things, we only take 3 of what he owns and populate the columns above. In addition, the OWNS column will have more than 500 different values in it.
It's kind of urgent, so if anyone knows how, can you please help me with this? Thanks a lot. -- Ragulan ;)
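A minimal sketch of one way to do it on SQL Server 2005, assuming a hypothetical source table dbo.EmployeeOwns(employeeid, owns): ROW_NUMBER() picks at most three owned items per employee and MAX(CASE ...) pivots them into columns, leaving NULL where an employee owns fewer than three things.

SELECT employeeid,
       MAX(CASE WHEN rn = 1 THEN owns END) AS owns1,
       MAX(CASE WHEN rn = 2 THEN owns END) AS owns2,
       MAX(CASE WHEN rn = 3 THEN owns END) AS owns3
INTO dbo.EmployeeOwnsPivoted          -- the newly created table
FROM (SELECT employeeid, owns,
             ROW_NUMBER() OVER (PARTITION BY employeeid ORDER BY owns) AS rn
      FROM dbo.EmployeeOwns) AS src
WHERE rn <= 3
GROUP BY employeeid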
Dear All, what different types of joins are available in SQL Server 6.0, 7.0, 2000, 2005 and now 2008?
The reason behind the question is this:
I've used the Swiss SQL tool to boost the performance of my join queries, and the tool suggested some options it considered best. Those work fine against the database but fail at the application level.
I found the reason is the Loop join, Hash join and Merge join options it introduced.
Your valuable suggestions are needed.
Thank you very much.
Vinod Even you learn 1%, Learn it with 100% confidence.
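For what it's worth, LOOP, HASH and MERGE are physical join algorithms rather than extra logical join types, and tuning tools often force them with join hints. A minimal sketch of the difference, using placeholder tables:

-- logical join: the optimizer picks the physical algorithm (nested loops, hash or merge)
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o
INNER JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID

-- same logical join with a hint forcing a hash join; this ties the optimizer's hands
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o
INNER HASH JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID

If the tool added hints like these and the rewritten queries fail at the application level, removing the hints and comparing the generated query text against what the application actually sends is usually the first thing to check.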
Can we create custom object wide roles? In the same manner that db_datareader in effect grants SELECT on all tables, can we create roles that affect all objects without having to explicitly grant the permission on every object?
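Not fully server-wide, but on SQL Server 2005 permissions can be granted at the schema or database scope, so a custom role can cover every object in a schema without per-object grants. A minimal sketch:

-- custom role covering all current and future objects in the dbo schema
CREATE ROLE app_reader
GRANT SELECT ON SCHEMA::dbo TO app_reader    -- all tables and views in dbo
GRANT EXECUTE ON SCHEMA::dbo TO app_reader   -- all procedures and scalar functions in dbo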
Hi, I have an ASP.NET application that uses VARCHAR extensively in the tables and, more importantly, stored procedures (a couple hundred of them).
This app needs to start accepting foreign language in some areas, so I was wondering if there was some way to go through the tables and, more importantly, the stored procedures and change all "VARCHAR" references to "NVARCHAR" ?
Are the stored procedures stored as text files somewhere on the server? If so, I could use some sort of "replace" software utility to go through and change all VARCHAR to NVARCHAR.
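They are not text files on disk; the procedure text is stored inside each database. A minimal sketch (SQL Server 2005) that lists the procedures whose definitions mention varchar so they can be scripted out, edited to nvarchar, and re-created:

SELECT OBJECT_NAME(m.object_id) AS ProcedureName
FROM sys.sql_modules AS m
JOIN sys.objects AS o ON o.object_id = m.object_id
WHERE o.type = 'P'
  AND m.definition LIKE '%varchar%'    -- note: this also matches nvarchar; it just flags candidates to review
ORDER BY ProcedureName

Table columns can be found the same way through INFORMATION_SCHEMA.COLUMNS, but each ALTER is worth reviewing by hand since nvarchar doubles the storage per character.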
I have a wide fact table that I'm feeding to an SSAS cube. I was advised that splitting the measure group into two will improve performance when querying the cube.
I cannot find any documentation that supports this; in fact, I get a blue curved warning line suggesting that I merge the measure groups, since they have the same dimensionality and granularity.
I guess the best practice is what the blue line states, but without knowing the internals of SSAS I can understand that a smaller measure group may be easier to handle, or to create more specific aggregations for.