Transact SQL :: Replace Special Characters In Oracle Or SQL Server 2012?
Jun 8, 2015
I'm trying to replace special characters in SQL Server, and all the solutions I have found for this RDBMS use loops. The source of my data is in Oracle, and the Oracle solutions use regular expressions instead. Which is the better option for replacing special characters: loops in SQL Server or regular expressions in Oracle?
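For what it's worth, here is a minimal loop-free sketch of the SQL Server side, assuming only a handful of known characters need stripping; dbo.SourceData and SourceText are made-up names, not anything from the post. On the Oracle side the same cleanup is usually a single REGEXP_REPLACE call.

-- Nested REPLACE calls strip a fixed set of characters without a loop.
-- dbo.SourceData(SourceText) is a placeholder table/column for illustration only.
SELECT REPLACE(REPLACE(REPLACE(SourceText, '#', ''), '%', ''), '&', '') AS CleanText
FROM dbo.SourceData;
-- Oracle equivalent, for comparison: REGEXP_REPLACE(SourceText, '[#%&]', '')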
I am using OLE DB to connect to Oracle. I want to know if there is a way to handle different character sets with this type of connection. For SQL Server to SQL Server I have been using Auto Translate in the connection string; what about SQL OLE DB to Oracle? How can I make sure that the data going from SQL Server to Oracle is transferred as is?
I would like to know how I can get only the characters that come before a special character.
For example, if the mail id is The.Champ123@gmail.com I need to extract The.Champ123, and if the mail id is TheChamp@gmail.com I need to get TheChamp. So basically I would like to get the characters from the string before the '@' character.
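A minimal sketch for pulling out everything before the '@' (falling back to the whole string when no '@' is present); @mail here just stands in for the e-mail column:

DECLARE @mail varchar(100);
SET @mail = 'The.Champ123@gmail.com';

-- Appending '@' guarantees CHARINDEX finds a match even when the address has none.
SELECT LEFT(@mail, CHARINDEX('@', @mail + '@') - 1) AS LocalPart;   -- returns The.Champ123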
I am creating a keywording module where I want to search data using comma-separated words, and the search is driven by two separators: comma ',' and minus '-'. Here is an example of what exactly I want to do:
AS_ID 7: woman,girl,Digital Tablet,working,smiling,happiness,hand on chin
If the search text is Man,Businessman then the result AS_ID is 1,2. If the search text is Man,-Businessman then the result AS_ID is 3. If the search text is woman,girl,-Working then the result AS_ID is 4,5.
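One possible sketch, assuming SQL Server 2016+ for STRING_SPLIT and a hypothetical table dbo.Assets(AS_ID, Keywords) holding the comma-separated keyword lists: every term without a leading '-' must be present, and every term with a leading '-' must be absent.

DECLARE @search varchar(200) = 'woman,girl,-Working';

SELECT a.AS_ID
FROM dbo.Assets AS a
WHERE NOT EXISTS (                                  -- no required keyword may be missing
          SELECT 1
          FROM STRING_SPLIT(@search, ',') AS t
          WHERE LEFT(t.value, 1) <> '-'
            AND ',' + a.Keywords + ',' NOT LIKE '%,' + t.value + ',%'
      )
  AND NOT EXISTS (                                  -- no excluded keyword may be present
          SELECT 1
          FROM STRING_SPLIT(@search, ',') AS t
          WHERE LEFT(t.value, 1) = '-'
            AND ',' + a.Keywords + ',' LIKE '%,' + STUFF(t.value, 1, 1, '') + ',%'
      );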
I'm testing, with SQL 2014 on the same DB, a procedure that extracts data from a table into a file and loads the data from that file into a different table which has the same columns as the initial table (I use a function to create the CREATE TABLE statement from the source table and change the name of the destination table).
It's a complicated process that takes one XML record that contains information, plus the CREATE TABLE statement (to eventually be able to do this on a different server/DB), plus the title row for each column, plus the data... Each of these is created with a BCP command (all with the same options). They are then appended to each other with a copy /B c:\file1.txt + c:\File2.txt + c:\File3.txt + c:\File4.csv c:\ResultFile.
Once the result file is created I bulk insert the first 2 rows into one table, "TableA", create the tmp table "TableB" with the CREATE TABLE statement that is in "TableA", and do another bulk insert of the remainder of the file into the newly created table.
What else can I try? Should I be creating a format file? What are the benefits of a format file?
It's a very long procedure that does both the extract and the load (with 12 parameters), so I'm not sure what else I should post here.
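Regarding the format-file question: one benefit is that bcp stops guessing about column order, terminators and code page, so the out and in steps stay in sync. A hedged sketch of generating and then reusing one; server, database and file names are placeholders, and -F 3 matches skipping the two header rows described above.

rem Generate a non-XML format file from the destination table's definition.
bcp MyDb.dbo.TableB format nul -c -f TableB.fmt -S MyServer -T

rem Load the data portion of the combined file, starting at row 3, using that format file.
bcp MyDb.dbo.TableB in C:\ResultFile.csv -f TableB.fmt -F 3 -S MyServer -T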
Hi, I come from the "dark side" (PHP/MySQL), where there are often problems with character sets (UTF-8, Latin...) when storing data in the database. Do similar problems exist in the world of .NET and MS SQL Server? To be precise: I have to store XML data in the database. Maybe it's better to encode the strings (e.g. base64)? Perhaps there are some links to read? Thanks. klaus.
I need to find all uses of special characters in a database. I used the following code to do this:
USE dbName
GO
IF OBJECT_ID('tempdb.dbo.#Results') IS NOT NULL
    DROP TABLE #Results
GO
[code]...
This will check all tables in the database, but if you want to check specific tables you can uncomment the line in the where clause and specify tables to be checked. The query will return any text fields that have any characters other than letters, numbers or spaces.
This code works fine for me because all the tables in my database have single-column primary keys. However, I know how much Jeff Moden hates cursors and RBAR queries, so my question is: could this have been done by any method other than a cursor?
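One cursor-free alternative (a sketch, not a drop-in replacement for the posted code): build a single UNION ALL query from the catalog views and run it once, which also sidesteps the single-column primary key assumption entirely.

-- Builds one query that scans every (n)char/(n)varchar column in the current database
-- for characters other than letters, digits and spaces. Output columns are fixed as
-- TableName / ColumnName / Value.
DECLARE @sql nvarchar(max);

SELECT @sql = STUFF((
    SELECT ' UNION ALL SELECT ''' + s.name + '.' + t.name + ''' AS TableName, '''
           + c.name + ''' AS ColumnName, CAST(' + QUOTENAME(c.name) + ' AS nvarchar(max)) AS Value'
           + ' FROM ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name)
           + ' WHERE ' + QUOTENAME(c.name) + ' LIKE ''%[^a-zA-Z0-9 ]%'''
    FROM sys.tables  AS t
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id
    JOIN sys.columns AS c ON c.object_id = t.object_id
    WHERE c.system_type_id IN (167, 175, 231, 239)      -- varchar, char, nvarchar, nchar
    FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)'), 1, 11, '');

EXEC sys.sp_executesql @sql;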
Part 1: When there is a ~ (tilde) with any value after it, the value goes into a new row, duplicating the other columns (like the facility in the attached screenshot), with a new column holding the sequence.
Part 2: When there is a ^ (caret), it starts a new column, whether or not a value is present.
CREATE TABLE [dbo].[Equipment](
    [EQU] [VARCHAR](50) NOT NULL,
    [Notes] [TEXT] NULL,
    [Facility] [VARCHAR](50) NULL)

INSERT INTO [dbo].[Equipment] ([EQU], [Notes], [Facility])
SELECT '1001', 'BET I^BOBBETT,DAN^1.0^REGULAR^22.09^22.090~BET II^^^REGULAR^23.56^0~', 'USA'
UNION
SELECT '998', 'BET I^JONES, ALANA^0.50^REGULAR^22.09^11.0450~BET II^^^REGULAR^23.56^0~', 'Canada'
UNION
SELECT '55', 'BET I^SLADE,ADAM F.^1.5^REGULAR^27.65^41.475~', 'USA'

SELECT * FROM dbo.Equipment
I created the table in Excel and attached the screenshot for a clear picture of what is required. I use Text to Columns in Excel to achieve this; I'm not sure if there is anything similar in SQL.
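A possible sketch against the sample table above, assuming SQL Server 2016+ for STRING_SPLIT and data that stays XML-safe (no & or < in Notes). Each ~-delimited block becomes a row; the ^-delimited pieces become columns. The output column names are only guesses at what the screenshot shows.

;WITH Blocks AS (
    SELECT e.EQU, e.Facility, s.value AS Block
    FROM dbo.Equipment AS e
    CROSS APPLY STRING_SPLIT(CAST(e.Notes AS varchar(max)), '~') AS s
    WHERE s.value <> ''                              -- drop the empty block after the trailing ~
)
SELECT b.EQU, b.Facility,
       x.value('(/f)[1]', 'varchar(50)') AS Shift,   -- column names are guesses
       x.value('(/f)[2]', 'varchar(50)') AS Name,
       x.value('(/f)[3]', 'varchar(50)') AS Hours,
       x.value('(/f)[4]', 'varchar(50)') AS PayType,
       x.value('(/f)[5]', 'varchar(50)') AS Rate,
       x.value('(/f)[6]', 'varchar(50)') AS Amount
FROM Blocks AS b
CROSS APPLY (SELECT CAST('<f>' + REPLACE(b.Block, '^', '</f><f>') + '</f>' AS xml)) AS v(x);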
I'm presented with a problem where I have a database table which must be migrated via a "custom tool", moving the data into a new table which has special character requirements that didn't exist in the source database. My data resides in an SQL Server 2008R2 instance.
I envision a one-time query which will loop through the selected records and replace the offending characters with --; however, I'm having trouble understanding how this would work.
There are roughly 2500 records which meet the criteria of "contains bad characters", frequently containing multiple separate bad chars, and the table contains roughly 100000 rows.
Special Characters are defined as #%&*:<>?/{}|~ and ..
While the field is called "Filename" it isn't always a file name; it is a parent/child table where folder names are also stored.
The examples I'm finding are all oriented around SELECT statements, changing the output of what is returned, whereas I'd rather just fix the entire column with an UPDATE. Initial testing using REPLACE fell short because a string doesn't always contain just one kind of bad character, so a single REPLACE call isn't enough.
In a better solution, I found an example using a User Defined Function to modify the output of a select, but I cannot use that UDF in an UPDATE.
My alternative is to learn enough C# to modify the "migration tool" to do this in-transit, but I know even less about C# than I do of SQL.
I gather I want to use @@ROWCOUNT to loop through the rows but I really can't put it all together in a cohesive way.
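A rough sketch of one way to do it without touching the migration tool: loop over the list of offending characters (13 iterations) rather than over the 100,000 rows, and fix the whole column with one UPDATE per character. dbo.Files and [Filename] are placeholder names, and '-' is used here as the replacement (swap in whatever the target system requires).

DECLARE @bad varchar(20), @i int, @ch char(1);
SET @bad = '#%&*:<>?/{}|~';
SET @i = 1;

WHILE @i <= LEN(@bad)
BEGIN
    SET @ch = SUBSTRING(@bad, @i, 1);

    UPDATE dbo.Files
    SET    [Filename] = REPLACE([Filename], @ch, '-')
    WHERE  CHARINDEX(@ch, [Filename]) > 0;           -- only rows that actually contain the character

    SET @i = @i + 1;
END;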
Our SQL 2005 databases have the "SQL_Latin1_General_CP1_CI_AS" collation setting on the server. One of the tables has a column declared as "ntext". When we insert the "€" character directly through an INSERT statement and retrieve it, we get the right result. When we use web services to insert data with this character (euro) and retrieve it back, we do not get the sign back; we get characters that look like little squares. If I copy and paste them into Notepad, I get ???. However, when I tried to place them here, they came out right:
€ Š š Ž ž Œ œ Ÿ
How can we assure visually they are inserted properly?
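A sketch of checking by code point rather than by eye; dbo.Messages(Body) stands in for the real ntext column. If rows come back from the first query, the euro sign (U+20AC) was stored correctly and only the client-side rendering is off.

-- Count the rows that really contain U+20AC, regardless of how the client renders it.
SELECT COUNT(*) AS RowsWithEuro
FROM   dbo.Messages
WHERE  CAST(Body AS nvarchar(4000)) LIKE N'%' + NCHAR(8364) + N'%';

-- UNICODE() gives the code point of any single character you want to spot-check.
SELECT UNICODE(N'€') AS EuroCodePoint;   -- 8364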
On the bills that our system generates there is a comments field that users fill out. We have occasional problems with special characters in the text messing up the validation code. Does anyone know of a query that can identify special characters in a text field, like carriage returns, tabs, etc.? Thanks, Dave
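A minimal sketch; dbo.Bills(BillID, Comments) is a guess at the real table and column names.

-- CHAR(13) = carriage return, CHAR(10) = line feed, CHAR(9) = tab.
SELECT BillID, Comments
FROM   dbo.Bills
WHERE  Comments LIKE '%' + CHAR(13) + '%'
   OR  Comments LIKE '%' + CHAR(10) + '%'
   OR  Comments LIKE '%' + CHAR(9)  + '%';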
I have to bring in data from a text file into the sql server database. I use the BCP utility in pulling the data.
The flat file data contains special characters like ®, the registered trademark symbol. When I bcp the file in character mode, the special characters are automatically changed to «. So if I pull data like ACRYLON®, after the BCP I get ACRYLON«.
In order to replace all of these I have written a program, but since it has to scan every record it takes a long time. Would it be possible to get BCP itself to pull the data with the special characters intact?
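A hedged sketch of the BCP side (server, database, table and file names are placeholders): either load in Unicode character mode, or tell bcp which code page the file really uses, so ® is not reinterpreted as « during the transfer.

rem Unicode character mode keeps extended characters intact end to end.
bcp MyDb.dbo.Products in products.txt -w -S MyServer -T

rem Or stay in character mode but declare the file's code page explicitly.
bcp MyDb.dbo.Products in products.txt -c -C ACP -S MyServer -T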
I have a DB in SQL Server 7, and in Portuguese we have special characters like "á", "õ", etc. I want to let a visitor to a site (written in ASP) do a search without having to type the words the correct way, i.e. without the accents. Whether or not the visitor types the accents, the results should be the same, though not necessarily in the same order. Is there a SQL Server mechanism that provides this functionality without a very complicated SELECT (which takes a lot of time) or replicating a field in the DB (which takes a lot of extra space)?
I have a problem inserting German special characters into an MSSQL table.
üöäÖ work fine, but ßÜÄ are reduced to "_". I'm using an HTML form and a PHP script to enter the data into the database. Any idea why this isn't working? The funny thing is, when I use a German version of SQL Server it also works without a problem... but I need this to work on ANY MSSQL server. Any help would be very much appreciated.
Hi, I have a MSSQL Server communicating with an Oracle database through a MSSQL linked server using a MS ODBC connection. If I query the Oracle database through the Oracle ODBC 32Bit Test, the result is fine:

select addrsurname from address where addrnr = 6666;
HÅKANSSON

If I do the same query within the SQL Query Analyzer (using the linked server), I get:

select * from openquery(TESTSW, 'select addrsurname from address where addrnr = 6666');
H?KANSSON

I have tried to both check and uncheck the Automatic ANSI to OEM conversion, but the result remains the same. Does anyone know what to do to make the result display the special characters in SQL Query Analyzer? Thanks, Kenneth
Hi, this is a generic question, but for argument's sake let's say my environment is SQL Server 2000. It seems that setting QUOTED_IDENTIFIER OFF is the best way to accommodate all sorts of data input/update, especially for data sets that contain special characters like the single quote in O'Brien, or other funky stuff like %^$*@#(!). However, this option won't help with a situation like a data value of 5'9", and let's say we really can't predict whether this type of value will be used for ColA or ColB or ColC ...

Example:
create table #tmp (col1 varchar(30));
set quoted_identifier off;
insert into #tmp values("O'Brien's barking dog'");
-- success
insert into #tmp values("O'book has funky (%^$*7) stuff");
-- success
insert into #tmp values("John said "funky is OK" in his speech");
-- failed

Globally searching and replacing the double quotes before insert/update is just too "expensive", not a good option. Now, if we could set a custom quoted identifier, we could solve the above problem easily. For instance, if MS SQL Server permitted/accepted

set quoted_identifier = '//$$';

then

insert into #tmp values(//$$John said "funky is OK" in his speech//$$);

(or the like) would succeed. Any other thoughts/ideas? Thanks.
Quote: "Never stop thinking even though at times it may produce waste, which we all do, btw :)"
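For comparison, a minimal sketch of the standard alternative: leave QUOTED_IDENTIFIER ON and escape embedded single quotes by doubling them, which handles O'Brien, 5'9" and embedded double quotes through the same code path (parameterised statements avoid the quoting problem entirely).

SET QUOTED_IDENTIFIER ON;
CREATE TABLE #tmp2 (col1 varchar(60));

INSERT INTO #tmp2 VALUES ('O''Brien''s barking dog');
INSERT INTO #tmp2 VALUES ('He is 5''9" tall');
INSERT INTO #tmp2 VALUES ('John said "funky is OK" in his speech');

SELECT col1 FROM #tmp2;
DROP TABLE #tmp2;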
I currently have a report where I want to insert special characters (like @, !, ?) at the end of a string that is displayed in a textbox of the report. When I insert the string value "test!?", the report returns the value "!?test".
Good day. I've looked for information about this but to no avail. Essentially, why does Reporting Services not support spaces and special characters in a field name? I know that SQL supports spaces using [] brackets, e.g. [Field Name With Spaces]. So how come RS doesn't? Thanks
I am trying to display data that has trademark and copyright symbols. Reporting Services does not display them in the standard format. Any help is appreciated on how to get this done.
I have a database that is housing a path used to locate an external file. This application was written many years ago and I am now trying to bring the files into the database as VARBINARY.
The table is holding the path like this "/folder/folder/file"
I am trying to convert that path to "folderfolderfile"
In my Select statement I have
SELECT ProdID, REPLACE (PATH, /, ) FROM dbo.blahblah
The problem is that I can't figure out how to make SQL understand that "/" is the character I want to replace.
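Presumably the intent was the following: the forward slash and its (empty) replacement are string literals, so they need single quotes (table and column names taken from the post as given).

SELECT ProdID, REPLACE([PATH], '/', '') AS FlatPath
FROM   dbo.blahblah;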
I am trying to store the content (body) of an email message that I want to create in a column in SQL 2000. I need to know how I can store the special characters (carriage returns, bullets, etc.) so that when I select them from the database to rebuild the email, the correct formatting appears.
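Line breaks are just characters, so one sketch (SQL 2000 compatible; dbo.EmailQueue(Body) is a placeholder table) is to build the body with CHAR(13) + CHAR(10) and store it in an (n)varchar or (n)text column; the same bytes come back on SELECT and the mail client re-applies the formatting.

DECLARE @crlf char(2), @body varchar(8000);
SET @crlf = CHAR(13) + CHAR(10);

SET @body = 'Dear customer,' + @crlf + @crlf
          + '  - first point'  + @crlf
          + '  - second point' + @crlf + @crlf
          + 'Regards';

INSERT INTO dbo.EmailQueue (Body) VALUES (@body);   -- placeholder table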
I am trying to move distinct data from one table to another based on two columns regardless of special characters such as : ( ) ; ' " / - _ = >< etc. The two columns that are the defining factors that I need to match as duplicates are Col1 & Col3, the rest I want to get the maximum data from each column for that row (I hope this made sense).
Here is what I have tried, but it does not seem to work:
Code:
select ID, Col1, max(Col2) as Col2, Col3, max(Col4) as Col4,
       max(Col5) as Col5, max(Col6) as Col6, max(Col7) as Col7,
       max(Col8) as Col8, max(Col9) as Col9
where patindex('%[^-:]%', Col1) = 0
  AND patindex('%[^-:]%', Col3) = 0
into Newtable
from dbo.CLEANING_bk
group by Col1, Col3
Basically if there are 10 rows of duplicate data based on Col1 & Col3 (regardless of special characters), I want to move the distinct info to a new table with the maximum amount of info from the other columns of the duplicate rows.
If rows 3, 4, 5 and 6 are all considered duplicates and row 3/Col2 has info in it that rows 4 and 5/Col2 do not, I want to combine it into the single row I export to the new table, retaining as much information from the duplicate rows as possible.
Can anyone tell me why my script is not wanting to play nice?
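For what it's worth, a hedged rewrite of the posted query: in a SELECT ... INTO the INTO clause comes before FROM and any WHERE after it, and since the goal is to treat values as duplicates regardless of special characters, the cleanup belongs in the GROUP BY rather than in a WHERE filter. Only a few of the listed characters are stripped here as an illustration; extend the REPLACE chain (or a cleanup UDF) for the full set.

SELECT MAX(ID)   AS ID,
       MAX(Col1) AS Col1, MAX(Col2) AS Col2,
       MAX(Col3) AS Col3, MAX(Col4) AS Col4,
       MAX(Col5) AS Col5, MAX(Col6) AS Col6,
       MAX(Col7) AS Col7, MAX(Col8) AS Col8, MAX(Col9) AS Col9
INTO   NewTable
FROM   dbo.CLEANING_bk
GROUP BY REPLACE(REPLACE(REPLACE(Col1, '-', ''), ':', ''), ';', ''),
         REPLACE(REPLACE(REPLACE(Col3, '-', ''), ':', ''), ';', '');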
In our schools we have a number of East-European, Turkish, Scandinavian, ... students. Their names contain "special" characters, like Ö, Ü, Ø, ... Our users want to be able to search for student names without having to enter those special characters. Most often they don't know the exact spelling of the names and they get "no match found" messages as a result.
They want to have persons with the name Ösgür, Osgueld, ... in the result set after entering "osgu" in the search field.
What is the best way to do this? I was thinking about using another collation with the LIKE, but I don't know whether that would work or how it should be done. The database collation is Latin1_General_CI_AS.
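One sketch, assuming a hypothetical dbo.Students(LastName) table: apply an accent-insensitive collation only inside the predicate, leaving the column's Latin1_General_CI_AS collation untouched. The expression does defeat index seeks, so on large tables an indexed computed column with the AI collation may be worth considering.

SELECT LastName
FROM   dbo.Students
WHERE  LastName COLLATE Latin1_General_CI_AI LIKE '%osgu%';
-- CI_AI = case-insensitive and accent-insensitive, so Ösgür and Osgueld both match 'osgu'.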