The following T-SQL code takes a character string and performs the encoding necessary to generate a Code 128 formatted string for use with a Code 128 barcode font.

declare @myString varchar(255)
select @myString = 'BarCode 1'
-- Define the final holding place of our output string
declare @finalArray varchar(255)
-- Define the variables that we'll need to be using
declare @checksumTotal int
declare @checksum int
select @checksumTotal = 104;  -- start value for Code Set B
select @checksum = 0;
-- Font-specific characters: the start, stop and space characters, and the lookup
-- string used for the checksum character, depend on the character mapping of the
-- Code 128 font you use. The values below are common defaults; adjust them to your font.
declare @startchar char(1)
declare @stopchar char(1)
declare @spacechar char(1)
declare @asciiString varchar(255)
select @startchar = char(204)  -- Start Code B in many free Code 128 fonts
select @stopchar = char(206)   -- stop character in many free Code 128 fonts
select @spacechar = ' '        -- character for checksum value 0
-- Characters for checksum values 0-94; values 95-102 are font-specific and must be appended from your font's mapping table
select @asciiString = ' !"#$%&''()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~'
-- Start building our output
select @finalArray = @startchar
-- Loop through our input variable and start pulling out stuff
declare @position int
declare @thisChar char(1)
select @position = 1
while @position <= len(@myString)
begin
select @thisChar = substring(@myString, @position, 1)
select @checksumTotal = @checksumTotal + (@position * (ascii(@thisChar) - 32))
select @finalArray = @finalArray + @thisChar
select @position = @position + 1
end -- We've gone past the length now
-- Now we need to figure out and add the checksum character
select @checksum = @checksumTotal % 103
if @checksum = 0
select @finalArray = @finalArray + @spacechar
else
-- The barcode lookup string assumes 0 as the initial offset, so we need to add 1 to the checksum
select @finalArray = @finalArray + substring(@asciiString, @checksum+1, 1)
-- Now we append the stop character
select @finalArray = @finalArray + @stopchar
-- @finalArray now holds the barcode-encoded string
select @finalArray
Hope it helps,
Dalton
Blessings aren't so much a matter of "if they come" but "are you noticing them."
I'm working on a final project for school: a warehouse application. For this application I will need to generate a barcode. I have read some articles online and found that I could use a Code 39 barcode font.
I just don't fully understand how to generate the unique barcode. Could someone please help me? I would also like to store the barcodes in a SQL Server table. Could I use reports to print the barcode?
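For reference, here is a minimal sketch (C#) of the usual approach with a Code 39 font: the font simply draws the characters you give it, so the "encoding" amounts to wrapping the value in the '*' start/stop characters before storing or printing it. The table name, column and connection string below are assumptions for illustration, not part of the original question.

using System;
using System.Data.SqlClient;

class Code39Sketch
{
    // Most Code 39 fonts expect the data wrapped in asterisks (the start/stop character).
    static string ToCode39(string value)
    {
        return "*" + value.ToUpperInvariant() + "*";
    }

    static void Main()
    {
        string encoded = ToCode39("ITEM-00042");
        using (var conn = new SqlConnection(@"Server=.\SQLEXPRESS;Database=Warehouse;Integrated Security=true"))
        using (var cmd = new SqlCommand("INSERT INTO dbo.Barcodes (BarcodeText) VALUES (@b)", conn))
        {
            cmd.Parameters.AddWithValue("@b", encoded);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
        Console.WriteLine(encoded);  // bind this stored string to a textbox that uses the Code 39 font in the report
    }
}

In the report itself, the stored string is then just text rendered with the barcode font.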
I'm using a free barcode font so I can create scannable tickets via Reporting Services 2005. When I print the tickets, everything seems fine. But when I export to PDF, it looks like the barcode font shrinks: all the lines are pulled together, making scanning impossible. Is there a setting I can use to preserve the font's width?
Update: the weird thing is, when I export from inside Visual Studio, the barcode appears in the PDF as it should!?
I am having an issue exporting a Code 39 barcode to PDF (it works fine in Reporting Services and Excel). Once in PDF, white spaces are inserted between the bars, making it unrecognizable. I have tried resizing, repositioning and different font sizes. SSRS and SQL are running on Windows Server 2003 Standard Edition Service Pack 1. Is there a hotfix out there for exporting EMFs to PDFs?
Hi friends, I am new to working with Crystal Reports. In my application I want to display a barcode in a Crystal Report, and I am totally confused about how to do this. Can you help me?
Hello all. I need to build a barcode and stuff it into an image control at runtime in SSRS. I have code that works perfectly for web forms and so on, but not in the actual SSRS report itself. In SSRS I put the code in the report properties code window, but it blows up on Bitmap, Graphics, etc., so I added the System.Drawing and System.Drawing.Imaging namespaces and it still fails with the same error. I'm certain there is a simple fix for this and would greatly appreciate anyone's assistance in making this happen.
Error:
There is an error on line 15 of custom code: [BC30002] Type 'Bitmap' is not defined.
Dim ValidCodes As String = "17401644163811761164110012241220112416081604157214361244123014841260125416501628161417641652190218681836183018921844184217521734159013041112109414161128112216721576157014641422113414961478114219101678158217681762177418801862181418961890181819141602193013281292120011581068106214241412123212181076107415541616197815561146134012121182150812681266195619401938175817821974140013101118151215061960195415021518188619661724168016926379"
Dim Digit As Integer = 104                  ' start value for Code Set B
Dim i As Integer
For i = 1 To input.Length
    ' ValidInput (the set of characters the symbology accepts) is declared elsewhere in the full code.
    Digit += (i * InStr(1, ValidInput, Mid(input, i, 1)))
Next
I am trying to encode a barcode in Reporting Services using SQL Server 2005. I was told it is possible to keep the barcode in the exported Excel sheet; however, I did not find the barcode on it, only a square black image.
Ok, so I have a report that generates product labels on a standard mailing label sheet (30 of them on an 8.5x11 sheet of stock). The whole process works just fine, even the printing of the barcodes, when I am printing the labels from VS2005's Preview function. However, when I moved the report to our report server and run the same report, the barcodes don't print out. I have checked, and the barcode font has been installed on the server.
Any suggestions would be greatly appreciated. Thanks! - Eric
Hi all, I have an application which will send out plain-text email in multiple languages. The email content is pulled from a txt file saved in UTF-8. I can send out email from the template with that encoding, but when I insert data from SQL Server, the data from SQL Server is not encoded. How do I encode the data (in other languages) from SQL Server into UTF-8 so that it can be sent together with the template? I have tried changing the data into bytes and encoding it in UTF-8, but it won't display correctly. Please help. Thanks.
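For what it's worth, a minimal sketch (C#; the VB.NET equivalent is direct) under the assumption that the mail is sent with System.Net.Mail: data read from an nvarchar/ntext column arrives in .NET as a Unicode string already, so no byte-level conversion is needed; the UTF-8 part is handled by the body encoding of the message. Table, column, file and server names below are placeholders.

using System.Data.SqlClient;
using System.IO;
using System.Net.Mail;
using System.Text;

class Utf8MailSketch
{
    static void Main()
    {
        // The plain-text template saved as UTF-8 (path is a placeholder).
        string template = File.ReadAllText("template.txt", Encoding.UTF8);

        string name;
        using (var conn = new SqlConnection("Server=.;Database=App;Integrated Security=true"))
        using (var cmd = new SqlCommand("SELECT TOP 1 CustomerName FROM dbo.Customers", conn))
        {
            conn.Open();
            // ADO.NET hands back nvarchar data as a .NET string (already Unicode).
            name = (string)cmd.ExecuteScalar();
        }

        var msg = new MailMessage("from@example.com", "to@example.com");
        msg.Subject = "Hello";
        msg.Body = template.Replace("{name}", name);
        msg.BodyEncoding = Encoding.UTF8;  // tells the mail client the body bytes are UTF-8
        msg.IsBodyHtml = false;
        new SmtpClient("smtp.example.com").Send(msg);
    }
}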
Hi, I am getting this error: {"XML parsing: line 1, character 43, unable to switch the encoding"} (System.Data.SqlClient.SqlException) when I run the code below. I know it is caused by the fact that the encoding of the XML file I'm trying to insert is not UTF-16, but rather UTF-8. However, I would like to be able to enter any encoding. Is this possible? If not, is there a way to convert the encoding before I insert? Or any other ideas anyone might have. Thanks!

XmlDataSource xds = new XmlDataSource();
xds.DataFile = tbLink.Text.Trim();
xds.XPath = "rss/channel/item";
XmlDocument xmlDoc = new XmlDocument();
xmlDoc = xds.GetXmlDocument();
string strConn = WebConfigurationManager.ConnectionStrings["ConnectionString"].ConnectionString;
sqlComm.Parameters.Add(new SqlParameter("@XMLData", xmlDoc.InnerXml));
strSQL = " INSERT INTO tblCastStore ( intCastID, CastXML ) VALUES ( @@IDENTITY, @XMLData );";
sqlComm.CommandText = strSQL;
try
{
    sqlConn.Open();
    sqlComm.ExecuteNonQuery();
    sqlConn.Close();
}
catch (SqlException se)
{
    lblError.Text = se.Message;
}
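A possible way around it, sketched under the assumption that the rest of the code stays as posted: the "unable to switch the encoding" error appears when the XML text still carries an encoding declaration (e.g. encoding="utf-8") while being sent as a Unicode string parameter. Sending the document element's OuterXml (which carries no declaration) through a typed xml parameter avoids that; this is only a sketch, not the only possible fix.

using System.Data;
using System.Data.SqlClient;
using System.Xml;

class XmlParameterSketch
{
    static void InsertXml(SqlConnection sqlConn, XmlDocument xmlDoc)
    {
        using (var sqlComm = new SqlCommand(
            "INSERT INTO tblCastStore ( intCastID, CastXML ) VALUES ( @@IDENTITY, @XMLData );", sqlConn))
        {
            // DocumentElement.OuterXml is the content without the <?xml ... ?> declaration,
            // so SQL Server never sees a conflicting encoding attribute.
            SqlParameter p = sqlComm.Parameters.Add("@XMLData", SqlDbType.Xml);
            p.Value = xmlDoc.DocumentElement.OuterXml;
            sqlComm.ExecuteNonQuery();
        }
    }
}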
The problem is: reading data with ADO is OK (Lithuanian), but when I try to write, most of the national characters go to hell (plain ASCII). What's wrong with it? I see no option to set a code page.
I have a user who is using data from my database for a web portal. One of my tables had a field of type ntext. The technology he is using couldn't cope with ntext, so I changed the data type to nvarchar instead. The user is now getting some superfluous characters coming back as part of the data in the field, e.g. '12 ', where a space appears if looking at the data through something like Query Analyzer. He was asking if I could change the character encoding to Unicode.
I thought that data types like nvarchar were Unicode anyway, but I guess the fact that I changed the type might mean that I need to explicitly declare it as Unicode. Does anyone know if this is the case?
I'm pretty new to SQL and I'm kind of confused by the concept of encoding in SQL. I tried to read several articles but there are still things that I don't understand. I have a table with two columns, and these columns contain English and Chinese characters.
CREATE TABLE Names (FirstName NVARCHAR (50), LastName NVARCHAR (50));
The collation for both columns is Latin1_General_BIN.
My questions are:
1. Does all data that is saved in an nvarchar column have the same encoding type, namely UCS-2?
2. If a client application inputs a Chinese character into the database table, what is the encoding type of that data? Is it UCS-2?
3. If a client application successfully enters Chinese characters into the database table and I want to display those Chinese characters on a web page, do I need to convert them from UCS-2 (Unicode) into Big-5 encoding?
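On question 3, a small sketch (C#), assuming the page genuinely has to be served as Big5 rather than UTF-8: data read from an nvarchar column arrives in .NET as a UTF-16 string, and a conversion is only needed at the point where bytes are written out in another encoding. If the page is served as UTF-8 instead, no explicit conversion is needed at all.

using System.Text;

class Big5Sketch
{
    // 950 is the Windows code page for Traditional Chinese (Big5).
    static byte[] ToBig5(string value)
    {
        return Encoding.GetEncoding(950).GetBytes(value);
    }
}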
Hi all, I have an issue querying against UTF-16 encoded characters in a SQL 2000 database: for example, "López" is saved into the database as "López" (due to the UTF-16 encoding); somehow, when I query data with conditions of "like 'lop%'" or "like 'Lóp%'", the row for López is not returned. NOTE: an accent-insensitive collation cannot help in this case. Thank you, Albion (052X)
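One hedged sketch of a client-side repair, assuming the UTF-8 bytes were stored through a code page 1252 (non-Unicode) column, which is what the "López" pattern usually indicates: re-encoding the value recovers "López", after which the LIKE comparisons behave as expected against the repaired text.

using System.Text;

class MojibakeRepairSketch
{
    static string RepairUtf8StoredAsCp1252(string stored)
    {
        // Recover the raw bytes as code page 1252 sees them, then decode those bytes as UTF-8.
        byte[] raw = Encoding.GetEncoding(1252).GetBytes(stored);
        return Encoding.UTF8.GetString(raw);
    }
}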
I've seen a dump of the TDS traffic going from my web server to the SQL Server database and it seems encoded in Unicode (it has two bytes per char). Seems it would have a huge impact on performance if it travelled in one byte. Why might this be? rj
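One likely contributor, sketched below as an assumption about the client code rather than a diagnosis: ADO.NET sends string parameters as nvarchar (two bytes per character on the wire) unless the parameter is explicitly typed, so typing parameters that target varchar columns keeps that part of the traffic single-byte. Table and column names are placeholders.

using System.Data;
using System.Data.SqlClient;

class TypedParameterSketch
{
    static void UpdateCity(SqlConnection conn, string city)
    {
        using (var cmd = new SqlCommand("UPDATE dbo.Addresses SET City = @city WHERE Id = 1", conn))
        {
            // Explicitly varchar(50): the value travels as single-byte characters
            // instead of the default nvarchar.
            cmd.Parameters.Add("@city", SqlDbType.VarChar, 50).Value = city;
            cmd.ExecuteNonQuery();
        }
    }
}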
I have to import a large xml file into sql server, and I use a data transfer task with an xml source.
The xml files are generated without specifying any encoding, so I get many character errors if I don't change the encoding.
When I put this encoding declaration into the xml file by hand (with an xml editor), <?xml version="1.0" encoding="ISO8859-1"?>, the SSIS task works perfectly without any error.
So, I'm looking for a way to use this encoding without editing the xml file (it is more than 500 MB...). The ways I can imagine are:
1) change the package encoding (but I haven't found this kind of setting)
2) change the xml source encoding (but I haven't found this kind of setting)
3) change the console chcp (normally I have 850; I tested 1252 but without any success)
4) make an xml transformation (but I don't know the best way); I've tried the XML task without any success... a streaming sketch of this option follows below.
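For option 4, a minimal sketch (C#), assuming the files really are ISO-8859-1 text with no declaration: write the declaration first and then stream the original bytes across, so the 500 MB file is never loaded into memory. File names are placeholders.

using System.IO;
using System.Text;

class AddXmlDeclaration
{
    static void Main()
    {
        byte[] declaration = Encoding.ASCII.GetBytes("<?xml version=\"1.0\" encoding=\"ISO-8859-1\"?>\r\n");
        using (FileStream input = File.OpenRead("input.xml"))
        using (FileStream output = File.Create("input_with_declaration.xml"))
        {
            // Write the declaration, then copy the original file after it unchanged.
            output.Write(declaration, 0, declaration.Length);
            byte[] buffer = new byte[81920];
            int read;
            while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
                output.Write(buffer, 0, read);
        }
    }
}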
I'm storing data in SQL Server 2005 Express ntext fields. I've added <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> to the Master Page, but apostrophes and other characters are appearing incorrectly in the browser. For example, apostrophes are appearing as Æ. They seem to have changed to this when I upsized from Access to SQL Server (so I have Æ in the db field). The back-end of the site is still in classic asp, where I can set the CodePage to 65001 and the CharSet to utf-8, and everything appears fine there. What can I change in the ASPX to get characters to display correctly?
Hello, I faced a problem while reading some strings from the database (SQL 2005). Some letters are encoded with HTML codes, making sorting really difficult. I tried to use IComparer objects with different CultureInfo information, but it doesn't do any good. I am quite new to web development, so I don't really have any clue whether there is a recommended and clean way of solving this problem. It could be that it's something really simple, but I only came up with the idea of search and replace in all strings. If possible, I would like to avoid this. :) If not, please let me know. Thanks in advance.
I have a problem when I want to input Chinese into SQL 2000. I have already changed the data type to ntext, and for the ASP web page I have chosen the encoding "big5". But the problem is I can't insert a Chinese string into the database from ASP. Can anyone help?
1) In which code set does OPENROWSET read the data file? Is there any way to tell OPENROWSET to read in a specific format (i.e. as Unicode data or non-Unicode data, etc.)? 2) Can we specify the datatype as "SQLCHAR" and the collation type as "SQL_Latin1_General_CP1_CI_AS" in the format file for Unicode characters? If this is wrong, what is the alternative for Unicode characters? 3) Is there any other place the problem could be?
I wrote a CLR function which receives some XML parameters. In certain situations it gives me the following error message:
Msg 6522, Level 16, State 1, Line 58
A .NET Framework error occurred during execution of user-defined routine or aggregate "svmScale":
System.Xml.XmlException: Invalid character in the given encoding. Line 1, position 27.
System.Xml.XmlException:
   at System.Xml.XmlTextReaderImpl.Throw(Exception e)
   at System.Xml.XmlTextReaderImpl.Throw(String res, String arg)
   at System.Xml.XmlTextReaderImpl.InvalidCharRecovery(Int32& bytesCount, Int32& charsCount)
   at System.Xml.XmlTextReaderImpl.GetChars(Int32 maxCharsCount)
   at System.Xml.XmlTextReaderImpl.ReadData()
   at System.Xml.XmlTextReaderImpl.ParseText(Int32& startPos, Int32& endPos, Int32& outOrChars)
   at System.Xml.XmlTextReaderImpl.ParseText()
   at System.Xml.XmlTextReaderImpl.ParseElementContent()
   at System.Xml.XmlTextReaderImpl.Read()
   at System.Xml.XmlTextReader.Read()
   at System.Xml.XmlWriter.WriteNode(XmlReader reader, Boolean defattr)
   at System.Data.SqlTypes.SqlXml.CreateMemoryStreamFromXmlReader(XmlReader reader)
   at System.Data.SqlTypes.SqlXml..ctor(XmlReader value)
   at UserDefinedFunctions.svmScale(SqlXml sql_problem, Boolean perChannel, Single x_factor, Single y_factor)
If I change one of the zeros in the given tag, the error message disappears. If I cut the given tag, the error message disappears. If I cut the given tag and then paste the previous one in its place, the error message APPEARS. So my conclusion is that any tag can be wrong at a specific position in the XML. This makes me wonder.
Another wonderful thing is that if I take this CLR function and the same T-SQL code and run it on my notebook with my SQL Express, there is no error message. But if I try to use it on the server with SQL 2005, it throws this error message. The resulting XML comes from a SELECT ... FOR XML AUTO, so I suspect it does not contain illegal characters. This is probably true, because the replace procedure above can make the message disappear.
The server: Product: Microsoft SQL Server Enterprise Edition Op System: Microsoft Windows NT 5.2 (3790) Platform: NT INTEL X86 Version: 9.00.3027.00 Language: English (United States) Memory: 4095 (MB) Processors: 2 Collation: SQL_Latin1_General_CP1_CI_AS Clustered: False
The notebook: Product: Microsoft SQL Server Express Edition Op System: Microsoft Windows NT 5.1 (2600) Platform: NT INTEL X86 Version: 9.00.1399.06 Language: English (United States) Memory: 1015 (MB) Processors: 1 Collation: SQL_Latin1_General_CP1_CI_AS Clustered: False
I am trying to do string scrubbing in a SQL CLR function, including removing certain HTML formatting. I would like to use the HtmlDecode method, but it's my understanding that System.Web is not available to SQL CLR (without marking the code unsafe, which is not an option for me, as this is for an application we sell externally, and unsafe calls would not go over well with customers). Is there any class that IS supported for SQL CLR that exposes this functionality? Thanks.
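A possible workaround, sketched under the assumption that the five predefined entities plus numeric character references cover the scrubbing you need: System.Text.RegularExpressions lives in System.dll, which is available to SAFE assemblies, so a small hand-rolled decoder avoids System.Web entirely. Wrap it in your own SqlFunction as usual.

using System.Text.RegularExpressions;

public class HtmlEntitySketch
{
    public static string DecodeBasicEntities(string input)
    {
        if (string.IsNullOrEmpty(input)) return input;
        // Numeric character references such as &#233;
        string s = Regex.Replace(input, @"&#(\d+);",
            m => ((char)int.Parse(m.Groups[1].Value)).ToString());
        // The five predefined entities; &amp; goes last so it is not double-decoded.
        s = s.Replace("&lt;", "<").Replace("&gt;", ">")
             .Replace("&quot;", "\"").Replace("&apos;", "'")
             .Replace("&amp;", "&");
        return s;
    }
}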
Can someone point me to some code that properly prepares a string for an INSERT or UPDATE into a char or varchar column? I need to be able to handle single and double quotes and any other possible issues. Thanks.
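A minimal sketch of the usual answer: let a parameterized command do the quoting, so single and double quotes (and anything else) in the value go through untouched. If you truly must build the literal yourself in T-SQL, the only required escape is doubling the single quotes, e.g. REPLACE(@s, '''', ''''''). Table and column names below are assumptions.

using System.Data.SqlClient;

class ParameterizedInsertSketch
{
    static void InsertComment(SqlConnection conn, string comment)
    {
        using (var cmd = new SqlCommand(
            "INSERT INTO dbo.Comments (CommentText) VALUES (@comment)", conn))
        {
            // The provider handles quoting; no manual escaping of ' or " is needed.
            cmd.Parameters.AddWithValue("@comment", comment);
            cmd.ExecuteNonQuery();
        }
    }
}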
After generating one of my reports, I process the XML output through an XSLT stylesheet and export it to a text file. The issue is that after the export, the generated output text file begins with the byte-order mark "EF BB BF" that marks Unicode files encoded in UTF-8. I have explicitly set the attributes of the xsl:output element to <xsl:output encoding="us-ascii" media-type="text/plain" method="text">, but it seems as though those are ignored when the output file is written. I cannot have these characters, because I am generating a fixed-width file for input into a legacy system.
Any suggestions or thoughts on what is causing the BOM to be written to my file, even though I have set the encoding to be different than UTF-8?
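One hedged possibility, assuming the transform is run from your own .NET code rather than inside the report server: when you supply the output writer yourself, its encoding wins, so a writer created with an encoding that emits no byte-order mark (ASCII, or new UTF8Encoding(false)) produces a clean fixed-width file regardless of the xsl:output attributes. File names are placeholders.

using System.IO;
using System.Text;
using System.Xml.Xsl;

class BomFreeTransformSketch
{
    static void Main()
    {
        var xslt = new XslCompiledTransform();
        xslt.Load("report.xslt");
        // Encoding.ASCII writes no byte-order mark, so the output does not start with EF BB BF.
        using (var writer = new StreamWriter("output.txt", false, Encoding.ASCII))
        {
            xslt.Transform("report.xml", null, writer);
        }
    }
}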
I have a table column of type ntext; however, there are some Chinese characters stored in that field and they are messed up and unreadable.
In my VB.NET code, I convert to Unicode by getting the raw bytes of the stored string and decoding them as UTF-8, e.g.:
Public Shared Function ConvertToUnicode(ByVal s As String) As String
    ' Re-interpret the stored text: get the raw bytes back using the system default
    ' code page, then decode those bytes as UTF-8 to recover the original characters.
    Dim MyBytes As Byte() = Encoding.Default.GetBytes(s)
    Dim GBencoding As Encoding = System.Text.UTF8Encoding.UTF8
    Return GBencoding.GetString(MyBytes)
End Function
This works well, but the problem is that it slows the process down quite a lot, and I wonder whether there is any text encoding method I can use in SQL that runs when I do the SELECT statement, something like:
SELECT Convert(MyNTEXTColumn) ...?
declare @test as varchar(32)
declare @test2 as varchar(32)
set @test = 'today''s problem'
set @test2 = 'my <string> '
select @test as '@attribute' for xml path ('myrow')
select @test2 as '@attribute' for xml path ('myrow')
I want FOR XML PATH to encode the single apostrophe as &apos;, but the apostrophe doesn't get encoded. In the second example the greater-than and less-than signs do get encoded.
Our company has just migrated to 2005 from 2000. I've got Management Studio doing much of what I need it to do from the old Enterprise Manager (which I wish I could go back to).
My problem is this.
When saving CSV files from a script run inside Management Studio (ad hoc reports, custom queries, etc.), the default encoding is Unicode. This produces files that Excel doesn't handle very well (most of the people I'd be sending the results to would freak out when the CSV file doesn't appear as expected).
I've discovered that I can change the encoding to ANSI when I save the query but as it's a manual process I'm sure I'm going to get tired of it very quickly. I would really like to make ANSI encoding the default.
(at the moment and for the foreseeable future, we don't need the functionality that Unicode provides).
Does anyone know how to do this? I've tried searching the docs without success.
I have an issue when generating a flat file with SSIS.
Here are the steps:
1. Create a package with a Data Flow Task in it.
2. In the Data Flow Task, insert an OLE DB Source and a Flat File Destination.
3. In the OLE DB Source, specify this query: "select '000' as [record code]".
4. In the Flat File Destination, create a new Flat File Connection, specify the path, and choose code page "65001 (UTF-8)".
5. I've specified the data type "Unicode string [DT_WSTR]" in the Advanced properties.
When I execute the task, I get this error : [Flat File Destination [16]] Error: Data conversion failed. The data conversion for column "record code" returned status value 2 and status text "The value could not be converted because of a potential loss of data.".
I've also tried the data types "Unicode text stream [DT_NTEXT]" and "string [DT_STR]", but I get errors such as: "The code page on input column [record code] is 1252 and is required to be 65001."
The Locale I am using is English(US).
I would be grateful if this test could be done for all the other data types as well, such as int, datetime, float, decimal, uniqueidentifier, etc.