I extracted a set of data from an email to Excel and then to a SQL Server table.
I'm trying to update a column in a database to clean out everything except the name "George Jeffrey"
bodytext1 column :-
" The following person is leaving the department, please update your records accordingly. Person Leaving: George Jeffrey/Users/DVC Division/Agency: People and Community Advocacy Date Leaving: Thursday 10, November 2005 "
I tried to use the REPLACE function:
update test.dbo.test
set bodytext1 = REPLACE(bodytext1,'The following person is leaving the department','')
but I got this error:
Server: Msg 8116, Level 16, State 1, Line 1
Argument data type text is invalid for argument 1 of replace function.
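REPLACE does not accept the legacy text data type directly, which is what Msg 8116 is complaining about. A minimal sketch of a workaround, assuming the values fit within 8000 characters so the column can be cast first (on SQL Server 2005 or later, casting to varchar(max) lifts that limit):

update test.dbo.test
set bodytext1 = REPLACE(CAST(bodytext1 AS varchar(8000)), 'The following person is leaving the department', '')

On SQL Server 2000, UPDATETEXT is the other route for in-place edits of text columns.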
Can you not add a text column to a full-text index? If I change it to an nvarchar it works fine, but if I change it to a text column it won't index. Does anyone know how to fix this?
I need to separate data within one column into two. For example, I have a column in one table that contains first and last names separated by a space. I need to export that data into another table (or create a view) that has a lastname and a firstname column. I'm unable to specify character lengths within the SUBSTRING statement because the data obviously varies, so it was suggested I use the instring function within it. However, instring doesn't seem to be a recognised function. How come something so simple within Excel is causing me such a headache in SQL? I would really appreciate any suggestions.
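The T-SQL counterpart of Excel/VB's InStr is CHARINDEX. A sketch, assuming a source column named fullname (a hypothetical name) holding exactly one space between the two names:

SELECT
    SUBSTRING(fullname, 1, CHARINDEX(' ', fullname) - 1) AS firstname,
    SUBSTRING(fullname, CHARINDEX(' ', fullname) + 1, LEN(fullname)) AS lastname
FROM sourcetable

SUBSTRING accepts a length that runs past the end of the string, so LEN(fullname) is safe as the third argument.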
I have 2 text data type columns that I would like to combine into a new column. I'd also like to add a newline character between each column value when I combine them. I've tried columnA + columnB but that didn't work. How could I do that?
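The + operator is not defined for the legacy text type, which is the likely reason the concatenation failed. A sketch, assuming SQL Server 2005 or later so the values can be cast to varchar(max), with CHAR(13) + CHAR(10) supplying the newline:

SELECT CAST(columnA AS varchar(max)) + CHAR(13) + CHAR(10) + CAST(columnB AS varchar(max)) AS combined
FROM yourtable   -- yourtable is a placeholder name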
Hi, I have a table that has two columns whose values need to be combined into one column, pipe-delimited, in the first one. But I am not sure how to write the query.
Here is my attempt, which by the way does not work, but you might understand what I am trying to do.
update tabmodulesettings
set settingvalue = (select settingvalue from tabmodulesettings as t2
                    where t2.settingname = 'm2' and t.tabmoduleid = t2.tabmoduleid)
                 + '|' +
                   (select settingvalue from tabmodulesettings as t3
                    where t3.settingname = 'm7' and t.tabmoduleid = t3.tabmoduleid)
from tabmodulesettings t
where settingname = 'm2'
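One likely problem: the UPDATE targets the table by name while the FROM clause aliases it as t, so the correlated references to t inside the subqueries don't bind to the rows being updated. A sketch that updates through the alias and joins to the m7 row instead (assuming settingvalue is a concatenable string type):

update t
set settingvalue = t.settingvalue + '|' + t7.settingvalue
from tabmodulesettings t
join tabmodulesettings t7
  on t7.tabmoduleid = t.tabmoduleid and t7.settingname = 'm7'
where t.settingname = 'm2'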
Easy so far. But what I need is to find out whether the audited table contains text (or ntext/image) columns or not. If it contains any text columns, the statement should automatically be changed to
select a,b into #inserted from inserted
Remember that the trigger must be usable for any table in the database without manually entering the table's structure or the column names.
Best would be fetching the structure from the system views (INFORMATION_SCHEMA).
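A sketch of that metadata lookup, with the table name hard-coded as 'MyTable' purely for illustration (a generic trigger would resolve its parent table's name at runtime):

select COLUMN_NAME
from INFORMATION_SCHEMA.COLUMNS
where TABLE_NAME = 'MyTable'
  and DATA_TYPE in ('text', 'ntext', 'image')

If this returns any rows, the trigger would build its column list dynamically and select only the non-LOB columns into #inserted.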
I'm at my wits' end with SQL Server here; perhaps someone can explain to me why this is happening, and hopefully a way around it or to fix it. I recently had this happen while working with a table and exporting the results of a query to an XML file. One field in the table is a text field; before being output to XML I must parse out some of the text, usually near the beginning, during which the text is converted to varchar(max). I handled that just fine, but when I examine the output file, the tail end of the field has several bytes that get duplicated. Example:
empowering them to squeeze every drop of performance available. #x0D;.#x00;ble. #x0D;.#x00;</NOTES>
The first #x00; is the terminating character (which ultimately I need to strip out to make the file comply with normal XML rules, but that's a different issue altogether). Here is another example that has even more data duplicated:
encompassing the production, post production, broadcasting and computer video markets.#x0D;.#x00;ing and computer video markets.#x0D;.#x00;#x00;</NOTES>
If anyone can shed some light on why this is happening or at least how I can prevent it, your help would be greatly appreciated...
A side note on this is that when I run the select statement in the sql manager, the results do not include the duplicated bytes...
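For the side issue of the embedded terminator, a sketch that strips the 0x00 character from the value before it is written to XML (assuming the value is already varchar(max) at that point; @notes is a placeholder variable):

set @notes = REPLACE(@notes, CHAR(0), '')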
Hi, I have a problem. I have a table with a text type column and an nvarchar(2000) type column on my MS SQL 2000 Server. I know that the longest text in the text field is 1000 chars. I want to copy the content of the text field into the nvarchar field. I tried CONVERT and CAST, but after the update there are only 255 chars in the nvarchar field.
Best regards
Marc
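One thing to check is whether the conversion specifies an explicit target length, and whether the 255-character limit is the client tool's display truncation rather than the stored value. A sketch with the length named explicitly (table and column names are assumptions):

update MyTable
set nvarcharcol = CAST(textcol AS nvarchar(2000))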
We have remote users running MSDE entering information into a database. To send the data back to the home office, we've written some routines that export the data into SQL scripts in text files:

DELETE <table> WHERE KeyID=<x>
INSERT INTO <table> (fields) VALUES (fields)

We then zip up these scripts, and either email or FTP the ZIPs back to the home office, where a program opens them up and simply "runs" the SQL statements. The new version of my database application contains both TEXT fields (containing rich text) and IMAGE fields (containing file attachments like JPGs or Excel spreadsheets). I'm worried that our SQL scripter engine (which we really like) is going to choke on the text and image fields. Can anyone suggest a similar method to send the SQL data around?
thanks
matt tag
Is there a way, using MS SQL Server and Enterprise Manager, to get a text document (or perhaps even a Word document) listing all table names, column names, etc. of a database?
--
Sugapablo
http://www.sugapablo.com <-- music
http://www.sugapablo.net <-- personal
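One route that skips Enterprise Manager entirely: run a metadata query in Query Analyzer with results-to-text (or results-to-file) and save the output as a .txt to open in Word. A minimal sketch:

select TABLE_NAME, COLUMN_NAME, DATA_TYPE
from INFORMATION_SCHEMA.COLUMNS
order by TABLE_NAME, ORDINAL_POSITION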
I was trying to execute an OLE DB Source task with a SQL command and I got the following error: "Only text pointers are allowed in work tables, never text, ntext, or image columns. The query processor produced a query plan that required a text, ntext, or image column in a work table." I read this forum and checked the data types on both sides (source and target), and they are the same data type, TEXT. http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=46086
I already executed my query from SSMS using a linked server to connect to the source, and it worked; I could load the data into my target table. Then I tried to execute it in SSBIMS and it failed. I wanted to try an Execute SQL Task, but the problem is that I can only have one connection object assigned to the task, so I cannot pull the data from one db and insert it into another with a single Execute SQL Task.
Any ideas of why am I getting this error? Do I need to set a property to something in order to run my query using OLE DB Source Task?
I'm looking at using full-text indexing for tables to query. I have some smaller fields (varchar(50) columns that store names) that I was contemplating using full-text indexing for. I was just curious if it is worth it?
Basically the data that will be there are one-word names, without any spaces or whatnot.
I'm creating a web-based NT RAS report site and am looking for the most efficient way to import the data from the NT event log into SQL2k. I'm using the 'dumpel' utility from the Resource Kit and all is fine except the 10th column - the message detail:
"The user DOMAINuserid connected on port Mdm15 on 08/23/2002 at 07:25am and disconnected on 08/23/2002 at 07:27am. The user was active for 2 minutes 23 seconds. 78809 bytes were sent and 50675 bytes were received. The port speed was 49300."
I need to parse this one long text string into 6 distinct columns: userID, port, duration, bytes_xmt, bytes_rcv and portspeed. After a quick review of the rowsets, the strings seem to hold a consistent output ... no real variances I can see.
I've dabbled with views but am facing a small performance issue that could get bigger: the SQL server not only has to run the text file import package, but also the view to format the text dump into a workable dataset; then my report code bangs over 30 queries against the final dataset. It already takes our SQL2k server over 3 minutes to parse about 20,000 rows, and the server's a beast (dual 1.8 P4 CPU, 3GB RAM, RAID, etc).
What I think would work best is to abandon the view (performance will only get worse as the row count increases) and instead INSERT the rows into one table.
Any ideas, anyone? Any good scripts out there that can help me parse the long text string quicker than using SUBSTRING and REPLACE functions?
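Since the format is consistent, CHARINDEX anchored on the fixed phrases can slice the fields during the INSERT itself. A sketch for the first two fields (RasLog and msg are hypothetical names; the remaining four fields follow the same pattern):

select
    substring(msg, charindex('The user ', msg) + 9,
              charindex(' connected', msg) - charindex('The user ', msg) - 9) as userID,
    substring(msg, charindex('on port ', msg) + 8,
              charindex(' on ', msg, charindex('on port ', msg) + 8) - charindex('on port ', msg) - 8) as port
from RasLog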
I have some bad data that I'm trying to do a workaround for. I have a comma-delimited text file with a column which contains a few records that have, gasp, a comma in them, thus creating an extra column and pushing out NULL values into an empty column that isn't recognized by my DTS package. Can I fix this?
And if I'm correct in my assumption that I can't, how do I import only the fields preceding the bad data field? When I attempt to do that, I get the following message:
'Too many columns found in the current row; non white-spaced characters found after the last defined column's data.'
Blindman, you probably know exactly what to do, huh?
Are the XML tags searched, as well as the contents within the tags, when an xml column is included in a full-text index? Or is it only the contents within the tags? Would I need to combine full-text search with XPath/XQuery to get at the actual tags?
Hi, I have 2 tables and each has a text column. When I compared the 2 columns with the LIKE operator I found something strange.

create table test1 ( var1 text )
create table test2 ( var1 text )
insert into test1 values ( '-- [ CustomerType = 1 ]' )
insert into test2 values ( '-- [ CustomerType = 1 ]' )
select * from test1, test2 where test1.var1 like test2.var1

The last query surprisingly did not return any results. However, when I took off the '[' in the text data and had something like this

insert into test1 values ( '-- CustomerType = 1 ]' )
insert into test2 values ( '-- CustomerType = 1 ]' )

the query returned the expected result, showing one row. Any help would be great.
Regards,
Arun Prakash. B
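A likely explanation: on the pattern side of LIKE, [ opens a character class, so the pattern '-- [ CustomerType = 1 ]' never matches its own literal text. A sketch that escapes the bracket before comparing (the varchar(8000) cast is an assumption so REPLACE can run against the text column on SQL 2000):

select *
from test1, test2
where test1.var1 like replace(cast(test2.var1 as varchar(8000)), '[', '[[]')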
When you set the results from a query to text, very often you end up with a format like:
job_id name ---------------------------------------------------------------------- ------------------------------------------------------------------------------------------------
Is there a way to get rid of the extra space between two columns? Sometimes it's more than the width of my screen between columns.
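The width of each column in text output follows the column's declared maximum length, not the data, so one common trick is to cast wide columns down to the width you actually need. A sketch (the 40 is arbitrary and yourtable is a placeholder):

select job_id, cast(name as varchar(40)) as name
from yourtable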
Hi, I have tried this code from http://jtkane.spaces.live.com/Blog/cns!1pWDBCiDX1uvH5ATJmNCVLPQ!316.entry for full-text search on multiple tables & columns. Here's my code:

SELECT * from [tStaffDir] AS e, [tStaffDir_PrevEmp] t,
CONTAINSTABLE([tStaffDir], *, @Name) as A
where
A.[KEY] = e.[ID] and
t.[ID] = e.[ID]

I have full-text indexed both the tables above, and I am able to get results from the [tStaffDir] table but not the [tStaffDir_PrevEmp] table. The [tStaffDir_PrevEmp] table does have a column (which is [ID]) that is indexed, unique and non-nullable. Please advise what I should do and look out for. Many thanks.
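One thing worth noting: each CONTAINSTABLE call searches exactly one table, so the query above only ever ranks [tStaffDir]; matches that exist only in [tStaffDir_PrevEmp] can never surface. A sketch that adds a second CONTAINSTABLE and keeps a row when either search hits (the outer joins are an assumption about the intended semantics):

SELECT e.*, t.*
FROM [tStaffDir] e
LEFT JOIN CONTAINSTABLE([tStaffDir], *, @Name) a ON a.[KEY] = e.[ID]
LEFT JOIN [tStaffDir_PrevEmp] t ON t.[ID] = e.[ID]
LEFT JOIN CONTAINSTABLE([tStaffDir_PrevEmp], *, @Name) b ON b.[KEY] = t.[ID]
WHERE a.[KEY] IS NOT NULL OR b.[KEY] IS NOT NULL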
I am using Microsoft SQL 2005 and I need to do a BULK INSERT from a .csv I just downloaded from PayPal. I can't edit some of the columns that are given in the report. I am trying to load specific columns from the file.
bulk insert Orders
FROM 'C:\Users\*******\Desktop\DownloadURL123.csv'
WITH (
    FIELDTERMINATOR = ',',
    FIRSTROW = 2,
    ROWTERMINATOR = '\n'
)
So where would I state which column names (from row #1 of the .csv file) map to which specific columns in the table?
I saw this on one of the sites, which seemed to guide me towards the answer, but I failed. Here you go, it might help you:
FORMATFILE [ = 'format_file_path' ]
Specifies the full path of a format file. A format file describes the data file that contains stored responses created using the bcp utility on the same table or view. The format file should be used in cases in which:
The data file contains greater or fewer columns than the table or view.
The columns are in a different order.
The column delimiters vary.
There are other changes in the data format. Format files are usually created by using the bcp utility and modified with a text editor as needed. For more information, see bcp Utility.
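A sketch of how that could look here: a bcp-style format file in which a server column order of 0 drops an unwanted field, plus the matching BULK INSERT. The field widths and column names below are made up for illustration:

9.0
3
1   SQLCHAR   0   100   ","      1   OrderDate   ""
2   SQLCHAR   0   100   ","      0   Unwanted    ""
3   SQLCHAR   0   100   "\r\n"   2   Amount      ""

bulk insert Orders
FROM 'C:\Users\*******\Desktop\DownloadURL123.csv'
WITH (FORMATFILE = 'C:\Users\*******\Desktop\orders.fmt', FIRSTROW = 2)

The 9.0 on the first line is the format-file version for SQL Server 2005; the sixth value on each field line is the target column's ordinal in the table, with 0 meaning the field is skipped.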
I explicitly set one column to have text qualifiers in a flat file connection manager and specified double quotes as the qualifier, yet in the output file the column is not qualified. What did I leave out?
We have found that when using the SSIS "Import and Export Wizard" with the "Microsoft Excel" data source, there appears to be a maximum column length of 255 characters for any row.
Even when defining the destination table columns as nvarchar(4000), the wizard fails with the errors shown below.
We have found no workaround except manually changing the input data. There don't appear to be any "Advanced" options for the Excel importer as there are for the flat-text importer. So, no question here, just posting the bug so that *next* time someone searches the web for an answer, this post comes up.
MessagesError 0xc020901c: Data Flow Task: There was an error with output column "English String" (18) on output "Excel Source Output" (9). The column status returned was: "Text was truncated or one or more characters had no match in the target code page.". (SQL Server Import and Export Wizard) Error 0xc020902a: Data Flow Task: The "output column "English String" (18)" failed because truncation occurred, and the truncation row disposition on "output column "English String" (18)" specifies failure on truncation. A truncation error occurred on the specified object of the specified component. (SQL Server Import and Export Wizard) Error 0xc0047038: Data Flow Task: The PrimeOutput method on component "Source - Sheet1$" (1) returned error code 0xC020902A. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing. (SQL Server Import and Export Wizard) Error 0xc0047021: Data Flow Task: Thread "SourceThread0" has exited with error code 0xC0047038. (SQL Server Import and Export Wizard) Error 0xc0047039: Data Flow Task: Thread "WorkThread0" received a shutdown signal and is terminating. The user requested a shutdown, or an error in another thread is causing the pipeline to shutdown. (SQL Server Import and Export Wizard) Error 0xc0047021: Data Flow Task: Thread "WorkThread0" has exited with error code 0xC0047039. (SQL Server Import and Export Wizard)
edit: After searching further, this is documented under "Excel Source" in BOL, which provides a registry-based workaround. I guess the issue is that the wizard considers truncation to be a 'fail' case and there's no easy way to override this behaviour, specify the column types, or determine which line is in error.
Truncated text. When the driver determines that an Excel column contains text data, the driver selects the data type (string or memo) based on the longest value that it samples. If the driver does not discover any values longer than 255 characters in the rows that it samples, it treats the column as a 255-character string column instead of a memo column. Therefore, values longer than 255 characters may be truncated. To import data from a memo column without truncation, you must make sure that the memo column in at least one of the sampled rows contains a value longer than 255 characters, or you must increase the number of rows sampled by the driver to include such a row. You can increase the number of rows sampled by increasing the value of TypeGuessRows under the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\4.0\Engines\Excel registry key.
We can easily load a file into db tables. However, my main concern here is the number of columns in the file. A text file TEXT_1400.txt has 1400 columns. I am unable to load data into my db table using BCP or BULK INSERT commands, as a maximum of 1024 columns are allowed per table in SQL Server 2008.
We can still go ahead and create a 'wide table' (a special table that holds up to 30,000 columns; the maximum size of a wide table row is 8,019 bytes). But when operating on a wide table, BCP/BULK INSERT commands still fail. After a few hours of scratching my head over BCP and BULK INSERT, I observed that while inserting, BCP/BULK INSERT commands are unable to identify SPARSE columns and skip them, which disturbs the column mapping and results in data conversion and truncation errors.
Is there any proper way to load this kind of file into the db table?
Case: Exporting a report to PDF/printing/TIFF. Report: contains 1 table with 19 columns. 1 column is static; the other 18 are visible at the user's discretion. The report, when printed/exported to PDF, naturally spans 2 pages: 16 columns on the first page, 3 on the second, and the column widths have been adjusted to provide a perfect page span.
User A elects to hide two of the columns and show the rest. The report compiles and the viewable version is perfect; the Excel export is perfect. The PDF export, on the first page, causes every fifth column, starting with the last column that was hidden, to be expanded to take up additional width. On the spanned page, it renders the first column on that page correctly, then there is a white-space gap equal to the width of the hidden columns, and then the rest of the cells show, with the last column expanded to take up the same width that the original 2 columns were going to take up, plus its own width.
We have tried several different settings to see if anything helps this issue or makes it worse. So far CanGrow/CanShrink/KeepTogether have made no impact. It is not possible to increase the page size due to the limited page-size selection available for the client. There are far too many combinations of what the user can elect to show or hide to put together different tables to show and hide on the same report to remove this effect.
Any help or suggestions on this issue would be appreciated.
Similar to a previous post (http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=244646&SiteID=1), I am trying to import data into a SQL Table.
I am trying to program a small application that will import product data obtained from suppliers via CD-ROM. One supplier in particular uses fixed-width columns, and the data looks like this:
Example of Data
0124015Apple Crate 32.12
0124016Bananna Box 12.56
0124017Mango Carton 15.98
0124018Seedless Watermelon 42.98

My table would then have: ProductID as int, Name as text, Cost as money.
How would I go about extracting the data with an XML Format file? I am stumbling over how to tell it where to start picking up data for a specific column. Is there any way that I could trim the Name column (i.e.: "Mango Carton " --> "Mango Carton")?
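In an XML format file, fixed-width fields are described with CharFixed and an explicit LENGTH, so each column's start position is implied by the lengths before it. A sketch against the sample rows above (the 7- and 20-character widths are guesses from the data shown):

<?xml version="1.0"?>
<BCPFORMAT xmlns="http://schemas.microsoft.com/sqlserver/2004/bulkload/format"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <RECORD>
    <FIELD ID="1" xsi:type="CharFixed" LENGTH="7"/>
    <FIELD ID="2" xsi:type="CharFixed" LENGTH="20"/>
    <FIELD ID="3" xsi:type="CharTerm" TERMINATOR="\r\n" MAX_LENGTH="10"/>
  </RECORD>
  <ROW>
    <COLUMN SOURCE="1" NAME="ProductID" xsi:type="SQLINT"/>
    <COLUMN SOURCE="2" NAME="Name" xsi:type="SQLVARYCHAR"/>
    <COLUMN SOURCE="3" NAME="Cost" xsi:type="SQLMONEY"/>
  </ROW>
</BCPFORMAT>

For the trailing spaces, the simplest option may be trimming after the load, e.g. UPDATE Products SET Name = RTRIM(Name), where Products is a hypothetical target table and Name is assumed to be varchar rather than the legacy text type, which RTRIM does not accept.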
I don't know if it makes any difference, but I've been calling SQL from my code by doing this:
Code in C# Form
SqlConnection SqlConnection = new SqlConnection(global::SQLClients.Properties.Settings.Default.ClientPhonebookConnectionString);
SqlCommand cmd = new SqlCommand();
SqlConnection.Open();
cmd.ExecuteNonQuery();
SqlConnection.Close();
RefreshData();

I am running Visual Studio C# Express 2005 and SQL Server Express 2005.
I have a business need to create a report by querying data from a MS SQL 2008 database and displaying the result to the users on a web page. The report initially has 6 columns of data, and 2 out of 6 have JSON data, so the users requested to have those 2 JSON columns parsed into 15 additional columns (the first JSON column has 8 key/value pairs and the second has 7). Here is what I have done so far:
I found a table-valued function (fnSplitJson2) from this link [URL]. Using this function I can parse a column of JSON data into a table. So when I used the function against the first JSON column in my query (with CROSS APPLY) I got the right data back, but I got 8 additional rows for each row in my table. The reason for this side effect is that the function returns a table of 8 rows (8 key/value pairs) for each JSON string it parses.
1. First question: how do I modify my current query (see below) so that for each row in my table I get back one row with 19 columns?
SELECT A.ITEM1, A.ITEM2, A.ITEM3, A.ITEM4, B.*
FROM PRODUCT A
CROSS APPLY fnSplitJson2(A.ITEM5, NULL) B
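Collapsing the 8 rows back into one row per product means pivoting the key/value pairs into columns. A sketch using conditional aggregation, assuming fnSplitJson2 returns columns named [key] and [value] and that the key names are known up front (key1, key2, ... are placeholders):

SELECT A.ITEM1, A.ITEM2, A.ITEM3, A.ITEM4,
       MAX(CASE WHEN B.[key] = 'key1' THEN B.[value] END) AS key1,
       MAX(CASE WHEN B.[key] = 'key2' THEN B.[value] END) AS key2
       -- ...repeat for the remaining keys
FROM PRODUCT A
CROSS APPLY fnSplitJson2(A.ITEM5, NULL) B
GROUP BY A.ITEM1, A.ITEM2, A.ITEM3, A.ITEM4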
If I update my query (see below) and call the function twice within the CROSS APPLY clause, I get this error: "The multi-part identifier "A.ITEM6" could not be bound."
2. My second question: how do I get around this error?
SELECT A.ITEM1, A.ITEM2, A.ITEM3, A.ITEM4, B.*, C.*
FROM PRODUCT A
CROSS APPLY fnSplitJson2(A.ITEM5, NULL) B, fnSplitJson2(A.ITEM6, NULL) C
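The comma before the second function call starts a fresh table source in the FROM list, and A is not in scope there, hence the binding error. Each call needs its own APPLY keyword; a sketch:

SELECT A.ITEM1, A.ITEM2, A.ITEM3, A.ITEM4, B.*, C.*
FROM PRODUCT A
CROSS APPLY fnSplitJson2(A.ITEM5, NULL) B
CROSS APPLY fnSplitJson2(A.ITEM6, NULL) C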
I am using Microsoft SQL Server 2008 R2 version. Windows 7 desktop.
I'd like to first figure out the count of how many rows that are not in the current edition have the following:
Second, I'd like to be able to select the primary keys of all the rows involved.
Third, I'd like to select all the primary keys of just the rows not in the current edition.
I'm not really sure how to describe this without making a dataset:
CREATE TABLE [Project].[TestTable1](
    [TestTable1_pk] [int] IDENTITY(1,1) NOT NULL,
    [Source_ID] [int] NOT NULL,
    [Edition_fk] [int] NOT NULL,
    [Key1_fk] [int] NOT NULL,
    [Key2_fk] [int] NOT NULL,
[Code] .....
GROUP BY fails me because I only want the groups where the Edition_fk values don't match...
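If the intent is groups whose Edition_fk values differ, a HAVING clause over the grouped rows can isolate them. A sketch, assuming rows group by Source_ID (the grouping key is an assumption):

-- groups whose rows span more than one edition
SELECT Source_ID
FROM [Project].[TestTable1]
GROUP BY Source_ID
HAVING MIN(Edition_fk) <> MAX(Edition_fk)

-- primary keys of every row in those groups
SELECT t.TestTable1_pk
FROM [Project].[TestTable1] t
WHERE t.Source_ID IN (SELECT Source_ID
                      FROM [Project].[TestTable1]
                      GROUP BY Source_ID
                      HAVING MIN(Edition_fk) <> MAX(Edition_fk))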
Here is my requirement; I'm not sure if this is possible. Creating a table called master like col1, col2, col3, col4, col5... where col1, col2 are updatable - this can be done easily.
Col3, col4 are columns in another table, but can these be just read-only? Is this possible? This is possible with a view, but views are not friendly with SharePoint CRUD... Col5 is a computed column of col2 and col5? If the above step can be done, then I guess this can be done too.
I have a query which retrieves multiple columns, varying from 5 to 15 based on the input parameter passed. I am using a table to map all these columns. If a column is not retrieved in the dataset (I am not talking about NULL data but the column being completely missing), then I want to hide it in my report.
As I am creating the non-clustered indexes for the tables, I don't quite understand how it really matters whether I put the columns in the index key columns or put them into the included columns of the index.
I am really confused about that, and I am looking forward to hearing from you. Thank you very much again for your advice and help.
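For what it's worth, the practical difference: key columns order the index B-tree, so they support seeks and range scans; INCLUDE columns are stored only at the leaf level, so they can cover a query without widening or re-sorting the key. A sketch with hypothetical names:

-- seeks on CustomerID; OrderDate and TotalDue ride along at the leaf to cover queries
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
INCLUDE (OrderDate, TotalDue);

A query such as SELECT OrderDate, TotalDue FROM dbo.Orders WHERE CustomerID = 42 could then be answered entirely from this index.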