Hi, I was wondering if any SQL Server gurus out there could help me...
I have a table I'm trying to apply a full-text catalog to, but no results are ever returned because the text column being cataloged is varbinary(max) and is populated from a converted nvarchar(max) value. I've narrowed it down to this specifically; populating with non-nvarchar text seems to work fine.
To re-create the problem quickly...
If I populate the column via CONVERT(varbinary(max), 'test text') then there is no problem; I get results as expected.
However if I populate the column via CONVERT(varbinary(max), CAST('test text' as nvarchar(max))) no results are ever returned.
Is this a bug with SQL Server 2005 Full Text Indexing? I'm happily creating full text catalogs when an nvarchar is not getting converted into a varbinary.
I'm setting the Document Type column to '.html' (I've tried changing this to '.txt' in case it was a fault with the HTML IFilter, but the problem persists, so I believe I can rule that out).
The reason I need to convert an nvarchar to varbinary is that the table holds multi-lingual text, and I'm adding an HTML meta tag <META NAME="MS.LOCALE" CONTENT="ES"> to the beginning so that the full-text indexing word breaker selects the correct language to catalog the text with. The aim is to provide more relevant searches in users' native languages (I've read a few articles that describe this technique, but it's the first time I've tried to apply it).
Any pointers / suggestions would be greatly appreciated. Cheers, Gavin.
UPDATE: Below is a T-SQL script you can run to demonstrate the effect I'm experiencing...
-- Create test database
CREATE DATABASE FullTextTest
GO
USE FullTextTest
GO

-- Create test data table
CREATE TABLE TestTable (
    pk UNIQUEIDENTIFIER NOT NULL CONSTRAINT tablePK PRIMARY KEY,
    varbinarycol VARBINARY(MAX),
    documentExtension VARCHAR(5)
)
GO

-- The single entry below WILL BE FOUND (the text source is being entered directly)
INSERT INTO TestTable (pk, varbinarycol, documentExtension)
VALUES (NEWID(), CONVERT(VARBINARY(MAX), '<META NAME="MS.LOCALE" CONTENT="EN">test entry 1'), '.html')

-- The two entries below WILL NOT BE FOUND (the text source is taken from an NVARCHAR(MAX) value)
INSERT INTO TestTable (pk, varbinarycol, documentExtension)
VALUES (NEWID(), CONVERT(VARBINARY(MAX), CAST('<META NAME="MS.LOCALE" CONTENT="EN">test entry 2' AS NVARCHAR(MAX))), '.html')
INSERT INTO TestTable (pk, varbinarycol, documentExtension)
VALUES (NEWID(), CONVERT(VARBINARY(MAX), CAST('<META NAME="MS.LOCALE" CONTENT="EN">test entry 3' AS NVARCHAR(MAX))), '.html')
GO

-- Create the full text catalog
EXEC sp_fulltext_database 'enable'
GO
CREATE FULLTEXT CATALOG TEST AS DEFAULT
GO
CREATE FULLTEXT INDEX ON TestTable (varbinarycol TYPE COLUMN documentExtension LANGUAGE 1033) KEY INDEX tablePK
GO

-- NOTE: You might need to give the catalog a chance to build before running the query below.

-- Now do a search that SHOULD RETURN 3 ROWS of data, but ONLY 1 ROW IS RETURNED
SELECT CAST(varbinarycol AS NVARCHAR(MAX))
FROM TestTable
WHERE CONTAINS(varbinarycol, 'test')
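If you run the script and then compare the stored bytes, you can see the two insert styles really do store different data: the nvarchar round trip produces UTF-16 (every other byte is 0x00), which I suspect the filter isn't recognising without a byte-order mark. This is only a diagnostic sketch, not a fix:

-- Diagnostic only: compare byte lengths and how each value reads back as ANSI vs. Unicode
SELECT pk,
       documentExtension,
       DATALENGTH(varbinarycol)             AS byte_length,
       CONVERT(VARCHAR(100), varbinarycol)  AS read_as_varchar,
       CONVERT(NVARCHAR(100), varbinarycol) AS read_as_nvarchar
FROM TestTable

-- One experiment worth trying (an assumption on my part, not something I've verified):
-- prepend a UTF-16 LE byte-order mark so the filter can detect the encoding, e.g.
-- 0xFFFE + CONVERT(VARBINARY(MAX), CAST('...' AS NVARCHAR(MAX)))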
I am new to SQL querying and I came across an issue while experimenting.
Say I have two tables like the following, a Users table at the top and a Roles table at the bottom (I've drawn the issue because I'm not fluent enough in English to explain it):
The Roles table was changed to include a new column, Roles.RoleId, and the old column was renamed Roles.RoleOldId. The same was done for the Users table.
Assume these are not foreign and/or primary keys. How can I fill Users.RoleId with the new Roles.RoleId values, matching the RoleOldId values in each table? The resulting (x, y, z) set in the Users table should be (10, 20, 20).
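I imagine something like the following UPDATE ... FROM ... JOIN is what I'm after (a sketch only; I'm assuming both tables keep the old id in a RoleOldId column, as described above):

UPDATE u
SET    u.RoleId = r.RoleId
FROM   Users u
JOIN   Roles r ON r.RoleOldId = u.RoleOldId   -- match on the old ids, copy the new one across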
I want to populate a datetime column on the fly within a stored procedure. Below is the query that I currently have; it does the job but slows down query performance.
CREATE TABLE #TaxVal ( ID INT , PaidDate DATETIME , CustID INT , CompID INT
[code]...
Which is the best way to write this query for better performance?
Sorry I'm pretty new to SQL so I don't know if this is a simple question. I have a table, and I am trying to add a column to the table and populate this column using what would be called an 'IF' function in Excel.
Basically 'column A' has numbers in it. I want SQL to look at 'column A' and, if the first 5 digits of the number in 'column A' are 00001, put 'description A' into the new column 'column B'. If the first 5 digits of the number in 'column A' are 00002, put 'description B' into 'column B', etc.
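In T-SQL the Excel-style IF is usually written as a CASE expression. A minimal sketch, assuming the columns are literally called ColumnA and ColumnB and that ColumnA is character data (the names and descriptions are placeholders):

UPDATE MyTable
SET ColumnB = CASE LEFT(ColumnA, 5)
                  WHEN '00001' THEN 'description A'
                  WHEN '00002' THEN 'description B'
                  ELSE ColumnB          -- leave other rows unchanged
              END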
I have a table of id numbers that I wish to mask. My thought was to create a new column for this new id number and populate it with a unique sequential value - start at 1 and go as high as needed. My problem is that I cannot recall how to populate that column with a number...
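One hedged way to do it on SQL Server 2005 is to add the new column and then fill it with ROW_NUMBER(); the table and column names below are placeholders, not the real ones:

ALTER TABLE IdTable ADD masked_id INT NULL
GO

;WITH numbered AS (
    SELECT masked_id,
           ROW_NUMBER() OVER (ORDER BY real_id) AS rn   -- 1, 2, 3, ... in real_id order
    FROM IdTable
)
UPDATE numbered
SET masked_id = rn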
I'm having some issues getting OLE DB Data Sources to work with stored procs in SSIS. Here's the situation.
I have an OLE DB Data Source set up to call a stored proc with no parameters. The stored procedure loops through a set of databases and inserts data from each database into a results table. I'm attempting to return the results table to SSIS, but the Available External Columns are not populating. However, previewing the query in SSIS does show results. The insert into the results table is done by a call to sp_executesql.
I've tried setting the results table up as a temp table, table variable, and static table. I have NOCOUNT set ON and am only returning one recordset. I've seen the other threads in here about similar problems, but none of their solutions seem to work for me.
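One workaround I've seen suggested (hedged, I haven't confirmed it against this exact setup) is to give the OLE DB source some design-time metadata up front, since SSIS can't derive columns from the sp_executesql call. At the top of the proc, a dummy SELECT in a branch that never runs, shaped exactly like the real results table, is supposed to do it; the column list here is hypothetical:

-- Hypothetical column list; it must match the real results table exactly
IF 1 = 0
    SELECT CAST(NULL AS INT)          AS ID,
           CAST(NULL AS VARCHAR(100)) AS DatabaseName,
           CAST(NULL AS DATETIME)     AS LoadDate

SET NOCOUNT ON
-- existing loop over the databases, insert into the results table, final SELECT of the results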
I put a blank or '0' in one of the columns of a text file and then used BCP to load the file into a SQL Server 6.5 table. The field in the SQL Server table is char(1). After running the process, every row received '0' in this column; none received a blank or NULL where the value was blank. Could you please help me fix this so that some of the rows in the column get a blank or NULL value?
Hi, I have a question about SQL Server Reporting Services. I generate some reports based on certain criteria. In the results screen I have empty fields, depending on the search I have run. I need to show "0" instead of blank spaces in those fields. Can anyone help?
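If the blanks come back as NULLs in the dataset, the simplest fix may be in the query rather than in the report itself; a sketch, with placeholder table and column names:

SELECT Region,
       ISNULL(SUM(Amount), 0) AS Amount   -- show 0 where the search finds nothing
FROM SalesFigures
GROUP BY Region

Alternatively an expression on the report textbox can substitute the 0, but fixing it in the query keeps the report simple.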
I have a problem using the odbc datasource reader to execute a sql command on a progress database. My query is something like:-
select max(id), sum(amount) from my_table
OR
select a, b, c, recid(my_table) from my_table
which produces external columns and output columns with no name. The Progress SQL doesn't support aliases on column names, and setting ValidateExternalMetadata to false and manually naming the input and output columns in the Advanced Editor doesn't seem to work either. I either get the error 'The name for output column "" is blank and columns can not be blank', or, if I add my own column names to the input and output columns, it fails in the pre-execute phase saying it can't find a column in the data source with the name 'myalias'.
I can get around the aggregate functions by transferring all the data and doing the aggregate on the local server, but I also need to call functions such as recid(), which I can't work around. SQL 2000 DTS ignored these things and matched as best it could, whereas SQL Server 2005 Integration Services seems overly strict.
Has anyone encountered similar problems and does anyone have any ideas? I'm currently at a loss :(
I am running into an issue when concatenating data from multiple columns into one alias:
P.ADDR1 + ' - ' + P.CITY + ',' + ' ' + P.STATE AS LOCATION
If one of the 3 values is blank, the value of LOCATION becomes NULL. How can I include any of the 3 values without LOCATION becoming NULL? For example, if ADDR1 and CITY have values but STATE is blank, I get NULL for LOCATION. I still want it to show ADDR1 and CITY even if STATE is blank. Thanks
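If those "blank" values are actually NULL, then any NULL in the concatenation makes the whole expression NULL; wrapping each column should avoid that (a sketch using the same columns, with a placeholder table name):

SELECT ISNULL(P.ADDR1, '') + ' - ' + ISNULL(P.CITY, '') + ', ' + ISNULL(P.STATE, '') AS LOCATION
FROM Properties P   -- table name is a placeholder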
I have the following problem: I have a matrix with a right subtotal column, and this matrix is inside a list (because in the end I will have more than one matrix). The list fits the matrix perfectly in design mode. But if I render the report in the viewer or to PDF, an additional blank area (like a blank copy of the subtotal column) is inserted after the right subtotal column of the matrix, and it increases the size of the list too. You can see this easily by setting the background color of the list. Without the subtotal column the list fits perfectly after rendering. The problem is that this additional blank "column" creates empty pages in PDF rendering if the width of the matrix is near the page width. The same behaviour happens if I put the matrix with the subtotal in a rectangle. I must use a list in the end because the final report contains several matrices and a subreport. So is this a bug? Someone must have had this problem too?
I am dealing with what I believe is an Oracle database that is the source of a SQL view.
I am seeing a data type of Integer in the View, but I am not able to see what makes up that View. When I query the View, I can see that an Integer data type column is storing a blank space. I use ISNUMERIC(ColumnName) = 0 and there are a lot of rows that show as a zero length blank space, or text, or something. I just know that it is not an Integer.
I have attempted to CAST and CONVERT this value, but it will not convert. I have changed the data type on the table that is being inserted into, and it still fails with a conversion error. I have tried REPLACE(), but I still get the same conversion error.
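A hedged workaround, assuming the underlying values are really character data containing blanks: strip the whitespace, turn empty strings into NULL, and only then convert (view and column names are placeholders):

SELECT CAST(NULLIF(LTRIM(RTRIM(ColumnName)), '') AS INT) AS CleanValue
FROM TheView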
W2k3 server, SQL 2005. @@version = Microsoft SQL Server 2005 - 9.00.1399.06 (Intel X86) Standard Edition on Windows NT 5.2 (Build 3790: Service Pack 1)
I have my first SSIS package almost working, but I'm having an odd problem and can't find any information to help resolve it.
I'm importing from a flat file (csv) to an existing table (append). I've got a Derived Column transformation in the middle to do some data cleanup. It's all working except for one little problem...
One of the transformations is 'REPLACE([Column 3],"^","; ")', output to a new column. (The input file has a field that uses carets as delimiters between an unknown number of items; I'm changing that to semicolons for easier reading.) Not all rows have data in this column, some will have one item, some will have multiple items.
The REPLACE works except that it fills in repeated data for all the blank rows.
Example:
Incoming data is:
1 Smith,Jane^Jones,Jane
2 Brown,John
3
4 Adams,James^Adams,Jim
5
6 White,Debra
Data inserted into the table is:
1 Smith,Jane; Jones,Jane
2 Brown,John
3 Brown,John
4 Adams,James; Adams,Jim
5 Adams,James; Adams,Jim
6 White,Debra
I've tried to use a Conditional to skip the empty rows, but I can't get it working at all (I get syntax errors no matter what I put in).
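Something along these lines is what I've been aiming for, assuming the blank rows come through from the flat file as empty strings rather than NULLs (just a sketch):

TRIM([Column 3]) == "" ? "" : REPLACE([Column 3], "^", "; ")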
Any suggestions on how to fix this would be most appreciated!
I have a flat file source from which data is imported into a SQL table. The target column is int and the input column is string. The column has some numeric values and some blank values; when I try to convert the values to int it fails.
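One hedged way to handle this in the data flow is a Derived Column expression that maps blanks to NULL before the cast (the column name and string length are placeholders, and depending on the flat file column type an extra DT_WSTR cast may be needed):

TRIM((DT_WSTR, 50)[InputColumn]) == "" ? NULL(DT_I4) : (DT_I4)[InputColumn]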
Hi everyone, I am facing a problem printing reports from the browser, and also when I export to PDF: blank pages appear when a report column contains a large amount of text (around 2,500 characters). Can anyone help with this? If the report contains an acceptable amount of data it prints properly, i.e. no blank pages at all. I have set all the properties, such as margins + body size < page size.
I have a database with literally hundreds of tables in it. Can anyone advise me how to tell which ones are populated with data and which ones aren't?
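One approach (a sketch for SQL Server 2000/2005) is to read the row counts the engine already tracks; they are only approximate, but good enough to separate empty tables from populated ones:

SELECT o.name      AS table_name,
       MAX(i.rows) AS approx_rows
FROM sysobjects o
JOIN sysindexes i ON i.id = o.id
WHERE o.xtype = 'U'          -- user tables only
  AND i.indid IN (0, 1)      -- heap or clustered index row count
GROUP BY o.name
ORDER BY approx_rows DESC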
I ran the code below in QA, and @@ERROR never gets populated with any error code, despite the fact that an error occurred. This behaviour is very misleading and errors are never caught. Any advice greatly appreciated.
-- Create the table
create table test_table (
    test_field int
)

-- Execute the code
DECLARE @err_status int, @row_count int

insert into test_table ( test_field )
values ( 'TEST STRING' )

select @err_status = @@ERROR, @row_count = @@ROWCOUNT

IF @err_status <> 0
    PRINT @err_status

-- Results:
-- Server: Msg 245, Level 16, State 1, Line 1
-- Syntax error converting the varchar value 'TEST STRING' to a column of data type int.
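My guess (hedged) is that error 245 terminates the whole batch, so the line that reads @@ERROR never executes at all. A sketch of a TRY...CATCH alternative on SQL Server 2005, using the same test table:

BEGIN TRY
    INSERT INTO test_table (test_field) VALUES ('TEST STRING')
END TRY
BEGIN CATCH
    -- the error is trapped here instead of aborting the batch
    SELECT ERROR_NUMBER() AS err_number, ERROR_MESSAGE() AS err_message
END CATCH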
Hi, I have placed an expression for the flat file connection, as below:
@[User::FileDirectory] + @[User::FileName]
This is supposed to be used instead of the ConnectionString property of the flat file connection.
You can see that I have created two variables.
The variable @[User::FileDirectory] is set to the directory. i.e. I have hardcoded the path to it and assigned it to this variable.
The variable @[User::FileName] is picked up automatically.
The question is:
When I go to the properties of the flat file connection, I delete the value inside the ConnectionString property because the expression is now set on ConnectionString. But when I come back to this property, I am not sure why the ConnectionString property gets populated with the directory that I hardcoded into the variable.
Hi, I have a stored proc which returns multiple result sets. I am capturing these result sets using a strongly typed dataset, which in turn I use to display the data in code. My dataset has 5 tables. However, when I run the code, only 3 tables get populated and the remaining 2 get no data. I have seen this problem before and could not resolve it. Please let me know if anyone can help.
How can I change a field type whilst it is populated?
I have tried :
insert new_table select * from old_table
but I get :
Disallowed implicit conversion from datatype 'text' to datatype 'varchar' Table: 'davy.dbo.new_table', Column: 'de_area' Use the CONVERT function to run this query.
My table formats are as follows :
TABLE dbo.new_table (
    de_pk int NOT NULL,
    de_name char(25) NOT NULL,
    de_area char(255) NULL
)

TABLE dbo.old_table (
    de_pk int NOT NULL,
    de_name char(25) NOT NULL,
    de_area text NULL
)
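Following the hint in the error message, a sketch of the insert with an explicit CONVERT (assuming the second definition above is the old table, the one with the text column):

INSERT INTO new_table (de_pk, de_name, de_area)
SELECT de_pk, de_name, CONVERT(char(255), de_area)   -- explicit conversion from text
FROM old_table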
I need to query some hierarchical data. I've written a recursive query that allows me to examine a parent and all its related children using an adjacency data model. The scenario is to allow users to track how columns are populated in an ETL process. I've set up the sample data so that there are two paths:
1. col1 -> col2 -> col3 -> col6
2. col4 -> col5
You can input a column name and get everything from that point downstream. The problem is, you need to be able to start at the bottom and work your way up. Basically, you should be able to put in col6 and see how the data got from col1 to col6. I'm not sure if it's a matter of rewriting the query or changing the schema to invert the relationships.
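I don't think the schema needs to change; reversing which column the recursive member joins on should walk upstream instead of downstream. A sketch, assuming a lineage table shaped roughly like ColumnLineage(TargetColumn, SourceColumn) (the names are placeholders for my real tables):

;WITH upstream AS (
    -- anchor: start from the column the user typed in
    SELECT TargetColumn, SourceColumn
    FROM ColumnLineage
    WHERE TargetColumn = 'col6'

    UNION ALL

    -- recursive member: step from each source column back to its own sources
    SELECT c.TargetColumn, c.SourceColumn
    FROM ColumnLineage c
    JOIN upstream u ON c.TargetColumn = u.SourceColumn
)
SELECT * FROM upstream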
We have a lot of VB6 code that uses ADO 2.7 and stored procs with SQL 2005. I have noticed recently that if I use the following code:
Dim con As New ADODB.Connection
con.ConnectionString = "driver={SQL Server};server=(local);database=test;uid=sa;pwd="
con.Open

Dim com As New ADODB.Command
com.ActiveConnection = con
com.CommandText = "usp_GetSetting"
com.CommandType = adCmdStoredProc

com.Parameters.Append com.CreateParameter(...)
It will fail, the reason being that after setting the CommandText and CommandType, ADO seems to automatically go away and populate the Parameters collection from the database's metadata for the stored procedure we are calling.
I have never seen this before; I thought the Refresh method had to be called before the Parameters collection got populated.
However, the first three columns are not being populated in the destination table. The other columns come over fine.
The SQL stmt. returns data as expected when run against the source database.
I deleted the source and destination and recreated the flow to prevent metadata mapping issues. In the source editor preview I see all of the columns and data. In the destination editor preview, the first three columns of data are null ???.
It appears that the columns are not mapping properly even though they are in the source and destination of the mapping editor.
I have made sure that the destination mapping contains all the columns in the UI.
The source and destination have the columns represented in the Advanced Editor metadata. I also checked the XML to verify that the columns are in the destination.
There is a Row Count between the source and destination, which should have no effect.
This is part of a larger DW load where I have 10 other tables populated within the data flow. I also do not get any validation or error messages, so I have ruled out truncation errors and the like.
I am really puzzled. Has anyone run across anything like this?
I have a SSIS package with multiple objects executing ODBC SQL to Teradata. All of the SQL uses the same variable (today) to susbtitute today's date into the native teradata SQL.
Today is populated using the following expression:
The problem is that some of the code uses the correct date, but other statements appear to use the date that the SSIS package was added to the scheduler. It is not always the same SQL statements which derive the incorrect date. Below are four of the SQL statements generated at the same time using this variable; two show the correct date (2006-08-22) and two the date the package was added to the schedule (2006-08-18):

SELECT MarketSegmentCode, TradeTypeInd, BroadcastUpdateAction, CurrencyCode,
       SUM(TradePrice * TradeSize / 1000000), COUNT(*)
FROM ProdMD_Midas_Base.v_TradeReport
WHERE (TradeDate = (dAtE'2006-08-22'))
GROUP BY MarketSegmentCode, TradeTypeInd, BroadcastUpdateAction, CurrencyCode
ORDER BY TradeTypeInd, MarketSegmentCode

SELECT ParticipantCode,
       CASE WHEN ParticipantCode = ParticipantCodeBuyer THEN ParticipantCodeSeller ELSE ParticipantCodeBuyer END AS Counterparty,
       MarketSegmentCode, BroadcastUpdateAction, CurrencyCode,
       SUM(TradePrice * TradeSize / 1000000) AS Consideration,
       COUNT(*) AS Bargains
FROM ProdMD_Midas_Base.v_TradeReport
WHERE (TradeTypeInd = 'O') AND (TradeDate = (dAtE'2006-08-18'))
GROUP BY ParticipantCode, ParticipantCodeBuyer, ParticipantCodeSeller, MarketSegmentCode, BroadcastUpdateAction, CurrencyCode

SELECT ParticipantCode, CurrencyCode, MarketSegmentCode,
       SUM(TradePrice * TradeSize / 1000000) AS Consideration,
       COUNT(*) AS Bargains
FROM v_TradeReport
WHERE (TradeTypeInd = 'AT') AND (TradeDate = (dAtE'2006-08-22'))
GROUP BY ParticipantCode, CurrencyCode, MarketSegmentCode

SELECT (CASE ParticipantCode WHEN ParticipantCodeBuyer THEN ParticipantCodeSeller ELSE ParticipantCodeBuyer END) AS Cparty,
       CurrencyCode, MarketSegmentCode,
       SUM(TradePrice * TradeSize / 1000000) AS Consideration,
       COUNT(*) AS bargains
FROM v_TradeReport
WHERE (TradeTypeInd = 'AT') AND (TradeDate = (dAtE'2006-08-18'))
GROUP BY Cparty, CurrencyCode, MarketSegmentCode
Has anybody experienced similar problems with variables in SSIS packages and the scheduler?