Why Was the ReadWriteVariable Not Populated/Updated with the Value of the Dynamically Generated String?
Jul 27, 2007
Hi, I have a Script Task in SSIS which dynamically generates a string for a file name to be used as a flat file source. I execute the task and it completes with success; but when I check the value of the variable TotFileName from the Expression Builder window for the flat file connection manager, it is not populated with a file name like \\MyServer\MyDrive\MyFolder\200706daily.txt. So something might still be missing from the script below, or is the way I am doing it even correct? Can someone help with this? Thanks a lot!!
I have created package-level variables ImportFolder (value like \\MyServer\MyDrive\MyFolder\) and TotFileName (value field empty), and made ImportFolder a ReadOnlyVariable and TotFileName a ReadWriteVariable. Then I use an expression to set the ConnectionString property of the flat file connection manager to the TotFileName variable.
(Basically the idea is: if now is July 2007 then the filename should be 200706daily.txt; if now is January 2008 then the filename should be 200712daily.txt - i.e. the file for the previous month.)
-----code------
' Microsoft SQL Server Integration Services Script Task
' Write scripts using Microsoft Visual Basic
' The ScriptMain class is the entry point of the Script Task.
Imports System
Imports System.IO
Imports System.Data
Imports System.Math
Imports Microsoft.SqlServer.Dts.Runtime
Public Class ScriptMain
' The execution engine calls this method when the task executes.
' To access the object model, use the Dts object. Connections, variables, events,
' and logging features are available as static members of the Dts class.
' Before returning from this method, set the value of Dts.TaskResult to indicate success or failure.
'
' To open Code and Text Editor Help, press F1.
' To open Object Browser, press Ctrl+Alt+J.
Public Sub Main()
Dim sYear As String
Dim sMonth As String
Dim ImportFolder As String

' Assumption: the file wanted is the previous month's daily file
' (e.g. 200706daily.txt when the package runs in July 2007).
Dim prevMonth As Date = Date.Today.AddMonths(-1)
sYear = prevMonth.Year.ToString()
sMonth = prevMonth.Month.ToString("00")

ImportFolder = Dts.Variables("ImportFolder").Value.ToString()

' The value must be written back through Dts.Variables - assigning it to a
' plain local String never updates the package variable, which leaves the
' ReadWriteVariable TotFileName empty.
Dts.Variables("TotFileName").Value = ImportFolder & sYear & sMonth & "daily.txt"

Dts.TaskResult = Dts.Results.Success
End Sub
End Class
I have the SQL below. When I run it I get the 'Each GROUP BY expression must contain at least one column that is not an outer reference' error. The date and string values are generated dynamically and need to be grouped on if possible. What am I missing?
INSERT INTO tblStaffPayrollHistory (StaffID, FromDate, ToDate, PayrollNo, EventID)
SELECT DISTINCT tblStaffBookings.StaffID, CONVERT(DATETIME, '2015-05-17', 102), CONVERT(DATETIME, '2015-05-17', 102), 'tree', tblEvents.ID
FROM tblStaffBookings
INNER JOIN tblEvents ON tblStaffBookings.EventID = tblEvents.ID
WHERE ...
GROUP BY tblStaffBookings.StaffID, CONVERT(DATETIME, '2015-05-17', 102), CONVERT(DATETIME, '2015-05-17', 102), 'tree', tblEvents.ID
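For what it's worth, the error comes from the constant expressions in the GROUP BY list: SQL Server requires every GROUP BY expression to reference at least one column, and CONVERT(DATETIME, '2015-05-17', 102) and 'tree' reference none. Since there are no aggregates, one option is to drop the GROUP BY and let DISTINCT do the work (or group only on StaffID and tblEvents.ID). A sketch, with the WHERE filter left elided exactly as in the original:

INSERT INTO tblStaffPayrollHistory (StaffID, FromDate, ToDate, PayrollNo, EventID)
SELECT DISTINCT
       tblStaffBookings.StaffID,
       CONVERT(DATETIME, '2015-05-17', 102),  -- constants do not need to be grouped
       CONVERT(DATETIME, '2015-05-17', 102),
       'tree',
       tblEvents.ID
FROM tblStaffBookings
INNER JOIN tblEvents ON tblStaffBookings.EventID = tblEvents.ID
WHERE ...  -- same (elided) filter as above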
I am getting the following error when executing a CLR stored procedure. I have set generate serialization assembly to 'ON' with no effect. The error is:
System.InvalidOperationException: Cannot load dynamically generated serialization assembly. In some hosting environments assembly load functionality is restricted, consider using pre-generated serializer. Please see inner exception for more information. ---> System.IO.FileLoadException: LoadFrom(), LoadFile(), Load(byte[]) and LoadModule() have been disabled by the host.
In article http://support.microsoft.com/kb/913668 it says to generate a serialization assembly and put it into the database alongside the assembly with the serialized type. All well and good, but how does the XmlSerializer know to look in the database for the assembly - does it use the name of the assembly + XmlSerializers?
It is certainly ambiguous in the aforementioned KB article. In one place it registers the serialization assembly in the database with
CREATE ASSEMBLY [MyTest.XmlSerializers] FROM 'C:\CLRTest\MyTest\MyTest\bin\Debug\MyTest.XmlSerializers.dll' WITH PERMISSION_SET = SAFE
and in another it uses
USE dbTest
GO
CREATE ASSEMBLY [MyTest] FROM 'C:\CLRTest\MyTest.dll'
GO
CREATE ASSEMBLY [MyTest.XmlSerializers.dll] FROM 'C:\CLRTest\MyTest.XmlSerializers.dll'
GO
I can't get either to work, and I've tried various combinations. This, and the fact that the KB article uses two different naming conventions, suggests to me that there must be "something else" which links the assembly with the serialization assembly.
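As far as I understand it, the link is purely a naming convention: sgen produces an assembly whose name is the main assembly's name plus .XmlSerializers, and that is the name the runtime binds to, so the serializer has to be registered in the same database under that name (no .dll suffix, as in the KB's first example). A sketch using the KB article's paths, which are assumptions here:

USE dbTest
GO
CREATE ASSEMBLY [MyTest]
FROM 'C:\CLRTest\MyTest.dll'
WITH PERMISSION_SET = SAFE
GO
-- The name should be exactly <main assembly name> + '.XmlSerializers'
CREATE ASSEMBLY [MyTest.XmlSerializers]
FROM 'C:\CLRTest\MyTest.XmlSerializers.dll'
WITH PERMISSION_SET = SAFE
GO

The version and public key of the serialization assembly also have to match the main assembly, which is another thing that can silently break the binding.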
I have an OLE DB source that uses a stored procedure which pivots records and returns data with a dynamic set of columns (changing every time). How can I export this data, with a dynamic number of columns, to an Excel destination?
I have the following ASP code that builds part of the example SQL statement below (it's the same SQL as in my earlier thread here (http://www.dbforums.com/showthread.php?t=1214044) but a very different question):
if sFindTicketEventId > 0 then sSQL = sSQL & " AND [tblEvents].[id]=" & sFindTicketEventId
if sFindTicketStandId > 0 then sSQL = sSQL & " AND [tblStands].[id]=" & sFindTicketStandId
SELECT [tblC].[id] AS CombinationID, [tblC].[availability], [tblC].[description], [tblC].[price] AS combinationPrice, [tblC].[combination_open],
       [tblT].[TicketID] AS TicketID, [tblT].[price] AS ticketPrice, [tblT].[availability], [tblT].[ticket_open], [tblT].[quantity],
       [tblT].[event_name], [tblT].[event_open], [tblT].[stand_name], [tblT].[stand_open],
       [tblT].[admission_start_date], [tblT].[admission_end_date], [tblT].[date_open],
       [tblT].[booking_start_date], [tblT].[booking_end_date],
       [tblT2].[description], [tblT2].[admin_description]
FROM (
    SELECT [tblCombinations].[id], [tblTickets].[id] As TicketID, [tblTickets].[price], [tblTickets].[availability], [tblTickets].[ticket_open],
           [tblCombinations_Tickets].[quantity], [tblEvents].[event_name], [tblEvents].[event_open], [tblStands].[stand_name], [tblStands].[stand_open],
           [tblAdmissionDates].[admission_start_date], [tblAdmissionDates].[admission_end_date], [tblAdmissionDates].[date_open],
           [tblBookingDates].[booking_start_date], [tblBookingDates].[booking_end_date]
    FROM [tblCombinations]
    LEFT JOIN [tblCombinations_Tickets] ON [tblCombinations_Tickets].[combination_id] = [tblCombinations].[id]
    LEFT JOIN [tblTickets] ON [tblCombinations_Tickets].[ticket_id] = [tblTickets].[id]
    LEFT JOIN [tblEvents] ON [tblEvents].[id] = [tblTickets].[event_id]
    LEFT JOIN [tblStands] ON [tblStands].[id] = [tblTickets].[stand_id]
    LEFT JOIN [tblAdmissionDates] ON [tblAdmissionDates].[id] = [tblTickets].[admission_date_id]
    LEFT JOIN [tblBookingDates] ON [tblBookingDates].[id] = [tblTickets].[booking_date_id]
    LEFT JOIN [tblTicketConcessions] ON [tblTicketConcessions].[id] = [tblTickets].[ticket_concession_id]
    LEFT JOIN [tblBookingQuantities] AS [tblBookingMinQuantities] ON [tblBookingMinQuantities].[id] = [tblTickets].[booking_min_quantity_id]
    LEFT JOIN [tblBookingQuantities] AS [tblBookingMaxQuantities] ON [tblBookingMaxQuantities].[id] = [tblTickets].[booking_max_quantity_id]
    LEFT JOIN [tblMemberships] ON [tblMemberships].[id] = [tblTickets].[membership_id]
    WHERE 1=1
    AND [tblEvents].[id]=2
    AND [tblStands].[id]=3
    --AND [tblAdmissionDates].[id]=@admissionDateId
    --AND [tblBookingDates].[id]=@bookingDateId
    --AND [tblTicketConcessions].[id]=@concessionId
    --AND [tblBookingMinQuantities].[id]=@bookingMinQuantityId
    --AND [tblBookingMaxQuantities].[id]=@bookingMaxQuantityId
    --AND [tblMemberships].[id]=@membershipId
    GROUP BY [tblCombinations].[id], [tblTickets].[id], [tblTickets].[price], [tblTickets].[availability], [tblTickets].[ticket_open],
             [tblCombinations_Tickets].[quantity], [tblEvents].[event_name], [tblEvents].[event_open], [tblStands].[stand_name], [tblStands].[stand_open],
             [tblAdmissionDates].[admission_start_date], [tblAdmissionDates].[admission_end_date], [tblAdmissionDates].[date_open],
             [tblBookingDates].[booking_start_date], [tblBookingDates].[booking_end_date]
) as [tblT]
JOIN [tblCombinations] as [tblC] on [tblT].[id]=[tblC].[id]
LEFT JOIN [tblTickets] as [tblT2] on [tblT].[TicketID]=[tblT2].[id]
I want to turn this SQL into a stored proc; there are currently about 8 parameters that I want to pass into it. The field value for each will be either NULL or a positive integer, and the parameter will be passed in as an integer.
If the passed parameter value is a positive integer then it should return all records where the corresponding field value matches that integer. If the passed parameter is 0, it should return all rows regardless of whether the field value is an integer or NULL.
And I can't for the life of me figure out how to do it. Do I need an IF statement in there or something?
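One pattern that avoids IF branches entirely is to fold the "0 means ignore" rule into the WHERE clause. This is only a sketch: the procedure name and the parameter names (@eventId, @standId) are made up, only two of the eight parameters are shown, and the SELECT is cut down to the tables used in the query above:

CREATE PROCEDURE dbo.usp_GetTickets
    @eventId INT = 0,   -- 0 = ignore this filter
    @standId INT = 0    -- the other six parameters follow the same pattern
AS
BEGIN
    SELECT [tblTickets].[id], [tblTickets].[price]
    FROM [tblTickets]
    LEFT JOIN [tblEvents] ON [tblEvents].[id] = [tblTickets].[event_id]
    LEFT JOIN [tblStands] ON [tblStands].[id] = [tblTickets].[stand_id]
    WHERE 1 = 1
      AND (@eventId = 0 OR [tblEvents].[id] = @eventId)
      AND (@standId = 0 OR [tblStands].[id] = @standId)
END

When a parameter is 0 its predicate is always true, so NULL field values come through as well; when it is a positive integer only matching rows survive.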
Hello all, I'm at a loss on how to do this. We're using MS SQL 2000 Server and I have a list of fields I need to find the first and last entry for. Here's an example of the table:

Number - VarChar(10)
Jan - Int
Feb - Int
Mar - Int
Apr - Int
May - Int
June - Int

And it'll look something like this:

Number Jan Feb Mar Apr May Jun
1232   200 190 192 201 203 205
4432   433 322     456
5423   754 694 665

And I need to create a table with this:

Number First Last Difference
1232   200   205  5
4432   433   456  23
5423   754   665  -89

I'm not sure if this'll copy over correctly, but I have gaps in the data so I can't just say Jun-Jan; I need to find the first field with data and the last field with data, then find the difference of these. Suggestions? Is there a loop or something I can do in T-SQL that'll do this? I'd like to do this in Query Analyzer since it's just a one-time report. Thanks -- Alex
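A set-based way to do this in Query Analyzer, assuming the gaps are NULLs and calling the table MonthData (the real table name isn't given above), is COALESCE read in both directions:

SELECT Number,
       COALESCE(Jan, Feb, Mar, Apr, May, June) AS FirstEntry,
       COALESCE(June, May, Apr, Mar, Feb, Jan) AS LastEntry,
       COALESCE(June, May, Apr, Mar, Feb, Jan)
         - COALESCE(Jan, Feb, Mar, Apr, May, June) AS Difference
FROM MonthData

COALESCE returns the first non-NULL argument, so reading the months left to right gives the first populated value and right to left gives the last one; no loop is needed.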
Hi, I developed an SSIS package in which I declared a global variable "FileName". In a Script Task of SSIS I am assigning some value to that global variable "FileName". Now, in the script page of the Script Task Editor, whether I declare the global variable "FileName" as a ReadOnlyVariable or as a ReadWriteVariable, in both cases I am able to get the newly assigned value from the global variable "FileName". So, my question is: what is the difference between "ReadOnlyVariable" and "ReadWriteVariable", since I am getting the result from both, and which declaration should I use?
I have 3 identical web apps whose only difference is that they access different SQL Server DBs. I use the SqlDataSource in a number of pages to retrieve data from the db. The apps all use Forms Auth. I would like to be able to see who the logged-on user is and assign the appropriate connection string to all the SqlDataSources in the app. For example, when UserA logs in they retrieve data from one db, but when UserB logs in they retrieve data from another db. I am sure this can be done but could use a little guidance. Thanks in advance.
I am looking to dynamically change the connection string in my SSIS package, to avoid changing the connection string each time I want to run in different environments.
I need to make my site aware of which server_name it is loading from so it uses a different connection string (I have dev + prod servers for web/SQL). Currently my connection string is in web.config as follows:

<connectionStrings>
  <!-- Development and Staging connection string -->
  <add name="myconnection" connectionString="server=myserver; user id=mysuer; password=mypassword; database=mydatabase" />
</connectionStrings>

I need to make sure the 'name' is the same for both connection strings since that is how the rest of my site looks for it. However, I'm not sure how to get both in here with some sort of 'if/then' statement to determine which one to use. I've heard it could be done in global.asax with something similar to the code below, but I don't know how to assign a 'name' to a connection string for that type of setup.

Sub Session_OnStart
    ServerName = UCase(Request.ServerVariables("SERVER_NAME"))
    IF ServerName = "prod.server.com" THEN
        ...Set Prd string...
    ELSE
        ...Set Dev string...
    END IF
End Sub
Hi, I tried to follow the widely discussed method of dynamically populating the ConnectionString property of my flat file connection manager from a variable. I keep getting the following non-fatal error.
TITLE: Microsoft Visual Studio ------------------------------
Nonfatal errors occurred while saving the package: Error at Package [Connection manager "FFCM"]: The file name ""C:ProjectsSSISHLoadTOutputOut.csv"" specified in the connection was not valid.
Error at Package: The result of the expression "@[User::CsvFullFileName]" on property "ConnectionString" cannot be written to the property. The expression was evaluated, but cannot be set on the property.
Here is what I am trying to do. I have a Foreach Loop that iterates through a list of XML config files and reads the config information, including the destination CSV file name, and does a data transformation. So I created a flat file connection to a CSV file and did my data mappings, created a package-level variable to hold the destination file path, and in the flat file connection manager's properties -> Expressions set ConnectionString to @[User::CsvFullFileName] (which even evaluates fine).
When I try to run the package I keep getting the above-mentioned non-fatal error. I checked the path and it is valid. I even tried the c:\projects\... notation and the UNC path notation... all seem to give the same error.
Anyone experience this before ? any thoughts would be appreciated.
Hi Folks... Question for everyone that I have not been able to figure out. I have an application that is spread across tiers: a SQL connection defined in the Web.Config file that connects to a SQL Server database; a DAL layer that references the SQL connection in Web.Config (I have noticed this copies the connection string information to the local properties of each TableAdapter defined in the DAL layer); and a BLL layer that references the TableAdapters declared in the DAL layer. When the web site is called, the link will provide an encoded id. Sample call to the website: http://www.mysamplesite.com/default.aspx?company=AE2837FG28F7B327
Based on the encoded id that is passed to the site, I need to switch the connection string to use a different database on the backend. Sample connection string:
Data Source=localhost;Initial Catalog=dbSystem_LiveCorp1;User ID=client;Password=live2006
I would need to change Initial Catalog to the following:
Data Source=localhost;Initial Catalog=dbSystem_LiveCorp196;User ID=client;Password=live2006
How do I do this and have the connection string reflected in all of the TableAdapters that I have created in the DAL layer - currently 100+ TableAdapters have been defined? As we gain new clients, I need to be able to have each client's information located in a different database within SQL Server. Mandated - I have no choice in this requirement. I don't want to have to recreate the DAL for several dozen clients and then maintain multiple copies whenever I make a change to the DAL. There has to be a way to dynamically alter the SqlConnection and have it recognized across all DAL TableAdapters. I'm developing with MS Visual Studio 2005 - VB. Any help would be greatly appreciated. Thanks... David
Any day above ground is a good day...
I have created packages which load the data from a flat file to a SQL Server table. Now I want to make my destination table connection dynamic - what is the format of the connection string? I also need to pass the user name and password for SQL Server dynamically in this case; what is the format for that connection string?
Also, in the package I used ADO.NET as the source for *.mdb files; how can I set the connection to the .mdb files, which are used as a source in my package, dynamically?
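For what it's worth, typical connection string formats look like the two below; every value is a placeholder, and in practice they would be fed to the connection managers through property expressions or package configurations. The first is an OLE DB string for a SQL Server destination using SQL authentication, the second a Jet string for an Access .mdb source (note that SSIS does not persist passwords in the package by default, so the password usually has to be supplied at run time as well):

Provider=SQLNCLI.1;Data Source=MyServer;Initial Catalog=MyDatabase;User ID=MyUser;Password=MyPassword;
Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\Data\MySource.mdb;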
I tried Beta 1 of Service Pack 1 for .NET 3.5. If I try to add an entity (and try to save it), I get the exception "No support for server-generated keys and server-generated values".
How can I add entities to my SQL CE database?
I tried giving the id column (primary key) in the database an identity, and another time without identity, only a primary key - neither worked. I always get the same error.
What do I have to change to make SaveChanges() succeed?
Hi, I want to save the last modification date when a row is updated. I have a column called "LastModification" in the table; every time the row is updated I want to set the value of this column to the current date. So far all I know is that I need to use a trigger and the GETDATE() function, but could anybody help me with how to set the value of the column to GETDATE()? Thanks for your help.
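A minimal sketch of such a trigger; dbo.MyTable and its key column MyTableID are placeholders for the real table, and the join to the inserted pseudo-table makes sure only the rows touched by the UPDATE get stamped:

CREATE TRIGGER trg_MyTable_SetLastModification
ON dbo.MyTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON
    UPDATE t
    SET    t.LastModification = GETDATE()
    FROM   dbo.MyTable AS t
    INNER JOIN inserted AS i ON i.MyTableID = t.MyTableID
END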
I have a database with literally hundreds of tables in it. Can anyone advise me how to tell which ones are populated with data and which ones aren't?
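One quick way, assuming SQL Server 2005 or later (on SQL 2000 the same idea works against sysobjects/sysindexes with indid IN (0, 1)), is to sum the row counts in sys.partitions:

SELECT t.name      AS TableName,
       SUM(p.rows) AS RowsInTable
FROM sys.tables t
INNER JOIN sys.partitions p
        ON p.object_id = t.object_id
       AND p.index_id IN (0, 1)   -- heap or clustered index only
GROUP BY t.name
ORDER BY RowsInTable DESC

Tables with RowsInTable = 0 are the empty ones; the counts are the catalog's bookkeeping figures, so they can be slightly out of date, but they are fine for this purpose.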
I ran the code below in QA, and @@ERROR never gets populated with any error code, despite the fact that an error occurred. This behavior is very misleading and errors are never caught. Any advice greatly appreciated.
-- Create the table
create table test_table ( test_field int )

-- Execute the code
DECLARE @err_status int, @row_count int
insert into test_table ( test_field ) values ( 'TEST STRING' )
select @err_status = @@ERROR, @row_count = @@ROWCOUNT
IF @err_status <> 0 PRINT @err_status
-- Results: Server: Msg 245, Level 16, State 1, Line 1 Syntax error converting the varchar value 'TEST STRING' to a column of data type int.
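The conversion error above is one of the errors that aborts the batch, so the SELECT that reads @@ERROR never even runs - which is why nothing gets caught. On SQL Server 2005 and later a TRY...CATCH block does get control for this kind of error; a sketch against the same test table:

BEGIN TRY
    INSERT INTO test_table ( test_field ) VALUES ( 'TEST STRING' )
END TRY
BEGIN CATCH
    PRINT ERROR_NUMBER()   -- 245 for the conversion error above
    PRINT ERROR_MESSAGE()
END CATCH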
Hi, I have placed an expression on the flat file connection as below:
@[User::FileDirectory] + @[User::FileName]
This is supposed to be used instead of the ConnectionString property of the flat file connection.
You can see that I have created two variables.
The variable @[User::FileDirectory] is set to the directory. i.e. I have hardcoded the path to it and assigned it to this variable.
The variable @[User::FileName] is picked up automatically.
The question is:
When I go to the properties of the flat file connection, I delete the value inside the ConnectionString property because there is now an expression which sets the ConnectionString. But when I come back to this property, I am not sure why the ConnectionString property gets populated with the directory that I hardcoded into the variable.
I have a table with two columns, OwnerName and Owner:

OwnerName    Owner
John;Smith
Mary;Smith
OwnerName is populated; Owner is not. I want to populate the Owner column with the OwnerName in alphabetical order. I have already created a function to do this: Select fnGetOwner(OwnerName) from OwnerTable. This returns:
Smith, John
Smith, Mary
How do I populate the blank Owner field beside the OwnerName in the OwnerTable?
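If the goal is just to fill in the blank column once, an UPDATE that calls the same function should do it; a sketch assuming the function lives in the dbo schema (scalar UDFs have to be called with their schema prefix) and that the unpopulated rows are NULL:

UPDATE OwnerTable
SET    Owner = dbo.fnGetOwner(OwnerName)
WHERE  Owner IS NULL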
Hi, I have a stored proc which returns multiple result sets. These result sets I am capturing using a strongly typed dataset, which in turn I am using to display in the code. My dataset has 5 tables. However, when I run the code only 3 tables get populated and the remaining 2 get no data. I have seen this problem before and could not resolve it. Please let me know if anyone can help.
How can I change a field type whilst it is populated?
I have tried :
insert new_table select * from old_table
but I get :
Disallowed implicit conversion from datatype 'text' to datatype 'varchar' Table: 'davy.dbo.new_table', Column: 'de_area' Use the CONVERT function to run this query.
My table formats are as follows :
TABLE dbo.new_table ( de_pk int NOT NULL , de_name char (25) NOT NULL , de_area char(255) NULL )
TABLE dbo.old_table ( de_pk int NOT NULL , de_name char (25) NOT NULL , de_area text NULL )
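One way that fits the hint in the error message is to do the conversion explicitly while copying, since text cannot be implicitly converted to a character type. A sketch, with 255 chosen to match the char(255) target column (anything longer than 255 characters in de_area would be truncated):

INSERT INTO new_table (de_pk, de_name, de_area)
SELECT de_pk, de_name, CONVERT(varchar(255), de_area)
FROM   old_table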
I need to query some hierarchical data. I've written a recursive query that allows me to examine a parent and all its related children using an adjacency data model. The scenario is to allow users to track how columns are populated in an ETL process. I've set up the sample data so that there are two paths:
1. col1 -> col2 -> col3 -> col6
2. col4 -> col5
You can input a column name and get everything from that point downstream. The problem is, you need to be able to start at the bottom and work your way up. Basically, you should be able to put in col6 and see how the data got from col1 to col6. I'm not sure if it's a matter of rewriting the query or changing the schema to invert the relationships.
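It should just be a matter of rewriting the query rather than inverting the schema: anchor the recursion on the target column and walk toward the sources. A sketch, assuming a hypothetical adjacency table dbo.ColumnLineage(SourceColumn, TargetColumn) where each row means "SourceColumn populates TargetColumn" (the real table and column names aren't shown above):

WITH Upstream AS
(
    -- anchor: start at the bottom of the chain
    SELECT SourceColumn, TargetColumn, 1 AS Depth
    FROM dbo.ColumnLineage
    WHERE TargetColumn = 'col6'
    UNION ALL
    -- recursive step: find the row that feeds the current source
    SELECT l.SourceColumn, l.TargetColumn, u.Depth + 1
    FROM dbo.ColumnLineage AS l
    INNER JOIN Upstream AS u ON l.TargetColumn = u.SourceColumn
)
SELECT SourceColumn, TargetColumn, Depth
FROM Upstream
ORDER BY Depth

For col6 this would return col3 -> col6, col2 -> col3 and col1 -> col2, i.e. the whole upstream path.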
We have a lot of VB6 code that uses ADO 2.7 and stored procs with SQL 2005. I have noticed recently that if I use the following code:
Dim con As New ADODB.Connection con.ConnectionString = "driver={SQL Server};server=(local);database=test;uid=sa;pwd=" con.Open
Dim com As New ADODB.Command
com.ActiveConnection = con com.CommandText = "usp_GetSetting" com.CommandType = adCmdStoredProc
com.Parameters.Append com.CreateParameter(...)
It will fail, the reason being that after setting the CommandText and CommandType, ADO then seems to automatically go away and populate the Parameters collection from the database's metadata according to the SP we are calling.
I have never seen this before; I thought the Refresh method had to be called before the Parameters collection gets populated.
However, the first three columns are not being populated in the destination table. The other columns come over fine.
The SQL stmt. returns data as expected when run against the source database.
I deleted the source and destination and recreated the flow to prevent metadata mapping issues. In the source editor preview I see all of the columns and data. In the destination editor preview, the first three columns of data are null ???.
It appears that the columns are not mapping properly even though they are in the source and destination of the mapping editor.
I have made sure that the destination mapping contains all the columns in the UI.
The source and destination have the columns represented in the advanced editor metadata. I also checked the XML to verify that the columns are in the destination.
There is a row count between the source and destination, which should have no effect.
This is part of a larger DW load where I have 10 other tables populated within the data flow. I also do not get any validation or error messages, so I have eliminated truncation errors and the like.
I am really puzzled. Has anyone run across anything like this?
I have an SSIS package with multiple objects executing ODBC SQL against Teradata. All of the SQL uses the same variable (today) to substitute today's date into the native Teradata SQL.
Today is populated using the following expression:
The problem is that some of the code uses the correct date, but others appear to use the date that the SSIS package was added to the scheduler. It is not always the same SQL statements which derive the incorrect date. The code below is four of the SQL statements generated at the same time using this variable - two show the correct date (2006-08-22) and two the date the package was added to the schedule (2006-08-18):
SELECT MarketSegmentCode , TradeTypeInd , BroadcastUpdateAction , CurrencyCode , SUM ( TradePrice * TradeSize / 1000000 ) , COUNT ( * ) FROM ProdMD_Midas_Base.v_TradeReport WHERE ( TradeDate = (dAtE'2006-08-22') ) GROUP BY MarketSegmentCode , TradeTypeInd , BroadcastUpdateAction , CurrencyCode ORDER BY TradeTypeInd , MarketSegmentCode
SELECT ParticipantCode , CASE WHEN ParticipantCode = ParticipantCodeBuyer THEN ParticipantCodeSeller ELSE ParticipantCodeBuyer END AS Counterparty , MarketSegmentCode , BroadcastUpdateAction , CurrencyCode , SUM ( TradePrice * TradeSize / 1000000 ) AS Consideration , COUNT ( * ) AS Bargains FROM ProdMD_Midas_Base.v_TradeReport WHERE ( TradeTypeInd = 'O' ) AND ( TradeDate = (dAtE'2006-08-18') ) GROUP BY ParticipantCode , ParticipantCodeBuyer , ParticipantCodeSeller , MarketSegmentCode , BroadcastUpdateAction , CurrencyCode
SELECT ParticipantCode , CurrencyCode , MarketSegmentCode , SUM ( TradePrice * TradeSize / 1000000 ) AS Consideration , COUNT ( * ) AS Bargains FROM v_TradeReport WHERE ( TradeTypeInd = 'AT' ) AND ( TradeDate = (dAtE'2006-08-22') ) GROUP BY ParticipantCode , CurrencyCode , MarketSegmentCode
SELECT ( CASE ParticipantCode WHEN ParticipantCodeBuyer THEN ParticipantCodeSeller ELSE ParticipantCodeBuyer END ) AS Cparty , CurrencyCode , MarketSegmentCode , SUM ( TradePrice * TradeSize / 1000000 ) AS Consideration , COUNT ( * ) AS bargains FROM v_TradeReport WHERE ( TradeTypeInd = 'AT' ) AND ( TradeDate = (dAtE'2006-08-18') ) GROUP BY Cparty , CurrencyCode , MarketSegmentCode
Has anybody experienced similar problems with variables in SSIS packages and the scheduler?
I am trying to figure out how to simplify the process of populating a database created by an application with the same database, only with data already in it. So far I have created a backup of the database and used that backup file with SQL Server Management Studio Express to overwrite the existing database on a new computer, so the program will have data when initially installed for demonstration purposes. I was hoping there was an executable script that I could use, so that when someone wants a demonstration of our product, they can see its options and functionality with data available. Maybe I am going about this the wrong way; I need to know if there is a way that, when our program is installed, an executable can simply be run to populate our database with a backup of our sample database. Any input would be helpful. Thanks. Isaias
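One way that usually works for this is to ship the sample .bak with the installer and have the installer run a small restore script (for example with sqlcmd -i restore_demo.sql). The script below is only a sketch: the database name, backup path and logical/physical file names are placeholders that would have to match whatever the installer lays down:

RESTORE DATABASE SampleDemoDb
FROM DISK = N'C:\Install\SampleDemoDb.bak'
WITH MOVE 'SampleDemoDb_Data' TO N'C:\Data\SampleDemoDb.mdf',
     MOVE 'SampleDemoDb_Log'  TO N'C:\Data\SampleDemoDb.ldf',
     REPLACE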
I have a real heartache with runtime parameter interrogation on my DB. Sure, I get the latest and greatest, and sure, I don't have to type in all those lovely parameter types... but the hit I take on performance for making no fewer than 3 DB hits for each SqlAdapter is unreasonable!
So... I like the idea of maybe calling it once for all my stored procs on application startup... and then maybe saving this in a Cache object.
My problem is that I can't see where you can even serialize a SqlParameterCollection or, for that matter, assign it to a Command object. Can you cache a Command object?
LOL
I think I may just have to write some generic routine for creating and populating my command objects based on a key (type) and then use that to fetch my command.Update, command.Insert and command.
I would like to use the new AsynchBlock to do the fetching of the stored proc parameters and then just pull them from the Cache object....put a file watch so that if the DB's change my params it re-pulls them again.
*nice*.....
Then I get the best of both worlds...caching...and no parameter writing...
Hi, I was wondering if any SQL Server gurus out there could help me...
I have a table I'm trying to apply a full-text catalog to; however, no results are ever returned because the column being cataloged is varbinary(max) and is being populated from a converted nvarchar(max) value.
To re-create the problem quickly...
If I populate the column via CONVERT(varbinary(max), 'test text') then there is no problem, I get results as expected.
However if I populate the column via CONVERT(varbinary(max), CAST('test text' as nvarchar(max))) no results are ever returned.
Is this a bug with SQL Server 2005 Full Text Indexing? I'm happily creating full text catalogs when an nvarchar is not getting converted into a varbinary.
I'm setting the Document Type column to '.html' (I've tried changing this to '.txt' in case it was a fault with the html ifilter but the problem persists so I believe I can rule this out).
The reason I need to convert an nvarchar to varbinary is that the table holds multi-lingual text and I'm adding a html meta tag <META NAME="MS.LOCALE" CONTENT="ES"> to the beginning in order for the full text indexing word breaker to select the correct language to catalog the text with. The aim being to provide more relevant searches in users native languages (I've read a few articles that describe this technique, but it's the first time I've tried to apply it).
Any pointers / suggestions would be greatly appreciated. Cheers, Gavin.
Below is a T-SQL script you can run to demonstrate the effect I'm experiencing...
-- Create test database
CREATE DATABASE FullTextTest
GO
USE FullTextTest
GO

-- Create test data table
CREATE TABLE TestTable
(
    pk UNIQUEIDENTIFIER NOT NULL CONSTRAINT tablePK PRIMARY KEY,
    varbinarycol VARBINARY(MAX),
    documentExtension VARCHAR(5)
)
GO

-- The single entry below WILL BE FOUND (the text source is being entered directly)
INSERT INTO TestTable (pk, varbinarycol, documentExtension)
VALUES (NEWID(), CONVERT(VARBINARY(MAX), '<META NAME="MS.LOCALE" CONTENT="EN">test entry 1'), '.html')

-- The two entries below WILL NOT BE FOUND (the text source is taken from an NVARCHAR(MAX) value)
INSERT INTO TestTable (pk, varbinarycol, documentExtension)
VALUES (NEWID(), CONVERT(VARBINARY(MAX), CAST('<META NAME="MS.LOCALE" CONTENT="EN">test entry 2' AS NVARCHAR(MAX))), '.html')
INSERT INTO TestTable (pk, varbinarycol, documentExtension)
VALUES (NEWID(), CONVERT(VARBINARY(MAX), CAST('<META NAME="MS.LOCALE" CONTENT="EN">test entry 3' AS NVARCHAR(MAX))), '.html')
GO

-- Create the full text catalog
EXEC sp_fulltext_database 'enable'
GO
CREATE FULLTEXT CATALOG TEST AS DEFAULT
GO
CREATE FULLTEXT INDEX ON TestTable (varbinarycol TYPE COLUMN documentExtension LANGUAGE 1033)
KEY INDEX tablePK
GO
-- NOTE: You might need to give the catalog a chance to build before running the script below.
-- Now do a search that SHOULD RETURN 3 ROWS of data, but ONLY 1 ROW IS RETURNED
SELECT CAST(varbinarycol AS NVARCHAR(MAX))
FROM TestTable
WHERE CONTAINS(varbinarycol, 'test')