Context_info / Connection Wide Variables / Information
Feb 26, 2008
Hello
I know that the following is possible:
declare @x varbinary(128)
select @x = convert(varbinary(128), 'Some user name')
set context_info @x
Then in a trigger, you can say:
UPDATE t
SET    who_was_kilroy = convert(varchar(128), p.context_info)
FROM   tbl t
JOIN   inserted i ON i.keycol = t.keycol
CROSS  JOIN master.dbo.sysprocesses p
WHERE  p.spid = @@spid
I am searching for a more generic way (varbinary(128) is not big enough) to store and access connection-wide variables.
Currently our need is to track the data that is changed by the user and record it as part of the audit log. We have accomplished this by using SET CONTEXT_INFO: the user ID is passed down through all the layers, and the stored proc records the user name after setting the context. The idea is to use an update/delete trigger on each table; by setting the context in the stored proc, the trigger can read the user ID and record the changes made under that tagged user ID. Before making this our permanent solution, we want to be sure about:
(1) Reliability: in an N-tier web application, how do we make sure the context does not go out of scope?
(2) Any performance-related issues you can think of.
(3) Scope: is the context part of the connection used from the UI? In other words, if a request is made from the UI to the database, does the context stay alive until the connection goes back to the connection pool?
Thanks, Sweety.
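A minimal sketch of one common alternative when 128 bytes is not enough: a session-variable table keyed by @@SPID (the table and procedure names here are hypothetical). Because pooled connections reuse SPIDs, the proc must overwrite any stale row for the current SPID on every call:
CREATE TABLE dbo.SessionContext (
    spid    int          NOT NULL PRIMARY KEY,
    user_id varchar(256) NOT NULL,
    set_at  datetime     NOT NULL DEFAULT GETDATE()
)
GO
CREATE PROCEDURE dbo.SetSessionUser @user_id varchar(256)
AS
    -- overwrite any stale row left by a previous owner of this SPID
    DELETE dbo.SessionContext WHERE spid = @@SPID
    INSERT dbo.SessionContext (spid, user_id) VALUES (@@SPID, @user_id)
GO
-- in the trigger, instead of reading sysprocesses:
-- SELECT @user = user_id FROM dbo.SessionContext WHERE spid = @@SPID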
I've defined a System DSN (and it works just fine for the classic ASP web site), and for a separate project I would very much like to determine what server it's connecting to.
My environment is .NET 2.0.
On some boxes the DSN is configured to connect to SQL Server "George" and on others it's configured to connect to "Carl." Both servers have a database named Lighthouse. I want to connect to another DB using the newer SqlConnection/SqlCommand objects, which don't support an ODBC DSN in the connection string. However, when constructing my connection string, I need to connect to the same server that's defined in the DSN.
Is there a way to extract that information from the DSN?
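One hedged option: a System DSN lives in the registry under HKLM\SOFTWARE\ODBC\ODBC.INI\<DSN name>, with the target server in its "Server" value, so you can read it from either side. A sketch using the undocumented xp_regread ("MyDsn" is a placeholder; from .NET you would read the same key with Microsoft.Win32.Registry instead):
DECLARE @server varchar(128)
EXEC master.dbo.xp_regread
     @rootkey    = 'HKEY_LOCAL_MACHINE',
     @key        = 'SOFTWARE\ODBC\ODBC.INI\MyDsn',
     @value_name = 'Server',
     @value      = @server OUTPUT
SELECT @server AS dsn_server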
I want to build an SSIS package against Oracle and deploy it to a number of Oracle databases; as it stands, every time I deploy I have to open the package and change the connection information.
How can I make the Oracle connection information (user ID, password, server name) variable-driven, so that when I deploy the package against an Oracle database it picks up all the connection information automatically?
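A sketch of one common approach, with hypothetical variable names: put the values in package variables (for example, fed by a package configuration), then set an expression on the connection manager's ConnectionString property so it is rebuilt from the variables at run time. For an Oracle OLE DB connection the expression might look roughly like this (the exact provider string is an assumption):
"Provider=OraOLEDB.Oracle.1;Data Source=" + @[User::OraServer]
    + ";User ID=" + @[User::OraUser]
    + ";Password=" + @[User::OraPwd] + ";"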
We have a main package which calls two other packages. The first package contains a connection and a Data Flow task. The data flow's OLE DB source gets its columns from a stored procedure, and the output is written to a flat file.
The second package contains the same things (the same tasks, database, and stored procedure call); the difference is in the stored procedure's parameters. Depending on the parameters, the stored procedure returns different columns and rows. But when we look at the second package's output in the OLE DB source, it shows the columns from the first package's output, because the source stores external metadata.
So my understanding is that the connection to the same database keeps the external metadata, and because of that the OLE DB source in the second package always shows the same output columns.
How do I get the correct output from the second package in this case? Or, if we don't want to store external metadata with the connection, is that possible? If so, how?
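For what it's worth, a hedged note on a common workaround when one procedure returns different shapes: the OLE DB source discovers its columns at design time, so each package's source needs a call that yields a fixed, known column set. One old trick is a dead IF 1 = 0 SELECT at the top of the procedure (or of a thin wrapper procedure per package) so design-time metadata discovery sees the intended shape; the names and columns below are placeholders:
-- at the top of the procedure (or of a wrapper proc per package):
IF 1 = 0
    SELECT CAST(NULL AS int)         AS ColA,
           CAST(NULL AS varchar(50)) AS ColB   -- placeholder columns: the shape this package expects
-- ... the real procedure logic follows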
What is the best way to run SSIS scripts on different servers without changing connection information. Our test server is ppntt140 and our production server is ppntd110. If I create a script on server ppntt140 what can I do so I can move it to server ppntd110 without changing any connection information? Database names are the same, it is just the server that changes. What is the best way to handle this? Thanks in advance.
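One hedged approach for this: attach an XML package configuration that overrides the connection manager's ServerName, and deploy a different .dtsConfig file beside the package on each server, so the package itself never changes. The file looks roughly like this (the connection name is a placeholder):
<?xml version="1.0"?>
<DTSConfiguration>
  <Configuration ConfiguredType="Property"
                 Path="\Package.Connections[MyDatabase].Properties[ServerName]"
                 ValueType="String">
    <ConfiguredValue>ppntd110</ConfiguredValue>
  </Configuration>
</DTSConfiguration>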
While connecting to Reporting Services from Management Studio I get the following error:
"The attempt to connect to the report server failed. Check your connection information and that the report server is a compatible version. (Microsoft.SqlServer.Management.UI.RSClient)"
Hello, I am getting very frustrated! I have a Foreach Loop container with which I am processing files within a folder. I have a flat file connection manager, which I set up using a test file, and I have updated its Expressions property to use the package variable I set up in the loop container's collection. However, every time I run it I get error 0xC0202094: cannot retrieve the column information from the flat file connection manager. I can only guess that it is either the variable being passed to the connection manager or the way I set up the connection manager. When I msgbox the variable in a script component before the data flow step, the file variable looks fine. Any suggestions are REALLY appreciated.
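A hedged guess at the usual causes of 0xC0202094 in this setup: the package validates the connection manager before the loop has assigned the variable, or the variable's design-time default doesn't point at a real file with the expected layout. Two things worth checking (the variable name here is a placeholder):
-- flat file connection manager:
--   DelayValidation = True                  (defer validation until the expression has a real value)
--   Expressions: ConnectionString = @[User::CurrentFile]
-- variable User::CurrentFile: give it a design-time default that points at an existing
-- sample file with the same layout, so column metadata can be read at design time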
I have no issues driving an OLE DB connection manager with variables (basically I am passing source and destination), but I do have issues with the SMO Connection Manager.
I tried to pass the variable into SqlServerName in the SMO connection's properties: no luck. I tried the Transfer SQL Server Objects Task's SourceConnection and source DB: no luck.
I am trying to transfer objects from one server to another, but I need to pass the source and destination server names dynamically through variables.
I'll spare everyone my diatribe about MS and Booksonline...
I considered myself pretty good at SQL Server 2000 DTS. It certainly has its shortcomings, but there are a couple of books on the market that helped a lot.
I'm now trying to learn Integration Services, and not getting very far. There are two books out so far: one is poor, the other worthless.
What I want to do seems very simple. That is to loop through a series of .txt files, and extract the information into a three table database. So far, I'm unable to figure out how to use a variable in the ConnectionString property of a connection manager. I see how to use variables in scripts (I should hope so) but using the same syntax in properties either errors at design time or run time.
Do I have to do everything in VB just as in DTS? Or do the new objects really work in a usable fashion?
Color me frustrated. Any help will be appreciated.
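For the record, the variable-in-ConnectionString part is possible without code; a sketch with a hypothetical variable name: map the Foreach File enumerator to a variable, then drive the flat file connection manager from it via a property expression.
-- Foreach Loop -> Collection: Foreach File Enumerator over *.txt
-- Foreach Loop -> Variable Mappings: Index 0 -> User::CurrentFile
-- Flat File connection manager -> Properties -> Expressions:
--   ConnectionString = @[User::CurrentFile]
-- and set DelayValidation = True on the connection manager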
I've developed a package that works well on my development machine from VS 2013 for a sample flat file. On the development machine I've also deployed it to the SSISDB catalog, and from there it also runs fine for the same file. But when the same package is deployed to the production server's SSISDB catalog, it throws the following error while processing the same sample flat file: "Unable to retrieve column information from the flat file connection manager".
I am trying to set the connection string in a connection manager at runtime. Here is what I have done:
1. Created variables gv_DataSource, gv_Username and gv_Password.
2. Created a Foreach Loop that reads DataSource, Username and Password values from a variable (it is a Foreach ADO Enumerator). The ADO recordset is read into the variable by an Execute SQL task before the loop.
3. Mapped values from the recordset to the variables on the Foreach Loop's "Variable Mappings" page.
4. Used the variables in my Sybase OLEDB Connection Manager's "Expression" property, setting the "ConnectionString" property to:
5. Set the values in my database table for the connections: I set up two connections, for both of which I have Sybase OLE DB data sources configured.
When I run the package, I just get the first server's data twice; it doesn't pick up the second server's data during the second loop iteration. I verified the first one was working (i.e. the ConnectionString property was being set from the current variables) by setting the variables incorrectly on the variable properties page and then running the package. So the first row of connection information works, but the second time around it doesn't. I used a msgbox in a script task to show that the variables are mapping correctly in the loop, so it seems that on the second pass the connection information isn't being taken from the variables.
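A hedged guess, given the "first server twice" symptom: if the connection manager's RetainSameConnection property is True, the physical connection is not re-created between iterations, so a new ConnectionString expression value never takes effect; make sure it is False. The expression itself would look roughly like this (the Sybase provider string is an assumption):
"Provider=ASEOLEDB.1;Data Source=" + @[User::gv_DataSource]
    + ";User ID=" + @[User::gv_Username]
    + ";Password=" + @[User::gv_Password] + ";"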
As I was developing my SSIS package, I created several variables and tasks (FTP, WMI Reader Task). I am now cleaning up, deleting unwanted variables and connections in the design window. I save and build the package, and when I load it I get warnings that these variables are referenced but can't be found, and errors that the WMI connection is not found.
When a package calls a sub-package, it stores the absolute path of the child package in its dtsx XML, in a ConnectionString property. How annoying! When I deploy this to another machine with a different file structure, it becomes a problem. Why can't it store the path relative to the parent package, which would typically be in a sub-directory under the parent?
These last 2 days have been nothing but frustration and my deadline is slipping. Any help is appreciated.
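One hedged workaround: the Execute Package Task points at a File connection manager, and that connection manager's ConnectionString can be driven by an expression, so the child's location becomes relative to a variable you set once per machine (the variable and file names are hypothetical):
-- File connection manager for the child package, Properties -> Expressions:
--   ConnectionString = @[User::PackageFolder] + "\\Child.dtsx"
-- set User::PackageFolder per environment (configuration, parent variable, or dtexec /SET)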
Is a table 45 columns wide too wide? Most of the data is very small (check boxes, one-word fields, etc.). I have one giant form that needs to be filled out and uploaded to the database. I don't want to fragment the table too much because it will be a pain to update. This table will have to be searched; that's why I am concerned with the width. I am hoping that an index would help out and provide enough performance that I can keep a wide table.
OK, I have been charged with the task of searching a server that houses 50-60 client databases, all identical, created from a prototype. I have discovered an issue whereby one client's DB is missing a required sproc. They now want me to check all the DBs to see if anyone else is missing the sproc. I have a username and password to log in to the server that houses all the DBs. Is there a method for me to query all DBs at once and check for the sproc? Or will I have to go through them all manually? (Which could become quite time consuming, as there are so many DBs and quite a large number of sprocs in each; the sprocs seem to be listed in a semi-random order in each DB as well.) Thanks all.
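A hedged sketch using the undocumented (but long-standing) sp_MSforeachdb, with 'MyProc' as a placeholder for the sproc's name; it prints every non-system database that lacks the procedure:
EXEC sp_MSforeachdb
    'IF DB_ID(''?'') > 4
        AND NOT EXISTS (SELECT * FROM [?].dbo.sysobjects
                        WHERE name = ''MyProc'' AND type = ''P'')
         PRINT ''? is missing MyProc'''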
Is there any way to create server-wide UDFs? We have a lot of functions that are spread across multiple databases, and it's time consuming going around databases looking for them. What would be nice is to create functions at the server level which can be accessed within any database, like the GETDATE() function.
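As far as I know there is no supported server-level UDF mechanism, but here is a sketch of a common workaround (SQL Server 2005+; names are placeholders): keep the shared functions in one utility database and create a synonym for each of them in every database, so local code can call them unqualified:
-- the real function lives once, in the utility database: UtilityDb.dbo.fn_Shared
-- in each client database:
CREATE SYNONYM dbo.fn_Shared FOR UtilityDb.dbo.fn_Shared
-- local code can now call dbo.fn_Shared(...) as if it were defined locally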
I'm working on a system that is very address-centric, and detection of duplicate addresses is very important. As a result we have broken addresses down into many parts (DDL below, but I've left out some reference tables for conciseness), these being state, locality, street, street number, and address. The breakdown is roughly consistent with Australian addressing standards; we're working on finalising this. Because we carry the primary key down each of the levels, our address table has ended up with a very wide primary key (around 170 characters). We refer to addresses from a number of other tables, and although my instinct is to use this natural key in the other tables, I wonder if we should just put a unique index on the natural key, create a surrogate primary key and use it in the other tables. Any thoughts?
CREATE TABLE dbo.States (
    StateID varchar (3) NOT NULL,
    StateName varchar (50) NOT NULL,
    CONSTRAINT PK_AddressStates PRIMARY KEY NONCLUSTERED (StateID)
)
CREATE TABLE dbo.Localities (
    Locality varchar (46) NOT NULL,
    StateID varchar (3) NOT NULL,
    Postcode char (4) NOT NULL,
    CONSTRAINT PK_Localities PRIMARY KEY NONCLUSTERED (Locality, StateID, Postcode),
    CONSTRAINT FK_AddressLocalities_AddressStates FOREIGN KEY (StateID)
        REFERENCES dbo.States (StateID)
)
CREATE TABLE dbo.Streets (
    StreetName varchar (35) NOT NULL,
    StreetTypeID varchar (10) NOT NULL,
    StreetDirectionID varchar (2) NOT NULL,
    Locality varchar (46) NOT NULL,
    StateID varchar (3) NOT NULL,
    Postcode char (4) NOT NULL,
    CONSTRAINT PK_Streets PRIMARY KEY CLUSTERED
        (StreetName, StreetTypeID, StreetDirectionID, Locality, StateID, Postcode),
    CONSTRAINT FK_Streets_Localities FOREIGN KEY (Postcode, Locality, StateID)
        REFERENCES dbo.Localities (Postcode, Locality, StateID)
)
CREATE TABLE dbo.StreetNumbers (
    StreetName varchar (35) NOT NULL,
    StreetTypeID varchar (10) NOT NULL,
    StreetDirectionID varchar (2) NOT NULL,
    Locality varchar (46) NOT NULL,
    StateID varchar (3) NOT NULL,
    Postcode char (4) NOT NULL,
    StreetNumber varchar (15) NOT NULL,
    BuildingName varchar (100) NOT NULL,
    CONSTRAINT PK_StreetNumbers PRIMARY KEY CLUSTERED
        (StreetName, StreetTypeID, StreetDirectionID, Locality, StateID, Postcode, StreetNumber),
    CONSTRAINT FK_StreetNumbers_Streets FOREIGN KEY
        (StreetName, StreetTypeID, StreetDirectionID, Locality, StateID, Postcode)
        REFERENCES dbo.Streets (StreetName, StreetTypeID, StreetDirectionID, Locality, StateID, Postcode)
)
CREATE TABLE dbo.Addresses (
    StreetName varchar (35) NOT NULL,
    StreetTypeID varchar (10) NOT NULL,
    StreetDirectionID varchar (2) NOT NULL,
    Locality varchar (46) NOT NULL,
    StateID varchar (3) NOT NULL,
    Postcode char (4) NOT NULL,
    StreetNumber varchar (15) NOT NULL,
    AddressTypeID varchar (6) NOT NULL,
    AddressName varchar (20) NOT NULL,
    CONSTRAINT PK_StreetNumberPrefixes PRIMARY KEY CLUSTERED
        (StreetName, StreetTypeID, StreetDirectionID, Locality, StateID, Postcode, StreetNumber, AddressTypeID, AddressName),
    CONSTRAINT FK_Addresses_StreetNumbers FOREIGN KEY
        (StreetName, StreetTypeID, StreetDirectionID, Locality, StateID, Postcode, StreetNumber)
        REFERENCES dbo.StreetNumbers (StreetName, StreetTypeID, StreetDirectionID, Locality, StateID, Postcode, StreetNumber)
)
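For illustration, a sketch of the surrogate-key variant for one level (Localities), assuming the same pattern is repeated down the hierarchy: the natural key keeps a UNIQUE constraint for duplicate detection, while referencing tables carry only the narrow surrogate:
CREATE TABLE dbo.Localities (
    LocalityID int IDENTITY (1, 1) NOT NULL,
    Locality varchar (46) NOT NULL,
    StateID varchar (3) NOT NULL,
    Postcode char (4) NOT NULL,
    CONSTRAINT PK_Localities PRIMARY KEY NONCLUSTERED (LocalityID),
    CONSTRAINT UQ_Localities_Natural UNIQUE (Locality, StateID, Postcode),
    CONSTRAINT FK_Localities_States FOREIGN KEY (StateID)
        REFERENCES dbo.States (StateID)
)
-- child tables then reference dbo.Localities (LocalityID) instead of the three-column key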
Requirement: I want to create another table based on the column values. For example, I have to take the employee ID and check what values it has under the OWNS column in the table. I take only 3 values, and these values should go to the newly created columns (Owns1, Owns2, Owns3). If there is no value for any of these columns, it should have NULL loaded in it. The result of the modification should look like this:
Note: even though employee ID 3 owns more than 3 things, we only take 3 of what he owns and populate the above columns. In addition, the OWNS column will have more than 500 different values in it.
It's kind of urgent, so if anyone knows how, can you please help me with this? Thanks a lot. -- Ragulan ;)
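A hedged sketch of one way to do this, assuming a source table dbo.Employees(EmployeeID, Owns) with one row per owned item and SQL Server 2005+ for ROW_NUMBER(); missing slots come out as NULL automatically:
SELECT  EmployeeID,
        MAX(CASE WHEN rn = 1 THEN Owns END) AS Owns1,
        MAX(CASE WHEN rn = 2 THEN Owns END) AS Owns2,
        MAX(CASE WHEN rn = 3 THEN Owns END) AS Owns3
FROM   (SELECT EmployeeID, Owns,
               ROW_NUMBER() OVER (PARTITION BY EmployeeID ORDER BY Owns) AS rn
        FROM   dbo.Employees) AS t
WHERE   rn <= 3
GROUP BY EmployeeID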
Dear all, what are the different types of joins available in SQL Server 6.0, 7.0, 2000, 2005 and now 2008?
The reason behind the question:
I've used the SwisSQL tool to boost the performance of my join queries, and the tool suggested some options. Those work fine against the database but fail at the application level.
I found the reason is the LOOP JOIN, HASH JOIN and MERGE JOIN hints it added.
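For reference, those are physical join hints; in T-SQL they look like this (the tables are hypothetical), and not every client library or query parser copes with the extra keyword, which may explain the application-level failures:
SELECT o.OrderID, c.CustomerName
FROM   dbo.Orders o
INNER  HASH JOIN dbo.Customers c ON c.CustomerID = o.CustomerID
-- alternatives: INNER LOOP JOIN, INNER MERGE JOIN,
-- or a query-wide OPTION (HASH JOIN) at the end of the statement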
Your valuable suggestions are needed.
Thank you very much.
Vinod. Even if you learn 1%, learn it with 100% confidence.
Can we create custom object-wide roles? In the same manner that db_datareader in effect grants SELECT on all tables, can we create roles that affect all objects without having to explicitly grant the permission on every object?
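A hedged sketch (SQL Server 2005+): schema- and database-scoped GRANTs to a custom role cover current and future objects without per-object grants (the role name is a placeholder):
CREATE ROLE app_readers
GRANT SELECT ON SCHEMA::dbo TO app_readers   -- all current and future dbo tables/views
GRANT EXECUTE TO app_readers                 -- optional: database-wide EXECUTE on procs/functions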
Hi, I have an ASP.NET application that uses VARCHAR extensively in the tables and, more importantly, stored procedures (a couple hundred of them).
This app needs to start accepting foreign language in some areas, so I was wondering if there was some way to go through the tables and, more importantly, the stored procedures and change all "VARCHAR" references to "NVARCHAR" ?
Are the stored procedures stored as text files somewhere on the server? If so, I could use some sort of search-and-replace utility to go through and change all VARCHAR to NVARCHAR.
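They aren't files; procedure text lives inside the database. A hedged sketch for finding the affected modules (SQL Server 2005+; on SQL 2000, query syscomments instead), after which you would script each one out, edit it, and re-run it as ALTER PROCEDURE:
SELECT OBJECT_NAME(object_id) AS proc_name
FROM   sys.sql_modules
WHERE  definition LIKE '%varchar%'   -- also matches nvarchar; review hits manually
ORDER BY proc_name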
I have a wide fact table that I'm feeding to an SSAS cube. I was advised that splitting the measure group into two will improve performance when querying the cube.
I cannot find any documentation that supports this, in fact I get a blue curved line suggesting that I merge the measure groups since they have the same dimensionality and granularity.
I guess the best practice is what the blue line states, but without knowing the internals of SSAS, I can understand that a smaller measure group may be easier to handle, or allow more specific aggregations.
I have added several Active Directory groups and set the system role for each to "System User", and set one of the groups (DBAdmin) to "System Administrator".
My issue is that even after doing this, the users in the other groups are able to access the "Configure site-wide security" link under Security and change the permissions. The only system permission these users have is "View shared schedules" so it doesn't seem that this should be possible.
I would appreciate any feedback on this issue. Thanks!
I'm building a system that imports data from several sources (Excel files, text files, Access databases, etc.) using DTS. The entire process revolves around MS SQL Server, by the way.
I figured I would create denormalized tables that mirror the Excel and flat files in structure, import data into those, clean up and remove duplicates there, then break those out into my normalized table structure later.
Now I've finished the importing part (though this is going to happen once a week) and I'm onto breaking up the denormalized tables.
I'm hesitating because I'm not sure I've made the best decisions in terms of process, etc.
I've decided to use cursors to loop over the denormalized tables and use batch insert statements to push data out to the appropriate tables.
Any comments? Suggestions? All is welcome.
I'm specifically interested in hearing back on the way I've set up the intermediate, denormalized tables and how I'm breaking them up using cursors (step 2 of the process below). Still, all comments are welcome. As are suggestions for further reading.
Thanks again...
Simplified example (my denormalized tables are 20-30 columns wide).
denormalized table:
===================
name, address, city, state, cellphone, homephone
I'm breaking up the denormalized tables like this (*UNTESTED*):
=================================================
DECLARE @name varchar(100), @address varchar(100), @city varchar(50),
        @state varchar(2), @cellphone varchar(20), @homephone varchar(20),
        @personID int   -- types/sizes are placeholders; match the normalized schema

DECLARE myCursor CURSOR FAST_FORWARD FOR
    SELECT name, address, city, state, cellphone, homephone
    FROM _DNT_myWideTable

OPEN myCursor
FETCH NEXT FROM myCursor INTO @name, @address, @city, @state, @cellphone, @homephone

WHILE @@FETCH_STATUS = 0
BEGIN
    -- create the person first and capture the new ID
    INSERT INTO tblPerson (name) VALUES (@name)
    SET @personID = SCOPE_IDENTITY()   -- safer than @@IDENTITY if triggers fire

    -- use that ID to coordinate inserts across the other tables
    INSERT INTO tblAddress (FK_person, address, city, state, addressType)
    VALUES (@personID, @address, @city, @state, 'HOME')

    INSERT INTO tblContact (FK_person, data, contactType)
    VALUES (@personID, @cellphone, 'CELLPHONE')

    INSERT INTO tblContact (FK_person, data, contactType)
    VALUES (@personID, @homephone, 'HOMEPHONE')

    FETCH NEXT FROM myCursor INTO @name, @address, @city, @state, @cellphone, @homephone
END

CLOSE myCursor
DEALLOCATE myCursor
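Since you asked for comments: a hedged set-based alternative that avoids the cursor entirely, assuming name uniquely identifies a row in the wide table (if it doesn't, carry the wide table's key into tblPerson, or use the OUTPUT clause on SQL Server 2005+); tblPerson.personID is assumed to be the identity column:
INSERT INTO tblPerson (name)
SELECT name FROM _DNT_myWideTable

INSERT INTO tblAddress (FK_person, address, city, state, addressType)
SELECT p.personID, w.address, w.city, w.state, 'HOME'
FROM   _DNT_myWideTable w
JOIN   tblPerson p ON p.name = w.name

INSERT INTO tblContact (FK_person, data, contactType)
SELECT p.personID, w.cellphone, 'CELLPHONE'
FROM   _DNT_myWideTable w
JOIN   tblPerson p ON p.name = w.name

INSERT INTO tblContact (FK_person, data, contactType)
SELECT p.personID, w.homephone, 'HOMEPHONE'
FROM   _DNT_myWideTable w
JOIN   tblPerson p ON p.name = w.name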
When rendered, the pages with the small tables on them have a lot of blank white space to the right of the table. This is probably caused by the big table on page 3.
This report is distributed by email in Excel format, so on sheets 1 and 2 there are a lot of empty cells to the right of the tables. When printing, the users just want to use the "landscape" and "fit to page" options. Because of the empty cells, the fit-to-page option shrinks the first two tables to a very small size covering only 50% of the page width; the other 50% is reserved for the empty cells.
Of course, I know that deleting the empty cells offers a solution, but it would be a lot handier if there were no empty cells in the first place.