Is there any way to create server-wide UDFs? We have a lot of functions spread across multiple databases, and it's time-consuming going from database to database looking for them. What would be nice is to create functions at the server level that can be accessed from any database, like the GETDATE() function.
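For illustration, here is a rough sketch of one possible workaround (SQL Server 2005 and later): keep a single copy of each function in one central database and expose it everywhere through synonyms. It is not truly server-wide like GETDATE(), and the database name UtilityDB and function fn_GetFiscalYear are just made-up examples:

-- central copy of the function (names are hypothetical)
USE UtilityDB;
GO
CREATE FUNCTION dbo.fn_GetFiscalYear (@d datetime)
RETURNS int
AS
BEGIN
    RETURN CASE WHEN MONTH(@d) >= 7 THEN YEAR(@d) + 1 ELSE YEAR(@d) END;
END
GO

-- in every other database, a synonym makes it callable with the usual two-part name
USE SomeOtherDB;
GO
CREATE SYNONYM dbo.fn_GetFiscalYear FOR UtilityDB.dbo.fn_GetFiscalYear;
GO
SELECT dbo.fn_GetFiscalYear(GETDATE());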
I am trying to import this year's worth of failed logins and the last successful login for each user out of the logs using master.dbo.xp_readerrorlog. The script essentially loops through the linked servers I have on my DBA box and reaches out for the log data. It works, but here is the error I am getting on most of our production servers:
OLE DB provider "SQLNCLI11" for linked server "AWSCADENCEDB01" returned message "The partner transaction manager has disabled its support for remote/network transactions.".
Msg 7391, Level 16, State 2, Line 17 The operation could not be performed because OLE DB provider "SQLNCLI11" for linked server "AWSCADENCEDB01" was unable to begin a distributed transaction.
I know how to enable distributed transactions on the servers that error out, but if it is not needed for anything other than my audit script, I doubt the business will approve turning on distributed transactions at those locations (so I am not even going to ask).
I am attempting to set up a single audit .rdl with the information I want to review quarterly.
CREATE PROC [dbo].[Import_Login_Data] AS IF EXISTS ( SELECT 1 FROM master.sys.servers WHERE is_linked = 1
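For reference, the alternative I am weighing instead of enabling MSDTC everywhere is the per-linked-server option below. This is only a sketch (the server name is just the one from the error message above, and the option requires SQL Server 2008 or later on the local server); I have not confirmed it is acceptable at those sites:

-- ask SQL Server not to promote remote proc calls (e.g. INSERT ... EXEC AT linkedserver)
-- against this linked server into a distributed transaction
EXEC master.dbo.sp_serveroption
     @server   = N'AWSCADENCEDB01',
     @optname  = N'remote proc transaction promotion',
     @optvalue = N'false';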
I'm having a problem to which I'm sure the answer is simple...
All I want is a list of the databases on my server with their allocated size and the free space within each. Something similar to the first table that sp_spaceused gives you, but on a server-wide scale.
As I say, I'm sure there's a simple solution out there, but alas Google has failed me.
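In case it helps frame the question, this is roughly the shape of output I am after. The sketch below uses the undocumented sp_MSforeachdb and FILEPROPERTY (sizes are in 8 KB pages, hence the * 8 / 1024), so treat it as an untested starting point rather than a finished solution:

-- collect allocated size and internal free space for the data files of every database
CREATE TABLE #db_space (
    database_name sysname,
    allocated_mb  decimal(18, 2),
    free_mb       decimal(18, 2)
);

EXEC sp_MSforeachdb N'
USE [?];
INSERT INTO #db_space (database_name, allocated_mb, free_mb)
SELECT DB_NAME(),
       SUM(size) * 8 / 1024.0,
       SUM(size - FILEPROPERTY(name, ''SpaceUsed'')) * 8 / 1024.0
FROM sys.database_files
WHERE type = 0;   -- data files only
';

SELECT * FROM #db_space ORDER BY database_name;
DROP TABLE #db_space;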
I created a SQL Server table in the past on a server that was case sensitive. Over time I found that switching to a server that is not case sensitive still caused my data to behave as case sensitive. I read an article that said you should rebuild your master database and then re-create your tables. So after rebuilding the master database, a basic restore would not be sufficient? I would have to go and manually re-create every single table again?
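If it is useful, the usual way to see where the case sensitivity is coming from is to compare collations at the server, database, and column level. A quick sketch (SQL Server 2005+ catalog views; dbo.MyTable is just a placeholder name):

-- server default collation
SELECT SERVERPROPERTY('Collation') AS server_collation;

-- per-database collations
SELECT name, collation_name FROM sys.databases;

-- per-column collations for one table (placeholder name)
SELECT name, collation_name
FROM sys.columns
WHERE object_id = OBJECT_ID('dbo.MyTable');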
Hi, I have a Visual C++ app that uses ODBC to access a SQL Server database. I am doing a SELECT to get the value of a binary field and binding a char buffer to that field as follows (the field in the database is binary(16)):

char lpResourceID[32+1];
rc = SQLBindCol(hstmt, 1, SQL_C_CHAR, &lpResourceID, RESOURCE_ID_LEN_PLUS_NULL, &nLen1);

and this works fine. However, moving the codebase to Unicode, I tested the following:

WCHAR lpResourceID[32+1];
rc = SQLBindCol(hstmt, 1, SQL_C_WCHAR, &lpResourceID, RESOURCE_ID_LEN_PLUS_NULL, &nLen1);

but it only returns half the data. Any ideas? I thought this would work fine; not sure why I am losing data. All ideas welcome. John
Is a table 45 columns wide too wide? Most of the data is very small (check boxes, one-word fields, etc.). I have one giant form that needs to be filled out and uploaded to the database. I don't want to fragment the table too much because it will be a pain to update. This table will have to be searched; that's why I am concerned with the width. I am hoping that an index would help and provide enough performance that I can keep a wide table.
OK, I have been charged with the task of searching a server that houses 50-60 client databases. All the databases are identical, created from a prototype. I have discovered an issue whereby one client's database is missing a required sproc. They now want me to check all the databases to see if anyone else is missing the sproc. I have a username and password to log in to the server that houses all the databases. Is there a method for me to query all the databases at once and check for the sproc, or will I have to go through them all manually? That could become quite time-consuming, as there are so many databases and quite a large number of sprocs in each (and the sprocs seem to be listed in a semi-random order in each database as well). Thanks all.
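A sketch of the kind of one-shot check I am hoping exists. It relies on the undocumented sp_MSforeachdb, and usp_TheMissingProc is just a placeholder for the real sproc name:

EXEC sp_MSforeachdb N'
IF DB_ID(''?'') > 4   -- skip master, tempdb, model and msdb
   AND NOT EXISTS (SELECT 1 FROM [?].sys.procedures WHERE name = ''usp_TheMissingProc'')
    PRINT ''? is missing the procedure'';
';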
I'm working on a system that is very address-centric, and detection of duplicate addresses is very important. As a result we have broken addresses down into many parts (DDL below, but I've left out some reference tables for conciseness), these being state, locality, street, street number, and address. The breakdown is roughly consistent with Australian addressing standards; we're working on finalising this. Because we carry the primary key down each of the levels, this has resulted in our address table having a very wide primary key (around 170 characters). We refer to addresses from a number of other tables, and although my instinct is to use this natural key in the other tables, I wonder if we should just put a unique index on the natural key, create a surrogate primary key and use it in the other tables. Any thoughts?

CREATE TABLE dbo.States (
    StateID varchar(3) NOT NULL,
    StateName varchar(50) NOT NULL,
    CONSTRAINT PK_AddressStates PRIMARY KEY NONCLUSTERED (StateID)
)

CREATE TABLE dbo.Localities (
    Locality varchar(46) NOT NULL,
    StateID varchar(3) NOT NULL,
    Postcode char(4) NOT NULL,
    CONSTRAINT PK_Localities PRIMARY KEY NONCLUSTERED (Locality, StateID, Postcode),
    CONSTRAINT FK_AddressLocalities_AddressStates FOREIGN KEY (StateID)
        REFERENCES dbo.States (StateID)
)

CREATE TABLE dbo.Streets (
    StreetName varchar(35) NOT NULL,
    StreetTypeID varchar(10) NOT NULL,
    StreetDirectionID varchar(2) NOT NULL,
    Locality varchar(46) NOT NULL,
    StateID varchar(3) NOT NULL,
    Postcode char(4) NOT NULL,
    CONSTRAINT PK_Streets PRIMARY KEY CLUSTERED
        (StreetName, StreetTypeID, StreetDirectionID, Locality, StateID, Postcode),
    CONSTRAINT FK_Streets_Localities FOREIGN KEY (Postcode, Locality, StateID)
        REFERENCES dbo.Localities (Postcode, Locality, StateID)
)

CREATE TABLE dbo.StreetNumbers (
    StreetName varchar(35) NOT NULL,
    StreetTypeID varchar(10) NOT NULL,
    StreetDirectionID varchar(2) NOT NULL,
    Locality varchar(46) NOT NULL,
    StateID varchar(3) NOT NULL,
    Postcode char(4) NOT NULL,
    StreetNumber varchar(15) NOT NULL,
    BuildingName varchar(100) NOT NULL,
    CONSTRAINT PK_StreetNumbers PRIMARY KEY CLUSTERED
        (StreetName, StreetTypeID, StreetDirectionID, Locality, StateID, Postcode, StreetNumber),
    CONSTRAINT FK_StreetNumbers_Streets FOREIGN KEY
        (StreetName, StreetTypeID, StreetDirectionID, Locality, StateID, Postcode)
        REFERENCES dbo.Streets (StreetName, StreetTypeID, StreetDirectionID, Locality, StateID, Postcode)
)

CREATE TABLE dbo.Addresses (
    StreetName varchar(35) NOT NULL,
    StreetTypeID varchar(10) NOT NULL,
    StreetDirectionID varchar(2) NOT NULL,
    Locality varchar(46) NOT NULL,
    StateID varchar(3) NOT NULL,
    Postcode char(4) NOT NULL,
    StreetNumber varchar(15) NOT NULL,
    AddressTypeID varchar(6) NOT NULL,
    AddressName varchar(20) NOT NULL,
    CONSTRAINT PK_StreetNumberPrefixes PRIMARY KEY CLUSTERED
        (StreetName, StreetTypeID, StreetDirectionID, Locality, StateID, Postcode, StreetNumber, AddressTypeID, AddressName),
    CONSTRAINT FK_Addresses_StreetNumbers FOREIGN KEY
        (StreetName, StreetTypeID, StreetDirectionID, Locality, StateID, Postcode, StreetNumber)
        REFERENCES dbo.StreetNumbers (StreetName, StreetTypeID, StreetDirectionID, Locality, StateID, Postcode, StreetNumber)
)
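To make the alternative concrete, this is the sort of change I am contemplating for the Addresses table: keep the natural key as a unique constraint, but let other tables reference a narrow surrogate instead. It is only a sketch, not applied anywhere yet:

CREATE TABLE dbo.Addresses (
    AddressID         int IDENTITY (1, 1) NOT NULL,   -- surrogate key for foreign keys elsewhere
    StreetName        varchar(35) NOT NULL,
    StreetTypeID      varchar(10) NOT NULL,
    StreetDirectionID varchar(2) NOT NULL,
    Locality          varchar(46) NOT NULL,
    StateID           varchar(3) NOT NULL,
    Postcode          char(4) NOT NULL,
    StreetNumber      varchar(15) NOT NULL,
    AddressTypeID     varchar(6) NOT NULL,
    AddressName       varchar(20) NOT NULL,
    CONSTRAINT PK_Addresses PRIMARY KEY CLUSTERED (AddressID),
    CONSTRAINT UQ_Addresses_NaturalKey UNIQUE
        (StreetName, StreetTypeID, StreetDirectionID, Locality, StateID,
         Postcode, StreetNumber, AddressTypeID, AddressName),
    CONSTRAINT FK_Addresses_StreetNumbers FOREIGN KEY
        (StreetName, StreetTypeID, StreetDirectionID, Locality, StateID, Postcode, StreetNumber)
        REFERENCES dbo.StreetNumbers (StreetName, StreetTypeID, StreetDirectionID, Locality, StateID, Postcode, StreetNumber)
)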
Requirement: I want to create another table based on the column values. For example, I have to take the employee ID and check what values it has under the OWNS column in the table. I take only three of those values, and these values should go into the newly created columns (Owns1, Owns2, Owns3). If there is no value for any of these columns, it should be loaded with NULL. The result of the modification should look like this:
Note: even though employee ID 3 owns more than three things, we only take three of what he owns and populate the columns above. In addition, the OWNS column will have more than 500 different values in it.
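In case it helps, this is a rough sketch of what I am trying to describe, assuming SQL Server 2005 or later and a source table dbo.EmployeeOwns (EmployeeID, Owns); all names are made up:

;WITH ranked AS (
    SELECT EmployeeID,
           Owns,
           ROW_NUMBER() OVER (PARTITION BY EmployeeID ORDER BY Owns) AS rn
    FROM dbo.EmployeeOwns
)
SELECT EmployeeID,
       MAX(CASE WHEN rn = 1 THEN Owns END) AS Owns1,   -- NULL when the employee owns fewer than one/two/three things
       MAX(CASE WHEN rn = 2 THEN Owns END) AS Owns2,
       MAX(CASE WHEN rn = 3 THEN Owns END) AS Owns3
INTO dbo.EmployeeOwnsWide
FROM ranked
GROUP BY EmployeeID;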
It's kind of urgent, so if anyone knows how to do this, can you please help me? Thanks a lot. -- Ragulan ;)
Dear all, what are the different types of joins available in SQL Server 6.0, 7.0, 2000, 2005, and now 2008?
The reason behind the question is:
I've used the Swiss SQL tool to boost the performance of my join queries, and the tool suggested what it considers the best options. Those work fine against the database but fail at the application level.
I found that the reason is the Loop join, Hash join, and Merge join.
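To show what I mean, the tool rewrites queries along these lines, forcing a physical join type with a query hint (the table names here are just an example):

SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o
INNER JOIN dbo.Customers AS c
    ON c.CustomerID = o.CustomerID
OPTION (HASH JOIN);   -- could equally be LOOP JOIN or MERGE JOIN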
Your valuable suggestions are needed.
Thank you very much.
Vinod. Even if you learn 1%, learn it with 100% confidence.
Can we create custom object-wide roles? In the same manner that db_datareader in effect grants SELECT on all tables, can we create roles that affect all objects without having to explicitly grant the permission on every object?
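For example (SQL Server 2005 and later), I am wondering whether something along these lines is the supported way to do it; the role name is arbitrary:

CREATE ROLE db_customreader;

-- covers every current and future object in the dbo schema
GRANT SELECT ON SCHEMA::dbo TO db_customreader;

-- or, broader still, a database-level grant
-- GRANT SELECT TO db_customreader;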
Hi, I have an ASP.NET application that uses VARCHAR extensively in its tables and, more importantly, its stored procedures (a couple hundred of them).
This app needs to start accepting foreign-language text in some areas, so I was wondering if there is some way to go through the tables and, more importantly, the stored procedures and change all VARCHAR references to NVARCHAR.
Are the stored procedures stored as text files somewhere on the server? If so, I could use some sort of find-and-replace utility to go through and change all VARCHAR to NVARCHAR.
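A sketch of one way I thought of to at least find the affected procedures, assuming the definitions can be read from the system catalog (SQL Server 2005+ views):

SELECT SCHEMA_NAME(o.schema_id) AS schema_name,
       o.name                   AS procedure_name
FROM sys.sql_modules AS m
JOIN sys.objects AS o
    ON o.object_id = m.object_id
WHERE o.type = 'P'                          -- stored procedures only
  AND m.definition LIKE '%VARCHAR%';        -- note: also matches NVARCHAR, so hits still need review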
I have a wide fact table that I'm feeding to an SSAS cube. I was advised that splitting the measure group into two will improve performance when querying the cube.
I cannot find any documentation that supports this; in fact, I get a blue curved line suggesting that I merge the measure groups since they have the same dimensionality and granularity.
I guess the best practice is what the blue line states, but without knowing the internals of SSAS, I can understand that a smaller measure group may be easier to handle, or to create more specific aggregations for.
declare @x varbinary(128)
select @x = convert(varbinary(128), 'Some user name')
set context_info @x
Then in a trigger, you can say:
UPDATE t
SET who_was_kilroy = convert(varchar(128), p.context_info)
FROM tbl t
JOIN inserted i ON i.keycol = t.keycol
CROSS JOIN master.dbo.sysprocesses p
WHERE p.spid = @@spid
I am searching for a more generic way to store and access connection-wide variables (varbinary(128) is not big enough).
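One idea I am considering, sketched below: a small helper table keyed on @@SPID instead of CONTEXT_INFO. All object names are made up, nvarchar(max) assumes SQL Server 2005 or later, and stale rows need cleaning up because SPIDs get reused:

CREATE TABLE dbo.SessionVariables (
    spid      int           NOT NULL,
    var_name  sysname       NOT NULL,
    var_value nvarchar(max) NULL,
    CONSTRAINT PK_SessionVariables PRIMARY KEY (spid, var_name)
);
GO

-- writer side: the application sets its "connection variable" after connecting
DELETE dbo.SessionVariables WHERE spid = @@SPID;
INSERT dbo.SessionVariables (spid, var_name, var_value)
VALUES (@@SPID, N'who_was_kilroy', N'Some user name');

-- reader side: a trigger (or any other code on the same connection) picks it up
SELECT var_value
FROM dbo.SessionVariables
WHERE spid = @@SPID
  AND var_name = N'who_was_kilroy';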
I have added several Active Directory groups and set the system role for each to "System User", and set one of the groups (DBAdmin) to "System Administrator".
My issue is that even after doing this, the users in the other groups are able to access the "Configure site-wide security" link under Security and change the permissions. The only system permission these users have is "View shared schedules", so it doesn't seem that this should be possible.
I would appreciate any feedback on this issue. Thanks!
I'm building a system that imports data from several sources (Excel files, text files, Access databases, etc.) using DTS. The entire process revolves around MS SQL Server, by the way.
I figured I would create denormalized tables that mirror the Excel and flat files in structure, import data into those, clean up and remove duplicates there, then break those out into my normalized table structure later.
Now I've finished the importing part (though this is going to happen once a week) and I'm on to breaking up the denormalized tables.
I'm hesitating because I'm not sure I've made the best decisions in terms of process, etc.
I've decided to use cursors to loop over the denormalized tables and use batch insert statements to push data out to the appropriate tables.
Any comments? Suggestions? All is welcome.
I'm specifically interested in hearing back on the way I've set up the intermediate, denormalized tables and how I'm breaking them up using cursors (step 2 of the process below). Still, all comments are welcome. As are suggestions for further reading.
Thanks again...
Simplified example (my denormalized tables are 20-30 columns wide):
denormalized table:
===================
name, address, city, state, cellphone, homephone
I'm breaking up the denormalized tables like this (*UNTESTED*): =================================================
-- one variable per column in the normalized structure; the sizes below are
-- placeholders and should match the actual normalized schema
DECLARE @name varchar(100), @address varchar(100), @city varchar(50),
        @state varchar(50), @cellphone varchar(20), @homephone varchar(20)
DECLARE @personID int

DECLARE myCursor CURSOR FAST_FORWARD FOR
    SELECT name, address, city, state, cellphone, homephone
    FROM _DNT_myWideTable

OPEN myCursor

-- grab the first row from the wide table
FETCH NEXT FROM myCursor INTO @name, @address, @city, @state, @cellphone, @homephone

WHILE @@FETCH_STATUS = 0
BEGIN
    -- create the person first and get the ID
    INSERT INTO tblPerson (name) VALUES (@name)

    SET @personID = @@IDENTITY   -- SCOPE_IDENTITY() is safer if triggers are involved

    -- use that ID to coordinate inserts across the other tables
    INSERT INTO tblAddress (FK_person, address, city, state, addressType)
    VALUES (@personID, @address, @city, @state, 'HOME')

    INSERT INTO tblContact (FK_person, data, contactType)
    VALUES (@personID, @cellphone, 'CELLPHONE')

    INSERT INTO tblContact (FK_person, data, contactType)
    VALUES (@personID, @homephone, 'HOMEPHONE')

    -- next row
    FETCH NEXT FROM myCursor INTO @name, @address, @city, @state, @cellphone, @homephone
END

CLOSE myCursor
DEALLOCATE myCursor
When rendered, the pages with the small tables have a lot of blank white space to the right of the table. This is probably caused by the big table on page 3.
This report is distributed by email in Excel format, so on sheets 1 and 2 there are a lot of empty cells to the right of the tables. When printing, the users just want to use the "landscape" and "fit to page" options. Because of the empty cells, the fit-to-page option shrinks the first two tables down to cover only 50% of the page width; the other 50% is reserved for the empty cells.
Of course, I know that deleting the empty cells offers a solution, but it would be a lot handier if there were no empty cells in the first place.
It's like this: I have a temporary table, such as

create table temp_table (str varchar(50))

and I have a data table

create table data_table (str varchar(20))
Now I import my data (which contains some corrupted lines) into the temporary table. The strings should all be ANSI-character strings of no more than 20 characters, but some bad rows containing wide characters are mixed in. Because of these wide characters, the corrupted strings each take more than 20 bytes, yet I can't filter them out: when I use len(str), SQL Server returns the character count instead of the byte count. I thought this only happened with Unicode data types (e.g. nvarchar), but the server also behaves like this with ANSI data types. It seems all the string functions that deal with string length behave this way.
So when I try to run:

insert into data_table select str from temp_table where len(str) <= 20

or

insert into data_table select left(str, 20) from temp_table
it always ends with a truncation error: "String or binary data would be truncated. The statement has been terminated."
So my problem is: how do I get the count of bytes, rather than characters, of a string containing wide characters?
I'm using SQL Server 2005 Standard Edition with SP2.
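For what it's worth, the filter I am effectively trying to write would look something like the sketch below. I believe DATALENGTH counts bytes rather than characters, though I have not yet verified it against my corrupted rows:

-- keep only rows whose byte length fits in the varchar(20) target column
INSERT INTO data_table (str)
SELECT str
FROM temp_table
WHERE DATALENGTH(str) <= 20;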
Hi, I'm trying to create a VERY wide table with 1,000 columns of type varchar(MAX), nullable. The CREATE TABLE statement (in both SQL 2005 and 2008) gives the following warning:
Warning: The table "WIDE_TABLE" has been created, but its maximum row size exceeds the allowed maximum of 8060 bytes. INSERT or UPDATE to this table will fail if the resulting row exceeds the size limit.
When I insert data into the table, filling all columns with small, 10-byte string values, I get the following error:
Msg 50000, Level 16, State 1, Procedure sp_pivot, Line 118
Cannot create a row of size 15034 which is greater than the allowable maximum of 8060.
I'd like to verify this observation: each row is created with 2,000 bytes of offset data (2 bytes * 1,000 columns), 125 bytes for the null bitmap (1,000 columns / 8 bits), and some more "wasted" row information. This leaves less than 6K for the data itself. But since not all columns can fit within the page, row-overflow pointers need to be created, at 24 bytes per column, which very quickly add up to more than 8K, hence the error. So the 8K limit is hit with far fewer columns than the 1,024-column maximum.
Furthermore, in SQL 2008, SPARSE columns will not solve the problem (they may save some "metadata" space when the columns are null, but when they are not, I'm back to the same problem, or even a worse one, since each value then takes more storage space). The maximum of 30,000 columns in 2008 is only for cases where the column values really are sparse.
Is this the right observation? If so, is there a workaround besides splitting into multiple tables?
We can easily load a file into database tables. However, my main concern here is the number of columns in the file. A text file, TEXT_1400.txt, has 1,400 columns. I am unable to load the data into my database table using BCP or BULK INSERT commands, as a maximum of 1,024 columns are allowed per table in SQL Server 2008.
We can still go ahead and create a "wide table" (a special table that holds up to 30,000 columns; the maximum size of a wide table row is 8,019 bytes). But when operating on the wide table, BCP/BULK INSERT commands still fail. After a few hours of scratching my head over BCP and BULK INSERT, I observed that while inserting, BCP/BULK INSERT are unable to identify SPARSE columns and skip them, which disturbs the column mapping and results in data conversion and truncation errors. Is there a proper way to load this kind of file into a database table?
I have a very wide report of more than 20 inches. I've placed several parameter values in the report header section so that the user can see what filters have been applied to the data. The textboxes shift their position several inches to the right when the report is run from Report Manager.
Is there a way to make sure that a textbox is displayed at an absolute position? I thought maybe there would be a property on the report or body object that controls this but I don't see one.
My server is a dual AMD x64 2.19 GHz with 8 GB RAM running Windows Server 2003 Enterprise Edition with Service Pack 1 installed. We have SQL Server 2000 32-bit Enterprise installed in the default instance. AWE is enabled, using dynamically configured SQL Server memory with a 6215 MB minimum and a 6656 MB maximum.
I have now installed, side by side, SQL Server 2005 Enterprise Edition in a separate named instance. Everything is running fine, but I believe SQL Server 2005 could run faster, and I need to ensure I am giving it plenty of resources. I realize AWE is not needed with SQL Server 2005, and I have seen suggestions to grant the SQL Server account the 'Lock pages in memory' right. This box only runs the SQL 2000 and SQL 2005 database engines, and I would like to ensure, if possible, that each gets an equal share of the available memory, at least until we can retire SQL Server 2000 next year. Any suggestions?
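What I had in mind is capping each instance with 'max server memory' so the two engines cannot starve each other. The sketch below uses illustrative values only (roughly 3 GB each, leaving the rest for the OS) and has not been tested on this box:

-- run against each instance, with values tuned per instance (numbers are illustrative)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 3072;
RECONFIGURE;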
We have an old machine which holds a SQL Server 2000 database. We need to migrate the whole database to a new machine which has SQL Server 2005.
When we tried to move the whole database using the Import and Export Wizard, only tables could be selected to import/export. However, we want to move the whole database, including tables, stored procedures, views, etc. Which tool should we use?
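One route we are considering is a plain backup/restore rather than the wizard, roughly like the sketch below. The database name, paths, and logical file names are placeholders, and the database would be upgraded in place when restored on the 2005 instance:

-- on the old SQL Server 2000 machine
BACKUP DATABASE MyAppDB TO DISK = N'C:\Backups\MyAppDB.bak';

-- on the new SQL Server 2005 machine (after copying the .bak file across)
RESTORE DATABASE MyAppDB
FROM DISK = N'C:\Backups\MyAppDB.bak'
WITH MOVE 'MyAppDB_Data' TO N'D:\Data\MyAppDB.mdf',   -- logical names are placeholders;
     MOVE 'MyAppDB_Log'  TO N'D:\Data\MyAppDB.ldf';   -- check them with RESTORE FILELISTONLY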
When I proposed that we start using SQL Server 2005 for new VS 2005 web sites, one of my co-workers responded that we will upgrade the old SQL Server 2000 databases to SQL Server 2005 when we are ready to use SQL Server 2005.
Questions:
1. Any expected problems upgrading old 2000 databases to SQL Server 2005?
2. I have installed both 2005 Management Studio Express and 2000 Enterprise Manager on my PC. Any expected problems running both SQL Server 2000 and 2005 on the same database server?
3. What is the best configuration for running SQL Server 2005 when we have old 2000 databases: upgrade or not upgrade?