SQL Server 2008 :: Return From Script Instead Of Drop Existing For DDL Scripts?
Mar 25, 2015
Let's say I have a function named MyCustomFunction and I want to ensure that it exists in the database. Let's say I have a create script for the function. I want the script to be runnable multiple times if needed. A common way to do this is to check for the object_id at the top of the script, like this:
IF OBJECT_ID('MyCustomFunction') IS NOT NULL
DROP FUNCTION MyCustomFunction
GO
CREATE FUNCTION MyCustomFunction...
But is there a more elegant way to do this? For example, instead of dropping and recreating the function, is there a way to simply exit from the script and do nothing if the function already exists? Something like this:
IF OBJECT_ID('MyCustomFunction') IS NOT NULL RETURN
GO
CREATE FUNCTION MyCustomFunction...
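Worth noting: RETURN only ends the current batch, and GO starts a new one, so the CREATE in the following batch would still run and fail if the function already exists. A minimal sketch of one common workaround is to guard the CREATE inside dynamic SQL instead (the function body below is just a placeholder):
IF OBJECT_ID('dbo.MyCustomFunction') IS NULL
    EXEC ('CREATE FUNCTION dbo.MyCustomFunction() RETURNS INT AS BEGIN RETURN 1 END');
GO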
I have a legacy table whose name contains a non-printable character (CHAR(31), to be more specific). The non-printable character sits beside an underscore, and I've discovered that the shortcut CTRL+SHIFT+_ produces the CHAR(31) character (which means "US", Unit Separator). The previous developer must have hit this combination by mistake and created the table with this weird character in it.
When we issue a SELECT command against the table, it returns results. But when we try to issue any DDL to it (DROP, sp_rename, etc), it simply doesn't work.
Examples: DROP Table_Name; raises "Msg 15225 - No item by the name of 'Table_Name' could be found in the current database 'MyDB', given that @itemtype was input as '(null)'". sp_rename 'Table_Name', 'NewTableName'; raises "Msg 102 - Incorrect syntax near '_Name'".
I already duplicated the table with the correct name and have corrected it in the referenced objects (SPs, views, etc.). The remaining step is just dropping it from the database. When we copy+paste from SQL Server to Notepad++, it shows the hidden character ("US") in the middle of the table name, beside the underscore.
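A sketch of one way to drop it: look the name up from the catalog rather than typing it, then build the DROP dynamically so the CHAR(31) is preserved exactly (this assumes only one table name contains the character):
DECLARE @badName SYSNAME, @sql NVARCHAR(MAX);

SELECT @badName = name
FROM   sys.tables
WHERE  name LIKE '%' + CHAR(31) + '%';   -- locate the table containing the US character

SET @sql = N'DROP TABLE ' + QUOTENAME(@badName) + N';';
EXEC sp_executesql @sql;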
Can we create a partition on an existing table? E.g.: Create table t ( col1 number(10,0), Col2 Varchar(10)); After the table creation, can we alter the table to partition it?
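There is no single "ALTER TABLE ... PARTITION" statement; the usual route is to create a partition function and scheme and then create (or rebuild) the clustered index on that scheme, which moves the existing rows into the partitions. A minimal sketch with illustrative boundary values and names:
CREATE PARTITION FUNCTION pf_col1 (INT)
    AS RANGE RIGHT FOR VALUES (1000, 2000, 3000);

CREATE PARTITION SCHEME ps_col1
    AS PARTITION pf_col1 ALL TO ([PRIMARY]);

-- Creating/rebuilding the clustered index on the scheme repartitions the existing data.
CREATE CLUSTERED INDEX cix_t_col1
    ON dbo.t (col1)
    ON ps_col1 (col1);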
This table takes up almost 80% of my database size, and the information in this table just captures the time spent by a user on the website (not very critical data).
I would like to know how to delete the entire index (which is what is occupying most space) to free up disk space. The index is a clustered index.
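For reference, a plain DROP INDEX would look like the sketch below (names are illustrative); bear in mind that a clustered index is the table data itself, so dropping it leaves the rows behind as a heap rather than reclaiming their space.
DROP INDEX IX_WebTimeLog_Clustered ON dbo.WebTimeLog;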
Below is the XML column data. How do I get the Id and respective Names for the "Case Manager" dropdown ONLY in SQL Server 2008? I don't want to get anything related to the "Intake Staff" dropdown.
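Without the actual XML reproduced here, any query is only a sketch; assuming a shape along the lines of <Dropdowns><Dropdown Name="Case Manager"><Item Id="1" Name="..."/></Dropdown></Dropdowns>, something like this filters to the one dropdown (adjust the paths and the table/column names to the real structure):
SELECT  itm.value('@Id',   'int')          AS Id,
        itm.value('@Name', 'varchar(100)') AS Name
FROM    dbo.MyTable AS t
CROSS APPLY t.XmlCol.nodes('/Dropdowns/Dropdown[@Name="Case Manager"]/Item') AS x(itm);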
I am splitting data from a SQL table and sending it to an Excel file, but every time I rerun the package it appends to the existing data in the Excel file. I tried using an Execute SQL Task with the Excel connection to run drop table `tablename`, and then one more Execute SQL Task with create table `tablename` (`Id` int, `fname` varchar(100)), but it does not seem to work.
I have a scenario where I need to add a blank column to a table that is a publisher. This table contains over 100 million records. What is the best way to add the column? In the past, when I had to make an update, it broke replication because the update would take forever; jobs are continuously updating the table, so replication can't catch up.
If I alter a table and add a column, would this column automatically get picked up in replication?
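As a sketch, the ALTER itself can be cheap: adding a NULLable column with no default is a metadata-only change, so the 100 million rows are not rewritten (the name and type below are illustrative). Whether it flows to subscribers depends on the publication's replicate-DDL setting, which is on by default as far as I know.
ALTER TABLE dbo.BigPublishedTable
    ADD NewColumn VARCHAR(50) NULL;   -- nullable, no default: metadata-only change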
We have an existing BI/DW process that adds large chunks of data daily (~10M rows) to an existing table, as well as using Deletes to remove stale data. This scenario seems to beg for partitioning to support switching in/out data.
After lots of reading on this, I have figured out the mechanics of the switching, but I still have some unknowns about the indexes needed to support this.
The table currently has several non-clustered indexes, including one on the partitioning column - let's call that column snapshotdate. Fortunately there are no FKs involved, and no constraints.
Most of the partitioning material I see focuses on creating a clustered PK to assist with switching. Not sure if this is actually necessary, but assume I create one using an Identity column (currently missing) plus snapshotdate.
For the other non-clustered, non-unique indexes, can I just add the snapshotdate to the end of the index? i.e. will that satisfy the switching requirement?
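A sketch of what aligning one of the existing non-clustered indexes might look like, assuming a partition scheme named ps_Snapshot built on snapshotdate (index, table, and column names are illustrative). As far as I recall, for a non-unique index the partitioning column does not have to be added to the key at all; SQL Server includes it automatically, and what SWITCH actually requires is that every index be created on the same partition scheme:
CREATE NONCLUSTERED INDEX IX_Fact_CustomerKey
    ON dbo.FactTable (CustomerKey)
    WITH (DROP_EXISTING = ON)
    ON ps_Snapshot (snapshotdate);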
We have a SQL 2008 Instance existing on active/passive cluster with 2 nodes running on Windows server 2008 R2 Ent. Edn.
Now we need to install another SQL instance on this cluster. So what are the prerequisites apart from a new IP address? Do we need new shared disks, or can the existing disks be utilized?
How can I find calls to objects which do not exist from stored procedures and functions? We have many stored procedures, and sometimes a stored procedure or function which is called does not exist. Is there a query/script or something with which I can identify which stored procedures do not 'work' and which procedure/function they are calling? I am searching for stored procedures and functions which are still called, but do not exist in the current database.
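A sketch using sys.sql_expression_dependencies: rows where a referenced name was recorded but referenced_id could not be resolved are calls to objects that do not exist (object-level dependencies only; anything built in dynamic SQL won't show up here):
SELECT  OBJECT_NAME(d.referencing_id) AS CallingObject,
        d.referenced_entity_name      AS MissingObject
FROM    sys.sql_expression_dependencies AS d
WHERE   d.referenced_id IS NULL
        AND d.is_ambiguous = 0;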
I have some huge tables (think 200+ GB for a single table) which are excellent candidates for sparse columns. The tables have many columns which are defined with decimal datatypes (13,2), with a large percentage of them (over 50% in most cases, some as much as 99%) being 0.00. Since this is very expensive in terms of storage, my idea is to set all the 0.00 values equal to NULL and then mark the columns as sparse. Across 100 or so identical databases, I have 5 such tables, with 20-40 columns in each table.
1.) Three steps for each column in each table in each db (a T-SQL sketch of this option follows the list below):
Step 1: alter the column to allow nulls
Step 2: update table set column = null where column = 0.00
Step 3: alter the column to mark it as sparse
2.)
Step 1: Create entirely new table with sparse column definitions
Step 2: copy entire table, transforming 0.00 to null for affected columns via SSIS
Step 3: drop original table, rename new table to original name
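For option 1, a minimal sketch of the three steps for a single column (table and column names are illustrative; the UPDATE on a 200 GB table would likely need batching):
-- Step 1: allow NULLs
ALTER TABLE dbo.BigFactTable ALTER COLUMN Amount01 DECIMAL(13,2) NULL;

-- Step 2: turn the zeros into NULLs
UPDATE dbo.BigFactTable SET Amount01 = NULL WHERE Amount01 = 0.00;

-- Step 3: mark the column as sparse
ALTER TABLE dbo.BigFactTable ALTER COLUMN Amount01 DECIMAL(13,2) SPARSE NULL;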
hello!! I am using a couple of temp tables for a select statement. I need to drop the tables only if they exist before using them, because if I don't drop them, then I will get this error:
There is already an object named '#Tmp01' in the database.
and if I try to drop the tables the first time I run the query, then I will get an error saying that I cannot drop a table that doesn't exist.
I have tried using this statement:
IF EXISTS (SELECT * FROM sysobjects WHERE type = 'U' AND name = '#Tmp01') DROP TABLE #Tmp01
IF EXISTS (SELECT * FROM sysobjects WHERE type = 'U' AND name = '#Tmp02') DROP TABLE #Tmp02
IF EXISTS (SELECT * FROM sysobjects WHERE type = 'U' AND name = '#Tmp03') DROP TABLE #Tmp03
but it only seems to work with normal tables, since temp tables are not found in sysobjects.
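Temp tables live in tempdb, so one common pattern (a sketch, using the same table names as above) is to check OBJECT_ID against tempdb instead:
IF OBJECT_ID('tempdb..#Tmp01') IS NOT NULL DROP TABLE #Tmp01
IF OBJECT_ID('tempdb..#Tmp02') IS NOT NULL DROP TABLE #Tmp02
IF OBJECT_ID('tempdb..#Tmp03') IS NOT NULL DROP TABLE #Tmp03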
I need to recover some data in a table, but I'm not 100% sure of the right way to do this safely.
I'll need to query the two tables to compare the before and after, but how do I go about restoring/attaching the backup database to SQL Server without causing conflicts?
If I restore, I assume this would just overwrite, which is obviously the worst thing that could happen. If I attach the backup, how does this affect the current live DB? How do I make sure that it's not getting accessed and mistaken for the live DB?
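One option is to restore the backup side by side under a different database name, so the live database is never touched; a sketch with illustrative paths and logical file names (RESTORE FILELISTONLY against the backup shows the real logical names):
RESTORE DATABASE MyDB_Restored
FROM DISK = N'D:\Backups\MyDB.bak'
WITH MOVE N'MyDB_Data' TO N'D:\Data\MyDB_Restored.mdf',
     MOVE N'MyDB_Log'  TO N'D:\Data\MyDB_Restored.ldf',
     RECOVERY;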
I am unable to drop the above login. The error is Error 15141: "The server principal owns an event notification and cannot be dropped."
I have looked in the table sys.server_event_notifications and there are rows returned that have a server_principal_id of the user I am trying to drop. No events have been created on this server, so I am assuming these notifications are either created by default or are somehow related to Database Mail?
All the event notifications belong to service name "SQL/Notifications/ProcessWMIEventProviderNotification/v1.0" and begin with SQLWEP (i.e. SQLWEP_RECHECK_SUBSCRIPTIONS, SQLWEP_B415ADB8_A604_4057_976F_600002FA5AF6). How can I find out what these are for and how they were created?
What purpose do these event notifications have? Is there some syntax to change the owner of these event notifications so I can successfully drop the login? If the only way is to directly update the system view, is this safe and what repercussions could this have?
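I can't say what depends on the SQLWEP notifications, but a low-risk first step is to generate (not run) the DROP statements for them, using the naming pattern mentioned above, so you can see exactly what would have to go before the login can be dropped; a sketch:
SELECT 'DROP EVENT NOTIFICATION ' + QUOTENAME(name) + ' ON SERVER;'
FROM   sys.server_event_notifications
WHERE  name LIKE 'SQLWEP%';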
I have multiple databases on the server, and all my databases have the tables stdVersions and stdChangeLog. The stdVersions table has a field called DatabaseVersion which stores the version of the database. The stdChangeLog table has a field called ChangedOn which stores the date of any change made in the database.
I need to write a query/stored procedure/function that will return all the database names, version and the date changed on. The results should look something like this:
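A sketch using the undocumented but widely used sp_MSforeachdb, assuming every database that has these tables uses exactly the column names above:
CREATE TABLE #Results (DbName SYSNAME, DatabaseVersion VARCHAR(50), LastChangedOn DATETIME);

EXEC sp_MSforeachdb N'
IF OBJECT_ID(''?.dbo.stdVersions'') IS NOT NULL
    INSERT INTO #Results (DbName, DatabaseVersion, LastChangedOn)
    SELECT ''?'',
           (SELECT TOP (1) DatabaseVersion FROM [?].dbo.stdVersions),
           (SELECT MAX(ChangedOn) FROM [?].dbo.stdChangeLog);';

SELECT * FROM #Results ORDER BY DbName;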
I'm creating a SQL stored procedure. Inside, this proc returns some information about the user, i.e. location, logged in, last logged in, etc. I need to join this onto the photos table and return the photo which has been set as the profile picture; if one hasn't been set, then return the first (top 1), if that makes sense.
The user has the option to upload photos so there might be no photos for a particular user, which I believe I can fix by using a left join
My photos table is constructed as follows:
CREATE TABLE [User].[User_Photos](
    [Id] [bigint] IDENTITY(1,1) NOT NULL,
    [UserId] [bigint] NOT NULL,
    [PhotoId] [varchar](100) NOT NULL,
    [IsProfilePic] [bit] NULL,
[Code] ....
Currently, as it stands, the proc runs but it doesn't return a particular user because they haven't uploaded a photo, so I need to somehow tweak the above to return null if a photo isn't present, which is where I'm stuck.
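A sketch of just the photo part using OUTER APPLY, which returns NULLs when the user has no photos and prefers the profile picture when one is flagged (the [User].[Users] table name here is only a placeholder for the real users table):
SELECT  u.UserId,
        p.PhotoId
FROM    [User].[Users] AS u
OUTER APPLY (
    SELECT TOP (1) up.PhotoId
    FROM   [User].[User_Photos] AS up
    WHERE  up.UserId = u.UserId
    ORDER BY up.IsProfilePic DESC, up.Id ASC   -- profile pic first, otherwise earliest upload
) AS p;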
As you can see, I have a uniqueidentifier (UniqueId) column which I populate with NEWID(). I'm trying to return this as I need it for other functionality of the website, but I can't figure out how I can get it after the insert has completed.
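A sketch using the OUTPUT clause, which hands back the value the insert actually generated (this assumes UniqueId sits in the elided part of the table definition; the other column names and parameters are illustrative):
INSERT INTO [User].[User_Photos] (UserId, PhotoId, IsProfilePic, UniqueId)
OUTPUT INSERTED.UniqueId
VALUES (@UserId, @PhotoId, 0, NEWID());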
I have several databases to deal with, all with + 250 tables. The databases are not identical and do not conform to a specific naming convention for table names. Most but not all tables have a column called "LastUpdated" containing a date/time (obviously). I'd like to be able to find all rows within a whole database (table by table) where the date/time is greater than a specified date/time.
I'm looking for a reliable query that will return all the rows in each of the tables but without me having to write hundreds of individual scripts "SELECT * FROM [dbo.xyz] WHERE LastUpdated > '2015-01-01 09:00:00:000'", or have to look through each table first to determine which of them has the LastUpdated field.
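One generic approach is to generate the per-table SELECTs from the catalog so only tables that actually have a LastUpdated column are touched; a sketch that just prints the statements for review:
DECLARE @cutoff DATETIME = '2015-01-01T09:00:00';

SELECT 'SELECT ''' + s.name + '.' + t.name + ''' AS TableName, * FROM '
     + QUOTENAME(s.name) + '.' + QUOTENAME(t.name)
     + ' WHERE LastUpdated > ''' + CONVERT(VARCHAR(23), @cutoff, 126) + ''';'
FROM sys.tables  AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
JOIN sys.columns AS c ON c.object_id = t.object_id AND c.name = 'LastUpdated';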
My requirements are to return the sales for each customer for each store, for the past 6 weeks.
The catch is that I only want those customers which have had sales over 10,000 within the last week.
This is my query:
WITH SET [CustomerList] AS
    FILTER(
        ([Store].[Store Name].[Store Name], [Customer].[Customer Name].[Customer Name]),
        [Measures].[Sales] >= 10000
        AND [Date].[Fiscal Week Name].&[2015-01-26 to 2015-02-01]
I'm in the process of importing a series of flat files into SQL Server. I'm using a ~ to separate the columns and the row delimiter is {CR}{LF}. One of the files has a field that contains the CRLF combination in a few places so that field is split over several rows. This is readily visible when I look at the flat file. However, when I'm importing the file, the Import and Export wizard seems to ignore them and import the files as they should with one row per record.
I have a table where hours are being loaded on a weekly basis. The YearWeek is populated when the data is loaded. The value format of the YearWeek is 2015-39, 2015-41, etc. I need to calculate the total hours per fiscal year. For example, week '2015-39' will return FY15 and week '2015-41' will return FY16, and so on. By extracting the year, I can do a group by and have total hours for each year.
Currently, I have it working by splitting the value into year and week and then looping through each year and week, so I can assign the totals to the corresponding FY: select sum(hours) as total, yearweek from tablename group by yearweek ... Then I loop through using C#. I can return the FY using an actual date; how can I do it for the year-week format for any given year?
SELECT CASE WHEN CAST(GETDATE() AS DATE) > SMALLDATETIMEFROMPARTS(DATEPART(YEAR, GETDATE()), 09, 30, 00, 000)
            THEN DATEPART(YEAR, GETDATE()) + 1
            ELSE DATEPART(YEAR, GETDATE())
       END AS FY
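A sketch that derives the FY straight from the 'YYYY-WW' string instead of a date, assuming the fiscal boundary falls between week 39 and week 40 (which matches 2015-39 going to FY15 and 2015-41 to FY16; verify the cutoff week against the real fiscal calendar):
SELECT  CASE WHEN CAST(RIGHT(yearweek, 2) AS INT) >= 40
             THEN CAST(LEFT(yearweek, 4) AS INT) + 1
             ELSE CAST(LEFT(yearweek, 4) AS INT)
        END        AS FY,
        SUM(hours) AS total
FROM    tablename
GROUP BY CASE WHEN CAST(RIGHT(yearweek, 2) AS INT) >= 40
              THEN CAST(LEFT(yearweek, 4) AS INT) + 1
              ELSE CAST(LEFT(yearweek, 4) AS INT)
         END;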
Query:
SELECT m.MemberID, vw.Category, vw.Type
FROM dbo.TestVW vw
JOIN dbo.TestMember m ON vw.MemberKey = m.MemberKey
WHERE vw.Type = 'test'
GROUP BY m.MemberID,
[Code] ...
but cannot seem to be able to return one record with its corresponding value criteria.
I recently updated the datatype of a sproc parameter from bit to tinyint. When I executed the sproc with the updated parameters the sproc appeared to succeed and returned "1 row(s) affected" in the console. However, the update triggered by the sproc did not actually work.
The table column was a bit which only allows 0 or 1 and the sproc was passing a value of 2 so the table was rejecting this value. However, the sproc did not return an error and appeared to return success. So is there a way to configure the database or sproc to return an error message when this type of error occurs?
The only way I know to add a new column to an existing mapping is to go to the Advanced Editor and refresh. This, however, keeps only the default mapping (where the field names match); the rest is wiped out, so I need to restore the mapping manually after that. Risky and annoying at the same time. Is there any alternative?
Hi, I found this SQL in the newsgroup to drop indexes in a table. I need a script that will drop all indexes in all user tables of a given database:
DECLARE @indexName NVARCHAR(128)
DECLARE @dropIndexSql NVARCHAR(4000)

DECLARE tableIndexes CURSOR FOR
SELECT name FROM sysindexes
WHERE id = OBJECT_ID(N'F_BI_Registration_Tracking_Summary')
AND indid > 0
AND indid < 255
AND INDEXPROPERTY(id, name, 'IsStatistics') = 0

OPEN tableIndexes
FETCH NEXT FROM tableIndexes INTO @indexName
WHILE @@fetch_status = 0
BEGIN
    SET @dropIndexSql = N' DROP INDEX F_BI_Registration_Tracking_Summary.' + @indexName
    EXEC sp_executesql @dropIndexSql
    FETCH NEXT FROM tableIndexes INTO @indexName
END
CLOSE tableIndexes
DEALLOCATE tableIndexes

TIA
Rob
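For the "all user tables" part, a sketch that swaps the old sysindexes view for the newer catalog views and simply generates the DROP statements so they can be reviewed before running (primary keys and unique constraints are skipped because they have to be removed with ALTER TABLE ... DROP CONSTRAINT instead):
DECLARE @sql NVARCHAR(MAX) = N'';

SELECT @sql = @sql + N'DROP INDEX ' + QUOTENAME(i.name)
            + N' ON ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name) + N';' + CHAR(13)
FROM sys.indexes AS i
JOIN sys.tables  AS t ON t.object_id = i.object_id
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE i.type > 0                    -- skip heaps
  AND i.is_primary_key = 0
  AND i.is_unique_constraint = 0
  AND t.is_ms_shipped = 0;

PRINT @sql;                         -- review the generated statements first
-- EXEC sp_executesql @sql;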
I went to look at the connection string previously entered for a dataset created in a new report, and am not seeing anything intuitive for bringing up the associated data source dialog box that was used to enter the name, type, and connection string. I'm also noticing nothing intuitive for deleting an existing dataset. How do you do these two very simple things in an existing project? I don't see the dataset in Solution Explorer; I see it only in the text box on the Data tab and, in a limited kind of way, in the dataset view where the columns show and maintenance is allowed mostly on the columns only. I tried highlighting the dataset here and hitting the Delete key to no avail.