SQL Server 2012 :: Create Table Syntax Dynamically On Run Time
Apr 19, 2015
I have around 100 flat files that need to be loaded into their respective staging tables. I want to create each table at run time based on the input file name: if the input file name is ABC, the table name should be Staging_ABC; if the file name is XYZ, it should be Staging_XYZ. The table structure below needs to be created at run time:
CREATE TABLE Staging_'Filename'(
[COL001] [varchar](4000) NULL,
[Id] [int] IDENTITY(1,1) NOT NULL,
[LoadDate] [datetime] NOT NULL default getdate()
)
I am getting an error when I try to create the table at run time:
Declare @FileName varchar(100)
Declare @File varchar(100)
set @FileName = 'brkrte_121227102828'
SET @File = SUBSTRING(@FileName, 1, CHARINDEX('_', @FileName) - 1) --select @File
[Code] ....
Error message: Msg 203, Level 16, State 2, Line 16
The name 'CREATE TABLE DataStaging.dbo.Staging_brkrte ( [COL001] VARCHAR (4000) NOT NULL, [Id] Int Identity(1,1), [LoadDate] datetime default getdate() )' is not a valid identifier.
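The full script is elided above, so this is only a guess at the cause: Msg 203 is what appears when the string variable is run as EXEC @SQLSTMT (no parentheses), which makes SQL Server treat the whole string as a procedure name. A minimal sketch of the dynamic CREATE TABLE, assuming that is the case, using sp_executesql or EXEC (...) instead:

Declare @FileName varchar(100) = 'brkrte_121227102828'
Declare @File varchar(100) = SUBSTRING(@FileName, 1, CHARINDEX('_', @FileName) - 1)
Declare @SQLSTMT nvarchar(max) =
    N'CREATE TABLE DataStaging.dbo.Staging_' + @File + N' (
        [COL001] [varchar](4000) NULL,
        [Id] [int] IDENTITY(1,1) NOT NULL,
        [LoadDate] [datetime] NOT NULL default getdate()
    )'
EXEC sp_executesql @SQLSTMT   -- or EXEC (@SQLSTMT); EXEC @SQLSTMT without parentheses raises Msg 203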
I have created some dynamic SQL to check a temporary table, which is created on the fly, for any columns that contain data. If a column contains data, its name is added to the dynamic SQL; if not, it is excluded. This looks like:
If (select sum(Case when [Sat] is null then 0 else 1 end) from #TABLE) >= 1
begin
    set @OIL_BULK = @OIL_BULK + '[Sat]' + ','
END
However, I am currently running this on over 230 columns and large tables (1.3 million rows), and it is quite slow. How can I dynamically create a SQL script that selects only the columns in the table where there is data, in a speedier manner? Unfortunately it has to be on the fly because the temporary table is created on the fly.
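One option is to do the whole check in a single pass over the temp table rather than one aggregate query per column. A minimal sketch of that idea; the variable names are illustrative, and the generated statement simply concatenates the names of columns whose count of non-NULL values is greater than zero:

DECLARE @gen nvarchar(max) = N'', @cols nvarchar(max);

-- Build one SELECT over #TABLE that checks every column in a single scan.
SELECT @gen = @gen + N' + CASE WHEN COUNT(' + QUOTENAME(name) + N') > 0 THEN N'',' + QUOTENAME(name) + N''' ELSE N'''' END'
FROM tempdb.sys.columns
WHERE object_id = OBJECT_ID(N'tempdb..#TABLE');

SET @gen = N'SELECT @cols = N''''' + @gen + N' FROM #TABLE';

EXEC sp_executesql @gen, N'@cols nvarchar(max) OUTPUT', @cols = @cols OUTPUT;

SELECT STUFF(@cols, 1, 1, N'') AS ColumnsWithData;  -- e.g. '[Sat],[Sun]'; append to @OIL_BULK as needed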
I have a function that returns a table from a comma-delimited string.
I want to take this a step further and create a function that will return a set of table names in a table, based on a 'group' parameter which is a simple integer (1-9, etc.). Obviously, what I am doing is not working out.
CREATE FUNCTION dbo.fnReturnTablesForGroup ( @whichgroup int ) RETURNS @RETTAB TABLE ( TABLENAME VARCHAR(50)
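The original function is cut off above, so as a point of comparison, here is a minimal sketch of a multi-statement table-valued function with that signature. It assumes a mapping table dbo.TableGroups(GroupId, TableName) exists to hold the group-to-table assignments; that table is an assumption, not something from the post:

CREATE FUNCTION dbo.fnReturnTablesForGroup ( @whichgroup int )
RETURNS @RETTAB TABLE ( TABLENAME VARCHAR(50) )
AS
BEGIN
    INSERT INTO @RETTAB (TABLENAME)
    SELECT TableName
    FROM dbo.TableGroups          -- assumed lookup table: GroupId int, TableName varchar(50)
    WHERE GroupId = @whichgroup;

    RETURN;
END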
Is there a way to dynamically create a connection manager at run time? I would like to do this from a data set of connection strings so I can link them into a union all component.
I am running a script by the end of the day. What I need is the rows in my temp table get saved in a permanent table.
The name of the table should end with the current date at the end.
Declare @tab varchar(100)
set @tab = 'MPOG_Research..ACRC_427_' + CONVERT(CHAR(10), GETDATE(), 112)
IF object_id(@tab) IS NOT NULL DROP TABLE '@tab';
Select * INTO @tab from #acrc427;
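A variable cannot stand in for a table name in DROP TABLE or SELECT ... INTO, so the statement itself has to be built as dynamic SQL. A minimal sketch, assuming the dbo schema and using char(8) for the date so the name has no trailing spaces:

Declare @tab sysname = 'ACRC_427_' + CONVERT(CHAR(8), GETDATE(), 112)   -- yyyymmdd
Declare @sql nvarchar(max)

SET @sql = N'IF OBJECT_ID(''MPOG_Research.dbo.' + @tab + N''', ''U'') IS NOT NULL
                DROP TABLE MPOG_Research.dbo.' + QUOTENAME(@tab) + N';
             SELECT * INTO MPOG_Research.dbo.' + QUOTENAME(@tab) + N' FROM #acrc427;'

EXEC (@sql)   -- the session's #acrc427 temp table is visible to the dynamic batch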
I have 2 tables: one is a staging temp table and the other is the main import table.
In my staging table there are 3 columns: Col001, Id, LoadDate.
In the Col001 column the data is present with a '¯' delimiter.
I have a function which is used to load data from the staging table to the import table.
This function creates an insert statement.
My Existing function
-- Description: To Split a Delimited field by a Delimiter
ALTER FUNCTION [dbo].[ufn_SplitFieldByDelimiter]
(
    @fieldname varchar(max)
    ,@delimiter varchar(max)
    ,@delimiter_count int
[Code] ....
I am unable to get the correct statement with the above function.
I need to pass some SQL to someone else who will run it on their database. I have the SQL for SQL Server 2000, but they are running SQL Server 7. Apparently the MSSQL 2000 script below doesn't work:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[tableName](
    [id] [int] IDENTITY(1,1) NOT NULL,
    [ArticleID] [int] NULL,
    [Heading] [text] COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [BodyContent] [text] COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [WrittenDate] [datetime] NULL,
    CONSTRAINT [tableName] PRIMARY KEY CLUSTERED ( [id] ASC )
)
What is the equivalent of the above for SQL Server 7? I don't have access to it via SQL Server Manager, so I have to run the script.
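A minimal sketch of a version that should be closer to SQL Server 7.0 syntax, on the assumption that the column-level COLLATE clauses and the ASC keyword in the primary key definition (both introduced in SQL Server 2000) are what break it; the constraint is also renamed here so it does not share the table's name:

CREATE TABLE [dbo].[tableName](
    [id] [int] IDENTITY(1,1) NOT NULL,
    [ArticleID] [int] NULL,
    [Heading] [text] NULL,
    [BodyContent] [text] NULL,
    [WrittenDate] [datetime] NULL,
    CONSTRAINT [PK_tableName] PRIMARY KEY CLUSTERED ( [id] )
)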
I am trying to dynamically create the connection to a database within an SSIS package.
The requirement is to allow the user to pass in the database as a variable, and that variable will dynamically create the connection string in the connection manager.
ALTER TRIGGER [dbo].[Trigger1] ON [dbo].[Table1] with execute as SELF AFTER INSERT
[code]....
I am trying to create a trigger so that every time an entry is made in a table and Colum1 is 'entry', it starts a job. But the users running the inserts do not have permission to start jobs, so I need to make it run as a superuser. Where do I put the syntax in here? I have tried EXECUTE AS LOGIN = 'superuser' before the EXEC statement, but it errors because the principal is not valid.
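The trigger-level clause takes a database user (or SELF/OWNER), not a login, which is one likely reason for the "principal not valid" error. A minimal sketch, assuming a database user named 'superuser' exists and that the job name is a placeholder; note that reaching msdb..sp_start_job from an impersonated database user may additionally require module signing or the TRUSTWORTHY setting:

ALTER TRIGGER [dbo].[Trigger1] ON [dbo].[Table1]
WITH EXECUTE AS 'superuser'        -- a database user, not a login
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    IF EXISTS (SELECT 1 FROM inserted WHERE Colum1 = 'entry')
        EXEC msdb.dbo.sp_start_job @job_name = 'JobName';   -- placeholder job name
END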
I’ve got a situation where the columns in a table we’re grabbing from a source database keep changing as we need more information from that database. As new columns are added to the source table, I would like to dynamically look for those new columns and add them to our local database’s schema if new ones exist. We’re dropping and creating our target db table each time right now based on a pre-defined known schema, but what we really want is to drop and recreate it based on a dynamic schema, and then import all of the records from the source table to ours. It looks like a starting point might be EXEC sp_columns_rowset 'tablename' and then creating some kind of dynamic SQL statement based on that. However, I'm hoping someone might have a resource that already handles this that they might be able to steer me towards. Sincerely, Bryan Ax
I have a table N1 with columns (id, Field). Based on the rows of this table, I want to create a table N2 from an SP, where the data from N1 becomes the columns of N2.
id  Field
--  ------
1   ID
2   First
3   Last
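A minimal sketch of such a procedure, assuming N1 is dbo.N1(id int, Field varchar(128)) and that every generated column can use a single placeholder type (varchar(100) here); adjust types as needed:

CREATE PROCEDURE dbo.usp_CreateN2FromN1
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @cols nvarchar(max);

    -- Turn each row of N1 into a column definition.
    SELECT @cols = COALESCE(@cols + ', ', '') + QUOTENAME(Field) + ' varchar(100) NULL'
    FROM dbo.N1
    ORDER BY id;

    DECLARE @sql nvarchar(max) = N'CREATE TABLE dbo.N2 (' + @cols + N')';
    EXEC sp_executesql @sql;
END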
I have an SP which returns two result sets. The columns coming from the result sets are also dynamic, i.e. sometimes 5 columns and sometimes 10 columns.
Now I want to load this output into 2 different tables on a daily basis. This would be a truncate/delete of the tables and a reload.
Now my problem is that, as I am not sure about the columns, is it possible to create a table (a physical table) based on the output of the SP, and then load data into it?
During each load we can drop the table, no issue, and we can handle this through an SSIS package.
Hi, there is a table that exists in a database. I have to write a stored procedure to create the same table in a different database, with the same column names and fields. This should be done at runtime. Is it possible? The table will be passed as a parameter to the stored procedure.
I am inserting something into a temp table without creating it first, but this does not give any compilation error. Only when I execute the stored procedure do I get the error message that there is an invalid temp table. Should this not result in a compilation error rather than an error at execution time?
--create the procedure and insert into the temp table without creating it.
--no compilation error.
CREATE PROC testTemp
AS
BEGIN
    INSERT INTO #tmp(dt)
    SELECT GETDATE()
END
Only on calling the proc does this give an execution error.
Hi, I have a sproc that returns some values and everything is working fine. In my reports I am assigning the header data (in a detail column) based on some fields in the sproc. There are around 20 fields that I want to show, but at a given time I am pretty sure there won't be more than 10 fields that have data.
So is it possible to show only the columns that have data in them? And sometimes, if there are fewer than 5-6 fields, I want to realign the widths in those tables.
I'm programmatically able to import data between tables when the destination table already exists, but when the destination table has to be created on the fly (the name will be provided), I'm not successful in doing so.
Basically the requirement is to dump the result set from the source into a temp table so that the temp (destination) table matches the source's schema exactly.
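One common pattern is to let SELECT ... INTO clone the source schema before loading. A minimal sketch, assuming the source is a table or view reachable as dbo.SourceTable and the destination name arrives in a variable; both names are placeholders:

DECLARE @DestTable sysname = N'Staging_Source';   -- hypothetical destination name
DECLARE @sql nvarchar(max) =
      N'SELECT * INTO ' + QUOTENAME(@DestTable)
    + N' FROM dbo.SourceTable WHERE 1 = 0;';       -- creates an empty copy of the source schema

EXEC sp_executesql @sql;
-- The destination now has the source's column names and types (constraints and
-- indexes are not copied), so a plain INSERT ... SELECT can follow.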
Can SQL Server know when a row in a table was saved? Can a CREATE TRIGGER put the date and time on the row? I want to add a new field called "date_row_save" (date + time) inside SQL Server; I need to know when the row was saved. Is it possible to do this in a TRIGGER? TNX
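A minimal sketch of such a trigger, assuming the table is dbo.MyTable with an int primary key Id and a datetime column named date_row_save (both names are placeholders); it stamps the current date and time on every inserted or updated row:

CREATE TRIGGER dbo.trg_MyTable_SaveDate
ON dbo.MyTable
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE t
    SET date_row_save = GETDATE()
    FROM dbo.MyTable AS t
    INNER JOIN inserted AS i ON i.Id = t.Id;
END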
I used SQL Server 2012 Management Studio to create a new table on a 2014 SQL Server instance and got this message: 'This backend version is not supported to design database diagrams or tables'. Does this mean that I have to have SQL Server 2014 Management Studio to create a table on a SQL Server 2014 instance?
I am using the following script to check for the existence of a table in the database and create it dynamically.
This works when the table does not exist; it errors when the table already exists.
I am using this script in an Execute SQL Task.
[Execute SQL Task] Error: Executing the query "declare @ODSDB varchar(50) declare @SQLSTMT varcha..." failed with the following error: "There is already an object named 'addressTable' in the database.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
declare @ODSDB varchar(50)
declare @SQLSTMT varchar(max)
set @ODSDB = 'SampleDB'
begin
set @SQLSTMT = ' IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(''' + @ODSDB + '.dbo.addressTable'') and Type=''U'')
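The rest of the statement is cut off above, so this is only an alternative sketch rather than a direct fix. Keeping the existence check and the CREATE TABLE in one dynamic batch, with OBJECT_ID pointed at the same three-part name that gets created, avoids the "already an object named 'addressTable'" error when the task runs a second time; the column list here is a placeholder:

declare @ODSDB sysname = 'SampleDB'
declare @SQLSTMT nvarchar(max)

set @SQLSTMT = N'
IF OBJECT_ID(''' + @ODSDB + N'.dbo.addressTable'', ''U'') IS NULL
BEGIN
    CREATE TABLE ' + @ODSDB + N'.dbo.addressTable
    (
        [COL001]   varchar(4000) NULL,          -- placeholder columns
        [Id]       int IDENTITY(1,1) NOT NULL,
        [LoadDate] datetime NOT NULL DEFAULT GETDATE()
    )
END'

EXEC (@SQLSTMT)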
I need to take a temporary table that has various times stored in a text field (4:30 pm, 11:00 am, 5:30 pm, etc.), convert them to military time, and then cast the result as an integer with an update statement, kind of like:
Update myTable set MovieTime = REPLACE(CONVERT(CHAR(5),GETDATE(),108), ':', '')
How can this be done while my temp table is in session?
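A minimal sketch of that update, assuming the temp table is #myTable, the text field is MovieTime, and a hypothetical int column MovieTimeInt receives the result (e.g. '4:30 pm' becomes 1630); the cast to datetime relies on the usual US-English session language for am/pm parsing:

UPDATE #myTable
SET MovieTimeInt = CAST(
        REPLACE(CONVERT(CHAR(5), CAST(MovieTime AS datetime), 108), ':', '')
        AS int);   -- style 108 gives hh:mi in 24-hour time; '4:30 pm' -> '16:30' -> 1630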
I want to use the time series algorithm to mine data from my case table and nested table. The case table is the Date table, while the nested table is the fact table. E.g., I want to predict the monthly sales amount for different regions (I have a region table related to the fact table). How can I achieve this?
Thanks a lot; I hope it is clear, and I am looking forward to hearing from you shortly.
OK, I am new to SQL and am wondering how I can (when creating a table) have a field that is auto-incremented.
Here is what I have so far
Code:
CREATE TABLE test (`id` INTEGER NOT NULL, `test` VARCHAR(50) NOT NULL, PRIMARY KEY (`id`)
I want the primary key id to be an incremented number.
I am used to mysql and I would just have the field created as an AUTO_INCREMENT.
Do I use the INCREMENT function within the first query, or do I have to have another query that will alter the table and make the field an auto-incremented number?
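In SQL Server the equivalent of MySQL's AUTO_INCREMENT is the IDENTITY property, and it is declared as part of the column in the original CREATE TABLE (an existing column cannot simply be altered into an identity column). A minimal sketch of the same table with it:

CREATE TABLE test (
    id   INT IDENTITY(1,1) NOT NULL,   -- starts at 1, increments by 1 per row
    test VARCHAR(50) NOT NULL,
    PRIMARY KEY (id)
);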
I am trying to create a table that would represent the workload for each shop. In order to do that I need to have a WorkLoad table and a ShopWorkLoad table, which is actually just an aggregation of WorkLoad.
WorkLoad contains a list of the following items:
current orders that are in process (one select statement)
scheduled orders (another select statement)
expected orders (a third select statement) that come through a third-party system
All of this needs to be live. So, for example, as soon as an order is added to the Order table it should be included in WorkLoad if certain conditions are met. The same goes for scheduled orders (which come from another table). Expected orders will be loaded on a daily basis (based on historical data).
The ShopWorkLoad table is an aggregation of the WorkLoad table.
Currently I did it this way:
Added an after insert/update trigger on the Order table: when an order is created/updated, if it meets certain conditions it is inserted into WorkLoad; otherwise it is removed from WorkLoad if it is in there and no longer meets the conditions.
Added an after insert/update trigger on the Schedule table: when an order is scheduled, if it meets certain conditions it is inserted into WorkLoad; otherwise it is removed from WorkLoad if it is in there and no longer meets the conditions.
Running a daily job that populates the WorkLoad table with expected orders based on historical values.
Final step is to create an indexed view vShopWorkLoad
My biggest concern is usage of triggers which call pretty complex logic to determine whether item should be added to workload or not.
Another option was to create a vWorkLoad view and somehow make it an indexed view, but currently I don't see a way of doing that because the query consists of 4 UNION select statements (a pseudo example is below). But even when doing it that way, how would I build an aggregated indexed view on top of the vWorkLoad indexed view?
The third option is to use a SQL Agent job which would run every x seconds (maybe 20) and execute all of these queries to populate the WorkLoad table with a delay of 10-20 seconds, but I am still not sure whether this is acceptable to the client.
The fourth option is to create 3 or 4 indexed views whose combination makes up the workload. Then the ShopWorkLoad view would be built on top of these 3 or 4 indexed views, but in this case I don't know how this would affect performance, since ShopWorkLoad would be queried often.
Example of workload pseudo query:
select WorkLoadType = 'Order in process', OrderId, ShopId, ... from Order
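For reference, a minimal sketch of what the full pseudo query with the branches described above might look like; the table names and conditions are placeholders taken from the description, not the real schema:

SELECT WorkLoadType = 'Order in process', OrderId, ShopId FROM dbo.[Order]        WHERE /* in-process conditions */ 1 = 1
UNION ALL
SELECT WorkLoadType = 'Scheduled order',  OrderId, ShopId FROM dbo.Schedule       WHERE /* scheduling conditions */ 1 = 1
UNION ALL
SELECT WorkLoadType = 'Expected order',   OrderId, ShopId FROM dbo.ExpectedOrder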
I am looking for a way to leave a Data Flow Task destination table name as-is, and have SSIS auto-create the table if it doesn't exist already.
I searched on this in the forums, but based on the question it's difficult to know if it has been answered or not.
Details:
I am writing some SSIS packages that need to be executable on another server. Many of the Data Flow Tasks copy data (such as from a Fuzzy Grouping transformation, and lots of other stuff) into a new table. But the other server will not have these tables set up for the first run.
My current solution is to check information_schema.tables and drop the table IF EXISTS. But then the Data Flow Task will not work (because the table does not exist). So, I script to a new window a CREATE TABLE statement based on the existing table that I use in my dev environment. This is a hack and I want to find a better method.
It is quite possible (although unlikely) that the source columns could be changed in the future, or some query used to pull the data might be modified. If this happens, then I would need to change the CREATE TABLE Execute SQL task. I want my package to accommodate this without my having to modify it.
When I use the Import/Export Wizard, I can select a table name from the drop down list OR type in a new name. When I type in the new name, it assumes I want to create the table. NOW, is there a way to mimic this in BI Developer Studio? Yep, I saved the Wizard version of the SSIS package and all it does is run a CREATE TABLE statement first.
I am having problems displaying time values in my SSRS report; below is the info. I tried expressions and it still does not work. I want the values to show what is in the SQL Server table: 00:00:00.82. I tried a stored proc and it still does not work.
SQL Server table time value shown in milliseconds: 00:00:00.82
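One option, if it is acceptable to format on the SQL side rather than in the report, is to return the value as a string. A minimal sketch, assuming the column is a time(2) value (the table and column names are placeholders); the default cast of time to a character type keeps the fractional seconds defined by its precision:

SELECT CAST(TimeValue AS varchar(16)) AS TimeDisplay   -- e.g. '00:00:00.82' for a time(2) value
FROM dbo.MyTable;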
We are designing a Staging layer to handle incremental load. I want to start with a simple scenario to design the staging.
In the source database there are two tables, e.g. tbl_Department and tbl_Employee. Both of these tables load a single table in the destination database, e.g. tbl_EmployeRecord.
The query which loads tbl_EmployeRecord is: SELECT EMPID, EMPNAME, DEPTNAME FROM tbl_Department D INNER JOIN tbl_Employee E ON D.DEPARTMENTID = E.DEPARTMENTID.
Now, we need to identify the incremental changes in tbl_Department and tbl_Employee, store them in staging, and load only those incremental changes to the destination.
The columns of the tables are,
tbl_Department : DEPARTMENTID,DEPTNAME
tbl_Employee : EMPID,EMPNAME,DEPARTMENTID
tbl_EmployeRecord : EMPID,EMPNAME,DEPTNAME
How should the staging be designed for this to handle insert, update and delete?
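As one possible final step, a minimal sketch of applying a staged delta to the destination with MERGE; it assumes the staging layer has already produced a table stg_EmployeRecord holding only the changed rows plus a RowAction flag ('I'/'U'/'D'), all of which are illustrative names rather than anything from the post:

MERGE dbo.tbl_EmployeRecord AS tgt
USING dbo.stg_EmployeRecord AS src
    ON tgt.EMPID = src.EMPID
WHEN MATCHED AND src.RowAction = 'D'
    THEN DELETE
WHEN MATCHED AND src.RowAction IN ('I', 'U')
    THEN UPDATE SET tgt.EMPNAME = src.EMPNAME,
                    tgt.DEPTNAME = src.DEPTNAME
WHEN NOT MATCHED BY TARGET AND src.RowAction IN ('I', 'U')
    THEN INSERT (EMPID, EMPNAME, DEPTNAME)
         VALUES (src.EMPID, src.EMPNAME, src.DEPTNAME);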